Neural Aesthetic w6 class notes

  • Generative models synthesize new samples that resemble the training data
    • applications: visual content generation, language models (chatbots, assistants, duplexes), music, etc
    • Models the probability distribution of all possible images; images that look like the dataset have a high probability
  • PCA projects data down into lower dimensions and back out
  • latent space: space of all possible generated outputs
  • later layers of a trained network can be used as a feature extractor because they hold a compact but high-level representation
    • distances between these feature vectors can be used to determine similarities between images
    • transfer learning
    • can use PCA to reduce redundancies, then calculate distances (see the feature-distance sketch after this list)
      • images (points) can then be embedded in feature space
        • vectors between points imply relationships
  • Autoencoders reconstruct their inputs as their outputs; the network learns an essential representation of the data via compression through a small middle layer (see the autoencoder sketch after this list)
    • first half encoder, second half decoder
    • can throw in labels for a conditional distribution
    • can encode images and get their latent representation to project outward
      • smile vector
  • GANs: circa 2014
    • hard to train
    • hard to evaluate
    • can’t encode images directly
    • structured like a decoupled autoencoder (a minimal training loop is sketched after this list)
      • generator > discriminator
        • generator: basically like the decoder in an autoencoder
          • takes in random numbers, not images
          • tries to create images to trick the discriminator into thinking they’re real
        • discriminator: takes in an input image (from generator), decides if it is real or fake
        • “adversarial”: trained to work against each other
  • DCGANs
    • unsupervised technique, but can give it labels
      • interpolations through latent space AND through labels
        • labels are one-hot vectors
        • MNIST: interpolating between labels produces glyphs in between the integers
  • Deep Generator Network
    • similar to deep dream; optimizes an output image to maximize a class label
  • Progressively grown GANs
    • super high res, super realistic
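
A rough sketch of the feature-distance idea above; the library choice (Keras + scikit-learn), the pretrained model, and the layer name are assumptions, not from the lecture:

import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import cdist
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

# use a late, high-level layer of a pretrained convnet as the feature extractor
base = VGG16(weights='imagenet', include_top=True)
feature_extractor = Model(inputs=base.input, outputs=base.get_layer('fc2').output)

# images: hypothetical array of shape (n_images, 224, 224, 3), already preprocessed for VGG16
features = feature_extractor.predict(images)               # (n_images, 4096)
reduced = PCA(n_components=50).fit_transform(features)     # drop redundancies (assumes n_images >= 50)
distances = cdist(reduced, reduced)                        # smaller distance = more similar images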
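
A minimal autoencoder sketch for the notes above, assuming Keras and flattened MNIST-style inputs (the layer sizes are arbitrary):

from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(784,))                          # flattened 28x28 image
encoded = layers.Dense(32, activation='relu')(inputs)        # small middle layer = latent code
decoded = layers.Dense(784, activation='sigmoid')(encoded)   # reconstruct the input

autoencoder = Model(inputs, decoded)    # first half encoder, second half decoder
encoder = Model(inputs, encoded)        # reuse the encoder to get latent representations
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# train with the input as the target: autoencoder.fit(x_train, x_train, epochs=10)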
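
And a minimal GAN training loop in the same spirit (Keras; x_train, the layer sizes, and the batch size are assumptions for illustration):

import numpy as np
from tensorflow.keras import layers, models

# generator: random vector in, fake (flattened) image out -- basically an autoencoder's decoder
generator = models.Sequential([
    layers.Dense(128, activation='relu', input_shape=(100,)),
    layers.Dense(784, activation='sigmoid'),
])

# discriminator: image in, probability that the image is real out
discriminator = models.Sequential([
    layers.Dense(128, activation='relu', input_shape=(784,)),
    layers.Dense(1, activation='sigmoid'),
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# stacked model trains the generator to fool a frozen discriminator
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')

for step in range(1000):
    noise = np.random.normal(size=(64, 100))
    fake = generator.predict(noise, verbose=0)
    real = x_train[np.random.randint(0, len(x_train), 64)]    # x_train: real flattened images, assumed
    # discriminator learns to tell real (label 1) from fake (label 0)
    discriminator.train_on_batch(np.concatenate([real, fake]),
                                 np.concatenate([np.ones((64, 1)), np.zeros((64, 1))]))
    # generator learns to make the discriminator output "real" for its fakes
    gan.train_on_batch(noise, np.ones((64, 1)))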

cloud computing + data architecture

ML as a Service (Comprehend, Rekognition)

import boto3

AWS_client = boto3.client('comprehend', region_name='us-east-1')

AWS_sentiment_response = AWS_client.detect_sentiment(Text='i am so tired', LanguageCode='en')
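
The response is a plain dict; the overall label and the per-class confidence scores come back under 'Sentiment' and 'SentimentScore':

print(AWS_sentiment_response['Sentiment'])          # e.g. NEGATIVE
print(AWS_sentiment_response['SentimentScore'])     # confidence for each sentiment class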

 

AWS Lambda: functions as a service

  • run code with Lambda > analyze with Comprehend > store on S3 > serve with EC2 (a minimal handler is sketched after this list)
  • can use AWS Lambda as an endpoint
  • EC2 offers instances with TensorFlow preinstalled (Deep Learning AMIs)
  • can ask Alexa to trigger a Lambda function
  • CloudWatch Events: trigger a Lambda function at a certain time / on a schedule
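
A minimal sketch of that pipeline as a Lambda handler (boto3 is available in the Lambda Python runtime); the bucket name, object key, and event shape are hypothetical:

import json
import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')
s3 = boto3.client('s3')

def lambda_handler(event, context):
    text = event.get('text', 'i am so tired')          # hypothetical event field
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode='en')
    # store the analysis on S3 (bucket and key are placeholders)
    s3.put_object(Bucket='my-sentiment-bucket',
                  Key='sentiment.json',
                  Body=json.dumps(sentiment['SentimentScore']))
    return {'statusCode': 200, 'body': sentiment['Sentiment']}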

python for basic data analysis

https://colab.research.google.com/drive/19VyDXEnM-PUXF8obu8Huafc6jOCQU13E

https://colab.research.google.com/drive/1JWdybI_RpXrZVP24MSitA-l0rSlNKWgT#scrollTo=eqGt7V6qEls-

pandas:

pd.read_csv

pd.read_json

data_file1.merge(data_file2, how="outer") keeps every row from both files and fills the misaligned rows with null (NaN) values

merge all data to find correlations

data.corr()
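
Putting those pieces together (the file names are placeholders, and the two files are assumed to share at least one column to merge on):

import pandas as pd

df1 = pd.read_csv("data_file1.csv")
df2 = pd.read_json("data_file2.json")

# outer merge keeps every row from both frames; rows without a match get NaN
merged = df1.merge(df2, how="outer")

# pairwise correlations between the numeric columns of the merged data
print(merged.select_dtypes("number").corr())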

 

neuroscience/eeg lecture at 6 metrotech

visual input is processed in the occipital lobe

brain waves are the summed electrical activity of many cells; most of it is sub-threshold, some of it is action potentials

action potentials: large voltage spikes triggered when neurotransmitter input pushes a neuron past its firing threshold

subthreshold potentials: smaller changes in a neuron's voltage that don't reach the firing threshold

aggregate signal is picked up by EEG

closed eyes > deprives the brain of visual input > alpha waves (higher amplitude because neurons fire together within a narrower range of frequencies, so their signals add up)

open eyes > external stimuli > different neurons receive inputs at different times > activity across a range of frequencies > lower amplitude > beta waves

 

ML4A with Gene Kogan

http://ml4a.github.io/

paperspace or colab

ml4a suite:

  • ConvnetOSC meant for wekinator
    • ml4a ConvnetOSC + wekinator + processing or p5 w/ OSC (a toy OSC sender is sketched after this list)
  • KeyboardOSC: controls keyboard with wekinator
  • AudioClassifier: classify sounds that you make
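
A toy sketch of the OSC side of that pipeline, assuming the python-osc package and Wekinator's default input port (6448) and address (/wek/inputs); the feature values are made up:

from pythonosc import udp_client

# Wekinator listens for input features on localhost:6448 by default
client = udp_client.SimpleUDPClient("127.0.0.1", 6448)
client.send_message("/wek/inputs", [0.1, 0.5, 0.9])    # hypothetical features, e.g. convnet activations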

more on http://ml4a.github.io/guides

 

github basics

to create a git repo:

  1. navigate to project folder
  2. git init

to commit a change:

  1. changes on which files?
    1. specific file: git add index.html
    2. every file that was changed: git add -A
  2. commit changes to repo: git commit -m "added index.html"

to push a repo to github:

  1. create a new repo on github.com
  2. git remote add origin https://github.com/xujenna/project.git
  3. git push origin master

from then on:

  1. make changes to file
  2. git add -A
  3. git commit
  4. git push origin master

 

To run someone else’s project: git clone https://github.com/xujenna/project.git

To get updates from someone else’s project (from within the local repo): git pull

To make a copy of someone else's repo onto your own GitHub account (so you can push to it): fork it from the existing repo's GitHub page, then clone your fork

To prevent files from being uploaded to GitHub, create a file called .gitignore with a list of the files and folders (e.g., file.png, *.txt, images) that you want git to ignore (example below)

  • e.g., have your code read passwords from a config.js, then add config.js to .gitignore so it never gets pushed
  • gitignore.io for list of files to ignore based on your project
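
A small example of what that .gitignore file might contain, using the patterns mentioned above:

# keep secrets out of the repo
config.js
# ignore every .txt file
*.txt
# ignore the whole images folder
images/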