EROFT reading

All known peoples on earth have practised some form of divination. It has had a critical role in the classical world, ancient Egypt and the Middle East, in the Americas, India, Tibet, Mongolia, Japan, China, Korea and Africa (Loewe and Blacker 1981; Peek 1991).

Over the years, many so-called inductive or rational forms of divination have been compared with Western scientific techniques:

  • psychological tests, e.g. the Rorschach test
  • diagnostic procedures
  • sociopsy: comparable to biopsy

They suggest that in Orisha ceremonies, certain distinguishing drum rhythms and oriki chants are used to attract particular energies, create certain moods, and evoke certain responses.

This alternative way draws its knowledge from “women’s ways of knowing,” from intuitive thought, from dreams, from nature, from the deep recesses of the human psyche. This way of knowing is performative in nature, rich in symbol, ritual, and metaphor, evoking responses that lie deep within the human psyche. For many, it all started with divination as a sacred compass locating self. For others it started with the rhythm of the drums, the lure of the dance, the transforming experience of symbolic interaction with an unseen, unknown, other dimension of power, the ritualistic replenishing of the primal life force ashe, or the awesome realization that “Words uttered in a particular sequence, rhythm, and tone can bring a rock to ‘action,’ cause rain to fall, or heal a sick person a hundred miles away” (Teish 1988, 62).

Neural Aesthetic w6 class notes

  • Generative models synthesize new samples that resemble the training data
    • applications: visual content generation, language models (chatbots, assistants, duplexes), music, etc
    • Models the probability distribution of all possible images; images that look like the dataset have a high probability
  • PCA projects down in lower dimensions and back out
  • latent space: space of all possible generated outputs
  • later layers can be used as a feature extractor because it is a compact but high-level representation
    • distance calculations between feature vectors can be used to determine similarities between images
    • transfer learning
    • can use PCA to reduce redundancies, then calculate distances
      • images (points) can then be embedded in feature space
        • vectors between points imply relationships
  • Autoencoders reconstruct their inputs as their outputs; the network learns an essential representation of the data via compression through a small middle layer
    • first half encoder, second half decoder
    • can throw in labels for a conditional distribution
    • can encode images and get their latent representation to project outward
      • smile vector
  • GANs: circa 2014
    • hard to train
    • hard to evaluate
    • can’t encode images directly
    • structured like a decoupled autoencoder
      • generator > discriminator
        • generator: basically like the decoder in an autoencoder
          • takes in random numbers, not images
          • tries to create images to trick the discriminator into thinking they’re real
        • discriminator: takes in an input image (from generator), decides if it is real or fake
        • “adversarial”: trained to work against each other
  • DC GANs
    • unsupervised technique, but can give it labels
      • interpolations through latent space AND through labels
        • labels are one hot vectors
        • MNIST: interpolating produces glyphs between integers
  • Deep Generator Network
    • similar to deep dream; optimizes an output image to maximize a class label
  • Progressively grown GANs
    • super high res, super realistic
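The "PCA to reduce redundancies, then calculate distances" idea from the notes can be sketched with numpy alone. As an assumption for illustration, random vectors stand in for the high-level CNN features a real feature extractor would produce:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(10, 64))   # 10 images, 64-dim feature vectors

# PCA via SVD: center the data, decompose, keep the top k components
X = features - features.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
embedded = X @ Vt[:k].T                # each image is now a point in 3-D feature space

# pairwise Euclidean distances between embedded images: small distance ~ similar images
dists = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=-1)
print(dists.shape)  # (10, 10)
```

With real features, nearby points in this reduced space correspond to visually or semantically similar images, and vectors between points can imply relationships, as noted above.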

cloud computing + data architecture

ML as a Service (Comprehend, Rekognition)

import boto3

AWS_client = boto3.client('comprehend', region_name='us-east-1')  # credentials come from the environment / IAM role

AWS_sentiment_response = AWS_client.detect_sentiment(Text='i am so tired', LanguageCode='en')  # response includes 'Sentiment' and 'SentimentScore'

 

AWS Lambda: functions as a service

  • run code with Lambda > analyze with Comprehend > store on S3 > serve with EC2
  • can use AWS Lambda as an endpoint
  • EC2 has tensorflow instance
  • can ask alexa to run lambda
  • cloudwatch events: trigger at a certain time
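A minimal handler for the "run code with Lambda > analyze with Comprehend" step above might look like the sketch below. The event shape (`{'text': ...}`) and the `client` parameter (injectable for local testing) are assumptions for illustration, not the course's actual code:

```python
import json

def lambda_handler(event, context, client=None):
    """Read event['text'], run Comprehend sentiment analysis on it,
    and return the sentiment label as a JSON response."""
    if client is None:
        import boto3  # available in the Lambda runtime
        client = boto3.client('comprehend', region_name='us-east-1')
    resp = client.detect_sentiment(Text=event['text'], LanguageCode='en')
    return {'statusCode': 200,
            'body': json.dumps({'sentiment': resp['Sentiment']})}
```

Wired to a CloudWatch event or used as an endpoint, this is the "functions as a service" pattern: the handler could then write its result to S3 for later serving.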

python for basic data analysis

https://colab.research.google.com/drive/19VyDXEnM-PUXF8obu8Huafc6jOCQU13E

https://colab.research.google.com/drive/1JWdybI_RpXrZVP24MSitA-l0rSlNKWgT#scrollTo=eqGt7V6qEls-

pandas:

pd.read_csv

pd.read_json

data_file1.merge(data_file2, how="outer") will assign null (NaN) values to misaligned rows

merge all data to find correlations

data.corr()
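A minimal sketch of the merge-then-correlate workflow above, with made-up columns (`id`, `temp`, `humidity`) standing in for real data files:

```python
import pandas as pd

# two small tables that only partly overlap on "id"
a = pd.DataFrame({'id': [1, 2, 3], 'temp': [20.0, 21.5, 23.0]})
b = pd.DataFrame({'id': [2, 3, 4], 'humidity': [40.0, 45.0, 50.0]})

# outer merge keeps every row from both sides; misaligned rows get NaN
merged = a.merge(b, how='outer', on='id')
print(merged)

# pairwise correlations between numeric columns (NaNs are excluded pairwise)
print(merged.corr())
```

The outer join is what makes "merge all data to find correlations" safe: nothing is dropped, and `corr()` simply ignores the NaN cells when computing each pairwise coefficient.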

 

neuroscience/eeg lecture at 6 metrotech

external input is processed in occipital lobe

brain waves are sub-threshold activity in cells; some are action potentials

action potentials: all-or-nothing spikes triggered by neurotransmitter input

subthreshold potentials: smaller, graded changes in a neuron's membrane voltage

aggregate signal is picked up by EEG

closed eyes > depriving brain of inputs > alpha waves (higher amplitude because neurons are firing together with smaller range of frequencies > signal adds up)

open eyes > external stimuli > different neurons, inputs at different times > activity at a range of frequencies > lower amplitude > beta waves

 

ML4A with Gene Kogan

http://ml4a.github.io/

paperspace or colab

ml4a suite:

  • ConvnetOSC meant for wekinator
    • ml4a ConvnetOSC + wekinator + processing or p5 w/ OSC
  • KeyboardOSC: controls keyboard with wekinator
  • AudioClassifier: classify sounds that you make

more on http://ml4a.github.io/guides