Embeddings
- each data point is a combination of feature values, i.e. a point (vector) in feature space
- Embeddings give us relationships between data points (closer points are more similar)
- magnitude and direction have meaning, enabling basic retrieval applications such as nearest-neighbor search (see the sketch after this list)
- feature vectors and latent spaces are examples of embeddings
- the difference vectors between pairs of points also have meaning (e.g. consistent analogy offsets)
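A minimal sketch of retrieval in an embedding space, in NumPy; the labels and vectors below are invented purely for illustration:

```python
import numpy as np

labels = ["cat", "dog", "car"]
E = np.array([
    [0.9, 0.1, 0.0],   # cat
    [0.8, 0.2, 0.1],   # dog  (close to cat: both animals)
    [0.0, 0.1, 0.9],   # car  (far from both)
])

def cosine_sim(a, b):
    """Cosine similarity: direction matters, magnitude is normalized away."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

query = np.array([0.85, 0.15, 0.05])        # an embedded query point
sims = [cosine_sim(query, e) for e in E]
best = int(np.argmax(sims))
print(labels[best], sims[best])             # nearest neighbor to the query

# Difference vectors between pairs of points carry meaning too:
offset = E[1] - E[0]                        # a "cat -> dog" direction
```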
features are patterns of activations
- every layer becomes more abstract / more task-specific: edges, parallel lines, shapes, categories
- the last layer of activations can serve as an embedding; compare points via distance or correlation between vectors (sketch below)
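One common recipe, as a hedged sketch: take a pretrained torchvision ResNet, drop its classifier head, and treat the remaining 512-d activation vector as an image embedding. The random tensors stand in for real, preprocessed images:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()        # drop the classifier; keep the 512-d features
model.eval()

img_a = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed image
img_b = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    emb_a = model(img_a)              # (1, 512) last-layer activation vector
    emb_b = model(img_b)

print(F.cosine_similarity(emb_a, emb_b).item())  # closer to 1 = more similar
```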
transfer learning with images
- dimensionality reduction tries to preserve the geometry of the original space
- linearly-independent components (see the pipeline sketch after this list)
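A sketch of the feature-extraction flavor of transfer learning: embeddings from a frozen pretrained backbone are reduced with PCA (orthogonal, i.e. linearly independent components) and only a small classifier is trained on top. `features` and `y` here are random placeholders for real image embeddings and labels:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))   # stand-in for 200 image embeddings
y = rng.integers(0, 2, size=200)         # stand-in binary labels

clf = make_pipeline(PCA(n_components=32), LogisticRegression(max_iter=1000))
clf.fit(features, y)                     # only this head trains; backbone stays frozen
print(clf.score(features, y))
```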
word2vec
- consistent offset vectors: man → woman; country → capital; singular → plural (see the gensim sketch after this list)
- words are discrete units while sentences are effectively infinite in number; sentences and paragraphs can still be embedded in the same feature space
- word vectors are learned implicitly, as a by-product of training to predict neighboring words
- question-inversion vector
- Stanford CS224n (NLP with deep learning): http://web.stanford.edu/class/cs224n/
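A short gensim sketch of the offset arithmetic above; it assumes the downloadable "glove-wiki-gigaword-50" vectors, though any pretrained word vectors expose the same most_similar API:

```python
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")   # downloads vectors on first use

# king - man + woman ~= queen: the man -> woman offset transfers
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# country -> capital works the same way: germany -> berlin maps france -> paris
print(wv.most_similar(positive=["france", "berlin"], negative=["germany"], topn=1))
```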
principal component analysis (PCA) to reduce dimensionality
t-SNE is better for visualization and for discovering similar neighbors, but it scales poorly, so it suits smaller datasets (sketch below)
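A sketch comparing the two reductions side by side, using sklearn's digits dataset as a stand-in for learned embeddings:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)   # 64-d points, 10 classes

# PCA: linear, fast, preserves global geometry via orthogonal components
X_pca = PCA(n_components=2).fit_transform(X)

# t-SNE: nonlinear, separates local neighborhoods well, but is slow,
# which is why it is usually reserved for smaller datasets
X_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)

print(X_pca.shape, X_tsne.shape)      # both (n_samples, 2), ready to plot
```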