Features are:
- patterns in data
- implicit, i.e. learned from the data rather than explicitly specified
- indicative of salient aspects of objects
- closely related to bias (the features a model picks up reflect the data it was trained on)
Fitting
- Linear regression doesn’t give much flexibility; you can give a neuron more by passing its output through a non-linearity, e.g. a sigmoid function
- ReLU (rectified linear unit) is generally preferred over a sigmoid, since it is cheaper to compute and doesn’t saturate for positive inputs (which helps avoid vanishing gradients)
- adding a hidden layer gives y (the output) even more flexibility (see the sketch below)
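A minimal NumPy sketch of these ideas, with made-up layer sizes and random weights chosen purely for illustration: a single linear unit, the same unit passed through a sigmoid or ReLU, and a small network with one hidden layer.

```python
import numpy as np

# Non-linearities ("activation functions")
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # one 3-dimensional input (made up)

# A single neuron: weighted sum of the inputs plus a bias term
w = rng.normal(size=3)
b = 0.1
z = w @ x + b                   # purely linear -> limited flexibility
y_sigmoid = sigmoid(z)          # squashed into (0, 1)
y_relu = relu(z)                # zero for negative z, linear otherwise

# One hidden layer: the output is now a non-linear function of
# several non-linear units, which is far more flexible
W1 = rng.normal(size=(4, 3))    # hidden layer with 4 units
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))    # output layer
b2 = np.zeros(1)

hidden = relu(W1 @ x + b1)      # hidden-layer activations
y = W2 @ hidden + b2            # network output
print(z, y_sigmoid, y_relu, y)
```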
Convolutional NNs: scan for certain patterns throughout the entire image (the same small filter is applied at every position)
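A rough sketch of that scanning idea, assuming a single-channel image and a hand-made 3x3 vertical-edge kernel (both invented here for illustration); the same kernel is slid over every position in the image:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image`, taking a dot product at every position
    (no padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)   # high value = pattern found here
    return out

image = np.random.default_rng(0).random((8, 8))   # a tiny fake 8x8 grayscale image
kernel = np.array([[ 1,  0, -1],                  # a hand-made vertical-edge detector
                   [ 1,  0, -1],
                   [ 1,  0, -1]])

feature_map = conv2d(image, kernel)
print(feature_map.shape)   # (6, 6): one response per position the kernel visited
```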
activation = the value a neuron holds
weights = the values on the connections between neurons
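In code terms (continuing the made-up NumPy notation from the sketches above), the activation is just the number a neuron ends up holding after the non-linearity, and the weights are the numbers attached to the connections feeding into it:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

inputs = np.array([0.5, -1.2, 2.0])     # activations of the previous layer (made up)
weights = np.array([0.3, -0.7, 1.1])    # one weight per incoming connection
bias = 0.05

# The neuron's activation: weighted sum of the incoming activations,
# plus a bias, passed through the non-linearity
activation = relu(weights @ inputs + bias)
print(activation)
```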