pcomp video notes

prototyping tips:

  • wire wrapper for sensors not meant for soldering
  • drill to twist wires
  • solid core wire is rigid; stranded core wire is more flexible but needs to be soldered to a header pin
  • panel-mount components just screw on

Questions:

  • difference between wires (solid core, stranded, smaller wires for wrapping, ribbon cables, jumper wires)
  • ribbon cable to permaboard intermediary
  • panel-mounted vs board-mounted controls
  • how do arduino shields work (stacking)

pcomp help session: soldering

electrical tape to keep wires from touching

Mouser, Digi-Key for components; McMaster-Carr for ideas

Soldering

  • can’t solder aluminum or steel (those require acid-core soldering, not the rosin-core solder used for electronics)
  • clamps
  • silver bearing vs lead solder
  • NASA student workbook for hand soldering: https://nepp.nasa.gov/docuploads/06AA01BA-FC7E-4094-AE829CE371A7B05D/NASA-STD-8739.3.pdf
  • three tools needed: wire stripper, needle-nose pliers, wire cutters (for soft metals only: copper, silver, lead, aluminum)
    • consult components about how much insulation to strip from wire
  • Steps for splicing wires:
    1. strip wire of insulation, twist wires together (no twisting for lap joints)
    2. the soldering iron transfers heat (650-700°F for the Wellers); a flashing LED means it’s at temperature
    3. melt a little silver-bearing solder onto the tip to form a bead (this helps transfer heat to the joint)
    4. add solder to the joint by feeding the silver-bearing wire into it; move the solder wire, not the iron
    5. pull tools away immediately
    6. clean solder iron tip regularly (needs to be shiny)
    7. add heat shrink tubing (yellow bin) over joint
  • add solder to take solder away (fresh hot solder re-flows the old solder so it can be removed)
  • stranded core wire will not go into a breadboard
  • perma-proto breadboard PCB for long-term projects
  • flux pens help remove extra solder
  • pixel tape (some have direction)

pcomp: wk5 class notes

Measurements:

  • current: amps
  • charge potential: volts
  • resistance: ohms
  • frequency: hertz
  • duty cycle: %
  • pulse width: ms (time)

PWM: analogWrite() changes the duty cycle (and therefore the pulse width) while the frequency stays fixed; tone() changes the frequency instead
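
As a rough illustration of the difference (a minimal Arduino-style sketch; the pin numbers are arbitrary, not from the notes): analogWrite() varies the duty cycle at a fixed PWM frequency, while tone() varies the frequency of a square wave.

    // Illustrative only: assumes an LED on PWM pin 9 and a speaker on pin 8
    void setup() {
      pinMode(9, OUTPUT);
    }

    void loop() {
      analogWrite(9, 64);   // ~25% duty cycle at the default PWM frequency
      tone(8, 440);         // 440 Hz square wave (duty cycle stays at 50%)
      delay(1000);
      analogWrite(9, 192);  // ~75% duty cycle, same frequency
      tone(8, 880);         // 880 Hz, an octave up
      delay(1000);
    }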

Other microcontrollers

  • Particle Spark: WiFi, JavaScript
  • Arduino BT connects over Bluetooth

analog conversion notes

Analog input:

  • analog to digital conversion: voltage to digital number at a resolution of 10 bits (2^10 = 1024 levels; i.e. it takes 0-5 volts and breaks it into steps numbered 0-1023)
  • voltage = digital number * (5/1023)
    • smallest change it can read is 5/1023, or about 0.005 volts
  • analogRead(pin); (see the sketch after this list)
    • pin is the analog input pin
    • reading will be between 0 and 1023
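
A minimal sketch of the conversion described above (assuming a sensor on analog pin A0 and the usual 5 V reference; the pin choice is arbitrary):

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int reading = analogRead(A0);            // digital number, 0-1023
      float voltage = reading * (5.0 / 1023);  // back to volts, per the formula above
      Serial.println(voltage);
      delay(100);
    }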

Analog “Output” (PWM):

  • analogWrite(pin, duty); (see the sketch after this list)
    • pin refers to the pin you’re pulsing
    • duty is a value from 0-255 (8 bits, 2^8 = 256 levels; corresponds to 0-5 volts)
    • every one-point change changes the average voltage by 5/255, or about 0.0196 volts
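
Putting input and output together (again a sketch, assuming a sensor on A0 and an LED on PWM pin 9): the 10-bit input range has to be scaled down to the 8-bit output range.

    void setup() {
      pinMode(9, OUTPUT);
    }

    void loop() {
      int reading = analogRead(A0);              // 0-1023 (10 bits)
      int duty = map(reading, 0, 1023, 0, 255);  // rescale to 0-255 (8 bits)
      analogWrite(9, duty);                      // LED brightness follows the sensor
    }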

Pcomp: W4 HW pt 2

So my theremin from Friday basically ended up as a keyboard, and I figured I might as well complete the thought. All I needed to do was remove the photoresistor and add a few more buttons, right? Here’s what I started with:

It worked! Kind of. So I attempted to create a full keyboard, but the eighth key didn’t fit in my breadboard…

Despite lining up the resistors in increasing order, the readings seemed to be all over the place (~970, ~8, ~510, ~698, ~930, ~970, ~1023)—hence the notes coming in at random when the keys are played in order. Hopefully it’s only a matter of replacing a few faulty resistors…

Here’s the code:

Update: DONE!

Okay not really, as the speaker makes it completely impossible to actually play the keys. But that’s nothing a trip to Tinkersphere can’t fix.

Final update:

Learning Machine W4: Class Notes

(Grapher: Mac OS app)

Multilayer Perceptron:

  • a perceptron changes its weights to get better answers; a multilayer perceptron gets better outputs with hidden weights that represent facets of the problem, which the machine determines on its own
    • rather than inputs going straight into the output, there is a hidden layer of factors of indeterminate length that sit between input and output
    • inputs are what are visible to the computer; computer assumes there’s more than what is visible and compensates with the hidden layer to explain what’s visible
      • “wires” are layers; each layer has inputs and outputs
      • “input” is output of the previous layer
  • backpropagation: the “error” of one layer is a function of the previous layer, so a for loop runs backward through the layers to adjust the weights (see the update sketch after this list)
    • relates to gradient descent: moves toward lowest point (error rate) one step at a time
    • for the HW, visualizing the error rate over epochs (aka training iterations) is helpful
      • if error rate flattens out well before zero, it’s hit a local minimum or is overfit
      • if error rate nears zero then suddenly increases, it’s overfit to your training examples
        • solutions: give more examples, lower the learning rate; most effective are dropout (every time we train, we block some neurons from changing) and regularization (Occam’s razor: the simplest solution is usually the correct one; penalizes extreme conclusions)
          • can attach an unsupervised learner to the supervised learner
  • the learning rate is just a multiplier so we don’t learn too much too quickly (learning too quickly would require fewer examples/iterations, but it develops perceptions too early that are hard to back out of)
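
As a sketch of the step each weight takes (the standard gradient descent form, not anything specific to the class code):

    w_new = w_old - learning_rate * dError/dw

For example, if dError/dw = 0.5 and the learning rate is 0.1, the weight moves by -0.05. Backpropagation’s job is to supply the dError/dw term for each layer, computed from the layer after it.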

Supervised Learning:

  • two categories of problems: regression and classification problems
    • regression: fitting a function (a curve, e.g. stock prices)
      • linear regression: take a curve and fit it to a straight line that is the best approximation
      • nonlinear regression: try to fit trend to a curve that is the best approximation
      • deals with a continuous function
    • classification: discrete (as opposed to continuous) outputs
      • one-hot encoding: each category has its own dimension (see the example after this list)
        • as many output dimensions as there are categories; training data is 1 for the category it represents, 0 for the ones it’s not
      • one-cold encoding: inverse of one-hot
  • 90-95% accuracy is best
  • can create sub-datasets within a historical data set and loop over the subsets as training examples: e.g. three-day input / one-day output
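
A quick illustration of one-hot encoding (the categories here are made up):

    cat  = [1, 0, 0]
    dog  = [0, 1, 0]
    bird = [0, 0, 1]

One-cold encoding flips every bit: cat = [0, 1, 1], and so on.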

Activation functions:

  • Sigmoid (0 to 1) and tanh (-1 to 1) (formulas below)
    • squashes any values above the max or below the min
    • target outputs need to be mapped into the range of the activation function (use a mapping function)
    • tanh has twice as much precision (i.e. twice as many numbers)
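
For reference, the two activation functions and their output ranges are:

    sigmoid(x) = 1 / (1 + e^(-x))                outputs between 0 and 1
    tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))    outputs between -1 and 1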

HW data sets: http://archive.ics.uci.edu/ml/index.php

  • Just use one hidden layer; the variation can be in how many nodes it has
    • number of hidden nodes = somewhere between the average of the input and output dimensions and twice the largest dimension, i.e. between (x+y)/2 and 2x (worked example below)
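
A purely illustrative example of that heuristic: with 4 input dimensions and 3 output dimensions, the low end is (4+3)/2 ≈ 3-4 nodes and the high end is 2*4 = 8 nodes, so a hidden layer of roughly 4-8 nodes would be a reasonable starting range.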

Learning Machines W3 HW: Perceptron Implementation

This week’s homework was to implement the Perceptron algorithm and train it on data sets based on the AND, OR, and XOR logic gates. Since the outputs we want the Perceptron to predict are known, this is considered supervised learning. Here is my spaghetti code:

This code returns the results once its predictions match the known outputs. I had it print each step to see what it was thinking:

AND results


OR results

Then, as expected, my Perceptron was not able to reach 100% accuracy on the XOR dataset (XOR isn’t linearly separable, so a single-layer perceptron can’t learn it), and thus never exits the loop. Here it is trying very hard:

Full code here: https://github.com/xujenna/learning_machines/blob/master/perceptron.py
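
Not the linked code, but for reference, the core update a perceptron applies on each training example is the standard rule:

    w_i  = w_i  + learning_rate * (target - prediction) * x_i
    bias = bias + learning_rate * (target - prediction)

Once every example is predicted correctly, (target - prediction) is 0 everywhere, the weights stop changing, and the loop can exit, which is the stopping condition described above.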