
Homework 6


Task 1 (30 pts)

Draw a single perceptron that computes the NOR (not-OR) function.

Be sure to give the weights of the two inputs and the bias term. Assume the threshold function is that demonstrated in the lecture notes (where a sum \(\geq 0.5\) causes a \(1\) to be output, otherwise a \(0\)).
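In symbols, that threshold convention is

\[
\text{out} = \begin{cases} 1 & \text{if } w_1 x_1 + w_2 x_2 + b \ge 0.5 \\ 0 & \text{otherwise} \end{cases}
\]

so your answer is a choice of \(w_1\), \(w_2\), and \(b\) that reproduces the NOR truth table for all four input pairs.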

Task 2 (30 pts)

Draw a small perceptron network that computes addition of two one-bit numbers. This cannot be done with only one perceptron: out 1 is the exclusive-or (XOR) of the inputs, which is not linearly separable. Here is the table of inputs/outputs:

| \(x_1\) | \(x_2\) | out 1 | out 2 (carry bit) |
|---------|---------|-------|-------------------|
| 0       | 0       | 0     | 0                 |
| 0       | 1       | 1     | 0                 |
| 1       | 0       | 1     | 0                 |
| 1       | 1       | 0     | 1                 |

Use this perceptron configuration. Determine all the weights.

[Figure: images/bitsum-perceptron.png — the perceptron network configuration for the one-bit adder]

Task 3 (20 pts)

Find a dataset to load into Weka and evaluate a MultilayerPerceptron (in the “functions” category) on it. Compare its performance to naïve Bayesian and k-NN.

Task 4 (20 pts)

These are Asimov’s “Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Give your opinion about whether these are good “laws,” how they might be improved, and whether we could build systems that respect them. (1 paragraph)

Extra credit 1 (60 pts)

Write a single-layer logistic perceptron network. Use the logistic perceptron learning rule from the lecture notes. Report the “loss” after each epoch. Train on the first 90% of the input file and test on the remaining 10%. (The input file has already been shuffled.)

The input file is specified on the command line. Its format is:

5620 64 10
0 0 0.0625 0.875 0.375 0 0 0 0 0 0.375 0.9375  ... 1 0 0 0 0 0 0 0 0 0 
0 0 0 0 0.5625 1 0.25 0 0 0 0 0.3125 0.9375 1  ... 0 0 0 0 0 0 0 0 0 1
0 0.0625 0.375 0.9375 0.75 0.0625 0 0 0 0.4375 ... 0 1 0 0 0 0 0 0 0 0 

The first number, 5620, is the number of examples in the file, with one example per line, starting on the second line. The second number, 64, is the number of inputs (or “features”) for each example; the inputs are floating-point values. The third number, 10, is the number of binary outputs for each example.
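For concreteness, here is a minimal Python sketch of a loader for this format (the function name load_data is my own, not necessarily what the templates use):

```python
def load_data(path):
    """Load an examples file: a header line 'N num_inputs num_outputs',
    then one example per line (inputs followed by binary outputs)."""
    with open(path) as f:
        n, num_inputs, num_outputs = (int(t) for t in f.readline().split())
        inputs, targets = [], []
        for _ in range(n):
            values = [float(t) for t in f.readline().split()]
            inputs.append(values[:num_inputs])   # the 64 floating-point features
            targets.append(values[num_inputs:])  # the 10 binary outputs
    return inputs, targets
```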

The input file partially shown above is the optdigits.dat (handwritten digits) input file (from UCI). Each example has 64 floats (integer values 0-16 divided by 16.0 to get values in the range [0,1]) followed by 10 binary numbers (0/1), such that the only 1 in this group of binary numbers is at position N, where N is the true digit of the example. So if the example is the digit 2, then the binary numbers are: “0 0 1 0 0 0 0 0 0 0” (the first binary number indicates digit 0, then 1, then 2, etc.).
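In code, recovering the digit is just finding the index of the lone 1; at test time, the predicted digit is the index of the largest of the 10 network outputs. A tiny illustration (the names target and outputs are mine):

```python
digit = target.index(1.0)                # position of the lone 1 = true digit
predicted = outputs.index(max(outputs))  # argmax of the 10 network outputs
```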

I have written templates for you to start with: C++, Java, and Python. (Words of wisdom: Python is horrendously slow; my experiments with this assignment show it to be 30 times slower than Java.) These templates load the data file, build the data structures, and print the weights of the trained network. You need to build the network, train it, and finish the code that tests the reserved instances from the input file.
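The learning rule to use is the one from the lecture notes; as a rough sketch (not the template code), here is one standard form of the logistic update — the gradient of squared error through a sigmoid — together with the 90/10 split, per-epoch loss reporting, and testing on the reserved instances. The function names, learning rate, and epoch count are my own assumptions, and it reuses load_data from the loading sketch above:

```python
import math
import random
import sys

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train(inputs, targets, epochs=50, rate=0.5):
    num_in, num_out = len(inputs[0]), len(targets[0])
    # one weight vector per output perceptron; the last weight is the bias
    w = [[random.uniform(-0.1, 0.1) for _ in range(num_in + 1)]
         for _ in range(num_out)]
    for epoch in range(epochs):
        loss = 0.0
        for x, t in zip(inputs, targets):
            xb = x + [1.0]  # append the constant bias input
            for j in range(num_out):
                y = sigmoid(sum(wi * xi for wi, xi in zip(w[j], xb)))
                loss += (t[j] - y) ** 2
                # logistic update: gradient of squared error through the sigmoid
                delta = rate * (t[j] - y) * y * (1.0 - y)
                for i in range(num_in + 1):
                    w[j][i] += delta * xb[i]
        print("epoch %d: loss %.4f" % (epoch + 1, loss))
    return w

inputs, targets = load_data(sys.argv[1])  # from the loading sketch above
split = int(0.9 * len(inputs))            # first 90% train, last 10% test
w = train(inputs[:split], targets[:split])

# test on the reserved 10%: predicted digit = argmax of the 10 outputs
correct = 0
for x, t in zip(inputs[split:], targets[split:]):
    xb = x + [1.0]
    outs = [sigmoid(sum(wi * xi for wi, xi in zip(wj, xb))) for wj in w]
    if outs.index(max(outs)) == t.index(max(t)):
        correct += 1
print("test accuracy: %.3f" % (correct / float(len(inputs[split:]))))
```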

You can also test, for debugging purposes, the and.dat, or.dat, and not.dat cases. The primary test case for your code will be the optdigits.dat file.

Extra credit 2 (20 pts)

Train your neural net with a different dataset you find on the web. Tell me about the dataset (its inputs and outputs) and describe the performance of your neural net.

AI Su13 material by Joshua Eckroth is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Source code for this website available at GitHub.