Assignment 1 (Neural Computation, Spring 2009)

 

Number of problems/points: Five problems for total of 100 points

Out: February 7, 2009

Due: February 18, in class.

 

Homework Policies (applicable for all assignments):

1.    You are required to do the homework problems in order to pass. If you do not complete substantially all of the problem sets, do not expect to pass.

2.    Clarity of a solution is as important as its correctness. Sloppy answers are likely to receive fewer points, even when they are correct.

3.    The penalty for late homework is high, so submit on time.

4.    Solutions are expected to be your own work. Group work is not allowed unless explicitly approved for a particular problem. If you need hints, talk to the instructor first. If you obtain a solution with help (e.g., through library research or a hint provided by another person), acknowledge your source and write up the solution on your own. Plagiarism and other anti-intellectual behavior will be dealt with severely.

 

Problem 1: (20 points)

Suppose we modify the Perceptron learning rule so that the change step adds (or subtracts) twice the example chosen. Will the algorithm still find a solution for any linearly separable problem while making only a finite number of changes? If no, give a counterexample; if yes, give a proof and a bound on the number of change steps when the inputs are restricted to +1 and -1.
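As a quick empirical check (not a substitute for the required proof or counterexample), the modified rule can be run on a small linearly separable set with +/-1 inputs; the dataset and hidden labeling rule below are illustrative choices, not part of the problem statement.

```python
import numpy as np

# Perceptron whose change step adds/subtracts TWICE the chosen example.
# Toy data: the four +/-1 corner points with a constant bias input,
# labeled by an arbitrary hidden linear rule (illustrative only).
X = np.array([[x1, x2, 1.0] for x1 in (-1, 1) for x2 in (-1, 1)])
y = np.sign(X @ np.array([1.0, 2.0, 0.5]))   # linearly separable labels

w = np.zeros(3)
changes = 0
for epoch in range(100):
    mistakes = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:       # misclassified (or on the boundary)
            w += 2.0 * yi * xi       # modified step: twice the example
            mistakes += 1
            changes += 1
    if mistakes == 0:                # a full pass with no changes
        break

print("converged after", changes, "change steps")
```

Counting `changes` for different separable datasets can suggest what form the bound in the proof should take.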

 

Problem 2: (25 points)

Use the Perceptron algorithm to solve the double-moon classification problem described in Section 1.5 of the textbook (page 61, third edition). Experiment with 1,000 randomly picked points from “Region A” and another 1,000 from “Region B”. Fix the radius of each moon to r = 10 and its width to w = 6. Explore the effect of reducing the vertical distance d separating the two moons from d = 1 to d = 0.5 and d = 0. Then, for a fixed d, explore the effect of using learning rates of 1, 0.5, and 0.1.
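A minimal sketch of the experiment is below. The data generator follows the usual reading of the textbook's geometry (each moon is a half-annulus of radius r and width w, with the lower moon shifted right by r and down by d); check it against Figure 1.8 before relying on it, and the random seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def double_moon(n, r=10.0, w=6.0, d=1.0):
    """Sample n points per moon: half-annuli of radius r and width w,
    separated vertically by d (assumed textbook geometry)."""
    radii = rng.uniform(r - w / 2, r + w / 2, size=2 * n)
    theta = rng.uniform(0, np.pi, size=2 * n)
    x, y = radii * np.cos(theta), radii * np.sin(theta)
    A = np.column_stack([x[:n], y[:n]])                # Region A: upper moon
    B = np.column_stack([x[n:] + r, -y[n:] - d])       # Region B: lower moon
    return np.vstack([A, B]), np.concatenate([np.ones(n), -np.ones(n)])

def train_perceptron(X, y, eta=1.0, epochs=50):
    Xb = np.column_stack([X, np.ones(len(X))])         # absorb bias into weights
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:
                w += eta * yi * xi                     # standard perceptron step
    return w

X, y = double_moon(1000, d=1.0)
w = train_perceptron(X, y, eta=1.0)
Xb = np.column_stack([X, np.ones(len(X))])
err = np.mean(np.sign(Xb @ w) != y)
print(f"training error rate at d=1: {err:.3f}")
```

Rerunning with d = 0.5 and d = 0, and with eta in {1, 0.5, 0.1}, produces the comparisons the problem asks for; plotting the points and the learned decision line makes the effect of d easy to see.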

 

Problem 3: (25 points)

Repeat the tasks defined in Problem 2 using the least-mean-square (LMS) Adaline algorithm (textbook Chapter 2; to be discussed in class on Feb. 11).
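The LMS update differs from the perceptron in that it takes a gradient step on the squared error of the linear output on every example, not only on mistakes. A minimal sketch follows; the Gaussian toy data is only a stand-in so the snippet runs on its own — for the assignment, reuse the double-moon data and the d and learning-rate settings of Problem 2.

```python
import numpy as np

def train_lms(X, y, eta=0.01, epochs=20, seed=0):
    """LMS (Adaline): stochastic gradient step on the squared error
    of the linear output toward the +/-1 target."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([X, np.ones(len(X))])   # absorb bias into weights
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            e = y[i] - Xb[i] @ w                 # error of the LINEAR output
            w += eta * e * Xb[i]                 # LMS weight update
    return w

# Stand-in toy data: two Gaussian blobs (illustrative only).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(+2, 1, (200, 2)), rng.normal(-2, 1, (200, 2))])
y = np.concatenate([np.ones(200), -np.ones(200)])
w = train_lms(X, y)
Xb = np.column_stack([X, np.ones(len(X))])
acc = np.mean(np.sign(Xb @ w) == y)
print(f"toy-data accuracy: {acc:.2f}")
```

Note that LMS needs a much smaller learning rate than the perceptron to stay stable, which is worth keeping in mind when comparing the two algorithms at eta = 1, 0.5, and 0.1.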

 

Problem 4: (15 points)

(a) Express the derivative of the hyperbolic tangent function o(x) = (1 - exp(-x))/(1 + exp(-x)) in terms of its output o.

(b) What is the value of this derivative at the origin?

 

Problem 5: (15 points)

Consider a multilayer feedforward network with a linear activation function for all neurons. Show that such a network is equivalent to a single-layer feedforward neural network.
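A numerical illustration of the claim (not the required proof): with linear activations, stacking weight matrices W1 and W2 computes the same map as the single layer W2 @ W1. The shapes and values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.standard_normal((4, 3))   # first layer: 3 inputs -> 4 hidden units
W2 = rng.standard_normal((2, 4))   # second layer: 4 hidden -> 2 outputs
x = rng.standard_normal(3)

two_layer = W2 @ (W1 @ x)          # forward pass through both linear layers
one_layer = (W2 @ W1) @ x          # equivalent single-layer network

print(np.allclose(two_layer, one_layer))
```

The written proof should make the same argument in general, by induction on the number of layers (and including bias terms if the network has them).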