Single Layer Perceptrons - Lecture Slides | CPSC 636 (Computer Science study notes)

Material Type: Notes; Professor: Choe; Subject: Computer Science; University: Texas A&M University; Term: Spring 2008


Uploaded on 02/13/2009

koofers-user-d3a


Partial preview of the text

Slide03 — Haykin Chapter 3: Single-Layer Perceptrons
CPSC 636-600, Instructor: Yoonsuck Choe, Spring 2008

Historical Overview

• McCulloch and Pitts (1943): neural networks as computing machines.
• Hebb (1949): postulated the first rule for self-organized learning.
• Rosenblatt (1958): the perceptron as a first model of supervised learning.
• Widrow and Hoff (1960): adaptive filters using the least-mean-square (LMS) algorithm (delta rule).

Multiple Faces of a Single Neuron

What a single neuron does can be viewed from different perspectives:
• Adaptive filter: as in signal processing.
• Classifier: as in the perceptron.
The two aspects are reviewed, in that order.

Part I: Adaptive Filter

Adaptive Filtering Problem

• Consider an unknown dynamical system that takes m inputs and generates one output.
• The behavior of the system is described by its input/output pairs:
    T : {x(i), d(i); i = 1, 2, ..., n, ...},
  where x(i) = [x1(i), x2(i), ..., xm(i)]^T is the input and d(i) is the desired response (or target signal).
• The input vector can be either a spatial snapshot or a temporal sequence uniformly spaced in time.
• There are two important processes in adaptive filtering:
  – Filtering process: generation of the output from the input: y(i) = x^T(i) w(i).
  – Adaptive process: automatic adjustment of the weights to reduce the error: e(i) = d(i) − y(i).

Unconstrained Optimization Techniques

• How can we adjust w(i) to gradually minimize e(i)? Note that
    e(i) = d(i) − y(i) = d(i) − x^T(i) w(i).
  Since d(i) and x(i) are fixed, only a change in w(i) can change e(i).
• In other words, we want to minimize the cost function E(w) with respect to the weight vector w: find the optimal solution w*.
• The necessary condition for optimality is ∇E(w*) = 0, where the gradient operator is defined as
    ∇ = [∂/∂w1, ∂/∂w2, ..., ∂/∂wm]^T.
  With this, we get
    ∇E(w*) = [∂E/∂w1, ∂E/∂w2, ..., ∂E/∂wm]^T.

Steepest Descent

• We want the iterative update algorithm to have the following property:
    E(w(n+1)) < E(w(n)).
• Define the gradient vector ∇E(w) as g.
• The iterative weight update rule then becomes
    w(n+1) = w(n) − η g(n),
  where η is a small learning-rate parameter. So we can say
    Δw(n) = w(n+1) − w(n) = −η g(n).

Steepest Descent (cont'd)

We now check that E(w(n+1)) < E(w(n)). Using the first-order Taylor expansion† of E(·) near w(n),
    E(w(n+1)) ≈ E(w(n)) + g^T(n) Δw(n),
and Δw(n) = −η g(n), we get
    E(w(n+1)) ≈ E(w(n)) − η g^T(n) g(n) = E(w(n)) − η ‖g(n)‖²,
where the subtracted term η ‖g(n)‖² is positive. So indeed (for small η): E(w(n+1)) < E(w(n)).
† Taylor series: f(x) = f(a) + f′(a)(x − a) + f″(a)(x − a)²/2! + ...

Linear Least-Square Filter (cont'd)

Points worth noting:
• X does not need to be a square matrix!
• We get w = (X^T X)^{-1} X^T d directly, partly because the output is linear (otherwise, the formula would be more complex).
• The Jacobian of the error function depends only on the input, and is invariant with respect to the weights w.
• The factor (X^T X)^{-1} X^T (call it X+) acts like an inverse. Multiplying both sides of d = Xw by X+, we get
    X+ d = X+ X w = w,  since X+ X = I.

Linear Least-Square Filter: Example

See src/pseudoinv.m:

    X = ceil(rand(4,2)*10), wtrue = rand(2,1)*10, d = X*wtrue, w = inv(X'*X)*X'*d

    X =
       10    7
        3    7
        3    6
        5    4
    wtrue =
       0.56644
       4.99120
    d =
       40.603
       36.638
       31.647
       22.797
    w =
       0.56644
       4.99120

Least-Mean-Square Algorithm

• The cost function is based on instantaneous values:
    E(w) = (1/2) e²(w).
• Differentiating with respect to w, we get
    ∂E(w)/∂w = e(w) ∂e(w)/∂w.
• Plugging in e(w) = d − x^T w, so that ∂e(w)/∂w = −x, we get
    ∂E(w)/∂w = −x e(w).
• Using this in the steepest-descent rule, we get the LMS algorithm:
    ŵ(n+1) = ŵ(n) + η x(n) e(n).
• Note that this weight update is done with only one (x(i), d(i)) pair!

Least-Mean-Square Algorithm: Evaluation

• The LMS algorithm behaves like a low-pass filter.
• The LMS algorithm is simple, model-independent, and thus robust.
• LMS does not follow the exact direction of steepest descent: instead, it follows it stochastically (stochastic gradient descent).
• Slow convergence is an issue.
• LMS is sensitive to the condition number of the input correlation matrix (the ratio between the largest and smallest eigenvalues of the correlation matrix).
• LMS can be shown to converge if the learning rate satisfies
    0 < η < 2/λmax,
  where λmax is the largest eigenvalue of the correlation matrix.

Improving Convergence in LMS

• The main problem arises because of the fixed η.
• One solution: use a time-varying learning rate, η(n) = c/n, as in stochastic optimization theory.
• A better alternative: use a hybrid method called search-then-converge:
    η(n) = η0 / (1 + n/τ).
  When n < τ, performance is similar to standard LMS. When n > τ, it behaves like stochastic optimization.

Search-Then-Converge in LMS

[Figure: comparison of the schedules η(n) = η0/n and η(n) = η0/(1 + n/τ).]

Part II: Perceptron

The Perceptron Model

• The perceptron uses a non-linear neuron model (the McCulloch-Pitts model):
    v = Σ_{i=1}^{m} w_i x_i + b,
    y = φ(v) = 1 if v > 0, and 0 if v ≤ 0.
• Goal: classify input vectors into two classes.

Boolean Logic Gates with Perceptron Units

[Figure (after Russell & Norvig): AND unit with w1 = 1, w2 = 1, threshold t = 1.5; OR unit with w1 = 1, w2 = 1, t = 0.5; NOT unit with w1 = −1, t = −0.5; each threshold is carried by a constant −1 bias input.]

• Perceptrons can represent basic boolean functions.
• Thus, a network of perceptron units can compute any Boolean function. What about XOR or EQUIV?

What Perceptrons Can Represent

Perceptrons can only represent linearly separable functions.
• Output of the perceptron:
    if W0·I0 + W1·I1 − t > 0, the output is 1;
    if W0·I0 + W1·I1 − t ≤ 0, the output is 0.

Geometric Interpretation

• Rearranging the output-1 condition W0·I0 + W1·I1 − t > 0, we get (if W1 > 0)
    I1 > −(W0/W1)·I0 + t/W1,
  so the output is 1 for points above the line and 0 for those below it. Compare with the line y = −(W0/W1)·x + t/W1.
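The threshold rule and the gate constructions above can be checked with a short script (not part of the original slides; a minimal sketch using the AND/OR/NOT weights and thresholds quoted above):

```python
def perceptron(inputs, weights, t):
    # Threshold unit from the slides: output 1 iff w.x - t > 0, else 0.
    v = sum(w * x for w, x in zip(weights, inputs)) - t
    return 1 if v > 0 else 0

# Weights and thresholds from the Boolean-gates slide.
AND = lambda a, b: perceptron([a, b], [1, 1], 1.5)
OR  = lambda a, b: perceptron([a, b], [1, 1], 0.5)
NOT = lambda a: perceptron([a], [-1], -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} | AND={AND(a, b)} OR={OR(a, b)}")
print(f"NOT 0 = {NOT(0)}, NOT 1 = {NOT(1)}")
```

Each gate works because its truth table is linearly separable: a single line (set by the weights and threshold) splits the 1-outputs from the 0-outputs.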
The Role of the Bias

[Figure: decision lines through the origin for three different weight settings.]

• Without the bias (t = 0), learning is limited to adjusting the slope of a separating line that passes through the origin.
• Three example lines with different weights are shown.

Learning in Perceptron: Another Look

[Figure: the weight vector w and the decision boundary tilting as misclassified + and − examples are added to or subtracted from w.]

• When a positive example (C1) is misclassified, w(n+1) = w(n) + η(n) x(n).
• When a negative example (C2) is misclassified, w(n+1) = w(n) − η(n) x(n).
• Note the tilt in the weight vector, and observe how it changes the decision boundary.

Perceptron Convergence Theorem

• Given a set of linearly separable inputs, and without loss of generality, assume η = 1 and w(0) = 0.
• Assume the first n examples ∈ C1 are all misclassified.
• Then, using w(n+1) = w(n) + x(n), we get
    w(n+1) = x(1) + x(2) + ... + x(n).   (1)
• Since the input set is linearly separable, there is at least one solution w0 such that w0^T x(n) > 0 for all inputs in C1.
  – Define α = min_{x(n)∈C1} w0^T x(n) > 0.
  – Multiplying both sides of eq. 1 by w0^T, we get
      w0^T w(n+1) = w0^T x(1) + w0^T x(2) + ... + w0^T x(n).   (2)
  – From the two steps above (each term on the right is at least α), we get
      w0^T w(n+1) ≥ nα.   (3)

Perceptron Convergence Theorem (cont'd)

• By the Cauchy-Schwarz inequality,
    ‖w0‖² ‖w(n+1)‖² ≥ [w0^T w(n+1)]².
• From this and w0^T w(n+1) ≥ nα,
    ‖w0‖² ‖w(n+1)‖² ≥ n²α².
  So, finally, we get the first main result:
    ‖w(n+1)‖² ≥ n²α² / ‖w0‖².   (4)

Perceptron Convergence Theorem (cont'd)

• Taking the squared Euclidean norm of w(k+1) = w(k) + x(k),
    ‖w(k+1)‖² = ‖w(k)‖² + 2 w^T(k) x(k) + ‖x(k)‖².
• Since all n inputs in C1 are misclassified, w^T(k) x(k) ≤ 0 for k = 1, 2, ..., n, so
    ‖w(k+1)‖² ≤ ‖w(k)‖² + ‖x(k)‖²,  i.e.,  ‖w(k+1)‖² − ‖w(k)‖² ≤ ‖x(k)‖².
• Summing these inequalities over k = 1, 2, ..., n, with w(0) = 0, we get
    ‖w(n+1)‖² ≤ Σ_{k=1}^{n} ‖x(k)‖² ≤ nβ,   (5)
  where β = max_{x(k)∈C1} ‖x(k)‖².
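The fixed-increment rule from "Learning in Perceptron: Another Look", and the finite number of updates this proof guarantees, can be illustrated with a small training loop (not from the slides; a minimal sketch on synthetic, linearly separable data, with η = 1 and w(0) = 0 as assumed in the proof):

```python
def train_perceptron(data, eta=1.0):
    # Fixed-increment rule: add eta*x for a misclassified positive (C1) example,
    # subtract eta*x for a misclassified negative (C2) one. With labels +1/-1,
    # both cases collapse to w <- w + eta*label*x. Bias is folded in as x0 = 1.
    w = [0.0] * len(data[0][0])            # w(0) = 0, as in the proof
    updates = 0
    changed = True
    while changed:                          # terminates: the data is separable
        changed = False
        for x, label in data:
            v = sum(wi * xi for wi, xi in zip(w, x))
            if label * v <= 0:              # misclassified (or on the boundary)
                w = [wi + eta * label * xi for wi, xi in zip(w, x)]
                updates += 1
                changed = True
    return w, updates

# Toy linearly separable set: x = (1, x1, x2), with a leading constant bias input.
data = [((1, 2.0, 1.0), +1), ((1, 3.0, 2.5), +1),
        ((1, -1.0, -0.5), -1), ((1, -2.0, -2.0), -1)]
w, n_updates = train_perceptron(data)
print("w =", w, "updates =", n_updates)
```

The theorem bounds `n_updates` by nmax = β‖w0‖²/α²; on separable data the loop always stops with every example strictly on the correct side.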
Perceptron Convergence Theorem (cont'd)

• From eq. 4 and eq. 5,
    n²α² / ‖w0‖² ≤ ‖w(n+1)‖² ≤ nβ.
• Here, α is a constant that depends on the fixed input set and the fixed solution w0 (so ‖w0‖ is also a constant), and β is likewise a constant since it depends only on the fixed input set.
• Therefore, if n grows large enough, the inequality above becomes invalid: the lower bound grows quadratically in n while the upper bound grows only linearly (and n is a positive integer).
• Thus, n cannot grow beyond a certain nmax, where
    nmax² α² / ‖w0‖² = nmax β,  i.e.,  nmax = β ‖w0‖² / α²,
  and when n = nmax, all inputs will be correctly classified.

Fixed-Increment Convergence Theorem

Let the subsets of training vectors C1 and C2 be linearly separable, and let the inputs presented to the perceptron originate from these two subsets. The perceptron converges after some n0 iterations, in the sense that
    w(n0) = w(n0 + 1) = w(n0 + 2) = ...
is a solution vector, with n0 ≤ nmax.

Summary

• The adaptive filter using the LMS algorithm and the perceptron are closely related (their learning rules are almost identical).
• LMS and the perceptron differ, however, in that one uses a linear activation and the other a hard limiter.
• LMS is used for continuous learning, while perceptrons are trained for only a finite number of steps.
• A single neuron or a single layer has severe limits: how can multiple layers help?

XOR with Multilayer Perceptrons

[Figure: a two-layer network of three perceptron units implementing XOR; the bias units are not shown in the network on the right, but they are needed.]

• Only three perceptron units are needed to implement XOR.
• However, you need two layers to achieve this.
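The last slide's claim (three units, two layers) can be sketched concretely. The hidden-layer wiring below (XOR = OR and-not AND) is one common choice, not necessarily the exact weights in the slide's figure:

```python
def unit(xs, ws, t):
    # Single perceptron: fire (1) iff the weighted sum exceeds the threshold t.
    return 1 if sum(w * x for w, x in zip(ws, xs)) - t > 0 else 0

def xor(a, b):
    h_or = unit([a, b], [1, 1], 0.5)            # layer 1, unit 1: OR
    h_and = unit([a, b], [1, 1], 1.5)           # layer 1, unit 2: AND
    return unit([h_or, h_and], [1, -1], 0.5)    # layer 2: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))
```

A single unit cannot do this: no line separates {(0,1), (1,0)} from {(0,0), (1,1)}, which is exactly the linear-separability limit discussed earlier. The hidden layer remaps the inputs so the second layer sees a separable problem.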