CS6650 Study Guide: Machine Learning Concepts and Techniques - Prof. Donald H. Cooley, Study notes of Computer Science

This study guide covers machine learning concepts and techniques including normalization, LVQ networks, SVMs, RBF networks, and principal component analysis. Topics include normalizing vectors, understanding LVQ network architecture, matching terms and phrases, and solving problems involving SVMs, RBF networks, and perceptrons.

Typology: Study notes

Pre 2010

Uploaded on 07/30/2009

koofers-user-sva


CS6650 Study Guide

1.) Normalize the following vectors for 0 mean.

2.) Normalize the following vector so that it lies on the surface of a unit sphere.

3.) Consider an LVQ network with weight matrices W1 and W2, where the second layer maps class A to output [1 0 0], class B to [0 1 0], and class C to [0 0 1]. This is a two-layer network with five neurons in the first layer and three in the second. The three output-layer neurons each output a 0 or a 1 depending on whether or not their input is a 0 or a 1. The input-layer neurons form a competitive layer, using Euclidean distance for the competition; thus, for any given input, one of these neurons outputs a 1 and the others output 0. In the space below, show the regions labeled by their classes (A, B, or C).

4.) In the space provided, give the letter of the term or phrase on the right which best matches the term or phrase on the left. Each term or phrase on the right can match with only one term or phrase on the left.

1. Orthogonal
2. SVM
3. Rosenblatt
4. Kohonen layer
5. Simulated annealing
6. Bias
7. LVQ
8. Backpropagation
9. Minsky & Papert
10. Overtraining
11. SVM with RBF
12. SOM
13. Objective function
14. Correlation
15. Recurrent
16. Unsupervised learning
17. Supervised learning
18. Euclidean distance
19. Linearly independent vectors
20. Dot product

5.) True/False

6.) Multiple Choice

7.) I have a support vector machine defined by the following weight vector: W = [1 -1 2 3.5 -4.2 5 6.1]. The machine outputs a 1 for a class A data item and a -1 for a class B data item. Assuming the data are linearly separable, what is the minimum distance from any class A or B item to the hyperplane defined by W?

8.)
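Questions 1 and 2 ask for two different normalizations. A minimal sketch of both, using a made-up vector (the guide's actual vectors are not shown in this preview):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])  # hypothetical input vector

# Question 1: normalize for zero mean by subtracting the sample mean.
x_zero_mean = x - x.mean()

# Question 2: place the vector on the unit sphere by dividing
# by its Euclidean norm, so ||x_unit|| = 1.
x_unit = x / np.linalg.norm(x)
```

The two operations are often combined in practice (center, then scale), but the guide treats them as separate questions.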
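In question 3, the competitive first layer labels an input with the class of the nearest prototype (codebook) vector under Euclidean distance. A sketch with hypothetical 2-D prototypes and class labels, since the guide's actual W1 and W2 values are not shown:

```python
import numpy as np

# Hypothetical prototype vectors (rows of W1) -- made up for illustration.
prototypes = np.array([
    [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5],
])
# Class label of each first-layer neuron; W2 would map the winning
# neuron to the corresponding one-hot output [1 0 0], [0 1 0], or [0 0 1].
labels = ["A", "A", "B", "B", "C"]

def lvq_classify(x):
    """Competitive layer: the neuron whose prototype is closest in
    Euclidean distance outputs 1, so its class label wins."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(d))]
```

The class regions the question asks you to draw are exactly the Voronoi cells of the prototypes, merged by label.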
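For question 7, a canonically scaled SVM (where |W.x + b| = 1 at the support vectors) has minimum distance 1/||W|| from either class to the hyperplane. A sketch, assuming the guide's garbled weight vector reads W = [1 -1 2 3.5 -4.2 5 6.1]:

```python
import numpy as np

# Assumed reading of the guide's weight vector (the printed form is garbled).
w = np.array([1.0, -1.0, 2.0, 3.5, -4.2, 5.0, 6.1])

# With the canonical SVM scaling, |w.x + b| = 1 at the support vectors,
# so the minimum distance from any class A or B point to the hyperplane
# is 1 / ||w||.
margin = 1.0 / np.linalg.norm(w)
```

The full margin between the two classes is twice this value, 2/||W||.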
Consider the following eigenvalues and eigenvectors:

Eigenvalue = 2.78, eigenvector = [1 0 -2 3 4]
Eigenvalue = 3.6, eigenvector = [2 -1 3 5 8]
Eigenvalue = 3.7, eigenvector = [-1 0 0 4 1]
Eigenvalue = 1.9, eigenvector = [0 2 4 6 8]
Eigenvalue = 4.9, eigenvector = [-1 -1 -1 -1 -1]

For the data vector X = [1 2 3 4 5], what would its value be if I wished to use principal component analysis and reduce its dimensionality from 5 to 2 dimensions?

9.) Given the kernel function for an SVM, develop the kernel matrix for it. Find K1(X) and K2(X).

10.) For a perfect match for the following sample points in an RBF neural network, what would be the centers of the neurons?

11.) Learning/training in any of the networks we have used.

12.) Given a set of linearly separable data points, what will be the weight vector in a machine trained using the perceptron learning rule? What will be the weight vector for an SVM?

13.) State Cover's theorem.

14.) Given an RBF network, the weights, variance, the learning rate, a training input, and its target, what will be the new value(s) of the weight(s)?

15.) Backpropagation

16.) Perceptron learning rule
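Question 8 reduces to keeping the two eigenvectors with the largest eigenvalues (4.9 and 3.7) and projecting X onto them. A sketch of that projection; normalizing the eigenvectors to unit length first is an assumption on my part, since the listed vectors are not unit length:

```python
import numpy as np

eigvals = np.array([2.78, 3.6, 3.7, 1.9, 4.9])
eigvecs = np.array([
    [ 1,  0, -2, 3, 4],
    [ 2, -1,  3, 5, 8],
    [-1,  0,  0, 4, 1],
    [ 0,  2,  4, 6, 8],
    [-1, -1, -1, -1, -1],
], dtype=float)

X = np.array([1, 2, 3, 4, 5], dtype=float)

# Keep the two eigenvectors with the largest eigenvalues (4.9 and 3.7).
top2 = np.argsort(eigvals)[::-1][:2]
V = eigvecs[top2]
# Normalize each retained eigenvector to unit length (an assumption;
# the guide's vectors are printed unnormalized).
V = V / np.linalg.norm(V, axis=1, keepdims=True)

Z = V @ X  # the 2-dimensional PCA representation of X
```

Each component of Z is simply the dot product of X with one retained (unit) eigenvector.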
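For question 9, the kernel (Gram) matrix has entries K[i, j] = k(x_i, x_j) for every pair of training points. The guide's actual kernel function is not shown, so this sketch substitutes a Gaussian (RBF) kernel purely as a stand-in:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """Stand-in kernel (the guide's actual kernel is not shown)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_matrix(points, k):
    """Gram matrix: K[i, j] = k(x_i, x_j)."""
    n = len(points)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = k(points[i], points[j])
    return K

pts = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 2.0])]
K = kernel_matrix(pts, rbf_kernel)
```

Whatever kernel the exam supplies, the resulting matrix must be symmetric (and positive semidefinite for a valid kernel); for a Gaussian kernel the diagonal is all ones.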
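Question 14's output-weight update is a single delta-rule (LMS) step on the Gaussian unit activations. A sketch with made-up network values (two units, unit variance):

```python
import numpy as np

def rbf_update(w, centers, var, lr, x, target):
    """One delta-rule step for the output weights of an RBF network:
    phi_j = exp(-||x - c_j||^2 / (2*var)),  y = w . phi,
    w <- w + lr * (target - y) * phi."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * var))
    y = w @ phi
    return w + lr * (target - y) * phi

# Hypothetical network -- all numbers are made up for illustration.
w = np.array([0.5, -0.3])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
w_new = rbf_update(w, centers, var=1.0, lr=0.1, x=np.array([0.0, 0.0]), target=1.0)
```

Note that only the output weights change; the centers and variance stay fixed in this update, which matches the question's setup (centers and variance are given).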
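Questions 12 and 16 both turn on the perceptron learning rule: on each misclassified example, nudge the weights toward that example, w <- w + lr*(target - output)*x. On linearly separable data this converges to *some* separating hyperplane, unlike the SVM, which finds the unique maximum-margin one. A minimal sketch with made-up data:

```python
import numpy as np

def perceptron_train(X, t, lr=1.0, epochs=100):
    """Perceptron learning rule with a bias folded in as a constant
    extra input of 1. Targets are +1 / -1."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, target in zip(Xb, t):
            y = 1 if w @ x >= 0 else -1
            if y != target:
                w += lr * (target - y) / 2 * x  # same as w += lr*target*x here
                errors += 1
        if errors == 0:  # converged: every point classified correctly
            break
    return w

# Made-up linearly separable data: +1 above the line x2 = x1, -1 below.
X = np.array([[0, 1], [1, 2], [2, 3], [1, 0], [2, 1], [3, 2]], dtype=float)
t = np.array([1, 1, 1, -1, -1, -1])
w = perceptron_train(X, t)
```

The perceptron convergence theorem guarantees this loop terminates on separable data, but the final w depends on the presentation order and initial weights.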