Artificial Intelligence Programming: Instance-Based Learning
Chris Brooks, Department of Computer Science, University of San Francisco

Instance-Based Learning
So far, all of the learning algorithms we've studied construct an explicit hypothesis about the data set. This is convenient because it lets us do most of the work ahead of time, during training, but it has the weakness that we must then use the same hypothesis for each element of the test set. One way to get around this is to construct a different hypothesis for each test example: potentially better results, but more computation is needed at evaluation time. Instance-based methods can be used in either a supervised or an unsupervised setting.

k-Nearest Neighbor
The most basic instance-based method is k-nearest neighbor (kNN). Assume:
- Each individual can be represented as an n-dimensional vector <v1, v2, ..., vn>.
- We have a distance metric that tells us how far apart two individuals are. Euclidean distance is common:
  d(x1, x2) = sqrt( sum_i (x1[i] - x2[i])^2 )

Supervised kNN
Training is trivial: store the training set, where each individual is an n-dimensional vector plus a classification. Testing is more computationally expensive: find the k closest points to the unseen point, collect their classifications, and classify the unseen point by majority vote (a code sketch follows the distance-weighted voting slide below).

kNN Example
Suppose we have the following data points and are using 3-NN:

  X1  X2  Class
   4   3   +
   1   1   -
   2   2   +
   5   1   -

We see the following new data point: x1 = 3, x2 = 1. How should we classify it?

kNN Example
Begin by computing distances to the new point:

  X1  X2  Class  Distance
   4   3   +     sqrt(5) = 2.23
   1   1   -     2
   2   2   +     sqrt(2) = 1.41
   5   1   -     2

The three closest points are 2, 3, and 4, contributing two '-' votes and one '+' vote. Therefore the new example is classified as negative.

Discussion
kNN can be a very effective algorithm when you have lots of data:
- Easy to compute.
- Resistant to noise.
- Bias: points that are "close" to each other share a classification.

Discussion
Issues:
- How do we choose the best k? Search for it using cross-validation.
- Distance is computed globally over all attributes. Recall the data we used for decision tree training: part of the goal there was to eliminate irrelevant attributes.
- All neighbors get an equal vote.

Distance-Weighted Voting
One extension is to weight a neighbor's vote by its distance to the example being classified: each vote is weighted by the inverse square of that distance. Once we add this, we can actually drop the k and use all instances to classify new data.
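To make the procedure concrete, here is a minimal sketch of supervised kNN, with inverse-square distance weighting as an option, run on the worked example above. It is a sketch, not the course's reference implementation; the names euclidean, knn_classify, train, and query are illustrative choices, not from the original slides.

```python
import math
from collections import defaultdict

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def knn_classify(train, query, k=3, distance_weighted=False):
    """Classify `query` given `train`, a list of (vector, label) pairs.

    With distance_weighted=True, each neighbor's vote is weighted by
    1 / d^2; passing k=len(train) then lets every instance vote.
    """
    # Take the k training points closest to the query.
    neighbors = sorted(train, key=lambda ex: euclidean(ex[0], query))[:k]

    votes = defaultdict(float)
    for vec, label in neighbors:
        d = euclidean(vec, query)
        if distance_weighted:
            votes[label] += float('inf') if d == 0 else 1.0 / d ** 2
        else:
            votes[label] += 1.0
    return max(votes, key=votes.get)

# The worked 3-NN example from the slides: the query (3, 1) is classified '-'.
train = [((4, 3), '+'), ((1, 1), '-'), ((2, 2), '+'), ((5, 1), '-')]
print(knn_classify(train, (3, 1), k=3))   # -> '-'

# With inverse-square weighting every point votes, so the answer can differ.
print(knn_classify(train, (3, 1), k=len(train), distance_weighted=True))
```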
Attribute Weighting
A more serious problem with kNN is the presence of irrelevant attributes. In many data sets, there are a large number of attributes that are completely unrelated to the classification, and including them actually lowers classification performance. This is sometimes called the curse of dimensionality.

Attribute Weighting
We can address this problem by assigning a weight to each component of the distance calculation:

  d(p1, p2) = sqrt( sum_i w[i] * (p1[i] - p2[i])^2 )

where w is a vector of weights. This has the effect of transforming, or stretching, the instance space: more useful features get larger weights.

Learning Attribute Weights
We can learn attribute weights through a hill-climbing search (a runnable sketch follows below):

  let w = random weights
  let val(w) = the error rate of weighted kNN using weights w, estimated by n-fold cross-validation
  while not done:
      for i in range(len(w)):
          w[i] = w[i] + δ
          if val(w) did not decrease:
              undo the change

We could also use a genetic algorithm or simulated annealing to do this.
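The pseudocode above leaves several details open, so here is a minimal runnable sketch under some stated assumptions: weighted kNN with majority voting, leave-one-out cross-validation as the error estimate, and a hill-climb that tries nudging each weight both up and down while keeping weights positive (the slide's version only increases weights). The names weighted_distance, knn_predict, error_rate, learn_weights, and the toy data are illustrative, not from the original slides.

```python
import math
import random

def weighted_distance(a, b, w):
    """Weighted Euclidean distance: sqrt(sum_i w[i] * (a[i] - b[i])^2)."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for wi, ai, bi in zip(w, a, b)))

def knn_predict(train, query, w, k=3):
    """Majority vote among the k nearest neighbors under weights w."""
    neighbors = sorted(train, key=lambda ex: weighted_distance(ex[0], query, w))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

def error_rate(data, w, k=3):
    """Leave-one-out estimate of the error rate for weight vector w."""
    mistakes = 0
    for i, (vec, label) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        if knn_predict(rest, vec, w, k) != label:
            mistakes += 1
    return mistakes / len(data)

def learn_weights(data, n_features, delta=0.1, iterations=20, k=3):
    """Hill-climb attribute weights: nudge each weight up or down and keep
    the change only if the cross-validated error rate improves."""
    w = [random.random() for _ in range(n_features)]
    best = error_rate(data, w, k)
    for _ in range(iterations):
        for i in range(len(w)):
            for step in (delta, -delta):
                if w[i] + step <= 0:
                    continue                    # keep weights positive
                w[i] += step
                err = error_rate(data, w, k)
                if err < best:
                    best = err                  # keep the improvement
                    break
                w[i] -= step                    # otherwise undo the nudge
    return w

# Illustrative toy data: the first feature determines the label,
# the second feature is random noise.
random.seed(0)
xs = [random.random() for _ in range(30)]
data = [((x, random.random()), '+' if x > 0.5 else '-') for x in xs]
print(learn_weights(data, n_features=2))
```

As the slides note, a genetic algorithm or simulated annealing could replace the hill-climbing loop; in a sketch like this, only learn_weights would change.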