Instance-Based Learning: Understanding k-Nearest Neighbors and k-Means Clustering

An introduction to instance-based learning, focusing on the k-nearest neighbors (k-NN) algorithm and k-means clustering. Instance-based learning is an effective approach for dealing with large numeric data sets: k-NN handles supervised classification, while k-means is commonly used for unsupervised clustering. These notes cover the basics of both methods, including how they work, their advantages, and their challenges, and close with an application to recommender systems.

Artificial Intelligence Programming: Instance-Based Learning
Chris Brooks, Department of Computer Science, University of San Francisco

Instance-Based Learning

- So far, all of the learning algorithms we've studied construct an explicit hypothesis about the data set.
- This is nice because it lets us do a lot of the training ahead of time.
- It has the weakness that we must then use the same hypothesis for each element in the test set.
- One way to get around this is to construct a different hypothesis for each test example.
- This gives potentially better results, but more computation is needed at evaluation time.
- We can use this idea in either a supervised or an unsupervised setting.

kNN Example

Suppose we have the following data points and are using 3-NN:

  X1  X2  Class
   4   3    +
   1   1    -
   2   2    +
   5   1    -

We see the following new data point: x1 = 3, x2 = 1. How should we classify it?

kNN Example (continued)

Begin by computing the Euclidean distance from each data point to the new point:

  X1  X2  Class  Distance
   4   3    +    sqrt(5) ≈ 2.24
   1   1    -    2
   2   2    +    sqrt(2) ≈ 1.41
   5   1    -    2

The three closest points are 2, 3, and 4. Among them there are two '-' labels and one '+', so the new example is classified as negative.

Discussion

- k-NN can be a very effective algorithm when you have lots of data.
- It is easy to compute and resistant to noise.
- Bias: points that are "close" to each other share a classification.

Attribute Weighting

- A more serious problem with k-NN is the presence of irrelevant attributes.
- In many data sets, a large number of attributes are completely unrelated to classification.
- Adding more such attributes actually lowers classification performance.
- This is sometimes called the curse of dimensionality.

Attribute Weighting (continued)

- We can address this problem by assigning a weight to each component of the distance calculation:

    d(p1, p2) = sqrt( sum_i (w[i] * (p1[i] - p2[i]))^2 )

  where w is a vector of weights.
- This has the effect of transforming, or stretching, the instance space.
- More useful features get larger weights.

Learning Attribute Weights

- We can learn attribute weights through a hill-climbing search:

    let w = random weights
    let err(w) = the error rate for w under n-fold cross-validation
    while not done:
        for i in range(len(w)):
            w_new = copy of w with w_new[i] = w[i] + δ
            if err(w_new) < err(w):
                w = w_new        # keep the new weights, otherwise discard them

- We could also use a genetic algorithm or simulated annealing to do this.

K-means Clustering

- To evaluate a clustering, we measure the sum of the distances between each instance and the center of its cluster.
- But how do we know that we picked good centers? We don't. We need to adjust them.

Tuning the Centers

- For each cluster, find its mean: the point c that minimizes the total distance to all points in the cluster.
- But what if some points are now in the wrong cluster? Reassign them and repeat (a sketch of this assign-and-update loop follows below).
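The assign-and-update loop described in the last two slides can be written down compactly. The following is a minimal sketch (ours, not the lecture's code), assuming points are numeric tuples, distances are Euclidean, and initial centers are chosen at random; the names kmeans, distance, and total_distance are illustrative.

    import math
    import random

    def distance(a, b):
        # Euclidean distance between two numeric tuples.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def kmeans(points, k, iterations=100):
        # Pick k initial centers at random from the data.
        centers = random.sample(points, k)
        clusters = [[] for _ in range(k)]
        for _ in range(iterations):
            # Assignment step: each point joins the cluster of its nearest center.
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: distance(p, centers[i]))
                clusters[nearest].append(p)
            # Update step ("tuning the centers"): move each center to its cluster's mean.
            new_centers = []
            for i, cluster in enumerate(clusters):
                if cluster:
                    new_centers.append(tuple(sum(coord) / len(cluster)
                                             for coord in zip(*cluster)))
                else:
                    new_centers.append(centers[i])  # keep the old center of an empty cluster
            if new_centers == centers:  # no center moved: we have converged
                break
            centers = new_centers
        return centers, clusters

    def total_distance(centers, clusters):
        # The evaluation measure from the slides: sum of distances from each
        # instance to the center of its cluster (lower is better).
        return sum(distance(p, c) for c, cluster in zip(centers, clusters) for p in cluster)

Because the initial centers are random, each run finds only a local optimum; a common practice is to run k-means several times and keep the clustering with the smallest total distance.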
Hierarchical Clustering

- K-means produces a flat set of clusters: each document is in exactly one cluster.
- What if we want a tree of clusters? Topics and subtopics; relationships between clusters.
- We can do this using hierarchical clustering.

Hierarchical Clustering (continued)

- One application is in document processing: given a collection of documents, organize them into clusters based on topic.
- There is no preset list of potential categories and no labeled documents.
- Algorithm:

    D = {d1, d2, ..., dn}
    while |D| > k:
        find the documents di and dj that are closest according to some similarity measure
        remove them from D
        construct a new d' that is the "union" of di and dj and add it to D

- Result: a tree of categories emerges from the collection of documents.

Recommender Systems

- One application of these sorts of approaches is in recommender systems (Netflix, Amazon).
- Goal: suggest items to users that they're likely to be interested in.
- Real goal: for a given user, find other users she is similar to.

Algorithmic Challenges

- Curse of dimensionality.
- Not all items are independent; we might want to learn weights for items, or combine items into larger groups.
- This approach tends to recommend popular items, since they're likely to have been rated by lots of people.

Practical Challenges

- How do we get users to rate items? How do we get them to rate truthfully?
- What about new and unrated items?
- What if a user is not similar to anyone?

Summary

- Instance-based learning is a very effective approach to dealing with large numeric data sets.
- k-NN can be used in supervised settings; in unsupervised settings, k-means is a simple and effective choice.
- Most recommender systems use a form of this approach. (A small k-NN sketch tying these ideas together follows below.)
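As a concrete illustration of the supervised side, here is a minimal 3-NN classifier sketch (ours, not from the slides). It assumes numeric feature tuples, plain Euclidean distance, and an unweighted majority vote; using the attribute weights discussed earlier would only change the distance function. It reproduces the worked example from the kNN slides.

    import math
    from collections import Counter

    def euclidean(a, b):
        # Unweighted Euclidean distance between two feature tuples.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def knn_classify(training, query, k=3):
        # training is a list of (features, label) pairs.
        # Take the k examples nearest to the query point...
        neighbors = sorted(training, key=lambda ex: euclidean(ex[0], query))[:k]
        # ...and return the majority label among them.
        votes = Counter(label for _, label in neighbors)
        return votes.most_common(1)[0][0]

    # The worked example above: the three nearest neighbors of (3, 1)
    # are (1, 1), (2, 2), and (5, 1), so the majority vote is '-'.
    data = [((4, 3), '+'), ((1, 1), '-'), ((2, 2), '+'), ((5, 1), '-')]
    print(knn_classify(data, (3, 1)))   # prints '-'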