Instance-based Learning | Introduction to Machine Learning (CSI 5325)

Study notes, Computer Science. Professor: Greg Hamerly; University: Baylor University; Term: Spring 2008.


A partial preview of the lecture notes follows.

Intro. to Machine Learning (CSI 5325)
Lecture 18: Instance-based Learning
Greg Hamerly, Spring 2008
Some content from Tom Mitchell.

Outline

1. k-Nearest Neighbor

[Slides 3 and 4 are not included in this preview.]

When to Consider Nearest Neighbor

Nearest neighbor is a good choice when:
- instances map to points in R^d,
- there are fewer than 20 attributes per instance, and
- there is lots of training data.

Advantages:
- training is very fast,
- it can learn complex target functions, and
- no information is lost, since every training example is kept (a code sketch of the procedure appears at the end of these notes).

Disadvantages:
- it is slow at query time, and
- it is easily fooled by irrelevant attributes.

Voronoi Diagram

[Figure: the plane partitioned into Voronoi cells around positive (+) and negative (−) training examples, with a query point x_q.]

Behavior in the Limit

Let p(x) be the probability that instance x is labeled 1 (positive) rather than 0 (negative).

Nearest neighbor: as the number of training examples → ∞, 1-NN approaches the Gibbs algorithm, which predicts 1 with probability p(x) and 0 otherwise.

k-Nearest neighbor: as the number of training examples → ∞ and k also grows large, k-NN approaches the Bayes optimal classifier, which predicts 1 if p(x) > 0.5 and 0 otherwise.

Note that Gibbs has at most twice the expected error of Bayes optimal (a short derivation follows these notes).
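The closing note above is stated without proof on the slide; here is a sketch of the standard pointwise argument, which is an addition to the lecture material. Fix x and write p = p(x). The Gibbs classifier predicts 1 with probability p while, independently, the true label is 1 with probability p, so its error probability at x is

\[
\mathrm{err}_{\mathrm{Gibbs}}(x) = p(1-p) + (1-p)p = 2p(1-p).
\]

The Bayes optimal rule errs with probability

\[
\mathrm{err}_{\mathrm{Bayes}}(x) = \min\{p,\, 1-p\},
\]

and since \(2p(1-p) = 2\min\{p,1-p\}\,\max\{p,1-p\}\) with \(\max\{p,1-p\} \le 1\),

\[
\mathrm{err}_{\mathrm{Gibbs}}(x) \le 2\,\mathrm{err}_{\mathrm{Bayes}}(x).
\]

Taking the expectation over x gives the stated bound.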
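For concreteness, here is a minimal k-nearest-neighbor sketch in Python; the function names and toy data are illustrative assumptions, not from the lecture. It shows both sides of the trade-off from the "When to Consider" slide: training is just storing the examples, while every query scans all stored points.

```python
# Minimal k-NN sketch: Euclidean distance, unweighted majority vote.
import math
from collections import Counter

def euclidean(a, b):
    """Euclidean distance between two points in R^d."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train_points, train_labels, query, k=3):
    """Predict a label for `query` by majority vote among its k nearest
    training points. 'Training' is just keeping the data (fast, lossless);
    all work happens at query time (the slide's noted disadvantage)."""
    # Sort all training examples by distance to the query point.
    neighbors = sorted(zip(train_points, train_labels),
                       key=lambda pl: euclidean(pl[0], query))
    # Vote among the labels of the k closest points.
    k_labels = [label for _, label in neighbors[:k]]
    return Counter(k_labels).most_common(1)[0][0]

# Toy example (hypothetical data): two clusters in R^2,
# labeled 1 (positive) and 0 (negative).
points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1),
          (4.0, 4.0), (4.2, 3.9), (3.8, 4.1)]
labels = [1, 1, 1, 0, 0, 0]
print(knn_predict(points, labels, query=(1.1, 1.0), k=3))  # -> 1
print(knn_predict(points, labels, query=(4.1, 4.0), k=3))  # -> 0
```

Plain Euclidean distance with unweighted voting is the simplest variant; distance-weighted voting or per-attribute scaling are standard ways to mitigate the sensitivity to irrelevant attributes noted on the slide.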