Instance-Based Learning - Lecture Slides | CSI 5325, Study notes of Computer Science

Material Type: Notes; Professor: Hamerly; Class: Introduction to Machine Learning; Subject: Computer Science; University: Baylor University; Term: Unknown 1989;

Uploaded on 08/19/2009

Partial preview of the text

Intro. to machine learning (CSI 5325)
Lecture 20: Instance-based learning
Greg Hamerly
Some content from Tom Mitchell.

Outline
1 Locally weighted regression
2 Radial basis functions
3 Learning linear functions
4 Lazy and eager learning

Radial Basis Function Networks
- A global approximation to the target function, built as a linear combination of local approximations
- Used, e.g., for image classification
- A different kind of neural network
- Closely related to distance-weighted regression, but "eager" instead of "lazy"

Radial Basis Function Networks (continued)
With a_i(x) denoting the attributes describing instance x, the learned function has the form

    f(x) = w_0 + \sum_{u=1}^{k} w_u K_u(d(x_u, x))

One common choice for K_u(d(x_u, x)) is the Gaussian kernel

    K_u(d(x_u, x)) = \exp\left(-\frac{1}{2\sigma_u^2} d^2(x_u, x)\right)

Training Radial Basis Function Networks
Q1: Which x_u to use as the center of each kernel function K_u(d(x_u, x))?
- Scatter them uniformly throughout the instance space
- Or use the training instances themselves (reflects the instance distribution)
- Or use the means of clusters (found by k-means, Gaussian EM, etc.)

Q2: How to train the weights (assuming Gaussian K_u)?
- First choose the variance (and perhaps the mean) for each K_u, e.g., using EM
- Then hold each K_u fixed and train the linear output layer (efficient methods exist for fitting a linear function)
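The two training questions above can be sketched together in a few lines of Python: take the training instances themselves as the kernel centers, hold the Gaussian kernels fixed with a shared width, and fit the linear output layer by least squares. This is an illustrative sketch, not the slides' implementation; the function names and the choice of a single shared sigma are assumptions.

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian kernel activations K_u(d(x_u, x)) = exp(-d^2 / (2 sigma^2))."""
    # Squared Euclidean distances between every instance and every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, centers, sigma):
    """Hold the kernels fixed; train only the linear output layer w_0 + sum_u w_u K_u."""
    Phi = rbf_features(X, centers, sigma)
    Phi = np.hstack([np.ones((len(X), 1)), Phi])  # bias column for w_0
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # efficient linear fit
    return w

def predict_rbf(X, centers, sigma, w):
    Phi = rbf_features(X, centers, sigma)
    return w[0] + Phi @ w[1:]

# Tiny demo: centers taken directly from the training instances
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 4.0, 9.0])       # target f(x) = x^2
w = fit_rbf(X, y, centers=X, sigma=1.0)
print(predict_rbf(X, X, 1.0, w))          # close to y at the training points
```

Using the training instances as centers (option two of Q1) makes the kernel matrix square and nonsingular, so the fit interpolates the training data exactly; cluster means from k-means would give fewer centers and a smoother approximation.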
Deriving linear weights
Take the derivative of the error E with respect to the weights:

    E(w) = \frac{1}{2} (Xw - y)^T (Xw - y)

    \frac{\partial E}{\partial w} = X^T (Xw - y) = X^T X w - X^T y

Setting this equal to zero to find the minimum:

    X^T X w - X^T y = 0
    X^T X w = X^T y
    (X^T X)^{-1} X^T X w = (X^T X)^{-1} X^T y
    \hat{w} = (X^T X)^{-1} X^T y

This is easily computed in any numerical package (e.g., Matlab), with the majority of the cost being a (d+1) \times (d+1) matrix inversion.

General applicability of linear models
Linear models are extremely popular because:
- they can be solved efficiently (see the previous slides)
- they can represent more complex functions through "basis expansions": construct a linear combination of nonlinear features

Many machine learning algorithms are related to some sort of linear model:
- the perceptron with a linear output (without thresholding)
- radial basis function networks
- locally weighted regression
- nonlinear (e.g., polynomial) regression
- support vector machines
- etc.

Lazy and Eager Learning
Lazy: wait for the query before generalizing
- k-nearest neighbor, case-based reasoning

Eager: generalize before seeing the query
- radial basis function networks, ID3, backpropagation, naive Bayes, ...

Does it matter?
- An eager learner must create a single global approximation
- A lazy learner can create many local approximations
- If both use the same hypothesis space H, the lazy learner can represent more complex functions (e.g., consider H = linear functions)
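The closed-form solution derived above, \hat{w} = (X^T X)^{-1} X^T y, can be checked numerically. A minimal sketch using NumPy on synthetic noiseless data (the data and weight values here are made up for illustration); solving the normal equations with `solve` avoids forming the explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.hstack([np.ones((50, 1)), rng.normal(size=(50, 2))])  # bias column + 2 features
true_w = np.array([0.5, 2.0, -1.0])
y = X @ true_w                                               # noiseless linear target

# Normal equations from the derivation: X^T X w = X^T y
w_hat = np.linalg.solve(X.T @ X, X.T @ y)  # cheaper and more stable than inverting
print(w_hat)                               # recovers true_w on noiseless data
```

As the slide notes, the dominant cost is the (d+1) \times (d+1) linear system, which is tiny compared to forming X^T X when the number of instances is large.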