ML questions, Summaries of Pattern Classification and Recognition


Typology: Summaries

2022/2023

Uploaded on 11/23/2023

amir-sharifi


Machine Learning Midterm

This exam is open book. You may bring in your homework, class notes and textbooks to help you. You will have 1 hour and 15 minutes. Write all answers in the blue books provided. Please make sure YOUR NAME is on each of your blue books. Square brackets [ ] denote the points for a question.

ANSWER ALL THREE QUESTIONS FOR FULL CREDIT

1. Linear Algebra

(a) [7] Show for a matrix A and an eigenvector v that if Av = λv, then A^k v = λ^k v.

Answer: Multiply both sides by A and then use the eigenvector equation: A(Av) = A(λv) = λ(Av) = λ²v. Repeat k - 1 times.

(b) [8] A colleague wants to know whether the dynamical system ẋ = Ax is stable, where

    A = [ 3  1 ]
        [ 2  2 ]

Show how to settle this question.

Answer: The eigenvalue equation is (3 - λ)(2 - λ) - 2 = 0, i.e. λ² - 5λ + 4 = 0, which has roots λ = 1, 4. Both are positive, therefore the system is unstable.

(c) [10] In coding face images one can get the top M eigenvectors and eigenvalues. The same colleague now thinks that some subset of these vectors might do a good job for a given two-class classification problem. What advice would you give him regarding the evaluation of the separation distance of the classes given a subset of the eigenvectors?

Answer: One strategy is to use EM to fit the two classes assuming Gaussian distributions, then compare error rates across different feature subsets. Another strategy is to use a SUPPORT VECTOR MACHINE to estimate the separation distance for the two classes (assuming they separate).

2. Information Theory

(a) [5] A certain probability distribution for (x1, x2, x3, x4) is specified by p(x1) = 1/2, p(x2) = 1/4, p(x3) = 1/8, p(x4) = 1/8. What is its entropy?

Answer: The entropy H is given by

    H = -(1/2) log(1/2) - (1/4) log(1/4) - (1/8) log(1/8) - (1/8) log(1/8)
      = 1/2 + (1/4) × 2 + (1/8) × 3 + (1/8) × 3 = 7/4 bits.

(b) [5] Given a uniform distribution q(xi) = 1/4, i = 1, ..., 4, what is the Kullback-Leibler distance KL(p || q)?

Answer:

    KL(p || q) = Σ_i p(xi) log( p(xi) / q(xi) )
               = (1/2)(1) + (1/4)(0) + (1/8)(-1) + (1/8)(-1) = 1/4 bit.

(c) [5] Why is the KL distance useful?

Answer: Many algorithms search for a good approximation q to a target distribution p, and the KL distance gives a principled way to compare each candidate q against p.

(d) A given sound signal can be encoded by expressing the raw signal x(t) as the sum of specialized functions called gammatones: time-limited functions containing different frequencies. The figure (omitted here) provides an example for K = 6, where the gammatones in the bottom six rows can be used to code the small segment of sound source in the top row. More generally, a sound signal can be expressed as

    x(t) = Σ_{k=1}^{K} Σ_{i=1}^{N_k} a_{ik} g_k(t - t_{ik})
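The claims in 1(a) and 1(b) can be checked numerically. The matrix A is the one from the exam; the exponent k = 5 is an arbitrary choice for the demonstration:

```python
import numpy as np

# Question 1(b): is x' = Ax stable for A = [[3, 1], [2, 2]]?
A = np.array([[3.0, 1.0],
              [2.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)
# Characteristic equation (3 - l)(2 - l) - 2 = 0 has roots 1 and 4,
# both positive, so the system is unstable.
stable = bool(np.all(eigvals.real < 0))
print(sorted(eigvals.real), "stable" if stable else "unstable")

# Question 1(a): check A^k v = lambda^k v for an eigenvector v.
k = 5
v = eigvecs[:, 0]
lam = eigvals[0]
print(np.allclose(np.linalg.matrix_power(A, k) @ v, lam**k * v))
```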
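A minimal sketch of the first strategy in 1(c), reduced to a single candidate feature with one Gaussian fitted per class. The data, class means, variances, and sample sizes here are synthetic illustrative assumptions; a full EM fit over eigenvector subsets would follow the same compare-the-error-rates logic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D projections of the two face classes onto one candidate eigenvector.
x0 = rng.normal(-2.0, 1.0, 500)   # class 0
x1 = rng.normal(+2.0, 1.0, 500)   # class 1

def gaussian_loglik(x, mu, var):
    # Log-density of N(mu, var) at x, up to the usual normalising terms.
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

# Fit one Gaussian per class by maximum likelihood.
mu0, var0 = x0.mean(), x0.var()
mu1, var1 = x1.mean(), x1.var()

# Classify by the higher class log-likelihood and measure the error rate;
# a low error rate suggests this feature subset separates the classes well.
xs = np.concatenate([x0, x1])
ys = np.concatenate([np.zeros(500), np.ones(500)])
pred = (gaussian_loglik(xs, mu1, var1) > gaussian_loglik(xs, mu0, var0)).astype(float)
err = np.mean(pred != ys)
print(err)
```

Repeating this fit for each candidate subset of eigenvectors and keeping the subset with the lowest error rate is exactly the evaluation the answer recommends.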
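The arithmetic in 2(a) and 2(b) can be verified directly; base-2 logarithms are used so the answers come out in bits:

```python
import numpy as np

# Question 2(a): entropy of p = (1/2, 1/4, 1/8, 1/8).
p = np.array([1/2, 1/4, 1/8, 1/8])
H = -np.sum(p * np.log2(p))
print(H)   # 1.75 = 7/4 bits

# Question 2(b): KL(p || q) against the uniform q(xi) = 1/4.
q = np.full(4, 1/4)
kl = np.sum(p * np.log2(p / q))
print(kl)  # 0.25 = 1/4 bit
```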
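A sketch of the gammatone decomposition in 2(d), summing a_ik-weighted, time-shifted kernels g_k. The kernel form is the standard gammatone envelope-times-cosine; the sample rate, centre frequencies, bandwidth, and event list are illustrative assumptions, not values from the exam figure:

```python
import numpy as np

def gammatone(t, f, b=100.0, order=4):
    """Gammatone kernel t^(n-1) exp(-2*pi*b*t) cos(2*pi*f*t), zero for t < 0."""
    t = np.asarray(t, dtype=float)
    env = np.where(t >= 0,
                   t ** (order - 1) * np.exp(-2 * np.pi * b * np.clip(t, 0, None)),
                   0.0)
    return env * np.cos(2 * np.pi * f * t)

fs = 16000                                   # sample rate (assumed)
t = np.arange(0, 0.1, 1 / fs)                # 100 ms of signal
freqs = [200, 400, 800, 1600, 3200, 6400]    # K = 6 centre frequencies (assumed)

# Events (k, a_ik, t_ik): kernel index, amplitude, onset time.
events = [(0, 1.0, 0.01), (3, 0.5, 0.03), (5, 0.25, 0.05)]

# x(t) = sum_k sum_i a_ik * g_k(t - t_ik)
x = np.zeros_like(t)
for k, a, t0 in events:
    x += a * gammatone(t - t0, freqs[k])
```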