CPSC 201 Course Lecture Notes
Spring 2008

Day 1 (Jan 14)

Welcome and preliminaries:
- Are you in the right course? (distribute handout)
- Email: via Classes*v2: classesv2.yale.edu
- Course website: plucky.cs.yale.edu/cs201
- Class on Friday this week, and not next Monday
- TF: Amittai Aviram – see website for office hours (TBD)
- Two required texts: Omnibus and SOE
- More-or-less weekly homework assignments, plus two exams

CPSC 201 is the hardest course to teach in the CS curriculum, primarily because this question is hard to answer: "What is computer science?" Is it a science? ...a physical science? ...a natural science? Or is it more like mathematics? ...or engineering? One could make an argument for all or none of these!

It is not the study of computers – any more than astronomy is the study of telescopes. Nor is it the study of programming – any more than literature is the study of grammar. Dewdney's approach (our textbook) is to give enough examples of cool ideas in CS that you will just "know" what it is, kind of by osmosis (in the same sense that one just "knows" pornography when one sees it).

Broadly speaking, computer science is the study of information, computation, and communication. The purpose of CPSC 201 is to give a broad overview of this.

There is another cross-cutting theme that underlies CS, and that is abstraction. There is an interesting CACM article (Vol. 50(4), April 2007) called "Is Abstraction the Key to Computing?" In my textbook, I ask the question, "What are the three most important ideas in programming?" to which I offer the answer, "Abstraction, abstraction, abstraction." (Like the real-estate question.) Functions are a good abstraction for programming, because they imply computation (taking an input and returning an output) and determinacy (one answer instead of many). Haskell is particularly good at abstraction. (More on this later.)

Abstraction is not that unfamiliar to the average person. In fact, there are often multiple levels of abstraction that we are familiar with. Let's ask ourselves some simple questions:
1. How does a toilet work?
2. How does a car work?
3. How does a refrigerator work?
4. How does a microwave oven work?

For each of these we can describe a "user interface" version of "how X works" that is perfectly adequate for the user. But it doesn't tell us what's "under the hood," so to speak. We can then give a more concrete (less abstract) description of how something works, and in some cases this is done in several steps – thus yielding different "levels of abstraction."

We can also ask ourselves computer-related questions:
1. How does an iPod work?
2. How does a cell phone work?
3. How does a computer work?
4. How does the Internet work?
5. How does a particular software program work?
   i. Google (or some other search engine).
   ii. Microsoft Word.
   iii. A video game.
   iv. A GUI.

In this class we will try to answer a smorgasbord of these questions, using different levels of abstraction. A key concept will be that of an algorithm: an abstract recipe for computation. We will use Haskell to actually execute our abstract algorithms.

Speaking of which... Haskell was for many years considered an "academic language". In recent years, it has become "cool"! It has also become a practical language, and many commercial apps that use Haskell are starting to appear. See haskell.org for all you will ever need to know about Haskell.
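As a purely illustrative first taste (not from the notes; the name and function are made up here), this is what a function looks like in Haskell: it names a computation from an input to exactly one output, and it hides how that output is produced.

    -- an illustrative example of a function as an abstraction
    average :: [Double] -> Double
    average xs = sum xs / fromIntegral (length xs)

    -- e.g.  average [1, 2, 3, 4]  ==  2.5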
Also start reading SOE!

We will start our "tour" of CS with hardware. There are eight chapters in the Omnibus that cover hardware – we will not cover all of them, but will get through most. And we will use Haskell to build "hardware abstractions".

Reading Assignment: the first 2 chapters in SOE, and chapters 13, 28, and 38 in the Omnibus.

Day 3 (Jan 18) [Friday instead of Monday because of MLK]

Combinational circuits have no loops – and no state! How do we get state? Answer: loops! But not all loops yield useful results... [Show NOT gate tied back on itself.] We can write an equation that describes this:

    x = x'

This equation has no solutions!

Q: If we did this with a real logic gate, what do you think would happen?
A: Either oscillation or some value between "1" and "0".

Q: If we wrote this: x = notB x in Haskell, what would happen?
A: Non-termination!!

Let's look at something more useful: [Draw diagram of two NAND gates tied back on one another – see Fig 38.2.]

Q: What does this do?

Before answering, let's write the equations that correspond to this:

    Q  = (R Q')'
    Q' = (S Q)'

If R and S are both 1, then there are two stable states: Q could be 1 and Q' 0, or vice versa. More importantly, we can force these two conditions by momentarily making R or S go to 0. This is called an RS flip-flop, and is a simple kind of one-bit memory.

Unfortunately, unless we simulate continuous voltages, delays in wires, etc., we cannot simply write the above equations in Haskell and expect an answer. In a computer it is more common that we use sequential circuits – i.e. circuits that are clocked and typically have loops. [Draw diagram.] This begins with coming up with a clocked version of the RS flip-flop, which we can do like this: [Draw Fig 38.4, but put an inverter between R and S.] This is called a D flip-flop.

In Haskell we can take clocked flip-flops (and other kinds of clocked primitives) as given – i.e. we abstract away the details. But we still need to represent clocks and, in general, signals that vary over time. We can do this using streams, which are infinite lists in Haskell. [Give basic explanation of lists and streams, and then go on-line to the file Circuits.lhs.]

Day 4 (Jan 23)

A more thorough tutorial on Haskell... Haskell requires thinking differently... [elaborate]

x = x+1 in an imperative language is a command – it says take the old value of x, and add 1 to it. In contrast, x = x+1 in Haskell is a definition – it says what x is, not how to compute it. In this case, x is defined as a number that is the same as 1 plus that number – i.e. it is the solution to the equation x = x+1. But in fact there is no solution to this equation, and thus x is undefined, and Haskell will either not terminate or give you a run-time error if you try to execute it. So how does one increment x? [Elaborate: (a) introduce a new definition, say y, or (b) just use x+1 where you need it.]

Generally speaking: no side effects! The implications of this run deep. For example, there are no iteration (loop) constructs (while, until, etc.). Instead, recursion is used. Also, IO needs to be done in a peculiar way (more later).

Example: Factorial. [Write the mathematical definition first.]

Data structures: Haskell has a very general mechanism for defining new data structures. We saw one example: Bit [show again]. The built-in list data type has special syntax, but is otherwise nothing special. [Elaborate in steps: start with integer lists, then, say, character lists. Then point out that they have the same structure – so let's use polymorphism to capture the structure (example of abstraction). Then make the connection to Haskell's lists, including syntax: x:[] = [x], etc.]
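One way that elaboration might be written out (a sketch, not the course's own code; the type names are made up here): an integer list and a character list have exactly the same shape, so we can capture the shape once with a polymorphic type, which is what Haskell's built-in lists do with special syntax.

    data IntList  = INil | ICons Int  IntList
    data CharList = CNil | CCons Char CharList

    -- the common structure, abstracted over the element type
    data List a = Nil | Cons a (List a)

    -- Haskell's built-in lists are the same idea with special syntax:
    --   []       plays the role of  Nil
    --   x : xs   plays the role of  Cons x xs
    --   x : []   is written         [x]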
Another "built-in" data structure is tuples. If there were no special syntax, we could do:

    data Pair a b     = Pair a b       -- plays the role of (a,b)
    data Triple a b c = Triple a b c   -- plays the role of (a,b,c)
    etc.

Discuss pattern-matching (x:xs, [x], (x,y), etc.). Now introduce type synonyms: first for String, then for Sig (i.e. [Bit]).

Suppose now we want to (a) flip every bit in a stream, and (b) uppercase every character in a string. [Elaborate: write the monomorphic functions to do this.] Now note the "repeating pattern." [Elaborate: distinguish the repeating stuff from the changing stuff – introduce variables to handle the changing stuff. Then develop the code for the map function.] Point out the use of polymorphism and higher-order functions.

Note: In "Circuits" I defined:

    notS :: Sig -> Sig
    notS (x:xs) = notB x : notS xs

but this is more easily defined as:

    notS xs = map notB xs

Discuss the syntax and semantics of types; talk about currying. Then point out:

    notS = map notB

As another example, define "lift2" from the Circuits module:

    andS  (x:xs) (y:ys) = andB  x y : andS  xs ys
    orS   (x:xs) (y:ys) = orB   x y : orS   xs ys
    nandS (x:xs) (y:ys) = nandB x y : nandS xs ys
    norS  (x:xs) (y:ys) = norB  x y : norS  xs ys

Note the repeating pattern; let's capture it using higher-order functions and polymorphism:

    lift2 :: (a -> b -> c) -> [a] -> [b] -> [c]
    lift2 op (x:xs) (y:ys) = op x y : lift2 op xs ys

In Haskell this is actually called "zipWith". So now:

    andS, orS, nandS, norS, xorS :: Sig -> Sig -> Sig
    andS  = lift2 andB
    orS   = lift2 orB
    nandS = lift2 nandB
    norS  = lift2 norB
    xorS  = lift2 xorB

Day 6 (Jan 30)

How does one do other things with numbers – such as subtract, multiply, and divide?

To do subtraction, we can use two's-complement arithmetic: a number is represented in binary, where the most significant bit determines whether the number is positive or negative. A number is negated by taking the one's complement (i.e. flipping all the bits), and adding one. For example, assume 4-bit numbers + sign, i.e. 5 bits:

    01111    15
    01110    14
    ...
    00001     1
    00000     0
    11111    -1    (the one's complement of 11111 is 00000, and 00000 + 1 = 00001, i.e. 1)
    11110    -2
    ...
    10001   -15
    10000   -16    (the "weird number" – note that there is no +16)

Note that the two's complement of 0 is zero.

Q: What is the two's complement of -16?
A: -16!!

Note that using a 5-bit adder, n + (-n) = 0, as it should. For example:

      00101     5
    + 11011    -5
    ---------
      00000     0    (the carry out of the top bit is discarded)

So to compute a - b, just do a + (-b) in two's-complement arithmetic. [Give two examples – one with a positive result, the other negative.]

Q: How do you compute the two's complement of a number?
A: Compute the one's complement, and add one.

Q: How do you compute the one's complement?
A: Use inverters, but if one wants to do it selectively, then use XOR gates! [Draw circuit diagram. Note that using 8-bit (say) building blocks means we have 7-bit signed numbers. And the carry-in can be used to perform the negation.]

What about multiplication? We could do it sequentially as in the Circuits module, but how about combinationally? To do this we could simulate the standard way of doing long-hand multiplication. For example:

        110      6
      x 101      5
      -------
        110          (110 gated by b0 = 1)
      11000          (110 gated by b2 = 1, shifted left two places)
      -------
      11110     30

Q: How many bits does the result of an n-bit multiplication have?
A: 2n.

Draw a circuit diagram to mimic a*b for 4-bit words:
- Need a 5-bit, 6-bit, and 7-bit adder.
- Word a is gated by b0, b1, b2, and b3 at each stage:
  - At the first stage, b0 and b1 gate the two inputs.
  - At each stage a is "shifted" by adding a zero at the LSB.
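The long-hand scheme just described can also be prototyped directly in Haskell. The following is only an illustrative sketch under simplifying assumptions (bits as the Ints 0 and 1, words as lists with the least significant bit first, unbounded width); the names Word', addW, and mulW are made up here and are not part of the Circuits module.

    type Word' = [Int]   -- a binary word: 0/1 bits, least significant bit first

    -- ripple-style addition of two words (the result grows as needed)
    addW :: Word' -> Word' -> Word'
    addW = go 0
      where
        go 0 [] [] = []
        go c [] [] = [c]
        go c xs ys = let (x, xs') = uncons' xs
                         (y, ys') = uncons' ys
                         t        = x + y + c
                     in  t `mod` 2 : go (t `div` 2) xs' ys'
        uncons' []     = (0, [])
        uncons' (z:zs) = (z, zs)

    -- shift-and-add multiplication: gate a with each bit of b,
    -- shifting a one place (a zero at the LSB) at each stage
    mulW :: Word' -> Word' -> Word'
    mulW _ []     = []
    mulW a (b:bs) = addW (if b == 1 then a else []) (0 : mulW a bs)

    -- e.g.  mulW [0,1,1] [1,0,1]  ==  [0,1,1,1,1]    (6 * 5 == 30)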
In practice, even more sophisticated circuits are used to do all of this better (even addition – mention carry "look-ahead") and to do other operations such as division.

In a computer, we sometimes want to add, sometimes multiply, and so on. And we want to do so on different pieces of data. For your homework, you are required to design a circuit that takes a single instruction and does one of eight things with it. [Look at Assignment 2 on-line. Ask if there are any questions – in particular regarding Haskell.]

Von Neumann Computers

The von Neumann computer is an abstract version of the modern-day digital computer. (It is sometimes also called a Random Access Machine.) A more concrete, but still fairly simple and abstract, version of this is the SCRAM (Simple but Complete Random Access Machine) described in Ch 48 of the Omnibus. [Show overhead transparency of Fig 48.1.] [Time permitting, explain the concepts of micro-code, machine code, assembler code, and high-level code.]

Day 7 (Feb 4) [Amittai's Lecture]

From Amittai: I went over problems 1 and 3 of HW2 and then went to Chapter 48, figuring that the review of Chapter 48 would be a useful way of covering the ideas in Problem 2. I also reviewed the differences between multiplexers, demultiplexers, encoders, and decoders, and why decoders are so important in building a SCRAM or a machine such as the one in Problem 2 – since I had noticed some confusion among some students about that. Also clarified the function of registers.

Day 8 (Feb 6)

Chapter 17 describes the SCRAM from a more abstract point of view – namely, from the point of view of machine code. This is yet another abstraction – you can forget about the MBR and MAR, etc., and focus instead on just the memory, the AC (accumulator), the PC (program counter), and the instructions.

Note: Chapter 17 makes one other assumption – namely, that the memory is divided into two sections, one for code, and the other for data.

Ch 17 gives an example of a program for detecting duplicates in an array of numbers. Your homework is to add together all the elements of an array. Note:
- Any constants that you need (even the integer 1, say) must be loaded into a memory (data) location.
- Any variables that you need also need a memory (data) location.
- To do indexing you need to use indirection through a memory (data) location – i.e. the index is itself a variable. [Give example.]
- All conditional branches need to be "translated" into a test for zero in the accumulator (i.e. use the JMZ instruction).

Ch 17 also gives the high-level "pseudo-code" for the removing-duplicates algorithm.

Q: How does one get from this code to RAL?
A: Using a compiler. In practice the situation is slightly more complex: [Draw diagram: Program in High-level Language -> Compiler -> Assembly Language Program -> Assembler -> Object (Machine) Code -> Linker -> Executable (Binary) Code -> Computer.]

Note: RAL is an example of a Machine Language – the concept of an Assembly Language is only a slight elaboration of a Machine Language to include, for example, symbolic names for addresses.

Key concept: The compiler, assembler, and linker are all examples of compilation, or translation, of one language into another. The final "executable" is then actually executed, or, in a technical sense, interpreted by the computer.
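This "machine code being interpreted" view can itself be sketched in Haskell. The toy accumulator machine below is illustrative only (it is not the RAL/SCRAM instruction set from the Omnibus, and its names are made up here), but it shows the abstraction: a memory, an accumulator (AC), a program counter (PC), and instructions.

    data Instr
      = Load Int       -- AC := mem[a]
      | Store Int      -- mem[a] := AC
      | Add Int        -- AC := AC + mem[a]
      | Jmz Int        -- if AC == 0 then PC := a else PC := PC + 1
      | Halt
      deriving Show

    run :: [Instr] -> [Int] -> [Int]      -- program -> initial memory -> final memory
    run prog = go 0 0
      where
        go ac pc mem
          | pc >= length prog = mem
          | otherwise =
              case prog !! pc of
                Load a  -> go (mem !! a)      (pc + 1) mem
                Store a -> go ac              (pc + 1) (setAt a ac mem)
                Add a   -> go (ac + mem !! a) (pc + 1) mem
                Jmz a   -> go ac (if ac == 0 then a else pc + 1) mem
                Halt    -> mem
        setAt i v xs = take i xs ++ [v] ++ drop (i + 1) xs

    -- e.g.  run [Load 0, Add 1, Store 2, Halt] [3, 4, 0]  ==  [3, 4, 7]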
Another interesting fact: we can try to generalize FA's by allowing non-deterministic transitions between states (basically, add out-going arcs that have the same transition symbol). But this does not help: the class of languages recognized by non-deterministic FA's (NFA's) is the same as that of deterministic FA's (DFA's).

Here is an example of a language that cannot be recognized by a FA: palindromes with a center mark. [Try writing a grammar for it, or constructing a FA for it.]

Q: What is the problem here? Hint: it has to do with state.
A FA has finite state – thus the name!!

We can add output, thus creating a Mealy Machine [Elaborate: show transitions], but that doesn't help. We need unbounded memory – at the abstract level, this is done with a tape.

Push-Down Automata (PDA)

A PDA has a tape on which it can do one of two things:
- Advance the tape and write a symbol, or
- Erase the current symbol and move back a cell.

[Elaborate: show transitions as in Fig 7.3.] [Give example: palindromes with a center marker.]

PDA's are more powerful than FA's – the class of languages that they accept is called the deterministic context-free languages, which is a superset of the regular languages. Furthermore, non-determinism buys us something with this class of automata – non-deterministic PDA's recognize the full class of context-free languages, which is an even larger class of languages.

What if we allow the tape to be read and written in arbitrary ways and directions? [Elaborate: show the transitions as in Fig 7.7.] Then we come up with two other classes of machines:
- If we put a linear-factor bound on the size of the tape, then we have a linear-bounded automaton, which recognizes the class of context-sensitive languages.
- If we place no bound on the tape size, then we have a Turing Machine, which recognizes the class of recursively enumerable languages.

Putting this all together yields what is known as the Chomsky Hierarchy. [Draw the table on page 43.]

Interestingly, the Turing Machine is the most powerful machine possible – equivalent in power to a RAM, and, given its infinite tape, more powerful than any computer in existence today!

Day 10 (Feb 13)

Chapter 23 is about generative grammars – which are really no different from ordinary grammars, but they are used differently. A grammar describes a language – one can then either design a recognizer (or parser) for that language, or design a generator that generates sentences in that language.

A generative grammar is a four-tuple (N, T, n, P), where:
- N is the set of non-terminal symbols
- T is the set of terminal symbols
- n is the initial symbol
- P is a set of productions, where each production is a pair (X, Y), often written X -> Y, where X and Y are words over the alphabet N ∪ T, and X contains at least one non-terminal.

A Lindenmayer system, or L-system, is an example of a generative grammar, but it is different in two ways:
- The sequence of sentences is as important as the individual sentences, and
- A new sentence is generated from the previous one by applying as many productions as possible on each step – a kind of "parallel production".

Lindenmayer was a biologist and mathematician, and he used L-systems to describe the growth of certain biological organisms (such as plants, and in particular algae).

The particular kind of L-system demonstrated in Chapter 23 has the following additional characteristics:
- It is context-free – the left-hand side of each production (i.e. X above) is a single non-terminal.
- No distinction is made between terminals and non-terminals (with no loss of expressive power – why?).
- It is deterministic – there is exactly one production corresponding to each symbol in the alphabet.
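A deterministic, context-free L-system of this kind is easy to sketch in Haskell (an illustrative example, not the course's code): each step rewrites every symbol of the current sentence in parallel.

    type Rules = Char -> String

    -- Lindenmayer's algae example:  a -> ab,  b -> a
    algae :: Rules
    algae 'a' = "ab"
    algae 'b' = "a"
    algae c   = [c]

    -- one step of "parallel production": rewrite every symbol at once
    step :: Rules -> String -> String
    step rules = concatMap rules

    -- the sequence of sentences, starting from the initial symbol
    generations :: Rules -> String -> [String]
    generations rules = iterate (step rules)

    -- e.g.  take 5 (generations algae "a")
    --         == ["a", "ab", "aba", "abaab", "abaababa"]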
[Go over Problem 2 in the homework.]

[Haskell hacking: go over the PPT slides for Chapters 5 and 9 in SOE.]

Day 11 (Feb 18)

[Go over the solution to Assignment 4 – spend as much time on Haskell as needed.]

Day 12 (Feb 20)

This week: Chapter 31 in the Omnibus: Turing Machines.

At the "top" of the Chomsky Hierarchy is the Turing Machine – the most powerful computer in the Universe. Although woefully impractical, the TM is the most common abstract machine used in theoretical computer science. It was invented by Alan Turing, the famous mathematician, in the 1930's (mention the Turing Award and the Turing Test).

In terms of language recognition, a TM is capable of recognizing sentences generated by "recursively enumerable languages". We will not study that in detail... rather, we will consider a TM as a computer: it takes as input some symbols on a tape, and returns as output some (presumably other) symbols on a tape. [Draw diagram.] In that sense a TM is a partial function f : Σ* -> Σ*, where Σ is the tape's alphabet.

A TM program is a set of 5-tuples (q, s, q', s', d) where:
- q in Q is the current state,
- s in Σ is the current symbol (under the head of the TM),
- q' is the next state,
- s' in Σ is the next symbol to be written in place of s, and
- d is the direction to move the head (left, right, or stop).

(Q and Σ are finite.) The program can be more conveniently written as a finite-state automaton, where the labels on the arcs also convey the symbol to write and the direction to move.

[Example: unary subtraction.]
[Example: unary multiplication as described in the Omnibus.]

Q: How small can the alphabet be, and still have the full power of a TM?
A: Two. (Perhaps not surprising.)

Q: How small can the set of states be, and still have the full power of a TM?
A: Two! (This is surprising...)

Q: Does adding more tapes make a TM more powerful?
A: No, but it does make it more efficient.

Q: Does making the tape "semi-infinite" make the TM less powerful?
A: No!

The Omnibus has a proof that an n-tape machine is no more powerful than a one-tape machine.
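The 5-tuple program format above translates almost directly into Haskell. The simulator below is an illustrative sketch only (it is not code from the course; the names and the blank-as-space convention are assumptions made here).

    data Dir = L | R | Stop deriving (Eq, Show)

    type State  = Int
    type Symbol = Char
    type Rule   = (State, Symbol, State, Symbol, Dir)

    -- cells to the left of the head (nearest first), the scanned cell,
    -- and cells to the right; ' ' plays the role of the blank symbol
    data Tape = Tape [Symbol] Symbol [Symbol] deriving Show

    move :: Dir -> Tape -> Tape
    move L (Tape (l:ls) c rs) = Tape ls l (c:rs)
    move L (Tape []     c rs) = Tape [] ' ' (c:rs)
    move R (Tape ls c (r:rs)) = Tape (c:ls) r rs
    move R (Tape ls c [])     = Tape (c:ls) ' ' []
    move Stop t               = t

    runTM :: [Rule] -> State -> Tape -> Tape
    runTM prog q (Tape ls c rs) =
      case [ (q', s', d) | (q0, s0, q', s', d) <- prog, q0 == q, s0 == c ] of
        []                -> Tape ls c rs                -- no applicable rule: halt
        ((q', s', d) : _) ->
          let t' = move d (Tape ls s' rs)                -- write s', then move
          in  if d == Stop then t' else runTM prog q' t'

    -- e.g. a two-rule machine that scans right over 1's and appends one more:
    --   incr = [ (0, '1', 0, '1', R), (0, ' ', 0, '1', Stop) ]
    --   runTM incr 0 (Tape [] '1' "11")   -- leaves four 1's on the tape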
We can do better by:
1. "Parameterizing" the complexity measure in terms of the input size.
2. Measuring things in terms of "abstract steps".

So we can say, for example, that "for an input of size N, program P executes 19N + 27 steps." However, this still isn't abstract enough, because usually we are only interested in order-of-magnitude estimates of complexity. The collapse of these complexity measures is usually achieved by something called "big-O" notation, which, effectively, removes constant factors and lower-order terms. For example:

    6N, 19.6N + 27, 100N, ...           all linear in N          – i.e. O(N)
    6N^2, 5N^2 + 6N + 7, ...            quadratic in N           – i.e. O(N^2)
    5N^13, N^13 + 2N^12 - 3N^11, ...    proportional to N^13     – i.e. O(N^13)
    5^N, 10^N + N^100, ...              exponential in N         – i.e. O(a^N)
    log2 N, log10 N, ...                logarithmic in N         – i.e. O(log N)

Definition of big-O notation: R(n) has order of growth f(n), written R(n) = O(f(n)), if there is some constant k for which R(n) ≤ k*f(n) for all sufficiently large values of n.

But note:
1. Complexity measures depend on assumptions about what is a valid step or operation. For example, if "sort" were a valid operation, then sorting would take O(1) operations!
2. Constant factors can matter! For example, 10^60 * N > N^2 for fairly large values of N!
3. Complexity of algorithms is done abstractly – complexity of programs is more concrete and depends on a careful understanding of the operational semantics of our language, which may be non-trivial!

Kinds of Complexity Measures
1. Worst case: complexity given worst-case assumptions about the inputs.
2. Average case: complexity given average-case assumptions about the inputs.
3. Best case: complexity given best-case assumptions about the inputs.

Upper and Lower Bounds

Sorting can be done in O(N log N) steps, but can we do better?

A problem has a lower bound of L(n) if there is no algorithm for solving the problem having lower complexity. (Absence of an algorithm requires a proof.) A problem has an upper bound of U(n) if there exists an algorithm for solving the problem with that complexity. (Existence of an algorithm requires exhibiting the algorithm.)

So, finding an algorithm establishes an upper bound. But lower bounds amount to proving the absence of any better algorithm. If the upper and lower bounds are equal then we have found an optimal solution! The search for upper and lower bounds can be seen as approaching the optimal solution from above and below. Problems for which the lower bound equals the upper bound are said to be closed. Otherwise, they are open, leaving an "algorithmic gap". (Open and closed thus have dual meanings!)

Day 18 (Mar 26)

Review:
- Worst case, average case, best case.
- "Big O" notation.
- Upper and lower bounds.
- Optimal algorithm.

Point out that "constants can matter".

Discuss bubble-sort and a way to improve it by noting that at each step, the list is partially sorted. This doesn't improve the complexity class, but it improves the constant factor.

Detailed case study: the Min-Max algorithm (find the minimum and maximum elements in a list). Iterative version in Haskell (what are the types?):

    iterMinMax (x:xs) = iMM x x xs

    iMM min max []               = (min, max)
    iMM min max (x:xs) | x<min   = iMM x max xs
    iMM min max (x:xs) | x>max   = iMM min x xs
    iMM min max (x:xs)           = iMM min max xs   -- x is neither a new min nor a new max

Consider the number of comparisons C(n) needed for iterMinMax. In the worst case, each iteration requires 2 comparisons, and there are n iterations. Therefore C(n) = 2n.

Alternatively, here is a "divide and conquer" solution in Haskell (what is its type?):

    dcMinMax (x:[])     = (x, x)
    dcMinMax (x1:x2:[]) = if x1<x2 then (x1, x2) else (x2, x1)
    dcMinMax xs         = let (min1, max1) = dcMinMax (leftHalf xs)
                              (min2, max2) = dcMinMax (rightHalf xs)
                          in ( if min1<min2 then min1 else min2,
                               if max1>max2 then max1 else max2 )
      where leftHalf  ys = take (length ys `div` 2) ys   -- split the list into halves
            rightHalf ys = drop (length ys `div` 2) ys

What is the complexity of this? We can write a set of recurrence equations that describe the number of comparisons:

    C(1) = 0
    C(2) = 1
    C(n) = 2*C(n/2) + 2

which, for simplicity, assumes that n is a power of 2 (and thus is always divisible by 2). We can arrive at a closed-form solution to these equations by first "guessing" that the solution has the form C(n) = kn + d. Plugging this in we get:

    C(n) = kn + d = 2*C(n/2) + 2 = 2*(kn/2 + d) + 2 = kn + 2d + 2

i.e. d = 2d + 2, so d = -2. Now plugging this into the equation for C(2):

    C(2) = 2k + d = 2k - 2 = 1

so k = 3/2. Therefore a closed-form solution to C(n) is:

    C(n) = 3n/2 - 2

This is a (modest) improvement over the 2n comparisons given earlier.

Here is a list of common recurrence equations and their solutions:

    C(1) = 0,  C(n) = C(n/2) + 1      Solution: C(n) = log n
    C(1) = 0,  C(n) = C(n-1) + 1      Solution: C(n) = n - 1
    C(1) = 0,  C(n) = 2*C(n/2) + 1    Solution: C(n) = n - 1
    C(1) = 0,  C(n) = 2*C(n/2) + n    Solution: C(n) = n log n

Day 20 (Apr 2)

Recall the "G3C" (graph three-colorability) problem. Review what a graph is.
There is an algorithm for converting any instance of G3C to an instance of SAT:
- For each vertex v_i in G, output the clause (r_i + y_i + b_i).
- For each edge (v_i, v_j) in G, output the clauses (~r_i + ~r_j), (~y_i + ~y_j), (~b_i + ~b_j).

This transformation has the property that solving the SAT problem will solve the G3C problem (why?). Note:
- To say that no two adjacent vertices have the same color is to say ~(r_i r_j + y_i y_j + b_i b_j), which, by deMorgan's law, is just (~r_i + ~r_j)(~y_i + ~y_j)(~b_i + ~b_j).
- The constraint (r_i + y_i + b_i) allows a vertex to be more than one color! But if such an assignment satisfies the Boolean expression, that just means the vertex could be any of those colors. (Example: consider the singleton graph with one vertex and no edges.)
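The clause generation just described is easy to sketch in Haskell (an illustrative sketch, not the course's code; the variable-naming scheme and the clause representation are assumptions made here).

    data Lit    = Pos String | Neg String deriving Show
    type Clause = [Lit]                    -- a clause is a disjunction of literals

    -- the variables r_i, y_i, b_i for vertex i
    r, y, b :: Int -> String
    r i = "r" ++ show i
    y i = "y" ++ show i
    b i = "b" ++ show i

    g3cToSat :: [Int] -> [(Int, Int)] -> [Clause]
    g3cToSat vertices edges =
         [ [Pos (r i), Pos (y i), Pos (b i)] | i <- vertices ]     -- every vertex gets a color
      ++ concat [ [ [Neg (r i), Neg (r j)]                         -- no edge joins two
                  , [Neg (y i), Neg (y j)]                         -- vertices of the
                  , [Neg (b i), Neg (b j)] ]                       -- same color
                | (i, j) <- edges ]

    -- e.g.  g3cToSat [1, 2] [(1, 2)]  produces 2 vertex clauses and 3 edge clauses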
This is an example of a reduction from one problem to another, and it is a common technique used in theoretical computer science. In the case of NP-complete problems, they are all reducible (in polynomial time) to one another – and therefore are equivalently "hard", or equivalently "easy".

In general, to show that a problem P is in NPC, we must find two problems P1 and P2 that are already known to be in NPC. Then:
1. Reduce P to P1 – i.e. P can't be any harder than P1.
2. Reduce P2 to P – i.e. P can't be any easier than P2.

Note:
- P1 and P2 may be the same problem, but often aren't.
- Establishing (1) shows that P is no harder than any problem in NPC.
- Establishing (2) shows that P is at least as hard as any problem in NPC, and is therefore "NP-hard".

Harder problems

Some problems are provably exponential – for example, finding a winning strategy in many kinds of board games. It's interesting to note that the "solutions" themselves are usually exponential – e.g. an exponential number of moves in a board game.

Also, there are problems that are even harder than exponential. For example:
- Presburger arithmetic: e.g. (forall x, exists y, z) s.t. x + z = y /\ (exists w) w + w = y. This is doubly exponential (2^(2^n)).
- WS1S: a logic for talking about sets of positive integers, with existential and universal quantification, +, =, etc. This logic has no decision procedure of k-fold exponential complexity for any k!!! But it is still decidable... This is called non-elementary.

Space complexity is also important. In the end, we have a bunch of complexity classes:
- PTIME (= P): solvable in polynomial time
- NPTIME (= NP): solvable in non-deterministic polynomial time
- PSPACE: solvable in polynomial space
- NPSPACE: ... etc.
- EXPTIME: ...
- EXPSPACE: ...

Note that if something takes XSPACE, then it must also take at least XTIME, since just creating that much space takes an equivalent amount of time. And thus we have a complexity hierarchy. [Show slide of the complexity hierarchy.]

How do we deal with intractability?
- Could focus on doing well on "typical" problems, rather than the worst case.
- Could look for solutions that are "near optimal", perhaps within some tolerance. For example, the traveling salesman problem is easily solvable if we allow "twice optimal".
- Could look for probabilistic solutions that give good (perhaps even optimal) results "most of the time".

Day 21 (April 7)

Go over the solution to Assignment 8.

Artificial Intelligence: when a problem is intractable, heuristics can be used to come up with approximate solutions. These heuristics often come from our intuitions about how humans solve problems – thus the phrase "artificial intelligence". Example: computer vision (read Chapter 19 of the Omnibus). Briefly explain the techniques used in Chapter 19 to recover the three-dimensional structure of a scene from its 2D projection.

Day 22 (April 9)

Another example of artificial intelligence: programs to play games. (Read Chapter 6 of the Omnibus.)

Discuss the minimax algorithm – draw a game tree on the blackboard and work through the example from the Omnibus. Then discuss alpha-beta pruning using the same example.

Go over Assignment 9 – explain the game of Mancala (Kalah) and show the code provided to play the game.

Day 23 (April 14)

New topic: Program Verification. Read Chapter 10 in the Omnibus, and Chapter 11 in SOE. Go over the PowerPoint slides of Chapter 11 in SOE.

Day 24 (April 16)

Go over the solution to Assignment 9. Program verification, continued. Finish the slides from Chapter 11 in SOE.

Day 25 (April 21)

Computer Music – read Chapters 20 and 21 of SOE. Go through the PowerPoint slides for those chapters.

Day 26 (April 23)

Go over the solution to Assignment 10. Explain how to run MDL programs from the SOE code in GHCi. Finish going over the computer music slides for Chapter 21. Discuss the final exam.