Haskell CheatSheet
M. Al-hassy, April 2020

Hello, Home!

main = do
  putStr "What's your name? "
  name <- getLine
  putStrLn ("It's 2020, " ++ name ++ "! Stay home, stay safe!")

Pattern Matching

Functions can be defined using the usual if_then_else_ construct, or as expressions guarded by Boolean expressions as in mathematics, or by pattern matching — a form of 'syntactic comparison'.

fact n = if n == 0 then 1 else n * fact (n - 1)

fact' n | n == 0 = 1
        | n /= 0 = n * fact' (n - 1)

fact'' 0 = 1
fact'' n = n * fact'' (n - 1)

The above definitions of the factorial function are all equal. Guards, as in the second version, are a form of 'multi-branching conditional'. In the final version, when a call, say, fact 5 happens, we compare syntactically whether 5 and the first pattern 0 are the same. They are not, so we consider the second case, with the understanding that an identifier appearing in a pattern matches any argument; hence the second clause is used.

When pattern matching is used, the order of equations therefore matters: if we declared the n-pattern first, then the call fact 0 would match it and we would end up with 0 * fact (-1), which is not what we want! If we defined the final fact using only the first clause, then fact 1 would crash with the error "Non-exhaustive patterns in function fact". That is, we may define partial functions by not considering all possible shapes of inputs. See also "view patterns".

Local Bindings

An equation can be qualified by a where or let clause for defining values or functions used only within an expression.

...e...e...e where e = expr   ≈   let e = expr in ...e...e...e

It sometimes happens in functional programs that one clause of a function needs part of an argument, while another operates on the whole argument. It is tedious (and inefficient) to write out the structure of the complete argument again when referring to it. Use the "as operator" @ to label all or part of an argument, as in

f label@(x:y:ys) = · · ·

Operators

Infix operators in Haskell must consist entirely of 'symbols' such as &, ^, !, ... rather than alphanumeric characters. Hence, while addition, +, is written infix, integer division is written prefix with div. We can always use whatever fixity we like:

• If f is any prefix binary function, then x `f` y is a valid infix call.
• If ⊕ is any infix binary operator, then (⊕) x y is a valid prefix call.

It is common to fix one argument ahead of time: e.g., λ x → x + 1 is the successor operation and is written more tersely as (+1). More generally, (⊕r) = λ x → x ⊕ r.

The usual arithmetic operations are +, /, *, -, while % (from Data.Ratio) is used to make fractions. The Boolean operations are ==, /=, &&, || for equality, discrepancy, conjunction, and disjunction.

Types

Types are inferred, but it is better to write them explicitly so that you communicate your intentions to the machine. If you think that expression e has type τ, then write e :: τ to communicate that to the machine, which will silently accept your claim or reject it loudly.

Type                 Name       Example Values
Small integers       Int        42
Unlimited integers   Integer    7376541234
Reals                Float      3.14
Rationals            Rational   2 % 5
Booleans             Bool       True and False
Characters           Char       'a' and '3'
Strings              String     "salam"
Lists                [α]        [] and [x1, ..., xn]
Tuples               (α, β, γ)  (x1, x2, x3)
Functions            α → β      λ x → · · ·
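
Putting the pieces above together (guards, a where clause, the as-operator @, and an explicit type signature), here is a minimal runnable sketch; the function swapHeads and its behaviour are our own illustration, not part of the original sheet:

-- swapHeads "haskell" = "ahskell";  swapHeads "x" = "x"
swapHeads :: String -> String
swapHeads whole@(x:y:ys)
  | same      = whole        -- the @-label lets us return the entire argument
  | otherwise = y : x : ys
  where same = x == y        -- local binding, visible to all guards of this clause
swapHeads short = short      -- lists of length 0 or 1 are returned unchanged
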
Polymorphism is the concept that allows one function to operate on different types. A function whose type contains variables is called a polymorphic function. The simplest polymorphic function is id :: a -> a, defined by id x = x.

Tuples

Tuples (α1, ..., αn) are types with values written (x1, ..., xn), where each xi :: αi. They are a form of 'record' or 'product' type. E.g., (True, 3, 'a') :: (Bool, Int, Char). Tuples are used to "return multiple values" from a function. Two useful functions on tuples of length 2 are:

fst :: (α, β) → α
fst (x, y) = x

snd :: (α, β) → β
snd (x, y) = y

If in addition you import Control.Arrow, then you may use:

first :: (α → τ) → (α, β) → (τ, β)
first f (x, y) = (f x, y)

second :: (β → τ) → (α, β) → (α, τ)
second g (x, y) = (x, g y)

(***) :: (α → α') → (β → β') → (α, β) → (α', β')
(f *** g) (x, y) = (f x, g y)

(&&&) :: (τ → α) → (τ → β) → τ → (α, β)
(f &&& g) x = (f x, g x)

Lists

Lists are sequences of items of the same type. If each xi :: α, then [x1, ..., xn] :: [α]. Lists are useful for functions that want to 'non-deterministically' return a value: they return a list of all possible values.

• The empty list is [].
• We "cons"truct nonempty lists using (:) :: α → [α] → [α].
• Abbreviation: [x1, ..., xn] = x1 : (x2 : (· · · (xn : []))).
• List comprehensions: [f x | x <- xs, p x] is the list of elements f x where x is drawn from the list xs and x satisfies the property p.
  ◦ E.g., [2 * x | x <- [2, 3, 4], x < 4] ≈ [2 * 2, 2 * 3] ≈ [4, 6].
• Shorthand notation for segments, where u may be omitted to yield infinite lists:
  ◦ [l .. u] = [l, l + 1, l + 2, ..., u].
  ◦ [a, b .. u] = [a + i * step | i <- [0 .. (u - a) `div` step]] where step = b - a.

Strings are just lists of characters: "c0c1...cn" ≈ ['c0', ..., 'cn'].

• Hence, all list methods work for strings.

Pattern matching on lists

prod [] = 1
prod (x:xs) = x * prod xs

fact n = prod [1 .. n]

If your function needs a case with a list of, say, length 3, then you can match directly on that shape via [x, y, z] — which is just an abbreviation for the shape x:y:z:[]. Likewise, if we want to consider lists of length at least 3, then we match on the shape x:y:z:zs. E.g., define the function that produces the maximum of a non-empty list, or the function that removes adjacent duplicates — both require the use of guards; a sketch of both appears after the duality note below.

[x0, ..., xn] !! i = xi
[x0, ..., xn] ++ [y0, ..., ym] = [x0, ..., xn, y0, ..., ym]
concat [xs0, ..., xsn] = xs0 ++ · · · ++ xsn

{- Partial functions -}
head [x0, ..., xn] = x0
tail [x0, ..., xn] = [x1, ..., xn]
init [x0, ..., xn] = [x0, ..., xn−1]
last [x0, ..., xn] = xn

take k [x0, ..., xn] = [x0, ..., xk−1]
drop k [x0, ..., xn] = [xk, ..., xn]
sum [x0, ..., xn] = x0 + · · · + xn
prod [x0, ..., xn] = x0 * · · · * xn
reverse [x0, ..., xn] = [xn, ..., x0]
elem x [x0, ..., xn] = x == x0 || · · · || x == xn
zip [x0, ..., xn] [y0, ..., ym] = [(x0, y0), ..., (xk, yk)] where k = n `min` m
unzip [(x0, y0), ..., (xk, yk)] = ([x0, ..., xk], [y0, ..., yk])

Duality: Let ∂f = reverse . f . reverse; then init = ∂ tail and take k = ∂ (drop k); even pure . head = ∂ (pure . last), where pure x = [x].
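
Here is one possible solution to the two exercises mentioned above (a sketch; the names maxNE and dedup are ours):

-- Maximum of a non-empty list, via guards and an "at least two elements" pattern.
-- Deliberately partial: maxNE [] crashes, as discussed above.
maxNE :: Ord a => [a] -> a
maxNE [x] = x
maxNE (x:y:ys)
  | x >= y    = maxNE (x : ys)   -- keep the larger of the first two elements
  | otherwise = maxNE (y : ys)

-- Remove adjacent duplicates: dedup "aabbc" = "abc".
dedup :: Eq a => [a] -> [a]
dedup (x:y:ys)
  | x == y    = dedup (y : ys)
  | otherwise = x : dedup (y : ys)
dedup xs = xs                    -- empty and singleton lists are unchanged
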
List 'Design Patterns'

Many functions have the same 'form' or 'design pattern', a fact which is exploited by defining higher-order functions that factor out the structural similarity of the individual functions.

map f xs = [f x | x <- xs]

• Transform all elements of a list according to the function f.

filter p xs = [x | x <- xs, p x]

• Keep only the elements of the list that satisfy the predicate p.

• takeWhile p xs ≈ take elements of xs that satisfy p, but stop at the first element that does not satisfy p.
• dropWhile p xs ≈ drop all elements until you see one that does not satisfy the predicate.
• xs = takeWhile p xs ++ dropWhile p xs.

Right-folds let us 'sum' up the elements of a list, associating to the right.

foldr (⊕) e ≈ λ (x0 : (x1 : (... : (xn : [])))) → (x0 ⊕ (x1 ⊕ (... ⊕ (xn ⊕ e))))

This function just replaces cons ":" and [] with ⊕ and e. That's all.

• E.g., replacing : and [] with themselves does nothing: foldr (:) [] = id.

Functor Examples

Let f1, f2 be functors and let ε be a given type.

Type         Former f α            f <$> x
Identity     α                     f <$> x = f x
Constant     ε                     f <$> x = x
List         [α]                   f <$> [x0, ..., xn] = [f x0, ..., f xn]
Either       Either ε α            f <$> x = right f x
Product      (f1 α, f2 α)          f <$> (x, y) = (f <$> x, f <$> y)
Composition  f1 (f2 α)             f <$> x = (fmap f) <$> x
Sum          Either (f1 α) (f2 α)  f <$> ea = (fmap f +++ fmap f) ea
Writer       (ε, α)                f <$> (e, x) = (e, f x)
Reader       ε → α                 f <$> g = f . g
State        ε → (ε, α)            f <$> g = second f . g

Notice that writer is the product of the constant and the identity functors. Unlike reader, the type former f α = α → ε is not a functor, since there is no way to implement fmap. In contrast, f α = (α → ε, α) does have an implementation of fmap, but it is not lawful.

Applicative

Applicatives are collection-like types that can apply collections of functions to collections of elements. In particular, applicatives can fmap over multiple arguments; e.g., if we try to add Just 2 and Just 3, we find (+) <$> Just 2 :: Maybe (Int → Int), and this is not a function and so cannot be applied further to Just 3 to get Just 5. We have both the function and the value wrapped up, so we need a way to apply the former to the latter. The answer is (+) <$> Just 2 <*> Just 3.

class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b   {- "apply" -}

{- Apply associates to the left: p <*> q <*> r = (p <*> q) <*> r -}

The method pure lets us inject values, to make 'singleton collections'.

• Functors transform values inside collections; applicatives can additionally combine values inside collections.
• Exercise: If α is a monoid, then so too is f α for any applicative f.

The applicative axioms ensure that apply behaves like usual functional application:

• Identity: pure id <*> x = x — cf. id x = x.
• Homomorphism: pure f <*> pure x = pure (f x) — it really is function application on pure values!
  ◦ Applying a non-effectful function to a non-effectful argument in an effectful context is the same as just applying the function to the argument and then injecting the result into the context.
• Interchange: p <*> pure x = pure ($ x) <*> p — cf. f x = ($ x) f.
  ◦ Functions f take x as input ≈ values x project functions f to particular values.
  ◦ When there is only one effectful component, it does not matter whether we evaluate the function first or the argument first; there will still only be one effect.
  ◦ Indeed, this is equivalent to the law: pure f <*> q = pure (flip ($)) <*> q <*> pure f.
• Composition: pure (.) <*> p <*> q <*> r = p <*> (q <*> r) — cf. (f . g) . h = f . (g . h).
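
A quick check of the first three axioms in the Maybe applicative (a sketch of ours; each comment records what the corresponding law predicts):

main :: IO ()
main = do
  print (pure id <*> Just 5)                   -- Just 5    (Identity)
  print (pure negate <*> pure 5 :: Maybe Int)  -- Just (-5) (Homomorphism)
  print (Just negate <*> pure 5)               -- Just (-5) (Interchange, left side)
  print (pure ($ 5) <*> Just negate)           -- Just (-5) (Interchange, right side)
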
If we view f α as an "effectful computation on α", then the above laws ensure pure creates an "effect-free" context. E.g., if f α = [α] is considered "nondeterministic α-values", then pure just treats usual α-values as nondeterministic but with no ambiguity, and fs <*> xs reads "if we nondeterministically have a choice f from fs, and we nondeterministically choose an x from xs, then we nondeterministically obtain f x". More concretely, if I'm given, randomly, addition or multiplication, along with the argument 3 and another argument that could be 2, 4, or 6, then the result is obtained by considering all possible combinations: [(+), (*)] <*> pure 3 <*> [2, 4, 6] = [5,7,9,6,12,18]. The name "<*>" is suggestive of this 'cartesian product' nature.

Given a definition of apply, the definition of pure may be obtained by unfolding the identity axiom. Using these laws, we regain the original fmap — since fmap's are unique in Haskell — thereby further cementing that applicatives model "collections that can be functionally applied": f <$> x = pure f <*> x. (Hence, every applicative is a functor whether we like it or not.)

• The identity applicative law is then just the identity law of functors.
• The homomorphism law now becomes: pure . f = fmap f . pure.
  ◦ This is the "naturality law" for pure.

The laws may be interpreted as left-to-right rewrite rules and so give a procedure for transforming any applicative expression into the canonical form of "a pure function applied to effectful arguments": pure f <*> x1 <*> · · · <*> xn. In this way, one can compute the, necessarily independent, xi in parallel and then combine them together.

Notice that the canonical form generalises fmap to n arguments: given f :: α1 → · · · → αn → β and xi :: f αi, we obtain an (f β)-value. The case n = 2 is called liftA2, n = 1 is just fmap, and for n = 0 we have pure! Notice that liftA2 is essentially the cartesian product in the setting of lists, or (<&>) below — cf. sequenceA :: Applicative f ⇒ [f a] → f [a].

(<&>) :: f a → f b → f (a, b)   {- Not a standard name! -}
(<&>) = liftA2 (,)              -- i.e., p <&> q = (,) <$> p <*> q

This is a pairing operation with properties of (,) mirrored at the applicative level:

{- Pure Pairing     -} pure x <&> pure y = pure (x, y)
{- Naturality       -} (f *** g) <$> (u <&> v) = (f <$> u) <&> (g <$> v)
{- Left Projection  -} fst <$> (u <&> pure ()) = u
{- Right Projection -} snd <$> (pure () <&> v) = v
{- Associativity    -} assocl <$> (u <&> (v <&> w)) = (u <&> v) <&> w

The final three laws above suffice to prove the original applicative axioms, and so we may define p <*> q = uncurry ($) <$> (p <&> q).

Applicative Examples

Let f1, f2 be functors and let ε be a type.

Functor      f α                   f <*> x
Identity     α                     f <*> x = f x
Constant     ε                     e <*> d = e <> d
List         [α]                   fs <*> xs = [f x | f <- fs, x <- xs]
Either       Either ε α            ef <*> ea = either Left (λ f → right f ea) ef
Composition  f1 (f2 α)             f <*> x = (<*>) <$> f <*> x
Product      (f1 α, f2 α)          (f, g) <*> (x, y) = (f <*> x, g <*> y)
Sum          Either (f1 α) (f2 α)  Challenge: assume η :: f1 a → f2 a
Writer       (ε, α)                (a, f) <*> (b, x) = (a <> b, f x)
Reader       ε → α                 f <*> g = λ e → f e (g e) — cf. SKI
State        ε → (ε, α)            sf <*> sa = λ e → let (e', f) = sf e in second f (sa e')

In the writer and constant cases, we need ε to also be a monoid. When ε is not a monoid, those two constructions give examples of functors that are not applicatives — since there is no way to define pure. In contrast, f α = (α → ε) → Maybe ε is not an applicative, since no definition of apply is lawful.

Since readers ((->) r) are applicatives, we may, for example, write (⊕) <$> f <*> g as a terse alternative to the "pointwise ⊕" method λ x → f x ⊕ g x.
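
E.g., using (&&) gives a simple way to chain predicates. A minimal sketch (isAlpha and isAscii are standard Data.Char predicates; the name asciiLetter is ours):

import Data.Char (isAlpha, isAscii)

-- Pointwise conjunction in the reader applicative:
-- asciiLetter c = isAlpha c && isAscii c
asciiLetter :: Char -> Bool
asciiLetter = (&&) <$> isAlpha <*> isAscii

-- filter asciiLetter "héllo42" = "hllo"
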
Do-Notation — Subtle difference between applicatives and monads

Recall the map operation on lists; we could define it ourselves:

map' :: (α -> β) -> [α] -> [β]
map' f []     = []
map' f (x:xs) = let y  = f x
                    ys = map' f xs
                in (y:ys)

If instead the altering function f returned effectful results, then we could gather the results along with the effect:

{-# LANGUAGE ApplicativeDo #-}
mapA :: Applicative f => (a -> f b) -> [a] -> f [b]
mapA f []     = pure []
mapA f (x:xs) = do y  <- f x
                   ys <- mapA f xs
                   pure (y:ys)
{- ≈ (:) <$> f x <*> mapA f xs -}

Applicative syntax can be a bit hard to write, whereas do-notation is more natural and reminiscent of the imperative style used in defining map' above. For instance, the intuition that fs <*> ps is a cartesian product is clearer in do-notation:

fs <*> ps ≈ do {f ← fs; x ← ps; pure (f x)}

where the right side is read "for each f in fs, and each x in ps, compute f x".

In general, do {x1 ← p1; ...; xn ← pn; pure e} ≈ pure (λ x1 ... xn → e) <*> p1 <*> · · · <*> pn, provided pi does not mention xj for j < i; but e may refer to all the xi. If any pi mentions an earlier xj, then we cannot translate the do-notation into an applicative expression. If do {x ← p; y ← qx; pure e} has qx being an expression depending on x, then we could say this is an abbreviation for (λ x → (λ y → e) <$> qx) <$> p, but this is of type f (f β). Hence, to allow later computations to depend on earlier computations, we need a method join :: f (f α) → f α, with which we define

do {x ← p; y ← qx; pure e} ≈ join $ (λ x → (λ y → e) <$> qx) <$> p

Applicatives with a join are called monads, and they give us a "programmable semicolon". Since later items may depend on earlier ones, do {x ← p; y ← q; pure e} could be read "let x be the value of computation p, let y be the value of computation q, then combine the values via expression e". Depending on how <*> is implemented, such 'let declarations' could short-circuit (Maybe), or be nondeterministic (List), or have other effects such as altering state.

As the do-notation clearly shows, the primary difference between Monad and Applicative is that Monad allows dependencies on previous results, whereas Applicative does not.

Do-syntax also works with tuples and functions — cf. the reader monad below — since they are monadic; e.g., every clause x <- f in a functional do-expression denotes the result of applying f to the (implicit) input. More concretely:

go :: (Show a, Num a) => a -> (a, String)
go = do {x <- (1+); y <- show; return (x, y)}
-- go 3 = (4, "3")

Likewise for tuples, lists, etc.

Formal Definition of Do-Notation

For a general applicative f, a do expression has the form do {C; r}, where C is a (possibly empty) list of commands separated by semicolons, and r is an expression of type f β, which is also the type of the entire do expression. Each command takes the form x ← p, where x is a variable, or possibly a pattern; if p :: f α, then x :: α. In the particular case of the anonymous variable, _ ← p may be abbreviated to p.

The translation of a do expression into <*>/join operations and where clauses is governed by three rules — the last one applies only in the setting of a monad.

(1)  do {r} = r
(2A) do {x ← p; C; r} = q <*> p where q x = do {C; r}         -- provided x ∉ C
(2M) do {x ← p; C; r} = join $ fmap q p where q x = do {C; r}

{- Fact: When x ∉ C, (2A) = (2M). -}
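
As a worked instance of these rules (our own example, not from the sheet), here is a dependency-free Maybe block desugared by hand via (2M) and rule (1), together with a runnable check:

import Control.Monad (join)

sugared :: Maybe Int
sugared = do { x <- Just 2; y <- Just 3; pure (x + y) }   -- Just 5

-- (2M) applied twice, then (1):
--   join $ fmap (λ x → join $ fmap (λ y → pure (x + y)) (Just 3)) (Just 2)
desugared :: Maybe Int
desugared = join (fmap (\x -> join (fmap (\y -> pure (x + y)) (Just 3))) (Just 2))
-- sugared == desugared; both evaluate to Just 5.
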
By definition chasing and induction on the number of commands C, we have:

[CollapseLaw] do {C; do {D; r}} = do {C; D; r}

Likewise:

[Map ] fmap f p = do {x ← p; pure (f x)}   -- by the applicative laws
[Join] join ps = do {p ← ps; p}            -- by the functor laws

Do-Notation Laws: Here are some desirable usability properties of do-notation.

[RightIdentity] do {B; x ← p; pure x} = do {B; p}
[LeftIdentity ] do {B; x ← pure e; C; r} = do {B; C[x := e]; r[x := e]}
[Associativity] do {B; x ← do {C; p}; D; r} = do {B; C; x ← p; D; r}

Here, B, C, D range over sequences of commands, and C[x := e] means the sequence C with all free occurrences of x replaced by e.

• Associativity gives us a nice way to 'inline' other calls.
• The LeftIdentity law, read right-to-left, lets us "locally give a name" to the possibly complex expression e. If pure forms a singleton collection, then LeftIdentity is a "one-point rule": we consider all x ← pure e, but there is only one such x, namely e!

In the applicative case, where the clauses are independent, we can prove, say, RightIdentity using the identity law for applicatives — which says, essentially, do {x <- p; pure x} = p — and then apply induction on the length of B. What axioms are needed in the monad case to prove the do-notation laws?

Monad Laws

Here is the definition of the monad typeclass.

class Applicative m => Monad (m :: * -> *) where
  (>>=) :: m a -> (a -> m b) -> m b

-- The 'fish' operator lives in Control.Monad:
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
f <=< g = join . fmap f . g

Where's join!? Historically, monads entered Haskell first, with the interface (>>=) and return; later it was realised that return = pure, and the relationship with applicatives was cemented. 'Bind' (>>=) is definable from join by ma >>= f = join (fmap f ma), and, for this reason, bind is known as "flat map" or "concat map" in particular instances. For instance, the second definition of do-notation could be expressed:

(2M') do {x ← p; C; r} = p >>= q where q x = do {C; r}

Conversely, join ps = do {p ← ps; p} = ps >>= id. Likewise, with (2M'), note how (<*>) can be defined directly in terms of (>>=) — cf. mf <*> mx = do {f ← mf; x ← mx; return (f x)}. Since fmap f p = do {x ← p; return (f x)} = p >>= return . f, in the past Monad did not even have Functor as a superclass — cf. liftM.

The properties of (>>=) and return that prove the desired do-notation laws are:

[LeftIdentity ] return a >>= f ≡ f a
[RightIdentity] m >>= return ≡ m
[Associativity] (m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
                i.e., (m >>= (\x -> f x)) >>= g = m >>= (\x -> f x >>= g)

Equivalently, show that the 'fish' (<=<) is associative with identity pure — cf. monoids! It is pretty awesome that (>>=) and return give us a functor, an applicative, and (dependent) do-notation! Why? Because bind does both the work of fmap and join. Thus pure, fmap, and join suffice to characterise a monad.

Join determines how a monad behaves! The monad laws can be expressed in terms of join directly:

[Associativity] join . fmap join = join . join
{- The only two ways to get from "m (m (m α))" to "m α" are the same. -}

[Identity Laws] join . fmap pure = join . pure = id
{- Wrapping up an "m α" gives an "m (m α)", which flattens to the original element. -}

Then notice that the (free) naturality of join is: join . fmap (fmap f) = fmap f . join :: m (m α) → m β.

Again, note that join doesn't merely flatten a monad value; rather, it performs the necessary logic that determines how the monad behaves.
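
For instance, in the list monad join is concatenation, so bind is literally "concat map" (a small sketch of ours):

-- join [[1,2],[3]] = [1,2,3];  xs >>= f = concat (map f xs)

-- All ordered pairs drawn from [1 .. n]; the second choice depends on the
-- first, so this genuinely needs a monad, not merely an applicative.
orderedPairs :: Int -> [(Int, Int)]
orderedPairs n = do { x <- [1 .. n]; y <- [x .. n]; pure (x, y) }
-- orderedPairs 3 = [(1,1),(1,2),(1,3),(2,2),(2,3),(3,3)]
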
As a richer example, suppose m α = ε → (ε, α) is the type of α-values that can be configured according to a fixed environment type ε, along with the possibly updated configuration — i.e., functions ε → (ε, α). Then any a :: ε → (ε, ε → (ε, α)) in m (m α) can be considered an element of m α if we propagate the environment configuration through the outer layer to obtain a new configuration for the inner layer: λ e → let (e', a') = a e in a' e'. The join dictates how a configuration is modified and then passed along: we have two actions, a and a', and join has sequenced them by pushing the environment through the first — thereby modifying it — and then pushing it through the second.

Monad Examples

Let f1, f2 be functors and let ε be a type.

Applicative  m α                   join :: m (m α) → m α
Identity     α                     λ x → x
Constant     ε                     λ x → x — Shucks! Not a monad!
List         [α]                   λ xss → foldr (++) [] xss
Either       Either ε α            Exercise ˆ_ˆ
Composition  f1 (f2 α)             Nope! Not a monad!
Product      (f1 α, f2 α)          λ (u, v) → (join (fst <$> u), join (snd <$> v))
Writer       (ε, α)                λ (e, (e', a)) → (e <> e', a)
Reader       ε → α                 λ ra → λ e → ra e e
State        ε → (ε, α)            λ ra → λ e → let (e', a) = ra e in a e'

In writer, we need ε to be a monoid.

• Notice how, in writer, join merges the outer context with the inner context: sequential writes are mappended together!
• If pure forms 'singleton containers', then join flattens containers of containers into a single container.

Excluding the trivial monoid, the constant functor is not a monad: it fails the monad identity laws for join. Similarly, f α = Maybe (α, α) is an applicative but not a monad — since there is no lawful definition of join. Hence, applicatives are strictly more general than monads.
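
A possible solution to the Either exercise in the table above (a sketch; the name joinEither is ours):

-- Propagate the first (outer) error; otherwise expose the inner computation.
joinEither :: Either e (Either e a) -> Either e a
joinEither (Left e)   = Left e
joinEither (Right ea) = ea

-- joinEither (Right (Right 3))     = Right 3
-- joinEither (Right (Left "late")) = Left "late"
-- joinEither (Left "early")        = Left "early"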