BLAST: A High-Speed Sequence Comparison Tool in Bioinformatics

BLAST (Basic Local Alignment Search Tool) is a popular bioinformatics tool developed by the National Center for Biotechnology Information (NCBI) in 1990. It allows researchers to compare gene and protein sequences with those in public databases, enabling the discovery of gene functions and evolutionary relationships. BLAST's speed, easy access, and availability of specialized versions make it an essential resource in the field.

Math 127 (Spring 2007) Homework #1
30 Jan 2007

Question 1

BLAST stands for Basic Local Alignment Search Tool. It is a family of programs developed by the National Center for Biotechnology Information (NCBI) for the purpose of comparing gene and protein sequences with those in public databases. BLAST was born in 1990, when five researchers (Stephen Altschul, Warren Gish, and David Lipman of NCBI, Webb Miller of Pennsylvania State University, and Gene Myers of the University of Arizona) designed and implemented an algorithm able to search large genomic databases more than 50 times faster than traditional exhaustive methods, with only a slight loss in accuracy. The programs in the BLAST family are based on that algorithm. Since then, BLAST has been a popular tool for a variety of biological enquiries, such as discovering the function of a gene sequence in one animal by comparing it with sequences of known function in other animals, or finding evolutionary relationships between different animals.

The popularity of BLAST stems from its speed, easy access, and the availability of specialized versions for different types of problems. The user may also adjust various parameters to fine-tune the search to the needs of his/her enquiry; examples include gap costs, the statistical significance threshold, and the choice of database to search.

At http://www.ncbi.nlm.nih.gov/Education/BLASTinfo/tut1.html, one can find a tutorial introducing the use of BLAST. The following example was extracted from this website. It uses the amino acid sequence of the uncharacterized archaebacterial protein MJ0577 from Methanococcus jannaschii, and we wish to search for sequence relatives in the amino acid database.
We click on the "Protein-protein BLAST (blastp)" tool on the BLAST homepage and paste the following sequence in FASTA format into the search box:

>gi|2501594|sp|Q57997|Y577_METJA PROTEIN MJ0577
MSVMYKKILYPTDFSETAEIALKHVKAFKTLKAEEVILLHVIDEREIKKRDIFS
LLLGVAGLNKSVEEFENELKNKLTEEAKNKMENIKKELEDVGFKVKDIIVVGIP
HEEIVKIAEDEGVDIIIMGSHGKTNLKEILLGSVTENVIKKSNKPVLVVKRKNS

Leaving all the other options unchanged, we click on the BLAST button and then on the FORMAT button on the following page. A colorful chart showing the 114 BLAST hits appears on the screen, with the hits ranked by alignment score. The red hits are sequences in the database with extremely close alignment to our input sequence. It is not surprising that we have one red hit: the original amino acid sequence MJ0577 in the database. Next, we have 12 pink hits with relatively good alignment. Studying these sequences reveals that most of them are stress response proteins, which tells us that MJ0577 is likely to be a stress response protein as well. The chart also shows several green and blue hits lower on the alignment scale.

Below the chart, our original amino acid sequence is placed side by side with the hit sequences from the database. The individual amino acids that match up, as well as gaps in the alignments, are clearly shown. For instance, in our first pink hit, we have 43% exact matches, 68% positive matches, and 7% in gaps. The above example demonstrates the efficiency and usefulness of BLAST in biological enquiries.

Question 2

The likelihood function of the given problem is
\[
L(\theta) = (0.2+\theta)^{u_1}\,(0.3+\theta)^{u_2}\,(0.5-2\theta)^{u_3},
\]
and the log-likelihood function is
\[
\ell(\theta) = u_1 \log(0.2+\theta) + u_2 \log(0.3+\theta) + u_3 \log(0.5-2\theta).
\]
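Before doing any algebra, the maximizer of \(\ell\) can be located numerically. The sketch below (plain Python; the golden-section search and the sample counts u are my own choices for illustration) maximizes \(\ell\) over the interval on which all three arguments of the logarithms are positive, namely \(-0.2 < \theta < 0.25\):

```python
from math import log, sqrt

def log_lik(t, u):
    """l(t) = u1*log(0.2+t) + u2*log(0.3+t) + u3*log(0.5-2t)."""
    u1, u2, u3 = u
    return u1*log(0.2 + t) + u2*log(0.3 + t) + u3*log(0.5 - 2*t)

def theta_mle(u, tol=1e-12):
    """Golden-section search on (-0.2, 0.25), where all three log arguments
    are positive.  l''(t) < 0 there, so l is strictly concave and the
    maximizer is unique; golden-section search converges to it."""
    g = (sqrt(5) - 1) / 2
    a, b = -0.2, 0.25
    c, d = b - g*(b - a), a + g*(b - a)
    while b - a > tol:
        if log_lik(c, u) > log_lik(d, u):
            b, d = d, c           # maximizer lies in [a, d]
            c = b - g*(b - a)
        else:
            a, c = c, d           # maximizer lies in [c, b]
            d = a + g*(b - a)
    return (a + b) / 2
```

For example, with counts u = (40, 25, 35) the search returns a value near 0.0815, inside the admissible interval; the closed form derived next gives the same value.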
The maximum likelihood estimate \(\hat\theta\) satisfies
\[
0 = \ell'(\hat\theta) = \frac{u_1}{0.2+\hat\theta} + \frac{u_2}{0.3+\hat\theta} - \frac{2u_3}{0.5-2\hat\theta}. \tag{1}
\]
Clearing the denominators gives
\[
0 = u_1(0.3+\hat\theta)(0.5-2\hat\theta) + u_2(0.2+\hat\theta)(0.5-2\hat\theta) - 2u_3(0.2+\hat\theta)(0.3+\hat\theta)
\]
\[
= -2(u_1+u_2+u_3)\,\hat\theta^2 + (-0.1u_1 + 0.1u_2 - u_3)\,\hat\theta + (0.15u_1 + 0.10u_2 - 0.12u_3).
\]
The solutions to this quadratic equation are
\[
\hat\theta = \frac{b\cdot u \pm \sqrt{u^T A u}}{c\cdot u}, \tag{2}
\]
where
\[
A = \begin{pmatrix} 1.21 & 0.99 & 0.22 \\ 0.99 & 0.81 & -0.18 \\ 0.22 & -0.18 & 0.04 \end{pmatrix}, \quad
b = \begin{pmatrix} -0.1 \\ 0.1 \\ -1 \end{pmatrix}, \quad
c = \begin{pmatrix} 4 \\ 4 \\ 4 \end{pmatrix}, \quad
u = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}.
\]
Indeed, \(b\cdot u = -0.1u_1+0.1u_2-u_3\) is the linear coefficient of the quadratic, \(c\cdot u = 4(u_1+u_2+u_3)\) is \(-2\) times its leading coefficient, and \(u^T A u = (b\cdot u)^2 + 8(u_1+u_2+u_3)(0.15u_1+0.10u_2-0.12u_3)\) is its discriminant.

It remains for us to determine which of the above two possible solutions for \(\hat\theta\) lies in the natural parameter space of the linear model (i.e. \(\hat\theta\) satisfies \(0 \le f_i(\hat\theta) \le 1\) for \(i = 1, 2, 3\)). We consider the hyperplane arrangement \(\{f_i = 0\}_{i\in[3]}\) in \(\mathbb{R}\), and note that the above inequalities imply that \(\hat\theta\) must lie in the bounded region \(-0.2 < \hat\theta < 0.25\). Varchenko's Formula tells us that equation (1) has precisely one root \(\hat\theta_0\) lying in this bounded region, and another root \(\hat\theta_1\) lying in the other bounded region \(-0.3 < \hat\theta_1 < -0.2\). Thus \(\hat\theta_1 < \hat\theta_0\), and they are the two roots in equation (2). Now, \(c\cdot u\) is positive, so
\[
\frac{b\cdot u - \sqrt{u^T A u}}{c\cdot u} < \frac{b\cdot u + \sqrt{u^T A u}}{c\cdot u}.
\]
Therefore, we must have
\[
\hat\theta = \frac{b\cdot u + \sqrt{u^T A u}}{c\cdot u}
\]
as the desired maximum likelihood estimate.

Question 3

To prove Varchenko's Formula for \(d = 2\), \(m = 4\), we need to show that:

1. There is exactly one real solution in each bounded region of the hyperplane arrangement \(\{f_i = 0\}_{i\in[4]}\).
2. There are no real solutions in any unbounded region of the hyperplane arrangement.
3. Each solution has multiplicity one.
4. The imaginary part of each solution is 0.

Proof of 1 and 2: Given a region R of the hyperplane arrangement, we want to find the number of distinct real solutions of the likelihood equations in (1.23) in R.
Now, if any of the linear maps \(f_i(\theta_1, \theta_2) = a_{i1}\theta_1 + a_{i2}\theta_2 + b_i\), \(i \in [4]\), is negative on R, we may set \(a'_{i1} = -a_{i1}\), \(a'_{i2} = -a_{i2}\), and \(b'_i = -b_i\), consider the likelihood equations (1.23) with these new coefficients, and note that the equations remain unchanged. Thus, we may assume that all the maps \(f_i\), \(i \in [4]\), are positive on R, and define the log-likelihood function
\[
\ell(\theta_1, \theta_2) = \sum_{i=1}^{4} u_i \log(a_{i1}\theta_1 + a_{i2}\theta_2 + b_i).
\]

Question 5

Let \(\lambda_h, \lambda_t\) be the probabilities that the gambler's first coin comes up heads and tails respectively, and let \(\rho_h, \rho_t\) be the corresponding probabilities for his second coin. Let \(\theta\) be the probability that he picks the first coin. Then, we have the vector of parameters \(\pi = (\theta, \lambda_h, \lambda_t, \rho_h, \rho_t)\) satisfying \(\lambda_h + \lambda_t = 1\) and \(\rho_h + \rho_t = 1\), so there are a total of three free parameters, with parameter space \(\Theta = \Delta_1 \times \Delta_1 \times \Delta_1\). The model is given by the polynomial map
\[
f: \mathbb{R}^3 \to \mathbb{R}^5, \quad \pi \mapsto (f_i)_{0\le i\le 4}, \quad
f_i = \binom{4}{i}\theta\lambda_h^i\lambda_t^{4-i} + \binom{4}{i}(1-\theta)\rho_h^i\rho_t^{4-i}.
\]
Our goal is to find estimates \(\hat\pi\) which maximize the log-likelihood function
\[
\ell_{\mathrm{obs}}(\pi) = \sum_{i=0}^{4} u_i \log(f_i(\pi)).
\]
To test our hypothesis for some \(u = (u_0, \dots, u_4)\), we compare the theoretical probability distribution \(f(\hat\pi) = (f_0(\hat\pi), \dots, f_4(\hat\pi))\) with the observed distribution \(\frac{1}{1000}u = (\frac{1}{1000}u_0, \dots, \frac{1}{1000}u_4)\). If the two distributions are close, we can conclude that our hypothesis explains the observed data well.

Question 6

We define the hidden data by decomposing the observed data into the contributions made by each of the gambler's two coins, i.e. \(u_i = u_{i1} + u_{i2}\) for \(0 \le i \le 4\). Here, \(u_{i1}\) is the number of times the gambler's first coin produced \(i\) heads when picked and thrown 4 times; \(u_{i2}\) is defined similarly for the second coin. The hidden model is given by
\[
F: \mathbb{R}^3 \to \mathbb{R}^{2\times 5}, \quad \pi \mapsto (f_{i1}, f_{i2})_{0\le i\le 4}, \quad
f_{i1} = \binom{4}{i}\theta\lambda_h^i\lambda_t^{4-i}, \quad
f_{i2} = \binom{4}{i}(1-\theta)\rho_h^i\rho_t^{4-i}.
\]
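As a concrete check of these definitions, here is a small sketch (plain Python; the function names are mine) of the observed model f and the hidden model F, verifying that f is a probability distribution and that summing the two coordinates of F recovers f:

```python
from math import comb

def observed_model(theta, lam_h, rho_h):
    """f(pi): probability of i heads in 4 tosses of a randomly picked coin."""
    lam_t, rho_t = 1 - lam_h, 1 - rho_h
    return [comb(4, i) * (theta * lam_h**i * lam_t**(4 - i)
                          + (1 - theta) * rho_h**i * rho_t**(4 - i))
            for i in range(5)]

def hidden_model(theta, lam_h, rho_h):
    """F(pi): the pair (f_i1, f_i2) splitting f_i by which coin was picked."""
    lam_t, rho_t = 1 - lam_h, 1 - rho_h
    return [(comb(4, i) * theta * lam_h**i * lam_t**(4 - i),
             comb(4, i) * (1 - theta) * rho_h**i * rho_t**(4 - i))
            for i in range(5)]

f = observed_model(0.3, 0.6, 0.4)
F = hidden_model(0.3, 0.6, 0.4)
assert abs(sum(f) - 1.0) < 1e-12                               # f is a distribution
assert all(abs(a + b - p) < 1e-12 for (a, b), p in zip(F, f))  # marginalizing F gives f
```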
First, we solve the problem of maximizing the hidden log-likelihood function
\[
\ell_{\mathrm{hid}}(\pi) = \sum_{i=0}^{4} u_{i1}\log\Bigl(\tbinom{4}{i}\theta\lambda_h^i\lambda_t^{4-i}\Bigr) + \sum_{i=0}^{4} u_{i2}\log\Bigl(\tbinom{4}{i}(1-\theta)\rho_h^i\rho_t^{4-i}\Bigr)
\]
\[
= \alpha_1\log(\lambda_h) + \alpha_2\log(1-\lambda_h) + \beta_1\log(\rho_h) + \beta_2\log(1-\rho_h) + \gamma_1\log(\theta) + \gamma_2\log(1-\theta) + c,
\]
where
\[
\alpha_1 = \sum_{i=0}^{4} i\,u_{i1}, \quad \alpha_2 = \sum_{i=0}^{4}(4-i)\,u_{i1}, \quad
\beta_1 = \sum_{i=0}^{4} i\,u_{i2}, \quad \beta_2 = \sum_{i=0}^{4}(4-i)\,u_{i2},
\]
\[
\gamma_1 = \sum_{i=0}^{4} u_{i1}, \quad \gamma_2 = \sum_{i=0}^{4} u_{i2}, \quad
c = \sum_{i=0}^{4} u_i \log\tbinom{4}{i}.
\]
It is useful to note that \(\alpha_1 + \alpha_2 = 4\gamma_1\), \(\beta_1 + \beta_2 = 4\gamma_2\), and \(\gamma_1 + \gamma_2 = 1000\).

Now, the function \(g(x) = a\log(x) + b\log(1-x)\), where \(a, b\) are constants, is maximized when
\[
0 = g'(x) = \frac{a}{x} - \frac{b}{1-x} \implies x = \frac{a}{a+b}.
\]
Hence, given the hidden data \((u_{i1}), (u_{i2})\), the hidden log-likelihood function is maximized when
\[
\lambda_h = \frac{\alpha_1}{\alpha_1+\alpha_2}, \quad \rho_h = \frac{\beta_1}{\beta_1+\beta_2}, \quad \theta = \frac{\gamma_1}{\gamma_1+\gamma_2} = \frac{\gamma_1}{1000}.
\]

To summarize, the EM algorithm for this problem is:

Input: The observed data \(u = (u_0, \dots, u_4)\).
Output: A proposed maximum \(\hat\pi \in \Theta\) of the log-likelihood function \(\ell_{\mathrm{obs}}(\pi)\).

Step 0: Pick initial parameters \(\theta, \lambda_h, \rho_h\). Our vector of parameters is now \(\pi = (\theta, \lambda_h, 1-\lambda_h, \rho_h, 1-\rho_h)\).

E-Step: Define the hidden data
\[
u_{i1} = u_i\,\frac{\binom{4}{i}\theta\lambda_h^i\lambda_t^{4-i}}{\binom{4}{i}\theta\lambda_h^i\lambda_t^{4-i} + \binom{4}{i}(1-\theta)\rho_h^i\rho_t^{4-i}}, \quad
u_{i2} = u_i\,\frac{\binom{4}{i}(1-\theta)\rho_h^i\rho_t^{4-i}}{\binom{4}{i}\theta\lambda_h^i\lambda_t^{4-i} + \binom{4}{i}(1-\theta)\rho_h^i\rho_t^{4-i}}.
\]

M-Step: Calculate the maximum likelihood estimate for the hidden model:
\[
\lambda_h^* = \frac{\sum_{i=0}^{4} i\,u_{i1}}{4\sum_{i=0}^{4} u_{i1}}, \quad
\rho_h^* = \frac{\sum_{i=0}^{4} i\,u_{i2}}{4\sum_{i=0}^{4} u_{i2}}, \quad
\theta^* = \frac{\sum_{i=0}^{4} u_{i1}}{1000}.
\]

Step 3: If \(\ell_{\mathrm{obs}}(\pi^*) - \ell_{\mathrm{obs}}(\pi) > \epsilon\), then set \(\pi := \pi^*\) and go back to the E-Step.

Step 4: Output the parameter vector \(\hat\pi := \pi^*\) and the corresponding probability distribution \(\hat p = f(\hat\pi)\).

We ran the algorithm on the observed data \(u = (150, 150, 150, 350, 200)\) and tried different starting parameters. The listing below shows some of the parameters used and the estimates obtained.
Start \((\theta, \lambda_h, \rho_h)\)    \(1000\,f(\hat\pi)\) (approx.)    \(\ell_{\mathrm{obs}}(\hat\pi)\)
(0.50, 0.50, 0.50)                 (33, 177, 358, 323, 109)     -1765.5
(0.80, 0.80, 0.80)                 (33, 177, 358, 323, 109)     -1765.5
(0.10, 0.10, 0.10)                 (33, 177, 358, 323, 109)     -1765.5
(0.30, 0.60, 0.60)                 (33, 177, 358, 323, 109)     -1765.5
(0.30, 0.599, 0.60)                (154, 140, 181, 304, 221)    -1550.6
(0.25, 0.65, 0.35)                 (154, 140, 181, 304, 221)    -1550.6
(0.90, 0.15, 0.45)                 (154, 140, 181, 304, 221)    -1550.6

It seems that whenever \(\lambda_h = \rho_h\), the estimate with \(\ell_{\mathrm{obs}}(\hat\pi) = -1765.5\) is obtained, and in every other case the estimate with \(\ell_{\mathrm{obs}}(\hat\pi) = -1550.6\) is obtained. A trial running the algorithm with 1000 different random initial parameters could not produce any result better than the latter estimate.
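These results can be reproduced in code. The sketch below (plain Python; the stopping threshold \(\epsilon = 10^{-10}\), the iteration cap, and the function names are my own choices) implements the E- and M-steps above. It also checks the \(\lambda_h = \rho_h\) rows directly: on that subspace the mixture collapses to a single Binomial(4, p) distribution, whose maximum likelihood estimate for this data is \(p = \sum_i i\,u_i / 4000 = 2300/4000 = 0.575\).

```python
from math import comb, log

def em(u, theta, lam_h, rho_h, eps=1e-10, max_iter=10000):
    """EM for the two-coin model; u = (u_0, ..., u_4) are the observed counts."""
    n = sum(u)  # 1000 for the homework data

    def f(theta, lam_h, rho_h):
        # Observed model f_i(pi) from Question 5.
        return [comb(4, i) * (theta * lam_h**i * (1 - lam_h)**(4 - i)
                              + (1 - theta) * rho_h**i * (1 - rho_h)**(4 - i))
                for i in range(5)]

    def l_obs(theta, lam_h, rho_h):
        return sum(ui * log(fi) for ui, fi in zip(u, f(theta, lam_h, rho_h)))

    ll = l_obs(theta, lam_h, rho_h)
    for _ in range(max_iter):
        # E-step: expected decomposition u_i = u_i1 + u_i2
        # (the binomial coefficients cancel in the ratio).
        u1 = [ui * theta * lam_h**i * (1 - lam_h)**(4 - i)
              / (theta * lam_h**i * (1 - lam_h)**(4 - i)
                 + (1 - theta) * rho_h**i * (1 - rho_h)**(4 - i))
              for i, ui in enumerate(u)]
        u2 = [ui - x for ui, x in zip(u, u1)]
        # M-step: closed-form maximizers of the hidden log-likelihood.
        lam_h = sum(i * x for i, x in enumerate(u1)) / (4 * sum(u1))
        rho_h = sum(i * x for i, x in enumerate(u2)) / (4 * sum(u2))
        theta = sum(u1) / n
        ll_new = l_obs(theta, lam_h, rho_h)
        if ll_new - ll <= eps:  # Step 3: stop once the improvement drops below eps
            return (theta, lam_h, rho_h), ll_new
        ll = ll_new
    return (theta, lam_h, rho_h), ll

u = (150, 150, 150, 350, 200)

# Symmetric start: EM stays on the subspace lam_h == rho_h and converges to the
# single-binomial fit p = 0.575, matching the -1765.5 rows of the listing.
(theta, lam, rho), ll = em(u, 0.5, 0.5, 0.5)
assert abs(lam - 0.575) < 1e-9 and abs(rho - 0.575) < 1e-9
assert abs(ll - (-1765.5)) < 0.1

# Asymmetric start from the listing, reaching the better local maximum.
_, ll2 = em(u, 0.25, 0.65, 0.35)
assert abs(ll2 - (-1550.6)) < 1.0
```

Since the E-step splits each \(u_i\) proportionally to the two coins' posterior weights, any start with \(\lambda_h = \rho_h\) keeps the two coins identical forever, which explains why those rows all land on the inferior symmetric fit.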