
Efficient Matrix Multiplication by Strassen: Gaussian Elimination Not Optimal, Papers of Mathematical Methods for Numerical Analysis and Optimization

Volker Strassen's paper introduces an algorithm for matrix multiplication that requires fewer arithmetical operations than the usual method, showing in particular that Gaussian elimination is not optimal. The paper describes what is now known as the Strassen algorithm, which multiplies two matrices of order 2^k with 7^k multiplications and fewer than 6·7^k additions and subtractions. The document also discusses the implications of this algorithm for matrix inversion, solving linear equations, and computing determinants.

Typology: Papers

Pre 2010

Uploaded on 08/19/2009

koofers-user-oaj 🇺🇸

10 documents





Partial preview of the text

Numer. Math. 13, 354–356 (1969)

Gaussian Elimination is not Optimal

VOLKER STRASSEN*

Received December 12, 1968

* The results have been found while the author was at the Department of Statistics of the University of California, Berkeley. The author wishes to thank the National Science Foundation for their support (NSF GP-7454).

1. Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than $4.7 \cdot n^{\log 7}$ arithmetical operations (all logarithms in this paper are for base 2, thus $\log 7 \approx 2.8$; the usual method requires approximately $2n^3$ arithmetical operations). The algorithm induces algorithms for inverting a matrix of order n, solving a system of n linear equations in n unknowns, computing a determinant of order n etc., all requiring less than $\mathrm{const} \cdot n^{\log 7}$ arithmetical operations.

This fact should be compared with the result of Klyuyev and Kokovkin-Shcherbak [1] that Gaussian elimination for solving a system of linear equations is optimal if one restricts oneself to operations upon rows and columns as a whole. We also note that Winograd [2] modifies the usual algorithms for matrix multiplication and inversion and for solving systems of linear equations, trading roughly half of the multiplications for additions and subtractions.

It is a pleasure to thank D. Brillinger for inspiring discussions about the present subject and St. Cook and B. Parlett for encouraging me to write this paper.

2. We define algorithms $\alpha_{m,k}$ which multiply matrices of order $m2^k$, by induction on k: $\alpha_{m,0}$ is the usual algorithm for matrix multiplication (requiring $m^3$ multiplications and $m^2(m-1)$ additions). $\alpha_{m,k}$ already being known, define $\alpha_{m,k+1}$ as follows:

If A, B are matrices of order $m2^{k+1}$ to be multiplied, write

$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \qquad AB = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix},$$

where the $A_{ik}$, $B_{ik}$, $C_{ik}$ are matrices of order $m2^k$. Then compute

$$\begin{aligned}
\mathrm{I} &= (A_{11} + A_{22})(B_{11} + B_{22}), \\
\mathrm{II} &= (A_{21} + A_{22})\,B_{11}, \\
\mathrm{III} &= A_{11}(B_{12} - B_{22}), \\
\mathrm{IV} &= A_{22}(-B_{11} + B_{21}), \\
\mathrm{V} &= (A_{11} + A_{12})\,B_{22}, \\
\mathrm{VI} &= (-A_{11} + A_{21})(B_{11} + B_{12}), \\
\mathrm{VII} &= (A_{12} - A_{22})(B_{21} + B_{22}),
\end{aligned}$$

$$\begin{aligned}
C_{11} &= \mathrm{I} + \mathrm{IV} - \mathrm{V} + \mathrm{VII}, \\
C_{21} &= \mathrm{II} + \mathrm{IV}, \\
C_{12} &= \mathrm{III} + \mathrm{V}, \\
C_{22} &= \mathrm{I} + \mathrm{III} - \mathrm{II} + \mathrm{VI},
\end{aligned}$$

using $\alpha_{m,k}$ for multiplication and the usual algorithm for addition and subtraction of matrices of order $m2^k$.

By induction on k one easily sees

Fact 1. $\alpha_{m,k}$ computes the product of two matrices of order $m2^k$ with $m^3 7^k$ multiplications and $(5 + m)\,m^2 7^k - 6\,(m2^k)^2$ additions and subtractions of numbers.

Thus one may multiply two matrices of order $2^k$ with $7^k$ multiplications and less than $6 \cdot 7^k$ additions and subtractions of numbers.
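For readers who want to experiment with the recursion above, here is a minimal sketch in Python/NumPy. It is an illustration, not the paper's own presentation: the function name strassen_multiply and the cutoff base_order are my choices, the base case simply calls the ordinary cubic algorithm (playing the role of $\alpha_{m,0}$), and no attempt is made to count operations or to embed matrices of awkward order as in Fact 2 below.

```python
# Sketch of the alpha_{m,k} recursion described above (illustrative only).
import numpy as np

def strassen_multiply(A, B, base_order=16):
    n = A.shape[0]
    # Base case: fall back to the usual algorithm (the role of alpha_{m,0}).
    if n <= base_order or n % 2 != 0:
        return A @ B

    h = n // 2
    A11, A12 = A[:h, :h], A[:h, h:]
    A21, A22 = A[h:, :h], A[h:, h:]
    B11, B12 = B[:h, :h], B[:h, h:]
    B21, B22 = B[h:, :h], B[h:, h:]

    # The seven products I..VII of the paper, each computed recursively.
    I   = strassen_multiply(A11 + A22, B11 + B22, base_order)
    II  = strassen_multiply(A21 + A22, B11, base_order)
    III = strassen_multiply(A11, B12 - B22, base_order)
    IV  = strassen_multiply(A22, -B11 + B21, base_order)
    V   = strassen_multiply(A11 + A12, B22, base_order)
    VI  = strassen_multiply(-A11 + A21, B11 + B12, base_order)
    VII = strassen_multiply(A12 - A22, B21 + B22, base_order)

    # Recombination into the four blocks of C = AB.
    C11 = I + IV - V + VII
    C21 = II + IV
    C12 = III + V
    C22 = I + III - II + VI

    return np.block([[C11, C12], [C21, C22]])

# Sanity check on matrices of order m * 2^k = 3 * 2^5 = 96.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((96, 96))
    B = rng.standard_normal((96, 96))
    assert np.allclose(strassen_multiply(A, B), A @ B)
```

Note that the recursion saves multiplications (7 per level instead of 8) at the cost of extra additions and subtractions, which is exactly the trade quantified in Fact 1.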
Fact 2. The product of two matrices of order n may be computed with $\leq 4.7 \cdot n^{\log 7}$ arithmetical operations.

Proof. Put $k = \lfloor \log n \rfloor - 4$, $m = \lfloor n2^{-k} \rfloor + 1$, then $n \leq m2^k$. Imbedding matrices of order n into matrices of order $m2^k$ reduces our task to that of estimating the number of operations of $\alpha_{m,k}$. By Fact 1 this number is

$$(5 + 2m)\,m^2 7^k - 6\,(m2^k)^2 < \bigl(5 + 2(n2^{-k} + 1)\bigr)(n2^{-k} + 1)^2\, 7^k < 2 n^3 (7/8)^k + 12.03\, n^2 (7/4)^k$$

(here we have used $16 \cdot 2^k \leq n$)

$$= \bigl(2\,(8/7)^{\log n - k} + 12.03\,(4/7)^{\log n - k}\bigr)\, n^{\log 7} \leq \max_{4 \leq t \leq 5} \bigl(2\,(8/7)^t + 12.03\,(4/7)^t\bigr)\, n^{\log 7} \leq 4.7 \cdot n^{\log 7}$$

by a convexity argument.

We now turn to matrix inversion. To apply the algorithms below it is necessary to assume not only that the matrix is invertible but that all occurring divisions make sense (a similar assumption is of course necessary for Gaussian elimination). We define algorithms $\beta_{m,k}$ which invert matrices of order $m2^k$, by induction on k: $\beta_{m,0}$ is the usual Gaussian elimination algorithm. $\beta_{m,k}$ already being known, define $\beta_{m,k+1}$ as follows:

If A is a matrix of order $m2^{k+1}$ to be inverted, write

$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad A^{-1} = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}$$
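As a concrete illustration of Facts 1 and 2 above (my own check, not part of the paper), the following short Python script evaluates the operation counts of Fact 1 for the parameters $k = \lfloor \log n \rfloor - 4$, $m = \lfloor n2^{-k} \rfloor + 1$ used in the proof of Fact 2, and compares the total against the bound $4.7 \cdot n^{\log 7}$ and against the roughly $2n^3$ operations of the usual method.

```python
# Illustration (not from the paper): evaluate the operation counts of Fact 1
# with the embedding parameters from the proof of Fact 2, and compare the
# total against 4.7 * n^log2(7) and against ~2 n^3 for the usual algorithm.
import math

def strassen_counts(m, k):
    """Multiplications and additions/subtractions used by alpha_{m,k} (Fact 1)."""
    mults = m**3 * 7**k
    adds = (5 + m) * m**2 * 7**k - 6 * (m * 2**k) ** 2
    return mults, adds

def fact2_total(n):
    """Total operations for order n via the embedding in the proof of Fact 2."""
    k = math.floor(math.log2(n)) - 4
    m = math.floor(n / 2**k) + 1
    mults, adds = strassen_counts(m, k)
    return mults + adds

for n in (100, 1000, 10000):
    total = fact2_total(n)
    bound = 4.7 * n ** math.log2(7)
    usual = 2 * n**3
    print(f"n={n:6d}  strassen={total:.3e}  4.7*n^log7={bound:.3e}  2n^3={usual:.3e}")
```

For example, at n = 1000 the embedding uses m = 32 and k = 5 (order 1024), and the resulting total of about 1.18·10^9 operations stays below 4.7·n^{log 7} ≈ 1.24·10^9, consistent with the convexity estimate above.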