*This text is part of `Introduction to High-Performance Scientific Computing' by Victor Eijkhout, copyright 2012-2020.*
# 14 Complexity

At various places in this book we are interested in how many
operations an algorithm takes. What these operations are depends on
the context, but often we count additions (or subtractions) and
multiplications. This is called the
*arithmetic*
or
*computational complexity*
of an algorithm. For
instance, summing $n$ numbers takes $n-1$ additions. Another quantity
that we may want to describe is the amount of space (computer memory)
that is needed. Sometimes the space to hold the input and output of an
algorithm is all that is needed, but some algorithms need temporary
space as well. The total required space is called the
*space complexity*
of an algorithm.

Both arithmetic and space complexity depend on some description of the input; for instance, for summing an array of numbers, the length $n$ of the array is all that is needed. We express this dependency by saying "summing an array of numbers has time complexity $n-1$ additions, where $n$ is the length of the array".
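The claim that summing $n$ numbers takes $n-1$ additions can be made concrete by instrumenting the algorithm. The following is a minimal Python sketch; the function name `sum_with_count` is ours for illustration.

```python
def sum_with_count(values):
    """Sum a nonempty list while counting the additions performed."""
    total = values[0]
    additions = 0
    for v in values[1:]:
        total += v        # one addition per remaining element
        additions += 1
    return total, additions

# Summing n numbers takes n - 1 additions, regardless of their values.
total, additions = sum_with_count([3, 1, 4, 1, 5])   # n = 5, so 4 additions
```

The count depends only on the length of the array, which is exactly what the complexity statement above expresses.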

The time (or space) the summing algorithm takes does not depend on
other factors, such as the values of the numbers. By contrast, for some
algorithms, such as computing the greatest common divisor of an array
of integers, the cost
*can*
depend on the actual values.
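This value dependence is easy to observe for Euclid's algorithm on a pair of integers: two inputs of the same size can take very different numbers of steps. A small Python sketch (the function name `gcd_with_count` is ours for illustration):

```python
def gcd_with_count(a, b):
    """Euclid's algorithm, counting the division steps taken."""
    steps = 0
    while b != 0:
        a, b = b, a % b   # one division (remainder) step
        steps += 1
    return a, steps

# Inputs of comparable size, very different work:
g1, s1 = gcd_with_count(1000, 2)   # finishes in a single step
g2, s2 = gcd_with_count(89, 55)    # consecutive Fibonacci numbers: many steps
```

Consecutive Fibonacci numbers are in fact the classical worst case for Euclid's algorithm, so the step count here is a property of the values, not just of the input size.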

**Exercise**

What is the time and space complexity of multiplying two square matrices of size $n\times n$? Assume that an addition and a multiplication take the same amount of time.

Often we aim to simplify the formulas that describe time or space complexity. For instance, if the complexity of an algorithm is $n^2+2n$, we see that for $n>2$ the complexity is less than $2n^2$, and for $n>4$ it is less than $(3/2)n^2$. On the other hand, for all values of $n$ the complexity is at least $n^2$. Clearly, the quadratic term $n^2$ is the most important, and the linear term $n$ becomes less and less important by ratio. We express this informally by saying that the complexity is quadratic in $n$ as $n\rightarrow\infty$: there are constants $c,C$ so that for $n$ large enough the complexity is at least $cn^2$ and at most $Cn^2$.
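The bounds quoted in this paragraph can be checked numerically; a small Python sketch:

```python
# For n > 2:  n^2 <= n^2 + 2n <= 2 n^2,
# and for n > 4 even:  n^2 + 2n <= (3/2) n^2.
for n in range(3, 1000):
    f = n * n + 2 * n
    assert n * n <= f <= 2 * n * n
    if n > 4:
        assert f <= 1.5 * n * n
```

Both inequalities reduce to $n(n-2)\geq 0$ and $n(n-4)\geq 0$ respectively, so they hold for all larger $n$ as well; the loop is only a sanity check over a finite range.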

This is
expressed for short by saying that the complexity is of
*order*
$n^2$, written as $O(n^2)$ as
$n\rightarrow\infty$. In the
chapter on the numerical treatment of differential equations
you will see phenomena that we want to
describe as orders of a parameter that goes to zero. In that case we
write for instance $f(h)=O(h^2)$ as $h\downarrow0$, meaning that $f$
is bounded below by $ch^2$ and above by $Ch^2$ for certain constants $c,C$ and $h$
small enough.
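An $O(h^2)$ behavior as $h\downarrow 0$ has a simple numerical signature: halving $h$ divides the quantity by roughly four. As an illustration (the example of a central difference approximation of the derivative of $\sin$ is ours, not one from the text):

```python
import math

def central_difference_error(h):
    """Error of the central difference approximation of sin'(1) = cos(1).

    The central difference (f(x+h) - f(x-h)) / (2h) has error O(h^2).
    """
    approx = (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)
    return abs(approx - math.cos(1))

# Halving h should divide an O(h^2) error by roughly 4.
e1 = central_difference_error(0.1)
e2 = central_difference_error(0.05)
ratio = e1 / e2   # close to 4
```

Observing this ratio over several halvings of $h$ is a standard way to confirm the order of a method experimentally.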