The trace is the sum of the elements on the diagonal of a matrix. Is there a similar operation for the sum of all the elements in a matrix?
$\endgroup$ 9 Answers
$\begingroup$I don't know if it has a nice name or notation, but for the matrix $\mathbf A$ you could consider the quadratic form $\mathbf e^\top\mathbf A\mathbf e$, where $\mathbf e$ is the column vector whose entries are all $1$'s.
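A quick numerical sanity check of this identity (a minimal sketch in plain Python; the matrix values are arbitrary):

```python
A = [[1, 2], [3, 4]]
e = [1, 1]  # all-ones column vector

# compute A e
Ae = [sum(a * ej for a, ej in zip(row, e)) for row in A]
# e^T (A e): the quadratic form, which picks up every entry of A exactly once
quad = sum(ei * v for ei, v in zip(e, Ae))

print(quad)  # 10 = 1 + 2 + 3 + 4
```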
$\endgroup$ $\begingroup$The term "grand sum" is commonly used, if only informally, to represent the sum of all elements.
By the way, the grand sum is a very important quantity in the contexts of Markovian transition matrices and other probabilistic applications of linear algebra.
Regards, Scott
$\endgroup$ $\begingroup$The sum of all elements carries no information about the underlying endomorphism (it is not invariant under a change of basis), which is why you will not find such an operation in the literature.
If something along these lines interests you, you can get the sum of all squares using the scalar product $$ \phi(A,B) := \mathrm{tr}(A^T B).$$ In fact, $\mathrm{tr}(A^T A) = \sum_{i=1}^n \sum_{j=1}^n a_{i,j}^2$.
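As a quick check that $\mathrm{tr}(A^T B)$ is just the entrywise (Frobenius) inner product (plain Python, illustrative values):

```python
A = [[1, 2], [3, 4]]

# tr(A^T B) = sum_{i,j} a_ij * b_ij, so no transpose or product is needed explicitly
def phi(A, B):
    return sum(a * b for row_a, row_b in zip(A, B) for a, b in zip(row_a, row_b))

print(phi(A, A))  # 30 = 1^2 + 2^2 + 3^2 + 4^2
```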
$\endgroup$ $\begingroup$The max norm:
The max norm is the elementwise norm with $p = \infty$: $$ \|A\|_{\text{max}} = \max_{i,j} \{|a_{ij}|\}. $$ This norm is not sub-multiplicative. Here $p=\infty$ refers to the limit as $p \to \infty$ of the entrywise norms $\Vert A \Vert_{p} = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^p \right)^{1/p}. \, $
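A small illustration of that limit (plain Python, arbitrary sample matrix): for large $p$ the entrywise $p$-norm is dominated by the largest entry in absolute value.

```python
A = [[1, -5, 2], [3, 4, -1]]

# entrywise p-norm: (sum |a_ij|^p)^(1/p)
def entrywise_p_norm(A, p):
    return sum(abs(a) ** p for row in A for a in row) ** (1 / p)

# max norm: largest absolute value of any entry
max_norm = max(abs(a) for row in A for a in row)

print(max_norm)                    # 5
print(entrywise_p_norm(A, 100))    # approaches 5 as p grows
```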
If you want something without absolute-value bars, think of the projection of your matrix onto $E$, $\operatorname{tr}\left(E\cdot A\right)$, where $E$ is the matrix full of $1$'s. This is equivalent to computing the scalar product $\langle e |Ae \rangle$, with $e$ a vector full of $1$'s, since $|e \rangle \langle e|=E$.
$\endgroup$ $\begingroup$You can certainly consider the sum of all the entries in a square matrix. But what would it be good for?
Mind that square matrices are a way of writing endomorphisms explicitly (i.e. linear transformations of a space into itself), so any quantity you attach to a matrix should actually say something about the endomorphism. Trace and determinant remain unchanged if the matrix $A$ is replaced by the matrix $PAP^{-1}$, where $P$ is any invertible matrix. Thus, trace and determinant are numbers that you can attach to the endomorphism represented by $A$.
It wouldn't be the case for the sum of all entries, which does not remain invariant under the said matrix transformation.
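To make this concrete, here is a small check (plain Python; the matrix $P$ is an arbitrary invertible example, with $P^{-1}$ written out by hand):

```python
A    = [[1, 2], [3, 4]]
P    = [[2, 0], [0, 1]]    # invertible change-of-basis matrix
Pinv = [[0.5, 0], [0, 1]]  # its inverse

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

B = matmul(matmul(P, A), Pinv)  # B = P A P^{-1}

trace = lambda M: sum(M[i][i] for i in range(len(M)))
grand = lambda M: sum(sum(row) for row in M)

print(trace(A), trace(B))  # equal: the trace is basis-independent
print(grand(A), grand(B))  # differ: the grand sum is not
```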
$\endgroup$ $\begingroup$I just want to add that the "grandsum" operation, as Scott's answer calls it, does in fact show up in (vector) geometry.
Consider the sequence of formulae:
$|x|^2 = x\bullet x$
$|x+y|^2 = |x|^2+|y|^2+2x\bullet y$
$|x+y+z|^2 = |x|^2+|y|^2+|z|^2+2x\bullet y+2x\bullet z+2y\bullet z$
etc.
A nice way of remembering these is to instead remember the following, more intuitive formulae:
$$|x|^2 = x\bullet x$$
$$|x+y|^2 = \mathrm{grandsum}\left( \begin{array}{ccc} x \bullet x & x \bullet y \\ y \bullet x & y \bullet y \end{array} \right)$$
$$|x+y+z|^2 = \mathrm{grandsum} \left( \begin{array}{ccc} x \bullet x & x \bullet y & x \bullet z \\ y \bullet x & y \bullet y & y \bullet z \\ z \bullet x & z \bullet y & z \bullet z \end{array} \right)$$
etc.
Now one might object - "but those aren't really matrices, they're just arrays! The important thing is really matrix multiplication - that's what sets matrices apart from arrays, so if you haven't used matrix multiplication, you're not really using matrices."
Actually, by making use of J.M.'s answer, we can involve matrix multiplication in the proofs of these identities. For example, here's the $n=3$ case, where $[x,y,z]$ denotes the matrix whose columns are $x,y,z$ and $\tilde{1}_3$ denotes the all-ones column vector in $\mathbb{R}^3$.
$$|x+y+z|^2 = (x+y+z)^\top (x+y+z) = ([x,y,z]\tilde{1}_3)^\top([x,y,z]\tilde{1}_3) = \tilde{1}_3^\top[x,y,z]^\top[x,y,z] \tilde{1}_3 = \mathrm{grandsum}([x,y,z]^\top[x,y,z]) = \mathrm{grandsum} \left( \begin{array}{ccc} x \bullet x & x \bullet y & x \bullet z \\ y \bullet x & y \bullet y & y \bullet z \\ z \bullet x & z \bullet y & z \bullet z \end{array} \right)$$
Of course, this isn't really necessary, since we can just expand things out by hand. Still, it's nice to know that there's a proof out there that involves matrix multiplication in a very real way, since it reassures us that we're really taking the sum of a matrix, and not just a "mere array."
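A quick numerical check of the $n=3$ identity above (plain Python; the three vectors are arbitrary samples):

```python
x, y, z = [1, 0, 2], [0, 1, 1], [2, 2, 0]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))

s = [a + b + c for a, b, c in zip(x, y, z)]  # the vector x + y + z
lhs = dot(s, s)                              # |x + y + z|^2

vecs = [x, y, z]
gram = [[dot(u, v) for v in vecs] for u in vecs]  # matrix of pairwise dot products
rhs = sum(sum(row) for row in gram)               # its grandsum

print(lhs, rhs)  # 27 27
```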
$\endgroup$ $\begingroup$If your matrix $A$ is invertible, then the sum over all of its elements is given by $$\sum_{i,j}A_{ij} = 1 - \det (I-AJ)$$ where $J$ is the matrix all of whose entries are $1$. To see why, consider the determinant $\det (B-J)$. If ${\bf b}_i$ are the column vectors of $B$ and ${\bf j}$ is the column vector all of whose entries are $1$, then by multilinearity (terms containing ${\bf j}$ twice vanish, having a repeated column) we have \begin{align} \det ({\bf b}_1 - {\bf j}, \ldots, {\bf b}_n - {\bf j}) &= \det ({\bf b}_1, {\bf b}_2 - {\bf j}, \ldots, {\bf b}_n - {\bf j}) - \det ({\bf j}, {\bf b}_2 - {\bf j}, \ldots, {\bf b}_n - {\bf j})\\ &=\det B - \displaystyle\sum_{k=1}^n \det \left( {\bf b}_1, \ldots, {\bf b}_{k-1}, {\bf j}, {\bf b}_{k+1}, \ldots, {\bf b}_n \right). \end{align} Notice that the last sum is the sum over all entries of the adjugate matrix of $B$, and so we have $$ \displaystyle\sum_{k=1}^n \det \left( {\bf b}_1, \ldots, {\bf b}_{k-1}, {\bf j}, {\bf b}_{k+1}, \ldots, {\bf b}_n \right) = \det B \left(\sum_{i,j=1}^n (B^{-1})_{ij} \right) $$ so that $\det(B-J) = \det B \left(1 - \sum_{i,j}(B^{-1})_{ij}\right)$. Now set $B^{-1} = A$: since $B - J = A^{-1}(I - AJ)$, dividing both sides by $\det B = \det A^{-1}$ gives the result.
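The identity is easy to verify numerically for a small case (plain Python, with the $2\times 2$ determinant written out by hand; the matrix $A$ is an arbitrary invertible example):

```python
A = [[2, 1], [1, 3]]  # any invertible 2x2 matrix
J = [[1, 1], [1, 1]]  # all-ones matrix

# form I - AJ
AJ = [[sum(A[i][k] * J[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
M = [[(i == j) - AJ[i][j] for j in range(2)] for i in range(2)]
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]

print(1 - det_M, sum(sum(row) for row in A))  # both 7
```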
$\endgroup$ $\begingroup$I refer you to the article Merikoski, "On the trace and the sum of elements of a matrix", Linear Algebra and Its Applications, Volume 60, August 1984, pp. 177-185.
$\endgroup$ $\begingroup$Consider the $m\times n$ matrix $A$:$$\begin{bmatrix}A_{11} & \cdots & A_{1n}\\ \vdots & \ddots & \vdots\\ A_{m1} & \cdots & A_{mn}\end{bmatrix}$$Also consider the $n\times m$ matrix $B$ such that $B_{ij}=1$ for all $i,j$. As a visual aid, $B$ is equal to:$$\begin{bmatrix}1 & \cdots & 1\\ \vdots & \ddots & \vdots\\ 1 & \cdots & 1\end{bmatrix}$$It can be shown (by direct computation, or by induction) that every element in row $i$ of the product $AB$ equals the sum of the elements of row $i$ of $A$:$$AB=\begin{bmatrix}\sum_{k=1}^{n}A_{1k} & \cdots & \sum_{k=1}^{n}A_{1k}\\ \vdots & \ddots & \vdots\\ \sum_{k=1}^{n}A_{mk} & \cdots & \sum_{k=1}^{n}A_{mk}\end{bmatrix}$$Try it out yourself. Note that the matrix product $AB$ is an $m\times m$ matrix, and recall that the trace $\operatorname{tr}$ of a $y\times y$ matrix $X$ is the sum of the diagonal elements of $X$:$$\operatorname{tr}\left(X\right)=X_{11}+X_{22}+\dots+X_{yy}=\sum_{i=1}^{y} X_{ii}$$Together, these facts show us that $\operatorname{tr}\left(AB\right)$ is the sum of all the elements of $A$. More explicitly:$$\operatorname{tr}\left(AB\right)=\sum_{i=1}^{m}\sum_{j=1}^{n} A_{ij}$$which is often abbreviated to:$$\sum_{i,j} A_{ij}$$I hope this will suffice.
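The construction is easy to try out in plain Python (illustrative values, with $m=2$, $n=3$):

```python
A = [[1, 2, 3], [4, 5, 6]]       # an m x n matrix, here m = 2, n = 3
m, n = len(A), len(A[0])
B = [[1] * m for _ in range(n)]  # the n x m all-ones matrix

# AB is m x m; every entry in row i of AB is the sum of row i of A
AB = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(m)]
      for i in range(m)]
tr = sum(AB[i][i] for i in range(m))

print(tr)  # 21 = 1 + 2 + 3 + 4 + 5 + 6
```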
$\endgroup$