I have two square matrices: $A$ and $B$. $A^{-1}$ is known and I want to calculate $(A+B)^{-1}$. Are there theorems that help with calculating the inverse of the sum of matrices? In the general case $B^{-1}$ is not known, but if necessary it can be assumed that $B^{-1}$ is also known.
12 Answers
In general, $A+B$ need not be invertible, even when $A$ and $B$ are. But one might ask whether you can have a formula under the additional assumption that $A+B$ is invertible.
As noted by Adrián Barquero, there is a paper by Ken Miller published in the Mathematics Magazine in 1981 that addresses this.
He proves the following:
Lemma. Suppose $A$ and $A+B$ are invertible and $B$ has rank $1$, and let $g=\operatorname{trace}(BA^{-1})$. Then $g\neq -1$ and$$(A+B)^{-1} = A^{-1} - \frac{1}{1+g}A^{-1}BA^{-1}.$$
From this lemma, we can take a general $A+B$ that is invertible and write it as $A+B = A + B_1+B_2+\cdots+B_r$, where $B_i$ each have rank $1$ and such that each $A+B_1+\cdots+B_k$ is invertible (such a decomposition always exists if $A+B$ is invertible and $\mathrm{rank}(B)=r$). Then you get:
Theorem. Let $A$ and $A+B$ be nonsingular matrices, and let $B$ have rank $r\gt 0$. Write $B=B_1+\cdots+B_r$, where each $B_i$ has rank $1$ and each $C_{k+1} = A+B_1+\cdots+B_k$ is nonsingular, and set $C_1 = A$. Then$$C_{k+1}^{-1} = C_{k}^{-1} - g_kC_k^{-1}B_kC_k^{-1},\qquad g_k = \frac{1}{1 + \operatorname{trace}(C_k^{-1}B_k)}.$$In particular,$$(A+B)^{-1} = C_r^{-1} - g_rC_r^{-1}B_rC_r^{-1}.$$
(If the rank of $B$ is $0$, then $B=0$, so $(A+B)^{-1}=A^{-1}$).
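The theorem's recursion can be sketched in NumPy. Splitting $B$ into rank-1 pieces via its SVD is one possible choice (not prescribed by the paper), and the code assumes every intermediate sum $C_k$ stays nonsingular:

```python
import numpy as np

def inv_sum_rank1_updates(A_inv, B, tol=1e-12):
    """Invert A + B given A^{-1}, via the recursion
    C_{k+1}^{-1} = C_k^{-1} - g_k C_k^{-1} B_k C_k^{-1},
    g_k = 1 / (1 + trace(C_k^{-1} B_k)).
    B is split into rank-1 pieces via its SVD (one possible choice);
    assumes each intermediate C_k is nonsingular and each trace != -1."""
    U, s, Vt = np.linalg.svd(B)
    C_inv = A_inv.copy()
    for k in range(len(s)):
        if s[k] <= tol * s[0]:
            break                                 # remaining pieces are ~0
        Bk = s[k] * np.outer(U[:, k], Vt[k])      # k-th rank-1 term of B
        g = 1.0 / (1.0 + np.trace(C_inv @ Bk))    # g_k from the theorem
        C_inv = C_inv - g * (C_inv @ Bk @ C_inv)
    return C_inv

# Check against a direct inverse on a random rank-2 perturbation.
rng = np.random.default_rng(0)
A = np.eye(5) + 0.2 * rng.standard_normal((5, 5))
B = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))  # rank 2
X = inv_sum_rank1_updates(np.linalg.inv(A), B)
```

Since the rank of $B$ here is $2$, only two rank-1 updates are performed, each of which is just a Sherman–Morrison step.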
It is shown in On Deriving the Inverse of a Sum of Matrices that
$(A+B)^{-1}=A^{-1}-A^{-1}B(A+B)^{-1}$.
This equation cannot be used to calculate $(A+B)^{-1}$, but it is useful for perturbation analysis where $B$ is a perturbation of $A$. There are several other variations of the above form (see equations (22)-(26) in this paper).
This result is useful because it only requires $A$ and $A+B$ to be nonsingular. By comparison, the SMW identity and Ken Miller's paper (mentioned in the other answers) require nonsingularity or rank conditions on $B$.
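Although the identity contains $(A+B)^{-1}$ on both sides, it can be used as a fixed-point iteration when $B$ really is a small perturbation of $A$. A sketch (the convergence condition, spectral radius of $A^{-1}B$ below $1$, is my assumption, not stated in the paper):

```python
import numpy as np

def refine_inverse(A_inv, B, iters=60):
    """Iterate X <- A^{-1} - A^{-1} B X, whose fixed point is (A+B)^{-1}.
    Converges when the spectral radius of A^{-1} B is below 1,
    i.e. when B is a small perturbation relative to A."""
    M = A_inv @ B
    X = A_inv.copy()
    for _ in range(iters):
        X = A_inv - M @ X
    return X

# Small perturbation of a well-conditioned A.
rng = np.random.default_rng(1)
A = np.diag([2.0, 3.0, 4.0, 5.0])
B = 0.1 * rng.standard_normal((4, 4))
X = refine_inverse(np.linalg.inv(A), B)
```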
I found this accidentally.
Suppose we are given $A$ and $B$, where $A$ and $A+B$ are invertible, and we want an expression for $(A+B)^{-1}$ without requiring $B$ to be invertible. The intuition goes like this: suppose we can express $(A+B)^{-1} = A^{-1} + X$; then $X$ can be computed by a simple, straightforward calculation: \begin{equation} (A+B)^{-1} = A^{-1} + X \end{equation} \begin{equation} (A^{-1} + X) (A + B) = I \end{equation} \begin{equation} A^{-1} A + X A + A^{-1} B + X B = I \end{equation} \begin{equation} X(A + B) = - A^{-1} B \end{equation} \begin{equation} X = - A^{-1} B ( A + B)^{-1} \end{equation} \begin{equation} X = - A^{-1} B (A^{-1} + X) \end{equation} \begin{equation} (I + A^{-1}B) X = - A^{-1} B A^{-1} \end{equation} \begin{equation} X = - (I + A^{-1}B)^{-1} A^{-1} B A^{-1} \end{equation}
This lemma is a simplification of the lemma presented by Ken Miller (1981).
$(A+B)^{-1} = A^{-1} - A^{-1}BA^{-1} + A^{-1}BA^{-1}BA^{-1} - A^{-1}BA^{-1}BA^{-1}BA^{-1} + \cdots$
provided $\|A^{-1}B\|<1$ or $\|BA^{-1}\| < 1$ for some submultiplicative norm $\|\cdot\|$. This is just the Neumann (geometric) series expansion of the inverse, together with basic facts about its convergence.
(posted essentially at the same time as mjqxxx)
I'm surprised that no one has noticed that this is a special case of the well-known matrix inversion lemma, or Woodbury matrix identity, which says
$ \left(A+UCV \right)^{-1} = A^{-1} - A^{-1}U \left(C^{-1}+VA^{-1}U \right)^{-1} VA^{-1}$ ,
Just set $U=V=I$; this immediately gives
$ \left(A+C \right)^{-1} = A^{-1} - A^{-1} \left(C^{-1}+A^{-1} \right)^{-1} A^{-1}$ .
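A quick NumPy check of this special case (my own sketch; note it needs both $A$ and $C$ invertible, and $C^{-1}+A^{-1}$ is invertible exactly when $A+C$ is):

```python
import numpy as np

def inv_sum_via_woodbury(A_inv, C_inv):
    """Special case U = V = I of the Woodbury identity:
    (A + C)^{-1} = A^{-1} - A^{-1} (C^{-1} + A^{-1})^{-1} A^{-1}.
    Requires A and C invertible; C^{-1} + A^{-1} = A^{-1}(A + C)C^{-1}
    is then invertible exactly when A + C is."""
    return A_inv - A_inv @ np.linalg.inv(C_inv + A_inv) @ A_inv

rng = np.random.default_rng(2)
A = np.eye(4) + 0.3 * rng.standard_normal((4, 4))
C = np.eye(4) + 0.3 * rng.standard_normal((4, 4))
X = inv_sum_via_woodbury(np.linalg.inv(A), np.linalg.inv(C))
```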
A formal power series expansion is possible: $$ \begin{eqnarray} (A + \epsilon B)^{-1} &=& \left(A \left(I + \epsilon A^{-1}B\right)\right)^{-1} \\ &=& \left(I + \epsilon A^{-1}B\right)^{-1} A^{-1} \\ &=& \left(I - \epsilon A^{-1}B + \epsilon^2 A^{-1}BA^{-1}B - ...\right) A^{-1} \\ &=& A^{-1} - \epsilon A^{-1} B A^{-1} + \epsilon^2 A^{-1} B A^{-1} B A^{-1} - ... \end{eqnarray} $$ Provided the spectral radius of $A^{-1}B$ is less than $1$ (i.e., $A$ is sufficiently "large" compared to $B$), this converges to the correct result at $\epsilon=1$.
Assuming everything is nicely invertible, you are probably looking for the SMW identity (which, I think, can also be generalized to pseudoinverses if needed).
Please see the caveat in the comments below; in general, if $B$ is low-rank, then SMW works well.
It is possible to come up with fairly simple examples where $A$, $A^{-1}$, $B$, and $B^{-1}$ are all very nice, but applying $(A+B)^{-1}$ is very difficult.
The canonical example is where $A = \Delta$ is a finite difference implementation of the Laplacian on a regular grid (with, for example, Dirichlet boundary conditions), and $B=k^2I$ is a multiple of the identity. The finite difference Laplacian and its inverse are very nice and easy to deal with, as is the identity matrix. However, the combination $$\Delta + k^2 I$$ is the Helmholtz operator, which is widely known to be extremely difficult to solve for large $k$.
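A small numerical illustration (my own, not from the answer) of one source of the difficulty: once $k^2$ lies inside the spectrum of $-\Delta$, the Helmholtz matrix becomes indefinite, and at a resonant $k$ it is exactly singular:

```python
import numpy as np

def fd_laplacian_1d(n):
    """1D finite-difference Laplacian on n interior points,
    Dirichlet boundary conditions, unit grid spacing."""
    return (np.diag(-2.0 * np.ones(n))
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))

n = 100
L = fd_laplacian_1d(n)
eig_L = np.linalg.eigvalsh(L)    # all negative: L is negative definite
k2 = -eig_L[n // 2]              # choose k^2 inside the spectrum of -L
H = L + k2 * np.eye(n)           # Helmholtz operator: Delta + k^2 I
eig_H = np.linalg.eigvalsh(H)    # now indefinite, with a zero eigenvalue
```

Indefiniteness is what defeats the standard iterative solvers that work so well for the Laplacian itself.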
If $A$ and $B$ were numbers, there would be no simpler way to write $\frac{1}{A+B}$ in terms of $\frac{1}{A}$ and $B$, so I don't know why you would expect one for matrices. It is even possible to have matrices $A$ and $B$ such that neither $A^{-1}$ nor $B^{-1}$ exists but $(A+B)^{-1}$ does, or, conversely, such that both $A^{-1}$ and $B^{-1}$ exist but $(A+B)^{-1}$ doesn't.
Actually, starting directly from @Shiyu's answer about perturbations, subtracting $(A+B)^{-1}$ and factoring gives
$$0=A^{-1}-(A^{-1}B+I)(A+B)^{-1}$$ followed by$$(A+B)^{-1}=(A^{-1}B+I)^{-1}A^{-1}$$
And by symmetry of course
$$(A+B)^{-1}=(B^{-1}A+I)^{-1}B^{-1}$$
Now remember, $(I+X)^{-1}$ can be expanded as $I-X+X^2-X^3+\cdots$ by the geometric series, provided the series converges (for instance, when $\|X\|<1$).
So if $X=B^{-1}A$ or $X=A^{-1}B$, and multiplication by $A$, $B$, and either $A^{-1}$ or $B^{-1}$ is cheap, then this can work out nicer than other methods of computing the inverse.
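The "cheap multiplication" point can be made concrete: applying $(A+B)^{-1}$ to a vector needs only matrix-vector products with $B$ and $A^{-1}$, never an explicit inverse of $A+B$. A sketch (the convergence assumption, spectral radius of $X=A^{-1}B$ below $1$, is mine):

```python
import numpy as np

def apply_inv_sum(apply_A_inv, apply_B, y, terms=40):
    """Apply (A+B)^{-1} = (I + A^{-1}B)^{-1} A^{-1} to a vector y by
    expanding (I + X)^{-1} = I - X + X^2 - ... with X = A^{-1} B,
    using only matrix-vector products.
    Assumes the spectral radius of A^{-1} B is below 1."""
    term = apply_A_inv(y)                    # first term: A^{-1} y
    out = term.copy()
    for _ in range(terms):
        term = -apply_A_inv(apply_B(term))   # multiply previous term by -X
        out = out + term
    return out

# A = 4I (trivially cheap to invert), B a small dense perturbation.
rng = np.random.default_rng(3)
Bmat = 0.3 * rng.standard_normal((5, 5))
y = rng.standard_normal(5)
x = apply_inv_sum(lambda v: v / 4.0, lambda v: Bmat @ v, y)
```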
Extending Muhammad Fuady's approach: we have
\begin{equation} (A+B)^{-1} = A^{-1} + X \end{equation}
with
\begin{equation} X = - (I + A^{-1}B)^{-1} A^{-1} B A^{-1}, \end{equation}
so
\begin{equation} (A+B)^{-1} = A^{-1} - (I + A^{-1}B)^{-1} A^{-1} B A^{-1}. \tag{1}\label{eq1} \end{equation}
This rearranges to
\begin{equation} (A+B)^{-1} = \left(I - (I + A^{-1}B)^{-1} A^{-1} B \right)A^{-1}. \tag{2}\label{eq2} \end{equation}
Now consider the factor $(I + A^{-1}B)^{-1}$. This is itself the inverse of a sum of two matrices, so we can apply \eqref{eq2} with $A=I$ and $B$ replaced by $A^{-1}B$, which gives
\begin{equation} (I + A^{-1}B)^{-1} = I - (I + A^{-1}B)^{-1}A^{-1}B. \end{equation}
Substituting the left-hand side of this for the right-hand side where it appears in \eqref{eq2} gives
\begin{equation} (A+B)^{-1} = (I + A^{-1}B)^{-1}A^{-1}, \tag{3}\label{eq3} \end{equation}
which is simpler than \eqref{eq1} and closely parallels the scalar identity
\begin{equation} \frac{1}{a+b}=\frac{1}{\left(1+\frac{b}{a}\right)a}. \tag{4}\label{eq4} \end{equation}
The technique is useful in computation: if the entries of $A$ and $B$ differ greatly in size, then computing $(A+B)^{-1}$ via \eqref{eq3} gives a more accurate floating-point result than inverting the sum of the two matrices directly.
I know the question has been answered multiple times with great answers, but with my answer you don't need to memorize any lemmas or formulas.
Suppose $(A+B)x=y$; then $x=(A+B)^{-1}y$, so once we solve for $x$ we can read off the inverse. The steps are:
(1) Start with $(A+B)x=y$.
(2) Then $Ax=y-Bx$, so $x=A^{-1}y -A^{-1}Bx$.
(3) Multiply $x$ in step (2) by $B$ to get $$Bx=BA^{-1}y -BA^{-1}Bx$$ which is equivalent to $$(I+BA^{-1})Bx=BA^{-1}y $$ or, $$Bx=(I+BA^{-1})^{-1}BA^{-1}y $$
(4) Substitute this $Bx$ into the $x$ in step (2) to get $$x=A^{-1}y -A^{-1}(I+BA^{-1})^{-1}BA^{-1}y $$
(5) Now factoring out the $y$ gives the required result: $$x=\left(A^{-1} -A^{-1}(I+BA^{-1})^{-1}BA^{-1}\right)y $$
(6) The assumptions we have used are that $A$ and $I+BA^{-1}$ are nonsingular.
(7) We can also factor out $A^{-1}$ to get: $$(A+B)^{-1}=A^{-1}\left(I -(I+BA^{-1})^{-1}BA^{-1}\right)$$
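The derived formula can be checked numerically (my own sketch, not part of the answer); using a singular $B$ emphasizes that only $A$ and $I+BA^{-1}$ need to be invertible:

```python
import numpy as np

def inv_sum(A, B):
    """(A+B)^{-1} = A^{-1} - A^{-1} (I + B A^{-1})^{-1} B A^{-1};
    only A and I + B A^{-1} need to be nonsingular, B may be singular."""
    A_inv = np.linalg.inv(A)
    BA = B @ A_inv
    n = A.shape[0]
    return A_inv - A_inv @ np.linalg.inv(np.eye(n) + BA) @ BA

# B is rank-1 (hence singular), yet the formula applies.
rng = np.random.default_rng(4)
A = np.eye(4) + 0.2 * rng.standard_normal((4, 4))
B = np.outer(rng.standard_normal(4), rng.standard_normal(4))
X = inv_sum(A, B)
```

Note that $I+BA^{-1} = (A+B)A^{-1}$, so the second assumption is equivalent to $A+B$ being nonsingular, which we need anyway.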