If I have an equation of the form $$e^{ax} + e^{bx} = c,$$ where $a$, $b$, and $c$ are constants, how can I simplify the equation to solve for $x$?
Taking the logarithm of both sides is tricky, since I know $\log(ab) = \log(a) + \log(b)$, but I don't know how to simplify $\log(a + b)$...
$\endgroup$
$\begingroup$Write the equation as $z + r z^s = 1$ where $z = e^{ax}/c$, $r = c^{b/a-1}$, $s = b/a$. There is a series for a solution of this, which should converge for sufficiently small $r$:
$$ z = \sum_{k=0}^\infty \frac{(-1)^k a_k}{k!} r^k \ \text{where} \ a_k = \prod_{j=0}^{k-2} (ks-j)$$
(taking $a_0 = a_1 = 1$)
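As a quick numerical sanity check (a sketch; the function name, test values, and truncation at 40 terms are my own choices), the partial sums of this series can be compared against the original equation:

```python
import math

def solve_series(a, b, c, terms=40):
    """Approximate a solution of e^{ax} + e^{bx} = c via the series above:
    z = e^{ax}/c solves z + r z^s = 1 with r = c^{b/a - 1}, s = b/a.
    Only expected to converge for sufficiently small r."""
    s = b / a
    r = c ** (s - 1)
    z = 0.0
    for k in range(terms):
        a_k = 1.0
        for j in range(k - 1):        # empty product gives a_0 = a_1 = 1
            a_k *= k * s - j
        z += (-1) ** k * a_k / math.factorial(k) * r ** k
    return math.log(c * z) / a        # back-substitute z = e^{ax}/c

# Example: a = 1, b = 2, c = 0.1, so r = c^{b/a-1} = 0.1 is small
x = solve_series(1.0, 2.0, 0.1)
residual = math.exp(x) + math.exp(2 * x) - 0.1
```

Since the partial sums settle to a $z$ with $z + r z^s = 1$, back-substituting should make `residual` essentially zero.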
$\endgroup$ $\begingroup$Following up on Alex Becker's answer, you can turn your equation into an equation of the form $$ y^a + y^b = c, $$ and for $a$ and $b$ distinct positive integers with one of them greater than or equal to $5$, Galois theory tells us there is no general solution of this polynomial equation in terms of traditional arithmetic (i.e. addition, subtraction, multiplication, division, and taking $n^{\text{th}}$ roots). I've tried to find a website that discusses this, but a quick look over Google and Wikipedia gave me nothing; it is a very well-known result though. Therefore we expect no general solution to your equation, because one would imply very specific results for which we know there exists no general method.
Hope that helps,
EDIT : There wasn't enough space in the comment box to detail this.
If you want computer accuracy, you can use numerical methods: find a root of $f(x) = c - e^{ax} - e^{bx}$ using, for instance, Newton's method. But analytically I don't have much hope. There is one thing you could do though: using the Taylor expansion of $e^x$ (specifically $e^t \ge 1 + t$), $$ 0 = e^{ax} + e^{bx} - c \ge (1 + ax) + (1 + bx) - c = (2-c) + (a+b) x, $$ which, assuming $a + b > 0$, gives you a rough upper bound on $x$: $$ x \le \frac{c-2}{a+b}. $$ I have no idea how to get a lower bound though. Note that this bound feels very crappy after you give some thought to it; fix $a=b=1$, which means you're trying to solve $2e^x = c$, so $e^x = c/2$ and $x = \log(c/2) < \frac{c-2}2$. Here's an idea of how crappy this bound is:
Comparing $\log(c/2)$ with $\frac{c-2}2$, we see that for $c > 4$ it's already very crappy. Anyway.
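To make the numerical route concrete, here is a minimal Newton iteration for $f(x) = c - e^{ax} - e^{bx}$ (a sketch; the function name, starting point, and tolerances are my own choices — for $a, b > 0$ the root is unique, since $e^{ax} + e^{bx}$ is then strictly increasing):

```python
import math

def newton_root(a, b, c, x0=0.0, tol=1e-12, max_iter=100):
    """Newton's method on f(x) = c - e^{ax} - e^{bx}."""
    x = x0
    for _ in range(max_iter):
        fx = c - math.exp(a * x) - math.exp(b * x)
        dfx = -(a * math.exp(a * x) + b * math.exp(b * x))
        step = fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: e^x + e^{2x} = 6 has the exact solution x = ln 2
x = newton_root(1.0, 2.0, 6.0)
```

For $a, b > 0$ the function $e^{ax} + e^{bx}$ is increasing and convex, so Newton from any starting point converges here.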
$\endgroup$ $\begingroup$If $a/b$ is $2$ or $1/2$, the equation reduces to a quadratic. This can be useful if you have an experiment where you can control the times at which measurements are taken to be $t_0$, $t_0+d$, and $t_0+2d$, and the process proceeds exponentially (like Newton's law of cooling).
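For instance (a sketch; the function name is my own), with $a = 2b$ the substitution $y = e^{bx}$ turns $e^{2bx} + e^{bx} = c$ into $y^2 + y - c = 0$, and only the positive root gives a real $x$:

```python
import math

def solve_double_ratio(b, c):
    """Solve e^{2bx} + e^{bx} = c (the case a = 2b) by substituting
    y = e^{bx}: y^2 + y - c = 0, keeping the positive root."""
    y = (-1 + math.sqrt(1 + 4 * c)) / 2
    return math.log(y) / b

# Example: e^{2x} + e^x = 6 gives y = 2, so x = ln 2
x = solve_double_ratio(1.0, 6.0)
```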
$\endgroup$ $\begingroup$Unfortunately, no elementary solution exists for general $a,b$. This is because solving $$e^{ax}+e^{bx}=c$$ is equivalent to solving $y^{a/b}+y=c$ where $y=e^{bx}$, and even in the case $a/b=5$ the solution is expressed in terms of Bring radicals.
$\endgroup$ $\begingroup$As others have pointed out, there isn't a formula to solve this type of equation. However, I have developed my own numeric algorithm for solving any equation of the form $$f(x) = A_1 e^{B_1x} + A_2 e^{B_2x} + \ldots + A_N e^{B_Nx} = 0$$ and finding all real values of $x$. Unlike some of the suggestions, such as using Newton's method, this will always converge and never misses any roots.
If one of the $B$ terms is set to $0$, then the sum contains a constant, as in the question. I'm not a mathematician, so I don't know if such a method has already been described. If not, I will christen it "Eng's method" after yours truly.
The basic method is as follows:
Sort the terms by ascending value of exponent: $B_1 < B_2 < \ldots < B_N$
Find a range where there could possibly be a root. To do so, consider that as $x$ increases, the $N$th term grows faster than all other terms, so find a value of $x$ where $$ |A_N e^{B_Nx}| > \left|\sum_{i=1}^{N-1}A_ie^{B_ix}\right|. $$ Concretely, count the number of terms whose sign is opposite to that of $A_N$; call this number $P$. For each such term $A_i e^{B_ix}$, solve $$|A_N e^{B_Nx_i}| = P\cdot|A_i e^{B_ix_i}|,$$ $$x_i = \frac{\ln(P\cdot|A_i/A_N|)}{B_N - B_i},$$ $$x_{max} = \max_i(x_i).$$ So at $x_{max}$ we have guaranteed that the fastest-growing term is $P$ times larger than every term of opposite sign, so their sum cannot cancel it. Therefore there will definitely not be a root for $x > x_{max}$.
Using the same sort of reasoning, look at the slowest-growing term $A_1e^{B_1x}$ and find, for each term of opposite sign, an $x$ where the first term is $Q$ times larger ($Q$ being the number of opposite-signed terms). The minimum such $x$ is $x_{min}$. Since the slowest-growing term is also the slowest to shrink as $x \to -\infty$, we can confidently say that for $x < x_{min}$ the sign of the first term dominates and $f(x)$ never crosses $0$.
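The two bracketing bounds above can be sketched like this (my own naming; it assumes at least one term of each sign — otherwise $f$ has no real roots at all — and distinct exponents $B_i$):

```python
import math

def root_bracket(terms):
    """terms = [(A_1, B_1), ..., (A_N, B_N)]; returns (x_min, x_max)
    such that f(x) = sum(A_i * exp(B_i * x)) has no roots outside it.
    Assumes at least one term of each sign and distinct B_i."""
    terms = sorted(terms, key=lambda t: t[1])   # ascending exponents
    A1, B1 = terms[0]
    AN, BN = terms[-1]
    # x_max: fastest-growing term is P times each opposite-signed term
    opp_hi = [(A, B) for A, B in terms[:-1] if A * AN < 0]
    P = len(opp_hi)
    x_max = max(math.log(P * abs(A / AN)) / (BN - B) for A, B in opp_hi)
    # x_min: slowest-growing term is Q times each opposite-signed term
    opp_lo = [(A, B) for A, B in terms[1:] if A * A1 < 0]
    Q = len(opp_lo)
    x_min = min(math.log(abs(A1 / A) / Q) / (B - B1) for A, B in opp_lo)
    return x_min, x_max

# Example: e^x + e^{2x} - 3 = 0 has its only root at ln((-1 + sqrt(13))/2)
x_min, x_max = root_bracket([(-3.0, 0.0), (1.0, 1.0), (1.0, 2.0)])
```

Any real root must then lie inside `[x_min, x_max]`.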
Take repeated derivatives ($k$ of them) of $f(x)$: $$\frac{d^{k}}{d x^k}f(x) = B_1^{k}A_1e^{B_1x} + B_2^{k}A_2e^{B_2x} + \ldots + B_N^{k}A_Ne^{B_Nx}.$$ Note that the more derivatives we take, the faster the coefficients grow for the terms with larger $|B_i|$. For a large enough value of $k$, one of two things will happen: $$|B_N^{k}A_Ne^{B_Nx_{min}}| > \left|\sum_{i=1}^{N-1}B_i^{k}A_ie^{B_ix_{min}}\right|$$ or $$|B_1^{k}A_1e^{B_1x_{max}}| > \left|\sum_{i=2}^{N}B_i^{k}A_ie^{B_ix_{max}}\right|.$$ Basically, either the fastest-growing exponential term (the $N$th term, since we ordered them this way) ends up dominant over the whole range of interest, or the slowest-shrinking term does. (The latter can happen if we have something like $-40e^{-0.73x}+5e^{-0.67x}-0.1e^{0.125x}-0.2=0$.) In either case, we can keep increasing $k$ until one of these two conditions is met.
At this point we know that for the $k$th derivative a single term is dominant over the entire range where there could possibly be a root, which means the sign of the $k$th derivative is either always positive or always negative for $x_{min} < x < x_{max}$. Therefore the $(k-1)$th derivative must be monotonic over this range. This is great news, because for a monotonic function we can find roots numerically using bracketed algorithms like bisection or Brent's method. In fact, all we have to do is evaluate the $(k-1)$th derivative at $x=x_{min}$ and at $x=x_{max}$. If these endpoints have the same sign, then the $(k-1)$th derivative has that sign over the entire range. However, if they have opposite signs, then we can use bisection to find the root of the $(k-1)$th derivative with guaranteed convergence to any desired accuracy. If we find $x_{d\text{-}root}$ where $\frac{d^{k-1}}{d x^{k-1}}f(x_{d\text{-}root}) = 0$, this leaves us with two ranges, $x_{min} < x < x_{d\text{-}root}$ and $x_{d\text{-}root} < x < x_{max}$.
Now we have either one range where the $(k-1)$th derivative has a consistent sign, or two ranges where it has a consistent sign on each (positive on one and negative on the other). Either way, on these one or two ranges we can use bisection (or another bracketed root-finding method) to find if and where $\frac{d^{k-2}}{d x^{k-2}}f(x) = 0$.
We continue in this manner, breaking our original range into smaller ranges whenever we find a zero of the current $k$th derivative, and then proceeding with bracketed root finding one derivative level down. Eventually we back out of all the layers of derivatives and are doing bracketed root finding on the original function itself.
Note: There are many ways this could be further optimized. I have coded the algorithm and posted some proof-of-concept code on GitHub as solve-exponentials.
TL;DR:
Find a range where roots could possibly occur. Take a bunch of derivatives ($k$ of them) until it's obvious that one term is dominant in that range. (With exponentials this always happens eventually.) This tells us that the $(k-1)$th derivative is a monotonic function, so we can find its root (if it has one) via bisection. Now we have one or two regions on which we know the $(k-2)$th derivative is monotonic. Apply this logic recursively until we have regions of the original curve where we can find roots using bisection. This way we won't miss any roots.
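Putting the steps together, here is a compact sketch of the recursion (my own simplified variant: the derivative depth and the bracket are supplied by the caller rather than derived from the dominance test, so the caller must pick a `depth` at which one term dominates on the interval):

```python
import math

def d_k(terms, k, x):
    """k-th derivative of f(x) = sum(A * exp(B * x)) at x."""
    return sum(A * B ** k * math.exp(B * x) for A, B in terms)

def bisect_root(terms, k, lo, hi, iters=200):
    """Bisection on the k-th derivative over [lo, hi]; sign change assumed."""
    flo = d_k(terms, k, lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * d_k(terms, k, mid) <= 0:
            hi = mid
        else:
            lo = mid
            flo = d_k(terms, k, lo)
    return 0.5 * (lo + hi)

def all_roots(terms, lo, hi, depth):
    """All roots of f on [lo, hi]. `depth` must be large enough that the
    depth-th derivative keeps a single sign on [lo, hi] (the dominance
    condition from the answer); here the caller supplies it directly."""
    def rec(k):
        # zeros of the (k+1)-th derivative split [lo, hi] into pieces
        # on which the k-th derivative is monotonic
        splits = [] if k == depth else rec(k + 1)
        pts = [lo] + splits + [hi]
        found = []
        for a, b in zip(pts, pts[1:]):
            if d_k(terms, k, a) * d_k(terms, k, b) < 0:
                found.append(bisect_root(terms, k, a, b))
        return found
    return rec(0)

# Example: e^{2x} - 3 e^x + 2 = (e^x - 1)(e^x - 2) has roots 0 and ln 2
roots = all_roots([(1.0, 2.0), (-3.0, 1.0), (2.0, 0.0)], -10.0, 10.0, 20)
```

Both roots are found because each monotonic piece gets its own bracketed search; a plain Newton iteration from one starting point could land on only one of them.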
$\endgroup$