## Solving an Equation as a Sequence of Equation Replacement Operations

Part 1 was so long because I wanted to be extremely thorough and to present things to an audience that perhaps hadn’t thought much about the logic of equation solving at all. Since we’re now all experts, perhaps it’s worth it to summarize everything very succinctly.

Given an equation in one free variable, we want to find the solution set. To do this, we replace that equation with an equivalent equation whose solution set is more obvious.

(1)

(2)

(3)

(4)

If in the transition from (1) to (2), from (2) to (3), and from (3) to (4) we are careful to replace each equation with an equivalent equation, then by the transitivity of equivalence, the original equation and the terminal equation are guaranteed to be equivalent. Since the solution set of the terminal equation is obvious, we know the solution set of the original equation as well. Thus solving an equation requires establishing that certain equation replacement operations are indeed equivalence-preserving, and having the creativity and experience to know which ones to apply and in what order.
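For concreteness, a chain of exactly this shape might look as follows (this is a made-up illustration, not necessarily the equation solved in Part 1). Each step is equivalence-preserving, so all four equations share the solution set $\{3\}$:

```latex
\begin{align*}
2(x-1) + 3 &= 7 \\
2(x-1) &= 4 && \text{subtracted } 3 \text{ from both sides}\\
x - 1 &= 2 && \text{divided both sides by } 2\\
x &= 3 && \text{added } 1 \text{ to both sides}
\end{align*}
```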

## What are the Equivalence-Preserving Operations on Equations?

If $a = b$, then $f(a) = f(b)$ for any well-defined function $f$. If $A$ and $B$ are expressions containing a free variable, then any value of that variable which satisfies $A = B$ will also satisfy $f(A) = f(B)$. In other words, if you find it useful, feel free to replace any equation with a new equation which is the result of applying any function to both sides of the original equation. Any solution to the original equation will also be a solution to the new equation.

If the function $f$ is also one-to-one, then by definition $f(A) = f(B)$ implies $A = B$, so any solution of $f(A) = f(B)$ will also be a solution to $A = B$. Thus applying a one-to-one $f$ to both sides of an equation is equivalence-preserving. If $f$ is not one-to-one, then in general, the operation is not equivalence-preserving.
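A quick way to see the difference is to brute-force solution sets over a finite test domain. This is a sketch with a made-up linear equation (not one from the post):

```python
def solution_set(lhs, rhs, domain):
    """Brute-force the solution set of lhs(x) == rhs(x) over a finite domain."""
    return {x for x in domain if lhs(x) == rhs(x)}

domain = range(-20, 21)

# A sample equation: x - 2 = 4, with solution set {6}.
lhs = lambda x: x - 2
rhs = lambda x: 4

original = solution_set(lhs, rhs, domain)

# Applying the one-to-one function f(t) = t + 5 to both sides preserves the set.
after_f = solution_set(lambda x: lhs(x) + 5, lambda x: rhs(x) + 5, domain)

# Applying the non-one-to-one function g(t) = t**2 can enlarge it:
# (x - 2)**2 == 16 is also satisfied by the extraneous value x = -2.
after_g = solution_set(lambda x: lhs(x) ** 2, lambda x: rhs(x) ** 2, domain)

print(sorted(original))  # [6]
print(sorted(after_f))   # [6]
print(sorted(after_g))   # [-2, 6]
```

Applying a function to both sides can only keep or enlarge the solution set; one-to-one functions are exactly the ones that keep it fixed.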

In solving equation (1), we applied three functions in succession. Since all three of the functions are one-to-one, we are assured that (1) and (4) are equivalent. If we had cause to apply a non-one-to-one function, then we should be vigilant for extraneous solutions.

## A More Interesting Example

Consider

(5)

As I mentioned in the other post, these square roots are begging to be squared, but since there are two of them, one squaring will not be enough. Even though it’s not necessary to do so, it’s helpful to move one radical expression to the other side.

(6)

(7) We squared!

(8)

(9) We squared again!

(10)

(11)

So (11) hands us our candidate solutions.

Since in the transition from (6) to (7) and again in the transition from (8) to (9) we had reason to apply the non-one-to-one function $x \mapsto x^2$, we should be vigilant for extraneous solutions. [Note: since both sides of (6) are necessarily positive, applying $x \mapsto x^2$ there is equivalence-preserving, so no extraneous roots will be created in that step.] By checking back in the original equation, we see that $3$ is a solution, but the other candidate is not. I am more or less content to leave it at that. But some may ask for more clarity as to exactly what happened and when, so let's indulge them.
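Since equation (5) is not reproduced here, take a stand-in of the same shape (two radicals, two squarings, one extraneous candidate): $\sqrt{2x+6} - \sqrt{x+4} = 1$. Isolating one radical and squaring twice leads to $x^2 - 2x - 15 = 0$, i.e. candidates $x = 5$ and $x = -3$, and checking back in the original equation is what separates them:

```python
import math

def original(x):
    # Left side of the stand-in equation sqrt(2x + 6) - sqrt(x + 4) = 1
    return math.sqrt(2 * x + 6) - math.sqrt(x + 4)

# Candidates produced by squaring twice: the roots of x**2 - 2*x - 15 = 0.
candidates = [5, -3]

for x in candidates:
    verdict = "solution" if math.isclose(original(x), 1) else "extraneous"
    print(x, verdict)  # 5 is a solution; -3 is extraneous
```

At $x = -3$ the original left side evaluates to $-1$, not $1$: squaring erased exactly that sign difference.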

I will now list each equation in reverse order along with its solution set:

(11)

(10)

(9)

(8)

Since the solution set changes precisely in the transition from (8) to (9), we have isolated the precise moment when the extraneous solution is created, and it appears exactly where we would expect it: as we replaced (8) with the result of applying the non-one-to-one function $x \mapsto x^2$ to both sides.

More specifically, at the extraneous value, (8) reads as a false statement of the form $a = -a$ (with $a \neq 0$), but (9) reads as the true statement $a^2 = a^2$. For this particular value of $x$, we squared both sides and replaced a false statement with a true statement. In retrospect, we can say that this value is not a solution to (8) or to any previous equation in the solving sequence, but is a solution to (9) and thus to all subsequent equations in the solving sequence.
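The flip is easy to see in isolation: whenever the two sides of an equation are negatives of each other, the pre-squaring statement is false but the post-squaring statement is true.

```python
a, b = -2, 2          # two sides that are negatives of each other
print(a == b)         # False: the unsquared statement
print(a**2 == b**2)   # True: squaring both sides makes it true
```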

(7)

Since both sides of (7) are positive at each of these values, it does not surprise us that,

(6)

(5)

By fully analyzing the logic behind each step of our equation replacement sequence, we not only:

- confirm that $3$ is a solution and that the other candidate is not, *and*
- understand that squaring both sides *may* produce an extraneous solution,

but also

- isolate the precise step in the solving sequence in which this extraneous solution was created, answering the *why*, *how*, and *when* for this problem, *and*
- confirm that the non-solution status of this candidate is not merely due to an error of algebra or arithmetic, but is a direct result of the fact that this value turns equation (8) into a false statement of the form $a = -a$.

That last point is crucial in distinguishing the phenomenon of extraneous roots from the phenomenon of user error in algebra or arithmetic. If our equation solving sequence consists solely of equivalence-preserving operations, we do not even need to check to see if solutions to our terminal equation are also solutions to our original equation. If we do decide to check, perhaps out of an abundance of caution, and find a discrepancy, then user error must be to blame.

On the other hand, if a solver does employ solution-set-enlarging operations in the solving sequence and finds that a solution to the terminal equation is not a solution to the original equation, is this because the solution is extraneous or due to user error? One could perform an analysis like I did above and confirm that the non-solution is not due to user error, but instead to the logic of the process.
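That check can be made routine. A sketch, using a made-up stand-in equation $\sqrt{x} = x - 2$ (squaring gives $x^2 - 5x + 4 = 0$, with roots $1$ and $4$): whenever a solution-set-enlarging step such as squaring was used, filter the terminal equation's candidates through the original equation.

```python
import math

def classify(candidates, lhs, rhs, tol=1e-9):
    """Split terminal-equation candidates into genuine and extraneous
    solutions by checking each one against the original equation."""
    genuine, extraneous = [], []
    for x in candidates:
        if math.isclose(lhs(x), rhs(x), abs_tol=tol):
            genuine.append(x)
        else:
            extraneous.append(x)
    return genuine, extraneous

# Stand-in original equation: sqrt(x) = x - 2.
# Squaring gives x = x**2 - 4*x + 4, whose roots are 1 and 4.
genuine, extraneous = classify([1, 4], lambda x: math.sqrt(x), lambda x: x - 2)
print(genuine)     # [4]
print(extraneous)  # [1]
```

A candidate landing in the `extraneous` bucket despite error-free algebra is the logic of the process at work, not a mistake by the solver.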