Problem 6 - IMO 1985

This is not a separate answer, but rather a (too long) comment in response to the proper answer by boywholived. Sure, there exists exactly one such value of $x_1$, but the comment by barak manos still stands: any chance you can tell us what that value of $x_1$ is? Here is a little computer program that does the job:
program IMO_1985;

procedure find;
const
  LARGE : integer = 50;
var
  n,i : integer;
  x,x_1,bit : double;
  H,L : boolean;
begin
  { Initialize }
  x_1 := 1/2; bit := 1/2;
  for i := 0 to LARGE do
  begin
    n := 1; H := true; L := true;
  { Writeln(n:2,' : ',x_1); }
    x := x_1;
    while true do
    begin
      x := x*(x+1/n);
    { Writeln((n+1):2,' : ',x); }
      { Decreasing towards zero: }
      L := (x < 1-1/n);
      { Exploding towards infinity: }
      H := (x > 1);
      if H or L then Break;
      n := n+1;
    end;
  { Readln; }
    { Find x_1 bit by bit: }
    bit := bit/2;
    if H then x_1 := x_1 - bit;
    if L then x_1 := x_1 + bit;
  end;
  Writeln(x_1:0:16);
end;

begin
  find;
end.
The program uses a few easy-to-prove statements about the sequence: if some $\,x_n\,$ exceeds $\,1$ , the sequence explodes towards infinity; if some $\,x_n\,$ drops below $\,1-1/n$ , it decreases towards zero. It turns out that the convergence of the sequence towards the desired limit $\;x_n \to 1\;$ is extremely unstable numerically. So what we actually have is a switch between $\,0\,$ and $\,\infty\,$ that generates the outcome for $x_1$ bit by bit, until double precision is reached.
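For illustration (my own reading of the bisection, not part of the program's output, taking the value found below for granted): the first trial value $1/2$ makes the sequence explode, so the next guess is $1/2-1/4=0.25$; that one collapses towards zero, so it becomes $0.25+1/8=0.375$; the subsequent guesses $0.4375$, $0.46875$, $0.453125$, $0.4453125$, $\ldots$ then straddle the limiting value $0.4465349\ldots$ , each step pinning down one more bit of $x_1$.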
Now I almost forgot to tell you what the outcome is: $$ x_1 \approx 0.4465349172642295 $$

Reverse method. If a method is (numerically) unstable, then try the reverse: it is not uncommon for the reversed iteration to be stable. So let's solve $x_n \ge 0$ from $x_{n+1} = x_n(x_n+1/n)$: $$ x_n^2 + \frac{1}{n} x_n - x_{n+1} = 0 \quad \Longrightarrow \quad x_n = -\frac{1}{2n}+\sqrt{\left(\frac{1}{2n}\right)^2+x_{n+1}} = \frac{x_{n+1}}{\frac{1}{2n}+\sqrt{\left(\frac{1}{2n}\right)^2+x_{n+1}}} $$ The latter rewrite is for reasons of numerical stability as well: it avoids the subtractive cancellation that the first form would suffer if $x_{n+1}$ were small compared with $\left(\frac{1}{2n}\right)^2$. Now start with some LARGE value of $n$, put $x_{n+1}=1-1/(n+1)$ and iterate backwards until $x_1$ is reached. The resulting (Pascal) program is almost a one-liner (i.e. without the Error Analysis):
program IMO_1985;

procedure find;
const
  LARGE : integer = 50;
var
  n : integer;
  x,E : Extended;
begin
  { Initialize }
  x := 1-1/(LARGE+1); E := 1/(LARGE+1);
  for n := LARGE downto 1 do
  begin
    x := x/(sqrt(1/sqr(2*n)+x)+1/(2*n));
    { Error Analysis: }
    E := E/(2*x+1/n);
  end;
  Writeln('Outcome x_1 = ',x:0:16);
  Writeln(' # decimals = ',-Trunc(ln(E)/ln(10)));
  Writeln('# good bits = ',-Trunc(ln(E)/ln(2)));
end;

begin
  find;
end.
Needless to say, the outcome is the same as with the previous method.
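As a quick sanity check, here is a little sketch of my own (not part of either program above): feed the computed value of $x_1$ back into the forward recurrence and see how long the iterates stay inside the strip $\,1-1/n < x_n < 1\,$ before rounding errors flip the switch towards $0$ or $\infty$; with $53$-bit doubles one would expect this to happen somewhere in the vicinity of $n = 50$:
program IMO_1985_check;
var
  n : integer;
  x : double;
begin
  { Start from the value of x_1 computed above }
  x := 0.4465349172642295;
  n := 1;
  { Iterate forward as long as 1-1/n < x_n < 1 holds }
  while (x > 1-1/n) and (x < 1) do
  begin
    x := x*(x+1/n);
    n := n+1;
  end;
  Writeln('Left the strip at n = ',n);
end.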

Error Analysis. Estimate with infinitesimals: $$ x_{n+1} = x_n(x_n+1/n) \quad \Longrightarrow \quad dx_{n+1} = (2x_n+1/n)\,dx_n $$ Initialize with $\;dx_{n+1} = 1/(n+1)$: the starting guess $\,1-1/(n+1)\,$ differs from the true $\,x_{n+1}\,$ by at most $\,1/(n+1)$ , since $\,1-1/(n+1) < x_{n+1} < 1$ . It follows that: $$ dx_1 = \frac{dx_{n+1}}{(2x_1+1)(2x_2+1/2)(2x_3+1/3)\cdots(2x_n+1/n)} $$ The Reverse method program has been slightly modified to take this into account; the variable E corresponds to the errors $dx_n$ . Minus the 10-logarithm of the error $dx_1$ in $x_1$ is the number of significant decimals, $16$ in our case, which indeed seems to be alright. Minus the 2-logarithm of the error $dx_1$ in $x_1$ is the number of significant bits: $53$ . This number, not at all by coincidence (why?), is close to the number LARGE $=50$ in both programs.
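One way to see whether this estimate is realistic (again a sketch of my own, not part of the original answer): run the reverse iteration for two different depths and compare the outcomes; their difference should stay below the estimated error $dx_1$:
program IMO_1985_compare;
{ The reverse iteration as a function of the starting depth }
function backward(LARGE : integer) : Extended;
var
  n : integer;
  x : Extended;
begin
  x := 1-1/(LARGE+1);
  for n := LARGE downto 1 do
    { note: sqr(0.5/n) = 1/sqr(2*n), kept in floating point }
    x := x/(sqrt(sqr(0.5/n)+x)+0.5/n);
  backward := x;
end;
begin
  Writeln('LARGE =  50 : ',backward(50):0:16);
  Writeln('LARGE = 100 : ',backward(100):0:16);
  Writeln('difference  : ',abs(backward(100)-backward(50)));
end.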

Update. Actually, our value $\,x_1\,$ as well as the error $\,dx_1\,$ are functions of $\,n$ , so let's replace $\,x_1\,$ by $\,X(n)\,$ for all finite values of $\,n\,$ and likewise $\,dx_1\,$ by $\,\epsilon(n)$ : $$ \epsilon(n) = \frac{1/(n+1)}{(2x_1+1)(2x_2+1/2)(2x_3+1/3)\cdots(2x_n+1/n)} \quad \Longrightarrow \\ \epsilon(n) = \epsilon(n-1)\frac{n/(n+1)}{2x_n+1/n} $$ With $\,1-1/n < x_n < 1\,$ we have $\,2-1/n < 2x_n+1/n < 2+1/n\,$ and, for $\,n \ge 2$ : $$ \epsilon(n) < \epsilon(n-1)\frac{1}{(1+1/n)(2-1/n)} = \frac{\epsilon(n-1)}{2+1/n-1/n^2} < \frac{\epsilon(n-1)}{2} \quad \Longrightarrow \quad \epsilon(n) < \epsilon(1)\left(\frac{1}{2}\right)^{n-1} $$ For $\,n=1\,$ we have $\,1 < 2x_1+1 < 3\,$ hence $\,\epsilon(1) < 1/2$ . This proves a
Lemma. (see above: so that's why!) $$ \epsilon(n) < \left(\frac{1}{2}\right)^n $$ Note. I'm not quite sure of this, because the initial error $\,dx_{n+1}=1/(n+1)\,$ , which equals $\,1/2\,$ for $\,n=1$ , is not an infinitesimal (i.e. sufficiently small) error; perhaps $\,\epsilon(n)\,$ must be multiplied by a proper constant?
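A quick numerical reading of the Lemma (my own remark): for $\,n = \text{LARGE} = 50\,$ it gives $\,\epsilon(50) < 2^{-50} \approx 8.9\cdot 10^{-16}$ , i.e. at least $50$ good bits or $15$ decimals, which is consistent with the $53$ bits and $16$ decimals reported by the Reverse method program.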
Theorem. As expected: $$ \lim_{n\to\infty} X(n) = x_1 $$ Proof. For all real $\epsilon > 0$ there exists an integer $N > 0$ such that if $n > N$ then $\left|X(n) - x_1\right| < \epsilon$ .
Indeed, for a given $\epsilon$ choose the integer $\,N = \max\left(1,\,\lceil \ln(1/\epsilon)/\ln 2\rceil\right)$ , so that $\,(1/2)^N \le \epsilon$ . Then for $n > N$ we have with the above Lemma: $\,\left|X(n)-x_1\right| < (1/2)^n < (1/2)^N \le \epsilon$ . This completes the proof.
Disclaimer: apart from a few technicalities perhaps; see the above Note.