The method used to evaluate *L*(2, 20) was quite unlike
the classic search algorithm, which, after reading about the efforts of
Frederick Groth's team and John Miller, I felt had been developed just
about as far as it could be. The difficulty faced by any direct search is
not simply the rapid branching of the search tree, as it is found that
the effort required per sequence increases relatively slowly with
*n*. Rather, the problem lies in the very large number of
solutions that inevitably *will* be found by that method. A
simple-minded statistical argument had suggested that, for the soluble
cases, *L*(2, *n*) should increase roughly as
(4*n* / *e*^{3})^{n}, a
result supported by the values calculated to date. Using Groth and
Miller's value for *L*(2, 19) to estimate the prefactor, the
exponential formula predicted *L*(2, 20) =
2.7 × 10^{12}, a number so large that it would put the
*n*=20 calculation far out of reach, at least for me, running any
existing program on a small number of PCs.

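The extrapolation is simple enough to reproduce. The sketch below (the function name `estimate` is mine) fits the prefactor to the published value of *L*(2, 19), 256 814 891 280, and then evaluates the exponential formula at *n* = 20:

```python
import math

L19 = 256_814_891_280  # published value of L(2, 19)

def estimate(n, n_ref, L_ref):
    """Extrapolate L(2, n) from a known value L_ref = L(2, n_ref),
    assuming the statistical form L(2, n) ~ A * (4n / e**3)**n."""
    A = L_ref / (4 * n_ref / math.e ** 3) ** n_ref
    return A * (4 * n / math.e ** 3) ** n

print(f"{estimate(20, 19, L19):.2e}")  # roughly 2.7e12
```
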
Because an exact calculation seemed difficult, I began to look for an
analytical method to estimate the pre-exponential factor in my crude
formula, so that at least the behaviour of *L*(2, *n*)
would be known for large *n*. That was the motivation for
investigating algebraic expressions from which
*L*(2, *n*) might be extracted, the hope being that
for large *n* the extraction could be done relatively easily, by
asymptotic evaluation of an integral, for example. So far I have
*not* got very far with this, but one candidate for a suitable
algebraic expression (illustrated for the case *n*=3) was

*F*({*x _{k}*}, 3)
= (*x*_{1}*x*_{3} + *x*_{2}*x*_{4} + *x*_{3}*x*_{5} + *x*_{4}*x*_{6})
(*x*_{1}*x*_{4} + *x*_{2}*x*_{5} + *x*_{3}*x*_{6})
(*x*_{1}*x*_{5} + *x*_{2}*x*_{6}),

where *x _{k}* is a variable associated with location *k* of the
sequence, and the factor corresponding to the number *i* contains one
term *x _{k}* *x*_{k+i+1} for each of the possible placements of the
pair of *i*s, at locations *k* and *k*+*i*+1.

Now, multiplying out the *n* factors of
*F*({*x _{k}*}, *n*) gives a sum of products of the
variables, one product for each way of choosing a placement for every
one of the *n* pairs. The products in which each of the variables
*x*_{1}, ..., *x*_{2n} appears exactly once correspond
precisely to the Langford sequences.

Suppose that each of the variables *x*_{1} to
*x*_{2n} takes the values +1 and -1. You can
easily convince yourself that if you add up all the resulting quantities
*x*_{1}*x*_{2} ...
*x*_{2n}
*F*({*x _{k}*}, *n*) over all 4^{n} assignments of the
variables, every product in which some variable occurs an odd number of
times cancels. The only survivors are the terms of *F* that use each
location exactly once, and each of these contributes 4^{n} to the
total; the sum is therefore 4^{n} times twice
*L*(2, *n*), each Langford sequence and its reversal
being counted separately.

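The identity can be checked directly for small *n*. The following sketch (all names are mine, and it is nothing like the optimised program described later) evaluates the sum literally:

```python
from itertools import product
from math import prod

def langford_count(n):
    """Evaluate L(2, n) by brute-force summation of
    x_1 * x_2 * ... * x_2n * F({x_k}, n) over all 4**n
    assignments of +/-1 to the variables."""
    total = 0
    for xs in product((1, -1), repeat=2 * n):
        F = 1
        for i in range(1, n + 1):
            # one term for each placement of the pair of i's,
            # at (0-based) locations k and k + i + 1
            F *= sum(xs[k] * xs[k + i + 1] for k in range(2 * n - i - 1))
        total += prod(xs) * F
    # the sum is 4**n times twice L(2, n)
    assert total % (2 * 4 ** n) == 0
    return total // (2 * 4 ** n)

print(langford_count(3), langford_count(4), langford_count(7))
```
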
Now, a method whose complexity in time increases roughly as
4^{n} may not sound good, but it turns out to be a great
improvement over the simple search when *n* is large. Remember
that *L*(2, *n*) varies roughly as
(4*n* / *e*^{3})^{n}, so
that searching for all the Langford sequences takes at least this long;
longer, in fact, as the time taken per sequence increases
with *n*. The new method could be expected to be faster than
the classic search by a factor of at least
*A* (*n* / *e*^{3})^{n},
where *A* is a relatively slowly varying function
of *n*.

The main doubt was whether the unknown *A* would be
large enough for the new method to overtake the classic search at a
useful value of *n*, but results from a test program were
encouraging. After correcting for the difference in clock speeds between
the PC used to develop the program and the best of those used by Groth's
team, an extrapolation of the time taken on a calculation of
*L*(2, 8) suggested that the new method would begin to show
its advantage over the classic search somewhere near *n*=19.
Improvements to the code (and to the method of extrapolating the
timings!) quickly brought the crossover point to around *n*=16, so
that it appeared that *L*(2, 20) could be completed in a few
weeks on a single 650 MHz PC.

By now I was impatient to make a start on
*L*(2, 20). Although I had a number of ideas for improving
the speed of my program, these would require me to re-think a large
portion of the code, so it was unlikely that they could be tried out for
several weeks to come. If the aim was simply to get a result, it
would be quicker to break the problem into small parts that could be
calculated independently on a number of PCs.

I made minimal changes to my program to get the long-integer arithmetic working properly (the test program had used 8-byte floating-point arithmetic) and divided the summations into 1024 equal parts, the result from each of which would be appended to a file when complete. The program was set running on two PCs (a 650 MHz AMD Duron and an 800 MHz Pentium III) on Monday 11 February 2002.

Next day I was joined by Ron van Bruchem of the Netherlands, who had kindly offered time on a 1.4 GHz AMD Thunderbird. He took on about half the calculation and gave a lot of his own time to get the program running on his machine. All of the partial results were complete within one week of starting. When these were combined, the final answer was found to be

*L*(2, 20) = 2 636 337 861 200,

which was reassuringly close to the value 2.7 × 10^{12},
estimated by using the exponential formula to extrapolate from the known
result for *L*(2, 19). It was also something of a
relief to find that there was no remainder on division
by 4^{20}, as the program had not been tested on the
case *n*=19.

Since then, the program has been improved (see below) and modified to
calculate the variant Langford quantities
*V*(2, *n*), giving the results

computed 4-7 March 2002 (taking 65.5 hours on the 800 MHz Pentium III), and

computed 7-12 March 2002. As in the earlier calculation of
*L*(2, 20), the computation was broken into 1024 parts: 825
were calculated on Ron's 1.4 GHz AMD (total time 94 hours) and 199 on
the 800 MHz Pentium III (43 hours).

A number of enhancements to the speed of the program were possible.
For example, the symmetry
*F*({*x _{k}*}, *n*) =
*F*({*x*_{2n+1-k}}, *n*),
which expresses the left-right reflection of a Langford sequence, pairs
each assignment of the variables with its mirror image and can be used
to nearly halve the work. A second symmetry is that the summand is
unchanged when every variable changes sign: each term of each factor
of *F* contains two of the variables, and the prefactor
*x*_{1}*x*_{2} ...
*x*_{2n} contains an even number, 2*n*, of them, so
that *x*_{1} can be fixed at +1 and the result doubled.

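Both of these elementary symmetries of *F*, the left-right reflection and a global change of sign of all the variables, can be confirmed for small *n* with a short sketch (the function name `F` is mine):

```python
import random

def F(xs, n):
    """F({x_k}, n): the product of n factors, one per number i."""
    result = 1
    for i in range(1, n + 1):
        result *= sum(xs[k] * xs[k + i + 1] for k in range(2 * n - i - 1))
    return result

random.seed(0)
for n in (3, 4, 5, 6):
    for _ in range(50):
        xs = [random.choice((1, -1)) for _ in range(2 * n)]
        assert F(xs, n) == F(xs[::-1], n)           # reflection
        assert F(xs, n) == F([-x for x in xs], n)   # global sign change
```
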
A third symmetry is closely related to the usual argument for
insolubility for the cases *n* = 4*m*+1
and 4*m*+2. If all the variables
*x*_{2k} on the even-numbered sites change sign,
the product *x*_{1}*x*_{2} ...
*x*_{2n} changes sign when *n* is odd,
while *F* changes sign only when *n* is 2 or 3 more than a
multiple of 4. The combination
*x*_{1}*x*_{2} ...
*x*_{2n}
*F*({*x _{k}*}, *n*) therefore changes sign under this
transformation when *n* is 1 or 2 more than a multiple of 4; since the
transformation merely rearranges the terms of the sum, the sum must then
equal its own negative and so must vanish: these are the insoluble
cases. When *n* is a multiple of 4, or one less, the combination is
unchanged, and the symmetry can instead be used to halve the work once
more.

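The sign rules can be verified numerically. In the sketch below (function names are mine), the even-site flip multiplies *F* by -1 exactly when *n* is 2 or 3 more than a multiple of 4, and multiplies the full summand by -1 exactly when *n* is 1 or 2 more:

```python
import random
from math import prod

def F(xs, n):
    """F({x_k}, n): the product of n factors, one per number i."""
    result = 1
    for i in range(1, n + 1):
        result *= sum(xs[k] * xs[k + i + 1] for k in range(2 * n - i - 1))
    return result

def flip_even_sites(xs):
    # negate x_2, x_4, ... (even 1-based sites = odd 0-based indices)
    return [-x if k % 2 else x for k, x in enumerate(xs)]

random.seed(0)
for n in (3, 4, 5, 6, 7):
    f_sign = -1 if n % 4 in (2, 3) else 1   # sign picked up by F
    s_sign = -1 if n % 4 in (1, 2) else 1   # sign of the full summand
    for _ in range(50):
        xs = [random.choice((1, -1)) for _ in range(2 * n)]
        ys = flip_even_sites(xs)
        assert F(ys, n) == f_sign * F(xs, n)
        assert prod(ys) * F(ys, n) == s_sign * prod(xs) * F(xs, n)
```
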
It is found that one of the slowest parts of the calculation is
the evaluation of the *n* factors contained in
*F*({*x _{k}*}, *n*). If, however, the assignments of
the variables are visited in Gray-code order, so that successive
assignments differ in the value of just one variable
*x _{j}*, then each factor can be updated by adjusting only
the one or two of its terms that contain *x _{j}*, instead of
being recomputed from scratch.

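As a toy illustration of the idea (the names and details are mine, not the author's program), the summation can be organised around a binary-reflected Gray code so that each factor is patched rather than rebuilt:

```python
from math import prod

def langford_count_gray(n):
    """Sum x_1*...*x_2n * F({x_k}, n) over all assignments, visiting
    them in Gray-code order: each step flips one variable x_j, and
    each factor of F is patched by updating only the (at most two)
    terms that contain x_j."""
    m = 2 * n
    xs = [1] * m
    # factors[i - 1] = sum over k of xs[k] * xs[k + i + 1]
    factors = [sum(xs[k] * xs[k + i + 1] for k in range(m - i - 1))
               for i in range(1, n + 1)]
    sign = 1                                  # prefactor x_1 * ... * x_2n
    total = sign * prod(factors)
    for step in range(1, 1 << m):
        j = (step & -step).bit_length() - 1   # bit flipped by the Gray code
        for i in range(1, n + 1):
            delta = 0
            if j + i + 1 < m:
                delta += xs[j] * xs[j + i + 1]
            if j - i - 1 >= 0:
                delta += xs[j - i - 1] * xs[j]
            factors[i - 1] -= 2 * delta       # those terms change sign
        xs[j] = -xs[j]
        sign = -sign
        total += sign * prod(factors)
    assert total % (2 * 4 ** n) == 0
    return total // (2 * 4 ** n)

print(langford_count_gray(3), langford_count_gray(7))
```

Each step now costs O(*n*) work instead of a full re-evaluation of the *n* factors, which is where much of the speed-up comes from.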
The two additional symmetries and the Gray-code idea were exploited
in the calculation of the variant quantities *V*(2, 20) and
*V*(2, 21). Together, these improvements reduced the time
needed for the calculation by a factor of nearly 8 for *n*=20.

Mike Godfrey (the author)
is a condensed matter physicist currently working at the University of
Manchester Institute of Science and Technology in the United Kingdom. He
became interested in Langford's problem many years ago after reading
about it in D. St P. Barnard's puzzle column in the *Daily
Telegraph*. His silliest Langford project has been to program the
classic search to run on a Texas Instruments TI58 programmable
calculator.

Ron van Bruchem is a speed-solving puzzle fanatic, and is the host of www.speedcubing.com.