The method used to evaluate L(2, 20) was quite unlike the classic search algorithm, which I felt, after reading about the efforts of Frederick Groth's team and John Miller, had been developed just about as far as it could be. The difficulty faced by any direct search is not simply the rapid branching of the search tree: the effort required per sequence is found to increase relatively slowly with n. Rather, the problem lies in the very large number of solutions that such a search inevitably finds. A simple-minded statistical argument had suggested that, for the soluble cases, L(2, n) should increase roughly as $(4n/e^3)^n$, a result supported by the values calculated to date. Using Groth and Miller's value for L(2, 19) to estimate the prefactor, the exponential formula predicted $L(2, 20) \approx 2.7 \times 10^{12}$, a number so large that it would put the n=20 calculation far out of reach, at least for me, running any existing program on a small number of PCs.
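The extrapolation is easy to reproduce. Here is a quick check in Python (an illustrative sketch only; the function name is invented for this example, and the value used for L(2, 19) is the published one):

    import math

    def model(n, A=1.0):
        """The crude statistical estimate L(2, n) ~ A * (4n/e^3)^n."""
        return A * (4 * n / math.e ** 3) ** n

    L19 = 256_814_891_280            # the known value of L(2, 19)
    A = L19 / model(19)              # fit the prefactor to n = 19
    print(f"A = {A:.2f}, predicted L(2, 20) = {model(20, A):.2e}")
    # prints roughly 2.7e+12, the estimate quoted above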
Because an exact calculation seemed difficult, I began to look for an analytical method to estimate the pre-exponential factor in my crude formula, so that at least the behaviour of L(2, n) would be known for large n. That was the motivation for investigating algebraic expressions from which L(2, n) might be extracted, the hope being that for large n the extraction could be done relatively easily, by asymptotic evaluation of an integral, for example. I have not yet got very far with this, but one candidate for a suitable algebraic expression (illustrated for the case n=3) was
$$F(\{x_k\}, 3) = (x_1 x_3 + x_2 x_4 + x_3 x_5 + x_4 x_6)\,(x_1 x_4 + x_2 x_5 + x_3 x_6)\,(x_1 x_5 + x_2 x_6),$$

where $x_k$ is a variable associated with location k (= 1 to 6). Each of the bracketed factors corresponds to the different ways in which a given number-pair can be placed in the pattern. For example, the two 1s can be placed in locations (1,3), (2,4), (3,5) and (4,6), and these four possibilities are represented by the first factor in $F(\{x_k\}, 3)$. If F is expanded until a sum of monomials is obtained, a term such as $(x_2 x_4)(x_3 x_6)(x_1 x_5)$ corresponds to the Langford sequence 312132. Continuing in this way, each Langford sequence will contribute 1 to the coefficient of $x_1 x_2 x_3 x_4 x_5 x_6$, so that the full coefficient of that term will be 2L(2, 3), if mirror-image patterns are regarded as identical. This idea can be generalized to any n, and to any of the related Langford quantities such as V(2, n).
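The bookkeeping is easy to check by machine. The following Python sketch (illustrative code with invented names, not the program described in this article) expands F factor by factor, discarding any monomial in which a position is repeated, since such a term can never contribute to the coefficient of $x_1 x_2 \cdots x_{2n}$; positions are 0-based in the code:

    def langford_by_expansion(n):
        """Extract L(2, n) from the coefficient of x_1 x_2 ... x_2n in F.
        A partial monomial is stored as a bitmask of the positions used;
        any product that would use a position twice is dropped."""
        full = (1 << (2 * n)) - 1
        coeff = {0: 1}                        # the empty product
        for k in range(1, n + 1):             # one bracketed factor per pair k
            terms = [(1 << i) | (1 << (i + k + 1)) for i in range(2 * n - k - 1)]
            new = {}
            for mask, c in coeff.items():
                for t in terms:
                    if mask & t == 0:         # neither position used yet
                        new[mask | t] = new.get(mask | t, 0) + c
            coeff = new
        return coeff.get(full, 0) // 2        # the coefficient is 2 L(2, n)

    print([langford_by_expansion(n) for n in (3, 4, 5, 7)])   # [1, 1, 0, 26]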
Now, multiplying out the n factors of $F(\{x_k\}, n)$ simply to find the term in $x_1 x_2 \cdots x_{2n}$ is probably not a good way to make progress when n is large, because there are $(2n-2)!/(n-2)!$ terms in total. You can, however, use F to extract L(2, n) in another way.
Suppose that each of the variables $x_1$ to $x_{2n}$ takes the values +1 and -1. You can easily convince yourself that if you add up all the resulting quantities $x_1 x_2 \cdots x_{2n}\, F(\{x_k\}, n)$, the result is $2^{2n+1} L(2, n)$. This is because each of the terms in the expansion of F other than $x_1 x_2 \cdots x_{2n}$ lacks at least one of the variables, $x_p$ say; multiplied by the $x_p$ from the prefactor $x_1 x_2 \cdots x_{2n}$, such a term is odd in $x_p$ and gives zero after summing over $x_p$ = +1 and -1. The advantage of doing the calculation this way is that you find L(2, n) after $2^{2n}$ evaluations of F. Each of these function evaluations takes $O(n^2)$ multiplications and additions, so the time taken is of order $4^n$ times a power of n.
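Written out directly (again an illustrative Python sketch with invented names, making no use of the refinements described later), the method is:

    def langford_by_sum(n):
        """Sum x_1 x_2 ... x_2n * F({x_k}, n) over all 2^(2n) sign
        patterns; the result is 2^(2n+1) L(2, n)."""
        N = 2 * n
        total = 0
        for bits in range(1 << N):
            x = [1 if (bits >> i) & 1 else -1 for i in range(N)]
            F = 1
            for k in range(1, n + 1):         # the k-th bracketed factor
                F *= sum(x[i] * x[i + k + 1] for i in range(N - k - 1))
            prod = 1
            for v in x:
                prod *= v
            total += prod * F
        assert total % (1 << (N + 1)) == 0    # must divide exactly
        return total >> (N + 1)

    print([langford_by_sum(n) for n in (3, 4, 5, 7)])   # [1, 1, 0, 26]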
Now, a method whose complexity in time increases roughly as $4^n$ may not sound good, but it turns out to be a great improvement over the simple search when n is large. Remember that L(2, n) varies roughly as $(4n/e^3)^n$, so that searching for all the Langford sequences takes at least this long; longer, in fact, as the time taken per sequence increases with n. The new method could be expected to be faster than the classic search by a factor of at least $A\,(n/e^3)^n$, where A is a relatively slowly varying function of n.
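Spelling out the comparison (with $C$ and $c\,n^a$ standing for the unknown prefactors of the two time estimates; these symbols are introduced here only for illustration):

$$\frac{T_{\text{search}}}{T_{\text{sum}}} \;\gtrsim\; \frac{C\,(4n/e^3)^n}{c\,n^a\,4^n} \;=\; A \left(\frac{n}{e^3}\right)^{n}, \qquad A = \frac{C}{c\,n^a}.$$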
The main doubt was whether the unknown A would be large enough for the new method to overtake the classic search at a useful value of n, but results from a test program were encouraging. After correcting for the difference in clock speeds between the PC used to develop the program and the best of those used by Groth's team, an extrapolation of the time taken on a calculation of L(2, 8) suggested that the new method would begin to show its advantage over the classic search somewhere near n=19. Improvements to the code (and to the method of extrapolating the timings!) quickly brought the crossover point to around n=16, so that it appeared that L(2, 20) could be completed in a few weeks on a single 650 MHz PC.
By now I was impatient to make a start on L(2, 20). Although I had a number of ideas for improving the speed of my program, these would require me to re-think a large portion of the code, so it was unlikely that they could be tried out for several weeks to come. If the aim was simply to get a result, it would be quicker to break the problem into small parts that could be calculated independently on a number of PCs.
I made minimal changes to my program to get the long-integer arithmetic working properly (the test program had used 8-byte floating-point arithmetic) and divided the summations into 1024 equal parts, the result from each of which would be appended to a file when complete. The program was set running on two PCs (a 650 MHz AMD Duron and an 800 MHz Pentium III) on Monday 11 February 2002.
Next day I was joined by Ron van Bruchem of the Netherlands, who had kindly offered time on a 1.4 GHz AMD Thunderbird. He took on about half the calculation and gave a lot of his own time to get the program running on his machine. All of the partial results were complete within one week of starting. When these were combined, the final answer was found to be
L(2, 20) = 2,636,337,861,200,

which was reassuringly close to the value $2.7 \times 10^{12}$, estimated by using the exponential formula to extrapolate from the known result for L(2, 19). It was also something of a relief to find that there was no remainder on division by $4^{20}$, as the program had not been tested on the case n=19.
Since then, the program has been improved (see below) and modified to calculate the variant Langford quantities V(2, n), giving the results
computed 4-7 March 2002 (taking 65.5 hours on the 800 MHz Pentium III), and
computed 7-12 March 2002. As in the earlier calculation of L(2, 20), the computation was broken into 1024 parts: 825 were calculated on Ron's 1.4 GHz AMD (total time 94 hours) and 199 on the 800 MHz Pentium III (43 hours).
A number of enhancements to the speed of the program were possible. For example, the symmetry $F(\{x_k\}, n) = F(\{-x_k\}, n)$ can be used to reduce the number of terms in the sum by a factor of two, and a further factor of two can be gained by using the symmetry of F under interchange of every variable $x_k$ with $x_{2n+1-k}$. Only the first of these symmetries was used in the program that calculated L(2, 20). (Both are verified numerically in the sketch after the next paragraph.)
A third symmetry is closely related to the usual argument for insolubility in the cases n = 4m+1 and 4m+2. If all the variables $x_{2k}$ on the even-numbered sites change sign, the product $x_1 x_2 \cdots x_{2n}$ changes sign when n is odd, while F changes sign only when n is 2 or 3 more than a multiple of 4 (each factor whose number-pair k is even couples sites of opposite parity, so it changes sign, and there are $\lfloor n/2 \rfloor$ such factors). The combination $x_1 x_2 \cdots x_{2n}\, F(\{x_k\}, n)$ therefore changes sign if n = 4m+1 or 4m+2, so that, in those cases, terms in the sum for L(2, n) occur in cancelling pairs. In the remaining cases, n = 4m and 4m+3, the terms in the sum are invariant under the change of sign of the $x_{2k}$, and this symmetry can be used to halve the number of terms considered.
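All three symmetries are easy to confirm numerically. The following Python sketch (illustrative only) spot-checks them on random sign patterns:

    import random

    def F(x, n):
        """F({x_k}, n) for a list x of 2n values, each +1 or -1."""
        N = 2 * n
        prod = 1
        for k in range(1, n + 1):
            prod *= sum(x[i] * x[i + k + 1] for i in range(N - k - 1))
        return prod

    def term(x, n):
        """The summand x_1 x_2 ... x_2n * F({x_k}, n)."""
        p = 1
        for v in x:
            p *= v
        return p * F(x, n)

    for n in (5, 6, 7, 8):
        x = [random.choice((-1, 1)) for _ in range(2 * n)]
        assert F([-v for v in x], n) == F(x, n)      # global sign flip
        assert F(x[::-1], n) == F(x, n)              # x_k <-> x_(2n+1-k)
        # flip the even-numbered sites (x_2, x_4, ... in 1-based numbering)
        y = [-v if i % 2 else v for i, v in enumerate(x)]
        if n % 4 in (1, 2):
            assert term(y, n) == -term(x, n)         # cancelling pairs
        else:
            assert term(y, n) == term(x, n)          # usable symmetry
    print("all symmetry checks passed")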
It is found that one of the slowest parts of the calculation is the evaluation of the n factors contained in $F(\{x_k\}, n)$. It was stated above that this could be achieved in $O(n^2)$ arithmetic operations, but this, too, can be improved, as each of the factors contains a given $x_k$ in no more than two of its terms. If it can be arranged for only one of the variables $x_k$ to change sign at each step, the corrections to the factors can be made in O(n) operations, and this is a substantial improvement, even for quite small n. This is achieved in practice by associating the two allowed values of $x_k$ with the values 0 and 1 of the bits of a Gray code: the Gray-code sequence runs through every possible bit pattern exactly once, but consecutive patterns differ in only one bit, and that bit determines which $x_k$ is to be changed.
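Here is how the Gray-code update might look (a minimal Python sketch of the idea, with invented names; it is not the author's program, and it applies none of the symmetry savings described above):

    def langford_gray(n):
        """L(2, n) from the +/-1 sum, visiting the 2^(2n) sign patterns
        in Gray-code order so that exactly one x_p changes per step.
        Each step then costs O(n): at most two corrected terms in each
        factor containing x_p, plus an O(n) product over the factors."""
        N = 2 * n
        x = [-1] * N                        # pattern for Gray code 00...0
        # f[k] = sum_i x[i] * x[i+k+1], the k-th bracketed factor of F
        f = [0] * (n + 1)
        for k in range(1, n + 1):
            f[k] = sum(x[i] * x[i + k + 1] for i in range(N - k - 1))
        sign = 1                            # x_1 x_2 ... x_2n; +1 here, as 2n is even
        total = 0
        for t in range(1 << N):
            if t:
                # the bit that changes between Gray codes t-1 and t
                p = (t & -t).bit_length() - 1
                # correct the terms of each factor that contain x_p, using
                # the old value of x[p]: each such term changes sign
                for k in range(1, n + 1):
                    if p - k - 1 >= 0:
                        f[k] -= 2 * x[p - k - 1] * x[p]
                    if p + k + 1 < N:
                        f[k] -= 2 * x[p] * x[p + k + 1]
                x[p] = -x[p]
                sign = -sign                # one variable changed sign
            F = 1
            for k in range(1, n + 1):
                F *= f[k]
            total += sign * F
        return total >> (N + 1)             # total = 2^(2n+1) L(2, n)

    print([langford_gray(n) for n in (3, 4, 5, 7)])   # [1, 1, 0, 26]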
The two additional symmetries and the Gray-code idea were exploited in the calculation of the variant quantities V(2, 20) and V(2, 21). Together, these improvements reduced the time needed for the calculation by a factor of nearly 8 for n=20.
Mike Godfrey (the author) is a condensed matter physicist currently working at the University of Manchester Institute of Science and Technology in the United Kingdom. He became interested in Langford's problem many years ago after reading about it in D. St P. Barnard's puzzle column in the Daily Telegraph. His silliest Langford project has been to program the classic search to run on a Texas Instruments TI58 programmable calculator.
Ron van Bruchem is a speed-solving puzzle fanatic, and is the host of www.speedcubing.com.