Tag Archive for "projecteuler"

Problem 4 ver. 4: optimization

December 18th, 2009 by Ivan Lakhturov | Category: Programming

Find the largest palindrome made from the product of two 3-digit numbers.

And the last scratch for now. It is possible to prove that 11 divides any palindromic number with an even number of digits. Indeed, write such a palindrome as $d_1 d_2 \ldots d_k d_k \ldots d_2 d_1$; mirrored positions carry opposite signs in the alternating digit sum, so

$\sum_{i=1}^{2k} (-1)^{i+1} d_i = 0,$

and zero is a multiple of 11, hence so is the palindrome itself (the divisibility-by-11 criterion).

The factor 11 can belong to a - in this case we step b by just 1. But if 11 doesn't divide a, then 11 must divide b, and we can step b by 11 each time.

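A minimal sketch of this stepping scheme, with ad-hoc names ((palindrome?) simply compares the decimal representation of a number with its reversal):

    (define (palindrome? n)
      (let ((s (number->string n)))
        (string=? s (list->string (reverse (string->list s))))))

    (define (largest-palindrome k)
      (let* ((top (- (expt 10 k) 1))       ; largest k-digit number
             (bottom (expt 10 (- k 1))))   ; smallest k-digit number
        (let loop-a ((a top) (best 0))
          (if (< a bottom)
              best
              (let* ((step (if (zero? (remainder a 11)) 1 11))
                     ;; if 11 doesn't divide a, b runs over multiples of 11
                     (b0 (if (= step 1) top (* 11 (quotient top 11)))))
                (let loop-b ((b b0) (best best))
                  (if (or (< b bottom) (<= (* a b) best))
                      (loop-a (- a 1) best)
                      (loop-b (- b step)
                              (if (palindrome? (* a b))
                                  (* a b)
                                  best)))))))))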

This speeds up the previous version around ten times, leaving the asymptotic behavior the same. The memory use is still O(1).

Let's look at results:
k = 2 => N = 9009
k = 3 => N = 906609
k = 4 => N = 99000099
k = 5 => N = 9966006699
k = 6 => N = 999000000999
k = 7 => N = 99956644665999
k = 8 => N = 9999000000009999
...
We could improve our algo drastically, if it were proven that the left half of the sought-for palindrome is less than or equal to $10^k - 10^{k/2}$ (and the palindrome is that half mirrored). I have the feeling that for even k it is exactly equal. But I don't know how to prove it. (I calculated for k = 10 and this does not hold: N = 99999834000043899999.)

Problem 4 ver. 3: optimization

December 17th, 2009 by Ivan Lakhturov | Category: Programming

Find the largest palindrome made from the product of two 3-digit numbers.

The author, however, advises a simpler approach. As we are looking for a palindrome a*b, let's iterate a and b in a top-down direction. After finding some palindrome, impose it as a boundary: iterating in the inner loop over b, we stop as soon as a*b cannot be larger than it anymore. If we find a new palindrome, it replaces the boundary. The stop condition is finishing the outer loop over a, i.e. when a drops to 2-digit numbers ((k-1)-digit, generally speaking).

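A minimal sketch of this bounded double loop, with ad-hoc names:

    (define (palindrome? n)
      (let ((s (number->string n)))
        (string=? s (list->string (reverse (string->list s))))))

    (define (largest-palindrome k)
      (let* ((top (- (expt 10 k) 1))       ; e.g. 999 for k = 3
             (bottom (expt 10 (- k 1))))   ; e.g. 100 for k = 3
        (let loop-a ((a top) (best 0))
          (if (< a bottom)
              best
              (let loop-b ((b top) (best best))
                ;; cut the inner loop as soon as a*b cannot beat the boundary
                (if (or (< b bottom) (<= (* a b) best))
                    (loop-a (- a 1) best)
                    (loop-b (- b 1)
                            (if (palindrome? (* a b))
                                (* a b)
                                best))))))))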

Complexity in memory now is just O(1). Performance complexity, by my impression, is better than in the previous variant. The outer loop has n - n/10 steps, so it cannot be less than O(n). Assuming that the desired palindrome lies close to $n^2$ (its left half close to $n$, actually), which should be proved, strictly speaking, we make no more than $O(n)$ operations until we find it, and no more than the same afterwards.

This is the worst case, however, and I hope that we find some worse-than-ideal palindrome quickly enough. Suppose we can use the estimate from the very beginning, i.e. the inequality $f \cdot g \le n$ holds, where f = n - a, g = n - b. Then we can calculate an estimate of the number of operations as the area under the curve y = n / x:

$\int_1^n \frac{n}{x}\, dx = n \ln n.$

So, the actual algo performance is between $O(n)$ and $O(n \ln n)$.

Problem 4 ver. 2: optimization

November 29th, 2009 by Ivan Lakhturov | Category: Programming

Find the largest palindrome made from the product of two 3-digit numbers.

Last time we had a straightforward algo with $O(n^2)$ complexity and at least O(n) memory use. Now let's enhance that. Instead of iterating over multipliers, it's reasonable to iterate over palindromes, starting from the largest, i.e. over the sequence 999999, 998899, 997799, and so on.

Remark. The largest product of two 3-digit numbers is 999 * 999 = 998001. So, in principle, we could start from the palindrome 997799. But this saves just 2 iterations.

Having a palindrome m, we factorize it and look at all the subsets of the factorization. Assume we have one such subset already, and let's name the product of its factors p. If this number p has k digits (k = 3 for now) and the number m/p also has k digits, then we have found a palindrome which is a product of two k-digit numbers.

In Scheme that will be written as:

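Along these lines (a sketch with ad-hoc names; the commented-out display is a trace of the found factorization):

    ;; Build a 2k-digit palindrome from its left half, e.g. 997 -> 997799
    (define (mirror half k)
      (let loop ((h half) (m half) (i 0))
        (if (= i k)
            m
            (loop (quotient h 10)
                  (+ (* m 10) (remainder h 10))
                  (+ i 1)))))

    ;; Prime factors of n with multiplicity, by trial division
    (define (factorize n)
      (let loop ((n n) (d 2) (acc '()))
        (cond ((= n 1) (reverse acc))
              ((zero? (remainder n d)) (loop (quotient n d) d (cons d acc)))
              (else (loop n (+ d 1) acc)))))

    (define (k-digits? p k)
      (and (>= p (expt 10 (- k 1))) (< p (expt 10 k))))

    ;; Try to split m into a product of two k-digit numbers by taking
    ;; products p over subsets of the factorization; returns p or #f
    (define (split-as-product m k)
      (let loop ((factors (factorize m)) (p 1))
        (cond ((and (k-digits? p k) (k-digits? (quotient m p) k)) p)
              ((null? factors) #f)
              (else (or (loop (cdr factors) (* p (car factors)))
                        (loop (cdr factors) p))))))

    (define (largest-palindrome k)
      (let loop ((half (- (expt 10 k) 1)))     ; left half, from 999 down
        (let* ((m (mirror half k))
               (p (split-as-product m k)))
          (if p
              (begin
                ;(display (list m '= p '* (quotient m p)))
                m)
              (loop (- half 1))))))

For k = 3 this returns 906609 = 913 * 993.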

Here I used a few new util functions, which make numbers out of their base-k representation.
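Something like:

    ;; Make a number out of its base-k digit list, most significant
    ;; digit first
    (define (number-from-digits digits base)
      (let loop ((ds digits) (acc 0))
        (if (null? ds)
            acc
            (loop (cdr ds) (+ (* acc base) (car ds))))))

    ;; (number-from-digits '(9 9 7 7 9 9) 10) => 997799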

Complexity now is hard to calculate. The worst case scenario gives quite a bad upper boundary. However, the worst case will never be realized.

Looking at what it gives out (9009, 906609, 99000099, 9966006699, 999000000999, ...), I could guess that the required palindrome is found after roughly $\sqrt{n}$ iterations. So, in total I hope for less than $O(n\sqrt{n})$ complexity, as each iteration is dominated by the trial-division factorization of a number of order $n^2$, which costs up to O(n).

The memory use depends on the factorizations - we store one while a palindrome is being examined and release it when we proceed to the next palindrome.

Problem 3: a note

May 5th, 2009 by Ivan Lakhturov | Category: Programming

Find the largest prime factor of a composite number.

The problem of integer factorization is one of the most important in number theory. Last time I implemented a classical trial-division algorithm with $O(\sqrt{n})$ complexity. The author of the Project Euler website suggests a small improvement --- iterating over odd numbers only --- but I consider this too tiny a thing to bother with.
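For reference, a minimal sketch of that classical approach:

    ;; Largest prime factor by trial division: divide each found factor
    ;; out; when d*d > n, the remaining n (if > 1) is the largest factor
    (define (largest-prime-factor n)
      (let loop ((n n) (d 2) (largest 1))
        (cond ((> (* d d) n) (if (> n 1) n largest))
              ((zero? (remainder n d)) (loop (quotient n d) d d))
              (else (loop n (+ d 1) largest)))))

    ;; (largest-prime-factor 600851475143) => 6857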

As regards the problem of factorization, I'd rather look into the special literature for the known approaches (and I'll do that later; let's switch to the next problem).

Problem 2 ver. 2, 3, 4: logarithmic complexity

April 17th, 2009 by Ivan Lakhturov | Category: Programming

Find the sum of all the even-valued terms in the Fibonacci sequence which do not exceed four million.

The last time we had the straightforward O(n) solution: building the sequence, filtering the even values and adding them up. We can improve a bit by noticing that every third member of the Fibonacci sequence is even. Then we don't check for evenness, but just jump three members ahead each time. This version 2 (I don't publish it here) should be several times faster, but is still O(n) in performance.

We can also express a member of the Fibonacci sequence via the third and sixth members behind it, $F_n = 4 F_{n-3} + F_{n-6}$, and compute the even-valued members as a new sequence: $E_m = 4 E_{m-1} + E_{m-2}$ with $E_1 = 2$, $E_2 = 8$. This version 3 is essentially the same as the previous one, and again, I don't publish it here.

The drastic improvement is obtained using the expression $F_3 + F_6 + \ldots + F_{3k} = (F_{3k+2} - 1)/2$ (I've added it and a proof to the wikipedia article, but they immediately reverted my changes as "unsourced" --- this is pathetic). Now the sum is obtained by computing just one Fibonacci member, and this can be done in O(log n).

Indeed, we can compute a Fibonacci member by exponentiating the matrix ((1 1) (1 0)), and this exponentiation, just like the usual one, can be done in O(log n) multiplications. I prefer this solution over the golden-ratio exponentiation formula (again of logarithmic complexity), because only integer operations are involved. So, this is version 4 of the solution.

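A sketch of it, with ad-hoc names ((fibonacci) and (closest-fibonacci-index) are defined below):

    ;; Sum of the even-valued Fibonacci members not exceeding the limit,
    ;; via the identity F3 + F6 + ... + F(3k) = (F(3k+2) - 1) / 2
    (define (sum-even-fibonacci limit)
      (let* ((i (closest-fibonacci-index limit)) ; largest F(i) <= limit
             (k (* 3 (quotient i 3))))           ; last even-valued member
        (quotient (- (fibonacci (+ k 2)) 1) 2)))

    ;; (sum-even-fibonacci 4000000) => 4613732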

I quickly outlined a class for 2x2 matrices and operations with it:

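In this sketch the "class" degenerates to a flat list (a b c d) standing for the matrix ((a b) (c d)):

    (define (mat* m1 m2)
      (let ((a1 (car m1)) (b1 (cadr m1)) (c1 (caddr m1)) (d1 (cadddr m1))
            (a2 (car m2)) (b2 (cadr m2)) (c2 (caddr m2)) (d2 (cadddr m2)))
        (list (+ (* a1 a2) (* b1 c2)) (+ (* a1 b2) (* b1 d2))
              (+ (* c1 a2) (* d1 c2)) (+ (* c1 b2) (* d1 d2)))))

    ;; Exponentiation by repeated squaring, O(log n) multiplications
    (define (mat-expt m n)
      (cond ((= n 1) m)
            ((even? n) (let ((h (mat-expt m (quotient n 2)))) (mat* h h)))
            (else (mat* m (mat-expt m (- n 1))))))

    ;; ((1 1) (1 0))^n = ((F(n+1) F(n)) (F(n) F(n-1)))
    (define (fibonacci n)
      (cadr (mat-expt '(1 1 1 0) n)))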

The solution is O(1) in memory and O(log n) in performance --- where n, of course, denotes the index of a member in the Fibonacci sequence. But the problem asks about a cut of the sequence: the members that do not exceed a certain number. Here an additional function (closest-fibonacci-index) comes in handy (see the wiki for an explanation):

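A sketch, obtained by inverting Binet's formula ($F_n$ is the nearest integer to $\varphi^n / \sqrt{5}$):

    ;; Index of the largest Fibonacci member not exceeding x
    (define (closest-fibonacci-index x)
      (let ((phi (/ (+ 1 (sqrt 5)) 2)))
        (inexact->exact
         (floor (/ (log (+ (* x (sqrt 5)) 0.5))
                   (log phi))))))

    ;; (closest-fibonacci-index 4000000) => 33, and F(33) = 3524578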

The final touch is asking ourselves about the complexity of the (log) function. Well, it can be computed fast enough not to spoil the complexity of the algo's main part.

Problem 1 ver. 3: optimization

April 5th, 2009 by Ivan Lakhturov | Category: Programming

Find the sum of all the multiples of 3 or 5 below 1000.

Let us generalize again to a finite set of factors.

There is a formula for the power of finite sets

$|A \cup B| = |A| + |B| - |A \cap B|,$

which can be generalized to a finite number of finite sets

$|A_1 \cup \ldots \cup A_n| = \sum_i |A_i| - \sum_{i<j} |A_i \cap A_j| + \ldots + (-1)^{n+1} |A_1 \cap \ldots \cap A_n|,$

or, in a somewhat less understandable but concise notation,

$\mu\Big(\bigcup_{i=1}^n A_i\Big) = \sum_{\emptyset \ne J \subseteq \{1,\ldots,n\}} (-1)^{|J|+1} \mu\Big(\bigcap_{j \in J} A_j\Big).$

Here $\mu$ is a measure (i.e. it commutes with the union sign) and can be replaced with $|\cdot|$ --- the power-of-a-set sign --- or, if we are in the natural numbers space, with the sum-of-elements sign, as in our case. $J$ is not a multi-index, but a subset of the natural numbers cut from $1$ to $n$.

Now by $A_i$ we denote the sets of all the multiples of the factors $f_i$, less than a certain number N, where i varies from 1 to n (each $A_i$ is respectively the set of multiples of the factor $f_i$). We use the above-mentioned formula to compute the measure of the union via the measures of all the $A_i$ and the measures of all their finite intersections.

Suppose we have a number $f$, prime or not, and the set of all its multiples below N. The power of this set is of course $m = \lfloor (N-1)/f \rfloor$ (a div operation), and the sum of its members can be calculated by the well-known formula for the sum of an arithmetic progression: $f + 2f + \ldots + mf = f\,m(m+1)/2$.

As regards the intersections, it is understandable that we ought to calculate the least common multiple (LCM) of the taken factors: the set-intersection of their sets of multiples is just the set of multiples of their LCM. However, the current version of the solution assumes that we take primes as factors, and then their LCM is just their product. When I calculate a proper LCM in Problem 5 (up to now there is a brute-force version there), I will switch this temporary version to it.

Let's see the solution. New util functions:

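Sketched with ad-hoc implementations ((sum-list) and (sum-of-one) are referenced below; (product-list) is used by the solution):

    (define (sum-list lst) (apply + lst))
    (define (product-list lst) (apply * lst))

    ;; Sum of the positive multiples of f below n, by the arithmetic
    ;; progression formula: f + 2f + ... + mf = f * m * (m+1) / 2
    (define (sum-of-one f n)
      (let ((m (quotient (- n 1) f)))
        (/ (* f m (+ m 1)) 2)))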

The function that calculates subsets of a set:

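A compact recursive sketch:

    ;; All subsets of a list; the recursion keeps the empty set first
    ;; and the full set last in the resulting list
    (define (subsets lst)
      (if (null? lst)
          '(())
          (let ((rest (subsets (cdr lst))))
            (append rest
                    (map (lambda (s) (cons (car lst) s)) rest)))))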

An important thing about this function is that it returns the empty set as the first element and the full set as the last element of the result list; all other subsets are in between. The number of subsets of a finite set is just $2^n$, so the complexity is $O(2^n)$ --- it would be better visible with an imperative-iterative version of this function (I'm not posting it here). As regards memory, the function generates all the subsets as lists which altogether contain $n \cdot 2^{n-1}$ elements (strange, this neat formula isn't on Wikipedia yet, I should add it there), that is, the memory load is $O(n \cdot 2^n)$. It is a not-so-good idea to load everything into memory, as we could rewrite this function (and the solution function further down in this post) iteratively with O(n) memory complexity, taking advantage of combinadics, but for now I am satisfied enough with this version.

Using the formula above, the solution is now as simple as:

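A sketch, using the helpers above:

    ;; Inclusion-exclusion over the non-empty subsets of the factor list;
    ;; assumes the factors are pairwise coprime (primes), so the LCM of
    ;; a subset is just the product of its elements
    (define (sum-multiples-less-than n factors)
      (apply +
             (map (lambda (subset)
                    (let ((sign (if (odd? (length subset)) 1 -1)))
                      (* sign (sum-of-one (product-list subset) n))))
                  (cdr (subsets factors)))))

    ;; (sum-multiples-less-than 1000 '(3 5)) => 233168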

With that (cdr) I cut off the empty subset, whose measure is zero (otherwise the (sum-of-one) function has to be a bit more complex).

Let's be careful with notation: n here is actually not the same n as in the (subsets) function, but the number N up there, the upper bound of our multiples-sets. The performance complexity depends on k (the number of factors) and N, but we are interested only in the complexity depending on N. Let's assume that k is small compared to N, which should be the usual case. Then the complexity is, roughly speaking, O(1) --- it doesn't depend on N, as we wanted (I remind that in the previous version we had O(N) complexity).

The final touches are the regression tests:

    ;(assert (=
    ;        (sum-list (multiples-less-than-bruteforce 1000 '(3 5 15)))
    ;        (sum-multiples-less-than 1000 '(3 5 15))))

The last, commented-out one breaks, of course, as 15 is not prime - the LCM algo still has to be updated.

Problem 7 ver. 1 and 2: bruteforce and optimization

March 8th, 2009 by Ivan Lakhturov | Category: Programming

Find the 10001st prime.

The problem of finding primes is the most famous computational task in number theory.

First, let's update the (generate-list-iteratively) function. It now includes i - a parameter which denotes the number of eligible elements found up to this moment. We use it in the stopping condition.

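A sketch of the idea (the real function is more general; this reconstruction only shows the stopping condition on i):

    ;; Collect elements satisfying pred, starting from `start`, until
    ;; `count` eligible elements have been found
    (define (generate-list-iteratively pred start count)
      (let loop ((n start) (i 0) (acc '()))
        (cond ((= i count) (reverse acc))
              ((pred n) (loop (+ n 1) (+ i 1) (cons n acc)))
              (else (loop (+ n 1) i acc)))))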

The dependent functions are also rewritten. I refactored the factorization functions and added primality tests (the usual straightforward ones):

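For instance, the primality test part, as a sketch:

    ;; The usual straightforward test: trial division up to the square root
    (define (prime? n)
      (and (> n 1)
           (let loop ((d 2))
             (cond ((> (* d d) n) #t)
                   ((zero? (remainder n d)) #f)
                   (else (loop (+ d 1)))))))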

Now, having an extra function that returns the last element of a list, we can easily solve the 7th problem:

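Along these lines, with the sketched helpers above and ad-hoc names:

    (define (last-element lst)
      (if (null? (cdr lst))
          (car lst)
          (last-element (cdr lst))))

    ;; The 10001st prime
    (define (solve-problem-7)
      (last-element (generate-list-iteratively prime? 2 10001)))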

The complexity in memory is O(n), as we store n primes. It could be O(1), of course, but I prefer it this way for now. The performance complexity of the algorithm is hard to determine; probably, I need to look through some literature for that.

Problem 6 ver. 1: brute-force

March 6th, 2009 by Ivan Lakhturov | Category: Programming

What is the difference between the sum of the squares and the square of the sum?

No problems with brute-force here.

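A sketch with ad-hoc names, matching the list-based O(n) memory use discussed below:

    (define (range a b)     ; the list (a a+1 ... b)
      (if (> a b) '() (cons a (range (+ a 1) b))))

    (define (sum-square-difference n)
      (let* ((numbers (range 1 n))
             (sum (apply + numbers))
             (sum-of-squares (apply + (map (lambda (x) (* x x)) numbers))))
        (- (* sum sum) sum-of-squares)))

    ;; (sum-square-difference 100) => 25164150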

The number of operations (additions and multiplications) here is 3*n + 2. The complexity is O(n), and the algo is O(n) in memory as well (of course, can be done with O(1) in memory).

Problem 5 ver. 1: brute-force

March 6th, 2009 by Ivan Lakhturov | Category: Programming

What is the smallest number divisible by each of the numbers 1 to 20?

This is also quite a well-known problem --- finding a least common multiple (LCM).

As usual, let's start with the brute-force solution.

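A sketch ((all-divide?) is referenced below; the other names are ad-hoc):

    ;; Do all the divisors divide n? Note the short-circuiting `and`
    (define (all-divide? n divisors)
      (or (null? divisors)
          (and (zero? (remainder n (car divisors)))
               (all-divide? n (cdr divisors)))))

    ;; Brute force: try 1, 2, 3, ... until everything divides
    (define (smallest-divisible-by-all divisors)
      (let loop ((n 1))
        (if (all-divide? n divisors)
            n
            (loop (+ n 1)))))

    ;; (smallest-divisible-by-all '(2 3 4 5 6 7 8 9 10
    ;;                              11 12 13 14 15 16 17 18 19 20))
    ;; => 232792560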

This is the most inefficient way one can invent for the task of finding an LCM. The complexity in the worst case is something like O(p^n): that is when the n divisors are prime numbers, each around p on average (but all different, so their LCM is just their product). More strictly, the number of divisions is up to n*p^n in the worst case (it would be exactly that, if not for the short-circuited logical operations in the (all-divide?) function).

Problem 4 ver. 1: brute-force

March 5th, 2009 by Ivan Lakhturov | Category: Programming

Find the largest palindrome made from the product of two 3-digit numbers.

An additional function that filters out empty lists from a list of lists:

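A possible shape of it:

    ;; Filter empty lists out of a list of lists
    (define (filter-empty-lists lst)
      (cond ((null? lst) '())
            ((null? (car lst)) (filter-empty-lists (cdr lst)))
            (else (cons (car lst) (filter-empty-lists (cdr lst))))))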

The function that represents a number in an arbitrary number system and the function that returns the base-10 digits of a number (starting from the rightmost digit):

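Sketches of both:

    ;; Digits of n in an arbitrary base, the rightmost digit first
    (define (number->base n base)
      (if (< n base)
          (list n)
          (cons (remainder n base)
                (number->base (quotient n base) base))))

    (define (digits n) (number->base n 10))

    ;; (digits 906609) => (9 0 6 6 0 9)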

The (digits) function could use the standard (number->string), but I like this generalized solution more. Now the solution to the problem looks as easy as:

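A sketch with ad-hoc names, collecting palindromic products into singleton lists (and empty lists otherwise), so that (filter-empty-lists) has work to do:

    (define (palindrome-number? n)
      (let ((ds (digits n)))
        (equal? ds (reverse ds))))

    ;; All products a*b with 100 <= a <= b <= 999, wrapped as (p) when
    ;; palindromic and as () otherwise
    (define (palindromic-products)
      (let loop ((a 100) (acc '()))
        (if (> a 999)
            acc
            (let inner ((b a) (acc acc))
              (if (> b 999)
                  (loop (+ a 1) acc)
                  (inner (+ b 1)
                         (let ((p (* a b)))
                           (cons (if (palindrome-number? p) (list p) '())
                                 acc))))))))

    (define (solve-problem-4)
      (apply max (map car (filter-empty-lists (palindromic-products)))))

    ;; (solve-problem-4) => 906609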

Notice that for comparing lists we use (equal?), not (eq?) and not (eqv?) - those would just return #f. However, we could use (eq?) instead of (null?) in (filter-empty-lists) to compare a list with the empty one. As regards the number of operations, we have n^2 / 2 multiplications, as many (palindrome-number?) calls that create digit lists (up to 6 divmods each) and compare them (up to 6 comparisons each). So, the complexity is O(n^2), and the memory use is proportional to the density of palindromic numbers in the n-by-n matrix (half of which we actually iterate over to find the answer).