Problem 3: a note

May 5th, 2009 by Ivan Lakhturov | Category: Programming

Find the largest prime factor of a composite number.

The problem of integer factorization is one of the most important in number theory. Last time I implemented a classical algorithm with O(sqrt(n)) complexity. The author of the Project Euler website suggests a small improvement, iterating over only odd numbers, but I consider that too tiny a thing to bother with.

As regards the problem of factorization, I'd rather look into the specialized literature for the known approaches (and I'll do that later; let's switch to the next one).

Problem 2 ver. 2, 3, 4: logarithmic complexity

April 17th, 2009 by Ivan Lakhturov | Category: Programming

Find the sum of all the even-valued terms in the Fibonacci sequence which do not exceed four million

The last time we had the straightforward O(n) solution: building the sequence, filtering out the even values and adding them up. We can improve it a bit by noticing that actually, every third member of the Fibonacci sequence is even. Then we don't check for evenness, but just jump three members at a time. This version 2 (I don't publish it here) should be several times faster, but is still O(n) in performance.

We can also express a member of the Fibonacci sequence via the third and sixth members behind it, F(n) = 4*F(n-3) + F(n-6), and compute the even-valued members as a new sequence E(k) = 4*E(k-1) + E(k-2), with E(1) = 2 and E(2) = 8. This version 3 is essentially the same as the previous one and again, I don't publish it here.

The drastic improvement is obtained using the expression S(n) = (F(3n + 2) - 1) / 2 for the sum of the first n even-valued members (I've added it and a proof to the wikipedia article, but they immediately reverted my changes as "unsourced", which is pathetic). Now the sum is obtained by computing just one Fibonacci member, and this can be done in O(log n).

Indeed, we can compute a Fibonacci member by exponentiating the appropriate 2x2 matrix, and this exponentiation, just like the usual one, can be done in O(log n) multiplications by repeated squaring. I prefer this solution over the golden-ratio exponentiation formula (again of logarithmic complexity), because only integer operations are involved. So this is version 4 of the solution.

[code listing lost]

I quickly outlined a class for 2D matrices and operations with it:

[code listing lost]
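The original Scheme listings were not preserved, so here is a Python sketch of the same idea (the function names are mine, not the original class's): the matrix [[1, 1], [1, 0]] raised to the n-th power contains F(n), and the power is computed by repeated squaring.

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices given as ((a, b), (c, d))."""
    return (
        (a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
        (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]),
    )

def mat_pow(m, k):
    """Raise a 2x2 matrix to the k-th power by repeated squaring: O(log k) multiplications."""
    result = ((1, 0), (0, 1))  # identity matrix
    while k > 0:
        if k & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        k >>= 1
    return result

def fib(n):
    """n-th Fibonacci number (F(1) = F(2) = 1) via [[1,1],[1,0]]**n."""
    return mat_pow(((1, 1), (1, 0)), n)[0][1]
```

With fib in hand, the answer is just (fib(3n + 2) - 1) / 2 for the appropriate n, all in integer arithmetic.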

The solution is O(1) in memory and O(log n) in performance, where n, of course, denotes the index of a number in the Fibonacci sequence. But we have been asked about the cut of the sequence where the members are less than a certain number; then an additional function (closest-fibonacci-index) comes in handy (see the wiki for an explanation):

[code listing lost]
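The lost (closest-fibonacci-index) listing presumably inverted Binet's formula; a Python sketch under that assumption (the function name mirrors the Scheme one):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def closest_fibonacci_index(x):
    """Largest index n with F(n) <= x, assuming F(1) = F(2) = 1.

    Uses F(n) = round(phi**n / sqrt(5)), so n is about log_phi(x * sqrt(5)).
    A sketch of the lost Scheme helper, not the original code.
    """
    if x < 1:
        return 0
    return math.floor(math.log(x * math.sqrt(5) + 0.5) / math.log(PHI))
```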

The final touch is asking ourselves about the complexity of the (log) function. Well, it can be computed fast enough not to spoil the complexity of the algo's main part.

Problem 1 ver. 3: optimization

April 5th, 2009 by Ivan Lakhturov | Category: Programming

Find the sum of all the multiples of 3 or 5 below 1000.

Let us generalize again to a finite set of factors.

There is a formula for the power of the union of two finite sets

|A ∪ B| = |A| + |B| − |A ∩ B|

which can be generalized to a finite number of finite sets

|A₁ ∪ A₂ ∪ … ∪ Aₙ| = Σᵢ |Aᵢ| − Σᵢ<ⱼ |Aᵢ ∩ Aⱼ| + … + (−1)ⁿ⁻¹ |A₁ ∩ … ∩ Aₙ|

or, in a somewhat less understandable but concise notation,

μ(⋃ᵢ Aᵢ) = Σ over non-empty J ⊆ {1, …, n} of (−1)^(|J|+1) μ(⋂ over j ∈ J of Aⱼ)

Here μ is a measure (i.e. it behaves additively with respect to the union sign) and can be replaced with |·|, the power-of-a-set sign, or, if we are in the natural-numbers space, with the sum-of-elements sign, as in our case. J is not a multiindex, but a subset of the natural numbers cut from 1 to n.

Now by Aᵢ we denote the sets of all the multiples of the factors k₁, …, kₙ less than a certain number N, where i varies from 1 to n (each Aᵢ is respectively the set of multiples of the factor kᵢ). We use the above-mentioned formula to compute the measure of the union via the measures of all Aᵢ and the measures of all finite intersections of them.

Suppose we have a number k, prime or not, and the set of all its multiples (including only numbers less than N). The power of this set is of course m = (N − 1) div k, and the sum of its members, k + 2k + … + mk, can be calculated by the well-known formula for the sum of an arithmetic progression.
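As a quick sketch of that formula (in Python, since the Scheme listings here were lost; the name is mine): with m = (N − 1) div k multiples of k below N, their sum is k·m·(m + 1)/2.

```python
def sum_of_multiples(k, n):
    """Sum of the positive multiples of k strictly below n.

    There are m = (n - 1) // k of them, and k + 2k + ... + mk = k * m * (m + 1) / 2.
    """
    m = (n - 1) // k
    return k * m * (m + 1) // 2
```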

As regards all the intersections, it is understandable that we ought to calculate the least common multiple (LCM) of the factors taken, and the set-intersection of their multiples will be just the set of its multiples. However, the current version of the solution assumes that we take primes as factors; then their LCM is just their product. When I calculate a proper LCM in Problem 5 (up to now there is a brute-force version), I will switch this temporary version to it.

Let's see the solution. New util functions:

[code listing lost]

The function that calculates subsets of a set:

[code listing lost]

An important thing about this function is that it returns the empty set as the first element and the full set as the last element of the result list; all other subsets are in between. The number of subsets of a finite set is just 2^n, so the complexity is O(2^n); this would be better visible in an imperative-iterative version of this function (I'm not posting it here). As regards memory, the function generates all the subsets as lists which altogether contain n·2^(n−1) elements (strange, this neat formula isn't on Wikipedia yet, I should add it there), that is, the memory load is O(n·2^n). It is not such a good idea to load everything into memory, as we could rewrite this function (and the function further down in this post) iteratively with O(n) memory complexity, taking advantage of combinadics, but for now I am satisfied enough with this version.
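A Python sketch of such a subsets function with the same ordering guarantee (empty set first, full set last); the recursive shape is my guess at the lost Scheme listing:

```python
def subsets(lst):
    """All 2**n subsets of lst, with the empty set first and the full set last."""
    if not lst:
        return [[]]
    rest = subsets(lst[1:])  # subsets that omit the first element
    # subsets containing the first element come second, so the full set ends up last
    return rest + [[lst[0]] + s for s in rest]
```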

Using the formula above, the solution is now as simple as

[code listing lost]

With that (cdr) I cut off the empty subset, whose measure is zero (otherwise the (sum-of-one) function would have to be a bit more complex).

Let's be careful with notation: n here is actually not the same n as in the (subsets) function, but the number N from above, the upper bound of our multiples-sets. The performance complexity depends on k (the number of factors) and on N, but we are interested only in the complexity depending on N. Let's assume that k is small compared to N, which should be the usual case. Then the complexity is, roughly speaking, O(1): it doesn't depend on N, as we wanted (I remind you that in the previous version we had O(N) complexity).

The final touches are the regression tests:

;(assert (=
;        (sum-list (multiples-less-than-bruteforce 1000 '(3 5 15)))
;        (sum-multiples-less-than 1000 '(3 5 15))))

The last, commented-out assertion breaks, of course, as 15 is not prime: the LCM algo still has to be updated.
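Putting the pieces together as a Python sketch (the names are mine); note that it computes a proper LCM via the GCD, so unlike the current Scheme version it also survives non-prime factors such as 15:

```python
from functools import reduce
from itertools import combinations
from math import gcd

def lcm_list(xs):
    """LCM of a list of positive integers via lcm(a, b) = a * b / gcd(a, b)."""
    return reduce(lambda a, b: a * b // gcd(a, b), xs)

def sum_of_multiples(k, n):
    """Sum of the positive multiples of k strictly below n (arithmetic progression)."""
    m = (n - 1) // k
    return k * m * (m + 1) // 2

def sum_multiples_less_than(n, factors):
    """Inclusion-exclusion over all non-empty subsets of factors."""
    total = 0
    for size in range(1, len(factors) + 1):
        for subset in combinations(factors, size):
            total += (-1) ** (size + 1) * sum_of_multiples(lcm_list(subset), n)
    return total
```

For instance, sum_multiples_less_than(1000, [3, 5]) reproduces the answer to Problem 1, and adding the redundant factor 15 no longer breaks it.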

Learned a bit about PLT-Scheme

March 20th, 2009 by Ivan Lakhturov | Category: Programming

I've looked through the Quick: An Introduction to PLT Scheme with Pictures document. And what I've learned is:

  • There is a library (#lang slideshow) embedded into PLT-Scheme, which provides some easy-to-use graphic primitives, and a GUI library (scheme/gui/base). The former can be used for drawing on GUI canvases.
  • There is an OOP library (scheme/class). I should look into what its backbone is later.
  • PLT has a distribution system for libraries. The first eval of (require (planet something)) downloads 'something' from the PLaneT server and caches it locally.

The last one is quite nice; I should run through that server and see which libraries are actually implemented.

Problem 7 ver. 1 and 2: bruteforce and optimization

March 8th, 2009 by Ivan Lakhturov | Category: Programming

Find the 10001st prime.

The problem of finding primes is the most famous computational task in number theory.

First, let's update the (generate-list-iteratively) function. It now includes i, a parameter that denotes the number of eligible elements found so far. We use it in the stopping condition.

[code listing lost]

Dependent functions are rewritten as well. I refactored the factorization functions and added primality tests (the usual straightforward ones):

[code listing lost]

Now, having an extra function that returns the last element of a list, we can easily solve the 7th problem:

[code listing lost]

The complexity in memory is O(n), as we store n primes. It could be O(1), of course, but I prefer it this way for now. The performance complexity of the algorithm is hard to determine. Probably I need to look through some literature for that.
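The lost listings can be sketched in Python roughly as follows (the names are mine); by the prime number theorem the n-th prime is about n·ln n, which bounds how far the search has to go:

```python
def is_prime(n):
    """Straightforward trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def nth_prime(n):
    """The n-th prime, 1-indexed: nth_prime(1) is 2."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if is_prime(candidate):
            count += 1
    return candidate
```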

Problem 6 ver. 1: brute-force

March 6th, 2009 by Ivan Lakhturov | Category: Programming

What is the difference between the sum of the squares and the square of the sums?

No problems with brute-force here.

[code listing lost]

The number of operations (additions and multiplications) here is 3n + 2. The complexity is O(n), and the algo is O(n) in memory as well (of course, it can be done in O(1) memory).
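A Python sketch of the brute force (the Scheme listing was lost; the function name is mine). The problem asks for the square of the sum minus the sum of the squares of the first hundred natural numbers:

```python
def square_sum_difference(n):
    """(1 + 2 + ... + n)**2 minus (1**2 + 2**2 + ... + n**2)."""
    numbers = range(1, n + 1)
    return sum(numbers) ** 2 - sum(i * i for i in numbers)
```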

Problem 5 ver. 1: brute-force

March 6th, 2009 by Ivan Lakhturov | Category: Programming

What is the smallest number divisible by each of the numbers 1 to 20?

This is also quite a well-known problem: finding the least common multiple (LCM).

As usual, let's start with the brute-force solution.

[code listing lost]

This is the most inefficient way one can invent for the task of finding an LCM. The complexity in the worst case is something like O(p^n): that is when the n divisors are prime numbers, each around p on average (but all different, of course; then their LCM is just their product). More strictly, the number of divisions is up to n*p^n in the worst case (it would be exactly that if not for the short-circuited logical operations in the (all-divide?) function).
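A Python sketch of this brute force (the Scheme listing was lost; the names mirror the Scheme ones, e.g. (all-divide?)):

```python
def all_divide(n, divisors):
    """True iff every divisor divides n; all() short-circuits like the Scheme and-chain."""
    return all(n % d == 0 for d in divisors)

def smallest_multiple(divisors):
    """Brute force: try 1, 2, 3, ... until a number is divisible by all divisors."""
    n = 1
    while not all_divide(n, divisors):
        n += 1
    return n
```

Already for the divisors 1 to 10 (answer 2520) this walks through thousands of candidates, which is why the 1-to-20 case begs for the proper LCM-based approach.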

Problem 4 ver. 1: brute-force

March 5th, 2009 by Ivan Lakhturov | Category: Programming

Find the largest palindrome made from the product of two 3-digit numbers.

An additional function that filters out empty lists from a list of lists:

[code listing lost]

The function that represents a number in an arbitrary number system and the function that returns the base-10 digits of a number (starting from the rightmost digit):

[code listing lost]

The (digits) function could use the standard (number->string), but I like this generalized solution more. Now the solution to the problem looks as easy as:

[code listing lost]

Notice that for comparing lists we use (equal?), not (eq?) and not (eqv?), which would just return #f. However, we could use (eq?) instead of (null?) in (filter-empty-lists) to compare a list to the empty one. As regards the number of operations, we have n^2 / 2 multiplications, as many (palindrome-number?) calls that create digit lists (up to 6 divmods each) and compare them (up to 6 comparisons each). So the complexity is O(n^2), and memory use is proportional to the density of palindromic numbers in the n-by-n matrix (half of which we actually iterate over to find the answer).
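The listings are lost, so here is a Python sketch of the same scheme (the names are mine): digits in an arbitrary base (rightmost first), a palindrome test, and the half-matrix search:

```python
def digits(n, base=10):
    """Digits of n in the given base, starting from the rightmost one."""
    if n == 0:
        return [0]
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(r)
    return out

def palindrome_number(n):
    """True iff the decimal digits of n read the same both ways."""
    d = digits(n)
    return d == d[::-1]

def largest_palindrome_product():
    """Largest palindrome that is a product of two 3-digit numbers."""
    best = 0
    for a in range(100, 1000):
        for b in range(a, 1000):  # b >= a: only half of the n-by-n matrix
            p = a * b
            if p > best and palindrome_number(p):
                best = p
    return best
```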

Problem 3 ver. 1 and 2: brute-force and optimization

March 4th, 2009 by Ivan Lakhturov | Category: Programming

Problem 3

Find the largest prime factor of a composite number.

is the very well-known problem of factorization. The fundamental theorem of arithmetic comes in handy.

Brute-force in this case is just iterating from 2 upwards, searching for factors:

[code listing lost]

If a factor is found, we divide the number by it, and continue with searching factors of a quotient. We don't need to start from 2 again (see the theorem).

The complexity of finding one factor is O(n) in the worst case. Let us optimize right away by stopping the iteration at sqrt(n). Indeed, if a number has no factor up to its square root, it is prime. This improves the worst-case complexity drastically to O(sqrt(n)), and this is the classical factorization algo.

[code listing lost]

Now, having written an additional function (max-list) for finding the maximum of a list, we can solve the third problem:

[code listing lost]
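The lost listings can be sketched in Python as follows (the names are mine): divide out each factor found, stop searching at the square root of the remaining quotient, and take the maximum:

```python
def factorize(n):
    """Prime factors of n >= 2 by trial division; the loop stops at the square
    root of the remaining quotient, and whatever remains above 1 is prime."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def largest_prime_factor(n):
    return max(factorize(n))
```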

Problem 2 ver. 1: brute-force solution

March 4th, 2009 by Ivan Lakhturov | Category: Programming

I've written a generic procedure to generate lists iteratively:

[code listing lost]

I've refactored the (create-filtered-numbers-list from to filter) function into (numbers-filters) and have also added (numbers) for future use:

[code listing lost]

Now the brute-force solution for the problem 2

Find the sum of all the even-valued terms in the Fibonacci sequence which do not exceed four million

looks like this:

[code listing lost]

The complexity is O(n). I decided not to generate a filtered list, but rather to first generate, then filter; the (fibonacci) function goes into the library for future use (for future functions I will not mention placing them into my library). This is an optimal algorithm for generating a Fibonacci sequence, unlike computing the n-th member of it, which can be done more efficiently. However, for computing a sum of members there is perhaps a better solution; I will think about it later.
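A Python sketch of this brute force (the Scheme listing was lost; the names are mine). Project Euler's statement of the sequence starts 1, 2, 3, 5, ...:

```python
def fibonacci_upto(limit):
    """Fibonacci members not exceeding limit, starting 1, 2, 3, 5, ..."""
    seq, a, b = [], 1, 2
    while a <= limit:
        seq.append(a)
        a, b = b, a + b
    return seq

def even_fib_sum(limit):
    """Generate first, then filter the even members and sum them: O(n)."""
    return sum(x for x in fibonacci_upto(limit) if x % 2 == 0)
```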