Baby-step giant-step
In group theory, the baby-step giant-step algorithm is a method for computing discrete logarithms in a finite cyclic group.
Theory
The algorithm is based on a space-time tradeoff. It is a fairly simple modification of trial multiplication, the naïve method of finding discrete logarithms.
The problem is to find x where
- <math>\alpha^x\equiv\beta\pmod{n}</math>
where α, β and n are given. The baby-step giant-step algorithm is based on rewriting x as x = im + j, with m = ⌈√n⌉ and 0 ≤ i, j < m. Substituting this into the congruence above gives α^(im + j) ≡ β (mod n), and multiplying both sides by (α^(−m))^i gives:
- <math>\beta(\alpha^{-m})^i\equiv\alpha^j\pmod{n}.</math>
The algorithm precomputes α^j for every j with 0 ≤ j < m and stores the pairs in a table. It then tries successive values of i in the left-hand side of the congruence above, in the manner of trial multiplication, and checks after each step whether the result matches one of the precomputed values α^j.
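For example (an illustrative choice of numbers, not taken from the description above), if n = 100 then m = 10, and an exponent such as x = 57 decomposes as
- <math>57 = 5 \cdot 10 + 7,</math>
so i = 5 and j = 7. The search therefore ranges over only m values of i and m values of j, roughly 2√n group operations in total, instead of up to n trial multiplications.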
The algorithm
Input: A cyclic group G of order n, having a generator α and an element β.
Output: A value x satisfying α^x ≡ β (mod n).
- m ← Ceiling(√n)
- For all j where 0 ≤ j < m:
  - Compute α^j mod n and store the pair (j, α^j) in a table. (See section "In practice")
- Compute α^−m.
- γ ← β.
- For i = 0 to (m − 1):
  - Check to see if γ is the second component (α^j) of any pair in the table.
  - If so, return im + j.
  - If not, γ ← γ • α^−m mod n.
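The steps above translate directly into code. The following is a minimal sketch in Python, assuming the group is the multiplicative group of integers modulo some modulus coprime to α (so that α^−m can be computed as a modular inverse); the function name bsgs and its parameter names are illustrative, not part of the original description. The group order and the modulus are kept as separate parameters, since for the multiplicative group mod p the order is p − 1 rather than p.

<syntaxhighlight lang="python">
from math import isqrt

def bsgs(alpha, beta, order, modulus):
    """Return x with alpha**x ≡ beta (mod modulus), where alpha has the given
    order in the group, or None if no such x exists."""
    m = isqrt(order - 1) + 1                  # m = ceil(sqrt(order))

    # Baby steps: store the pairs (j, alpha**j) keyed by the group element.
    table = {pow(alpha, j, modulus): j for j in range(m)}

    # Giant-step factor alpha**(-m); pow with a negative exponent computes
    # the modular inverse (Python 3.8+, requires gcd(alpha, modulus) = 1).
    factor = pow(alpha, -m, modulus)

    # Giant steps: gamma runs through beta * (alpha**(-m))**i for i = 0, 1, ...
    gamma = beta % modulus
    for i in range(m):
        if gamma in table:                    # found j with gamma ≡ alpha**j
            return i * m + table[gamma]
        gamma = (gamma * factor) % modulus
    return None
</syntaxhighlight>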
In practice
The best way to speed up the baby-step giant-step algorithm is to use an efficient table lookup scheme; the best choice in this case is a hash table. The hashing is done on the second component of each pair, and to perform the check in the first step of the main loop, γ is hashed and the corresponding bucket is checked. Since hash tables can retrieve and add elements in expected constant time, this does not slow down the overall baby-step giant-step algorithm.
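In the sketch above, for instance, the table is an ordinary Python dict, so the membership test in the main loop is exactly such a constant-time hash lookup. A call with small illustrative numbers (again an assumed example, not taken from the text):

<syntaxhighlight lang="python">
# 5 generates the multiplicative group mod 23, which has order 22,
# and 5**14 % 23 == 13, so the discrete logarithm of 13 to base 5 is 14.
print(bsgs(5, 13, order=22, modulus=23))   # prints 14
</syntaxhighlight>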
The running time of the algorithm is:
- <math>O(\sqrt{n}).</math>
The space complexity is likewise O(√n), since the table stores m ≈ √n pairs.