# Generating primes

In computational number theory, a variety of algorithms make it possible to generate prime numbers efficiently. These are used in various applications, for example hashing, public-key cryptography, and the search for prime factors in large numbers.

For relatively small numbers, it is possible to just apply trial division to each successive odd number. Prime sieves are almost always faster.
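The trial-division approach can be sketched as follows (a minimal illustration, not from the source; the function name is our own):

```python
def is_prime_trial(n):
    """Trial division: check divisors up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2  # only odd divisors need checking
    return True

# Generate primes by testing each successive integer.
primes = [n for n in range(2, 50) if is_prime_trial(n)]
```

Each candidate costs up to about <math>\sqrt{n}</math> divisions, which is why sieves win for generating all primes in a range.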

## Prime sieves

A **prime sieve** or **prime number sieve** is a fast type of algorithm for finding primes. There are many prime sieves. The simple sieve of Eratosthenes (250s BCE), the sieve of Sundaram (1934), the still faster but more complicated sieve of Atkin (2004), and various wheel sieves are most common.

A prime sieve works by creating a list of all integers up to a desired limit and progressively removing composite numbers (which it directly generates) until only primes are left. This is the most efficient way to obtain a large range of primes; however, to find individual primes, direct primality tests are more efficient. Furthermore, some integer sequences constructed from the sieve formalisms can also be used to generate primes in certain intervals.
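The classic sieve of Eratosthenes illustrates the principle. This is a standard textbook sketch, not an optimized implementation:

```python
def eratosthenes(limit):
    """Sieve of Eratosthenes: cross off multiples of each prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Start at p*p: smaller multiples of p were already
            # crossed off by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]
```

The composites are generated directly as multiples of each surviving prime; no division or primality test is ever performed.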

## Large primes

For the large primes used in cryptography, it is usual to use a modified form of sieving: a randomly chosen range of odd numbers of the desired size is sieved against a number of relatively small primes (typically all primes less than 65,000). The remaining candidate primes are tested in random order with a standard probabilistic primality test such as the Baillie–PSW primality test or the Miller–Rabin primality test for probable primes.
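The procedure can be sketched as follows. This is an illustrative sketch under our own naming, using Miller–Rabin as the probabilistic test; it assumes the target bit length is large enough that no candidate equals one of the small sieving primes:

```python
import random

def small_primes(limit):
    """Plain sieve for the small trial-division primes (odd primes only)."""
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            for m in range(p * p, limit + 1, p):
                flags[m] = False
    return [p for p in range(3, limit + 1) if flags[p]]

def miller_rabin(n, rounds=40):
    """Miller–Rabin: a composite passes a round with probability < 1/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # composite witness found
    return True

def random_prime(bits, window=2048):
    """Sieve a random window of odd candidates against small primes,
    then test the survivors in random order."""
    sieve_primes = small_primes(65000)
    while True:
        # random odd number with the top bit set
        base = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        survives = [True] * window       # slot i is the candidate base + 2*i
        for p in sieve_primes:
            # smallest i with base + 2*i ≡ 0 (mod p)
            i = (-base * pow(2, -1, p)) % p
            for j in range(i, window, p):
                survives[j] = False
        candidates = [base + 2 * i for i in range(window) if survives[i]]
        random.shuffle(candidates)       # test in random order
        for c in candidates:
            if miller_rabin(c):
                return c
```

The sieving step removes roughly 90% of the candidates with cheap arithmetic, so the expensive modular exponentiations of the probabilistic test run only on the survivors.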

Alternatively, a number of techniques exist for efficiently generating provable primes. These include generating prime numbers *p* for which the prime factorization of *p* − 1 or *p* + 1 is known, for example Mersenne primes (where *p* + 1 is a power of two), and their generalizations.
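One classical way a known factorization of *p* − 1 yields a primality proof is the Lucas test. The sketch below (our own naming, not from the source) turns a successful witness into a proof rather than a probabilistic verdict:

```python
import random

def lucas_prime_proof(n, prime_factors_of_n_minus_1, tries=1000):
    """Lucas test: n > 1 is prime if and only if some a with 1 < a < n
    satisfies a^(n-1) ≡ 1 (mod n) and a^((n-1)/q) ≢ 1 (mod n) for every
    prime q dividing n - 1.  Such an a has multiplicative order exactly
    n - 1, which forces n to be prime."""
    for _ in range(tries):
        a = random.randrange(2, n)
        if pow(a, n - 1, n) != 1:
            return False  # Fermat witness: n is definitely composite
        if all(pow(a, (n - 1) // q, n) != 1
               for q in prime_factors_of_n_minus_1):
            return True   # primality proven
    return None           # inconclusive after `tries` attempts
```

The caller must supply the distinct prime factors of *n* − 1, which is exactly why primes with a known factorization of *p* − 1 are convenient to generate.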

## Complexity

The sieve of Eratosthenes is generally considered the easiest sieve to implement, but for large sieving ranges it is not the fastest in terms of the number of operations. In its usual standard implementation (which may include basic wheel factorization for small primes), it can find all the primes up to *N* in time <math>O(N \log \log N)</math>, while basic implementations of the sieve of Atkin and wheel sieves run in linear time <math>O(N)</math>. Special versions of the sieve of Eratosthenes using wheel sieve principles can have this same linear <math>O(N)</math> time complexity. A special version of the sieve of Atkin, and some special versions of wheel sieves which may include sieving using the methods from the sieve of Eratosthenes, can run in time <math>O(N / \log \log N)</math>. Note that a lower asymptotic time complexity does not guarantee that a practical implementation runs faster than an algorithm with a higher one: if achieving the lower asymptotic complexity inflates the constant-factor cost of each individual operation many times over, the reduced number of operations may never make up for the extra cost per operation within practical sieving ranges.
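The smallest possible wheel, the wheel mod 2 (odds only), already shows how wheel factorization trades implementation simplicity for a constant-factor saving without changing the asymptotic complexity. A minimal sketch, under our own naming:

```python
def sieve_odds(limit):
    """Odds-only Sieve of Eratosthenes (trivial wheel mod 2): about half
    the memory and work of the plain sieve, but still O(N log log N)."""
    if limit < 2:
        return []
    size = (limit - 1) // 2          # index i represents the odd number 2*i + 1
    is_prime = [True] * (size + 1)
    is_prime[0] = False              # 1 is not prime
    i = 1
    while (2 * i + 1) ** 2 <= limit:
        if is_prime[i]:
            p = 2 * i + 1
            # first multiple to cross off is p*p = (2i+1)^2, at index 2i(i+1)
            for j in range(2 * i * (i + 1), size + 1, p):
                is_prime[j] = False
        i += 1
    return [2] + [2 * i + 1 for i in range(1, size + 1) if is_prime[i]]
```

Larger wheels (mod 6, mod 30, mod 210, …) extend the same idea, skipping multiples of more small primes at the cost of more intricate index arithmetic.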

Some sieving algorithms, such as the sieve of Eratosthenes with large amounts of wheel factorization, take much less time for smaller ranges than their asymptotic time complexity would indicate, because they have large negative constant offsets in their complexity and thus don’t reach that asymptotic complexity until far beyond practical ranges. For instance, the sieve of Eratosthenes with a combination of wheel factorization and pre-culling using small primes up to 19, applied to a range of 10<sup>19</sup>, takes about half the time predicted by its asymptotic complexity; sieving that entire range takes hundreds of core-years even for the best sieve algorithms.

The simple naive “one large sieving array” sieves of any of these sieve types take memory space of about <math>O(N)</math>, which means that (1) the sieving ranges they can handle are limited by the amount of memory available, and (2) they are typically quite slow, since RAM access speed becomes the bottleneck rather than computational speed once the array size grows beyond the CPU cache sizes. The normally implemented page-segmented sieves of both Eratosthenes and Atkin take space <math>O(N / \log N)</math> plus small sieve segment buffers, which are normally sized to fit within the CPU caches; page-segmented wheel sieves, including special variations of the sieve of Eratosthenes, typically take significantly more space than this in order to store the required wheel representations. Pritchard’s variation of the linear-time sieve of Eratosthenes/wheel sieve takes <math>O(N^{1/2} \log \log N / \log N)</math> space, and the better-time-complexity special version of the sieve of Atkin takes space <math>N^{1/2+o(1)}</math>. Sorenson shows an improvement to the wheel sieve that takes even less space, <math>O(N /((\log N)^{L} \log \log N))</math> for any <math>L > 1</math>. In general, however, the more the amount of memory is reduced, the greater the constant-factor increase in the cost in time per operation, even though the asymptotic time complexity may remain the same; the memory-reduced versions may therefore run many times slower than the non-memory-reduced versions.
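Page segmentation itself is straightforward to sketch: only the base primes up to <math>\sqrt{N}</math> and one cache-sized segment buffer are resident at a time. A minimal illustration (our own naming; the segment size stands in for a CPU-cache-sized buffer):

```python
import math

def segmented_sieve(limit, segment_size=32768):
    """Page-segmented Sieve of Eratosthenes: sieve [2, limit] one
    cache-sized segment at a time using base primes up to sqrt(limit)."""
    root = math.isqrt(limit)
    # base primes up to sqrt(limit) via a small plain sieve
    base = [True] * (root + 1)
    base[0:2] = [False, False]
    for p in range(2, math.isqrt(root) + 1):
        if base[p]:
            for m in range(p * p, root + 1, p):
                base[m] = False
    base_primes = [p for p in range(2, root + 1) if base[p]]

    primes = []
    for low in range(2, limit + 1, segment_size):
        high = min(low + segment_size - 1, limit)
        segment = [True] * (high - low + 1)
        for p in base_primes:
            # first multiple of p inside [low, high], but at least p*p
            start = max(p * p, ((low + p - 1) // p) * p)
            for m in range(start, high + 1, p):
                segment[m - low] = False
        primes.extend(n for n, flag in zip(range(low, high + 1), segment)
                      if flag)
    return primes
```

Keeping the segment buffer within the CPU cache is what lets segmented sieves avoid the RAM-bandwidth bottleneck of the one-large-array approach.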