Berlekamp–Welch algorithm


The Berlekamp–Welch algorithm, also known as the Welch–Berlekamp algorithm, is named for Elwyn R. Berlekamp and Lloyd R. Welch. The algorithm efficiently corrects errors in BCH codes and Reed–Solomon codes (which are a subset of BCH codes). Unlike many other decoding algorithms, such as the syndrome-based Berlekamp–Massey algorithm, which works with syndromes and the dual of the code, the Berlekamp–Welch decoding algorithm provides a method for decoding Reed–Solomon codes using just the generator matrix and not syndromes.


History of decoding Reed–Solomon codes

  • In 1960, Peterson developed an algorithm for decoding BCH codes. His algorithm solves the important second stage of the generalized BCH decoding procedure and is used to calculate the error locator polynomial coefficients, which in turn provide the error locator polynomial. This is crucial to the decoding of BCH codes.
  • In 1963, Gorenstein and Zierler saw that BCH codes and Reed–Solomon codes have a common generalization and that the decoding algorithm extends to the more general situation.
  • In 1968/69, Elwyn Berlekamp invented an algorithm for decoding BCH codes. James Massey recognized its application to linear feedback shift registers and simplified the algorithm. Massey termed the algorithm the LFSR Synthesis Algorithm (Berlekamp Iterative Algorithm), but it is now known as the Berlekamp–Massey algorithm.
  • In 1975, Sugiyama et al. developed a decoder based on the extended Euclidean algorithm (see Reed–Solomon error correction, Euclidean decoder).
  • In 1986, the Welch–Berlekamp algorithm was developed to solve the decoding equation of Reed–Solomon codes, using a fast method to solve a certain polynomial equation. The Berlekamp–Welch algorithm has a running time complexity of O(n^3). The following sections look at Gemmell and Sudan's exposition of the Berlekamp–Welch algorithm.

Error locator polynomial of Reed–Solomon codes

In the problem of decoding Reed–Solomon codes, the inputs are pairwise distinct evaluation points \alpha_1, \cdots, \alpha_n, where \alpha_i \in \mathbb{F}, for a code of dimension k and distance d = n - k + 1, together with a received word y = (y_1, \cdots, y_n) \in \mathbb{F}^n. Our goal is to describe an algorithm that can correct e < \tfrac{1}{2}(n-k+1) errors in polynomial time. To do so we have to find P \in \mathbb{F}[X] such that \deg(P) \leqslant k - 1 and the number of indices for which P(\alpha_i) \ne y_i is at most e. We can assume that there exists a polynomial P such that

\Delta\left(y, (P(\alpha_i))_{i=1}^{n}\right) \leqslant e < \tfrac{d}{2} = \tfrac{1}{2}(n - k + 1).

Note that the coefficients of P are the encoded information. To solve this, we use an indicator for those indices where an error may have occurred. Thus we define an error locator polynomial, E \in \mathbb{F}[X], by:

E(X) = \prod_{1 \leqslant i \leqslant n \atop y_i \ne P(\alpha_i)} (X - \alpha_i)

Note that \deg(E) \leqslant \tfrac{1}{2}(n - k). We can also claim that y_i E(\alpha_i) = P(\alpha_i) E(\alpha_i) holds for all 1 \leqslant i \leqslant n. This holds trivially whenever y_i = P(\alpha_i), and in the event that y_i \ne P(\alpha_i), both sides of the equation vanish because E(\alpha_i) = 0.

However, since E and P are both unknown, the main task of the decoding algorithm is to find P. To do this we use a seemingly useless yet very powerful method and define another polynomial Q = PE. This is because the n equations y_i E(\alpha_i) = P(\alpha_i) E(\alpha_i) in the e + k unknowns (the coefficients of E and P) are quadratic in nature. By treating the product of two unknowns, which gives rise to the quadratic terms, as a single new unknown, we increase the number of unknowns but make the equations linear. This method is called linearization and is a very powerful tool.

Thus Q \in \mathbb{F}[X] has the properties:

  1. \deg(Q) \leqslant \tfrac{1}{2}(n - k) + k - 1
  2. Q(\alpha_i) = E(\alpha_i) y_i, \qquad 1 \leqslant i \leqslant n

This helps because if we now manage to find Q and E, we can easily find P using P = \tfrac{Q}{E}. The main purpose of the Berlekamp–Welch algorithm is to find P using the degree-bounded polynomials Q and E and the properties above.
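
To make the linearization explicit, write E(X) = \sum_j e_j X^j and Q(X) = \sum_j q_j X^j in terms of their unknown coefficients (the symbols e_j and q_j are introduced here only for illustration). Each condition Q(\alpha_i) = E(\alpha_i) y_i then reads

\sum_{j=0}^{\deg(Q)} q_j \alpha_i^{\,j} \;-\; y_i \sum_{j=0}^{\deg(E)} e_j \alpha_i^{\,j} = 0, \qquad 1 \leqslant i \leqslant n,

which is linear in the unknowns q_j and e_j; this is exactly the system solved in the interpolation step of the decoder below.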

Computing E is as hard as finding the final answer P: once E is known, we can easily recover P using erasure decoding for Reed–Solomon codes. However, in some cases even the polynomial Q is as hard to find as E. For example, given Q and y (such that y_i \ne 0 for 1 \leqslant i \leqslant n), by checking the positions where Q(\alpha_i) = 0 we can find the error locations. Thus the algorithm works on the principle that, while each of the polynomials E and Q is hard to find individually, computing them together is much easier.

The Berlekamp–Welch decoder and algorithm

The Welch–Berlekamp decoder for Reed–Solomon codes consists of the Welch–Berlekamp algorithm augmented by some additional steps that prepare the received word for the algorithm and interpret the result of the algorithm.

The inputs given to the Berlekamp–Welch decoder are the integers denoting the block length n, the number of errors e such that e < \tfrac{1}{2}(n - k + 1), and the received word (y_i, \alpha_i)_{i=1}^{n} satisfying the condition that there exists at most one P with \deg(P) \leqslant k - 1 such that \Delta(y, (P(\alpha_i))_i) \leqslant e.

The output of the decoder is either the polynomial P or, in some cases, a failure. The decoder functions in two steps as follows:

  1. This step is called the interpolation step, in which the decoder computes a non-zero polynomial E that is monic of degree exactly e (that is, the coefficient of X^e is 1) and another polynomial Q with \deg(Q) \leqslant e + k - 1. These polynomials are chosen such that the condition y_i E(\alpha_i) = Q(\alpha_i) holds for all 1 \leqslant i \leqslant n. If no polynomials satisfying this condition can be computed, the decoder outputs a failure.
  2. If E \mid Q, then the decoder sets P = \tfrac{Q}{E}. If \Delta(y, (P(\alpha_i))_i) \leqslant e, the decoder outputs P. If this condition is not satisfied, or if E \nmid Q, a failure is returned by the decoder. A minimal sketch of this two-step procedure is given below.
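
The following Python code is a minimal sketch of the two steps above over a prime field GF(p), assuming p is prime and the evaluation points \alpha_i are distinct. The names berlekamp_welch, solve_mod, poly_divmod and poly_eval are chosen here for illustration and are not from any standard library; polynomials are represented as coefficient lists, lowest degree first.

```python
def berlekamp_welch(alphas, ys, k, e, p):
    """Attempt to recover P with deg(P) <= k - 1 from at most e errors.

    Returns the coefficients of P (lowest degree first) or None on failure.
    """
    # Step 1 (interpolation): solve the linear system
    #   y_i * E(alpha_i) = Q(alpha_i),  E monic of degree e,  deg(Q) <= e + k - 1,
    # in the unknown coefficients q_0..q_{e+k-1} and e_0..e_{e-1}
    # (the monic leading term of E is moved to the right-hand side).
    nvars = (e + k) + e
    system = []
    for a, y in zip(alphas, ys):
        row = [pow(a, j, p) for j in range(e + k)]          # coefficients of Q
        row += [(-y * pow(a, j, p)) % p for j in range(e)]  # coefficients of E
        row.append((y * pow(a, e, p)) % p)                  # right-hand side
        system.append(row)
    sol = solve_mod(system, nvars, p)
    if sol is None:
        return None
    q = sol[:e + k]
    big_e = sol[e + k:] + [1]                               # E, monic of degree e
    # Step 2: require E | Q and that Q / E disagrees with y in at most e places.
    candidate, rem = poly_divmod(q, big_e, p)
    if any(rem):
        return None
    errors = sum(poly_eval(candidate, a, p) != y % p for a, y in zip(alphas, ys))
    return candidate if errors <= e else None


def solve_mod(aug, nvars, p):
    """Gauss-Jordan elimination over GF(p) on an augmented matrix.

    Returns one particular solution (free variables set to 0), or None if
    the system is inconsistent.  Assumes p is prime.
    """
    rows = [r[:] for r in aug]
    pivots, r = [], 0
    for c in range(nvars):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c], p - 2, p)                     # modular inverse
        rows[r] = [(x * inv) % p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[r])]
        pivots.append((r, c))
        r += 1
    if any(rows[i][nvars] for i in range(r, len(rows))):    # inconsistent row
        return None
    sol = [0] * nvars
    for i, c in pivots:
        sol[c] = rows[i][nvars]
    return sol


def poly_divmod(num, den, p):
    """Polynomial long division modulo p (coefficients lowest degree first)."""
    num, d = num[:], len(den) - 1
    inv_lead = pow(den[-1], p - 2, p)
    quot = [0] * max(len(num) - d, 1)
    for i in range(len(num) - 1, d - 1, -1):
        coef = (num[i] * inv_lead) % p
        quot[i - d] = coef
        for j, c in enumerate(den):
            num[i - d + j] = (num[i - d + j] - coef * c) % p
    return quot, (num[:d] if d > 0 else [0])


def poly_eval(poly, x, p):
    """Evaluate a polynomial (lowest degree first) at x modulo p."""
    return sum(c * pow(x, j, p) for j, c in enumerate(poly)) % p
```

For instance, on the worked example at the end of this article interpreted over GF(7), berlekamp_welch([1, 2, 3, 4], [4, 3, 4, 1], k=2, e=1, p=7) returns [5, 6], i.e. P(X) = 5 + 6X \equiv 5 - X \pmod{7}.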

According to the algorithm, whenever it does not output a failure, the polynomial P it outputs is the correct, desired polynomial. To prove that the algorithm always outputs the desired polynomial, we need to prove a few claims made while describing the algorithm.

Claim 1. There exists a pair of polynomials (E, Q) that satisfies Step 1 of the BW algorithm with \tfrac{Q}{E} = P.

Let E be the error-locating polynomial for P:

E(X) = X^{e - \Delta(y, (P(\alpha_i))_i)} \prod_{1 \leqslant i \leqslant n \atop y_i \ne P(\alpha_i)} \left( X - \alpha_i \right)

Notice that E has the following properties by definition:

\deg(E) = e, \qquad E(\alpha_i) = 0 \Leftrightarrow y_i \ne P(\alpha_i).

Now define Q = PE and note that:

\deg(Q) \leqslant \deg(P) + \deg(E) \leqslant e + k - 1.

We can now claim that the condition y_i E(\alpha_i) = Q(\alpha_i) from the first step of the BW algorithm holds. If E(\alpha_i) = 0, then Q(\alpha_i) = P(\alpha_i)E(\alpha_i) = y_i E(\alpha_i) = 0. For E(\alpha_i) \ne 0 we have P(\alpha_i) = y_i and therefore Q(\alpha_i) = P(\alpha_i)E(\alpha_i) = y_i E(\alpha_i), just as we claimed.

The above claim, however, only establishes that there exists a pair of polynomials E and Q such that P = \tfrac{Q}{E}. It does not yet guarantee that the algorithm discussed above will indeed output such a pair of polynomials. We therefore move on to another claim that, together with the claim above, establishes this fact and thereby proves the correctness of the algorithm.

Claim 2. If (E_1, Q_1) and (E_2, Q_2) are two distinct solutions that satisfy the first step of the Berlekamp–Welch algorithm, then \tfrac{Q_1}{E_1} = \tfrac{Q_2}{E_2}.

First note that

\deg(Q_1 E_2), \deg(Q_2 E_1) \leqslant 2e + k - 1.

Then we define:

R := Q_1E_2 - Q_2E_1

Note that \deg(R) \leqslant 2e + k - 1. From Step 1 of the Berlekamp–Welch algorithm we also know that y_i E_1(\alpha_i) = Q_1(\alpha_i) and y_i E_2(\alpha_i) = Q_2(\alpha_i). Now for all i \in \{1, \cdots, n\} we calculate:

\begin{align}
R(\alpha_i) &= Q_1(\alpha_i)E_2(\alpha_i) - Q_2(\alpha_i)E_1(\alpha_i) \\
            &= y_i E_1(\alpha_i)E_2(\alpha_i) - y_i E_2(\alpha_i)E_1(\alpha_i) \\
            &= 0
\end{align}

Thus R has at least n roots; on the other hand,

\deg(R(X)) \leqslant 2e + k - 1 < 2 \cdot \tfrac{1}{2}(n - k + 1) + k - 1 = n.

Therefore, R is the zero polynomial, which means that Q_1 E_2 and Q_2 E_1 are identical. Since E_1 and E_2 are non-zero, we can write \tfrac{Q_1}{E_1} = \tfrac{Q_2}{E_2}, as per our initial claim.

Thus, based on the above claims, we can safely state that whenever the Berlekamp–Welch algorithm outputs a polynomial P(X), that output is correct.

We can now claim that the algorithm can be implemented with a running time of O(n^3). This can be seen as follows: in Step 1 of the algorithm, the polynomials Q and E have e + k and e + 1 unknown coefficients respectively, and the constraints y_i E(\alpha_i) = Q(\alpha_i) for all 1 \leqslant i \leqslant n act as linear equations in these unknowns. We therefore get a system of n linear equations in 2e + k + 1 < n + 2 unknowns. By our first claim, this system of equations has a solution since \deg(E) = e. It can be solved in O(n^3) time, for example by Gaussian elimination. Finally, Step 2 of the algorithm can also be implemented in O(n^3) time by polynomial long division. Hence the Berlekamp–Welch algorithm can be used to uniquely decode any [n,k]_q Reed–Solomon code in O(n^3) time, correcting up to e < \tfrac{1}{2}(n - k + 1) errors.

Example

Consider a simple example where a redundant set of points is used to represent the line y = 5 - x, and one of the points is incorrect. The points that the algorithm receives as input are (1,4), (2,3), (3,4), (4,1), where (3,4) is the defective point. The algorithm must solve the following system of equations:

\begin{align}
Q(1) &= 4 \cdot E(1) \\
Q(2) &= 3 \cdot E(2) \\
Q(3) &= 4 \cdot E(3) \\
Q(4) &= 1 \cdot E(4)
\end{align}

Given a solution pair (Q, E) to this system of equations, it is evident that at each of the points x = 1, 2, 3, 4 one of the following must be true:

Q(\alpha_i) = E(\alpha_i) = 0, \quad \text{or} \quad P(\alpha_i) = \frac{Q(\alpha_i)}{E(\alpha_i)} = y_i.

Since E is defined as having degree only one, the former can be true at only one point. Therefore, P(\alpha_i) = y_i at the other three points.

Letting E(x) = x + e_0 and Q(x) = q_0 + q_1 x + q_2 x^2, we can rewrite the system:

\begin{cases}
q_0 + q_1 + q_2 - 4e_0 - 4 = 0 \\
q_0 + 2q_1 + 4q_2 - 3e_0 - 6 = 0 \\
q_0 + 3q_1 + 9q_2 - 4e_0 - 12 = 0 \\
q_0 + 4q_1 + 16q_2 - e_0 - 4 = 0
\end{cases}

This system can be solved through Gaussian elimination, and gives the values:

q_0 = -15, q_1 = 8, q_2 = -1, e_0 = -3

Thus:

Q = -x^2 + 8x - 15, \qquad E = x - 3, \quad \text{and} \quad \frac{Q}{E} = P = 5 - x.

Since 5 - x fits three of the four given points, it is the most likely candidate for the original polynomial.
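
The arithmetic above can be double-checked with a short script using exact rational arithmetic; this is only a verification sketch, and the variable names q0, q1, q2, e0 mirror the unknowns of the system above.

```python
# Verify the worked example: solve the 4x4 linear system exactly and
# confirm the coefficients of Q and E found above.
from fractions import Fraction

# Augmented matrix of the system  q0 + q1*a + q2*a^2 - y*e0 = y*a
# for the four points (1,4), (2,3), (3,4), (4,1).
points = [(1, 4), (2, 3), (3, 4), (4, 1)]
rows = [[Fraction(1), Fraction(a), Fraction(a * a), Fraction(-y), Fraction(y * a)]
        for a, y in points]

# Gauss-Jordan elimination; the system has full rank for these points.
for c in range(4):
    piv = next(i for i in range(c, 4) if rows[i][c] != 0)
    rows[c], rows[piv] = rows[piv], rows[c]
    lead = rows[c][c]
    rows[c] = [x / lead for x in rows[c]]
    for i in range(4):
        if i != c:
            f = rows[i][c]
            rows[i] = [x - f * pivot_x for x, pivot_x in zip(rows[i], rows[c])]

q0, q1, q2, e0 = (rows[i][4] for i in range(4))
print(q0, q1, q2, e0)   # -15 8 -1 -3, matching the values above
```

Dividing Q = -x^2 + 8x - 15 by E = x - 3 then indeed returns P = 5 - x.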
