Generalized minimum-distance decoding
In coding theory, generalized minimum-distance (GMD) decoding provides an efficient algorithm for decoding concatenated codes, which is based on using an errors-and-erasures decoder for the outer code.
A naive decoding algorithm for concatenated codes cannot be an optimal way of decoding because it does not take into account the information that maximum likelihood decoding (MLD) gives. In other words, in the naive algorithm, inner received codewords are treated the same regardless of the difference between their Hamming distances. Intuitively, the outer decoder should place higher confidence in symbols whose inner encodings are close to the received word. David Forney in 1966 devised a better algorithm called generalized minimum distance (GMD) decoding, which makes better use of this information. This method is achieved by measuring the confidence of each received codeword, and erasing symbols whose confidence is below a desired value. The GMD decoding algorithm was one of the first examples of soft-decision decoding. We will present three versions of the GMD decoding algorithm. The first two will be randomized algorithms while the last one will be a deterministic algorithm.
Setup
- Hamming distance: Given two vectors $y_1, y_2 \in \Sigma^n$, the Hamming distance between $y_1$ and $y_2$, denoted by $\Delta(y_1, y_2)$, is defined to be the number of positions in which $y_1$ and $y_2$ differ.
- Minimum distance: Let $C \subseteq \Sigma^n$ be a code. The minimum distance of code $C$ is defined to be $d = \min \Delta(c_1, c_2)$ where $c_1 \neq c_2 \in C$.
- Code concatenation: Given $m = (m_1, \ldots, m_K) \in [Q]^K$, consider two codes which we call outer code and inner code, $C_\text{out} : [Q]^K \to [Q]^N$ and $C_\text{in} : [q]^k \to [q]^n$ with $Q = q^k$, and their distances are $D$ and $d$. A concatenated code can be achieved by $C_\text{out} \circ C_\text{in}(m) = (C_\text{in}(C_\text{out}(m)_1), \ldots, C_\text{in}(C_\text{out}(m)_N))$ where $C_\text{out}(m) = ((C_\text{out}(m))_1, \ldots, (C_\text{out}(m))_N)$. Finally we will take $C_\text{out}$ to be a Reed–Solomon code, which has an errors-and-erasures decoder, and $k = O(\log N)$, which in turn implies that MLD on the inner code will be polynomial in $N$ time.
- Maximum likelihood decoding (MLD): MLD is a decoding method for error correcting codes, which outputs the codeword closest to the received word in Hamming distance. The MLD function, denoted by $D_{MLD} : \Sigma^n \to C$, is defined as follows: for every $y \in \Sigma^n$, $D_{MLD}(y) = \arg\min_{c \in C} \Delta(c, y)$.
- Probability density function: A probability distribution $\Pr$ on a sample space $S$ is a mapping from events of $S$ to real numbers such that $\Pr[A] \geq 0$ for any event $A$, $\Pr[S] = 1$, and $\Pr[A \cup B] = \Pr[A] + \Pr[B]$ for any two mutually exclusive events $A$ and $B$.
- Expected value: The expected value of a discrete random variable $X$ is $\mathbb{E}[X] = \sum_x x \Pr[X = x]$.
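As an illustration, the definitions above translate directly into a short Python sketch. The names below are ours, and a code is represented as an explicit list of codewords, which is feasible for the inner code since $k = O(\log N)$ keeps the number of inner codewords polynomial in $N$:

```python
def hamming_distance(x, y):
    """Number of positions in which x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))


def minimum_distance(code):
    """Minimum Hamming distance between any two distinct codewords."""
    return min(hamming_distance(c1, c2)
               for i, c1 in enumerate(code)
               for c2 in code[i + 1:])


def mld(code, received):
    """Maximum likelihood decoding: the codeword closest to the received word."""
    return min(code, key=lambda c: hamming_distance(c, received))
```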
Randomized algorithm
Consider the received word $\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N$, which was corrupted by a noisy channel. The following is the algorithm description for the general case. In this algorithm, we can decode $\mathbf{y}$ by just declaring an erasure at every bad position and running the errors-and-erasures decoding algorithm for $C_\text{out}$ on the resulting vector.
Randomized_Decoder
Given : $\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N$.
- For every $1 \leq i \leq N$, compute $y_i' = MLD_{C_\text{in}}(y_i)$.
- Set $\omega_i = \min\left(\Delta(C_\text{in}(y_i'), y_i), \frac{d}{2}\right)$.
- For every $1 \leq i \leq N$, repeat: With probability $\frac{2\omega_i}{d}$, set $y_i'' \leftarrow\ ?$, otherwise set $y_i'' = y_i'$.
- Run the errors-and-erasures algorithm for $C_\text{out}$ on $\mathbf{y}'' = (y_1'', \ldots, y_N'')$.
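The steps above can be sketched in Python as follows, reusing the helpers from the Setup sketch. Here `inner_code` is the explicit list of inner codewords, `d` is the inner minimum distance, and `outer_decoder` stands in for an errors-and-erasures decoder for $C_\text{out}$; for simplicity the sketch identifies each outer symbol with its inner encoding, so `mld` returns the codeword itself:

```python
import random

ERASURE = "?"  # erasure marker understood by the (assumed) outer decoder


def randomized_decoder(y_blocks, inner_code, d, outer_decoder):
    """One run of Randomized_Decoder on y = (y_1, ..., y_N)."""
    y_double_prime = []
    for y_i in y_blocks:
        y_prime = mld(inner_code, y_i)                    # step 1: inner MLD
        w_i = min(hamming_distance(y_prime, y_i), d / 2)  # step 2: confidence
        if random.random() < 2 * w_i / d:                 # step 3: erase with
            y_double_prime.append(ERASURE)                #   probability 2*w_i/d
        else:
            y_double_prime.append(y_prime)
    return outer_decoder(y_double_prime)                  # step 4: outer decode
```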
Theorem 1. Let $\mathbf{y}$ be a received word such that there exists a codeword $\mathbf{c} = (c_1, \ldots, c_N) \in C_\text{out} \circ C_\text{in} \subseteq [q^n]^N$ such that $\Delta(\mathbf{c}, \mathbf{y}) < \frac{Dd}{2}$. Then the deterministic GMD algorithm outputs $\mathbf{c}$.
Note that a naive decoding algorithm for concatenated codes can correct up to $\frac{Dd}{4}$ errors.
- Lemma 1. Let the assumption in Theorem 1 hold. And if $\mathbf{y}''$ has $e'$ errors and $s'$ erasures (when compared with $\mathbf{c}$) after Step 1, then $\mathbb{E}\left[2e' + s'\right] < D.$
Remark. If $2e' + s' < D$, then the algorithm in Step 2 will output $\mathbf{c}$. The lemma above says that in expectation, this is indeed the case. Note that this is not enough to prove Theorem 1, but can be crucial in developing future variations of the algorithm.
Proof of lemma 1. For every $1 \leq i \leq N$, define $e_i = \Delta(y_i, c_i)$. This implies that
$$\sum_{i=1}^N e_i < \frac{Dd}{2} \qquad (1)$$
Next for every $1 \leq i \leq N$, we define two indicator variables:
$$X_i^? = 1 \text{ if and only if } y_i'' =\ ?,$$
$$X_i^e = 1 \text{ if and only if } C_\text{in}(y_i'') \neq c_i \text{ and } y_i'' \neq\ ?.$$
We claim that we are done if we can show that for every $1 \leq i \leq N$:
$$\mathbb{E}\left[2X_i^e + X_i^?\right] \leq \frac{2e_i}{d} \qquad (2)$$
Clearly, by definition
$$e' = \sum_i X_i^e \quad \text{and} \quad s' = \sum_i X_i^?.$$
Further, by the linearity of expectation, we get
$$\mathbb{E}[2e' + s'] \leq \frac{2}{d}\sum_i e_i < D,$$
where the final inequality follows from (1).
To prove (2) we consider two cases: the $i$-th block is correctly decoded (Case 1), or the $i$-th block is incorrectly decoded (Case 2):
Case 1: $(c_i = C_\text{in}(y_i'))$
Note that if $y_i'' =\ ?$ then $X_i^e = 0$, and $\Pr[y_i'' =\ ?] = \frac{2\omega_i}{d}$ implies $\mathbb{E}[X_i^?] = \Pr[X_i^? = 1] = \frac{2\omega_i}{d}$ and $\mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 0$.
Further, by definition we have
$$\omega_i = \min\left(\Delta(C_\text{in}(y_i'), y_i), \frac{d}{2}\right) \leq \Delta(C_\text{in}(y_i'), y_i) = \Delta(c_i, y_i) = e_i,$$
so $\mathbb{E}[2X_i^e + X_i^?] = \frac{2\omega_i}{d} \leq \frac{2e_i}{d}$, which proves (2) in this case.
Case 2: $(c_i \neq C_\text{in}(y_i'))$
In this case, $\mathbb{E}[X_i^?] = \frac{2\omega_i}{d}$ and $\mathbb{E}[X_i^e] = \Pr[X_i^e = 1] = 1 - \frac{2\omega_i}{d}$.
Since $c_i \neq C_\text{in}(y_i')$, we have $e_i + \omega_i \geq d$. This follows from a case analysis on whether $\omega_i = \Delta(C_\text{in}(y_i'), y_i) < \frac{d}{2}$ or not: if so, the triangle inequality gives $e_i + \omega_i \geq \Delta(c_i, C_\text{in}(y_i')) \geq d$; otherwise $\omega_i = \frac{d}{2}$, and since $y_i'$ is the maximum likelihood choice, $e_i \geq \Delta(C_\text{in}(y_i'), y_i) \geq \frac{d}{2} = d - \omega_i$.
Finally, this implies
$$\mathbb{E}[2X_i^e + X_i^?] = 2 - \frac{2\omega_i}{d} \leq \frac{2e_i}{d}.$$
In the following sections, we will finally show that the deterministic version of the algorithm above can do unique decoding of $C_\text{out} \circ C_\text{in}$ up to half its design distance.
Modified randomized algorithm
Note that, in the previous version of the GMD algorithm in step "3", we do not really need to use "fresh" randomness for each $i$. Now we come up with another randomized version of the GMD algorithm that uses the same randomness for every $1 \leq i \leq N$, as described below.
Modified_Randomized_Decoder
Given : $\mathbf{y} = (y_1, \ldots, y_N) \in [q^n]^N$, pick $\theta \in [0, 1]$ uniformly at random. Then for every $1 \leq i \leq N$:
- Set $y_i' = MLD_{C_\text{in}}(y_i)$.
- Compute $\omega_i = \min\left(\Delta(C_\text{in}(y_i'), y_i), \frac{d}{2}\right)$.
- If $\theta < \frac{2\omega_i}{d}$, set $y_i'' \leftarrow\ ?$, otherwise set $y_i'' = y_i'$.
- Run the errors-and-erasures algorithm for $C_\text{out}$ on $\mathbf{y}'' = (y_1'', \ldots, y_N'')$.
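Compared with the previous sketch, only the erasure step changes: a single threshold $\theta$ is drawn once and shared by every block (same illustrative helpers and assumptions as before):

```python
import random


def modified_randomized_decoder(y_blocks, inner_code, d, outer_decoder):
    """Modified_Randomized_Decoder: one shared theta for all N blocks."""
    theta = random.random()  # theta picked uniformly from [0, 1]
    y_double_prime = []
    for y_i in y_blocks:
        y_prime = mld(inner_code, y_i)
        w_i = min(hamming_distance(y_prime, y_i), d / 2)
        # erase exactly when theta falls below 2*w_i/d
        y_double_prime.append(ERASURE if theta < 2 * w_i / d else y_prime)
    return outer_decoder(y_double_prime)
```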
For the proof of Lemma 1, we only use the randomness to show that
$$\Pr[y_i'' =\ ?] = \frac{2\omega_i}{d}.$$
In this version of the GMD algorithm, we note that
$$\Pr[y_i'' =\ ?] = \Pr\left[\theta \in \left[0, \frac{2\omega_i}{d}\right]\right] = \frac{2\omega_i}{d}.$$
The second equality above follows from the choice of $\theta$. The proof of Lemma 1 can also be used to show $\mathbb{E}[2e' + s'] < D$ for version 2 of the GMD algorithm. In the next section, we will see how to get a deterministic version of the GMD algorithm by choosing $\theta$ from a polynomially sized set as opposed to the current infinite set $[0, 1]$.
Deterministic algorithm
Let $Q = \{0, 1\} \cup \left\{\frac{2\omega_1}{d}, \ldots, \frac{2\omega_N}{d}\right\}$, where $\omega_i$ is defined as in the algorithms above. For each $\theta \in Q$, repeat the following steps:
- Compute $y_i' = MLD_{C_\text{in}}(y_i)$ for every $1 \leq i \leq N$.
- Set $\omega_i = \min\left(\Delta(C_\text{in}(y_i'), y_i), \frac{d}{2}\right)$ for every $1 \leq i \leq N$.
- If $\theta < \frac{2\omega_i}{d}$, set $y_i'' \leftarrow\ ?$, otherwise set $y_i'' = y_i'$.
- Run the errors-and-erasures algorithm for $C_\text{out}$ on $\mathbf{y}'' = (y_1'', \ldots, y_N'')$. Let $c_\theta$ be the codeword in $C_\text{out} \circ C_\text{in}$ corresponding to the output of the algorithm, if any.
- Among all the $c_\theta$ output in the previous step, output the one closest to $\mathbf{y}$.
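Under the same assumptions as the earlier sketches, the deterministic loop can be written as follows. Here `outer_decoder` is assumed to return `None` when decoding fails, and `concat_encode` is a hypothetical helper that re-encodes a candidate through $C_\text{out} \circ C_\text{in}$ so it can be compared against $\mathbf{y}$:

```python
def deterministic_decoder(y_blocks, inner_code, d, outer_decoder, concat_encode):
    """Try every threshold theta in Q; return the candidate closest to y."""
    y_primes = [mld(inner_code, y_i) for y_i in y_blocks]
    ws = [min(hamming_distance(yp, y_i), d / 2)
          for yp, y_i in zip(y_primes, y_blocks)]
    thresholds = {0.0, 1.0} | {2 * w / d for w in ws}      # the set Q
    best, best_dist = None, float("inf")
    for theta in sorted(thresholds):
        y_double_prime = [ERASURE if theta < 2 * w / d else yp
                          for yp, w in zip(y_primes, ws)]
        c_theta = outer_decoder(y_double_prime)            # may fail -> None
        if c_theta is None:
            continue
        dist = sum(hamming_distance(b_i, y_i)              # distance of c_theta to y
                   for b_i, y_i in zip(concat_encode(c_theta), y_blocks))
        if dist < best_dist:
            best, best_dist = c_theta, dist
    return best
```

Since $|Q| \leq N + 2$, the loop body runs at most $N + 2$ times, which is what makes this derandomization polynomial.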
Since every iteration of the first four steps runs in polynomial time, the algorithm above can also be computed in polynomial time. Specifically, each call to an errors-and-erasures decoder against $< \frac{dD}{2}$ errors takes $T_\text{out}$ time. Finally, the runtime of the algorithm above is $O(NQn^{O(1)} + NT_\text{out})$, where $T_\text{out}$ is the running time of the outer errors-and-erasures decoder.