Parallel LDPC Decoder

Andreev; Alexander; et al.

Patent Application Summary

U.S. patent application number 13/069,105 was filed with the patent office on 2011-03-22 for a parallel LDPC decoder and was published on 2011-07-14. This patent application is currently assigned to LSI CORPORATION. Invention is credited to Alexander Andreev, Sergey Gribok, and Igor Vikhliantsev.

Application Number: 13/069,105
Publication Number: 2011/0173510
Family ID: 39477303
Filed Date: 2011-03-22

United States Patent Application 20110173510
Kind Code A1
Andreev; Alexander; et al.    July 14, 2011

Parallel LDPC Decoder

Abstract

An LDPC decoder that implements an iterative message-passing algorithm, where the improvement includes a pipeline architecture such that the decoder accumulates results for row operations during column operations, such that additional time and memory are not required to store results from the row operations beyond that required for the column operations.


Inventors: Andreev; Alexander; (San Jose, CA); Vikhliantsev; Igor; (San Jose, CA); Gribok; Sergey; (Santa Clara, CA)
Assignee: LSI CORPORATION, Milpitas, CA

Family ID: 39477303
Appl. No.: 13/069105
Filed: March 22, 2011

Related U.S. Patent Documents

Application Number    Filing Date    Patent Number
11/565,670            Dec 1, 2006    7,934,139
13/069,105 (the present application)

Current U.S. Class: 714/752 ; 714/E11.032
Current CPC Class: H03M 13/1137 20130101; H03M 13/116 20130101; H03M 13/114 20130101
Class at Publication: 714/752 ; 714/E11.032
International Class: H03M 13/11 20060101 H03M013/11; G06F 11/10 20060101 G06F011/10

Claims



1. In an LDPC decoder implementing an iterative message-passing algorithm, the improvement comprising a pipeline architecture such that the decoder accumulates results for row operations during column operations, such that additional time and memory are not required to store results from the row operations beyond that required for the column operations.
Description



FIELD

[0001] This patent application is a continuation of, and claims all rights and priority on, prior pending U.S. patent application Ser. No. 11/565,670 filed Dec. 1, 2006. This invention relates to the field of integrated circuit fabrication. More particularly, this invention relates to an efficient, parallel low-density parity-check (LDPC) decoder for a special class of parity-check matrices that reduces the amount of memory and time required for the necessary calculations.

BACKGROUND

[0002] An LDPC code is a linear error-correcting code that can be represented by an (m,n) parity-check matrix with a relatively small, fixed number of ones (nonzero entries, for codes over an arbitrary GF(q)) in each row and column, where m is the number of check bits and n is the code length in bits.

[0003] The best-known algorithm for decoding LDPC codes is the iterative message-passing algorithm. Each iteration of this algorithm consists of two stages. In stage 1 (the row operations), the algorithm computes messages for all of the check nodes (the rows). In stage 2 (the column operations), the algorithm computes messages for all of the bit nodes (the columns) and sends them back to the check nodes associated with the given bit nodes. There are many different implementations of this message-passing algorithm, but all of them use this two-stage operation. Further, in each of these implementations, the second stage starts only after all of the messages for all of the rows have been calculated.
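Purely for illustration (this sketch is ours, not part of the application), the conventional two-stage schedule can be written in Python roughly as follows; H, md_m, md_g, and the toy update rules are hypothetical stand-ins, and the point is only that stage 2 cannot begin until every row result in md_R has been computed and buffered.

    # Hypothetical sketch of the conventional (non-pipelined) schedule.
    def iterate_two_stage(H, md_m, md_g):
        m, n = len(H), len(H[0])
        md_R = {}                              # every row result is buffered here
        # Stage 1: row operations -- one message per check node (row).
        for w in range(m):
            prod = 1.0
            for v in range(n):
                if H[w][v]:
                    prod *= md_g[(v, w)]       # toy check-node rule
            md_R[w] = prod
        # Stage 2: column operations -- may start only after ALL of stage 1
        # is finished, so md_R must be stored in full in the meantime.
        for v in range(n):
            for w in range(m):
                if H[w][v]:
                    md_g[(v, w)] = md_m[v] * md_R[w]   # toy bit-node rule
        return md_g, md_R

    # toy usage
    H = [[1, 1, 0], [0, 1, 1]]
    md_m = [0.9, 1.1, 0.8]
    md_g = {(v, w): md_m[v] for w in range(len(H)) for v in range(len(H[0])) if H[w][v]}
    iterate_two_stage(H, md_m, md_g)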

[0004] As with all information processing operations, it is desirable for the procedure to operate as quickly as possible, while consuming as few resources as possible. Unfortunately, LDPC codes such as those described above typically require a relatively significant overhead in terms of the time and the memory required for them to operate.

[0005] What is needed is an LDPC decoder that operates in a more efficient manner, such as by reducing the amount of time or the amount of memory that is required by the decoding operation.

SUMMARY

[0006] The above and other needs are met by an LDPC decoder that implements an iterative message-passing algorithm, where the improvement includes a pipeline architecture such that the decoder accumulates results for row operations during column operations, such that additional time and memory are not required to store results from the row operations beyond that required for the column operations.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] Further advantages of the invention are apparent by reference to the detailed description when considered in conjunction with the figures, which are not to scale so as to more clearly show the details, wherein like reference numbers indicate like elements throughout the several views, and wherein:

[0008] FIG. 1 is a functional block diagram of an LDPC decoder according to an embodiment of the present invention.

[0009] FIG. 2 is an example of an LDPC parity-check matrix according to an embodiment of the present invention.

DETAILED DESCRIPTION

[0010] The LDPC algorithm described herein accumulates results for the row operations during the column operations, so that additional time and memory are not required to store the results from the row operations while the column operations are conducted. One embodiment of the method according to the present invention is presented below by way of example. The method is described with reference to a hardware embodiment of the invention, as depicted in FIG. 1.

Initialization Step

[0011] For each parity bit w and code bit v, calculate:

md_m[v] = Pv(0) / Pv(1),

md_g[v][w] = md_m[v], and

md_R[w] = f^{-1}( Π_{v ∈ O(w)} f(md_m[v]) ),

[0012] where Pv(0) and Pv(1) are the probabilities (from the Viterbi decoder) that bit v is equal to 0 or 1, respectively, O(v) denotes the set of all parity bits w that include code bit v, O(w) denotes the set of all code bits v that participate in parity bit w, and f is the Gallager function defined in paragraph [0014] below.
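As a minimal Python sketch of this initialization (ours, not the application's; it assumes the reconstructed form of md_R above, i.e. the same combination rule as formula (4) below, and the container names P and O_w are hypothetical):

    from math import prod

    def f(x):                        # Gallager function, defined in paragraph [0014]
        return (1.0 + x) / (1.0 - x)

    def f_inv(y):                    # inverse Gallager function
        return (y - 1.0) / (y + 1.0)

    def initialize(P, O_w):
        # P[v] = (Pv(0), Pv(1)) from the Viterbi decoder; O_w[w] = code bits in check w
        md_m = {v: p0 / p1 for v, (p0, p1) in P.items()}
        md_g = {(v, w): md_m[v] for w, bits in O_w.items() for v in bits}
        md_R = {w: f_inv(prod(f(md_m[v]) for v in bits)) for w, bits in O_w.items()}
        return md_m, md_g, md_R

    # toy usage: two checks over three code bits
    P = {0: (0.6, 0.4), 1: (0.3, 0.7), 2: (0.8, 0.2)}
    O_w = {0: [0, 1], 1: [1, 2]}
    md_m, md_g, md_R = initialize(P, O_w)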

First Step of Iteration Process

[0013] Compute the following:

S[v] = ( Π_{w ∈ O(v)} md_R[w] / md_g[v][w] ) · md_m[v]    (1)

loc_item[v][w] = md_R[w] / md_g[v][w]    (2)

md_g_new[v][w] = S[v] / loc_item[v][w]    (3)

md_R_new[w] = f^{-1}( Π_{v ∈ O(w)} f(md_g_new[v][w]) )    (4)

[0014] where

f(x) = (1 + x) / (1 - x)

(the Gallager function), O(w) is the set of all code bits v in the parity bit w, and O(v) is the set of all parity bits w that include the code bit v.
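Calculations (1) through (4) might be rendered in Python roughly as below (an illustrative sketch under our own naming, not the application's implementation; note how each new md_g_new value's contribution to (4) can be accumulated immediately, which is the basis of the pipelining described next):

    from math import prod

    def f(x):                                   # Gallager function
        return (1.0 + x) / (1.0 - x)

    def f_inv(y):
        return (y - 1.0) / (y + 1.0)

    def bit_operations(v, O_v, md_m, md_g, md_R):
        # Calculations (1) and (2) for a single code bit v.
        loc_item = {w: md_R[w] / md_g[(v, w)] for w in O_v[v]}          # (2)
        S = prod(loc_item[w] for w in O_v[v]) * md_m[v]                 # (1)
        return S, loc_item

    def check_accumulate(v, O_v, S, loc_item, md_g_new, acc):
        # Calculations (3) and (4) for bit v; (4) is accumulated check by check.
        for w in O_v[v]:
            md_g_new[(v, w)] = S / loc_item[w]                          # (3)
            acc[w] = acc.get(w, 1.0) * f(md_g_new[(v, w)])              # running product for (4)
        # after every v in O(w) has been visited:  md_R_new[w] = f_inv(acc[w])   (4)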

[0015] Calculations (1) and (2) above are performed for v=i, and calculations (3) and (4) are performed for v=i-1. Then, calculations (1) and (2) are performed for v=i+1, and calculations (3) and (4) are performed for v=i, and so on, through a pipeline architecture in the arithmetic unit, depicted in FIG. 1. When all of the code bits v have been processed, the values are assigned as given below,

md_g[v][w] = md_g_new[v][w]    (6)

md_R[w] = md_R_new[w]    (7)

[0016] for each code bit v and parity bit w, which completes a single iteration as described above. The so-called "hard decision" for each code bit v is made during this same iteration, where:

hard_decision[v] = 0 if sign( Π_{w ∈ O(v)} loc_item[v][w] ) = 1    (8)

[0017] and

hard_decision[v] = 1 if sign( Π_{w ∈ O(v)} loc_item[v][w] ) = -1    (9)

[0018] The products in formulas (8) and (9) have already been calculated during calculation (1) for S[v]. Preferably, the calculations are performed in the logarithmic domain, so that all of the products are replaced by sums in the arithmetic unit.
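The pipelined ordering of paragraphs [0015] through [0018] might be sketched as follows (Python, illustrative only; stage12 and stage34 are placeholders standing in for calculations (1)-(2) and (3)-(4), and the hard decision reuses the product already formed for S[v]; in the logarithmic domain the sign test becomes a test on a sum):

    def pipelined_iteration(code_bits, stage12, stage34):
        # Calculations (1)-(2) for bit i overlap with calculations (3)-(4) for bit i-1.
        prev = None
        for v in code_bits:
            cur = stage12(v)                # (1) and (2) for v = i
            if prev is not None:
                stage34(*prev)              # (3) and (4) for v = i - 1
            prev = (v, cur)
        if prev is not None:
            stage34(*prev)                  # drain the pipeline for the last bit

    def hard_decision(product_for_S):
        # (8)/(9): the sign of the product already formed while computing S[v]
        return {v: 0 if p > 0 else 1 for v, p in product_for_S.items()}

    # toy usage with stand-in stages
    trace = []
    pipelined_iteration([0, 1, 2],
                        stage12=lambda v: v * 2.0,
                        stage34=lambda v, x: trace.append((v, x)))
    hard_decision({0: 3.5, 1: -0.2, 2: 1.1})    # -> {0: 0, 1: 1, 2: 0}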

Parallel Architecture

[0019] One embodiment of the LDPC decoder as described herein includes a controller, an input FIFO (first-in, first-out buffer, receiving data from the Viterbi decoder), an output FIFO (first-in, first-out buffer for the final hard decision, or for passing data to another process, such as a Reed-Solomon computation), a pipeline, two interleavers, and t functional units of two types, Bit units and Par units, all as depicted in FIG. 1. The Bit units calculate data on the bit nodes, and the Par units calculate data on the check nodes.

[0020] Each Par unit preferably contains pipelined memories for storing values of md_R[w] and md_R_new[w]. Each Bit unit preferably contains pipelined memories for storing values of S[v], md_m[v], and loc_item. Each unit is preferably pipelined, meaning that it can store data for a few different nodes at the same time. In the embodiment depicted, the arithmetic unit is shown separately for simplicity and to show the more relevant connections. However, the present invention is applicable to a wide variety of arithmetic unit architectures that are capable of performing calculations (1) through (9) above. Also, in the embodiment as depicted in FIG. 1, the memories are embedded in the arithmetic unit, but in other embodiments they could be separate from the arithmetic unit.
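Purely as an illustration of how the per-unit memories listed above might be organized in a software model (the class and field names here are ours, not the application's, and t = 8 is a hypothetical value):

    from dataclasses import dataclass, field

    @dataclass
    class ParUnit:
        # pipelined memories of a Par (check-node) unit
        md_R: dict = field(default_factory=dict)       # md_R[w]
        md_R_new: dict = field(default_factory=dict)   # md_R_new[w]

    @dataclass
    class BitUnit:
        # pipelined memories of a Bit (bit-node) unit
        S: dict = field(default_factory=dict)          # S[v]
        md_m: dict = field(default_factory=dict)       # md_m[v]
        loc_item: dict = field(default_factory=dict)   # loc_item[v][w]

    t = 8                                              # hypothetical number of units of each type
    bit_units = [BitUnit() for _ in range(t)]
    par_units = [ParUnit() for _ in range(t)]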

[0021] A special class of (m,n) parity-check matrices H is used for the LDPC codes, which matrices can be represented as an (M,N) array of (r,r) permutation cells H_{i,j}, where m = Mr, n = Nr, and r (mod t) = 0. An example of the matrix H is given in FIG. 2, where M=3, N=7, r=8, m=24, and n=56. Each permutation cell contains exactly one entry equal to one in each of its rows and columns. To reduce the amount of circuitry required, circulant permutation matrices are used in one embodiment, which matrices are determined by the formula:

p(j)=p(0)+j(mod r)

[0022] where p(j) is the index of the column with a value of one in the j-th row. For example, p(0)=2 for the upper-left cell in FIG. 2 (where the counting of both rows and columns starts with zero). Thus, the initial index p(0) of the one in the first row determines each circulant permutation matrix. Similarly, the function c(j) returns the index of the row with a value of one in the j-th column.
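A short Python illustration of a circulant permutation cell and of the p(·)/c(·) indexing (the helper names are ours; the numeric values follow the FIG. 2 example):

    def circulant_cell(p0, r):
        # r-by-r circulant permutation cell with a one at column p(j) = (p0 + j) mod r in row j
        return [[1 if col == (p0 + j) % r else 0 for col in range(r)] for j in range(r)]

    def p(j, p0, r):            # column index of the one in row j
        return (p0 + j) % r

    def c(j, p0, r):            # row index of the one in column j (inverse of p)
        return (j - p0) % r

    cell = circulant_cell(p0=2, r=8)      # upper-left cell of FIG. 2: p(0) = 2
    assert cell[0][2] == 1 and p(0, 2, 8) == 2 and c(2, 2, 8) == 0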

[0023] The columns of the matrix H are logically divided into stripes of t columns each. Assume that we already have a value of md_g[v][w] for each pair (v,w), where w ∈ O(v), and a value of md_R[w] for each parity bit w. Starting from the first stripe, the following operations are performed in one embodiment. Calculate the addresses for the t memories that contain md_R for all of the check nodes, according to the following formula:

address(w) = (cell_index(H_{i,j}) mod M)(r/t) + c(v)/t    (10)

[0024] where c(v) is the row index of the value one in the column for v from Rd, and cell_index(H_{i,j}) = i + jM.

[0025] The value of md_R[w] for the t memories is input to the reverse interleaver, which computes the permutation according to the function:

π(i) = c(i) (mod t) for the given H_{i,j}.    (11)

[0026] Then, all of the values of md_R[w] are input to the right-most Bit unit to produce the sum S[v]. The method then continues with the same stripe in H_{i+1,j}, H_{i+2,j}, and so on.
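Formulas (10) and (11) might be coded roughly as follows (our sketch, assuming circulant cells so that c(v) can be derived from p(0), and integer division for the address; the parameter values in the usage line are hypothetical):

    def cell_index(i, j, M):
        return i + j * M                                        # cell_index(H_{i,j}) = i + jM

    def c_of(col, p0, r):
        # row index of the one in the given column of a circulant cell with offset p(0) = p0
        return (col - p0) % r

    def md_R_address(i, j, col, M, r, t, p0):
        # formula (10): address(w) = (cell_index(H_{i,j}) mod M) * (r/t) + c(v)/t
        return (cell_index(i, j, M) % M) * (r // t) + c_of(col, p0, r) // t

    def reverse_interleaver(idx, p0, r, t):
        # formula (11): pi(i) = c(i) (mod t) for the given cell H_{i,j}
        return c_of(idx, p0, r) % t

    # toy usage with the FIG. 2 geometry (M=3, r=8) and, hypothetically, t=4
    md_R_address(i=1, j=2, col=5, M=3, r=8, t=4, p0=2)
    reverse_interleaver(idx=5, p0=2, r=8, t=4)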

[0027] For the second and subsequent stripes, we calculate the value loc_item and accumulate the sum S[v] for the current bits as described above, and retain the previously computed values of S[v] and loc_item for the bits from the previous stripe in the pipeline in the Bit unit. The values of S[v] and loc_item are then retrieved from the pipeline and rearranged through the direct interleaver, which computes the permutation τ according to the function:

τ(π(i)) = i, where π and τ are the permutations associated with the cell H_{i,j},    (12)

[0028] and the method then calculates the values md_g_new and md_R_new according to formulas (3) and (4) for both v and w from the pipeline. When all of the stripes have been processed in this manner, the values of md_g_new and md_R_new replace the values of md_g and md_R as given in equations (6) and (7), and one cycle of the iteration is complete.
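Since τ(π(i)) = i, the direct interleaver is simply the inverse of the reverse-interleaver permutation for the same cell; for example (illustrative Python, with a hypothetical permutation):

    def invert(pi):
        # build tau such that tau[pi[i]] = i, as in formula (12)
        tau = [0] * len(pi)
        for i, target in enumerate(pi):
            tau[target] = i
        return tau

    pi = [2, 0, 3, 1]                 # a hypothetical per-cell permutation pi
    tau = invert(pi)
    assert all(tau[pi[i]] == i for i in range(len(pi)))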

Block-Schema of Algorithm

[0029] 1. Start with the k-th stripe and the cell H_{i,j} with index s.

[0030] 2. Calculate AR_BIT[i].md_m = md_m[v[kt+i]], where i = 0, . . . , t-1.

[0031] 3. Calculate the addresses for w^s ∈ O(v), for v from the cell H_{i,j} with index s, according to formula (10).

[0032] 4. Calculate AR_BIT[i].md_R = AR_PAR[π^s(i)].md_R[w^s], where π^s is the reverse permutation for the cell with index s.

[0033] 5. Calculate AR_BIT[i].md_g = md_g[v[kt+i]][w^s].

[0034] 6. Calculate AR_BIT[i].loc_item[v[kt+i]][w^s] according to formula (2).

[0035] 7. Calculate AR_BIT[i].S[v[kt+i]] according to formula (1).

[0036] 8. Calculate AR_PAR[i].loc_item[v[(k-1)t+i]][w^{s-M}] = AR_BIT[τ^{s-M}(i)].loc_item[v[(k-1)t+i]][w^{s-M}] and AR_PAR[i].S[v[(k-1)t+i]] = AR_BIT[τ^{s-M}(i)].S[v[(k-1)t+i]], where τ^{s-M} is the direct permutation for the cell with index s-M.

[0037] 9. Calculate AR_BIT[i].md_g_new[v[(k-1)t+i]][w^{s-M}] according to formula (3).

[0038] 10. Calculate AR_BIT[i].md_R_new[w^{s-M}] according to formula (4).

[0039] 11. Go to the next cell, with index s+1.

[0040] 12. If (s+1) (mod M) = 0, then go to the next stripe (k+1).

[0041] 13. When all of the cells have passed through step 12 above, assign the values as given in equations (6) and (7), and start a new iteration at the 0th stripe and the 0th cell.
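The stripe and cell bookkeeping of steps 1, 11, 12, and 13 can be sketched as follows (Python, illustrative only; the per-cell arithmetic of steps 2 through 10 is reduced to a placeholder callback, and the stripe count in the usage line assumes t = r so that each stripe spans one column of cells):

    def run_iteration(M, num_stripes, process_cell):
        s = 0
        for k in range(num_stripes):          # step 1: start with the k-th stripe
            for _ in range(M):
                process_cell(k, s)            # steps 2-10 for stripe k, cell index s
                s += 1                        # step 11: next cell
            # step 12: after the M-th cell of the stripe, move on to stripe k + 1
        # step 13: all cells processed -- apply (6) and (7) and begin the next
        # iteration at the 0th stripe and the 0th cell

    # toy usage with the FIG. 2 geometry (M = 3) and, assuming t = r, N = 7 stripes
    run_iteration(M=3, num_stripes=7, process_cell=lambda k, s: None)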

[0042] The foregoing description of preferred embodiments for this invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Obvious modifications or variations are possible in light of the above teachings. The embodiments are chosen and described in an effort to provide the best illustrations of the principles of the invention and its practical application, and to thereby enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally, and equitably entitled.

* * * * *

