U.S. patent application number 13/779,999, "Eigen-Vector Approach for Coil Sensitivity Maps Estimation," was published by the patent office on 2013-10-31. The application is currently assigned to Siemens Aktiengesellschaft. The applicants and inventors are Ti-chiun Chang, Nirmal Janardhanan, Alban Lefebvre, Jun Liu, Edgar Mueller, Mariappan S. Nadar, Marcel Dominik Nickel, Qiu Wang, Hui Xue, Zhili Yang, and Michael Zenge.

United States Patent Application

Publication Number: 20130289912
Application Number: 13/779,999
Family ID: 49478038
Kind Code: A1
First Named Inventor: Liu, Jun; et al.
Publication Date: October 31, 2013
EIGEN-VECTOR APPROACH FOR COIL SENSITIVITY MAPS ESTIMATION
Abstract

A method for estimating a coil sensitivity map for a magnetic resonance (MR) image includes providing (61) a matrix $A$ of sliding blocks of a 2D image of coil calibration data, calculating (62) a left singular matrix $V_\parallel$ from a singular value decomposition of $A$ corresponding to the $\tau$ leading singular values, calculating (63) $P = V_\parallel V_\parallel^H$, calculating (64) a matrix $S$ that is an inverse Fourier transform of a zero-padded matrix $P$, and solving (65) $M^H c^r = (S^r)^H c^r$ for $c^r$, where $c^r$ is a vector of coil sensitivity maps for all coils at spatial location $r$, and
$$M = \begin{pmatrix} 1\ 1\ 1 & 0\ 0\ 0 & 0\ 0\ 0 \\ 0\ 0\ 0 & 1\ 1\ 1 & 0\ 0\ 0 \\ 0\ 0\ 0 & 0\ 0\ 0 & 1\ 1\ 1 \end{pmatrix}.$$
Inventors: Liu, Jun (Plainsboro, NJ); Xue, Hui (Franklin Park, NJ); Nickel, Marcel Dominik (Erlangen, DE); Chang, Ti-chiun (Princeton Junction, NJ); Nadar, Mariappan S. (Plainsboro, NJ); Lefebvre, Alban (Jersey City, NJ); Mueller, Edgar (Heroldsbach, DE); Wang, Qiu (Ithaca, NY); Yang, Zhili (West Windsor, NJ); Janardhanan, Nirmal (Plainsboro, NJ); Zenge, Michael (Nurnberg, DE)
Applicant:

Name | City | State | Country
Liu, Jun | Plainsboro | NJ | US
Xue, Hui | Franklin Park | NJ | US
Nickel, Marcel Dominik | Erlangen | | DE
Chang, Ti-chiun | Princeton Junction | NJ | US
Nadar, Mariappan S. | Plainsboro | NJ | US
Lefebvre, Alban | Jersey City | NJ | US
Mueller, Edgar | Heroldsbach | | DE
Wang, Qiu | Ithaca | NY | US
Yang, Zhili | West Windsor | NJ | US
Janardhanan, Nirmal | Plainsboro | NJ | US
Zenge, Michael | Nurnberg | | DE
Assignee:
Siemens Aktiengesellschaft (Munchen)
Siemens Corporation (Iselin, NJ)

Family ID: 49478038
Appl. No.: 13/779,999
Filed: February 28, 2013
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61/618,002 | Mar 30, 2012 |
61/724,023 | Nov 8, 2012 |
61/724,001 | Nov 8, 2012 |
Current U.S. Class: 702/65
Current CPC Class: G01R 33/246 (20130101); G01R 33/5611 (20130101); G01R 33/243 (20130101)
Class at Publication: 702/65
International Class: G01R 33/24 (20060101)
Claims

1. A method for estimating a coil sensitivity map for a magnetic resonance (MR) image, comprising the steps of: providing a matrix $A$ of sliding blocks of a 2D $n_x \times n_y$ image of coil calibration data, wherein
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n_b} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n_b} \\ \vdots & \vdots & & \vdots \\ a_{n_c,1} & a_{n_c,2} & \cdots & a_{n_c,n_b} \end{pmatrix},$$
$n_c$ is a number of coils, $n_b$ is a number of sliding blocks extracted from the coil calibration data, and $a_{i,j}$ is a $k_x k_y \times 1$ column vector that represents a $j$-th sliding block of an $i$-th coil; determining a unit eigenvector $\alpha$ that maximizes the inner product $\langle (S^r)^H \alpha, M^H \alpha \rangle$, wherein $S^r$ is an inverse Fourier transform of a zero-padded matrix $P = V_\parallel V_\parallel^H$ at spatial location $r$ in the 2D $n_x \times n_y$ image, wherein
$$V_\parallel = [v_1, v_2, \ldots, v_\tau] = \begin{pmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,\tau} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,\tau} \\ \vdots & \vdots & & \vdots \\ v_{n_c,1} & v_{n_c,2} & \cdots & v_{n_c,\tau} \end{pmatrix}$$
is a matrix composed of left singular vectors of $A$ corresponding to the $\tau$ leading singular values, wherein $v_{i,k}$ is a $k_x k_y \times 1$ column vector that is an $i$-th block in vector $v_k$,
$$M = \begin{pmatrix} 1\ 1\ 1 & 0\ 0\ 0 & 0\ 0\ 0 \\ 0\ 0\ 0 & 1\ 1\ 1 & 0\ 0\ 0 \\ 0\ 0\ 0 & 0\ 0\ 0 & 1\ 1\ 1 \end{pmatrix}$$
is an $n_c \times k_x k_y n_c$ sparse matrix of the same size as $S^r$; and finding an optimizer $c^r$ that maximizes a correlation between $(S^r)^H \alpha$ and $M^H \alpha$, wherein said optimizer $c^r$ is a column vector of coil sensitivities of a coil at spatial position $r$ in the 2D $n_x \times n_y$ image.
2. A method for estimating a coil sensitivity map for a magnetic resonance (MR) image, comprising the steps of: providing a matrix $A$ of sliding blocks of a 2D $n_x \times n_y$ image of coil calibration data, wherein
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n_b} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n_b} \\ \vdots & \vdots & & \vdots \\ a_{n_c,1} & a_{n_c,2} & \cdots & a_{n_c,n_b} \end{pmatrix},$$
$n_c$ is a number of coils, $n_b$ is a number of sliding blocks extracted from the coil calibration data, and $a_{i,j}$ is a $k_x k_y \times 1$ column vector that represents a $j$-th sliding block of an $i$-th coil; calculating a left singular matrix $V_\parallel$ from a singular value decomposition of $A$, wherein $A = V \Sigma U^H$ and
$$V_\parallel = [v_1, v_2, \ldots, v_\tau] = \begin{pmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,\tau} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,\tau} \\ \vdots & \vdots & & \vdots \\ v_{n_c,1} & v_{n_c,2} & \cdots & v_{n_c,\tau} \end{pmatrix}$$
is a matrix of left singular vectors of $A$ corresponding to the $\tau$ leading singular values, wherein $v_{i,k}$ is a $k_x k_y \times 1$ column vector that is an $i$-th block in vector $v_k$; calculating
$$P = V_\parallel V_\parallel^H = \begin{pmatrix} [\,p_{1,1,1}\ \cdots\ p_{1,1,k_x k_y}\,] & \cdots & [\,p_{1,n_c,1}\ \cdots\ p_{1,n_c,k_x k_y}\,] \\ \vdots & & \vdots \\ [\,p_{n_c,1,1}\ \cdots\ p_{n_c,1,k_x k_y}\,] & \cdots & [\,p_{n_c,n_c,1}\ \cdots\ p_{n_c,n_c,k_x k_y}\,] \end{pmatrix},$$
wherein $p_{i,j,t}$ is a $k_x k_y \times 1$ column vector; calculating $s_{i,j,t} = F^H(P_t(p_{i,j,t}))$, wherein $F^H$ represents an inverse Fourier transform and $P_t$ represents a zero-padding operator; and solving $M^H c^r = (S^r)^H c^r$ for $c^r$, where $c^r$ is a vector of coil sensitivity maps for all coils at spatial location $r$,
$$S^r = \begin{pmatrix} s^r_{1,1,1} \cdots s^r_{1,1,k_x k_y} & \cdots & s^r_{1,n_c,1} \cdots s^r_{1,n_c,k_x k_y} \\ \vdots & & \vdots \\ s^r_{n_c,1,1} \cdots s^r_{n_c,1,k_x k_y} & \cdots & s^r_{n_c,n_c,1} \cdots s^r_{n_c,n_c,k_x k_y} \end{pmatrix}$$
and
$$M = \begin{pmatrix} 1\ 1\ 1 & 0\ 0\ 0 & 0\ 0\ 0 \\ 0\ 0\ 0 & 1\ 1\ 1 & 0\ 0\ 0 \\ 0\ 0\ 0 & 0\ 0\ 0 & 1\ 1\ 1 \end{pmatrix}.$$
3. The method of claim 2, further comprising finding an optimizer $c^r$ that maximizes a correlation between $(S^r)^H \alpha$ and $M^H \alpha$, wherein said optimizer $c^r$ is a vector of coil sensitivity maps for all coils at spatial location $r$ in the 2D $n_x \times n_y$ image.

4. The method of claim 2, wherein solving $M^H c^r = (S^r)^H c^r$ comprises solving the eigenvalue equations
$$\frac{M (S^r)^H}{k_x k_y}\, \beta = \bar{\lambda} \beta \quad \text{and} \quad \big(S^r (S^r)^H\big) \gamma = \tilde{\lambda} \gamma,$$
wherein $\bar{\lambda}$ and $\tilde{\lambda}$ denote eigenvalues, and $\beta$ and $\gamma$ denote eigenvectors.

5. The method of claim 4, further comprising calculating $S^r M^H$ using
$$m_{i,j} = \sum_{t=1}^{k_x k_y} s_{i,j,t} = \sum_{t=1}^{k_x k_y} F^H\big(P_t(p_{i,j,t})\big) = F^H\!\left(\sum_{t=1}^{k_x k_y} P_t(p_{i,j,t})\right),$$
wherein $m_{i,j}$ is an $n_x n_y \times 1$ column vector containing the entries of $m^r_{i,j}$ at all spatial locations, and $m^r_{i,j}$ is the $(i,j)$ entry of $S^r M^H$ at spatial location $r$.

6. The method of claim 4, further comprising calculating $S^r (S^r)^H$ as $S^r (S^r)^H = k_x k_y\, Z^r (Z^r)^H$, wherein
$$Z^r = \begin{pmatrix} z^r_{1,1} & z^r_{1,2} & \cdots & z^r_{1,\tau} \\ z^r_{2,1} & z^r_{2,2} & \cdots & z^r_{2,\tau} \\ \vdots & \vdots & & \vdots \\ z^r_{n_c,1} & z^r_{n_c,2} & \cdots & z^r_{n_c,\tau} \end{pmatrix}$$
is an $n_c \times \tau$ matrix with the $j$-th row and $k$-th column being $z^r_{j,k}$, $z^r_{j,k}$ denotes the $r$-th entry of $z_{j,k} = F^H(P_c(v_{j,k}))$, and $P_c$ denotes the zero-padding operator that maps a center of $v_{j,k}$ to a center of $z_{j,k}$.

7. The method of claim 6, further comprising computing the $i$-th row and $j$-th column of $Z^r (Z^r)^H$ as
$$\sum_{k=1}^{\tau} z^r_{i,k}\, \overline{z^r_{j,k}},$$
wherein $z^r_{i,k}$ denotes the entry at the $i$-th row and $k$-th column of $Z^r$.

8. The method of claim 2, wherein the coil calibration data is acquired from a center square of frequency space data.

9. The method of claim 2, wherein the matrix $A$ is constructed from calibration data of size $c_x \times c_y \times c_z$ by gathering sliding blocks of kernel size $k_x \times k_y \times k_z$, wherein $A$ has $(c_x - k_x + 1)(c_y - k_y + 1)(c_z - k_z + 1)$ columns and $k_x k_y k_z n_c$ rows.
10. A non-transitory program storage device readable by a computer, tangibly embodying a program of instructions executable by the computer to perform the method steps for estimating a coil sensitivity map for a magnetic resonance (MR) image, the method comprising the steps of: providing a matrix $A$ of sliding blocks of a 2D $n_x \times n_y$ image of coil calibration data, wherein
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n_b} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n_b} \\ \vdots & \vdots & & \vdots \\ a_{n_c,1} & a_{n_c,2} & \cdots & a_{n_c,n_b} \end{pmatrix},$$
$n_c$ is a number of coils, $n_b$ is a number of sliding blocks extracted from the coil calibration data, and $a_{i,j}$ is a $k_x k_y \times 1$ column vector that represents a $j$-th sliding block of an $i$-th coil; calculating a left singular matrix $V_\parallel$ from a singular value decomposition of $A$, wherein $A = V \Sigma U^H$ and
$$V_\parallel = [v_1, v_2, \ldots, v_\tau] = \begin{pmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,\tau} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,\tau} \\ \vdots & \vdots & & \vdots \\ v_{n_c,1} & v_{n_c,2} & \cdots & v_{n_c,\tau} \end{pmatrix}$$
is a matrix of left singular vectors of $A$ corresponding to the $\tau$ leading singular values, wherein $v_{i,k}$ is a $k_x k_y \times 1$ column vector that is an $i$-th block in vector $v_k$; calculating
$$P = V_\parallel V_\parallel^H = \begin{pmatrix} [\,p_{1,1,1}\ \cdots\ p_{1,1,k_x k_y}\,] & \cdots & [\,p_{1,n_c,1}\ \cdots\ p_{1,n_c,k_x k_y}\,] \\ \vdots & & \vdots \\ [\,p_{n_c,1,1}\ \cdots\ p_{n_c,1,k_x k_y}\,] & \cdots & [\,p_{n_c,n_c,1}\ \cdots\ p_{n_c,n_c,k_x k_y}\,] \end{pmatrix},$$
wherein $p_{i,j,t}$ is a $k_x k_y \times 1$ column vector; calculating $s_{i,j,t} = F^H(P_t(p_{i,j,t}))$, wherein $F^H$ represents an inverse Fourier transform and $P_t$ represents a zero-padding operator; and solving $M^H c^r = (S^r)^H c^r$ for $c^r$, where $c^r$ is a vector of coil sensitivity maps for all coils at spatial location $r$,
$$S^r = \begin{pmatrix} s^r_{1,1,1} \cdots s^r_{1,1,k_x k_y} & \cdots & s^r_{1,n_c,1} \cdots s^r_{1,n_c,k_x k_y} \\ \vdots & & \vdots \\ s^r_{n_c,1,1} \cdots s^r_{n_c,1,k_x k_y} & \cdots & s^r_{n_c,n_c,1} \cdots s^r_{n_c,n_c,k_x k_y} \end{pmatrix}$$
and
$$M = \begin{pmatrix} 1\ 1\ 1 & 0\ 0\ 0 & 0\ 0\ 0 \\ 0\ 0\ 0 & 1\ 1\ 1 & 0\ 0\ 0 \\ 0\ 0\ 0 & 0\ 0\ 0 & 1\ 1\ 1 \end{pmatrix}.$$
11. The computer readable program storage device of claim 10, the method further comprising finding an optimizer $c^r$ that maximizes a correlation between $(S^r)^H \alpha$ and $M^H \alpha$, wherein said optimizer $c^r$ is a vector of coil sensitivity maps for all coils at spatial location $r$ in the 2D $n_x \times n_y$ image.

12. The computer readable program storage device of claim 10, wherein solving $M^H c^r = (S^r)^H c^r$ comprises solving the eigenvalue equations
$$\frac{M (S^r)^H}{k_x k_y}\, \beta = \bar{\lambda} \beta \quad \text{and} \quad \big(S^r (S^r)^H\big) \gamma = \tilde{\lambda} \gamma,$$
wherein $\bar{\lambda}$ and $\tilde{\lambda}$ denote eigenvalues, and $\beta$ and $\gamma$ denote eigenvectors.

13. The computer readable program storage device of claim 12, the method further comprising calculating $S^r M^H$ using
$$m_{i,j} = \sum_{t=1}^{k_x k_y} s_{i,j,t} = \sum_{t=1}^{k_x k_y} F^H\big(P_t(p_{i,j,t})\big) = F^H\!\left(\sum_{t=1}^{k_x k_y} P_t(p_{i,j,t})\right),$$
wherein $m_{i,j}$ is an $n_x n_y \times 1$ column vector containing the entries of $m^r_{i,j}$ at all spatial locations, and $m^r_{i,j}$ is the $(i,j)$ entry of $S^r M^H$ at spatial location $r$.

14. The computer readable program storage device of claim 12, the method further comprising calculating $S^r (S^r)^H$ as $S^r (S^r)^H = k_x k_y\, Z^r (Z^r)^H$, wherein
$$Z^r = \begin{pmatrix} z^r_{1,1} & z^r_{1,2} & \cdots & z^r_{1,\tau} \\ z^r_{2,1} & z^r_{2,2} & \cdots & z^r_{2,\tau} \\ \vdots & \vdots & & \vdots \\ z^r_{n_c,1} & z^r_{n_c,2} & \cdots & z^r_{n_c,\tau} \end{pmatrix}$$
is an $n_c \times \tau$ matrix with the $j$-th row and $k$-th column being $z^r_{j,k}$, $z^r_{j,k}$ denotes the $r$-th entry of $z_{j,k} = F^H(P_c(v_{j,k}))$, and $P_c$ denotes the zero-padding operator that maps a center of $v_{j,k}$ to a center of $z_{j,k}$.

15. The computer readable program storage device of claim 14, the method further comprising computing the $i$-th row and $j$-th column of $Z^r (Z^r)^H$ as
$$\sum_{k=1}^{\tau} z^r_{i,k}\, \overline{z^r_{j,k}},$$
wherein $z^r_{i,k}$ denotes the entry at the $i$-th row and $k$-th column of $Z^r$.

16. The computer readable program storage device of claim 10, wherein the coil calibration data is acquired from a center square of frequency space data.

17. The computer readable program storage device of claim 10, wherein the matrix $A$ is constructed from calibration data of size $c_x \times c_y \times c_z$ by gathering sliding blocks of kernel size $k_x \times k_y \times k_z$, wherein $A$ has $(c_x - k_x + 1)(c_y - k_y + 1)(c_z - k_z + 1)$ columns and $k_x k_y k_z n_c$ rows.
Description
CROSS REFERENCE TO RELATED UNITED STATES APPLICATIONS
[0001] This application claims priority from "On the Eigen-Vector
Approach for Coil Sensitivity Maps Estimation", U.S. Provisional
Application No. 61/618,002 of Liu, et al., filed Mar. 30, 2012,
"Coil Sensitivity maps Estimation in Dynamic CMRI: Square Could be
Better than Rectangular", U.S. Provisional Application No.
61/724,023 of Liu, et al., filed Nov. 8, 2012, and "An Eigen-Vector
Approach for Coil Sensitivity Estimation in the 3D Scenario", U.S.
Provisional Application No. 61/724,001 of Liu, et al., filed Nov.
8, 2012, the contents of all of which are herein incorporated by
reference in their entireties.
TECHNICAL FIELD
[0002] This disclosure is directed to methods for estimating coil
sensitivity maps (CSM) of magnetic resonance imaging (MRI)
apparatuses.
DISCUSSION OF THE RELATED ART
[0003] Parallel imaging makes use of multiple receiver coils to
acquire the image in parallel. It can be used to accelerate the
image acquisition by exploiting the spatially varying sensitivities
of the multiple receiver coils since each coil image is weighted
differently by the coil sensitivity maps (CSM). The accurate
estimation of coil sensitivity maps can ensure the success of
approaches that make use of the sensitivity maps either in the
reconstruction formulation or in coil combination. Some methods
explicitly make use of the coil sensitivities in reconstruction,
and the accuracy of the CSM's is important to the quality of the
reconstructed image. Other approaches implicitly make use of the
coil sensitivities by performing an auto-calibrating coil-by-coil
reconstruction, but the CSM's are needed if one wants to obtain the
coil combined complex image or the phase image.
[0004] The most common way to determine the sensitivity maps is to
obtain low-resolution pre-scans. However, when the object is not
static, the sensitivity functions are different between pre-scan
and under-sampled scans, and this could lead to reconstruction
errors. To compensate for this, joint estimation approaches have been proposed; however, these approaches usually have a high computation cost and are restricted to explicit reconstructions.
[0005] The proposed eigenvector approach tries to build a connection between the implicit and explicit approaches by showing that the CSM can be computed from the auto-calibrated coil-by-coil reconstruction. It has been shown that the coil sensitivities can be computed as the eigenvector, in image space, of a given matrix corresponding to the eigenvalue 1. However, to the best of the inventors' knowledge: (1) the detailed mathematical derivations for the eigenvector approach are not well understood; (2) the optimization criterion for computing the CSM is not very clear; and (3) an efficient computational approach is lacking.
SUMMARY
[0006] Exemplary embodiments of the invention as described herein
generally include methods for coil sensitivity maps (CSM)
estimation for 2-D MR images. Embodiments of the invention
mathematically derive an eigenvector approach and show that the
coil sensitivity maps at a given spatial location can be obtained
by solving a generalized eigenvalue system. Embodiments of the
invention establish relationships between the generalized
eigenvalue system and two related eigenvalue systems where the
associated matrices are Hermitian. Embodiments of the invention can
reduce the computational and storage costs and avoid the
computation of large matrices by using equivalent representations.
Experimental results on simulated and real data sets show that a method according to an embodiment of the invention can obtain CSM's with high quality, and that the resulting reconstructed images have fewer artifacts compared to those reconstructed with CSM's obtained from a B1-Map.
[0007] According to an aspect of the invention, there is provided a method for estimating a coil sensitivity map for a magnetic resonance (MR) image, including providing a matrix $A$ of sliding blocks of a 2D $n_x \times n_y$ image of coil calibration data, where
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n_b} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n_b} \\ \vdots & \vdots & & \vdots \\ a_{n_c,1} & a_{n_c,2} & \cdots & a_{n_c,n_b} \end{pmatrix},$$
$n_c$ is a number of coils, $n_b$ is a number of sliding blocks extracted from the coil calibration data, and $a_{i,j}$ is a $k_x k_y \times 1$ column vector that represents a $j$-th sliding block of an $i$-th coil, determining a unit eigenvector $\alpha$ that maximizes the inner product $\langle (S^r)^H \alpha, M^H \alpha \rangle$, where $S^r$ is an inverse Fourier transform of a zero-padded matrix $P = V_\parallel V_\parallel^H$ at spatial location $r$ in the 2D $n_x \times n_y$ image, where
$$V_\parallel = [v_1, v_2, \ldots, v_\tau] = \begin{pmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,\tau} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,\tau} \\ \vdots & \vdots & & \vdots \\ v_{n_c,1} & v_{n_c,2} & \cdots & v_{n_c,\tau} \end{pmatrix}$$
is a matrix composed of left singular vectors of $A$ corresponding to the $\tau$ leading singular values, where $v_{i,k}$ is a $k_x k_y \times 1$ column vector that is an $i$-th block in vector $v_k$,
$$M = \begin{pmatrix} 1\ 1\ 1 & 0\ 0\ 0 & 0\ 0\ 0 \\ 0\ 0\ 0 & 1\ 1\ 1 & 0\ 0\ 0 \\ 0\ 0\ 0 & 0\ 0\ 0 & 1\ 1\ 1 \end{pmatrix}$$
is an $n_c \times k_x k_y n_c$ sparse matrix of the same size as $S^r$, and finding an optimizer $c^r$ that maximizes a correlation between $(S^r)^H \alpha$ and $M^H \alpha$, where the optimizer $c^r$ is a column vector of coil sensitivities of a coil at spatial position $r$ in the 2D $n_x \times n_y$ image.
[0008] According to another aspect of the invention, there is provided a method for estimating a coil sensitivity map for a magnetic resonance (MR) image, including providing a matrix $A$ of sliding blocks of a 2D $n_x \times n_y$ image of coil calibration data, where
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n_b} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n_b} \\ \vdots & \vdots & & \vdots \\ a_{n_c,1} & a_{n_c,2} & \cdots & a_{n_c,n_b} \end{pmatrix},$$
$n_c$ is a number of coils, $n_b$ is a number of sliding blocks extracted from the coil calibration data, and $a_{i,j}$ is a $k_x k_y \times 1$ column vector that represents a $j$-th sliding block of an $i$-th coil, calculating a left singular matrix $V_\parallel$ from a singular value decomposition of $A$, where $A = V \Sigma U^H$ and
$$V_\parallel = [v_1, v_2, \ldots, v_\tau] = \begin{pmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,\tau} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,\tau} \\ \vdots & \vdots & & \vdots \\ v_{n_c,1} & v_{n_c,2} & \cdots & v_{n_c,\tau} \end{pmatrix}$$
is a matrix of left singular vectors of $A$ corresponding to the $\tau$ leading singular values, where $v_{i,k}$ is a $k_x k_y \times 1$ column vector that is an $i$-th block in vector $v_k$, calculating
$$P = V_\parallel V_\parallel^H = \begin{pmatrix} [\,p_{1,1,1}\ \cdots\ p_{1,1,k_x k_y}\,] & \cdots & [\,p_{1,n_c,1}\ \cdots\ p_{1,n_c,k_x k_y}\,] \\ \vdots & & \vdots \\ [\,p_{n_c,1,1}\ \cdots\ p_{n_c,1,k_x k_y}\,] & \cdots & [\,p_{n_c,n_c,1}\ \cdots\ p_{n_c,n_c,k_x k_y}\,] \end{pmatrix},$$
where $p_{i,j,t}$ is a $k_x k_y \times 1$ column vector, calculating $s_{i,j,t} = F^H(P_t(p_{i,j,t}))$, where $F^H$ represents an inverse Fourier transform and $P_t$ represents a zero-padding operator, and solving $M^H c^r = (S^r)^H c^r$ for $c^r$, where $c^r$ is a vector of coil sensitivity maps for all coils at spatial location $r$,
$$S^r = \begin{pmatrix} s^r_{1,1,1} \cdots s^r_{1,1,k_x k_y} & \cdots & s^r_{1,n_c,1} \cdots s^r_{1,n_c,k_x k_y} \\ \vdots & & \vdots \\ s^r_{n_c,1,1} \cdots s^r_{n_c,1,k_x k_y} & \cdots & s^r_{n_c,n_c,1} \cdots s^r_{n_c,n_c,k_x k_y} \end{pmatrix}$$
and
$$M = \begin{pmatrix} 1\ 1\ 1 & 0\ 0\ 0 & 0\ 0\ 0 \\ 0\ 0\ 0 & 1\ 1\ 1 & 0\ 0\ 0 \\ 0\ 0\ 0 & 0\ 0\ 0 & 1\ 1\ 1 \end{pmatrix}.$$
[0009] According to a further aspect of the invention, the method includes finding an optimizer $c^r$ that maximizes a correlation between $(S^r)^H \alpha$ and $M^H \alpha$, where the optimizer $c^r$ is a vector of coil sensitivity maps for all coils at spatial location $r$ in the 2D $n_x \times n_y$ image.

[0010] According to a further aspect of the invention, solving $M^H c^r = (S^r)^H c^r$ comprises solving the eigenvalue equations
$$\frac{M (S^r)^H}{k_x k_y}\, \beta = \bar{\lambda} \beta \quad \text{and} \quad \big(S^r (S^r)^H\big) \gamma = \tilde{\lambda} \gamma,$$
wherein $\bar{\lambda}$ and $\tilde{\lambda}$ denote eigenvalues, and $\beta$ and $\gamma$ denote eigenvectors.

[0011] According to a further aspect of the invention, the method includes calculating $S^r M^H$ using
$$m_{i,j} = \sum_{t=1}^{k_x k_y} s_{i,j,t} = \sum_{t=1}^{k_x k_y} F^H\big(P_t(p_{i,j,t})\big) = F^H\!\left(\sum_{t=1}^{k_x k_y} P_t(p_{i,j,t})\right),$$
where $m_{i,j}$ is an $n_x n_y \times 1$ column vector containing the entries of $m^r_{i,j}$ at all spatial locations, and $m^r_{i,j}$ is the $(i,j)$ entry of $S^r M^H$ at spatial location $r$.
[0012] According to a further aspect of the invention, the method includes calculating $S^r (S^r)^H$ as $S^r (S^r)^H = k_x k_y\, Z^r (Z^r)^H$, where
$$Z^r = \begin{pmatrix} z^r_{1,1} & z^r_{1,2} & \cdots & z^r_{1,\tau} \\ z^r_{2,1} & z^r_{2,2} & \cdots & z^r_{2,\tau} \\ \vdots & \vdots & & \vdots \\ z^r_{n_c,1} & z^r_{n_c,2} & \cdots & z^r_{n_c,\tau} \end{pmatrix}$$
is an $n_c \times \tau$ matrix with the $j$-th row and $k$-th column being $z^r_{j,k}$, $z^r_{j,k}$ denotes the $r$-th entry of $z_{j,k} = F^H(P_c(v_{j,k}))$, and $P_c$ denotes the zero-padding operator that maps a center of $v_{j,k}$ to a center of $z_{j,k}$.

[0013] According to a further aspect of the invention, the method includes computing the $i$-th row and $j$-th column of $Z^r (Z^r)^H$ as
$$\sum_{k=1}^{\tau} z^r_{i,k}\, \overline{z^r_{j,k}},$$
where $z^r_{i,k}$ denotes the entry at the $i$-th row and $k$-th column of $Z^r$.
[0014] According to a further aspect of the invention, the coil
calibration data is acquired from a center square of frequency
space data.
[0015] According to a further aspect of the invention, the matrix $A$ is constructed from calibration data of size $c_x \times c_y \times c_z$ by gathering sliding blocks of kernel size $k_x \times k_y \times k_z$, where $A$ has $(c_x - k_x + 1)(c_y - k_y + 1)(c_z - k_z + 1)$ columns and $k_x k_y k_z n_c$ rows.
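As a quick sanity check of these dimensions, the row and column counts of $A$ follow directly from the calibration and kernel sizes. The numeric sizes below are illustrative assumptions, not values from the disclosure:

```python
# Illustrative sizes (assumptions, not from the disclosure)
c_x, c_y, c_z = 24, 24, 16   # 3D calibration region
k_x, k_y, k_z = 5, 5, 3      # sliding-block kernel size
n_c = 8                      # number of coils

# Number of sliding-block positions gives the columns of A;
# each column stacks one k_x*k_y*k_z block per coil, giving the rows.
cols = (c_x - k_x + 1) * (c_y - k_y + 1) * (c_z - k_z + 1)
rows = k_x * k_y * k_z * n_c
assert (rows, cols) == (600, 5600)
```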
[0016] According to another aspect of the invention, there is provided a non-transitory program storage device readable by a computer, tangibly embodying a program of instructions executable by the computer to perform the method steps for estimating a coil sensitivity map for a magnetic resonance (MR) image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] FIG. 1 is a table summarizing the notation used in this
disclosure, according to an embodiment of the invention.
[0018] FIG. 2 illustrates zero padding for the case
k.sub.x=k.sub.y=3, according to an embodiment of the invention.
[0019] FIG. 3 illustrates an eigenvector approach where each
k-space point is represented as a linear combination of the k-space
points of all the coils in its neighborhood, according to an
embodiment of the invention.
[0020] FIG. 4 illustrates the principle of an eigenvector approach
of EQ. (13), according to an embodiment of the invention.
[0021] FIG. 5 is a table summarizing the value of the mean absolute correlation (mac) for different numbers of kept singular vectors and different kernel sizes, according to an embodiment of the invention.
[0022] FIG. 6 is a flowchart of an eigenvector approach to
estimating coil sensitivity maps, according to an embodiment of the
invention.
[0023] FIGS. 7(a)-(d) depict CSM's obtained by various approaches,
according to an embodiment of the invention.
[0024] FIGS. 8(a)-(c) are graphs of the mean absolute correlation as a function of the number of kept vectors and the kernel size, according to an embodiment of the invention.
[0025] FIG. 9 illustrates a reconstruction of a 2D cardiac image,
according to an embodiment of the invention.
[0026] FIGS. 10(a)-(b) illustrate a reconstruction of a 3D abdomen image, according to an embodiment of the invention.
[0027] FIG. 11 illustrates a typical Cartesian sampling pattern
used in dynamic cardiac MR imaging, according to an embodiment of
the invention.
[0028] FIGS. 12(a)-(b) illustrate the influence of the amount of calibration data used on the reconstruction, according to an embodiment of the invention.
[0029] FIG. 13 compares the square and the rectangular calibration
data in terms of the mac criterion, according to an embodiment of
the invention.
[0030] FIG. 14 illustrates the conversion of the calibration data
into a calibration matrix for 3D CSM estimation, according to an
embodiment of the invention.
[0031] FIGS. 15(a)-(h) illustrate a comparison of a 2D eigenvector
approach and a 3D eigenvector approach according to an embodiment
of the invention.
[0032] FIG. 16 is a block diagram of an exemplary computer system
for implementing an eigenvector approach to coil sensitivity maps
estimation for 2-D MR images, according to an embodiment of the
invention.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0033] Exemplary embodiments of the invention as described herein
generally include systems and methods for an eigenvector approach
to coil sensitivity maps (CSM) estimation for 2-D MR images.
Accordingly, while the invention is susceptible to various
modifications and alternative forms, specific embodiments thereof
are shown by way of example in the drawings and will herein be
described in detail. It should be understood, however, that there
is no intent to limit the invention to the particular forms
disclosed, but on the contrary, the invention is to cover all
modifications, equivalents, and alternatives falling within the
spirit and scope of the invention.
[0034] As used herein, the term "image" refers to multi-dimensional
data composed of discrete image elements (e.g., pixels for
2-dimensional images and voxels for 3-dimensional images). The
image may be, for example, a medical image of a subject collected
by computer tomography, magnetic resonance imaging, ultrasound, or
any other medical imaging system known to one of skill in the art.
The image may also be provided from non-medical contexts, such as,
for example, remote sensing systems, electron microscopy, etc.
Although an image can be thought of as a function from R.sup.3 to R or R.sup.7, the methods of the invention are not limited to such images, and can be applied to images of any dimension, e.g., a
2-dimensional picture or a 3-dimensional volume. For a 2- or
3-dimensional image, the domain of the image is typically a 2- or
3-dimensional rectangular array, wherein each pixel or voxel can be
addressed with reference to a set of 2 or 3 mutually orthogonal
axes. The terms "digital" and "digitized" as used herein will refer
to images or volumes, as appropriate, in a digital or digitized
format acquired via a digital acquisition system or via conversion
from an analog image.
Notation:
[0035] Throughout this disclosure, scalars are denoted by italic lowercase letters (e.g., $n_c$, $n_x$, $n_y$, $k_x$, $k_y$), vectors by boldface lowercase letters (e.g., $x_i$, $c_i$), and matrices by italic uppercase letters (e.g., $S$, $S^r$). The main notations used in this disclosure are summarized in Table I, shown in FIG. 1.

[0036] Let $\{\cdot\}^H$ denote the complex-conjugate transpose operator, $\bar{x}$ denote the complex conjugate of a scalar $x$, and $\bar{\mathbf{x}}$ the complex conjugate of a vector $\mathbf{x}$.

[0037] Let $i$ and $j$ denote the indices for the coils in the interval $[1, n_c]$, $r$ denote the spatial location in the 2D image of size $n_x \times n_y$, and $t$ denote the spatial location in the image block of size $k_x \times k_y$.

[0038] For an $n_x n_y \times 1$ column vector $x_i$, representing an image by concatenating its columns, $x_i^r$ corresponds to its value at the spatial location $r$, where $r$ goes from 1 to $n_x n_y$. For a $k_x k_y \times 1$ column vector $v_{i,k}$, representing an image block by concatenating its columns, $v_{i,k}^t$ denotes its value at the spatial location $t$, where $t$ goes from 1 to $k_x k_y$.
[0039] Let $y = P_t(x)$ denote the zero-padding operator defined as follows. The input $x$, a column vector of size $k_x k_y \times 1$, represents a $k_x \times k_y$ image block by concatenating its columns; $t$, a scalar in the interval $[1, k_x k_y]$, determines which value in $x$ is put at the center of the matrix of size $n_x \times n_y$; the output $y$ is an $n_x n_y \times 1$ column vector representing an $n_x \times n_y$ matrix whose non-zero entries are the values of $x$. In particular, we let $y = P_c(x)$ denote the zero-padding operator that puts the center point of $x$ at the center location of $y$. For an image of size $n_x \times n_y$, define the index of its center location as
$$\frac{n_x + 1}{2} + \left(\frac{n_y + 1}{2} - 1\right) n_x;$$
and for an image block of size $k_x \times k_y$, define the index of its center location as
$$\frac{k_x + 1}{2} + \left(\frac{k_y + 1}{2} - 1\right) k_x.$$
Embodiments of the present disclosure assume that $n_x$ and $n_y$ are even, but $k_x$ and $k_y$ can be either even or odd.
[0040] FIG. 2 illustrates zero padding for the case $k_x = k_y = 3$. The nine squares correspond to the input $x$; the square 21 is placed at the center of the $n_x \times n_y$ image; $t$ determines which value in $x$ is put at the center of the 2D image; the remaining entries outside the nine squares are filled with zeros; and the three rectangles 22, 23, 24 denote the outputs of $P_1$, $P_5$, and $P_9$, respectively.
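The zero-padding operator $P_t$ described above can be sketched in a few lines of numpy. This is one illustrative reading of the definition, not an implementation from the disclosure: indices here are 0-based rather than the document's 1-based convention, and the center of an even-sized axis is taken at index $n/2$:

```python
import numpy as np

def P_t(x, t, kx, ky, nx, ny):
    """Zero-pad a kx-by-ky block (column-major vector x) into an nx-by-ny
    image so that element t of x (0-based) lands at the image center."""
    block = x.reshape((kx, ky), order="F")
    ti, tj = t % kx, t // kx              # position of element t inside the block
    ci, cj = nx // 2, ny // 2             # image center (assumed n//2 for even n)
    out = np.zeros((nx, ny), dtype=x.dtype)
    out[ci - ti:ci - ti + kx, cj - tj:cj - tj + ky] = block
    return out.reshape(-1, order="F")     # column-major n_x n_y x 1 vector

# For kx = ky = 3, element t = 4 is the block center, so it maps to the image center.
y = P_t(np.arange(9.0), 4, 3, 3, 8, 8)
assert y.reshape(8, 8, order="F")[4, 4] == 4.0
```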
Eigenvector Approach
[0041] One eigenvector approach is illustrated in FIG. 3, where
each k-space point is represented as a linear combination of the
k-space points of all the coils in its neighborhood. Let
x.sub.i.sup.r denote the value of x.sub.i at the spatial location
r. Let w.sub.i,j be the convolution kernel for representing the
k-space point of the i-th coil by the j-th coil. Mathematically,
x.sub.i.sup.r is estimated as:
x i r = j = 1 n c w i , j * x j , .A-inverted. r , ( 1 )
##EQU00015##
where * denotes the convolution operator. Define
q.sub.i,j=F.sup.H(P.sub.C(w.sub.i,j)). (2)
q.sub.i,j is the result of applying an inverse Fourier
transformation to the convolution kernel w.sub.i,j with zero
padding, illustrated in FIG. 2. Applying the inverse Fourier
transform to both sides of the equation in FIG. 3, one has
$$F^H(x_i)=\sum_{j=1}^{n_c}q_{i,j}\,F^H(x_j)\qquad(3)$$
where the convolution theorem has been applied.
F.sup.H(x.sub.i)=mc.sub.i can be inserted into EQ. (3) to
obtain
$$c_i=\sum_{j=1}^{n_c}q_{i,j}c_j,\quad i=1,2,\ldots,n_c.\qquad(4)$$
EQ. (4) indicates that all the entries in this equation can be
decoupled with respect to the spatial location. Let c.sub.i.sup.r,
q.sub.i,j.sup.r and c.sub.j.sup.r denote the entries of c.sub.i,
q.sub.i,j, and c.sub.j at the spatial location r, respectively. EQ.
(4) can be rewritten as
$$c_i^r=\sum_{j=1}^{n_c}q_{i,j}^r\,c_j^r,\quad i=1,2,\ldots,n_c,\;\forall r.\qquad(5)$$
Denote Q.sup.r as a matrix with the value at the i-th row and the j-th
column being q.sub.i,j.sup.r. Let c.sup.r be an n.sub.c.times.1
column vector composed of c.sub.i's at the spatial location r. EQ.
(5) can be rewritten as the following compact matrix format:
$$c^r=Q^rc^r,\quad\forall r\qquad(6)$$
[0042] FIG. 3 illustrates the principle of CSM: the three
rectangles denote the k-space data of the i-th coil, the first
coil, and the last coil, respectively; x.sub.i.sup.r, the square 31
in the left plot, is estimated as the linear combination of the
k-space points of all the coils in its neighborhood of size
k.sub.x.times.k.sub.y (here k.sub.x=k.sub.y=3); the nine small
squares 32 in X.sub.1 and 33 in X.sub.n.sub.c correspond to the
values in the convolution kernels w.sub.i,1 and w.sub.i,n.sub.c.
[0043] In an eigenvector approach according to an embodiment of the
invention, sliding blocks of size k.sub.x.times.k.sub.y are
gathered from the calibration data to form a matrix:

$$A=\begin{pmatrix}a_{1,1}&a_{1,2}&\cdots&a_{1,n_b}\\a_{2,1}&a_{2,2}&\cdots&a_{2,n_b}\\\vdots&\vdots&&\vdots\\a_{n_c,1}&a_{n_c,2}&\cdots&a_{n_c,n_b}\end{pmatrix}\qquad(7)$$
where n.sub.b denotes the number of sliding blocks extracted from
the calibration data, and a.sub.i,k, which is a
k.sub.xk.sub.y.times.1 column vector, denotes the k-th sliding
block of the i-th coil.
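The construction of A in EQ. (7) can be sketched as follows. This is a minimal sketch assuming the calibration data is stored as an n.sub.c.times.c.sub.x.times.c.sub.y array; the flattening order within each block is an assumption.

```python
import numpy as np

def calibration_matrix(calib, kx, ky):
    """Gather all kx-by-ky sliding blocks of the calibration k-space data
    (shape nc x cx x cy) into the matrix A of EQ. (7): each column stacks
    the nc coil blocks, kx*ky entries per coil, one column per sliding
    block position (nb columns in total)."""
    nc, cx, cy = calib.shape
    cols = [calib[:, i:i + kx, j:j + ky].reshape(-1)
            for i in range(cx - kx + 1)
            for j in range(cy - ky + 1)]
    return np.stack(cols, axis=1)
```

For 32.times.32 calibration data and a 6.times.6 kernel this yields the 729 columns quoted in the experiments below.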
[0044] Let
A=V.SIGMA.U.sup.H (8)
be the Singular Value Decomposition of A. Denote
[0045]

$$V_\parallel=[v_1,v_2,\ldots,v_\tau]=\begin{pmatrix}v_{1,1}&v_{1,2}&\cdots&v_{1,\tau}\\v_{2,1}&v_{2,2}&\cdots&v_{2,\tau}\\\vdots&\vdots&&\vdots\\v_{n_c,1}&v_{n_c,2}&\cdots&v_{n_c,\tau}\end{pmatrix}\qquad(9)$$
as a matrix composed of the left singular vectors of A
corresponding to the leading .tau. singular values. Here,
v.sub.i,k, a k.sub.xk.sub.y.times.1 column vector, is the i-th
block in the vector v.sub.k. It can be observed from EQS. (7) and
(9) that A and V.sub..parallel. share a similar structure: (1) the
blocks a.sub.i,k and v.sub.i,k have the same size
(k.sub.xk.sub.y.times.1); and (2) the number of such blocks in each
column is the same.
[0046] Let P=V.sub..parallel.V.sub..parallel..sup.H, and denote
$$P=\left(\begin{pmatrix}p_{1,1,1}&p_{1,1,2}&\cdots&p_{1,1,k_xk_y}\\p_{1,2,1}&p_{1,2,2}&\cdots&p_{1,2,k_xk_y}\\\vdots&\vdots&&\vdots\\p_{1,n_c,1}&p_{1,n_c,2}&\cdots&p_{1,n_c,k_xk_y}\end{pmatrix}\cdots\begin{pmatrix}p_{n_c,1,1}&\cdots&p_{n_c,1,k_xk_y}\\p_{n_c,2,1}&\cdots&p_{n_c,2,k_xk_y}\\\vdots&&\vdots\\p_{n_c,n_c,1}&\cdots&p_{n_c,n_c,k_xk_y}\end{pmatrix}\right)\qquad(10)$$
where p.sub.i,j,t is a k.sub.xk.sub.y.times.1 column vector.
Embodiments have used a similar block representation for P as A and
V.sub..parallel. in that they all have n.sub.c "rows". Note that
the right hand side of EQ. (10) also represents P.sup.H, as P is
Hermitian (i.e., P=P.sup.H).
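The truncated left singular basis of EQ. (9) and the projector P can be computed with a standard SVD routine. A minimal sketch, with the function name and return convention as assumptions:

```python
import numpy as np

def projector(A, tau):
    """V_par: left singular vectors of A for the tau leading singular
    values (EQ. (9)); P = V_par V_par^H, the Hermitian orthogonal
    projector of paragraph [0046]."""
    V, _, _ = np.linalg.svd(A, full_matrices=False)
    V_par = V[:, :tau]
    P = V_par @ V_par.conj().T
    return V_par, P
```

Because the columns of V_par are orthonormal, P is Hermitian and idempotent, which is exactly the property used in EQ. (11) below.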
[0047] As V.sub..parallel. and V have orthonormal column vectors,
$V_\parallel^HV=[I\;0]$, so that

$$V_\parallel V_\parallel^HA=V_\parallel V_\parallel^HV\Sigma U^H=[V_\parallel\;0]\,\Sigma U^H\approx A,\qquad(11)$$

where I denotes an identity matrix. When .tau. equals the rank of
A, the ".apprxeq." becomes "=". EQ. (11) indicates that each
column of A lies in the subspace spanned by the columns of
V.sub..parallel. when .tau. is set to the rank of A.
[0048] Next, embodiments generalize EQ. (11) to all k-space blocks.
Specifically, let x.sub.i,r be a k.sub.xk.sub.y.times.1 column
vector that denotes the k-space block of the i-th coil at the
spatial location r. Assuming that the k-space blocks lie in the
space spanned by V.sub..parallel.,

$$\begin{pmatrix}x_{1,r}\\x_{2,r}\\\vdots\\x_{n_c,r}\end{pmatrix}=V_\parallel V_\parallel^H\begin{pmatrix}x_{1,r}\\x_{2,r}\\\vdots\\x_{n_c,r}\end{pmatrix},\quad\forall r.\qquad(12)$$

The case when EQ. (12) holds approximately will be discussed in the
next subsection.
[0049] Picking a column of P defined in EQ. (10) and making use of
P=P.sup.H=V.sub..parallel.V.sub..parallel..sup.H, EQ. (12)
yields

$$x_{i,r}^t=\sum_{j=1}^{n_c}(p_{i,j,t})^Hx_{j,r},\quad\forall r,\qquad(13)$$

where x.sub.i,r.sup.t denotes the t-th element in the vector
x.sub.i,r. FIG. 4 illustrates the principle of an eigenvector
approach according to an embodiment of the invention of EQ. (13)
with k.sub.x=k.sub.y=3: x.sub.i,r.sup.t, denoted as the square 41 in
the left plot, is represented by the summation of the convolutions
between the k-space data of the j-th coil and the kernels
(p.sub.i,j,t).sup.H for j=1, 2, . . . , n.sub.c; the plots in the
first, second, and third rows correspond to t=1, 5, 9, respectively;
and the nine small squares 42 in X.sub.1 and 43 in X.sub.n.sub.c,
which denote (p.sub.i,j,t).sup.H, constitute the convolution kernel.
Note that, by moving the sliding block x.sub.i,r over all the spatial
locations r=1, 2, . . . , n.sub.xn.sub.y, the left hand side of EQ.
(13) covers the k-space data of the i-th coil, and the right hand
side of EQ. (13) corresponds to the summation of the convolutions
between the k-space data of the j-th coil and the kernel
(p.sub.i,j,t).sup.H for j=1, . . . , n.sub.c.
[0050] For a given i and t, applying the inverse Fourier
transformation to both sides of EQ. (13) yields
$$c_i=\sum_{j=1}^{n_c}\overline{s_{i,j,t}}\,c_j,\qquad(14)$$
where i=1, 2, . . . , n.sub.c; t=1, 2, . . . , k.sub.xk.sub.y,
and
s.sub.i,j,t=F.sup.H(P.sub.t(p.sub.i,j,t)) (15)
is the result of applying the inverse Fourier transform to
p.sub.i,j,t with proper zero padding; the overbar in EQ. (14)
denotes the complex conjugate of s.sub.i,j,t.
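EQ. (15) can be sketched as below. The placement of the t-th kernel entry at the image center (the operator P.sub.t of FIG. 2), the 0-based row-major indexing of t, and the fftshift convention are all assumptions of this sketch.

```python
import numpy as np

def s_map(p, t, kx, ky, nx, ny):
    """Sketch of s_{i,j,t} = F^H(P_t(p_{i,j,t})) in EQ. (15): zero-pad the
    kx*ky kernel p so that its t-th entry (0-based, row-major) sits at the
    image center, then apply the inverse 2D Fourier transform."""
    block = np.asarray(p, dtype=complex).reshape(kx, ky)
    y = np.zeros((nx, ny), dtype=complex)
    tx, ty = divmod(t, ky)                           # position of entry t
    cx, cy = (nx + 1) // 2 - 1, (ny + 1) // 2 - 1    # image center
    y[cx - tx:cx - tx + kx, cy - ty:cy - ty + ky] = block
    return np.fft.ifft2(np.fft.ifftshift(y))
```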
[0051] EQ. (14) indicates that the values at different spatial
locations can be decoupled. Let c.sub.i.sup.r, s.sub.i,j,t.sup.r
and c.sub.j.sup.r denote the entries of c.sub.i, s.sub.i,j,t, and
c.sub.j at the spatial location r, respectively. Embodiments can
rewrite EQ. (14) as
$$c_i^r=\sum_{j=1}^{n_c}\overline{s_{i,j,t}^r}\,c_j^r,\qquad(16)$$
where i=1, 2, . . . , n.sub.c; t=1, 2, . . . , k.sub.xk.sub.y.
[0052] Let c.sup.r be a column vector composed of c.sub.i's at the
spatial location r. Then,
$$S^r=\left(\begin{pmatrix}s_{1,1,1}^r&s_{1,1,2}^r&\cdots&s_{1,1,k_xk_y}^r\\s_{1,2,1}^r&s_{1,2,2}^r&\cdots&s_{1,2,k_xk_y}^r\\\vdots&\vdots&&\vdots\\s_{1,n_c,1}^r&s_{1,n_c,2}^r&\cdots&s_{1,n_c,k_xk_y}^r\end{pmatrix}\cdots\begin{pmatrix}s_{n_c,1,1}^r&\cdots&s_{n_c,1,k_xk_y}^r\\s_{n_c,2,1}^r&\cdots&s_{n_c,2,k_xk_y}^r\\\vdots&&\vdots\\s_{n_c,n_c,1}^r&\cdots&s_{n_c,n_c,k_xk_y}^r\end{pmatrix}\right)\qquad(17)$$

Let

$$M=\left(\begin{pmatrix}1&1&1\\0&0&0\\0&0&0\end{pmatrix}\begin{pmatrix}0&0&0\\1&1&1\\0&0&0\end{pmatrix}\begin{pmatrix}0&0&0\\0&0&0\\1&1&1\end{pmatrix}\right)\qquad(18)$$
be a sparse matrix of the same size as S.sup.r (i.e.,
n.sub.c.times.k.sub.xk.sub.yn.sub.c). It can be verified that
MM.sup.H=k.sub.xk.sub.yI, where I denotes an identity matrix. EQ.
(16) can be rewritten as
$$M^Hc^r=(S^r)^Hc^r.\qquad(19)$$
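The selection matrix M of EQ. (18) is a Kronecker product, and the identity MM.sup.H=k.sub.xk.sub.yI quoted above is easy to check numerically. A minimal sketch (the helper name make_M is an assumption):

```python
import numpy as np

def make_M(nc, kx, ky):
    """M of EQ. (18): an nc x (kx*ky*nc) block-diagonal matrix whose i-th
    block row is a 1 x (kx*ky) row of ones."""
    return np.kron(np.eye(nc), np.ones((1, kx * ky)))
```

With n.sub.c=3 and k.sub.xk.sub.y=3 this reproduces the 3.times.9 pattern displayed in EQ. (18).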
[0053] EQ. (19) indicates that c.sup.r is the eigenvector of a
generalized eigenvalue equation
$$(S^r)^H\alpha=\lambda M^H\alpha,\qquad(20)$$
corresponding to the eigenvalue 1, where .alpha. is an
n.sub.c.times.1 column vector representing the generalized
eigenvector and .lamda. is a scalar representing the generalized
eigenvalue.
[0054] Since S.sup.r and M are n.sub.c.times.k.sub.xk.sub.yn.sub.c
non-square matrices, it is unclear (1) whether there exist a real
eigenvalue .lamda. and a (complex) eigenvector .alpha. that satisfy
EQS. (19) and (20); and (2) how one can solve EQ. (19) efficiently
and robustly. The above situation is further complicated by the
fact that EQ. (12) usually holds only approximately, as .tau. is
usually set less than or equal to the rank of A, even for the
calibration data. Therefore, the left and right sides of EQS. (13),
(14), (16), and (19) may only be approximately equal.
[0055] To deal with the aforementioned issues, embodiments of the
invention solve the following two eigenvalue equations:
$$\frac{M(S^r)^H}{k_xk_y}\beta=\bar\lambda\beta,\qquad(21)$$

$$\frac{S^r(S^r)^H}{k_xk_y}\gamma=\tilde\lambda\gamma,\qquad(22)$$
where $\bar\lambda$ and $\tilde\lambda$ denote the eigenvalues, and
.beta. and .gamma. denote the eigenvectors. Specifically, embodiments of the
invention can compute c.sup.r as the eigenvector of EQ. (21)
corresponding to the largest eigenvalue. In other words, c.sup.r is
the solution to the following equation
$$c^r=\arg\max_{\alpha:\,\|\alpha\|_2=1}\left\langle(S^r)^H\alpha,\,M^H\alpha\right\rangle\qquad(23)$$
and c.sup.r is the optimizer that maximizes the correlation between
(S.sup.r).sup.H.alpha. and M.sup.H.alpha., i.e., the correlation
between the left and right hand sides of EQ. (19). Embodiments of
the invention may refer to this approach as MACO, which stands for
maximal correlation. In addition, EQ. (23) also applies to the case
in which EQ. (19) holds exactly.
[0056] It will now be shown that M(S.sup.r).sup.H is Hermitian:
Theorem 1: The matrix M(S.sup.r).sup.H is Hermitian, i.e.,
M(S.sup.r).sup.H=S.sup.rM.sup.H. Thus, the eigenvalues of EQ. (21)
are all real.
[0057] The proof of Theorem 1 is provided in the Appendix, below.
The eigen-decomposition of $M(S^r)^H/(k_xk_y)$
has the following features: (1) the matrix is Hermitian, and the
existence of real eigenvalues is guaranteed; (2) there exist
numerically stable and efficient approaches for the eigenvalue
decomposition of a Hermitian matrix; and (3) the size of
M(S.sup.r).sup.H, n.sub.c.times.n.sub.c, is much smaller than that
of S.sup.r. In addition, experiments suggest that M(S.sup.r).sup.H
may be positive semi-definite.
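Because M(S.sup.r).sup.H/(k.sub.xk.sub.y) is Hermitian (Theorem 1 below), c.sup.r can be extracted with a standard Hermitian eigensolver at each voxel. A minimal per-voxel sketch; the explicit symmetrization step is a numerical safeguard, an assumption beyond the text:

```python
import numpy as np

def leading_eigvec(B):
    """c^r at one voxel: the eigenvector of the Hermitian nc x nc matrix
    B = M (S^r)^H / (kx*ky) associated with its largest eigenvalue
    (EQ. (21))."""
    B = 0.5 * (B + B.conj().T)     # enforce Hermitian symmetry numerically
    w, E = np.linalg.eigh(B)       # eigh: eigenvalues in ascending order
    return E[:, -1]
```

Note that the result is determined only up to a unit-magnitude phase factor, a point the text returns to in the reconstruction experiments.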
[0058] Next, the relationships among EQS. (20), (21), and (22) will
be established.
Theorem 2: For any .lamda. and .alpha. satisfying the generalized
eigenvalue equation of EQ. (20), .lamda. is real, and
$$\frac{M(S^r)^H}{k_xk_y}\alpha=\lambda\alpha,\qquad(24)$$
i.e., .lamda. and .alpha. are also the eigenvalue and eigenvector
of EQ. (21), respectively.
[0059] The proof of Theorem 2 is given in the Appendix, below.
Theorem 2 shows that any .lamda. and .alpha. that satisfy EQ. (19)
can be computed by applying the eigenvalue decomposition to the
square Hermitian matrix $M(S^r)^H/(k_xk_y)$.
Theorem 3: For any .lamda. and .alpha. satisfying the generalized
eigenvalue equation of EQ. (20),

$$\frac{S^r(S^r)^H}{k_xk_y}\alpha=\lambda^2\alpha,\qquad(25)$$

i.e., .lamda..sup.2 and .alpha. are also the eigenvalue and
eigenvector of EQ. (22), respectively. In addition, .lamda..sup.2
and .alpha. are the eigenvalue and eigenvector of EQ. (25) if and
only if they are the eigenvalue and eigenvector of

$$\left(\frac{M(S^r)^H}{k_xk_y}\right)^2\alpha=\lambda^2\alpha.\qquad(26)$$
[0060] The proof of Theorem 3 is given in the Appendix, below. This
theorem shows that: (1) if the generalized eigenvalue equation has
solutions, they can also be obtained by solving EQ. (22); (2) the
eigenvalue decomposition of $S^r(S^r)^H/(k_xk_y)$ has the same
eigenvalues and eigenvectors as $(M(S^r)^H/(k_xk_y))^2$; and (3) an
eigenvector of $S^r(S^r)^H/(k_xk_y)$ is also an eigenvector of
$M(S^r)^H/(k_xk_y)$, as $M(S^r)^H/(k_xk_y)$ is Hermitian. If the
conjecture that $M(S^r)^H/(k_xk_y)$ is positive semidefinite is
true, then $S^r(S^r)^H/(k_xk_y)$ has the same eigenvalues as
$M(S^r)^H/(k_xk_y)$.
[0061] A naive implementation would be to compute S.sup.r for all
the spatial locations and then apply eigenvalue decomposition to
EQ. (21) to obtain c.sup.r. However, this requires
n.sub.c.sup.2k.sub.xk.sub.y 2D Fourier transformations, leading to
a computation cost of O(n.sub.c.sup.2k.sub.xk.sub.yn.sub.xn.sub.y
log(n.sub.xn.sub.y)) and a storage cost of
O(n.sub.c.sup.2k.sub.xk.sub.yn.sub.xn.sub.y). This can be too
demanding for both computation and storage. For example, when
n.sub.x=n.sub.y=256, k.sub.x=k.sub.y=6, and n.sub.c=32, it costs
about 10 GB of storage using single precision in Matlab and
O(10.sup.9) floating point operations. When the kernel size
k.sub.x.times.k.sub.y is large, the computational and
storage costs are even higher.
[0062] To deal with the above issues, a MACO approach according to
embodiments of the invention need only to compute S.sup.rM.sup.H or
S.sup.r(S.sup.r).sup.H. A useful operation for efficiently
computing S.sup.rM.sup.H is
$$m_{i,j}^r=\sum_{t=1}^{k_xk_y}s_{i,j,t}^r,\quad\forall r,\qquad(27)$$
where m.sub.i,j.sup.r is defined for each spatial location r of the
2D MR image. Collecting all spatial locations, EQ. (27) can be rewritten as
$$m_{i,j}=\sum_{t=1}^{k_xk_y}s_{i,j,t},\qquad(28)$$
where m.sub.i,j is an n.sub.xn.sub.y.times.1 column vector
containing the entries of m.sub.i,j.sup.r at all spatial locations.
Incorporating EQ. (15):
$$m_{i,j}=\sum_{t=1}^{k_xk_y}F^H(P_t(p_{i,j,t}))=F^H\left(\sum_{t=1}^{k_xk_y}P_t(p_{i,j,t})\right),\qquad(29)$$
which shows that only one Fourier transformation is needed to
obtain m.sub.i,j. As i and j go from 1 to n.sub.c and
S.sup.rM.sup.H is Hermitian, only $n_c(n_c+1)/2$
Fourier transformations are needed. With m.sub.i,j, it can be shown
that the entry at the i-th row and j-th column of the matrix
S.sup.rM.sup.H is m.sub.i,j.sup.r. The computation cost is
O(n.sub.c.sup.2n.sub.xn.sub.y log(n.sub.xn.sub.y)), and the memory
cost is O(n.sub.c.sup.2n.sub.xn.sub.y).
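The saving in EQ. (29), summing the zero-padded kernels over t before applying a single inverse FFT, can be sketched as follows; the kernel placement follows the P.sub.t convention of FIG. 2, and the 0-based indexing of t is an assumption.

```python
import numpy as np

def m_map(p_list, kx, ky, nx, ny):
    """m_{i,j} of EQ. (29): accumulate P_t(p_{i,j,t}) over t = 0..kx*ky-1
    and apply one inverse 2D FFT, instead of one FFT per t as EQ. (28)
    would suggest."""
    acc = np.zeros((nx, ny), dtype=complex)
    cx, cy = (nx + 1) // 2 - 1, (ny + 1) // 2 - 1
    for t, p in enumerate(p_list):
        block = np.asarray(p, dtype=complex).reshape(kx, ky)
        tx, ty = divmod(t, ky)      # shift so that entry t sits at the center
        acc[cx - tx:cx - tx + kx, cy - ty:cy - ty + ky] += block
    return np.fft.ifft2(np.fft.ifftshift(acc))
```

Linearity of the inverse FFT guarantees that this equals the sum of the per-kernel transforms of EQ. (28).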
[0063] When .tau. is small, EQ. (22) can be more efficiently
computed than EQ. (21). Let

z.sub.j,k=F.sup.H(P.sub.c(v.sub.j,k)) (30)

and let z.sub.j,k.sup.r denote the r-th entry of z.sub.j,k. Let
Z.sup.r be an n.sub.c.times..tau. matrix with the j-th row and k-th
column being z.sub.j,k.sup.r, i.e.,

$$Z^r=\begin{pmatrix}z_{1,1}^r&z_{1,2}^r&\cdots&z_{1,\tau}^r\\z_{2,1}^r&z_{2,2}^r&\cdots&z_{2,\tau}^r\\\vdots&\vdots&&\vdots\\z_{n_c,1}^r&z_{n_c,2}^r&\cdots&z_{n_c,\tau}^r\end{pmatrix}\qquad(31)$$
Theorem 4: S.sup.r(S.sup.r).sup.H can be computed as:
S.sup.r(S.sup.r).sup.H=k.sub.xk.sub.yZ.sup.r(Z.sup.r).sup.H
(32)
The proof of Theorem 4 can be found in the Appendix, below.
[0064] By transforming the computation of S.sup.r(S.sup.r).sup.H to
Z.sup.r(Z.sup.r).sup.H in EQ. (32), the number of 2D Fourier
transforms can be reduced from n.sub.c.sup.2k.sub.xk.sub.y to
n.sub.c.tau.. As .tau. is typically far less than
n.sub.ck.sub.xk.sub.y, this saves computation. In
addition, the storage cost can be reduced from
O(n.sub.c.sup.2k.sub.xk.sub.yn.sub.xn.sub.y) to
O(n.sub.c.sup.2n.sub.xn.sub.y), when Z.sup.r(Z.sup.r).sup.H is
implemented as follows. An implementation according to an
embodiment of the invention is based on the observation that the
entry at the i-th row and j-th column of Z.sup.r(Z.sup.r).sup.H can
be computed as

$$\sum_{k=1}^{\tau}z_{i,k}^r\,\overline{z_{j,k}^r},\qquad(33)$$

where z.sub.i,k.sup.r denotes the entry at the i-th row and k-th
column of Z.sup.r. This shows that, instead of pre-computing
Z.sup.r, embodiments of the invention can compute one column of
Z.sup.r at a time.
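Computing S.sup.r(S.sup.r).sup.H through Z.sup.r (Theorem 4, below), one column of Z at a time per EQ. (33), can be sketched as follows. The layout of V.sub..parallel. (n.sub.c blocks of k.sub.xk.sub.y rows per column) and the padding convention are assumptions of this sketch.

```python
import numpy as np

def pad_center(block, nx, ny):
    # embed a kx-by-ky block so its center entry sits at the image center
    kx, ky = block.shape
    y = np.zeros((nx, ny), dtype=complex)
    cx, cy = (nx + 1) // 2 - 1, (ny + 1) // 2 - 1
    bx, by = (kx + 1) // 2 - 1, (ky + 1) // 2 - 1
    y[cx - bx:cx - bx + kx, cy - by:cy - by + ky] = block
    return y

def gram_via_Z(V_par, nc, kx, ky, nx, ny):
    """S^r (S^r)^H = kx*ky * Z^r (Z^r)^H at every voxel (EQ. (32)),
    accumulated one singular vector at a time (EQ. (33)) so that Z^r is
    never stored in full. Returns an (nx, ny, nc, nc) array."""
    tau = V_par.shape[1]
    G = np.zeros((nx, ny, nc, nc), dtype=complex)
    for k in range(tau):
        z = np.empty((nc, nx, ny), dtype=complex)
        for j in range(nc):
            blk = V_par[j * kx * ky:(j + 1) * kx * ky, k].reshape(kx, ky)
            z[j] = np.fft.ifft2(np.fft.ifftshift(pad_center(blk, nx, ny)))
        G += kx * ky * np.einsum('ixy,jxy->xyij', z, z.conj())
    return G
```

By construction each per-voxel matrix is a sum of scaled outer products, hence Hermitian positive semidefinite.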
[0065] Table II, shown in FIG. 5, summarizes the cost of
computing S.sup.r, M(S.sup.r).sup.H, and
S.sup.r(S.sup.r).sup.H. It can be seen that the computational and
storage costs of computing M(S.sup.r).sup.H and
S.sup.r(S.sup.r).sup.H are much lower than those for S.sup.r. In
addition, when $\tau<(n_c+1)/2$,
the computation cost of S.sup.r(S.sup.r).sup.H is lower than that
of M(S.sup.r).sup.H.
[0066] FIG. 6 is a flowchart of an eigenvector approach to
estimating coil sensitivity maps, according to an embodiment of the
invention. Referring now to the figure, an eigenvector approach
begins at step 61 by providing the matrix A of sliding blocks of a
2D n.sub.x.times.n.sub.y image of coil calibration data as defined
in EQ. 7. At step 62, the left singular matrix V.sub..parallel. of
EQ. (9) is calculated from the singular value decomposition of A of
EQ. (8), and matrix P is calculated at step 63 from
V.sub..parallel. using P=V.sub..parallel.V.sub..parallel..sup.H. At
step 64, the elements s.sub.i,j,t.sup.r of matrix S.sup.r are
calculated by applying an inverse Fourier transform to a
zero-padded P, and at step 65, the eigenvalue equation
M.sup.Hc.sup.r=(S.sup.r).sup.Hc.sup.r of EQ. (19) is solved using
the eigenvalue systems of EQS. (21) and (22). The system of EQ.
(21) can be solved using EQS. (27), (28), and (29) at step 66, and
the system of EQ. (22) can be solved using EQ. (32), at step
67.
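The full FIG. 6 flow can be sketched end to end. This is a compact illustration only, under several assumptions (index conventions, which slice of P corresponds to p.sub.i,j,t, and where the conjugation of EQ. (16) enters), not a faithful reimplementation of the patented method.

```python
import numpy as np

def maco_csm(calib, nx, ny, kx, ky, tau):
    """Sketch of steps 61-65 of FIG. 6: build A from sliding blocks of the
    nc x cx x cy calibration data, truncate its SVD, form the projector
    P = V_par V_par^H, inverse-Fourier-transform the zero-padded kernels
    (one FFT per (i, j) via EQ. (29)), and take the leading eigenvector of
    the per-voxel Hermitian matrix (EQ. (21)). Returns maps of shape
    (nc, nx, ny), unit-normalized at every voxel."""
    nc, cx, cy = calib.shape
    kk = kx * ky
    # step 61: calibration matrix A (EQ. (7))
    A = np.stack([calib[:, i:i + kx, j:j + ky].reshape(-1)
                  for i in range(cx - kx + 1)
                  for j in range(cy - ky + 1)], axis=1)
    # step 62: leading tau left singular vectors
    V_par = np.linalg.svd(A, full_matrices=False)[0][:, :tau]
    # step 63: projector P (EQ. (10))
    P = V_par @ V_par.conj().T
    # step 64: inverse FFTs of the accumulated zero-padded kernels
    B = np.zeros((nx, ny, nc, nc), dtype=complex)
    cx0, cy0 = (nx + 1) // 2 - 1, (ny + 1) // 2 - 1
    for i in range(nc):
        for j in range(nc):
            acc = np.zeros((nx, ny), dtype=complex)
            for t in range(kk):
                # assumed correspondence of p_{i,j,t} to a slice of P
                p = P[j * kk:(j + 1) * kk, i * kk + t].reshape(kx, ky)
                tx, ty = divmod(t, ky)
                acc[cx0 - tx:cx0 - tx + kx, cy0 - ty:cy0 - ty + ky] += p.conj()
            B[:, :, i, j] = np.fft.ifft2(np.fft.ifftshift(acc)) / kk
    # step 65: per-voxel leading eigenvector of the Hermitian matrix
    csm = np.empty((nc, nx, ny), dtype=complex)
    for x in range(nx):
        for y in range(ny):
            Bh = 0.5 * (B[x, y] + B[x, y].conj().T)  # numerical safeguard
            csm[:, x, y] = np.linalg.eigh(Bh)[1][:, -1]
    return csm
```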
EXPERIMENTS
[0067] Simulation:
[0068] Experiments were performed on a simulated phantom. A 2D
brain phantom and coil sensitivity maps of eight coil channels
were generated using publicly available code, and the size of the
brain phantom was set to n.sub.x=n.sub.y=256. The simulated eight
coil sensitivity maps are shown in FIG. 7(a). With the simulated
brain phantom and the simulated sensitivity maps, k-space data was
generated for the eight coils. To estimate the coil sensitivity
maps, the center 32.times.32 k-space data was used as the
calibration data. The estimated CSM's of a MACO approach according
to an embodiment of the invention are shown in FIG. 7(b), where the
kernel size is set to k.sub.x=k.sub.y=6 and .tau.=55. For
comparison, FIG. 7(c) depicts CSM's estimated by the B1-Map
approach with spatial scaling, and FIG. 7(d) depicts CSM's obtained
by dividing the low resolution coil images by the sum-square
images. It can be observed that, a MACO approach according to an
embodiment of the invention generates CSM's that are very close to
the ground truth. Further, the estimated sensitivity maps in FIG.
7(d) in fact correspond to: (1) those estimated by the B1-Map
approach if the spatial smoothing is not applied; and (2) those
estimated by a MACO approach according to an embodiment of the
invention when the calibration kernel size is set to the size of
the calibration data.
[0069] Next, CSM's estimated by a MACO approach according to an
embodiment of the invention were quantitatively compared with the
ground truth and the B1-Map approach. Let c*.sup.r be the ground
truth coil sensitivities for all the coils at the spatial location
r, assuming that c*.sup.r has unit length, i.e.,
.parallel.c*.sup.r.parallel.=1. Note that, in both a MACO approach
according to an embodiment of the invention and a B1-Map, if
c.sup.r is the obtained eigenvector, then .eta.c.sup.r is also a
valid eigenvector for the corresponding eigenvalue if .eta. is a
complex scalar with unit length (i.e., |.eta.|=1). The similarity
between the estimated c.sup.r and the ground truth c*.sup.r was
measured using the absolute correlation defined as:
$$\rho(c^r,c^{*r})=|\langle c^{*r},c^r\rangle|,\qquad(34)$$
As .parallel.c*.sup.r.parallel.=.parallel.c.sup.r.parallel.=1, if
.rho.(c.sup.r,c*.sup.r)=1, then c.sup.r=.eta.c*.sup.r, where .eta.
is a complex scalar with unit length. It is clear that
0.ltoreq..rho.(c.sup.r,c*.sup.r).ltoreq.1, and the larger the
absolute correlation is, the closer is the estimated c.sup.r to the
ground truth. With the absolute correlation, the goodness of c.sup.r,
r=1, 2, . . . , n.sub.xn.sub.y, can be measured using the mean
absolute correlation (mac) defined as
$$mac=\frac{\sum_{r=1}^{n_xn_y}\rho(c^r,c^{*r})}{n_xn_y}.\qquad(35)$$
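The mac criterion of EQS. (34)-(35) can be sketched directly. The map layout (n.sub.c, n.sub.x, n.sub.y) and the per-voxel normalization of both inputs are assumptions of this sketch.

```python
import numpy as np

def mac(c_est, c_true):
    """Mean absolute correlation (EQ. (35)): average over voxels of
    |<c*_r, c_r>| (EQ. (34)), with both stacks of maps normalized to unit
    length at each voxel so that a global phase and scaling are ignored."""
    def unit(c):
        return c / np.linalg.norm(c, axis=0, keepdims=True)
    rho = np.abs(np.sum(unit(c_true).conj() * unit(c_est), axis=0))
    return rho.mean()
```

By the Cauchy-Schwarz inequality the value always lies in [0, 1], with 1 meaning the estimate matches the ground truth up to a unit-magnitude phase at every voxel.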
[0070] The kernel size is set to k.sub.x=k.sub.y=6 and the values
of the mean absolute correlation (mac) were explored using
different numbers of kept singular vectors, with the results shown
in FIGS. 8(a)-(b). In each of FIGS. 8(a), (b), and (c), plot 81
represents mac values as calculated using a MACO approach according
to an embodiment of the invention, plot 82 represents mac values as
calculated using a B1-Map, and plot 83 represents mac values as
calculated by dividing the low resolution coil image by the
sum-of-square image. Since A in EQ. (7) is a 288.times.729 matrix,
.tau. can be selected in the interval [1, 288]. As can be observed
from FIG. 8(a), the value of mac is low, when .tau. is either very
small or very large. This is reasonable because, on one hand, when
.tau. is very small, V.sub..parallel. cannot capture the full range
space of the sliding blocks, and thus the left and right hand sides
of EQ. (12) do not match well. On the other hand, when .tau. is very
large, P=V.sub..parallel.V.sub..parallel..sup.H defined in EQ. (10)
tends to an identity matrix, and in this case, the left and right
hand sides of EQ. (12) match very well. However, a near identity
matrix for P gives too much freedom to the sliding blocks in EQ.
(12), and thus P contains less information about the calibration
data. In the extreme case, when P is an identity matrix, EQ. (12)
always holds and it does not have any information about the
calibration data. Nevertheless, FIGS. 8(a)-(b) indicate that one
can find a large range of .tau.'s with which a MACO approach
according to an embodiment of the invention yields larger values of
mac than both the B1-Map approach and the approach which divides
the low resolution coil image by the sum-of-square image. The
optimal value of .tau. may be related to the noise in the
calibration data (note that, for the simulated phantom, no noise is
added). Experiments on dozens of data sets show that setting .tau.
to the smallest value such that
$\sum_{k=1}^{\tau}\sigma_k/\sum_{k=1}^{288}\sigma_k\geq 0.95$ works
well, where .sigma..sub.k is the k-th largest singular value of A
in EQ. (7).
[0071] In addition, the performance of a MACO approach according to
an embodiment of the invention was tested under varying kernel
sizes, and the results are depicted in FIG. 8(c). As can be
observed from the figure, the kernel size was varied from 2.times.2
to 10.times.10 with k.sub.x=k.sub.y. The value of .tau. was set to
the smallest value for which
$\sum_{k=1}^{\tau}\sigma_k/\sum_{k=1}^{288}\sigma_k\geq 0.95$. The
results again show that a MACO approach
according to an embodiment of the invention can achieve a higher
mac than the B1-Map. The optimal kernel size may depend on the size
of the calibration data, the size of the image, and the noise
contained in the calibration data.
Reconstruction

For MR reconstruction, the following formulation was used:

$$\min_m\;\frac{1}{2}\sum_{i=1}^{n_c}\left\|F_u(c_im)-y_i\right\|_2^2+\lambda\|Wm\|_1,\qquad(36)$$

where F.sub.u is an under-sampling Fourier transformation operator,
c.sub.i denotes the coil sensitivity map for the i-th coil, m
denotes the image to be reconstructed, y.sub.i is the observed
undersampled k-space data for the i-th coil, and W is a redundant
Haar wavelet transformation.
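The objective of EQ. (36) can be sketched as below. The mask-times-FFT model of F.sub.u is standard, but the simple finite-difference stand-in for the redundant Haar transform W is an assumption made only to keep the sketch short.

```python
import numpy as np

def objective(m, csm, mask, y, lam):
    """Value of EQ. (36): 0.5 * sum_i ||F_u(c_i m) - y_i||^2 + lam*||W m||_1,
    with F_u modeled as an undersampling mask applied after a 2D FFT and W
    replaced by first-order horizontal/vertical differences (a stand-in
    for the redundant Haar wavelet transform of the text)."""
    fidelity = 0.0
    for i in range(csm.shape[0]):
        resid = mask * np.fft.fft2(csm[i] * m) - y[i]
        fidelity += 0.5 * np.linalg.norm(resid) ** 2
    w = np.concatenate([np.diff(m, axis=0).ravel(), np.diff(m, axis=1).ravel()])
    return fidelity + lam * np.abs(w).sum()
```

In practice EQ. (36) is minimized iteratively; this sketch only evaluates the cost for a candidate image m.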
[0072] Reconstruction of Two-dimensional Cardiac Image:
[0073] Data was acquired from a healthy volunteer on a 1.5T
clinical MR scanner. The imaging parameters include: field of
view=313.times.370 mm.sup.2, matrix size=176.times.208, and 20
coils.
[0074] FIG. 9 illustrates the reconstruction of a 2D Cardiac Image.
Images in the first row are the coil sensitivity maps estimated by
a MACO approach according to an embodiment of the invention for
coils 1, 10, and 20, respectively; images in the second row are the
coil sensitivity maps estimated by the B1-Map for coils 1, 10, and
20, respectively; the first image in the last row illustrates
incoherent Cartesian sampling of the k-space; the second image and
the third image in the last row correspond to the reconstructions
with the CSM's estimated by a MACO approach according to an embodiment
of the invention and the B1-Map, respectively.
[0075] The first two rows of FIG. 9 depict the coil
sensitivity maps estimated by a MACO approach according to an embodiment of
the invention and by the B1-Map, with the center 24 lines as calibration
data. It can be seen that the CSMs estimated by a MACO approach
according to an embodiment of the invention, shown in the first
row, are smoother than the B1-Map, shown in the second row. The
first image in the last row illustrates the incoherent Cartesian
sampling pattern for generating the under-sampled k-space data.
Such incoherent sampling leads to an acceleration factor of 6.3.
Images obtained by solving EQ. (36) with CSM's estimated by a MACO
approach according to an embodiment of the invention, the second
image in the last row, are smoother than those estimated by the
B1-Map, shown in the third image in the last row. The CSM estimated
by the B1-Map leads to several artifacts, while the image
reconstructed with the CSM's estimated by a MACO approach according
to an embodiment of the invention contains fewer artifacts due to a
more accurate estimation of the sensitivity maps.
[0076] Reconstruction of Three-Dimensional Abdomen Image:
[0077] Data was acquired from a healthy volunteer on a 1.5T
clinical MR scanner. Imaging parameters include: field of
view=380.times.320 mm.sup.2, matrix size=320.times.270, 72
partitions, and 24 coils. Partial Fourier sampling is used in
the acquisition of the k-space data, and the sampling pattern in
the phase encoding partition plane is shown in FIG. 10(a). Such a
sampling pattern leads to an acceleration factor of 9. An inverse
Fourier transformation is applied to decouple the samples in the
readout direction, and the CSM's are estimated and reconstructed
for each decoupled slice along this direction. Specifically, the
CSM's for each slice are estimated from the center calibration data
of size 24.times.24. Images in the first and second rows of FIG.
10(b) show the coil sensitivity maps estimated by a MACO approach
according to an embodiment of the invention and the B1-Map for one
partition, and the reconstructed images for the same partition are
shown in the last row of FIG. 10(b). Note that: (1) the CSM's
obtained by a MACO approach according to an embodiment of the
invention are smoother than those obtained from the B1-Map; (2)
some infolding artifacts are observed in the image reconstructed
with the CSM's estimated by the B1-Map; and (3) the CSM's estimated
by a MACO approach according to an embodiment of the invention have
fewer artifacts.
[0078] Like the B1-Map approach, if c.sup.r is the obtained
eigenvector by a MACO approach according to an embodiment of the
invention, then .eta.c.sup.r is also a valid eigenvector for the
corresponding eigenvalue if .eta. is a complex scalar with unit
length, i.e., |.eta.|=1. This indicates that a MACO approach
according to an embodiment of the invention may not be able to
recover the phase of the reconstructed image. According to
embodiments of the invention, coil sensitivity maps were normalized
to the first coil, and thus the phase of the reconstructed image is
relative to the first coil. This may not be an issue when the
magnitude of the reconstructed image is the focus.
Influence of the Amount of Calibration Data on the Estimated CSM
Quality
[0079] In dynamic cardiac MR imaging, the k-space data can be
acquired in an interleaving way, as illustrated in the left plot of
FIG. 11, which illustrates a typical Cartesian sampling pattern
used in dynamic cardiac MR imaging. In the figure, solid lines
represent acquired data, and dashed lines represent missing data.
The average k-space data over time, shown in the right plot of FIG.
11, can be used for calibration purposes. One might think that using
more calibration data will improve the quality of the estimated
CSM. However, in real applications, the data are usually
accompanied by noise. More importantly, the boundary high frequency
data might be more sensitive to noise than the center low frequency
data, as the high frequencies are typically much weaker in strength
than the low frequencies, leading to less robustness when the same
noise is added. Therefore, one could conjecture that "the center
square calibration data could perform better than the center
rectangular calibration data for coil sensitivity maps
estimation".
[0080] Experiments were conducted using k-space data acquired on a
1.5T clinical MR scanner. The acquired k-space data was subsampled
to generate the following under-sampled k-space data: 24 temporal
phases each containing 18 frequency encodings, 30 coils, and image
size=192.times.164. FIGS. 12(a)-(b) illustrate the influence of the
amount of used calibration data on the reconstruction. FIG. 12(a)
depicts the mask for generating the calibration k-space data and
the estimated coil sensitivity maps, where white denotes that the
corresponding average k-space data are used for calibration, and
FIG. 12(b) depicts the reconstruction images. FIG. 12(a)
illustrates four coil sensitivity maps estimated by an eigenvector
approach according to an embodiment of the invention, using (1) the
center rectangular 44 lines (top row), (2) the center rectangular
32 lines (2.sup.nd row), (3) the center 44.times.44 square block
(3.sup.rd row), and (4) the center 32.times.32 square block (bottom
row). It can be seen that, compared with the rectangular
calibration data, the square center calibration data can achieve a
smoother CSM. For example, the black holes 121, 122 in coil 3 and
coil 4 are observed when using center rectangular calibration data,
but not when using center square calibration data. FIG. 12(b)
compares the reconstruction results achieved by a reconstruction
approach according to an embodiment of the invention. In FIG.
12(b), the boxed regions 124 at the upper left are enlarged and
displayed in the remainder of each image. The noise observed using a CSM
with rectangular calibration data disappears when using the square
calibration data.
[0081] In addition, experiments were conducted on synthetic static
2D data as follows. A phantom of size 256.times.256 was generated,
and then 8 coil sensitivity maps were synthesized from which the
k-space data was generated. The CSM was estimated by using (1) the
center k.times.k square data, and (2) the center rectangular k
lines, where k ranges from 32 to 64. The mac (mean absolute
correlation) criterion defined above was computed. The value of mac is between 0
and 1, and a larger mac value means that the estimated coil profile
is closer to the simulated ground truth. FIG. 13 compares the
square calibration data 131 and the rectangular calibration data
132 in terms of the mac criterion, from which can be seen a clear
advantage of the square calibration data. The underlying reason
might be that the center low frequency data are less sensitive to
the noise than the boundary high frequency data due to the larger
intensities.
Extension to 3-Dimensions
[0082] When the data is acquired using 3D Cartesian sampling, one
way for estimating the coil profiles is to decouple the data along
the frequency-encoding direction and then perform the estimation in
a 2D manner. Then, the 3D coil profiles can be obtained by stacking
the estimated 2D coil profiles. The independent estimation of the
coil profiles for each slice causes discontinuity in the profile
along the frequency encoding direction. This discontinuity can be
propagated into the reconstruction, causing artifacts. In addition,
applying a 2D eigen-vector approach ignores the correlation between
the calibration data in the frequency encoding direction and
downgrades the effect of rank reduction in the eigen-vector
approach achieved by singular value decomposition.
[0083] A 3D eigen-vector approach according to an embodiment of the
invention utilizes all the calibration data together and estimates
the coil profiles from different slices simultaneously. FIG. 14
illustrates the conversion of the calibration data into a
calibration matrix for 3D CSM estimation, according to an
embodiment of the invention. As shown in FIG. 14, the calibration
matrix A is constructed from calibration data of size
c.sub.x.times.c.sub.y.times.c.sub.z by gathering sliding blocks of
kernel size k.sub.x.times.k.sub.y.times.k.sub.z. The number of
columns in A is
[(c.sub.x-k.sub.x+1).times.(c.sub.y-k.sub.y+1).times.(c.sub.z-k.sub.-
z+1)], and the number of rows of A is k.sub.xk.sub.yk.sub.zn.sub.c,
where n.sub.c is the number of coils. A low rank subspace of A can
be computed by the singular value decomposition
A=V.SIGMA.U.sup.H, with V.sub..parallel. set to the left singular
vectors of A corresponding to the leading .tau. singular values.
Let S.sup.c be the coil sensitivity of coil c.epsilon.{1, 2, . . .
, n.sub.c}, computed following an eigenvector approach according to
an embodiment of the invention as discussed above, i.e., set to the
eigenvectors corresponding to the largest eigenvalues.
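As a rough illustration of this construction, the sketch below builds the calibration matrix A from sliding 3-D blocks and extracts V.sub..parallel. via an SVD. This is a minimal sketch, not the patented implementation: the array layout (c.sub.x, c.sub.y, c.sub.z, n.sub.c), the ordering of rows within each column, and the random `calib` data are all assumptions made for illustration.

```python
import numpy as np

def calibration_matrix_3d(calib, kx, ky, kz):
    """Gather sliding 3-D blocks of the calibration volume into the matrix A.

    calib: complex array of assumed shape (cx, cy, cz, nc).
    Returns A with kx*ky*kz*nc rows and (cx-kx+1)(cy-ky+1)(cz-kz+1)
    columns, one sliding block per column, matching the sizes in the text.
    """
    cx, cy, cz, nc = calib.shape
    # Windows over the three spatial axes; the coil axis is kept whole.
    blocks = np.lib.stride_tricks.sliding_window_view(
        calib, (kx, ky, kz), axis=(0, 1, 2))
    n_cols = (cx - kx + 1) * (cy - ky + 1) * (cz - kz + 1)
    return blocks.reshape(n_cols, nc * kx * ky * kz).T

rng = np.random.default_rng(0)
calib = (rng.standard_normal((12, 12, 8, 3))
         + 1j * rng.standard_normal((12, 12, 8, 3)))
A = calibration_matrix_3d(calib, kx=3, ky=3, kz=3)  # 81 rows x 600 columns

# Low-rank subspace: left singular vectors of the tau leading singular values.
tau = 6
V, sigma, UH = np.linalg.svd(A, full_matrices=False)
V_par = V[:, :tau]  # V_parallel: orthonormal basis of the leading subspace
```

The column count and row count of A follow directly from the formulas above; the SVD then plays the same role as in the 2-D case.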
[0084] A 3D CSM estimation method according to an embodiment of the
invention was compared with a 2D method using 3D data. The data
were acquired from a healthy volunteer on a 1.5T clinical MR
scanner. The imaging parameters include a field of view of
320.times.260.times.96 mm.sup.3 and 26 coils. In the phase
encoding-partition plane, a total of 2370 (out of 24960) points
were acquired, and the central 24.times.24 block was treated as
calibration data. FIGS. 15(a)-(h) illustrate a comparison of a 2D
eigenvector approach and a 3D eigenvector approach according to an
embodiment of the invention. FIGS. 15(a), (c), and (e) depict coil
profiles of coils nos. 1, 5 and 10 using a 2D eigenvector approach,
and FIGS. 15(b), (d), and (f) depict coil profiles of coils nos. 1,
5 and 10 using a 3D eigenvector approach according to an embodiment
of the invention. As shown in FIGS. 15(a)-(f), the vertical
direction is the frequency encoding direction. The coil profiles
from the 3D method are smoother than the ones from the 2D method,
especially along the frequency encoding direction. FIGS. 15(g)-(h)
compare the central region of the reconstructed image at slice 72
along the transversal axis, using a 2D eigenvector approach for
FIG. 15(g) and a 3D eigenvector approach for FIG. 15(h). The
overall image using coil profiles from a 3D method according to an
embodiment of the invention is smoother than the image from the 2D
method. The artifact marked by circle 152 in FIG. 15(h) is less
obvious than that marked by circle 151 in FIG. 15(g).
[0085] Reconstruction with the coil profile from a 3D method
according to an embodiment of the invention substantially
eliminates artifacts caused by discontinuities along the frequency
encoding direction from the 2D method. The overall image noise is
lower in a 3D method according to an embodiment of the invention
due to a smoother coil profile. Efficient computation of a 3D
eigenvector method according to an embodiment of the invention can
be achieved by utilizing the methods according to embodiments of
the invention disclosed above, which have a storage cost of
O(k.sub.xn.sub.yn.sub.c.sup.2).
System Implementations
[0086] It is to be understood that embodiments of the present
invention can be implemented in various forms of hardware,
software, firmware, special purpose processes, or a combination
thereof. In one embodiment, the present invention can be
implemented in software as an application program tangibly embodied
on a computer readable program storage device. The application
program can be uploaded to, and executed by, a machine comprising
any suitable architecture.
[0087] FIG. 16 is a block diagram of an exemplary computer system
for implementing an eigenvector approach to coil sensitivity maps
(CSM) estimation for 2-D MR images according to an embodiment of
the invention. Referring now to FIG. 16, a computer system 161 for
implementing the present invention can comprise, inter alia, a
central processing unit (CPU) 162, a memory 163 and an input/output
(I/O) interface 164. The computer system 161 is generally coupled
through the I/O interface 164 to a display 165 and various input
devices 166 such as a mouse and a keyboard. The support circuits
can include circuits such as cache, power supplies, clock circuits,
and a communication bus. The memory 163 can include random access
memory (RAM), read only memory (ROM), disk drive, tape drive, etc.,
or a combination thereof. The present invention can be implemented
as a routine 167 that is stored in memory 163 and executed by the
CPU 162 to process the signal from the signal source 168. As such,
the computer system 161 is a general purpose computer system that
becomes a specific purpose computer system when executing the
routine 167 of the present invention.
[0088] The computer system 161 also includes an operating system
and micro instruction code. The various processes and functions
described herein can either be part of the micro instruction code
or part of the application program (or combination thereof) which
is executed via the operating system. In addition, various other
peripheral devices can be connected to the computer platform such
as an additional data storage device and a printing device.
[0089] It is to be further understood that, because some of the
constituent system components and method steps depicted in the
accompanying figures can be implemented in software, the actual
connections between the system components (or the process steps)
may differ depending upon the manner in which the present invention
is programmed. Given the teachings of the present invention
provided herein, one of ordinary skill in the related art will be
able to contemplate these and similar implementations or
configurations of the present invention.
[0090] While the present invention has been described in detail
with reference to exemplary embodiments, those skilled in the art
will appreciate that various modifications and substitutions can be
made thereto without departing from the spirit and scope of the
invention as set forth in the appended claims.
APPENDIX
Proof of Theorem 1:
[0091] The proof begins with two technical lemmas.
Lemma 1:
[0092] Assume n.sub.x and n.sub.y are even numbers. Let A, B, P and
Q denote n.sub.x.times.n.sub.y complex matrices, with the entries
at the i-th row and the j-th column being a.sub.i,j, b.sub.i,j,
p.sub.i,j and q.sub.i,j, respectively. Let P and Q be the 2-D
inverse Fourier transformations of A and B:

$$p_{m,n}=\frac{1}{\sqrt{n_x n_y}}\sum_{u=1}^{n_x}\sum_{v=1}^{n_y} a_{u,v}\,e^{-2\pi j\,((u-1)m/n_x+(v-1)n/n_y)}, \qquad (37)$$

$$q_{m,n}=\frac{1}{\sqrt{n_x n_y}}\sum_{u=1}^{n_x}\sum_{v=1}^{n_y} b_{u,v}\,e^{-2\pi j\,((u-1)m/n_x+(v-1)n/n_y)}, \qquad (38)$$

where $j=\sqrt{-1}$. If

$$a_{u,v}=\overline{b_{n_x+2-u,\,n_y+2-v}},\quad u=2,\ldots,n_x,\ v=2,\ldots,n_y, \qquad (39)$$

and

$$a_{1,v}=a_{u,1}=b_{1,v}=b_{u,1}=0,\quad u=1,\ldots,n_x,\ v=1,\ldots,n_y, \qquad (40)$$

then

$$p_{m,n}=\overline{q_{m,n}},\quad m=1,\ldots,n_x,\ n=1,\ldots,n_y.$$
Proof:
[0093] Inserting EQS. (39) and (40) into EQ. (37), one obtains

$$\begin{aligned}
p_{m,n}&=\frac{1}{\sqrt{n_x n_y}}\sum_{u=2}^{n_x}\sum_{v=2}^{n_y}\overline{b_{n_x+2-u,\,n_y+2-v}}\;e^{-2\pi j\,((u-1)m/n_x+(v-1)n/n_y)}\\
&=\frac{1}{\sqrt{n_x n_y}}\sum_{u'=2}^{n_x}\sum_{v'=2}^{n_y}\overline{b_{u',v'}}\;e^{-2\pi j\,((n_x+1-u')m/n_x+(n_y+1-v')n/n_y)}\\
&=\frac{1}{\sqrt{n_x n_y}}\sum_{u'=2}^{n_x}\sum_{v'=2}^{n_y}\overline{b_{u',v'}}\;e^{-2\pi j\,((1-u')m/n_x+(1-v')n/n_y)}\\
&=\frac{1}{\sqrt{n_x n_y}}\sum_{u'=2}^{n_x}\sum_{v'=2}^{n_y}\overline{b_{u',v'}\;e^{-2\pi j\,((u'-1)m/n_x+(v'-1)n/n_y)}}\\
&=\frac{1}{\sqrt{n_x n_y}}\sum_{u'=1}^{n_x}\sum_{v'=1}^{n_y}\overline{b_{u',v'}\;e^{-2\pi j\,((u'-1)m/n_x+(v'-1)n/n_y)}}\\
&=\overline{q_{m,n}}, \qquad (41)
\end{aligned}$$

where the substitutions $u'=n_x+2-u$ and $v'=n_y+2-v$ are used in
the second equality, $e^{-2\pi jm}=e^{-2\pi jn}=1$ in the third,
and EQ. (40) extends the summation range in the fifth.
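As a sanity check (not part of the patent), the conjugate symmetry of Lemma 1 can be verified numerically. The sketch below assumes NumPy's `ifft2` as the 2-D inverse transform; its normalization constant and 0-based indexing differ from EQS. (37)-(38) only by factors that cancel in the conjugate relation, so the identity can still be observed directly.

```python
import numpy as np

# Numerical check of Lemma 1: if a_{u,v} = conj(b_{nx+2-u, ny+2-v})
# and the first row and column of both A and B are zero, then the
# 2-D inverse Fourier transforms satisfy P = conj(Q) entrywise.
rng = np.random.default_rng(0)
nx, ny = 8, 6  # even sizes, as the lemma assumes

B = rng.standard_normal((nx, ny)) + 1j * rng.standard_normal((nx, ny))
B[0, :] = 0  # b_{1,v} = 0 (row index 1 in the lemma is row 0 here)
B[:, 0] = 0  # b_{u,1} = 0

A = np.zeros_like(B)
for u in range(1, nx):       # 1-based u = 2, ..., nx
    for v in range(1, ny):   # 1-based v = 2, ..., ny
        A[u, v] = np.conj(B[nx - u, ny - v])  # conjugate reflection, EQ. (39)

P = np.fft.ifft2(A)
Q = np.fft.ifft2(B)
assert np.allclose(P, np.conj(Q))  # p_{m,n} = conj(q_{m,n})
```

Note that the first row and column of A come out zero automatically, matching EQ. (40).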
Lemma 2:
[0094] Let u and v denote k.sub.xk.sub.y.times.1 column vectors
with complex entries. Let u.sub.t and v.sub.t denote the t-th
elements of u and v, respectively. Define

$$p=\sum_{t=1}^{k_x k_y} F^{H}\!\left(P_t\!\left(\overline{u_t}\,v\right)\right), \qquad (42)$$

$$q=\sum_{t=1}^{k_x k_y} F^{H}\!\left(P_t\!\left(\overline{v_t}\,u\right)\right), \qquad (43)$$

which are n.sub.xn.sub.y.times.1 column vectors. Then

$$p=\overline{q}. \qquad (44)$$
Proof:
[0095] Let

$$a=\sum_{t=1}^{k_x k_y} P_t\!\left(\overline{u_t}\,v\right), \qquad (45)$$

$$b=\sum_{t=1}^{k_x k_y} P_t\!\left(\overline{v_t}\,u\right). \qquad (46)$$

Let A and B be the n.sub.x.times.n.sub.y complex matrices that are
the matrix versions of a and b, respectively. Making use of the
definition of the zero-padding operator, one has

$$a_{u,v}=\overline{b_{n_x+2-u,\,n_y+2-v}},\quad u=2,\ldots,n_x,\ v=2,\ldots,n_y, \qquad (47)$$

and

$$a_{1,v}=a_{u,1}=b_{1,v}=b_{u,1}=0,\quad u=1,\ldots,n_x,\ v=1,\ldots,n_y. \qquad (48)$$

Applying Lemma 1, one can verify EQ. (44).
[0096] Theorem 1 can now be proved. Denote q.sub.j,i.sup.r as the
entry at the j-th row and the i-th column of S.sup.rM.sup.H. It
follows from EQS. (17) and (18) that

$$q_{j,i}^{r}=\sum_{t=1}^{k_x k_y} s_{i,j,t}^{r},\quad \forall r. \qquad (49)$$

Denote q.sub.j,i as the n.sub.xn.sub.y.times.1 column vector whose
r-th entry is q.sub.j,i.sup.r. Then

$$q_{j,i}=\sum_{t=1}^{k_x k_y} s_{i,j,t}. \qquad (50)$$

To prove that S.sup.rM.sup.H is Hermitian, it suffices to verify
that

$$q_{j,i}=\overline{q_{i,j}}. \qquad (51)$$

[0097] It follows from EQS. (9) and (10) that

$$p_{i,j,t}=\sum_{k=1}^{\tau}\overline{v_{i,k}^{t}}\,v_{j,k}, \qquad (52)$$

where v.sub.i,k.sup.t denotes the t-th element of v.sub.i,k.
Therefore,

$$q_{j,i}=\sum_{t=1}^{k_x k_y} F^{H}\!\left(P_t\!\left(p_{i,j,t}\right)\right)=\sum_{t=1}^{k_x k_y} F^{H}\!\left(P_t\!\left(\sum_{k=1}^{\tau}\overline{v_{i,k}^{t}}\,v_{j,k}\right)\right)=\sum_{k=1}^{\tau}\sum_{t=1}^{k_x k_y} F^{H}\!\left(P_t\!\left(\overline{v_{i,k}^{t}}\,v_{j,k}\right)\right). \qquad (53)$$

By utilizing Lemma 2, one can obtain EQ. (51).
[0098] Since

$$\frac{M(S^r)^{H}}{k_x k_y}$$

is Hermitian, its eigenvalues are real. This completes the proof of
the theorem.
Proof of Theorem 2:
[0099] For any pair of .lamda. and .alpha. that satisfy EQ. (20),
left-multiplying both sides of EQ. (20) by M yields

$$M(S^r)^{H}\alpha=\lambda MM^{H}\alpha=\lambda k_x k_y\,\alpha, \qquad (54)$$

where MM.sup.H=k.sub.xk.sub.yI was used to establish the last
equality. Therefore, .lamda. and .alpha. are also an eigenvalue and
eigenvector of EQ. (21), respectively.
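The identity MM.sup.H=k.sub.xk.sub.yI invoked above can be checked directly. The sketch below models M as the block matrix of stacked all-ones rows shown in the Abstract; the sizes k.sub.x, k.sub.y and n.sub.c are illustrative values only, not parameters from the experiments.

```python
import numpy as np

# M stacks, per coil, one row of kx*ky ones with disjoint supports
# (block-diagonal structure, as in the Abstract's example matrix).
kx, ky, nc = 2, 3, 4
M = np.kron(np.eye(nc), np.ones((1, kx * ky)))  # shape (nc, kx*ky*nc)

# Rows are orthogonal and each has squared norm kx*ky, so M M^H = kx*ky I.
assert np.allclose(M @ M.conj().T, kx * ky * np.eye(nc))
```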
Proof of Theorem 3:
[0100] For any pair of .lamda. and .alpha. that satisfy EQ. (20),
left-multiplying both sides of EQ. (20) by S.sup.r yields

$$S^r(S^r)^{H}\alpha=\lambda S^r M^{H}\alpha=\lambda^{2} k_x k_y\,\alpha, \qquad (55)$$

where Theorem 2 establishes the second equality. Therefore, .lamda.
and .alpha. are also an eigenvalue and eigenvector of EQ. (22),
respectively.

[0101] For any pair of .lamda. and .alpha. that satisfy EQ.
(25),

$$S^r(S^r)^{H}MM^{H}\alpha=(\lambda k_x k_y)^{2}\alpha=S^r M^{H}S^r M^{H}\alpha, \qquad (56)$$

where MM.sup.H=k.sub.xk.sub.yI and (S.sup.r).sup.HM=M.sup.HS.sup.r
have been used (see Theorem 1). The last equality in EQ. (56)
indicates that .alpha. is an eigenvector of
S.sup.rM.sup.HS.sup.rM.sup.H corresponding to the eigenvalue
(.lamda.k.sub.xk.sub.y).sup.2.
Proof of Theorem 4:
[0102] The proof begins with a technical lemma, whose proof is
omitted here.
Lemma 3: The quantity
$F^{H}(P_t(v_{j,k}))\odot\overline{F^{H}(P_t(v_{j',k'}))}$, where
$\odot$ denotes the element-wise product, is invariant to the
spatial location t, i.e.,

$$F^{H}\!\left(P_t(v_{j,k})\right)\odot\overline{F^{H}\!\left(P_t(v_{j',k'})\right)}=F^{H}\!\left(P_{t'}(v_{j,k})\right)\odot\overline{F^{H}\!\left(P_{t'}(v_{j',k'})\right)}.$$
[0103] Now, the theorem may be proved. The entry at the j-th row
and the j'-th column of S.sup.r(S.sup.r).sup.H can be computed as

$$\sum_{i=1}^{n_c} m_{i,j,j'}^{r},\quad\text{where} \qquad (57)$$

$$m_{i,j,j'}^{r}=\sum_{t=1}^{k_x k_y} s_{i,j,t}^{r}\,\overline{s_{i,j',t}^{r}},\quad\forall r. \qquad (58)$$

EQ. (58) can be rewritten in the following vector format:

$$m_{i,j,j'}=\sum_{t=1}^{k_x k_y} s_{i,j,t}\odot\overline{s_{i,j',t}}. \qquad (59)$$

Inserting EQ. (52) into EQ. (15) yields

$$s_{i,j,t}=F^{H}\!\left(P_t\!\left(p_{i,j,t}\right)\right)=\sum_{k=1}^{\tau}\overline{v_{i,k}^{t}}\;F^{H}\!\left(P_t(v_{j,k})\right). \qquad (60)$$

In computing $s_{i,j,t}\odot\overline{s_{i,j',t}}$, a useful
operation is

$$F^{H}\!\left(P_t(v_{j,k})\right)\odot\overline{F^{H}\!\left(P_t(v_{j',k'})\right)}, \qquad (61)$$

which is identical to

$$F^{H}\!\left(P_c(v_{j,k})\right)\odot\overline{F^{H}\!\left(P_c(v_{j',k'})\right)} \qquad (62)$$

using Lemma 3. Let .GAMMA..sub.j,j'.sup.r be the .tau..times..tau.
matrix whose entry at the k-th row and the k'-th column is
$z_{j,k}^{r}\,\overline{z_{j',k'}^{r}}$. Denote v.sub.i,: as the
i-th row of V.sub..parallel.. Then

$$m_{i,j,j'}^{r}=k_x k_y\;\overline{v_{i,:}}\;\Gamma_{j,j'}^{r}\;\overline{v_{i,:}^{H}}. \qquad (63)$$

It is interesting to note that

$$\sum_{i=1}^{n_c} m_{i,j,j'}^{r}
=k_x k_y\sum_{i=1}^{n_c}\overline{v_{i,:}}\;\Gamma_{j,j'}^{r}\;\overline{v_{i,:}^{H}}
=k_x k_y\sum_{i=1}^{n_c}\operatorname{trace}\!\left(\Gamma_{j,j'}^{r}\;\overline{v_{i,:}^{H}v_{i,:}}\right)
=k_x k_y\operatorname{trace}\!\left(\Gamma_{j,j'}^{r}\sum_{i=1}^{n_c}\overline{v_{i,:}^{H}v_{i,:}}\right)
=k_x k_y\operatorname{trace}\!\left(\Gamma_{j,j'}^{r}\right)
=k_x k_y\sum_{k=1}^{\tau} z_{j,k}^{r}\,\overline{z_{j',k}^{r}}, \qquad (64)$$

where trace(A) denotes the sum of the diagonal entries of a matrix
A, and the fourth equality uses
$\sum_{i=1}^{n_c}\overline{v_{i,:}^{H}v_{i,:}}=I$. Therefore, EQ.
(32) holds.
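The collapse to a trace in EQ. (64) can be checked numerically. The sketch below is an illustration under the assumption that the rows v.sub.i,: are taken from a matrix with orthonormal columns, so that the inner sum reduces to the identity; V and the matrix standing in for .GAMMA. are random placeholders, not data from the method.

```python
import numpy as np

# Numerical check of the trace identity behind EQ. (64):
# sum_i conj(v_i) @ G @ conj(v_i^H) == trace(G)
# whenever sum_i v_i^H v_i = I (rows of a matrix with orthonormal columns).
rng = np.random.default_rng(0)
n, tau = 12, 4

# Matrix with orthonormal columns (stand-in for V_parallel).
V, _ = np.linalg.qr(rng.standard_normal((n, tau))
                    + 1j * rng.standard_normal((n, tau)))
# Arbitrary tau x tau complex matrix (stand-in for Gamma_{j,j'}).
G = rng.standard_normal((tau, tau)) + 1j * rng.standard_normal((tau, tau))

# conj(v_{i,:}) G conj(v_{i,:}^H) reduces to conj(V[i]) @ G @ V[i],
# since the conjugate of the conjugate transpose returns V[i] itself.
total = sum(np.conj(V[i]) @ G @ V[i] for i in range(n))
assert np.isclose(total, np.trace(G))
```

Swapping the sum and the trace, exactly as in EQ. (64), is what makes the result independent of the individual rows.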
* * * * *