System And Method For Image Guided Medical Procedures

Kumar; Dinesh; et al.

Patent Application Summary

U.S. patent application number 13/835479 was filed with the patent office on 2013-03-15 and published on 2014-03-13 for system and method for image guided medical procedures. This patent application is currently assigned to CONVERGENT LIFE SCIENCES, INC. The applicant listed for this patent is CONVERGENT LIFE SCIENCES, INC. Invention is credited to Dinesh Kumar, Daniel S. Sperling, Amit Vohra.

Publication Number: 20140073907
Application Number: 13/835479
Family ID: 50233964
Filed Date: 2013-03-15

United States Patent Application 20140073907
Kind Code A1
Kumar; Dinesh; et al. March 13, 2014

SYSTEM AND METHOD FOR IMAGE GUIDED MEDICAL PROCEDURES

Abstract

A system and method combines information from a plurality of medical imaging modalities, such as PET, CT, MRI, MRSI, Ultrasound, Echo Cardiograms, Photoacoustic Imaging and Elastography, for a medical image guided procedure, such that a pre-procedure image acquired using one of these imaging modalities is fused with an intra-procedure imaging modality used for real time image guidance for a medical procedure on any soft tissue organ or gland such as the prostate, skin, heart, lung, kidney, liver, bladder, ovaries, and thyroid, wherein the soft tissue deformation and changes between the two imaging instances are modeled and accounted for automatically.


Inventors: Kumar; Dinesh (Roseville, CA); Vohra; Amit (Roseville, CA); Sperling; Daniel S. (West Orange, NJ)
Applicant:
Name                             City          State   Country   Type
CONVERGENT LIFE SCIENCES, INC.   Los Angeles   CA      US
Assignee: CONVERGENT LIFE SCIENCES, INC. (Los Angeles, CA)

Family ID: 50233964
Appl. No.: 13/835479
Filed: March 15, 2013

Related U.S. Patent Documents

Application Number   Filing Date    Patent Number
61/700,273           Sep 12, 2012

Current U.S. Class: 600/414; 600/407; 600/426; 600/437
Current CPC Class: A61B 34/10 20160201; A61B 2090/365 20160201; A61B 2017/00274 20130101; A61B 10/00 20130101; A61B 34/20 20160201; A61B 2090/378 20160201; A61B 90/39 20160201; A61B 2090/364 20160201; A61B 2034/107 20160201; A61B 10/0241 20130101; A61B 90/361 20160201; A61B 10/02 20130101; A61B 18/20 20130101
Class at Publication: 600/414; 600/407; 600/426; 600/437
International Class: A61B 19/00 20060101 A61B019/00

Claims



1. A method for combining information from a plurality of medical imaging modalities, comprising: acquiring a first volumetric image using a first volumetric imaging modality of an anatomical region; defining an elastic soft tissue model for at least a portion of the anatomical region encompassed by the first volumetric image; labeling features of the anatomical region based on at least the first volumetric image and the soft tissue model, comprising at least features of the anatomical region which are visualized by both the first imaging modality and a second imaging modality, and features of the anatomical region which are poorly visualized in the second imaging modality; acquiring a second volumetric image of the anatomical region using the second imaging modality comprising a real time image; registering the features of the anatomical region which are visualized by both the first imaging modality and a second imaging modality, and the features of the anatomical region which are poorly visualized in the second imaging modality, with respect to the soft tissue model, such that the features of the anatomical region which are visualized by both the first imaging modality and a second imaging modality are linked, compensating for at least one distortion of the portion of the anatomical region between a first time of the first volumetric image and a second time of the second volumetric image; and presenting an output based on at least the features of the anatomical region which are poorly visualized in the second imaging modality in the real time image, compensated based on at least the registered features and the soft tissue model.

2. The method according to claim 1, wherein the first imaging modality comprises at least one selected from the group consisting of positron emission tomography, computed tomography, magnetic resonance imaging, magnetic resonance spectrography imaging, photoacoustic imaging, high frequency ultrasound, and elastography.

3. The method according to claim 1, wherein the anatomical region comprises at least one organ selected from the group consisting of prostate, skin, heart, lung, kidney, liver, bladder, ovaries, and thyroid.

4. The method according to claim 1, further comprising acquiring a tissue sample from a location determined based on at least the first imaging modality and the second imaging modality.

5. The method according to claim 1, further comprising delivering a therapeutic intervention at a location determined based on at least the first imaging modality and the second imaging modality.

6. The method according to claim 5, wherein the therapeutic intervention includes one or more selected from the group consisting of laser ablation, radiofrequency ablation, high intensity focused ultrasound, brachytherapy, stem cell injection for ischemia of the heart, cryotherapy, direct injection of a photothermal or photodynamic agent, and radiotherapy.

7. The method according to claim 1, further comprising performing at least one image-guided at least partially automated procedure selected from the group consisting of high intensity focused ultrasound, IMRT, and robotic surgery.

8. The method according to claim 1, wherein the differentially visualized anatomical region comprises at least one selected from the group consisting of a suspicious lesion for targeted biopsy, a suspicious lesion for targeted therapy, and a lesion for targeted dose delivery.

9. The method according to claim 1, wherein the differentially visualized anatomical region is at least one anatomical structure to be spared in an invasive procedure, selected from the group consisting of a nerve bundle, a urethra, a rectum and a bladder.

10. The method according to claim 1, wherein the registered features comprise at least one anatomical landmark selected from the group consisting of a urethra, a urethra at a prostate base, a urethra at an apex, a verumontanum, a calcification and a cyst, a seminal vesicle, an ejaculatory duct, a bladder and a rectum.

11. The method according to claim 1, further comprising automatically defining a plan comprising a target and an invasive path to reach the target.

12. The method according to claim 11, wherein the plan is defined based on the first imaging modality, and is adapted in real time based on at least the second imaging modality.

13. The method according to claim 11, wherein the plan comprises a plurality of targets.

14. The method according to claim 1, wherein a plurality of anatomical features are consistently labeled in the first volumetric image and the second volumetric image.

15. The method according to claim 1, wherein the soft tissue model comprises an elastic triangular mesh approximating a surface of an organ.

16. The method according to claim 1, wherein the anatomical landmark registration is performed rigidly using a simultaneous landmark and surface registration algorithm.

17. The method according to claim 16, further comprising performing an affine registration.

18. The method according to claim 1, wherein the registering comprises an elastic registration based on at least one parameter selected from the group consisting of an intensity, a binary mask, and surfaces and landmarks.

19. The method according to claim 1, wherein the model is derived from a plurality of training datasets representing different states of deformation of an organ of a respective human using the first imaging modality and the second imaging modality.

20. The method according to claim 1, further comprising identifying a mismatch of corresponding anatomical features of the first volumetric image and the second volumetric image, and updating the registration to converge the corresponding anatomical features to reduce the mismatch based on corrections of an elastic deformation model constrained by object boundaries.

21. A method for combining information from a plurality of medical imaging modalities, comprising: acquiring volumetric images using a first volumetric imaging modality of an anatomical region of a person under a plurality of states of deformation; acquiring volumetric images using a second volumetric imaging modality of the anatomical region of the person under a plurality of states of deformation; defining an elastic soft tissue model for the anatomical region comprising model parameters representing tissue compliance and surface properties; labeling features of the anatomical region based on at least the volumetric images of the first imaging modality, the volumetric images of the second imaging modality, and the soft tissue model, wherein the labeling aligns corresponding features and compensates for rigid, elastic and affine transforms of the anatomical region between times for acquiring the volumetric images of the first imaging modality and the volumetric images of the second imaging modality; and presenting an output based on at least the labeled features of the anatomical region.

22. A system for combining information from a plurality of medical imaging modalities, comprising: an input port configured to receive at least two first volumetric images using a first volumetric imaging modality of an anatomical region representing respectively different states of elastic deformation, and at least two second volumetric images using a second volumetric imaging modality, of the anatomical region representing respectively different states of elastic deformation; at least one processor configured to define an elastic soft tissue model for at least a portion of the anatomical region encompassed by the first volumetric image, and to label features of the anatomical region based on at least the first volumetric image and the soft tissue model; and a memory configured to store the defined elastic soft tissue model.
Description



CROSS REFERENCE TO RELATED APPLICATIONS

[0001] The present application is a non-provisional of U.S. Provisional Patent Application 61/700,273, filed Sep. 12, 2012, the entirety of which is expressly incorporated herein by reference.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] The present disclosure relates to systems and methods for image guided medical and surgical procedures.

[0004] 2. Description of the Art

[0005] U.S. Pat. Pub. 2009/0054772 (EP20050781862), expressly incorporated herein by reference, entitled "Focused ultrasound therapy system", provides a method for performing a High Intensity Focused Ultrasound (HIFU) procedure for a specific clinical application. Basic image registration is performed for fusion from a diagnostic modality such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) to ultrasound, only through body positioning, referred to as "immobilization", resulting in image registration limited to horizontal movement and a zoom factor. See also U.S. Pat. No. 8,224,420, expressly incorporated herein by reference, which provides a mechanical positioning means for moving the ultrasound energy applicator so that the energy application zone intersects the magnetic resonance volume within the region of the subject.

[0006] U.S. Pat. Pub. 2007/0167762, expressly incorporated herein by reference, entitled "Ultrasound System for interventional treatment", provides an ultrasound system into which a "wide-area" image such as a CT or MRI image can be loaded and fused with the ultrasound image, using a manual definition of lesion positions and the needle insertion position at the time of the procedure.

[0007] U.S. Pub. App. 2010/0290685, expressly incorporated herein by reference, entitled "Fusion of 3D volumes with CT reconstruction", discloses a method for registration of an ultrasound device in three dimensions to a C-arm scan, the method including acquiring a baseline volume, acquiring images in which the ultrasound device is disposed, locating the device within the images, registering the location of the device to the baseline volume, acquiring an ultrasound volume from the ultrasound device, registering the ultrasound volume to the baseline volume, and performing fusion imaging to display a view of the ultrasound device in the baseline volume. Thus, a mutual information based method is provided to register and display a 3D ultrasound image fused with a CT image.

[0008] U.S. Pub. App. 2011/0178389, expressly incorporated herein by reference, entitled "Fused image modalities guidance" discloses a system and method for registration of medical images, which registers a previously obtained volume(s) onto an ultrasound volume during an ultrasound procedure, to produce a multimodal image, which may be used to guide a medical procedure. In one arrangement, the multimodal image includes MRI information presented in the framework of a Trans Rectal Ultrasound (TRUS) image during a TRUS procedure.

[0009] Prostate cancer is one of the most common types of cancer affecting men. It is a slow growing cancer, which is easily treatable if identified at an early stage. A prostate cancer diagnosis often leads to surgery or radiation therapy. Such treatments are costly and can cause serious side effects, including incontinence and erectile dysfunction. Unlike many other types of cancer, prostate cancer is not always lethal and often is unlikely to spread or cause harm. Many patients who are diagnosed with prostate cancer receive radical treatment even though it would not prolong the patient's life, ease pain, or significantly improve the patient's health.

[0010] Prostate cancer may be diagnosed by taking a biopsy of the prostate, which is conventionally conducted under the guidance of ultrasound imaging. Ultrasound imaging has high spatial resolution, and is relatively inexpensive and portable. However, ultrasound imaging has relatively low tissue discrimination ability. Accordingly, ultrasound imaging provides adequate imaging of the prostate organ, but it does not provide adequate imaging of tumors within the organ due to the similarity of cancer tissue and benign tissues, as well as the lack of tissue uniformity. Due to the inability to visualize the cancerous portions within the organ with ultrasound, the entire prostate must be considered during the biopsy. Thus, in the conventional prostate biopsy procedure, a urologist relies on the guidance of two-dimensional ultrasound to systematically remove tissue samples from various areas throughout the entire prostate, including areas that are free from cancer.

[0011] Magnetic Resonance Imaging (MRI) has long been used to evaluate the prostate and surrounding structures. MRI is in some ways superior to ultrasound imaging because it has very good soft tissue contrast. There are several types of MRI techniques, including T2 weighted imaging, diffusion weighted imaging, and dynamic contrast imaging. Standard T2-weighted imaging does not discriminate cancer from other processes with acceptable accuracy. Diffusion-weighted imaging and dynamic contrast imaging may be integrated with traditional T2-weighted imaging to produce multi-parametric MRI. The use of multi-parametric MRI has been shown to improve sensitivity over any single parameter and may enhance overall accuracy in cancer diagnosis.

[0012] As with ultrasound imaging, MRI also has limitations. For instance, it has a relatively long imaging time, requires specialized and costly facilities, and is not well-suited for performance by a urologist at a urology center. Furthermore, performing direct prostate biopsy within MRI machines is not practical for a urologist at a urology center.

[0013] To overcome these shortcomings and maximize the usefulness of the MRI and ultrasound imaging modalities, methods and devices have been developed for digitizing medical images generated by multiple imaging modalities (e.g., ultrasound and MRI) and fusing or integrating multiple images to form a single composite image. This composite image includes information from each of the original images that were fused together. A fusion or integration of Magnetic Resonance (MR) images with ultrasound-generated images has been useful in the analysis of prostate cancer within a patient. Image-guided biopsy systems, such as the Artemis produced by Eigen (See, e.g., U.S. Pub. App. Nos. 2012/0087557, 2011/0184684, 2011/0178389, 2011/0081057, 2010/0207942, 2010/0172559, 2010/0004530, 2010/0004526, 2010/0001996, 2009/0326555, 2009/0326554, 2009/0326363, 2009/0324041, 2009/0227874, and U.S. Pat. Nos. 8,278,913, 8,175,350, 8,064,664, 7,942,829, 7,942,060, 7,875,039, 7,856,130, 7,832,114, and 7,804,989, expressly incorporated herein by reference), and UroStation developed by Koelis (see, e.g., U.S. Pub. App. Nos. 2012/0245455, 2011/0081063, and U.S. Pat. No. 8,369,592, expressly incorporated herein by reference), have been invented to aid in fusing MRI and ultrasonic modalities. These systems are three-dimensional (3D) image-guided prostate biopsy systems that provide tracking of biopsy sites within the prostate.

[0014] Until now, however, such systems have not been adequate for enabling MRI-ultrasound fusion to be performed by a urologist at a urology center. The use of such systems for MRI-ultrasound fusion necessarily requires specific MRI data, including MRI scans, data related to the assessment of those scans, and data produced by the manipulation of such data. Such MRI data, however, is not readily available to urologists and it would be commercially impractical for such MRI data to be generated at a urology center. This is due to many reasons, including urologists' lack of training or expertise, as well as the lack of time, to do so. Also, it is uncertain whether a urologist can profitably implement an image-guided biopsy system in his or her practice while contemporaneously attempting to learn to perform MRI scans. Furthermore, even if a urologist invested the time and money in purchasing MRI equipment and learning to perform MRI scans, the urologist would still be unable to perform the MRI-ultrasound fusion because a radiologist is needed for the performance of advanced MRI assessment and manipulation techniques which are outside the scope of a urologist's expertise.

[0015] MRI is generally considered to offer the best soft tissue contrast of all imaging modalities. Both anatomical (e.g., T1, T2) and functional MRI, e.g., dynamic contrast-enhanced (DCE), magnetic resonance spectroscopic imaging (MRSI) and diffusion-weighted imaging (DWI), can help visualize and quantify regions of the prostate based on specific attributes. Zonal structures within the gland cannot be visualized clearly on T1 images; however, a post-biopsy hemorrhage can appear as a region of high signal intensity, helping to distinguish normal from pathologic tissue. In T2 images, zone boundaries can be easily observed. The peripheral zone appears higher in intensity relative to the central and transition zones. Cancers in the peripheral zone are characterized by their lower signal intensity compared to neighboring regions. DCE improves specificity over T2 imaging in detecting cancer. It measures the vascularity of tissue based on the flow of blood and the permeability of vessels. Tumors can be detected based on their early enhancement and early washout of the contrast agent. DWI measures water diffusion in tissues. Increased cellular density in tumors reduces the signal intensity on apparent diffusion maps.

[0016] The use of imaging modalities other than trans-rectal ultrasound (TRUS) for biopsy and/or therapy typically poses a number of logistical problems. For instance, directly using MRI to navigate during biopsy or therapy can be complicated (e.g., requiring use of nonmagnetic materials) and expensive (e.g., MRI operating costs). This need for specially designed tracking equipment, access to an MRI machine, and limited availability of machine time has resulted in very limited use of direct MRI-guided biopsy or therapy. CT imaging is likewise expensive and of limited access, and poses a radiation risk for operators and patients.

[0017] Accordingly, one known solution is to register a pre-acquired image (e.g., an MRI or CT image) with a 3D TRUS image acquired during a procedure. Regions of interest identifiable in the pre-acquired image volume may be tied to corresponding locations within the TRUS image such that they may be visualized prior to or during biopsy target planning or therapeutic application. This solution allows a radiologist to acquire, analyze and annotate the MRI/CT scan at the image acquisition facility, while a urologist can still perform the procedure using live ultrasound in his/her clinic.

[0018] Consequently, there exists a need for improved systems and methods for performing image fusion for image-guided medical procedures.

SUMMARY

[0019] The present technology provides a method for combining information from a plurality of medical imaging modalities, such as Positron Emission Tomography (PET), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Magnetic Resonance Spectroscopic Imaging (MRSI), Ultrasound, Echo Cardiograms and Elastography, supplemented by information obtained in advance by at least one other modality, which is properly registered to the real time image despite soft tissue movement, deformation, or change in size. Advantageously, the real time image is of a soft tissue organ or gland such as the prostate, skin, heart, lung, kidney, liver, bladder, ovaries, and thyroid, and the supplemented real time image is used for a medical image guided procedure. The real time image may also be used for orthopedic or musculoskeletal procedures, or exercise physiology. A further real-time imaging type is endoscopy, or more generally, videography, which is in growing use, especially for minimally invasive procedures.

[0020] The medical procedure may be a needle based procedure, such as but not limited to biopsy, laser ablation, brachytherapy, stem cell injection for ischemia of the heart, cryotherapy, and/or direct injection of a photothermal or photodynamic agent. In these cases, for example, the medical professional seeks to treat a highly localized portion of an organ, while either avoiding toxic or damaging therapy to adjacent structures or avoiding waste of a valuable agent. However, the available real-time medical imaging modalities for guiding the localized treatment visualize the organ, but do not clearly delineate the portion of the organ to be treated. On the other hand, non-real time imaging modalities are available for defining locations sought to be treated with the localized treatment. In the case of soft tissues, in the time between the non-real time imaging and the real time procedure, the organ can shift, deform (especially as a result of the procedure itself), or change in size, thus substantially distorting the relationship between the real time image used to guide the procedure and the non-real time diagnostic or tissue localization image. A further complication is that the non-real time image may have a different intrinsic coordinate system from the real time imaging, leading to artifacts. Therefore, the present technology seeks to address these issues by compensating for differences in the patient's anatomy between acquisition of the non-real time image and the real time procedure, using, for example, general anatomical information, landmarks common to both images, and tissue and procedure models.
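
For illustration only, the landmark-driven component of such compensation can be sketched in a few lines of Python; the function name, data layout, and the least-squares (Kabsch) approach are assumptions of this sketch, not details taken from the application:

```python
import numpy as np

def rigid_align(pre_pts, live_pts):
    """Least-squares rigid alignment (Kabsch) of paired landmarks.

    pre_pts, live_pts: (N, 3) arrays of corresponding landmark
    coordinates in the non-real time and real time images. Returns a
    rotation R and translation t such that R @ p + t maps pre -> live.
    """
    pre_c, live_c = pre_pts.mean(0), live_pts.mean(0)
    H = (pre_pts - pre_c).T @ (live_pts - live_c)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = live_c - R @ pre_c
    return R, t
```

A rigid fit of this kind addresses only gross pose; the elastic deformation emphasized in this disclosure requires the additional modeling described below.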

[0021] Typical medical procedures comprise image-guided non-needle based procedures such as but not limited to HIFU, IMRT, and robotic surgery.

[0022] The pre-operative imaging modality may thus be used to identify a target object or gland, and suspicious lesions of the object or gland, for targeted biopsy, targeted therapy, targeted dose delivery or a combination of the above.

[0023] The pre-operative imaging modality may be used to identify and annotate surrounding structures that need to be spared in order to minimize the impact of the procedure on quality of life. In a specific embodiment, in a prostate related procedure, such structures may be the nerve bundles, urethra, rectum and bladder identified in a magnetic resonance (MR) image.

[0024] The pre-operative imaging modality may be used to identify and uniquely label anatomical landmarks for manual, semi-automated or automated registration. In a specific embodiment, in a prostate related procedure, such anatomical landmarks may be the urethra at the prostate base, the urethra at the apex, the verumontanum, calcifications and cysts.

[0025] The pre-operative imaging modality may be used to identify and uniquely label anatomical structures for manual, semi-automated or automated registration. In a specific embodiment of the invention, in a prostate related procedure, such structures may be the urethra, seminal vesicles, ejaculatory ducts, bladder and rectum.

[0026] A targeted biopsy may be performed for a malignancy to determine the extent of the malignancy and the best treatment option.

[0027] Needle guidance procedures may be provided where the pre-operative imaging modality is analyzed to plan the complete procedure or part of the procedure, such that the anatomical locations of targets for needle placement are planned in advance, and the anatomical locations are guided by the real time imaging modality.

[0028] The needle locations and trajectories may be identified in advance based on the non-real time, pre-operative imaging modality, such that the target region is adequately sampled for biopsy to maximize the accuracy while minimizing the number of samples for each target region.

[0029] The needle locations and trajectories may be identified in advance, such that a target region is effectively treated in a therapeutic procedure, while minimizing the damage to the surrounding tissue and structures. The trajectory may be optimized in a prostate procedure such that the needle insertion minimizes damage to important structures such as the rectum and nerve bundles, as in the sketch below.
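
Purely as an illustration of such trajectory selection (the clearance criterion, names and inputs are assumptions of this example, not the application's algorithm), a candidate entry point might be scored by the minimum distance of its straight path to labeled critical structures:

```python
import numpy as np

def best_trajectory(entry_pts, target, critical_pts, n_steps=50):
    """Choose the needle entry point whose straight-line path to the
    target keeps the largest minimum clearance from critical structures
    (e.g., points sampled on the rectum or nerve bundles).
    entry_pts: (E, 3); target: (3,); critical_pts: (C, 3), all in mm."""
    best_entry, best_clearance = None, -np.inf
    for entry in entry_pts:
        # Sample points along the straight needle path.
        path = entry + np.linspace(0, 1, n_steps)[:, None] * (target - entry)
        clearance = np.linalg.norm(
            path[:, None, :] - critical_pts[None, :, :], axis=-1).min()
        if clearance > best_clearance:
            best_entry, best_clearance = entry, clearance
    return best_entry, best_clearance
```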

[0030] The duration of needle placement at each location in a therapeutic procedure may be optimized using a pre-operative imaging modality, to effectively design a treatment for the target region locally while sparing the surrounding tissues and structures.

[0031] Anatomical landmarks and/or structures identified in the pre-operative image may also be identified in the intra-operative (live) image and labeled consistently. The pre-operative image may also identify surfaces and boundaries, which can be defined or modeled as, for example, triangulated meshes. The surfaces may represent the entire anatomical structure/object or a part thereof. In some cases, a boundary may have no real anatomical correlate, and be defined virtually; however, an advantage arises if the virtual boundary can be consistently and accurately identified in both the pre-operative imaging and the real-time intra-operative imaging, since this facilitates registration and alignment of the images. The virtual features of the images may be based on generic anatomy, e.g., of humans or animals, or patient-specific. Labeled surfaces and landmarks in pre-operative and intra-operative images may be used for rigid registration. In a specific embodiment, if the bladder is labeled "1" in the pre-operative image, it is registered with the object labeled "1" in the intra-operative image. More generally, regions on an image are classified or segmented, and that classification or segment definition from the pre-operative imaging is applied to the intra-operative real time imaging.
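
A minimal sketch of label-consistent rigid registration follows; the label values, the variable names pre_label_map and live_label_map (assumed to hold the labeled pre- and intra-operative volumes), and the reuse of the rigid_align helper sketched earlier are all assumptions of this example:

```python
import numpy as np

def centroids_by_label(label_volume, labels):
    """Centroid, in voxel coordinates, of each labeled structure."""
    return {k: np.argwhere(label_volume == k).mean(axis=0) for k in labels}

# Illustrative label convention: e.g., 1 = bladder in both images.
LABELS = (1, 2, 3, 4, 5)
pre_c = centroids_by_label(pre_label_map, LABELS)    # pre-operative labels
live_c = centroids_by_label(live_label_map, LABELS)  # intra-operative labels

# Stack corresponding centroids in the same label order, then align
# rigidly with the rigid_align helper from the earlier sketch.
pre_pts = np.stack([pre_c[k] for k in LABELS])
live_pts = np.stack([live_c[k] for k in LABELS])
R, t = rigid_align(pre_pts, live_pts)
```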

[0032] There may be a plurality of landmarks and objects that are registered concurrently. In a specific embodiment for prostate procedures, if the bladder, prostate, rectum, urethra and seminal vesicles are labeled "1", "2", "3", "4" and "5", respectively, the intra-operative image employs the same labels to concurrently register the corresponding objects. The correspondence may be "hard-correspondence" or "soft-correspondence", i.e., the landmarks may have absolute correspondence or a "fuzzy" correspondence. The availability of "soft-correspondence" permits or facilitates automated or semi-automated labeling of objects, since the real-time imaging is typically not used by a fully automated system to perform a procedure, and the skilled medical professional can employ judgment, especially if the labeling indicates a possible degree of unreliability, in relying on the automated labeling. Thus, a urologist in a prostate procedure can review the fused image in real time to determine whether there is sufficient consistency to proceed and rely on the pre-operative imaging information, or whether only the intra-operative real-time imaging is to be employed. Likewise, in some cases the pre-operative imaging labeling boundaries are imprecise, and therefore the medical professional might wish to treat such boundaries as advisory and not absolute.
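
A "soft-correspondence" can be given a concrete, if simplistic, form. The sketch below (an assumption of this example, not the application's definition) matches each landmark to a Gaussian-weighted blend of candidate landmarks and reports a peakedness score that can flag unreliable matches for the professional's review:

```python
import numpy as np

def soft_correspondence(src, dst, sigma_mm=2.0):
    """Fuzzy landmark matching: each source point maps to a
    Gaussian-weighted blend of destination points rather than a single
    hard partner. src: (N, 3); dst: (M, 3), coordinates in mm.
    Returns (N, 3) blended targets and (N,) reliability scores."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)   # (N, M)
    W = np.exp(-d2 / (2.0 * sigma_mm ** 2)) + 1e-12
    reliability = W.max(axis=1) / W.sum(axis=1)  # 1.0 = unambiguous match
    targets = (W / W.sum(axis=1, keepdims=True)) @ dst
    return targets, reliability
```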

[0033] The landmark and object registration may be performed rigidly using a simultaneous landmark and surface registration algorithm. A rigid registration may optionally be followed by an affine registration. An elastic registration method may follow the rigid or affine registration. An elastic registration method may be intensity based, binary mask based, surface- and landmark-based, or a combination of these methods. A deformation model computed from a number of training datasets may be used for image registration. The deformation model captures how the object of interest deforms; for example, a prostate deforms upon application of an external tool such as an ultrasound transducer or endo-rectal coil. The training datasets may include sets of corresponding planning images and live modality images for the same patient. Thus, one aspect of the technology provides that pre-operative imaging is obtained under conditions that model a soft tissue deformation that might occur during the real-time imaging. The correspondence may be further refined by identifying and defining mismatching corresponding features between the pre-procedure and intra-procedure images. In a specific embodiment, in a prostate, a calcification may be seen in both MRI (pre-procedure) and ultrasound (intra-procedure) images, and if these anatomical landmarks mismatch slightly, a user may identify these landmarks visually and select them with a mouse click; alternately, an automated indication of mismatch may be generated. An algorithm can then refine the correspondence such that the boundaries of the object of interest do not move while the deformation inside the object is updated. The deformation inside the object of interest may thus follow an elastic deformation model based on the new landmarks, constrained by the object boundaries.
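
As a condensed, illustrative sketch of the rigid, then affine, then elastic sequence, using the open-source SimpleITK package (every parameter value below is a placeholder chosen for this sketch, not a value specified by the application):

```python
import SimpleITK as sitk

def register_pipeline(fixed, moving):
    """Illustrative rigid -> affine -> B-spline (elastic) registration.
    Mutual information is used because intensities in the two
    modalities do not directly correspond."""
    composed = sitk.CompositeTransform(3)
    stages = (
        sitk.CenteredTransformInitializer(        # rigid stage
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY),
        sitk.AffineTransform(3),                  # affine stage
        sitk.BSplineTransformInitializer(         # elastic stage
            fixed, transformDomainMeshSize=[8, 8, 8]),
    )
    for initial in stages:
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetMovingInitialTransform(composed)   # build on earlier stages
        reg.SetInitialTransform(initial, inPlace=False)
        composed.AddTransform(reg.Execute(fixed, moving))
    return composed
```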

[0034] An image registration method may therefore be provided that maps a region of interest from a pre-procedural (planning) image to an intra-procedural (live) image, along with a complete plan such that the plan and the region of interest are mapped and deformed to conform to the shape of the object during the procedure.

[0035] The technology provides a method of image fusion wherein the mapped plan may be displayed as one or more overlays on a live imaging modality display during an image guided procedure. In some cases, the fusion need not be an overlay, and may be supplemental information through a different modality, such as voice or sonic feedback, force-feedback or proprioceptive feedback, a distinct display (without overlay), or the like. In the case of an overlay, different types of information may be distinguished by color, intensity, depth (on a stereoscopic display), icons, or other known means. The plan may be indicated by static images or graphics, animated graphics, and/or acoustic information (e.g., voice synthesis feedback).
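
As a trivial illustration of the overlay option (the names and the red-channel choice are this sketch's assumptions), a mapped plan can be alpha-blended over a live grayscale frame; toggling the overlay amounts to switching alpha between zero and a fixed value:

```python
import numpy as np

def blend_overlay(live_slice, plan_mask, alpha=0.35):
    """Alpha-blend a mapped plan mask over a live grayscale frame,
    drawing the plan in red. live_slice: (H, W) grayscale in [0, 1];
    plan_mask: (H, W) in [0, 1]; returns an (H, W, 3) RGB image."""
    live_rgb = np.stack([live_slice] * 3, axis=-1).astype(float)
    plan_rgb = np.zeros_like(live_rgb)
    plan_rgb[..., 0] = plan_mask                  # red channel
    return (1.0 - alpha) * live_rgb + alpha * plan_rgb
```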

[0036] A planning image can also be overlaid on the live imaging modality during an image guided procedure such that the images can be toggled back and forth, or displayed together in a real-time "fused" display.

[0037] The mapped plan may be further adjusted to account for a new shape of the object revealed during real-time imaging. This may be done using an automated, semi-automated, or manual method, or a combination thereof.

[0038] A pre-procedure planning image may be used to plan the procedure such that the plan is embedded in an electronic, printed, or interactive web-based report.

[0039] The present technology identifies landmarks, objects and intensity information clearly visible in the imaging modalities, and uses these to perform registration using a combination of rigid, affine and non-rigid elastic registration.

[0040] The modeling of the objects within an image may thus comprise a segmentation of anatomical features.

[0041] The method may further comprise transforming coordinate systems of various imaging modalities. The system may further comprise at least one modeling processor configured to perform real-time model updates of patient soft tissue to ensure that a pre-operative image remains accurately registered with an intra-operative image.

[0042] The annotated regions of the medical imaging scan or the plan may be generated by a computer-aided diagnosis or therapeutic planning system. At least a portion of the pre-operative imaging may be conducted at a location remote from the therapeutic or diagnostic procedure, and the information conveyed between the two through the Internet, preferably over a virtual private network. A true private network may also be used, or simply encrypted files communicated over otherwise public channels. The physical separation of the imaging modalities facilitates professional specialization, since experts at different aspects of the process need not be collocated.

[0043] The present technology permits porting information from a planning image frame of reference to a live imaging modality for guiding a medical procedure. The plan may define, for example, a region of interest and needle placements, or a method to plan a treatment or biopsy.

[0044] The present technology may employ not only object boundaries, but also surrounding or internal information for registration, and thus may be employed in applications where there is significant internal deformation that cannot be modeled using boundaries alone.

[0045] The phrase "image fusion" is sometimes used to define the process of registering two images that are acquired via different imaging modalities or at different time instances. The registration/fusion of images obtained from different modalities creates a number of complications. The shape of soft tissues in two images may change between acquisitions of each image. Likewise, a diagnostic or therapeutic procedure can alter the shape of the object that was previously imaged. Further, in the case of prostate imaging the frame of reference (FOR) of the acquired images is typically different. That is, multiple MRI volumes are obtained in high resolution transverse, coronal or sagittal planes respectively, with lower resolution along the slice direction. These planes are usually in rough alignment with the patient's head-toe, anterior-posterior or left-right orientations. In contrast, TRUS images are often acquired while a patient lies on his side in a fetal position, by reconstructing multiple rotated 2D sample frames into a 3D volume. The 2D image frames are obtained at various instances of rotation of the TRUS probe after insertion into the rectal canal. The probe is inserted at an angle (approximately 30-45 degrees) to the patient's head-toe orientation. As a result, the gland in the MRI and TRUS images needs to be rigidly aligned, because the relative orientations are unknown at scan time. Typically, well-defined and invariant anatomical landmarks may be used to register the images, though since the margins of the landmarks themselves vary with imaging modality, the registration may be imperfect or require discretion in interpretation. A further difficulty with these different modalities is that the intensities of objects in the images do not necessarily correspond. For instance, structures that appear bright in one modality (e.g., MRI) may appear dark in another modality (e.g., ultrasound). Thus, the logistical process of overlaying or merging the images requires perceptual optimization. In addition, structures identified in one image (soft tissue in MRI) may be entirely absent in another image. TRUS imaging causes further deformation of the gland due to pressure exerted by the TRUS transducer on the prostate. As a result, rigid registration is not sufficient to account for the differences between MRI and TRUS images. Finally, the resolution of the images may also impact registration quality.
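
Because the probe insertion angle is approximately known, a plausible initializer for the rigid alignment stage is a rotation by the nominal insertion angle; the axis convention and the angle value in this sketch are assumptions, illustrative only:

```python
import numpy as np

def probe_angle_initializer(angle_deg=37.5):
    """Initial rotation guess for MRI-to-TRUS alignment. The TRUS probe
    enters at roughly 30-45 degrees to the head-toe axis, so a rotation
    about the patient's left-right (x) axis by a nominal insertion
    angle is a reasonable starting point for the optimizer."""
    a = np.radians(angle_deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a), np.cos(a)]])
```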

[0046] Due to the FOR differences, image intensity differences between MRI and TRUS images, and/or the potential for the prostate to change shape between imaging by the MRI and TRUS scans, one of the few known correspondences between the prostate images acquired by MRI and TRUS is the boundary/surface model of the prostate. That is, the prostate is an elastic object that has a gland boundary or surface model that defines the volume of the prostate. By defining the gland surface boundary in the dataset for each modality, the boundary can then be used as a reference for aligning both images. Thus, each point of the volume defined within the gland boundary of the prostate in one image should correspond to a point within a volume defined by a gland boundary of the prostate in the other image, and vice versa.

[0047] In seeking to register the surfaces, the data in each data set may be transformed, assuming elastic deformation of the prostate gland. Thus, the characteristics of soft tissue under shear and strain, compression, heating and/or inflammation, bleeding, coagulation, biopsy sampling, tissue resection, etc., as well as normal physiological changes for healthy and pathological tissue over time, are modeled, and these various effects are therefore accounted for during the pre-operative imaging and real-time intra-procedural imaging.

[0048] According to a first aspect, a system and method is provided for use in medical imaging of a prostate of a patient. The utility includes obtaining a first 3D image volume from an MRI imaging device. Typically, this first 3D image volume is acquired from data storage. That is, the first 3D image volume is acquired at a time prior to a current procedure. A first shape or surface model may be obtained from the MRI image (e.g., a triangulated mesh describing the gland). The surface model can be manually or automatically extracted from all co-registered MRI image modalities. That is, multiple MRI images may themselves be registered with each other as a first step. The 3D image processing may be automated, so that a technician need not be solely occupied by the image processing, which may take seconds or minutes. The MRI images may be T1, T2, DCE (dynamic contrast-enhanced), DWI (diffusion weighted imaging), ADC (apparent diffusion coefficient) or other.
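
One way to realize the triangulated gland mesh mentioned above is marching cubes over a binary segmentation; the sketch below uses the open-source scikit-image package, and the function name and spacing values are assumptions of this example:

```python
import numpy as np
from skimage import measure

def gland_surface_mesh(segmentation, spacing_mm=(1.0, 1.0, 1.0)):
    """Triangulated surface mesh of a segmented gland.
    segmentation: binary 3D array; spacing_mm: voxel size in mm.
    Returns vertices (V, 3) in mm and triangle indices (F, 3)."""
    verts, faces, _normals, _values = measure.marching_cubes(
        segmentation.astype(np.float32), level=0.5, spacing=spacing_mm)
    return verts, faces
```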

[0049] Similarly, data from other imaging modalities, e.g., computer aided (or axial) tomography (CAT) scans, can also be registered. In the case of a CAT scan, the surface of the prostate may not represent a high contrast feature, and therefore other aspects of the image may be used; typically, the CAT scan is used to identify radiodense features, such as calcifications or brachytherapy seeds, and therefore the goal of the image registration process would be to ensure that these features are accurately located in the fused image model. A co-registered CT image with a PET scan can also provide diagnostic information that can be mapped to the TRUS frame of reference for image guidance.

[0050] In one embodiment, the pre-operative imaging comprises use of the same imaging modality as used intra-operatively, generally along with an additional imaging technology. Thus, an ultrasound volume of the patient's prostate may be obtained, for example, through rotation of a TRUS probe, and the gland boundary is segmented in an ultrasound image. The ultrasound images acquired at various angular positions of the TRUS probe during rotation can be reconstructed onto a uniform rectangular grid through intensity interpolation to generate a 3D TRUS volume. Of course, other ultrasound methods may be employed without departing from the scope of the technology. The MRI or CAT scan volume is registered to the 3D TRUS volume (or vice versa), and a registered image of the 3D TRUS volume is generated in the same frame of reference (FOR) as the MRI or CAT scan image. According to a preferred aspect, this registration occurs prior to a diagnostic or therapeutic intervention. The advantage here is that both data sets may be fully processed, with the registration of the 3D TRUS volume information completed. Thus, during a later real-time TRUS guided diagnostic or therapeutic procedure, a fully fused volume model is available. In general, the deviation of a prior 3D TRUS scan from a subsequent one will be small, so features from the real-time scan can be aligned with those of the prior imaging procedure. The fused image from the MRI (or CAT) scan provides better localization of the suspect pathological tissue, and therefore guidance of the diagnostic biopsy or therapeutic intervention. Therefore, the suspect voxels from the MRI are highlighted in the TRUS image, which during a procedure would be presented in 2D on a display screen to guide the urologist. The process therefore seeks to register three sets of data: the MRI (or other scan) information, the pre-operative 3D TRUS information, and the real time TRUS used during the procedure. Ideally, the pre-operative 3D TRUS and the intra-operative TRUS use identical apparatus, and therefore would provide maximum similarity, with artifacts either minimized or at least consistent between scans. Indeed, the 3D TRUS pre-operative scan can be obtained using the same TRUS scanner immediately pre-operatively, though it is preferred that the registration of the images proceed under the expertise of a radiologist or medical scanning technician, who may not be immediately available during that period.
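
For illustration only, the rotational regridding can be sketched as follows; the nearest-frame lookup and the array layout are assumptions of this sketch (the application contemplates intensity interpolation, which a production system would use instead):

```python
import numpy as np

def reconstruct_trus(frames, angles_deg, n_xy, r_max_mm):
    """Minimal nearest-frame regridding of rotational TRUS frames onto
    a Cartesian volume. frames: (K, H, W) array of K frames at the
    given rotation angles, with H depth samples spanning 0..r_max_mm
    and W positions along the probe axis (which becomes z)."""
    K, H, W = frames.shape
    angles = np.asarray(angles_deg, dtype=float) % 360.0
    coords = np.linspace(-r_max_mm, r_max_mm, n_xy)
    vol = np.zeros((n_xy, n_xy, W), dtype=frames.dtype)
    for i, x in enumerate(coords):
        for j, y in enumerate(coords):
            r = np.hypot(x, y)
            if r > r_max_mm:
                continue                      # outside the imaged fan
            theta = np.degrees(np.arctan2(y, x)) % 360.0
            diff = np.abs(angles - theta)
            k = int(np.argmin(np.minimum(diff, 360.0 - diff)))
            vol[i, j, :] = frames[k, int(r / r_max_mm * (H - 1)), :]
    return vol
```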

[0051] A plan may be defined manually, semi-automatically, or in certain cases, automatically. The plan, for example in a prostate biopsy procedure, defines both the locations of the samples to be acquired, as well as the path to be taken by the biopsy instrument in order to avoid undue damage to tissues. In some cases, the plan may be updated in real-time. For example, if the goal of the plan is to sample a volume of tissue at 1.5 mm spatial distances, but the accuracy of sampling is ±0.5 mm, then subsequent sampling targets may be defined adaptively based on the actual sampling locations during the procedure. Likewise, in laser therapy, the course of treatment, including both the location of the laser and its excitation parameters, may be determined based on both the actual location of a fiber optic tip, as well as a measured temperature, and perhaps an intra-operatively determined physiological response to the therapy, such as changes in circulation pattern, swelling, and the like.
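
One simple way to realize such adaptive re-planning (a sketch under assumed inputs, not the application's algorithm) is to pick, at each step, the planned grid point farthest from every sample actually achieved so far, so placement error does not leave coverage gaps:

```python
import numpy as np

def next_target(achieved_sites, planned_grid):
    """Adaptive re-planning sketch: choose the planned grid point whose
    distance to the nearest achieved sample is largest, so that, e.g.,
    +/-0.5 mm placement error does not leave gaps in a nominal 1.5 mm
    sampling lattice. achieved_sites: (N, 3); planned_grid: (M, 3)."""
    d = np.linalg.norm(
        planned_grid[:, None, :] - achieved_sites[None, :, :], axis=-1)
    gap = d.min(axis=1)            # distance to nearest achieved sample
    return planned_grid[int(np.argmax(gap))]
```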

[0052] The registered image and the geometric transformation that relates, for example, an MRI scan volume to an ultrasound volume can be used to guide a medical procedure such as, for example, biopsy or brachytherapy.

[0053] These regions of interest identified on the MRI scan are usually defined by a radiologist based on information available in the MRI prior to biopsy, and may be a few points, point clouds representing regions, or triangulated meshes. Likewise, the 3D TRUS may also reveal features of interest for biopsy, which may also be marked as regions of interest. Because of the importance of registration of the regions of interest in the MRI scan with the TRUS used intra-operatively, in a manual or semi-automated pre-operative image processing method, the radiologist can override or control the image fusion process according to his or her discretion.

[0054] In a preferred embodiment, a segmented MRI and 3D TRUS are obtained from a patient for the prostate gland. The MRI and TRUS data are registered and transformations applied to form a fused image in which voxels of the MRI and TRUS images physically correspond to one another. Regions of interest are then identified either from the source images or from the fused image. The regions of interest are then communicated to the real-time ultrasound system, which tracks the earlier TRUS image. Because the ultrasound image is used for real time guidance, typically the transformation/alignment takes place on the MRI data, which can then be superposed or integrated with the ultrasound data.
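
Applying such a transformation to the MRI data can be sketched as resampling through a dense displacement field; the field itself would come from the registration stage, and the names and layout here are this sketch's assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_mri_to_trus(mri, displacement):
    """Resample an MRI volume into the TRUS frame of reference given a
    dense displacement field of shape (3,) + mri.shape, in voxel units.
    After warping, MRI and TRUS voxels physically correspond."""
    identity = np.indices(mri.shape).astype(float)  # identity coordinates
    return map_coordinates(mri, identity + displacement,
                           order=1, mode='nearest')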

[0055] During the procedure, the real-time TRUS display is supplemented with the MRI (or CAT or other scan) data, and an integrated display presented to the operating urologist. In some cases, haptic feedback may be provided so that the urologist can "feel" features when using a tracker.

[0056] It is noted that, as an alternative, the MRI or CAT scan data may be used to provide a coordinate frame of reference for the procedure, and the TRUS image modified in real-time to reflect an inverse of the ultrasound distortion. That is, the MRI or CAT data typically has a precise and undistorted geometry. On the other hand, the ultrasound image may be geometrically distorted by phase velocity variations in the propagation of the ultrasound waves through the tissues, and to a lesser extent, by reflections and resonances. Since the biopsy instrument itself is rigid, it will correspond more closely to the MRI or CAT model than to the TRUS model, and therefore a urologist seeking to acquire a biopsy sample may have to make course corrections if guided by the TRUS image. If the TRUS image, on the other hand, is normalized to the MRI coordinate system, then such corrections may be minimized. This requires that the TRUS data be modified according to the fused image volume model in real time. However, modern graphics processors (GPU or APU, multicore CPU, FPGA) and other computing technologies make this feasible.
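
With the inverse of the estimated distortion precomputed, the per-frame normalization reduces to a single resampling; a minimal 2D sketch (names and layout assumed for this example) follows, and the same operation ports readily to a GPU:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def normalize_frame(trus_frame, inverse_field):
    """Warp one live 2D TRUS frame by a precomputed inverse of the
    estimated ultrasound distortion, so the frame matches the
    undistorted MRI/CT geometry. inverse_field has shape
    (2,) + trus_frame.shape, in pixel units."""
    identity = np.indices(trus_frame.shape).astype(float)
    return map_coordinates(trus_frame, identity + inverse_field,
                           order=1, mode='nearest')
```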

[0057] According to another aspect, the urologist is presented with a 3D display of the patient's anatomy, supplemented by and registered to the real-time TRUS data. Such 3D displays may be effectively used with haptic feedback.

[0058] It is noted that two different image transformations are at play; the first is a frame of reference transformation, due to the fact that the MRI image is created as a set of slices in parallel planes (rectangular coordinate system), which will generally differ from the image plane of the TRUS, defined by the probe angle (cylindrical coordinate system, with none of the cylindrical axes aligned with a coordinate axis of the MRI). The second transformation represents the elastic deformation of the objects within the image, to bring surfaces, landmarks, and the like into proper alignment.

[0059] The segmentation and/or digitizing may be carried out semi-automatically (manual control over automated image processing tasks) or automatically using computer software. One example of computer software which may be suitable includes 3D Slicer (www.slicer.org), an open source software package capable of automatic image segmentation, manual editing of images, fusion and co-registering of data using rigid and non-rigid algorithms, and tracking of devices for image-guided procedures.

[0060] See, e.g. (each of which is expressly incorporated herein by reference):

[0061] Caskey C F, Hlawitschka M, Qin S, Mahakian L M, Cardiff R D, et al. "An Open Environment CT-US Fusion for Tissue Segmentation during Interventional Guidance", PLoS ONE 6(11): e27372. doi:10.1371/journal.pone.0027372 (Nov. 23, 2011) www.plosone.org/article/info %3Adoi %2F10.1371%2Fjournal.pone.0027372; Shogo Nakano, Miwa Yoshida, Kimihito Fujii, Kyoko Yorozuya, Yukako Mouri, Junko Kousaka, Takashi Fukutomi, Junko Kimura, Tsuneo Ishiguchi, Kazuko Ohno, Takao Mizumoto, and Michiko Harao, "Fusion of MRI and Sonography Image for Breast Cancer Evaluation Using Real-time Virtual Sonography with Magnetic Navigation: First Experience", Jpn. J. Clin. Oncol. (2009) 39(9): 552-559 first published online Aug. 4, 2009 doi:10.1093/jjco/hyp087; Porter, Brian C., et al. "Three-dimensional registration and fusion of ultrasound and MRI using major vessels as fiducial markers." Medical Imaging, IEEE Transactions on 20.4 (2001): 354-359; Kaplan, Irving, et al. "Real time MRI-ultrasound image guided stereotactic prostate biopsy." Magnetic resonance imaging 20.3 (2002): 295-299; Jung, E. M., et al. "New real-time image fusion technique for characterization of tumor vascularisation and tumor perfusion of liver tumors with contrast-enhanced ultrasound, spiral CT or MRI: first results." Clinical hemorheology and microcirculation 43.1 (2009): 57-69; Lindseth, Frank, et al. "Multimodal image fusion in ultrasound-based neuronavigation: improving overview and interpretation by integrating preoperative MRI with intra-operative 3D ultrasound." Computer Aided Surgery 8.2 (2003): 49-69; Xu, Sheng, et al. "Real-time MRI-TRUS fusion for guidance of targeted prostate biopsies." Computer Aided Surgery 13.5 (2008): 255-264; Singh, Anurag K., et al. "Initial clinical experience with real-time transrectal ultrasonography-magnetic resonance imaging fusion-guided prostate biopsy." BJU international 101.7 (2007): 841-845; Pinto, Peter A., et al. "Magnetic resonance imaging/ultrasound fusion guided prostate biopsy improves cancer detection following transrectal ultrasound biopsy and correlates with multiparametric magnetic resonance imaging." The Journal of urology 186.4 (2011): 1281-1285; Reynier, Christophe, et al. "MRI/TRUS data fusion for prostate brachytherapy. Preliminary results." arXiv preprint arXiv:0801.2666 (2008); Schlaier, J. R., et al. "Image fusion of MR images and real-time ultrasonography: evaluation of fusion accuracy combining two commercial instruments, a neuronavigation system and a ultrasound system." Acta neurochirurgica 146.3 (2004): 271-277; Wein, Wolfgang, Barbara Roper, and Nassir Navab. "Automatic registration and fusion of ultrasound with CT for radiotherapy." Medical Image Computing and Computer-Assisted Intervention--MICCAI 2005 (2005): 303-311; Krucker, Jochen, et al. "Fusion of real-time transrectal ultrasound with preacquired MRI for multimodality prostate imaging." Medical Imaging. International Society for Optics and Photonics, 2007; Singh, Anurag K., et al. "Simultaneous integrated boost of biopsy proven, MRI defined dominant intra-prostatic lesions to 95 Gray with IMRT: early results of a phase I NCI study." Radiat Oncol 2 (2007): 36; Hadaschik, Boris A., et al. "A novel stereotactic prostate biopsy system integrating pre-interventional magnetic resonance imaging and live ultrasound fusion." The Journal of urology (2011); Narayanan, R., et al. "MRI-ultrasound registration for targeted prostate biopsy." Biomedical Imaging: From Nano to Macro, 2009. ISBI'09. IEEE International Symposium on. 
IEEE, 2009; Natarajan, Shyam, et al. "Clinical application of a 3D ultrasound-guided prostate biopsy system." Urologic Oncology: Seminars and Original Investigations. Vol. 29. No. 3. Elsevier, 2011; Daanen, V., et al. "MRI/TRUS data fusion for brachytherapy." The International Journal of Medical Robotics and Computer Assisted Surgery 2.3 (2006): 256-261; Sannazzari, G. L., et al. "CT-MRI image fusion for delineation of volumes in three-dimensional conformal radiation therapy in the treatment of localized prostate cancer." British journal of radiology 75.895 (2002): 603-607; Kadoury, Samuel, et al. "Realtime TRUS/MRI fusion targeted-biopsy for prostate cancer: a clinical demonstration of increased positive biopsy rates." Prostate Cancer Imaging. Computer-Aided Diagnosis, Prognosis, and Intervention (2010): 52-62; Comeau, Roch M., et al. "Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery." Medical Physics 27 (2000): 787; Turkbey, Baris, et al. "Documenting the location of prostate biopsies with image fusion." BJU international 107.1 (2010): 53-57; Constantinos, S. P., Marios S. Pattichis, and Evangelia Micheli-Tzanakou. "Medical imaging fusion applications: An overview." Signals, Systems and Computers, 2001. Conference Record of the Thirty-Fifth Asilomar Conference on. Vol. 2. IEEE, 2001; Xu, Sheng, et al. "Closed-loop control in fused MR-TRUS image-guided prostate biopsy." Medical Image Computing and Computer-Assisted Intervention--MICCAI 2007 (2007): 128-135; Turkbey, Baris, et al. "Documenting the location of systematic transrectal ultrasound-guided prostate biopsies: correlation with multi-parametric MRI." Cancer imaging: the official publication of the International Cancer Imaging Society 11 (2011): 31; Tang, Annie M., et al. "Simultaneous ultrasound and MRI system for breast biopsy: compatibility assessment and demonstration in a dual modality phantom." Medical Imaging, IEEE Transactions on 27.2 (2008): 247-254; Wong, Alexander, and William Bishop. "Efficient least squares fusion of MRI and CT images using a phase congruency model." Pattern Recognition Letters 29.3 (2008): 173-180; Ewertsen, Caroline, et al. "Biopsy guided by real-time sonography fused with MRI: a phantom study." American Journal of Roentgenology 190.6 (2008): 1671-1674; Khoo, V. S., and D. L. Joon. "New developments in MRI for target volume delineation in radiotherapy." British journal of radiology 79. Special Issue 1 (2006): S2-S15; and Nakano, Shogo, et al. "Fusion of MRI and sonography image for breast cancer evaluation using real-time virtual sonography with magnetic navigation: first experience." Japanese journal of clinical oncology 39.9 (2009): 552-559.

[0062] See also U.S. Pat. Nos. 5,227,969; 5,299,253; 5,389,101; 5,411,026; 5,447,154; 5,531,227; 5,810,007; 6,200,255; 6,256,529; 6,325,758; 6,327,490; 6,360,116; 6,405,072; 6,512,942; 6,539,247; 6,561,980; 6,662,036; 6,694,170; 6,996,430; 7,079,132; 7,085,400; 7,171,255; 7,187,800; 7,201,715; 7,251,352; 7,266,176; 7,313,430; 7,379,769; 7,438,685; 7,520,856; 7,582,461; 7,619,059; 7,634,304; 7,658,714; 7,662,097; 7,672,705; 7,727,752; 7,729,744; 7,804,989; 7,831,082; 7,831,293; 7,850,456; 7,850,626; 7,856,130; 7,925,328; 7,942,829; 8,000,442; 8,016,757; 8,027,712; 8,050,736; 8,052,604; 8,057,391; 8,064,664; 8,067,536; 8,068,650; 8,077,936; 8,090,429; 8,111,892; 8,180,020; 8,135,198; 8,137,274; 8,137,279; 8,167,805; 8,175,350; 8,187,270; 8,189,738; 8,197,409; 8,206,299; 8,211,017; 8,216,161; 8,249,317; 8,275,182; 8,277,379; 8,277,398; 8,295,912; 8,298,147; 8,320,653; 8,337,434; and US Patent Application No. 2011/0178389, each of which is expressly incorporated herein by reference.

[0063] It is therefore an object to provide a method for combining information from a plurality of medical imaging modalities, comprising: acquiring a first volumetric image using a first volumetric imaging modality of an anatomical region; defining an elastic soft tissue model for at least a portion of the anatomical region encompassed by the first volumetric image; labeling features of the anatomical region based on at least the first volumetric image and the soft tissue model, comprising at least features of the anatomical region which are visualized by both the first imaging modality and a second imaging modality, and features of the anatomical region which are poorly visualized in the second imaging modality; acquiring a second volumetric image of the anatomical region using the second imaging modality comprising a real time image; registering the features of the anatomical region which are visualized by both the first imaging modality and a second imaging modality, and the features of the anatomical region which are poorly visualized in the second imaging modality, with respect to the soft tissue model, such that the features of the anatomical region which are visualized by both the first imaging modality and a second imaging modality are linked, compensating for at least one distortion of the portion of the anatomical region between a first time of the first volumetric image and a second time of the second volumetric image; and presenting an output based on at least the features of the anatomical region which are poorly visualized in the second imaging modality in the real time image, compensated based on at least the registered features and the soft tissue model.

[0064] It is also an object to provide a method for combining information from a plurality of medical imaging modalities, comprising: acquiring volumetric images using a first volumetric imaging modality of an anatomical region of a person under a plurality of states of deformation; acquiring volumetric images using a second volumetric imaging modality of the anatomical region of the person under a plurality of states of deformation; defining an elastic soft tissue model for the anatomical region comprising model parameters representing tissue compliance and surface properties; labeling features of the anatomical region based on at least the volumetric images of the first imaging modality, the volumetric images of the second imaging modality, and the soft tissue model, wherein the labeling aligns corresponding features and compensates for rigid, elastic and affine transformations of the anatomical region between the times of acquiring the volumetric images of the first imaging modality and the volumetric images of the second imaging modality; and presenting an output based on at least the labeled features of the anatomical region.

[0065] A further object provides a system for combining information from a plurality of medical imaging modalities, comprising: an input port configured to receive at least two first volumetric images using a first volumetric imaging modality of an anatomical region representing respectively different states of elastic deformation, and at least two second volumetric images using a second volumetric imaging modality of the anatomical region representing respectively different states of elastic deformation; at least one processor configured to define an elastic soft tissue model for at least a portion of the anatomical region encompassed by the first volumetric images, and to label features of the anatomical region based on at least the first volumetric images and the soft tissue model; and a memory configured to store the defined elastic soft tissue model.

[0066] The first imaging modality may comprise at least one of positron emission tomography, computed tomography, magnetic resonance imaging, magnetic resonance spectroscopic imaging, and elastography. The anatomical region may comprise an organ selected from the group consisting of prostate, heart, lung, kidney, liver, bladder, ovaries, and thyroid. The therapeutic intervention may comprise one or more selected from the group consisting of laser ablation, brachytherapy, stem cell injection for ischemia of the heart, cryotherapy, direct injection of a photothermal or photodynamic agent, and radiotherapy. The differentially visualized anatomical region may be at least one anatomical structure to be spared in an invasive procedure, selected from the group consisting of a nerve bundle, a urethra, a rectum and a bladder. The registered features may comprise anatomical landmarks selected from the group consisting of a urethra, a urethra at a prostate base, a urethra at an apex, a verumontanum, a calcification, a cyst, a seminal vesicle, an ejaculatory duct, a bladder and a rectum.

[0067] The method may further comprise acquiring a tissue sample from a location determined based on at least the first imaging modality and the second imaging modality.

[0068] The method may further comprise delivering a therapeutic intervention at a location determined based on at least the first imaging modality and the second imaging modality.

[0069] The method may further comprise performing an image-guided at least partially automated procedure selected from the group consisting of laser ablation, high intensity focused ultrasound, cryotherapy, radio frequency, brachytherapy, IMRT, and robotic surgery.

[0070] The differentially visualized anatomical region may comprise at least one of a suspicious lesion for targeted biopsy, a suspicious lesion for targeted therapy, and a lesion for targeted dose delivery.

[0071] The method may further comprise automatically defining a plan comprising a target and an invasive path to reach the target.

[0072] The plan may be defined based on the first imaging modality and adapted in real time based on at least the second imaging modality. The plan may comprise a plurality of targets.

[0073] A plurality of anatomical features may be consistently labeled in the first volumetric image and the second volumetric image. The soft tissue model may comprise an elastic triangular mesh approximating a surface of an organ. The anatomical landmark registration may be performed rigidly using a simultaneous landmark and surface registration algorithm. An affine registration may be performed. The registering may comprise an elastic registration based on at least one of an intensity, a binary mask, and surfaces and landmarks.

[0074] The model may be derived from a plurality of training datasets representing different states of deformation of an organ of a respective human using the first imaging modality and the second imaging modality.

[0075] The method may further comprise identifying a mismatch of corresponding anatomical features of the first volumetric image and the second volumetric image, and updating the registration to converge the corresponding anatomical features to reduce the mismatch based on corrections of an elastic deformation model constrained by object boundaries.

BRIEF DESCRIPTION OF THE DRAWINGS

[0076] FIG. 1 shows a typical workflow for a surgeon in using a fusion platform for mapping plan from a pre-procedural planning image to the intra-procedural live image;

[0077] FIG. 2 shows a method for rigid registration, in which I.sub.1(x) and I.sub.2(x) represent the planning and live images, respectively, with x being the coordinate system; .OMEGA..sub.1,i and .OMEGA..sub.2,i represent the domains of the objects labeled in images I.sub.1 and I.sub.2, respectively, such that i=1, 2, 3, . . . represent object labels 1, 2, 3, etc.; the w.sub.i's are relative weights for the different costs; Sim(A,B) represents the similarity cost between two objects A and B; and R represents the rigid transformation matrix, which includes rotation and translation in a 3D frame of reference;

[0078] FIG. 3 shows a method for affine registration, in which I.sub.1(x) and I.sub.2(x) represent the planning and live images, respectively, with x being the coordinate system; .OMEGA..sub.1,i and .OMEGA..sub.2,i represent the domains of the objects labeled in images I.sub.1 and I.sub.2, respectively, such that i=1, 2, 3, . . . represent object labels 1, 2, 3, etc.; the w.sub.i's are relative weights for the different costs; and Sim(A,B) represents the similarity cost between two objects A and B;

[0079] FIG. 4 shows an object process diagram for non-rigid elastic image registration, using rigid and/or affine registration as an initialization, wherein multiple labeled objects are used to compute the correspondence while satisfying the regularization constraints;

[0080] FIG. 5 shows an object process diagram for planning a laser ablation of the prostate gland, in which a radiologist/radiation oncologist analyzes multiparametric MRI (mpMRI) images of a prostate and plans the location of needle tip, trajectory and duration of needle application; and

[0081] FIGS. 6A and 6B show a conceptual diagram for planning a laser ablation of the prostate gland, in which FIG. 6A shows the target lesion identified by an expert in sagittal and transverse images, and FIG. 6B shows the plan for laser ablation in the two orthogonal directions.

DESCRIPTION OF THE EMBODIMENTS

[0082] The present invention will be described with respect to a process, which may be carried out through interaction with a user or automatically. One skilled in the art will appreciate that various types of imaging systems, including but not limited to MRI, ultrasound, PET, CT, SPECT, X-ray, and the like, may be used for either pre-operative or intra-operative imaging, but that a preferred scheme employs a fusion of MRI and/or CT and/or PET and ultrasound imaging for the pre-operative imaging, and transrectal ultrasound for intra-operative real time imaging in a prostate diagnostic or therapeutic procedure.

[0083] According to an embodiment of the present technology, one or more pre-procedure "planning" images are used to plan a procedure, and one or more intra-procedure "live" images are used to guide the procedure. For example, prostate biopsy and ablation are typically performed under ultrasound guidance. While imaging speed and cost make ultrasound an ideal modality for guiding biopsy, ultrasound images are insufficient and ineffective at identifying prostate cancer. Multi-parametric MRI (mpMRI) has been shown to be very sensitive and specific for identifying, localizing and grading prostate cancer. mpMRI comprises various MR imaging protocols, including T2-weighted imaging, diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) imaging and MR spectroscopic imaging (MRSI). Radiologists are best placed to analyze the MRI images for detecting and grading prostate cancer. However, it remains challenging to take the information from radiologists and present it to the urologists or surgeons who use ultrasound as the imaging method for performing a biopsy. Likewise, MRI is ideal for identifying the sensitive surrounding structures that must be spared in order to preserve quality of life after the procedure.

[0084] Recent advances in clinical research and accurate ablation have increased interest in focal ablation of the prostate, in which the location of the malignancy is known and the malignancy is treated locally, leaving the surroundings intact. For example, high-intensity focused ultrasound ablation of the prostate is performed under ultrasound guidance. However, due to the limitations of ultrasound, it is hard to correlate findings in the pre-procedure MRI with the intra-procedure ultrasound. As a result, a much larger area is treated to ensure that the malignancy is treated properly. In other words, most such users perform a "cognitive" registration, i.e., they use their own knowledge and interpretation of prostate anatomy to guide the procedure while using an ineffective imaging method. The same challenge applies in robotic surgery, where the nerve bundles are not seen clearly under live optical imaging. As a result, nerve sparing remains a challenge in robotic surgery. Again, MR imaging provides the necessary information, but the available tools are insufficient to apply that information to a surgical method.

[0085] Although methods exist for performing MRI-TRUS image fusion, they suffer from significant drawbacks. For example, the method of Kumar et al. (see U.S. Pub. App. 2010/02906853) uses a prostate surface-based non-rigid image registration. The method uses only triangulated prostate boundaries as input, performs point-wise registration only at the surface boundaries, and then interpolates from the boundaries to the interior of the object. Significant drawbacks include the lack of information from surrounding structures, and the significant skill, knowledge and effort required to achieve a good manual registration between MRI and ultrasound, which is very challenging, especially when surgeons are not skilled at reading and interpreting MR images. The results can therefore be variable, since there can be significant differences in the orientation and shape of the gland between MRI and transrectal ultrasound. Beyond ignoring the outside structures that could orient or rigidly register the prostate, the method also completely ignores the prostate's internal structures and details. Therefore, any internal twisting, rotation or non-rigid distortion is not accounted for, which may lead to poor results, especially when an endo-rectal coil is used in MRI. In addition, the plan is mapped only as a region of interest, leaving it to the surgeon to interpret how to properly sample a given region. Finally, in case of misregistration, no way is disclosed to edit or refine the registration.

[0086] In a specific embodiment of the invention, for a fusion guided biopsy procedure (see FIG. 2), the method plans the location, trajectory and depth of needle insertion, optimized such that the likelihood of sampling the malignancy is maximized while the number of biopsy cores is minimized.
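This objective can be illustrated with a simple greedy scheme. The following Python sketch is not the patented method: it assumes a hypothetical voxel-wise malignancy likelihood map and abstracts each core as a sphere around its tip, ignoring trajectory and entry-point constraints.

```python
import numpy as np

def plan_biopsy_cores(prob_map, core_radius_vox, max_cores, coverage=0.9):
    """Greedily choose core tip voxels until most suspicious mass is sampled.

    prob_map is a hypothetical voxel-wise malignancy likelihood; each core
    is abstracted as a sphere of radius core_radius_vox around its tip."""
    total = float(prob_map.sum())
    if total == 0.0:
        return []
    remaining = prob_map.astype(float).copy()
    zz, yy, xx = np.indices(prob_map.shape)
    cores = []
    for _ in range(max_cores):
        tip = np.unravel_index(int(np.argmax(remaining)), remaining.shape)
        cores.append(tip)
        sampled = ((zz - tip[0]) ** 2 + (yy - tip[1]) ** 2 +
                   (xx - tip[2]) ** 2) <= core_radius_vox ** 2
        remaining[sampled] = 0.0                 # this region is now sampled
        if 1.0 - remaining.sum() / total >= coverage:
            break                                # enough likelihood covered
    return cores
```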

[0087] FIG. 2 shows a method according to the present technology for rigid registration. In FIG. 2, I.sub.1(x) and I.sub.2(x) represent the planning and live images, respectively, with x being the coordinate system. .OMEGA..sub.1,i and .OMEGA..sub.2,i represent the domains of the objects labeled in images I.sub.1 and I.sub.2, respectively, such that i=1, 2, 3, . . . represent object labels 1, 2, 3, etc. For example, i=1, 2 and 3 may correspond to the prostate, bladder and rectum, respectively. The w.sub.i's are relative weights for the different costs, and Sim(A,B) represents the similarity cost between two objects A and B. For example, for intensity-based metrics, the cost could be the sum of squared intensity differences or a mutual information based metric; in the case of binary objects, the cost may be relative overlap; in the case of surfaces, the cost could be a symmetric distance between corresponding points. R represents the rigid transformation matrix, which includes rotation and translation in a 3D frame of reference.
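As an illustration of such a weighted multi-object cost, the sketch below combines a sum-of-squared-differences intensity term with relative-overlap terms for binary object masks, resampling under a candidate rigid transform (R, t) with SciPy. The function names and weights are illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import affine_transform

def apply_rigid(volume, R, t, order=1):
    """Resample a volume under y = R x + t. scipy pulls input values at
    x = matrix @ y + offset, so we pass the inverse rotation."""
    Rinv = R.T                       # inverse of a rotation is its transpose
    return affine_transform(volume, Rinv, offset=-Rinv @ t, order=order)

def ssd(a, b):
    """Sum-of-squared-differences intensity cost (per-voxel mean)."""
    return float(np.mean((a - b) ** 2))

def overlap_cost(mask_a, mask_b):
    """1 - Dice: zero when the binary objects overlap perfectly."""
    inter = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 1.0 - (2.0 * inter / denom if denom else 1.0)

def rigid_cost(R, t, I1, I2, masks1, masks2, w_img=1.0, w_obj=(1.0, 1.0, 1.0)):
    """Weighted sum of an intensity term and per-object Sim terms,
    in the spirit of FIG. 2; the weights here are illustrative."""
    cost = w_img * ssd(apply_rigid(I1, R, t), I2)
    for w, m1, m2 in zip(w_obj, masks1, masks2):
        warped = apply_rigid(m1.astype(float), R, t) > 0.5
        cost += w * overlap_cost(warped, m2)
    return cost
```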

[0088] Likewise, in another embodiment, for a fusion guided focal ablation (see FIG. 3), the needle placement is computed in advance such that the computed location, depth and trajectory maximize dosage/energy delivery at the malignancy while minimizing exposure of the surrounding region.

[0089] FIG. 3 shows a method for affine registration. In FIG. 3, I.sub.1(x) and I.sub.2(x) represent the planning and live images, respectively, with x being the coordinate system. .OMEGA..sub.1,i and .OMEGA..sub.2,i represent the domains of the objects labeled in images I.sub.1 and I.sub.2, respectively, such that i=1, 2, 3, . . . represent object labels 1, 2, 3, etc. For example, i=1, 2 and 3 may correspond to the prostate, bladder and rectum, respectively. The w.sub.i's are relative weights for the different costs, and Sim(A,B) represents the similarity cost between two objects A and B. For example, for intensity-based metrics, the cost could be the sum of squared intensity differences or a mutual information based metric; in the case of binary objects, the cost may be relative overlap; in the case of surfaces, the cost could be a symmetric distance between corresponding points. A represents the affine transformation matrix that registers image I.sub.1 to the frame of reference of image I.sub.2.
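A corresponding affine sketch, under the same assumptions as above, parameterizes A as a full 3x3 matrix plus a translation and fits it with a generic optimizer. This is a toy intensity-only illustration; a practical implementation would add the weighted object terms and a multi-resolution strategy.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def apply_affine(volume, params, order=1):
    """params: 12 values -> 3x3 matrix A (row-major) plus translation t;
    resamples the volume under y = A x + t via the inverse map."""
    params = np.asarray(params, float)
    A, t = params[:9].reshape(3, 3), params[9:]
    Ainv = np.linalg.inv(A)
    return affine_transform(volume, Ainv, offset=-Ainv @ t, order=order)

def affine_register(I1, I2):
    """Toy intensity-only affine fit with a derivative-free optimizer."""
    x0 = np.r_[np.eye(3).ravel(), np.zeros(3)]        # start at identity
    cost = lambda p: float(np.mean((apply_affine(I1, p) - I2) ** 2))
    res = minimize(cost, x0, method="Nelder-Mead",
                   options={"maxiter": 500, "xatol": 1e-3})
    return res.x                                      # fitted A and t
```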

[0090] The procedure is preferably performed under intra-procedural image guidance, with the information from the pre-procedure image mapped to an intra-procedure image using a combination of rigid, affine and elastic registration, as shown in FIG. 4, which shows an object process diagram for non-rigid elastic image registration using rigid and/or affine registration as an initialization. The method uses multiple labeled objects to compute the correspondence while satisfying the regularization constraints. During the procedure, the surgeon identifies the same landmarks, features and structures as in the pre-procedure image and labels them consistently. This may be done automatically or manually after acquiring an initial intra-procedural scan. The registration method then uses the labels in the pre-procedure and intra-procedure images to identify the structural correspondence and registers the images using a combination of rigid, affine and elastic registration.

[0091] According to the algorithm detailed in FIG. 4, two inputs are provided: the rigid or rigid-plus-affine registered planning image I.sub.1', having labeled objects .OMEGA..sub.1,i' for i.gtoreq.1 and landmarks X.sub.j' for j.gtoreq.1; and the intra-operative image I.sub.2, having labeled objects .OMEGA..sub.2,i and landmarks Y.sub.j for j.gtoreq.1.

[0092] The algorithm initializes T=I, the identity transformation, and then alternates two minimizations with respect to T.sup.iter. The first minimizes the combined intensity and regularization cost

$$\sum_{i} w_{1,i}\int_{x\in T^{iter}(\Omega'_{1,i})\cup\Omega_{2,i}}\operatorname{sim}\bigl(T^{iter}(I'_{1}),I_{2}\bigr)\,dx+\sum_{i} w_{3,i}\int_{x\in T^{iter}(\Omega'_{1,i})\cup\Omega_{2,i}}\operatorname{Reg}\bigl(T^{iter}\bigr)\,dx,$$

and updates T.sup.iter based on this intensity cost.

[0093] The second minimizes the landmark cost

$$\sum_{j} w_{2,j}\operatorname{sim}\bigl(T^{iter}(X'_{j}),Y_{j}\bigr)+\sum_{j} w_{3,j}\operatorname{Reg}\bigl(T^{iter}\bigr),$$

and updates T.sup.iter based on the landmarks.

[0094] Upon convergence, the transform T=T.sup.iter is accepted, the registered image T(I.sub.1') is produced, and the mapped plan and labeled objects are output. If convergence has not been reached, the two minimizations are iterated.
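The alternation itself can be sketched as a simple loop. In the Python sketch below, the four callables are placeholders for the intensity/overlap and landmark terms of the two minimizations; the convergence test on the combined cost is an assumption, since the patent does not specify the criterion.

```python
def alternating_registration(T0, intensity_cost, landmark_cost,
                             step_intensity, step_landmark,
                             tol=1e-4, max_iter=100):
    """Alternate descent on the intensity/overlap term and the landmark
    term, as in FIG. 4. Each step_* callable should return an improved
    transform for its cost; the costs are evaluated for convergence."""
    T = T0
    prev = intensity_cost(T) + landmark_cost(T)
    for _ in range(max_iter):
        T = step_intensity(T)              # update on the intensity cost
        T = step_landmark(T)               # update on the landmark cost
        cur = intensity_cost(T) + landmark_cost(T)
        if abs(prev - cur) < tol:          # assumed convergence criterion
            return T
        prev = cur
    return T
```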

[0096] There are two different methods to perform the registrations. In the first, the landmarks and features are used as "soft-correspondence" or "hard-correspondence" points and the structures as binary images, as shown in FIG. 5, which shows an object process diagram for planning a laser ablation of the prostate gland; a radiologist/radiation oncologist analyzes mpMRI images of the prostate and plans the location of the needle tip, the trajectory, and the duration of needle application. In the second, the landmarks and features are used as "soft-landmark" or "hard-landmark" points and the structures as surface meshes (FIG. 6). "Soft landmarks" are landmarks that may not correspond exactly with each other; there may be some tolerance or level of confidence that is refined during registration. "Hard landmarks" are landmarks that are assumed to match exactly; their correspondence is not allowed to change during registration.
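One way to realize the soft/hard distinction in a cost function, sketched below, is to treat hard landmarks as approximate equality constraints (here via a very large weight) and soft landmarks as tolerance-scaled penalties. The flag names and weights are illustrative assumptions.

```python
import numpy as np

def landmark_term(moving_pts, fixed_pts, kinds, soft_tol=2.0, hard_weight=1e6):
    """Landmark cost distinguishing soft and hard correspondences.

    moving_pts: (n, 3) planning-image landmarks after the current transform.
    fixed_pts:  (n, 3) corresponding live-image landmarks.
    kinds:      per-landmark flag, "soft" or "hard" (names are illustrative).
    Soft landmarks pay a tolerance-scaled penalty; hard landmarks are
    approximated as constraints via a very large weight."""
    cost = 0.0
    for p, q, kind in zip(moving_pts, fixed_pts, kinds):
        d2 = float(np.sum((np.asarray(p) - np.asarray(q)) ** 2))
        cost += hard_weight * d2 if kind == "hard" else d2 / soft_tol ** 2
    return cost
```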

[0097] FIGS. 6A and 6B show a conceptual diagram for planning a laser ablation of the prostate gland. FIG. 6A shows the target lesion identified by an expert in sagittal and transverse images. FIG. 6B shows the plan for laser ablation in the two orthogonal directions. A and B represent the planned needles, which ablate the area shown in hatched lines. The ablated area covers the planned target.

[0098] The registration provides a surgeon with image fusion such that the information from the pre-procedure or planning images is mapped to the frame of reference of the intra-procedure or live images. The mapped information contains at least one structural image, the target area to be treated, and a plan for the procedure. The plan may be in the form of needle location and trajectory, along with the duration of needle application if needed.

[0099] FIG. 1 shows the overall workflow of a surgeon, in which the images planned by an expert (radiologist/radiation oncologist) are fused with a live imaging modality such as ultrasound for real-time guidance, while taking advantage of the diagnostic capabilities of the pre-procedural planning image. The pre-procedure image is registered with the live image using a combination of rigid, affine and non-rigid elastic registration. The registration provides a correspondence or deformation map, which is used to map planning information from the frame of reference of the planning image to the live image. The method permits a radiologist, radiation oncologist or oncological imaging specialist to analyze pre-operative images and identify and label various structures, including the objects of interest, such as the prostate in the examples detailed above. The structures identified and labeled by the imaging specialist could include external and internal structures and landmarks such as the bladder, urethra, rectum, seminal vesicles, nerve bundles, fibromuscular stroma and prostate zones. These structures are identified and stored as points, binary masks or surface meshes, and each structure is labeled uniquely. In addition, the method includes an automatically (or semi-automatically) generated plan for the entire procedure.

[0100] FIGS. 2, 3 and 4 represent the rigid, affine and non-rigid elastic registration methods. An expert or a computer algorithm identifies and labels various anatomical structures and landmarks in the planning image. Let image I.sub.1(x) represent the structural planning image; in one embodiment, the structural image could be a T2-weighted, transversally acquired MRI image. The subscript 1 corresponds to the planning or pre-procedural image. Let .OMEGA..sub.1,i represent the object labeled i, where i=1, 2, 3, . . . represents a unique label for an anatomical object. For example, if the bladder is labeled 1 in the planning image, .OMEGA..sub.1,1 consists of all the voxels corresponding to the bladder in image I.sub.1. Alternatively, objects may be represented by surfaces, in which case each object consists of vertices and the triangles joining them. Let X.sub.j represent the point landmarks in the planning image, where j=1, 2, 3, . . . is the index of the point landmarks identified in the planning image either manually or using an automated method. In addition, the expert provides the plan for a procedure on the structural image.
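For concreteness, the voxel domains .OMEGA..sub.i can be represented as boolean masks extracted from an integer label volume, as in this minimal sketch; the label-to-name mapping is hypothetical.

```python
import numpy as np

def object_domains(label_volume, label_names):
    """Extract boolean voxel domains Omega_i from an integer label image.
    label_names is a hypothetical mapping, e.g. {1: "prostate", 2: "bladder"}."""
    return {name: label_volume == label for label, name in label_names.items()}

# Example: the voxels labeled 1 form the bladder domain Omega_1
labels = np.zeros((4, 4, 4), dtype=int)
labels[1:3, 1:3, 1:3] = 1
domains = object_domains(labels, {1: "bladder"})
assert domains["bladder"].sum() == 8
```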

[0101] During the procedure, a surgeon loads the planning image I.sub.1 along with the object labels or surface meshes, the landmarks and the plan. The planning image I.sub.1 is projected to the intra-procedure image I.sub.2 acquired during the procedure. The labels and landmarks may be defined in image I.sub.2 either manually or automatically. In one embodiment, the labels in the target image I.sub.2 are automatically computed by letting the planning image I.sub.1 deform to the shape of the target image I.sub.2. The object maps defined in the planning image also participate in the registration, such that segmentation (object labeling) and registration (computation of correspondence) happen simultaneously in the target image.

[0102] FIG. 4 shows one way of performing the registration between the pre-procedure planning image and the intra-operative image. The method uses the object maps along with the intensity information and the landmark correspondences to compute the correspondence between the images. The resulting deformation map is used to map the plan from the frame of reference of the planning image to the intra-procedural image.
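Applying such a deformation map to plan coordinates might look like the following sketch, which assumes (as one possible representation) that the registration outputs a dense displacement field in voxel units, sampled at the plan points by linear interpolation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def map_plan_points(points, displacement):
    """Carry plan coordinates from the planning frame to the live frame.

    points:       (n, 3) voxel coordinates of plan items (e.g. needle tips).
    displacement: (3, Z, Y, X) dense displacement field from the
                  registration, in voxel units (an assumed representation).
    """
    pts = np.asarray(points, dtype=float).T            # shape (3, n)
    shift = np.stack([map_coordinates(displacement[a], pts, order=1)
                      for a in range(3)])              # sample field at points
    return (pts + shift).T                             # displaced coordinates
```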

[0103] FIGS. 5, 6A and 6B represent an embodiment in which the plan is a needle-based laser ablation plan. In this embodiment, the radiologist or radiation oncologist analyzes the MRI image and automatically or manually computes a target region, along with labeling the surrounding sensitive tissue, i.e., the safety zone. The automated method then computes the trajectory, location, energy settings and duration of laser application such that the target region is completely ablated while the safety zone is spared.

[0104] MRI data, which may include post-segmented MR image data, pre-segmented interpreted MRI data, the original MRI scans, suspicion index data, and/or instructions or a plan, may be communicated to a urologist. The MRI data may be stored in the DICOM format, in another industry-standard format, or in a proprietary format unique to the imaging modality or processing platform generating the medical images.
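For example, DICOM objects can be read with the open-source pydicom package; the file name below is illustrative.

```python
# Minimal sketch of reading one MRI slice stored as DICOM with pydicom.
import pydicom

ds = pydicom.dcmread("planning_mri_slice.dcm")   # illustrative file name
pixels = ds.pixel_array                          # image data as a numpy array
print(ds.Modality, ds.get("SeriesDescription", "n/a"), pixels.shape)
```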

[0105] The urology center where the MRI data is received may contain an image-guided biopsy or therapy system such as the Artemis, UroStation (Koelis, La Tronche, France), or BiopSee (MedCom GmbH, Darmstadt, Germany). Alternatively, the image-guided biopsy system may comprise hardware and/or software configured to work in conjunction with a urology center's preexisting hardware and/or software. For example, a mechanical tracking arm may be connected to a preexisting ultrasound machine, and a computer programmed with suitable software may be connected to the ultrasound machine or the arm. In this way, the equipment already found in a urology center can be adapted to serve as an image-guided biopsy system of the type described in this disclosure. A tracking arm on the system may be attached to an ultrasound probe, and an ultrasound scan performed.

[0106] A two-dimensional (2D) or 3D model of the prostate may be generated using the ultrasonic images produced by the scan, and segmentation of the model may be performed. Pre-processed and post-processed ultrasound image data may be transmitted to the urology center. Volumetry may also be performed, including geometric or planimetric volumetry. Segmentation and/or volumetry may be performed manually or automatically by the image-guided biopsy system. Preselected biopsy sites (e.g., selected by the radiologist during the analysis) may be incorporated into and displayed on the model. All of the ultrasound data generated by these processes may be electronically stored on the urology center's server via a communications link.
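Both volumetry styles are straightforward to compute from a segmentation. The sketch below assumes a boolean mask with known voxel spacing for the geometric case, and per-slice contour areas for the planimetric case.

```python
import numpy as np

def voxel_volume_ml(mask, spacing_mm):
    """Geometric volumetry: segmented voxel count times voxel volume."""
    voxel_mm3 = float(np.prod(spacing_mm))           # e.g. (0.5, 0.5, 3.0) mm
    return float(mask.sum()) * voxel_mm3 / 1000.0    # mm^3 -> mL

def planimetric_volume_ml(slice_areas_mm2, slice_thickness_mm):
    """Planimetric volumetry: per-slice contour areas times slice thickness."""
    return sum(slice_areas_mm2) * slice_thickness_mm / 1000.0
```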

[0107] As described above, processing of the MRI data or ultrasound data, including segmentation and volumetry, may be carried out manually, automatically, or semi-automatically. This may be accomplished through the use of segmentation software, such as Segasist Prostate Auto-Contouring, which may be included in the image-guided biopsy system. Such software may also be used to perform various types of contour modification, including manual delineation, smoothing, rotation, translation, and edge snapping. Further, the software may be capable of being trained or calibrated, in that it observes, captures, and saves the user's contouring and editing preferences over time and applies this knowledge to contouring new images. This software need not be hosted locally, but rather may be hosted on a remote server or in a cloud computing environment. At the urology center, the MRI data may be integrated with the image-guided biopsy system.

[0108] The fusion process may be aided by the use of the instructions included with the MRI data. The fusion process may include registration of the MR and ultrasonic images, which may include manual or automatic selection of fixed anatomical landmarks in each image modality. Such landmarks may include the base and apex of the prostatic urethra. The two images may be substantially aligned and then one image superimposed onto the other. Registration may also be performed with models of the regions of interest. These models of the regions of interest, or target areas, may also be superimposed on the digital prostate model.
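Landmark-based rigid alignment of this kind is commonly computed in closed form with the Kabsch algorithm, as sketched below; the sketch assumes matched landmark coordinates are already available in both modalities.

```python
import numpy as np

def kabsch_rigid(src, dst):
    """Closed-form least-squares rigid alignment of matched landmarks
    (e.g. the urethra at the prostate base and apex in MR and ultrasound).
    src, dst: (n, 3) arrays of corresponding points; returns R, t such
    that dst ~ R @ src + t."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```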

[0109] The fusion process thus seeks to anatomically align the 3D models obtained by the radiological imaging, e.g., MRI, with the 3D models obtained by the ultrasound imaging, using anatomical landmarks as anchors and performing a warping of at least one of the models to conform to the other. The radiological analysis is preserved, such that information from the analysis relevant to suspicious regions or areas of interest is conveyed to the urologist. The fused models are then provided for use with the real-time ultrasound system, to guide the urologist in obtaining biopsy samples or performing a therapeutic procedure.
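The warping step can be approximated with a thin-plate-spline fit to the landmark anchors, as sketched below with SciPy's RBFInterpolator. The patent's elastic model is more constrained, so this is only a conceptual stand-in; it also needs a handful of non-degenerate anchor pairs to be well posed.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(anchors_src, anchors_dst, query_points):
    """Thin-plate-spline warp fitted to matched landmark anchors.

    anchors_src, anchors_dst: (p, 3) corresponding anchor coordinates.
    Returns the warped positions of query_points, shape (q, 3)."""
    warp = RBFInterpolator(np.asarray(anchors_src, float),
                           np.asarray(anchors_dst, float),
                           kernel="thin_plate_spline")
    return warp(np.asarray(query_points, float))
```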

[0110] Through the use of the described methods and systems, the 3D MR image is integrated or fused with real-time ultrasonic images, based on a 3D ultrasound model obtained prior to the procedure (perhaps immediately prior). This allows the regions of interest to be viewed under real-time ultrasonic imaging so that they can be targeted during biopsy or therapy.

[0111] In this way, biopsy tracking and targeting using image fusion may be performed by the urologist for diagnosis and management of prostate cancer. Targeted biopsies may be more effective and efficient at revealing cancer than non-targeted, systematic biopsies. Such methods are particularly useful in diagnosing the ventral prostate gland, where malignancy may not always be detected with biopsy. The ventral prostate gland, as well as other areas of the prostate, often harbors malignancy in spite of negative biopsy. Targeted biopsy addresses this problem by providing a more accurate diagnostic method. This may be particularly true when the procedure involves the use of multimodal MRI. Additionally, targeting of the suspicious areas may reduce the need for taking multiple biopsy samples or performing saturation biopsy.

[0112] The described methods and systems may also be used to perform saturation biopsy. Saturation biopsy is a multicore biopsy procedure in which a greater number of samples are obtained from throughout the prostate than with a standard biopsy. Twenty or more samples may be obtained during saturation biopsy, and sometimes more than one hundred. This procedure may increase tumor detection in high-risk cases. However, the benefits of such a procedure are often outweighed by its drawbacks, such as the inherent trauma to the prostate, the higher incidence of side effects, the additional use of analgesia or anesthesia, and the high cost of processing the large number of samples. Through use of the methods and systems of the current invention, focused saturation biopsy may be performed to exploit the benefits of a saturation biopsy while minimizing the drawbacks. After target areas suspicious of tumor are identified, a physician may sample four or more cores, all from the suspected area. This procedure avoids the need for high-concentration sampling in healthy areas of the prostate. Further, this procedure will not only improve detection, but will enable one to determine the extent of the disease.

[0113] These methods and systems of the current invention also enable physicians to later revisit the suspected areas for resampling over time in order to monitor the cancer's progression. Through active surveillance, physicians can assess the seriousness of the cancer and whether further treatment would be of benefit to the patient. Since many prostate cancers do not pose serious health threats, a surveillance program may often provide a preferable alternative to radical treatment, helping patients to avoid the risk of side effects associated with treatment.

[0114] In addition to MRI-ultrasound fusion, image-guided biopsy systems such as the Artemis may also be used in accordance with the current technology for performing an improved non-targeted, systematic biopsy under 3D ultrasonic guidance. When using conventional, unguided, systematic biopsy, the biopsy locations are not always symmetrically distributed and may be clustered. However, by attaching the image-guided biopsy system to an ultrasound probe, non-targeted systematic biopsy may be performed under the guidance of 3D ultrasonic imaging. This may allow for more even distribution of biopsy sites and wider sampling over conventional techniques. During biopsies performed using either MRI-ultrasound fusion or 3D ultrasonic guidance, the image data may be used as a map to assist the image-guided biopsy system in navigation of the biopsy needle, as well as tracking and recording the navigation.

[0115] The process described above may further include making treatment decisions and carrying out the treatment of prostate cancer using the image-guided biopsy system. The current invention provides physicians with information that can help them and patients make decisions about the course of care, whether it be watchful waiting, hormone therapy, targeted thermal ablation, nerve sparing robotic surgery, or radiation therapy. While computed tomography (CT) may be used, it can overestimate prostate volume by 35%. However, CT scans may be fused with MRI data to provide more accurate prediction of the correct staging, more precise target volume identification, and improved target delineation. For example, MRI, in combination with biopsy, will enhance patient selection for focal ablation by helping to localize clinically significant tumor foci.

[0116] While ultrasound at low intensities is commonly used for diagnostic and imaging applications, it can be used at higher intensities for therapeutic applications due to its ability to interact with biological tissues both thermally and mechanically. Thus, a further embodiment of the current invention contemplates the use of HIFU for treatment of prostate cancer in conjunction with the methods and apparatus previously described. An example of a commercially available HIFU system is the Sonablate 500 by Focus Surgery, Inc. (Indianapolis, Ind.), which is a HIFU therapy device that operates under the guidance of 3D ultrasound imaging. Such treatment systems can be improved by being configured to operate under the guidance of a fused MRI-ultrasound image.

[0117] During ablative therapy, temperatures in the tissue being ablated may be closely monitored and the subsequent zone of necrosis (thermal lesion) visualized, and used to update a real-time tissue model. Temperature monitoring for the visualization of a treated region may reduce recurrence rates of local tumor after therapy. Techniques for the foregoing may include microwave radiometry, ultrasound, impedance tomography, MRI, monitoring shifts in diagnostic pulse-echo ultrasound, and the real-time and in vivo monitoring of the spatial distribution of heating and temperature elevation, by measuring the local propagation velocity of sound through an elemental volume of such tissue structure, or through analysis of changes in backscattered energy. Other traditional methods of monitoring tissue temperature include thermometry, such as ultrasound thermometry and the use of a thermocouple.

[0118] MRI may also be used to monitor treatment, ensure tissue destruction, and avoid overheating surrounding structures. Further, because ultrasonic imaging is not always adequate for accurately defining areas that have been treated, MRI may be used to evaluate the success of the procedure. For instance, MRI may be used for assessment of extent of necrosis shortly after therapy and for long-term surveillance for residual or recurrent tumor that may then undergo targeted biopsy. Thus, another aspect of the technology provides post-operative image fusion, that is, performing an imaging procedure after completion of an interventional procedure, and fusing or integrating pre-operative and/or intra-operative imaging data to help understand the post-operative anatomy. For example, after aggressive therapy, a standard anatomical model of soft tissue may no longer be accurate, but by integrating the therapeutic intervention data, a more accurate understanding, imaging, and image analysis may be provided.

[0119] According to another aspect of the invention, a diagnostic and treatment image generation system includes at least one database containing image data from two different modalities, such as MRI and ultrasound data, and an image-guided biopsy and/or therapy system. The diagnostic and treatment image generation system may also include a computer programmed to aid in the transmission of the image data and/or the fusion of the data using the image-guided biopsy system.

[0120] In accordance with yet another aspect of the present invention, a computer readable storage medium has a non-transitory computer program stored thereon, to control an automated system to carry out various methods disclosed herein.

[0121] The present invention has been described in terms of the preferred embodiment, and it is recognized that equivalents, alternatives, and modifications, aside from those expressly stated, are possible and within the scope of the invention.

* * * * *
