CN210136501U - System and apparatus for visualization - Google Patents

System and apparatus for visualization

Info

Publication number
CN210136501U
CN210136501U CN201720854062.8U
Authority
CN
China
Prior art keywords
image data
imaging device
workstation
image
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201720854062.8U
Other languages
Chinese (zh)
Inventor
M. Fenchel
G. Hermosillo Valadez
B. Kiefer
Y. Zhan
X. S. Zhou
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Healthcare GmbH
Original Assignee
Siemens AG
Siemens Medical Solutions USA Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/735,243 external-priority patent/US10460508B2/en
Application filed by Siemens AG, Siemens Medical Solutions USA Inc filed Critical Siemens AG
Application granted granted Critical
Publication of CN210136501U publication Critical patent/CN210136501U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

Systems and devices for visualization. A system and apparatus for visualization includes: an imaging device for collecting image data; a workstation in communication with the imaging device; a computer system connected to the imaging device and the workstation through a wired or wireless network, the computer system comprising: a processor coupled to one or more non-transitory computer-readable media; an input device coupled to the processor via an input-output interface; an output device coupled to the processor via an input-output interface; an image processing unit configured to: receiving magnetic resonance image data from the imaging device and rendering the reformatted image data at the output device or workstation for display to a user for detecting bone metastases.

Description

System and apparatus for visualization
Cross Reference to Related Applications
This application claims the benefit of U.S. application No. 14/735,243, filed on June 10, 2015, and U.S. provisional application No. 62/011,273, filed on June 12, 2014, which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to diagnostic imaging and more particularly to systems and devices for visualization.
Background
The field of medical imaging has experienced significant progress since the time X-rays were first used to determine anatomical abnormalities. Medical imaging hardware has progressed to modern machines such as Magnetic Resonance Imaging (MRI) scanners, Computed Tomography (CT) scanners, and Positron Emission Tomography (PET) scanners, and further to multi-modality imaging systems such as PET-CT and PET-MRI systems. Because of the large amount of image data generated by such modern medical scanners, there has been and remains a need to develop partially or fully automated image processing techniques that can assist in determining the presence of anatomical abnormalities in scanned medical images.
A digital medical image is constructed using raw image data obtained from a scanner. Digital medical images are typically two-dimensional ("2D") images composed of picture elements ("pixels") or three-dimensional ("3D") images composed of volume elements ("voxels"). Such 2D or 3D images are processed using medical image recognition techniques to determine the presence of anatomical abnormalities such as cysts, tumors, polyps, and the like. Given the amount of image data generated by any given image scan, it is preferable that an automated technique indicate to the physician anatomical features in a selected region of the image for further diagnosis of any disease or condition.
Automated image processing and recognition of structures within medical images is generally referred to as Computer Aided Detection (CAD). A CAD system may process the medical image and identify anatomical structures, including possible abnormalities, for further examination. Such possible abnormalities are often referred to as candidates and are generated by the CAD system from the medical images.
Bone metastases, or metastatic bone disease, are a type of abnormality of major clinical concern. Bone metastases are a type of cancer metastasis caused by the invasion of a primary tumor into bone. Although primary cancer of bone is rare, bone is a common target for cancer cell spread and retention; metastasis from a primary tumor is the most common malignancy involving bone. Its clinical relevance results from the fact that it frequently causes the patient pain and reduces quality of life due to its effects on the stability and mobility of the patient's skeleton. Diagnosing bone metastases is therefore highly relevant to therapy determination.
Medical imaging techniques provide important clues for diagnosing and evaluating the development of bone metastases. Bone scintigraphy (or bone scanning) is the current standard of care. Bone scintigraphy is a nuclear scan test used to find certain abnormalities in bone. The test is highly sensitive, fast, and easy to read. However, it is not very specific and therefore requires additional imaging scans.
SUMMARY OF THE UTILITY MODEL
A system for visualization, comprising: an imaging device for collecting image data; a workstation in communication with the imaging device; a computer system connected to the imaging device and the workstation through a wired or wireless network, the computer system comprising: a processor coupled to one or more non-transitory computer-readable media; an input device coupled to the processor via an input-output interface; an output device coupled to the processor via an input-output interface; an image processing unit configured to: receiving magnetic resonance image data from the imaging device and rendering the reformatted image data at the output device or workstation for display to a user for detecting bone metastases.
An apparatus for visualization, comprising: an imaging device for collecting image data; a workstation in communication with the imaging device; a computer system connected to the imaging device and the workstation through a wired or wireless network, the computer system comprising: a processor coupled to one or more non-transitory computer-readable media; an input device coupled to the processor via an input-output interface; an output device coupled to the processor via an input-output interface; an image processing unit configured to: receiving magnetic resonance image data from an imaging device and rendering the reformatted image data at the output device or workstation for display via a display device.
The present disclosure is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Drawings
A more complete appreciation of the present disclosure and many of the attendant aspects thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings. Further, it should be noted that the same numbers are used throughout the figures to reference like elements and features.
Figure 1 shows a slice of a T2-weighted coronal Magnetic Resonance (MR) series;
FIG. 2 is a block diagram illustrating an exemplary imaging system;
FIG. 3 illustrates an exemplary visualization method;
FIG. 4 shows an exemplary image of a patient's body;
FIG. 5 illustrates an exemplary planarization of the spine in the image data;
FIG. 6a shows an exemplary mapping of voxels; and
fig. 6b shows an exemplary coronal multi-planar reconstruction (MPR) image and a coronal Volume Rendering Technique (VRT) image of a flattened rib skeleton.
Detailed Description
In the following description, numerous specific details are set forth, such as examples of specific components, devices, methods, etc., in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice embodiments of the present invention. In other instances, well-known materials or methods have not been described in detail in order to avoid unnecessarily obscuring embodiments of the present invention. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
As used herein, the term "x-ray image" may mean a visible x-ray image (e.g., displayed on a video screen) or a digital representation of an x-ray image (e.g., a file corresponding to the pixel output of an x-ray detector). As used herein, the term "in-treatment x-ray image" may refer to an image captured at any point during the treatment delivery phase of a radiosurgery or radiotherapy procedure, which may include times when the radiation source is on or off. Sometimes, for ease of description, MR imaging data may be used herein as an exemplary imaging modality. However, it will be appreciated that data from any type of imaging modality may also be used in various embodiments of the present invention, including, but not limited to, X-ray radiographs, MRI, CT, PET (positron emission tomography), PET-CT, SPECT-CT, MR-PET, 3D ultrasound images, and the like.
Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that terms such as "segmenting," "generating," "registering," "determining," "aligning," "positioning," "processing," "computing," "selecting," "estimating," "detecting," "tracking," or the like, may refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Embodiments of the methods described herein may be implemented using computer software. If written in a programming language conforming to an accepted standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement embodiments of the invention.
As used herein, the term "image" refers to multi-dimensional data composed of discrete image elements (e.g., pixels for 2D images and voxels for 3D images). The image may be a medical image of the subject (subject) collected, for example, with computed tomography, magnetic resonance imaging, ultrasound, or any other medical imaging system known to those skilled in the art. The image may also be provided from a non-medical context (context), such as, for example, a remote sensing system, electron microscopy, and the like. The method of the present invention can be applied to images of any dimensionality, for example 2D photographs or 3D volumes. For 2 or 3 dimensional images, the domain of the image is typically a 2 or 3 dimensional rectangular array, where each pixel or voxel can be addressed with reference to a set of two or three mutually orthogonal axes. The terms "digital" and "digitized" as used herein will refer to images or volumes in digital or digitized format acquired via a digital acquisition system or via conversion from analog images, as appropriate.
Compared to other imaging modalities, whole-body MRI offers high sensitivity and specificity for bone metastases and a large field of view that covers most of the skeleton. However, it often takes a long time to read whole-body MR scan data and report all suspicious bone metastases. For example, Fig. 1 shows a slice 100 of a T2-weighted coronal MR series. Triangle 102 indicates a suspicious bone lesion. Although the T2-weighted coronal MR series offers high resolution and sensitivity for bone metastases, it cannot show all vertebrae in one slice due to the curved spinal geometry. To find all lesions on the vertebrae, the radiologist would need to examine more than 15 slices very carefully, which is time consuming and inefficient.
A framework for visualization is described herein. According to one aspect, the framework provides anatomically intelligent visualization to increase the efficiency of reading image data to detect abnormalities such as bone metastases. To accomplish this, the image data is processed to highlight anatomical structures of interest (e.g., bone structures). In some embodiments, the image data is processed to display only the structures of interest. Alternatively, the structure of interest may be displayed in a smaller number of slices to make the reading more efficient. Both types of visualization modes may be constructed by algorithms that are capable of automatically localizing structures of interest in the image data. The framework advantageously provides an efficient and easy way of reading diagnostic images. These exemplary advantages and features will be described in more detail in the following description.
Fig. 2 is a block diagram illustrating an exemplary imaging system 200. The imaging system 200 includes a computer system 201 for implementing a framework as described herein. The computer system 201 may also be connected to the imaging device 202 and the workstation 203 through a wired or wireless network. The imaging device 202 may be a radiation scanner, such as a Magnetic Resonance (MR) scanner, PET/MR, X-ray or CT scanner.
Computer system 201 may be a desktop personal computer, a portable laptop computer, another portable device, a microcomputer, a mainframe computer, a server, a storage system, a special-purpose digital appliance, or another device having a storage subsystem configured to store a number of digital data items. In one embodiment, the computer system 201 includes a processor or Central Processing Unit (CPU) 204 coupled via an input-output interface 221 to one or more non-transitory computer-readable media 205 (e.g., computer storage or memory), an output device 208 (e.g., a monitor, display, printer, etc.), and various input devices 210 (e.g., a mouse, keyboard, touchpad, voice recognition module, etc.). The computer system 201 may also include support circuits such as a cache, power supplies, clock circuits, and a communication bus. Even further, computer system 201 may be provided with a graphics controller chip, such as a Graphics Processing Unit (GPU) that supports high performance graphics functions.
It is to be understood that the present techniques may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. In one embodiment, the techniques described herein are implemented with an image processing unit 206. The image processing unit 206 may include computer readable program code tangibly embodied in a non-transitory computer readable medium 205. The non-transitory computer readable medium 205 may include Random Access Memory (RAM), Read Only Memory (ROM), magnetic floppy disk, flash memory, and other types of memory, or a combination thereof. The computer readable program code is executed by the CPU 204 to control and/or process image data from the imaging device 202.
Thus, the computer system 201 is a general-purpose computer system that becomes a specific purpose computer system when executing computer readable program code. The computer readable program code is not intended to be limited to any particular programming language or implementation thereof. It will be appreciated that a variety of programming languages and their coding may be used to implement the teachings of the disclosure contained herein. The computer system 201 may also include an operating system and microinstruction code. The various techniques described herein may be implemented as part of the microinstruction code or part of the application program or software product, or a combination thereof, which is executed via the operating system. Various other peripheral devices may be connected to computer system 201 such as an additional data storage device and a printing device.
The workstation 203 may include a computer and appropriate peripherals, such as a keyboard and a display, and may operate in conjunction with the overall system 200. For example, workstation 203 may be in communication with imaging device 202 such that image data collected by imaging device 202 may be rendered at workstation 203 and viewed on a display. The workstation 203 may include a user interface that allows a radiologist or any other skilled user (e.g., physician, technician, operator, scientist, etc.) to manipulate the image data. For example, a user may identify a structure or region of interest in the image data or annotate the structure or region of interest with a predefined descriptor via a user interface. Further, the workstation 203 may communicate directly with the computer system 201 to display the processed image data. For example, a radiologist may interactively manipulate a displayed representation of processed image data and view it from various perspectives and in various reading modes.
Fig. 3 illustrates an exemplary visualization method 300. It should be noted that the steps of method 300 may be performed in the order shown or in a different order. Moreover, different, additional, or fewer steps may be implemented. Even further, the method 300 may be implemented with the system 200 of FIG. 2, a different system, or a combination thereof.
At 302, the image processing unit 206 receives raw MR image data. In certain embodiments, the image data is a three-dimensional medical image data set. The MR image data may represent the entire body of the patient or a portion thereof. The image data may be received from, for example, an imaging device 202, a storage device, a database system, or an archiving system such as a Picture Archiving and Communication System (PACS).
At 304, the image processing unit 206 automatically localizes at least one anatomical structure of interest in the image data. The anatomical structure of interest may be, for example, a bony structure such as a vertebra, rib, femur, skull, or the like. It should be appreciated that the structure of interest may be any other type of anatomical structure.
In some embodiments, the structure of interest is localized by performing a segmentation technique that generates a segmentation mask describing the anatomical structure of interest. The segmentation technique automatically finds voxels belonging to a particular anatomical structure of interest. The segmentation techniques may include, but are not limited to, atlas-based segmentation, deformable model-based segmentation, classification-based tissue labeling, and the like.
Alternatively, the structure of interest may be localized by detecting landmarks associated with the structure of interest. A landmark (or semantic point) is any easily discernible or anatomically meaningful point on an image. For example, a landmark may represent a vertex at which the contour is convex or concave. Detection methods may include, but are not limited to, learning-based detection, blob detection, and the like.
At 306, the image processing unit 206 highlights the localized structure of interest by reformatting the image data. The structure of interest is highlighted to advantageously increase the efficiency of reading the image data. The structures of interest may be highlighted, for example, by removing the structures outside the segmentation mask, such that only the structures of interest remain in the image data. Alternatively, the image data may be reformatted so that structures of interest appear in fewer slices for compact reading.
More particularly, in some embodiments, the image processing unit 206 reformats the image data by applying a segmentation mask to the original image data to remove structures outside the mask. Thus, based on the segmentation mask, anatomical structures other than the structure under study may be removed or masked out to show only the structure of interest. Different MR contrast images and/or images from other modalities may be registered with the segmentation mask to apply the mask accordingly and allow for fusion and multi-modality reading. A rigid (e.g., linear transformation) or deformable registration may be performed to align the mask with the image. Such registration may be performed manually, semi-automatically, or automatically. Alternatively, such registration may be achieved inherently during the imaging process, which allows the segmentation mask to be applied directly to the different contrast images without running a registration algorithm.
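The mask-application step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the array shapes, values, and variable names are invented for the example:

```python
import numpy as np

# Hypothetical 3D MR volume and a binary segmentation mask of the bone
# structures (True = bone voxel, False = everything else). Shapes and
# values are illustrative only.
rng = np.random.default_rng(0)
volume = rng.integers(0, 255, size=(4, 8, 8)).astype(np.float32)
mask = np.zeros(volume.shape, dtype=bool)
mask[:, 2:6, 2:6] = True  # pretend the bone occupies this sub-block

# Remove (zero out) every voxel outside the segmentation mask so that
# only the structure of interest remains in the reformatted volume.
masked = np.where(mask, volume, 0.0)

assert np.array_equal(masked[mask], volume[mask])  # bone voxels untouched
assert float(masked[~mask].sum()) == 0.0           # background removed
```

The same masking would be applied to each registered contrast image once it is aligned with the mask.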
Fig. 4 shows exemplary images of a patient's body. More particularly, a coronal MR image 402 and a sagittal MR image 404 are extracted from the raw image dataset. The original MR images 402 and 404 are reformatted to show only the bone structure of interest 410. The reformatted 3D volume containing only bone is rendered as a rotating maximum intensity projection (MIP) image 406. The suspected bone metastasis 412 is much more distinct than in the original MR MIP image 402. This finding is also consistent with the finding in the standardized uptake value (SUV) PET MIP image 408.
In other implementations, the image processing unit 206 reformats the image data by mapping the detected landmarks of the structure of interest onto a shape and extrapolating the displacements of the mapped landmarks throughout the image data to warp the original image data. The shape may be a simple two-dimensional or three-dimensional geometric shape, such as a line or a plane. Each landmark on the structure of interest may be mapped to a corresponding point along the shape, resulting in a deformed structure (e.g., a flattened spine).
One way of extrapolating the displacements of the mapped landmarks is diffeomorphic extrapolation, which advantageously warps the image data while minimizing distortion of the surrounding tissue. An exemplary diffeomorphic extrapolation method is described in Twining, Carole J., Stephen Marsland, and Christopher J. Taylor, "Measuring Geodesic Distances on the Space of Bounded Diffeomorphisms," BMVC, Vol. 2, 2002, which is incorporated herein by reference. In the reformatted image data, the bone structure of interest appears in a smaller number of slices than in the original image data. This allows the structure of interest to be presented in a more compact visualization for rapid analysis by the user. A one-to-one correspondence may be maintained to allow the user to return to the original image for confirmation if desired.
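The landmark-driven warping idea can be illustrated with a toy sketch. This is not the bounded-diffeomorphism method of Twining et al.; it merely shows, under invented data, how per-landmark displacements toward a straight line can be smoothly extrapolated to the rest of the image:

```python
import numpy as np

# Hypothetical vertebra-centre landmarks on a curved 2D "spine":
# each landmark at row landmarks_y[i] sits at column landmarks_x[i].
landmarks_y = np.array([10.0, 30.0, 50.0, 70.0])  # rows of the centres
landmarks_x = np.array([40.0, 44.0, 47.0, 43.0])  # curved column positions

# Map every landmark onto a straight vertical line (the target shape).
target_x = landmarks_x.mean()
disp_at_lm = target_x - landmarks_x               # per-landmark shift

def extrapolate_shift(y, sigma=15.0):
    """Gaussian-weighted blend of landmark displacements at row y,
    so surrounding tissue is shifted smoothly rather than abruptly."""
    w = np.exp(-((y - landmarks_y) ** 2) / (2 * sigma**2))
    return float((w * disp_at_lm).sum() / w.sum())

# Extrapolated column shift for each image row; warping would then
# resample each row by its shift to straighten the spine.
shifts = [extrapolate_shift(y) for y in range(0, 80, 10)]
```

A real implementation would use a diffeomorphic (invertible, smooth) transformation so that no tissue folds over itself; the Gaussian blend here is only a stand-in for that extrapolation step.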
Fig. 5 illustrates the flattening of the spine in the image data. A sagittal slice 502 of an MR image of the spine before spine flattening is shown. Here, the vertebral centers 512 are detected and mapped to corresponding points 514 along a straight line. Diffeomorphic extrapolation is then performed to warp the MR image data. Sagittal and coronal slices (504 and 506) of the warped image data are shown. All of the vertebrae of the flattened spine 516 are now visible in the same coronal slice 506. Thus, a radiologist can look for bone metastases by viewing far fewer coronal slices.
In other embodiments, the image processing unit 206 reformats the image data by estimating the shape of the structure of interest based on the detected landmarks and mapping voxels on the shape to corresponding points on a visualization plane. The shape may be a three-dimensional geometric shape such as an elliptical cylinder, a triangular cylinder, a circular cylinder, a square cylinder, or the like. The image data may be reformatted by resampling voxels on the shape from the original image data and displaying them on the visualization plane.
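The resampling from an estimated shape onto a visualization plane can be sketched for a single elliptical cylinder. The volume, cylinder parameters, and nearest-neighbour sampling below are all illustrative assumptions, not values from the patent:

```python
import numpy as np

# volume[z, y, x]: a hypothetical transverse MR stack containing a rib
# cage. The elliptical cylinder has centre (cx, cy) and semi-axes (a, b),
# which in practice would be estimated from detected anatomical landmarks.
rng = np.random.default_rng(1)
volume = rng.random((16, 64, 64))
cx, cy, a, b = 32.0, 32.0, 20.0, 12.0
n_angles = 128

# Sample the ellipse at regular angles (nearest-neighbour for brevity;
# a real system would interpolate).
thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
xs = np.clip(np.round(cx + a * np.cos(thetas)).astype(int), 0, 63)
ys = np.clip(np.round(cy + b * np.sin(thetas)).astype(int), 0, 63)

# Each voxel on the ellipse maps to one column of the visualization
# plane; the slice index z becomes the row, giving an "unrolled" view
# of the cylinder surface.
plane = volume[:, ys, xs]   # shape: (n_slices, n_angles)
```

Repeating this for a family of nested cylinders (602 a-c in Fig. 6a) produces one flattened plane per cylinder, which is the basis of the slice-count reduction discussed below the figure descriptions.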
Fig. 6a shows an exemplary mapping of voxels. More particularly, a transverse slice 601 of raw image data of a rib cage is shown. The shape of the rib cage is approximated by elliptical cylinders 602 a-c. The voxels on each elliptical cylinder 602 a-c may be mapped to a respective visualization plane 604 a-c, resulting in a roughly flattened (or undistorted) coronal rib skeleton visualization, such as those shown in Fig. 6b.
Fig. 6b shows a coronal multi-planar reconstruction (MPR) image 610 and a coronal Volume Rendering Technique (VRT) image 612 of a flattened rib skeleton. In the flattened images (610 and 612), each rib can be examined in only a plurality of coronal images or slices. Such visualization advantageously provides a more efficient way to navigate the ribs.
In conventional techniques, ribs are tracked and examined across the entire field of view. More particularly, the total number of coronal slices that are typically required to be read is proportional to the total number of horizontal lines of voxels (X) in the transverse image 601. The present exemplary framework advantageously improves reading efficiency by reducing the number of slices that need to be examined as compared to such techniques. The reduced number of visualization slices (a) required may be determined by the following exemplary equation:
A = R1 − R0 ----- (1)
where R1 is the radius of the largest ellipse and R0 is the radius of the smallest ellipse.
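A small worked example makes the slice-count comparison concrete. The formula placeholder did not survive extraction, so the computation below assumes equation (1) reads A = R1 − R0 (one flattened plane per one-voxel-thick elliptical shell); the numbers themselves are invented:

```python
# Conventional reading: one coronal slice per horizontal line of voxels.
X = 200    # horizontal voxel lines in the transverse image (illustrative)

# Flattened reading: one visualization plane per elliptical shell,
# assuming equation (1) is A = R1 - R0.
R1 = 60    # radius of the largest ellipse, in voxels (illustrative)
R0 = 30    # radius of the smallest ellipse, in voxels (illustrative)
A = R1 - R0

assert A < X   # far fewer flattened slices than conventional coronal slices
```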
Rib flattening algorithms have been designed for CT images. Such algorithms achieve flattening by tracking rib centerlines. Flattening the rib views helps to improve the efficiency of CT image reading. Although this concept is also applicable to MR images, it is technically difficult to track ribs in MR images due to the low MR signal of cortical bone and the large slice thickness. In the present framework, the rib cage is instead approximated with elliptical cylinders or other suitable shapes. The center, orientation, and size of these cylinders can be estimated from a few anatomical landmarks visible in the MR imaging modality. The present framework is therefore advantageously applicable to more imaging modalities than rib flattening techniques based on rib tracking.
Returning to FIG. 3, at 308, the image processing unit 206 renders the reformatted image data for display to the user. Different rendering methods may be applied to the reformatted image data with the highlighted structure of interest, including multi-planar reformatting (MPR), maximum intensity projection (MIP), and Volume Rendering Techniques (VRT). The rendered image data may be displayed, for example, at the output device 208 or the workstation 203. The user can easily read the displayed image data to detect, for example, bone metastases or other abnormalities. In some implementations, the raw image data received at 302 is displayed along with the rendered image data. A point-to-point correspondence between the two sets of image data may be presented to allow the user to verify detection results by referring back to the original image data.
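Two of the rendering modes named above reduce to very short array operations; the sketch below shows an MPR slice extraction and a maximum intensity projection on an invented volume (VRT, which needs opacity transfer functions and compositing, is omitted):

```python
import numpy as np

# Hypothetical reformatted volume: (slices, rows, cols), values only
# for illustration.
rng = np.random.default_rng(2)
volume = rng.random((8, 32, 32))

# Multi-planar reformat (MPR): display a single plane of the volume.
mpr_slice = volume[4]

# Maximum intensity projection (MIP): collapse the volume along one
# axis by keeping the brightest voxel along each ray, which makes
# high-signal structures (e.g., lesions) stand out.
mip = volume.max(axis=0)
```

Because the MIP keeps the per-ray maximum, every pixel of `mip` is at least as bright as the corresponding pixel of any single slice, which is why suspicious high-signal lesions remain visible regardless of which slice they lie in.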
According to one embodiment, the framework localizes at least one anatomical structure of interest in the image data. The structure of interest is then highlighted by reformatting the image data. The resulting reformatted image data is then rendered for display to the user.
According to another embodiment, the framework automatically localizes at least one bone structure of interest appearing only in a first number of slices in the image data. The image data may be reformatted to generate reformatted image data in which the structure of interest appears only in a second number of slices that is less than the first number of slices. The resulting reformatted image data is then rendered for display to the user for detecting bone metastases.
While the present invention has been described in detail with reference to exemplary embodiments, those skilled in the art will recognize that various modifications and substitutions can be made thereto without departing from the spirit and scope of the present invention as set forth in the appended claims. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims.

Claims (2)

1. A system for visualization, comprising:
an imaging device for collecting image data;
a workstation in communication with the imaging device;
a computer system connected to the imaging device and the workstation through a wired or wireless network, the computer system comprising:
a processor coupled to one or more non-transitory computer-readable media;
an input device coupled to the processor via an input-output interface;
an output device coupled to the processor via an input-output interface;
an image processing unit configured to:
receiving magnetic resonance image data from the imaging device and rendering the reformatted image data at the output device or workstation for display to a user for detecting bone metastases.
2. An apparatus for visualization, comprising:
an imaging device for collecting image data;
a workstation in communication with the imaging device;
a computer system connected to the imaging device and the workstation through a wired or wireless network, the computer system comprising:
a processor coupled to one or more non-transitory computer-readable media;
an input device coupled to the processor via an input-output interface;
an output device coupled to the processor via an input-output interface;
an image processing unit configured to: receiving magnetic resonance image data from an imaging device and rendering the reformatted image data at the output device or workstation for display via a display device.
CN201720854062.8U 2014-06-12 2015-06-12 System and apparatus for visualization Active CN210136501U (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201462011273P 2014-06-12 2014-06-12
US62/011273 2014-06-12
US14/735,243 US10460508B2 (en) 2014-06-12 2015-06-10 Visualization with anatomical intelligence
US14/735243 2015-06-10
CN201520444856.8 2015-06-12

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201520444856.8 Division 2014-06-12 2015-06-12

Publications (1)

Publication Number Publication Date
CN210136501U true CN210136501U (en) 2020-03-10

Family

ID=69709350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201720854062.8U Active CN210136501U (en) 2014-06-12 2015-06-12 System and apparatus for visualization

Country Status (1)

Country Link
CN (1) CN210136501U (en)

Similar Documents

Publication Publication Date Title
US10460508B2 (en) Visualization with anatomical intelligence
US8625869B2 (en) Visualization of medical image data with localized enhancement
US7653263B2 (en) Method and system for volumetric comparative image analysis and diagnosis
US9741131B2 (en) Anatomy aware articulated registration for image segmentation
JP5632913B2 (en) Multi-modality chest imaging system, method and program
US9129362B2 (en) Semantic navigation and lesion mapping from digital breast tomosynthesis
US8983156B2 (en) System and method for improving workflow efficiences in reading tomosynthesis medical image data
US9082231B2 (en) Symmetry-based visualization for enhancing anomaly detection
US9064332B2 (en) Fused-image visualization for surgery evaluation
EP3447733B1 (en) Selective image reconstruction
US20070003118A1 (en) Method and system for projective comparative image analysis and diagnosis
CN108876794B (en) Isolation of aneurysm from parent vessel in volumetric image data
US20090129650A1 (en) System for presenting projection image information
US20070014448A1 (en) Method and system for lateral comparative image analysis and diagnosis
US20110200227A1 (en) Analysis of data from multiple time-points
US20150043799A1 (en) Localization of Anatomical Structures Using Learning-Based Regression and Efficient Searching or Deformation Strategy
US9691157B2 (en) Visualization of anatomical labels
US9020215B2 (en) Systems and methods for detecting and visualizing correspondence corridors on two-dimensional and volumetric medical images
CN114943714A (en) Medical image processing system, medical image processing apparatus, electronic device, and storage medium
US9286688B2 (en) Automatic segmentation of articulated structures
US10896501B2 (en) Rib developed image generation apparatus using a core line, method, and program
US8712119B2 (en) Systems and methods for computer-aided fold detection
CN210136501U (en) System and apparatus for visualization
Tsagaan et al. Image Processing in Medicine
Cunningham A Novel Mammogram Registration Algorithm for Improving Breast Cancer Detection

Legal Events

Date Code Title Description
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200414

Address after: Erlangen

Patentee after: Siemens Healthcare GmbH

Address before: Munich, Germany

Co-patentee before: Siemens Medical Solutions USA, Inc.

Patentee before: SIEMENS AG

TR01 Transfer of patent right