EP3355273B1 - Coarse orientation detection in image data - Google Patents
- Publication number
- EP3355273B1 (application EP18153887.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- anatomical structure
- interest
- images
- training
- coarse orientation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T 7/70 — Image analysis: determining position or orientation of objects or cameras
- G06N 3/02, G06N 3/08 — Neural networks; learning methods
- G06T 7/0002, G06T 7/0012 — Inspection of images; biomedical image inspection
- G06T 7/10, G06T 7/11 — Segmentation; region-based segmentation
- G06T 2207/10081 — Computed x-ray tomography [CT]
- G06T 2207/10088 — Magnetic resonance imaging [MRI]
- G06T 2207/10104 — Positron emission tomography [PET]
- G06T 2207/10108 — Single photon emission computed tomography [SPECT]
- G06T 2207/10132, G06T 2207/10136 — Ultrasound image; 3D ultrasound image
- G06T 2207/20081 — Training; learning
- G06T 2207/20084 — Artificial neural networks [ANN]
Description
- The present disclosure generally relates to digital medical image data processing, and more particularly to coarse orientation detection in image data.
- The field of medical imaging has seen significant advances since X-rays were first used to determine anatomical abnormalities. Medical imaging hardware has progressed from modern machines, such as magnetic resonance (MR) imaging scanners, computed tomography (CT) scanners and positron emission tomography (PET) scanners, to multimodality imaging systems such as PET-CT and PET-MRI systems. Because of the large amount of image data generated by such modern medical scanners, there has been and remains a need for image processing techniques that can automate some or all of the processes used to determine the presence of anatomical abnormalities in scanned medical images.
- Digital medical images are constructed using raw image data obtained from a scanner, for example, a computerized axial tomography (CAT) scanner or a magnetic resonance imaging (MRI) scanner. Digital medical images are typically either a two-dimensional ("2D") image made of pixel elements, a three-dimensional ("3D") image made of volume elements ("voxels") or a four-dimensional ("4D") image made of dynamic elements ("doxels"). Such 2D, 3D or 4D images are processed using medical image recognition techniques to determine the presence of anatomical abnormalities or pathologies, such as cysts, tumors or polyps. Given the amount of image data generated by any given image scan, it is preferable that an automatic technique point out anatomical features in selected regions of an image to a doctor for further diagnosis of any disease or condition.
- Automatic image processing and recognition of structures within a medical image are generally referred to as Computer-Aided Detection (CAD). A CAD system can process medical images and localize and segment anatomical structures, including possible abnormalities (or candidates), for further review. Recognizing anatomical structures within digitized medical images presents multiple challenges. A first concern relates to the accuracy of recognition of anatomical structures within an image. A second concern is the speed of recognition. Because medical images are an aid for a doctor to diagnose a disease or condition, the speed with which an image can be processed and structures within that image recognized can be of the utmost importance in reaching an early diagnosis.
- Due to several logistical or patient comfort constraints, MR scans of anatomical structures (e.g., the elbow) may be acquired with the anatomical structure in an arbitrary position and orientation relative to the magnetic bore. FIG. 1 shows typical positions and orientations of the elbow 104a-c during MR scanning. Scans acquired with a standardized orientation of the anatomical structure facilitate visualization and reading, as well as comparison with past scans and imaging studies across patient populations. To achieve a standardized MR elbow scan reading, application specialists first acquire a scout scan of the elbow. This scout scan is manually examined, and a high-quality scan is acquired after the bore's or the elbow's position and orientation are adjusted to satisfy the desired imaging specifications. However, such image acquisition procedures are typically tedious and time-consuming.
- US patent application 2015/324999 discloses a method for automatic liver segmentation in which marginal space learning (MSL)-based 3D object detection is used as a learning structure to estimate the position, orientation and scale of the target anatomical structure. The non-patent literature "Recognition of Chest Radiograph Orientation for Picture Archiving and Communications Systems Display Using Neural Networks" (Boone J. M. et al., Journal of Digital Imaging, Springer-Verlag, vol. 5, no. 3, 1 August 1992, pages 190-193) discloses a neural network classification scheme as a learning structure to determine the correct orientation of a chest image. The non-patent literature "A Steering Engine: Learning 3-D Anatomy Orientation Using Regression Forests" (Reda, Fitsum A. et al., ECCV 2016, pages 612-619) discloses a method of determining the orientation of anatomical structures based on a pre-trained regression technique.
- Described herein is a framework for coarse orientation detection in image data. In accordance with one aspect, the framework trains a learning structure to recognize a coarse orientation of an anatomical structure of interest based on training images. The framework may then pass one or more current images through the trained learning structure to generate a coarse orientation of the anatomical structure of interest. The framework then outputs the generated coarse orientation of the anatomical structure of interest.
- In a first aspect, a system for coarse orientation detection is provided. The system comprises: a non-transitory memory device for storing computer-readable program code; and a processor device in communication with the memory device, the processor being operative with the computer-readable program code to perform steps including receiving training images of an anatomical structure of interest, training a convolutional neural network to recognize a coarse orientation of the anatomical structure of interest based on the training images, receiving one or more current images of the anatomical structure of interest, passing the one or more current images through the trained convolutional neural network to generate the coarse orientation of the anatomical structure of interest, and controlling an imaging device for image acquisition based on the generated coarse orientation.
- A system is preferred wherein the training images of the anatomical structure of interest comprise two-channel two-dimensional (2D) images generated from one or more three-dimensional (3D) image volumes. The 2D images can comprise pairs of coronal and sagittal slices of the anatomical structure of interest.
- Further, a system is preferred wherein the processor is operative with the computer-readable program code to train the convolutional neural network to recognize the coarse orientation of the anatomical structure of interest by training the convolutional neural network to recognize a principal hemisphere of an axis of the structure of interest.
- In a second aspect, a method for coarse orientation detection is provided. The method comprises: receiving training images of an anatomical structure of interest; training a learning structure to recognize a coarse orientation of the anatomical structure of interest based on the training images; receiving one or more current images of the anatomical structure of interest; passing the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure of interest; and outputting the generated coarse orientation of the anatomical structure of interest.
- Further, a method is preferred wherein receiving the training images of the anatomical structure of interest comprises receiving two-channel two-dimensional (2D) images generated from one or more three-dimensional (3D) image volumes. Receiving the 2D images can comprise receiving pairs of coronal and sagittal slices of the anatomical structure of interest.
- A method is preferred wherein training the learning structure to recognize the coarse orientation of the anatomical structure of interest comprises training the learning structure to recognize a principal hemisphere of an axis of the structure of interest. Training the learning structure to recognize the principal hemisphere can comprise training the learning structure to identify an UP or DOWN orientation. Further, a method is preferred wherein training the learning structure comprises training a convolutional neural network (CNN) classifier. Training the CNN classifier can comprise feeding the training images through hidden layers including convolutional layers and max-pooling layers. Training the CNN classifier can further comprise feeding one of the max-pooling layers to a fully connected layer, and feeding the fully connected layer to a soft-max classification layer that outputs a coarse orientation vote. A method is preferred wherein training the CNN classifier further comprises feeding one of the max-pooling layers to a dropout layer for regularization.
- According to a preferred method, receiving the one or more current images of the anatomical structure of interest comprises receiving one or more two-channel two-dimensional (2D) images generated from a current three-dimensional (3D) image volume. Receiving the one or more 2D images can comprise receiving one or more pairs of coronal and sagittal slices of the anatomical structure of interest.
- Further, a method is preferred wherein passing the one or more current images through the trained learning structure comprises assigning the current images coarse orientation labels based on output results of the trained learning structure, and determining the coarse orientation using a simple majority voting scheme based on the coarse orientation labels.
- Further, a method is preferred wherein outputting the generated coarse orientation comprises automatically controlling an imaging device for image acquisition based on the generated coarse orientation, and/or inputting the generated coarse orientation to another image processing algorithm, and/or using the generated coarse orientation to initialize images prior to performing registration or segmentation.
- In a third aspect, one or more non-transitory computer-readable media embodying a program of instructions executable by a machine to perform operations for coarse orientation detection are provided. The operations comprise: receiving training images of an anatomical structure of interest; training a learning structure to recognize a coarse orientation of the anatomical structure of interest based on the training images; receiving one or more current images of the anatomical structure of interest; passing the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure of interest; and outputting the generated coarse orientation of the anatomical structure of interest.
- A more complete appreciation of the present disclosure and many of the attendant aspects thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings:
- FIG. 1 shows typical positions and orientations of the elbow during MR scanning;
- FIG. 2 is a block diagram illustrating an exemplary system;
- FIG. 3 shows an exemplary method of coarse orientation detection by a computer system;
- FIG. 4 shows exemplary 2D images for MR elbow scans;
- FIG. 5 shows an exemplary architecture of the convolutional neural network (CNN) learning structure;
- FIG. 6a shows the average confusion matrix over 100 simulations; and
- FIG. 6b shows graphs of the running time (in milliseconds) and average accuracy of the present framework with respect to the number of multi-planar images used for majority voting.
- In the following description, numerous specific details are set forth, such as examples of specific components, devices and methods, in order to provide a thorough understanding of implementations of the present framework. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice implementations of the present framework. In other instances, well-known materials or methods have not been described in detail in order to avoid unnecessarily obscuring implementations of the present framework. While the present framework is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that there is no intent to limit the invention to the particular forms disclosed. Furthermore, for ease of understanding, certain method steps are delineated as separate steps; however, these separately delineated steps should not be construed as necessarily order-dependent in their performance.
- The term "x-ray image" as used herein may mean a visible x-ray image (e.g., displayed on a video screen) or a digital representation of an x-ray image (e.g., a file corresponding to the pixel output of an x-ray detector). The term "in-treatment x-ray image" as used herein may refer to images captured at any point in time during a treatment delivery phase of an interventional or therapeutic procedure, which may include times when the radiation source is either on or off. From time to time, for convenience of description, CT imaging data (e.g., cone-beam CT imaging data) may be used herein as an exemplary imaging modality. It will be appreciated, however, that data from any type of imaging modality, including but not limited to high-resolution computed tomography (HRCT), x-ray radiographs, MRI, PET (positron emission tomography), PET-CT, SPECT, SPECT-CT, MR-PET, 3D ultrasound images or the like, may also be used in various implementations.
- Unless stated otherwise as apparent from the following discussion, it will be appreciated that terms such as "segmenting," "generating," "registering," "determining," "aligning," "positioning," "processing," "computing," "selecting," "estimating," "detecting," "tracking" or the like may refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage, transmission or display devices. Embodiments of the methods described herein may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, implementations of the present framework are not described with reference to any particular programming language; it will be appreciated that a variety of programming languages may be used.
- As used herein, the term "image" refers to multi-dimensional data composed of discrete image elements (e.g., pixels for 2D images and voxels for 3D images). The image may be, for example, a medical image of a subject collected by computed tomography, magnetic resonance imaging, ultrasound or any other medical imaging system known to one of skill in the art. The image may also be provided from non-medical contexts, such as remote sensing systems, electron microscopy, etc. Although an image can be thought of as a function from R^3 to R, or a mapping to R^3, the present methods are not limited to such images and can be applied to images of any dimension, e.g., a 2D picture or a 3D volume. For a 2- or 3-dimensional image, the domain of the image is typically a 2- or 3-dimensional rectangular array, wherein each pixel or voxel can be addressed with reference to a set of 2 or 3 mutually orthogonal axes. The terms "digital" and "digitized" as used herein refer to images or volumes, as appropriate, in a digital or digitized format acquired via a digital acquisition system or via conversion from an analog image.
- The terms "pixels" for picture elements, conventionally used with respect to 2D imaging and image display, and "voxels" for volume image elements, often used with respect to 3D imaging, can be used interchangeably. It should be noted that the 3D volume image is itself synthesized from image data obtained as pixels on a 2D sensor array and displayed as a 2D image from some angle of view. Thus, 2D image processing and image analysis techniques can be applied to the 3D volume image data. In the description that follows, techniques described as operating upon pixels may alternately be described as operating upon the 3D voxel data that is stored and represented in the form of 2D pixel data for display. In the same way, techniques that operate upon voxel data can also be described as operating upon pixels. In the following description, the variable x is used to indicate a subject image element at a particular spatial location or, alternately considered, a subject pixel. The terms "subject pixel" or "subject voxel" are used to indicate a particular image element as it is operated upon using techniques described herein.
- Automatically detecting anatomy orientation is very useful in medical image analysis. The ability to automatically detect the coarse orientation of anatomical structures is useful for minimizing the resources required by fine (or accurate) orientation detection algorithms, for initializing non-rigid deformable registration algorithms, and for aligning models to target structures in model-based segmentation algorithms. Automating scan acquisition procedures is also important to (a) minimize the overall time taken by the image acquisition procedure and (b) achieve standardized and reproducible acquisition protocols. Hence, automatic identification of coarse orientation provides several advantages as a pre-processing step for more accurate and robust image processing, and can also lead to more efficient clinical workflows.
- A framework for automatic coarse orientation detection is described herein. In accordance with one aspect, the framework uses a deep convolutional neural network (DCNN)-based method to learn features that are well suited for fast and robust identification of coarse orientation. Coarse orientation may be identified by the hemisphere in which the principal axis of a structure lies: the framework may predict whether the principal orientation of a structure is in the northern hemisphere (i.e., UP) or the southern hemisphere (i.e., DOWN). The framework is based on the assumption that the entire anatomical structure is located within the scan's field-of-view (FOV). One plausible way to derive such a hemisphere label is sketched below.
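- The patent does not spell out how the ground-truth hemisphere of a structure's principal axis is computed, but the idea can be made concrete. The following Python sketch derives an UP/DOWN label from a binary mask of the structure via PCA; the (z, y, x) axis convention and the distal-landmark trick for fixing the eigenvector's sign ambiguity are illustrative assumptions, not part of the claims.

```python
import numpy as np

def principal_axis(mask: np.ndarray) -> np.ndarray:
    """Principal axis of a binary structure mask via PCA of voxel coordinates."""
    coords = np.argwhere(mask > 0).astype(float)   # (N, 3) voxel coords, (z, y, x)
    coords -= coords.mean(axis=0)                  # center the point cloud
    _, _, vt = np.linalg.svd(coords, full_matrices=False)
    return vt[0]                                   # unit vector; sign is arbitrary

def coarse_label(mask: np.ndarray, distal_point: np.ndarray) -> str:
    """UP if the oriented principal axis lies in the northern hemisphere.

    The PCA eigenvector's sign is ambiguous, so the axis is oriented toward
    a known distal landmark (a hypothetical annotation) before reading off
    the hemisphere from its z-component.
    """
    axis = principal_axis(mask)
    centroid = np.argwhere(mask > 0).mean(axis=0)
    if np.dot(axis, distal_point - centroid) < 0:
        axis = -axis                               # point the axis toward the landmark
    return "UP" if axis[0] >= 0 else "DOWN"        # index 0 = z (assumed convention)
```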
- Identifying the coarse orientation of an anatomical structure (e.g., elbow) in a given scan image (e.g., MR) is challenging owing to variations induced by different bore sizes, system fields-of-view and/or different anatomy angulations due to injuries and blockades. Moreover, 3D CNNs are generally challenging to train due to the scarcity of data and the high-dimensional input space. To efficiently solve the problem in three-dimensional (3D) space and accommodate such variations, a multi-planar two-dimensional (2D) deep learning framework may be used instead of working directly in the 3D space. In the training stage of the framework, a large number of coronal-sagittal slice pairs of the anatomical structure of interest may be constructed as two-channel images (as sketched below) to train a DCNN to classify whether a scan is UP or DOWN. During testing, a small number of coronal-sagittal two-channel images are passed through the trained network. Finally, the coarse orientation of the anatomical structure may be determined using majority voting.
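- As one illustration of this multi-planar construction, the sketch below stacks one coronal and one sagittal slice of a 3D volume into a single two-channel 2D image. The (z, y, x) indexing convention, the 128x128 output size and the per-slice normalization are assumptions for illustration; the patent only requires corresponding coronal-sagittal pairs.

```python
import numpy as np
from scipy.ndimage import zoom

def two_channel_image(volume: np.ndarray, y: int, x: int,
                      out_shape=(128, 128)) -> np.ndarray:
    """Stack a coronal and a sagittal slice into a 2-channel 2D image.

    Assumes the volume is indexed (z, y, x), so a coronal slice fixes y
    and a sagittal slice fixes x; both slices are then (z, .) planes.
    """
    coronal = volume[:, y, :].astype(np.float32)   # (z, x) plane
    sagittal = volume[:, :, x].astype(np.float32)  # (z, y) plane
    channels = []
    for sl in (coronal, sagittal):
        factors = (out_shape[0] / sl.shape[0], out_shape[1] / sl.shape[1])
        sl = zoom(sl, factors, order=1)            # bilinear resample to ~out_shape
        sl = (sl - sl.mean()) / (sl.std() + 1e-6)  # per-slice intensity normalization
        channels.append(sl)
    return np.stack(channels, axis=0)              # (2, H, W)
```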
- The framework essentially learns the various possible articulations and forms that an anatomical structure of interest may take in the multi-channel 2D images. Hence, a few randomly selected slices around a region of interest are sufficient to obtain desirable results. The present framework was tested using many elbow MR scan images. Experimental results suggest that only five two-channel images were sufficient to achieve a high success rate of 97.39%. The framework was also extremely fast, taking approximately 50 milliseconds per 3D MR scan, and is advantageously insensitive to the precise location of the anatomical structure in the FOV. These and other features and advantages are described in more detail herein.
- FIG. 2 is a block diagram illustrating an exemplary system 200. The system 200 includes a computer system 201 for implementing the framework as described herein. In some implementations, computer system 201 operates as a standalone device. In other implementations, computer system 201 may be connected (e.g., using a network) to other machines, such as imaging device 202 and workstation 203. In a networked deployment, computer system 201 may operate in the capacity of a server (e.g., thin-client server), a cloud computing platform, a client user machine in a server-client user network environment, or a peer machine in a peer-to-peer (or distributed) network environment.
- In some implementations, computer system 201 comprises a processor or central processing unit (CPU) 204 coupled to one or more non-transitory computer-readable media 205 (e.g., computer storage or memory), a display device 210 (e.g., monitor) and various input devices 211 (e.g., mouse or keyboard) via an input-output interface 221. Computer system 201 may further include support circuits such as a cache, a power supply, clock circuits and a communications bus. Various other peripheral devices, such as additional data storage devices and printing devices, may also be connected to the computer system 201.
- The present technology may be implemented in various forms of hardware, software, firmware, special-purpose processors, or a combination thereof, either as part of the microinstruction code or as part of an application program or software product, or a combination thereof, which is executed via the operating system. In some implementations, the techniques described herein are implemented as computer-readable program code tangibly embodied in non-transitory computer-readable media 205. In particular, the present techniques may be implemented by learning module 206, processing module 207 and database 209.
- Non-transitory computer-readable media 205 may include random access memory (RAM), read-only memory (ROM), magnetic floppy disk, flash memory and other types of memories, or a combination thereof. The computer-readable program code is executed by CPU 204 to process medical data retrieved from, for example, database 209. As such, the computer system 201 is a general-purpose computer system that becomes a specific-purpose computer system when executing the computer-readable program code. The computer-readable program code is not intended to be limited to any particular programming language and implementation thereof; it will be appreciated that a variety of programming languages and codings thereof may be used to implement the teachings of the disclosure contained herein.
- The same or different computer-readable media 205 may be used for storing a database (or dataset) 209. Such data may also be stored in external storage or other memories. The external storage may be implemented using a database management system (DBMS) managed by the CPU 204 and residing on a memory such as a hard disk, RAM or removable media. The external storage may be implemented on one or more additional computer systems. For example, the external storage may include a data warehouse system residing on a separate computer system, a cloud platform or system, a picture archiving and communication system (PACS), or any other hospital, medical institution, medical office, testing facility, pharmacy or other medical patient record storage system.
- Imaging device 202 acquires medical images 220 associated with at least one patient. Such medical images 220 may be processed and stored in database 209. Imaging device 202 may be a radiology scanner (e.g., MR scanner) and/or appropriate peripherals (e.g., keyboard and display device) for acquiring, collecting and/or storing such medical images 220.
- The workstation 203 may include a computer and appropriate peripherals, such as a keyboard and display device, and can be operated in conjunction with the entire system 200. For example, the workstation 203 may communicate directly or indirectly with the imaging device 202 so that the medical image data acquired by the imaging device 202 can be rendered at the workstation 203 and viewed on a display device. The workstation 203 may also provide other types of medical data 222 of a given patient, and may include a graphical user interface to receive user input via an input device (e.g., keyboard, mouse, touch screen, voice or video recognition interface, etc.) to input medical data 222.
- FIG. 3 shows an exemplary method 300 of coarse orientation detection by a computer system. It should be understood that the steps of the method 300 may be performed in the order shown or a different order. Additional, different, or fewer steps may also be provided. Further, the method 300 may be implemented with the system 201 of FIG. 2 , a different system, or a combination thereof.
- First, learning module 206 receives training images. The training images may be acquired using techniques such as high-resolution computed tomography (HRCT), magnetic resonance (MR) imaging, computed tomography (CT), helical CT, X-ray, angiography, positron emission tomography (PET), fluoroscopy, ultrasound, single photon emission computed tomography (SPECT), or a combination thereof. The training images may be retrieved from, for example, database 209 and/or acquired by imaging device 202. The training images may be randomly generated from one or more 3D image volumes acquired in one or more imaging scans of an anatomical structure of interest, and may include two-channel 2D images, each including a pair of corresponding coronal and sagittal slices of the anatomical structure of interest. The anatomical structure of interest is a body portion that has been identified for investigation, for example, at least a section of a subject's elbow, spine, vertebra, and so forth.
- FIG. 4 shows exemplary 2D images for MR elbow scans. More particularly, the 2D images 402 in the top two rows are in the UP orientation, while the 2D images 404 in the bottom two rows are in the DOWN orientation. Images 406a-b in the first and third rows are along the sagittal plane, while images 408a-b in the second and fourth rows are along the coronal plane. It can be observed that there are intra-class variations in the UP and DOWN orientations due to different articulations and flipping, which makes identifying the elbow orientation challenging even for a trained human. Fortunately, coronal and sagittal slices together provide sufficient information for this task.
- The present framework assumes that the entire anatomical structure is located within the scan's (e.g., MR) field-of-view (FOV). Several two-channel training images may be randomly generated within the FOV, as sketched below. All training images generated from the same scan are assigned the same label as the global orientation of the scan.
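- A minimal sketch of this sampling step, reusing the two_channel_image() helper from the earlier sketch: the slice-position margin, sample count and seeding are illustrative assumptions; the key point is that every sample generated from a scan inherits that scan's global UP/DOWN label.

```python
import numpy as np

def training_samples(volume: np.ndarray, scan_label: str, n: int = 50,
                     margin: float = 0.25, seed: int = 0):
    """Randomly generate n labeled two-channel images from one scan.

    Slice positions are drawn from the central band of the FOV (within
    the central 1 - 2*margin fraction) so the structure stays in view.
    Requires two_channel_image() from the earlier sketch.
    """
    rng = np.random.default_rng(seed)
    _, ny, nx = volume.shape
    samples = []
    for _ in range(n):
        y = int(rng.integers(int(margin * ny), int((1 - margin) * ny)))
        x = int(rng.integers(int(margin * nx), int((1 - margin) * nx)))
        samples.append((two_channel_image(volume, y, x), scan_label))
    return samples
```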
- Next, learning module 206 trains a learning structure to recognize the coarse orientation of the anatomical structure of interest based on the training images. All the training images may be shuffled and passed through the learning structure to learn its parameters and identify the anatomy's coarse orientation. The coarse orientation may be identified by the principal hemisphere of the structure axis (i.e., UP or DOWN orientation), thereby reducing the recognition task to a binary classification task. In some implementations, the learning structure is an unsupervised learning structure that automatically discovers representations needed for feature detection instead of relying on labeled input. The learning structure may be a deep learning architecture that includes stacked layers of learning nodes, and may be represented by, for example, a convolutional neural network (CNN) classifier. A CNN is a class of deep, feed-forward artificial neural networks that uses a variation of multilayer perceptrons designed to require minimal preprocessing. Other types of classifiers, such as random forests, may also be used.
- FIG. 5 shows an exemplary architecture 501 of the CNN learning structure. The CNN learning structure may include an input layer 502, an output layer 508, as well as multiple hidden layers 504 and 506. The hidden layers 504 and 506 are either convolutional, pooling or fully connected. Convolutional layers apply a convolution operation to the input and pass the result to the next layer, thereby emulating the response of an individual neuron to visual stimuli. Pooling layers combine the outputs of neuron clusters at one layer into a single neuron in the next layer, while fully connected layers connect every neuron in one layer to every neuron in another layer. The input image (e.g., training image) 502 is fed to the hidden layers 504. The number of convolutional layers, as well as the number of channels in each convolutional layer, may be reduced to achieve real-time performance. In some implementations, four convolutional layers, each with 20 channels and followed by a max-pooling layer, are sufficient to attain the desired accuracy. The number of channels and the filter size in all the convolutional layers may be, for example, 20 and 3x3 respectively. The final max-pooling layer may be fed into one fully connected layer 506, which may finally be fed to a soft-max classification layer 508 with two units. The soft-max classification layer 508 may then output a coarse orientation vote (i.e., UP or DOWN).
- To reduce overfitting, a dropout mechanism may be used. Dropout is a regularization technique that reduces overfitting in neural networks by preventing complex co-adaptations on training data. See, for example, Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R., "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research 15(1), 1929-1958 (2014). In some implementations, a dropout layer with a dropout probability of, for example, 0.5 is inserted before the fully connected layer 506. A sketch of this architecture follows.
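- The following PyTorch sketch assembles this architecture: four 3x3 convolutional layers with 20 channels, each followed by 2x2 max-pooling, a dropout layer (p = 0.5) before the fully connected layer, and a two-unit classification layer. The 128x128 input size, the hidden width of the fully connected layer and the ReLU activations are assumptions; the patent specifies only the layer counts, channel count, filter size and dropout probability.

```python
import torch
import torch.nn as nn

class CoarseOrientationNet(nn.Module):
    """Sketch of the FIG. 5 architecture for two-channel slice pairs."""

    def __init__(self, in_size: int = 128):
        super().__init__()
        layers, channels = [], 2                 # two-channel coronal-sagittal input
        for _ in range(4):                       # four conv + max-pool stages
            layers += [nn.Conv2d(channels, 20, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            channels = 20
        self.features = nn.Sequential(*layers)
        side = in_size // 2 ** 4                 # spatial size after four poolings
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),                   # dropout before the FC layer 506
            nn.Linear(20 * side * side, 128),    # fully connected layer (width assumed)
            nn.ReLU(inplace=True),
            nn.Linear(128, 2))                   # two-unit layer 508: DOWN vs. UP

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns logits; the soft-max is applied inside the training loss.
        return self.classifier(self.features(x))
```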
- The network weights may then be updated using a stochastic gradient descent algorithm, as in the following training sketch.
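- A minimal training loop consistent with that description; the cross-entropy loss, learning rate, momentum and epoch count are illustrative assumptions (the patent names only stochastic gradient descent).

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 20, lr: float = 0.01):
    """Minimal SGD training loop for the UP/DOWN classifier.

    `loader` is assumed to yield (images, labels) batches with images of
    shape (B, 2, H, W) and integer labels (0 = DOWN, 1 = UP).
    """
    criterion = nn.CrossEntropyLoss()            # soft-max + negative log-likelihood
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:            # shuffled two-channel training images
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                      # backpropagate gradients
            optimizer.step()                     # stochastic gradient descent update
    return model
```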
- Processing module 207 then receives one or more current images. The one or more current images may be acquired by the same imaging modality as the training images. Additionally, the one or more current images may be randomly generated from a current 3D image volume acquired in a single imaging scan (e.g., MR scan). Each current image may be a two-channel 2D image including a pair of corresponding coronal and sagittal slices of the anatomical structure of interest.
- Processing module 207 passes the one or more current images through the trained learning structure to generate a coarse orientation of the anatomical structure. Each current image may be assigned a coarse orientation label (e.g., UP or DOWN) based on the output results of the trained learning structure. The final orientation of the anatomical structure of interest may then be decided based on the coarse orientation labels using, for example, a simple majority voting scheme, as sketched below.
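- A simple inference sketch of this voting step, assuming the model above and a batch of N two-channel images (e.g., the five used in the experiments; an odd N avoids ties):

```python
import torch

@torch.no_grad()
def predict_orientation(model, images) -> str:
    """Classify each two-channel image and majority-vote the scan label.

    `images` is assumed to be a tensor of shape (N, 2, H, W).
    """
    model.eval()
    votes = model(images).argmax(dim=1)          # per-image label: 0 = DOWN, 1 = UP
    up_votes = int(votes.sum())
    return "UP" if up_votes > len(votes) - up_votes else "DOWN"
```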
- Finally, processing module 207 outputs the coarse orientation of the anatomical structure. The coarse orientation may be displayed at, for example, workstation 203. The coarse orientation may also be used to automatically control an imaging device (e.g., MR scanner) for image acquisition, or it may be input into another image processing algorithm to provide more accurate and robust processing results.
- For example, the coarse orientation detection method 300 may serve as a pre-processing step for a medical image processing algorithm. The medical imaging algorithm may be, for instance, a fine orientation detection algorithm (e.g., the Steering-Engine), end-to-end orientation detection or marginal space learning (MSL). The Steering-Engine may take more iterations to converge to a final solution if the initial and the actual coarse orientations lie in different hemispheres, while the MSL search space for quantized orientation may be effectively reduced to half given the hemisphere of the structure of interest. In other examples, the coarse orientation is used to initialize images prior to performing an image processing algorithm, such as registration (e.g., non-rigid registration) or segmentation (e.g., active shape models, statistical shape models).
- Image registration techniques aim to establish correspondence between images and are at the core of many applications in medical imaging. Many registration methods depend on a good initialization, which may be performed by manually aligning the images or by calculating a landmark-based point set transformation between two images. Manual alignment is not a desirable solution, as it is very time-consuming, while landmark detection algorithms may struggle to precisely locate points of interest owing to variations like rotation and articulation. Segmentation algorithms like active shape models and statistical shape models also rely on initialization and perform best when the initialization is not too far from the final solution. Advantageously, coarse orientation detection may facilitate precise landmark localization as well as provide better initialization strategies for registration, fine orientation detection and segmentation algorithms; one such use is sketched below.
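- As a concrete example of such initialization, the detected label can be used to bring every scan into a canonical UP orientation before registration or segmentation. The flip along the superior-inferior (z) axis below is an assumed convention, not something the patent prescribes.

```python
import numpy as np

def canonicalize(volume: np.ndarray, orientation: str) -> np.ndarray:
    """Bring a scan to a canonical UP orientation before registration
    or segmentation, using the coarse label from the trained network."""
    if orientation == "DOWN":
        return volume[::-1].copy()   # flip along z so the principal axis points UP
    return volume
```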
- FIG. 6a shows the average confusion matrix 601 over 100 simulations. FIG. 6b shows graphs 610 and 612 of the running time (in milliseconds) and the average accuracy, respectively, of the present framework with respect to the number of multi-planar images used for majority voting. All timings were obtained on a single core of a standard central processing unit (CPU). Notice that the accuracy saturates at 98% after using 9 images for majority voting. Further notice that the time taken for 9 images is almost double the time taken when 5 images are used, with only a minor increase in accuracy; the gain is not statistically significant compared to the overall increase in computation time. Hence, the number of images may be kept at 5 during testing to obtain desirable results. A sketch of such a timing measurement follows.
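- The reported timing experiment can be reproduced in spirit with a micro-benchmark such as the following; make_batch() is a hypothetical helper that returns an (n, 2, H, W) tensor of slice pairs, and the repeat count is arbitrary.

```python
import time
import torch

def time_per_scan(model, make_batch, n_images: int, repeats: int = 100) -> float:
    """Average wall-clock milliseconds to classify one scan when
    n_images slice pairs are used for majority voting."""
    model.eval()
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(repeats):
            model(make_batch(n_images)).argmax(dim=1)
        return (time.perf_counter() - start) / repeats * 1000.0
```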
Description
- The present disclosure generally relates to digital medical image data processing, and more particularly to coarse orientation detection in image data.
- The field of medical imaging has seen significant advances since the time X-Rays were first used to determine anatomical abnormalities. Medical imaging hardware has progressed from modern machines, such as Magnetic Resonance (MR) imaging scanners, Computed Tomographic (CT) scanners and Positron Emission Tomographic (PET) scanners, to multimodality imaging systems such as PET-CT and PET-MRI systems. Because of large amount of image data generated by such modern medical scanners, there has been and remains a need for developing image processing techniques that can automate some or all of the processes to determine the presence of anatomical abnormalities in scanned medical images.
- Digital medical images are constructed using raw image data obtained from a scanner, for example, a computerized axial tomography (CAT) scanner, magnetic resonance imaging (MRI), etc. Digital medical images are typically either a two-dimensional ("2D") image made of pixel elements, a three-dimensional ("3D") image made of volume elements ("voxels") or a four-dimensional ("4D") image made of dynamic elements ("doxels"). Such 2D, 3D or 4D images are processed using medical image recognition techniques to determine the presence of anatomical abnormalities or pathologies, such as cysts, tumors, polyps, etc. Given the amount of image data generated by any given image scan, it is preferable that an automatic technique should point out anatomical features in the selected regions of an image to a doctor for further diagnosis of any disease or condition.
- Automatic image processing and recognition of structures within a medical image are generally referred to as Computer-Aided Detection (CAD). A CAD system can process medical images, localize and segment anatomical structures, including possible abnormalities (or candidates), for further review. Recognizing anatomical structures within digitized medical images presents multiple challenges. For example, a first concern relates to the accuracy of recognition of anatomical structures within an image. A second area of concern is the speed of recognition. Because medical images are an aid for a doctor to diagnose a disease or condition, the speed with which an image can be processed and structures within that image recognized can be of the utmost importance to the doctor in order to reach an early diagnosis.
- Due to several logistical or patient comfort constraints, MR scans of anatomical structures (e.g., elbow) may be acquired with the anatomical structure in an arbitrary position and orientation relative to the magnetic bores.
FIG. 1 shows typical positions and orientations of theelbow 104a-c during MR scanning. Scans acquired with standardized orientation of the anatomical structure facilitates visualization or reading, comparison with past scans or imaging studies across patient populations. To achieve a standardized MR elbow scan reading, application specialists first acquire a scout scan of the elbow. This scout scan is then manually examined and a high-quality scan is acquired after the magnetic bore's or the elbow's position and orientation are adjusted to satisfy the desired imaging specifications. However, such image acquisition procedures are typically tedious and time-consuming. -
US patent 2015/324999 discloses a method for automatic liver segmentation, whereby a marginal space learning (MSL)-based 3D object detection as a learning structure estimates the position, orientation, and scale of the target anatomical structure. Non patent literature "Recognition of Chest Radiograph Orientation for Picture Archiving and Communications Systems Display using Neural Networks" (by BOONE J. M. ET. AL. in JOURNAL OF DIGITAL IMA, SPRINGER-VERLAG, vol. 5, no. 3, 1 August 1992, pages 190-193) discloses a neural network classification scheme as a learning structure to determine the correct orientation of a chest image. Non patent literature "A Steering Engine: Learning 3-D Anatomy Orientation Using Regression Forests" (by REDA FITSUM A ET. AL. in ECCV 2016, pp 612 - 619.) discloses a method of determining the orientation of anatomical structures based on a pre-trained regression technique. - Described herein is a framework for coarse orientation detection in image data. In accordance with one aspect, the framework trains a learning structure to recognize a coarse orientation of the anatomical structure of interest based on training images. The framework may then pass one or more current images through the trained learning structure to generate a coarse orientation of the anatomical structure of interest. The framework then outputs the generated coarse orientation of the anatomical structure of interest. In a first aspect, a system or coarse orientation detection is provided. The system comprising: a non-transitory memory device for storing computer readable program code; and a processor device in communication with the memory device, the processor being operative with the computer readable program code to perform steps including receiving training images of an anatomical structure of interest, training a convolutional neural network to recognize a coarse orientation of the anatomical structure of interest based on the training images, receiving one or more current images of the anatomical structure of interest, passing the one or more current images through the trained convolutional neural network to generate the coarse orientation of the anatomical structure of interest, and controlling an imaging device for image acquisition based on the generated coarse orientation.
A system is preferred, wherein the training images of the anatomical structure of interest comprises two-channel two-dimensional (2D) images generated from one or more three-dimensional (3D) image volumes. The 2D images can comprise receiving pairs of coronal and sagittal slices of the anatomical structure of interest.
Further, a system is preferred, wherein the processor is operative with the computer readable program code to train the convolutional neural network to recognize the coarse orientation of the anatomical structure of interest based on the training images by training the convolutional neural network to recognize a principal hemisphere of an axis of the structure of the structure of interest.
In a second aspect, a method or coarse orientation detection is provided. The method comprising: receiving training images of an anatomical structure of interest; training a learning structure to recognize a coarse orientation of the anatomical structure of interest based on the training images; receiving one or more current images of the anatomical structure of interest; passing the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure of interest; and outputting the generated coarse orientation of the anatomical structure of interest.
Further, a method is preferred, wherein receiving the training images of the anatomical structure of interest comprises receiving two-channel two-dimensional (2D) images generated from one or more three-dimensional (3D) image volumes.
Receiving the 2D images can comprise receiving pairs of coronal and sagittal slices of the anatomical structure of interest.
A method is preferred, wherein training the learning structure to recognize the coarse orientation of the anatomical structure of interest based on the training images comprises training the learning structure to recognize a principal hemisphere of an axis of the structure of the structure of interest. Training the learning structure to recognize the principal hemisphere can comprise training the learning structure to identify an UP or DOWN orientation.
Further, a method is preferred, wherein training the learning structure to recognize the coarse orientation of the anatomical structure of interest based on the training images comprises training a convolutional neural network (CNN) classifier. Training the convolutional neural network (CNN) classifier can comprise feeding the training images through hidden layers including convolutional layers and max-pooling layers. Training the convolutional neural network (CNN) classifier further can comprise feeding one of the max-pooling layers to a fully connected layer, and feeding the fully connected layer to a soft-max classification layer that outputs a coarse orientation vote.
A method is preferred, wherein training the convolutional neural network (CNN) classifier further comprises feeding one of the max-pooling layers to a dropout layer for regularization.
According to a preferred method, receiving the one or more current images of the anatomical structure of interest comprises receiving one or more two-channel two-dimensional (2D) images generated from a current three-dimensional (3D) image volume. Receiving the one or more 2D images of the anatomical structure of interest can comprise receiving one or more pairs of coronal and sagittal slices of the anatomical structure of interest.
Further, a method is preferred, wherein passing the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure of interest comprises assigning the current images with coarse orientation labels based on output results of the trained learning structure and determining the coarse orientation using a simple majority voting scheme based on the coarse orientation labels.
Further, a method is preferred, wherein outputting the generated coarse orientation of the anatomical structure of interest comprises automatically controlling an imaging device for image acquisition based on the generated coarse orientation, and/or wherein outputting the generated coarse orientation of the anatomical structure of interest comprises inputting the generated coarse orientation to another image processing algorithm, and/or wherein outputting the generated coarse orientation of the anatomical structure of interest comprises using the generated coarse orientation to initialize images prior to performing registration or segmentation.
In a third aspect, one or more non-transitory computer readable media embodying a program of instructions executable by a machine to perform operations for coarse orientation detection, are provided. The operations comprising: receiving training images of an anatomical structure of interest; training a learning structure to recognize a coarse orientation of the anatomical structure of interest based on the training images; receiving one or more current images of the anatomical structure of interest; passing the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure of interest; and outputting the generated coarse orientation of the anatomical structure of interest. - A more complete appreciation of the present disclosure and many of the attendant aspects thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings.
-
FIG. 1 shows typical positions and orientations of the elbow during MR scanning; -
FIG. 2 is a block diagram illustrating an exemplary system; -
FIG. 3 shows an exemplary method of coarse orientation detection by a computer system; -
FIG. 4 shows exemplary 2D images for MR elbow scans; -
FIG. 5 shows an exemplary architecture of the convolutional neural network (CNN) learning structure; -
FIG. 6a shows the average confusion matrix over 100 simulations; and -
FIG. 6b shows graphs of the running time (in milliseconds) and average accuracy respectively of the present framework with respect to the number of multi-planar images used for majority voting. - In the following description, numerous specific details are set forth such as examples of specific components, devices, methods, etc., in order to provide a thorough understanding of implementations of the present framework. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice implementations of the present framework. In other instances, well-known materials or methods have not been described in detail in order to avoid unnecessarily obscuring implementations of the present framework. While the present framework is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the invention to the particular forms disclosed. Furthermore, for ease of understanding, certain method steps are delineated as separate steps; however, these separately delineated steps should not be construed as necessarily order dependent in their performance.
- The term "x-ray image" as used herein may mean a visible x-ray image (e.g., displayed on a video screen) or a digital representation of an x-ray image (e.g., a file corresponding to the pixel output of an x-ray detector). The term "in-treatment x-ray image" as used herein may refer to images captured at any point in time during a treatment delivery phase of an interventional or therapeutic procedure, which may include times when the radiation source is either on or off. From time to time, for convenience of description, CT imaging data (e.g., cone-beam CT imaging data) may be used herein as an exemplary imaging modality. It will be appreciated, however, that data from any type of imaging modality including but not limited to high-resolution computed tomography (HRCT), x-ray radiographs, MRI, PET (positron emission tomography), PET-CT, SPECT, SPECT-CT, MR-PET, 3D ultrasound images or the like may also be used in various implementations.
- Unless stated otherwise as apparent from the following discussion, it will be appreciated that terms such as "segmenting," "generating," "registering," "determining," "aligning," "positioning," "processing," "computing," "selecting," "estimating," "detecting," "tracking" or the like may refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Embodiments of the methods described herein may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, implementations of the present framework are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used.
- As used herein, the term "image" refers to multi-dimensional data composed of discrete image elements (e.g., pixels for 2D images and voxels for 3D images). The image may be, for example, a medical image of a subject collected by computer tomography, magnetic resonance imaging, ultrasound, or any other medical imaging system known to one of skill in the art. The image may also be provided from non-medical contexts, such as, for example, remote sensing systems, electron microscopy, etc. Although an image can be thought of as a function from R3 to R, or a mapping to R3, the present methods are not limited to such images, and can be applied to images of any dimension, e.g., a 2D picture or a 3D volume. For a 2- or 3-dimensional image, the domain of the image is typically a 2- or 3-dimensional rectangular array, wherein each pixel or voxel can be addressed with reference to a set of 2 or 3 mutually orthogonal axes. The terms "digital" and "digitized" as used herein will refer to images or volumes, as appropriate, in a digital or digitized format acquired via a digital acquisition system or via conversion from an analog image.
- The terms "pixels" for picture elements, conventionally used with respect to 2D imaging and image display, and "voxels" for volume image elements, often used with respect to 3D imaging, can be used interchangeably. It should be noted that the 3D volume image is itself synthesized from image data obtained as pixels on a 2D sensor array and displayed as a 2D image from some angle of view. Thus, 2D image processing and image analysis techniques can be applied to the 3D volume image data. In the description that follows, techniques described as operating upon pixels may alternately be described as operating upon the 3D voxel data that is stored and represented in the form of 2D pixel data for display. In the same way, techniques that operate upon voxel data can also be described as operating upon pixels. In the following description, the variable x is used to indicate a subject image element at a particular spatial location or, alternately considered, a subject pixel. The terms "subject pixel" or "subject voxel" are used to indicate a particular image element as it is operated upon using techniques described herein.
- Automatically detecting anatomy orientation is very useful in medical image analysis. The ability to automatically detect coarse orientation of anatomical structures is useful for minimizing the resources required by fine (or accurate) orientation detection algorithms, to initialize non-rigid deformable registration algorithms or to align models to target structures in model-based segmentation algorithms. Automating scan acquisition procedures is also important to (a) minimize the overall time taken by the image acquisition procedure; and (b) achieve standardized and reproducible acquisition protocols. Hence, automatic identification of coarse orientation provides several advantages as a pre-processing step for more accurate and robust image processing and can also lead to more efficient clinical workflows.
- A framework for automatic coarse orientation detection is described herein. In accordance with one aspect, the framework uses a deep convolutional neural network (DCNN)-based method to learn features that are well suited for fast and robust identification of coarse orientation. Coarse orientation may be identified by the hemisphere in which the principal axis of a structure lies. The framework may predict whether the principal orientation of a structure is in the northern hemisphere (i.e., UP) or the southern hemisphere (i.e., DOWN). The framework is based on the assumption that the entire anatomical structure is located within the scan's field-of-view (FOV).
- Identifying the coarse orientation of an anatomical structure (e.g., elbow) in a given scan image (e.g., MR) is challenging owing to variations induced by different bore sizes, system fields-of-view and/or different anatomy angulations due to injuries and blockades. 3D CNNs are generally challenging to train due to the scarcity of data and the high-dimensional input space. To efficiently solve the problem in three-dimensional (3D) space and accommodate such variations, a multi-planar two-dimensional (2D) deep learning framework may be used instead of working directly in 3D space. In the training stage of the framework, a large number of coronal-sagittal slice pairs of the anatomical structure of interest may be constructed as two-channel images to train a DCNN to classify whether a scan is UP or DOWN. During testing, a small number of coronal-sagittal two-channel images is passed through the trained network. Finally, the coarse orientation of the anatomical structure may be determined by majority voting.
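To make the slice pairing concrete, the following is a minimal sketch of constructing such two-channel images from a 3D volume. It is illustrative only: the axis convention (axis 0 indexing sagittal slices, axis 1 indexing coronal slices) and the common crop size are assumptions, not specified by the framework.

```python
import numpy as np

def sample_two_channel_images(volume, n_samples=5, size=64, rng=None):
    """Randomly pair coronal and sagittal slices of a 3D volume into
    two-channel 2D images.

    Assumptions (illustrative only): axis 0 indexes sagittal slices,
    axis 1 indexes coronal slices, and both slices are cropped to a
    common size so they can be stacked as channels.
    """
    rng = rng or np.random.default_rng()
    nx, ny, _ = volume.shape
    images = []
    for _ in range(n_samples):
        sag = volume[rng.integers(nx), :, :]  # sagittal slice, shape (ny, nz)
        cor = volume[:, rng.integers(ny), :]  # coronal slice, shape (nx, nz)
        # Crop both slices to a common shape so they stack as two channels.
        h = min(sag.shape[0], cor.shape[0], size)
        w = min(sag.shape[1], cor.shape[1], size)
        images.append(np.stack([cor[:h, :w], sag[:h, :w]], axis=0))
    return images
```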
- The framework essentially learns the various possible articulations and forms that an anatomical structure of interest may take in the multi-channel 2D images. Hence, a few randomly selected slices around a region of interest are sufficient to obtain desirable results. The present framework was tested on 114 elbow MR scan images. Experimental results suggest that only five two-channel images were sufficient to achieve a high success rate of 97.39%. The framework was also extremely fast, taking approximately 50 milliseconds per 3D MR scan. The framework is advantageously insensitive to the precise location of the anatomical structure in the FOV. These and other features and advantages will be described in more detail herein.
- FIG. 2 is a block diagram illustrating an exemplary system 200. The system 200 includes a computer system 201 for implementing the framework as described herein. In some implementations, computer system 201 operates as a standalone device. In other implementations, computer system 201 may be connected (e.g., using a network) to other machines, such as imaging device 202 and workstation 203. In a networked deployment, computer system 201 may operate in the capacity of a server (e.g., thin-client server), a cloud computing platform, a client user machine in a server-client user network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- In some implementations, computer system 201 comprises a processor or central processing unit (CPU) 204 coupled to one or more non-transitory computer-readable media 205 (e.g., computer storage or memory), a display device 210 (e.g., monitor) and various input devices 211 (e.g., mouse or keyboard) via an input-output interface 221. Computer system 201 may further include support circuits such as a cache, a power supply, clock circuits and a communications bus. Various other peripheral devices, such as additional data storage devices and printing devices, may also be connected to the computer system 201.
- The present technology may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof, either as part of the microinstruction code or as part of an application program or software product, or a combination thereof, which is executed via the operating system. In some implementations, the techniques described herein are implemented as computer-readable program code tangibly embodied in non-transitory computer-readable media 205. In particular, the present techniques may be implemented by learning module 206, processing module 207 and database 209.
- Non-transitory computer-readable media 205 may include random access memory (RAM), read-only memory (ROM), magnetic floppy disk, flash memory, and other types of memories, or a combination thereof. The computer-readable program code is executed by CPU 204 to process medical data retrieved from, for example, database 209. As such, the computer system 201 is a general-purpose computer system that becomes a specific-purpose computer system when executing the computer-readable program code. The computer-readable program code is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein.
- The same or different computer-readable media 205 may be used for storing a database (or dataset) 209. Such data may also be stored in external storage or other memories. The external storage may be implemented using a database management system (DBMS) managed by the CPU 204 and residing on a memory, such as a hard disk, RAM, or removable media. The external storage may be implemented on one or more additional computer systems. For example, the external storage may include a data warehouse system residing on a separate computer system, a cloud platform or system, a picture archiving and communication system (PACS), or any other hospital, medical institution, medical office, testing facility, pharmacy or other medical patient record storage system.
- Imaging device 202 acquires medical images 220 associated with at least one patient. Such medical images 220 may be processed and stored in database 209. Imaging device 202 may be a radiology scanner (e.g., MR scanner) and/or appropriate peripherals (e.g., keyboard and display device) for acquiring, collecting and/or storing such medical images 220.
- The workstation 203 may include a computer and appropriate peripherals, such as a keyboard and display device, and can be operated in conjunction with the entire system 200. For example, the workstation 203 may communicate directly or indirectly with the imaging device 202 so that the medical image data acquired by the imaging device 202 can be rendered at the workstation 203 and viewed on a display device. The workstation 203 may also provide other types of medical data 222 of a given patient. The workstation 203 may include a graphical user interface to receive user input via an input device (e.g., keyboard, mouse, touch screen, voice or video recognition interface, etc.) to input medical data 222.
- It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures can be implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present framework is programmed. Given the teachings provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present framework.
- FIG. 3 shows an exemplary method 300 of coarse orientation detection by a computer system. It should be understood that the steps of the method 300 may be performed in the order shown or in a different order. Additional, different, or fewer steps may also be provided. Further, the method 300 may be implemented with the system 201 of FIG. 2, a different system, or a combination thereof.
- At 302, learning module 206 receives training images. The training images may be acquired by using techniques such as high-resolution computed tomography (HRCT), magnetic resonance (MR) imaging, computed tomography (CT), helical CT, X-ray, angiography, positron emission tomography (PET), fluoroscopy, ultrasound, single photon emission computed tomography (SPECT), or a combination thereof. The training images may be retrieved from, for example, database 209 and/or acquired by imaging device 202. The training images may be randomly generated from one or more 3D image volumes acquired in one or more imaging scans of an anatomical structure of interest. The training images may include two-channel 2D images. Each two-channel image may include a pair of corresponding coronal and sagittal slices of the anatomical structure of interest. The anatomical structure of interest is a body portion that has been identified for investigation. The anatomical structure of interest may be, for example, at least a section of a subject's elbow, spine, vertebra, and so forth.
- FIG. 4 shows exemplary 2D images from MR elbow scans. More particularly, the 2D images 402 in the top two rows are in the UP orientation, while the 2D images 404 in the bottom two rows are in the DOWN orientation. Images 406a-b in the first and third rows are along the sagittal plane, while images 408a-b in the second and fourth rows are along the coronal plane. It can be observed that there are intra-class variations in the UP and DOWN orientations due to different articulations and flipping, which makes identifying the elbow orientation challenging even for a trained human. Fortunately, coronal and sagittal slices together provide sufficient information for this task. The present framework assumes that the entire anatomical structure is located within the scan's (e.g., MR) field-of-view (FOV). Several two-channel training images may be randomly generated within the FOV. All training images generated from the same scan are assigned the same label as the global orientation of the scan.
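A minimal sketch of this label propagation, reusing `sample_two_channel_images` from the earlier sketch (the 1 = UP, 0 = DOWN encoding is an arbitrary choice, not from the patent):

```python
def build_training_set(scans, scan_labels, per_scan=500):
    """Propagate each scan's global orientation label (1 = UP, 0 = DOWN)
    to every two-channel image sampled from that scan."""
    images, labels = [], []
    for volume, label in zip(scans, scan_labels):
        images.extend(sample_two_channel_images(volume, n_samples=per_scan))
        labels.extend([label] * per_scan)
    return images, labels
```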
- Returning to FIG. 3, at 304, learning module 206 trains a learning structure to recognize the coarse orientation of the anatomical structure of interest based on the training images. To train the learning structure, all the training images may be shuffled and passed through the learning structure to learn its parameters and identify the coarse orientation of the anatomy. The coarse orientation may be identified by the principal hemisphere of the structure axis (i.e., UP or DOWN orientation), thereby reducing the recognition task to a binary classification task.
- In some implementations, the learning structure automatically discovers the representations needed for the classification task during training, rather than relying on hand-engineered features. The learning structure may be a deep learning architecture that includes stacked layers of learning nodes. The learning structure may be represented by, for example, a convolutional neural network (CNN) classifier. A CNN is a class of deep, feed-forward artificial neural networks that uses a variation of multilayer perceptrons designed to require minimal preprocessing. Other types of classifiers, such as random forests, may also be used.
- FIG. 5 shows an exemplary architecture 501 of the CNN learning structure. The CNN learning structure may include an input layer 502, an output layer 508, as well as multiple hidden layers 504 and 506 between the input and output layers.
- As shown, the input image (e.g., training image) 502 is fed to the hidden layers 504. The number of convolutional layers, as well as the number of channels in each convolutional layer in the hidden layers 504, may be reduced to achieve real-time performance. In some implementations, four convolutional layers, each with 20 channels and followed by a max-pooling layer, are sufficient to attain the desired accuracy. The number of channels and the filter size in all the convolutional layers may be, for example, 20 and 3x3, respectively. The final max-pooling layer may be fed into one fully connected layer 506, which may finally be fed to a soft-max classification layer 508 with two units. The soft-max classification layer 508 may then output a coarse orientation vote (i.e., UP or DOWN).
- To regularize the structure, a dropout mechanism may be used. Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. See, for example, Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R., Dropout: a simple way to prevent neural networks from overfitting, Journal of Machine Learning Research 15(1), 1929-1958 (2014). In some implementations, a dropout layer with a dropout probability of, for example, 0.5 is inserted before the fully connected layer 506. The network weights may then be updated using a stochastic gradient descent algorithm.
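A minimal sketch of such a network in PyTorch, under stated assumptions: the 64x64 input size and the learning rate are illustrative, and the two-unit soft-max is folded into the cross-entropy loss, as is idiomatic in PyTorch; the patent does not prescribe this implementation.

```python
import torch
import torch.nn as nn

class CoarseOrientationNet(nn.Module):
    """Four 3x3 convolutional layers with 20 channels, each followed by
    2x2 max-pooling; dropout (p=0.5) before a single fully connected
    layer with two output units (UP vs. DOWN)."""

    def __init__(self):
        super().__init__()
        layers, in_ch = [], 2            # two channels: coronal + sagittal
        for _ in range(4):
            layers += [nn.Conv2d(in_ch, 20, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            in_ch = 20
        self.features = nn.Sequential(*layers)
        self.dropout = nn.Dropout(p=0.5)    # regularization before the FC layer
        self.fc = nn.Linear(20 * 4 * 4, 2)  # 64x64 input -> 4x4 feature maps

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.fc(self.dropout(x))     # logits for UP / DOWN

model = CoarseOrientationNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent
loss_fn = nn.CrossEntropyLoss()  # applies log-soft-max to the two logits internally
```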
- Returning to FIG. 3, at 306, processing module 207 receives one or more current images. The one or more current images may be acquired by the same imaging modality as the training images. Additionally, the one or more current images may be randomly generated from a current 3D image volume acquired in a single imaging scan (e.g., MR scan). Each current image may be a two-channel 2D image including a pair of corresponding coronal and sagittal slices of the anatomical structure of interest.
- At 308, processing module 207 passes the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure. Each current image may be assigned a coarse orientation label (e.g., UP or DOWN) based on the output results of the trained learning structure. The final orientation of the anatomical structure of interest may be decided from the coarse orientation labels using, for example, a simple majority voting scheme.
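A sketch of this voting step, reusing the earlier helpers (it assumes the sampled images already match the network's fixed input size, e.g. 64x64 in the sketch above):

```python
import torch

def predict_orientation(model, volume, n_images=5):
    """Classify a handful of random two-channel images and take a simple
    majority vote over their labels."""
    model.eval()
    votes = []
    with torch.no_grad():
        for img in sample_two_channel_images(volume, n_samples=n_images):
            x = torch.from_numpy(img).float().unsqueeze(0)  # (1, 2, H, W)
            votes.append(model(x).argmax(dim=1).item())     # 1 = UP, 0 = DOWN
    return "UP" if 2 * sum(votes) > len(votes) else "DOWN"
```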
- At 310, processing module 207 outputs the coarse orientation of the anatomical structure. The coarse orientation may be displayed at, for example, workstation 203. The coarse orientation may be used to automatically control an imaging device (e.g., MR scanner) for image acquisition. The coarse orientation may also be input into another image processing algorithm to provide accurate and robust processing results.
- In some implementations, the coarse orientation detection method 300 may serve as a pre-processing step for a medical image processing algorithm. The medical image processing algorithm may be, for instance, a fine orientation detection algorithm (e.g., Steering-Engine), end-to-end orientation detection or marginal space learning (MSL). The Steering-Engine may take more iterations to converge to a final solution if the initial and the actual coarse orientations lie in different hemispheres. Similarly, the MSL search space for quantized orientation may be effectively reduced to half given the hemisphere of the structure of interest.
- In other implementations, the coarse orientation is used to initialize images prior to performing an image processing algorithm, such as registration (e.g., non-rigid registration) or segmentation (e.g., active shape models, statistical shape models). Image registration techniques aim to establish correspondence between images, and are at the core of many applications in medical imaging. Many registration methods depend on a good initialization. The initialization may be performed by manually aligning the images, or by calculating a landmark-based point set transformation between two images. However, manual alignment is not a desirable solution, as it is very time consuming. Furthermore, landmark detection algorithms may fail to precisely locate points of interest owing to variations such as rotation and articulation. Segmentation algorithms such as active shape models and statistical shape models also rely on initialization and perform best when the initialization is not too far from the final solution. Hence, coarse orientation detection may facilitate precise landmark localization as well as provide better initialization strategies for registration, fine orientation detection and segmentation algorithms.
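As an illustration of the halved search space, the following sketch filters quantized orientation hypotheses by the detected hemisphere; representing hypotheses as (x, y, z) unit vectors with z >= 0 meaning UP is an assumed convention, not part of MSL itself.

```python
def restrict_orientation_hypotheses(hypotheses, hemisphere):
    """Keep only quantized orientation hypotheses whose principal axis lies
    in the detected hemisphere, halving an MSL-style search space."""
    keep_up = hemisphere == "UP"
    return [v for v in hypotheses if (v[2] >= 0) == keep_up]
```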
- To validate the present framework, an experiment was performed on a total of 114 MR elbow scans: 64 scans in the UP orientation and 50 in the DOWN orientation. A two-fold cross-validation was performed. Due to the limited size of the dataset, a separate validation set was not created; instead, the model was selected after training it for 5 epochs. During training, 500 randomly generated 2D multi-channel images were used from each MR scan, leading to a total of 28500 training patches per fold. During testing, majority voting was used on 5 randomly generated images to obtain the final coarse orientation result. Since the test images were generated at random, slightly different results may be obtained over different runs on the same MR scan. Hence, to report performance numbers, experiments were simulated by running the test 100 times using the same trained learning structure, and the average accuracy was reported. An average accuracy of 97.39% was obtained in the two-fold cross-validation using majority voting with 5 images.
- FIG. 6a shows the average confusion matrix 601 over 100 simulations. FIG. 6b shows graphs summarizing the corresponding results.
- While the present framework has been described in detail with reference to exemplary embodiments, those skilled in the art will appreciate that various modifications and substitutions can be made thereto without departing from the scope of the invention as set forth in the appended claims. For example, elements and/or features of different exemplary embodiments may be combined with each other and/or substituted for each other within the scope of the appended claims.
Claims (14)
- A system for coarse orientation detection, comprising:
a non-transitory memory device for storing computer readable program code; and
a processor device in communication with the memory device, the processor being operative with the computer readable program code to perform steps including
receiving training images of an anatomical structure of interest, wherein the entire anatomical structure is located within the field of view of the image;
training a convolutional neural network to recognize a coarse orientation of the anatomical structure of interest based on the training images;
receiving one or more current images of the anatomical structure of interest, wherein the entire anatomical structure is located within the field of view of the image;
passing the one or more current images through the trained convolutional neural network to generate the coarse orientation of the anatomical structure of interest; and
controlling an imaging device for image acquisition based on the generated coarse orientation,
characterized in that receiving the training images of the anatomical structure of interest comprises receiving two-channel two-dimensional (2D) images generated from one or more three-dimensional (3D) image volumes, and wherein receiving the two-channel 2D images comprises receiving pairs of coronal and sagittal slices of the anatomical structure of interest.
- The system according to claim 1, wherein the processor is operative with the computer readable program code to train the convolutional neural network to recognize the coarse orientation of the anatomical structure of interest based on the training images by training the convolutional neural network to recognize a principal hemisphere of an axis of the structure of interest.
- A method of coarse orientation detection, comprising:
receiving training images of an anatomical structure of interest, wherein the entire anatomical structure is located within the field of view of the image;
training a learning structure to recognize a coarse orientation of the anatomical structure of interest based on the training images;
receiving one or more current images of the anatomical structure of interest, wherein the entire anatomical structure is located within the field of view of the image;
passing the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure of interest; and
outputting the generated coarse orientation of the anatomical structure of interest,
characterized in that receiving the training images of the anatomical structure of interest comprises receiving two-channel two-dimensional (2D) images generated from one or more three-dimensional (3D) image volumes, and wherein receiving the two-channel 2D images comprises receiving pairs of coronal and sagittal slices of the anatomical structure of interest.
- The method according to claim 3, wherein training the learning structure to recognize the coarse orientation of the anatomical structure of interest based on the training images comprises training the learning structure to recognize a principal hemisphere of an axis of the structure of interest.
- The method according to claim 4, wherein training the learning structure to recognize the principal hemisphere comprises training the learning structure to identify an UP or DOWN orientation.
- The method according to claim 3, wherein training the learning structure to recognize the coarse orientation of the anatomical structure of interest based on the training images comprises training a convolutional neural network (CNN) classifier.
- The method according to claim 6, wherein training the convolutional neural network (CNN) classifier comprises feeding the training images through hidden layers including convolutional layers and max-pooling layers.
- The method according to claim 7, wherein training the convolutional neural network (CNN) classifier further comprises feeding one of the max-pooling layers to a fully connected layer, and feeding the fully connected layer to a soft-max classification layer that outputs a coarse orientation vote.
- The method according to claims 7 or 8, wherein training the convolutional neural network (CNN) classifier further comprises feeding one of the max-pooling layers to a dropout layer for regularization.
- The method according to any of the claims 3 to 9, wherein receiving the one or more current images of the anatomical structure of interest comprises receiving one or more two-channel two-dimensional (2D) images generated from a current three-dimensional (3D) image volume.
- The method according to claim 10, wherein receiving the one or more two-channel 2D images of the anatomical structure of interest comprises receiving one or more pairs of coronal and sagittal slices of the anatomical structure of interest.
- The method according to any of the claims 3 to 11, wherein passing the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure of interest comprises assigning the current images with coarse orientation labels based on output results of the trained learning structure and determining the coarse orientation using a simple majority voting scheme based on the coarse orientation labels.
- The method according to any of the claims 3 to 12, wherein outputting the generated coarse orientation of the anatomical structure of interest comprises automatically controlling an imaging device for image acquisition based on the generated coarse orientation, and/or
wherein outputting the generated coarse orientation of the anatomical structure of interest comprises inputting the generated coarse orientation to another image processing algorithm, and/or
wherein outputting the generated coarse orientation of the anatomical structure of interest comprises using the generated coarse orientation to initialize images prior to performing registration or segmentation.
- One or more non-transitory computer readable media embodying a program of instructions executable by a machine to perform operations for coarse orientation detection, the operations comprising:
receiving training images of an anatomical structure of interest, wherein the entire anatomical structure is located within the field of view of the image;
training a learning structure to recognize a coarse orientation of the anatomical structure of interest based on the training images;
receiving one or more current images of the anatomical structure of interest, wherein the entire anatomical structure is located within the field of view of the image;
passing the one or more current images through the trained learning structure to generate the coarse orientation of the anatomical structure of interest; and
outputting the generated coarse orientation of the anatomical structure of interest,
characterized in that receiving the training images of the anatomical structure of interest comprises receiving two-channel two-dimensional (2D) images generated from one or more three-dimensional (3D) image volumes, and wherein receiving the two-channel 2D images comprises receiving pairs of coronal and sagittal slices of the anatomical structure of interest.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US201762452462P | 2017-01-31 | 2017-01-31 | |
| US15/877,485 (US10580159B2) | 2017-01-31 | 2018-01-23 | Coarse orientation detection in image data |
Publications (2)
| Publication Number | Publication Date |
| --- | --- |
| EP3355273A1 | 2018-08-01 |
| EP3355273B1 | 2019-11-27 |
Family
ID=61074402
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| EP18153887.7A (EP3355273B1, Active) | Coarse orientation detection in image data | 2017-01-31 | 2018-01-29 |
Country Status (2)
| Country | Link |
| --- | --- |
| US | US10580159B2 |
| EP | EP3355273B1 |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US11847730B2 | 2020-01-24 | 2023-12-19 | Covidien Lp | Orientation detection in fluoroscopic images |
Families Citing this family (12)
| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| EP3553740A1 | 2018-04-13 | 2019-10-16 | Koninklijke Philips N.V. | Automatic slice selection in medical imaging |
| CN110728274A | 2018-06-29 | 2020-01-24 | General Electric Company | Medical device computer-assisted scanning method, medical device and readable storage medium |
| US10842445B2 | 2018-11-08 | 2020-11-24 | General Electric Company | System and method for unsupervised deep learning for deformable image registration |
| US11475565B2 | 2018-12-21 | 2022-10-18 | GE Precision Healthcare LLC | Systems and methods for whole-body spine labeling |
| WO2020176064A1 | 2018-12-31 | 2020-09-03 | Didi Research America, Llc | Method and system of annotation densification for semantic segmentation |
| CN109902680A | 2019-03-04 | 2019-06-18 | Sichuan Changhong Electric Co., Ltd. | Picture rotation angle detection and correction method based on convolutional neural networks |
| CN110874842B | 2019-10-10 | 2022-04-29 | Zhejiang University | Chest cavity multi-organ segmentation method based on cascade residual full convolution network |
| CN110874614B | 2019-11-13 | 2023-04-28 | Shanghai United Imaging Intelligence Co., Ltd. | Brain image classification method, computer device, and readable storage medium |
| US11682135B2 | 2019-11-29 | 2023-06-20 | GE Precision Healthcare LLC | Systems and methods for detecting and correcting orientation of a medical image |
| EP3862969A1 | 2020-02-07 | 2021-08-11 | Siemens Healthcare GmbH | Orientation detection in medical image data |
| EP3996102A1 | 2020-11-06 | 2022-05-11 | Paul Yannick Windisch | Method for detection of neurological abnormalities |
| CN112529901B | 2020-12-31 | 2023-11-07 | Jiangxi Feishang Technology Co., Ltd. | Crack identification method in complex environment |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US9367924B2 | 2014-05-06 | 2016-06-14 | Siemens Aktiengesellschaft | Method and system for segmentation of the liver in magnetic resonance images using multi-channel features |
| CN108603922A | 2015-11-29 | 2018-09-28 | Arterys Inc. | Automated cardiac volume segmentation |
Non-Patent Citations: None
Also Published As
| Publication number | Publication date |
| --- | --- |
| US10580159B2 | 2020-03-03 |
| EP3355273A1 | 2018-08-01 |
| US20180218516A1 | 2018-08-02 |
Legal Events

| Date | Code | Event |
| --- | --- | --- |
| 2018-01-29 | 17P | Request for examination filed |
| n/a | RIN1 | Inventors (corrected): BHATIA, PARMEET SINGH; ZHAN, YIQIANG; ZHOU, XIANG SEAN; REDA, FITSUM AKLILU |
| 2019-07-04 | INTG | Intention to grant announced |
| 2019-11-27 | AK | Patent granted (B1); designated contracting states: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| 2019-12-15 | REG | AT: ref. 1207555; DE: ref. document 602018001311 |
| 2019-11-27 | PG25 | Lapsed for failure to submit a translation or pay the fee: AL, AT, CY, CZ, DK, EE, ES, FI, HR, IT, LT, LV, MC, MK, MT, NL, PL, RO, RS, SE, SI, SK, SM, TR |
| 2020 | PG25 | Lapsed for failure to submit a translation or pay the fee: BG, NO (2020-02-27); GR (2020-02-28); IS (2020-03-27); PT (2020-04-19) |
| 2020 | PG25 | Lapsed for non-payment of due fees: LU, IE (2020-01-29); BE (2020-01-31) |
| 2021-01-31 | PG25 | Lapsed for non-payment of due fees: CH, LI |
| 2020-08-28 | 26N | No opposition filed within time limit |
| n/a | REG (DE R081) | Owner changed to SIEMENS HEALTHINEERS AG (former owner: SIEMENS HEALTHCARE GMBH) |
| 2024 | PGFP | Annual fee paid (year 7): DE (2024-03-18); GB (2024-02-12); FR (2024-01-16) |