WO2018036893A1 - Image processing apparatus and method for segmenting a region of interest - Google Patents


Info

Publication number
WO2018036893A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
region
interest
image
ultrasound
Prior art date
Application number
PCT/EP2017/070813
Other languages
French (fr)
Inventor
Christian Buerger
Irina Waechter-Stehle
Thomas Heiko STEHLE
Frank Michael WEBER
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Publication of WO2018036893A1 publication Critical patent/WO2018036893A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
            • G06T 7/10 Segmentation; Edge detection
        • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
                • G06T 2207/10016 Video; Image sequence
                • G06T 2207/10132 Ultrasound image
            • G06T 2207/30 Subject of image; Context of image processing
                • G06T 2207/30004 Biomedical image processing
                    • G06T 2207/30048 Heart; Cardiac
                • G06T 2207/30168 Image quality inspection

Definitions

  • The medical imaging system 10 is preferably provided for displaying and analyzing large and moving organs like the heart, in order to quantify a chamber size of the organ and a function of the respective organ. In transthoracic echocardiography, for example, the cardiac function such as the chamber size and the ejection fraction of the heart can be quantified.
  • Due to the technical limitations of today's ultrasound systems, the acquisition of images which capture and measure the entire heart chamber in a single shot, in order to quantify the chamber size and the ejection fraction, is not possible. The 3D volume of the heart is therefore usually acquired in a plurality of image captures taken at different time frames: multiple sub-volumes are acquired at similar time frames during the cardiac cycle and stitched together in order to provide 3D image data which displays the entire heart, from which the chamber size and the ejection fraction can be determined.
  • The combined image data may comprise artefacts due to an incorrect combination of the sub-volume image data. Such incorrect images should be discarded and not used for analysis or quantification in order to achieve reliable analysis data and reliable diagnoses. The presence of image artefacts may become especially pronounced for a new generation of ultrafast ultrasound imaging systems, in which the frame rate may reach as high as 1000 Hz or even more. Therefore, it is desirable to have an ultrasound system capable of automatically providing compounded images of an anatomy suitable for precise quantification analyses.
  • The medical imaging system 10 comprises an image processing apparatus 20 for providing an image via the medical imaging system 10. The image processing apparatus 20 controls the image processing and can form an image out of the echoes of the ultrasound beams received by the transducer array of the ultrasound probe 14.
  • The image processing apparatus 20 comprises a control unit 22 that controls the acquisition of image data via the transducer array of the ultrasound probe 14 and is connected to an interface 24 for receiving the respective image data. The control unit 22 is connected to a processing unit 26, which receives the image data and performs image processing on the overall image data in order to produce the ultrasound images to be displayed.
  • The processing unit 26 receives the image data of the object comprising sub-regions of the region of interest, which is acquired by the ultrasound probe 14 during different time frames. The processing unit 26 stitches the image data of the different sub-regions together and forms combined (or compounded) image data of the whole region of interest 15, e.g. the heart, in order to provide images of the whole organ to be analyzed.
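The stitching step described above can be sketched as a simple concatenation of two sub-volume arrays at a border plane. This is a minimal illustration only: the function name, the axis-aligned layout, and the equal cross-sections are assumptions for the example, not details taken from the patent.

```python
import numpy as np

def stitch_sub_volumes(sub_a1, sub_a2, axis=0):
    """Combine two sub-volume arrays into one compounded volume.

    The sub-volumes are assumed to be axis-aligned and to share the same
    shape on every axis except the stitching axis, so they can simply be
    concatenated at the border plane.
    """
    return np.concatenate([sub_a1, sub_a2], axis=axis)

# Two illustrative 3D sub-volumes (depth x height x width).
a1 = np.zeros((4, 8, 8))
a2 = np.ones((4, 8, 8))
combined = stitch_sub_volumes(a1, a2)
border_index = a1.shape[0]  # the border plane sits at the first slice of a2
```

In a real system the sub-volumes would first be registered into a common coordinate frame; the concatenation here only stands in for the final compounding step.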
  • The processing unit 26 is connected to a segmentation unit 28, which is adapted to segment the anatomic object in the region of interest 15. The segmentation unit 28 may perform the segmentation based on a deformable known or stored model of the anatomical structure in the region of interest 15 shown by the combined image data.
  • The combined image data and the segmentation data provided by the segmentation unit 28 are checked for plausibility by the processing unit 26, as described in the following; the image data, including the segmentation data, can then be provided to a display unit 30 connected to the image processing apparatus 20. The display unit 30 may be connected to an input device 32 for controlling the display unit 30 and/or the image processing apparatus 20.
  • Fig. 2a illustrates combined ultrasound image data 33 of a heart of the patient 12, which is formed of two sub-volume image data A1 and A2 originating from two sub-volumes of the heart. These sub-volume image data A1 and A2 are stitched at a border line or a border plane 34.
  • The combined image data 33 comprises a mismatch between the sub-volume image data A1 and A2, wherein the image data of the sub-volumes A1 and A2 are displaced with respect to each other, e.g. by a movement of the patient 12 or the ultrasound probe 14 between the capture of the different sub-volume image data or segmentation data of the sub-volumes A1 and A2.
  • Irregularities in the compounded image data, such as combination artefacts or stitching artefacts, are detected, and the respective image data is discarded and not used for quantification of the chamber size and the ejection fraction of the heart. The combination or stitching artefacts can be identified by intensity variations or intensity jumps in the ultrasound image data along a normal direction 36 of the border line or border plane 34.
  • If the intensity variation along the normal direction 36 is above a predefined threshold level, the image data 33 is discarded; i.e., if the intensity or contrast of the ultrasound image data at two neighboring points of an e.g. Cartesian grid differs by more than the predefined threshold level, the respective image data is discarded.
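The thresholded intensity-jump check between neighboring grid points across the border plane could be sketched as follows. The threshold value, array shapes, and function name are illustrative assumptions; the patent does not prescribe a concrete implementation.

```python
import numpy as np

def has_intensity_jump(volume, border_index, threshold, axis=0):
    """Detect a stitching artefact as an intensity jump across the border.

    Compares each pair of neighboring grid points that straddle the
    border plane (last slice of sub-volume A1, first slice of A2) along
    the normal direction of the border. If any absolute difference
    exceeds the predefined threshold, the compounded volume is flagged.
    """
    before = np.take(volume, border_index - 1, axis=axis)
    after = np.take(volume, border_index, axis=axis)
    return bool(np.any(np.abs(after - before) > threshold))

# A well-matched volume: intensity varies smoothly across the border.
smooth = np.tile(np.linspace(0.0, 1.0, 8)[:, None, None], (1, 4, 4))
# A mismatched volume: sub-volume A2 is offset in intensity at the border.
jumpy = smooth.copy()
jumpy[4:] += 0.5
```

A volume flagged by this check would be discarded rather than passed on to quantification.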
  • Fig. 2b shows segmentation data 38 provided by the segmentation unit 28 as a model-based segmentation of the combined image data 33 including the combination or stitching artefacts shown in Fig. 2a.
  • The image data intensity is checked over the model-based segmentation surface, and artefacts are identified if the segmentation model surface adapted to the image intensity or contrast data is not smooth; i.e., if the segmentation model of the surface of the organ shows an unexpected unevenness due to a mismatch, as shown in Fig. 2b, which is larger than a threshold level, the image data is discarded. This is a possibility to identify the intensity variation of the image data 33 in the combined image data.
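The unevenness check of the adapted model surface might be sketched, in a simplified 2D form, as a second-difference (curvature) measure along a contour. The metric, the sampling scheme, and the threshold value are illustrative assumptions rather than the patent's exact criterion.

```python
import numpy as np

def surface_unevenness(contour):
    """Measure unevenness of a segmentation contour as the largest
    absolute second difference of its radial samples.

    `contour` holds radii of the adapted model surface sampled at
    equally spaced angles; a smooth surface has small second
    differences, while a stitching mismatch introduces a kink.
    """
    second_diff = np.diff(contour, n=2)
    return float(np.max(np.abs(second_diff)))

smooth_contour = 10.0 + 0.1 * np.sin(np.linspace(0, np.pi, 32))
kinked_contour = smooth_contour.copy()
kinked_contour[16:] += 1.5  # displacement of sub-volume A2 deforms the model

is_artefact = surface_unevenness(kinked_contour) > 0.5  # illustrative threshold
```

For a full 3D model surface, the same idea would apply per mesh vertex, e.g. by comparing each vertex to the average of its neighbors.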
  • In the image data 33 including combination or stitching artefacts, the image intensities at the inside of the heart contour are high within the sub-volume A2 and low within the sub-volume image data A1. In this case, the combination or stitching artefacts are identified, and the image data 33 or the segmentation data 38 should be discarded.
  • A model-based segmentation 38 can be executed by means of the segmentation unit 28 to identify artefacts in the combined image data 33. If the segmentation model surface adapted to the combined image data shows a deformation which is not realistic or which is above a predefined threshold level, an artefact can be identified, and the segmentation data 38 should be discarded and not used for the analysis or the quantification.
  • The image data irregularities, i.e. the artefacts, can hence be identified on the basis of an intensity variation in the combined image data 33 or on the basis of a segmentation deformation at the border line or border plane 34 between the sub-volume image data A1 and A2.
  • Fig. 3a shows combined image data 33 in which the sub-volume image data A1 and A2 are correctly combined and no artefacts are detected. In Fig. 3b, the segmentation data 38 provided by the segmentation unit 28 is shown, on the basis of which a quantification of the chamber size and the ejection fraction of the heart can be executed.
  • A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

Abstract

The present invention relates to an image processing apparatus (20) for segmenting a region of interest (15) in image data (33) of an object (12). An interface (24) of the image processing apparatus is adapted to receive a plurality of image data of the object including sub-regions (A1, A2) of the region of interest to be segmented. The image processing apparatus comprises a processing unit (26) which is adapted to combine the plurality of image data of the sub-regions to combined image data of the region of interest, and a segmentation unit (28) which is adapted to segment the region of interest of the combined image data and to provide segmentation data (38). The processing unit is further adapted to identify artefacts in the combined image data on the basis of intensity variation or segmentation deformation at a border (34) of the combined plurality of image data.

Description

Image processing apparatus and method for segmenting a region of interest
FIELD OF THE INVENTION
The present invention relates to an image processing apparatus for segmenting a region of interest in image data of an object, which comprises sub-regions of the region of interest to be segmented. Further, the present invention relates to a method for segmenting a region of interest in image data of an object, wherein the image data comprises a plurality of image data of sub-regions of the region of interest. The present invention further relates to an ultrasound imaging apparatus and a computer program comprising program code means for causing a computer to carry out the steps of the method according to the present invention.
BACKGROUND OF THE INVENTION
In the field of medical image processing, various processing tasks are typically performed on medical images such as ultrasound images, MRT images, computed tomography images or the like. One specific processing task, which is a fundamental task in many image processing applications, is the segmentation of a region of interest, e.g. a specific organ or a specific part of an organ. The segmentation is necessary for identifying specific cavities of an organ, for determining specific functions of the organ, or for special diagnosis, e.g. based on volume quantification, in order to improve the determination of treatment parameters.
In transthoracic echocardiography (TTE), the cardiac function, such as the chamber size and the ejection fraction, can be quantified by medical imaging, in particular by means of ultrasound imaging. Due to the technical limitations of today's ultrasound systems, the acquisition of images with both high temporal and high spatial resolution is not possible. However, for accurate cardiac quantification, such a high spatial and temporal resolution is needed.
To generate such images, the 3D volume, e.g. of the heart, is not acquired in one shot but by capturing multiple sub-volumes from different time points at similar positions in the cardiac cycle, which are subsequently combined or stitched together in order to form image data covering the complete object to be segmented. A corresponding ultrasound system is known, e.g., from US 2011/0118608 A1. However, in the case that the patient or the probe is moved between the capturing of the different sub-volumes, the final image might show combination or stitching artefacts, i.e. unrealistic intensity steps in the final image data and, therefore, also in the final organ image from which e.g. the volume shall be measured. A corresponding combined image which does not represent the actual soft tissue anatomy of the organ to be quantified should be discarded and not considered for the special diagnosis in order to achieve a reliable diagnosis.
Further stitching techniques for medical images are known from Lang, R.M. et al.: "EAE/ASE Recommendations for Image Acquisition and Display Using Three-Dimensional Echocardiography", European Heart Journal - Cardiovascular Imaging, vol. 13, no. 1, pp. 1-46; from US 2014/071125 A1; and from Brekke et al.: "Volume Stitching in Three-Dimensional Echocardiography: Distortion Analysis and Extension to Real Time", Ultrasound in Medicine and Biology, NY, US, vol. 33, no. 5, pp. 782-796.
SUMMARY OF THE INVENTION
It is therefore an object of the present invention to provide an image processing apparatus and a corresponding method for segmenting a region of interest in image data of an object having an improved reliability and accuracy.
According to one aspect of the present invention, an image processing apparatus is provided for segmenting a region of interest in image data of an object, comprising:
- an interface adapted to receive at least two image data of the object each including a sub-region of the region of interest to be segmented,
- a processing unit adapted to combine the at least two image data of the two different sub-regions into combined image data of the region of interest by stitching the at least two image data at an image border,
- a segmentation unit adapted to segment the region of interest of the combined image data and to provide a segmentation model of the region of interest,
wherein the processing unit is adapted to identify stitching artefacts in the combined image data, wherein a stitching artefact is identified by the processing unit if:
(i) an intensity gradient along a normal direction that is orthogonal to the image border (34) is above a first predefined threshold,
(ii) a sum of intensity gradients along a plurality of normal directions that are parallel to one another and orthogonal to the image border is above a second predefined threshold;
(iii) an intensity gradient along a surface of the segmentation model is above a third predefined threshold; and/or
(iv) the surface of the segmentation model comprises an unevenness above a fourth predefined threshold level.
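Criteria (i) and (ii) could be sketched together as follows: the intensity gradient across the border plane is evaluated along each normal line, and both the per-line maximum and the sum over all parallel normal lines are thresholded. The function name, array layout, and threshold values are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def stitching_artefact_detected(volume, border_index, t1, t2, axis=0):
    """Apply criteria (i) and (ii): threshold the intensity gradient
    along the border normal, per normal line and summed over all
    parallel normal lines crossing the image border."""
    # Intensity step across the border plane, along its normal direction.
    before = np.take(volume, border_index - 1, axis=axis)
    after = np.take(volume, border_index, axis=axis)
    gradients = np.abs(after - before)       # one value per normal line
    criterion_i = np.max(gradients) > t1     # a single line exceeds t1
    criterion_ii = np.sum(gradients) > t2    # the parallel lines jointly exceed t2
    return bool(criterion_i or criterion_ii)

matched = np.zeros((8, 4, 4))
mismatched = np.zeros((8, 4, 4))
mismatched[4:] = 0.4  # sub-volume A2 shifted in intensity at the border
```

In the mismatched example no single line exceeds the first threshold, but the summed gradient over all 16 normal lines does, so criterion (ii) still flags the stitching artefact.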
According to another aspect of the present invention a method for segmenting a region of interest in image data of an object is provided, comprising the steps:
- receiving at least two image data of the object each including a sub-region of the region of interest to be segmented,
- combining the at least two image data of the two different sub-regions to combined image data of the region of interest by stitching the at least two image data at an image border,
- segmenting the region of interest of the combined image data and providing a segmentation model of the region of interest,
- identifying stitching artefacts in the combined image data, wherein a stitching artefact is identified by the processing unit if:
(i) an intensity gradient along a normal direction that is orthogonal to the image border (34) is above a first predefined threshold,
(ii) a sum of intensity gradients along a plurality of normal directions that are parallel to one another and orthogonal to the image border is above a second predefined threshold;
(iii) an intensity gradient along a surface of the segmentation model is above a third predefined threshold; and/or
(iv) the surface of the segmentation model comprises an unevenness above a fourth predefined threshold level.
According to another aspect of the invention, an ultrasound imaging apparatus is provided, comprising:
- an ultrasound acquisition unit adapted to provide ultrasound image data including sub-regions of an object in a field of view, and
- an image processing apparatus according to the present invention for segmenting the ultrasound data of the object acquired by the ultrasound acquisition unit.
According to a further aspect of the invention, a computer program is provided comprising program code means for causing a computer to carry out the steps of the method according to the present invention, when said computer program is carried out on a computer.
Preferred embodiments of the invention are defined in the dependent claims. It should be understood that the claimed method has similar and identical preferred embodiments as the claimed device and as defined in the dependent claims.
The present invention is based on the idea to identify wrongly combined or stitched sub-volume image data by identifying artefacts in the combined image data, so that such images are discarded and not quantified, and the reliability and the accuracy of the segmentation and the diagnosis can be improved. This is achieved according to the present invention by identifying the artefacts based on an intensity variation or a segmentation deformation in the image data at the combination or stitching border of the combined image data of the sub-regions, so that wrongly combined image data can be easily detected. Hence, a misfit of the combined image data can be easily and reliably identified, such data is discarded and not used for quantification, and the reliability of the diagnosis can be improved.
In a preferred embodiment, the processing unit is adapted to discard the combined image data based on the identified artefacts. This is a possibility to select artefact-free image data and to avoid wrong volume measurements and, further, wrong diagnoses.
In a preferred embodiment, the image data of the different sub-regions is captured at different time frames. This is a possibility to stitch different image data of the object from different sub-regions so that a 3D volume of the organ to be segmented can be acquired.
In a further preferred embodiment, the processing unit is adapted to determine the intensity variation along a normal direction of a border plane of the image data of the different sub-regions. This is a possibility to detect stitching artefacts by detecting intensity jumps along the connection plane of the different image data captured during different time frames.
In a further preferred embodiment, the processing unit is adapted to determine the intensity variation along a segmentation surface of the segmentation data. This is a possibility to determine intensity variations between the image data of the different sub-regions if the intensity of one sub-region is low and the intensity of the other sub-region is high.
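This check, low intensity in one sub-region and high intensity in the other along the same segmentation surface, could be sketched by sampling the image at consecutive surface points and thresholding the largest intensity step between neighbors. The sampling scheme and point layout are illustrative assumptions.

```python
import numpy as np

def intensity_gradient_along_surface(volume, surface_points):
    """Sample image intensities at consecutive points of a segmentation
    surface and return the largest intensity step between neighbors.

    A large step indicates that the surface crosses the stitching border
    between a dark and a bright sub-volume, as in criterion (iii).
    """
    samples = np.array([volume[p] for p in surface_points])
    return float(np.max(np.abs(np.diff(samples))))

volume = np.zeros((8, 8, 8))
volume[4:] = 0.8  # sub-volume A2 appears brighter than A1
# Surface points of the adapted model running across the border plane.
surface = [(z, 3, 3) for z in range(8)]
```

Thresholding the returned value against the third predefined threshold would then flag the compounded volume for discarding.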
In a further preferred embodiment, the processing unit is adapted to determine the segmentation deformation based on a model-based segmentation and the segmentation data. This is a possibility to determine a misfit of the image data from the different sub-regions, since an edge in the combined image data can be easily detected, which is usually not present in the anatomic object to be segmented.
In a further preferred embodiment, the processing unit is adapted to determine the segmentation deformation based on an outer surface of the model-based segmentation. This is a possibility to determine a mismatch between two image data portions of the sub-regions to be combined to the combined image data of the region of interest.
In a further preferred embodiment, the image data is multi-dimensional image data of the object. This is a possibility to provide detailed image data of the object to be segmented.
In a further preferred embodiment, the image data is multi-dimensional ultrasound image data of the object. It is further preferred that the image data is three-dimensional ultrasound image data of the object. This is a possibility to provide detailed ultrasound image data of the object in order to determine a three-dimensional volume of the object to be segmented, e.g. the heart.
In a further preferred embodiment, the image data is based on high frame rate ultrasound imaging comprising at least 1000 captured image frames per second. This is a possibility to improve the image data quality and to reduce the possibility of artefacts due to a movement of the probe or a movement of the patient.
As mentioned above, the present invention provides a possibility to identify a mismatch and the corresponding artefacts of combined image data, which combines sub-regions of the region of interest to be segmented, wherein the border of the combined or stitched sub-regions is checked for intensity variations, segmentation deformation or different intensities in an organ cavity to be analyzed, so that mismatched image data of sub-regions can be identified and discarded and the overall reliability of the segmentation can be improved.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter. In the following drawings:
Fig. 1 shows a schematic representation of a medical imaging system in use to scan a volume of a patient's body;
Fig. 2a shows combined image data having an artefact due to a mismatch between two image data of sub-regions; Fig. 2b shows the combined image data of Fig. 2a segmented by the segmentation unit;
Fig. 3a shows combined ultrasound image data comprising correctly matched image data of different sub-regions; and
Fig. 3b shows the ultrasound image data of Fig. 3a including segmentations data provided by the segmentation unit.
DETAILED DESCRIPTION OF THE INVENTION
Fig. 1 shows a schematic illustration of a medical imaging system 10 according to one embodiment, in particular a medical three dimensional (3D) ultrasound imaging system 10. The medical imaging system 10 is applied to inspect a volume of an anatomical site, in particular an anatomical site of a patient 12. The medical imaging system 10 comprises an ultrasound probe 14 having at least one transducer array including a multitude of transducer elements for transmitting and/or receiving ultrasound waves. The transducer elements are preferably arranged in a two-dimensional array, in particular for providing multi-dimensional imaging data.
The ultrasound transducer is formed as a high frame rate ultrasound transducer for high frame rate ultrasound imaging, which is capable of capturing more than 1000 image frames per second. The high frame rate ultrasound imaging is a possibility to provide detailed ultrasound imaging also of moving organs like the heart.
A 3D ultrasound scan typically involves emitting ultrasound waves that illuminate a particular volume or object within the patient 12, which may be designated as target volume or region of interest 15. This can be achieved in general by emitting ultrasound waves at multiple different angles. A set of volume data is obtained by receiving and processing reflected waves. This set of volume data is a representation of the region of interest 15 within the patient 12.
The medical imaging system 10 is preferably provided for displaying and analyzing large and moving organs like the heart in order to quantify a chamber size of the organ and a function of the respective organ.
In transthoracic echocardiography (TTE), cardiac function parameters such as the chamber size and the ejection fraction of the heart can be quantified. Due to the technical limitations of the currently available ultrasound transducer arrays, in particular the volume of the anatomical site which can be captured and displayed, the acquisition of images which can capture and measure the entire heart chamber in order to quantify the chamber size and the ejection fraction is not possible in a single capture. In order to generate images which can quantify the chamber size and the ejection fraction, the 3D volume of the heart is usually acquired in a plurality of image captures taken at different time frames. Multiple sub-volumes are usually acquired at similar time frames during the cardiac cycle and stitched together in order to provide 3D image data which displays the entire heart to determine the chamber size and the ejection fraction. In the case that the patient 12 or the ultrasound probe 14 is moved between the different image captures of the cardiac cycle, the combined image data may comprise artefacts due to an incorrect combination of the sub-volume image data. Those incorrect images should be discarded and not used for analysis or quantification in order to achieve reliable analysis data and reliable diagnoses. Image artefacts may become especially pronounced for a new generation of ultrafast ultrasound imaging systems, wherein the frame rate may reach as high as 1000 Hz or even more. Therefore, it is desirable to have an ultrasound system capable of automatically providing compounded images of an anatomy suitable for precise quantification analyses.
The medical imaging system 10 comprises an image processing apparatus 20 for providing an image via the medical imaging system 10. The image processing apparatus 20 controls the image processing and can form an image out of the echoes of the ultrasound beams received by the transducer array of the ultrasound probe 14. The image processing apparatus 20 comprises a control unit 22 that controls the acquisition of image data via the transducer array of the ultrasound probe 14 and is connected to an interface 24 for receiving the respective image data. The control unit 22 is connected to a processing unit 26, which receives the image data and performs image processing on the overall image data in order to produce the ultrasound images to be displayed.
The processing unit 26 receives the image data of the object comprising sub-regions of the region of interest which is acquired by the ultrasound probe 14 during different time frames. The processing unit 26 stitches the image data of the different sub-regions together and forms combined (or compounded) image data of the whole region of interest 15, e.g. the heart, in order to provide images of the whole organ to be analyzed. The processing unit 26 is connected to a segmentation unit 28, which is adapted to segment the anatomic object in the region of interest 15. The segmentation unit 28 may perform the segmentation based on a deformable known or stored model of the anatomical structure in the region of interest 15 shown by the combined image data.
The combined image data and the segmentation data provided by the segmentation unit 28 are checked for plausibility by the processing unit 26 as described in the following and the image data including the segmentation data can be provided to a display unit 30 connected to the image processing apparatus 20. The display unit 30 may be connected to an input device 32 for controlling the display unit 30 and/or the image processing apparatus 20.
Fig. 2a illustrates combined ultrasound image data 33 of a heart of the patient 12, which is formed of two sub-volume image data A1 and A2 originating from two sub-volumes of the heart. These sub-volume image data A1 and A2 are stitched at a border line or a border plane 34. As indicated in Fig. 2a, the combined image data 33 comprises a mismatch between the sub-volume image data A1 and A2, wherein the image data of the sub-volumes A1 and A2 are displaced with respect to each other, e.g. by a movement of the patient 12 or the ultrasound probe 14 between capturing the different sub-volume image data or segmentation data of the sub-volumes A1 and A2. According to the present invention, irregularities in the compounded image data, such as combination artefacts or stitching artefacts, are detected and the image data is discarded and not used for quantification of the chamber size and the ejection fraction of the heart.
Since the geometry of the sub-volumes A1 and A2 is known, the combination artefacts or stitching artefacts can be identified by intensity variations or intensity jumps in the ultrasound image data along a normal direction 36 of the border line 34 or the border plane 34. In the case that the sum of all gradient jumps at a Cartesian grid on the stitching plane 34 and along the plane normal is larger than a predefined threshold, the image data 33 is discarded. That is, if the intensity or contrast of the ultrasound image data at two neighboring points of, e.g., a Cartesian grid differs by more than the predefined threshold level, the respective image data is discarded.
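The gradient-jump criterion described above can be sketched in code. This is a minimal illustration, not the implementation described in the patent: it assumes the two sub-volumes are stacked along one axis of a Cartesian grid so that the stitching plane is a single grid slice, and the function name and threshold value are illustrative.

```python
import numpy as np

def stitching_artifact_by_gradient(volume, border_index, threshold, axis=0):
    """Flag a stitching artefact by summing the intensity jumps across the
    stitching plane along its normal direction (simplified sketch)."""
    # Grid slices immediately on either side of the stitching plane.
    before = np.take(volume, border_index - 1, axis=axis).astype(float)
    after = np.take(volume, border_index, axis=axis).astype(float)
    # Intensity jump along the plane normal at each Cartesian grid point.
    jumps = np.abs(after - before)
    # Discard the combined image data if the summed jump is too large.
    return jumps.sum() > threshold

# Two uniform sub-volumes with mismatched brightness, stitched along axis 0:
# a large summed jump across the border plane flags the combined data.
a1 = np.full((4, 8, 8), 10.0)
a2 = np.full((4, 8, 8), 60.0)
stitched = np.concatenate([a1, a2], axis=0)
```

A per-point variant of the same check (criterion against the first threshold) would compare each individual jump, rather than the sum, against the threshold.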
Fig. 2b shows segmentation data 38 provided by the segmentation unit 28 as a model-based segmentation on the combined image data 33 including the combination or stitching artefacts as shown in Fig. 2a. To determine the compounded image irregularities, such as the combination or stitching artefacts, the image data intensity is checked over the model-based segmentation surface and the artefacts are identified if the segmentation model surface adapted to the image intensity or contrast data is not smooth. That is, if the segmentation model of the surface of the organ shows an unexpected unevenness due to a mismatch as shown in Fig. 2b, which is larger than a threshold level, the image data is discarded. This is a possibility to identify the intensity variation of the image data 33 in the combined image data. In the case of image data 33 including combination or stitching artefacts, the image intensities at the inside of the heart contour are high within the sub-volume A2 and low within the sub-volume image data A1. In that case, the combination or stitching artefacts are identified and the image data 33 or the segmentation data 38 should be discarded.
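The intensity check over the segmentation surface can be sketched as follows. This is a hedged simplification: it assumes intensities have already been sampled at consecutive points along the adapted model surface (e.g. just inside the heart contour, crossing the stitching border), which is not detailed in the patent, and the function name is illustrative.

```python
import numpy as np

def intensity_jump_along_surface(surface_intensities, threshold):
    """Flag an artefact if the image intensity sampled along the adapted
    segmentation surface jumps by more than `threshold` (simplified sketch).

    surface_intensities: 1D array of intensities at consecutive sample
    points along the model surface, ordered so that the sequence crosses
    the stitching border between the two sub-volumes.
    """
    # Intensity gradient between consecutive sample points on the surface.
    gradients = np.abs(np.diff(surface_intensities))
    # A high/low split between the sub-volumes shows up as a large jump.
    return gradients.max() > threshold
```

For correctly matched sub-volumes the sampled intensities vary only gradually along the surface, so no jump exceeds the threshold.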
Further, the model-based segmentation 38 can be executed by means of the segmentation unit 28 to identify artefacts in the combined image data 33. In the case that the segmentation model surface adapted to the combined image data shows a deformation which is not realistic or above a predefined threshold level, an artefact can be identified and the segmentation data 38 should be discarded and not used for the analysis or the quantification.
Consequently, the image data irregularities (i.e. artefacts) can be identified on the basis of intensity variation in the combined image data 33 or based on segmentation deformation at the border line or border plane 34 between the sub-volume image data A1 and A2.
In comparison to Figs. 2a and 2b, Fig. 3a shows combined image data 33, wherein the sub-volume image data A1 and A2 are correctly combined and no artefacts are detected. In Fig. 3b, the segmentation data 38 provided by the segmentation unit 28 is shown, on the basis of which a quantification of the chamber size and the ejection fraction of the heart can be executed.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. For example, the detection of compounded image irregularities was illustrated using the ultrasound images visualized to the user. This artifact detection may be performed automatically based on the compounded image data and does not necessarily need to be visualized to the user.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

Claims

CLAIMS:
1. Image processing apparatus (20) for segmenting a region of interest (15) in image data (33) of an object (12), comprising:
- an interface (24) adapted to receive at least two image data of the object each including a sub-region (A1, A2) of the region of interest to be segmented,
- a processing unit (26) adapted to combine the at least two image data of the two different sub-regions to combined image data of the region of interest by stitching the at least two image data at an image border (34),
- a segmentation unit (28) adapted to segment the region of interest of the combined image data and to provide a segmentation model (38) of the region of interest, wherein the processing unit is adapted to identify stitching artefacts in the combined image data, wherein a stitching artefact is identified by the processing unit if:
(i) an intensity gradient along a normal direction (36) that is orthogonal to the image border (34) is above a first predefined threshold,
(ii) a sum of intensity gradients along a plurality of normal directions (36) that are parallel to one another and orthogonal to the image border (34) is above a second predefined threshold;
(iii) an intensity gradient along a surface of the segmentation model is above a third predefined threshold; and/or
(iv) the surface of the segmentation model comprises an unevenness above a fourth predefined threshold level.
2. Image processing apparatus as claimed in claim 1, wherein the processing unit is adapted to discard the combined image data if a stitching artefact is identified.
3. Image processing apparatus as claimed in claim 1, wherein the at least two image data of the two different sub-regions are captured at different time frames.
4. Image processing apparatus as claimed in claim 1, wherein each of the at least two image data is multi-dimensional image data of the object.
5. Image processing apparatus as claimed in claim 4, wherein each of the at least two image data is multi-dimensional ultrasound image data of the object.
6. Image processing apparatus as claimed in claim 1, wherein the image data is based on a high frame rate ultrasound imaging comprising at least 1000 captured image frames per second.
7. Method for segmenting a region of interest (15) in image data (30) of an object (12), comprising the steps of:
- receiving at least two image data (33) of the object each including a sub-region (A1, A2) of the region of interest to be segmented,
- combining the at least two image data of the two different sub-regions to combined image data of the region of interest by stitching the at least two image data at an image border (34),
- segmenting the region of interest of the combined image data and providing a segmentation model (38) of the region of interest,
- identifying stitching artefacts in the combined image data, wherein a stitching artefact is identified by the processing unit if:
(i) an intensity gradient along a normal direction (36) that is orthogonal to the image border (34) is above a first predefined threshold,
(ii) a sum of intensity gradients along a plurality of normal directions (36) that are parallel to one another and orthogonal to the image border (34) is above a second predefined threshold;
(iii) an intensity gradient along a surface of the segmentation model is above a third predefined threshold; and/or
(iv) the surface of the segmentation model comprises an unevenness above a fourth predefined threshold level.
8. Ultrasound imaging apparatus (10) comprising:
- an ultrasound acquisition unit (14) adapted to provide ultrasound image data (33) including sub-regions (A1, A2) of an object (12) in a field of view (15), and
- an image processing apparatus (20) as claimed in claim 1 for segmenting the ultrasound data (33) of the object acquired by the ultrasound acquisition unit.
9. Computer program comprising program code means for causing a computer to carry out the steps of the method as claimed in claim 7, when said computer program is carried out on a computer.
PCT/EP2017/070813 2016-08-23 2017-08-17 Image processing apparatus and method for segmenting a region of interest WO2018036893A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP16185243 2016-08-23
EP16185243.9 2016-08-23

Publications (1)

Publication Number Publication Date
WO2018036893A1 true WO2018036893A1 (en) 2018-03-01

Family

ID=56842651

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/070813 WO2018036893A1 (en) 2016-08-23 2017-08-17 Image processing apparatus and method for segmenting a region of interest

Country Status (1)

Country Link
WO (1) WO2018036893A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035271A (en) * 2018-07-19 2018-12-18 迈克医疗电子有限公司 Image partition method and device, analysis instrument and storage medium
CN114419074A (en) * 2022-03-25 2022-04-29 青岛大学附属医院 4K medical image processing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110118608A1 (en) 2009-09-04 2011-05-19 Bjoern Lindner System and method for detection of a temperature on a surface of a body
US20120179044A1 (en) * 2009-09-30 2012-07-12 Alice Chiang Ultrasound 3d imaging system
US20140071125A1 (en) 2012-09-11 2014-03-13 The Johns Hopkins University Patient-Specific Segmentation, Analysis, and Modeling from 3-Dimensional Ultrasound Image Data
WO2016038491A1 (en) * 2014-09-11 2016-03-17 Koninklijke Philips N.V. Quality metric for multi-beat echocardiographic acquisitions for immediate user feedback


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party

Title
BREKKE ET AL: "Volume Stitching in Three-Dimensional Echocardiography: Distortion Analysis and Extension to Real Time", ULTRASOUND IN MEDICINE AND BIOLOGY, NEW YORK, NY, US, vol. 33, no. 5, 29 April 2007 (2007-04-29), pages 782 - 796, XP022052437, ISSN: 0301-5629, DOI: 10.1016/j.ultrasmedbio.2006.10.020 *
HAN DONGFENG ET AL: "Characterization and identification of spatial artifacts during 4D-CT imaging", MEDICAL PHYSICS, AIP, MELVILLE, NY, US, vol. 38, no. 4, 23 March 2011 (2011-03-23), pages 2074 - 2087, XP012145200, ISSN: 0094-2405, DOI: 10.1118/1.3553556 *
R. M. LANG ET AL: "EAE/ASE Recommendations for Image Acquisition and Display Using Three-Dimensional Echocardiography", EUROPEAN HEART JOURNAL - CARDIOVASCULAR IMAGING, vol. 13, no. 1, 1 January 2012 (2012-01-01), pages 1 - 46, XP055128564, ISSN: 2047-2404, DOI: 10.1093/ehjci/jer316 *


Similar Documents

Publication Publication Date Title
JP6994494B2 (en) Elastography measurement system and its method
JP5670324B2 (en) Medical diagnostic imaging equipment
US8659603B2 (en) System and method for center point trajectory mapping
JP6430498B2 (en) System and method for mapping of ultrasonic shear wave elastography measurements
US8577441B2 (en) System and method for image based physiological monitoring of cardiovascular function
JP6535088B2 (en) Quality Metrics for Multibeat Echocardiography Acquisition for Immediate User Feedback
US10743844B2 (en) Ultrasound imaging apparatus
US8487933B2 (en) System and method for multi-segment center point trajectory mapping
WO2016041855A1 (en) Ultrasound imaging apparatus
WO2010113633A1 (en) Image processing apparatus and image processing method
WO2011041244A1 (en) Contrast-enhanced ultrasound assessment of liver blood flow for monitoring liver therapy
US20190125309A1 (en) Systems and methods for estimating cardiac strain and displacement using ultrasound
US20210177375A1 (en) Image-Based Diagnostic Systems
JP2017522092A (en) Ultrasonic imaging device
US20200352547A1 (en) Ultrasonic pulmonary assessment
WO2016037969A1 (en) Medical imaging apparatus
US20120008833A1 (en) System and method for center curve displacement mapping
WO2018036893A1 (en) Image processing apparatus and method for segmenting a region of interest
US11246564B2 (en) Ultrasound diagnosis apparatus
CN117202842A (en) Method for determining heart wall movement
KR101024857B1 (en) Ultrasound system and method for performing color modeling processing on three-dimensional ultrasound image
JP2010246777A (en) Medical image processing device, method, and program
WO2017223563A1 (en) Systems and methods for estimating cardiac strain and displacement using ultrasound
Rohling et al. Correcting Motion-Induced Registration Errors in 3-D Ultrasound Images.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17757505

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17757505

Country of ref document: EP

Kind code of ref document: A1