WO2015139937A1 - Image processing apparatus and method for segmenting a region of interest - Google Patents

Image processing apparatus and method for segmenting a region of interest

Info

Publication number
WO2015139937A1
WO2015139937A1 (PCT/EP2015/054189, EP2015054189W)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
deformable model
interest
processing apparatus
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2015/054189
Other languages
English (en)
French (fr)
Inventor
Irina WÄCHTER-STEHLE
Juergen Weese
Christian Buerger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips NV filed Critical Koninklijke Philips NV
Priority to CN201580015322.6A priority Critical patent/CN106133789B/zh
Priority to JP2016557640A priority patent/JP6400725B2/ja
Priority to US15/126,626 priority patent/US10043270B2/en
Priority to EP15707134.1A priority patent/EP3120323B1/en
Publication of WO2015139937A1 publication Critical patent/WO2015139937A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/44Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • A61B8/4483Constructional features of the ultrasonic, sonic or infrasonic diagnostic device characterised by features of the ultrasound transducer
    • A61B8/4494Constructional features of the ultrasonic, sonic or infrasonic diagnostic device characterised by features of the ultrasound transducer characterised by the arrangement of the transducer elements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/48Diagnostic techniques
    • A61B8/483Diagnostic techniques involving the acquisition of a 3D volume of data
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/54Control of the diagnostic device
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/100764D tomography; Time-sequential 3D tomography
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20116Active contour; Active surface; Snakes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20124Active shape model [ASM]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30056Liver; Hepatic

Definitions

  • the present invention relates to an image processing apparatus for segmenting a region of interest in multi-dimensional image data of an object. Further, the present invention relates to a method for segmenting a region of interest in multi-dimensional image data of an object. The present invention further relates to a computer program for segmenting a region of interest in multi-dimensional image data of an object. The present invention finally relates to a medical imaging system comprising an image acquisition unit.
  • image processing tasks are typically performed on medical images such as ultrasound images, MRI images, computed tomography images or the like.
  • One specific processing task, which is a fundamental task in many image processing applications, is the segmentation of a region of interest, e.g. a specific organ. The segmentation is necessary for identifying organs or for specific diagnoses, e.g. based on volume quantification, in order to improve the determination of treatment parameters.
  • the image segmentation can successfully be performed with deformable models, which are based on a mesh structure with a topology which remains unchanged during adaptation to the image being segmented.
  • Model-based segmentation has been considered very efficient for a wide variety of organs, from simple ones like bones to complex ones like the liver or the heart with its nested structures.
  • a corresponding method for segmenting images using deformable meshes is e.g. known from WO 2007/072363.
  • WO 2011/070464 A2 discloses a system and a method for automatic segmentation, performed by selecting a deformable model of an anatomical structure of interest imaged in a volumetric image, wherein the deformable model is formed of a plurality of polygons including vertices and edges, wherein a feature point of the anatomical structure of interest corresponding to each of the plurality of polygons is detected and the deformable model is adapted by moving each of the vertices toward the corresponding feature point until the deformable model morphs to a boundary of the anatomical structure of interest.
  • WO 2006/029037 A2 discloses a system and a method for defining and tracking a deformable shape of a candidate anatomical structure wall in a three-dimensional image, wherein the shape of the candidate anatomical structure is represented by a plurality of labeled 3D landmark points and at least one 3D landmark point of the deformable shape in an image frame is defined.
  • model-based segmentations generally provide a reliable and accurate identification and segmentation of the organs.
  • however, some anatomical structures, like the apex of the left ventricle of the heart, the tip of the liver or the horn of the ventricles of the brain, are often difficult to detect in the medical images, so that the segmentation of these features is challenging.
  • an image processing apparatus for segmenting a region of interest in multi-dimensional image data of an object, comprising:
  • a selection unit for selecting a deformable model of an anatomical structure corresponding to the object in the image data, and
  • a processing unit for segmenting the region of interest by adapting the deformable model on the basis of the image data and additional anatomical information of the object.
  • a method for segmenting a region of interest in multi-dimensional image data of an object, comprising the steps of:
  • a computer program for segmenting a region of interest in multi-dimensional image data of an object, comprising program means for causing a computer to carry out the steps of:
  • a medical imaging system comprising an image acquisition unit for acquiring medical images and comprising an image processing apparatus according to the present invention for segmenting a region of interest in the multi-dimensional image data of an object acquired by the image acquisition unit.
  • the present invention is based on the idea to apply a deformable model of an anatomical structure to the image data and to adjust the deformable model on the basis of known anatomical information of the object within the image data.
  • known anatomical features of the object to be segmented are used to adjust the deformable model in order to improve the segmentation of the selected deformable model.
  • an additional parameter can be utilized for the adaptation, so that the segmentation becomes more reliable and more accurate, and also complicated structures and structures which are poorly detectable in the image data can be segmented.
  • the processing unit is designed to adapt a position of an element of the deformable model on the basis of the image data and a known position of an anatomical feature of the object. This is a possibility to adjust the deformable model to a known position so that the uncertainty is reduced and a more reliable and more accurate segmentation can be achieved.
  • the processing unit is designed to adapt a position of an element of the deformable model on the basis of the image data and a known position within the field of view of the image data. This is a possibility to add a known position to the segmentation with low technical effort, since the field of view of the image data usually comprises edge conditions which can be utilized as an additional parameter for the segmentation.
  • the processing unit is designed to adapt a position of an element of the deformable model on the basis of the image data and corresponding positions in consecutive time frames of the image data.
  • This is a possibility to utilize an anatomical feature of the object which is derivable from a known motion or a known stability of anatomical structures so that an additional parameter can be used to increase the reliability and the accuracy of the segmentation.
  • the corresponding positions are anatomical structures, which are usually stable over time so that the respective anatomical structure can be correlated and the deformable model can be adapted accordingly.
  • the corresponding positions are known corresponding anatomical features of the object. This is a possibility to correlate anatomical features of different time frames so that a time-dependent feature can be added as additional parameter to improve the segmentation.
  • the processing unit is designed to adapt a position of an element of the deformable model on the basis of the image data and a vector or plane derived from a field of view of the image data. This is a possibility to include external information when the position is not known in all three dimensions.
  • the processing unit is designed to adapt the position of the element on the basis of a distance between the position of the element and the known position or between the corresponding positions. This is a possibility to reduce the technical effort for adapting the deformable model, since distances can be easily calculated in deformable models.
  • the deformable model is adapted to minimize a model energy comprising a spring energy derived from the distance.
  • the model energy comprises an external energy derived from the difference of the deformable model to corresponding image features in the region of interest, and the spring energy. This is a possibility to combine information from the different sources so that an overall improvement of the deformable model shape is achieved.
  • the spring energy is weighted by a weight factor. This is a possibility to adjust the spring energy within the overall model energy so that a balance between the different energies within the model energy can be achieved.
  • the processing unit is designed to adapt the deformable model further on the basis of an expected shape of the object. This is a possibility to improve the segmentation, since the deformable model is adapted to the medical image which includes the region of interest to be segmented.
  • the deformable model is formed by a mesh including polygons having vertices and edges. This is a possibility to achieve a segmentation with a high resolution so that the diagnosis of the anatomical structure can be improved.
  • the deformable model is adapted by adjusting the position of the vertices of the mesh. This is a possibility to adapt the deformable model with low technical effort.
  • the present invention is based on the idea to improve the reliability and accuracy of a model-based segmentation by using external knowledge about the acquisition of the image data or about anatomical structures or features to guide the adaptation of the deformable model.
  • the anatomical features are positions of anatomical structures, anatomical distances or anatomical points, which are spatially stable over time, so that additional known parameters can be added in order to reduce the uncertainty of the segmentation and to improve the reliability and accuracy of the model.
  • the externally known parameters are implemented as a spring energy, which is derived from distances between points of the deformable model and corresponding points within the image data, so that the deformable model can be adapted by a stepwise movement of the elements.
  • the overall convergence of the model adaptation can be achieved by minimizing a model energy, which is formed of an external model energy, an internal model energy and the spring energy. Hence, all relevant adaptation methods can be considered so that an optimal segmentation can be achieved.
  • Fig. 1 shows a schematic representation of a medical imaging system in use to scan a volume of a patient's body
  • Fig. 2 shows a schematic diagram of a deformable model including polygons and vertices for segmenting a region of interest
  • Fig. 3 shows a sectional view of the deformable model in order to explain the adaptation of the model
  • Fig. 4 shows an embodiment to adapt the deformable model on the basis of external knowledge in the field of use
  • Fig. 5a, b show examples of an improved segmentation on the basis of external knowledge
  • Fig. 6 shows a flow diagram of a method for segmenting a region of interest.
  • Fig. 1 shows a schematic illustration of a medical imaging system 10 according to one embodiment, in particular a medical three-dimensional (3D) ultrasound imaging system 10.
  • the medical imaging system 10 is applied to inspect a volume of an anatomical site, in particular an anatomical site of a patient 12.
  • the medical imaging system 10 comprises an ultrasound probe 14 having at least one transducer array having a multitude of transducer elements for transmitting and/or receiving ultrasound waves.
  • the transducer elements are preferably arranged in a two-dimensional array, in particular for providing multi-dimensional image data.
  • a 3D ultrasound scan typically involves emitting ultrasound waves that illuminate a particular volume or object within the patient 12, which may be designated as target volume or region of interest 15. This can be achieved by emitting ultrasound waves at multiple different angles.
  • a set of volume data is then obtained by receiving and processing reflected waves.
  • the set of volume data is a representation of the region of interest 15 within the patient 12.
  • the medical imaging system 10 comprises an image processing apparatus 16 for providing an image via the medical imaging system 10.
  • the image processing apparatus 16 controls the image processing that forms the image out of the echoes of the ultrasound beams received by the transducer array of the ultrasound probe 14.
  • the image processing apparatus 16 comprises a control unit 18 that controls the acquisition of image data via the transducer array of the ultrasound probe 14 and serves as an interface 18 for receiving the image data.
  • the control unit 18 is connected to a selection unit 20 that selects a deformable model of an anatomical structure corresponding to an object in the region of interest 15 of the image data and provides the selected deformable model and the image data to a processing unit 22 for segmenting the object in the region of interest 15 by applying the deformable model received from the selection unit 20 to the image data.
  • the segmentation image and the ultrasound image of the region of interest 15 may be superposed and displayed on a display screen 24.
  • an input device 26 may be provided connected to the display unit 24 or to the image processing apparatus 16 in order to control the image acquisition, the segmentation and/or the display of the ultrasound image and the segmentation results.
  • the medical imaging system 10 is connected to an MR device, a CT device or an X-ray device in order to provide corresponding medical images of the patient's body 12 which can be segmented by the image processing apparatus 16 and displayed on the display unit 24.
  • Fig. 2 shows a deformable model 30 that represents surfaces of an anatomical structure and their spatial constellation.
  • the deformable model 30 is selected by the selection unit 20 and e.g. loaded from a memory of the image processing apparatus 16.
  • the selected deformable model 30 and the image data received from the interface 18 are then provided to the processing unit 22 to adapt the deformable model to the anatomical structure identified in the image data.
  • the deformable model 30 is formed of a mesh comprising triangles, each formed by three neighboring vertices which are connected to each other by edges, and which together form a surface of the model.
  • the mesh forms a surface of the anatomical structure.
  • the surface of the anatomical structure can be defined by polygons replacing the triangles of the mesh.
  • the deformable model 30 comprises an initial spatial shape which forms a mean mesh or a shape model and the deformable model 30 is adapted to the anatomical structure within the region of interest of the image data as described in the following.
  • pattern detection of the image data is performed and the corresponding positions are correlated accordingly.
  • Fig. 3 shows a sectional view of the deformable model 30.
  • the deformable model 30 comprises a plurality of triangles T1 to T5.
  • the triangles T are formed by vertices v, which are connected to each other by edges.
  • Each triangle T comprises a triangle center c1 to c5.
  • the deformable model 30 is adapted to the detected anatomical structures within the image data, wherein the positions of the triangles T are adjusted so that a so-called general energy function
  • E = E_ext + α·E_int is minimized.
  • E_ext is in general an external energy, which moves the vertices and triangles T to the corresponding positions of the anatomical structure detected in the image data by the pattern detection, wherein E_int is an internal energy, which moves the vertices and triangles T towards the original positions of the deformable model, i.e. to the mean mesh or the shape model.
  • the external energy can be calculated as a sum over each of the triangles, e.g. of the form E_ext = Σ_i w_i · ||c_i − x_i*||², wherein x_i* are the corresponding positions detected in the image data and w_i is a weight factor.
  • the internal energy is calculated by a formula of the form:
  • E_int = Σ_i Σ_{j ∈ N(i)} ((v_i − v_j) − (m_i − m_j))²
  • wherein v_i are the vertices of the adapted deformable model 30, m_i are the vertices of the shape model or the mean mesh as originally provided or selected, and N(i) denotes the neighboring vertices of vertex i.
  • the sum is formed over all vertices of the deformable model 30, wherein for each vertex the differences to the neighboring vertices v_j are determined.
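The two energy terms described above can be sketched in Python. The squared-distance forms and all names below (external_energy, internal_energy, neighbors, ...) are illustrative assumptions consistent with the description, not the exact formulation of the patent:

```python
import numpy as np

def external_energy(tri_centers, targets, weights):
    # sum over the triangles: weighted squared distance between each
    # triangle center and the position detected in the image data
    d = tri_centers - targets
    return float(np.sum(weights * np.sum(d * d, axis=1)))

def internal_energy(verts, mean_verts, neighbors):
    # sum over the vertices: deviation of each edge vector (v_i - v_j)
    # from the corresponding edge vector (m_i - m_j) of the mean mesh
    e = 0.0
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            d = (verts[i] - verts[j]) - (mean_verts[i] - mean_verts[j])
            e += float(d @ d)
    return e
```

Note that internal_energy is zero for any pure translation of the mesh, since translations leave all edge vectors unchanged; the external and spring terms are what fix the absolute position.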
  • the deformable model 30 is also adapted on the basis of known features of the anatomical structure, in order to improve the segmentation achieved by the deformable and parametric adaptation.
  • the anatomical features of the anatomical structure may be a certain point or position or a distance within the image data, which is known and which can be identified. This may for example be the apex of the heart, whose position is generally known and whose distance to the skin of the thorax is well known and can be derived from the image data due to the position of the ultrasound probe 14.
  • the anatomical features of the anatomical structure may be a plane derived from the field of view with respect to the position of the ultrasound probe 14 or may be anatomical structures, which are known to be stable over time e.g. the apex of the heart so that corresponding points or positions in different time frames of the image data can be correlated.
  • an additional spring energy is calculated and included in the function of the general energy E.
  • the additional spring energy can pull certain centers c of the triangles T to a selected point x if a certain anatomical structure in the image data is identified or determined.
  • the selection of the known anatomical structure can be performed automatically by means of the image processing apparatus 16 or may be performed manually by the user.
  • the spring energy formed with respect to a certain selected point can be calculated by a formula of the form:
  • E_s = Σ_j w_j · (c_j − x_j*)²
  • wherein c_j are the respective centers of the triangles T, x_j* are the selected corresponding points within the image data and w_j is a weight factor.
  • the selected point x_1* is schematically shown, wherein the spring energy E_s moves the center c_1 of the first triangle T_1 to the selected point x_1*.
  • the spring energy E_s is calculated on the basis of the difference between the triangle centers c and the corresponding selected anatomical structure x, weighted by the weight factor w_j in order to balance the three energies within the formula of the general energy.
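How such a spring term pulls a triangle center c toward a selected point x* can be sketched with a simple gradient-descent step on w·||c − x*||² (step size and names are illustrative):

```python
import numpy as np

def spring_step(c, x, w, lr=0.1):
    # gradient of w * ||c - x||^2 with respect to c is 2 * w * (c - x);
    # a descent step therefore moves the center c toward the point x
    return c - lr * 2.0 * w * (c - x)

c = np.array([0.0, 0.0, 0.0])   # triangle center
x = np.array([0.0, 0.0, 2.0])   # selected anatomical point x*
for _ in range(100):
    c = spring_step(c, x, w=1.0)  # c converges toward x
```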
  • In Fig. 4, a schematic illustration of a field of view and the deformable model 30 is shown, to explain an adaptation of elements of the deformable model 30 to a plane within a field of view of the medical imaging system 10.
  • the field of view of the medical imaging system 10 and in particular of the ultrasound probe 14 is generally denoted by 32.
  • the anatomical structure of the object to be segmented is the heart of the patient 12 as shown in Fig. 4.
  • the segmentation of the apex of the heart is difficult, wherein according to the present invention the distance of the apex to the tip of the ultrasound probe 14 is approximately known.
  • the distance of the ultrasound probe 14 to the apex is known since the ultrasound probe 14 is located at the skin of the patient 12 and the anatomical distance of the apex to the skin is approximately known.
  • the known distance of the apex can be used to adapt the deformable model 30.
  • the target point x is implemented as a plane 34 at a certain distance from the position of the ultrasound probe 14, and a direction n from the triangle center c_j of the apex position of the deformable model 30 to the plane 34 is determined.
  • the spring energy Es can be determined to adapt the deformable model 30.
  • the spring energy with respect to the plane 34 can be calculated by a formula of the form:
  • E_s = Σ_j w_j · (n · (c_j − x))²
  • wherein n is the direction to the plane 34, c_j the center of the triangle of the deformable model 30 to be adapted, and x the target point at which the anatomical structure is actually located.
  • the so determined spring energy E_s pulls the apex c_j of the deformable model 30 to the plane 34.
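The plane-based spring term can be sketched as follows; penalizing only the squared component along the plane normal is an assumption consistent with the description above (names illustrative):

```python
import numpy as np

def spring_energy_plane(c, x, n, w=1.0):
    # penalize only the component of (c - x) along the plane normal n,
    # so the triangle center c may slide freely within the plane through x
    n = n / np.linalg.norm(n)
    return float(w * (n @ (c - x)) ** 2)
```

A center lying anywhere in the plane yields zero energy; a center offset by a distance d along n yields w·d².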
  • the position of the ultrasound probe 14 is estimated as the intersection of the two border rays 36, 38 describing the frustum of the field of view in the center slice.
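In the 2D center slice, such an intersection of two border rays given as point-plus-direction pairs can be sketched as (names illustrative):

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    # solve p1 + t*d1 == p2 + s*d2 for t and s in the 2D center slice;
    # the solution point estimates the position of the ultrasound probe
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1
```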
  • the position of a certain element of the deformable model 30 in different time frames of the image data is used.
  • Certain anatomical structures, in particular the apex of the heart, are spatially stable within a cardiac cycle, so that the position of the apex within the deformable models 30 in different time frames of the image data can be used to adapt the deformable model 30.
  • a spring energy is calculated on the basis of a distance of the corresponding elements of the deformable model 30, in particular on the basis of the distance of the triangle centers c_j between the different time frames of the image data.
  • the time-dependent spring energy to adapt the deformable model 30 is calculated by a formula of the form:
  • E_s = Σ_j w_j · (c_j,t1 − c_j,t2)²
  • wherein c_j,t1 and c_j,t2 are the triangle centers c_j of the different time frames t1 and t2, and w_j is a weight factor. If the positions of the triangle centers c_j in the different time frames t1, t2 differ strongly, the time-dependent spring energy E_s is high, and the respective triangles of the deformable model 30 are adapted accordingly in order to reduce the spring energy E_s.
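The time-dependent spring term can be sketched analogously, again assuming the squared-distance form (names illustrative):

```python
import numpy as np

def spring_energy_time(centers_t1, centers_t2, weights):
    # penalize motion of corresponding triangle centers between two time
    # frames for structures known to be spatially stable, e.g. the apex
    d = centers_t1 - centers_t2
    return float(np.sum(weights * np.sum(d * d, axis=1)))
```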
  • the general energy E of the deformable model to be minimized during the adaptation can be calculated by means of the formula:
  • E = E_ext + α·E_int + E_s.
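Putting the terms together, the adaptation can be sketched as a toy gradient descent on E = E_ext + α·E_int + E_s. The concrete forms of the terms, the step size and all names are illustrative assumptions, not the patented method itself:

```python
import numpy as np

def adapt(verts, mean_verts, neighbors, tris, targets, springs,
          alpha=0.5, lr=0.05, iters=300):
    """Toy gradient descent on E = E_ext + alpha * E_int + E_s.
    tris: (T, 3) vertex indices, targets: (T, 3) detected positions,
    springs: list of (triangle index, known point, weight)."""
    v = verts.copy()
    for _ in range(iters):
        g = np.zeros_like(v)
        centers = v[tris].mean(axis=1)
        # external term: pull the triangle centers to the detected targets
        for t in range(len(tris)):
            gc = 2.0 * (centers[t] - targets[t]) / 3.0
            for k in tris[t]:
                g[k] += gc
        # internal term: keep edge vectors close to those of the mean mesh
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                d = (v[i] - v[j]) - (mean_verts[i] - mean_verts[j])
                g[i] += alpha * 2.0 * d
                g[j] -= alpha * 2.0 * d
        # spring term: pull selected centers to known anatomical points
        for t, x, w in springs:
            gc = 2.0 * w * (centers[t] - x) / 3.0
            for k in tris[t]:
                g[k] += gc
        v -= lr * g
    return v
```

In practice the adaptation uses dedicated mesh-adaptation steps rather than plain gradient descent, but the sketch illustrates how the three energies jointly drive the vertices.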
  • the additional spring energy and the additional information regarding the anatomical features of the object can improve the segmentation within the field of view 32, in particular the segmentation of the apex of the heart. This is in particular useful if the apex is not visible in the image data received from the ultrasound probe 14.
  • Fig. 6 shows a schematic flow diagram illustrating the method for segmenting the region of interest in the multi-dimensional image data of the patient 12. The method is generally denoted by 40.
  • the method 40 starts with the acquisition of the image data by means of the ultrasound probe 14 and with the receiving of the image data by means of the interface 18 of the image processing apparatus 16 shown at step 42.
  • the deformable model 30 is selected by the selection unit 20 corresponding to the anatomical structure of the object in the region of interest 15 to be segmented as shown in step 44.
  • the deformable model 30 is selected from a memory of the processing apparatus 16.
  • the region of interest 15 is segmented by applying the deformable model 30 to the image data as shown in step 46.
  • the deformable model 30 is finally adapted on the basis of known features of the object in the image data as shown at step 48.
  • the deformable model 30 may be further adapted on the basis of the image features determined by a pattern detection and on the basis of the mean mesh of the deformable model as originally selected in order to minimize the general energy E as described above.
  • the image processing apparatus 16 can be formed as a computer program carried out on a computer and that the method 40 can be carried out by the computer program.
  • the computer program may be stored/distributed on a suitable (non-transitory) medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
  • embodiments can take the form of a computer program product accessible from a computer usable or computer readable medium providing program code for use by or in connection with a computer or any device or system that executes instructions.
  • a computer usable or computer readable medium can generally be any tangible apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution device.
  • non-transitory machine-readable medium carrying such software such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure.
  • the computer usable or computer readable medium can be, for example, without limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system.
  • Non- limiting examples of a computer readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk.
  • Optical disks may include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W), and DVD.
  • a computer usable or computer readable medium may contain or store a computer readable or usable program code such that when the computer readable or usable program code is executed on a computer, the execution of this computer readable or usable program code causes the computer to transmit another computer readable or usable program code over a communications link.
  • This communications link may use a medium that is, for example, without limitation, physical or wireless.
  • a data processing system or device suitable for storing and/or executing computer readable or computer usable program code will include one or more processors coupled directly or indirectly to memory elements through a communications fabric, such as a system bus.
  • the memory elements may include local memory employed during actual execution of the program code, bulk storage, and cache memories, which provide temporary storage of at least some computer readable or computer usable program code to reduce the number of times code may be retrieved from bulk storage during execution of the code.
  • I/O devices can be coupled to the system either directly or through intervening I/O controllers. These devices may include, for example, without limitation, keyboards, touch screen displays, and pointing devices.
  • Different communications adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems, remote printers, or storage devices through intervening private or public networks. Modems and network adapters are just a few non-limiting examples of the currently available types of communications adapters.

While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
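The bullets above describe the standard stored-program pattern: program code held on a computer-readable medium is read into local memory and executed by a processor. A minimal, purely illustrative Python sketch of that pattern follows; the file contents and the `area` function are hypothetical stand-ins, not part of the application:

```python
import pathlib
import tempfile

# Program code stored on a "computer-readable medium" -- here a temporary
# file stands in for a disk, CD-ROM, or semiconductor memory.
stored_code = "def area(width, height):\n    return width * height\n"

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as medium:
    medium.write(stored_code)
    path = medium.name

# Read the code from the medium into local memory and execute it,
# as the data processing system described above would.
namespace = {}
exec(compile(pathlib.Path(path).read_text(), path, "exec"), namespace)

print(namespace["area"](3, 4))  # prints 12
```

The same pattern applies regardless of the medium: only the step that reads the bytes into local memory changes.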

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Gynecology & Obstetrics (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
PCT/EP2015/054189 2014-03-21 2015-02-27 Image processing apparatus and method for segmenting a region of interest Ceased WO2015139937A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201580015322.6A CN106133789B (zh) 2014-03-21 2015-02-27 Image processing apparatus and method for segmenting a region of interest
JP2016557640A JP6400725B2 (ja) 2014-03-21 2015-02-27 Image processing apparatus and method for segmenting a region of interest
US15/126,626 US10043270B2 (en) 2014-03-21 2015-02-27 Image processing apparatus and method for segmenting a region of interest
EP15707134.1A EP3120323B1 (en) 2014-03-21 2015-02-27 Image processing apparatus and method for segmenting a region of interest

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP14161050.1 2014-03-21
EP14161050 2014-03-21

Publications (1)

Publication Number Publication Date
WO2015139937A1 true WO2015139937A1 (en) 2015-09-24

Family

ID=50396869

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/054189 Ceased WO2015139937A1 (en) 2014-03-21 2015-02-27 Image processing apparatus and method for segmenting a region of interest

Country Status (5)

Country Link
US (1) US10043270B2 (en)
EP (1) EP3120323B1 (en)
JP (1) JP6400725B2 (en)
CN (1) CN106133789B (en)
WO (1) WO2015139937A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11715196B2 (en) * 2017-07-18 2023-08-01 Koninklijke Philips N.V. Method and system for dynamic multi-dimensional images of an object

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6884211B2 (ja) * 2016-12-15 2021-06-09 Koninklijke Philips N.V. X-ray apparatus having a composite field of view
CN106887000B (zh) * 2017-01-23 2021-01-08 Shanghai United Imaging Healthcare Co., Ltd. Meshing processing method and system for medical images
JP6868712B2 (ja) * 2017-05-10 2021-05-12 Boston Scientific Scimed, Inc. System for assisting the display of cardiac information and method for representing cardiac information
US11006926B2 (en) 2018-02-27 2021-05-18 Siemens Medical Solutions Usa, Inc. Region of interest placement for quantitative ultrasound imaging
EP3657435A1 (en) * 2018-11-26 2020-05-27 Koninklijke Philips N.V. Apparatus for identifying regions in a brain image
EP3683773A1 (en) * 2019-01-17 2020-07-22 Koninklijke Philips N.V. Method of visualising a dynamic anatomical structure
WO2021069606A1 (en) * 2019-10-10 2021-04-15 Koninklijke Philips N.V. Segmenting a medical image
EP3866107A1 (en) * 2020-02-14 2021-08-18 Koninklijke Philips N.V. Model-based image segmentation
EP4012650A1 (en) 2020-12-14 2022-06-15 Koninklijke Philips N.V. Segmentation of anatomical structure in image
WO2024194226A1 (en) 2023-03-21 2024-09-26 Koninklijke Philips N.V. Method(s) and/or system(s) for generating a medical imaging report

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002073536A2 (en) * 2001-03-09 2002-09-19 Koninklijke Philips Electronics N.V. Image segmentation
WO2007072363A2 (en) * 2005-12-19 2007-06-28 Koninklijke Philips Electronics, N.V. Method for facilitating post-processing of images using deformable meshes
WO2011070464A2 (en) * 2009-12-10 2011-06-16 Koninklijke Philips Electronics N.V. A system for rapid and accurate quantitative assessment of traumatic brain injury

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6343936B1 (en) * 1996-09-16 2002-02-05 The Research Foundation Of State University Of New York System and method for performing a three-dimensional virtual examination, navigation and visualization
DE10111661A1 (de) 2001-03-09 2002-09-12 Philips Corp Intellectual Pty Method for segmenting a three-dimensional structure contained in an object, in particular for medical image analysis
US6961454B2 (en) * 2001-10-04 2005-11-01 Siemens Corporation Research, Inc. System and method for segmenting the left ventricle in a cardiac MR image
US7343365B2 (en) * 2002-02-20 2008-03-11 Microsoft Corporation Computer system architecture for automatic context associations
WO2004036500A2 (en) 2002-10-16 2004-04-29 Koninklijke Philips Electronics N.V. Hierarchical image segmentation
US7555151B2 (en) 2004-09-02 2009-06-30 Siemens Medical Solutions Usa, Inc. System and method for tracking anatomical structures in three dimensional images
US7672492B2 (en) 2005-01-31 2010-03-02 Siemens Medical Solutions Usa, Inc. Method of incorporating prior knowledge in level set segmentation of 3D complex structures
ATE520101T1 (de) * 2005-02-11 2011-08-15 Koninkl Philips Electronics Nv Image processing apparatus and method
JP4991697B2 (ja) * 2005-04-01 2012-08-01 Koninklijke Philips Electronics N.V. Method, system and computer program for segmenting structures in a dataset
US7545979B2 (en) * 2005-04-12 2009-06-09 General Electric Company Method and system for automatically segmenting organs from three dimensional computed tomography images
US9129387B2 (en) * 2005-06-21 2015-09-08 Koninklijke Philips N.V. Progressive model-based adaptation
US7876938B2 (en) * 2005-10-06 2011-01-25 Siemens Medical Solutions Usa, Inc. System and method for whole body landmark detection, segmentation and change quantification in digital images
CN101310305A (zh) * 2005-11-18 2008-11-19 Koninklijke Philips Electronics N.V. Method for delineating a predetermined structure in a 3D image
US8023703B2 (en) 2006-07-06 2011-09-20 The United States of America as represented by the Secretary of the Department of Health and Human Services, National Institutes of Health Hybrid segmentation of anatomical structure
EP2143071B1 (en) * 2007-04-26 2014-02-26 Koninklijke Philips N.V. Risk indication for surgical procedures
US8150116B2 (en) * 2007-07-02 2012-04-03 Siemens Corporation Method and system for detection of deformable structures in medical images
RU2521692C2 (ru) * 2008-10-10 2014-07-10 Koninklijke Philips Electronics N.V. System and method for acquiring angiographic images with automatic shutter adjustment to obtain a reduced field of view covering a segmented target structure or lesion, reducing the X-ray dose in minimally invasive X-ray-guided interventions
RU2533626C2 (ru) * 2008-11-05 2014-11-20 Koninklijke Philips Electronics N.V. Automatic sequential planning of MR scanning
EP2446416B1 (en) * 2009-06-24 2014-04-30 Koninklijke Philips N.V. Establishing a contour of a structure based on image information
US8699768B2 (en) * 2009-11-16 2014-04-15 Koninklijke Philips N.V. Scan plan field of view adjustor, determiner, and/or quality assessor
US10300246B2 (en) * 2011-08-23 2019-05-28 Jaywant Philip Parmar EM guidance device for a device enabled for endovascular navigation placement including a remote operator capability and EM endoluminal imaging technique
EP3080777B1 (en) * 2013-12-10 2017-08-02 Koninklijke Philips N.V. Model-based segmentation of an anatomical structure
KR20150098119A * 2014-02-19 Samsung Electronics Co., Ltd. System and method for removing false-positive lesion candidates in medical images


Also Published As

Publication number Publication date
CN106133789A (zh) 2016-11-16
EP3120323A1 (en) 2017-01-25
US20170084023A1 (en) 2017-03-23
JP2017507754A (ja) 2017-03-23
JP6400725B2 (ja) 2018-10-03
US10043270B2 (en) 2018-08-07
EP3120323B1 (en) 2020-05-20
CN106133789B (zh) 2022-11-18

Similar Documents

Publication Publication Date Title
EP3120323B1 (en) Image processing apparatus and method for segmenting a region of interest
KR101908520B1 (ko) Landmark detection using spatial and temporal constraints in medical imaging
JP6993334B2 (ja) Automated cardiac volume segmentation
US8582854B2 (en) Method and system for automatic coronary artery detection
US9155470B2 (en) Method and system for model based fusion on pre-operative computed tomography and intra-operative fluoroscopy using transesophageal echocardiography
EP3100236B1 (en) Method and system for constructing personalized avatars using a parameterized deformable mesh
US9495725B2 (en) Method and apparatus for medical image registration
US9687204B2 (en) Method and system for registration of ultrasound and physiological models to X-ray fluoroscopic images
CN103456037B (zh) 心脏介入的术前和术中图像数据的模型融合的方法和系统
CN109863534B (zh) 用于分割解剖结构的二维图像的方法和装置
CN104346821B (zh) 用于医学成像的自动规划
US8948484B2 (en) Method and system for automatic view planning for cardiac magnetic resonance imaging acquisition
KR101835873B1 (ko) Systems and methods for computation and visualization of segmentation uncertainty in medical images
US20100195881A1 (en) Method and apparatus for automatically identifying image views in a 3d dataset
US10019804B2 (en) Medical image processing apparatus, method, and program
EP3242602B1 (en) Ultrasound imaging apparatus and method for segmenting anatomical objects
JP2014144156A (ja) Medical image display control apparatus, method, and program
JP2022546303A (ja) Segmentation of tubular features
Butakoff et al. Left-ventricular epi- and endocardium extraction from 3D ultrasound images using an automatically constructed 3D ASM
Orkisz et al. Models, algorithms and applications in vascular image segmentation
CN110892447A (zh) Method and system for dynamic multi-dimensional images of an object
Dey et al. Estimation of cardiac respiratory-motion by semi-automatic segmentation and registration of non-contrast-enhanced 4D-CT cardiac datasets
JP2023551131A (ja) Guided acquisition of a 3D representation of an anatomical structure
WO2016131955A1 (en) Automatic 3d model based tracking of deformable medical devices with variable appearance
Dangi et al. Endocardial left ventricle feature tracking and reconstruction from tri-plane trans-esophageal echocardiography data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15707134

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015707134

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015707134

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2016557640

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 15126626

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE