CN106537452B - Device, system and method for segmenting an image of an object - Google Patents

Device, system and method for segmenting an image of an object

Info

Publication number
CN106537452B
CN106537452B CN201580038336.XA
Authority
CN
China
Prior art keywords
image
contour
motion
segmentation
image control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201580038336.XA
Other languages
Chinese (zh)
Other versions
CN106537452A (en)
Inventor
N. Schadewaldt
H. Schulz
D. Bystrov
A. R. Franz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips NV filed Critical Koninklijke Philips NV
Publication of CN106537452A publication Critical patent/CN106537452A/en
Application granted granted Critical
Publication of CN106537452B publication Critical patent/CN106537452B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20116 Active contour; Active surface; Snakes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20168 Radial search
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone

Abstract

The invention relates to a device for segmenting an image of an object, comprising: a data interface for receiving an image of the object, the image depicting a structure (36) of the object; a conversion unit for converting a user-initiated motion of an image locator device into a first contour (38) around the structure (36); a motion parameter registration unit for registering motion parameters of the user-initiated motion to the first contour (38), the motion parameters comprising a velocity and/or an acceleration of the image locator device; an image control point unit for distributing a plurality of image control points (40) on the first contour at a density that decreases with the motion parameter; and a segmentation unit for segmenting the image by determining a second contour (44) within the first contour based on the plurality of image control points (40), the segmentation unit being configured to use one or more segmentation functions.

Description

Device, system and method for segmenting an image of an object
Technical Field
The present invention relates to a device, a system and a method for segmenting an image of an object. It finds application in medical imaging, particularly in diagnosis and treatment planning.
Background
In medical imaging, image segmentation is one of the most important tasks for obtaining an in-depth analysis of medical images. The goal of image segmentation is to identify a region of interest (ROI) and to highlight the boundary of the ROI, so that an operator performing image analysis can distinguish the ROI from the rest of the image content.
There are many approaches to image segmentation. Fully automatic processing, for example, is advantageous when applied to well-defined structures with standard contour definitions. However, it sometimes fails due to image acquisition errors, anomalies in the image content or the presence of local image blur. To overcome these drawbacks of purely automatic image processing, many methods have been developed that incorporate user information into the automatic processing. In particular, user input is employed for contour correction in order to speed up and simplify the delineation of structures contained in medical images.
There are several main types of input that can be provided by the user during an interactive segmentation process. For example, the user may set values for one or more segmentation parameters, such as a threshold level for binarization, weighting factors in the cost function of a deformable model, or quality levels defining the quality criteria of an objective function.
Other methods enable the user to draw an initial contour around the target structure, which is then adjusted to improve its match to the target structure. This can be accomplished using one or more algorithmic models known in the art, including active contours, graph cuts, elastic contour models and model-based segmentation, among others.
There are also interactive segmentation approaches that take the user-initiated motion into account. When the user draws the initial contour, he typically moves an image locator device such as a mouse, and the user-initiated motion is then converted into the initial contour. One example of these methods is the live-wire method, where the mouse speed is used as an indication of local image quality to dynamically calibrate the weights in the cost function. However, the methods known in the art are limited in their accuracy and efficiency, since the information about the user-initiated motion is not effectively used to accelerate the image segmentation process.
In addition, slice-by-slice delineation of structures in 3D medical images is cumbersome and time-consuming. While current delineation tools allow a user to outline a structure or fill it from within, the accuracy required of the user is quite high. Some intelligent tools snap to image gradients, but the delineation still requires a large amount of precise mouse movement. The planning of intensity-modulated radiation therapy requires the delineation of risk structures in the planning CT. Typically, this is done manually, and the user must carefully contour the structures with a contouring tool, which is very time-consuming. For a head & neck RT planning case, it may take up to 10 hours to outline all structures. There are some image processing techniques to assist this process: for example, the lungs can be contoured by setting thresholds and seed points. However, this strategy is only applicable to a few structures. Automatic or semi-automatic contouring methods exist, but in many situations their results still need to be corrected or redone. In addition, many algorithms require a priori knowledge from a library about the structure to be contoured. Thus, for many known structures and for all new or unusual structures, a significant amount of time is still required for an accurate delineation.
In the article "Interaction in the segmentation of medical images: a survey" by S. D. Olabarriaga et al. (Medical Image Analysis 5 (2001) 127-142), existing interactive segmentation methods are discussed with respect to aspects including the type of user input, how the user input affects the computational part of the segmentation process, and the purpose of the user interaction.
Disclosure of Invention
It is an object of the present invention to provide a device, a system and a method for segmenting an image of an object that make improved use of the user input associated with one or more user-initiated motions, in order to perform image segmentation more efficiently and reliably.
In a first aspect of the present invention, a device for segmenting an image of an object is presented, comprising: a data interface for receiving an image of the object, the image depicting a structure of the object; a conversion unit for converting a user-initiated motion of an image locator device into a first contour around the structure; a motion parameter registration unit for registering motion parameters of the user-initiated motion to the first contour, the motion parameters comprising a velocity and/or an acceleration of the image locator device; an image control point unit for distributing a plurality of image control points on the first contour at a density that decreases with the motion parameter; and a segmentation unit for segmenting the image by determining a second contour within the first contour based on the plurality of image control points, the segmentation unit being configured to use one or more segmentation functions.
In another aspect of the invention, a system for segmenting an image of an object is presented, comprising: an imaging device for generating at least one image of the subject; and a device as disclosed herein for segmenting the generated at least one image. The system according to the invention thus combines the above-mentioned advantages of the device disclosed herein with the possibility of generating an image. This is particularly advantageous for applications such as diagnostic or therapy planning where both image generation and image segmentation need to be performed with high efficiency and accuracy.
In another aspect of the present invention, a method for segmenting an image of an object is presented, comprising the steps of: receiving an image of the object, the image depicting a structure of the object; converting a user-initiated motion of an image locator device into a first contour around the structure; registering motion parameters of the user-initiated motion to the first contour, the motion parameters including a velocity and/or an acceleration of the image locator device; distributing a plurality of image control points over the first contour at a density that decreases with the motion parameter; and segmenting the image by determining a second contour within the first contour based on the plurality of image control points, the segmenting using one or more segmentation functions, in particular active contours and/or model-based segmentation and/or graph cuts.
In still further aspects of the present invention, there are provided: a computer program comprising program code means for causing a computer to carry out the steps of the method disclosed herein when said computer program is carried out on a computer; and a non-transitory computer-readable recording medium having stored therein a computer program which, when executed by a device, causes the method disclosed herein to be performed.
Preferred embodiments of the invention are defined in the dependent claims. It shall be understood that the claimed system, method and computer program have similar and/or identical preferred embodiments as the claimed device and as defined in the dependent claims.
The invention enables a more efficient and reliable interactive image segmentation, in particular an intelligent lasso method. In particular, the registered motion parameters of the user-initiated motion are used to distribute the plurality of image control points over the first contour. A user performing image segmentation tends to move the image locator device, such as a mouse, more quickly when he is more certain about the result of the motion he initiates. This is especially true when the structure of the object is clearly visible. In contrast, the user tends to move the mouse more slowly when a strong image gradient is reasonably close to the mouse pointer he controls, so that he feels that high accuracy is necessary. Thus, if the motion parameters are partially high and partially low during the user-initiated motion, the first contour created by the user can generally be separated into regions with a higher accuracy, or "high accuracy regions" (HAR), and regions with a lower accuracy, or "low accuracy regions" (LAR). The HAR is relatively close to the target segmentation result, i.e. to the one or more nearest internal boundaries.
Those skilled in the art will appreciate that such a separation is merely qualitative and relative, not quantitative or absolute. The skilled person also understands that, due to its relatively higher accuracy, the HAR of the first contour follows the structure of the object more closely than the LAR. Since the density of the image control points decreases with the motion parameter, when both HAR and LAR are present in the first contour there are relatively more image control points in the HAR than in the LAR. In such a case, the second contour is determined more by the HAR than by the LAR of the first contour, resulting in an increased efficiency and reliability of the image segmentation. The skilled person understands that the invention is not limited to the case in which both HAR and LAR are present in the first contour; the first contour may also mainly comprise HAR or LAR. Independent of the actual configuration of the first contour, the invention achieves reliable segmentation results, freeing the user from the pressure of having to be very accurate throughout the user-initiated motion. The segmentation and delineation process thus becomes less cumbersome.
In this context, the image locator device may typically comprise a mouse. However, this does not limit the invention, as the image locator device may comprise any means suitable for enabling a user to perform a user-initiated motion that can be translated into a contour on a display unit such as a monitor, screen or display. In particular, the image locator device may comprise a pointer cooperating with a mouse, an electronic drawing device or a touch screen. The image control points are precisely positionable image objects that can be placed at specific locations of the image in order to highlight areas and/or modify image content. The one or more segmentation functions include any function known for image segmentation. The skilled person understands that the segmentation functions include, inter alia, active contours, model-based segmentation and graph cuts, wherein further approaches such as level sets, region growing, deformable contours, statistical shape models and interactive methods may also be used alone or in combination with each other. The skilled person further understands that the density of the image control points is determined by the distance between adjacent image control points measured along the first contour.
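By way of illustration only, the following minimal Python sketch shows one possible realization of this distribution rule. It assumes the first contour is given as an array of 2D vertices with a speed value already registered to each vertex; the function name and the numeric constants (gain, min_step, max_step) are illustrative assumptions and not part of the disclosure:

```python
import numpy as np

def place_control_points(contour, speed, gain=0.05, min_step=2.0, max_step=40.0):
    """Select contour vertices as image control points.

    The arc-length step between consecutive control points grows with the
    locally registered speed, so slowly drawn regions (HAR) receive dense
    control points and quickly drawn regions (LAR) receive sparse ones.
    """
    closed = np.vstack([contour, contour[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)  # edge lengths
    indices, travelled = [0], 0.0
    for i in range(1, len(contour)):
        travelled += seg[i - 1]
        step = float(np.clip(gain * speed[i], min_step, max_step))
        if travelled >= step:          # speed-dependent spacing consumed
            indices.append(i)
            travelled = 0.0
    return np.asarray(indices)
```

Because the spacing is clipped to [min_step, max_step], the density varies smoothly between the HAR and the LAR rather than collapsing to extremes.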
In a preferred embodiment, the segmentation unit is configured to identify a plurality of target points within the first contour, the target points each being located within a volume and/or path starting at a corresponding one of the image control points, the second contour being formed by connecting the plurality of target points. Preferably, adjacent target points may be connected to determine the second contour. The volume may preferably comprise one or more cylindrical volumes, in particular around one or more lines perpendicular to the edge of the first contour. Since there are relatively more image control points in the HAR than in the LAR, more target points are identified for the image control points of the HAR than for those of the LAR. Advantageously, this enables the second contour to be determined with increased accuracy and efficiency.
In a further preferred embodiment, the path comprises a straight path which is perpendicular or at an oblique angle to the edge of the first contour and/or has a length that increases with the motion parameter. A straight path is relatively easy to define compared to other shapes or forms, such as a curved path. As a result, the identification of the target points, and thus the image segmentation, is simplified. In addition, a straight path perpendicular to the edge of the first contour has a well-defined direction with respect to the first contour, so that a straight path can be defined easily and reliably for each image control point. Moreover, the straight paths are shorter for image control points in the HAR and longer for image control points in the LAR. The invention thus takes into account that the distance between the structure and the first contour is smaller in the HAR than in the LAR, and enables image segmentation with high accuracy.
In a further preferred embodiment, the segmentation unit is configured to analyze image parameters of the image over the volume and/or the path and to identify a target point where it detects a peak of the image parameters. Measuring image parameters is advantageous for identifying target points with high accuracy, since a quantitative analysis of image properties becomes possible, enabling even blurred image details to be taken into account during the segmentation. The image parameters may comprise any image measure known in the art of image segmentation, such as image gradients, gray values, contrast, etc.
In a further preferred embodiment, the image parameters comprise image gradients, the segmentation unit being configured to identify the target point where it detects a gradient peak of the image gradients. Image gradients are suitable parameters because the presence of an image gradient indicates a boundary between two or more groups of image content corresponding to different materials, tissues or structures. Advantageously, the present invention thus enables an accurate and easy contour correction.
In a further preferred embodiment, the gradient peak comprises a maximum gradient peak and/or a first gradient peak starting from the image control point. The maximum gradient peak is understood to be the maximum of the image gradient detected over the entire volume and/or path. In this way, the target point can be positioned with high accuracy at the boundary between two different material and/or tissue types, which advantageously increases the reliability of the interactive image segmentation. In addition, the present invention exploits the fact that the distance between the first contour and the structure of the object is relatively small in certain regions of the first contour, especially in the HAR. For such regions, it is likely that only one gradient peak exists within a short distance from the corresponding image control point. It is therefore sufficient to detect the first gradient peak in order to find the maximum gradient peak, or a gradient peak close in magnitude and/or position to the maximum gradient peak. Advantageously, the efficiency of the image segmentation is further improved.
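Purely as a non-limiting sketch, a target point search along such a ray may look as follows in Python; the nearest-pixel sampling, the half-pixel step and the function name are illustrative assumptions. A caller would derive the ray length from the registered speed, e.g. length = np.clip(k * speed, l_min, l_max), so that the rays are shorter in the HAR:

```python
import numpy as np

def find_target_point(image, start, normal, length, threshold=0.0, first_peak=False):
    """March along an inward ray and return the maximum (or first) gradient peak."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)                                 # gradient magnitude
    best, target = threshold, None
    start = np.asarray(start, dtype=float)
    normal = np.asarray(normal, dtype=float)               # unit vector pointing inward
    for r in np.arange(1.0, length, 0.5):
        x, y = start + r * normal
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= yi < mag.shape[0] and 0 <= xi < mag.shape[1]):
            break                                          # ray left the image
        if mag[yi, xi] > best:
            best, target = mag[yi, xi], np.array([x, y])
            if first_peak:
                break                                      # accept the first peak above threshold
    return target                                          # None if no qualifying peak
```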
In a further preferred embodiment, the segmentation unit is configured to identify the target point only if the gradient peak is above a predefined threshold gradient. In this way, target points that are not qualified to reasonably define the second contour are prevented from being identified. Advantageously, the reliability of the image segmentation is further improved.
In another preferred embodiment, the distance between the second contour and the first contour increases with the motion parameter. In this way a band is formed, delimited by the first and second contours, whose width is larger in the LAR than in the HAR. Such a band advantageously serves as a refined starting point for the graph cut, instead of a randomly selected band, thereby resulting in an increased reliability of the segmentation result.
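As an illustrative sketch of this band construction (not part of the disclosure), the additional inner contour can be obtained by offsetting each vertex of the first contour inward by a speed-dependent distance; using the direction towards the contour centroid as a crude inward normal, and the constants below, are simplifying assumptions:

```python
import numpy as np

def inner_contour(contour, speed, gain=0.1, d_min=2.0, d_max=25.0):
    """Offset the first contour inward by a per-vertex, speed-dependent distance."""
    center = contour.mean(axis=0)                # centroid as crude inward reference
    direction = center - contour
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    dist = np.clip(gain * speed, d_min, d_max)   # wider band in the LAR
    return contour + direction * dist[:, None]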
In a further preferred embodiment, the image comprises a first image slice and at least one second image slice, and the device further comprises a transfer unit for transferring the structure and/or the first contour and/or the second contour from the first image slice to the at least one second image slice. In this way, the structure and/or the first contour and/or the second contour can be used for segmenting the second image slice. This advantageously improves the image segmentation by adjusting the first contour and/or the second contour or by predicting new contours.
In a further preferred embodiment, the image control point unit is configured to distribute a plurality of additional image control points without using the motion parameters. In particular, the paths corresponding to the additional image control points have a length related to the curvature of the structure. Advantageously, the present invention thus enables interactive image segmentation, automatic image segmentation and combinations of both, resulting in an increased flexibility for the user.
In another preferred embodiment, the motion parameter registration unit is configured to signal an error when the motion parameter is below a predefined threshold parameter. The invention thereby makes it possible to flag segmentation results that are counter-intuitive with respect to the image content (e.g. due to the presence of an implant), or to flag situations in which image gradients cannot be measured. Advantageously, this results in a safer and more reliable image segmentation.
Drawings
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter. In the following drawings:
Fig. 1 shows a schematic block diagram of an embodiment of a device according to the invention;
Fig. 2 shows a schematic block diagram of another embodiment of a device according to the invention;
Fig. 3 shows a schematic block diagram of another embodiment of a device according to the invention;
Fig. 4 shows a schematic block diagram of an embodiment of a system according to the invention;
Fig. 5 shows a schematic block diagram of an embodiment of a method according to the invention; and
Fig. 6 shows a segmentation of a medical image.
Detailed Description
Referring to fig. 1, a schematic block diagram of a device 10a for segmenting an image 12 of an object according to a first embodiment is shown. The device 10a comprises a data interface 16 for receiving the image 12 of the object. The image 12 includes a structure 18 corresponding to a portion of the object. The object is typically a living being, such as a human being, wherein the portion corresponding to the structure 18 may for example comprise an anatomical structure, such as a lung, a brain, a heart, a stomach, etc. Those skilled in the art understand that the structure 18 is surrounded by a boundary that separates the structure 18 from the image content of the image 12 outside the structure 18.
The data interface 16 may be any type of data interface known in the art, in particular a data connection between an imaging device and the device 10a. Such a data connection is used to transfer image data from the imaging device to the device 10a for image segmentation. The type of the data interface 16 may include, but is not limited to, current loop, RS-232, GPIB, V.35, and the like.
The device 10a comprises a conversion unit 20 for converting a user-initiated motion 22 of an image locator device 24 into a first contour surrounding the structure 18. The image locator device 24 may comprise a mouse. Alternatively, the image locator device 24 may comprise a mouse pointer controlled by a mouse, by a user's finger or by another device enabling the pointer to be moved on a screen, in particular a touch screen. In a preferred embodiment, the conversion unit 20 is configured to process motion data corresponding to the user-initiated motion 22 and to generate the first contour based on the motion data. In another preferred embodiment, the conversion unit 20 is further configured to enable adding to and/or modifying the first contour, for example modifying the thickness and/or brightness and/or the type of the lines used for visualizing the first contour, such as solid and dashed lines.
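As a non-limiting illustration, the conversion step can be sketched in Python as follows, assuming the image locator device delivers timestamped samples (x, y, t); the uniform arc-length resampling and the default spacing are illustrative assumptions:

```python
import numpy as np

def trace_to_contour(samples, spacing=2.0):
    """Resample raw (x, y, t) locator samples into a closed first contour."""
    pts = np.asarray([(x, y) for x, y, _ in samples], dtype=float)
    closed = np.vstack([pts, pts[:1]])                     # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)  # edge lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length
    n = max(int(s[-1] / spacing), 4)
    t = np.linspace(0.0, s[-1], n, endpoint=False)
    x = np.interp(t, s, closed[:, 0])
    y = np.interp(t, s, closed[:, 1])
    return np.stack([x, y], axis=1)                        # first contour vertices
```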
In the embodiment shown in fig. 1, the image locator device 24 is arranged outside the device 10a, and the user-initiated motion 22 is passed to the device 10a via a data channel, indicated by the arrow between the image locator device 24 and the conversion unit 20. In the preferred embodiment of the device 10b shown in fig. 2, the image locator device 24 is integrated into the device 10b. Thus, the device 10b is configured to both receive and convert the user-initiated motion 22. In this way, the user need only perform the user-initiated motion 22, which is automatically converted into the first contour. This advantageously improves the segmentation efficiency.
The device 10a further comprises a motion parameter registration unit 26 configured to register motion parameters of the user-initiated motion 22 to the first contour. The motion parameters include a locator velocity, which is the velocity of the image locator device 24, and/or a locator acceleration, which is the acceleration of the image locator device 24. In a preferred embodiment, the motion parameter registration unit 26 is configured to register the motion parameters by receiving motion parameters measured by the motion parameter registration unit 26 itself and/or by the image locator device 24 and/or by a separate motion measurement means, in particular a motion sensor. The image locator device 24 and/or the separate motion measurement means may be configured to measure the motion parameters using optical and/or thermal and/or mechanical and/or magnetic and/or acoustic and/or other types of sensors or devices, or combinations thereof. Preferably, the motion parameters comprise the velocity and/or acceleration of a mouse and/or a pointer and/or an electronic drawing device or the like. In another embodiment, the motion parameters of a pointer may be measured while the user moves his finger on a touch screen, or on a touch-sensitive panel implementing the same or a similar functionality as a touch screen. Preferably, the motion parameters are registered to the entire first contour, i.e. to each individual image point, such as a pixel and/or voxel, of the first contour. This means that each pixel and/or voxel of the first contour receives a specific value of the motion parameter.
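For illustration only, the registration step may be sketched in Python as below, continuing the timestamped-sample assumption from the previous sketch; assigning each contour vertex the speed of its nearest raw sample is a simplification, not the prescribed registration:

```python
import numpy as np

def register_speed(samples, contour):
    """Register the locator speed to each vertex of the first contour."""
    pts = np.asarray([(x, y) for x, y, _ in samples], dtype=float)
    ts = np.asarray([t for _, _, t in samples], dtype=float)
    ds = np.linalg.norm(np.diff(pts, axis=0), axis=1)      # distance per step
    dt = np.maximum(np.diff(ts), 1e-9)                     # guard against zero dt
    speed = np.concatenate([[ds[0] / dt[0]], ds / dt])     # one value per sample
    # nearest-sample assignment of the motion parameter to the contour
    d2 = ((contour[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    return speed[np.argmin(d2, axis=1)]
```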
In another preferred embodiment of the device 10c shown in fig. 3, the conversion unit 20 and the motion parameter registration unit 26 are configured as a single element. In this way, the conversion of the user-initiated motion into the first contour and the registration of the motion parameters to the first contour are done in one single step. This advantageously results in a more efficient image segmentation. It will be appreciated that such a configuration is equally applicable to the device 10b shown in fig. 2.
The device 10a further comprises an image control point unit 28 for distributing the plurality of image control points over the first contour at a density that decreases with the motion parameter. Within the scope of the present invention, the density of the image control points refers to the number of image control points per unit length of the contour. In a preferred embodiment, the image control point unit 28 is configured to cooperate with one or more graphical user interfaces (GUIs). For example, the GUI may include one or more control elements, such as buttons, for enabling a user to activate the distribution of the image control points. In addition, the image control point unit 28 may enable a user to define the number of image control points, such that when a desired number of image control points has been selected, the distances between adjacent image control points are selected automatically so that the density of the image control points decreases with the motion parameter. The distance between adjacent image control points refers to the length of the contour segment between the adjacent image control points. Preferably, the visual shape of the image control points may also be defined via the image control point unit 28.
The device 10a further comprises a segmentation unit 30 for segmenting the image 12 by determining a second contour within the first contour based on the plurality of image control points, the segmentation unit being configured to use one or more segmentation functions. The one or more segmentation functions may include active contours, model-based segmentation and graph cuts. However, this is not a limitation of the present invention, and the segmentation functions may also include one or more of level sets, region growing, deformable contours, statistical shape models and interactive methods, wherein any combination of the above functions may be used for a particular image segmentation task. The second contour may be the final result of the image segmentation. Alternatively, the device 10a may be configured to further adjust the determined second contour in order to create at least one third contour.
Referring to fig. 4, a schematic block diagram of an embodiment of a system 32 is shown. The system 32 includes: an imaging device 34 for generating at least one image, such as the image 12, of a subject 14; and a device 10a-c as disclosed herein for segmenting the generated image 12. The imaging device 34 may include any imaging device suitable for medical and/or biological imaging, including, but not limited to, X-ray imaging, magnetic resonance imaging (MRI), medical ultrasound imaging, endoscopy, elastography, tactile imaging, thermal imaging, medical photography, nuclear medicine functional imaging such as positron emission tomography (PET), electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and the like. Preferably, the system 32 implements and/or supports radiation therapy planning.
In the embodiment illustrated in fig. 4, the dashed line between the imaging device 34 and the subject 14 indicates that the entire subject 14 is imaged by the imaging device 34. However, this is not a limitation of the present invention, but merely serves as an illustrative example of image generation. The skilled person will appreciate that the imaging device 34 may be configured to locally image one or more portions of the subject 14. In the embodiment shown in fig. 4, the system 32 comprises the device 10a shown in fig. 1. However, it will also be appreciated that the other embodiments of the devices 10b-c shown in figs. 2 and 3 and disclosed elsewhere in the present disclosure may be integrated into the system 32 for segmenting the image 12 generated by the imaging device 34.
Referring to fig. 5, a schematic block diagram of an embodiment of a method according to the present invention is shown. In the following, the method of fig. 5 is explained with reference to the example of fig. 6, which shows the segmentation of a medical image containing a femoral head structure 36. In a first step 101, the image shown in fig. 6 (A) is received, wherein the femoral head structure 36 is the enclosed structure on the left-hand side of fig. 6 (A). The image may be generated by one or more of the above-mentioned medical imaging techniques. In a second step 102, the user-initiated motion 22 of the image locator device 24 is converted into an initial contour 38 that encompasses the femoral head structure 36 on the left-hand side of fig. 6 (A). It will be appreciated that the image locator device 24 may be of any of the types mentioned with reference to figs. 1 to 3. Preferably, the user-initiated motion 22 is converted progressively while it is being performed. Alternatively, the user-initiated motion 22 may be converted after it has been completed by the user. As can be seen in fig. 6 (A), the initial contour 38 around the femoral head structure 36 is relatively precise in the contour region corresponding to the hip joint (the right extremity of the femoral head structure 36), while the initial contour 38 is relatively rough elsewhere. For an embodiment in which the image locator device 24 comprises a mouse, it can therefore be assumed that the user moved the mouse relatively slowly to create the part of the initial contour 38 corresponding to the hip joint, while moving the mouse relatively quickly when creating the other contour regions.
In a third step 103, motion parameters including the velocity and/or acceleration of the image locator device 24 (e.g. a mouse) are registered to the initial contour 38. As mentioned above, the user may move the mouse at different speeds while performing the user-initiated motion 22. In a fourth step 104, a plurality of image control points 40 are distributed over the initial contour 38, as shown in fig. 6 (B). It is noted that, for legibility, only three of the image control points 40 are indicated with reference numerals in fig. 6 (B). The image control points 40 are distributed with a density that decreases with a motion parameter, such as the speed of the mouse during the user-initiated motion 22. This can be seen in fig. 6 (B): the density of the image control points 40 is higher in the region of the initial contour 38 corresponding to the hip joint than in the other regions of the initial contour 38, the density being measured based on the distance between adjacent image control points 40 along the initial contour 38.
In a preferred embodiment, the image control points are used to identify a plurality of target points within the initial contour. In particular, the target points are identified among the image points (pixels and/or voxels) enclosed by the initial contour 38 and/or located directly on the initial contour 38. In an embodiment, the target points are each located within a volume and/or path inside the initial contour 38. Specifically, the target points are each located on a curved path and/or a straight path starting at a corresponding image control point 40 on the initial contour 38. In the embodiment shown in fig. 6 (B), a target point is identified for each image control point on a straight path 42 running from the corresponding image control point towards the inside of the initial contour 38. It is noted that, for legibility, only three of the straight paths 42 are assigned reference numerals in fig. 6 (B). In the context of the active contour segmentation method, the straight paths 42 are known as search rays, since a target point is searched along each search ray starting from the corresponding image control point 40. Preferably, each search ray is directed perpendicular to the edge of the initial contour 38. This means that the straight path 42 is perpendicular to the tangent of the initial contour 38 at the corresponding image control point 40. However, this is not a limitation of the present invention, as one or more straight paths 42 may form any angle between 0° and 360° with the corresponding tangent.
In the active contour or model-based segmentation (MBS) method, image parameters are analyzed along each straight path 42. Preferably, image gradients are detected and analyzed along each straight path. However, this is not a limitation of the present invention, as other types of image parameters, such as gray values and/or contrast, may also be analyzed along the straight paths 42. In addition, the image parameters may be analyzed in other embodiments, in which the target point is searched within a volume and/or a curved path starting at the corresponding image control point 40. Preferably, the search points may be distributed on one or more hexagonal meshes perpendicular to the respective path. In a further preferred embodiment, a target point is identified when a peak of the image parameters, in particular a gradient peak, is detected within the straight path 42, or within a search path that is a curved path, or within a search volume.
In the embodiment shown in fig. 6 (B), the straight paths 42 have a length that increases with a motion parameter such as the mouse speed. This results in straight paths 42 of different lengths, the straight paths 42 in the region of the initial contour 38 corresponding to the hip joint being relatively shorter than the other straight paths 42. Thus, for the hip joint, the gradient peak is searched within a shorter distance from the image control point 40 than for other portions of the femoral head structure 36. In another preferred embodiment, the maximum gradient peak is searched within one or more straight paths 42. This means that, of the several gradient peaks detected in a straight path 42, the one with the largest magnitude is selected to identify the target point. In still other embodiments, the first gradient peak detected from the corresponding image control point 40 is used to identify the target point. In a further preferred embodiment, the target point is identified by localizing the gradient peak by means of the local second derivative. In this way, the maximum gradient can be detected using a curved path through the corresponding image control point 40 and the identified target point.
After the target points have been identified, the initial contour 38 may be adjusted in a fifth step 105 to create an adjusted contour 44. The result is shown in fig. 6 (C), where the adjusted contour 44 follows the femoral head structure 36 with increased accuracy compared to the initial contour 38. Preferably, the adjusted contour 44 is formed by connecting adjacent target points.
Preferably, one or more detected gradient peaks are compared to a predefined threshold gradient, such that a target point is only identified if the gradient peak is above the predefined threshold gradient. Further preferably, the user may change the predefined threshold gradient, for example by moving the mouse wheel. In this manner, the user may force the adjusted contour 44 further towards the femoral head structure 36 if the adjusted contour 44 does not meet the desired segmentation target sufficiently well, or if the contour adjustment gets stuck before the desired segmentation target is met.
It is noted that the initial contour 38 corresponds to the first contour mentioned with reference to figs. 1 to 3, whereas the adjusted contour 44 corresponds to the second contour.
The graph cut method may also be applied to generate the adjusted contour 44. To this end, an annular band is determined, having the initial contour 38 as its outer boundary and an additional contour as its inner boundary. Here, the initial contour 38 corresponds to the first contour mentioned with reference to figs. 1 to 3, whereas the additional contour corresponds to the second contour. The distance between the additional contour and the initial contour 38 is selected to increase with the motion parameter (e.g. mouse speed) registered to the initial contour 38. This results in an annular band whose width is relatively large for the LAR of the initial contour 38, i.e. the regions other than the hip joint, while the width is relatively small for the contour region corresponding to the hip joint.
After the annular band has been defined within the initial contour 38, a conventional graph cut based on image intensity may be performed, for example by assigning all image points (pixels and/or voxels) on the initial contour 38 to source nodes and/or assigning the image points (pixels and/or voxels) on the additional contour to sink nodes. The image points assigned to the source nodes constitute the background, while the image points assigned to the sink nodes constitute the foreground. The graph cut method then generates a final contour while taking into account motion parameters such as the mouse speed. Preferably, a directed edge pointing from the source node may be assigned to each image point of the background and/or foreground. The edges may carry weights determined based on gray-value differences between adjacent image points and/or on gray-value distances derived from a priori knowledge about the foreground and background. A graph cut is defined as a division of the graph nodes into a part connected to the source node and a part connected to the sink node, which removes all edges between the two partitions while minimizing the sum of the weights of the cut edges, known as the cut cost. It should also be understood that other variants of the graph cut method may be applied here to determine the final contour.
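The band-restricted cut may be sketched as follows in Python, assuming the networkx library for the max-flow/min-cut computation; the 4-neighbourhood, the Gaussian capacity on gray-value differences and the sigma constant are illustrative assumptions, not the weighting prescribed above:

```python
import numpy as np
import networkx as nx

def band_graph_cut(image, band, outer, inner, sigma=10.0):
    """Minimum cut inside the annular band.

    band:  boolean mask of the band pixels;
    outer: pixels on the initial contour (source side, background);
    inner: pixels on the additional contour (sink side, foreground).
    Assumes the two contours do not touch, so every s-t path crosses
    finite-capacity band edges.
    """
    g = nx.DiGraph()
    h, w = image.shape
    for y, x in zip(*np.nonzero(band)):
        for dy, dx in ((0, 1), (1, 0)):                     # 4-neighbourhood
            y2, x2 = y + dy, x + dx
            if y2 < h and x2 < w and band[y2, x2]:
                diff = float(image[y, x]) - float(image[y2, x2])
                cap = float(np.exp(-diff * diff / (2.0 * sigma * sigma)))
                g.add_edge((y, x), (y2, x2), capacity=cap)  # cheap to cut at strong edges
                g.add_edge((y2, x2), (y, x), capacity=cap)
    for y, x in zip(*np.nonzero(outer)):
        g.add_edge('s', (y, x))         # seed edges; missing capacity means infinite
    for y, x in zip(*np.nonzero(inner)):
        g.add_edge((y, x), 't')
    _, (source_side, sink_side) = nx.minimum_cut(g, 's', 't')
    return sink_side - {'t'}            # foreground pixels of the band
```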
As mentioned for the graph cut method, the image control points 40 on the initial contour 38 are used as input for a graph of the image points (pixels and/or voxels) within the annular band. In another preferred embodiment, one or more target points are identified for defining the annular band. In still other embodiments, a maximum distance between the initial contour 38 and the additional contour is determined. In particular, the maximum distance marks a bandwidth beyond which image content within the initial contour 38 is defined as sink nodes. In still other embodiments, the annular band has a varying width, in particular as a function of a motion parameter such as the mouse speed.
In one or more embodiments, the image 12 includes a first image slice and at least one second image slice, wherein the structure 18 and/or the first contour and/or the second contour may be transferred from the first image slice to the second image slice. Preferably, the second contour is expanded or moved outward after being transferred. Further preferably, the second contour is then adjusted to form a third contour, wherein the second contour is treated in the same or a similar way as the first contour, except that the motion parameters are uniform over the entire second contour. In a further preferred embodiment, the structure 18 is transferred, expanded and then adjusted on the second image slice, wherein the structure 18 is treated in the same or a similar way as the first contour, but with uniform motion parameters. In this way, the user can, after checking the current adjustment, trigger the propagation of the contour with the same concept, i.e. automatic adjustment, to one or more next slices, and correct it if necessary. Additionally, if the user triggers the propagation, the segmentation process may be applied to multiple next slices until it is stopped, for example by a quality metric. The user can then scroll through the different slices and only re-trigger the propagation when he has to correct the segmentation result, so that the corrected version is propagated to the next few slices. This enables a very fast segmentation of a 3D structure while still taking all user input into account. The quality metric and the re-propagation triggered by user corrections ensure that only reasonable results requiring little or no editing are shown to the user, so that the time spent on corrections is very small. If the algorithm is unable to determine a good contour, the user starts contouring from scratch, which is usually much faster than making many corrections.
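A minimal sketch of this propagation loop is given below, assuming a 3D image volume indexed as (slice, y, x) and caller-supplied adjust and quality callables; the expansion factor and the quality threshold are illustrative assumptions:

```python
import numpy as np

def propagate_contour(volume, contour, start_slice, adjust, quality,
                      q_min=0.5, grow=1.05):
    """Carry an adjusted contour through consecutive slices.

    adjust(slice_2d, contour)  -> contour re-adjusted with uniform motion parameters
    quality(slice_2d, contour) -> scalar metric; propagation stops below q_min,
    so the user only sees results that need little or no editing.
    """
    results = {start_slice: contour}
    c = contour
    for z in range(start_slice + 1, volume.shape[0]):
        center = c.mean(axis=0)
        c = center + grow * (c - center)       # transfer and expand outward
        c = adjust(volume[z], c)               # same concept as for the first contour
        if c is None or quality(volume[z], c) < q_min:
            break                              # let the user correct and re-trigger
        results[z] = c
    return results
```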
In a further preferred embodiment, a third contour is predicted in the second image slice without user interaction. Preferably, a plurality of additional image control points 40 may be distributed without using the motion parameters, wherein the paths corresponding to the additional image control points have a length related to the curvature of the structure 18. In another preferred embodiment, the gradient intensity and/or gray-value profiles may be analyzed to identify an optimal third contour in the second slice.
In a further preferred embodiment, the invention enables a partial automation of the delineation, especially for structures for which no trained model is available. This is advantageous because the segmentation time can be further reduced. In another preferred embodiment, the present invention utilizes one or more previously generated contours of the structure 18 (e.g. from adjacent slices) in order to perform the image segmentation. This is advantageous because information about the structure 18, for example the presence of local image blur, becomes available, so that the accuracy of the segmentation result is further improved.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
Any reference signs in the claims shall not be construed as limiting the scope.

Claims (13)

1. An apparatus for segmenting an image (12) of an object (14), comprising:
a data interface (16) for receiving an image (12) of the object (14), the image (12) depicting a structure of the object (14);
a conversion unit (20) for converting a user-initiated motion (22) of an image locator device (24) into a first contour around the structure;
a motion parameter registration unit (26) for registering motion parameters of the user initiated motion (22) to the first contour, the motion parameters comprising a velocity and/or an acceleration of the image locator device (24);
an image control point unit (28) for distributing a plurality of image control points over the first contour at a density that decreases with the motion parameter; and
a segmentation unit (30) for segmenting the image (12) by determining a second contour within the first contour based on the plurality of image control points, the segmentation unit (30) being configured to use one or more segmentation functions,
wherein the segmentation unit (30) is configured to identify a plurality of target points within the first contour, the target points each being located within a volume and/or path starting with a corresponding image control point of the plurality of image control points, the second contour being formed by connecting the plurality of target points,
wherein the path comprises a straight path that is perpendicular or at an oblique angle to an edge of the first contour and/or has a length that increases with the motion parameter.
2. The apparatus as defined in claim 1, wherein the segmentation unit (30) is configured to analyze image parameters of the image over the volume and/or the path and to identify the target point at which it detects a peak of the image parameters.
3. The apparatus as defined in claim 2, wherein the image parameters comprise image gradients, the segmentation unit (30) being configured to identify the target point at which it detects a gradient peak of the image gradients.
4. The apparatus of claim 3, wherein the gradient peak comprises a maximum gradient peak and/or a first gradient peak from the image control point.
5. The device as claimed in claim 3, wherein the segmentation unit (30) is configured to identify the target point only if the gradient peak is above a predefined threshold gradient.
6. The apparatus of claim 1, wherein a distance between the second contour and the first contour increases with the motion parameter.
7. The apparatus of claim 1, wherein the image comprises a first image slice and at least one second image slice, the apparatus further comprising a transfer unit for transferring the structure and/or the first contour and/or the second contour from the first image slice to the at least one second image slice.
8. The device according to claim 1, wherein the image control point unit (28) is configured to distribute a plurality of additional image control points without using the motion parameters.
9. The device of claim 1, wherein the motion parameter registration unit is configured to signal an error when the motion parameter is below a predefined threshold parameter.
10. The device as claimed in claim 1, wherein the segmentation unit (30) is configured to use active contours, model-based segmentation and/or graph cuts as segmentation functions.
11. A system (32) for segmenting an image of an object (14), comprising:
an imaging device (34) for generating at least one image (12) of the subject (14); and
the device of claim 1, for segmenting the at least one generated image (12).
12. A method for segmenting an image (12) of an object (14), comprising the steps of:
receiving an image (12) of the object (14), the image (12) depicting a structure of the object (14);
translating a user-initiated motion (22) of an image locator device (24) into a first contour around the structure;
registering motion parameters of the user-initiated motion (22) to the first contour, the motion parameters comprising a velocity and/or an acceleration of the image locator device (24);
distributing a plurality of image control points over the first contour at a density that decreases with the motion parameter; and
segmenting the image (12) by determining a second contour within the first contour based on the plurality of image control points, the segmenting using one or more segmentation functions,
wherein the method further comprises identifying a plurality of target points within the first contour, the target points each being located within a volume and/or path starting with a corresponding image control point of the plurality of image control points, the second contour being formed by connecting the plurality of target points,
wherein the path comprises a straight path that is perpendicular or at an oblique angle to an edge of the first contour and/or has a length that increases with the motion parameter.
13. A computer readable medium having a computer program stored thereon, the computer program comprising program code means for causing a computer to carry out the steps of the method as claimed in claim 12 when said computer program is carried out on the computer.
CN201580038336.XA 2014-07-15 2015-06-16 Device, system and method for segmenting an image of an object Active CN106537452B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP14176985.1 2014-07-15
EP14176985 2014-07-15
PCT/EP2015/063441 WO2016008665A1 (en) 2014-07-15 2015-06-16 Device, system and method for segmenting an image of a subject.

Publications (2)

Publication Number Publication Date
CN106537452A (en) 2017-03-22
CN106537452B (en) 2021-04-09

Family

ID=51211561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580038336.XA Active CN106537452B (en) 2014-07-15 2015-06-16 Device, system and method for segmenting an image of an object

Country Status (5)

Country Link
US (1) US10223795B2 (en)
EP (2) EP3748581A1 (en)
JP (1) JP6739422B2 (en)
CN (1) CN106537452B (en)
WO (1) WO2016008665A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11723579B2 (en) 2017-09-19 2023-08-15 Neuroenhancement Lab, LLC Method and apparatus for neuroenhancement
US11717686B2 (en) 2017-12-04 2023-08-08 Neuroenhancement Lab, LLC Method and apparatus for neuroenhancement to facilitate learning and performance
US11273283B2 (en) 2017-12-31 2022-03-15 Neuroenhancement Lab, LLC Method and apparatus for neuroenhancement to enhance emotional response
US11364361B2 (en) 2018-04-20 2022-06-21 Neuroenhancement Lab, LLC System and method for inducing sleep by transplanting mental states
EP3849410A4 (en) 2018-09-14 2022-11-02 Neuroenhancement Lab, LLC System and method of improving sleep
EP3644275A1 (en) * 2018-10-22 2020-04-29 Koninklijke Philips N.V. Predicting correctness of algorithmic segmentation
US11786694B2 (en) 2019-05-24 2023-10-17 NeuroLight, Inc. Device, method, and app for facilitating sleep
CN116843708B * 2023-08-30 2023-12-12 Honor Device Co., Ltd. Image processing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2350185A (en) * 1999-05-17 2000-11-22 Ibm Automatically determining trackers along contour of a displayed image
EP1679652A1 (en) * 2005-01-06 2006-07-12 Thomson Licensing Image segmentation method and device
CN101404085A * 2008-10-07 2009-04-08 South China Normal University Partition method for interactive three-dimensional body partition sequence image
US20090297034A1 (en) * 2008-05-28 2009-12-03 Daniel Pettigrew Tools for selecting a section of interest within an image
CN102306373A * 2011-08-17 2012-01-04 Shenzhen Xudong Digital Medical Imaging Technology Co., Ltd. Method and system for dividing up three-dimensional medical image of abdominal organ
CN103049907A * 2012-12-11 2013-04-17 Shenzhen Xudong Digital Medical Imaging Technology Co., Ltd. Interactive image segmentation method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2856229B2 * 1991-09-18 1999-02-10 New Media Development Association Image clipping point detection method
JP2004181240A (en) * 2002-12-03 2004-07-02 Koninkl Philips Electronics Nv System and method for forming boundary of object imaged by ultrasonic imaging
US7376252B2 (en) * 2003-11-25 2008-05-20 Ge Medical Systems Global Technology Company, Llc User interactive method and user interface for detecting a contour of an object
US8270696B2 (en) 2007-02-09 2012-09-18 The Trustees Of The University Of Pennsylvania Image slice segmentation using midpoints of contour anchor points
WO2008141293A2 (en) 2007-05-11 2008-11-20 The Board Of Regents Of The University Of Oklahoma One Partner's Place Image segmentation system and method
CN102388403A 2009-04-03 2012-03-21 Koninklijke Philips Electronics N.V. Interactive iterative closest point algorithm for organ segmentation
JP5953842B2 * 2012-03-14 2016-07-20 Omron Corporation Image inspection method and inspection area setting method
JP6329490B2 * 2013-02-05 2018-05-23 Hitachi, Ltd. X-ray CT apparatus and image reconstruction method
JP5741660B2 * 2013-09-18 2015-07-01 Casio Computer Co., Ltd. Image processing apparatus, image processing method, and program

Also Published As

Publication number Publication date
EP3748581A1 (en) 2020-12-09
WO2016008665A1 (en) 2016-01-21
EP3170144B1 (en) 2020-11-18
US20170178340A1 (en) 2017-06-22
JP6739422B2 (en) 2020-08-12
US10223795B2 (en) 2019-03-05
CN106537452A (en) 2017-03-22
EP3170144A1 (en) 2017-05-24
JP2017527015A (en) 2017-09-14

Similar Documents

Publication Publication Date Title
CN106537452B (en) Device, system and method for segmenting an image of an object
EP3537976B1 (en) Selecting acquisition parameter for imaging system
KR101883258B1 (en) Detection of anatomical landmarks
US10133348B2 (en) Gaze-tracking driven region of interest segmentation
CN106999130B (en) Device for determining the position of an interventional instrument in a projection image
US9697600B2 (en) Multi-modal segmentatin of image data
US10628963B2 (en) Automatic detection of an artifact in patient image
US10445904B2 (en) Method and device for the automatic generation of synthetic projections
KR20170088742A (en) Workstation, medical imaging apparatus comprising the same and control method for the same
CN107787203A (en) Image registration
JP2012179360A (en) Medical image processing apparatus, x-ray computer tomography device, and medical image processing program
JP2022173271A (en) Device for imaging object
US11715208B2 (en) Image segmentation
KR101685821B1 (en) Method and System for Body and ROI Segmentation for Chest X-ray Images
WO2016113161A1 (en) Adaptive segmentation for rotational c-arm computed tomography with a reduced angular range
WO2017101990A1 (en) Determination of registration accuracy
EP4052651A1 (en) Image-based planning of tomographic scan

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant