CN115409858A - Medical image segmentation method, equipment and device - Google Patents

Medical image segmentation method, equipment and device

Info

Publication number
CN115409858A
CN115409858A (application CN202210983459.2A)
Authority
CN
China
Prior art keywords
medical image
point
segmentation
target
segmented
Prior art date
Legal status
Pending
Application number
CN202210983459.2A
Other languages
Chinese (zh)
Inventor
刘于豪
吴海燕
李倩儒
吴乙荣
李和意
陈永健
Current Assignee
Qingdao Hisense Medical Equipment Co Ltd
Original Assignee
Qingdao Hisense Medical Equipment Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Hisense Medical Equipment Co Ltd filed Critical Qingdao Hisense Medical Equipment Co Ltd
Priority to CN202210983459.2A priority Critical patent/CN115409858A/en
Publication of CN115409858A publication Critical patent/CN115409858A/en
Pending legal-status Critical Current

Classifications

    • G06T7/12 Edge-based segmentation (under G06T7/10 Segmentation; Edge detection; G06T7/00 Image analysis)
    • G06T7/0014 Biomedical image inspection using an image reference approach (under G06T7/0012 Biomedical image inspection; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06T7/90 Determination of colour characteristics (under G06T7/00 Image analysis)
    • G06T2207/10016 Video; Image sequence (under G06T2207/10 Image acquisition modality)
    • G06T2207/10081 Computed x-ray tomography [CT] (under G06T2207/10072 Tomographic images)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present application relates to the field of medical imaging technology, and in particular to a method, an apparatus, and a device for segmenting a medical image, providing a scheme for rapidly segmenting the edge of a target organ in a medical image. The method comprises: determining layer interval information between a medical image to be segmented and a reference medical image; determining, according to a plurality of reference segmentation points selected in the reference medical image and the layer interval information between the two images, the target segmentation points in the medical image to be segmented that correspond to the reference segmentation points; for any two adjacent target segmentation points in the medical image to be segmented, determining the edge of the target organ between them according to their pixel values and a predefined cost function; and segmenting the target organ from the image to be segmented according to the determined edges of the target organ between every two adjacent target segmentation points. The method and the device can greatly improve the efficiency of medical image segmentation.

Description

Medical image segmentation method, equipment and device
Technical Field
The present application relates to the field of medical imaging technologies, and in particular, to a method, a device, and an apparatus for segmenting a medical image.
Background
With the successful application of medical image segmentation technology in clinical medicine, image segmentation technology plays an increasingly important role in medical image processing and analysis, and segmented images are being widely applied to various occasions.
Currently, a common method for organ segmentation of medical images is that, for an acquired image sequence comprising a plurality of medical images, a user manually outlines the edge of the organ in each medical image, and segmentation is then performed according to the user-outlined organ edges. This manual approach makes current medical image segmentation inefficient.
Disclosure of Invention
The application aims to provide a medical image segmentation method, equipment and a device, which are used for providing a scheme for rapidly segmenting the edge of a target organ in a medical image.
In a first aspect, the present application provides a method of segmentation of a medical image, the method comprising:
determining layer interval information between a medical image to be segmented and a reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning a target organ;
determining target segmentation points corresponding to the reference segmentation points in the medical image to be segmented according to a plurality of reference segmentation points selected from the reference medical image and layer interval information between the medical image to be segmented and the reference medical image;
aiming at any two adjacent target segmentation points in the medical image to be segmented, according to the pixel values of the two adjacent target segmentation points and a predefined cost function, determining an edge of the target organ between the two adjacent target segmentation points;
and according to the determined edge of the target organ between every two adjacent target segmentation points, segmenting the target organ from the image to be segmented.
In a second aspect, the present application provides a segmentation device for medical images, the device comprising at least one processor and at least one memory;
wherein the memory stores program code which, when executed by the processor, implements a method of segmentation of medical images as described in the first aspect above.
In a third aspect, the present application provides an apparatus for segmenting a medical image, comprising:
an obtaining module, configured to determine layer interval information between the medical image to be segmented and the reference medical image; wherein the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning the target organ;
the determining module is used for determining target segmentation points which correspond to the reference segmentation points in the medical image to be segmented respectively according to a plurality of reference segmentation points selected from the reference medical image and layer interval information between the medical image to be segmented and the reference medical image;
an edge detection module, configured to determine, for any two adjacent target segmentation points in the medical image to be segmented, the edge of the target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function;
and the segmentation module is used for segmenting the target organ from the image to be segmented according to the determined edge of the target organ between every two adjacent target segmentation points.
In a fourth aspect, the present application provides a computer-readable storage medium, in which instructions, when executed by an electronic device, enable the electronic device to perform the method of segmenting a medical image as described in the first aspect above.
In a fifth aspect, the present application provides a computer program product comprising a computer program:
the computer program, when executed by a processor, implements a method of segmentation of a medical image as described in the first aspect above.
The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:
in the embodiment of the application, a reference medical image is selected from a medical image sequence, and a plurality of reference segmentation points are selected in the reference medical image. For the other medical images in the sequence, the target segmentation points corresponding to the reference segmentation points can be determined automatically according to the selected reference segmentation points and the layer interval information between each medical image to be segmented and the reference medical image; the user does not need to manually select segmentation points for each medical image, which greatly improves the efficiency of determining segmentation points in the medical images. Further, once the target segmentation points in a medical image are determined, the edge of the target organ can be generated automatically according to the pixel values of the target segmentation points and the cost function. Compared with related-art schemes in which the user manually outlines the edge of the target organ in each medical image, automatically generating the edge from a plurality of target segmentation points segments the target organ from the medical image more quickly and improves the segmentation efficiency of the medical image.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is an application scenario diagram of an alternative image segmentation method according to an embodiment of the present application;
fig. 2 is an application scenario diagram of an alternative image segmentation method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating an image segmentation method according to an embodiment of the present disclosure;
FIG. 4 is a diagram of a display interface for determining a reference segmentation point according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of a method for determining an edge of a target organ between two reference segmentation points according to an embodiment of the present application;
FIG. 6 is a diagram illustrating a method for generating a weighted directed graph according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a candidate path between two adjacent reference segmentation points according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of determining the edge of a target organ between two adjacent reference segmentation points according to an embodiment of the present application;
FIG. 9 is a flowchart illustrating an embodiment of determining a segmentation point of a target in a medical image to be segmented according to the present application;
FIG. 10 is a flowchart illustrating a method for determining a gradient direction corresponding to a reference segmentation point according to an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating a gradient direction corresponding to a reference segmentation point according to an embodiment of the present disclosure;
FIG. 12A is a schematic diagram of a pixel point in any one of the medical images according to an embodiment of the present application;
FIG. 12B is a schematic diagram illustrating the layer spacing of any two adjacent medical images according to an embodiment of the present application;
FIG. 13 is a schematic illustration of determining an abscissa offset value and an ordinate offset value in accordance with an embodiment of the present application;
FIG. 14 is a diagram illustrating a location of a reference dividing point according to an embodiment of the present application;
FIG. 15 is a three-dimensional schematic diagram illustrating the determination of a target segmentation point according to a reference segmentation point in an embodiment of the present application;
FIG. 16 is a schematic diagram illustrating the determination of a plane offset value and a depth offset value according to an embodiment of the present application;
FIG. 17 is a flowchart illustrating an accuracy verification of a target segmentation point according to an embodiment of the present disclosure;
FIG. 18 is a graph illustrating the segmentation effect of the target organ edge in the medical image according to the embodiment of the present application;
fig. 19 is a schematic structural diagram of a medical image segmentation apparatus provided in an embodiment of the present application;
fig. 20 is a schematic structural diagram of a medical image segmentation apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. The embodiments described herein are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Also, in the description of the embodiments of the present application, unless otherwise specified, "/" indicates an "or" relationship; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be construed as indicating or implying relative importance, nor as implicitly indicating the number of technical features referred to. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of this application, "a plurality" means two or more, unless indicated otherwise.
In order to facilitate understanding of the interactive medical sequence image segmentation method and apparatus provided in the embodiments of the present application, some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
(1) Computed Tomography (CT): in CT, a layer of a certain thickness of the human body is scanned with an X-ray beam; the X-rays transmitted through the layer are received by a detector, converted into visible light, then into electrical signals by photoelectric conversion, then into digital signals by an analog/digital converter, and input into a computer for processing. For image formation, the selected slice is divided into cuboids of equal volume, called voxels. From the scan information, the X-ray attenuation coefficient (absorption coefficient) of each voxel is calculated and arranged into a matrix, i.e. a digital matrix, which may be stored on a magnetic or optical disk. A digital/analog converter then converts each number in the digital matrix into a small block with a gray level ranging from black to white, i.e. a pixel; the small blocks arranged in a matrix form the CT image. The CT image is therefore a reconstructed image, and the X-ray absorption coefficient of each voxel can be calculated by different mathematical methods. The working procedure of CT is as follows: based on the different absorption and transmittance of X-rays by different tissues of the human body, a highly sensitive instrument measures the body; the measured data are input into a computer, which processes them to produce cross-sectional or three-dimensional images of the examined body part, from which small lesions anywhere in the body can be found.
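As an illustrative sketch (not part of the patent), the digital-matrix-to-pixel conversion described above can be mimicked in a few lines. The function name and the window center/width values are assumptions chosen for the example; real CT viewers apply the same windowing idea to Hounsfield-unit data:

```python
def matrix_to_ct_pixels(digital_matrix, window_center=40, window_width=400):
    """Map a digital matrix of attenuation values to 0-255 grayscale pixels
    through a display window (the window values here are illustrative)."""
    low = window_center - window_width / 2.0
    high = window_center + window_width / 2.0
    def to_gray(value):
        value = min(max(value, low), high)       # clip to the display window
        return int((value - low) / (high - low) * 255)
    return [[to_gray(v) for v in row] for row in digital_matrix]

# air, water, soft tissue, denser tissue (Hounsfield-like values):
print(matrix_to_ct_pixels([[-1000, 0], [40, 240]]))  # [[0, 102], [127, 255]]
```

Values below the window map to black, values above it to white, matching the "small blocks with unequal gray scale from black to white" described above.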
(2) Pixel: a pixel is a small tile of an image with a well-defined location and an assigned color value, which together determine how the image appears. A pixel can be considered an indivisible unit or element of the image: it cannot be cut into smaller units or elements and exists as a single cell of color.
(3) Cost function (also called loss function): a cost function maps the value of a random event, or of its related random variable, to a non-negative real number representing the "risk" or "loss" of that event. The smaller the cost function value, the smaller the error, and the closer the model and its parameters are to reality.
An application scenario of an alternative image segmentation method provided in the embodiments of the present application is described below with reference to the accompanying drawings. The medical image device 10 as shown in fig. 1 comprises an image acquisition unit 101 and an image processing unit 102; wherein:
an image acquisition unit 101 for acquiring a sequence of medical images; the medical image sequence comprises a plurality of medical images;
an image processing unit 102 for determining layer interval information between a medical image to be segmented and a reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning aiming at a target organ; determining target segmentation points corresponding to the reference segmentation points in the medical image to be segmented respectively according to a plurality of reference segmentation points selected in the reference medical image and layer interval information between the medical image to be segmented and the reference medical image; aiming at any two adjacent target segmentation points in the medical image to be segmented, determining the edge of a target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function; and according to the determined edge of the target organ between every two adjacent target segmentation points, segmenting the target organ from the image to be segmented.
In addition, the embodiment of the present application further provides an application scenario for the optional image segmentation method; as shown in fig. 2, it comprises a medical image acquisition device 20 and a server 11;
a medical image acquisition device 20 for acquiring a medical image sequence including a plurality of medical images; sending the acquired medical image sequence to the server 11;
the server 11 is configured to receive a medical image sequence sent by the medical image acquisition device 20, and determine layer interval information between a medical image to be segmented and a reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning aiming at a target organ; determining target segmentation points corresponding to the reference segmentation points in the medical image to be segmented respectively according to a plurality of reference segmentation points selected in the reference medical image and layer interval information between the medical image to be segmented and the reference medical image; aiming at any two adjacent target segmentation points in the medical image to be segmented, determining the edge of a target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function; and according to the determined edge of the target organ between every two adjacent target segmentation points, segmenting the target organ from the image to be segmented.
Of course, the method provided in the embodiment of the present application is not limited to the application scenario shown in fig. 1 or fig. 2, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited.
As shown in fig. 3, an image segmentation method according to an embodiment of the present application may specifically include the following steps:
s301, determining layer interval information between the medical image to be segmented and the reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning a target organ;
step S302, determining target segmentation points corresponding to the reference segmentation points in the medical image to be segmented according to a plurality of reference segmentation points selected in the reference medical image and layer interval information between the medical image to be segmented and the reference medical image;
step S303, aiming at any two adjacent target segmentation points in the medical image to be segmented, determining the edge of a target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function;
step S304, segmenting the target organ from the image to be segmented according to the determined edge of the target organ between every two adjacent target segmentation points.
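The steps S301 and S302 above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's actual method: the layer interval is taken as the slice-index difference times the slice spacing, and the target points are derived from the reference points by a hypothetical linear contour shrink per millimetre of interval (the patent's real point-projection uses gradient directions and offsets, described with reference to its later figures):

```python
def layer_interval(index_to_segment, index_reference, slice_spacing_mm=1.0):
    """S301 (sketch): layer interval as slice-index difference times spacing."""
    return abs(index_to_segment - index_reference) * slice_spacing_mm

def project_points(reference_points, interval, shrink_per_mm=0.0):
    """S302 (hypothetical model): move each reference point toward the
    centroid of the selected points in proportion to the layer interval."""
    cx = sum(x for x, _ in reference_points) / len(reference_points)
    cy = sum(y for _, y in reference_points) / len(reference_points)
    scale = max(0.0, 1.0 - shrink_per_mm * interval)
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale)
            for x, y in reference_points]

# Two slices apart at 2 mm spacing, shrinking 5% per mm of interval:
points = [(10, 0), (0, 10), (-10, 0), (0, -10)]
print(project_points(points, layer_interval(5, 3, 2.0), shrink_per_mm=0.05))
```

Steps S303 and S304 then connect every pair of adjacent target points with a cost-function edge, as detailed for the reference image below.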
In the embodiment of the application, a reference medical image is selected from a medical image sequence, and a plurality of reference segmentation points are selected in the reference medical image. For the other medical images in the sequence, the target segmentation points corresponding to the reference segmentation points can be determined automatically according to the selected reference segmentation points and the layer interval information between each medical image to be segmented and the reference medical image; the user does not need to manually select segmentation points for each medical image, which greatly improves the efficiency of determining segmentation points in the medical images. Further, once the target segmentation points in a medical image are determined, the edge of the target organ can be generated automatically according to the pixel values of the target segmentation points and the cost function. Compared with related-art schemes in which the user manually outlines the edge of the target organ in each medical image, automatically generating the edge from a plurality of target segmentation points segments the target organ from the medical image more quickly and improves the segmentation efficiency of the medical image.
In the embodiment of the present application, before step S301, a medical image needs to be determined from the medical image sequence as a reference medical image;
in implementation, the embodiment of the application can adopt various modes to select the reference medical image from the medical image sequence;
for example, according to the embodiment of the application, a specific medical image can be selected from the medical image sequence as a reference medical image according to a preset rule; for example, an image in the medical image sequence that contains the largest edge contour of the target organ may be selected as the reference medical image, or an image in the medical image sequence that contains the smallest edge contour of the target organ may be selected as the reference medical image.
As another example, a medical image is arbitrarily selected from the medical image sequence as a reference medical image.
The above manner is only an example of the embodiment of the present application, and the embodiment of the present application does not limit the selection manner of the reference medical image in practical application.
Based on the acquired reference medical image, the user can select a reference segmentation point from the reference medical image in the embodiment of the application; wherein the reference segmentation point is located at an edge of the target organ in the reference medical image.
In an implementation, a user may select a plurality of reference segmentation points from a reference medical image.
As shown in fig. 4, a reference medical image is displayed in a display interface of the medical image device, and a user can judge the position of a target organ in the reference medical image based on his own experience, and select a plurality of reference segmentation points, such as a point a and a point B shown in fig. 4, at the edge of the target organ in the reference medical image by a click operation.
According to the method and the device, after the user selects the reference segmentation point from the reference medical image, the edge of the target organ can be automatically generated based on the reference segmentation point selected by the user;
according to the method and the device, in the process that the user selects the reference segmentation points, the edges of the target organ can be generated based on the two adjacent reference segmentation points selected by the user;
for example, if a point a selected by the user in the reference medical image is used as a reference segmentation point, the starting point of image segmentation is determined to be point a, and if the user determines the next reference segmentation point B, the point B is determined to be the end point of image segmentation, and the edge of the target organ between the point a and the point B is generated. After determining the edge of the target organ between the point a and the point B, the point B is updated as the starting point of the image segmentation, and after the user determines the next reference segmentation point C, the point C is determined as the end point of the image segmentation, and the edge of the target organ between the point B and the point C is generated. And the like until the target organ is segmented from the medical image.
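The start/end chaining described above (A to B, then B becomes the new start, and so on around the contour) can be sketched as below. `find_edge` is a stand-in for the cost-function-based edge search described later; the straight-line stub exists only to show the chaining order:

```python
def trace_contour(seed_points, find_edge):
    """Chain organ-edge segments between consecutive user-selected
    segmentation points: A->B, then B->C, ..., closing back to the start."""
    contour = []
    # pair each point with its successor; the last point pairs with the first
    for start, end in zip(seed_points, seed_points[1:] + seed_points[:1]):
        contour.extend(find_edge(start, end))
    return contour

# Straight-line stub in place of the real edge search:
straight = lambda a, b: [a, b]
print(trace_contour(["A", "B", "C"], straight))  # ['A', 'B', 'B', 'C', 'C', 'A']
```

Each newly confirmed reference point closes one segment and opens the next, which is why the edge can be drawn incrementally while the user is still clicking.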
Or, in the embodiment of the present application, after the user selects all reference segmentation points from the reference medical image, an edge of the target organ between every two adjacent reference segmentation points is generated;
for example, the number of reference segmentation points may be preset, and after it is detected that the user selects the preset number of reference segmentation points from the reference medical image, the edge of the target organ between every two adjacent reference segmentation points is automatically generated; alternatively, after the user selects a plurality of reference segmentation points, an edge of the target organ between each two adjacent reference segmentation points is generated in response to a user-triggered image segmentation operation.
The following details the process of determining the target organ edge between the two reference segmentation points:
as shown in fig. 5, the flowchart for determining the target organ edge between two reference segmentation points in the embodiment of the present application may specifically include the following steps:
step S501, aiming at one reference segmentation point of two adjacent reference segmentation points, generating a directed weighted graph taking the reference segmentation point as a starting point according to a pixel value of the reference segmentation point and a predefined cost function;
for two reference segmentation points, a directed weighted graph is generated with one of the reference segmentation points as a starting point (wherein the starting point of the two reference segmentation points may be a segmentation point previously selected by a user).
The nodes in the directed weighted graph are pixel points in the medical image to be segmented, edges in the directed weighted graph represent the adjacent relation between the pixels, and the weight of the edges in the directed weighted graph is a cost value between two corresponding adjacent pixel points determined according to a predefined cost function.
In implementation, a reference partition point is used as a starting point, eight neighborhood pixel points adjacent to the reference partition point are used as nodes of the directed weighted graph, cost values of the reference partition point and the eight neighborhood pixel points are determined according to a predefined cost function, and the cost values are used as edge weight values between the reference partition point and the corresponding neighborhood pixel points. Then, taking eight neighborhood pixel points of the reference partition point as center pixel points respectively, taking the neighborhood pixel point adjacent to the center pixel point as a node of a directed weighted graph aiming at any center pixel point, determining cost values of the center pixel point and each neighborhood pixel point according to a predefined cost function, and taking the cost values as edge weighted values between the center pixel point and the corresponding neighborhood pixel points; and in the same way, generating a directed weighted graph taking the reference division point as a starting point.
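The graph construction above can be sketched as follows; this is an illustrative reading of the eight-neighborhood expansion, with a toy cost callable in place of the patent's cost function:

```python
def build_weighted_graph(image, start, cost):
    """Expand outward from the start point; every visited pixel becomes a
    node linked to its eight neighbours with edge weight cost(image, p, q)."""
    h, w = len(image), len(image[0])
    graph = {}                          # (x, y) -> {(nx, ny): edge weight}
    frontier, seen = [start], {start}
    while frontier:
        p = frontier.pop()
        x, y = p
        graph[p] = {}
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                q = (x + dx, y + dy)
                if (dx, dy) != (0, 0) and 0 <= q[0] < w and 0 <= q[1] < h:
                    graph[p][q] = cost(image, p, q)
                    if q not in seen:
                        seen.add(q)
                        frontier.append(q)
    return graph

# Toy cost: absolute intensity difference between the two pixels
intensity_step = lambda im, p, q: abs(im[q[1]][q[0]] - im[p[1]][p[0]])
```

In practice the expansion can stop once the next reference segmentation point has been reached, as the text notes below.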
Optionally, the predefined cost function is shown as follows:
Cost(p,q) = A*f_l(p) + B*f_g(p) + C*f_gd(p,q) + D*||p,q||_2
wherein p is a reference segmentation point or a central pixel point, q is a neighborhood pixel point, f_l is a function based on the Laplacian operator, f_g is a function based on the Sobel gradient operator, f_gd is a function based on the gradient direction, ||p,q||_2 is the two-norm (Euclidean distance) between point p and point q, A, B, C and D are the preset weights corresponding to the respective functions, and Cost(p,q) is the cost value between point p and point q.
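The cost function above can be sketched in code. The following is a minimal Python sketch, assuming standard 3x3 Laplacian and Sobel kernels and illustrative weights A, B, C, D; the exact forms of f_l, f_g and f_gd are not specified in this embodiment, so the ones below are assumptions.

```python
import numpy as np

# Standard 3x3 kernels for the Laplacian and Sobel operators.
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def _conv_at(img, y, x, kernel):
    """Apply a 3x3 kernel at (y, x); assumes (y, x) is not on the border."""
    patch = img[y - 1:y + 2, x - 1:x + 2]
    return float(np.sum(patch * kernel))

def cost(img, p, q, A=0.4, B=0.3, C=0.2, D=0.1):
    """Cost(p,q) = A*f_l(p) + B*f_g(p) + C*f_gd(p,q) + D*||p,q||_2."""
    py, px = p
    qy, qx = q
    f_l = abs(_conv_at(img, py, px, LAPLACIAN))      # Laplacian term
    gx = _conv_at(img, py, px, SOBEL_X)
    gy = _conv_at(img, py, px, SOBEL_Y)
    f_g = 1.0 / (1.0 + np.hypot(gx, gy))             # low cost on strong edges
    # Gradient-direction term (assumed form): penalize links that run
    # parallel to the local gradient, i.e. across the edge.
    link = np.array([qy - py, qx - px], dtype=float)
    grad = np.array([gy, gx], dtype=float)
    n = np.linalg.norm(link) * np.linalg.norm(grad)
    f_gd = 0.0 if n == 0 else abs(float(grad @ link)) / n
    dist = float(np.hypot(qy - py, qx - px))         # ||p,q||_2 term
    return A * f_l + B * f_g + C * f_gd + D * dist
```

The weight defaults here are placeholders; in practice A, B, C, D would be tuned for the modality being segmented.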
In an alternative manner, in the process of generating the directed weighted graph with the reference partition point as the starting point, after the next reference partition point is detected and the cost value between the next reference partition point and the corresponding neighbor pixel point is determined, the process of generating the directed weighted graph may be stopped, so as to obtain the directed weighted graph with the reference partition point as the starting point and the next reference partition point of the reference partition point as the end point.
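The graph-generation step described above can be sketched as follows: every pixel becomes a node with directed edges to its eight neighborhood pixels, each edge weighted by a cost function passed in as a callable. The function and variable names are illustrative, not from the embodiment.

```python
import numpy as np

def build_graph(img, cost_fn):
    """Build a directed weighted graph over the pixels of img: each pixel
    has directed edges to its eight neighbours, with edge weight
    cost_fn(img, p, q) for the edge from pixel p to pixel q."""
    h, w = img.shape
    graph = {}
    for y in range(h):
        for x in range(w):
            edges = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w:
                        edges.append(((ny, nx), cost_fn(img, (y, x), (ny, nx))))
            graph[(y, x)] = edges
    return graph

# Example with absolute intensity difference as a stand-in cost function:
g = build_graph(np.arange(9.0).reshape(3, 3),
                lambda im, p, q: abs(float(im[p]) - float(im[q])))
```

In practice the graph need only be grown outward from the starting reference segmentation point until the next reference segmentation point is reached, as the embodiment notes.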
As shown in fig. 6, a schematic diagram of generating a directed weighted graph according to an embodiment of the present application. For example, if the user selects point A in the reference medical image as a reference segmentation point, the starting point of image segmentation is determined to be point A; after the user determines the next reference segmentation point B, point B is determined to be the end point of image segmentation.
Firstly, point A is taken as the starting point of the directed weighted graph, and its neighborhood pixels E1, E2, E3, E4, E5, E6, E7 and E8 are taken as nodes; the cost values between the starting point A and the eight neighborhood pixels are calculated according to the predefined cost function, and each determined cost value is taken as the weight of the edge between point A and the corresponding neighborhood pixel. Then E1, E2, E3, E4, E5, E6, E7 and E8 are taken as central pixel points in turn, the neighborhood pixels of each central pixel point are taken as nodes, and the cost values between each central pixel point and its neighborhood pixels are calculated according to the predefined cost function. For example, taking point E1 as the central pixel point, the neighborhood pixels E2, A, E, C6 and C7 of point E1 are taken as nodes, and the cost values between E1 and each of them are calculated; and so on, until the cost values between the reference segmentation point B and its eight neighborhood pixels B1, E7, E6, B4, B5, B6, B7 and B8 are determined, giving the weights of the edges between the end point B and those eight neighborhood pixels. Thus, the directed weighted graph with point A as the starting point and point B as the end point shown in fig. 6 is obtained.
Step S502, determining a candidate path between two adjacent reference segmentation points according to the generated directed weighted graph;
in implementation, after the directed weighted graph is obtained, candidate paths from one reference segmentation point to the next reference segmentation point are determined from the directed weighted graph;
assuming that the generated directed weighted graph is shown in FIG. 6, candidate paths from point A to point B are selected from it; the selected candidate paths are a plurality of paths starting at point A and ending at point B, such as A-E7-B, A-E6-B, A-E6-B4-B and A-E5-B4-B, as shown in FIG. 7.
Step S503, determining the cost value of each path, and taking the path with the minimum cost value between two adjacent reference segmentation points as the edge of the target organ between the two adjacent reference segmentation points.
In implementation, after a candidate path between two adjacent reference segmentation points is determined, a cost value corresponding to the candidate path is calculated according to each candidate path;
specifically, the sum of the weight values of all edges on the candidate path is used as the cost value corresponding to the candidate path.
As shown in fig. 8, an exemplary diagram of determining the edge of the target organ between two adjacent reference segmentation points according to an embodiment of the present application. For the candidate paths shown in fig. 7, the cost value of each candidate path is calculated. For example, if the weight of the edge between A and E7 is 2, between E7 and B is 1, between A and E6 is 3, between E6 and B is 2, between E6 and B4 is 1, between B4 and B is 2, between A and E5 is 1, and between E5 and B4 is 2, then the cost value of path A-E7-B is 3, of path A-E6-B is 5, of path A-E6-B4-B is 6, and of path A-E5-B4-B is 5. The cost value of path A-E7-B is the minimum, so path A-E7-B is selected as the edge of the target organ between the two adjacent reference segmentation points A and B.
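Rather than enumerating candidate paths explicitly, the minimum-cost path between two segmentation points can be found with Dijkstra's algorithm over the directed weighted graph. A sketch using the example edge weights of fig. 8 (function and variable names are illustrative):

```python
import heapq

def min_cost_path(graph, start, end):
    """Dijkstra's shortest path on a directed weighted graph.
    graph: {node: [(neighbor, edge_weight), ...]}"""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == end:
            break
        for nbr, w in graph.get(node, ()):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the path by walking back from end to start.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[end]

# Edge weights from the example of fig. 8:
g = {
    "A":  [("E7", 2), ("E6", 3), ("E5", 1)],
    "E7": [("B", 1)],
    "E6": [("B", 2), ("B4", 1)],
    "E5": [("B4", 2)],
    "B4": [("B", 2)],
}
path, cost_val = min_cost_path(g, "A", "B")  # → (['A', 'E7', 'B'], 3.0)
```

With Dijkstra's algorithm the minimum-cost path is found in O(E log V) time, which matters when the graph spans a large pixel region between the two segmentation points.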
Based on the above manner, the embodiment of the present application can obtain the edge of the target organ in the reference medical image; furthermore, based on the reference medical image, the edge of the target organ in the other medical images of the medical image sequence can be determined automatically.
The following describes in detail an image segmentation method for any medical image to be segmented except a reference medical image in a medical image sequence:
in implementation, firstly, a target segmentation point in a medical image to be segmented is determined according to a plurality of reference segmentation points selected from a reference medical image, and then image segmentation is carried out according to the target segmentation point in the image to be segmented.
The following describes in detail the determination process of determining a target segmentation point in a medical image to be segmented from a plurality of reference segmentation points selected in a reference medical image:
as shown in fig. 9, the flowchart for determining the target segmentation point in the medical image to be segmented according to the embodiment of the present application may specifically include the following steps:
step S901, determining layer interval information between the medical image to be segmented and the reference medical image.
The medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning a target organ;
optionally, the layer interval information between the medical image to be segmented and the reference medical image is the number of layers of the medical image to be segmented and the reference medical image in the medical image sequence.
For example, if the reference medical image is a first image in the medical image sequence and the medical image to be segmented is a third image in the medical image sequence, the layer interval information num between the medical image to be segmented and the reference medical image is 2.
Step S902, according to a plurality of reference segmentation points selected in the reference medical image, determining the gradient direction corresponding to the reference segmentation points.
Alternatively, as shown in fig. 10, in the embodiment of the present application, the gradient direction corresponding to the reference segmentation point is determined according to the following manner:
step S1001, according to the pixel value corresponding to the reference division point and the pixel value of the neighborhood pixel point adjacent to the reference division point, determining a gradient value between a reference segmentation point and each neighborhood pixel point;
step S1002, determining a target neighborhood pixel point with the maximum gradient value between the neighborhood pixel points of the reference segmentation point and the reference segmentation point;
and step S1003, taking the direction between the reference segmentation point and the target neighborhood pixel point as the gradient direction corresponding to the reference segmentation point.
As shown in fig. 11, a schematic diagram of gradient directions corresponding to reference segmentation points according to an embodiment of the present application. In the figure, a point E is a reference segmentation point in a reference medical image, points E1, E2, E3, E4, E5, E6, E7 and E8 are neighborhood pixel points of the point E, gradient values of the reference segmentation point and the points E1, E2, E3, E4, E5, E6, E7 and E8 are calculated to obtain neighborhood pixel points with the maximum gradient values of the reference segmentation point and the neighborhood pixel points, and the gradient direction corresponding to the reference segmentation point is determined. For example, if the gradient value between the reference partition point E and the neighborhood pixel point E5 is the largest, the direction from the reference partition point E to the neighborhood pixel point E5 is determined to be the gradient direction corresponding to the reference partition point E.
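Steps S1001 to S1003 can be sketched as follows, approximating the gradient value between the reference segmentation point and a neighborhood pixel as the absolute pixel-value difference (the embodiment does not fix a formula for the gradient value, so this is an assumption):

```python
import numpy as np

# Offsets (dy, dx) of the eight neighbourhood pixels.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def gradient_direction(img, point):
    """Return the (dy, dx) offset toward the neighbourhood pixel with the
    largest gradient value (here: absolute pixel-value difference) from
    the reference segmentation point."""
    y, x = point
    best, best_grad = None, -1.0
    for dy, dx in NEIGHBOURS:
        ny, nx = y + dy, x + dx
        if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
            grad = abs(float(img[ny, nx]) - float(img[y, x]))
            if grad > best_grad:
                best, best_grad = (dy, dx), grad
    return best
```

The returned offset defines the gradient direction corresponding to the reference segmentation point, matching the E-to-E5 example above.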
And step S903, determining offset information corresponding to the reference segmentation point according to the layer interval information between the medical image to be segmented and the reference medical image.
The offset information corresponding to the reference segmentation point in the embodiment of the application comprises a plane offset value and a depth offset value;
optionally, in the embodiment of the present application, the plane offset value corresponding to the reference segmentation point may be determined according to the following manner:
determining a pixel offset value between two adjacent medical images in the medical image sequence according to the pixel spacing in the medical images; determining a plane offset value corresponding to a reference segmentation point according to layer interval information between the medical image to be segmented and the reference medical image and the pixel offset value;
it should be noted that, in the embodiment of the present application, the pixel distance between each pixel point and its left and right adjacent pixel points in the medical image is defined as spacingX, the pixel distance between each pixel point and its upper and lower adjacent pixel points as spacingY, and the layer distance between two adjacent medical images as spacingZ.
As shown in fig. 12A, a point A, B, C is a pixel point in any one of the medical images in the medical image sequence, a pixel distance between a pixel point a and a pixel point B is spacingX, and a pixel distance between a pixel point a and a pixel point C is spacingY; fig. 12B shows that the inter-layer distance of any two adjacent medical images is spacingZ.
Optionally, in the embodiment of the present application, the pixel offset value d corresponding to the reference segmentation point is determined from the pixel spacings spacingX and spacingY in the medical image (the formula appears in the original only as an image and is not reproduced here).
The plane offset value corresponding to the reference segmentation point can be determined according to the following formula:
D=num*d
wherein d is the pixel offset value, num is the layer interval information between the medical image to be segmented and the reference medical image, and D is the plane offset value corresponding to the reference segmentation point.
Optionally, in the embodiment of the present application, the depth offset value corresponding to the reference segmentation point may be determined according to the following manner:
and determining a depth offset value corresponding to the reference segmentation point according to the interlayer spacing between two adjacent medical images in the medical image sequence and the interlayer spacing information between the medical image to be segmented and the reference medical image.
In implementation, the depth offset value corresponding to the reference segmentation point may be determined according to the following formula:
dz=num*spacingZ
wherein num is the layer interval information between the medical image to be segmented and the reference medical image, spacingZ is the layer spacing, and dz is the depth offset value corresponding to the reference segmentation point.
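The two offset computations can be combined in a small helper. Since the formula for the per-layer pixel offset d is given only as an image, d is taken here as an input; the relation D = num*d is an assumption consistent with dz = num*spacingZ:

```python
def plane_and_depth_offset(num, spacing_z, pixel_offset):
    """Return (D, dz) for a reference segmentation point.
    num:          layer interval between the image to be segmented and
                  the reference image
    spacing_z:    layer spacing spacingZ between adjacent images
    pixel_offset: per-layer pixel offset d derived from spacingX/spacingY
                  (its exact formula appears only as an image in the
                  source, so it is taken here as an input)"""
    plane_offset = num * pixel_offset   # D = num * d (assumed relation)
    depth_offset = num * spacing_z      # dz = num * spacingZ
    return plane_offset, depth_offset

D, dz = plane_and_depth_offset(num=2, spacing_z=1.5, pixel_offset=0.8)
# D = 1.6, dz = 3.0
```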
And step S904, determining a target segmentation point in the medical image to be segmented according to the gradient direction and the offset information corresponding to the plurality of determined reference segmentation points.
After a plane offset value and a depth offset value corresponding to a reference segmentation point are determined, a target segmentation point corresponding to the reference segmentation point in a medical image to be segmented is determined according to position information of the reference segmentation point in the reference medical image, the plane offset value and the depth offset value corresponding to the reference segmentation point and the gradient direction;
optionally, respectively determining an abscissa offset value and an ordinate offset value corresponding to the reference segmentation point according to the plane offset value and the gradient direction corresponding to the reference segmentation point; determining the abscissa value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the abscissa value and the abscissa offset value of the reference segmentation point; determining a longitudinal coordinate value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the longitudinal coordinate value of the reference segmentation point and the longitudinal coordinate deviation value; and
and determining the depth coordinate value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the depth coordinate value and the depth deviation value of the reference segmentation point.
As shown in fig. 13, a schematic diagram of determining the abscissa offset value and the ordinate offset value according to the plane offset value in the embodiment of the present application.
Point E in the figure is a reference segmentation point in the reference medical image; the included angle between the gradient direction and the X axis is a, and the plane offset value is D, so the abscissa offset value is D*cos a and the ordinate offset value is D*sin a.
Because the position information of the reference segmentation point in the reference medical image comprises an abscissa value, an ordinate value and a depth coordinate value, when the position of the target segmentation point in the medical image to be segmented is determined, the abscissa value, the ordinate value and the depth coordinate value of the target segmentation point in the medical image to be segmented need to be determined; the following are introduced separately:
1. the abscissa value of the target segmentation point in the medical image to be segmented;
in implementation, according to the abscissa value and the abscissa offset value of the reference segmentation point, the abscissa value of the target segmentation point corresponding to the reference segmentation point in the medical image to be segmented is determined.
Specifically, the sum of the abscissa value of the reference division point and the abscissa offset value is taken as the abscissa value of the target division point.
2. The longitudinal coordinate value of the target segmentation point in the medical image to be segmented;
in implementation, according to the ordinate value of the reference segmentation point and the ordinate offset value, the ordinate value of the target segmentation point corresponding to the reference segmentation point in the medical image to be segmented is determined.
Specifically, the sum of the ordinate value of the reference division point and the ordinate offset value is used as the ordinate value of the target division point.
3. The depth coordinate value of the target segmentation point in the medical image to be segmented;
in implementation, according to the depth coordinate value of the reference segmentation point and the depth offset value, the depth coordinate value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented is determined.
Specifically, the sum of the depth coordinate value of the reference segmentation point and the depth offset value is used as the depth coordinate value of the target segmentation point.
For ease of understanding, the location of the reference segmentation points in the reference medical image is now illustrated. As shown in fig. 14, a schematic diagram of three-dimensional coordinates of a reference dividing point position according to an embodiment of the present application. The point E is a reference segmentation point in the reference medical image, for example, the coordinate of the point E is (X, Y, Z), the position of the reference segmentation point E corresponding to the X axis is X, the position of the reference segmentation point E corresponding to the Y axis is Y, and the depth of the reference segmentation point E corresponding to the Z axis is Z.
In the embodiment of the application, the position of the target segmentation point in the medical image to be segmented is determined according to the position information of the reference segmentation point and the abscissa offset value, the ordinate offset value and the depth offset value corresponding to the reference segmentation point.
As shown in fig. 15, a three-dimensional schematic diagram of determining a target segmentation point according to a reference segmentation point in the embodiment of the present application is shown. In fig. 15, a point E is a reference segmentation point in the reference medical image, and the coordinate of the point E is E (x, y, z), a target segmentation point in the medical image to be segmented shown by a point F in fig. 15 can be obtained according to an abscissa offset value dx, an ordinate offset value dy, and a depth offset value dz corresponding to the reference segmentation point, and the position of the target segmentation point in the medical image to be segmented is F (x + dx, y + dy, z + dz).
As shown in fig. 16, the point E is a reference segmentation point in the reference medical image, and a corresponding target segmentation point F in the medical image to be segmented can be obtained according to the determined plane offset value and the depth offset value.
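Determining the target segmentation point F from the reference segmentation point E can be sketched as follows (names are illustrative; angle is the angle a between the gradient direction and the X axis):

```python
import math

def target_point(ref, angle, plane_offset, depth_offset):
    """Project a reference segmentation point E(x, y, z) to the target
    segmentation point F(x+dx, y+dy, z+dz) in the image to be segmented."""
    x, y, z = ref
    dx = plane_offset * math.cos(angle)   # abscissa offset D*cos(a)
    dy = plane_offset * math.sin(angle)   # ordinate offset D*sin(a)
    return (x + dx, y + dy, z + depth_offset)

F = target_point((10.0, 20.0, 5.0), angle=0.0,
                 plane_offset=3.0, depth_offset=2.0)
# → (13.0, 20.0, 7.0)
```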
According to the method and the device, after the target segmentation points corresponding to the reference segmentation points in the medical image to be segmented are determined, the accuracy of the target segmentation points is verified, if the accuracy verification of the target segmentation points passes, image segmentation is performed according to the determined target segmentation points, and if the accuracy verification of the target segmentation points does not pass, the target segmentation points are corrected.
The following describes in detail the process of verifying the accuracy of the target segmentation point:
as shown in fig. 17, the flowchart for verifying the accuracy of the target segmentation point in the embodiment of the present application may specifically include the following steps:
in step S1701: and aiming at any one target segmentation point, acquiring an HU value of the target segmentation point and an HU value of a reference segmentation point corresponding to the target segmentation point.
The HU value is the pixel value of each pixel point in the medical image acquired by the medical device; if the medical image is a CT image, the HU value is the CT value.
It should be noted that, in the embodiment of the present application, when a target segmentation point in a medical image to be segmented is verified, the determined target segmentation point may be verified after the target segmentation point is determined according to one reference segmentation point; or after all the target segmentation points in the medical image to be segmented are determined, respectively checking all the determined target segmentation points.
In step S1702: the difference between the HU values of the target segmentation point and the corresponding reference segmentation point is determined.
In step S1703: judging whether the difference value between the HU value of the target segmentation point and the HU value of the corresponding reference segmentation point is within a set range; if so, step S1704 is executed, otherwise, step S1705 is executed.
The setting range in the embodiment of the present application may be a preset fixed range, or the setting range may be determined from the HU value a of the corresponding reference segmentation point (the specific range appears in the original only as a formula image and is not reproduced here).
In step S1704: and determining that the target segmentation point passes the check.
In step S1705: and determining the gradient value between the reference segmentation point and each adjacent pixel point, and taking the position information of the adjacent pixel point with the maximum gradient value as the position information of a new target segmentation point.
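The verification of steps S1701 to S1705 can be sketched as follows; the set range is assumed here to be a ±10% band around the reference point's HU value a, since the exact range is given only as a formula image:

```python
def verify_target_point(hu_target, hu_ref, tolerance=0.1):
    """Check that the target point's HU value lies within the set range
    around the reference point's HU value a. The range
    [a*(1 - tolerance), a*(1 + tolerance)] is an assumption; the source
    gives the exact range only as an image."""
    lo, hi = sorted((hu_ref * (1 - tolerance), hu_ref * (1 + tolerance)))
    return lo <= hu_target <= hi
```

If the check fails, the embodiment falls back to step S1705: recompute the gradient values between the reference segmentation point and its adjacent pixels and take the max-gradient neighbour's position as the new target segmentation point.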
After the target segmentation point in the medical image to be segmented is obtained in the above mode, the edge of the target organ in the medical image to be segmented is determined according to the determined target segmentation point;
optionally, for one target segmentation point of the two adjacent target segmentation points, generating a directed weighted graph with the one target segmentation point as a starting point according to the pixel value of the one target segmentation point and a predefined cost function; the nodes in the directed weighted graph are pixel points in the medical image to be segmented, edges in the directed weighted graph represent the adjacent relation between the pixels, and the weight of an edge in the directed weighted graph is the cost value between the two corresponding adjacent pixel points determined according to the predefined cost function; determining the cost value of each path between the two adjacent target segmentation points according to the generated directed weighted graph; and taking the path with the minimum cost value between the two adjacent target segmentation points as the edge of the target organ between the two adjacent target segmentation points.
It should be noted that, in the embodiment of the present application, a detailed process for determining the edge of the target organ in the medical image to be segmented according to the target segmentation point in the medical image to be segmented may refer to the above process for determining the edge of the target organ in the medical image according to the reference segmentation point in the reference medical image, and details are not described here again.
As shown in FIG. 18, the effect map of the edge segmentation of the target organ in the medical image according to the embodiment of the present application is shown.
Based on the same inventive concept, the embodiment of the application provides a medical image segmentation device; as shown in fig. 19, the apparatus 1900 for segmenting medical images comprises at least one processor 1901 and at least one memory 1902;
wherein the memory 1902 stores program code that, when executed by the processor 1901, causes the processor to:
determining layer interval information between a medical image to be segmented and a reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning a target organ;
determining target segmentation points corresponding to the reference segmentation points in the medical image to be segmented according to a plurality of reference segmentation points selected from the reference medical image and layer interval information between the medical image to be segmented and the reference medical image;
aiming at any two adjacent target segmentation points in the medical image to be segmented, determining the edge of the target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function;
and according to the determined edge of the target organ between every two adjacent target segmentation points, segmenting the target organ from the image to be segmented.
Optionally, the processor 1901 is specifically configured to:
respectively executing the following operations for any one reference segmentation point in the reference medical image:
determining a gradient direction corresponding to the reference segmentation point according to a pixel value corresponding to the reference segmentation point and a pixel value of an adjacent pixel point adjacent to the reference segmentation point; and
determining offset information corresponding to the reference segmentation point according to the layer interval information between the medical image to be segmented and the reference medical image;
optionally, the processor 1901 is specifically configured to:
determining a gradient value between the reference segmentation point and each adjacent pixel point according to the pixel value corresponding to the reference segmentation point and the pixel value of the adjacent pixel point adjacent to the reference segmentation point;
determining a target adjacent pixel point with the maximum gradient value between the target adjacent pixel point and the reference partition point in the adjacent pixel points of the reference partition point;
and taking the direction between the reference segmentation point and the target adjacent pixel point as the gradient direction corresponding to the reference segmentation point.
Optionally, the offset information corresponding to the reference segmentation point includes a plane offset value and a depth offset value;
the processor 1901 is specifically configured to:
determining a pixel offset value between two adjacent medical images in the medical image sequence according to a pixel pitch in the medical images; determining a plane offset value corresponding to the reference segmentation point according to the layer interval information between the medical image to be segmented and the reference medical image and the pixel offset value; and
and determining a depth offset value corresponding to the reference segmentation point according to the interlayer spacing between two adjacent medical images in the medical image sequence and the interlayer spacing information between the medical image to be segmented and the reference medical image.
Optionally, the position information of the reference segmentation point in the reference medical image includes an abscissa value, an ordinate value, and a depth coordinate value;
the processor 1901 is specifically configured to:
respectively determining an abscissa offset value and an ordinate offset value corresponding to the reference segmentation point according to the plane offset value corresponding to the reference segmentation point and the gradient direction; determining the abscissa value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the abscissa value of the reference segmentation point and the abscissa offset value; determining a longitudinal coordinate value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the longitudinal coordinate value of the reference segmentation point and the longitudinal coordinate offset value; and
and determining the depth coordinate value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the depth coordinate value of the reference segmentation point and the depth offset value.
Optionally, the processor 1901 is further configured to:
aiming at any one target segmentation point, acquiring an HU value of the target segmentation point and an HU value of a reference segmentation point corresponding to the target segmentation point;
and determining that the difference value between the HU value of the target segmentation point and the HU value of the corresponding reference segmentation point is within a set range.
Optionally, the processor 1901 is further configured to:
and if the difference value between the HU value of the target segmentation point and the HU value of the corresponding reference segmentation point is not within the set range, determining the gradient value between the reference segmentation point and each adjacent pixel point, and taking the position information of the adjacent pixel point with the maximum gradient value as the position information of a new target segmentation point.
Optionally, the processor 1901 is specifically configured to:
aiming at one target segmentation point of the two adjacent target segmentation points, generating a directed weighted graph taking the target segmentation point as a starting point according to the pixel value of the target segmentation point and a predefined cost function; the nodes in the directed weighted graph are pixel points in the medical image to be segmented, edges in the directed weighted graph represent the adjacent relation between the pixels, and the weight of the edges in the directed weighted graph is a cost value between two corresponding adjacent pixel points determined according to the predefined cost function;
determining the cost value of each path between the two adjacent target segmentation points according to the generated directed weighted graph; and taking the path with the minimum cost value between the two adjacent target segmentation points as the edge of the target organ between the two adjacent target segmentation points.
As shown in fig. 20, an embodiment of the present application provides a medical image segmentation apparatus 2000, which can be specifically applied to the medical image device in the foregoing embodiment, and includes:
an obtaining module 2001, configured to determine layer interval information between the medical image to be segmented and the reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning a target organ;
a determining module 2002, configured to determine, according to a plurality of reference segmentation points selected from the reference medical image and layer interval information between the medical image to be segmented and the reference medical image, target segmentation points corresponding to the reference segmentation points in the medical image to be segmented, respectively;
an edge detection module 2003, for any two adjacent target segmentation points in the medical image to be segmented, determining an edge of the target organ between the two adjacent target segmentation points according to pixel values of the two adjacent target segmentation points and a predefined cost function;
a segmentation module 2004, configured to segment the target organ from the image to be segmented according to the determined edge of the target organ between each pair of adjacent target segmentation points.
Optionally, the determining module 2002 is specifically configured to:
respectively executing the following operations for any one reference segmentation point in the reference medical image:
determining a gradient direction corresponding to the reference segmentation point according to a pixel value corresponding to the reference segmentation point and a pixel value of an adjacent pixel point adjacent to the reference segmentation point; and
determining offset information corresponding to the reference segmentation point according to the layer interval information between the medical image to be segmented and the reference medical image;
and determining a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the position information of the reference segmentation point in the reference medical image, the offset information corresponding to the reference segmentation point and the gradient direction.
Optionally, the determining module 2002 is specifically configured to:
determining a gradient value between the reference segmentation point and each adjacent pixel point according to the pixel value corresponding to the reference segmentation point and the pixel values of the adjacent pixel points adjacent to the reference segmentation point;
determining, among the adjacent pixel points of the reference segmentation point, the target adjacent pixel point with the maximum gradient value relative to the reference segmentation point;
and taking the direction from the reference segmentation point toward the target adjacent pixel point as the gradient direction corresponding to the reference segmentation point.
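As an illustration of the three steps above, the gradient direction can be taken as the step toward the neighbor with the largest absolute intensity difference; the 8-connected neighborhood and the use of a simple intensity difference as the gradient value are assumptions here:

```python
def gradient_direction(image, point):
    """Return the unit step (dy, dx) toward the 8-neighbor of `point`
    with the largest absolute pixel difference, i.e. the gradient
    direction in the sense described above."""
    h, w = len(image), len(image[0])
    y, x = point
    best, best_step = -1, (0, 0)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                grad = abs(image[ny][nx] - image[y][x])
                if grad > best:
                    best, best_step = grad, (dy, dx)
    return best_step

img = [[10, 10, 10],
       [10, 10, 80],   # sharp intensity change to the right
       [10, 10, 10]]
print(gradient_direction(img, (1, 1)))  # -> (0, 1)
```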
Optionally, the offset information corresponding to the reference segmentation point includes a plane offset value and a depth offset value;
the determining module 2002 is specifically configured to:
determining a pixel offset value between two adjacent medical images in the medical image sequence according to a pixel pitch in the medical images; determining a plane offset value corresponding to the reference segmentation point according to the layer interval information between the medical image to be segmented and the reference medical image and the pixel offset value; and
and determining a depth offset value corresponding to the reference segmentation point according to the interlayer spacing between two adjacent medical images in the medical image sequence and the interlayer spacing information between the medical image to be segmented and the reference medical image.
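A minimal sketch of this offset computation, assuming a per-slice contour drift in pixels (`per_layer_pixel_shift`), a quantity this description leaves open:

```python
def plane_and_depth_offset(layer_interval, pixel_spacing, slice_spacing,
                           per_layer_pixel_shift=1.0):
    """Sketch of the offset computation described above.

    layer_interval        -- number of slices between the reference image
                             and the image to be segmented
    pixel_spacing         -- in-plane pixel pitch (mm/pixel)
    slice_spacing         -- spacing between adjacent slices (mm)
    per_layer_pixel_shift -- assumed per-slice drift of the organ contour
                             in pixels; the text does not fix its value
    """
    # plane offset grows with the number of layers and the pixel pitch
    plane_offset = layer_interval * per_layer_pixel_shift * pixel_spacing
    # depth offset is simply the physical distance between the two slices
    depth_offset = layer_interval * slice_spacing
    return plane_offset, depth_offset

# Reference slice 3 layers away, 0.5 mm pixels, 1.25 mm slice spacing:
print(plane_and_depth_offset(3, 0.5, 1.25))  # -> (1.5, 3.75)
```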
Optionally, the position information of the reference segmentation point in the reference medical image includes an abscissa value, an ordinate value, and a depth coordinate value;
the determining module 2002 is specifically configured to:
respectively determining an abscissa offset value and an ordinate offset value corresponding to the reference segmentation point according to the plane offset value corresponding to the reference segmentation point and the gradient direction; determining the abscissa value of the target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the abscissa value of the reference segmentation point and the abscissa offset value; determining the ordinate value of the target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the ordinate value of the reference segmentation point and the ordinate offset value; and
and determining the depth coordinate value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the depth coordinate value of the reference segmentation point and the depth offset value.
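For illustration, if the gradient direction is represented as an in-plane angle (one possible representation; the description leaves the encoding open), the abscissa and ordinate offsets follow by projecting the plane offset onto that direction:

```python
import math

def project_point(ref_point, plane_offset, depth_offset, gradient_angle):
    """Move a reference segmentation point onto the slice being segmented.

    `ref_point` is (x, y, z); `gradient_angle` is the gradient direction
    expressed as an angle in radians in the image plane (an assumed
    representation). The plane offset is decomposed into abscissa and
    ordinate offsets along that direction; the depth offset shifts the
    slice coordinate.
    """
    x, y, z = ref_point
    dx = plane_offset * math.cos(gradient_angle)  # abscissa offset value
    dy = plane_offset * math.sin(gradient_angle)  # ordinate offset value
    return (x + dx, y + dy, z + depth_offset)

# Reference point at (100, 50, 4), 2-pixel plane offset along +x, one slice deeper:
print(project_point((100, 50, 4), 2.0, 1, 0.0))  # -> (102.0, 50.0, 5)
```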
Optionally, the determining module 2002 is further configured to:
for any target segmentation point, acquiring the HU (Hounsfield unit) value of the target segmentation point and the HU value of the reference segmentation point corresponding to the target segmentation point;
and determining that the difference value between the HU value of the target segmentation point and the HU value of the corresponding reference segmentation point is within a set range.
Optionally, the determining module 2002 is further configured to:
and if the difference value between the HU value of the target segmentation point and the HU value of the corresponding reference segmentation point is not within the set range, determining the gradient value between the reference segmentation point and each adjacent pixel point, and taking the position information of the adjacent pixel point with the maximum gradient value as the position information of a new target segmentation point.
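A sketch of this HU check and fallback; the tolerance of 50 HU and the use of the reference image when computing the fallback gradient are assumptions, since the description only speaks of "a set range":

```python
def validate_target_point(image_to_seg, ref_image, target, ref, hu_tolerance=50):
    """Check the HU difference between a projected target point and its
    reference point; if it is outside the tolerance, fall back to the
    reference point's highest-gradient neighbor, as described above.
    `hu_tolerance` is an assumed value for "a set range"."""
    ty, tx = target
    ry, rx = ref
    if abs(image_to_seg[ty][tx] - ref_image[ry][rx]) <= hu_tolerance:
        return target  # HU difference within the set range: keep the point
    # Fallback: neighbor of the reference point with the largest gradient
    h, w = len(ref_image), len(ref_image[0])
    best, best_pt = -1, target
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = ry + dy, rx + dx
            if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                grad = abs(ref_image[ny][nx] - ref_image[ry][rx])
                if grad > best:
                    best, best_pt = grad, (ny, nx)
    return best_pt

ref_img = [[100, 100], [100, 200]]
seg_img = [[100, 300], [100, 105]]
# Target (0, 1) has HU 300 vs reference HU 100: out of range, so fall
# back to the reference neighbor with the largest gradient, (1, 1).
print(validate_target_point(seg_img, ref_img, (0, 1), (0, 0)))  # -> (1, 1)
```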
Optionally, the edge detection module 2003 is specifically configured to:
for one of the two adjacent target segmentation points, generating a directed weighted graph taking that target segmentation point as a starting point according to the pixel value of the target segmentation point and a predefined cost function; the nodes in the directed weighted graph are pixel points in the medical image to be segmented, the edges in the directed weighted graph represent the adjacency relation between pixels, and the weight of each edge in the directed weighted graph is a cost value between the two corresponding adjacent pixel points determined according to the predefined cost function;
determining the cost value of each path between the two adjacent target segmentation points according to the generated directed weighted graph; and taking the path with the minimum cost value between the two adjacent target segmentation points as the edge of the target organ between the two adjacent target segmentation points.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as a memory comprising instructions, executable by a processor to perform the above-described method of segmentation of a medical image is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when being executed by a processor, carries out any one of the methods of segmentation of medical images as provided herein.
In an exemplary embodiment, aspects of a medical image segmentation method provided by the present application may also be implemented in the form of a program product, which includes program code for causing a computer device to perform the steps in the medical image segmentation method according to various exemplary embodiments of the present application described above in this specification, when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of the segmentation method for medical images of the embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of segmentation of a medical image, the method comprising:
determining layer interval information between a medical image to be segmented and a reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning a target organ;
determining target segmentation points which respectively correspond to the reference segmentation points in the medical image to be segmented according to a plurality of reference segmentation points selected in the reference medical image and layer interval information between the medical image to be segmented and the reference medical image;
for any two adjacent target segmentation points in the medical image to be segmented, determining the edge of the target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function;
and according to the determined edge of the target organ between every two adjacent target segmentation points, segmenting the target organ from the image to be segmented.
2. The method as claimed in claim 1, wherein the determining target segmentation points in the medical image to be segmented corresponding to the respective reference segmentation points according to the plurality of reference segmentation points selected in the reference medical image and the layer interval information between the medical image to be segmented and the reference medical image comprises:
respectively executing the following operations for any one reference segmentation point in the reference medical image:
determining a gradient direction corresponding to the reference segmentation point according to a pixel value corresponding to the reference segmentation point and a pixel value of an adjacent pixel point adjacent to the reference segmentation point; and
determining offset information corresponding to the reference segmentation point according to the layer interval information between the medical image to be segmented and the reference medical image;
and determining a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the position information of the reference segmentation point in the reference medical image, the offset information corresponding to the reference segmentation point and the gradient direction.
3. The method of claim 2, wherein determining the gradient direction corresponding to the reference segmentation point according to the pixel value corresponding to the reference segmentation point and the pixel values of the neighboring pixels adjacent to the reference segmentation point comprises:
determining a gradient value between the reference segmentation point and each adjacent pixel point according to the pixel value corresponding to the reference segmentation point and the pixel values of the adjacent pixel points adjacent to the reference segmentation point;
determining, among the adjacent pixel points of the reference segmentation point, the target adjacent pixel point with the maximum gradient value relative to the reference segmentation point;
and taking the direction from the reference segmentation point toward the target adjacent pixel point as the gradient direction corresponding to the reference segmentation point.
4. The method of claim 2, wherein the offset information corresponding to the reference segmentation point comprises a plane offset value and a depth offset value;
the determining offset information corresponding to the reference segmentation point according to the layer interval information between the medical image to be segmented and the reference medical image includes:
determining a pixel offset value between two adjacent medical images in the medical image sequence according to a pixel pitch in the medical images; determining a plane offset value corresponding to the reference segmentation point according to the layer interval information between the medical image to be segmented and the reference medical image and the pixel offset value; and
and determining a depth offset value corresponding to the reference segmentation point according to the interlayer spacing between two adjacent medical images in the medical image sequence and the interlayer spacing information between the medical image to be segmented and the reference medical image.
5. The method according to claim 4, wherein the positional information of the reference segmentation point in the reference medical image includes abscissa values, ordinate values, and depth coordinate values;
the determining a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the position information of the reference segmentation point in the reference medical image, the offset information corresponding to the reference segmentation point and the gradient direction includes:
respectively determining an abscissa offset value and an ordinate offset value corresponding to the reference segmentation point according to the plane offset value corresponding to the reference segmentation point and the gradient direction; determining the abscissa value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the abscissa value of the reference segmentation point and the abscissa offset value; determining a longitudinal coordinate value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the longitudinal coordinate value of the reference segmentation point and the longitudinal coordinate offset value; and
and determining the depth coordinate value of a target segmentation point corresponding to the reference segmentation point in the medical image to be segmented according to the depth coordinate value of the reference segmentation point and the depth offset value.
6. The method according to any one of claims 1 to 5, wherein after the determining of the target segmentation points corresponding to the respective reference segmentation points in the medical image to be segmented, before the determining of the edge of the target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function, the method further comprises:
for any target segmentation point, acquiring an HU value of the target segmentation point and an HU value of a reference segmentation point corresponding to the target segmentation point;
and determining that the difference value between the HU value of the target segmentation point and the HU value of the corresponding reference segmentation point is within a set range.
7. The method of claim 6, wherein the method further comprises:
and if the difference value between the HU value of the target segmentation point and the HU value of the corresponding reference segmentation point is not within the set range, determining the gradient value between the reference segmentation point and each adjacent pixel point, and taking the position information of the adjacent pixel point with the maximum gradient value as the position information of a new target segmentation point.
8. The method of any one of claims 1 to 5, wherein determining the edge of the target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function comprises:
for one of the two adjacent target segmentation points, generating a directed weighted graph taking that target segmentation point as a starting point according to the pixel value of the target segmentation point and a predefined cost function; the nodes in the directed weighted graph are pixel points in the medical image to be segmented, the edges in the directed weighted graph represent the adjacency relation between pixels, and the weight of each edge in the directed weighted graph is a cost value between the two corresponding adjacent pixel points determined according to the predefined cost function;
determining the cost value of each path between the two adjacent target segmentation points according to the generated directed weighted graph; and taking the path with the minimum cost value between the two adjacent target segmentation points as the edge of the target organ between the two adjacent target segmentation points.
9. An apparatus for segmentation of medical images, characterized in that the apparatus comprises at least one processor, and at least one memory;
wherein the memory stores program code that, when executed by the processor, causes the processor to perform the following:
determining layer interval information between a medical image to be segmented and a reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning a target organ;
determining target segmentation points corresponding to the reference segmentation points in the medical image to be segmented according to a plurality of reference segmentation points selected from the reference medical image and layer interval information between the medical image to be segmented and the reference medical image;
for any two adjacent target segmentation points in the medical image to be segmented, determining the edge of the target organ between the two adjacent target segmentation points according to the pixel values of the two adjacent target segmentation points and a predefined cost function;
and according to the determined edge of the target organ between every two adjacent target segmentation points, segmenting the target organ from the image to be segmented.
10. An apparatus for segmentation of medical images, the apparatus comprising:
an acquisition module for determining layer interval information between a medical image to be segmented and a reference medical image; the medical image to be segmented and the reference medical image are images of different layers in a medical image sequence obtained by scanning a target organ;
the determining module is used for determining target segmentation points which correspond to the reference segmentation points in the medical image to be segmented respectively according to a plurality of reference segmentation points selected from the reference medical image and layer interval information between the medical image to be segmented and the reference medical image;
the edge detection module is used for determining the edge of the target organ between any two adjacent target segmentation points in the medical image to be segmented according to the pixel values of the two adjacent target segmentation points and a predefined cost function;
and the segmentation module is used for segmenting the target organ from the image to be segmented according to the determined edge of the target organ between every two adjacent target segmentation points.
CN202210983459.2A 2022-08-16 2022-08-16 Medical image segmentation method, equipment and device Pending CN115409858A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210983459.2A CN115409858A (en) 2022-08-16 2022-08-16 Medical image segmentation method, equipment and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210983459.2A CN115409858A (en) 2022-08-16 2022-08-16 Medical image segmentation method, equipment and device

Publications (1)

Publication Number Publication Date
CN115409858A true CN115409858A (en) 2022-11-29

Family

ID=84159308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210983459.2A Pending CN115409858A (en) 2022-08-16 2022-08-16 Medical image segmentation method, equipment and device

Country Status (1)

Country Link
CN (1) CN115409858A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797872A (en) * 2023-01-31 2023-03-14 捷易(天津)包装制品有限公司 Machine vision-based packaging defect identification method, system, equipment and medium


Similar Documents

Publication Publication Date Title
US9841277B2 (en) Graphical feedback during 3D scanning operations for obtaining optimal scan resolution
US10704900B2 (en) Detection device and detection method
US8423124B2 (en) Method and system for spine visualization in 3D medical images
RU2523929C1 (en) System and method for automated planning of views in 3d brain images
US7643663B2 (en) Volume measurement in 3D datasets
CN106651895B (en) Method and device for segmenting three-dimensional image
US20150324999A1 (en) Method and System for Segmentation of the Liver in Magnetic Resonance Images Using Multi-Channel Features
WO2005031635A1 (en) System and method for three-dimensional reconstruction of a tubular organ
CN111340756B (en) Medical image lesion detection merging method, system, terminal and storage medium
US9652684B2 (en) Image processing for classification and segmentation of rock samples
CN113034389B (en) Image processing method, device, computer equipment and storage medium
CN104732520A (en) Cardio-thoracic ratio measuring algorithm and system for chest digital image
CN102132322B (en) Apparatus for determining modification of size of object
EP3047455B1 (en) Method and system for spine position detection
US8306354B2 (en) Image processing apparatus, method, and program
CN109141384A (en) Acquisition and preprocess method to data before the detection after subway tunnel completion
CN115409858A (en) Medical image segmentation method, equipment and device
CN107504917A (en) A kind of three-dimensional dimension measuring method and device
JP3236362B2 (en) Skin surface shape feature extraction device based on reconstruction of three-dimensional shape from skin surface image
EP2168492B1 (en) Medical image displaying apparatus, medical image displaying method, and medical image displaying program
US7835555B2 (en) System and method for airway detection
US8165375B2 (en) Method and system for registering CT data sets
CN113129297A (en) Automatic diameter measurement method and system based on multi-phase tumor images
EP2889001B1 (en) Shape data-generating program, shape data-generating method and shape data-generating device
JP2021085838A (en) Bar arrangement inspection system, bar arrangement inspection method and bar arrangement inspection program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination