CN109961449B - Image segmentation method and device, and three-dimensional image reconstruction method and system


Info

Publication number
CN109961449B
CN109961449B
Authority
CN
China
Prior art keywords
image
pixel
segmentation
value
term
Prior art date
Legal status
Active
Application number
CN201910300832.8A
Other languages
Chinese (zh)
Other versions
CN109961449A (en
Inventor
周朝政
朱振中
张长青
Current Assignee
Shanghai Electric Group Corp
Original Assignee
Shanghai Electric Group Corp
Priority date
Filing date
Publication date
Application filed by Shanghai Electric Group Corp filed Critical Shanghai Electric Group Corp
Priority to CN201910300832.8A priority Critical patent/CN109961449B/en
Publication of CN109961449A publication Critical patent/CN109961449A/en
Application granted granted Critical
Publication of CN109961449B publication Critical patent/CN109961449B/en

Classifications

    • G06T 5/70 Denoising; Smoothing
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/11 Region-based segmentation
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30008 Bone
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image segmentation method and device, and a three-dimensional image reconstruction method and system. The image segmentation method comprises the following steps: acquiring image data, wherein the image data comprises a probability value corresponding to each pixel composing an image, the probability value being the probability that the value of the pixel is a preset label value; constructing an objective function of the image according to the probability value corresponding to each pixel; constructing a network of the image according to the objective function; and segmenting the image according to the network to obtain an image segmentation result. The invention can realize accurate segmentation between hard tissues and can provide accurate surgical navigation for a doctor's subsequent operation.

Description

Image segmentation method and device, and three-dimensional image reconstruction method and system
Technical Field
The invention belongs to the technical field of image segmentation, and particularly relates to an image segmentation method and device, and a three-dimensional image reconstruction method and system.
Background
Image-based three-dimensional reconstruction computes and extracts three-dimensional depth information of a scene or object from multiple pictures and reconstructs a realistic three-dimensional model from that depth information. It touches many active fields, including computer image processing, computer graphics, computer vision and computer-aided design. Image-based three-dimensional reconstruction has become a promising research area with important applications in e-commerce, aerospace, remote sensing and mapping, virtual museums and other high-tech fields. Three-dimensional image reconstruction is also applied to medical imaging. For example, one approach segments the initial contour of the liver layer by layer with adaptive thresholding and morphology, selects the largest liver slice, extracts the liver region of interest and initial contour, selects seed points, and models the foreground and background colors with a Gaussian mixture model. Another, starting from the characteristics of abdominal liver CT images, combines the similarity between spatial voxels and the pixels of the CT sequence and extracts the liver accurately with a relative fuzzy connectivity method based on three-dimensional voxels and confidence intervals, providing accurate data for subsequent three-dimensional liver reconstruction. However, the prior art still cannot accurately segment hard tissue from hard tissue (e.g., the pelvis from the femur).
Disclosure of Invention
The invention aims to overcome the low precision of hard-tissue-to-hard-tissue segmentation in prior-art three-dimensional image reconstruction, and provides a high-precision image segmentation method and device, and a three-dimensional image reconstruction method and system.
The invention solves the technical problems by the following technical scheme:
the invention provides an image segmentation method, which comprises the following steps:
s1, acquiring image data, wherein the image data comprises probability values corresponding to each pixel forming an image, and the probability values are probabilities that the values of the pixels are preset label values;
s2, constructing an objective function of the image according to the probability value corresponding to each pixel; objective function
Figure BDA0002028168080000021
The method comprises the following steps:
Figure BDA0002028168080000022
wherein R (L) is a region term, B (L) is a boundary term, C is a coefficient for balancing the region term and the boundary term, and C is greater than or equal to 0; the larger C is, the larger the boundary weight is, and the boundary term factors can play a more obvious role in the segmentation process. For the segmentation between hard tissues, C should not be too large;
s3, constructing a network of the image according to the objective function;
s4, dividing the image according to the network to obtain an image division result.
Preferably, step S2 includes: the region term is obtained according to the following formula:

R(L) = Σ_{p∈Img} R_p(L_p)

wherein Img is used to characterize the image and p is used to characterize the pixel; R_p(L_p) is used to characterize the penalty term, the penalty term being the negative logarithm of the probability value.

Preferably, R_p(L_p) = −ln Pr(L_p), wherein Pr(L_p) is used to characterize the probability value.
Preferably, the image data further includes a feature value corresponding to each pixel constituting the image;

the step S2 comprises: the boundary term is obtained according to the following formula:

B(L) = Σ_{p∈Img} Σ_{q∈N(p)} B_{p,q}·δ(p,q)

where Img is used to characterize the image, p is used to characterize the pixel, and q is used to characterize a pixel in the neighborhood N(p) of pixel p; B_{p,q} is used to characterize the boundary term factor, and δ(p,q) is an indicator factor that is non-zero only when neighboring pixels p and q take different labels:

B_{p,q} = exp(−(I_p − I_q)² / (2σ²))

δ(p,q) = 1 if L_p ≠ L_q; 0 if L_p = L_q

wherein I_p is used to characterize the feature value of pixel p, I_q the feature value of pixel q, and σ is used to characterize the variance of feature values between neighboring pixels p and q.
Preferably, step S2 further comprises: minimizing the boundary term factor B_{p,q};
the step S4 includes: obtaining a minimum cut of the image according to a maximum flow minimum cut algorithm;
the characteristic value comprises a gray value, a texture characteristic value or a color characteristic value, and the value range of C is [0,5];
the image is of bone.
Preferably, the step of minimizing the boundary term factor B_{p,q} includes:
minimizing the boundary term factor B_{p,q} according to the least square method.
The invention also provides equipment for image segmentation, which comprises an image data acquisition unit, an objective function construction unit, a network construction unit and a segmentation unit;
the image data acquisition unit is used for acquiring image data, wherein the image data comprises probability values corresponding to each pixel composing the image, and the probability values are probabilities that the values of the pixels are preset tag values;
the objective function construction unit is used for constructing an objective function of the image according to the probability value corresponding to each pixel; the objective function E(L) is:

E(L) = C·R(L) + B(L)

wherein R(L) is a region term, B(L) is a boundary term, and C is a coefficient for balancing the region term and the boundary term;
the network construction unit is used for constructing a network of the image according to the objective function;
the segmentation unit is used for segmenting the image according to the network to obtain an image segmentation result.
Preferably, the objective function construction unit is further configured to obtain the region term according to the following formula:

R(L) = Σ_{p∈Img} R_p(L_p)

wherein Img is used to characterize the image and p is used to characterize the pixel; R_p(L_p) is used to characterize the penalty term, the penalty term being the negative logarithm of the probability value.

Preferably, R_p(L_p) = −ln Pr(L_p), wherein Pr(L_p) is used to characterize the probability value.
Preferably, the image data further includes a feature value corresponding to each pixel constituting the image;
the objective function construction unit is further configured to obtain the boundary term according to the following formula:

B(L) = Σ_{p∈Img} Σ_{q∈N(p)} B_{p,q}·δ(p,q)

where Img is used to characterize the image, p is used to characterize the pixel, and q is used to characterize a pixel in the neighborhood N(p) of pixel p; B_{p,q} is used to characterize the boundary term factor, and δ(p,q) is an indicator factor that is non-zero only when neighboring pixels p and q take different labels:

B_{p,q} = exp(−(I_p − I_q)² / (2σ²))

δ(p,q) = 1 if L_p ≠ L_q; 0 if L_p = L_q

wherein I_p is used to characterize the feature value of pixel p, I_q the feature value of pixel q, and σ is used to characterize the variance of feature values between neighboring pixels p and q.
Preferably, the objective function construction unit is further configured to minimize the boundary term factor B_{p,q};
the segmentation unit is also used for obtaining the minimum cut of the image according to a maximum-flow/minimum-cut algorithm;
the characteristic value comprises a gray value or a texture characteristic value or a color characteristic value;
the value range of C is [0,5];
the image is of bone.
Preferably, the objective function construction unit is further configured to minimize the boundary term factor B_{p,q} according to the least square method.
The invention also provides a three-dimensional image reconstruction method, which comprises an image segmentation step, wherein the image segmentation step is realized by adopting the image segmentation method.
Preferably, before the step of image segmentation, the reconstruction method further comprises the steps of:
collecting a plurality of CT images of a target object;
respectively carrying out noise reduction treatment on the CT images to obtain corresponding noise reduction images;
constructing an image according to the plurality of noise reduction images, and obtaining probability values and characteristic values corresponding to each pixel composing the image to generate image data;
after the step of image segmentation, the reconstruction method further comprises the steps of:
converting the image segmentation result into binary data;
and displaying the binarized data as a three-dimensional image according to an image display algorithm.
The invention also provides a three-dimensional image reconstruction system which comprises the device for image segmentation.
Preferably, the reconstruction system further comprises a CT image acquisition unit, a noise reduction unit, an image initialization unit, a binarization unit and a three-dimensional image display unit;
the CT image acquisition unit is used for acquiring a plurality of CT images of the target object;
the noise reduction unit is used for respectively carrying out noise reduction treatment on the CT images to obtain corresponding noise reduction images;
the image initializing unit is used for constructing an image according to the plurality of noise reduction images and obtaining probability values and characteristic values corresponding to each pixel composing the image so as to generate image data;
the binarization unit is used for converting the image segmentation result into binarization data;
the three-dimensional image display unit is used for displaying the binarized data as a three-dimensional image according to an image display algorithm.
The invention has the positive progress effects that: the invention can realize accurate segmentation between hard tissues and can provide accurate operation navigation for subsequent operation of doctors.
Drawings
Fig. 1 is a schematic configuration diagram of an apparatus for image segmentation of embodiment 1 of the present invention.
Fig. 2 is a schematic view of a CT bone image according to embodiment 1 of the present invention.
Fig. 3 is a schematic view of an a priori segmented image of the CT bone image of fig. 2.
Fig. 4 is a schematic diagram of a network generated by the apparatus for image segmentation of embodiment 1 of the present invention.
Fig. 5 is a flowchart of a method of image segmentation according to embodiment 1 of the present invention.
Fig. 6 is a schematic structural diagram of a three-dimensional image reconstruction system according to embodiment 2 of the present invention.
Fig. 7 is a schematic view of a three-dimensional image reconstruction system according to embodiment 2 of the present invention.
Fig. 8 is a flowchart of a three-dimensional image reconstruction method according to embodiment 2 of the present invention.
Detailed Description
The invention is further illustrated by means of the following examples, which are not intended to limit the scope of the invention.
Example 1
The present embodiment provides an apparatus for image segmentation, which includes an image data acquisition unit 101, an objective function construction unit 102, a network construction unit 103, and a segmentation unit 104, referring to fig. 1.
The image data acquisition unit 101 is configured to acquire image data including a probability value corresponding to each pixel constituting an image, the probability value being a probability that the value of the pixel is a preset label value.
The objective function construction unit 102 is configured to construct an objective function of the image according to the probability value corresponding to each pixel. The objective function E(L) is:

E(L) = C·R(L) + B(L)

wherein R(L) is a region term, B(L) is a boundary term, and C is a coefficient for balancing the region term and the boundary term, with C being 0 or more. The larger C is, the greater the weight of the region term; conversely, the smaller C is, the more prominent the role the boundary term factor plays in the segmentation process. For the segmentation between hard tissues, C should not be too large.
The network construction unit 103 is used for constructing a network of images according to the objective function.
The segmentation unit 104 is configured to segment the image according to the network to obtain an image segmentation result.
As a preferred embodiment, the apparatus for image segmentation of the present embodiment is used for segmenting an image of bone. The image data acquisition unit 101 receives bone image data generated from a plurality of CT bone images. Fig. 2 shows one of the CT bone images, where region A is bone and region B is soft tissue, with different patterns used to characterize different gray scales. The figure is only an illustration; the actual appearance of a CT bone image will be clear to a person skilled in the art. The bone image data includes the spatial information of each pixel constituting the bone image and a probability value corresponding to each pixel. In this embodiment, the probability value corresponding to a pixel is the probability that the value of the pixel is 1 (the preset label value), that is, the probability that the pixel is bone, and is taken as the pixel's normalized gray value. For example, if a pixel is represented by an 8-bit binary number and its gray value is 230, its probability value is 0.90 (230/256), representing a 90% probability that the pixel is bone; if the gray value corresponding to a pixel is 26, its probability value is 0.10, representing a 10% probability that the pixel is bone. The probability value corresponding to each pixel constituting the bone image is obtained from a prior distribution R; the specific calculation can be implemented by those skilled in the art according to common knowledge in the art and is not described here.
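As a sketch of this initialization (a minimal illustration; the function name is hypothetical, and a real system would derive the probabilities from the prior distribution R rather than plain normalization):

```python
import numpy as np

def probability_map(gray: np.ndarray) -> np.ndarray:
    """Normalize 8-bit gray values to [0, 1] bone probabilities.

    A gray value of 230 yields 230/256 ~= 0.90, i.e. a 90% probability
    that the pixel is bone, matching the example in the text.
    """
    return gray.astype(np.float64) / 256.0
```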
If the probability that a pixel is bone is characterized by a normalized gray value, the CT bone image of fig. 2 transforms into the prior segmented image shown in fig. 3, where region A' (the white region in the figure) is bone and region B' is non-bone, drawn with a vertical line pattern but actually black. However, the prior segmentation shown in fig. 3 alone does not meet the accuracy requirements of image reconstruction, so another factor must be considered to increase segmentation accuracy, which can be understood as a gradient constraint at bone-to-bone boundaries. Let L denote a segmentation under which every pixel in the image is given a label of 1 (the pixel corresponds to bone) or 0 (the pixel corresponds to non-bone). At the boundary between the pixels labeled 1 and 0, this boundary should correspond exactly to the boundary of the bone. The present embodiment therefore adds a boundary constraint B (characterized by the boundary term B(L)), and the objective function associated with the segmentation L is:
E(L) = C·R(L) + B(L)
wherein R(L) is a region term, B(L) is a boundary term, and C is a coefficient for balancing the region term and the boundary term. The objective function construction unit 102 is used to construct this objective function.
In the present embodiment, the objective function construction unit 102 is further configured to obtain the region term R(L) according to the following formula:

R(L) = Σ_{p∈Img} R_p(L_p)

wherein Img is used to characterize the bone image and p is used to characterize a pixel; R_p(L_p) is used to characterize the penalty term corresponding to the pixel, the penalty term being the negative logarithm of the probability value corresponding to the pixel. That is, the region term R(L) is the sum of the penalty terms corresponding to each pixel in the received bone image data. In this embodiment, R_p(L_p) = −ln Pr(L_p), wherein Pr(L_p) is used to characterize the probability value of the corresponding pixel. Other alternative embodiments may use other decreasing functions of the probability value as the penalty term.
From the above formula it can be seen that the penalty term of a pixel decreases as its probability value increases: when the probability that the pixel is 1 exceeds the probability that it is 0, i.e. the pixel is more likely to be bone, labeling it 1 incurs the smaller penalty. Therefore, minimizing the penalty biases the segmentation toward determining that such a pixel is bone.
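A minimal sketch of these penalty terms (the helper name is hypothetical; a small epsilon guards against taking the logarithm of zero):

```python
import numpy as np

def penalty_terms(prob: np.ndarray, eps: float = 1e-12):
    """Per-pixel penalties R_p(1) and R_p(0): negative logarithms of the
    probabilities that the pixel is bone (label 1) or non-bone (label 0)."""
    prob = np.clip(prob, eps, 1.0 - eps)
    r1 = -np.log(prob)        # small where the bone probability is high
    r0 = -np.log(1.0 - prob)  # small where the bone probability is low
    return r1, r0
```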
The objective function construction unit 102 is further configured to construct the boundary term B(L). Specifically, the boundary term is obtained according to the following formula:

B(L) = Σ_{p∈Img} Σ_{q∈N(p)} B_{p,q}·δ(p,q)

where Img is used to characterize the bone image, p is used to characterize a pixel, and q is used to characterize a pixel in the neighborhood N(p) of pixel p; B_{p,q} is used to characterize the boundary term factor, and δ(p,q) is an indicator factor that is non-zero only when the labels of pixels p and q differ. Taking a 4-neighborhood (typically denoted N4(p)) as an example, the pixels in the 4-neighborhood of pixel p are the 4 pixels located above, below, to the left and to the right of pixel p. In the present embodiment,

B_{p,q} = exp(−(I_p − I_q)² / (2σ²))

δ(p,q) = 1 if L_p ≠ L_q; 0 if L_p = L_q

wherein I_p is used to characterize the feature value of pixel p, I_q the feature value of pixel q, and σ is used to characterize the variance of feature values between pixel p and pixel q. In the present embodiment, the feature values I_p and I_q are the probability values corresponding to the pixels, which reduces the amount of data to be stored in the bone image data and improves calculation efficiency. In other alternative embodiments, the bone image data contains a feature value for each pixel, which may be a gray value (i.e., a gray value that has not been normalized), a texture feature value, or a color feature value.
The boundary term B(L) mainly considers the relationship between each pixel p in the image and its neighborhood N(p). To obtain higher segmentation accuracy, the boundary of the segmented bone should fall in a region with a large gradient; that is, the difference between pixel p and the heterogeneous points in its neighborhood N(p) should be as large as possible, which is achieved by minimizing the boundary term factor B_{p,q}. In the present embodiment, the objective function construction unit 102 minimizes the boundary term factor B_{p,q} according to the least square method. Once the calculation of B_{p,q} is defined, a person skilled in the art can implement its minimization by the least square method according to common knowledge in the art, which is not detailed here.
The boundary term factor B_{p,q} measures the similarity between pixel p and pixel q: the more similar they are, the larger B_{p,q} is. From δ(p,q), the boundary points of bones can be determined, and minimizing B_{p,q} makes the differences between pixels of different types at the bone boundary as large as possible, thereby improving segmentation accuracy.
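Under the formulas above, the boundary term factor can be sketched as follows (the helper names are assumptions, and estimating sigma from the 4-neighbor differences of the image is one common choice, not necessarily the patent's):

```python
import numpy as np

def boundary_factor(i_p: float, i_q: float, sigma: float) -> float:
    """B_{p,q} = exp(-(I_p - I_q)^2 / (2 sigma^2)): close to 1 for similar
    neighbors, close to 0 across a strong gradient such as a bone boundary."""
    return float(np.exp(-((i_p - i_q) ** 2) / (2.0 * sigma ** 2)))

def estimate_sigma(img: np.ndarray) -> float:
    """Estimate sigma as the standard deviation of 4-neighbor differences."""
    dv = img[1:, :] - img[:-1, :]   # vertical neighbor differences
    dh = img[:, 1:] - img[:, :-1]   # horizontal neighbor differences
    return float(np.concatenate([dv.ravel(), dh.ravel()]).std())
```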
C is a value greater than or equal to 0 that balances the region term and the boundary term, weighing their relative importance. To obtain the best segmentation effect, the preferred value range of C in this embodiment is [0,5], obtained through trial and error.
After determining the region term R(L), the boundary term B(L) and the coefficient C, the objective function E(L) = C·R(L) + B(L) is obtained.
Next, the network construction unit 103 constructs a network of the image from the objective function and the spatial information of each pixel contained in the image data. The network is shown in fig. 4, where p is a pixel in the image, q is a pixel in the neighborhood N(p) of pixel p, T is the sink of the network, and S is the source. The selection of the source S and the sink T during network construction can be performed by a person skilled in the art and is not detailed here. In this embodiment, the sink T corresponds to a pixel identified as bone in the top slice of the bone image, and the source S to a pixel identified as bone in the bottom slice. After the network is constructed, every pixel of the bone image is connected to both the source S and the sink T, each pixel is connected to the pixels in its 4-neighborhood, and the source S and the sink T are not connected to each other. The weights of the edges (also called "arcs") generated by the network construction are as follows: the weight of edge {p,q} is B_{p,q}; the weight of edge {p,T} is C·R_p(1); the weight of edge {p,S} is C·R_p(0). R_p(1) is determined by the probability that the value of pixel p is 1, i.e. the probability value corresponding to pixel p; R_p(0) by the probability that the value of pixel p is 0, which is 1 minus the probability value corresponding to pixel p.
After the network construction is completed, the segmentation unit 104 segments the image according to the network to obtain an image segmentation result. In order to achieve high-precision segmentation, in the present embodiment, the segmentation unit 104 obtains the minimum cut of the image according to a maximum-flow/minimum-cut algorithm.
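A compact sketch of the construction and cut just described (a simplified 2-D, 4-neighborhood version; the helper names, the defaults C=2.0 and sigma=1.0, and the use of networkx are assumptions made for brevity, where a production system would typically use a dedicated graph-cut library):

```python
import networkx as nx
import numpy as np

def build_graph(prob, feat, C=2.0, sigma=1.0):
    """Build the segmentation network: t-links with capacities C*R_p(1) and
    C*R_p(0), n-links with capacities B_{p,q} over the 4-neighborhood."""
    h, w = prob.shape
    g = nx.DiGraph()
    eps = 1e-12
    for y in range(h):
        for x in range(w):
            p = (y, x)
            r1 = -np.log(max(prob[p], eps))        # R_p(1)
            r0 = -np.log(max(1.0 - prob[p], eps))  # R_p(0)
            g.add_edge(p, 'T', capacity=C * r1)    # edge {p, T}
            g.add_edge('S', p, capacity=C * r0)    # edge {p, S}
            for q in ((y, x + 1), (y + 1, x)):     # right and bottom neighbors
                if q[0] < h and q[1] < w:
                    b = float(np.exp(-(feat[p] - feat[q]) ** 2
                                     / (2.0 * sigma ** 2)))
                    g.add_edge(p, q, capacity=b)   # n-links in both directions
                    g.add_edge(q, p, capacity=b)
    return g

def segment(prob, feat, C=2.0, sigma=1.0):
    """Minimum cut of the network; source-side pixels are labeled bone (1)."""
    g = build_graph(prob, feat, C, sigma)
    _, (source_side, _) = nx.minimum_cut(g, 'S', 'T')
    labels = np.zeros(prob.shape, dtype=np.uint8)
    for node in source_side:
        if node != 'S':
            labels[node] = 1
    return labels
```

In this embodiment the feature values equal the probability values, so segment(prob, prob, C, sigma) reproduces the setup described above.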
A network is denoted G(V,E), where V is the set of all vertices (i.e., all pixels) of the network G, of the form {v1, v2, v3, …}; E is the set of all edges in the network, of the form {<v1,v2>, <v3,v1>, …}. When the edges of the network are directed, it is called a directed network.
Let G(V,E) be a directed network; specify one vertex in V as the source point (labeled S) and another as the sink point (labeled T). For each arc <u,v> ∈ E there is a weight c(u,v) > 0, called the arc capacity. Such a network is commonly referred to as a capacity network.
The actual traffic (abbreviated traffic) through each arc < u, v > in the capacity network is denoted as f (u, v).
The set of traffic on all arcs { f (u, v) }, is referred to as one network flow of the capacity network.
The network flow f satisfying the following conditions in the capacity network G is called a feasible flow:
flow restriction conditions for arcs: 0< = f (u, v) < = c (u, v).
Balance condition: for any vertex v other than S and T, the inflow equals the outflow. It follows that the flow out of S equals the flow into T; this value is called the flow of the feasible flow.
Maximum flow: in a capacity network, the feasible flow with the largest traffic is called the network maximum flow, simply the maximum flow.
Types of arc. Saturated arc: f(u,v) = c(u,v). Unsaturated arc: f(u,v) < c(u,v). Zero-flow arc: f(u,v) = 0. Chain: in a capacity network, a vertex sequence (u1, u2, u3, … un) with an arc between every two adjacent points is called a chain. Forward arc: an arc whose direction coincides with the positive direction of the chain; the set of forward arcs is denoted P+. Backward arc: an arc opposite to the positive direction of the chain; the set of backward arcs is denoted P−.
Augmenting path: let f be a feasible flow in a capacity network G and P a chain from S to T; P is called an augmenting path for the feasible flow if all forward arcs in P are unsaturated arcs and all backward arcs in P are non-zero-flow arcs. The operation of improving the feasible flow along such a path is called augmentation. Residual capacity: given a capacity network G(V,E) and a feasible flow f, the residual capacity of arc <u,v> is cl(u,v) = c(u,v) − f(u,v), representing the flow that can still be added on that arc. Since decreasing the residual capacity from vertex u to vertex v is equivalent to increasing the residual capacity from vertex v to vertex u, this can be understood intuitively as an operation that allows previously assigned flow to be undone. For each arc <u,v> there is also a residual capacity in the opposite direction, cl(v,u) = f(u,v).
Residual network: given a capacity network G(V,E) and a network flow f on it, the residual network of G with respect to f is denoted G'(V',E'). The vertex set of G' is the same as that of G, V' = V. For any arc <u,v> in G, if f(u,v) < c(u,v), then G' contains an arc <u,v> of capacity c'(u,v) = c(u,v) − f(u,v); and if f(u,v) > 0, then G' contains an arc <v,u> of capacity c'(v,u) = f(u,v). The residual network is also referred to as the residual graph.
Path: in the capacity network G, when an edge e ∈ E exists between every two adjacent vertices of a vertex subset U = {vi1, vi2, vi3, …, vin} of V, U is called a path connecting the two endpoints vi1 and vin.
Cut set: in the capacity network G, delete a set of arcs (edges) such that no path from S to T remains; the set {e1, e2, …, en} of deleted arcs is called a cut set of G, and the capacity of the cut set is the sum of the weights of all arcs in it.
Minimum cut: the cut set with the smallest capacity in G is called the minimum cut of G.
Based on this, the following theorem is obtained:
theorem 1: the flow f is the maximum flow of G if and only if f has no amplification path.
Theorem 2: the flow of the maximum flow in G is equal to the capacity of the minimum cut.
On the one hand, once the maximum flow of the network G has been obtained, then by theorem 1 f has no augmenting path, so the residual network G' of G with respect to f contains no path from S to T. S can therefore be used as the source of a tree in G' and extended continuously (if an arc with positive residual weight leads from a point a in the tree to a point b outside it, b is taken into the tree), finally producing a tree Tr rooted at S. Because there is no path from S to T, T does not belong to Tr. By this method, points in the tree Tr are determined to be target (in the present embodiment, bone) and points outside the tree to be background (in the present embodiment, non-bone), yielding a segmentation result. The maximum flow f can equivalently be regarded as composed of weighted paths from S to T, all of which are saturated: the edge with the smallest weight in each path is reduced to 0 in the residual network, while the weights of the remaining edges stay positive (a forward arc remains). Under this segmentation method, when two minimum-weight edges exist in one S-T path, the one closest to S is selected as the cut edge, ensuring that each S-T path contributes exactly one cut point, whose weight equals the flow of that path. Since the flow of the maximum flow equals the sum of the flows of all these paths, i.e. the sum of the weights of the cut points selected in each path, the cut set obtained by this segmentation method has capacity equal to the maximum flow. Since this cut set separates S and T and its capacity equals the flow of the maximum flow, theorem 2 guarantees that it is the minimum cut of the network G.
On the other hand, how is the maximum flow of the capacity network obtained? By theorem 1, f being the maximum flow of G is equivalent to f having no augmenting path. Starting from the initial flow f(u,v) = 0 (for all u, v), one repeatedly finds an augmenting path for f and increases the flow along that path to saturation (the added flow equals the minimum-weight edge on the path) until f can no longer be augmented. Since the edges of the network G are finite, the S-to-T paths are finite, and when all of them have been saturated, no augmenting path for f remains. The maximum flow is thus obtained by this method.
According to the analysis, the segmentation problem in the image can be equivalently converted into the problem of the minimum segmentation in the network; the problem of obtaining the minimum cut in the network is equivalently converted into the problem of solving the maximum flow of the network through the maximum flow minimum cut theorem.
In this embodiment, the maximum-flow/minimum-cut algorithm adopted by the segmentation unit is the EK (Edmonds-Karp) algorithm, a shortest-augmenting-path algorithm, with the following flow:
first, a tree Tr is constructed according to the network G and S is used as a source point, and when (S, v) is not saturated, v is received into the tree Tr for all v in the network G, and the level L (v) =1 where the v node is located is recorded.
Second, traverse the points u at level k in the tree Tr; for each adjacent point v not in the tree, when <u,v> is not saturated, v is taken into the tree Tr with level L(v) = L(u) + 1, and k = k + 1.
Third, if some point u in the current tree satisfies that <u,T> is not saturated, an augmenting path P from S to T has been found. Saturate the unsaturated arcs of the augmenting path P, replace G with its residual network G', and return to the first step. If no such point exists and the k-th layer of the tree is non-empty, return to the second step. If no augmenting path exists and the k-th layer of the tree is empty, proceed to the fourth step.
Fourth, return the current tree Tr: all points in Tr are target points and the remaining points are background, giving the final segmentation result.
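The Edmonds-Karp scheme just described (breadth-first search for the shortest augmenting path, saturate it, repeat on the residual network) can be sketched generically as follows; this is the textbook form of the algorithm rather than the exact tree-growing variant of the four steps above:

```python
from collections import deque, defaultdict

def edmonds_karp(arcs, s, t):
    """Maximum flow by shortest (BFS) augmenting paths on the residual network.

    `arcs` is an iterable of (u, v, capacity) triples. The returned value
    equals, by theorem 2, the capacity of the minimum cut separating s and t.
    """
    residual = defaultdict(dict)
    for u, v, c in arcs:
        residual[u][v] = residual[u].get(v, 0) + c
        residual[v].setdefault(u, 0)  # reverse arc, initially zero capacity
    max_flow = 0
    while True:
        # BFS: shortest augmenting path in the residual network
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return max_flow  # no augmenting path left: the flow is maximal
        # bottleneck: smallest residual capacity along the path found
        v, bottleneck = t, float('inf')
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # augment: decrease forward residuals, increase reverse residuals
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        max_flow += bottleneck
```

After termination, the pixels still reachable from the source in the residual network form the target (bone) side of the minimum cut.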
The apparatus for image segmentation of this embodiment generates an optimized objective function and segments a bone image using the maximum-flow/minimum-cut algorithm, so that different bones in the bone image are separated from one another, achieving higher image segmentation accuracy.
The present embodiment also provides a method of image segmentation implemented using the apparatus for image segmentation of the present embodiment, referring to fig. 5, the method of image segmentation including the steps of:
step S201, acquiring image data. The image data includes a probability value corresponding to each pixel constituting the image, the probability value being a probability that the value of the pixel is a preset label value.
Step S202, constructing an objective function of the image according to the probability value corresponding to each pixel.
The objective function E(L) is:

E(L) = C·R(L) + B(L)

wherein R(L) is a region term, B(L) is a boundary term, and C is a coefficient for balancing the region term and the boundary term.
Step S203, constructing a network of images according to the objective function.
Step S204, dividing the image according to the network to obtain an image division result.
In step S201, the image data acquisition unit receives bone image data generated from a plurality of CT bone images. Fig. 2 shows one of the CT bone images, where region A is bone and region B is soft tissue. The bone image data includes the spatial information of each pixel constituting the bone image and a probability value corresponding to each pixel. In this embodiment, the probability value corresponding to a pixel is the probability that the value of the pixel is 1 (the preset label value), that is, the probability that the pixel is bone, and is taken as the pixel's normalized gray value. For example, if a pixel is represented by an 8-bit binary number and its gray value is 230, its probability value is 0.90 (230/256), representing a 90% probability that the pixel is bone; if the gray value corresponding to a pixel is 26, its probability value is 0.10, representing a 10% probability that the pixel is bone.
In step S202, the objective function construction unit constructs the objective function. The objective function construction unit obtains the region term R(L) according to the following formula:

R(L) = Σ_{p∈Img} R_p(L_p)

wherein Img is used to characterize the bone image and p is used to characterize a pixel; R_p(L_p) is used to characterize the penalty term corresponding to the pixel, the penalty term being the negative logarithm of the probability value corresponding to the pixel. That is, the region term R(L) is the sum of the penalty terms corresponding to each pixel in the received bone image data. In this embodiment, R_p(L_p) = −ln Pr(L_p), wherein Pr(L_p) is used to characterize the probability value of the corresponding pixel. Other alternative embodiments may use other decreasing functions of the probability value as the penalty term.
The objective function construction unit also constructs the boundary term B(L), which is obtained according to the following formula:

B(L) = Σ_{p∈Img} Σ_{q∈N(p)} B_{p,q}·δ(p,q)

where Img is used to characterize the bone image, p is used to characterize a pixel, and q is used to characterize a pixel in the neighborhood N(p) of pixel p; B_{p,q} is used to characterize the boundary term factor, and δ(p,q) is an indicator factor that is non-zero only when the labels of pixels p and q differ. Taking a 4-neighborhood (typically denoted N4(p)) as an example, the pixels in the 4-neighborhood of pixel p are the 4 pixels located above, below, to the left and to the right of p. In the present embodiment,

B_{p,q} = exp(−(I_p − I_q)² / (2σ²))

δ(p,q) = 1 if L_p ≠ L_q; 0 if L_p = L_q

wherein I_p is used to characterize the feature value of pixel p, I_q the feature value of pixel q, and σ is used to characterize the variance of feature values between pixel p and pixel q. In the present embodiment, the feature values I_p and I_q are the probability values corresponding to the pixels, which reduces the amount of data to be stored in the bone image data and improves calculation efficiency. In other alternative embodiments, the bone image data contains a feature value for each pixel, which may be a gray value (i.e., a gray value that has not been normalized), a texture feature value, or a color feature value.
The boundary term B(L) mainly considers the relationship between each pixel p in the image and its neighborhood N(p). To obtain higher segmentation accuracy, the boundary of the segmented bone should fall in a region with a large gradient; that is, the difference between pixel p and the heterogeneous points in its neighborhood N(p) should be as large as possible, which is achieved by minimizing the boundary term factor B_{p,q}. In the present embodiment, the objective function construction unit minimizes the boundary term factor B_{p,q} according to the least square method. Once the calculation of B_{p,q} is defined, a person skilled in the art can implement its minimization by the least square method according to common knowledge in the art, which is not detailed here.
The boundary term factor B_{p,q} measures the similarity between pixel p and pixel q: the more similar they are, the larger B_{p,q} is. From δ(p,q), the boundary points of bones can be determined, and minimizing B_{p,q} makes the differences between pixels of different types at the bone boundary as large as possible, thereby improving segmentation accuracy.
C is a value greater than or equal to 0 that balances the region term and the boundary term, weighing their relative importance. To obtain the best segmentation effect, the preferred value range of C in this embodiment is [0,5], obtained through trial and error.
After determining the region term R(L), the boundary term B(L) and the coefficient C, the objective function E(L) = C·R(L) + B(L) is obtained.
Next, in step S203, the network construction unit constructs a network of the image from the objective function and the spatial information of each pixel contained in the image data. The network is shown in fig. 4, where p is a pixel in the image, q is a pixel in the neighborhood N(p) of pixel p, T is the sink of the network, and S is the source. The selection of the sink T and the source S when constructing the network can be implemented by those skilled in the art and is not described here. In this embodiment, the sink T corresponds to the first pixel of the top slice of the bone image, and the source S to the last pixel of the bottom slice. After the network is constructed, every pixel of the bone image is connected to both the source S and the sink T, each pixel is connected to the pixels in its 4-neighborhood, and the source S and the sink T are not connected to each other. The weights of the edges (also called "arcs") generated by the network construction are as follows: the weight of edge {p,q} is B_{p,q}; the weight of edge {p,T} is C·R_p(1); the weight of edge {p,S} is C·R_p(0). R_p(1) is determined by the probability that the value of pixel p is 1, i.e. the probability value corresponding to pixel p; R_p(0) by the probability that the value of pixel p is 0, which is 1 minus the probability value corresponding to pixel p.
After the network construction is completed, in step S204, the segmentation unit segments the image according to the network to obtain an image segmentation result. To achieve high-precision segmentation, in this embodiment the segmentation unit obtains the minimum cut of the image using the EK variant of the maximum-flow/minimum-cut algorithm described above; the specific flow is not repeated here.
Example 2
The present embodiment provides a three-dimensional image reconstruction system, referring to fig. 6, which includes the apparatus for image segmentation of embodiment 1, and further includes a CT image acquisition unit 301, a noise reduction unit 302, an image initialization unit 303, a binarization unit 304, and a three-dimensional image display unit 305.
The CT image acquisition unit is used for acquiring a plurality of CT images of the target object. In this embodiment, the CT image is a bone CT image.
The noise reduction unit is used for respectively carrying out noise reduction processing on the CT images so as to obtain corresponding noise reduction images.
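The patent does not fix a particular noise-reduction method; one common choice for CT slices, shown here purely as an illustration, is median filtering:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_slices(slices: np.ndarray, size: int = 3) -> np.ndarray:
    """Median-filter each CT slice of a (N, H, W) stack independently.

    A small median filter suppresses speckle noise while preserving the
    sharp bone boundaries that the later graph cut depends on.
    """
    return np.stack([median_filter(s, size=size) for s in slices])
```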
The image initializing unit is used for constructing an image according to the plurality of noise-reduced images and obtaining the probability value and feature value corresponding to each pixel composing the image, together with the spatial information of each pixel, to generate the image data. In this embodiment, the image initializing unit is implemented using a fully convolutional neural network.
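The patent does not detail the fully convolutional network; as one illustration, a minimal PyTorch sketch of such an initializing unit (the architecture, layer sizes and names are assumptions) maps a one-channel CT slice to a per-pixel bone-probability map:

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network: one probability per pixel."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # 1x1 convolution down to a single channel
        )

    def forward(self, x):                    # x: (N, 1, H, W) slice batch
        return torch.sigmoid(self.body(x))  # per-pixel bone probability
```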
The device for image segmentation is used for receiving the image data and realizing segmentation of the image to obtain an image segmentation result. The specific segmentation process is not described here in detail.
The binarization unit is used for converting the image segmentation result into binarization data.
The three-dimensional image display unit is used for displaying the binarized data as a three-dimensional image according to an image display algorithm. In the present embodiment, the three-dimensional image display unit runs a VTK (Visualization Toolkit) program to realize the three-dimensional image display. Fig. 7 gives a schematic representation of a three-dimensional image; it is only an illustration, and the actual effect will be clear to a person skilled in the art. Region AA is the pelvis (the real image may be displayed in green) and region BB is the femur (the real image may be displayed in blue). Through this display, a doctor can clearly observe the boundary between hard tissues, providing accurate surgical navigation for the doctor's subsequent operation.
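A minimal sketch of this display step using VTK's Python bindings (assuming the binarized result is available as a NumPy volume of 0/1 voxels; the function name and the spacing parameter are assumptions): marching cubes extracts the bone surface at the 0.5 iso-value, which is then rendered.

```python
import numpy as np
import vtk
from vtk.util import numpy_support

def show_binary_volume(volume: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Render a binarized (z, y, x) segmentation volume as a 3D surface."""
    data = np.ascontiguousarray(volume.astype(np.uint8))
    img = vtk.vtkImageData()
    img.SetDimensions(data.shape[2], data.shape[1], data.shape[0])
    img.SetSpacing(spacing)
    arr = numpy_support.numpy_to_vtk(data.ravel(), deep=True,
                                     array_type=vtk.VTK_UNSIGNED_CHAR)
    img.GetPointData().SetScalars(arr)

    surface = vtk.vtkMarchingCubes()  # iso-surface between 0 and 1
    surface.SetInputData(img)
    surface.SetValue(0, 0.5)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(surface.GetOutputPort())
    mapper.ScalarVisibilityOff()
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    window.Render()
    interactor.Start()
```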
The embodiment also provides a three-dimensional image reconstruction method, which is realized by adopting the three-dimensional image reconstruction system of the embodiment. Referring to fig. 8, the three-dimensional image reconstruction method includes the steps of:
step S401, acquiring a plurality of CT images of a target object.
And step S402, respectively carrying out noise reduction processing on the CT images to obtain corresponding noise reduction images.
Step S403, an image is constructed according to the plurality of noise reduction images, and a probability value and a feature value corresponding to each pixel constituting the image are obtained to generate image data.
Step S404, dividing the image. This step is implemented by the image segmentation method of embodiment 1. The specific implementation process is not described here in detail.
Step S405, converting the image segmentation result into binarized data.
Step S406, the binarized data is displayed as a three-dimensional image according to an image display algorithm.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the invention, but such changes and modifications fall within the scope of the invention.

Claims (12)

1. A method of image segmentation comprising the steps of:
s1, acquiring image data, wherein the image data comprises probability values corresponding to each pixel forming an image, and the probability values are probabilities that the values of the pixels are preset label values;
s2, constructing an objective function of the image according to the probability value corresponding to each pixel; the objective function
Figure FDA0004067374510000011
The method comprises the following steps:
Figure FDA0004067374510000012
wherein R (L) is a region term, B (L) is a boundary term, C is a coefficient for balancing the region term and the boundary term, and C is greater than or equal to 0;
s3, constructing a network of the image according to the objective function;
s4, dividing the image according to the network to obtain an image division result;
the step S2 comprises: the region term is obtained according to the following formula:

R(L) = Σ_{p∈Img} R_p(L_p)

wherein Img is used to characterize the image and p is used to characterize the pixel; R_p(L_p) is used to characterize the penalty term, the penalty term being the negative logarithm of the probability value, and L_p is used to represent that the value of the pixel p is the preset label value;

the image data further includes a feature value corresponding to each pixel constituting the image;

the step S2 comprises: the boundary term is obtained according to the following formula:

B(L) = Σ_{p∈Img} Σ_{q∈N(p)} B_{p,q}·δ(p,q)

where Img is used to characterize the image, p is used to characterize the pixel, and q is used to characterize pixels in the neighborhood N(p) of the pixel p; B_{p,q} is used to characterize the boundary term factor, and δ(p,q) is an indicator factor that is non-zero only when neighboring pixels p and q take different labels:

B_{p,q} = exp(−(I_p − I_q)² / (2σ²))

δ(p,q) = 1 if L_p ≠ L_q; 0 if L_p = L_q

wherein I_p is used to characterize the feature value of pixel p, I_q the feature value of pixel q, σ is used to characterize the variance of feature values between neighboring pixels p and q, and L_q is used to represent that the value of the pixel q is the preset label value.
2. The method of image segmentation as set forth in claim 1,
wherein R_p(L_p) = −ln Pr(L_p), and Pr(L_p) is used to characterize the probability value.
3. The method of image segmentation as set forth in claim 1,
step S2 further includes: minimizing the boundary term factor B_{p,q};
the step S4 includes: obtaining a minimum cut of the image according to a maximum flow minimum cut algorithm;
the characteristic value comprises a gray value or a texture characteristic value or a color characteristic value, and the value range of C is [0,5];
the image is of bone.
4. A method of image segmentation as claimed in claim 3, wherein the step of minimizing the boundary term factor B_{p,q} includes:
minimizing the boundary term factor B_{p,q} according to the least square method.
5. An apparatus for image segmentation, characterized by comprising an image data acquisition unit, an objective function construction unit, a network construction unit, a segmentation unit;
the image data acquisition unit is used for acquiring image data, wherein the image data comprises probability values corresponding to each pixel composing an image, and the probability values are probabilities that the values of the pixels are preset tag values;
the objective function construction unit is used for constructing an objective function of the image according to the probability value corresponding to each pixel; the objective function E(L) is:

E(L) = C·R(L) + B(L)

wherein R(L) is a region term, B(L) is a boundary term, and C is a coefficient for balancing the region term and the boundary term;
the network construction unit is used for constructing a network of the image according to the objective function;
the segmentation unit is used for segmenting the image according to the network to obtain an image segmentation result; the objective function construction unit is further configured to obtain the region term according to the following formula:

R(L) = Σ_{p∈Img} R_p(L_p)

wherein Img is used to characterize the image and p is used to characterize the pixel; R_p(L_p) is used to characterize the penalty term, the penalty term being the negative logarithm of the probability value, and L_p is used to represent that the value of the pixel p is the preset label value;

the image data further includes a feature value corresponding to each pixel constituting the image;

the objective function construction unit is further configured to obtain the boundary term according to the following formula:

B(L) = Σ_{p∈Img} Σ_{q∈N(p)} B_{p,q}·δ(p,q)

where Img is used to characterize the image, p is used to characterize the pixel, and q is used to characterize pixels in the neighborhood N(p) of the pixel p; B_{p,q} is used to characterize the boundary term factor, and δ(p,q) is an indicator factor that is non-zero only when neighboring pixels p and q take different labels:

B_{p,q} = exp(−(I_p − I_q)² / (2σ²))

δ(p,q) = 1 if L_p ≠ L_q; 0 if L_p = L_q

wherein I_p is used to characterize the feature value of pixel p, I_q the feature value of pixel q, σ is used to characterize the variance of feature values between neighboring pixels p and q, and L_q is used to represent that the value of the pixel q is the preset label value.
6. The apparatus for image segmentation as set forth in claim 5, wherein R_p(L_p) = −ln Pr(L_p), and Pr(L_p) is used to characterize the probability value.
7. The apparatus for image segmentation as set forth in claim 5, wherein the objective function construction unit is further configured to minimize the boundary term factor B_{p,q};
the segmentation unit is also used for obtaining the minimum cut of the image according to a maximum-flow/minimum-cut algorithm;
the characteristic value comprises a gray value or a texture characteristic value or a color characteristic value;
the value range of C is [0,5];
the image is of bone.
8. The apparatus for image segmentation as set forth in claim 7, wherein the objective function construction unit is further configured to minimize the boundary term factor B_{p,q} according to the least square method.
9. A method of reconstructing a three-dimensional image, characterized in that the method of reconstructing a three-dimensional image comprises a step of image segmentation, the step of image segmentation being performed using the method of image segmentation as claimed in any one of claims 1 to 4.
10. The method of reconstructing a three-dimensional image according to claim 9, wherein prior to the step of image segmentation, the method further comprises the steps of:
collecting a plurality of CT images of a target object;
carrying out noise reduction processing on each of the plurality of CT images to obtain corresponding noise-reduced images;
constructing the image from the plurality of noise-reduced images, and obtaining the probability value and the characteristic value corresponding to each pixel constituting the image, so as to generate the image data;
after the step of image segmentation, the reconstruction method further comprises the steps of:
converting the image segmentation result into binarized data;
and displaying the binarized data as a three-dimensional image according to an image display algorithm.
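A sketch of the claim 10 pipeline, assuming SciPy's Gaussian filter for the noise reduction step and scikit-image's marching cubes as one possible image display algorithm; `segment_volume` is a hypothetical stand-in for the segmentation method of claims 1 to 4:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure  # pip install scikit-image

def reconstruct(ct_slices, segment_volume, iso_level=0.5):
    # 1. Noise-reduce each CT slice, then stack into a single 3-D image.
    denoised = np.stack([gaussian_filter(s.astype(float), sigma=1.0)
                         for s in ct_slices])
    # 2. Image segmentation step (claims 1 to 4); `segment_volume` is a stand-in.
    labels = segment_volume(denoised)
    # 3. Convert the segmentation result into binarized data.
    binary = labels.astype(np.uint8)
    # 4. Extract a surface mesh for three-dimensional display; marching cubes
    #    is one choice of display algorithm.
    verts, faces, normals, values = measure.marching_cubes(binary, level=iso_level)
    return verts, faces
```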
11. A three-dimensional image reconstruction system comprising a device for image segmentation as claimed in any one of claims 5 to 8.
12. The three-dimensional image reconstruction system according to claim 11, further comprising a CT image acquisition unit, a noise reduction unit, an image initialization unit, a binarization unit, and a three-dimensional image display unit;
the CT image acquisition unit is used for acquiring a plurality of CT images of the target object;
the noise reduction unit is used for carrying out noise reduction processing on each of the CT images to obtain corresponding noise-reduced images;
the image initialization unit is used for constructing the image from the plurality of noise-reduced images and obtaining the probability value and the characteristic value corresponding to each pixel constituting the image, so as to generate the image data;
the binarization unit is used for converting the image segmentation result into binarized data;
the three-dimensional image display unit is used for displaying the binarized data as a three-dimensional image according to an image display algorithm.
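The unit decomposition of claim 12 maps naturally onto a small class, sketched here with one method per claimed unit; the segmentation and display steps would reuse the sketches above, and every class, method, and placeholder name is illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class ReconstructionSystem:
    # CT image acquisition unit: here simply materialises an iterable of slices.
    def acquire_ct(self, source):
        return [np.asarray(s) for s in source]

    # Noise reduction unit: per-slice Gaussian smoothing as a placeholder filter.
    def denoise(self, ct_slices):
        return [gaussian_filter(s.astype(float), sigma=1.0) for s in ct_slices]

    # Image initialization unit: builds the 3-D image and per-pixel image data.
    def initialize(self, denoised):
        volume = np.stack(denoised)
        # Placeholder probabilities; in the patent these come with the image
        # data (e.g. from a trained classifier).
        prob = np.full(volume.shape, 0.5)
        return volume, prob

    # Binarization unit: converts the segmentation result into binarized data.
    def binarize(self, labels):
        return labels.astype(np.uint8)
```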
CN201910300832.8A 2019-04-15 2019-04-15 Image segmentation method and device, and three-dimensional image reconstruction method and system Active CN109961449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910300832.8A CN109961449B (en) 2019-04-15 2019-04-15 Image segmentation method and device, and three-dimensional image reconstruction method and system

Publications (2)

Publication Number Publication Date
CN109961449A CN109961449A (en) 2019-07-02
CN109961449B (en) 2023-06-02

Family

ID=67026092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910300832.8A Active CN109961449B (en) 2019-04-15 2019-04-15 Image segmentation method and device, and three-dimensional image reconstruction method and system


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610491B * 2019-09-17 2021-11-19 Hunan University of Science and Technology Liver tumor region segmentation method of abdominal CT image
FR3104934B1 (en) * 2019-12-18 2023-04-07 Quantum Surgical Method for automatic planning of a trajectory for a medical intervention
CN111714145B (en) * 2020-05-27 2022-07-01 浙江飞图影像科技有限公司 Femoral neck fracture detection method and system based on weak supervision segmentation

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596887A * 2018-04-17 2018-09-28 Hunan University of Science and Technology Automatic segmentation method for liver tumors in abdominal CT sequence images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69911958T2 (en) * 1998-05-29 2004-08-12 Computerized Medical Systems, Inc. SELF SEGMENTATION / CONTOUR PROCESSING PROCESS
US8358823B2 (en) * 2011-03-30 2013-01-22 Mitsubishi Electric Research Laboratories, Inc. Method for tracking tumors in bi-plane images
CN109146993B (en) * 2018-09-11 2021-08-13 广东工业大学 Medical image fusion method and system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant