CN113706492A - Lung parenchyma automatic segmentation method based on chest CT image - Google Patents

Info

Publication number: CN113706492A
Application number: CN202110960219.6A
Authority: CN (China)
Prior art keywords: image, region, segmentation, chest, value
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 邢文宇, 侯东妮, 朱志斌, 童琳, 他得安
Current and original assignee: Fudan University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Fudan University
Priority to CN202110960219.6A
Publication of CN113706492A

Classifications

    • G06T7/0012 Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06F18/22 Matching criteria, e.g. proximity measures (G06F18/00 Pattern recognition; G06F18/20 Analysing)
    • G06T7/11 Region-based segmentation (G06T7/10 Segmentation; Edge detection)
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T2207/10081 Computed x-ray tomography [CT] (G06T2207/10 Image acquisition modality; G06T2207/10072 Tomographic images)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30061 Lung (G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)

Abstract

The invention provides an automatic lung parenchyma segmentation method based on chest CT images. Seed points are extracted with an automatic seed point extraction method; the thorax contour is extracted with a region growing method and a hole filling method; and the trachea regions in the two connected domains are removed with an area threshold method, thereby obtaining the lung parenchyma region. Further, the obtained lung parenchyma regions and the corresponding chest CT images are used as one-to-one label–image pairs to form a training set, on which a segmentation network is trained to obtain a trained lung parenchyma segmentation network model for extracting the lung parenchyma region from new chest CT images. The method therefore acquires segmentation labels automatically, without manual labeling, which reduces manual workload, improves efficiency, and yields segmentation labels with high consistency.

Description

Lung parenchyma automatic segmentation method based on chest CT image
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a lung parenchyma automatic segmentation method based on a chest CT image.
Background
CT imaging is an important technical means for the clinical diagnosis of lung diseases. For lung cancer in particular, the discovery and analysis of early signs such as lung nodules enables early clinical intervention and treatment and reduces the risk of disease deterioration. At present, lung nodules are analyzed clinically mainly by clinicians reading a large number of two-dimensional CT slice images; this is inefficient, and part of the key information is easily missed, leading to missed diagnoses and misdiagnoses, so realizing three-dimensional reconstruction of lung parenchymal regions containing lung nodules is very important. The lung parenchyma is the key region for evaluating and studying such diseases, and its accurate segmentation and reconstruction are crucial for further study of lung tissue, organ function, and pathological changes.
Traditional lung parenchyma segmentation requires manual intervention. At present, deep learning models can be used for lung parenchyma segmentation, but acquiring the segmentation labels usually requires manual annotation by clinicians, which is labor-intensive and inefficient. Moreover, the segmentation labels may come from multiple doctors who differ in relevant medical experience, judgment standards, and so on, which affects the consistency of the labels to a certain extent.
Disclosure of Invention
In order to solve the above problems, the invention provides an automatic lung parenchyma segmentation method based on a chest CT image, which adopts the following technical scheme:
The invention provides an automatic lung parenchyma segmentation method based on a chest CT image, used to extract the lung parenchyma region from the chest CT image, characterized by comprising: step S1, selecting seed points from the chest CT image by a seed point extraction method; step S2, extracting the thorax contour by a region growing method and a hole filling method based on the seed points; step S3, performing connected domain analysis within the range of the thorax contour and judging whether two connected domains with similar areas exist; step S4, when the judgment in step S3 is negative, separating the thorax contour by a corner detection method to obtain two connected domains; step S5, removing the trachea regions in the two connected domains by an area threshold method to obtain the lung parenchyma region; and step S6, using the lung parenchyma region as a label and the chest CT image corresponding to the lung parenchyma region as the image corresponding to the label to form a training set, training a segmentation network based on the training set to obtain a trained lung parenchyma segmentation network model, and extracting the lung parenchyma region from new chest CT images using the lung parenchyma segmentation network model.
The automatic lung parenchyma segmentation method based on the chest CT image provided by the invention may also have the technical feature that the seed point extraction method comprises the following steps: step A1, performing a global traversal of the chest CT image with a sliding window of preset size and preset step length, judging for each sliding window whether the average pixel gray value of the image inside it is higher than a set threshold, and when it is, taking that sliding window as a preliminarily selected sliding window; step A2, performing similarity measure calculation between each preliminarily selected sliding window and its four adjacent sliding windows, judging whether they have high similarity, and when they do, extracting the centroid of the preliminarily selected sliding window as a seed point.
The lung parenchyma automatic segmentation method based on the chest CT image, provided by the invention, can also have the technical characteristics that the similarity measure calculation comprises gray level similarity calculation, texture similarity calculation and structure similarity calculation, and the gray level similarity calculation adopts a mean difference algorithm for calculation:
D_{\text{gray}} = \left| \frac{1}{N \times N} \sum X - \frac{1}{N \times N} \sum Y_n \right|
the texture similarity calculation adopts an entropy difference algorithm to calculate:
E = -\sum_{i=0}^{255} \sum_{j=0}^{255} P_{ij} \log P_{ij}

D_{\text{texture}} = \left| E_x - E_y \right|
the structural similarity calculation adopts a Hamming value based on Hash coding to calculate:
D_{\text{struct}} = \sum_k \left[ \operatorname{hash}(X)_k \neq \operatorname{hash}(Y_n)_k \right]
wherein N is the scale of the image, X is the pixel gray value of the center-region image, Y_n is the pixel gray value of the neighborhood image of X, E_x and E_y are the entropies of the two images, i is the pixel gray value, j is the mean gray value of the neighborhood image, and P_ij is the probability that the image takes the gray value pair (i, j).
The automatic lung parenchyma segmentation method based on the chest CT image provided by the invention may also have the technical feature that the region growing method comprises the following steps: step B1, initializing the grown region to contain a seed point; step B2, searching the four-neighborhood of the seed point for pixel points that are not yet in the grown region and whose gray value is smaller than a set threshold, adding such a pixel point to the grown region when found, taking it as a new seed point, and repeating step B2 to continue the region growth; when no such pixel point is found, stopping the region growth, at which point the grown region is the thorax contour with holes.
The automatic lung parenchyma segmentation method based on the chest CT image provided by the invention may also have the technical feature that the corner detection method comprises the following steps: step C1, calculating the response value of each pixel point in the image of the thorax contour; step C2, selecting the two pixel points whose response values are far larger than those of the other pixel points as two corner points; and step C3, connecting the two corner points to separate the two regions.
The method for automatically segmenting the lung parenchyma based on the chest CT image provided by the invention can also have the technical characteristics that the response value is calculated according to the following formula:
R=detM-k(traceM)2
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
wherein R is the response value, M is the intermediate matrix, k is an empirical constant, w is a Gaussian window function, and I_x and I_y are the gradients of the image in the X and Y directions, respectively.
The automatic lung parenchyma segmentation method based on the chest CT image provided by the invention may also have the technical feature that the lung parenchyma region is a binary mask image, the segmentation network is a U-Net segmentation network, the loss function of the segmentation network is a cross entropy loss function, and the evaluation index of the segmentation network is the Dice index.
Action and Effect of the invention
According to the automatic lung parenchyma segmentation method based on the chest CT image, the seed point extraction method automatically selects suitable seed points from the chest CT image; the region growing method and the hole filling method automatically extract the thorax contour on the basis of the seed points; and the area threshold method removes the trachea regions in the two connected domains, yielding the lung parenchyma region. Further, the obtained lung parenchyma region (i.e., a binary mask image) is used as a label and the corresponding chest CT image as its paired image to form a training set, on which the segmentation network is trained to obtain a trained lung parenchyma segmentation network model; this model can then be used to extract the lung parenchyma region from new chest CT images. The method therefore acquires segmentation labels automatically, without manual labeling, which reduces manual workload and improves efficiency, and the labels, being obtained by a uniform method, have high consistency. Furthermore, the trained lung parenchyma segmentation network model can accurately extract lung parenchyma regions of various shapes, improving work efficiency and providing technical support for subsequent three-dimensional reconstruction work.
Drawings
FIG. 1 is a flowchart illustrating a method for automatic segmentation of lung parenchyma based on a chest CT image according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating automatic extraction of seed points according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of region growing and thorax contour extraction in an embodiment of the present invention;
FIG. 4 is a schematic illustration of the separation of left and right lung parenchymal regions in an embodiment of the present invention;
FIG. 5 is a diagram illustrating the final left and right lung parenchymal region extraction result in accordance with an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a U-Net network in the embodiment of the present invention.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the present invention easy to understand, the following describes the automatic lung parenchyma segmentation method based on a chest CT image of the present invention with reference to the embodiments and the accompanying drawings.
< example >
Fig. 1 is a flowchart illustrating a method for automatically segmenting lung parenchyma based on a chest CT image according to an embodiment of the present invention.
As shown in fig. 1, an automatic lung parenchyma segmentation method based on a chest CT image of the present invention includes the following steps:
Step S1, selecting seed points from the chest CT image by using a seed point extraction method.
Fig. 2 is a schematic diagram of automatic extraction of seed points in the embodiment of the present invention.
As shown in fig. 2, the seed point extraction method includes the following steps:
Step A1, performing a global traversal of the chest CT image with a sliding window of preset size and preset step length, judging whether the average pixel gray value of the image in each sliding window is higher than a set threshold, and when it is, taking that sliding window as a preliminarily selected sliding window.
In this embodiment, the size of the sliding window is 25 pixels × 25 pixels, the step length of the sliding window is 25 pixels, and the gray square at the upper left corner in Fig. 2 is a schematic of the sliding window. The threshold is set to 200; that is, a window whose average pixel gray value is higher than 200 is regarded as a thorax contour region and preliminarily selected. The dashed box in Fig. 2 contains 5 sliding windows, of which the middle one is a preliminarily selected sliding window.
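The window traversal of step A1 can be written as a minimal sketch (the patent discloses no code; the function name is illustrative, while the window size, step length, and threshold are the values given above):

```python
import numpy as np

def candidate_windows(img, win=25, step=25, thresh=200):
    """Slide a win x win window over the image and keep the top-left
    coordinates of windows whose mean gray value exceeds `thresh`
    (candidate thorax regions, step A1)."""
    h, w = img.shape
    picks = []
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            if img[r:r + win, c:c + win].mean() > thresh:
                picks.append((r, c))
    return picks
```

Each returned coordinate identifies one preliminarily selected window, which step A2 then compares with its four neighbors.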
Step A2, performing similarity measure calculation on each preliminarily selected sliding window and the four sliding windows adjacent to it, judging whether the preliminarily selected sliding window and the four adjacent sliding windows have high similarity, and if so, extracting the centroid of the preliminarily selected sliding window as a seed point.
In this embodiment, the similarity measure calculation includes a gray level similarity calculation, a texture similarity calculation, and a structure similarity calculation. The gray level similarity is calculated with a mean difference algorithm; the smaller the calculated mean gray difference of the two images, the higher their similarity:
D_{\text{gray}} = \left| \frac{1}{N \times N} \sum X - \frac{1}{N \times N} \sum Y_n \right|
the texture similarity calculation adopts an entropy difference algorithm for calculation, and the smaller the value of the entropy difference value of the two images obtained by calculation, the higher the similarity of the two images is proved to be:
E = -\sum_{i=0}^{255} \sum_{j=0}^{255} P_{ij} \log P_{ij}

D_{\text{texture}} = \left| E_x - E_y \right|
the structural similarity calculation adopts Hamming values based on Hash coding to calculate, and the smaller the numerical value of the Hamming values of the two images obtained by calculation is, the higher the similarity of the two images is proved to be:
D_{\text{struct}} = \sum_k \left[ \operatorname{hash}(X)_k \neq \operatorname{hash}(Y_n)_k \right]
In the above formulas, N is the scale of the image, X is the pixel gray value of the center-region image, Y_n is the pixel gray value of the neighborhood image of X, E_x and E_y are the entropies of the two images, i is the pixel gray value (0 ≤ i ≤ 255), j is the mean gray value of the neighborhood image (0 ≤ j ≤ 255), and P_ij is the probability that the image takes the gray value pair (i, j).
When a sliding window has high similarity with the four sliding windows in its four-neighborhood, the window is selected and the centroid of the image in the window area is taken as a seed point; the seed point is a pixel point in the image.
In Fig. 2, the dashed box contains 5 sliding windows: the middle one is the preliminarily selected window and the surrounding 4 are the windows in its four-neighborhood. Through the above calculation, the 4 neighboring windows have high similarity with the middle preliminarily selected window, so the middle window is selected and its centroid is taken as a seed point.
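The three measures can be sketched as below. This is an illustrative simplification: the entropy here is the ordinary gray-histogram entropy rather than the patent's two-dimensional (i, j) formulation, and the hash is a simple average hash; all function names are assumptions:

```python
import numpy as np

def mean_diff(x, y):
    """Gray level similarity: absolute difference of the mean gray values."""
    return abs(x.mean() - y.mean())

def entropy(x):
    """Shannon entropy (base 2) of the gray-level histogram."""
    p = np.bincount(x.ravel(), minlength=256) / x.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_diff(x, y):
    """Texture similarity: absolute difference of the two entropies."""
    return abs(entropy(x) - entropy(y))

def hamming_hash(x, y):
    """Structural similarity: Hamming distance between average-hash codes
    (bit 1 where a pixel exceeds the window mean)."""
    hx = (x > x.mean()).astype(np.uint8)
    hy = (y > y.mean()).astype(np.uint8)
    return int((hx != hy).sum())
```

For identical windows all three measures are zero, the most-similar case.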
As described above, in step S1 the seed points are automatically obtained; each seed point is a pixel point in the chest CT image.
Step S2, extracting the thorax contour by using a region growing method and a hole filling method based on the seed points.
FIG. 3 is a schematic diagram of region growing and thorax contour extraction in an embodiment of the present invention.
As shown in fig. 3, the region growing method includes the steps of:
Step B1, initializing the grown region to contain a seed point;
Step B2, searching the four-neighborhood of the seed point for pixel points that are not yet in the grown region and whose gray value is smaller than a set threshold; when such a pixel point is found, adding it to the grown region, taking it as a new seed point, and repeating step B2 to continue the region growth; when no such pixel point is found, stopping the region growth, at which point the grown region is the thorax contour with holes.
In this embodiment, the threshold is set to 15, that is, the pixel points with the gray value less than 15 are regarded as the thorax contour region. As shown in fig. 3, when the region growth stops, a thorax contour with two holes is obtained, and then the thorax contour with holes is processed by using a hole filling method to fill the closed holes therein, so as to obtain the whole thorax contour region.
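Under the same assumptions (4-neighborhood growth, gray threshold 15), steps B1–B2 and the hole filling can be sketched in Python. `region_grow` and `fill_holes` are illustrative names, and hole filling is implemented here as a border flood fill of the background, which is one standard way to fill closed holes:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh=15):
    """Grow a 4-connected region from `seed`, absorbing pixels whose
    gray value is below `thresh` (steps B1-B2)."""
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    grown[seed] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc] \
                    and img[nr, nc] < thresh:
                grown[nr, nc] = True
                q.append((nr, nc))
    return grown

def fill_holes(mask):
    """Fill closed holes: flood the background from the image border;
    background pixels the flood cannot reach are enclosed holes."""
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    q = deque((r, c) for r in range(h) for c in range(w)
              if (r in (0, h - 1) or c in (0, w - 1)) and not mask[r, c])
    for r, c in q:
        outside[r, c] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not outside[nr, nc] \
                    and not mask[nr, nc]:
                outside[nr, nc] = True
                q.append((nr, nc))
    return mask | ~outside
```

Applied to a grown thorax contour with holes, `fill_holes` returns the whole thorax contour region described above.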
As described above, the thorax contour is extracted in step S2.
And step S3, performing connected domain analysis within the range of the chest cavity outline, and judging whether two connected domains with similar areas exist.
In this embodiment, the connected domain analysis method is a prior-art method: adjacent pixels with the same pixel value are grouped into one set as a connected domain, and each connected domain is given a label, which completes the connected domain analysis.
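A self-contained version of this connected domain analysis (4-connectivity, BFS labelling) together with the "two connected domains with similar areas" test of step S3 might look like the following; the 0.5 area ratio is an assumed criterion, since the patent does not quantify "similar":

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labelling: group adjacent foreground pixels
    into connected domains; return (label image, list of areas)."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    areas, cur = [], 0
    for r in range(h):
        for c in range(w):
            if mask[r, c] and labels[r, c] == 0:
                cur += 1
                area, q = 0, deque([(r, c)])
                labels[r, c] = cur
                while q:
                    y, x = q.popleft()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and labels[ny, nx] == 0:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                areas.append(area)
    return labels, areas

def two_similar_regions(areas, ratio=0.5):
    """Step S3 test: the two largest domains have comparable areas
    (the smaller is at least `ratio` of the larger)."""
    if len(areas) < 2:
        return False
    a, b = sorted(areas)[-2:]
    return a / b >= ratio
```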
If the judgment in step S3 is yes, i.e., two connected domains with similar areas exist within the range of the thorax contour, the separation interface between the left and right lung parenchyma is clearly shown and no separation is needed; if the judgment in step S3 is no, the separation interface is not clearly shown and the left and right lung parenchyma are connected to each other in the chest CT image, so separation processing is necessary.
And S4, when the judgment in the step S3 is negative, separating the chest cavity outline by using an angular point detection method to obtain two connected domains with similar areas.
In this embodiment, before the corner detection method is used, the thorax contour is Gaussian-smoothed.
FIG. 4 is a schematic diagram of the separation of the left and right lung parenchymal regions according to an embodiment of the present invention.
As shown in fig. 4, in this embodiment, the corner detection method is a Harris corner detection method, and includes the following steps:
step C1, calculating the response value of each pixel point in the image of the thorax contour;
step C2, selecting two pixel points with response values far larger than those of other pixel points as two corner points;
and step C3, connecting the two corner points to realize the separation of the two connected areas.
The image of the thorax contour is preprocessed and subjected to Gaussian smoothing, and then the response value of a pixel point in the image is calculated by the following formula:
R=detM-k(traceM)2
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
wherein R is the response value, M is a 2 × 2 intermediate matrix, k is an empirical constant, w is a Gaussian window function, and I_x and I_y are the gradients of the image in the X and Y directions, respectively.
Using the above formula, the response value of each point in the image of the thorax contour is calculated, non-maximum suppression is performed in a 3 × 3 neighborhood, and the local maxima are selected. As shown in Fig. 4, preliminary Harris corner detection yields several candidate points. The junctions of the left and right lung parenchyma regions are relatively sharp, so the autocorrelation there is low and the corner response value is large; the two points with the largest response values in the upper part of the image in Fig. 4 are therefore kept and connected, which separates the left and right lung parenchyma regions into two connected domains with similar areas.
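The Harris response of steps C1–C2 can be sketched as below. As simplifying assumptions, a 3 × 3 box window stands in for the Gaussian weight w, the gradients are central differences, and k = 0.04 is a typical empirical value (the patent does not state its constant):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris response R = det(M) - k * trace(M)^2 per pixel, with a
    3x3 box window approximating the weight w."""
    img = img.astype(float)
    iy, ix = np.gradient(img)          # gradients along Y (rows) and X (cols)
    a, b, c = ix * ix, iy * iy, ix * iy

    def box(m):
        # average the structure-tensor entry over a 3x3 window
        p = np.pad(m, 1, mode='edge')
        return sum(p[i:i + m.shape[0], j:j + m.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    a, b, c = box(a), box(b), box(c)
    det = a * b - c * c
    trace = a + b
    return det - k * trace ** 2
```

The two pixels with the largest responses (after non-maximum suppression) would then be taken as the corner points of step C2.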
As described above, in step S4, the two connected regions having similar areas are obtained by processing the case where the left and right lung parenchyma separation interfaces are not clearly displayed.
In step S5, the tracheal regions in the two connected regions are removed by using an area threshold method, respectively, to obtain lung parenchymal regions.
In this embodiment, the area threshold method is a prior-art algorithm: a specific threshold is determined according to the distribution characteristics of the image gray values, and image segmentation is performed to remove the trachea regions in the two connected domains, thereby obtaining the lung parenchyma region.
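Given a labelled mask (for example, from a connected-component pass) and the list of component areas, removal of the small trachea components can be sketched as follows; `min_area` is an assumed tuning parameter, as the patent does not give a concrete threshold value:

```python
import numpy as np

def remove_small_regions(labels, areas, min_area):
    """Keep only components whose area reaches `min_area`; smaller
    components (e.g. the trachea) are cleared from the mask."""
    keep = {i + 1 for i, a in enumerate(areas) if a >= min_area}
    return np.isin(labels, list(keep))
```

The surviving foreground is the lung parenchyma mask used as a binary label in step S6.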
Fig. 5 is a diagram illustrating final left and right lung parenchymal region extraction results in an embodiment of the present invention.
As shown in fig. 5, the lung parenchymal region image finally extracted is a binary mask image, the lung parenchymal region is displayed in white, and the background portion (i.e., the portion other than the lung parenchymal region) is displayed in black.
As described above, in step S5, a lung parenchymal region is obtained, and the lung parenchymal region is a binary mask image.
Step S6, using the lung parenchyma region as a label and the chest CT image corresponding to the lung parenchyma region as the image corresponding to the label to form a training set, performing training of the segmentation network based on the training set to obtain a trained lung parenchyma segmentation network model, and extracting the lung parenchyma region from new chest CT images using the lung parenchyma segmentation network model.
In this embodiment, the split network is a U-Net network.
FIG. 6 is a schematic structural diagram of a U-Net network in the embodiment of the present invention.
As shown in fig. 6, the U-Net network is mainly composed of convolutional layers, max-pooling layers (down-sampling), deconvolution layers (up-sampling), and the ReLU nonlinear activation function, wherein the convolutional and deconvolution layers use the two-dimensional convolutional layer Conv2d.
In this embodiment, when the U-Net network model is trained, the number of samples is 1579, the batch size is set to 5, the learning rate is 0.0001, the number of training epochs is 30, and the ratio of the training set to the validation set is 9:1. The activation function is the ReLU activation function, the optimizer is RMSprop with momentum set to 0.9, and the loss function is the cross entropy loss function, whose loss value is calculated by the following formula:
L=-[ylog(p)+(1-y)log(1-p)]
In the formula, L is the cross entropy loss value, y is the sample label (1 for the lung parenchyma region and 0 for the background, i.e., the part of the image other than the lung parenchyma region), and p is the predicted probability that the sample is lung parenchyma.
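As a numeric sketch of this loss (the same formula, averaged over pixels; the clipping epsilon is an added numerical-stability assumption):

```python
import numpy as np

def bce_loss(p, y, eps=1e-12):
    """Cross entropy L = -[y log p + (1 - y) log(1 - p)], mean over pixels."""
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))
```

For a perfectly confident correct prediction the loss is (numerically) zero, and for p = 0.5 on a positive pixel it equals log 2 ≈ 0.693.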
The evaluation index of the training result of the U-Net network is a Dice index, and the value of the Dice index is calculated by the following formula:
\text{Dice} = \frac{2\,\lvert R_{seg} \cap R_{gt} \rvert}{\lvert R_{seg} \rvert + \lvert R_{gt} \rvert}
In the formula, R_seg is the predicted segmentation result and R_gt is the ground-truth segmentation result.
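On binary masks the Dice index can be computed directly (an illustrative sketch):

```python
import numpy as np

def dice(seg, gt):
    """Dice = 2 |Rseg ∩ Rgt| / (|Rseg| + |Rgt|) on binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())
```

Identical masks give 1.0, disjoint masks give 0.0.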
In this embodiment, after training, the loss value of the loss function of the lung parenchyma segmentation network model reaches 0.0141 and the Dice index reaches 0.9655, so the lung parenchyma segmentation network model has high accuracy. The trained lung parenchyma segmentation network model can then be used to quickly extract lung parenchyma regions from new chest CT images.
Examples effects and effects
According to the automatic lung parenchyma segmentation method based on the chest CT image provided by this embodiment, the seed point extraction method automatically selects suitable seed points from the chest CT image; the region growing method and the hole filling method automatically extract the thorax contour on the basis of the seed points; and the area threshold method removes the trachea regions in the two connected domains, yielding the lung parenchyma region. Further, the obtained lung parenchyma region (i.e., a binary mask image) is used as a label and the corresponding chest CT image as its paired image to form a training set, on which the segmentation network is trained to obtain a trained lung parenchyma segmentation network model; this model can then be used to extract the lung parenchyma region from new chest CT images. The method therefore acquires segmentation labels automatically, without manual labeling, which reduces manual workload and improves efficiency, and the labels, being obtained by a uniform method, have high consistency. Furthermore, the trained lung parenchyma segmentation network model can accurately extract lung parenchyma regions of various shapes, improving work efficiency and providing technical support for subsequent three-dimensional reconstruction work.
The above-described embodiments are merely illustrative of specific embodiments of the present invention, and the present invention is not limited to the description of the above-described embodiments.

Claims (8)

1. An automatic lung parenchyma segmentation method based on a chest CT image, for extracting a lung parenchyma region from the chest CT image, characterized by comprising the following steps:
step S1, selecting seed points from the chest CT image by using a seed point extraction method;
step S2, extracting a thorax contour by using a region growing method and a hole filling method based on the seed points;
step S3, performing connected domain analysis within the range of the chest cavity outline, and judging whether two connected domains with similar areas exist;
step S4, when the judgment in the step S3 is negative, separating the thorax contour by using a corner detection method to obtain two connected domains;
step S5, removing the trachea areas in the two connected areas by using an area threshold method to obtain the lung parenchymal area;
step S6, using the lung parenchyma region as a label and the chest CT image corresponding to the lung parenchyma region as the image corresponding to the label to form a training set, performing training of a segmentation network based on the training set to obtain a trained lung parenchyma segmentation network model, and extracting the lung parenchyma region from a new chest CT image using the lung parenchyma segmentation network model.
2. The method of claim 1, wherein the method comprises:
the seed point extraction method comprises the following steps:
step A1, performing global traversal on the chest CT image by using sliding windows with preset sizes and preset step lengths, judging whether the average pixel gray value of the image in each sliding window is higher than a set threshold value, and taking the sliding window as a preliminarily selected sliding window when the average pixel gray value is judged to be higher than the set threshold value;
and A2, performing similarity measure calculation on each preliminarily selected sliding window and the four sliding windows adjacent to the preliminarily selected sliding window, judging whether the preliminarily selected sliding window and the four adjacent sliding windows have high similarity, and if so, extracting the centroid of the preliminarily selected sliding window as the seed point.
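Step A1 of the seed point extraction can be sketched as a plain sliding-window traversal. The window size, step length, and gray threshold below are illustrative placeholders, not values from the patent:

```python
import numpy as np

def candidate_windows(img, win=8, step=8, thresh=100):
    """Step A1 sketch: slide a win x win window over the image with a
    fixed step and keep windows whose mean gray value exceeds `thresh`;
    each kept window is recorded by its centroid (a seed candidate)."""
    h, w = img.shape
    picks = []
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            patch = img[r:r + win, c:c + win]
            if patch.mean() > thresh:
                picks.append((r + win // 2, c + win // 2))
    return picks
```

Step A2 would then compare each candidate window against its four neighboring windows with the similarity measures of claim 3 before accepting its centroid as a seed point.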
3. The method of claim 2, wherein the method comprises:
wherein the similarity measure calculation comprises a gray level similarity calculation, a texture similarity calculation and a structure similarity calculation,
the gray level similarity calculation adopts a mean difference algorithm to calculate:
D_gray = | (1/N²) Σ X − (1/N²) Σ Y_n |
the texture similarity calculation adopts an entropy difference algorithm to calculate:
E = − Σ_i Σ_j P_ij log P_ij
D_texture = | E_x − E_y |
the structural similarity calculation adopts a Hamming value based on Hash coding to calculate:
D_struct = Σ_k h_X(k) ⊕ h_Y(k)
wherein N is the image scale, X is the pixel gray value of the central region image, Y_n is the pixel gray value of the neighborhood images of X, E_x and E_y are the entropies of the images, i is the gray value of a pixel, j is the mean gray value of the neighborhood image, and P_ij is the probability that the image takes the gray value pair (i, j).
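The three similarity measures of claim 3 can be sketched in Python. Note the simplifications: the entropy below is over a 1-D gray histogram rather than the patent's 2-D pixel/neighborhood-mean pairs, the hash is a simple average hash, and the bin count is illustrative:

```python
import numpy as np

def mean_difference(x, y):
    """Gray-level similarity: absolute difference of mean intensities."""
    return abs(x.mean() - y.mean())

def entropy(img, bins=16):
    """Shannon entropy of the gray-level histogram (a 1-D stand-in for
    the patent's 2-D entropy); texture similarity is |entropy(x) - entropy(y)|."""
    p, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def hash_hamming(x, y):
    """Structural similarity: average-hash each patch (1 where a pixel
    exceeds the patch mean) and count differing bits (Hamming distance)."""
    hx = (x > x.mean()).ravel()
    hy = (y > y.mean()).ravel()
    return int(np.count_nonzero(hx != hy))
```

Low values of all three measures between a window and its four neighbors indicate a homogeneous region, qualifying the window's centroid as a seed point.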
4. The method of claim 1, wherein the method comprises:
wherein the region growing method comprises the following steps:
step B1, initializing the grown region to contain the seed point;
step B2, searching, among the pixel points in the 4-neighborhood of the seed point that are not yet in the grown region, for pixel points whose gray value is smaller than a set threshold; when such a pixel point is found, adding it to the grown region and taking it as a new seed point, and repeating step B2 to continue the region growing; when no such pixel point is found, stopping the region growing, the grown region then being the thorax contour with holes.
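The region growing of claim 4 can be sketched as a breadth-first flood fill over the 4-neighborhood; the threshold value is whatever the pipeline has set for thorax tissue:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, thresh):
    """Steps B1-B2 sketch: starting from `seed`, repeatedly absorb
    4-neighborhood pixels whose gray value is below `thresh`. Returns
    a boolean mask of the grown region (in the patent's pipeline, the
    thorax contour with holes, to be closed by hole filling)."""
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc] \
                    and img[nr, nc] < thresh:
                grown[nr, nc] = True   # the found pixel becomes a new seed
                queue.append((nr, nc))
    return grown
```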
5. The method of claim 1, wherein the method comprises:
the corner point detection method comprises the following steps:
step C1, calculating the response value of each pixel point in the image of the thorax contour;
step C2, selecting the two pixel points whose response values are significantly larger than those of the other pixel points as the two corner points;
and step C3, connecting the two corner points to realize the separation of the two regions.
6. The method of claim 5, wherein the method comprises:
wherein the response value is calculated according to the following formula:
R = det M − k·(trace M)²
M = Σ_{x,y} w(x, y) [ I_x²  I_xI_y ; I_xI_y  I_y² ]
wherein R is the response value, M is the intermediate matrix, k is an empirical constant, w is the Gaussian window function, and I_x, I_y are the gradients of the image in the X and Y directions, respectively.
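The response of claim 6 is the classic Harris corner response. A minimal sketch, in which a uniform 3x3 box window stands in for the Gaussian weight w, and k = 0.04 is the customary empirical constant:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris response sketch: R = det(M) - k*trace(M)^2, with M built
    from the image gradients and summed over a 3x3 window."""
    iy, ix = np.gradient(img.astype(float))   # gradients along Y and X
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):
        # 3x3 box sum via zero padding and shifted slices
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2
```

R is large and positive at corners, negative along edges, and near zero in flat regions, which is why step C2 can pick the two corner points as the pixels with the largest responses.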
7. The method of claim 1, wherein the method comprises:
wherein the lung parenchymal region is a binary mask image,
the segmentation network is a U-Net segmentation network,
the loss function of the segmentation network is a cross-entropy loss function, and the loss value of the cross-entropy loss function is calculated by the following formula:
L=-[ylog(p)+(1-y)log(1-p)]
wherein y is a sample label and p is a probability that a sample is predicted to be the lung parenchymal region.
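The loss of claim 7 is the standard binary cross-entropy; a minimal per-sample sketch, with a small clamp added to avoid log(0) (the clamp value is illustrative):

```python
import math

def bce_loss(y, p, eps=1e-7):
    """Cross-entropy loss from claim 7: L = -[y*log(p) + (1-y)*log(1-p)],
    where y is the label (0 or 1) and p the predicted probability that
    the pixel belongs to the lung parenchymal region."""
    p = min(max(p, eps), 1 - eps)   # clamp to keep the logs finite
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```

In training, this loss is averaged over all pixels of the binary mask label.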
8. The method of claim 7, wherein the method comprises:
the evaluation index of the segmented network is a Dice index, and the Dice index is calculated by the following formula:
Dice = 2 |R_seg ∩ R_gt| / ( |R_seg| + |R_gt| )
in the formula, R_seg is the predicted segmentation result, and R_gt is the ground truth segmentation result.
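The Dice index of claim 8 for binary masks can be computed as:

```python
import numpy as np

def dice(seg, gt):
    """Dice index from claim 8: 2*|Rseg n Rgt| / (|Rseg| + |Rgt|),
    for binary masks; 1.0 means perfect overlap, 0.0 none."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())
```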
CN202110960219.6A 2021-08-20 2021-08-20 Lung parenchyma automatic segmentation method based on chest CT image Pending CN113706492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110960219.6A CN113706492A (en) 2021-08-20 2021-08-20 Lung parenchyma automatic segmentation method based on chest CT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110960219.6A CN113706492A (en) 2021-08-20 2021-08-20 Lung parenchyma automatic segmentation method based on chest CT image

Publications (1)

Publication Number Publication Date
CN113706492A true CN113706492A (en) 2021-11-26

Family

ID=78653916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110960219.6A Pending CN113706492A (en) 2021-08-20 2021-08-20 Lung parenchyma automatic segmentation method based on chest CT image

Country Status (1)

Country Link
CN (1) CN113706492A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972165A (en) * 2022-03-24 2022-08-30 中山大学孙逸仙纪念医院 Method and device for measuring time-average shearing force

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101714211A (en) * 2009-12-04 2010-05-26 西安电子科技大学 Detection method of high-resolution remote sensing image street center line
CN109410166A (en) * 2018-08-30 2019-03-01 中国科学院苏州生物医学工程技术研究所 Full-automatic partition method for pulmonary parenchyma CT image
CN110363053A (en) * 2018-08-09 2019-10-22 中国人民解放军战略支援部队信息工程大学 Method and device for extracting residential areas from remote sensing images
CN111754472A (en) * 2020-06-15 2020-10-09 南京冠纬健康科技有限公司 Pulmonary nodule detection method and system
CN113012127A (en) * 2021-03-18 2021-06-22 复旦大学 Cardiothoracic ratio measuring method based on chest medical image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101714211A (en) * 2009-12-04 2010-05-26 西安电子科技大学 Detection method of high-resolution remote sensing image street center line
CN110363053A (en) * 2018-08-09 2019-10-22 中国人民解放军战略支援部队信息工程大学 Method and device for extracting residential areas from remote sensing images
CN109410166A (en) * 2018-08-30 2019-03-01 中国科学院苏州生物医学工程技术研究所 Full-automatic partition method for pulmonary parenchyma CT image
CN111754472A (en) * 2020-06-15 2020-10-09 南京冠纬健康科技有限公司 Pulmonary nodule detection method and system
CN113012127A (en) * 2021-03-18 2021-06-22 复旦大学 Cardiothoracic ratio measuring method based on chest medical image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
郭圣文; 曾庆思; 陈坚: "Extraction of lung parenchyma in chest CT and auxiliary diagnosis", Chinese Journal of Biomedical Engineering, vol. 27, no. 5, 20 October 2008 (2008-10-20), pages 788-791 *
金飞: "Research on residential area extraction from remote sensing images based on texture features", China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 January 2014 (2014-01-15), pages 37-38 *
鲁宏伟; 文燕: "Application of the region growing method in PCB component segmentation", Journal of Chinese Computer Systems, vol. 28, no. 8, 15 August 2007 (2007-08-15), pages 1489-1491 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972165A (en) * 2022-03-24 2022-08-30 中山大学孙逸仙纪念医院 Method and device for measuring time-average shearing force
CN114972165B (en) * 2022-03-24 2024-03-15 中山大学孙逸仙纪念医院 Method and device for measuring time average shearing force

Similar Documents

Publication Publication Date Title
JP7198577B2 (en) Image analysis method, device, program, and method for manufacturing trained deep learning algorithm
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
CN104933709B (en) Random walk CT lung tissue image automatic segmentation methods based on prior information
CN109102506B (en) Automatic segmentation method for abdominal CT liver lesion image based on three-level cascade network
CN111931811B (en) Calculation method based on super-pixel image similarity
WO2019000455A1 (en) Method and system for segmenting image
CN107330263A (en) A kind of method of area of computer aided breast invasive ductal carcinoma histological grading
CN110874860B (en) Target extraction method of symmetrical supervision model based on mixed loss function
JP2019148950A (en) Method for image analysis, image analyzer, program, method for manufacturing learned deep learning algorithm, and learned deep learning algorithm
CN111161272B (en) Embryo tissue segmentation method based on generation of confrontation network
CN113034462B (en) Method and system for processing gastric cancer pathological section image based on graph convolution
CN111340128A (en) Lung cancer metastatic lymph node pathological image recognition system and method
CN109919254B (en) Breast density classification method, system, readable storage medium and computer device
CN112819747A (en) Method for automatically diagnosing benign and malignant nodules based on lung tomography image
CN112263217B (en) Improved convolutional neural network-based non-melanoma skin cancer pathological image lesion area detection method
CN110796661B (en) Fungal microscopic image segmentation detection method and system based on convolutional neural network
CN111738997A (en) Method for calculating new coronary pneumonia lesion area ratio based on deep learning
CN114841947A (en) Method and device for multi-scale feature extraction and prognosis analysis of H &amp; E staining pathological image tumor region
CN110349168B (en) Femoral head CT image segmentation method
CN113706492A (en) Lung parenchyma automatic segmentation method based on chest CT image
CN112950611A (en) Liver blood vessel segmentation method based on CT image
CN112712540B (en) Lung bronchus extraction method based on CT image
CN105869169A (en) Automatic dividing method of tumor issue micro array image
CN104915989A (en) CT image-based blood vessel three-dimensional segmentation method
CN110390678B (en) Tissue type segmentation method of colorectal cancer IHC staining image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination