CN113516669A - Trachea extraction method, device and equipment based on CT image and storage medium - Google Patents


Info

Publication number
CN113516669A
CN113516669A (application number CN202110699195.3A; granted publication CN113516669B)
Authority
CN
China
Prior art keywords
image
training set
matrix
original
bronchial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110699195.3A
Other languages
Chinese (zh)
Other versions
CN113516669B (en)
Inventor
黎永秀
赵梦
罗利
陈超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Yingku Technology Co ltd
Original Assignee
Hubei Yingku Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Yingku Technology Co ltd
Priority to CN202110699195.3A
Publication of CN113516669A
Application granted
Publication of CN113516669B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Abstract

The invention discloses a trachea extraction method, device, equipment and storage medium based on CT images. The method comprises the following steps: acquiring an original CT image training set and a corresponding semantic segmentation image training set, and performing matrixing processing to obtain an original matrix training set and a semantic segmentation matrix training set; inputting the original matrix training set and the semantic segmentation matrix training set into a 3D neural network, and obtaining a prediction model after training; after matrixing a CT image to be predicted, inputting it into the prediction model, which outputs a prediction matrix; and performing binarization processing on the prediction matrix, performing ossification processing on the binarized prediction matrix, and performing bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure. The invention solves the problem that trachea information cannot currently be extracted well from CT images.

Description

Trachea extraction method, device and equipment based on CT image and storage medium
Technical Field
The invention relates to the technical field of CT scanning, in particular to a method, a device, equipment and a storage medium for extracting a trachea based on a CT image.
Background
Currently, enhanced CT scanning and low-dose CT scanning are widely used and of great significance in chest examination, diagnosis and preoperative planning. Doctors diagnose lesion sites from the image data. Diagnosis is basically performed on two-dimensional images; although current imaging workstations and image-browsing clients provide 3D post-processing, they require manual intervention and consume considerable time and effort. In particular, the density of the bronchi in chest CT images is very low, while the density of blood vessels is very high relative to the bronchi, so it is difficult to display both bronchi and blood vessels well when viewing the images. Surgical planning is also difficult on two-dimensional images, especially for puncture treatment plans and radiotherapy treatment plans that must avoid the bronchi and blood vessels.
Disclosure of Invention
The present invention aims to overcome the above technical deficiencies and provides a method, an apparatus, a device and a storage medium for extracting the trachea based on CT images, which solve the technical problem in the prior art that trachea information cannot be well extracted from CT images.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a method for extracting trachea based on CT image, comprising the following steps:
acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
performing matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after the neural network is trained;
acquiring a CT image to be predicted, performing matrixing processing on the CT image to be predicted, inputting the matrix-processed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model;
and carrying out binarization processing on the prediction matrix, carrying out ossification processing on the binarized prediction matrix, and carrying out bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure.
Preferably, in the CT image-based trachea extraction method, the matrixing the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set specifically includes:
performing value range adjustment processing on each image in the original CT image training set and the semantic segmentation image training set;
preprocessing each image after value range adjustment to obtain a preprocessed original sampling sample and a semantic segmentation sampling sample;
and performing matrixing processing on the original sampling sample and the semantic segmentation sampling sample to obtain an original matrix training set and a semantic segmentation matrix training set.
Preferably, in the method for extracting trachea based on CT image, the preprocessing of each image after the value range adjustment specifically includes:
and carrying out interpolation, down-sampling and cutting processing on each image after the value range adjustment.
Preferably, in the method for extracting trachea based on CT image, the neural network is a 3D U-Net convolution neural network.
Preferably, in the CT image-based trachea extraction method, the binarizing of the prediction matrix, the ossifying of the binarized prediction matrix and the bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure specifically include:
carrying out binarization processing on the prediction matrix, and converting the prediction matrix after binarization processing into a binary matrix under a world coordinate system;
carrying out ossification treatment on the binary matrix according to a preset predicted intensity parameter;
and performing bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure.
Preferably, in the CT image-based trachea extraction method, the ossifying the binary matrix according to a preset predicted intensity parameter specifically includes:
and recombining the binary matrix; after setting the coordinate of the first pixel of the binary matrix as the initial image position, stacking the subsequent pixels in sequence according to the pixel spacing and the slice thickness; extracting an isosurface of the binary matrix by an isosurface extraction method to generate a mesh model; and generating a bronchial structure diagram from the mesh model.
Preferably, in the CT image-based trachea extraction method, the detecting of the bifurcation structure of the ossified bronchial structure map to obtain the bronchial segmented tree structure specifically includes:
and extracting the centerline of the ossified bronchial structure diagram, and segmenting the ossified bronchial structure diagram step by step according to the centerline to obtain a bronchial segmented tree structure.
In a second aspect, the present invention further provides a trachea extraction device based on CT images, including:
the training set acquisition module is used for acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
the training set processing module is used for performing matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
the model establishing module is used for inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after the neural network is trained;
the image processing module is used for acquiring a CT image to be predicted, matrixing the CT image to be predicted, inputting the matrixed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model;
and the bronchial segmented tree structure establishing module is used for carrying out binarization processing on the prediction matrix, carrying out ossification processing on the binarized prediction matrix, and carrying out bifurcation structure detection on the ossified bronchial structure diagram so as to obtain the bronchial segmented tree structure.
In a third aspect, the present invention further provides a trachea extraction device based on CT images, including: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps of the CT image-based trachea extraction method as described above.
In a fourth aspect, the present invention also provides a computer readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the CT image-based trachea extraction method as described above.
Compared with the prior art, in the CT image-based trachea extraction method, device, equipment and storage medium provided by the invention, a 3D neural network is trained on CT images to obtain a 3D semantic segmentation model. The 3D semantic segmentation model can perform bronchial semantic segmentation on an input CT image, and after segmentation a bronchial segmented tree structure is built so that the bronchi can be displayed in 3D form. This makes the result convenient for the staff to review and makes it easy to avoid the bronchi when designing puncture treatment plans and radiotherapy treatment plans during surgical planning.
Drawings
FIG. 1 is a flowchart illustrating a method for extracting trachea based on CT images according to a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of an embodiment of a raw CT image in accordance with the present invention;
FIG. 3 is a diagram of one embodiment of semantically segmenting an image according to the present invention;
FIG. 4 is a schematic representation of an embodiment of a bronchial structure map obtained after ossification treatment according to the present invention;
FIG. 5 is a schematic representation of one embodiment of a bronchial segmented tree structure obtained after ossification treatment according to the present invention;
fig. 6 is a schematic diagram of a trachea extraction device based on CT images according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, a method for extracting trachea based on CT image according to an embodiment of the present invention includes the following steps:
s100, acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set.
In this embodiment, an original CT image training set is first obtained, where the original CT image training set includes a plurality of CT images (as shown in fig. 2) in which the position and shape of the bronchi have been delineated. The semantic segmentation image training set is generated from the original CT image training set; a semantic segmentation image (as shown in fig. 3) is a binary image in which the bronchi of the original CT image are labeled. In a specific implementation, the semantic segmentation image uses 512 × 512 single-channel 8-bit image data with the same matrix as the original CT image: in the final annotation of the image, the confirmed region (the tracheal region) takes the value 1 and the other regions take the value 0, producing a corresponding 512 × 512 single-channel 8-bit binary image. In one embodiment, the acquired results (i.e., the original CT images) are delineated and labeled using the Mimics software; after labeling, MASK data and STL data are obtained and can be checked in the software. The invention needs the MASK data, which can be exported with Mimics; the exported binary image of 8-bit DICOM data is the semantic segmentation image corresponding to the original CT image, in which the unlabeled region is 0 and the labeled region is 1. The original data is retained as 16-bit DICOM data, with a value range of roughly -1024 to 4096, where -1024 corresponds to air and 0 to water.
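As a rough illustration of this data-preparation step (not the patent's own code), the following Python sketch loads a 16-bit CT DICOM series and the exported 8-bit mask series with SimpleITK and forces the mask to strict 0/1 labels; the directory layout and the use of SimpleITK are assumptions made only for the example.

import SimpleITK as sitk
import numpy as np

def load_series(dicom_dir):
    """Read one DICOM series from a directory into a 3D SimpleITK image."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    return reader.Execute()

# hypothetical directory layout: 16-bit CT series and the 8-bit mask exported from the labeling tool
ct_image = load_series("case001/ct")
mask_image = load_series("case001/mask")

ct = sitk.GetArrayFromImage(ct_image).astype(np.int16)            # (slices, 512, 512), roughly -1024..4096
mask = (sitk.GetArrayFromImage(mask_image) > 0).astype(np.uint8)  # labeled airway region -> 1, rest -> 0

print(ct.shape, int(ct.min()), int(ct.max()), int(mask.sum()))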
S200, performing matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set.
In this embodiment, for subsequent training, an original CT image training set and the semantic segmentation image training set need to be preprocessed first, and specifically, the step S200 specifically includes:
performing value range adjustment processing on each image in the original CT image training set and the semantic segmentation image training set;
preprocessing each image after value range adjustment to obtain a preprocessed original sampling sample and a semantic segmentation sampling sample;
and performing matrixing processing on the original sampling sample and the semantic segmentation sampling sample to obtain an original matrix training set and a semantic segmentation matrix training set.
Specifically, because the value span of the acquired original data is very large, the value range of the data must be adjusted and irrelevant value ranges, such as those of bone and muscle, are removed as far as possible. The remaining value range is then divided evenly onto the interval 0-255 and equalization processing is applied. In this way the computation load and the value range that the neural network must handle are reduced as much as possible.
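A minimal sketch of this value-range adjustment follows, assuming an HU window of -1024 to 0 (so that bone and most soft tissue fall outside the window) and a simple global histogram equalization; the window bounds and the equalization method are illustrative assumptions, not values stated in the patent.

import numpy as np

def adjust_value_range(ct_hu, lo=-1024, hi=0):
    """Clip to an HU window, map it onto 0..255 and apply global histogram equalization."""
    clipped = np.clip(ct_hu, lo, hi).astype(np.float32)
    scaled = (clipped - lo) / (hi - lo) * 255.0                  # map the window to 0..255
    hist, _ = np.histogram(scaled.astype(np.uint8), bins=256, range=(0, 255))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[scaled.astype(np.uint8)].astype(np.uint8)

ct_hu = np.random.randint(-1024, 2000, size=(8, 64, 64)).astype(np.int16)  # stands in for a real CT volume
ct_255 = adjust_value_range(ct_hu)
print(ct_255.dtype, int(ct_255.min()), int(ct_255.max()))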
Further, due to the limited computing capability of the computer hardware, the 3D data needs to be preprocessed. Specifically, the preprocessing of each image after the value range adjustment includes interpolation, down-sampling and cutting. The data are cut into smaller matrix blocks before being fed into the neural network for learning, which facilitates training of the neural network. The original CT image is interpolated with a 3D linear interpolation method; after linear interpolation, the semantic segmentation image must be binarized again so that the unlabeled region is 0 and the labeled region is 1. After this processing, the original sampling samples and the semantic segmentation sampling samples can be matrixed to obtain an original matrix training set and a semantic segmentation matrix training set that corresponds to it one to one.
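The interpolation, down-sampling and block-cutting just described could look like the following sketch; the target volume shape and the block size are assumptions chosen only for illustration, and the label volume is re-binarized after interpolation as required above.

import numpy as np
from scipy.ndimage import zoom

def resample(volume, target_shape=(64, 128, 128), is_label=False):
    """3D linear interpolation to target_shape; labels are re-binarized afterwards."""
    factors = [t / s for t, s in zip(target_shape, volume.shape)]
    out = zoom(volume.astype(np.float32), factors, order=1)
    if is_label:
        out = (out >= 0.5).astype(np.uint8)          # restore strict 0/1 labels after interpolation
    return out

def cut_blocks(volume, block=(64, 64, 64)):
    """Yield non-overlapping blocks that fit entirely inside the volume."""
    dz, dy, dx = block
    for z in range(0, volume.shape[0] - dz + 1, dz):
        for y in range(0, volume.shape[1] - dy + 1, dy):
            for x in range(0, volume.shape[2] - dx + 1, dx):
                yield volume[z:z+dz, y:y+dy, x:x+dx]

# stand-ins for the range-adjusted CT volume and its binary label from the previous steps
ct_255 = np.random.randint(0, 256, (80, 200, 200), dtype=np.uint8)
mask = (np.random.rand(80, 200, 200) > 0.999).astype(np.uint8)

ct_blocks = list(cut_blocks(resample(ct_255)))
mask_blocks = list(cut_blocks(resample(mask, is_label=True)))
print(len(ct_blocks), ct_blocks[0].shape)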
S300, inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after training by the neural network.
In this embodiment, after the training set is prepared, the neural network model can be trained; a prediction model is obtained after training on no fewer than 200 sets of data for 500,000 iterations. The prediction model is used to process an input CT image to obtain a prediction matrix, and the prediction matrix reflects the shape and position of the bronchi.
Specifically, the neural network adopts a 3D U-Net convolutional neural network. In this embodiment, a dimension is added to the original 2D neural network, which overcomes the limited recognition rate within a single-slice image. For semantic segmentation targets that have a spatial morphological structure, adding one spatial dimension can greatly improve recognition accuracy, because the neural network then performs recognition and semantic segmentation according to the spatial structure.
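For concreteness, a minimal 3D U-Net sketch in PyTorch is shown below with only two resolution levels; the patent states only that a 3D U-Net convolutional network is used, so the depth, channel widths and two-class output here are illustrative assumptions rather than the actual architecture.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottom = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection from encoder level 2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection from encoder level 1
        return self.head(d1)                                  # raw per-voxel logits

net = UNet3D()
logits = net(torch.randn(1, 1, 64, 64, 64))                   # one 64^3 CT block
print(tuple(logits.shape))                                     # (1, 2, 64, 64, 64)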
3D U-Net also uses data augmentation, mainly rotation, scaling and grayscale transformation, and in addition applies a smooth dense deformation field to both the training data and the ground-truth labels: random vectors are sampled from a normal distribution with a standard deviation of 4 on a grid with a spacing of 32 voxels in each direction, and B-spline interpolation is then applied. B-spline interpolation is a general way of finding a shape similar to the original shape as an approximation. Training then uses a weighted cross-entropy loss function, which reduces the weight of the background and increases the weight of the annotated image data, so that the influence of the thin tubular structures and the background voxels is balanced.
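The two ingredients just described can be sketched as follows, under stated assumptions: the smooth dense deformation field is built from normally distributed random vectors with standard deviation 4 on a coarse grid with 32-voxel spacing, upsampled with cubic (B-spline) interpolation and applied to both the image and the label, and the weighted cross-entropy loss down-weights the background; the specific class weights and demo arrays are illustrative.

import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import zoom, map_coordinates

def smooth_deform(image, label, std=4.0, grid_spacing=32):
    """Warp image and label with a smooth dense deformation field (random grid, B-spline upsampled)."""
    shape = image.shape
    coarse = [max(s // grid_spacing, 2) for s in shape]
    coords = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    warped_coords = []
    for axis in range(3):
        disp_coarse = np.random.normal(0.0, std, size=coarse)                        # random vectors, std 4
        disp = zoom(disp_coarse, [s / c for s, c in zip(shape, coarse)], order=3)    # cubic B-spline upsample
        warped_coords.append(coords[axis] + disp[:shape[0], :shape[1], :shape[2]])
    img_w = map_coordinates(image, warped_coords, order=1, mode="nearest")
    lab_w = map_coordinates(label, warped_coords, order=0, mode="nearest")           # keep labels strictly 0/1
    return img_w, lab_w

image = np.random.rand(48, 64, 64).astype(np.float32)             # stands in for one CT block
label = (np.random.rand(48, 64, 64) > 0.98).astype(np.uint8)      # stands in for its airway label
img_aug, lab_aug = smooth_deform(image, label)

loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.2, 1.0]))    # background down-weighted (assumed values)
logits = torch.randn(1, 2, 48, 64, 64)                            # stands in for 3D U-Net output
loss = loss_fn(logits, torch.from_numpy(lab_aug.astype(np.int64))[None])
print(img_aug.shape, float(loss))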
S400, acquiring a CT image to be predicted, performing matrixing processing on the CT image to be predicted, inputting the matrixed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model.
In this embodiment, after a CT image to be predicted is obtained, it is preprocessed in the same way as the images in the training set: its value range is adjusted, then interpolation, down-sampling and cutting are performed, and the processed image is matrixed to obtain an input matrix of the same form as the original matrices in the training set. The input matrix is then fed into the prediction model for prediction; after prediction, points whose confidence is lower than 99.5% are discarded, and a complete prediction matrix is obtained.
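A minimal inference sketch for this step is given below; it assumes block-wise prediction with the same block size as training and reads the 99.5% figure as a softmax-confidence threshold on the airway class, which is an interpretation rather than the patent's stated post-processing. The small convolution used here merely stands in for the trained prediction model.

import numpy as np
import torch

@torch.no_grad()
def predict_volume(net, volume_255, block=(64, 64, 64), threshold=0.995):
    """Block-wise prediction; voxels whose airway probability reaches the threshold are kept."""
    net.eval()
    pred = np.zeros(volume_255.shape, dtype=np.uint8)
    dz, dy, dx = block
    for z in range(0, volume_255.shape[0] - dz + 1, dz):
        for y in range(0, volume_255.shape[1] - dy + 1, dy):
            for x in range(0, volume_255.shape[2] - dx + 1, dx):
                patch = volume_255[z:z+dz, y:y+dy, x:x+dx].astype(np.float32) / 255.0
                logits = net(torch.from_numpy(patch)[None, None])        # (1, 2, dz, dy, dx)
                prob_airway = torch.softmax(logits, dim=1)[0, 1].numpy()
                pred[z:z+dz, y:y+dy, x:x+dx] = (prob_airway >= threshold).astype(np.uint8)
    return pred

dummy_net = torch.nn.Conv3d(1, 2, 3, padding=1)                      # stands in for the trained 3D U-Net
volume = np.random.randint(0, 256, (64, 128, 128), dtype=np.uint8)   # stands in for a preprocessed CT volume
prediction_matrix = predict_volume(dummy_net, volume)
print(prediction_matrix.shape, int(prediction_matrix.sum()))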
S500, performing binarization processing on the prediction matrix, performing ossification processing on the binarized prediction matrix, and performing bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure.
In this embodiment, after the prediction matrix is obtained, the bronchial segmented tree structure can be drawn from the prediction matrix to obtain a 3D bronchial structure diagram, which is convenient for the staff to review and makes it easy to avoid the bronchi during surgical planning. Specifically, the step S500 specifically includes:
carrying out binarization processing on the prediction matrix, and converting the prediction matrix after binarization processing into a binary matrix under a world coordinate system;
carrying out ossification treatment on the binary matrix according to a preset predicted intensity parameter;
and performing bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure.
In this embodiment, the obtained prediction matrix is first binarized: the region of the matrix that does not represent the airway is set to 0 and the region that does is set to 1. The coordinate system is then converted to the world coordinate system, the matrix is reorganized according to the world coordinate system, and ossification is performed using the bronchial tree structure as the ossification basis (the image obtained after ossification is shown in fig. 4). To optimize the bronchial structure, optimization and adjustment are performed according to the preset predicted intensity parameters, after which bifurcation structure detection can be performed on the adjusted bronchial structure to obtain a bronchial segmented tree structure (shown in fig. 5). Specifically, the ossifying of the binary matrix according to the preset predicted intensity parameters specifically includes:
and recombining the binary matrix; after setting the coordinate of the first pixel of the binary matrix as the initial image position, stacking the subsequent pixels in sequence according to the pixel spacing and the slice thickness; extracting an isosurface of the binary matrix by an isosurface extraction method to generate a mesh model; and generating a bronchial structure diagram from the mesh model.
In this embodiment, the preset predicted intensity parameters at least include the image position, the image orientation, the patient position, the distance between images and the pixel spacing. The image position indicates the spatial coordinates (x, y, z) of the first pixel in the upper-left corner of the image, i.e., the coordinates of the first pixel of the binary matrix. The image orientation indicates the direction cosines of the first row and the first column of the image relative to the patient. The directions of the coordinate axes are determined by the patient's orientation: the X-axis points to the patient's left-hand side, the Y-axis points to the patient's back, and the Z-axis points to the patient's head. The patient position describes the position of the patient relative to the imaging device, such as a CT or MR device; HFP denotes head first, prone, and HFS denotes head first, supine. The distance between images is the slice thickness, and the pixel spacing represents the physical distance represented by each pixel.
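These parameters correspond to standard DICOM attributes; a short pydicom sketch of reading them from one slice and mapping a voxel index to patient (world) coordinates follows, where the file path is hypothetical and the voxel-to-world mapping follows the usual DICOM convention rather than anything specific to the patent.

import numpy as np
import pydicom

ds = pydicom.dcmread("case001/ct/slice_0001.dcm")            # hypothetical path to one slice

ipp = np.array(ds.ImagePositionPatient, dtype=float)         # (x, y, z) of the first (top-left) pixel
iop = np.array(ds.ImageOrientationPatient, dtype=float)      # row/column direction cosines
row_dir, col_dir = iop[:3], iop[3:]
row_spacing, col_spacing = map(float, ds.PixelSpacing)        # mm between rows / between columns
slice_thickness = float(ds.SliceThickness)
normal = np.cross(row_dir, col_dir)                           # stacking direction of the slices

def voxel_to_world(col, row, slice_idx):
    """Map (column, row, slice) indices to patient coordinates in mm."""
    return (ipp
            + col * col_spacing * row_dir
            + row * row_spacing * col_dir
            + slice_idx * slice_thickness * normal)

print(ds.PatientPosition, voxel_to_world(0, 0, 0))            # e.g. 'HFS' and the first-pixel position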
When ossification is performed, a result matrix is obtained: the coordinates of the first pixel of the 512 × 512 × n three-dimensional matrix are set as the initial image position, and the subsequent slices are then stacked in order according to the pixel spacing and the slice thickness. An isosurface is extracted from the bronchus data (the binary matrix) using the Marching Cubes algorithm, mesh model data are generated, and the normal vectors are calculated. Delaunay triangulation is then performed on all points of the mesh and a three-dimensional Voronoi diagram is generated.
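A minimal sketch of this surface-reconstruction ("ossification") step using scikit-image's Marching Cubes is shown below on a synthetic binary tube; the spacing values are illustrative, and the Delaunay/Voronoi refinement mentioned above is omitted.

import numpy as np
from skimage import measure

def airway_mesh(binary_matrix, slice_thickness, pixel_spacing):
    """Extract a triangle mesh of the airway surface in physical (mm) coordinates."""
    verts, faces, normals, _ = measure.marching_cubes(
        binary_matrix.astype(np.float32),
        level=0.5,                                                   # surface between 0 and 1 voxels
        spacing=(slice_thickness, pixel_spacing[0], pixel_spacing[1]),
    )
    return verts, faces, normals

airway = np.zeros((40, 64, 64), dtype=np.uint8)
airway[:, 28:36, 28:36] = 1                                          # crude synthetic "airway" tube
verts, faces, normals = airway_mesh(airway, slice_thickness=1.25, pixel_spacing=(0.7, 0.7))
print(verts.shape, faces.shape)                                      # vertices in mm, triangle indices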
Further, after the ossified bronchial structure diagram is obtained, bifurcation structure detection is performed on it: starting from the first stage, the second stage follows each bifurcation, and so on, until the bronchial segmented tree structure is finally obtained. According to practical requirements, the total number of stages does not exceed six. Specifically, the bifurcation structure detection on the ossified bronchial structure diagram to obtain the bronchial segmented tree structure specifically comprises:
and extracting the centerline of the ossified bronchial structure diagram, and segmenting the ossified bronchial structure diagram step by step according to the centerline to obtain a bronchial segmented tree structure.
Specifically, the PolyLine of the centerline is first obtained using 'vmtkcenterlines', together with the set of IDs of all points on the PolyLine. The point IDs that appear two or more times are then searched for in the point set according to the way the PolyLines are composed; these points are the intersections, and all points are arranged in line order. The physiological structure dictates that a bronchus can only split one-into-two or one-into-three, so the points can be grouped by the lines through the intersection points; after grouping, a point of a line that appears only once in the point set is a terminal end, while a point that appears several times is an intersection. Then, according to the image position information in the DICOM sequence (a chest CT acquired in the standard way is scanned starting from the neck), the marked points (the two ends of a line segment) closest to the position of the first image are obtained. The two end points of all the line segments obtained above are traversed, and the point that appears only once and is closest to the first image plane is taken as the starting point. Finally, using a basic path-finding algorithm over consecutive points, a recursive search is performed in which each bifurcation point encountered is recorded as one stage, until a point that appears only once is reached and the recursion ends; no more than six recursions (six stages of bronchi) are performed, which completes the construction of the bronchial segmented tree structure.
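The bookkeeping over centerline points can be sketched as follows on a toy, branch-decomposed centerline (one polyline per branch, as a tool such as vmtkcenterlines can provide after branch splitting); the point IDs, the branch layout and the stage-assignment helper are illustrative assumptions, not the patent's code.

from collections import Counter

# Toy branch-decomposed centerline: one polyline (list of point IDs) per branch.
polylines = [
    [0, 1, 2, 3, 4],      # trachea
    [4, 5, 6],            # left main bronchus
    [4, 7, 8],            # right main bronchus
    [6, 9], [6, 10],      # next-stage branches on the left
]

counts = Counter(pid for line in polylines for pid in line)
junctions = {pid for pid, n in counts.items() if n >= 2}   # points shared by branches: bifurcations
tips = {pid for pid, n in counts.items() if n == 1}        # points appearing once: tips and the start point
print("junctions:", junctions, "tips:", tips)

def build_tree(start_pid, lines, stage=1, max_stage=6):
    """Recursively assign a stage to every branch beginning at start_pid (at most six stages)."""
    if stage > max_stage:
        return []
    result = []
    for line in lines:
        if start_pid in (line[0], line[-1]):
            branch = line if line[0] == start_pid else line[::-1]
            remaining = [l for l in lines if l is not line]
            result.append((stage, branch))
            result += build_tree(branch[-1], remaining, stage + 1, max_stage)
    return result

# start from the point that appears only once and lies closest to the first image plane (here: 0)
for stage, branch in build_tree(0, polylines):
    print(f"stage {stage}: point IDs {branch}")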
Referring to fig. 6, based on the above CT image-based trachea extraction method, the present invention further provides a CT image-based trachea extraction device 600, which includes:
a training set obtaining module 610, configured to obtain an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
a training set processing module 620, configured to perform matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
a model establishing module 630, configured to input the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtain a prediction model after the neural network performs training;
the image processing module 640 is configured to obtain a CT image to be predicted, perform matrixing processing on the CT image to be predicted, input the matrixed CT image to be predicted into the prediction model, and output a prediction matrix by the prediction model;
and the bronchial segmented tree structure establishing module 650 is configured to perform binarization processing on the prediction matrix, perform ossification processing on the binarized prediction matrix, and perform bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure.
Since the CT image-based trachea extraction method has already been described in detail above, it is not repeated here.
Based on the above method for extracting trachea based on CT image, the present invention further provides a device for extracting trachea based on CT image, comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps of the CT image-based trachea extraction method according to the above embodiments.
Since the CT image-based trachea extraction method has already been described in detail above, it is not repeated here.
Based on the above-mentioned method for extracting trachea based on CT image, the present invention further provides a computer readable storage medium, where one or more programs are stored, and the one or more programs can be executed by one or more processors to implement the steps in the method for extracting trachea based on CT image according to the above-mentioned embodiments.
Since the CT image-based trachea extraction method has already been described in detail above, it is not repeated here.
In summary, in the CT image-based trachea extraction method, device, equipment and storage medium provided by the present invention, a 3D neural network is trained on CT images to obtain a 3D semantic segmentation model. The model performs bronchial semantic segmentation on an input CT image, and after segmentation a bronchial segmented tree structure is established so that the bronchi can be displayed in 3D form, which makes them convenient for the staff to review and easy to avoid when designing puncture treatment plans and radiotherapy treatment plans during surgical planning.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A trachea extraction method based on CT images is characterized by comprising the following steps:
acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
performing matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after the neural network is trained;
acquiring a CT image to be predicted, performing matrixing processing on the CT image to be predicted, inputting the matrix-processed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model;
and carrying out binarization processing on the prediction matrix, carrying out ossification processing on the binarized prediction matrix, and carrying out bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure.
2. The CT image-based trachea extraction method according to claim 1, wherein the matrixing the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set specifically comprises:
performing value range adjustment processing on each image in the original CT image training set and the semantic segmentation image training set;
preprocessing each image after value range adjustment to obtain a preprocessed original sampling sample and a semantic segmentation sampling sample;
and performing matrixing processing on the original sampling sample and the semantic segmentation sampling sample to obtain an original matrix training set and a semantic segmentation matrix training set.
3. The CT image-based trachea extraction method according to claim 2, wherein the preprocessing of each image after the value range adjustment specifically comprises:
and carrying out interpolation, down-sampling and cutting processing on each image after the value range adjustment.
4. The CT image-based trachea extraction method of claim 1, wherein the neural network is a 3D U-Net convolutional neural network.
5. The method for extracting trachea based on CT image according to claim 1, wherein the binarizing the prediction matrix, performing ossification on the binarized prediction matrix, and performing bifurcation structure detection on the ossified bronchial structure diagram to obtain the bronchial segmented tree structure specifically comprises:
carrying out binarization processing on the prediction matrix, and converting the prediction matrix after binarization processing into a binary matrix under a world coordinate system;
carrying out ossification treatment on the binary matrix according to a preset predicted intensity parameter;
and performing bifurcation structure detection on the ossified bronchial structure diagram to obtain a bronchial segmented tree structure.
6. The CT-image-based trachea extraction method according to claim 5, wherein the ossifying the binary matrix according to the preset predicted intensity parameters specifically comprises:
and recombining the binary matrix; after setting the coordinate of the first pixel of the binary matrix as the initial image position, stacking the subsequent pixels in sequence according to the pixel spacing and the slice thickness; extracting an isosurface of the binary matrix by an isosurface extraction method to generate a mesh model; and generating a bronchial structure diagram from the mesh model.
7. The method for extracting trachea according to claim 5, wherein the detecting the bifurcation structure of the ossified bronchial structure map to obtain the bronchial segmented tree structure specifically comprises:
and extracting the centerline of the ossified bronchial structure diagram, and segmenting the ossified bronchial structure diagram step by step according to the centerline to obtain a bronchial segmented tree structure.
8. A trachea extraction device based on CT image, characterized by comprising:
the training set acquisition module is used for acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
the training set processing module is used for performing matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
the model establishing module is used for inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after the neural network is trained;
the image processing module is used for acquiring a CT image to be predicted, matrixing the CT image to be predicted, inputting the matrixed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model;
and the bronchial segmented tree structure establishing module is used for carrying out binarization processing on the prediction matrix, carrying out ossification processing on the binarized prediction matrix, and carrying out bifurcation structure detection on the ossified bronchial structure diagram so as to obtain the bronchial segmented tree structure.
9. A trachea extraction device based on CT images is characterized by comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps of the CT image-based trachea extraction method according to any one of claims 1 to 7.
10. A computer readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the CT image-based trachea extraction method according to any one of claims 1 to 7.
Application CN202110699195.3A, priority date 2021-06-23, filing date 2021-06-23: CT image-based trachea extraction method, device, equipment and storage medium. Status: Active. Granted publication: CN113516669B (en)

Priority Applications (1)

Application Number: CN202110699195.3A (granted as CN113516669B); Priority Date: 2021-06-23; Filing Date: 2021-06-23; Title: CT image-based trachea extraction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number: CN202110699195.3A (granted as CN113516669B); Priority Date: 2021-06-23; Filing Date: 2021-06-23; Title: CT image-based trachea extraction method, device, equipment and storage medium

Publications (2)

Publication Number and Publication Date
CN113516669A, 2021-10-19
CN113516669B, 2023-04-25

Family

ID=78066007

Family Applications (1)

Application Number: CN202110699195.3A (Active; granted as CN113516669B); Priority Date: 2021-06-23; Filing Date: 2021-06-23; Title: CT image-based trachea extraction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113516669B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170262611A1 (en) * 2016-03-09 2017-09-14 The Chinese University Of Hong Kong Modeling method for orthopedic casts
CN108133478A (en) * 2018-01-11 2018-06-08 苏州润心医疗器械有限公司 A kind of method for extracting central line of coronary artery vessel
CN108633312A (en) * 2015-11-18 2018-10-09 光学实验室成像公司 X-ray image feature detection and registration system and method
CN109615636A (en) * 2017-11-03 2019-04-12 杭州依图医疗技术有限公司 Vascular tree construction method and device in lung lobe and segment segmentation of CT images
CN109658411A (en) * 2019-01-21 2019-04-19 杭州英库医疗科技有限公司 A correlation analysis based on CT image features and the prognosis of patients with non-small-cell lung cancer
CN109685787A (en) * 2018-12-21 2019-04-26 杭州依图医疗技术有限公司 Output method and device in lung lobe and segment segmentation of CT images
US20190304592A1 (en) * 2018-01-12 2019-10-03 Shenzhen Keya Medical Technology Corp. System and method for calculating vessel flow parameters based on angiography
US20200085318A1 (en) * 2016-06-02 2020-03-19 Aneuscreen Ltd. Method and system for monitoring a condition of cerebral aneurysms
US20200098124A1 (en) * 2018-09-24 2020-03-26 Beijing Curacloud Technology Co., Ltd. Prediction method for healthy radius of blood vessel path, prediction method for candidate stenosis of blood vessel path, and blood vessel stenosis degree prediction device
CN111127482A (en) * 2019-12-20 2020-05-08 广州柏视医疗科技有限公司 CT image lung trachea segmentation method and system based on deep learning
CN111861988A (en) * 2020-06-09 2020-10-30 深圳市旭东数字医学影像技术有限公司 Method and system for automatic and semi-automatic lung lobular segmentation based on bronchus
CN112107362A (en) * 2020-08-24 2020-12-22 江苏大学 Computer-assisted surgery design system for coronary heart disease
CN112330686A (en) * 2019-08-05 2021-02-05 罗雄彪 Method for segmenting and calibrating lung bronchus


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A. Garcia-Uceda et al., "Automatic airway segmentation from Computed Tomography using robust and efficient 3-D convolutional neural network", https://arxiv.org/abs/2103.16328v1, 3 March 2021, pages 1-17 *
A. Garcia-Uceda Juarez et al., "Automatic Airway Segmentation in chest CT using Convolutional Neural Networks", https://arxiv.org/abs/1808.04576, 14 August 2018, pages 1-12 *
Guohua Cheng et al., "Segmentation of the Airway Tree From Chest CT Using Tiny Atrous Convolutional Network", IEEE Access, vol. 9, 16 February 2021, pages 1-12 *
付长信, "Research on extraction methods for head and neck arteriovenous blood vessels" (头颈部动静脉血管提取方法研究) *

Also Published As

Publication number Publication date
CN113516669B (en) 2023-04-25

Similar Documents

Publication Title
US20070109299A1 (en) Surface-based characteristic path generation
CN109410188B (en) System and method for segmenting medical images
CN110956635A (en) Lung segment segmentation method, device, equipment and storage medium
CN106127819A Method and device for extracting vessel centerlines from medical images
CN112614169B (en) 2D/3D spine CT (computed tomography) level registration method based on deep learning network
CN112862824A (en) Novel coronavirus pneumonia focus detection method, system, device and storage medium
CN109300136B (en) Automatic segmentation method for organs at risk based on convolutional neural network
CN112598649B 2D/3D spine CT non-rigid registration method based on generative adversarial networks
US20210271914A1 (en) Image processing apparatus, image processing method, and program
CN106327479A (en) Apparatus and method for identifying blood vessels in angiography-assisted congenital heart disease operation
CN112561917A (en) Image segmentation method and system, electronic device and readable storage medium
CN111462071B (en) Image processing method and system
CN111340209A (en) Network model training method, image segmentation method and focus positioning method
CN114792326A (en) Surgical navigation point cloud segmentation and registration method based on structured light
CN115578320A (en) Full-automatic space registration method and system for orthopedic surgery robot
CN113706514B (en) Focus positioning method, device, equipment and storage medium based on template image
CN111080556A (en) Method, system, equipment and medium for strengthening trachea wall of CT image
Wang et al. Naviairway: a bronchiole-sensitive deep learning-based airway segmentation pipeline for planning of navigation bronchoscopy
CN113516669B (en) CT image-based trachea extraction method, device, equipment and storage medium
CN114693622B (en) Plaque erosion automatic detection system based on artificial intelligence
CN115761230A (en) Spine segmentation method based on three-dimensional image
CN113160417B (en) Multi-organ three-dimensional reconstruction control method based on urinary system
CN109872353B (en) White light data and CT data registration method based on improved iterative closest point algorithm
CN113889238A (en) Image identification method and device, electronic equipment and storage medium
CN114693698A (en) Neural network-based computer-aided lung airway segmentation method

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant