CN113516669B - CT image-based trachea extraction method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113516669B
CN113516669B (application number CN202110699195.3A)
Authority
CN
China
Prior art keywords
image
matrix
training set
bronchus
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110699195.3A
Other languages
Chinese (zh)
Other versions
CN113516669A (en)
Inventor
黎永秀
赵梦
罗利
陈超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Yingku Technology Co ltd
Original Assignee
Hubei Yingku Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Yingku Technology Co., Ltd.
Priority to CN202110699195.3A
Publication of CN113516669A
Application granted
Publication of CN113516669B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Abstract

The invention discloses a trachea extraction method, device, equipment and storage medium based on CT images. The method comprises the following steps: acquiring an original CT image training set and a corresponding semantic segmentation image training set, and matrixing them to obtain an original matrix training set and a semantic segmentation matrix training set; inputting the two matrix training sets into a 3D neural network and training the network to obtain a prediction model; matrixing a CT image to be predicted, inputting the matrixed CT image into the prediction model, and having the prediction model output a prediction matrix; and binarizing the prediction matrix, performing ossification processing on the binarized prediction matrix, and performing bifurcation detection on the resulting bronchus structure diagram to obtain a bronchus segmented tree structure. The method addresses the problem that current techniques cannot extract tracheal information from CT images well.

Description

CT image-based trachea extraction method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of CT scanning, and in particular to a CT image-based trachea extraction method, device, equipment and storage medium.
Background
Enhanced CT scanning and low-dose CT scanning are widely used and important for examination, diagnosis and pre-operative planning of the chest, and doctors diagnose lesions from the image data. Diagnosis is still largely based on two-dimensional images; although imaging workstations and image-browsing clients now provide 3D post-processing, it requires manual intervention and considerable time and effort. In particular, the density of the bronchi in chest CT images is very low, while the density of blood vessels is very high relative to the bronchi, so it is difficult to display bronchi and blood vessels well at the same time when viewing the images. Surgical planning is also difficult on two-dimensional images, especially for puncture and radiotherapy treatment plans that must avoid the bronchi and blood vessels.
Disclosure of Invention
The invention aims to overcome the above technical defects by providing a CT image-based trachea extraction method, device, equipment and storage medium, which solve the technical problem that tracheal information cannot be extracted well from CT images in the prior art.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a method for extracting a trachea based on CT images, comprising the steps of:
acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
performing matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after training by the neural network;
acquiring a CT image to be predicted, carrying out matrixing on the CT image to be predicted, inputting the matrixed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model;
and performing binarization processing on the prediction matrix, performing ossification processing on the binarized prediction matrix, and performing bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure.
Preferably, in the CT image-based trachea extraction method, the matrixing of the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set specifically includes:
performing value range adjustment processing on each image in the original CT image training set and the semantic segmentation image training set;
preprocessing each image with the value range adjusted to obtain a preprocessed original sampling sample and a semantic segmentation sampling sample;
and carrying out matrixing treatment on the original sampling sample and the semantic segmentation sampling sample to obtain an original matrix training set and a semantic segmentation matrix training set.
Preferably, in the CT image-based trachea extraction method, the preprocessing of each image after the value range adjustment specifically includes:
interpolating, downsampling and cropping each image after the value range adjustment.
Preferably, in the CT image-based trachea extraction method, the neural network adopts a 3D U-Net convolutional neural network.
Preferably, in the CT image-based trachea extraction method, the performing of binarization processing on the prediction matrix, the performing of ossification processing on the binarized prediction matrix, and the performing of bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure specifically includes:
performing binarization processing on the prediction matrix, and converting the binarized prediction matrix into a binary matrix under a world coordinate system;
performing ossification processing on the binary matrix according to preset predicted intensity parameters;
and performing bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure.
Preferably, in the CT image-based trachea extraction method, the performing of ossification processing on the binary matrix according to preset predicted intensity parameters specifically includes:
recombining the binary matrix, setting the coordinates of its first pixel as the initial image position, stacking the subsequent pixels in order according to the pixel spacing and layer thickness, extracting the isosurface of the binary matrix with an isosurface extraction method to generate a grid model, and generating a bronchus structure diagram from the grid model.
Preferably, in the CT image-based trachea extraction method, the performing of bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure specifically includes:
extracting the centerline of the ossified bronchus structure diagram, and segmenting the ossified bronchus structure diagram level by level according to the centerline to obtain a bronchus segmented tree structure.
In a second aspect, the present invention further provides a tracheal extraction device based on CT images, including:
the training set acquisition module is used for acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
the training set processing module is used for carrying out matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
the model building module is used for inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after training by the neural network;
the image processing module is used for acquiring a CT image to be predicted, carrying out matrixing on the CT image to be predicted, inputting the matrixed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model;
and the bronchus segmented tree structure building module is used for performing binarization processing on the prediction matrix, performing ossification processing on the binarized prediction matrix, and performing bifurcation detection on the ossified bronchus structure diagram to obtain the bronchus segmented tree structure.
In a third aspect, the present invention also provides a CT image-based tracheal extraction device, comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps in the CT image based tracheal extraction method as described above.
In a fourth aspect, the present invention also provides a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in a CT image based tracheal extraction method as described above.
Compared with the prior art, the CT image-based trachea extraction method, device, equipment and storage medium provided by the invention train a 3D neural network on CT images to obtain a 3D semantic segmentation model. The model performs bronchus semantic segmentation on an input CT image, and after segmentation a bronchus segmented tree structure is built so that the bronchi can be displayed in 3D. This makes the result easy for staff to review and, during surgical planning, makes it easier to design puncture and radiotherapy treatment plans that avoid the bronchi.
Drawings
FIG. 1 is a flowchart of a CT image-based tracheal extraction method according to a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of an embodiment of an original CT image of the present invention;
FIG. 3 is a schematic diagram of one embodiment of a semantically segmented image of the present invention;
FIG. 4 is a schematic representation of one embodiment of a bronchial structure obtained after the ossification process of the present invention;
FIG. 5 is a schematic representation of one embodiment of a bronchial segmented tree structure obtained after the ossification process of the present invention;
fig. 6 is a schematic diagram of a tracheal extraction device based on CT images according to a preferred embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, the method for extracting a trachea based on a CT image according to an embodiment of the present invention includes the following steps:
s100, acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set.
In this embodiment, an original CT image training set is first obtained. The training set contains a number of CT images (as shown in fig. 2) in which the positions and shapes of the bronchi have already been delineated, and the semantic segmentation image training set is generated from it. A semantic segmentation image (as shown in fig. 3) is a binary image marking the bronchi in the original CT image: in a specific implementation it uses the same 512 x 512 single-channel 8-bit matrix as the original CT image, with the confirmed (tracheal) area set to 1 and all other areas set to 0, giving a corresponding 512 x 512 single-channel 8-bit binary image. In one embodiment, the existing scans (i.e. the original CT images) are delineated and annotated using Mimics software; after annotation, MASK data and STL data are obtained and can be checked in the software. The invention uses the MASK data, which can be exported from Mimics; the exported 8-bit DICOM binary images are the semantic segmentation images corresponding to the original CT images, where un-annotated areas are 0 and annotated areas are 1. The original data are kept as 16-bit DICOM data with a range of roughly -1024 to 4096, where -1024 corresponds to air and 0 to water.
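For illustration only, the following Python sketch shows one way the original 16-bit DICOM series and the exported 8-bit mask series described above could be loaded into arrays; pydicom is an assumed dependency, and the folder layout and function names are hypothetical conventions, not part of the patent.

```python
# Hedged sketch: load a CT series and its exported 8-bit mask series into NumPy volumes.
import numpy as np
import pydicom
from pathlib import Path

def load_dicom_volume(folder: str) -> np.ndarray:
    """Stack a DICOM series into a (slices, rows, cols) array, sorted by slice position."""
    slices = [pydicom.dcmread(str(p)) for p in Path(folder).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Apply the DICOM rescale so values land in the stored CT range described above.
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    return (volume * slope + intercept).astype(np.int16)

def load_mask_volume(folder: str) -> np.ndarray:
    """Load the exported 8-bit mask series and force it to a 0/1 binary volume."""
    slices = [pydicom.dcmread(str(p)) for p in Path(folder).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    mask = np.stack([s.pixel_array for s in slices])
    return (mask > 0).astype(np.uint8)
```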
S200, performing matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set.
In this embodiment, for subsequent training, the original CT image training set and the semantic segmentation image training set need to be preprocessed, and specifically, the step S200 specifically includes:
performing value range adjustment processing on each image in the original CT image training set and the semantic segmentation image training set;
preprocessing each image with the value range adjusted to obtain a preprocessed original sampling sample and a semantic segmentation sampling sample;
and carrying out matrixing treatment on the original sampling sample and the semantic segmentation sampling sample to obtain an original matrix training set and a semantic segmentation matrix training set.
Specifically, because the span of the obtained original data is very large, the value range must be adjusted so that irrelevant value ranges, such as those of bone and muscle, are removed as far as possible. The remaining value range is then mapped uniformly onto the interval 0-255 and normalized, which reduces the amount and range of computation required by the neural network as much as possible.
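A minimal sketch of such a value-range adjustment, assuming NumPy; the clipping window of -1024 to 400 is an illustrative assumption, not a value specified in the patent.

```python
# Hedged sketch: clip CT values to a window that discards bone/muscle extremes,
# then map the window linearly onto 0-255.
import numpy as np

def adjust_value_range(volume: np.ndarray, lo: float = -1024.0, hi: float = 400.0) -> np.ndarray:
    clipped = np.clip(volume.astype(np.float32), lo, hi)
    scaled = (clipped - lo) / (hi - lo) * 255.0   # uniform mapping onto 0-255
    return scaled.astype(np.uint8)
```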
Further, because of the limited computing capability of the computer hardware, the 3D data must be preprocessed. Specifically, preprocessing each image after the value range adjustment means interpolating, downsampling and cropping it. Cutting the data into smaller matrix blocks before feeding them into the neural network makes training easier. The original CT images are interpolated with a 3D bilinear method; after bilinear interpolation of a semantic segmentation image, binarization must be applied again so that un-annotated areas are 0 and annotated areas are 1. Once this processing is complete, the original sampling samples and semantic segmentation sampling samples can be matrixed to obtain an original matrix training set and a semantic segmentation matrix training set in one-to-one correspondence.
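The following is a hedged sketch of this preprocessing step, assuming scipy.ndimage for the 3D interpolation; the zoom factor and block size are illustrative assumptions rather than values from the patent.

```python
# Hedged sketch: resample both volumes, re-binarize the interpolated mask,
# and cut the result into fixed-size blocks for training.
import numpy as np
from scipy.ndimage import zoom

def resample_and_crop(ct: np.ndarray, mask: np.ndarray,
                      factor: float = 0.5, block: int = 64):
    ct_small = zoom(ct.astype(np.float32), factor, order=1)     # 3D linear interpolation
    mask_small = zoom(mask.astype(np.float32), factor, order=1)
    mask_small = (mask_small >= 0.5).astype(np.uint8)           # re-binarize after interpolation
    blocks = []
    d, h, w = ct_small.shape
    for z in range(0, d - block + 1, block):
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                blocks.append((ct_small[z:z+block, y:y+block, x:x+block],
                               mask_small[z:z+block, y:y+block, x:x+block]))
    return blocks
```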
S300, inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and training by the neural network to obtain a prediction model.
In this embodiment, after the training set is prepared, the neural network model can be trained. A prediction model is obtained after training on no fewer than 200 sets of data for 500,000 iterations. The prediction model processes an input CT image to obtain a prediction matrix, and the prediction matrix reflects the shape and position of the bronchi.
Specifically, the neural network adopts a 3D U-Net convolutional neural network. In this embodiment, a dimension is added to the conventional 2D network, which addresses the limited recognition rate within a single image slice. For a semantic segmentation target with a spatial morphological structure, adding this spatial dimension greatly improves recognition accuracy, because the network can recognize and segment according to the spatial structure.
3D U-Net also uses data augmentation, consisting mainly of rotation, scaling and conversion to grayscale. A smooth dense deformation field is applied to both the training data and the ground-truth labels: random vectors are sampled from a normal distribution with standard deviation 4 on a grid with 32-voxel spacing in each direction, and B-spline interpolation is then applied; intuitively, this finds a similar shape within the original shape as an approximation. The data are then trained with a weighted cross-entropy loss that reduces the background weight and increases the weight of the labeled image data, so that the loss is balanced between the thin airway voxels and the background voxels.
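The patent does not name a deep-learning framework; as an assumption, the weighted cross-entropy idea above could be expressed in PyTorch roughly as follows, with illustrative class weights.

```python
# Hedged sketch: weighted cross-entropy that down-weights background voxels and
# up-weights the sparse bronchus voxels so the loss is balanced.
import torch
import torch.nn as nn

def make_weighted_ce(background_weight: float = 0.2,
                     airway_weight: float = 5.0) -> nn.CrossEntropyLoss:
    # Class 0 = background, class 1 = bronchus; the weights are illustrative.
    weights = torch.tensor([background_weight, airway_weight])
    return nn.CrossEntropyLoss(weight=weights)

# Usage with a 3D U-Net producing two-channel logits of shape (N, 2, D, H, W)
# and integer labels of shape (N, D, H, W):
#   loss = make_weighted_ce()(logits, labels)
```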
S400, acquiring a CT image to be predicted, carrying out matrixing on the CT image to be predicted, inputting the matrixed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model.
In this embodiment, after a CT image to be predicted is obtained, it is preprocessed in the same way as the training images: the value range is adjusted, then interpolation, downsampling and cropping are applied, and the processed image is matrixed to obtain an input matrix of the same form as the original matrices in the training set. The input matrix is then fed into the prediction model for prediction; after prediction, points whose confidence is lower than 99.5% are filtered out, and the complete prediction matrix is obtained.
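A minimal sketch of this prediction-and-filtering step, assuming a PyTorch model with a two-channel output; the tensor shapes and the softmax-based confidence are assumptions, and only the 99.5% threshold comes from the description above.

```python
# Hedged sketch: run a preprocessed block through the trained model and keep only
# voxels whose predicted bronchus probability is at least 99.5%.
import torch

@torch.no_grad()
def predict_block(model: torch.nn.Module, block: torch.Tensor,
                  threshold: float = 0.995) -> torch.Tensor:
    """block: (1, 1, D, H, W) float tensor; returns a 0/1 prediction of shape (D, H, W)."""
    model.eval()
    logits = model(block)                       # (1, 2, D, H, W) two-class logits
    prob = torch.softmax(logits, dim=1)[0, 1]   # per-voxel bronchus probability
    return (prob >= threshold).to(torch.uint8)
```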
S500, performing binarization processing on the prediction matrix, performing ossification processing on the binarized prediction matrix, and performing bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure.
In this embodiment, once the prediction matrix is obtained, the bronchus segmented tree structure can be drawn from it to obtain a 3D representation of the bronchial structure, which is convenient for staff to review and makes it easy to avoid the bronchi during surgical planning. Specifically, step S500 includes:
performing binarization processing on the prediction matrix, and converting the binarized prediction matrix into a binary matrix under a world coordinate system;
performing ossification processing on the binary matrix according to preset predicted intensity parameters;
and performing bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure.
In this embodiment, the obtained prediction matrix is binarized once, with the non-target areas in the matrix set to 0 and the target areas set to 1, and the coordinate system is then converted to the world coordinate system. The matrix is reorganized in the world coordinate system and ossification processing is performed based on the bronchial tree structure (the image obtained after ossification is shown in fig. 4); it is then further optimized and adjusted according to the preset predicted intensity parameters to refine the bronchial structure, and bifurcation detection is performed on the adjusted bronchus structure diagram to obtain the bronchus segmented tree structure (shown in fig. 5). Specifically, the ossification processing of the binary matrix according to the preset predicted intensity parameters includes:
and recombining the binary matrix, setting the coordinate of the first pixel of the binary matrix as an initial image position, sequentially stacking the following pixels according to the pixel spacing and the layer thickness, extracting the isosurface of the binary matrix by adopting an isosurface extraction method, generating a grid model, and generating a bronchus structure diagram according to the grid model.
In this embodiment, the preset predicted intensity parameters include at least the image position, the image orientation, the patient position, the distance between images and the pixel spacing. The image position gives the spatial coordinates (x, y, z) of the first pixel in the upper-left corner of the image, i.e. the coordinates of the first pixel of the binary matrix. The image orientation gives the direction cosines of the first row and first column of the image relative to the patient; the coordinate axes are defined from the patient's orientation (the X-axis points to the patient's left side, the Y-axis to the patient's back, and the Z-axis to the patient's head). The patient position describes the position of the patient relative to the imaging device, such as CT or MR; for example, HFP means head first, prone, and HFS means head first, supine. The distance between images is the layer thickness, and the pixel spacing is the physical distance represented by each pixel.
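For illustration, these geometry parameters correspond to standard DICOM attributes and could be collected with pydicom as sketched below; the GeometryParams container and function name are hypothetical helpers, not part of the patent.

```python
# Hedged sketch: read the geometry parameters listed above from one DICOM slice.
from dataclasses import dataclass
import pydicom

@dataclass
class GeometryParams:
    image_position: tuple      # (x, y, z) of the first (upper-left) pixel
    image_orientation: tuple   # direction cosines of the first row and column
    patient_position: str      # e.g. "HFS" (head first, supine), "HFP" (head first, prone)
    slice_spacing: float       # distance between images (layer thickness)
    pixel_spacing: tuple       # in-plane spacing (row, column) in mm

def read_geometry(path: str) -> GeometryParams:
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    return GeometryParams(
        image_position=tuple(float(v) for v in ds.ImagePositionPatient),
        image_orientation=tuple(float(v) for v in ds.ImageOrientationPatient),
        patient_position=str(getattr(ds, "PatientPosition", "")),
        slice_spacing=float(getattr(ds, "SpacingBetweenSlices",
                                    getattr(ds, "SliceThickness", 0.0))),
        pixel_spacing=tuple(float(v) for v in ds.PixelSpacing),
    )
```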
In the ossification processing, the resulting matrix is a 512 x 512 x n three-dimensional matrix. The coordinates of its first pixel are set as the initial image position, and the subsequent pixels are stacked in order according to the pixel spacing and layer thickness. The isosurface is extracted from the bronchus data (the binary matrix) with the Marching Cubes algorithm to generate grid model data, and the normal vectors are computed. Delaunay triangulation is then performed on all points of the grid and a three-dimensional Voronoi diagram is generated.
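A hedged sketch of the isosurface extraction step; the patent names the Marching Cubes algorithm but not a library, so the use of scikit-image here is an assumption.

```python
# Hedged sketch: turn the binary airway volume into a triangle mesh with Marching Cubes.
import numpy as np
from skimage.measure import marching_cubes

def binary_volume_to_mesh(binary: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """binary: (D, H, W) 0/1 volume; spacing = (layer thickness, row, col) in mm."""
    verts, faces, normals, _ = marching_cubes(binary.astype(np.float32),
                                              level=0.5, spacing=spacing)
    return verts, faces, normals   # triangle mesh with per-vertex normals
```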
Further, after the ossified bronchus structure diagram is obtained, bifurcation detection is performed on it: starting from the first level, branches are followed to the second level, and so on, until the bronchus segmented tree structure is finally obtained. According to actual demands, the total number of levels is not more than six. Specifically, the performing of bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure specifically includes:
extracting the central line of the bronchus structure diagram after ossification, and segmenting the bronchus structure diagram after ossification step by step according to the central line so as to obtain a bronchus segmented tree structure.
Specifically, the PolyLine of the centerline is obtained with vmtkcenterlines, and the set of IDs of all points on the PolyLine is obtained. The point IDs that occur two or more times are then searched for in the point set according to the PolyLine composition; these points are the intersections, and all points appear in the order of the lines. The physiological structure dictates that a bronchus can only branch one-into-two or one-into-three, so the points can be grouped by line according to the intersections; after grouping, any line appears only once in the point set at its end and multiple times at an intersection. Then, using the image position information in the DICOM sequence (a chest CT acquired in the standard way is scanned starting from the neck), the marked points nearest to the position of the first image (the two ends of a line segment) are obtained. The two endpoints of all line segments obtained above are traversed, and the point that appears only once and is closest to the plane of the first image is taken as the starting point. Finally, following a basic continuous-point path-finding algorithm, one level of bifurcation points is recorded at a time and the search recurses until a point with only one occurrence is found, at which point the recursion ends; no more than six recursions (six bronchial levels) are performed, thereby completing the establishment of the bronchus segmented tree structure.
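The following Python sketch illustrates the bifurcation grouping and recursive tree building described above, assuming the centerline polylines are already available as lists of point IDs (for example from vmtkcenterlines); the data structures and the z-based distance to the first image plane are illustrative assumptions, not the patent's exact types.

```python
# Hedged sketch: points shared by two or more polylines are bifurcations, the endpoint
# nearest the first image plane is the root, and the tree is built recursively (max 6 levels).
from collections import Counter
from typing import Dict, List, Tuple

def build_segment_tree(polylines: List[List[int]],
                       points: Dict[int, Tuple[float, float, float]],
                       first_plane_z: float,
                       max_levels: int = 6):
    counts = Counter(pid for line in polylines for pid in line)
    branch_ids = {pid for pid, c in counts.items() if c >= 2}   # intersection points

    # Endpoints that occur only once are line ends; the one nearest the first image
    # plane (smallest |z - first_plane_z|) is taken as the tracheal starting point.
    endpoints = [line[i] for line in polylines for i in (0, -1) if counts[line[i]] == 1]
    root = min(endpoints, key=lambda pid: abs(points[pid][2] - first_plane_z))

    visited = set()

    def grow(point_id: int, level: int) -> dict:
        visited.add(point_id)
        node = {"point": point_id, "level": level,
                "is_bifurcation": point_id in branch_ids, "children": []}
        if level >= max_levels:
            return node
        for line in polylines:
            if point_id in (line[0], line[-1]):
                other = line[-1] if line[0] == point_id else line[0]
                if other not in visited:
                    node["children"].append(grow(other, level + 1))
        return node

    return grow(root, 1)
```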
Referring to fig. 6, based on the above CT image-based trachea extraction method, the present invention further provides a CT image-based trachea extraction device 600, which includes:
the training set obtaining module 610 is configured to obtain an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
the training set processing module 620 is configured to perform matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
the model building module 630 is configured to input the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtain a prediction model after training by the neural network;
the image processing module 640 is configured to obtain a CT image to be predicted, matrix-process the CT image to be predicted, input the matrix-processed CT image to be predicted into the prediction model, and output a prediction matrix from the prediction model;
the bronchus segmented tree structure building module 650 is configured to perform binarization processing on the prediction matrix, perform ossification processing on the binarized prediction matrix, and perform bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure.
Since the method for extracting the trachea based on the CT image has been described in detail above, the detailed description thereof will be omitted.
Based on the CT image-based trachea extraction method, the invention also correspondingly provides a CT image-based trachea extraction device, which comprises: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps in the CT image-based tracheal extraction method according to the embodiments described above.
Since the method for extracting the trachea based on the CT image has been described in detail above, the detailed description thereof will be omitted.
Based on the above-mentioned CT image-based tracheal extraction method, the present invention further provides a corresponding computer-readable storage medium, where one or more programs are stored, and the one or more programs may be executed by one or more processors, so as to implement the steps in the CT image-based tracheal extraction method according to the above embodiments.
Since the method for extracting the trachea based on the CT image has been described in detail above, the detailed description thereof will be omitted.
In summary, the CT image-based trachea extraction method, device, equipment and storage medium provided by the invention train a 3D neural network on CT images to obtain a 3D semantic segmentation model. The model performs bronchus semantic segmentation on an input CT image, and after segmentation a bronchus segmented tree structure is built so that the bronchi can be displayed in 3D, which is convenient for staff to review and makes it easier, during surgical planning, to design puncture and radiotherapy treatment plans that avoid the bronchi.
The above-described embodiments of the present invention do not limit the scope of the present invention. Any other corresponding changes and modifications made in accordance with the technical idea of the present invention shall be included in the scope of the claims of the present invention.

Claims (7)

1. A trachea extraction method based on CT images, characterized by comprising the following steps:
acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
performing matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after training by the neural network;
acquiring a CT image to be predicted, carrying out matrixing on the CT image to be predicted, inputting the matrixed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model;
performing binarization processing on the prediction matrix, performing ossification processing on the binarized prediction matrix, and performing bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure;
wherein the performing of binarization processing on the prediction matrix, the performing of ossification processing on the binarized prediction matrix, and the performing of bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure specifically comprises:
performing binarization processing on the prediction matrix, and converting the binarized prediction matrix into a binary matrix under a world coordinate system;
performing ossification processing on the binary matrix according to preset predicted intensity parameters;
performing bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure;
the ossification processing of the binary matrix according to the preset predicted intensity parameter specifically comprises the following steps:
recombining the binary matrix, setting the coordinates of its first pixel as the initial image position, stacking the subsequent pixels in order according to the pixel spacing and layer thickness, extracting the isosurface of the binary matrix with an isosurface extraction method to generate a grid model, and generating a bronchus structure diagram from the grid model;
the performing of bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure specifically comprises:
extracting the centerline of the ossified bronchus structure diagram, and segmenting the ossified bronchus structure diagram level by level according to the centerline to obtain a bronchus segmented tree structure;
when the bronchus segmented tree is built, the PolyLine of the centerline is first obtained with vmtkcenterlines and the set of all points on the PolyLine is obtained; after the points that appear two or more times in the point set are found, the points are grouped by line according to these intersections; after grouping, using the image position information in the DICOM sequence of the chest CT, which is scanned starting from the neck, the marked point nearest to the position of the first image is obtained; all line segments are then traversed to obtain their two endpoints, and the point that appears only once and is nearest to the plane of the first image is taken as the starting point; finally, following a continuous-point path-finding algorithm, one level of bifurcation points is recorded at a time and the search recurses until a point that appears only once is found, thereby completing the establishment of the bronchus segmented tree structure, wherein the number of recursions does not exceed 6.
2. The method for extracting a trachea based on CT images according to claim 1, wherein the matrixing of the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set specifically comprises:
performing value range adjustment processing on each image in the original CT image training set and the semantic segmentation image training set;
preprocessing each image with the value range adjusted to obtain a preprocessed original sampling sample and a semantic segmentation sampling sample;
and carrying out matrixing treatment on the original sampling sample and the semantic segmentation sampling sample to obtain an original matrix training set and a semantic segmentation matrix training set.
3. The method for extracting a trachea based on CT images according to claim 2, wherein the preprocessing of each image after the adjustment of the value range specifically comprises:
and carrying out interpolation, downsampling and cutting on each image after the value range adjustment.
4. The CT image-based tracheal extraction method of claim 1, wherein the neural network is a 3D U-Net convolutional neural network.
5. A tracheal extraction device based on CT images, comprising:
the training set acquisition module is used for acquiring an original CT image training set and a semantic segmentation image training set corresponding to the original CT image training set;
the training set processing module is used for carrying out matrixing processing on the original CT image training set and the semantic segmentation image training set to obtain an original matrix training set and a semantic segmentation matrix training set;
the model building module is used for inputting the original matrix training set and the semantic segmentation matrix training set into a preset 3D neural network, and obtaining a prediction model after training by the neural network;
the image processing module is used for acquiring a CT image to be predicted, carrying out matrixing on the CT image to be predicted, inputting the matrixed CT image to be predicted into the prediction model, and outputting a prediction matrix by the prediction model;
the bronchial segment tree structure building module is used for carrying out binarization processing on the prediction matrix, carrying out ossification processing on the prediction matrix after the binarization processing, and carrying out crotch structure detection on a bronchial structure chart after the ossification processing so as to obtain a bronchial segment tree structure;
the method specifically comprises the steps of performing binarization processing on the prediction matrix, performing ossification processing on the binarized prediction matrix, and performing crotch structure detection on a bronchus structure diagram after ossification processing to obtain a bronchus segmented tree structure, wherein the method specifically comprises the following steps:
performing binarization processing on the prediction matrix, and converting the binarized prediction matrix into a binary matrix under a world coordinate system;
performing ossification processing on the binary matrix according to preset predicted intensity parameters;
performing bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure;
the ossification processing of the binary matrix according to the preset predicted intensity parameter specifically comprises the following steps:
recombining the binary matrix, setting the coordinates of its first pixel as the initial image position, stacking the subsequent pixels in order according to the pixel spacing and layer thickness, extracting the isosurface of the binary matrix with an isosurface extraction method to generate a grid model, and generating a bronchus structure diagram from the grid model;
the performing of bifurcation detection on the ossified bronchus structure diagram to obtain a bronchus segmented tree structure specifically comprises:
extracting the centerline of the ossified bronchus structure diagram, and segmenting the ossified bronchus structure diagram level by level according to the centerline to obtain a bronchus segmented tree structure;
when the bronchus segmented tree is built, the PolyLine of the centerline is first obtained with vmtkcenterlines and the set of all points on the PolyLine is obtained; after the points that appear two or more times in the point set are found, the points are grouped by line according to these intersections; after grouping, using the image position information in the DICOM sequence of the chest CT, which is scanned starting from the neck, the marked point nearest to the position of the first image is obtained; all line segments are then traversed to obtain their two endpoints, and the point that appears only once and is nearest to the plane of the first image is taken as the starting point; finally, following a continuous-point path-finding algorithm, one level of bifurcation points is recorded at a time and the search recurses until a point that appears only once is found, thereby completing the establishment of the bronchus segmented tree structure, wherein the number of recursions does not exceed 6.
6. A CT image-based tracheal extraction device, comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps of the CT image based tracheal extraction method as claimed in any one of claims 1-4.
7. A computer readable storage medium storing one or more programs executable by one or more processors to perform the steps of the CT image based tracheal extraction method of any of claims 1-4.
CN202110699195.3A 2021-06-23 2021-06-23 CT image-based trachea extraction method, device, equipment and storage medium Active CN113516669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110699195.3A CN113516669B (en) 2021-06-23 2021-06-23 CT image-based trachea extraction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110699195.3A CN113516669B (en) 2021-06-23 2021-06-23 CT image-based trachea extraction method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113516669A CN113516669A (en) 2021-10-19
CN113516669B (en) 2023-04-25

Family

ID=78066007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110699195.3A Active CN113516669B (en) 2021-06-23 2021-06-23 CT image-based trachea extraction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113516669B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9972406B2 (en) * 2016-03-09 2018-05-15 The Chinese University Of Hong Kong Modeling method for orthopedic casts
IL246009B (en) * 2016-06-02 2018-11-29 Ezer Haim Method and system for monitoring condition of cerebral aneurysms
CN109584252B (en) * 2017-11-03 2020-08-14 杭州依图医疗技术有限公司 Lung lobe segment segmentation method and device of CT image based on deep learning
CN108133478B (en) * 2018-01-11 2021-11-23 苏州润迈德医疗科技有限公司 Method for extracting coronary artery blood vessel central line
US10580526B2 (en) * 2018-01-12 2020-03-03 Shenzhen Keya Medical Technology Corporation System and method for calculating vessel flow parameters based on angiography
CN109979593B (en) * 2018-09-24 2021-04-13 北京科亚方舟医疗科技股份有限公司 Method for predicting healthy radius of blood vessel route, method for predicting stenosis candidate of blood vessel route, and device for predicting degree of blood vessel stenosis
CN109685787A (en) * 2018-12-21 2019-04-26 杭州依图医疗技术有限公司 Output method, device in the lobe of the lung section segmentation of CT images
CN112330686A (en) * 2019-08-05 2021-02-05 罗雄彪 Method for segmenting and calibrating lung bronchus
CN111127482B (en) * 2019-12-20 2023-06-30 广州柏视医疗科技有限公司 CT image lung and trachea segmentation method and system based on deep learning
CN111861988A (en) * 2020-06-09 2020-10-30 深圳市旭东数字医学影像技术有限公司 Method and system for automatic and semi-automatic lung lobular segmentation based on bronchus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108633312A (en) * 2015-11-18 2018-10-09 光学实验室成像公司 X-ray image feature detects and registration arrangement and method
CN109658411A (en) * 2019-01-21 2019-04-19 杭州英库医疗科技有限公司 A kind of correlation analysis based on CT images feature Yu Patients with Non-small-cell Lung prognosis situation
CN112107362A (en) * 2020-08-24 2020-12-22 江苏大学 Computer-assisted surgery design system for coronary heart disease

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fu Changxin. Research on extraction methods for head and neck arterial and venous vessels. China Master's Theses Full-text Database, Medicine and Health Sciences series, 2019, full text. *

Also Published As

Publication number Publication date
CN113516669A (en) 2021-10-19

Similar Documents

Publication Publication Date Title
JP6514325B2 (en) System and method for segmenting medical images based on anatomical landmark-based features
US20070109299A1 (en) Surface-based characteristic path generation
CN110689564B (en) Dental arch line drawing method based on super-pixel clustering
CN112614169B (en) 2D/3D spine CT (computed tomography) level registration method based on deep learning network
CN107527339B (en) Magnetic resonance scanning method, device and system
CN107274406A (en) A kind of method and device of detection sensitizing range
CN111429502A (en) Method and system for generating a centerline of an object and computer readable medium
CN111462071B (en) Image processing method and system
CN113706514B (en) Focus positioning method, device, equipment and storage medium based on template image
CN114612318A (en) Three-dimensional modeling method, system and equipment based on cultural relic CT image contour line
CN114792326A (en) Surgical navigation point cloud segmentation and registration method based on structured light
CN108597589B (en) Model generation method, target detection method and medical imaging system
CN113256670A (en) Image processing method and device, and network model training method and device
CN105787978A (en) Automatic medical image interlayer sketching method, device and system
CN113516669B (en) CT image-based trachea extraction method, device, equipment and storage medium
CN116725563B (en) Eyeball salience measuring device
CN111105476A (en) Three-dimensional reconstruction method for CT image based on Marching Cubes
CN115969400A (en) Apparatus for measuring area of eyeball protrusion
CN113160417B (en) Multi-organ three-dimensional reconstruction control method based on urinary system
CN113935889A (en) Method, system and medium for automatic 2D/3D coronary artery registration
CN113889238A (en) Image identification method and device, electronic equipment and storage medium
Mu et al. Construction of anatomically accurate finite element models of the human hand and a rat kidney
CN116958217B (en) MRI and CT multi-mode 3D automatic registration method and device
CN113379782B (en) Tubular structure extraction method, tubular structure extraction device, tubular structure extraction equipment and storage medium
CN111429343B (en) Rapid detection method for branch point in three-dimensional digital image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant