CN113112609A - Navigation method and system for lung biopsy bronchoscope
- Publication number
- CN113112609A (application CN202110274619.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- nodule
- lung
- data
- bronchoscope
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
Abstract
The invention relates to a navigation method and system for a lung biopsy bronchoscope, wherein the method comprises the following steps: acquiring CT image data; inputting the CT image data into a trained deep-learning lung nodule recognition neural network model and outputting lung nodule data; reconstructing an endoscope image space three-dimensional model from the CT image data by using a virtual endoscope technology, and marking lesions in the model by combining the lung nodule data; selecting a lesion marker in the model and planning the surgical path; and mapping the pose of the bronchoscope catheter from the actual physical coordinate system into the endoscope image space, and registering the actual physical position space to the image space. Compared with the prior art, the invention integrates deep-learning nodule diagnosis, CT reconstruction and virtual endoscope navigation, and offers accurate positioning and high diagnostic efficiency.
Description
Technical Field
The invention relates to the technical field of bronchoscopy, in particular to a navigation method and a navigation system for a lung biopsy bronchoscope.
Background
Bronchogenic carcinoma (hereinafter, lung cancer) is the malignant tumor with the highest morbidity and mortality worldwide, and both continue to rise year by year. Surgery is still the most effective treatment for lung cancer, but because the clinical manifestations of early lung cancer are nonspecific, most patients have already missed the window for surgery by the time lung cancer is diagnosed, and the overall effect of current chemotherapy and radiotherapy is not ideal. Early diagnosis and surgery are the key to improving the survival of lung cancer patients, so techniques for accurately identifying and diagnosing lung cancer at an early stage are very important.
The conventional method for early lung cancer screening is percutaneous lung puncture or needle aspiration to obtain a lesion sample for biopsy. Although this method has high accuracy, it is limited by the position and size of the lesion, is unsuitable for small peripheral lung nodules, and may cause complications such as pneumothorax, so intraluminal approaches have become a focus of attention. Meanwhile, conventional bronchoscopy can only see lesions inside the bronchial lumen for histological and cytological examination; lesions outside the lumen cannot be observed and can only be diagnosed by blind biopsy and brushing, and the sampling is often unsatisfactory. The academic community has therefore been actively exploring smaller, more flexible and more maneuverable pulmonary bronchoscopes.
A surgical navigation system accurately maps a patient's preoperative or intraoperative image data onto the patient's anatomy on the operating table, tracks the surgical instrument during the operation, and displays its position on the patient's images in real time in the form of a virtual probe, so that the surgeon clearly knows the position of the instrument relative to the patient's anatomy, making the operation faster, more accurate and safer.
At present, there is no research or proposal on applying surgical navigation systems to interventional bronchoscopy of the small bronchi.
Disclosure of Invention
The invention aims to provide a navigation method and system for a lung biopsy bronchoscope that realize surgical navigation of the bronchoscope.
The purpose of the invention can be realized by the following technical scheme:
a method of navigating towards a lung biopsy bronchoscope, comprising the steps of:
s1, acquiring CT image data;
s2, inputting the CT image data into the trained deep learning lung nodule recognition neural network model, and outputting to obtain lung nodule data;
s3, performing endoscope image space three-dimensional model reconstruction on the CT image data by using a virtual endoscope technology, and marking lesions in the endoscope image space three-dimensional model by combining the lung nodule data;
s4, selecting a lesion marker in the endoscope image space three-dimensional model, and planning the surgical path;
s5, mapping the pose of the bronchoscope catheter from the actual physical coordinate system into the endoscope image space through an electromagnetic positioning system and a pose sensor mounted on the bronchoscope, and registering the actual physical position space to the image space;
s6, adjusting the parameter setting of the virtual camera in real time according to the position change of the actual bronchoscope catheter by the virtual endoscope view, carrying out image transformation to synchronize the virtual endoscope view and the bronchoscope catheter image, and then displaying the three-dimensional pose of the bronchoscope catheter in the endoscope image space in real time in the operation process;
and S7, controlling the bronchoscope catheter to move according to the real-time displayed three-dimensional pose of the bronchoscope catheter in the endoscope image space, and sending the bronchoscope catheter to the designated position according to the operation path.
Further, the deep learning pulmonary nodule identification model comprises a nodule detection network model and a nodule classification network model, wherein the nodule detection network model adopts the DeepLung-improved Faster R-CNN network structure, uses three-dimensional convolution kernels, takes a dual-path network as the basic structure, and adopts U-Net's contraction-expansion structure, and is used for outputting the position and size of each candidate nodule, the probability of its being a lung nodule, and semantic information; the nodule classification network model is used to classify the candidate nodules as benign or malignant.
Further, the following steps are performed in the nodule detection network model: inputting an image block cut from the three-dimensional CT image of the CT image data; before the first maximum pooling, first using two convolution layers to generate features and then using dual-path blocks in a contraction subnet to further extract features; the subsequent expansion subnet consists of deconvolution and dual-path blocks, and the features of each layer of the contraction path are superposed with the features of the expansion path during the up-sampling of the expansion path.
Further, the nodule classification network model executes the following steps:
a1, cutting the CT image data by taking the lung nodule position output by the nodule detection network model as the center to obtain a cropped data block, wherein the cropping size is the nodule size output by the nodule detection network model; when the nodule size is an odd number, 1 is added to it, and if it is an even number, it is kept unchanged;
a2, expanding the semantic labels output by the nodule detection network model, namely performing primary up-sampling, wherein the expansion mode is replication;
a3, aligning the center of the up-sampled semantic label with the center of the cropped data block and intercepting data according to the nodule size; then, the values in the cropped data block corresponding to regions marked as non-nodule in the semantic label block are set to zero, and the positions corresponding to the nodule region are kept unchanged;
a4, performing spatial pyramid pooling on the cropped data block processed in step A3, with the pooling views set to 1 × 1, 2 × 2, 3 × 3 and 4 × 4, so as to obtain output features of 30 × 30 × 30;
a5, inputting the features obtained in step A4 into a 3D DPN to further extract features; after all the convolutional and pooling layers, a 2560-dimensional feature vector is obtained;
and A6, reducing the feature vector with a fully connected network to obtain the benign/malignant classification result of the lung nodule.
Further, the step S1 includes:
s11, carrying out lung image substantial extraction operation on the initial CT image data to obtain an intermediate data set;
s12, preprocessing the intermediate data set to obtain the final CT image data: firstly, reading the original image of the dicom file and extracting the pixel values; then normalizing the extracted pixel values, where the normalization formula is as follows:
where p is the pixel value of the lung image and p_normalization is the normalized value; then performing random rotation, translation, scaling and flipping on the original image for data enhancement; and finally, reading the pixel-spacing tag of the dicom image to determine the pixel spacing, and obtaining an image with uniform pixel spacing through scale transformation and cubic spline interpolation, so that each pixel represents 1 mm × 1 mm of lung tissue, namely the final CT image data.
Further, the training data set of the deep learning pulmonary nodule identification model is obtained as follows:
b1, acquiring a public pulmonary nodule data set;
b2, carrying out image expansion on the pulmonary nodule data set by utilizing a CT-GAN algorithm to obtain an initial data set;
b3, carrying out lung image substantial extraction operation on the initial data set to obtain an intermediate data set;
b4, preprocessing the intermediate data set to obtain a training data set: firstly, reading the original image of the dicom file and extracting the pixel values; then normalizing the extracted image pixel values, where the normalization formula is as follows:
where p is the pixel value of a lung image in the pulmonary nodule data set and p_normalization is the normalized value; then performing random rotation, translation, scaling and flipping on the original image for data enhancement; and finally, reading the pixel-spacing tag of the dicom image to determine the pixel spacing, and obtaining an image with uniform pixel spacing through scale transformation and cubic spline interpolation, where each pixel represents 1 mm × 1 mm of lung tissue, namely the training data set.
Further, the lung image parenchyma extraction operation includes: binarizing the image of the CT image data; clearing the outer border other than the lung parenchyma to be processed; labelling the connected regions in the image and taking the two largest connected regions as the lung lobes; performing an erosion operation on the image; filling tiny cavities in the lung lobes with a closing operation; filling the holes in the lung parenchyma through region filling to obtain a complete lung parenchyma mask; and multiplying the lung parenchyma mask by the original image to obtain the final segmented lung parenchyma image, namely the intermediate data set.
Further, the CT-GAN algorithm specifically performs the following steps:
b21, reading the dicom file and the label file of the pulmonary nodule data set, positioning the target nodule, and processing according to a size of 16 × 16 to obtain a mask;
b22, inputting the mask data into the trained CT-GAN network, and injecting nodules at the designated positions;
b23, scaling and repairing the injected nodule to make the injected nodule closer to a real nodule;
and B24, writing the mask data after the nodes are injected into the dicom file to obtain an initial data set.
A navigation system for a lung biopsy bronchoscope, comprising:
the deep learning pulmonary nodule identification module is used for acquiring CT image data, inputting the CT image data into a trained deep learning pulmonary nodule identification neural network model, and outputting to obtain pulmonary nodule data;
the CT virtual endoscope module is used for reconstructing an endoscope image space three-dimensional model from the CT image data by using the virtual endoscope technology and marking lesions in the endoscope image space three-dimensional model by combining the pulmonary nodule data;
the operation navigation registration module is used for selecting a lesion marker in the endoscope image space three-dimensional model and planning the surgical path; mapping the pose of the bronchoscope catheter from the actual physical coordinate system into the endoscope image space through an electromagnetic positioning system and a pose sensor mounted on the bronchoscope, and registering the actual physical position space to the image space; the virtual endoscope view adjusts the parameter settings of the virtual camera in real time as the actual bronchoscope catheter position changes and performs image transformation to synchronize the virtual endoscope view with the bronchoscope catheter image, and the three-dimensional pose of the bronchoscope catheter in the endoscope image space is then displayed in real time during the operation.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention integrates deep learning nodule diagnosis, CT reconstruction and virtual endoscope navigation, and is oriented to a navigation method and a system of a novel lung biopsy bronchoscope, so that the positioning is accurate and the diagnosis efficiency is high.
2. A doctor needs to process a large amount of CT data every day to screen for pulmonary nodules, and missed detections easily occur due to fatigue or insufficient experience; deep-learning pulmonary nodule diagnosis can assist diagnosis well and reduce diagnosis time and workload, and examination and labeling are performed in a three-dimensional model, which is more intuitive.
3. According to the invention, the data fed back by the head sensor of the bronchoscope catheter is displayed in real time on the reconstructed three-dimensional bronchial image during the operation, so that the bronchoscope can be guided directly and rapidly into the target bronchus, shortening the examination time and causing the patient little pain; a second bronchoscopy can be avoided or the number of bronchoscopies reduced; variant bronchi can be identified easily and the lesion site reached smoothly; X-ray-guided bronchoscopic biopsy can be partially replaced, reducing radiation; and blind puncture is avoided, reducing the incidence of complications.
4. In view of the fact that the existing high-quality pulmonary nodule data set is small, the data volume is small, and a deep learning network model needs massive data set training, the method introduces a pulmonary nodule image generation algorithm CT-GAN based on a condition generation network, the network can learn the mapping relation between images, and the pulmonary nodule is added at a specified position by tampering original CT image data to obtain approximately real medical image data, so that the problem of insufficient positive samples is solved to a certain extent, and the detection precision of the pulmonary nodule is improved.
5. At present, many lung deep-learning projects depend directly on the masks provided by a specific public data set (such as LUNA16) and are not suitable for actual, mask-free medical data; the invention adds lung parenchyma extraction preprocessing to the deep learning lung identification module, bringing the method closer to an actual medical scene.
6. To simplify operation and improve training speed, most existing lung nodule identification networks adopt a fixed input size, but it is difficult to cover the variety of nodules with any single size; the invention therefore uses spatial pyramid pooling so that inputs of different sizes yield fixed-length features.
Drawings
FIG. 1: overall flow diagram of the invention.
FIG. 2: nodule detection network structure.
FIG. 3: nodule classification network structure.
FIG. 4: training process of the deep learning pulmonary nodule neural network model.
FIG. 5: general flow of three-dimensional reconstruction from CT data.
FIG. 6: flow of the threshold leakage judgment.
FIG. 7: flow of the neighborhood growth comparison method.
FIG. 8: flow of the pruning algorithm.
FIG. 9: overview of the registration module.
FIG. 10: composition of the electromagnetic positioning system used in the invention.
FIG. 11: navigation image display.
FIG. 12: bending of the wire-driven catheter.
FIG. 13: relative position of the sensor and the catheter.
FIG. 14: simulation calculation diagram.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
As shown in fig. 1, the present embodiment provides a navigation method for a novel lung biopsy bronchoscope, which includes the following steps:
step S1, acquiring CT image data:
step S11, carrying out lung image substantial extraction operation on the initial CT image data to obtain an intermediate data set;
step S12, preprocessing the intermediate data set to obtain the final CT image data: firstly, reading the original image of the dicom file and extracting the pixel values; then normalizing the extracted pixel values, where the normalization formula is as follows:
where p is the pixel value of the lung image and p_normalization is the normalized value; then performing random rotation, translation, scaling and flipping on the original image for data enhancement; and finally, reading the pixel-spacing image tag (0028, 0031) of the dicom file to determine the pixel spacing, and obtaining an image with uniform pixel spacing through scale transformation and cubic spline interpolation, so that each pixel represents 1 mm × 1 mm of lung tissue, namely the final CT image data.
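The following sketch illustrates this preprocessing with pydicom and SciPy; the window of the min-max normalization is an assumed choice, and the pixel values are assumed to already be in Hounsfield units:

```python
import numpy as np
import pydicom
from scipy.ndimage import zoom

def preprocess_slice(path, hu_min=-1000.0, hu_max=400.0):
    # Read the original image of the dicom file and extract the pixel values
    # (assumed already rescaled to HU upstream).
    ds = pydicom.dcmread(path)
    p = ds.pixel_array.astype(np.float32)
    # Assumed min-max normalization: p_normalization = (p - hu_min) / (hu_max - hu_min)
    p_norm = np.clip((p - hu_min) / (hu_max - hu_min), 0.0, 1.0)
    # Resample by cubic spline interpolation (order 3) so that each pixel
    # represents 1 mm x 1 mm of lung tissue.
    row_mm, col_mm = (float(v) for v in ds.PixelSpacing)
    return zoom(p_norm, (row_mm, col_mm), order=3)
```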
And step S2, inputting the CT image data into the trained deep learning lung nodule recognition neural network model, and outputting to obtain lung nodule data.
And step S3, performing endoscope image space three-dimensional model reconstruction on the CT image data by using the virtual endoscope technology, and marking lesions in the endoscope image space three-dimensional model by combining the lung nodule data.
And step S4, selecting a lesion marker in the endoscope image space three-dimensional model, and planning the surgical path.
And step S5, mapping the pose of the bronchoscope catheter from the actual physical coordinate system into the endoscope image space through the electromagnetic positioning system and the pose sensor mounted on the bronchoscope, and registering the actual physical position space to the image space.
And step S6, adjusting the parameter setting of the virtual camera in real time according to the position change of the actual bronchoscope catheter by the virtual endoscope view, performing image transformation to synchronize the virtual endoscope view and the bronchoscope catheter image, and then displaying the three-dimensional pose of the bronchoscope catheter in the endoscope image space in real time in the operation process.
And S7, controlling the bronchoscope catheter to move according to the real-time displayed three-dimensional pose of the bronchoscope catheter in the endoscope image space, and sending the bronchoscope catheter to a specified position according to the operation path.
The entire navigation method will be described in detail below.
The deep learning pulmonary nodule identification model comprises a nodule detection network model and a nodule classification network model:
1) nodule detection network model
As shown in fig. 2, nodule detection adopts the DeepLung-improved Faster R-CNN network structure for lung nodule detection; the nodule detection convolutional neural network model uses three-dimensional (3D) convolution kernels, takes the dual-path network (DPN) as the basic structure, and adopts U-Net's "contraction-expansion" structure so as to better extract the features of lung nodules. The following steps are performed in the nodule detection network model:
a. inputting an image block of 96 × 96 × 96 pixel size cut out from a three-dimensional CT image of CT image data;
b. before the first maximum pooling, firstly using two convolution layers to generate features, and then using eight dual-path blocks in a 'contraction' subnet to further extract the features;
c. the subsequent "extended" sub-network consists of deconvolution and dual-path blocks, similar to the U-Net network, which superimpose the features of each layer of the contracted path with those of the extended path during the upsampling of the extended path. For a set of input data, the detection network can give probability, location, size and semantic information of whether it is a nodule, which information is used to participate in subsequent nodule classification.
2) Nodule classification network model
As illustrated in fig. 3, the output of the nodule detection network model is the location, size, probability of being a lung nodule, and semantic information of each candidate nodule. For candidate nodules whose probability is larger than the threshold, the corresponding CT image data is cropped according to the position and size, and the nodule classification network model performs benign-malignant classification using the semantic labels output by the nodule detection network model. The processing method comprises the following steps:
a. Cut the CT image data with the lung nodule position output by the nodule detection network model as the center to obtain a cropped data block; the cropping size is the nodule size output by the detection network, with 1 added when the nodule size is odd and the size unchanged when it is even.
b. Expand the 24 × 24 × 24 semantic labels output by the detection network to 96 × 96 × 96, i.e. perform one up-sampling, with each voxel expanded to 4 × 4 × 4 voxels by replication.
c. Align the up-sampled semantic label with the center of the original data block cropped in step a and intercept data according to the nodule size, the cropped block of original data being no larger than the semantic label block. Then set to zero the values in the cropped data block corresponding to the regions marked as background (non-nodule regions) in the semantic label block, keeping the positions corresponding to the foreground (nodule region) unchanged.
d. Perform spatial pyramid pooling on the data block obtained in step c, with the pooling views set to 1 × 1, 2 × 2, 3 × 3 and 4 × 4, so as to obtain an output feature of 30 × 30 × 30. Spatial pyramid pooling normalizes features of different input scales to equal scales: if the input feature map size is a × a and the pooling view size is n × n, the pooling window is defined as ⌈a/n⌉ and the pooling stride as ⌊a/n⌋. Thus, regardless of the input size, a pooling view of scale 1 × 1 pools into one feature and a pooling view of scale 4 × 4 pools into 16 features, so the size of the final output does not depend on the input size but on the setting of the pooling views.
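A minimal 3D spatial-pyramid-pooling sketch in PyTorch, following the window/stride rule just described; the exact layer used by the embodiment may differ, and the code assumes each spatial dimension is at least as large as the largest view:

```python
import math
import torch
import torch.nn.functional as F

def spatial_pyramid_pool3d(x, views=(1, 2, 3, 4)):
    # x: (batch, channels, D, H, W). For each view n, window = ceil(a/n) and
    # stride = floor(a/n) per axis, so every view yields n bins per axis
    # regardless of the input size.
    feats = []
    dims = x.shape[2:]
    for n in views:
        k = tuple(math.ceil(a / n) for a in dims)
        s = tuple(max(1, a // n) for a in dims)
        feats.append(F.max_pool3d(x, kernel_size=k, stride=s).flatten(1))
    return torch.cat(feats, dim=1)  # fixed length: channels * sum(n**3 for n in views)
```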
e. Input the features obtained in step d into the 3D DPN for further feature extraction. After all the convolutional and pooling layers, a 2560-dimensional feature vector is obtained.
f. Reduce the feature vector with a fully connected network to obtain the benign/malignant classification result of the lung nodule.
Secondly, the training process of the deep learning pulmonary nodule recognition model in the embodiment is specifically as follows, as shown in fig. 4:
a. obtaining a published pulmonary nodule dataset;
b. carrying out image expansion on the pulmonary nodule data set by utilizing a CT-GAN algorithm, and mixing the pulmonary nodule data set with the original data set to obtain an initial data set;
c. carrying out lung image substantial extraction operation on the initial data set to obtain an intermediate data set;
d. and preprocessing the intermediate data set to obtain a training data set.
e. The model is trained according to a training data set.
1) In the step b above:
in view of the fact that a high-quality pulmonary nodule data set is few at present, the data volume is small, and a deep learning network model requires massive data set training, a pulmonary nodule image generation algorithm CT-GAN (CT image generation type countermeasure network) based on a condition generation countermeasure network (cGAN) is introduced in the embodiment, the network can learn the mapping relationship between images, and the original CT image data is tampered, so that pulmonary nodules are added at a specified position to obtain approximately real medical image data, the problem of insufficient positive samples is solved to a certain extent, and the detection accuracy of the pulmonary nodules is improved. The CT-GAN algorithm specifically performs the following steps:
firstly, reading the dicom file and the labeling file of the pulmonary nodule data set, positioning the target nodule, and processing according to a size of 16 × 16 to obtain a mask;
secondly, inputting mask data into the trained CT-GAN network, and injecting nodules at the designated positions;
then, scaling and repairing the injected nodule to make the nodule closer to a real nodule;
and finally, writing the mask data after the nodes are injected into the dicom file to obtain an initial data set.
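A minimal sketch of this final write-back step with pydicom; it assumes an uncompressed transfer syntax and that the tampered block is already in the file's stored pixel representation:

```python
import pydicom

def write_back(dicom_path, tampered_slice, out_path):
    # Load the original dicom, swap in the nodule-injected pixel block, and save.
    ds = pydicom.dcmread(dicom_path)
    arr = tampered_slice.astype(ds.pixel_array.dtype)  # keep the stored dtype
    ds.PixelData = arr.tobytes()                       # overwrite the raw pixel data
    ds.save_as(out_path)
```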
2) In the above step c:
since the lung CT image includes not only the lung but also other tissues, some of which have similar shapes to the lung nodule, and interfere with the detection and identification of the lung nodule, the lung parenchyma extraction and preprocessing are required before the lung image is input into the network.
The lung parenchyma extraction is specifically as follows:
firstly, carrying out binarization on an image of CT image data;
secondly, removing the external boundary except the lung parenchyma to be processed;
since the connected regions in the image may contain other tissues besides the lung lobes, such as spinal cord tissue, the first two largest connected regions are taken as the lung lobes by marking the connected regions in the image;
sometimes there are adhesions between the pulmonary trachea and the lung parenchyma, so that the trachea cannot be separated from the parenchyma, and some lung nodules at the lobe boundaries are connected to pulmonary vessels or the trachea; to ensure during lobe segmentation that nodules have no adhesion to the trachea and lie completely inside the segmented lobes, the image needs to be eroded so that the lung nodules are separated from the trachea and similar structures, and the erosion template size adopted in this embodiment is 2 × 2;
after the erosion operation, some nodules remain in the lung parenchyma while others, because of the previous erosion step, remain attached to the trachea, and direct segmentation would assign such nodules together with the trachea rather than to the lungs; therefore, to include the nodules in the lung parenchyma, this embodiment uses a closing operation to fill the tiny cavities in the lung lobes, with a template size of 10 × 10;
then, filling the cavities in the lung parenchyma through region filling to obtain a complete lung parenchyma mask;
finally, the lung parenchymal mask is multiplied by the original image to obtain a final segmented lung parenchymal image, namely an intermediate data set.
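A per-slice sketch of this pipeline with scikit-image and SciPy; the binarization threshold is an assumed value, while the 2 × 2 and 10 × 10 templates follow the embodiment:

```python
import numpy as np
from scipy import ndimage
from skimage import measure, morphology
from skimage.segmentation import clear_border

def extract_lung_parenchyma(ct_slice, threshold=-400):
    binary = ct_slice < threshold                                    # binarize
    binary = clear_border(binary)                                    # clear the outer border
    labels = measure.label(binary)                                   # label connected regions
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                                     # ignore background
    lungs = np.isin(labels, np.argsort(sizes)[-2:])                  # two largest = lung lobes
    lungs = morphology.binary_erosion(lungs, morphology.square(2))   # 2x2 erosion
    lungs = morphology.binary_closing(lungs, morphology.square(10))  # 10x10 closing
    mask = ndimage.binary_fill_holes(lungs)                          # region filling -> mask
    return ct_slice * mask                                           # mask x original image
```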
The pretreatment is as follows:
firstly, reading an original image of a dicom file, and extracting a pixel value; and (3) carrying out normalization on the extracted image pixel values, wherein the normalization formula is as follows:
where p is the pixel value of the lung image in the pulmonary nodule dataset, pnormalizationIs a normalized value; then, performing random rotation, translation, scaling and inversion on the original image to perform data enhancement; and finally, reading a pixel space of an image label (0028, 0031) of the dicom file to determine the pixel spacing, and obtaining an image with uniform pixel spacing through scale transformation and cubic spline interpolation to ensure that each pixel represents 1mm multiplied by 1mm of lung tissue, namely the training data set.
Thirdly, the explanation of performing endoscope image space three-dimensional model reconstruction on CT image data by using the virtual endoscope technology is as follows:
fig. 5 is a general flowchart of three-dimensional reconstruction from CT image data, and the overall steps include bronchial segmentation, centerline extraction, data processing and image rendering.
(1) Bronchial segmentation
I) The main bronchi are obtained using rapid region growing based on a queue structure.
As shown in fig. 6, a dynamic marking queue (DMQ) data structure is used for growing: a trachea entrance is selected in the topmost CT slice as the growth origin and pushed into the DMQ; when a target point leaves the DMQ, its neighborhood is subjected to the growth judgment, and points meeting the growth condition are pushed into the DMQ; the new growth points caused by the newly pushed points are judged in turn when they leave the DMQ, until no new points remain in the DMQ and the region growing is complete.
When judging whether a target point belongs to the growth region, a neighborhood-average comparison is adopted:
let Nb(N_i) = {N_1, ..., N_n} be the n-neighborhood of the candidate point N_i, where each N_j is a neighboring point, and let D_mid be the average gray value over the n neighborhood points. The gray value of the center point is compared with D_mid, and if the residual lies within the tolerance range, the target point is considered to belong to the bronchial region, which maximizes the number of segmented target points.
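A minimal sketch of this queue-based growing with the neighborhood-average test; a plain FIFO queue stands in for the DMQ, and the gray-level tolerance is an assumed value:

```python
from collections import deque
import numpy as np

def neighbors6(p, shape):
    z, y, x = p
    for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        q = (z + dz, y + dy, x + dx)
        if all(0 <= c < s for c, s in zip(q, shape)):
            yield q

def region_grow(volume, seed, tol=100.0):
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True                 # mark on push so each voxel enters the queue once
    queue = deque([seed])
    while queue:                       # until no new points remain
        p = queue.popleft()
        for q in neighbors6(p, volume.shape):
            if grown[q]:
                continue
            d_mid = float(np.mean([volume[r] for r in neighbors6(q, volume.shape)]))
            if abs(float(volume[q]) - d_mid) < tol:  # center vs. neighborhood average
                grown[q] = True
                queue.append(q)
    return grown
```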
II) threshold adaptive monitoring is employed to identify leakage problems, preventing over-growth.
Each time a growth pass finishes, the number of target voxel points obtained by the current pass, N_k, is recorded; the threshold is then increased by ΔN and growing is repeated from the same seed voxel to obtain N_{k+1} points. The voxel counts of the new and previous passes are compared as σ = (N_{k+1} − N_k) / N_k, and if the growth ratio σ exceeds the empirical value of 10%, the new pass is considered to have leaked. After a leak is found, the algorithm falls back to the previous round of region growing, and the final threshold is corrected to the threshold adopted by that previous round as the final result.
III) The secondary bronchi are segmented and completed by a terminal fuzzy-model judgment to obtain a relatively complete bronchial tree. New points are judged recursively using the following fuzzy model; for any point p(x, y, z) to be judged:
F(p) = Hu(p) + G_max(p) + N(p)
where Hu(p) is the CT value of the current point p normalized over the interval [−1000, current threshold], representing the fuzzy judgment of the gray value of the current point; G_max(p) is the maximum gradient between p and its 6-neighborhood, normalized over a gray range of 100, with ΔC_t denoting the gray difference between the center point p and each of its 6 neighbors (results exceeding 100 are treated as 1), representing the fuzzy judgment of the variation between the current point and its surroundings; and N(p) is the fraction of voxels in the 6-neighborhood of the current point p already judged to be bronchial, with N_f denoting the number of voxel points judged to belong to the bronchus, representing the fuzzy judgment of how densely bronchial targets have been found around p.
The coefficients of the three fuzzy terms are treated as equivalent, so each has an equal influence on the judgment result. When the total F(p) is less than 1.5, the point is considered to belong to the bronchus.
In order to prevent leakage, the number of points newly added after each fuzzy judgment is recorded, and the average increased number of voxels after multiple regrowth is limited to establish a termination condition.
IV) The resulting bronchial tree is restored in detail using morphological closing as post-processing:
Cl(IMG) = Er(Di(IMG))
i.e. the closing operation is a mask operation that performs morphological dilation and then erosion on the same region with the same template, and may also be regarded as a convolution of the image. Here Er(IMG) and Di(IMG) denote morphological erosion and dilation respectively, with M_m being the template M translated by m: erosion keeps the points at which the translated template lies entirely inside the foreground, and dilation keeps the points at which it intersects the foreground. A three-dimensional template, like the two-dimensional case, is translated along all three dimensions of the original volume.
(2) Centerline extraction
For complex tubular structures such as the bronchial or vascular tree, there are multiple centerlines, also referred to as skeleton lines. The invention adopts a fast three-dimensional erosion-thinning method based on a look-up table (LUT): first the bronchial binary image is segmented from the original data by region growing; second, an erosion-thinning model and the LUT are established; then the LUT is searched to perform erosion thinning; and finally the thinning result is pruned.
I) Establishment of the erosion model
Voxels belonging to the target object are defined as foreground points S_fg, and the rest are defined as background points S_bg. There is a subset S_sub-fg of the foreground point set whose elements, if they all meet the following four criteria, are called simple points (SV).
For the judgment of the conditions a and b, when all the voxel points are traversed, the condition can be directly obtained through the distribution of foreground points and background points in the neighborhood of each foreground voxel point.
For condition c, a 26-neighborhood region growing method is adopted, as shown in fig. 7: any foreground point S_fg in the 26-neighborhood Nb_26 of the target point is taken as the seed, and 26-neighborhood foreground region growing yields a 26-connected foreground point set S as the result. The growing result is compared with the set of foreground points in the target point's own 26-neighborhood; based on the uniqueness of neighborhoods, if the numbers of points in the two sets are the same, the condition is considered met.
Condition d uses a local region growing method over the 6-neighborhoods Nb_6 of the background points S_bg within Nb_18, storing the connected 6-neighborhoods in a list-of-lists data structure. After the traversal is finished, the background point set in Nb_6 is compared in turn with each connected 6-neighborhood set S_i, and searching for an identical set yields the judgment of whether condition d is met.
II) analysis of neighborhood distribution and creation of LUTs
The invention establishes the LUT with a multithreaded, parallel-matching double Boolean array algorithm; the array size is 2^26, with the subscript representing the voxel combination (VC) to which the index value uniquely corresponds. One array is the LUT itself, the reference relied on in the subsequent erosion thinning, whose value indicates whether the center point under the VC is a simple point; the other array is the marking table (MT), whose value indicates whether the VC has been calculated.
III) Fast LUT-based erosion thinning is applied to obtain a preliminary rough extraction of the skeleton line.
Before thinning, the foreground points in the image are stored in a temporary array PointList_fg, and then the following steps are carried out:
a. find the edge points in PointList_fg to obtain PointList_border;
b. traverse the points in PointList_border; from the 26-neighborhood Nb_26 of each boundary point obtain the corresponding 26-bit index value, search the LUT with this index, and judge and delete simple points according to the LUT value;
c. repeat steps a and b until no simple point can be found in PointList_border, giving the preliminary rough extraction of the skeleton line.
IV) Pruning to eliminate burrs on the main airway.
The pruning process eliminates extra false branches, i.e. burrs, on the main airway.
The invention adopts a size-adaptive pruning algorithm. The specific operation steps are as follows, and the flow chart is as shown in FIG. 8:
a. set a suitable pruning Threshold according to the number of skeleton-line voxels, taking 1.7% of the voxel count as an empirical value;
b. traverse the thinned skeleton lines to obtain all endpoints PointList_end;
c. starting from each point in PointList_end, traverse until a branch point is reached, and if the traversed length is less than Threshold, delete the branch;
d. repeat b and c until no burrs can be deleted.
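A sketch of steps a-d under an assumed graph representation of the thinned skeleton (a set of voxels plus an adjacency map, both hypothetical names):

```python
def prune_skeleton(skeleton, adjacency, threshold_ratio=0.017):
    # skeleton: set of voxel coordinates; adjacency: voxel -> set of neighbors.
    threshold = max(1, int(threshold_ratio * len(skeleton)))  # a. ~1.7% of voxel count
    pruned = True
    while pruned:                                             # d. repeat until no burrs
        pruned = False
        for end in [p for p in skeleton if len(adjacency[p]) == 1]:  # b. endpoints
            if end not in adjacency:                          # deleted earlier this pass
                continue
            path, prev, cur = [end], None, end
            while len(adjacency[cur]) <= 2:                   # c. walk to a branch point
                nxt = [q for q in adjacency[cur] if q != prev]
                if not nxt:
                    break
                prev, cur = cur, nxt[0]
                if len(adjacency[cur]) > 2:
                    break
                path.append(cur)
            if len(path) < threshold:                         # shorter than Threshold: a burr
                for q in path:
                    for r in adjacency.pop(q, ()):
                        adjacency[r].discard(q)
                    skeleton.discard(q)
                pruned = True
```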
Fourthly, mapping the bronchoscope catheter from the pose in the actual physical coordinate system to the endoscope image space, and performing registration from the actual physical position space to the image space as follows:
1) drawing technique
I) Introduction to image processing library
In the implementation of the virtual bronchoscope, the reading of the related original data and the drawing of the data are developed on the basis of a VTK (visualization toolkit) open source library.
II) volume rendering and surface rendering
To improve rendering efficiency, surface rendering is selected.
2) Data conversion
I) Bronchial iso-surface extraction
Since the bronchial segmentation based on the CT data set still results in volume data, it is necessary to apply iso-surface extraction technique to reconstruct surface data of the bronchial surface.
Here the Marching Cubes algorithm (MC), the most typical iso-surface extraction method, is adopted; its basic steps are as follows:
a. constructing an index table with the size of 256 according to the relation between symmetry and rotation, and recording the states of the isosurface under various conditions;
b. marking voxels in the image according to a given isosurface value;
c. sequentially traversing the voxels in the marked image and performing d-f operation;
d. calculating the position of the voxel in the index table according to the distribution condition of the voxel;
e. query the index table, calculate the coordinates of the intersection points on each edge by interpolation, and add the three-dimensional lesion coordinates obtained by the deep-learning identification of the lesion site;
f. connect the intersection points into patches to obtain the three-dimensional image of the bronchus and the lesion.
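A minimal VTK usage sketch of this surface extraction; the iso-value 0.5 assumes a binary (0/1) segmentation volume on the reader's output port:

```python
import vtk

def airway_surface_actor(reader, iso_value=0.5):
    # reader: any VTK image source, e.g. vtk.vtkNIFTIImageReader or
    # vtk.vtkDICOMImageReader with its file name(s) already set.
    mc = vtk.vtkMarchingCubes()
    mc.SetInputConnection(reader.GetOutputPort())
    mc.SetValue(0, iso_value)                  # contour index 0 at the iso-surface value
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(mc.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    return actor                               # add to a vtkRenderer to display
```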
II) skeleton line data parsing and conversion
The skeleton line conversion algorithm comprises the following steps:
a. build a skeleton-line tree structure starting from the bronchus entrance and store it in the form of a linked Node List;
b. perform coordinate conversion on the voxels in each tree node to obtain their actual three-dimensional coordinates;
c. connect the three-dimensional coordinate sets in each node to generate vtkPolyLine data.
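Step c can be realized with the standard vtkPolyLine construction, sketched below for one node's ordered centerline coordinates:

```python
import vtk

def polyline_from_points(points_mm):
    # points_mm: ordered list of (x, y, z) centerline coordinates for one node.
    pts = vtk.vtkPoints()
    line = vtk.vtkPolyLine()
    line.GetPointIds().SetNumberOfIds(len(points_mm))
    for i, p in enumerate(points_mm):
        pts.InsertNextPoint(*p)
        line.GetPointIds().SetId(i, i)
    cells = vtk.vtkCellArray()
    cells.InsertNextCell(line)
    poly = vtk.vtkPolyData()
    poly.SetPoints(pts)
    poly.SetLines(cells)
    return poly
```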
As shown in fig. 9, mapping the bronchoscope catheter from the pose in the actual physical coordinate system into the endoscope image space is specifically developed as follows:
(1) Instrument preparation: electromagnetic positioning system and pose sensor
① Electromagnetic positioning system and pose sensor
For the determination of the catheter pose, the invention adopts an electromagnetic positioning system which mainly comprises three parts, namely a magnetic field generator, a pose sensor and a system control unit, and the system diagram is shown in figure 10. The magnetic field generator can generate a three-dimensional magnetic field space, each sensor is provided with a coordinate system, and the pose of the sensor coordinate system in the three-dimensional magnetic field space can be measured after the pose sensor is placed in the magnetic field space. If the pose sensor is integrated in the catheter, the coordinate position of the catheter in a three-dimensional space can be acquired in real time, and the position of the catheter relative to the bronchial image can be obtained through coordinate system registration, so that the relative position of the catheter in the bronchus of the human body can be displayed in real time.
Existing pose sensors include 5-degree-of-freedom and 6-degree-of-freedom sensors; considering the computational complexity of a dual 5-degree-of-freedom scheme, the invention adopts the 6-degree-of-freedom sensor.
② Multithreaded data acquisition
According to the interface used for data acquisition, the invention divides acquisition into two threads: one acquires pose information from the electromagnetic positioning system via serial-port communication, and the other acquires sensor data via a data acquisition card.
(2) Coordinate system calibration and image display of catheter and sensor
The 6-degree-of-freedom pose sensor can detect rotation about its own axis, and when the catheter moves, the bending plane of the catheter is completely determined by this rotation, so the simulation requires only one position point. Before bronchoscopy is carried out, the coordinate systems of the catheter and the pose sensor are first calibrated.
The image display module mainly covers two aspects: navigation image display and real-time display of catheter motion. Navigation image display includes the display of the three-dimensional bronchial model, the intraoperative virtual endoscopic view and so on; real-time display of catheter motion includes the real-time three-dimensional position of the catheter relative to the bronchus, a simulated display of the catheter shape during motion and so on. The real-time three-dimensional position of the catheter relative to the bronchus requires calibration of the coordinate systems of the catheter and the 6-degree-of-freedom pose sensor and registration of the coordinate systems, while the simulated display of the catheter shape is realized by simulating the catheter motion.
Navigation image display-bronchial model three-view and bronchial virtual endoscopic view
In order to comprehensively display the three-dimensional bronchial model and the position of the catheter, the invention adopts three views of the bronchial model and virtual endoscopic display of the bronchus, as shown in fig. 11.
The three directional views show the relative position of the catheter in the bronchi, and the endoscopic view shows detailed information of the catheter movement. The endoscopic view is obtained by setting the pose of a virtual endoscopic camera inside the bronchus: the camera focus is set at the centerline point closest to the catheter, and the camera is placed a certain distance back along the centerline, so that the camera obtains the largest endoscopic view inside the bronchus.
Simulation of catheter movement
The invention adopts the circular arc to simulate the bending shape of the conduit.
As shown in fig. 12, let the eccentricity of the steel wire rope at the catheter end be e, the length of the bent section be L, the radius of the central axis of the bent catheter be R, the bending radius of the steel wire be r, the bending angle be θ, and the stroke of the steel wire rope be Δ, giving the bending equation:
where e and L are fixed known quantities and Δ can be measured in real time through the catheter motion control, so R, r and θ for the catheter bending can be determined from the above equation.
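Under the usual constant-curvature assumption for a wire-driven catheter, these quantities are related as follows (a reconstruction, not necessarily the patent's exact formula):

```latex
% Constant-curvature model (assumed): the pull wire runs at offset e from the
% central axis, so its bent length is r\theta while the axis length L is fixed.
L = R\,\theta, \qquad r = R - e, \qquad \Delta = L - r\,\theta = e\,\theta
\;\Longrightarrow\; \theta = \frac{\Delta}{e}, \qquad R = \frac{eL}{\Delta}, \qquad r = R - e
```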
Coordinate system calibration of catheter and 6-degree-of-freedom pose sensor
Calibration means determining the relative position of the bending plane of the catheter in the coordinate system of the sensor.
As shown in fig. 13, the Z axis of the sensor coordinate system lies along the sensor direction, and the relative position between the sensor coordinate system and the bending plane of the catheter can be determined by calculating the included angle α between the X axis of the sensor coordinate system and the normal vector n of the bending plane. The calibration method is to collect one pose T_1 = [u_1 v_1 w_1 p_1] while the catheter is not bent (relative to a fixed coordinate system), then bend the catheter and collect another pose T_2 = [u_2 v_2 w_2 p_2], where u, v, w are direction vectors parallel to the X, Y, Z axes at the sensor position point A (the tangential direction w at A being parallel to the Z direction) and p is the corresponding position coordinate. Since the Z axis of the sensor coordinate system lies along the sensor direction, w_1 and w_2 both lie in the bending plane, so n = w_1 × w_2 and cos α = n · u_1, from which the angle α can be calculated and the relative position of the catheter and the sensor coordinate system is fully determined.
The following describes how to calculate the actual coordinates using the calibrated coordinate system:
As shown in fig. 14, when the catheter moves, the pose of its front end is detected in real time; the pose acquired by the sensor is T_i = [u_i v_i w_i p_i] (i = 0, 1, 2, ..., n). First, the normal vector n_i of the bending plane of the catheter can be calculated directly from α and u_i. In the bending plane, let the center of the arc of the bent catheter be O and the sensor position point be A; the tangent of the arc at A is w_i, so the in-plane direction toward O is u_OA = w_i × n_i, and the coordinates of O are p_O = p_A + R · u_OA (R being the arc radius calculated above). With O as the origin, u_OA as the X axis, w_i as the Y axis and n_i as the Z axis, a rectangular coordinate system is established whose transform relative to the fixed coordinate system is T_O = [u_OA w_i n_i p_O]. For any point C on the arc whose central angle measured from OA is γ (γ < θ, θ being the arc's central angle obtained above), its coordinates in the T_O frame are p_c' = [x_c' y_c' z_c' 1]^T with x_c' = R·cos γ, y_c' = R·sin γ and z_c' = 0, and its coordinates in the fixed coordinate system are p_c = T_O · p_c'.
In the same way, coordinate values of other points can be obtained.
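A sketch of this arc sampling; w_i and n_i are the tangent and bending-plane normal from the calibrated pose, and the radial sign is chosen here so that γ = 0 lands on the sensor point A:

```python
import numpy as np

def arc_points(p_a, w_i, n_i, R, theta, num=20):
    # p_a: sensor point A; w_i: unit tangent at A; n_i: unit bending-plane normal.
    u_ao = np.cross(w_i, n_i)            # in-plane unit vector from the center O toward A
    p_o = p_a - R * u_ao                 # arc center O
    T = np.eye(4)                        # frame at O: X = u_AO, Y = w_i, Z = n_i
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = u_ao, w_i, n_i, p_o
    pts = []
    for g in np.linspace(0.0, theta, num):
        pc_local = np.array([R * np.cos(g), R * np.sin(g), 0.0, 1.0])
        pts.append((T @ pc_local)[:3])   # p_c = T_O . p_c'
    return np.array(pts)
```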
Coordinate system registration
The position of the catheter relative to the bronchial image is obtained through coordinate system registration. The invention adopts point-set-based spatial registration, and the first step of the algorithm is the acquisition of two groups of corresponding point sets. The point set in the actual space coordinate system is acquired with the sensor probe on the bronchial model, and the point set in image space is obtained by picking points on the reconstructed three-dimensional bronchial image with the mouse. If the two point sets correspond completely, the transformation matrix between the two coordinate systems can be obtained accurately.
When the two point sets are registered, the corresponding mathematical calculation is needed to obtain the transformation matrix. An iterative closest point (ICP) algorithm is adopted: in the two spatial coordinate systems, let the point sets X = {x_i | x_i ∈ R^4, i = 1, 2, ..., n} and Y = {y_i | y_i ∈ R^4, i = 1, 2, ..., n} correspond one to one. The transformation matrix q between the two point sets is a 4 × 4 matrix, and the ICP algorithm can be summarized as solving for the matrix q that minimizes Σ_i ||q·x_i − y_i||². The iterative process generally adopts the least-squares idea, and the calculation proceeds as follows:
a. collect the two groups of corresponding point sets A_k and B_k;
b. compute the corresponding points between A_k and B_k such that ||A_k − B_k|| = min;
c. compute R_k and T_k by the least-squares method such that ||(R_k·A_k + T_k) − B_k|| = min;
d. compute the registration error d_k = ||(R_k·A_k + T_k) − B_k||;
e. if d_k is greater than a given threshold, replace A_k with A_{k+1} = R_k·A_k + T_k and repeat from step b; otherwise the calculation is complete and q = [R_0 T_0][R_1 T_1] ... [R_i T_i] (i = 1, 2, ..., k).
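A self-contained sketch of this loop; the least-squares step c is solved here with the SVD-based Kabsch method, which the source does not name explicitly:

```python
import numpy as np

def best_rigid_transform(A, B):
    # Closed-form R, T minimizing ||(R A + T) - B|| for paired Nx3 point sets.
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def icp(A, B, iters=50, eps=1e-6):
    q = np.eye(4)
    for _ in range(iters):
        # step b: nearest-neighbor correspondences (brute force for clarity)
        idx = np.argmin(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1), axis=1)
        R, T = best_rigid_transform(A, B[idx])      # step c
        A = A @ R.T + T                             # A_{k+1} = R_k A_k + T_k
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, T
        q = step @ q                                # accumulate [R_k T_k]
        d_k = np.linalg.norm(A - B[idx])            # step d: registration error
        if d_k < eps:                               # step e: threshold test
            break
    return q
```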
After the coordinate systems are registered, the transformation matrix between the actual physical coordinate system and the image coordinate system is obtained; each time the catheter pose information is acquired, it is mapped into image space and subjected to the corresponding coordinate transformation, so that the relative position of the catheter in the bronchus is displayed accurately on the computer image.
(3) Intraoperative navigation
In the operation process, the image display module changes in real time according to the movement of the catheter in the bronchus of the patient, so that the virtual endoscopic view and the actual camera image are synchronized, and the navigation is realized. The doctor can control the motion of the catheter according to the real-time images and send the catheter to a designated position.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.
Claims (9)
1. A navigation method for a lung biopsy bronchoscope, characterized by comprising the following steps:
s1, acquiring CT image data;
S2, inputting the CT image data into the trained deep learning lung nodule recognition neural network model, and obtaining lung nodule data as output;
S3, reconstructing an endoscope image space three-dimensional model from the CT image data using virtual endoscopy, and marking lesions in the endoscope image space three-dimensional model by combining the lung nodule data;
S4, selecting a lesion mark in the endoscope image space three-dimensional model, and planning a surgical path;
S5, mapping the pose of the bronchoscope catheter from the actual physical coordinate system into the endoscope image space through the electromagnetic positioning system and the pose sensor mounted on the bronchoscope catheter, and registering the actual physical position space to the image space;
S6, adjusting the virtual camera parameters of the virtual endoscope view in real time according to the position changes of the actual bronchoscope catheter and performing image transformation, so that the virtual endoscope view and the bronchoscope camera image are synchronized, and then displaying the three-dimensional pose of the bronchoscope catheter in the endoscope image space in real time during the operation;
S7, controlling the bronchoscope catheter to move according to the real-time displayed three-dimensional pose of the bronchoscope catheter in the endoscope image space, and advancing the bronchoscope catheter to the designated position along the surgical path.
2. The navigation method for the lung biopsy bronchoscope according to claim 1, wherein the deep learning lung nodule recognition model comprises a nodule detection network model and a nodule classification network model, wherein the nodule detection network model adopts a DeepLung-style improved Faster R-CNN network structure, uses three-dimensional convolution kernels, takes the dual-path network as its basic structure, and adopts a U-Net contraction-expansion structure; it outputs the position and size of each candidate nodule, the probability that it is a lung nodule, and semantic information. The nodule classification network model is used to classify the candidate nodules as benign or malignant.
3. The navigation method for the lung biopsy bronchoscope according to claim 2, wherein the following steps are performed in the nodule detection network model: an image block cropped from the three-dimensional CT image of the CT image data is input; before the first max-pooling, two convolution layers are first used to generate features, and dual-path blocks then extract further features in the contracting subnet; the subsequent expanding subnet consists of deconvolution layers and dual-path blocks, and during up-sampling along the expanding path, the features of each layer of the contracting path are superposed with the features of the expanding path.
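The sketch below illustrates, in PyTorch, the ingredients this claim names: two convolution layers before the first max-pooling, a dual-path block in the contracting subnet, and a deconvolution with a superposed skip connection in the expanding subnet. The channel counts, the simplified block definition and the class names are illustrative assumptions, not the patent's actual network.

```python
import torch
import torch.nn as nn

class DualPathBlock3d(nn.Module):
    """Simplified 3D dual-path block: a residual path (summed) plus a
    densely connected path (concatenated), as in DPN-style backbones."""
    def __init__(self, channels, dense_growth=16):
        super().__init__()
        self.channels = channels
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels + dense_growth, 3, padding=1),
            nn.BatchNorm3d(channels + dense_growth),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.body(x)
        res, dense = y[:, :self.channels], y[:, self.channels:]
        return torch.cat([x + res, dense], dim=1)   # channels + growth out

class TinyDetectorBackbone(nn.Module):
    """Two plain conv layers before the first max-pool, a contracting
    dual-path stage, then an expanding (deconvolution) stage whose
    features are superposed with the contracting-path features."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(1, 24, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(24, 24, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool3d(2)
        self.down = DualPathBlock3d(24)               # 24 -> 40 channels
        self.up = nn.ConvTranspose3d(40, 24, 2, stride=2)
        self.fuse = nn.Conv3d(48, 24, 3, padding=1)   # skip concat: 24 + 24

    def forward(self, x):
        s = self.stem(x)                  # features before the first pooling
        d = self.down(self.pool(s))       # contracting path
        u = self.up(d)                    # expanding path (deconvolution)
        return self.fuse(torch.cat([u, s], dim=1))    # superpose skip features
```

For example, TinyDetectorBackbone()(torch.randn(1, 1, 32, 32, 32)) returns a (1, 24, 32, 32, 32) feature map.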
4. The navigation method for the lung biopsy bronchoscope according to claim 2, wherein the following steps are performed in the nodule classification network model:
A1, cropping the CT image data centered on the lung nodule position output by the nodule detection network model to obtain a cropped data block, the crop size being the nodule size output by the nodule detection network model; when the nodule size is an odd number, 1 is added to it, and when it is an even number it is kept unchanged;
A2, expanding the semantic labels output by the nodule detection network model, i.e. performing one up-sampling pass in which values are replicated;
A3, center-aligning the up-sampled semantic label with the cropped data block and cropping it to the nodule size; then setting to zero the values in the cropped data block that correspond to regions marked as non-nodule in the semantic label block, while keeping the positions corresponding to the nodule region unchanged;
A4, performing spatial pyramid pooling on the cropped data block processed in step A3, with the pooling views set to 1 × 1, 2 × 2, 3 × 3 and 4 × 4, to obtain output features of 30 × 30 × 30 (see the sketch following this claim);
A5, inputting the features obtained in step A4 into a 3D DPN for further feature extraction; after all the convolution and pooling layers, a 2560-dimensional feature vector is obtained;
A6, reducing the feature vector with a fully connected network to obtain the benign/malignant classification result for the lung nodule.
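For step A4, a generic 3D spatial pyramid pooling over the stated 1/2/3/4 views can be sketched as below (PyTorch assumed; the function name is hypothetical). Note that generic SPP yields 1 + 8 + 27 + 64 = 100 bins per channel; the 30 × 30 × 30 output shape stated in the claim is specific to the patent's configuration and is not reproduced by this sketch.

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool3d(x, levels=(1, 2, 3, 4)):
    """Pool a variable-size 3D feature map into a fixed-length vector.

    For each level L the map is adaptively max-pooled to L x L x L bins,
    so the output length per channel is fixed regardless of the input
    size, which is what lets nodule crops of different sizes feed one
    classifier.
    """
    n, c = x.shape[:2]
    feats = [F.adaptive_max_pool3d(x, L).reshape(n, c, -1) for L in levels]
    return torch.cat(feats, dim=2)        # shape: (n, c, 100) for levels 1..4
```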
5. The navigation method for the lung biopsy bronchoscope according to claim 1, wherein the step S1 comprises:
S11, performing the lung parenchyma extraction operation on the initial CT image data to obtain an intermediate data set;
S12, preprocessing the intermediate data set to obtain the final CT image data: first, reading the original image of the dicom file and extracting the pixel values; normalizing the extracted pixel values, where p is a pixel value of the lung image and p_normalization is the normalized value; then performing random rotation, translation, scaling and flipping on the original image for data augmentation; finally, reading the pixel-spacing tag of the dicom file's image label to determine the pixel spacing, and obtaining an image with uniform pixel spacing through scale transformation and cubic spline interpolation, so that each pixel represents 1 mm × 1 mm of lung tissue; the result is the final CT image data.
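A hedged sketch of this preprocessing for a single dicom slice follows, using pydicom and scipy. Because the claim's normalization formula did not survive extraction, a min-max normalization over a clipped HU window is assumed here and marked as such in the comments; the pixel-spacing resampling uses cubic-spline interpolation (order=3) as the claim describes, and the function name and window values are illustrative.

```python
import numpy as np
import pydicom
from scipy.ndimage import zoom

def preprocess_slice(path, target_spacing=1.0, hu_window=(-1000.0, 400.0)):
    """Read one dicom slice, normalize its pixel values, and resample it to
    a uniform pixel spacing with cubic-spline interpolation (order=3)."""
    ds = pydicom.dcmread(path)
    p = ds.pixel_array.astype(np.float32)
    # Rescale stored values to HU where the tags are present
    p = p * float(getattr(ds, "RescaleSlope", 1)) + float(getattr(ds, "RescaleIntercept", 0))

    lo, hi = hu_window                        # assumed clipping window
    p = np.clip(p, lo, hi)
    p_norm = (p - lo) / (hi - lo)             # assumed min-max form of the normalization

    row_mm, col_mm = (float(v) for v in ds.PixelSpacing)   # spacing from the image label
    factors = (row_mm / target_spacing, col_mm / target_spacing)
    return zoom(p_norm, factors, order=3)     # each output pixel ~ 1 mm x 1 mm
```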
6. The navigation method for the lung biopsy bronchoscope according to claim 1, wherein the training data set of the deep learning lung nodule recognition model is obtained as follows:
b1, acquiring a public pulmonary nodule data set;
b2, carrying out image expansion on the pulmonary nodule data set by utilizing a CT-GAN algorithm to obtain an initial data set;
B3, performing the lung parenchyma extraction operation on the initial data set to obtain an intermediate data set;
B4, preprocessing the intermediate data set to obtain the training data set: first, reading the original image of the dicom file and extracting the pixel values; normalizing the extracted pixel values, where p is a pixel value of a lung image in the pulmonary nodule data set and p_normalization is the normalized value; then performing random rotation, translation, scaling and flipping on the original image for data augmentation; finally, reading the pixel-spacing tag of the dicom file's image label to determine the pixel spacing, and obtaining an image with uniform pixel spacing through scale transformation and cubic spline interpolation, with each pixel representing 1 mm × 1 mm of lung tissue; this is the training data set.
7. The navigation method for the lung biopsy bronchoscope according to claim 5 or 6, wherein the lung parenchyma extraction operation comprises: binarizing the image of the CT image data; clearing the outer border regions other than the lung parenchyma to be processed; labeling the connected regions in the image and taking the two largest connected regions as the lung lobes; applying an erosion operation to the image; filling tiny cavities inside the lung lobes with a closing operation; filling the holes in the lung parenchyma through region filling to obtain a complete lung parenchyma mask; and multiplying the lung parenchyma mask by the original image to obtain the final segmented lung parenchyma image, i.e. the intermediate data set.
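The claimed sequence maps naturally onto scikit-image and scipy primitives, as in the sketch below for a single slice; the HU threshold and the structuring-element sizes are illustrative assumptions rather than the patent's parameters.

```python
import numpy as np
from scipy import ndimage
from skimage import measure, morphology, segmentation

def lung_parenchyma_mask(slice_hu, threshold=-400):
    """Segment the lung parenchyma of one CT slice, following the claimed
    sequence: binarize, clear the border, keep the two largest connected
    regions, erode, close, fill holes, then mask the original image."""
    binary = slice_hu < threshold                   # binarization (assumed HU cut)
    binary = segmentation.clear_border(binary)      # drop regions touching the edge

    labels = measure.label(binary)
    regions = sorted(measure.regionprops(labels), key=lambda r: r.area, reverse=True)
    mask = np.isin(labels, [r.label for r in regions[:2]])   # two largest = lung lobes

    mask = morphology.binary_erosion(mask, morphology.disk(2))    # erosion
    mask = morphology.binary_closing(mask, morphology.disk(10))   # close small cavities
    mask = ndimage.binary_fill_holes(mask)                        # region filling

    return slice_hu * mask                          # segmented parenchyma image
```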
8. The navigation method for the lung biopsy bronchoscope according to claim 6, wherein the CT-GAN algorithm performs the following steps:
B21, reading the dicom file and the label file of the pulmonary nodule data set, locating the target nodule, and cutting a 16 × 16 region around it to obtain a mask;
b22, inputting the mask data into the trained CT-GAN network, and injecting nodules at the designated positions;
b23, scaling and repairing the injected nodule to make the injected nodule closer to a real nodule;
B24, writing the mask data with the injected nodule back into the dicom file to obtain the initial data set.
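Steps B21 and B24 (cutting the region around the target nodule and writing the modified data back to the dicom file) can be sketched with pydicom as below. The GAN inference of steps B22 and B23 is out of scope here; the 16 × 16 patch is taken in-plane, and the function name and arguments are hypothetical.

```python
import numpy as np
import pydicom

def cut_and_writeback(path, center_rc, out_path, half=8, injected=None):
    """Cut a 16 x 16 patch around a target nodule (step B21) and, once the
    generator has produced 'injected' content, write it back into the
    dicom pixel data (step B24). Assumes uncompressed pixel data."""
    ds = pydicom.dcmread(path)
    img = ds.pixel_array
    r, c = center_rc
    patch = img[r - half:r + half, c - half:c + half].copy()   # the 16 x 16 mask

    if injected is not None:
        img[r - half:r + half, c - half:c + half] = injected.astype(img.dtype)
        ds.PixelData = img.tobytes()       # write the modified array back
        ds.save_as(out_path)
    return patch
```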
9. A navigation system for a lung biopsy bronchoscope, comprising:
the deep learning pulmonary nodule recognition module is used for acquiring CT image data, inputting the CT image data into the trained deep learning pulmonary nodule recognition neural network model, and obtaining lung nodule data as output;
the CT virtual endoscope module is used for reconstructing the endoscope image space three-dimensional model from the CT image data using virtual endoscopy, and for marking lesions in the endoscope image space three-dimensional model by combining the lung nodule data;
the operation navigation registration module is used for selecting a lesion mark in the endoscope image space three-dimensional model and planning a surgical path; for mapping the pose of the bronchoscope catheter from the actual physical coordinate system into the endoscope image space through the electromagnetic positioning system and the pose sensor mounted on the bronchoscope catheter, and registering the actual physical position space to the image space; the virtual endoscope view adjusts the virtual camera parameters in real time as the position of the actual bronchoscope catheter changes and performs image transformation to synchronize the virtual endoscope view with the bronchoscope camera image, after which the three-dimensional pose of the bronchoscope catheter in the endoscope image space is displayed in real time during the operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110274619.1A CN113112609A (en) | 2021-03-15 | 2021-03-15 | Navigation method and system for lung biopsy bronchoscope |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113112609A true CN113112609A (en) | 2021-07-13 |
Family
ID=76711441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110274619.1A Pending CN113112609A (en) | 2021-03-15 | 2021-03-15 | Navigation method and system for lung biopsy bronchoscope |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113112609A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106659373A (en) * | 2014-07-02 | 2017-05-10 | 柯惠有限合伙公司 | Dynamic 3d lung map view for tool navigation inside the lung |
US20180368917A1 (en) * | 2017-06-21 | 2018-12-27 | Biosense Webster (Israel) Ltd. | Registration with trajectory information with shape sensing |
CN112315582A (en) * | 2019-08-05 | 2021-02-05 | 罗雄彪 | Positioning method, system and device of surgical instrument |
CN111772792A (en) * | 2020-08-05 | 2020-10-16 | 山东省肿瘤防治研究院(山东省肿瘤医院) | Endoscopic surgery navigation method, system and readable storage medium based on augmented reality and deep learning |
CN112258530A (en) * | 2020-12-21 | 2021-01-22 | 四川大学 | Neural network-based computer-aided lung nodule automatic segmentation method |
CN112450960A (en) * | 2020-12-21 | 2021-03-09 | 周永 | Virtual endoscope display method based on VR/AR combined digital lung technology |
Non-Patent Citations (5)
Title |
---|
LIUZ_NOTES: "Deep-Learning-Based Lung CT Image Recognition: Detecting Lung Nodules with U-Net, 3D CNN and cGAN (Part 3)", HTTPS://BLOG.CSDN.NET/LIUZ_NOTES/ARTICLE/DETAILS/106616654 *
LIU, MINGWEI: "Research on Virtual Bronchoscopy", China Master's Theses Full-Text Database (Engineering Science and Technology II) *
XUE, CHANJUAN: "Research on Deep-Learning-Based Lung Image Classification", China Master's Theses Full-Text Database (Medicine and Health Sciences) *
ZHENG, XIUYUAN ET AL.: "Modern Sports Biomechanics, 2nd Edition", National Defense Industry Press, 30 April 2007 *
CHEN, QIUMING ET AL.: "Advances in the Clinical Application of Electromagnetic Navigation Bronchoscopy in the Diagnosis and Treatment of Peripheral Pulmonary Lesions", Chinese Journal of Lung Cancer *
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113487605B (en) * | 2021-09-03 | 2021-11-19 | 北京字节跳动网络技术有限公司 | Tissue cavity positioning method, device, medium and equipment for endoscope |
CN113487605A (en) * | 2021-09-03 | 2021-10-08 | 北京字节跳动网络技术有限公司 | Tissue cavity positioning method, device, medium and equipment for endoscope |
CN113616333A (en) * | 2021-09-13 | 2021-11-09 | 上海微创医疗机器人(集团)股份有限公司 | Catheter movement assistance method, catheter movement assistance system, and readable storage medium |
CN113616333B (en) * | 2021-09-13 | 2023-02-10 | 上海微创微航机器人有限公司 | Catheter movement assistance method, catheter movement assistance system, and readable storage medium |
CN114067626A (en) * | 2021-09-30 | 2022-02-18 | 中日友好医院(中日友好临床医学研究所) | Bronchoscope simulation system based on personalized data |
CN114067626B (en) * | 2021-09-30 | 2023-12-15 | 浙江优亿医疗器械股份有限公司 | Bronchoscope simulation system based on personalized data |
WO2023066072A1 (en) * | 2021-10-20 | 2023-04-27 | 上海微创微航机器人有限公司 | Catheter positioning method, interventional surgery system, electronic device and storage medium |
CN113855242A (en) * | 2021-12-03 | 2021-12-31 | 杭州堃博生物科技有限公司 | Bronchoscope position determination method, device, system, equipment and medium |
CN113855242B (en) * | 2021-12-03 | 2022-04-19 | 杭州堃博生物科技有限公司 | Bronchoscope position determination method, device, system, equipment and medium |
WO2023097944A1 (en) * | 2021-12-03 | 2023-06-08 | 杭州堃博生物科技有限公司 | Bronchoscope position determination method and apparatus, system, device, and medium |
WO2023124979A1 (en) * | 2021-12-31 | 2023-07-06 | 杭州堃博生物科技有限公司 | Lung bronchoscope navigation method, electronic device and computer readable storage medium |
CN116416414A (en) * | 2021-12-31 | 2023-07-11 | 杭州堃博生物科技有限公司 | Lung bronchoscope navigation method, electronic device and computer readable storage medium |
CN116433874A (en) * | 2021-12-31 | 2023-07-14 | 杭州堃博生物科技有限公司 | Bronchoscope navigation method, device, equipment and storage medium |
CN116416414B (en) * | 2021-12-31 | 2023-09-22 | 杭州堃博生物科技有限公司 | Lung bronchoscope navigation method, electronic device and computer readable storage medium |
CN114693642A (en) * | 2022-03-30 | 2022-07-01 | 北京医准智能科技有限公司 | Nodule matching method and device, electronic equipment and storage medium |
CN115120346A (en) * | 2022-08-30 | 2022-09-30 | 中国科学院自动化研究所 | Target point positioning method and device, electronic equipment and bronchoscope system |
CN115120346B (en) * | 2022-08-30 | 2023-02-17 | 中国科学院自动化研究所 | Target point positioning device, electronic equipment and bronchoscope system |
CN116473673B (en) * | 2023-06-20 | 2024-02-27 | 浙江华诺康科技有限公司 | Path planning method, device, system and storage medium for endoscope |
CN116473673A (en) * | 2023-06-20 | 2023-07-25 | 浙江华诺康科技有限公司 | Path planning method, device, system and storage medium for endoscope |
CN116935009A (en) * | 2023-09-19 | 2023-10-24 | 中南大学 | Operation navigation system for prediction based on historical data analysis |
CN116935009B (en) * | 2023-09-19 | 2023-12-22 | 中南大学 | Operation navigation system for prediction based on historical data analysis |
CN117649408A (en) * | 2024-01-29 | 2024-03-05 | 天津博思特医疗科技有限责任公司 | Lung nodule recognition processing method based on lung CT image |
CN117808975A (en) * | 2024-02-27 | 2024-04-02 | 天津市肿瘤医院(天津医科大学肿瘤医院) | Deep learning-based three-dimensional reconstruction method for lung image surgery planning |
CN117808975B (en) * | 2024-02-27 | 2024-05-03 | 天津市肿瘤医院(天津医科大学肿瘤医院) | Deep learning-based three-dimensional reconstruction method for lung image surgery planning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113112609A (en) | Navigation method and system for lung biopsy bronchoscope | |
CN110310281B (en) | Mask-RCNN deep learning-based pulmonary nodule detection and segmentation method in virtual medical treatment | |
CN108765363B (en) | Coronary artery CTA automatic post-processing system based on artificial intelligence | |
Schlathoelter et al. | Simultaneous segmentation and tree reconstruction of the airways for virtual bronchoscopy | |
CN101393644B (en) | Hepatic portal vein tree modeling method and system thereof | |
CN109215033A (en) | The method and system of image segmentation | |
CN106127849B (en) | Three-dimensional fine vascular method for reconstructing and its system | |
CN112862833B (en) | Blood vessel segmentation method, electronic device and storage medium | |
US20080161687A1 (en) | Repeat biopsy system | |
CN112258514B (en) | Segmentation method of pulmonary blood vessels of CT (computed tomography) image | |
EP2244633A2 (en) | Medical image reporting system and method | |
JP2012223315A (en) | Medical image processing apparatus, method, and program | |
CN104318554B (en) | Medical image Rigid Registration method based on triangulation Optimized Matching | |
CN103942772A (en) | Multimodal multi-dimensional blood vessel fusion method and system | |
CN113538471B (en) | Plaque segmentation method, plaque segmentation device, computer equipment and storage medium | |
Haq et al. | BTS-GAN: computer-aided segmentation system for breast tumor using MRI and conditional adversarial networks | |
Guo et al. | Coarse-to-fine airway segmentation using multi information fusion network and CNN-based region growing | |
CN113706514B (en) | Focus positioning method, device, equipment and storage medium based on template image | |
CN116051553B (en) | Method and device for marking inside three-dimensional medical model | |
Alirr et al. | Automatic liver segmentation from ct scans using intensity analysis and level-set active contours | |
Perchet et al. | Advanced navigation tools for virtual bronchoscopy | |
Shao et al. | A segmentation method of airway from chest CT image based on VGG-Unet neural network | |
Wang et al. | A monocular SLAM system based on SIFT features for gastroscope tracking | |
Tong et al. | Computer-aided lung nodule detection based on CT images | |
CN114419032B (en) | Method and device for segmenting the endocardium and/or the epicardium of the left ventricle of the heart |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210713 |