CN112489047A - Deep learning-based pelvic bone and arterial vessel multi-level segmentation method thereof - Google Patents
- Publication number: CN112489047A
- Application number: CN202110159220.9A
- Authority: CN (China)
- Prior art keywords: stage, segmentation, segmentation model, pelvis, data
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012 — Biomedical image inspection
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural network architectures; Combinations of networks
- G06V10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G16H50/20 — ICT specially adapted for medical diagnosis, for computer-aided diagnosis, e.g. based on medical expert systems
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/30101 — Blood vessel; Artery; Vein; Vascular
Abstract
The invention relates to the technical field of segmentation of the pelvic bone and the pelvic arterial vessel tree, and in particular to a deep learning-based multi-level segmentation method for the pelvic bone and its arterial vessels. It addresses the problem that the prior art cannot automatically, efficiently, and accurately segment the abdominal pelvic bone and the pelvic arterial vessel tree in multi-resolution CT images. The invention comprises the following steps. Step 1: data preparation and labeling. Step 2: data preprocessing. Step 3: constructing a first-stage segmentation model of a 3D convolutional neural network based on multi-stage segmentation. Step 4: constructing a second-stage segmentation model. Step 5: training the first-stage and second-stage segmentation models using the calibrated data and a composite loss function. Step 6: segmenting abdominal information in an input three-dimensional CT image using the first-stage and second-stage segmentation models trained in step 5. The invention can automatically, efficiently, and accurately segment the abdominal pelvic bone and the pelvic vessel tree in multi-resolution CT images.
Description
Technical Field
The invention relates to the technical field of segmentation of a pelvic bone and a pelvic artery vessel tree, in particular to a deep learning-based multi-level segmentation method for the pelvic bone and the artery vessels thereof.
Background
Lateral lymph node metastasis is an important route of metastasis for low rectal cancer; radiotherapy and chemotherapy have poor curative effect on it, which affects the prognosis of rectal cancer patients. Lateral lymph node dissection (LLND), an effective treatment, is being applied ever more widely in the clinic as laparoscopic surgery spreads and surgeons' skills improve. Growing evidence-based medical evidence indicates that LLND can reduce the local pelvic recurrence rate of rectal cancer, and that accurate surgical indication for lateral lymph node dissection can bring a survival benefit. Pelvic lymph node identification is very important for the diagnosis and treatment of pelvic-region cancers, including cervical, prostate, colon, and rectal cancer. Pelvic lymph nodes lie near the pelvic arteries and their branches and cannot be seen in ordinary CT or MRI, but the iliac arteries are visible; segmenting the iliac arteries and their branches can therefore assist in locating the lymph nodes for diagnosis.
The abdominal and pelvic organs and vessels are numerous and need to be evaluated thoroughly and carefully before surgery to determine the location and extent of the lesion and its anatomical relationship to the surrounding area, so that a preoperative plan can be better formulated and the best procedure and extent of surgical resection determined. Three-dimensional reconstruction of the main pelvic arteries can provide a computer model for preoperative surgical planning on a PC and for laparoscopic virtual surgical training. The small vessels in the pelvic cavity are also of interest to clinicians, because these small branch vessels may lie near the lymph nodes, helping the physician locate the lymph nodes and make a correct diagnosis.
However, vessel segmentation is very challenging. Besides the poor contrast, high noise, and complex background of CT images, the vessel structure itself is complex: vessels wrap around other organs and tissues and are difficult to distinguish, and stents, calcifications, aneurysms, and stenoses disturb the appearance and geometry of the vessels.
In the prior art, doctors generally perform segmentation and reconstruction in hospitals by manual labeling with professional software such as 3D Slicer and ITK-SNAP; methods for automatically segmenting the pelvic bone and the pelvic arterial vessel tree have received little research attention. Existing pelvic bone segmentation methods and pelvic artery vessel segmentation methods can generally be divided into traditional methods and deep learning methods, and traditional vessel segmentation comprises three classes of methods: region growing methods, active contour methods, and centerline-based methods.
Traditional blood vessel segmentation methods generally use the geometric features of blood vessels to construct a deformable shape model that is fitted to the vessel structure. However, lacking an effective learning algorithm, these methods cannot cope well with the poor contrast, high noise, and complex background encountered when segmenting tubular structures.
In recent years, methods for extracting deep image features based on neural networks have developed rapidly, and deep neural networks have been applied successfully in the field of medical image segmentation. A 3D convolutional neural network accepts input of arbitrary size and, by efficiently inferring and learning a feature hierarchy, produces output of corresponding size in an end-to-end manner, so it is used particularly widely in segmentation. However, there is little research on segmenting the pelvic bone and its arterial vessels; one important reason is that gold-standard labels for the pelvic arterial vessels are difficult to obtain, and no related public dataset currently exists.
Generally speaking, with the popularization of computed tomography (CT) in hospitals, CT has become one of the main technologies for the diagnosis and treatment of abdominal diseases, and segmentation of the pelvic bone and lower-limb arterial vessels is crucial for locating lateral lymph nodes. The pelvic vessels are unclear in ordinary CT, whereas the pelvic arteries are much more visible in enhanced CT angiography (CTA); even so, manual segmentation of the pelvic bone and its arterial vessels remains very laborious, and labeling a single CT case usually requires 2-4 hours. There is therefore an urgent need to construct a 3D convolutional neural network method that can quickly and accurately segment the pelvic bone and arterial vessels from CT automatically, which is of great significance for the computer-aided diagnosis and treatment of abdominal diseases.
Disclosure of Invention
Based on the above problems, the invention provides a deep learning-based multi-level segmentation method for the pelvis and its arterial vessels, which solves the problem that the prior art cannot automatically, efficiently, and accurately segment the abdominal pelvic bone and the pelvic arterial vessel tree in multi-resolution CT images. The invention can accept CT data at its original size and automatically and quickly generate accurate segmentation results for the abdominal pelvic bone and the pelvic arterial vessel tree. Placing the vessel information within the pelvic environment makes the displayed data more three-dimensional and more concrete and shows the relative position of the arterial vessels and the abdomen more clearly, facilitating the doctor's diagnosis and judgment, so that a computer can automatically, efficiently, and accurately segment the abdominal pelvic bone and the pelvic arterial tree in multi-resolution CT images.
The invention specifically adopts the following technical scheme for realizing the purpose:
A deep learning-based multi-level segmentation method for a pelvis and its arterial vessels comprises the following steps:
Step 1: data preparation and labeling, in which data import from a data system and calibration of the pelvic bone and pelvic arterial vessel tree data are completed;
Step 2: data preprocessing, in which the data are preprocessed to remove redundant background information;
Step 3: constructing a first-stage segmentation model of a 3D convolutional neural network based on multi-level segmentation, the first-stage segmentation model segmenting the pelvis and coarsely segmenting the pelvic arterial vessel tree;
Step 4: constructing a second-stage segmentation model of the 3D convolutional neural network based on multi-level segmentation, the second-stage segmentation model finely segmenting the vessels using the segmentation result of the first-stage model and a distance transform scale label derived from the gold-standard vessel label;
Step 5: training the first-stage and second-stage segmentation models using the calibrated data and a composite loss function;
Step 6: segmenting abdominal information in an input three-dimensional CT image using the first-stage and second-stage segmentation models trained in step 5, and outputting the segmentation result.
In the invention, nine labels are annotated in total: a pelvis label, a pelvic arterial vessel label, and seven small segmental vessel labels, namely the main artery, the left common iliac artery, the right common iliac artery, the left external iliac artery, the right external iliac artery, the left internal iliac artery, and the right internal iliac artery. The first-stage segmentation model uses the pelvis label and the pelvic arterial vessel label; the second-stage segmentation model uses the seven segmental vessel labels.
The preprocessing of the data in step 2 comprises cropping and normalization. In the preprocessing stage the data are cropped to within 20-100 pixels of the label edges, the CT values are clipped to the range 0 HU to 1600 HU, and the resulting data are finally normalized to [0, 1].
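The cropping, HU clipping, and normalization described above can be sketched as follows. This is a minimal illustration assuming the label volume is a NumPy array whose nonzero voxels mark the annotated structures; the function name, array layout, and margin value are illustrative, not from the patent:

```python
import numpy as np

def preprocess_ct(ct_hu: np.ndarray, labels: np.ndarray, margin: int = 20):
    """Crop to the label bounding box plus a margin, clip to 0-1600 HU,
    and normalize intensities to [0, 1]."""
    # Bounding box of the labeled region (pelvis + vessel tree).
    coords = np.argwhere(labels > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, ct_hu.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    cropped = ct_hu[sl]
    # Keep CT values between 0 HU and 1600 HU, then scale to [0, 1].
    clipped = np.clip(cropped, 0.0, 1600.0)
    return clipped / 1600.0, labels[sl]

# Example on a synthetic 3D volume.
ct = np.random.uniform(-1000, 3000, size=(64, 64, 64))
lab = np.zeros((64, 64, 64), dtype=np.uint8)
lab[20:40, 20:40, 20:40] = 1
vol, lab_c = preprocess_ct(ct, lab, margin=5)
print(vol.shape, float(vol.min()), float(vol.max()))
```

Clipping before normalizing keeps soft tissue and bone within a fixed intensity window, so the normalized values are comparable across scanners.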
The first-stage segmentation model obtains the pelvis and pelvic arterial vessel information; this information is then fused with the original CT information and passed through the second-stage segmentation model to obtain the vessel tree segmentation result. The original CT information refers to the CT data input to the first-stage segmentation model.
Both the first-stage and second-stage segmentation models adopt a 3D-UNet network as the backbone feature-extraction 3D convolutional neural network. Using the two models, multiple segmentation result sets at different scales and different levels of detail are generated for the same CT image, and these result sets form a multi-level, multi-scale 3D convolutional neural network structural representation of that image. The multi-level 3D convolutional neural network refers to the 3D convolutional neural networks in the first-stage and second-stage segmentation models.
The network further comprises a recombination-recalibration module and a spatially adaptive compression-activation module, which first acquire local information and then obtain a larger receptive field using dilated (atrous) convolution.
The second-stage segmentation model takes as input a scale label computed from the gold-standard vessel label by a distance transform algorithm. Defining the vessel-surface voxel set S = { s ∈ V | L(s) = 1 and ∃ u ∈ N6(s) with L(u) = 0 }, the distance transform is calculated as:

DT(v) = min_{s ∈ S} ||v − s||_2 if L(v) = 1, and DT(v) = 0 otherwise,

wherein, for a voxel labeled as a blood vessel, the distance transform value is the minimum Euclidean distance from that voxel to the voxels on the vessel surface; N6(v) denotes the 6 voxels neighboring a voxel v; S is the set of vessel-surface voxels; v denotes a voxel and s a voxel on the vessel surface; L(v) denotes the label of voxel v; and DT(v) denotes the distance transform value of voxel v.
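The distance transform just described can be illustrated with a brute-force NumPy sketch: a vessel voxel's value is its minimum Euclidean distance to a surface voxel, and non-vessel voxels get 0. A real pipeline would use an optimized Euclidean distance transform; the test cube below and the boundary handling are illustrative assumptions:

```python
import numpy as np

def vessel_distance_transform(label: np.ndarray) -> np.ndarray:
    """label: binary 3D array, 1 = vessel voxel. Returns DT(v): minimum
    Euclidean distance to a vessel-surface voxel for vessel voxels, 0
    elsewhere. Surface = vessel voxel with a non-vessel 6-neighbor
    (voxels on the volume boundary also count as surface here)."""
    padded = np.pad(label, 1, constant_values=0)
    # Mark voxels that have at least one background 6-neighbor.
    nb_bg = np.zeros(label.shape, dtype=bool)
    for axis in range(3):
        for shift in (-1, 1):
            nb = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
            nb_bg |= (nb == 0)
    surface = np.argwhere((label == 1) & nb_bg)
    dt = np.zeros(label.shape, dtype=float)
    for v in np.argwhere(label == 1):
        dt[tuple(v)] = np.sqrt(((surface - v) ** 2).sum(axis=1)).min()
    return dt

# Solid 5x5x5 "vessel" cube inside a 7x7x7 volume.
lab = np.zeros((7, 7, 7), dtype=int)
lab[1:6, 1:6, 1:6] = 1
dt = vessel_distance_transform(lab)
print(dt[3, 3, 3], dt[1, 1, 1])  # center: 2.0, corner (on surface): 0.0
```

The floored values of this map are what the second-stage model learns as per-voxel scale classes.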
Training the first-stage and second-stage segmentation models in step 5 comprises the following steps:
Step 5.1: combining a weighted cross-entropy classification learning error and a deep distance transform learning error. The weighted cross-entropy error balances the contributions of foreground and background, while the deep distance transform error reduces the difficulty of segmenting the vascular structure from its complex surroundings and ensures that the segmentation result has an appropriate shape prior. The first-stage segmentation model uses the weighted cross-entropy classification error; the second-stage segmentation model uses both the weighted cross-entropy classification error and the deep distance transform error;
Step 5.2: training the 3D convolutional neural network with mixed-precision training, checkpoint (breakpoint-resume) training, a training optimization algorithm, and BP backpropagation, where the training optimization algorithm is the Adam optimizer.
The initial learning rate of the Adam optimizer is set to 0.001 and the decay coefficient to 0.8; if the single-case error has not decreased after training on 20 cases of data, the learning rate is multiplied by the decay coefficient 0.8. The training batch size is set to 1 and the number of learning iterations to 100. Classification error learning uses BP backpropagation, with different error-learning segmentation tasks for the first-stage and second-stage models, and the 3D convolutional neural network updates its parameters once per batch. After each iteration of learning, the first-stage or second-stage model evaluates the total error of its stage; if the current error is smaller than that of the previous iteration, the current model of that stage is saved and training continues. Training stops when the maximum number of iterations is reached or when the total error has not decreased for ten consecutive iterations.
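The learning-rate decay, checkpointing, and stopping rules just described amount to simple bookkeeping around the training loop. A minimal sketch of that control logic, detached from any actual network (class and method names are illustrative, and "no improvement over 20 cases" is one plausible reading of the decay rule):

```python
class TrainingSchedule:
    """Adam LR decay and early stopping as described: start at 1e-3,
    multiply by 0.8 when the per-case error has not improved over a
    window of 20 cases, stop after 100 iterations or after 10
    iterations with no drop in total error."""
    def __init__(self, lr=1e-3, decay=0.8, window=20,
                 max_iters=100, patience=10):
        self.lr, self.decay, self.window = lr, decay, window
        self.max_iters, self.patience = max_iters, patience
        self.best_case_err = float("inf")
        self.cases_since_improve = 0
        self.best_total_err = float("inf")
        self.iters_since_improve = 0

    def after_case(self, case_err):
        # Decay the learning rate when a window of cases shows no gain.
        if case_err < self.best_case_err:
            self.best_case_err = case_err
            self.cases_since_improve = 0
        else:
            self.cases_since_improve += 1
            if self.cases_since_improve >= self.window:
                self.lr *= self.decay
                self.cases_since_improve = 0

    def after_iteration(self, iteration, total_err):
        """Return (save_checkpoint, stop_training) for this iteration."""
        save = total_err < self.best_total_err
        if save:
            self.best_total_err = total_err
            self.iters_since_improve = 0
        else:
            self.iters_since_improve += 1
        stop = (iteration + 1 >= self.max_iters
                or self.iters_since_improve >= self.patience)
        return save, stop

sched = TrainingSchedule()
for _ in range(25):            # 25 cases with no improvement
    sched.after_case(1.0)
print(round(sched.lr, 6))      # decayed once: 0.0008
```

In a real loop, `after_case` would run per training case and `after_iteration` per epoch, with the checkpoint actually written whenever `save` is true.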
The weighted cross-entropy classification learning error is calculated as:

L_cls(W) = − (1/|V|) Σ_{v ∈ V} Σ_{i=0}^{2} ω_i · g_v^i · log p_v^i

wherein L_cls is the error function; P is the vessel and pelvis prediction generated by the first segmentation stage; G is the ground-truth vessel and pelvis label for the segmentation task; v refers to a voxel in the CT data; V is the set of all voxels in the CT image data; ω_i is the control weight for label i in the first-stage classification; g_v^i is the true probability that the label of voxel v is i; p_v^i is the predicted probability that voxel v is i, where i = 0 is non-target information, i = 1 artery, and i = 2 pelvis; and W denotes all weights of the 3D convolutional neural network. Non-target information refers to information that is not of interest.
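As an illustration, a weighted three-class cross entropy of this form can be computed per voxel as below. This is a hedged NumPy sketch, not the patent's implementation; the weight values are placeholders:

```python
import numpy as np

def weighted_ce(pred: np.ndarray, target: np.ndarray, weights) -> float:
    """pred: (C, N) predicted class probabilities per voxel.
    target: (N,) integer labels (0 background, 1 artery, 2 pelvis).
    weights: per-class control weights balancing foreground/background."""
    n = target.shape[0]
    eps = 1e-12                                   # avoid log(0)
    p_true = pred[target, np.arange(n)]           # probability of true class
    w = np.asarray(weights, dtype=float)[target]  # per-voxel class weight
    return float(-(w * np.log(p_true + eps)).mean())

# Four voxels, three classes.
pred = np.array([[0.8, 0.1, 0.1, 0.1],
                 [0.1, 0.8, 0.1, 0.1],
                 [0.1, 0.1, 0.8, 0.8]])
target = np.array([0, 1, 2, 2])
print(weighted_ce(pred, target, weights=[0.2, 1.0, 0.5]))
```

Down-weighting the background class (index 0) is one common way to keep the abundant non-target voxels from dominating the gradient.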
Let Z be the prediction scale output by the second-stage segmentation model of the 3D convolutional neural network, with z_v ∈ Z for any v ∈ V. During training of the second-stage model, K scales for the voxels containing the vascular structure are obtained via the distance transform algorithm, and the deep distance transform learning error is calculated as:

L_dt(W) = − (λ/Z_N) Σ_{v ∈ V} Σ_{k=1}^{K} ω_1 · 1(k) · log z_v^k

wherein L_dt is the deep distance transform learning error of the second-stage segmentation model; ω_1 is the weight coefficient of the vessel label in the weighted cross-entropy loss function; v refers to a voxel in the CT data; V is the set of all voxels in the CT image data; K is the number of scales obtained by rounding down the distance transform values computed by the distance transform algorithm; 1(k) is an indicator function whose value is 1 when ⌊DT(v)⌋ = k and 0 otherwise; DT(v) is the distance scale of voxel v from the vessel surface, with DT(v) > 0 inside the vessel; W denotes all weights of the 3D convolutional neural network, here the second-stage classification weights; z_v^k is the probability that voxel v belongs to the k-th scale; λ is a balance factor balancing the two loss terms; and Z_N is a normalization factor.
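Under this notation, the scale term reduces to a cross entropy over K distance bins, where each vessel voxel's target bin is its floored distance transform value. A hedged NumPy sketch, with the bin count, weights, and balance factor as illustrative placeholders:

```python
import numpy as np

def distance_scale_loss(scale_pred, dt_values, vessel_weight=1.0, lam=1.0):
    """scale_pred: (K, N) predicted probability that each vessel voxel
    belongs to each of K distance scales. dt_values: (N,) distance
    transform values (> 0 inside the vessel). The target scale is the
    floored distance value; the loss is the weighted negative
    log-likelihood of that scale, scaled by the balance factor lam."""
    K, n = scale_pred.shape
    k_true = np.clip(np.floor(dt_values).astype(int), 0, K - 1)
    eps = 1e-12                                  # avoid log(0)
    z_true = scale_pred[k_true, np.arange(n)]    # z_v^k for the true bin
    return float(-lam * vessel_weight * np.log(z_true + eps).mean())

# Three vessel voxels, K = 4 scales.
pred = np.array([[0.7, 0.1, 0.1],
                 [0.1, 0.7, 0.1],
                 [0.1, 0.1, 0.7],
                 [0.1, 0.1, 0.1]])
dt = np.array([0.4, 1.8, 2.2])   # floors to bins 0, 1, 2
print(distance_scale_loss(pred, dt))
```

The composite training loss would then combine this term with the weighted cross-entropy term via the balance factor.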
The invention has the following beneficial effects:
(1) The invention can accept CT data at its original size and automatically and quickly generate accurate segmentation results for the abdominal pelvic bone and the pelvic arterial vessel tree. Placing the vessel information within the pelvic environment makes the displayed data more three-dimensional and more concrete and shows the relative position of the arterial vessels and the abdomen more clearly, facilitating the doctor's diagnosis and judgment, so that a computer can automatically, efficiently, and accurately segment the abdominal pelvic bone and the pelvic arterial tree in multi-resolution CT images.
(2) The invention adopts a 3D convolutional neural network that can quickly and efficiently extract multi-scale context information from CT data. The hierarchical segmentation method exploits the natural structural hierarchy: the pelvic bone and pelvic arterial vessel tree are obtained first, then the arterial vessel tree is learned and segmented again to obtain each vessel segment within the tree, finally yielding a pixel-level segmentation result.
(3) The method extracts features with a 3D convolutional neural network and combines hierarchical segmentation to perform coarse segmentation followed by fine segmentation. The invention performs batch-wise update learning at each level and integrates an error function combining the weighted cross-entropy classification error with the deep distance transform error, so that the 3D convolutional neural network can exploit the object's own structural hierarchy and the vessels' own geometric characteristics to produce accurate and reliable segmentation results.
(4) The trained model can perform detection quickly and accurately, enabling batch CT processing and unattended batch operation. The segmentation is fast and can be further accelerated as equipment is upgraded and expanded. After simple processing, the obtained pixel-level label data can be directly reconstructed and rendered with various 3D techniques, giving doctors preoperative guidance in a more concrete and detailed form and providing richer, more three-dimensional abdominal information for better diagnosis. The pelvic bone and pelvic arterial vessel tree are also meaningful reference objects for problems such as abdominal nodule detection research and nodule time-series registration, helping to eliminate some irrelevant interference and providing related structural data support, thereby promoting the development of intelligent gastrointestinal medicine.
(5) Popularized to primary care, the invention can alleviate the shortage of specialized medical resources in primary hospitals, improve their diagnostic level, and reduce the probability of misdiagnosis and missed diagnosis in remote areas.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic representation of the overall multi-level segmentation process of the present invention;
FIG. 3 is a schematic diagram of the backbone feature-extraction 3D convolutional neural network of the present invention, based on 3D-UNet;
FIG. 4 is a schematic diagram of the operation of the recombination-recalibration module and the spatially adaptive compression-activation module of the present invention.
Detailed Description
For a better understanding of the present invention by those skilled in the art, the present invention will be described in further detail below with reference to the accompanying drawings and the following examples.
Example 1:
As shown in fig. 1 and 2, a deep learning-based multi-level segmentation method for the pelvic bone and its arterial vessels comprises the following steps:
Step 1: data preparation and labeling, in which data import from a data system and calibration of the pelvic bone and pelvic arterial vessel tree data are completed;
Step 2: data preprocessing, in which the data are preprocessed to remove redundant background information;
Step 3: constructing a first-stage segmentation model of a 3D convolutional neural network based on multi-level segmentation, the first-stage segmentation model segmenting the pelvis and coarsely segmenting the pelvic arterial vessel tree;
Step 4: constructing a second-stage segmentation model of the 3D convolutional neural network based on multi-level segmentation, the second-stage segmentation model finely segmenting the vessels using the segmentation result of the first-stage model and a distance transform scale label derived from the gold-standard vessel label. The gold-standard vessel label refers to a vessel annotation label reviewed by medical experts, and the distance transform scale label is obtained by applying the distance transform algorithm to the gold-standard vessel label;
Step 5: training the first-stage and second-stage segmentation models using the calibrated data and a composite loss function;
Step 6: segmenting abdominal information in an input three-dimensional CT image using the first-stage and second-stage segmentation models trained in step 5, and outputting the segmentation result.
In the invention, data preparation is performed in step 1. The model is trained by supervised learning: iterative model updates require accurately labeled data, and deep neural network methods require high-quality training data, so data must first be prepared for model training. The 3D fully convolutional neural network adopted by the invention can learn the characteristics of the data from a small amount of data; the data used are abdominal CT images from a hospital imaging department, and data from at least sixty patients are collected;
The data are labeled in step 1 with nine labels in total: a pelvis label, a pelvic arterial vessel label, and seven small segmental vessel labels, namely the main artery, the left common iliac artery, the right common iliac artery, the left external iliac artery, the right external iliac artery, the left internal iliac artery, and the right internal iliac artery. The first-stage segmentation model uses the pelvis label and the pelvic arterial vessel label; the second-stage segmentation model uses the seven segmental vessel labels. All are pixel-level labels, and after some processing the nine labels can be used directly for 3D reconstruction. For each CT examination image, the labels annotated by researchers are sent to experts in the hospital imaging department for review and confirmation, ensuring the accuracy and objectivity of every label.
In the invention, the preprocessing of the data in step 2 comprises cropping and normalization. In the preprocessing stage the data are cropped to within 20-100 pixels of the label edges, the CT values are clipped to the range 0 HU to 1600 HU, and the resulting data are finally normalized to [0, 1], where HU (Hounsfield unit) is the dimensionless unit commonly used in computed tomography (CT).
In steps 3 and 4 of the invention, the first-stage segmentation model obtains the pelvis and pelvic arterial vessel information, which is then fused with the original CT information and passed through the second-stage segmentation model to obtain the vessel tree segmentation result. The original CT information refers to the CT data input to the first-stage segmentation model. The segmentation model of the method adopts a multi-stage segmentation approach because the pelvis and the vessel tree have a natural hierarchical structure; splitting the task into pelvis, vessels, and small vessel segments is more conducive to the learning of the 3D convolutional neural network and improves accuracy.
As shown in fig. 3, the numbers in brackets in the figure indicate the number of channels, such as 16, 32, 48, and 64; "channel" is a standard term in convolutional neural networks, and at the output the number of channels equals the number of classes, i.e. the total number of voxel label types. Both the first-stage and second-stage segmentation models adopt a 3D-UNet network as the backbone feature-extraction 3D convolutional neural network, and using the two models, multiple segmentation result sets at different scales and levels of detail are generated for the same CT image, forming a multi-level, multi-scale 3D convolutional neural network structural representation of that image. The invention considers only three-dimensional CT images; therefore the fully convolutional network structure used by the segmentation models comprises various types of 3D network layers, and because the input data are three-dimensional, all the modules used by the model perform three-dimensional feature-extraction operations.
As shown in fig. 4, the numbers inside brackets in the drawing indicate the number of channels, such as n, 2n, m; as above, the number of channels at the output equals the number of classes, i.e. the total number of voxel labels. The convolutional neural network further comprises a re-calibration module and a spatially adaptive compression-activation (squeeze-and-excitation) module. These modules first acquire local information and then use dilated (hole) convolution to obtain a larger receptive field, so that wider spatial information is captured and more complex features can be synthesized; at the same time, weights are assigned to different information positions to facilitate dynamic adjustment during back propagation and in the loss function. Input enhancement supplements the information lost during down-sampling, so the 3D convolutional neural network can quickly extract multi-scale, multi-type features from the data with strong learning ability. "Local information" refers to information perceived by the 3D convolutional neural network within a local region of the CT image; the corresponding concept is global information, i.e. information perceived from the whole CT image. The re-calibration module and the spatially adaptive compression-activation module are both prior art; their specific implementation principle can be seen in fig. 4, where a 1 × 1 convolution is performed first, followed by a 3 × 3 convolution.
The second-stage segmentation model takes as input a scale label computed by a distance transform algorithm from the gold-standard vessel label. Define the set of vessel-surface voxels S = {x ∈ V | y_x = 1 and ∃ z ∈ N(x) with y_z = 0}; the distance transform is then calculated as:

D(x) = min_{z ∈ S} ‖x − z‖₂ if y_x = 1, and D(x) = 0 otherwise,

wherein, for a voxel labeled as a blood vessel, the distance transform value is the minimum Euclidean distance from that voxel to a voxel on the vessel surface; N(x) represents the 6 voxels neighboring a certain voxel x; the set S is the set of voxels on the vessel surface; y_x represents the label of voxel x; x represents a certain voxel and z a certain voxel on the vessel surface; D(x) represents the distance transform value of voxel x.
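The distance transform above can be sketched directly from its definition. This brute-force version is illustrative only; a real pipeline would use an optimized Euclidean distance transform (e.g. scipy's `distance_transform_edt`):

```python
import numpy as np

def distance_transform(mask):
    """D(x) = min over surface voxels z of ||x - z||_2 for vessel
    voxels (label 1), and 0 elsewhere -- a direct, unoptimized
    reading of the formula in the text."""
    # Surface set S: vessel voxels with at least one of the
    # 6 axis-neighbors labeled background (0).
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
            (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    padded = np.pad(mask, 1, constant_values=0)
    surface = []
    for idx in np.argwhere(mask == 1):
        i, j, k = idx + 1
        if any(padded[i + di, j + dj, k + dk] == 0 for di, dj, dk in nbrs):
            surface.append(idx)
    surface = np.array(surface)
    dist = np.zeros(mask.shape)
    for idx in np.argwhere(mask == 1):
        dist[tuple(idx)] = np.sqrt(((surface - idx) ** 2).sum(axis=1)).min()
    return dist

# A 3x3x3 vessel block inside a 5x5x5 volume: only the central voxel
# is interior, everything else in the block lies on the surface.
mask = np.zeros((5, 5, 5))
mask[1:4, 1:4, 1:4] = 1
dt = distance_transform(mask)
```

The resulting distance map is what gets floored to integers to obtain the K quantized scale labels used in the second stage.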
In step 5 of the present invention, as shown in fig. 1-4, training the first-stage segmentation model and the second-stage segmentation model comprises the following steps:
step 5.1: synthesize the loss from a weighted cross-entropy classification learning error and a deep distance transform learning error. The weighted cross-entropy error balances the contribution ratio of foreground and background; the deep distance transform error reduces the difficulty of segmenting the vascular structure out of complex surrounding structures and ensures that the segmentation result has an appropriate shape prior. The first-stage segmentation model uses the weighted cross-entropy classification error alone, while the second-stage segmentation model uses both the weighted cross-entropy classification error and the deep distance transform error. The learning error directly affects the training performance of the model; considering both the numerical classification quality and the geometric characteristics of vessels, the 3D convolutional neural network is trained with a Weighted Cross Entropy classification error and a Deep Distance Transform (DDT) error for each hierarchical segmentation task. Whole-abdomen CT data is extremely unbalanced: the target region (foreground) occupies only a small part, while other irrelevant information (background) occupies a large proportion, so the weighted cross-entropy error is used in the training stage to balance the foreground and background contributions. Blood vessels have geometric characteristics and can be regarded as a series of tubular structures swept by spheres of varying center and radius, so position information from each point to the vessel wall can be exploited; the deep distance transform error uses this to reduce the difficulty of segmenting the vascular structure from complex surroundings and to ensure the segmentation result has an appropriate shape prior.
For the segmentation training of the first-stage classification of pelvis and vessels, i.e. in the first-stage segmentation model, the pelvis has no vessel-specific features and the output serves as a coarse classification result, so the error at this stage uses only the weighted cross entropy. The weighted cross-entropy classification learning error is calculated as:

L_cls(P, G; W) = −λ₁ Σ_{v∈V} Σ_{i} w_i · g_v^i · log(p_v^i)

wherein L_cls is the error function; P is the vessel and pelvis prediction generated by the first-stage segmentation; G is the true vessel and pelvis label in the segmentation task; v refers to a certain voxel in the CT data; V represents the set of all voxels in the CT image data; w_i is the control weight for label i; g_v^i is the true probability that the label of voxel v is i; p_v^i is the predicted probability that voxel v is i, where i = 0 is non-target information, i = 1 is artery, and i = 2 is pelvis; W is the set of all weights of the 3D convolutional neural network; λ₁ is the weight of the first-level classification. Non-target information refers to information that is not of interest.
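A minimal sketch of this weighted cross-entropy error; the tensor shapes, class weights, and λ₁ below are illustrative training choices, not values fixed by the text:

```python
import numpy as np

def weighted_cross_entropy(pred, truth, class_weights, lam=1.0):
    """Weighted cross entropy over voxels, as in the text:
    -lambda_1 * sum over voxels v and labels i of
    w_i * g_v^i * log(p_v^i).  `pred` and `truth` have shape
    (num_classes, ...); `truth` is one-hot."""
    eps = 1e-12  # numerical guard against log(0)
    w = np.asarray(class_weights).reshape((-1,) + (1,) * (pred.ndim - 1))
    return -lam * np.sum(w * truth * np.log(pred + eps))

# 3 classes (non-target / artery / pelvis) over 4 voxels; the large
# background class is down-weighted to balance the contributions.
truth = np.eye(3)[[0, 1, 2, 0]].T          # one-hot, shape (3, 4)
perfect = weighted_cross_entropy(truth, truth, [0.2, 1.0, 1.0])
uniform = weighted_cross_entropy(np.full((3, 4), 1 / 3), truth, [0.2, 1.0, 1.0])
```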
Let Z be the prediction scale output by the second-stage segmentation model of the 3D convolutional neural network, with z_v ∈ Z for any v ∈ V. During training of the second-stage segmentation model, the K scales of the voxels containing the vascular structure are obtained by the distance transform algorithm, and the deep distance transform learning error is calculated as:

L_dist(Z; W) = −(λ₂/N) Σ_{v∈V} Σ_{k=1}^{K} ξ · 1(z_v = k) · log p_v^k + (α·λ₂/N) Σ_{v∈V} Σ_{k=1}^{K} |k − z_v| · p_v^k

wherein L_dist is the deep distance transform learning error of the second-stage segmentation model; ξ is the weight coefficient of the vessel label in the weighted cross-entropy loss function; v refers to a certain voxel in the CT data; V represents the set of all voxels in the CT image data; K is the number of scales obtained by rounding the distance transform values, computed by the distance transform algorithm, down to integers; 1(k) is an indicator function whose value is 1 when z_v = k and 0 otherwise; z_v is the distance transform scale of voxel v from the vessel surface, z_v > 0; W is the set of all weights of the 3D convolutional neural network; λ₂ is the weight of the second-level classification; p_v^k is the probability that voxel v belongs to the k-th scale; α is a balance factor that balances the two loss terms; N is a normalization factor; the softmax and indicator functions involved are conventional functions. The first term of the distance loss function, −1(z_v = k) log p_v^k, is a standard softmax loss that penalizes misclassification at each scale; the second term increases the distance-error penalty when the predicted scale classification differs from the actual scale label. In the second-stage segmentation training, the error is the sum of the weighted cross-entropy error and the distance transform error.
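Because the exact weighting factors of the patent's formula are only partially recoverable from the text, the following is an illustrative two-term sketch only: a softmax cross-entropy over the K quantized distance scales, plus a penalty on probability mass placed on scales far from the true one:

```python
import numpy as np

def deep_distance_loss(scale_probs, scale_labels, lam2=1.0, alpha=0.1):
    """Illustrative two-term deep-distance-transform error:
    (1) cross-entropy of the predicted scale distribution against the
        true quantized distance scale of each voxel, and
    (2) an expected |k - z_v| penalty that grows when probability is
        placed on scales far from the true one.
    scale_probs: (K, N) per-voxel scale probabilities (columns sum
    to 1); scale_labels: (N,) integer scale per voxel."""
    eps = 1e-12
    K, N = scale_probs.shape
    ce = -np.log(scale_probs[scale_labels, np.arange(N)] + eps).sum()
    ks = np.arange(K)[:, None]                       # (K, 1)
    dist_pen = (np.abs(ks - scale_labels[None, :]) * scale_probs).sum()
    return lam2 * (ce + alpha * dist_pen) / N

# Two voxels with true scales 0 and 2 among K = 3 scales.
labels = np.array([0, 2])
perfect = deep_distance_loss(
    np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]), labels)
uniform = deep_distance_loss(np.full((3, 2), 1 / 3), labels)
```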
Step 5.2: train the 3D convolutional neural network using a mixed-precision training method, a breakpoint (resume) training method, a training optimization algorithm, and the BP back-propagation algorithm, wherein the training optimization algorithm adopts the Adam optimization algorithm.
The initial learning rate of the Adam optimization algorithm is set to 0.001 and the attenuation coefficient to 0.8; if the error of a single case does not decrease after 20 consecutive cases of training data, the learning rate is multiplied by the attenuation coefficient 0.8. The training batch size is set to 1 and the number of learning iterations to 100. The BP back-propagation algorithm applies the classification-error learning simultaneously, with different error-learning segmentation tasks for the first-stage and the second-stage segmentation models; the 3D convolutional neural network updates its parameters once per batch. After each iteration of learning, the first-stage or second-stage segmentation model evaluates the total error of its stage; if the current error is smaller than the error of the previous iteration, the current model of that stage is saved and training continues. Training stops when the maximum number of iterations is reached or when the total error has not decreased for ten consecutive iterations.
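The learning-rate schedule described above (multiply by 0.8 whenever the per-case error fails to decrease over 20 consecutive cases) can be sketched as:

```python
def adjust_learning_rate(case_errors, lr=0.001, decay=0.8, patience=20):
    """Sketch of the schedule in the text: starting from lr = 0.001,
    multiply the learning rate by the decay coefficient 0.8 whenever
    the per-case error fails to improve over `patience` consecutive
    cases.  Illustrative; the model save/stop logic is omitted."""
    best = float("inf")
    stale = 0
    for err in case_errors:
        if err < best:
            best, stale = err, 0      # improvement resets the counter
        else:
            stale += 1
            if stale >= patience:
                lr *= decay           # decay and start counting again
                stale = 0
    return lr

lr_improving = adjust_learning_rate([3.0, 2.0, 1.0])   # steadily improving
lr_stalled = adjust_learning_rate([1.0] * 21)          # 20 stale cases
```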
By this method, CT data of the original size can be received and accurate segmentation results of the abdominal pelvis and the pelvic arterial vessel tree generated automatically and quickly. Generating the pelvic bone and pelvic arterial vessel-tree information together, and placing the vessel information in the pelvic environment, makes the data display more three-dimensional and more concrete, and shows the relative position of the arterial vessels and the abdomen more clearly, which facilitates diagnosis and judgment by a doctor. The computer can thus segment the abdominal pelvis and the pelvic arterial tree automatically, efficiently, and accurately in multi-resolution CT images.
The above is an embodiment of the present invention. The embodiments and specific parameters in the embodiments are only for the purpose of clearly illustrating the process of verifying the invention, and are not intended to limit the scope of the invention, which is defined by the claims.
Claims (10)
1. A deep learning-based multi-level segmentation method for a pelvis and arterial vessels thereof, characterized by comprising the following steps:
step 1: data preparation and marking are carried out, and data import from a data system and calibration of the pelvic bone and the pelvic artery vascular tree data are completed;
step 2: data preprocessing, namely preprocessing the data and removing redundant background information;
step 3: constructing a first-stage segmentation model of a 3D convolutional neural network based on multi-level segmentation, wherein the first-stage segmentation model segments the pelvis and coarsely segments the pelvic arterial vessel tree;
step 4: constructing a second-stage segmentation model of the 3D convolutional neural network based on multi-level segmentation, wherein the second-stage segmentation model finely segments the blood vessels using the segmentation result of the first-stage segmentation model and a distance-transform scale label based on the gold-standard vessel label;
step 5: training the first-stage segmentation model and the second-stage segmentation model by using the calibrated data and the synthesized loss function;
step 6: performing abdominal information segmentation on the input three-dimensional CT image by using the first-stage and second-stage segmentation models trained in step 5, and outputting the segmentation result.
2. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof according to claim 1, characterized in that: the preprocessing of the data in step 2 comprises cropping and normalization; in the preprocessing stage the data are cropped by 20-100 pixels from the label edges, the CT values are meanwhile clipped to the range 0HU to 1600HU, and the resulting data are finally normalized to [0, 1].
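A minimal sketch of the claim-2 preprocessing, under stated assumptions: the crop margin (here 20 voxels, anywhere in the claimed 20-100 pixel range) and the label-driven bounding-box crop are illustrative interpretations of "cut off 20-100 pixels according to the edges of the labels":

```python
import numpy as np

def preprocess(ct, label, margin=20):
    """Crop the CT volume to the label's bounding box expanded by
    `margin` voxels, clip CT values to the 0-1600 HU window, and
    normalize the result to [0, 1]."""
    nz = np.argwhere(label > 0)
    lo = np.maximum(nz.min(axis=0) - margin, 0)
    hi = np.minimum(nz.max(axis=0) + margin + 1, ct.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    cropped = np.clip(ct[sl], 0.0, 1600.0)   # keep the 0-1600 HU window
    return cropped / 1600.0                  # normalize to [0, 1]

# Toy volume: uniform 800 HU, with a labeled 10^3 block.
ct = np.full((50, 50, 50), 800.0)
label = np.zeros((50, 50, 50))
label[20:30, 20:30, 20:30] = 1
out = preprocess(ct, label, margin=5)
```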
3. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof according to claim 1, characterized in that: the first-stage segmentation model obtains the pelvis and pelvic arterial vessel information, and the obtained pelvis and vessel information is then fused with the original CT information and passed through the second-stage segmentation model to obtain the vessel-tree segmentation result.
4. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof as claimed in claim 3, characterized in that: the first-stage segmentation model and the second-stage segmentation model both adopt a 3D-Unet network as the backbone of the feature-extracting 3D convolutional neural network; applying the first-stage and second-stage segmentation models to the same CT image generates multiple segmentation result sets at different scales, levels, and details, and these segmentation result sets form multi-level 3D convolutional neural network structural representations of the same CT image at different scales.
5. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof as claimed in claim 4, characterized in that: the network further comprises a re-calibration module and a spatially adaptive compression-activation module, wherein the re-calibration module and the spatially adaptive compression-activation module first acquire local information and then use dilated (hole) convolution to obtain a larger receptive field.
6. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof as claimed in claim 5, characterized in that: the second-stage segmentation model takes as input a scale label computed by a distance transform algorithm from the gold-standard vessel label, defining the set of vessel-surface voxels S = {x ∈ V | y_x = 1 and ∃ z ∈ N(x) with y_z = 0}; the distance transform is then calculated as:

D(x) = min_{z ∈ S} ‖x − z‖₂ if y_x = 1, and D(x) = 0 otherwise,

wherein, for a voxel labeled as a blood vessel, the distance transform value is the minimum Euclidean distance from that voxel to a voxel on the vessel surface; N(x) represents the 6 voxels neighboring a certain voxel x; the set S is the set of voxels on the vessel surface; y_x represents the label of voxel x; x represents a certain voxel and z a certain voxel on the vessel surface; V denotes the set of all voxels in the CT image data; D(x) represents the distance transform value of voxel x.
7. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof according to claim 1, characterized in that: the step 5 of training the first-stage segmentation model and the second-stage segmentation model comprises the following steps:
step 5.1: synthesizing the loss from a weighted cross-entropy classification learning error and a deep distance transform learning error, wherein the weighted cross-entropy error balances the contribution ratio of foreground and background, and the deep distance transform error reduces the difficulty of segmenting the vascular structure from complex surrounding structures and ensures that the segmentation result has an appropriate shape prior; the first-stage segmentation model uses the weighted cross-entropy classification learning error, and the second-stage segmentation model uses both the weighted cross-entropy classification learning error and the deep distance transform learning error;
step 5.2: training the 3D convolutional neural network using a mixed-precision training method, a breakpoint (resume) training method, a training optimization algorithm, and the BP back-propagation algorithm, wherein the training optimization algorithm adopts the Adam optimization algorithm.
8. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof according to claim 7, characterized in that: the initial learning rate of the Adam optimization algorithm is set to 0.001 and the attenuation coefficient to 0.8; if the error of a single case does not decrease after 20 consecutive cases of training data, the learning rate is multiplied by the attenuation coefficient 0.8; the training batch is set to 1 and the number of learning iterations to 100.
9. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof according to claim 7, characterized in that: the weighted cross-entropy classification learning error is calculated as:

L_cls(P, G; W) = −λ₁ Σ_{v∈V} Σ_{i} w_i · g_v^i · log(p_v^i)

wherein L_cls is the error function; P is the vessel and pelvis prediction generated by the first-stage segmentation; G is the true vessel and pelvis label in the segmentation task; v refers to a certain voxel in the CT data; V represents the set of all voxels in the CT image data; w_i is the control weight for label i; g_v^i is the true probability that the label of voxel v is i; p_v^i is the predicted probability that voxel v is i; W is the set of all weights of the 3D convolutional neural network; λ₁ is the weight of the first-level classification.
10. The deep learning-based multi-level segmentation method for the pelvis and arterial vessels thereof according to claim 7, characterized in that: setting Z as the prediction scale output by the second-stage segmentation model of the 3D convolutional neural network, with z_v ∈ Z for any v ∈ V, the K scales of the voxels containing the vascular structure are obtained by the distance transform algorithm during training of the second-stage segmentation model, and the deep distance transform learning error is calculated as:

L_dist(Z; W) = −(λ₂/N) Σ_{v∈V} Σ_{k=1}^{K} ξ · 1(z_v = k) · log p_v^k + (α·λ₂/N) Σ_{v∈V} Σ_{k=1}^{K} |k − z_v| · p_v^k

wherein L_dist is the deep distance transform learning error of the second-stage segmentation model; ξ is the weight coefficient of the vessel label in the weighted cross-entropy loss function; v refers to a certain voxel in the CT data; V represents the set of all voxels in the CT image data; K is the number of scales obtained by rounding the distance transform values, computed by the distance transform algorithm, down to integers; 1(k) is an indicator function whose value is 1 when z_v = k and 0 otherwise; z_v is the distance transform scale of voxel v from the vessel surface, z_v > 0; W is the set of all weights of the 3D convolutional neural network; λ₂ is the weight of the second-level classification; p_v^k is the probability that voxel v belongs to the k-th scale; α is a balance factor that balances the two loss terms; N is a normalization factor; the softmax and indicator functions involved are conventional functions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110159220.9A CN112489047B (en) | 2021-02-05 | 2021-02-05 | Deep learning-based pelvic bone and arterial vessel multi-level segmentation method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110159220.9A CN112489047B (en) | 2021-02-05 | 2021-02-05 | Deep learning-based pelvic bone and arterial vessel multi-level segmentation method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112489047A true CN112489047A (en) | 2021-03-12 |
CN112489047B CN112489047B (en) | 2021-06-01 |
Family
ID=74912385
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110159220.9A Active CN112489047B (en) | 2021-02-05 | 2021-02-05 | Deep learning-based pelvic bone and arterial vessel multi-level segmentation method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112489047B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765422A (en) * | 2018-06-13 | 2018-11-06 | 云南大学 | A kind of retinal images blood vessel automatic division method |
CN109215041A (en) * | 2018-08-17 | 2019-01-15 | 上海交通大学医学院附属第九人民医院 | A kind of full-automatic pelvic tumor dividing method and system, storage medium and terminal |
US20190064378A1 (en) * | 2017-08-25 | 2019-02-28 | Wei D. LIU | Automated Seismic Interpretation Using Fully Convolutional Neural Networks |
CN109447998A (en) * | 2018-09-29 | 2019-03-08 | 华中科技大学 | Based on the automatic division method under PCANet deep learning model |
CN109493317A (en) * | 2018-09-25 | 2019-03-19 | 哈尔滨理工大学 | The more vertebra dividing methods of 3D based on concatenated convolutional neural network |
CN109615636A (en) * | 2017-11-03 | 2019-04-12 | 杭州依图医疗技术有限公司 | Vascular tree building method, device in the lobe of the lung section segmentation of CT images |
CN110047128A (en) * | 2018-01-15 | 2019-07-23 | 西门子保健有限责任公司 | The method and system of X ray CT volume and segmentation mask is rebuild from several X-ray radiogram 3D |
CN110211140A (en) * | 2019-06-14 | 2019-09-06 | 重庆大学 | Abdominal vascular dividing method based on 3D residual error U-Net and Weighted Loss Function |
CN111091573A (en) * | 2019-12-20 | 2020-05-01 | 广州柏视医疗科技有限公司 | CT image pulmonary vessel segmentation method and system based on deep learning |
CN111179237A (en) * | 2019-12-23 | 2020-05-19 | 北京理工大学 | Image segmentation method and device for liver and liver tumor |
US20200203001A1 (en) * | 2017-07-07 | 2020-06-25 | University Of Louisville Research Foundation, Inc. | Segmentation of medical images |
US20200320751A1 (en) * | 2019-04-06 | 2020-10-08 | Kardiolytics Inc. | Autonomous segmentation of contrast filled coronary artery vessels on computed tomography images |
Non-Patent Citations (12)
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11750794B2 (en) | 2015-03-24 | 2023-09-05 | Augmedics Ltd. | Combining video-based and optic-based augmented reality in a near eye display |
US11980508B2 (en) | 2018-05-02 | 2024-05-14 | Augmedics Ltd. | Registration of a fiducial marker for an augmented reality system |
US11980507B2 (en) | 2018-05-02 | 2024-05-14 | Augmedics Ltd. | Registration of a fiducial marker for an augmented reality system |
US11974887B2 (en) | 2018-05-02 | 2024-05-07 | Augmedics Ltd. | Registration marker for an augmented reality system |
US11766296B2 (en) | 2018-11-26 | 2023-09-26 | Augmedics Ltd. | Tracking system for image-guided surgery |
US11980429B2 (en) | 2018-11-26 | 2024-05-14 | Augmedics Ltd. | Tracking methods for image-guided surgery |
US11980506B2 (en) | 2019-07-29 | 2024-05-14 | Augmedics Ltd. | Fiducial marker |
US11382712B2 (en) | 2019-12-22 | 2022-07-12 | Augmedics Ltd. | Mirroring in image guided surgery |
US11801115B2 (en) | 2019-12-22 | 2023-10-31 | Augmedics Ltd. | Mirroring in image guided surgery |
US11389252B2 (en) | 2020-06-15 | 2022-07-19 | Augmedics Ltd. | Rotating marker for image guided surgery |
CN113205508B (en) * | 2021-05-20 | 2022-01-25 | 强联智创(北京)科技有限公司 | Segmentation method, device and equipment based on image data |
CN113205508A (en) * | 2021-05-20 | 2021-08-03 | 强联智创(北京)科技有限公司 | Segmentation method, device and equipment based on image data |
CN113486711A (en) * | 2021-05-26 | 2021-10-08 | 上海应用技术大学 | Traffic sign recognition model training method and system |
CN113361584A (en) * | 2021-06-01 | 2021-09-07 | 推想医疗科技股份有限公司 | Model training method and device, and pulmonary arterial hypertension measurement method and device |
US11896445B2 (en) | 2021-07-07 | 2024-02-13 | Augmedics Ltd. | Iliac pin and adapter |
CN113744215B (en) * | 2021-08-24 | 2024-05-31 | 清华大学 | Extraction method and device for central line of tree-shaped lumen structure in three-dimensional tomographic image |
CN113744215A (en) * | 2021-08-24 | 2021-12-03 | 清华大学 | Method and device for extracting center line of tree-shaped lumen structure in three-dimensional tomography image |
CN113781636B (en) * | 2021-09-14 | 2023-06-20 | 杭州柳叶刀机器人有限公司 | Pelvic bone modeling method and system, storage medium, and computer program product |
CN113781636A (en) * | 2021-09-14 | 2021-12-10 | 杭州柳叶刀机器人有限公司 | Pelvic bone modeling method and system, storage medium, and computer program product |
CN113643317B (en) * | 2021-10-18 | 2022-01-04 | 四川大学 | Coronary artery segmentation method based on depth geometric evolution model |
CN113643317A (en) * | 2021-10-18 | 2021-11-12 | 四川大学 | Coronary artery segmentation method based on depth geometric evolution model |
CN114972361B (en) * | 2022-04-25 | 2022-12-16 | 北京医准智能科技有限公司 | Blood flow segmentation method, device, equipment and storage medium |
CN114972361A (en) * | 2022-04-25 | 2022-08-30 | 北京医准智能科技有限公司 | Blood flow segmentation method, device, equipment and storage medium |
CN114723739A (en) * | 2022-05-09 | 2022-07-08 | 深圳市铱硙医疗科技有限公司 | Blood vessel segmentation model training data labeling method and device based on CTA image |
CN115588012A (en) * | 2022-12-13 | 2023-01-10 | 四川大学 | Pelvic artery blood vessel segmentation method, system, storage medium and terminal |
Also Published As
Publication number | Publication date |
---|---|
CN112489047B (en) | 2021-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112489047B (en) | Deep learning-based pelvic bone and arterial vessel multi-level segmentation method thereof | |
CN105279759B (en) | The abdominal cavity aortic aneurysm outline dividing method constrained with reference to context information arrowband | |
Selver et al. | Patient oriented and robust automatic liver segmentation for pre-evaluation of liver transplantation | |
EP2194505B1 (en) | Method and apparatus for segmenting spine and aorta in a medical image according to a skeletal atlas | |
Aljabri et al. | A review on the use of deep learning for medical images segmentation | |
CN110517238B (en) | AI three-dimensional reconstruction and human-computer interaction visualization network system for CT medical image | |
CN107545584A (en) | The method, apparatus and its system of area-of-interest are positioned in medical image | |
CN110751651B (en) | MRI pancreas image segmentation method based on multi-scale migration learning | |
CN109801268B (en) | CT radiography image renal artery segmentation method based on three-dimensional convolution neural network | |
CN110288611A (en) | Coronary vessel segmentation method based on attention mechanism and full convolutional neural networks | |
CN109934829B (en) | Liver segmentation method based on three-dimensional graph segmentation algorithm | |
CN112862833A (en) | Blood vessel segmentation method, electronic device and storage medium | |
Fan et al. | Lung nodule detection based on 3D convolutional neural networks | |
CN107665737A (en) | Vascular wall stress-strain state acquisition methods, computer-readable medium and system | |
CN113160120A (en) | Liver blood vessel segmentation method and system based on multi-mode fusion and deep learning | |
Soler et al. | Fully automatic anatomical, pathological, and functional segmentation from CT scans for hepatic surgery | |
Xie et al. | Semi-supervised region-connectivity-based cerebrovascular segmentation for time-of-flight magnetic resonance angiography image | |
Qian et al. | Automatic segmentation method using FCN with multi-scale dilated convolution for medical ultrasound image | |
CN104915989B (en) | Blood vessel three-dimensional dividing method based on CT images | |
CN113192069A (en) | Semantic segmentation method and device for tree structure in three-dimensional tomography image | |
Wang et al. | Bowelnet: Joint semantic-geometric ensemble learning for bowel segmentation from both partially and fully labeled ct images | |
Tao et al. | Tooth CT Image Segmentation Method Based on the U-Net Network and Attention Module. | |
CN116630334B (en) | Method, device, equipment and medium for real-time automatic segmentation of multi-segment blood vessel | |
Xiao et al. | PET and CT image fusion of lung cancer with siamese pyramid fusion network | |
Yuan et al. | Pulmonary arteries segmentation from CT images using PA‐Net with attention module and contour loss |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |