CN111047591A - Focal volume measuring method, system, terminal and storage medium based on deep learning - Google Patents
Focal volume measuring method, system, terminal and storage medium based on deep learning
- Publication number
- CN111047591A CN111047591A CN202010173183.2A CN202010173183A CN111047591A CN 111047591 A CN111047591 A CN 111047591A CN 202010173183 A CN202010173183 A CN 202010173183A CN 111047591 A CN111047591 A CN 111047591A
- Authority
- CN
- China
- Prior art keywords
- segmentation
- data
- image
- layer
- volume
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/206—Drawing of charts or graphs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Abstract
The application provides a deep-learning-based lesion volume measurement method, system, terminal and storage medium, comprising the following steps: acquiring 2D slice data of a CT (computed tomography) image; determining standard annotation data according to a doctor's annotation of the 2D slice data of the CT image; inputting the 2D slice data of the CT image into a preset deep learning network model, and comparing the model's prediction with the standard annotation data to update the model parameters and obtain a trained segmentation network model; acquiring 2D slice data of a CT image to be tested, inputting it into the trained segmentation network model, and predicting 2D slice segmentation results; merging the predicted segmentation results on each 2D slice into 3D regions according to whether they belong to the same lesion, and connecting the 3D regions to obtain a 3D segmentation result; calculating the volume of each type of lesion from the 3D segmentation result; and plotting the number and volume of each type of lesion of the same patient against detection time in a curve chart for visual display. The method thus measures the number and volume of lesions and presents their development trend.
Description
Technical Field
The present application relates to the field of medical imaging and computer-aided technologies, and in particular, to a method, a system, a terminal, and a storage medium for focal volume measurement based on deep learning.
Background
The incubation period of the novel coronavirus is 1-14 days, generally 3-7 days. The main symptoms are fever, fatigue and dry cough, and a few patients have symptoms such as nasal obstruction, runny nose and diarrhea. Some severe patients develop dyspnea and/or hypoxemia about one week after onset, and critical patients rapidly progress to acute respiratory distress syndrome, septic shock, metabolic acidosis that is difficult to correct, and coagulation dysfunction. The disease thus develops rapidly in the patient and readily causes respiratory problems, forcing the use of external equipment to assist breathing.
At present, the diagnosis and treatment protocol for pneumonia caused by the novel coronavirus (trial fifth edition) takes suspected cases with pneumonia imaging features as the standard for clinically diagnosed cases in Hubei province, and observation of a patient's lungs in chest CT images is a main basis for screening 2019-nCoV pneumonia. Currently, whether a lesion has changed can only be judged by visual observation. Because CT performs tomographic reconstruction of part of the patient's tissue into a multi-slice cross-sectional image, the size of a lesion on a single slice does not represent its real size: the lesion volume cannot be accurately judged on a 2D plane, and spatial perception introduces judgment errors. When the time dimension is added, repeated visual observation alone cannot accurately record the growth or shrinkage of lesions. In particular, in lungs invaded by novel coronavirus pneumonia the lesions are unevenly distributed and irregularly shaped, and once a lesion's morphology or position changes, its location cannot be accurately determined nor its size and number measured by eye.
Therefore, a deep-learning-based lesion volume measurement method is needed to rapidly and accurately measure the number and volume of lesions and present their development trend.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present application provides a deep-learning-based lesion volume measurement method, system, terminal and storage medium, in which the type of each pixel of a 2D slice image is determined, adjacent pixels of the same type are then merged, and the lesion regions of multiple slices are accumulated to obtain the lesion volume.
To solve the above technical problem, the present application provides a deep-learning-based lesion volume measurement method, comprising:
acquiring 2D slice data of a CT (computed tomography) image;
determining standard annotation data according to a doctor's annotation of the 2D slice data of the CT image;
inputting the 2D slice data of the CT image into a preset deep learning network model, comparing the model's prediction with the standard annotation data to compute a loss, and updating the model parameters through gradient back-propagation to obtain a trained segmentation network model;
acquiring 2D slice data of a CT image to be tested, inputting it into the trained segmentation network model, and predicting 2D slice segmentation results;
merging the predicted segmentation results on each 2D slice into 3D regions according to whether they belong to the same lesion, and connecting the 3D regions to obtain a 3D segmentation result;
calculating the volume of each lesion according to the 3D segmentation result;
and plotting the number and volume of each type of lesion of the same patient against detection time in a curve chart for visual display.
Optionally, the lesions comprise lesions of multiple natures: ground-glass opacity, consolidation, nodule, fibrosis, pleural effusion and white lung.
Optionally, inputting the 2D slice data of the CT image into a preset deep learning network model, comparing the model's prediction with the standard annotation data to compute a loss, and updating the model parameters through gradient back-propagation to obtain a trained segmentation network model comprises:
combining three consecutive slices (the slice itself plus its upper and lower neighbors) of the 2D slice data of the CT image as a three-channel input to the segmentation network model, and feeding it into a Mask R-CNN neural network model with a feature pyramid network as its backbone for target detection and segmentation training, to obtain the trained segmentation network model.
Optionally, acquiring 2D slice data of the CT image to be tested, inputting it into the trained segmentation network model, and predicting the 2D slice segmentation results comprises:
acquiring the 2D slice data of the CT image to be tested from a DICOM (Digital Imaging and Communications in Medicine) database;
tagging the 2D slice data of the CT image to be tested with the patient number and the patient's detection time;
and inputting the tagged 2D slice data of the CT image to be tested into the trained segmentation network model and predicting the 2D slice segmentation results.
Optionally, merging the predicted segmentation results on each 2D slice into 3D regions according to whether they belong to the same lesion, and connecting the 3D regions to obtain the 3D segmentation result comprises:
smoothing the predicted segmentation results on each 2D slice and merging them into a complete 3D segmentation result.
Optionally, calculating the volume of each lesion according to the 3D segmentation result comprises:
acquiring the segmentation coordinates of the lung lobes from the DICOM database, and calculating the number of pixels in the lung lobes and in each lesion from those coordinates;
acquiring the distance between the center points of adjacent pixels and the inter-slice distance from the DICOM database;
and calculating the volumes of the lung lobes and the lesions by formula from the pixel counts, the distance between the center points of adjacent pixels, and the inter-slice distance.
Optionally, plotting the number and volume of each type of lesion of the same patient against detection time in a curve chart for visual display comprises:
acquiring the 2D slice data of all CT images with the same patient number from the DICOM database;
measuring the lesion volumes from the 2D slice data of each CT image according to the above method, to obtain the number and volume of each type of lesion of the same patient at different detection times;
and drawing a curve chart showing how the lesion volumes and counts change with detection time.
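As an illustrative sketch of the aggregation behind such a chart (the record layout and field names are assumptions for illustration, not taken from the patent), the per-scan lesion counts and total volumes of one patient can be grouped into time-ordered series ready for plotting:

```python
from collections import defaultdict

def lesion_trend_series(records):
    """Aggregate per-scan lesion measurements of one patient into
    time-ordered series for a volume/count trend chart.

    records: iterable of (detection_time, lesion_type, volume_mm3) tuples.
    Returns {lesion_type: (times, counts, total_volumes)} with times sorted.
    """
    per_type = defaultdict(lambda: defaultdict(lambda: [0, 0.0]))
    for time, lesion_type, volume in records:
        entry = per_type[lesion_type][time]
        entry[0] += 1          # lesion count at this detection time
        entry[1] += volume     # total lesion volume at this detection time
    series = {}
    for lesion_type, by_time in per_type.items():
        times = sorted(by_time)
        counts = [by_time[t][0] for t in times]
        volumes = [by_time[t][1] for t in times]
        series[lesion_type] = (times, counts, volumes)
    return series
```

The resulting series can then be handed to any plotting library to draw the volume and count curves against detection time.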
In a second aspect, the present application further provides a deep-learning-based lesion volume measurement system, comprising:
a data acquisition unit, configured to acquire 2D slice data of a CT image;
a data annotation unit, configured to determine standard annotation data according to a doctor's annotation of the 2D slice data of the CT image;
a model training unit, configured to input the 2D slice data of the CT image into a preset deep learning network model, compare the model's prediction with the standard annotation data to compute a loss, and update the model parameters through gradient back-propagation to obtain a trained segmentation network model;
a model prediction unit, configured to acquire 2D slice data of a CT image to be tested, input it into the trained segmentation network model and predict the 2D slice segmentation results;
a slice merging unit, configured to merge the predicted segmentation results on each 2D slice into 3D regions according to whether they belong to the same lesion, and connect the 3D regions to obtain a 3D segmentation result;
a volume calculation unit, configured to calculate the volume of each lesion according to the 3D segmentation result;
and a data display unit, configured to plot the number and volume of each type of lesion of the same patient against detection time in a curve chart for visual display.
Optionally, the model training unit is specifically configured to:
combine three consecutive slices of the 2D slice data of the CT image as a three-channel input to the segmentation network model, and feed it into a Mask R-CNN neural network model with a feature pyramid network as its backbone for target detection and segmentation training, to obtain the trained segmentation network model.
Optionally, the model prediction unit is specifically configured to:
acquire the 2D slice data of the CT image to be tested from a DICOM database;
tag the 2D slice data of the CT image to be tested with the patient number and the patient's detection time;
and input the tagged 2D slice data of the CT image to be tested into the trained segmentation network model and predict the 2D slice segmentation results.
Optionally, the slice merging unit is specifically configured to:
smooth the predicted segmentation results on each 2D slice and merge them into a complete 3D segmentation result.
Optionally, the volume calculation unit is specifically configured to:
acquire the segmentation coordinates of the lung lobes from the DICOM database, and calculate the number of pixels in the lung lobes and in each lesion from those coordinates;
acquire the distance between the center points of adjacent pixels and the inter-slice distance from the DICOM database;
and calculate the volumes of the lung lobes and the lesions by formula from the pixel counts, the distance between the center points of adjacent pixels, and the inter-slice distance.
Optionally, the data display unit is specifically configured to:
acquire the 2D slice data of all CT images with the same patient number from the DICOM database;
measure the lesion volumes from the 2D slice data of each CT image according to the above method, to obtain the number and volume of each type of lesion of the same patient at different detection times;
and draw a curve chart showing how the lesion volumes and counts change with detection time.
In a third aspect, a terminal is provided, comprising:
a processor and a memory, wherein
the memory is used to store a computer program, and
the processor is used to call and run the computer program from the memory, so that the terminal executes the above method.
In a fourth aspect, a computer storage medium is provided having stored therein instructions that, when executed on a computer, cause the computer to perform the method of the above aspects.
Compared with the prior art, the method has the following beneficial effects:
1. The method analyzes the 3D structure abstracted from the images by deep learning, measures the lesion volume by detecting and segmenting the precise contour of each lesion, merges interconnected lesions so that they are not counted repeatedly, records the number of all lesions, and performs classified statistics by anatomical structure such as lung lobe, thereby realizing quantitative statistics of lesion volume. This overcomes the defect that, in a single examination, 3D lesions cannot be reconstructed and measured from 2D images and the number of lesions cannot be accurately counted. The 3D volume obtained by abstracting the 2D images into a 3D structure conforms to medical standards better than the size measured on a single 2D slice, provides an intuitive sense of the proportion between a patient's lesions and lungs, and helps doctors judge the current state of the patient's lesions.
2. The method analyzes the patient's past CT examinations and, based on the lesion volumes and counts, draws a chart presenting the quantified lesion information as an intuitive curve. The number and size of lesions at different time points can be compared to obtain the lesion development trend, so a clinical treatment plan can be designed in a targeted manner. This avoids treatment delays caused by errors in treatment-plan design, and overcomes the defect that large lesions, or lesions with locally unclear edges, are easily misjudged when observed in 2D.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application; those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a schematic flow chart of deep-learning-based lesion volume measurement provided in an embodiment of the present application.
Fig. 2 is an operation interface for a patient's test results provided in an embodiment of the present application.
Fig. 3 is a curve chart of the volume and count of a patient's suspected lesions changing over time, according to an embodiment of the present application.
Fig. 4 is a schematic block diagram of a deep-learning-based lesion volume measurement system provided in an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a terminal controlling deep-learning-based lesion volume measurement according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments derived by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a deep-learning-based lesion volume measurement method according to an embodiment of the present disclosure, the method comprising:
S101: acquiring 2D slice data of a CT image;
S102: determining standard annotation data according to a doctor's annotation of the 2D slice data of the CT image;
S103: inputting the 2D slice data of the CT image into a preset deep learning network model, comparing the model's prediction with the standard annotation data to compute a loss, and updating the model parameters through gradient back-propagation to obtain a trained segmentation network model;
S104: acquiring 2D slice data of a CT image to be tested, inputting it into the trained segmentation network model, and predicting 2D slice segmentation results;
S105: merging the predicted segmentation results on each 2D slice into 3D regions according to whether they belong to the same lesion, and connecting the 3D regions to obtain a 3D segmentation result;
S106: calculating the volume of each lesion according to the 3D segmentation result;
S107: plotting the number and volume of each type of lesion of the same patient against detection time in a curve chart for visual display.
Based on the above embodiments, as a preferred embodiment, the lesions include lesions of multiple natures: ground-glass opacity, consolidation, nodule, fibrosis, pleural effusion and white lung.
Based on the above embodiment, as a preferred embodiment, S103 (inputting the 2D slice data of the CT image into a preset deep learning network model, comparing the model's prediction with the standard annotation data to compute a loss, and updating the model parameters through gradient back-propagation to obtain a trained segmentation network model) comprises:
combining three consecutive slices of the 2D slice data of the CT image as a three-channel input to the segmentation network model, and feeding it into a Mask R-CNN neural network model with a feature pyramid network (ResNet50 + FPN) as its backbone for target detection and segmentation training, to obtain the trained segmentation network model.
It should be noted that three consecutive slices of standard annotation data are obtained, and these 3 slices form the input of the segmentation network model as a pseudo-3D (2.5D) structure, while the output of the segmentation network model is a single 2D slice; the other two input slices therefore play an auxiliary role. Combining three consecutive slices as the model's three input channels not only allows pre-trained network parameters to be reused, but also takes the correlation between multiple CT slices into account: for a three-slice input, the network's learning target is the annotation of the middle slice. When 2D slice data of a CT image to be tested is acquired, each slice and its upper and lower neighbors are input to the model, and the prediction corresponds to the middle slice.
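The three-channel pseudo-3D input described above can be sketched as follows (a minimal NumPy illustration; the function name and the replicate-padding at the first and last slice are assumptions, not specified by the patent):

```python
import numpy as np

def make_25d_inputs(volume):
    """Build pseudo-3D (2.5D) network inputs from a stack of 2D CT slices.

    volume: array of shape (num_slices, H, W). For each slice i, the three
    channels are slices (i-1, i, i+1); at the volume edges the border slice
    is replicated. The training target for each input is the annotation of
    the middle slice.
    """
    padded = np.concatenate([volume[:1], volume, volume[-1:]], axis=0)
    # Stack (i-1, i, i+1) along a channel axis -> (num_slices, 3, H, W)
    return np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=1)
```

Each row of the result is one three-channel example whose middle channel is the slice being segmented.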
In addition, because pleural effusion and pneumothorax vary greatly in size on chest CT, many networks do not segment all sizes well. By combining features of different scales from multiple layers in a pyramid structure, the FPN can find smaller signs well and thus improves the model's performance.
Based on the above embodiment, as a preferred embodiment, S104 (acquiring 2D slice data of a CT image to be tested, inputting it into the trained segmentation network model, and predicting the 2D slice segmentation results) comprises:
acquiring the 2D slice data of the CT image to be tested from a DICOM (Digital Imaging and Communications in Medicine) database;
tagging the 2D slice data of the CT image to be tested with the patient number and the patient's detection time;
and inputting the tagged 2D slice data of the CT image to be tested into the trained segmentation network model and predicting the 2D slice segmentation results.
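A minimal sketch of the tagging/grouping step above (the record layout and key names are illustrative assumptions, loosely mirroring DICOM's PatientID, AcquisitionDate and InstanceNumber tags):

```python
def index_slices(slice_records):
    """Group 2D slice records by (patient number, detection time) so each CT
    series can be fed to the segmentation model as one ordered stack.

    slice_records: iterable of dicts with 'patient_id', 'detection_time'
    and 'instance_number' keys.
    """
    series = {}
    for rec in slice_records:
        key = (rec["patient_id"], rec["detection_time"])
        series.setdefault(key, []).append(rec)
    for recs in series.values():
        recs.sort(key=lambda r: r["instance_number"])  # slice order in the stack
    return series
```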
Based on the above embodiment, as a preferred embodiment, S105 (merging the predicted segmentation results on each 2D slice into 3D regions according to whether they belong to the same lesion, and connecting the 3D regions to obtain the 3D segmentation result) comprises:
smoothing the predicted segmentation results on each 2D slice and merging them into a complete 3D segmentation result.
Specifically, the segmentation network model judges the lesion category of each pixel of the 2D slice image; adjacent pixels with the same lesion category are then merged, and the lesion regions of multiple slices are accumulated to obtain the 3D segmentation result, i.e., the effusion volume or pneumothorax volume information of the lesion.
It should be noted that existing CT segmentation schemes take a 3D patch of interest directly as input and model it to obtain a 3D mask through segmentation. Unlike such 3D object segmentation, the present application does not require a 3D ROI region to be provided in advance; that is, the input to the segmentation unit is not a 3D patch. Instead, the segmentation result is predicted directly on each slice (2D slice) of the CT image. For display purposes, the segmentation results predicted on different 2D slices are merged into a 3D region according to whether they belong to the same lesion region, i.e. according to their degree of correlation. For the 2D segmentation result of each slice, the class connections between slices are considered: according to 3D connectivity, the 2D results on multiple slices are smoothed and merged into a complete 3D segmentation result, yielding a better lesion-level segmentation.
Based on the above embodiment, as a preferred embodiment, the step S106 of calculating the volume of each lesion according to the 3D segmentation result includes:
acquiring the segmentation coordinates of the lung lobes from a DICOM database, and calculating the number of pixels of the lung lobes and of each lesion from those segmentation coordinates;
acquiring from the DICOM database the centre-to-centre distance between adjacent pixels (pixel spacing) and the distance between adjacent slices (slice spacing);
calculating the volumes of the lung lobes and of each lesion from the pixel count, the pixel spacing, and the slice spacing.
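The volume computation reduces to multiplying the voxel count by the voxel dimensions taken from the DICOM metadata. A minimal sketch (the function name and the millilitre conversion are illustrative assumptions):

```python
def lesion_volume_ml(pixel_count, pixel_spacing_mm, slice_spacing_mm):
    """Physical volume of a lesion (or lung lobe) from its voxel count.

    pixel_count:      number of pixels assigned to the region across all slices
    pixel_spacing_mm: (row, col) centre-to-centre pixel distance in mm,
                      as in the DICOM Pixel Spacing attribute
    slice_spacing_mm: distance between adjacent slices in mm
    Returns the volume in millilitres (1 mL = 1000 mm^3).
    """
    row_mm, col_mm = pixel_spacing_mm
    return pixel_count * row_mm * col_mm * slice_spacing_mm / 1000.0
```

For example, 200 voxels at 0.7 mm x 0.7 mm pixel spacing and 5 mm slice spacing correspond to roughly 0.49 mL.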
Based on the above embodiment, as a preferred embodiment, the step S107 of drawing a waveform chart of the number and volume of the various lesions of the same patient against detection time for visual display includes:
acquiring the 2D slice data of the CT images bearing the same patient number from a DICOM (Digital Imaging and Communications in Medicine) database;
measuring the lesion volume on the 2D slice data of each CT image according to the above method, to obtain the number and volume of the various lesions of the same patient at different detection times;
drawing a curve chart that displays lesion volume and count as they change with detection time.
Specifically, as shown in fig. 2, fig. 2 is an operation interface for a patient's test results provided in an embodiment of the present application. The patient's N CT examinations are obtained from the DICOM database, the last of which is the latest examination of the current visit. The user side can delete detected lesions: since a false positive or a classification error would make the statistics unreliable, the deletion function is opened to the user, the total lesion count and overall volume change with each deletion, and the changed result is redrawn as a curve according to the user's requirements. The system checks whether the user has modified the data of the two examinations being compared at follow-up; if so, the user-modified data overrides the data fetched from the database, the chart is drawn with the selected colours and icons, dynamic effects such as prompt boxes are added, and user-side operations are monitored, so that whenever the user modifies a lesion, the data is re-filtered and the curve chart is re-rendered.
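The re-filtering after a user deletion can be sketched as a per-examination aggregation; the data structure and function name below are hypothetical, chosen only to illustrate the recomputation described above:

```python
def summarize_exams(exams, deleted=()):
    """For each examination (keyed by detection time), compute the lesion
    count and total volume, skipping lesion ids the user deleted as false
    positives or misclassifications.

    exams:   {detection_time: [(lesion_id, volume_ml), ...]}  -- assumed shape
    deleted: iterable of lesion ids removed by the user
    returns: list of (detection_time, count, total_volume), sorted by time
    """
    removed = set(deleted)
    out = []
    for t in sorted(exams):
        kept = [(i, v) for i, v in exams[t] if i not in removed]
        out.append((t, len(kept), sum(v for _, v in kept)))
    return out
```

The returned series would feed the curve chart directly, so deleting a lesion immediately shifts both the count curve and the volume curve.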
Referring to fig. 3, fig. 3 shows curves of lesion volume and count over time for a patient's suspected lesions according to an embodiment of the present application. As can be seen from fig. 3, plotting lesion volume and count against detection time makes the development trend of the patient's lesions directly observable.
For example, a patient with novel coronavirus pneumonia (COVID-19) underwent a CT examination at the hospital in 2018 that showed no inflammation in the lungs; at a CT examination at the hospital on February 3, 2020, a large number of suspected inflammatory lesions occupying a high proportion of the lung was confirmed; after receiving treatment the condition gradually improved, and by February 13, 2020 both the lesion count and the overall volume were markedly improved compared with the previous examination.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a deep learning-based lesion volume measurement system according to an embodiment of the present application, where the system 400 includes:
a data acquisition unit 401 configured to acquire 2D slice data of a CT image;
the data annotation unit 402 is configured to determine the standard annotation data according to a doctor's annotations on the 2D slice data of the CT image;
the model training unit 403 is configured to input 2D layer data of the CT image into a preset deep learning network model, compare a prediction result of the model with standard annotation data to calculate loss, and update model parameters by using a gradient back-propagation algorithm to obtain a trained segmentation network model;
a model prediction unit 404 configured to obtain 2D layer data of a CT image to be tested, input the 2D layer data into a trained segmentation network model, and predict a 2D layer segmentation result;
a level merging unit 405 configured to merge the segmentation results predicted on each 2D level into 3D segmentation regions according to whether the segmentation results belong to the same lesion area, and obtain 3D segmentation results through 3D segmentation region connection;
a volume calculation unit 406 configured to calculate a volume of each lesion according to the 3D segmentation result;
and the data display unit 407 is configured to draw a waveform chart of the number and volume of the various lesions of the same patient against detection time for visual display.
Based on the above embodiment, as a preferred embodiment, the model training unit 403 is specifically configured to:
combining three consecutive slices (the current slice and its upper and lower neighbours) of the 2D slice data of the CT image as the three input channels of the segmentation network model, and inputting them into a Mask R-CNN neural network model with a feature pyramid network as its backbone for target detection and segmentation training, to obtain the trained segmentation network model.
Based on the foregoing embodiment, as a preferred embodiment, the model prediction unit 404 is specifically configured to:
acquiring 2D slice data of the CT image to be tested from a DICOM (Digital Imaging and Communications in Medicine) database;
labeling the 2D slice data of the CT image to be tested with the patient number and the detection time;
inputting the identified 2D slice data of the CT image to be tested into the trained segmentation network model, and predicting the 2D slice segmentation result.
Based on the above embodiment, as a preferred embodiment, the layer merging unit 405 is specifically configured to:
and smoothing the predicted segmentation results on each 2D layer, and combining the segmentation results into a complete 3D segmentation result.
Based on the above embodiment, as a preferred embodiment, the volume calculating unit 406 is specifically configured to:
acquiring the segmentation coordinates of the lung lobes from a DICOM database, and calculating the number of pixels of the lung lobes and of each lesion from those segmentation coordinates;
acquiring from the DICOM database the centre-to-centre distance between adjacent pixels (pixel spacing) and the distance between adjacent slices (slice spacing);
calculating the volumes of the lung lobes and of each lesion from the pixel count, the pixel spacing, and the slice spacing.
Based on the above embodiment, as a preferred embodiment, the data display unit 407 is specifically configured to:
acquiring the 2D slice data of the CT images bearing the same patient number from a DICOM (Digital Imaging and Communications in Medicine) database;
measuring the lesion volume on the 2D slice data of each CT image according to the above method, to obtain the number and volume of the various lesions of the same patient at different detection times;
drawing a curve chart that displays lesion volume and count as they change with detection time.
Fig. 5 is a schematic structural diagram of a controlled terminal 500 according to an embodiment of the present invention, where the controlled terminal 500 may be used to perform deep learning-based lesion volume measurement according to the embodiment of the present invention.
The controlled terminal 500 may include a processor 510, a memory 520, and a communication unit 530. These components communicate via one or more buses; those skilled in the art will appreciate that the server architecture shown in the figure is not limiting, and it may be a bus or star architecture, include more or fewer components than shown, or combine or arrange components differently.
The memory 520 may be used to store instructions executed by the processor 510, and may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. When executed by the processor 510, the executable instructions in the memory 520 enable the controlled terminal 500 to perform some or all of the steps of the method embodiments described above.
The processor 510 is the control centre of the storage terminal; it connects the various parts of the entire electronic terminal using various interfaces and lines, and performs the functions of the electronic terminal and/or processes data by running or executing software programs and/or modules stored in the memory 520 and calling data stored in the memory. The processor may be composed of integrated circuits (ICs), for example a single packaged IC, or several packaged ICs with the same or different functions connected together. For example, the processor 510 may include only a central processing unit (CPU). In embodiments of the present invention, the CPU may have a single operation core or multiple operation cores.
The communication unit 530 is used to establish a communication channel so that the storage terminal can communicate with other terminals, receiving user data sent by other terminals or sending user data to them.
The present invention also provides a computer storage medium, wherein the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments provided by the present invention when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
In summary, by means of deep learning the image is abstracted into a 3D structure for analysis: the precise contour of each lesion is detected and segmented to measure and calculate its volume, correlated lesions are de-duplicated, the count of all lesions is recorded, and quantitative statistics of lesion volume are obtained by classifying against anatomical structures such as the lung lobes. This overcomes the defects that, in a single examination, a 3D lesion cannot be abstracted and measured from 2D images and the number of lesions cannot be counted accurately. The 3D volume obtained by abstracting the 2D images into a 3D structure conforms to medical standards better than the size of a single 2D slice, providing a visual sense of the proportion between the lesions and the patient's lungs and helping the doctor judge the current state of the patient's lesions. By analysing the patient's past CT examinations, curves of lesion volume and count are drawn in chart form, displaying quantified lesion information in an intuitive graph; the number and size of lesions at different time points can be compared horizontally to analyse the development trend of the lesions and design the clinical treatment plan in a targeted way. This avoids treatment schemes designed on erroneous data, which delay the treatment period, and overcomes the tendency to misjudge large lesions, or lesions with unclear local edges, when observing on a 2D basis.
In the method, complementary information from the consecutive upper and lower slices is introduced to assist the segmentation of the centre slice, and the 2D slice segmentation result is predicted by modelling the spatial dimension (the 2D slice) and the sequential dimension (the upper and lower slices) separately. The per-slice 2D predictions are then innovatively merged, with smoothing, into a 3D segmentation result according to 3D up-down connectivity; the contours of the two diseases are accurately segmented on the CT image, and the precise volume of each lesion is obtained from the preset parameters of the scanning equipment. This reduces the influence of the doctor's subjective factors, improves the diagnosis rate, and improves the accuracy, reliability, and efficiency of measuring pleural effusion and pneumothorax volume. For the technical effects achieved by this embodiment, reference may be made to the description above, which is not repeated here.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied in the form of a software product stored in a storage medium, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disk; the storage medium stores program codes including instructions for enabling a computer terminal (which may be a personal computer, a server, a second terminal, a network terminal, or the like) to perform all or part of the steps of the methods in the embodiments of the present invention.
The same and similar parts in the various embodiments in this specification may be referred to each other. Especially, for the terminal embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
Although the present invention has been described in detail with reference to the drawings in connection with the preferred embodiments, the present invention is not limited thereto. Those skilled in the art can make various equivalent modifications or substitutions to the embodiments of the present invention without departing from its spirit and scope, and such modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A focus volume measurement method based on deep learning is characterized by comprising the following steps:
acquiring 2D slice data of a CT (computed tomography) image;
determining standard labeling data according to a doctor's labeling results on the 2D slice data of the CT image;
inputting 2D layer data of the CT image into a preset deep learning network model, comparing a prediction result of the model with standard labeling data to calculate loss, and updating model parameters through a gradient back-propagation algorithm to obtain a trained segmentation network model;
acquiring 2D layer data of a CT image to be tested, inputting the 2D layer data into a trained segmentation network model, and predicting a 2D layer segmentation result;
merging the predicted segmentation results on each 2D layer into 3D segmentation regions according to whether the predicted segmentation results belong to the same focus region, and connecting the 3D segmentation regions to obtain a 3D segmentation result;
calculating the volume of each focus according to the 3D segmentation result;
drawing a waveform chart of the number and volume of the various lesions of the same patient against detection time for visual display.
2. The method of claim 1, wherein the lesions comprise lesions of multiple natures: ground-glass opacity, consolidation, nodule, fibrosis, pleural effusion, and white lung.
3. The method for measuring lesion volume based on deep learning of claim 1, wherein the inputting of the 2D slice data of the CT image into a preset deep learning network model, comparing the model's prediction result with the standard labeling data to calculate the loss, and updating the model parameters through a gradient back-propagation algorithm to obtain a trained segmentation network model comprises:
combining three consecutive slices (the current slice and its upper and lower neighbours) of the 2D slice data of the CT image as the three input channels of the segmentation network model, and inputting them into a Mask R-CNN neural network model with a feature pyramid network as its backbone for target detection and segmentation training, to obtain the trained segmentation network model.
4. The method for measuring lesion volume based on deep learning of claim 1, wherein the acquiring and inputting 2D slice data of the CT image to be tested into the trained segmentation network model to predict the 2D slice segmentation result comprises:
acquiring 2D slice data of the CT image to be tested from a DICOM (Digital Imaging and Communications in Medicine) database;
labeling the 2D slice data of the CT image to be tested with the patient number and the detection time;
inputting the identified 2D slice data of the CT image to be tested into the trained segmentation network model, and predicting the 2D slice segmentation result.
5. The method for measuring lesion volume based on deep learning of claim 1, wherein the merging of the predicted segmentation results on each 2D layer into 3D segmentation regions according to whether the prediction results belong to the same lesion region, and obtaining the 3D segmentation results through 3D segmentation region connection comprises:
smoothing the segmentation results predicted on each 2D slice and merging them into a complete 3D segmentation result.
6. The method according to claim 1, wherein the calculating the volume of each lesion according to the 3D segmentation result comprises:
acquiring the segmentation coordinates of the lung lobes from a DICOM database, and calculating the number of pixels of the lung lobes and of each lesion from those segmentation coordinates;
acquiring from the DICOM database the centre-to-centre distance between adjacent pixels (pixel spacing) and the distance between adjacent slices (slice spacing);
calculating the volumes of the lung lobes and of each lesion from the pixel count, the pixel spacing, and the slice spacing.
7. The method for measuring lesion volume based on deep learning of claim 1, wherein the drawing of a waveform chart of the number and volume of the various lesions of the same patient against detection time for visual display comprises:
acquiring the 2D slice data of the CT images bearing the same patient number from a DICOM (Digital Imaging and Communications in Medicine) database;
measuring the lesion volume on the 2D slice data of each CT image according to the above method, to obtain the number and volume of the various lesions of the same patient at different detection times;
drawing a curve chart that displays lesion volume and count as they change with detection time.
8. A lesion volume measurement system based on deep learning, comprising:
the data acquisition unit, configured to acquire 2D slice data of a CT image;
the data annotation unit, configured to determine standard annotation data according to a doctor's annotations on the 2D slice data of the CT image;
the model training unit is configured for inputting 2D layer data of the CT image into a preset deep learning network model, comparing a prediction result of the model with standard marking data to calculate loss, and updating model parameters through a gradient back-propagation algorithm to obtain a trained segmentation network model;
the model prediction unit is configured to acquire 2D layer data of the CT image to be tested, input the data into the trained segmentation network model and predict a 2D layer segmentation result;
the layer merging unit is configured to merge the predicted segmentation results on each 2D layer into 3D segmentation regions according to whether the predicted segmentation results belong to the same lesion area, and the 3D segmentation results are obtained through the connection of the 3D segmentation regions;
a volume calculation unit configured to calculate a volume of each lesion according to the 3D segmentation result;
and the data display unit, configured to draw a waveform chart of the number and volume of the various lesions of the same patient against detection time for visual display.
9. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010173183.2A CN111047591A (en) | 2020-03-13 | 2020-03-13 | Focal volume measuring method, system, terminal and storage medium based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010173183.2A CN111047591A (en) | 2020-03-13 | 2020-03-13 | Focal volume measuring method, system, terminal and storage medium based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111047591A true CN111047591A (en) | 2020-04-21 |
Family
ID=70231047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010173183.2A Pending CN111047591A (en) | 2020-03-13 | 2020-03-13 | Focal volume measuring method, system, terminal and storage medium based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111047591A (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539944A (en) * | 2020-04-28 | 2020-08-14 | 安徽科大讯飞医疗信息技术有限公司 | Lung focus statistical attribute acquisition method and device, electronic equipment and storage medium |
CN111667458A (en) * | 2020-04-30 | 2020-09-15 | 杭州深睿博联科技有限公司 | Method and device for detecting early acute cerebral infarction in flat-scan CT |
CN111738980A (en) * | 2020-05-14 | 2020-10-02 | 上海依智医疗技术有限公司 | Medical image display method, computer equipment and storage medium |
CN111915555A (en) * | 2020-06-19 | 2020-11-10 | 杭州深睿博联科技有限公司 | 3D network model pre-training method, system, terminal and storage medium |
CN111915556A (en) * | 2020-06-22 | 2020-11-10 | 杭州深睿博联科技有限公司 | CT image lesion detection method, system, terminal and storage medium based on double-branch network |
CN112017185A (en) * | 2020-10-30 | 2020-12-01 | 平安科技(深圳)有限公司 | Focus segmentation method, device and storage medium |
CN112053769A (en) * | 2020-09-30 | 2020-12-08 | 沈阳东软智能医疗科技研究院有限公司 | Three-dimensional medical image labeling method and device and related product |
CN112190277A (en) * | 2020-11-09 | 2021-01-08 | 华中科技大学同济医学院附属协和医院 | Data fitting method for CT reexamination of new coronary pneumonia |
CN112419309A (en) * | 2020-12-11 | 2021-02-26 | 上海联影医疗科技股份有限公司 | Medical image phase determination method, apparatus, computer device and storage medium |
CN112435212A (en) * | 2020-10-15 | 2021-03-02 | 杭州脉流科技有限公司 | Brain focus region volume obtaining method and device based on deep learning, computer equipment and storage medium |
CN112614573A (en) * | 2021-01-27 | 2021-04-06 | 北京小白世纪网络科技有限公司 | Deep learning model training method and device based on pathological image labeling tool |
CN113096093A (en) * | 2021-04-12 | 2021-07-09 | 中山大学 | Method, system and device for calculating quantity and volume of calculi in CT (computed tomography) image |
CN113450337A (en) * | 2021-07-07 | 2021-09-28 | 沈阳先进医疗设备技术孵化中心有限公司 | Evaluation method and device for hydrops in pericardial cavity, electronic device and storage medium |
CN113506294A (en) * | 2021-09-08 | 2021-10-15 | 远云(深圳)互联网科技有限公司 | Medical image evaluation method, system, computer equipment and storage medium |
CN114332023A (en) * | 2021-12-30 | 2022-04-12 | 上海市嘉定区中心医院 | Pneumothorax automatic diagnosis and crisis early warning method, device, equipment and storage medium |
CN116205967A (en) * | 2023-04-27 | 2023-06-02 | 中国科学院长春光学精密机械与物理研究所 | Medical image semantic segmentation method, device, equipment and medium |
CN117635613A (en) * | 2024-01-25 | 2024-03-01 | 武汉大学人民医院(湖北省人民医院) | Fundus focus monitoring device and method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103793611A (en) * | 2014-02-18 | 2014-05-14 | 中国科学院上海技术物理研究所 | Medical information visualization method and device |
CN107492097A (en) * | 2017-08-07 | 2017-12-19 | 北京深睿博联科技有限责任公司 | A kind of method and device for identifying MRI image area-of-interest |
CN109886179A (en) * | 2019-02-18 | 2019-06-14 | 深圳视见医疗科技有限公司 | The image partition method and system of cervical cell smear based on Mask-RCNN |
CN110310281A (en) * | 2019-07-10 | 2019-10-08 | 重庆邮电大学 | Lung neoplasm detection and dividing method in a kind of Virtual Medical based on Mask-RCNN deep learning |
CN110782446A (en) * | 2019-10-25 | 2020-02-11 | 杭州依图医疗技术有限公司 | Method and device for determining volume of lung nodule |
CN110853011A (en) * | 2019-11-11 | 2020-02-28 | 河北工业大学 | Method for constructing convolutional neural network model for pulmonary nodule detection |
- 2020-03-13: CN patent application CN202010173183.2A filed (published as CN111047591A); status: active, Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103793611A (en) * | 2014-02-18 | 2014-05-14 | 中国科学院上海技术物理研究所 | Medical information visualization method and device |
CN107492097A (en) * | 2017-08-07 | 2017-12-19 | 北京深睿博联科技有限责任公司 | A kind of method and device for identifying MRI image area-of-interest |
CN109886179A (en) * | 2019-02-18 | 2019-06-14 | 深圳视见医疗科技有限公司 | The image partition method and system of cervical cell smear based on Mask-RCNN |
CN110310281A (en) * | 2019-07-10 | 2019-10-08 | 重庆邮电大学 | Lung neoplasm detection and dividing method in a kind of Virtual Medical based on Mask-RCNN deep learning |
CN110782446A (en) * | 2019-10-25 | 2020-02-11 | 杭州依图医疗技术有限公司 | Method and device for determining volume of lung nodule |
CN110853011A (en) * | 2019-11-11 | 2020-02-28 | 河北工业大学 | Method for constructing convolutional neural network model for pulmonary nodule detection |
Non-Patent Citations (2)
Title |
---|
MENGLU LIU: "Segmentation of Lung Nodule in CT Images Based on Mask R-CNN", Proceedings of the 2018 9th International Conference on Awareness Science and Technology *
GUO Tong: "Automatic Segmentation and Recognition of Pulmonary Nodule Images", China Master's Theses Full-text Database, Medicine and Health Sciences *
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539944A (en) * | 2020-04-28 | 2020-08-14 | 安徽科大讯飞医疗信息技术有限公司 | Lung focus statistical attribute acquisition method and device, electronic equipment and storage medium |
CN111539944B (en) * | 2020-04-28 | 2024-04-09 | 讯飞医疗科技股份有限公司 | Method, device, electronic equipment and storage medium for acquiring statistical attribute of lung focus |
CN111667458A (en) * | 2020-04-30 | 2020-09-15 | 杭州深睿博联科技有限公司 | Method and device for detecting early acute cerebral infarction in flat-scan CT |
CN111667458B (en) * | 2020-04-30 | 2023-09-01 | 杭州深睿博联科技有限公司 | Early acute cerebral infarction detection method and device in flat scanning CT |
CN111738980A (en) * | 2020-05-14 | 2020-10-02 | 上海依智医疗技术有限公司 | Medical image display method, computer equipment and storage medium |
CN111738980B (en) * | 2020-05-14 | 2023-08-04 | 北京深睿博联科技有限责任公司 | Medical image display method, computer equipment and storage medium |
CN111915555A (en) * | 2020-06-19 | 2020-11-10 | 杭州深睿博联科技有限公司 | 3D network model pre-training method, system, terminal and storage medium |
CN111915556A (en) * | 2020-06-22 | 2020-11-10 | 杭州深睿博联科技有限公司 | CT image lesion detection method, system, terminal and storage medium based on double-branch network |
CN112053769A (en) * | 2020-09-30 | 2020-12-08 | 沈阳东软智能医疗科技研究院有限公司 | Three-dimensional medical image labeling method and device and related product |
CN112053769B (en) * | 2020-09-30 | 2023-03-10 | 沈阳东软智能医疗科技研究院有限公司 | Three-dimensional medical image labeling method and device and related product |
CN112435212A (en) * | 2020-10-15 | 2021-03-02 | 杭州脉流科技有限公司 | Brain focus region volume obtaining method and device based on deep learning, computer equipment and storage medium |
CN112017185B (en) * | 2020-10-30 | 2021-02-05 | 平安科技(深圳)有限公司 | Focus segmentation method, device and storage medium |
CN112017185A (en) * | 2020-10-30 | 2020-12-01 | 平安科技(深圳)有限公司 | Focus segmentation method, device and storage medium |
CN112190277A (en) * | 2020-11-09 | 2021-01-08 | 华中科技大学同济医学院附属协和医院 | Data fitting method for CT reexamination of new coronary pneumonia |
CN112419309A (en) * | 2020-12-11 | 2021-02-26 | 上海联影医疗科技股份有限公司 | Medical image phase determination method, apparatus, computer device and storage medium |
CN112419309B (en) * | 2020-12-11 | 2023-04-07 | 上海联影医疗科技股份有限公司 | Medical image phase determination method, apparatus, computer device and storage medium |
CN112614573A (en) * | 2021-01-27 | 2021-04-06 | 北京小白世纪网络科技有限公司 | Deep learning model training method and device based on pathological image labeling tool |
CN113096093A (en) * | 2021-04-12 | 2021-07-09 | 中山大学 | Method, system and device for calculating quantity and volume of calculi in CT (computed tomography) image |
CN113450337A (en) * | 2021-07-07 | 2021-09-28 | 沈阳先进医疗设备技术孵化中心有限公司 | Evaluation method and device for hydrops in pericardial cavity, electronic device and storage medium |
CN113506294A (en) * | 2021-09-08 | 2021-10-15 | 远云(深圳)互联网科技有限公司 | Medical image evaluation method, system, computer equipment and storage medium |
CN114332023A (en) * | 2021-12-30 | 2022-04-12 | 上海市嘉定区中心医院 | Pneumothorax automatic diagnosis and crisis early warning method, device, equipment and storage medium |
CN116205967A (en) * | 2023-04-27 | 2023-06-02 | 中国科学院长春光学精密机械与物理研究所 | Medical image semantic segmentation method, device, equipment and medium |
CN117635613A (en) * | 2024-01-25 | 2024-03-01 | 武汉大学人民医院(湖北省人民医院) | Fundus focus monitoring device and method |
CN117635613B (en) * | 2024-01-25 | 2024-04-16 | 武汉大学人民医院(湖北省人民医院) | Fundus focus monitoring device and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111047591A (en) | Focal volume measuring method, system, terminal and storage medium based on deep learning | |
CN107622492B (en) | Lung fissure segmentation method and system | |
CN111402260A (en) | Medical image segmentation method, system, terminal and storage medium based on deep learning | |
US7283652B2 (en) | Method and system for measuring disease relevant tissue changes | |
CA2737668C (en) | Method and system for measuring tissue damage and disease risk | |
JP5081390B2 (en) | Method and system for monitoring tumor burden | |
US8050734B2 (en) | Method and system for performing patient specific analysis of disease relevant changes of a disease in an anatomical structure | |
CN108348204B (en) | Generating a lung condition map | |
US20030095692A1 (en) | Method and system for lung disease detection | |
CN110969623B (en) | Lung CT multi-symptom automatic detection method, system, terminal and storage medium | |
CN108038875B (en) | Lung ultrasonic image identification method and device | |
CN111080584A (en) | Quality control method for medical image, computer device and readable storage medium | |
CN111340756B (en) | Medical image lesion detection merging method, system, terminal and storage medium | |
WO2021073120A1 (en) | Method and device for marking lung area shadows in medical image, server, and storage medium | |
GB2451416A (en) | ROI-based assessment of abnormality using transformation invariant features | |
CN113706435A (en) | Chest enhanced CT image processing method based on traditional image omics | |
US20220284578A1 (en) | Image processing for stroke characterization | |
US20220148727A1 (en) | Cad device and method for analysing medical images | |
CN113223015A (en) | Vascular wall image segmentation method, device, computer equipment and storage medium | |
CN114332132A (en) | Image segmentation method and device and computer equipment | |
CN115954101A (en) | Health degree management system and management method based on AI tongue diagnosis image processing | |
CN116091466A (en) | Image analysis method, computer device, and storage medium | |
CN108399354A (en) | The method and apparatus of Computer Vision Recognition tumour | |
US9436889B2 (en) | Image processing device, method, and program | |
CN110992312B (en) | Medical image processing method, medical image processing device, storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200421 |
|