CN109166104A - Lesion detection method, apparatus, and device - Google Patents
Lesion detection method, apparatus, and device
- Publication number
- CN109166104A (application number CN201810866045.5A)
- Authority
- CN
- China
- Prior art keywords
- images
- lesion
- layer
- convolutional
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention discloses a lesion detection method, apparatus, and device. The method includes: processing a CT image with an autoencoder convolutional neural network model; performing feature extraction on the CT image through the convolutional and pooling layers of the encoder network to obtain the lesion candidate regions of the CT image; processing the lesion candidate regions through the upsampling and convolutional layers of the decoder network to obtain the lesion center positions of the CT image; and processing the CT image with determined lesion center positions through a probability output layer to obtain the lesion extent and lesion type information of the CT image. Because the computation of the decoder network is based on the convolution results of the encoder network, the application does not need to repeat convolution computations, which saves convolution processing time and, while guaranteeing lesion detection accuracy, improves lesion detection efficiency to a certain extent.
Description
Technical field
This application relates to the field of medical image processing, and in particular to a lesion detection method, apparatus, and device.
Background art
Cancer is among the diseases that pose the greatest threat to human health and life today. A common diagnostic technique is computed tomography (CT), which is generally used to determine whether a lesion has appeared in an organ and to judge the lesion type (i.e., benign or malignant). This facilitates the early diagnosis and treatment of various cancers and can therefore reduce patient mortality.
For example, lung cancer is one of the cancers with the fastest-growing incidence and mortality. Lung cancer usually develops from pulmonary nodules, so doctors analyze whether a patient's pulmonary nodules are benign or malignant, which facilitates the early diagnosis and treatment of lung cancer.
In clinical practice, a fully convolutional neural network model first performs convolution computation on the patient's original CT image to extract the lesion candidate regions of the CT image; then the CT image annotated with candidate regions is used as the input of another convolutional neural network model, convolution computation is performed again, and the lesion detection result is finally obtained.
Because this existing approach requires two rounds of convolution computation that contain duplicated work, and convolution computation is itself a process with large system-performance and time costs, current lesion detection methods are inefficient.
Summary of the invention
This application provides a lesion detection method, apparatus, and device that can improve the efficiency of lesion detection.
In a first aspect, this application provides a lesion detection method, the method comprising:
obtaining a computed tomography (CT) image;
taking the CT image as the input of an autoencoder convolutional neural network model, wherein the autoencoder convolutional neural network model consists of an encoder network, a decoder network, and a probability output layer, the encoder network is a stack of convolutional layers and pooling layers, and the decoder network is a stack of upsampling layers and convolutional layers;
performing feature extraction on the CT image through the convolutional and pooling layers of the encoder network to obtain the lesion candidate regions of the CT image;
processing the lesion candidate regions of the CT image through the upsampling and convolutional layers of the decoder network to obtain the lesion center positions of the CT image;
processing the CT image with determined lesion center positions through the probability output layer to obtain the lesion extent and lesion type information of the CT image.
In one implementation, the method further comprises:
constructing a three-dimensional model of the CT image according to its lesion center positions, lesion extent, and lesion type information;
creating a virtual reality scene containing the three-dimensional model.
In one implementation, processing the CT image with determined lesion center positions through the probability output layer to obtain the lesion extent and lesion type information of the CT image comprises:
classifying each pixel of the CT image with determined lesion center positions through the probability output layer to obtain a classification result for each pixel;
determining, according to the classification results of the pixels, the lesion extents of the CT image and the lesion type information corresponding to each lesion extent.
In one implementation, the CT image is a lung CT image; the encoder network is a stack of 4 groups, each group consisting of 2 3*3 convolutional layers and 1 pooling layer, and the decoder network is a stack of 4 groups, each group consisting of 1 upsampling layer and 2 3*3 convolutional layers.
In one implementation, the lesion type information includes solid nodule, part-solid nodule, ground-glass nodule, and normal tissue.
In a second aspect, this application also provides a lesion detection apparatus, the apparatus comprising:
an acquisition module, for obtaining a computed tomography (CT) image;
an input module, for taking the CT image as the input of an autoencoder convolutional neural network model, wherein the autoencoder convolutional neural network model consists of an encoder network, a decoder network, and a probability output layer, the encoder network is a stack of convolutional layers and pooling layers, and the decoder network is a stack of upsampling layers and convolutional layers;
an extraction module, for performing feature extraction on the CT image through the convolutional and pooling layers of the encoder network to obtain the lesion candidate regions of the CT image;
a first processing module, for processing the lesion candidate regions of the CT image through the upsampling and convolutional layers of the decoder network to obtain the lesion center positions of the CT image;
a second processing module, for processing the CT image with determined lesion center positions through the probability output layer to obtain the lesion extent and lesion type information of the CT image.
In one implementation, the apparatus further comprises:
a construction module, for constructing a three-dimensional model of the CT image according to its lesion center positions, lesion extent, and lesion type information;
a creation module, for creating a virtual reality scene containing the three-dimensional model.
In one implementation, the second processing module comprises:
a classification submodule, for classifying each pixel of the CT image with determined lesion center positions through the probability output layer to obtain a classification result for each pixel;
a determination submodule, for determining, according to the classification results of the pixels, the lesion extents of the CT image and the lesion type information corresponding to each lesion extent.
In a third aspect, this application also provides a computer-readable storage medium storing instructions that, when run on a terminal device, cause the terminal device to execute the lesion detection method described above.
In a fourth aspect, this application also provides a lesion detection device, comprising: a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the computer program, the lesion detection method described above is realized.
In the lesion detection method provided by the embodiments of this application, a CT image is processed with an autoencoder convolutional neural network model: feature extraction through the convolutional and pooling layers of the encoder network yields the lesion candidate regions of the CT image; processing the candidate regions through the upsampling and convolutional layers of the decoder network yields the lesion center positions of the CT image; and processing the CT image with determined lesion center positions through the probability output layer yields the lesion extent and lesion type information of the CT image. In the embodiments of this application, the input of the decoder network is the lesion candidate regions determined by the convolution computations of the encoder network; that is, the computation of the decoder network is based on the convolution results of the encoder network. The embodiments of this application therefore do not need to repeat convolution computations, which saves convolution processing time and, compared with the prior art, improves lesion detection efficiency to a certain extent while guaranteeing lesion detection accuracy.
Brief description of the drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative labor.
Fig. 1 is a flowchart of a lesion detection method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of an autoencoder convolutional neural network model provided by an embodiment of the present invention;
Fig. 3 is a flowchart of another lesion detection method provided by an embodiment of the present invention;
Fig. 4 is a flowchart of a pulmonary nodule detection method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of an autoencoder convolutional neural network model for pulmonary nodule detection provided by an embodiment of the present invention;
Fig. 6 is a structural schematic diagram of a lesion detection apparatus provided by an embodiment of the present invention;
Fig. 7 is a structural schematic diagram of a lesion detection device provided by an embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of this application are described clearly and completely below in conjunction with the drawings in the embodiments of this application. Obviously, the described embodiments are only a part of the embodiments of this application, not all of them. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments in this application and without creative effort, shall fall within the protection scope of this application.
With the development of detection techniques for various diseases (such as cancer), techniques for detecting a patient's lesion location, extent, type, and so on are also constantly improving. Currently, the detection of such information can be completed with a fully convolutional neural network. Specifically, convolution computation is first performed on the patient's original CT image with a fully convolutional neural network model to extract the lesion candidate regions of the CT image; then the CT image annotated with candidate regions is used as the input of another convolutional neural network model, another round of convolution computation is performed, and the final lesion detection result is obtained.
However, as the steps above show, the existing lesion detection method based on fully convolutional neural networks requires two rounds of convolution computation, one on the original CT image and one on the CT image annotated with candidate regions. The two images clearly share much content, so the two rounds contain considerable repeated computation, and convolution computation is itself a data processing step with large system-performance and time costs. Current lesion detection methods therefore suffer from low efficiency.
To improve the efficiency of lesion detection, the embodiments of this application provide a lesion detection method in which a CT image is processed with an autoencoder convolutional neural network model: feature extraction on the CT image through the convolutional and pooling layers of the encoder network obtains the lesion candidate regions of the CT image; processing the candidate regions through the upsampling and convolutional layers of the decoder network obtains the lesion center positions of the CT image; and processing the CT image with determined lesion center positions through the probability output layer obtains the lesion extent and lesion type information of the CT image. In the embodiments of this application, the input of the decoder network is the lesion candidate regions determined by the convolution computations of the encoder network; that is, the computation of the decoder network is based on the convolution results of the encoder network, so the embodiments of this application do not need to repeat convolution computations. This saves convolution processing time and, compared with the prior art, improves lesion detection efficiency to a certain extent while guaranteeing lesion detection accuracy.
Method embodiment one
Referring to Fig. 1, a flowchart of a lesion detection method provided by the embodiments of this application, the method comprises:
S101: obtaining a computed tomography (CT) image.
In the embodiments of this application, before lesion detection is performed, a CT image is first obtained as the lesion detection object. The CT image may be an image of the patient's organ with a lesion to be detected, such as a lung CT image, a thyroid CT image, or a brain CT image. Specifically, the embodiments of this application can detect pulmonary nodules in lung CT images, thyroid nodules in thyroid CT images, or cerebral ischemia, necrosis, and the like in brain CT images; no specific lesion detection object is limited here.
In practice, the CT image used as the lesion detection object may be a CT image obtained from a database in which it was stored in advance, or a CT image obtained by performing a CT scan on the patient in real time; no restriction applies here.
In an optional embodiment, to reduce the interference of noise and the like in the CT image on subsequent lesion detection, an image preprocessing operation may be performed on the CT image after it is obtained. For example, techniques such as Gaussian filtering can be applied to the CT image to eliminate noise interference.
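The Gaussian-filtering preprocessing mentioned above could look like the following sketch. This is an illustrative implementation, not code from the patent; a real pipeline would more likely call a library routine (such as scipy.ndimage.gaussian_filter), and the sigma value here is an arbitrary assumption.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_denoise(image, sigma=1.0):
    """Separable Gaussian blur: filter along rows, then along columns."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    padded = np.pad(image, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

Because the 2-D Gaussian is separable, two 1-D passes are cheaper than one full 2-D convolution, which matters on large CT slices.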
S102: taking the CT image as the input of the autoencoder convolutional neural network model.
The autoencoder convolutional neural network model consists of an encoder network, a decoder network, and a probability output layer; the encoder network is a stack of convolutional layers and pooling layers, and the decoder network is a stack of upsampling layers and convolutional layers.
In the embodiments of this application, the autoencoder convolutional neural network model is constructed in advance and is used to process the CT image, finally yielding the lesion detection result. Referring to Fig. 2, a schematic diagram of an autoencoder convolutional neural network model provided by the embodiments of this application: the model consists of an encoder network, a decoder network, and a probability output layer, where the encoder network is a stack of several convolutional layers and several pooling layers, and the decoder network is a stack of several upsampling layers and several convolutional layers. Specifically, the structural design of the encoder and decoder networks can be adjusted according to the lesion detection object; for example, lesion detection in lung CT images and in brain CT images may use encoder and decoder networks of different structural designs.
In practice, before the pre-constructed autoencoder convolutional neural network model is used to process lesion detection objects, it is first trained with a large number of samples. Specifically, professionals (such as medical practitioners) may provide samples and annotate them. To increase sample diversity, the embodiments of this application may also apply geometric transformations to the samples, such as rotation, translation, and scaling, to augment the sample set. In addition, the autoencoder convolutional neural network model can be trained with the backpropagation (BP) algorithm, adjusting the hyperparameters of the network (such as the learning rate, number of hidden layers, convolution kernel size, and activation function) so that the loss function reaches a minimum, finally obtaining a model with good generalization ability. Since backpropagation is a common model training method, it is not described in detail here.
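The geometric sample augmentation described above (rotation, translation, scaling) might be sketched as follows. The particular set of transforms, including mirror flips and a wrap-around shift as a cheap translation stand-in, is an assumption for illustration, not the patent's prescription.

```python
import numpy as np

def augment(sample):
    """Return simple geometric variants of a 2-D sample to enlarge
    the training set: rotations, mirror flips, a small translation,
    and a 2x nearest-neighbour scaling."""
    variants = [sample]
    for k in (1, 2, 3):                                 # 90/180/270 degree rotations
        variants.append(np.rot90(sample, k))
    variants.append(np.fliplr(sample))                  # horizontal mirror
    variants.append(np.flipud(sample))                  # vertical mirror
    variants.append(np.roll(sample, shift=2, axis=1))   # wrap-around translation
    variants.append(np.repeat(np.repeat(sample, 2, 0), 2, 1))  # 2x scaling
    return variants
```

For segmentation-style training, the same transform must of course be applied to the annotation mask as to the image.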
In practice, the CT image obtained in S101 is taken as the input of the trained autoencoder convolutional neural network model, and after processing, the lesion detection result of the CT image is finally obtained.
S103: performing feature extraction on the CT image through the convolutional and pooling layers of the encoder network to obtain the lesion candidate regions of the CT image.
The encoder network of the autoencoder convolutional neural network model consists of convolutional layers and pooling layers; its role is to transform the low-dimensional features of the CT image into a high-dimensional feature representation. Specifically, the convolutional layers in the encoder network are stacked. For example, if convolutional layer 1-1 and convolutional layer 1-2 are stacked, the CT image first passes through convolutional layer 1-1 and then enters convolutional layer 1-2, and so on, layer by layer, until processing of the CT image is complete. The convolutional layers that process the CT image first (such as convolutional layer 1-1) extract its simple features, such as points and lines; the subsequent convolutional layers (such as convolutional layer 1-2) extract its complex features, such as pulmonary nodule shape features.
In practice, the convolutional and pooling layers of the encoder network perform feature extraction on the input CT image to obtain its lesion candidate regions. Specifically, the pooling layers shrink the feature maps of the CT image while retaining their salient features, making the subsequent convolution computations of the encoder network more efficient.
S104: processing the lesion candidate regions of the CT image through the upsampling and convolutional layers of the decoder network to obtain the lesion center positions of the CT image.
The decoder network of the autoencoder convolutional neural network model consists of upsampling layers and convolutional layers; its role is to determine the lesion positions in the CT image.
In practice, the decoder network is connected to the encoder network: the output of the encoder network, the lesion candidate regions of the CT image, is at the same time the input of the decoder network. Specifically, the upsampling and convolutional layers in the decoder network further process the lesion candidate regions to obtain the lesion center positions of the CT image. The upsampling layers enlarge the candidate regions that were shrunk by the pooling layers in the encoder network, and convolution computation on the enlarged candidate regions determines the lesion center positions within them.
Because the input of the decoder network is the lesion candidate regions determined by the convolution computations of the encoder network, that is, because the computation of the decoder network is based on the convolution results of the encoder network, the embodiments of this application do not need to repeat convolution computations. This saves convolution processing time and improves lesion detection efficiency to a certain extent.
S105: processing the CT image with determined lesion center positions through the probability output layer to obtain the lesion extent and lesion type information of the CT image.
The probability output layer is connected to the decoder network: the CT image with determined lesion center positions is the output of the decoder network and at the same time the input of the probability output layer. The role of the probability output layer is to determine the lesion extent and lesion type information in the CT image. Specifically, the probability output layer classifies each pixel in the CT image with determined lesion center positions to obtain a classification result for each pixel; then, according to the classification results of the pixels, the lesion extents of the CT image and the lesion type information corresponding to each lesion extent are determined.
Specifically, the classification result of each pixel can be determined from its pixel value. Suppose pixels in a first value range correspond to the classification result 0, pixels in a second value range correspond to the classification result 1, and so on; the correspondence between multiple pixel value ranges and classification results can be determined in advance. After the value corresponding to each pixel is obtained, pixels with the same value are determined to belong to the same classification result. If that classification result is a malignant lesion, the connected region formed by the pixels corresponding to that classification result is a lesion extent of the CT image, and the lesion type information corresponding to that extent is the malignant lesion type. It is worth noting that a CT image may contain multiple lesion extents, that these extents may be discontinuous, and that each lesion extent has its own corresponding lesion type information, such as malignant lesion or normal tissue.
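The value-range-to-class correspondence described above might be sketched as follows. The threshold values and the class meanings are hypothetical; the patent only says that the ranges and their classification results are determined in advance:

```python
import numpy as np

# Hypothetical correspondence between pixel-value ranges and classes,
# e.g. 0 = normal tissue, 1 = benign lesion, 2 = malignant lesion.
THRESHOLDS = [0.33, 0.66]   # boundaries of the value ranges (assumed)

def classify_pixels(prob_map):
    """Map each pixel value to a class index via the predefined ranges."""
    return np.digitize(prob_map, THRESHOLDS)

def lesion_extent(class_map, target_class):
    """Pixels sharing the target class form a lesion extent; return the
    boolean mask of that extent and its pixel count."""
    mask = class_map == target_class
    return mask, int(mask.sum())
```

A fuller implementation would additionally split the mask into connected components, since the text notes that one image may contain several discontinuous extents.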
In the lesion detection method provided by the embodiments of this application, a CT image is processed with an autoencoder convolutional neural network model: feature extraction through the convolutional and pooling layers of the encoder network yields the lesion candidate regions of the CT image; processing the candidate regions through the upsampling and convolutional layers of the decoder network yields the lesion center positions of the CT image; and processing the CT image with determined lesion center positions through the probability output layer yields the lesion extent and lesion type information of the CT image. In the embodiments of this application, the input of the decoder network is the lesion candidate regions determined by the convolution computations of the encoder network; that is, the computation of the decoder network is based on the convolution results of the encoder network. The embodiments of this application therefore do not need to repeat convolution computations, which saves convolution processing time and, compared with the prior art, improves lesion detection efficiency to a certain extent while guaranteeing lesion detection accuracy.
Method embodiment two
After lesion detection yields the lesion center positions, lesion extent, lesion type information, and so on, a doctor needs to further diagnose the patient according to these detection results. To help the doctor understand the patient's detection results more intuitively and determine the subsequent treatment more accurately on that basis, referring to Fig. 3, the embodiments of this application further provide the following steps on the basis of S101-S105 of method embodiment one:
S301: constructing a three-dimensional model of the CT image according to its lesion center positions, lesion extent, and lesion type information.
After the lesion center positions, lesion extents, and lesion types of the CT image are detected by method embodiment one, a three-dimensional model of the CT image is constructed from these detection results.
Taking the construction of a three-dimensional model of a lung CT image as an example: first, the redundant background information in the lung CT image is removed to obtain the lung tissue structure; second, the contour of the lung tissue structure is extracted to generate vector contour data; then curve reconstruction on the vector contour data generates surfaces; finally, surface-to-solid conversion of the generated surfaces yields the three-dimensional model. Background information can be removed by edge detection or threshold transformation; to improve the speed and precision of background removal, the embodiments of this application may remove the background information using a threshold transformation based on Otsu's method.
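Otsu's method, named above as the basis of the threshold transformation, picks the threshold that maximizes the between-class variance of the two pixel groups it separates. A minimal sketch (the histogram bin count is an assumption):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: choose the threshold maximizing the between-class
    variance of the background/foreground split."""
    hist, edges = np.histogram(image, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    mu_total = (hist * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0 = cum = 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        cum += hist[i] * centers[i]
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum / w0, (mu_total - cum) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

def remove_background(image):
    """Zero out pixels at or below Otsu's threshold (assumed background)."""
    return np.where(image > otsu_threshold(image), image, 0.0)
```

On a roughly bimodal image (dark background, brighter tissue), this lands the threshold between the two modes without any manually tuned value.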
S302: creating a virtual reality scene containing the three-dimensional model.
In the embodiments of this application, any of the existing ways of constructing virtual reality scenes can be used for scene construction; this is not discussed in detail here.
Based on the successfully created virtual reality scene containing the three-dimensional model of the CT image, a doctor can enter the interior of the organ in an immersive way through a virtual reality device (such as VR glasses) to observe the lesion. Specifically, the lesion can be observed carefully from different angles through operations such as scaling and rotation, which helps the doctor understand the lesion detection result more intuitively and determine the subsequent treatment more accurately.
In addition, a less experienced doctor can ask an experienced doctor to assist in diagnosis through a virtual reality remote assistance function, thereby improving diagnostic accuracy. Specifically, the experienced doctor can likewise observe the interior of the organ through a virtual reality device to understand the lesion detection result and help the less experienced doctor give a more accurate diagnostic result.
As is well known, lung cancer is one of the malignant tumors with the fastest-growing incidence and mortality, and lung cancer usually develops from pulmonary nodules. At present, doctors generally examine lung CT images with the naked eye and judge whether pulmonary nodules are benign or malignant from personal experience. As the number of patients grows, analyzing lung CT images has become difficult and labor-intensive work for doctors.
For this reason, the embodiments of this application apply the lesion detection method provided by method embodiments one and two specifically to the detection of pulmonary nodules. Referring to Fig. 4, the method specifically comprises:
S401: Obtain a lung CT image.
S402: Use the lung CT image as the input of a self-encoding convolutional neural network model.
Here the self-encoding convolutional neural network model consists of an encoder network, a decoder network and a probability output layer. The encoder network is a stack of 4 groups, each group consisting of 2 3×3 convolutional layers and 1 pooling layer; the decoder network is a stack of 4 groups, each group consisting of 1 upsampling layer and 2 3×3 convolutional layers.
Fig. 5 is a schematic diagram of a self-encoding convolutional neural network model for pulmonary nodule detection provided by an embodiment of the present application. Here the pooling function is implemented with max-pooling layers. Specifically, two convolutional layers and one max-pooling layer form one group, and four such groups form the encoder network; one upsampling layer and two convolutional layers form one group, and four such groups form the decoder network. The output of the max-pooling layer in the last group of the encoder network serves as the input of the upsampling layer in the first group of the decoder network, and the output of the convolutional layer in the last group of the decoder network serves as the input of the probability output layer (a Softmax layer).
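As a sanity check on the stacked structure just described, the following sketch traces the feature-map size through the four encoder groups and four decoder groups. The 512×512 input size and the use of 'same'-padded 3×3 convolutions are assumptions of mine, not stated in the patent; under those assumptions the convolutions leave the spatial size unchanged, each max-pool halves it, and each upsampling doubles it:

```python
def trace_shapes(h, w, groups=4):
    """Spatial size after each group of the stacked encoder-decoder model:
    encoder group = 2 conv3x3 ('same' padding) + 1 max-pool (stride 2),
    decoder group = 1 upsample (x2) + 2 conv3x3 ('same' padding)."""
    trace = [("input", h, w)]
    for g in range(groups):      # convs keep h, w; pooling halves them
        h, w = h // 2, w // 2
        trace.append((f"encoder_{g + 1}", h, w))
    for g in range(groups):      # upsampling doubles h, w; convs keep them
        h, w = h * 2, w * 2
        trace.append((f"decoder_{g + 1}", h, w))
    return trace
```

For a 512×512 slice the encoder bottleneck would be 32×32 and the decoder would restore 512×512, so the Softmax layer can emit a per-pixel classification at the input resolution, matching step S405.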
S403: Perform feature extraction on the lung CT image through the convolutional layers and pooling layers of the encoder network to obtain the pulmonary nodule candidate regions of the lung CT image.
The convolutional layers and max-pooling layer in each group of the encoder network perform feature extraction on the lung CT image; after feature extraction through the successive groups, the pulmonary nodule candidate regions of the lung CT image are finally obtained.
S404: Process the pulmonary nodule candidate regions of the lung CT image through the upsampling layers and convolutional layers of the decoder network to obtain the pulmonary nodule center positions of the lung CT image.
The upsampling layer and convolutional layers in each group of the decoder network process the pulmonary nodule candidate regions; after processing through the successive groups, the pulmonary nodule center positions are finally obtained.
S405: Process the lung CT image with the determined pulmonary nodule center positions through the probability output layer to obtain the pulmonary nodule extent and pulmonary nodule type information of the lung CT image.
The pulmonary nodule types predefined in the probability output layer are solid nodule, part-solid nodule, ground-glass nodule and normal tissue. After the probability output layer further processes the lung CT image with the determined nodule center positions, the nodule extent and corresponding nodule type of each pulmonary nodule in the lung CT image are determined. For example, the detection result may include the position and size of a pulmonary nodule and whether it is a solid, part-solid or ground-glass nodule.
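The per-pixel classification performed by the probability output layer in S405 can be sketched as follows. The class ordering and the use of a pixel count as a stand-in for "nodule extent" are illustrative assumptions, not details from the patent:

```python
# Hypothetical class order; the patent only names the four categories.
CLASSES = ["normal_tissue", "solid", "part_solid", "ground_glass"]

def decode_softmax(prob_map):
    """prob_map: H x W x 4 per-pixel class probabilities from the Softmax
    layer. Returns the per-pixel label map plus, for each nodule type,
    the number of pixels assigned to it (a crude proxy for extent)."""
    labels = [
        [max(range(len(CLASSES)), key=lambda c: px[c]) for px in row]
        for row in prob_map
    ]
    extent = {name: 0 for name in CLASSES[1:]}
    for row in labels:
        for c in row:
            if c != 0:                  # skip normal tissue
                extent[CLASSES[c]] += 1
    return labels, extent
```

A connected-component pass over the label map would then yield each nodule's position and size as described above.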
S406: Construct the three-dimensional model of the lung CT image according to the pulmonary nodule center positions, nodule extents and nodule type information of the lung CT image.
S407: Create a virtual reality scene containing the three-dimensional model.
The construction of the three-dimensional model and the creation of the virtual reality scene are not described again here; refer to method embodiment two.
In the pulmonary nodule detection method provided by this embodiment of the present application, a self-encoding convolutional neural network model designed for pulmonary nodule detection processes the lung CT image and finally produces the pulmonary nodule detection result. Because the output of the encoder network's max-pooling layer serves directly as the input of the decoder network's upsampling layer, that is, the decoder network's computation builds on the encoder network's convolution results, this embodiment does not need to repeat convolution computation and saves convolution processing time. Compared with the prior art, pulmonary nodule detection efficiency is therefore improved to a certain extent while detection accuracy is maintained.
In addition, building a virtual reality scene based on the pulmonary nodule detection result helps the doctor understand the result more intuitively and determine the subsequent treatment plan more accurately.
Device embodiments
Referring to Fig. 6, a structural diagram of a lesion detection device provided in this embodiment, the device includes:
an obtaining module 601, configured to obtain a computed tomography (CT) image;
an input module 602, configured to use the CT image as the input of a self-encoding convolutional neural network model, wherein the self-encoding convolutional neural network model consists of an encoder network, a decoder network and a probability output layer, the encoder network is a stack of convolutional layers and pooling layers, and the decoder network is a stack of upsampling layers and convolutional layers;
an extraction module 603, configured to perform feature extraction on the CT image through the convolutional layers and pooling layers of the encoder network to obtain the lesion candidate regions of the CT image;
a first processing module 604, configured to process the lesion candidate regions of the CT image through the upsampling layers and convolutional layers of the decoder network to obtain the lesion center positions of the CT image;
a second processing module 605, configured to process the CT image with the determined lesion center positions through the probability output layer to obtain the lesion extent and lesion type information of the CT image.
In one embodiment, the device may further include:
a construction module, configured to construct the three-dimensional model of the CT image according to the lesion center positions, lesion extents and lesion type information of the CT image;
a creation module, configured to create a virtual reality scene containing the three-dimensional model.
The second processing module may include:
a classification submodule, configured to classify each pixel of the CT image with the determined lesion center positions through the probability output layer to obtain the classification result of each pixel;
a determination submodule, configured to determine, according to the classification results of the pixels, the lesion extents of the CT image and the lesion type information corresponding to each lesion extent.
The lesion detection device provided by this embodiment of the present application processes a CT image using a self-encoding convolutional neural network model: the convolutional layers and pooling layers of the encoder network extract features from the CT image to obtain its lesion candidate regions; the upsampling layers and convolutional layers of the decoder network process the candidate regions to obtain the lesion center positions; and the probability output layer processes the CT image with the determined center positions to obtain its lesion extent and lesion type information. Because the input of the decoder network is the lesion candidate regions determined by the encoder network's convolution computation, that is, the decoder network's computation builds on the encoder network's convolution results, this embodiment does not need to repeat convolution computation, saving convolution processing time. Compared with the prior art, lesion detection efficiency is improved to a certain extent while detection accuracy is maintained.
Correspondingly, an embodiment of the present invention further provides a lesion detection apparatus, shown in Fig. 7, which may include: a processor 701, a memory 702, an input device 703 and an output device 704. The number of processors 701 in the lesion detection apparatus may be one or more; Fig. 7 takes one processor as an example. In some embodiments of the invention, the processor 701, memory 702, input device 703 and output device 704 may be connected by a bus or other means; Fig. 7 takes a bus connection as an example.
The memory 702 may be used to store software programs and modules, and the processor 701 executes the various functional applications and data processing of the lesion detection apparatus by running the software programs and modules stored in the memory 702. The memory 702 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function, and the like. In addition, the memory 702 may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk, flash memory device or other solid-state storage component. The input device 703 may be used to receive input numeric or character information and to generate signal input related to user settings and function control of the lesion detection apparatus.
Specifically, in this embodiment the processor 701 may, according to the following instructions, load the executable files corresponding to the processes of one or more application programs into the memory 702 and run the application programs stored in the memory 702, thereby implementing the various functions of the lesion detection method described above.
The present application also provides a computer-readable storage medium storing instructions that, when run on a terminal device, cause the terminal device to execute the lesion detection method described above.
Since the device embodiments essentially correspond to the method embodiments, the relevant parts of the description of the method embodiments may be referred to. The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement them without creative effort.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise" and any other variants are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The lesion detection method, device and equipment provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principle and implementation of the present application, and the above descriptions are intended only to help understand the method of the present application and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. A lesion detection method, characterized in that the method comprises:
obtaining a computed tomography (CT) image;
using the CT image as the input of a self-encoding convolutional neural network model, wherein the self-encoding convolutional neural network model consists of an encoder network, a decoder network and a probability output layer, the encoder network is a stack of convolutional layers and pooling layers, and the decoder network is a stack of upsampling layers and convolutional layers;
performing feature extraction on the CT image through the convolutional layers and pooling layers of the encoder network to obtain lesion candidate regions of the CT image;
processing the lesion candidate regions of the CT image through the upsampling layers and convolutional layers of the decoder network to obtain lesion center positions of the CT image;
processing the CT image with the determined lesion center positions through the probability output layer to obtain the lesion extent and lesion type information of the CT image.
2. The method according to claim 1, characterized in that the method further comprises:
constructing a three-dimensional model of the CT image according to the lesion center positions, lesion extents and lesion type information of the CT image;
creating a virtual reality scene containing the three-dimensional model.
3. The method according to claim 1, characterized in that processing the CT image with the determined lesion center positions through the probability output layer to obtain the lesion extent and lesion type information of the CT image comprises:
classifying each pixel of the CT image with the determined lesion center positions through the probability output layer to obtain the classification result of each pixel;
determining, according to the classification results of the pixels, the lesion extents of the CT image and the lesion type information corresponding to each lesion extent.
4. The method according to claim 3, characterized in that the CT image is a lung CT image, the encoder network is a stack of 4 groups each consisting of 2 3×3 convolutional layers and 1 pooling layer, and the decoder network is a stack of 4 groups each consisting of 1 upsampling layer and 2 3×3 convolutional layers.
5. The method according to claim 4, characterized in that the lesion type information includes solid nodule, part-solid nodule, ground-glass nodule and normal tissue.
6. A lesion detection device, characterized in that the device comprises:
an obtaining module, configured to obtain a computed tomography (CT) image;
an input module, configured to use the CT image as the input of a self-encoding convolutional neural network model, wherein the self-encoding convolutional neural network model consists of an encoder network, a decoder network and a probability output layer, the encoder network is a stack of convolutional layers and pooling layers, and the decoder network is a stack of upsampling layers and convolutional layers;
an extraction module, configured to perform feature extraction on the CT image through the convolutional layers and pooling layers of the encoder network to obtain lesion candidate regions of the CT image;
a first processing module, configured to process the lesion candidate regions of the CT image through the upsampling layers and convolutional layers of the decoder network to obtain lesion center positions of the CT image;
a second processing module, configured to process the CT image with the determined lesion center positions through the probability output layer to obtain the lesion extent and lesion type information of the CT image.
7. The device according to claim 6, characterized in that the device further comprises:
a construction module, configured to construct a three-dimensional model of the CT image according to the lesion center positions, lesion extents and lesion type information of the CT image;
a creation module, configured to create a virtual reality scene containing the three-dimensional model.
8. The device according to claim 6, characterized in that the second processing module comprises:
a classification submodule, configured to classify each pixel of the CT image with the determined lesion center positions through the probability output layer to obtain the classification result of each pixel;
a determination submodule, configured to determine, according to the classification results of the pixels, the lesion extents of the CT image and the lesion type information corresponding to each lesion extent.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions that, when run on a terminal device, cause the terminal device to execute the lesion detection method according to any one of claims 1-5.
10. A lesion detection apparatus, characterized by comprising: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the lesion detection method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810866045.5A CN109166104A (en) | 2018-08-01 | 2018-08-01 | A kind of lesion detection method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109166104A true CN109166104A (en) | 2019-01-08 |
Family
ID=64898617
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810866045.5A Pending CN109166104A (en) | 2018-08-01 | 2018-08-01 | A kind of lesion detection method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109166104A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109965829A (en) * | 2019-03-06 | 2019-07-05 | 重庆金山医疗器械有限公司 | Imaging optimization method, image processing apparatus, imaging device and endoscopic system |
CN109965829B (en) * | 2019-03-06 | 2022-05-06 | 重庆金山医疗技术研究院有限公司 | Imaging optimization method, image processing apparatus, imaging apparatus, and endoscope system |
CN110211200A (en) * | 2019-04-22 | 2019-09-06 | 深圳安科高技术股份有限公司 | A kind of arch wire generation method and its system based on nerual network technique |
CN110210234A (en) * | 2019-04-23 | 2019-09-06 | 平安科技(深圳)有限公司 | The moving method of medical information, device, computer equipment and storage medium when changing the place of examination |
CN110223279B (en) * | 2019-05-31 | 2021-10-08 | 上海商汤智能科技有限公司 | Image processing method and device and electronic equipment |
CN110223279A (en) * | 2019-05-31 | 2019-09-10 | 上海商汤智能科技有限公司 | A kind of image processing method and device, electronic equipment |
CN110232686A (en) * | 2019-06-19 | 2019-09-13 | 东软医疗系统股份有限公司 | Acquisition methods, device, CT equipment and the storage medium of Lung neoplasm follow-up image |
CN110349162A (en) * | 2019-07-17 | 2019-10-18 | 苏州大学 | A kind of more lesion image partition methods of macular edema |
CN110969632A (en) * | 2019-11-28 | 2020-04-07 | 北京推想科技有限公司 | Deep learning model training method, image processing method and device |
CN110969632B (en) * | 2019-11-28 | 2020-09-08 | 北京推想科技有限公司 | Deep learning model training method, image processing method and device |
CN111598882A (en) * | 2020-05-19 | 2020-08-28 | 联想(北京)有限公司 | Organ detection method and device and computer equipment |
CN111598882B (en) * | 2020-05-19 | 2023-11-24 | 联想(北京)有限公司 | Organ detection method, organ detection device and computer equipment |
CN112150411A (en) * | 2020-08-28 | 2020-12-29 | 刘军 | Information processing method and device |
CN112150411B (en) * | 2020-08-28 | 2023-12-15 | 刘军 | Information processing method and device |
CN112116603A (en) * | 2020-09-14 | 2020-12-22 | 中国科学院大学宁波华美医院 | Pulmonary nodule false positive screening method based on multitask learning |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 110179 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province; Applicant after: DongSoft Medical System Co., Ltd. Address before: 110179 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province; Applicant before: Dongruan Medical Systems Co., Ltd., Shenyang
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190108