CN111798424B - Medical image-based nodule detection method and device and electronic equipment

Medical image-based nodule detection method and device and electronic equipment

Info

Publication number
CN111798424B
CN111798424B (application number CN202010621972.8A)
Authority
CN
China
Prior art keywords
nodule
network
medical image
nodules
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010621972.8A
Other languages
Chinese (zh)
Other versions
CN111798424A (en
Inventor
张佳琦
吕晨翀
丁佳
王子腾
孙安澜
胡阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Yizhun Intelligent Technology Co ltd
Original Assignee
Guangxi Yizhun Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Yizhun Intelligent Technology Co ltd filed Critical Guangxi Yizhun Intelligent Technology Co ltd
Priority to CN202010621972.8A priority Critical patent/CN111798424B/en
Publication of CN111798424A publication Critical patent/CN111798424A/en
Application granted granted Critical
Publication of CN111798424B publication Critical patent/CN111798424B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 Pattern recognition
            • G06F 18/20 Analysing
              • G06F 18/24 Classification techniques
                • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                  • G06F 18/2413 Classification techniques relating to the classification model based on distances to training or reference patterns
                    • G06F 18/24133 Distances to prototypes
                      • G06F 18/24137 Distances to cluster centroïds
                    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
              • G06F 18/25 Fusion techniques
                • G06F 18/253 Fusion techniques of extracted features
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/045 Combinations of networks
              • G06N 3/08 Learning methods
                • G06N 3/084 Backpropagation, e.g. using gradient descent
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0012 Biomedical image inspection
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/11 Region-based segmentation
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10072 Tomographic images
                • G06T 2207/10081 Computed x-ray tomography [CT]
                • G06T 2207/10088 Magnetic resonance imaging [MRI]
                • G06T 2207/10104 Positron emission tomography [PET]
              • G06T 2207/10116 X-ray image
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30061 Lung
                  • G06T 2207/30064 Lung nodule

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

A medical image-based nodule detection method takes an image sequence derived from a medical image as input, locates candidate nodules through a coarse detection network and outputs a first prediction score; the image sequences corresponding to the candidate nodules located by the coarse detection network are then classified by a false positive suppression network to obtain pending nodules and their second prediction scores; a single-nodule feature is constructed for each pending nodule, a global feature is constructed from the single-nodule features of the pending nodules, the single-nodule feature and the global feature are spliced into a fused feature on which regression training is performed, and a third prediction score is output to complete the nodule detection result. This addresses the poor performance of prior-art methods on ground-glass nodules and nodules in special locations.

Description

Medical image-based nodule detection method and device and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a medical image-based nodule detection method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Lung cancer is one of the malignant tumors that most seriously threatens human life and health today. Early diagnosis and treatment of lung cancer can raise a patient's five-year survival rate from 14% to 49%, so early detection and diagnosis are key to improving survival. Nodules are regarded as one of the most important early signs of lung cancer, the nature of a lesion can be inferred from a nodule's imaging features, and CT examination is considered the best tool for finding and diagnosing lung nodules. Because of the uncertainty of characteristics such as nodule size, shape and density, traditional manual reading cannot meet the detection requirements in either workload or precision, and the need for Computer Aided Detection (CAD) has become acute.
Early researchers mainly extracted features by hand and then classified the resulting feature vectors with various machine learning methods. However, hand-crafted features have limited expressive power and require extra preprocessing to reduce the influence of irrelevant regions, so such methods are hard to apply in practice. In recent years, with the development of artificial intelligence and deep learning, image processing is increasingly involved in medical diagnosis. Influenced by the breakthroughs of deep neural networks, represented by CNNs, in image classification tasks, researchers have tried to extract features directly from the original CT images with CNNs, which characterizes the images themselves more objectively; such methods are becoming the mainstream of nodule detection.
Conventional lung nodule detection algorithms based on lung CT images currently detect isolated lung nodules well, but their performance on nodules in special locations, ground-glass opacities and other nodule types is not ideal. An isolated lung nodule has a clear boundary and an obvious position and is easy to detect, whereas nodules in the hilar (lung portal) region are mixed with blood vessels, bronchi and lymph node tissue, which easily leads to false positive judgments. For another category of lung nodules, such as Ground Glass Nodules (GGNs) whose main manifestation is a ground-glass opacity, conventional detection algorithms perform poorly. Histopathologically, the appearance of a GGN indicates that the lesion is still in an early, active or progressive stage, so timely and correct judgment of the morphology and properties of GGNs is very important for guiding treatment. In addition, lung nodules are varied in type and shape, and other tissues in the lung, such as thickened blood vessels, also present a sphere-like shape similar to how lung nodules appear in CT images; this is another difficulty a detection algorithm must address.
In some currently common deep-learning CAD methods, for example algorithms that extract targets from the original CT image and then detect them with the Region-based Convolutional Neural Network (RCNN) family, common problems are that target detection is easily affected by the large number of negative samples, false positive suppression is poor when classifying ground-glass nodules and other tissues of similar morphology, and detection of nodules in special positions is poor. Solving these problems in detection algorithms is of great significance for making full use of medical resources and relieving doctors' diagnostic pressure.
Disclosure of Invention
To address the poor performance on ground-glass nodules and nodules in special locations in the related art, the invention provides a medical image-based nodule detection method, which mainly comprises the following three stages:
S1, coarse detection stage: taking an image sequence derived from a medical image as input, locating candidate nodules through a coarse detection network and outputting a first prediction score;
S2, false positive suppression stage: classifying, through a false positive suppression network, the image sequences corresponding to the candidate nodules located by the coarse detection network, to obtain the pending nodules and their second prediction scores;
S3, global judgment stage: constructing a single-nodule feature for each pending nodule, constructing a global feature based on the single-nodule features of the pending nodules, splicing the single-nodule feature and the global feature into a fused feature, performing regression training, and outputting a third prediction score to complete nodule detection; wherein the single-nodule feature of a pending nodule is constructed from its feature parameters in the false positive suppression network combined with artificial features.
In the coarse detection stage, a feature pyramid network is used to integrate multi-scale information. The feature maps output by the shallow layers of the feature pyramid network contain more geometric information, so the target position is more accurate; the feature maps of the deep layers contain more semantic information, but their smaller size makes target localization coarser. The feature pyramid network combines semantic information with geometric/detail information, suiting the practical situation that nodules in medical images come in different sizes.
Another characteristic of the samples is that the absolute number of simple negative samples is large, and easily separable samples far outnumber hard samples among the negatives. The focal loss function (Focal Loss) is adopted for exactly this situation in object detection, where positive and negative samples are extremely unbalanced and the detection loss is easily dominated by a large number of simple negative samples. By reducing the weights of the easily separable samples, this function makes the model focus more on the hard samples during training. This hard example mining mechanism pays more attention to hard-to-distinguish samples and improves detection accuracy for complex types such as ground-glass nodules.
In the false positive suppression stage, the nodule candidates obtained by the coarse detection network are finely classified. A 3D convolutional neural network can further suppress false positives in the output of the coarse detection stage, eliminating non-nodule regions as far as possible. A typical 3D convolutional neural network includes an input layer, convolutional layers, pooling layers, a Dropout layer, fully connected layers and an output layer, where the convolutional layers use 3D convolution kernels. After the candidate nodules pass through the 3D convolutional neural network, a second prediction score is given to represent the lung nodule judgment; compared with the candidates determined in the coarse detection stage, the candidates selected by this score contain markedly fewer non-nodule regions.
The design of the first two stages largely meets the requirements of a high detection rate and a low false positive rate for lung nodules, but detection bias may occur when the result is judged from the features of a single nodule alone. By combining the global information of all nodules in a case, the problem that the false positive suppression stage relies only on a single nodule for its judgment is further alleviated.
A ContextNet structure may be used to implement this comprehensive judgment with global information; a typical ContextNet is implemented as follows:
S301: extract the feature parameters of the pending nodule from the false positive suppression stage and combine them with artificial features to form the single-nodule feature of the pending nodule. The feature parameters from the false positive suppression stage can be taken from the fully connected layer of the 3D convolutional neural network used at that stage, forming the basic feature vector of the single-nodule feature. The artificial features can be composed of one or more of the first prediction score from the coarse detection network, the second prediction score from the false positive suppression network, and nodule morphological features; the morphological features can include the long diameter, short diameter, density and so on.
S302: construct a global feature from the multiple pending nodules. The global feature is constructed as follows: the single-nodule features of the top n pending nodules are selected and spliced to form the global feature; if the case has fewer than n pending nodules, the vacancies are padded with zeros so that the global feature always has the same length.
S303: fuse the single-nodule feature and the global feature. For each pending nodule, the single-nodule feature constructed in S301 and the global feature constructed in S302 are spliced to form the fused feature of that nodule.
S304: classify the fused features to obtain a third prediction score; one classification method is regression training with a Gradient Boosting Decision Tree (GBDT).
After classification according to the third prediction score, nodule detection through the three-stage framework is complete.
In another aspect of the present invention, contour information is extracted from the detected nodule image with a segmentation network in order to compute morphological features, including one or more of the long diameter, short diameter, equivalent diameter, volume and HU value of the nodule, for a physician's use in practice. The segmentation network can be any well-performing model, such as 3D U-Net.
In another aspect of the invention, a follow-up matching step is also included. Because the patient's posture and breathing during each scan significantly affect the acquisition angle and the shape of the images, medical images from different periods cannot be matched by pixel position, so image registration is performed with a 3D fully convolutional network. On the basis of image registration, the Euclidean distances between lung nodules from different periods are computed, a graph is built with edges connecting nodules that satisfy the distance threshold, and the nodule pair with the minimum distance within each connected component is selected as the final matching result; this establishes the correspondence of nodules across images from different periods and assists the doctor in identifying and analyzing changes in old nodules and newly appearing nodules.
Typically, the method of the invention is used to process CT images of the lungs, with the detection target being a lung lesion.
The method for detecting a nodule based on a medical image according to the present invention may be applied to the purpose of auxiliary diagnosis of a disease, or may be applied to medical statistical research for the purpose of non-disease diagnosis. In addition, when used as an aid to diagnosis, the method is often not sufficient to diagnose the disease directly, and is only used as an intermediate result for reference by the physician.
The present disclosure also includes a medical image-based nodule detection apparatus, comprising: an acquisition unit that acquires a medical image to be processed; and a detection unit that performs nodule detection on the medical image to be processed according to the medical image-based nodule detection method above.
A computer-readable medium, in which a computer program is stored which, when being executed by a processor, carries out the aforementioned medical image-based nodule detection method.
An electronic device comprising a processor and a memory, wherein the memory stores thereon one or more computer readable instructions, and wherein the one or more computer readable instructions, when executed by the processor, implement the medical image-based nodule detection method.
The invention has the following beneficial technical effects:
The invention provides a three-stage pulmonary nodule detection method which, by optimizing the algorithm framework, suppressing false positives and making a comprehensive judgment with global information, solves the problem of poor performance on ground-glass nodules and nodules in special locations in the related art. In addition, after a nodule is detected, quantitative morphological indexes of the nodule are computed with a segmentation network, and a lesion follow-up module pairs, analyzes and compares lesions across multiple examinations of the same patient. Computer-aided establishment of the nodule correspondence across a patient's multiple examinations is thus realized, helping doctors make better judgments.
The method offers high detection accuracy, a low false positive rate, nodule quantification, follow-up tracking and other advantages.
Drawings
FIG. 1 is a schematic diagram of a three-stage detection method according to the present invention;
FIG. 2 is a schematic diagram of an actual scene of multi-scale information detection of a lung portal region nodule;
FIG. 3 is a schematic diagram of a coarse detection stage using a feature pyramid network;
FIG. 4 is a schematic diagram of a false positive suppression network;
FIG. 5 is a diagram of a typical architecture for ContextNet;
FIG. 6 is a graph of a second output and a third output score for a lung nodule candidate region;
FIG. 7 is a schematic view of ground glass nodule detection;
FIG. 8 is a schematic diagram of segmentation network extraction nodule contours;
FIG. 9 is a schematic diagram of a 3D U-Net network structure;
FIG. 10 is a schematic diagram of an image registration network;
FIG. 11 is a schematic representation of the establishment of a relationship to a nodule through a follow-up procedure.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It should be understood that the example embodiments may be embodied in many different forms and should not be construed as limited to the examples set forth herein.
The present application describes nodule detection apparatus/methods based on medical images, where the medical images may include projection image data obtained by various imaging systems. The imaging system may be a single mode imaging system, such as a Computed Tomography (CT) system, an ultrasound imaging system, an X-ray optical imaging system, a Positron Emission Tomography (PET) system, and the like. The imaging system may also be a multi-modality imaging system, such as a positron emission tomography-computed tomography (PET-CT), positron emission tomography-magnetic resonance imaging (PET-MRI) system, a single photon emission tomography-computed tomography (SPECT-CT) system, a digital subtraction angiography-computed tomography (DSA-CT) system, or the like. The medical image may comprise a reconstructed image obtained by reconstructing the projection data.
Computers may be used to implement particular methods and apparatus disclosed in some embodiments of the invention. In some embodiments, a computer may implement embodiments of the invention by its hardware devices, software programs, firmware, and combinations thereof. In some embodiments, the computer may be a general purpose computer, or a specific purpose computer. In some embodiments, the computer may be a mobile terminal, a personal computer, a server, a cloud computing platform, and the like.
The computer may include an internal communication bus, a processor, read-only memory, random access memory, communication ports, input/output components, a hard disk, and a user interface. An internal communication bus may enable data communication between computer components. The processor may make the determination and issue the prompt. In some embodiments, a processor may be comprised of one or more processors. The communication port may implement a computer and other components such as: and the external equipment, the image acquisition equipment, the database, the external storage, the image processing workstation and the like are in data communication. In some embodiments, a computer may send and receive information and data from a network through a communication port. The input/output components support the flow of input/output data between the computer and other components. By way of example, the input/output components may include one or more of the following: a mouse, a trackball, a keyboard, a touch-sensitive component, a sound receiver, etc. The user interface may enable interaction and information exchange between the computer and the user. The computer may also include various forms of program storage units and data storage units, such as a hard disk, Read Only Memory (ROM) and Random Access Memory (RAM), capable of storing various data files used in the processing and/or communication of the computer, and possibly program instructions executed by the processor.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below.
Before explaining the technical solution of the embodiment of the present invention in detail, a relevant principle about a Convolutional Neural Network (CNN) is first introduced.
CNN is a multi-layer supervised learning neural network commonly used for image-related machine learning problems. A typical CNN consists of convolutional layers (Convolution), pooling layers (Pooling) and fully connected layers (Fully Connected). The lower layers generally consist of alternating convolutional and pooling layers: the convolutional layers enhance the original signal features of the image and reduce noise through convolution operations, while the pooling layers reduce computation while keeping the image's rotation invariance, following the principle of local image correlation. The fully connected layer sits at the top of the CNN; its input is the feature map obtained from the feature extraction of the convolutional and pooling layers, a classifier can be connected at its output, and the input image is classified with logistic regression, Softmax regression or a Support Vector Machine (SVM).
CNN training generally uses gradient descent to minimize a loss function; the weight parameters of each layer of the network are adjusted backward, layer by layer, through a loss layer connected after the fully connected layer, and the accuracy of the network is improved through repeated iterative training.
In an exemplary embodiment of the invention, the detection of lung nodules is performed in three stages, the principle of which is schematically illustrated in fig. 1:
s1 coarse detection stage
Take lung nodule detection in CT images as an example: as detection targets in medical images, lung nodules vary greatly in size and morphology, a data set can hardly cover all scales, and differing acquisition devices and parameters challenge the robustness of conventional CNN algorithms. Integrating multi-scale information is therefore a problem the coarse detection stage must solve; in the embodiment of the present invention, a Feature Pyramid Network (FPN) is used to solve it.
In conventional image processing, pyramids are a common tool; for example, Scale-Invariant Feature Transform (SIFT) acquires features at multiple levels of a pyramid, and for a deep network the native convolutional features already form a natural pyramid structure. As convolution proceeds, the smaller the size of a deep feature map, the more semantic information it contains, but because of that small size it is hard to detect small targets and time-consuming to do so. Conversely, if the feature map is up-sampled and restored step by step, its size becomes larger and more detail can be attended to, which addresses the difficulty of detecting small targets.
A feature pyramid network therefore consists of two parts, bottom-up and top-down. The bottom-up path is a conventional convolutional network used for feature extraction; as the convolution deepens, spatial resolution decreases, spatial information is lost, and more semantic information is captured. The FPN also provides a top-down path, constructing layers that retain high resolution, with lateral connections that merge the original feature information, which carries accurate position information, into the up-sampled feature layers. The results of the two paths are fused, yielding feature maps of different sizes and resolutions that all contain semantic information, and the distribution of detection boxes in the detector is adapted to the characteristics of nodule lesions.
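As an illustration only, the following is a minimal Python (PyTorch) sketch of the top-down path with lateral connections described above. The channel counts, the number of pyramid levels and the use of 2D convolutions (for brevity; the embodiment works on CT volumes) are assumptions for illustration, not the network actually used in the embodiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    """Minimal feature-pyramid head: 1x1 lateral convs + top-down upsampling."""
    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        # 1x1 lateral convolutions project each backbone stage to a common width
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
        # 3x3 convolutions smooth the merged maps
        self.smooth = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, feats):
        # feats: backbone feature maps ordered from shallow (large) to deep (small)
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # top-down: upsample the deeper map and add it to the shallower lateral
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        # each output level now carries both geometric detail and semantic information
        return [s(l) for s, l in zip(self.smooth, laterals)]

# usage sketch with dummy backbone outputs
feats = [torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32), torch.randn(1, 256, 16, 16)]
pyramid = SimpleFPN()(feats)
```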
In one embodiment, to eliminate target regions of extreme scale (too large or too small), training is performed only within the desired scale range, i.e. only candidates within a certain size range are selected and the remaining out-of-scale candidates are ignored during back-propagation, so that detection targets are generated during training and regression; this follows an idea similar to the Scale Normalization for Image Pyramids (SNIP) algorithm.
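A hedged sketch of this SNIP-like idea: only candidates whose size falls inside the desired range contribute to the loss, so out-of-range candidates are effectively ignored during back-propagation. The size bounds, tensor layout and function name are illustrative assumptions.

```python
import torch

def scale_filtered_loss(per_candidate_loss, candidate_sizes, min_size=4.0, max_size=32.0):
    """Zero out the loss of candidates whose size (e.g. diameter in voxels) lies
    outside [min_size, max_size]; gradients then flow only from in-range candidates."""
    in_range = (candidate_sizes >= min_size) & (candidate_sizes <= max_size)
    mask = in_range.float()
    # average over the candidates that were kept (avoid division by zero)
    return (per_candidate_loss * mask).sum() / mask.sum().clamp(min=1.0)
```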
Fig. 2 illustrates the detection of hilar (lung portal) region nodules at different scales: the feature map of the shallow network contains more geometric information and localizes the target more accurately, while the feature map of the deep network contains more semantic information, but its smaller size makes target localization coarser. Adopting a feature pyramid network in the coarse detection stage therefore serves the needs of detection and classification at the same time.
On the other hand, the absolute number of simple negative samples in nodule detection is large, and easily separable samples outnumber hard ones among the negatives; this embodiment therefore introduces a hard example mining mechanism so that the model focuses more on hard samples, improving detection accuracy for complex nodule types such as ground-glass nodules.
During model training, a loss function evaluates the inconsistency between the model's predictions and the ground truth; training or optimizing a neural network is the process of minimizing this loss. When training a classification task with a CNN, the loss layer connected after the fully connected layer typically uses the cross-entropy loss:

L1(p_y) = -log(p_y)

To perform hard example mining, however, an exponential modulating factor is added to the standard cross-entropy loss, so that the contributions of positive and negative samples to the loss are adjusted automatically and training focuses on hard samples.
The focal loss function (Focal Loss) was introduced to address the severe class imbalance of samples in object detection: it reduces the internal weight of simple samples, so that even when simple samples are numerous their contribution to the total loss is small. By down-weighting easily separable samples, the model concentrates on hard samples during training, avoiding the problem that the detection loss is dominated by a large number of simple negatives.
For the two-class focal loss:

L2(p_y) = -α_y (1 - p_y)^γ log(p_y)

where p_y is the model's predicted probability, between 0 and 1, and α_y, between 0 and 1, balances the numbers of positive and negative samples. This is an improved loss based on cross entropy: the modulating term (1 - p_y)^γ reduces the loss of easily separable samples and relatively increases the loss of hard samples, so the model focuses on learning the hard ones. α_y can be adjusted dynamically according to the model's performance on a standard data set; once the detection rate and the average false positive rate reach an acceptable balance, the α_y giving that result can be selected. Likewise, whether a larger γ is needed can be decided from the trends of the detection rate and false positive rate: if the detection rate is high and the average false positive rate is low, γ can be kept unchanged; if they are on the low side, γ is increased appropriately.
In one embodiment of the invention, α_y = 1 and γ = 2 are selected.
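For clarity, a minimal sketch of the two-class focal loss above (α_y = 1, γ = 2 as in the described embodiment), assuming a single α applied to both classes and that prob is the predicted probability of the positive class:

```python
import torch

def binary_focal_loss(prob, target, alpha=1.0, gamma=2.0, eps=1e-7):
    """Focal loss L2(p_y) = -alpha * (1 - p_y)^gamma * log(p_y).
    prob: predicted probability of the positive class, in (0, 1); target: tensor of 0/1."""
    prob = prob.clamp(eps, 1.0 - eps)
    # p_y is the probability assigned to the true class
    p_y = torch.where(target == 1, prob, 1.0 - prob)
    # the modulating factor (1 - p_y)^gamma down-weights easy samples
    loss = -alpha * (1.0 - p_y) ** gamma * torch.log(p_y)
    return loss.mean()
```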
A typical coarse detection stage network is shown in Fig. 3. The coarse detection stage takes the CT image as input, uses the feature pyramid as the feature extraction network, ignores candidates outside the specified scale range during back-propagation, uses the focal loss for classification and Smooth L1 for regression training, and outputs the positions of the candidate nodules together with their first prediction scores.
S2 false positive inhibition phase
This stage further suppresses false positives in the output of the coarse detection stage through a convolutional neural network, eliminating the non-nodule regions in that output and achieving a fine classification.
The false positive suppression network can adopt a typical CNN model structure. Traditional CNN methods use two-dimensional models: 2D views of the 3D structure around a nodule are first obtained through various slicing schemes, and a 2D CNN then extracts features from these views to produce the detection result. Such slice-based 2D-view methods under-utilize the 3D context of the scan and are limited in both detection accuracy and effect. A 3D CNN instead extracts features directly from the volumetric space; its steps and idea are essentially the same as a 2D CNN's, the difference being that the two-dimensional convolution kernels are replaced by three-dimensional ones.
In this embodiment, a typical 3D CNN is adopted, such as the false positive suppression network shown in Fig. 4, comprising an input layer, convolutional layers, pooling layers, a Dropout layer, fully connected layers and a Softmax layer. In a typical embodiment, after the 3D image is input, several 3 × 3 × 3 convolution kernels are used during training, each followed by an activation function such as the Rectified Linear Unit (ReLU). The convolutional, pooling, Dropout, fully connected and Softmax layers are all classic units of a standard CNN and are not described further in this embodiment.
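A minimal PyTorch sketch of such a 3D classification network (input → 3×3×3 convolutions with ReLU → pooling → Dropout → fully connected → Softmax); the number of layers, channel widths and the 32³ input size are illustrative assumptions, not the exact network of Fig. 4.

```python
import torch
import torch.nn as nn

class FalsePositiveSuppression3DCNN(nn.Module):
    """Toy 3D CNN mapping a cropped CT cube to nodule / non-nodule probabilities."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                       # 32 -> 16
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                       # 16 -> 8
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                       # 8 -> 4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(64 * 4 * 4 * 4, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                          # x: (batch, 1, 32, 32, 32)
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)        # per-class probability (second prediction score)

scores = FalsePositiveSuppression3DCNN()(torch.randn(2, 1, 32, 32, 32))
```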
In one embodiment, a set of acquired nodule images may be used directly as training samples, or a public case database, such as a public lung CT image database, may be used to construct the training samples. In another embodiment, a pre-trained 3D CNN is used to initialize the parameters of the false positive suppression network.
After the candidate nodules pass through the 3D convolutional neural network, a second prediction score is given to represent the lung nodule judgment; compared with the candidates determined in the coarse detection stage, the candidates selected according to the second prediction score contain markedly fewer non-nodule regions.
S3 Global decision phase
The design of the first two stages largely meets the requirements of a high detection rate and a low false positive rate for lung nodules. In actual diagnosis, however, a case often contains more than one lesion, while the false positive suppression stage uses only the features of a single nodule as the network input and judgment basis, so lesions whose appearance closely resembles a nodule can cause bias. If the overall characteristics of the case image are combined, i.e. global information is integrated, detected nodules are no longer viewed in isolation and false positive nodules are further suppressed.
Therefore, in this embodiment a global judgment stage is added after the first two stages. Its main idea is to combine the global information of all nodules in a case, further compensating for the bias of the S2 false positive suppression stage, which relies only on a single nodule, and helping the doctor exclude false positive nodules that are hard to judge.
To this end, the ContextNet structure may be adopted to realize a comprehensive judgment using global information. A typical embodiment of ContextNet is shown in Fig. 5. It basically comprises the following steps:
S301: extract the feature parameters of the pending nodule from the false positive suppression stage and combine them with artificial features to form the single-nodule feature of the pending nodule.
The feature parameters from the false positive suppression stage can be taken from the fully connected layer of the 3D CNN, forming the basic feature vector of the single-nodule feature.
The artificial features can be composed of one or more of the first prediction score from the coarse detection network, the second prediction score from the false positive suppression network, and nodule morphological features; the morphological features can include the long diameter, short diameter, density and so on. This introduces the doctor's experience into the nodule judgment problem: parameters such as the nodule's long diameter, short diameter and average density serve as components of the artificial features, and adding the first prediction score from the coarse detection stage and the second prediction score from the false positive suppression stage also helps build a single-nodule feature with a rich hierarchy.
S302: constructing a global feature based on the single features of the plurality of undetermined nodules;
wherein the global features are constructed in the following way: and selecting the first n single features of all the undetermined nodules for splicing to form a global feature, and if the undetermined nodules of the case are less than n, filling 0 in the vacancy to ensure that the lengths of the global features are equal.
S303: fuse the single-nodule feature and the global feature;
for each pending nodule, the single-nodule feature constructed in S301 and the global feature constructed in S302 are fused to form the fused feature of that nodule.
S304: classify the fused features; in one embodiment, a Gradient Boosting Decision Tree (GBDT) is used for regression training to obtain the third prediction score.
After the ContextNet network is applied, the second prediction score of each pending candidate region is refined into the third prediction score, further suppressing false positive candidate regions. In practice, one of the pending nodules in a case may have a relatively high second prediction score but, after the global judgment, a low third prediction score; on review, the doctor judges it to be a thickened-vessel shadow with a sphere-like appearance and classifies it as a false positive lesion. The global information judgment stage thus has a good suppressing effect on false positive tissue that morphologically resembles a nodule.
An exemplary procedure for identifying nodules in CT images of the lungs using the three-phase method described above is described in the following example.
In the coarse detection stage, the preprocessed image sequence is used as input, the feature pyramid serves as the feature extraction network, proposals outside the specified scale range are ignored during back-propagation, the focal loss is used for classification and Smooth L1 for regression training, and the first prediction score is output. The network establishes a mapping between the feature map and the original CT image: the position of a candidate nodule region in the corresponding CT image is recorded as (x, y, z), and each point on the feature map corresponds to an output probability of whether that point is a nodule, plus a 4-dimensional offset vector of the point relative to the center of the annotated nodule.
In the false positive suppression stage, three-dimensional image blocks, for example of size 64 × 64 × 64 or 36 × 36 × 36, are cropped from the original CT image with the center of the candidate nodule region output by the coarse detection stage as the base point, and are classified by the false positive suppression network. In one embodiment, the 3D CNN consists of six convolutional layers, three pooling layers, three fully connected layers and a final Softmax output layer; a ReLU activation function can follow each convolutional layer, and a Dropout layer is added after each pooling layer and the output layer to avoid overfitting. Finally, the second prediction score is taken as the classification result of the second-stage nodules.
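As an illustrative sketch of this patch-extraction step, the following crops a cube of the stated size around a candidate center from the CT volume, zero-padding when the candidate lies near the border; the array layout (z, y, x) and variable names are assumptions.

```python
import numpy as np

def crop_patch(volume, center_zyx, size=64):
    """Crop a size^3 cube centered at integer coordinates (z, y, x) from a CT volume,
    zero-padding where the cube extends past the volume border."""
    half = size // 2
    patch = np.zeros((size, size, size), dtype=volume.dtype)
    src, dst = [], []
    for c, dim in zip(center_zyx, volume.shape):
        lo, hi = c - half, c + half
        src.append(slice(max(lo, 0), min(hi, dim)))               # region read from the volume
        dst.append(slice(max(0, -lo), size - max(0, hi - dim)))   # where it lands in the patch
    patch[tuple(dst)] = volume[tuple(src)]
    return patch

# usage sketch: 64^3 block around a candidate found by the coarse detection stage
ct = np.zeros((300, 512, 512), dtype=np.int16)
block = crop_patch(ct, center_zyx=(150, 200, 260), size=64)
```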
Global judgment stage: the features of the last fully connected layer of the false positive suppression network for each pending nodule form the basic vector, and the coarse detection stage score, the false positive suppression stage score, and the nodule's long diameter, short diameter, average density and so on serve as artificial features; together these constitute the single-nodule feature of the pending nodule. The top n second prediction scores output by the false positive suppression network for each case are taken, where n is 20-30, preferably 24, and the single-nodule features of the corresponding pending nodules are spliced; if fewer than 24 pending nodules were classified in the false positive suppression stage for the case, the gaps are padded with zeros, forming the global feature. The single-nodule feature and the global feature are spliced, and regression training with a GBDT (Gradient Boosting Decision Tree) is performed on the fused feature of each pending nodule to obtain its third prediction score. The final nodule detection result is given by this third prediction score.
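A hedged sketch of the global-judgment feature construction described above (n = 24): the single-nodule features of one case are sorted by their second prediction score, the top n are spliced (zero-padded if fewer) into the global feature, each single-nodule feature is concatenated with it, and a gradient boosting model then produces the third prediction score. The array shapes and the use of scikit-learn's GradientBoostingClassifier are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def build_fused_features(single_feats, second_scores, n=24):
    """single_feats: (num_pending, d) single-nodule features of one case;
    second_scores: (num_pending,) second prediction scores from the FP-suppression stage."""
    d = single_feats.shape[1]
    order = np.argsort(second_scores)[::-1][:n]           # top-n pending nodules
    global_feat = np.zeros(n * d, dtype=single_feats.dtype)
    top = single_feats[order].reshape(-1)
    global_feat[:top.size] = top                           # pad with zeros when fewer than n nodules
    # fuse: each nodule's single feature spliced with the case-level global feature
    return np.concatenate([single_feats, np.tile(global_feat, (len(single_feats), 1))], axis=1)

# training sketch: GBDT on the fused features of annotated cases
# X_fused, y = ...  # fused features and nodule / non-nodule labels collected over many cases
# gbdt = GradientBoostingClassifier().fit(X_fused, y)
# third_scores = gbdt.predict_proba(X_fused)[:, 1]
```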
Fig. 6 illustrates, for one case, a nodule's second prediction score from the false positive suppression stage and the third prediction score corrected in the global judgment stage; the third prediction score is the one the physician focuses on and has higher accuracy than the second.
To evaluate the effect of the above method, tests were run on the public data set LUNA16 and the results were compared with two published methods.
Method 1: Ding, J., Li, A., Hu, Z., Wang, L.: Accurate pulmonary nodule detection in computed tomography images using deep convolutional neural networks. In: MICCAI (2017) 559-567.
Method 2: Dou, Q., Chen, H., Jin, Y., et al.: Automated pulmonary nodule detection via 3D ConvNets with online sample filtering and hybrid-loss residual learning. In: MICCAI (2017) 630-638.
The evaluation uses the FROC (Free-Response Receiver Operating Characteristic) criterion commonly used for machine learning algorithms. Specifically, the FROC criterion characterizes the relationship between recall (number of detected nodules across all CT data in the test set / number of nodules across all CT data in the test set) and the average number of false positives per CT scan (number of non-nodules predicted as nodules in the test set / number of scans in the test set, FPs/scan).
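For clarity, a small sketch of how the recall at a given FPs/scan operating point can be read off, assuming per-candidate scores, binary labels and the number of scans are available; this mirrors the FROC definition above and is not the official LUNA16 evaluation script.

```python
import numpy as np

def sensitivity_at_fp_rate(scores, is_true_nodule, num_scans, total_nodules, fp_per_scan=1.0):
    """Lower the decision threshold until the average number of false positives per scan
    reaches fp_per_scan, then report the recall (detected nodules / total nodules)."""
    order = np.argsort(scores)[::-1]                   # candidates sorted by descending score
    labels = np.asarray(is_true_nodule)[order]
    fps = np.cumsum(labels == 0) / num_scans           # average FPs per scan at each cut-off
    tps = np.cumsum(labels == 1)
    idx = np.searchsorted(fps, fp_per_scan, side="right") - 1
    return tps[idx] / total_nodules if idx >= 0 else 0.0
```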
Table 1. Test results on the public data set (recall at a given average number of false positives per scan)

FPs/scan        0.125   0.250   0.500   1.000   2.000   4.000   8.000   Mean
Method 1        0.659   0.745   0.819   0.865   0.906   0.933   0.946   0.839
Method 2        0.748   0.853   0.887   0.922   0.938   0.944   0.946   0.891
The invention   0.793   0.853   0.902   0.944   0.958   0.972   0.975   0.9138
It can be seen that the nodule detection method of the present invention achieves good results in terms of nodule recall when the average number of false positives per CT scan is one or more.
The feature pyramid, the hard example mining mechanism and the global judgment stage introduced in the method optimize the algorithm for complex nodule types, improving accuracy on difficult cases such as ground-glass nodules.
Fig. 7 illustrates an actual example; the marked boxes show that the method accurately identifies ground-glass nodules. A ground-glass nodule stands out only slightly from the surrounding background and has a blurred boundary, which makes it hard to detect with prior-art methods.
In another embodiment, morphological characteristics of the detected nodule are computed and reported: the long diameter, short diameter, equivalent diameter, volume, HU value and so on are calculated from the contour of the identified nodule; the effect is shown in Fig. 8.
Nodule contour identification can use any well-performing segmentation network, such as U-Net, FCN or DeepLab V1; the invention adopts a 3D U-Net segmentation network. The principle of 3D U-Net is shown in Fig. 9: the left side can be regarded as an encoder and the right side as a decoder. The encoder progressively pools away the spatial dimensions, and the decoder progressively restores the details and spatial dimensions of the object. Skip connections between the encoder and the decoder help the decoder recover the target's details.
The segmentation network outputs mask information for the region, with mask value 1 for pixels belonging to the nodule region and 0 for the background, giving the fine contour of the candidate nodule region. On this basis the longest diameter, equivalent diameter, mean HU value, maximum HU value, minimum HU value, volume value and so on are computed, and these indices are provided to the doctor to assist in judging the nodule.
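A hedged sketch of computing some of the listed indices from the binary mask and the CT volume; the voxel-spacing handling and variable names are assumptions, and the long/short diameter, which would require fitting the in-plane contour, is omitted here.

```python
import numpy as np

def nodule_morphology(ct_hu, mask, spacing_mm=(1.0, 1.0, 1.0)):
    """ct_hu: CT volume in Hounsfield units; mask: binary nodule mask from the
    segmentation network; spacing_mm: voxel size (z, y, x) in millimetres."""
    voxel_volume = float(np.prod(spacing_mm))           # mm^3 per voxel
    volume_mm3 = int(mask.sum()) * voxel_volume
    # equivalent diameter: diameter of a sphere with the same volume
    equivalent_diameter = (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0)
    hu_values = ct_hu[mask > 0]
    return {
        "volume_mm3": volume_mm3,
        "equivalent_diameter_mm": equivalent_diameter,
        "mean_hu": float(hu_values.mean()),
        "max_hu": float(hu_values.max()),
        "min_hu": float(hu_values.min()),
    }
```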
In another aspect of the invention, a follow-up registration step is designed to match the correspondence of lung nodules in the patient's lung CT images taken at different times. The patient's posture, breathing and so on during each scan significantly affect the acquisition angle and the shape of the lungs, so the images cannot be matched by pixel position.
Calibration and correspondence of the images can be performed with an image registration network that learns the shape and position differences between two similar images. As shown in Fig. 10, a CNN learns a parameterized registration deformation field (a parametric function), and registration between the moving image and the fixed image is then achieved through a spatial transformation; for a given image pair to be registered, the learned parametric function is applied directly. The input of the registration network is an image pair consisting of a moving image and a fixed image, and the output is a registration field parameterized by θ (the kernels of the convolutional layers of the CNN).
The registration field is a field of displacement vectors that specifies, for each voxel (the smallest unit into which the three-dimensional image can be divided), the displacement offset from the fixed image to the moving image. The moving image is then warped by a spatial transformation into the registration result (moved image). The loss function during network training has two parts: a similarity loss and a smoothness loss, where the similarity loss measures the similarity between the moved image and the fixed image, and the smoothness loss enforces smoothness of the registration field.
The loss function can be written as:

L(F, M, φ) = L_sim(F, M(φ)) + λ L_smooth(φ)

where F is the fixed image, M is the moving image, M(φ) is the warped moving image, L_sim(·) is the similarity loss, L_smooth(·) is the smoothness loss, and λ is a regularization coefficient.
In one embodiment, therefore, image registration is performed with a 3D fully convolutional network; specifically, this embodiment employs a VoxelMorph network whose structure is a fully convolutional architecture similar to 3D U-Net. The input is two images of size H × W × D, a moving image and a fixed image, and the network output is a transformation matrix of size H × W × D × 3. The 3-tuple (dh, dw, dd) at position (i, j, k) of the transformation matrix corresponds one-to-one to the pixel at position (i, j, k) of the moving image; translating each pixel of the moving image along its corresponding triplet produces a registered moving image whose shape is close to the fixed image.
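A minimal sketch of the training loss L_sim + λ·L_smooth described above, using mean squared error as the similarity term and the finite-difference gradient of the displacement field as the smoothness term; the choice of MSE and the tensor layout (batch, 3, D, H, W) are assumptions, not necessarily what the embodiment uses.

```python
import torch

def registration_loss(moved, fixed, displacement, lam=0.01):
    """moved: warped moving image; fixed: fixed image;
    displacement: dense field of shape (batch, 3, D, H, W)."""
    sim = torch.mean((moved - fixed) ** 2)                      # similarity loss L_sim
    # smoothness loss: penalise spatial gradients of the displacement field
    dz = displacement[:, :, 1:, :, :] - displacement[:, :, :-1, :, :]
    dy = displacement[:, :, :, 1:, :] - displacement[:, :, :, :-1, :]
    dx = displacement[:, :, :, :, 1:] - displacement[:, :, :, :, :-1]
    smooth = dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()
    return sim + lam * smooth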
On the basis of registering the two scans, the Euclidean distance between nodules obtained by the detection algorithm in the two scanning periods is calculated, a graph is built with edges connecting nodules that satisfy the distance threshold (for example, 1 cm), and the nodule pair with the minimum distance within each connected component is selected as the final pairing result, giving the correspondence of nodules scanned at different times.
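A hedged sketch of this pairing step: after registration maps the nodule centers into a common space, nodules closer than the threshold are connected, and within each connected component the closest pair is kept. The 10 mm threshold follows the example above, and the use of scipy is an assumption.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def match_nodules(centers_prior, centers_current, threshold_mm=10.0):
    """centers_*: (n, 3) nodule centers in mm, already in the registered common space.
    Returns a list of (prior_index, current_index) pairs, one per connected component."""
    n_p, n_c = len(centers_prior), len(centers_current)
    dist = np.linalg.norm(centers_prior[:, None, :] - centers_current[None, :, :], axis=2)
    # bipartite adjacency: edge when the distance satisfies the threshold
    adj = np.zeros((n_p + n_c, n_p + n_c))
    adj[:n_p, n_p:] = dist <= threshold_mm
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    pairs = []
    for comp in range(n_comp):
        p_idx = np.where(labels[:n_p] == comp)[0]
        c_idx = np.where(labels[n_p:] == comp)[0]
        if len(p_idx) and len(c_idx):
            sub = dist[np.ix_(p_idx, c_idx)]
            i, j = np.unravel_index(np.argmin(sub), sub.shape)
            pairs.append((int(p_idx[i]), int(c_idx[j])))   # closest pair within the component
    return pairs
```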
Fig. 11 shows the correspondence of the same nodule established by CT images of the same patient taken at different times under the follow-up method based on the method of the present invention.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiment of the present invention.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The flowcharts shown in the drawings are only exemplary and do not necessarily include all the contents and operations/steps, nor are they necessarily performed in the described order, and the order of actual execution may possibly vary depending on the actual situation. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
The scope of the invention is limited only by the appended claims. The above embodiments are merely illustrative, and not restrictive, of the present disclosure, and various changes and modifications may be made by those skilled in the relevant art without departing from the spirit and scope of the present disclosure, and therefore all equivalent technical solutions also fall within the scope of the present disclosure.

Claims (15)

1. A nodule detection method based on medical images is characterized by comprising the following steps:
taking an image sequence based on a medical image as an input, positioning a candidate nodule through a coarse detection network and outputting a first prediction score;
carrying out classification training, through a false positive suppression network, on an image sequence corresponding to the candidate nodule positioned by the coarse detection network, to obtain a pending nodule and a second prediction score thereof;
constructing a single feature of the pending nodule, constructing a global feature based on the single features of a plurality of pending nodules, splicing the single feature and the global feature into a fusion feature, performing regression training, and outputting a third prediction score;
the global feature is constructed by the following method: selecting the first n single features of all the pending nodules and splicing them to form the global feature, and, if the case has fewer than n pending nodules, padding the vacancies with 0 so that the global feature has a fixed length;
the single feature of the pending nodule is constructed by combining its feature parameters in the false positive suppression network with artificial features;
the artificial features of the pending nodule include one or more of the first prediction score of the pending nodule from the coarse detection network, the second prediction score from the false positive suppression network, and nodule morphological features.
2. The medical image-based nodule detection method of claim 1, wherein the coarse detection network adopts a feature pyramid network for feature extraction, and a focal loss function is adopted as the loss function in classification training.
3. The medical image-based nodule detection method of claim 1, wherein the false positive suppression network employs a 3D convolutional neural network.
4. The medical image-based nodule detection method according to any one of claims 1 to 3, wherein the global feature is constructed by selecting and concatenating the first n single features of all pending nodules, filling the vacancies with zeros if the case has fewer than n pending nodules; and the regression training on the fusion features adopts a gradient boosting decision tree.
5. The medical image-based nodule detection method of claim 4, wherein n is 20-30.
6. The medical image-based nodule detection method according to any one of claims 1 to 3, further comprising extracting contour information of the detected nodules using a segmentation network, so as to compute morphological features including one or more of a long diameter, a short diameter, an equivalent diameter, a volume, and an HU value of the nodule.
7. The medical image-based nodule detection method of claim 6, wherein the segmentation network is a 3D U-Net segmentation network.
8. The medical image-based nodule detection method according to any one of claims 1 to 3, further comprising a follow-up matching step, wherein medical images of the same case from different periods are registered to one another, Euclidean distances between the lung nodules of the different periods are calculated, a graph is built whose edges connect nodule pairs meeting a distance threshold, and within each connected component the nodule pair with the minimum distance is selected as the final pairing result, thereby establishing the correspondence between nodules in images of different periods and assisting the physician in identifying and analyzing changes in existing nodules and newly appearing nodules.
9. The medical image-based nodule detection method of any one of claims 1 to 3, wherein the medical image is from a lung CT image.
10. The medical image-based nodule detection method of claim 4, wherein the medical image is from a lung CT image.
11. The medical image-based nodule detection method of claim 6, wherein the medical image is from a lung CT image.
12. The medical image-based nodule detection method of claim 8, wherein the medical image is from a lung CT image.
13. A medical image-based nodule detection apparatus, comprising: an image acquisition unit that acquires a medical image to be processed; and a detection unit that performs nodule detection on the medical image to be processed according to the medical image-based nodule detection method of any one of claims 1 to 12.
14. A computer-readable medium storing a computer program which, when executed by a processor, carries out the medical image-based nodule detection method of any one of claims 1 to 12.
15. An electronic device comprising a processor and a memory, the memory having stored thereon one or more readable instructions which, when executed by the processor, implement the medical image-based nodule detection method of any one of claims 1 to 12.
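By way of illustration for claim 2, below is a minimal sketch of a binary focal loss as it could be applied to the classification branch of the coarse detection network; it is written in PyTorch, and the alpha and gamma values are the defaults commonly used in the focal-loss literature rather than values specified by the patent.

```python
"""Binary focal loss: a cross-entropy term down-weighted for easy examples,
so the rare nodule anchors are not overwhelmed by the many easy negatives."""
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits and targets share the same shape; targets are float 0./1. labels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balancing weight
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()       # (1 - p_t)^gamma focuses on hard cases
```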
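For claim 3, the following toy PyTorch module sketches the kind of 3D convolutional classifier that could score candidate patches for false positive suppression; the depth, channel counts, and patch handling are illustrative assumptions, and the pooled feature vector is the sort of network feature that claim 1 combines with the artificial features.

```python
"""Toy 3D CNN for false positive suppression of candidate nodule patches."""
import torch.nn as nn


class FalsePositiveSuppressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),            # global pooling -> fixed-length feature
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, patch):                    # patch: (B, 1, D, H, W) crop around a candidate
        feat = self.features(patch).flatten(1)   # (B, 32) feature vector, reusable downstream
        return self.classifier(feat)             # logit; sigmoid gives the second prediction score
```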
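For claims 1, 4 and 5, this sketch shows one way the single features, the case-level global feature, and the fused features could be assembled, with a gradient boosting decision tree regressor producing the third prediction score. The feature layout, the ranking used to pick the "first n" nodules, the choice n = 25, and the use of scikit-learn's GradientBoostingRegressor are assumptions made for the example.

```python
"""Feature fusion sketch: per-nodule single features, a zero-padded case-level
global feature, their concatenation, and GBDT regression on the result."""
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor


def build_single_features(scores_1, scores_2, morph_feats, net_feats):
    """Single feature of each pending nodule: first score, second score,
    morphology features and false-positive-suppression network features.
    scores_1, scores_2: (K,); morph_feats: (K, d_m); net_feats: (K, d_f)."""
    return np.concatenate(
        [scores_1[:, None], scores_2[:, None], morph_feats, net_feats], axis=1)


def build_global_feature(single_feats, scores_2, n=25):
    """Concatenate the single features of the first n pending nodules of a case,
    zero-padding when the case has fewer than n (n = 25 lies in the 20-30 range)."""
    order = np.argsort(-scores_2)[:n]              # rank nodules, keep the first n
    top = single_feats[order]
    if top.shape[0] < n:                           # pad with zeros to a fixed length
        pad = np.zeros((n - top.shape[0], single_feats.shape[1]))
        top = np.vstack([top, pad])
    return top.reshape(-1)                         # one flat global vector per case


def fuse_case(single_feats, scores_2, n=25):
    """Fusion feature of each nodule = [its single feature, the global feature]."""
    g = build_global_feature(single_feats, scores_2, n)
    return np.hstack([single_feats, np.tile(g, (single_feats.shape[0], 1))])


def train_third_stage(fused_feats, labels):
    """Regression training on the fused features with a gradient boosting
    decision tree; model.predict then yields the third prediction score."""
    model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
    model.fit(fused_feats, labels)
    return model
```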
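For claims 6 and 7, the sketch below derives the listed morphological features from a binary nodule mask produced by a segmentation network; the principal-axes estimate of the long and short diameters is a simplification introduced for this example, not the patent's own procedure.

```python
"""Morphological features of a segmented nodule: volume, equivalent diameter,
rough long/short diameters, and mean HU value inside the mask."""
import numpy as np


def morphological_features(mask, ct_volume, spacing_mm):
    """mask: (D, H, W) boolean nodule mask; ct_volume: (D, H, W) HU values;
    spacing_mm: (z, y, x) voxel spacing in millimetres."""
    spacing = np.asarray(spacing_mm, dtype=float)
    voxel_volume = float(np.prod(spacing))                   # mm^3 per voxel

    volume = mask.sum() * voxel_volume                       # nodule volume in mm^3
    equiv_diameter = (6.0 * volume / np.pi) ** (1.0 / 3.0)   # diameter of an equal-volume sphere
    mean_hu = float(ct_volume[mask].mean())                  # average HU inside the mask

    # Rough long/short diameters from the principal axes of the voxel cloud
    # (about two standard deviations on either side of the centre).
    coords = np.argwhere(mask) * spacing                     # physical coordinates in mm
    centered = coords - coords.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered, rowvar=False)))[::-1]
    long_diameter = 4.0 * np.sqrt(eigvals[0])
    short_diameter = 4.0 * np.sqrt(eigvals[-1])

    return {
        "volume_mm3": volume,
        "equivalent_diameter_mm": equiv_diameter,
        "long_diameter_mm": long_diameter,
        "short_diameter_mm": short_diameter,
        "mean_hu": mean_hu,
    }
```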
CN202010621972.8A 2020-06-30 2020-06-30 Medical image-based nodule detection method and device and electronic equipment Active CN111798424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010621972.8A CN111798424B (en) 2020-06-30 2020-06-30 Medical image-based nodule detection method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010621972.8A CN111798424B (en) 2020-06-30 2020-06-30 Medical image-based nodule detection method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111798424A CN111798424A (en) 2020-10-20
CN111798424B true CN111798424B (en) 2021-02-09

Family

ID=72810987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010621972.8A Active CN111798424B (en) 2020-06-30 2020-06-30 Medical image-based nodule detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111798424B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112259199A (en) * 2020-10-29 2021-01-22 西交利物浦大学 Medical image classification model training method, system, storage medium and medical image processing device
CN112597887B (en) * 2020-12-22 2024-05-07 深圳集智数字科技有限公司 Target identification method and device
CN113610785A (en) * 2021-07-26 2021-11-05 安徽理工大学 Pneumoconiosis early warning method and device based on intelligent image and storage medium
CN114219807B (en) * 2022-02-22 2022-07-12 成都爱迦飞诗特科技有限公司 Mammary gland ultrasonic examination image grading method, device, equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186881A (en) * 2011-12-30 2013-07-03 无锡睿影信息技术有限公司 Method for obtaining lung soft tissue image based on virtual dual energy technique
US9305351B2 (en) * 2013-06-16 2016-04-05 Larry D. Partain Method of determining the probabilities of suspect nodules being malignant
CN103745227A (en) * 2013-12-31 2014-04-23 沈阳航空航天大学 Method for identifying benign and malignant lung nodules based on multi-dimensional information
US11730387B2 (en) * 2018-11-02 2023-08-22 University Of Central Florida Research Foundation, Inc. Method for detection and diagnosis of lung and pancreatic cancers from imaging scans
CN109919230B (en) * 2019-03-10 2022-12-06 西安电子科技大学 Medical image pulmonary nodule detection method based on cyclic feature pyramid

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6125194A (en) * 1996-02-06 2000-09-26 Caelum Research Corporation Method and system for re-screening nodules in radiological images using multi-resolution processing, neural network, and image processing
CN102842132A (en) * 2012-07-12 2012-12-26 上海联影医疗科技有限公司 CT pulmonary nodule detection method
CN104751178A (en) * 2015-03-31 2015-07-01 上海理工大学 Pulmonary nodule detection device and method based on shape template matching and combining classifier
CN107274402A (en) * 2017-06-27 2017-10-20 北京深睿博联科技有限责任公司 A kind of Lung neoplasm automatic testing method and system based on chest CT image
CN108230323A (en) * 2018-01-30 2018-06-29 浙江大学 A kind of Lung neoplasm false positive screening technique based on convolutional neural networks
CN108537784A (en) * 2018-03-30 2018-09-14 四川元匠科技有限公司 A kind of CT figure pulmonary nodule detection methods based on deep learning
CN109671055A (en) * 2018-11-27 2019-04-23 杭州深睿博联科技有限公司 Pulmonary nodule detection method and device
CN109685768A (en) * 2018-11-28 2019-04-26 心医国际数字医疗系统(大连)有限公司 Lung neoplasm automatic testing method and system based on lung CT sequence
CN109685776A (en) * 2018-12-12 2019-04-26 华中科技大学 A kind of pulmonary nodule detection method based on ct images and system
CN109741312A (en) * 2018-12-28 2019-05-10 上海联影智能医疗科技有限公司 A kind of Lung neoplasm discrimination method, device, equipment and medium
CN110059697A (en) * 2019-04-29 2019-07-26 上海理工大学 A kind of Lung neoplasm automatic division method based on deep learning
CN110232386A (en) * 2019-05-09 2019-09-13 杭州深睿博联科技有限公司 Based on the pyramidal Lung neoplasm classification method of local feature and device
CN110853011A (en) * 2019-11-11 2020-02-28 河北工业大学 Method for constructing convolutional neural network model for pulmonary nodule detection

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Automated pulmonary nodule detection via 3D ConvNets with online sample filtering and hybrid-loss residual learning"; Dou, Q. et al.; MICCAI; 2017-09-04; 630-638 *
"Improving Accuracy of Lung Nodule Classification Using Deep Learning with Focal Loss"; Tran, Giang Son et al.; Journal of Healthcare Engineering; 2019-02-04; 1-9 *
"Lung Nodule Detection in CT Images Using Statistical and Shape-Based Features"; Khehrah, N. et al.; Journal of Imaging; 2020-02-24; Vol. 6, No. 2; 1-14 *
"Pulmonary nodule detection algorithm based on deep convolutional neural networks"; 邓忠豪 et al.; Journal of Computer Applications (计算机应用); 2019-03-19; Vol. 39, No. 7; 2109-2115 *
"Feature extraction and recognition of thyroid nodule ultrasound images"; 王昕 et al.; Journal of Changchun University of Technology (长春工业大学学报); 2017-08-15; Vol. 38, No. 4; 322-327 *

Also Published As

Publication number Publication date
CN111798424A (en) 2020-10-20

Similar Documents

Publication Publication Date Title
Li et al. Attention dense-u-net for automatic breast mass segmentation in digital mammogram
Halder et al. Lung nodule detection from feature engineering to deep learning in thoracic CT images: a comprehensive review
Chen et al. Recent advances and clinical applications of deep learning in medical image analysis
CN111798424B (en) Medical image-based nodule detection method and device and electronic equipment
Bi et al. Automatic liver lesion detection using cascaded deep residual networks
CN110930367B (en) Multi-modal ultrasound image classification method and breast cancer diagnosis device
CN106682435B (en) System and method for automatically detecting lesion in medical image through multi-model fusion
Tang et al. High-resolution 3D abdominal segmentation with random patch network fusion
Shaukat et al. Computer-aided detection of lung nodules: a review
US11308611B2 (en) Reducing false positive detections of malignant lesions using multi-parametric magnetic resonance imaging
Zhou et al. A cascaded multi-stage framework for automatic detection and segmentation of pulmonary nodules in developing countries
JP7346553B2 (en) Determining the growth rate of objects in a 3D dataset using deep learning
CN114565761A (en) Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image
Lan et al. Run: Residual u-net for computer-aided detection of pulmonary nodules without candidate selection
Feng et al. Supervoxel based weakly-supervised multi-level 3D CNNs for lung nodule detection and segmentation
Jaffar et al. An ensemble shape gradient features descriptor based nodule detection paradigm: a novel model to augment complex diagnostic decisions assistance
Wen et al. Pulmonary nodule detection based on convolutional block attention module
Tian et al. Radiomics and Its Clinical Application: Artificial Intelligence and Medical Big Data
CN114581698A (en) Target classification method based on space cross attention mechanism feature fusion
Tyagi et al. An amalgamation of vision transformer with convolutional neural network for automatic lung tumor segmentation
Harrison et al. State-of-the-art of breast cancer diagnosis in medical images via convolutional neural networks (cnns)
Dhalia Sweetlin et al. Patient-Specific Model Based Segmentation of Lung Computed Tomographic Images.
CN116228690A (en) Automatic auxiliary diagnosis method for pancreatic cancer and autoimmune pancreatitis based on PET-CT
Yang et al. 3D multi‐view squeeze‐and‐excitation convolutional neural network for lung nodule classification
CN112086174A (en) Three-dimensional knowledge diagnosis model construction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant