CN110827310A - CT image automatic detection method and system


Info

Publication number
CN110827310A
Authority
CN
China
Prior art keywords
target
network
image
region
detection
Prior art date
Legal status
Pending
Application number
CN201911064984.9A
Other languages
Chinese (zh)
Inventor
童超
翟运开
梁保宇
赵杰
马倩倩
何贤英
Current Assignee
Beihang University
First Affiliated Hospital of Zhengzhou University
Original Assignee
Beihang University
First Affiliated Hospital of Zhengzhou University
Priority date
Filing date
Publication date
Application filed by Beihang University and First Affiliated Hospital of Zhengzhou University
Priority to CN201911064984.9A
Publication of CN110827310A


Classifications

    • G06T 7/155 Image analysis; segmentation; edge detection involving morphological operators
    • G06N 3/045 Neural networks; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 7/50 Image analysis; depth or shape recovery
    • G06T 2207/10081 Image acquisition modality; computed X-ray tomography [CT]


Abstract

The invention provides an automatic CT image detection method based on a Faster R-CNN improved by an iterative self-organizing analysis algorithm and a three-dimensional convolutional neural network improved by Focal Loss, belonging to the field of image data processing. The invention detects targets in CT images using a deep learning method. The method comprises two parts: a target region extraction network and a false positive target removal network. The target region extraction network is a Faster R-CNN model; the false positive target removal network is a three-dimensional convolutional neural network model. To adapt to the diversity of sizes of the targets to be detected, the invention introduces the iterative self-organizing analysis algorithm (ISODATA) into the Faster R-CNN. To address the imbalance between positive and negative samples during false positive removal, the loss function Focal Loss is introduced into the three-dimensional convolutional neural network.

Description

CT image automatic detection method and system
Technical Field
The invention provides an automatic target detection method for CT images, based on a Faster R-CNN improved by an iterative self-organizing analysis algorithm and a three-dimensional convolutional neural network improved by Focal Loss, belonging to the field of image data processing (G06T).
Background
CT imaging is a technology in which a CT device scans a target object with X-rays; a detector receives the X-rays and converts them into visible light, the visible light is converted into an electrical signal, the electrical signal is converted into a digital signal by an analog-to-digital converter, and the digital signal is input to a computer for processing. CT images are used in many industries, such as medical examination, industrial inspection, and security screening. Among these uses, the detection of targets in CT images is an important task. In medicine, the detection of lesion areas in CT images is a preliminary step in the diagnosis of many diseases. In industry, target detection is often used to find common defects such as air holes, inclusions, pinholes, shrinkage cavities, and delamination in CT images. In security, target detection in CT images is also widely used in screening for security checkpoints, air transportation, port transportation, and large cargo containers.
Existing methods for target detection in CT images fall into manual detection, detection based on statistical machine learning, and detection based on deep learning. These have the following drawbacks: (1) Manual detection depends mainly on the examiners' experience with the target object; manual film reading is inefficient, consumes substantial manpower and material resources, and yields unstable results. (2) Detection based on statistical machine learning generally uses hand-designed low-level features such as shape, density, and size, and has difficulty coping with the diversity of physical forms and spatial positions of target objects, so its detection efficiency is low. (3) Although existing deep-learning-based detection methods perform well, there remains considerable room for improvement regarding adaptation to target sizes and the balance of positive and negative samples during training.
Disclosure of Invention
In order to solve the above-mentioned problems, the present invention provides a method for detecting targets in CT images using deep learning. The method comprises two parts: a target region extraction network and a false positive target removal network. The target region extraction network is a Faster R-CNN model improved by an iterative self-organizing analysis algorithm; the false positive target removal network is a three-dimensional convolutional neural network trained with the loss function Focal Loss, expressed as

FL(ŷ, y) = -α·y·(1-ŷ)^γ·log(ŷ) - (1-α)·(1-y)·ŷ^γ·log(1-ŷ),

where y represents the true value of the sample, ŷ indicates the sample prediction value, α is a weighting factor in the interval [0,1], and γ is an adjustable parameter for weighing the difficult and easy samples.
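By way of illustration, a minimal PyTorch sketch of this loss follows; the function name and the defaults alpha = 0.25, gamma = 2.0 are assumptions (the patent leaves both parameters adjustable):

```python
import torch

def focal_loss(y_pred, y_true, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal Loss as written above: y_true in {0, 1}, y_pred in (0, 1).

    alpha weighs positive against negative samples; gamma down-weights
    easy samples. The defaults are the values popularized by the
    original Focal Loss paper; the patent leaves both adjustable.
    """
    y_pred = torch.clamp(y_pred, eps, 1.0 - eps)  # avoid log(0)
    pos = -alpha * y_true * (1.0 - y_pred) ** gamma * torch.log(y_pred)
    neg = -(1.0 - alpha) * (1.0 - y_true) * y_pred ** gamma * torch.log(1.0 - y_pred)
    return (pos + neg).mean()
```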
According to a first aspect of the present application, the proposed CT image detection method comprises the following detection steps:
step 1, constructing a target detection training set in the CT image. The method comprises the steps of CT image acquisition and image preprocessing. The acquired CT data comprise a complete CT image sequence and the category and position labels of the target objects in each CT image. Image preprocessing includes, but is not limited to, image segmentation and normalization. Image segmentation refers to removing regions of the CT image that are irrelevant to detection, which simplifies the detection task. For example, in a lung CT image, the region with Hounsfield Unit (HU) values in the range [-1200, 600] is generally regarded as the lung parenchyma, and other regions are regarded as background. Normalization refers to mapping the pixel values of each image into the [0,1] interval; these preprocessing steps facilitate the training of the deep learning method.
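A minimal sketch of this preprocessing, assuming the lung window given above; clipping out-of-window values to the window edges stands in for the segmentation step, which the text leaves unspecified, and all names are illustrative:

```python
import numpy as np

HU_MIN, HU_MAX = -1200.0, 600.0  # lung parenchyma window given in the text

def preprocess_ct_slice(hu_slice: np.ndarray) -> np.ndarray:
    """Window a CT slice to the lung parenchyma range and 0-1 normalize.

    hu_slice: 2-D array of Hounsfield Unit values for one CT image.
    """
    windowed = np.clip(hu_slice, HU_MIN, HU_MAX)
    return (windowed - HU_MIN) / (HU_MAX - HU_MIN)
```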
And 2, constructing a target detection network, and training the network parameters with single CT images from the target detection training set to realize the target region extraction network. To cope with the large variation in target size and improve the network's adaptation to targets of different sizes, the constructed target detection network is a Faster R-CNN network improved with an iterative self-organizing analysis algorithm. The network comprises a suggested region generation network and a target classification network, which share a set of convolutional layers for extracting image features. The method specifically comprises the following steps:
and 2.1, inputting a single CT image into a group of convolutional layers to extract image characteristics.
Step 2.2, process the training set using an iterative self-organizing analysis algorithm. The algorithm clusters according to the size and shape distribution of the samples in the training set and generates a group of parameters for proposal generation, comprising the sizes and the shapes of the suggested regions to be generated.
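The patent does not spell out its ISODATA configuration, so the following sketch only illustrates the idea: cluster the (width, height) pairs of the training-set boxes with k-means-style assignment plus split and merge steps. The thresholds, the initial cluster count, and the function name are assumptions:

```python
import numpy as np

def isodata_anchors(box_wh, k_init=3, n_iter=10, split_std=0.4, merge_dist=0.1):
    """Simplified ISODATA over (width, height) pairs of training boxes.

    A sketch of the idea, not the patent's exact algorithm: k-means-style
    assignment, plus splitting clusters with large relative spread and
    merging near-duplicate centers. All thresholds are illustrative.
    """
    rng = np.random.default_rng(0)
    centers = box_wh[rng.choice(len(box_wh), size=k_init, replace=False)]
    for _ in range(n_iter):
        # Assign every box to its nearest cluster center.
        dists = np.linalg.norm(box_wh[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        groups = [box_wh[labels == i] for i in range(len(centers)) if np.any(labels == i)]
        new_centers = []
        for g in groups:
            c = g.mean(axis=0)
            # Split clusters whose relative spread is large.
            if len(g) > 2 and (g.std(axis=0) / (c + 1e-7)).max() > split_std:
                new_centers.append(c + g.std(axis=0))
                new_centers.append(np.maximum(c - g.std(axis=0), 1.0))
            else:
                new_centers.append(c)
        # Merge centers that are nearly identical in scale.
        merged = []
        for c in new_centers:
            if all(np.linalg.norm(c - m) / np.linalg.norm(m) > merge_dist for m in merged):
                merged.append(c)
        centers = np.array(merged)
    return centers  # each row: (width, height) of one suggested region
```

Each returned row can serve as one suggested-region size, and its width-to-height ratio gives the shape parameter.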
Step 2.3, the suggested region generation network. This network comprises three modules: suggested region generation, suggested region classification, and suggested region regression. The generation module places a suggested region at each point of the image features produced by the last convolutional layer, using the parameters generated by the iterative self-organizing analysis algorithm.
For each generated suggested region, the suggested region classification module uses a classifier to predict whether the region contains a target object, giving the probability that it does, i.e., a predicted value. The true value of whether the suggested region contains the target object is 1 if it does and 0 otherwise. The suggested region classification module computes a first type of loss between the predicted value and this true value.
For each generated suggested region, the suggested region regression module uses a regressor to predict the coordinates of the region containing the target object, and computes a second type of loss measuring how well the suggested region coordinates fit the coordinates of each target in the training set.
Step 2.4 target classification network. And for the proposed region containing the target, the target classification network maps the proposed region to the image features to form a detection region on the image features, maps the detection region to vectors with the same length, and further classifies the vectors through a target position regression module and a target classification module.
For each mapped vector, the target classification module uses a classifier to predict the category to which the vector belongs, the categories comprising each class defined in the training set and the background. The module gives the probability that the vector belongs to each category, i.e., outputs a predicted value whose length is the number of categories. The true value is defined such that if the vector belongs to a certain class, the value of that class in the label is 1 and the values of the other classes are 0. The target classification module computes a third type of loss between the predicted value and the true value.
For each mapped vector, the target position regression module uses a regressor to predict the position coordinates of the corresponding target, and computes a fourth type of loss measuring how well the predicted coordinates fit the real target coordinates.
Step 2.5, loss and parameter update. The four types of losses calculated in steps 2.3 and 2.4 are combined to form the loss function of the target region extraction network. By optimizing this loss function, the parameters of the target region extraction network are updated. The training samples in the CT image target detection training set are used continuously, with parameters updated through steps 2.1-2.5, until the loss function value reaches its optimum, giving the trained target region extraction network.
And 3, constructing the false positive target removal network. The CT images used for training in step 2 are passed through the step-2 network to form detection regions. The CT image where each detection region lies is spliced with its adjacent CT images to form a three-dimensional sequence, and the detection region is taken out of the three-dimensional sequence to form a three-dimensional image block serving as a training sample of the network. False detection regions are taken as negative training samples and correct detection regions as positive samples, and both are fed into a three-dimensional convolutional neural network to extract three-dimensional image features. The extracted image features pass through a fully connected layer to obtain the predicted value of the target, namely the probability that the target is a real target. The loss is calculated from this probability and the true value of the target, and the parameters of the fully connected layer and the three-dimensional convolutional neural network are updated by optimizing the loss function, realizing the false positive target removal network. Because the output of the target region extraction network suffers from sample imbalance, with too few positive samples and too many negative samples, the constructed false positive target removal network is trained with the loss function Focal Loss.
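For concreteness, a sketch of such a network in PyTorch follows; layer counts and channel widths are assumptions, since the text (and Fig. 3) fix only the ingredients, namely 3-D convolutions, max pooling, and fully connected layers:

```python
import torch
import torch.nn as nn

class FalsePositiveRemovalNet(nn.Module):
    """3-D CNN for false positive removal (cf. Fig. 3: Conv, Pooling, FC).

    Layer counts and channel widths are illustrative guesses; the text
    only fixes the ingredients: 3-D convolutions, max pooling, fully
    connected layers, and a probability output.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(inplace=True),
            nn.Linear(128, 1), nn.Sigmoid(),  # P(candidate is a real target)
        )

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        return self.classifier(self.features(x)).squeeze(1)
```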
And 4, after the training of the target region extraction network and the false positive target removal network is finished, the two networks can be used to detect targets in CT images. During detection, the CT image to be detected passes through the target region extraction network, and the detection regions output by that network are extracted; the detection regions comprise correct detection regions and a large number of false detection regions. Each detection region is spliced with the adjacent CT images to form a three-dimensional image block, which is sent into the false positive target removal network to remove false detection regions, yielding the region and type of the target to be detected.

According to the second aspect of the application, the method for automatically detecting targets in CT images, based on a Faster R-CNN improved by an iterative self-organizing analysis algorithm and a three-dimensional convolutional neural network improved by Focal Loss, comprises the following steps:
step 1, constructing a target detection training set in a CT image;
step 2, constructing a target detection network, and training network parameters by using single CT images in a target detection training set to realize a target area extraction network;
step 3, constructing a false positive target removing network;
step 4, during detection, the CT image to be detected passes through a target area extraction network, and a detection area output by the target area extraction network is extracted, wherein the detection area comprises a correct detection area and a large number of false detection areas; the detection area and the adjacent CT images are spliced to form a three-dimensional image block; and sending the three-dimensional image block into a false positive target removing network to remove a false detection area, and obtaining the area and the type of the target to be detected.
According to the method for automatically detecting targets in CT images of the second aspect of the application, further, the target region extraction network uses the improved Faster R-CNN network: an iterative self-organizing analysis algorithm generates the proposal parameters for the region proposal network of the Faster R-CNN, so as to adapt to targets of different shapes.
The method for automatically detecting targets in CT images according to the second aspect of the present application is further characterized in that the false positive target removal network is trained with the loss function Focal Loss. Focal Loss balances positive against negative samples and difficult against easy samples in the deep neural network; it is expressed as follows:

FL(ŷ, y) = -α·y·(1-ŷ)^γ·log(ŷ) - (1-α)·(1-y)·ŷ^γ·log(1-ŷ),

where ŷ indicates the predicted value, y the true value, α is a weighting factor in the interval [0,1], and γ is an adjustable parameter for weighing the difficult and easy samples. This loss function better handles the imbalance between positive and negative samples in the false positive removal task. The network optimizes its parameters by minimizing the loss function, realizing the training of the false positive target removal network.
According to a third aspect of the present application, there is provided a computer device for automatic target detection in CT images, based on a Faster R-CNN improved by an iterative self-organizing analysis algorithm and a three-dimensional convolutional neural network improved by Focal Loss, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the following steps when executing the program:
step 1, constructing a target detection training set in the CT image. The method comprises the steps of CT image acquisition and image preprocessing; the acquired CT data comprise a complete CT image sequence and the category and position labels of the target objects in each CT image; the image preprocessing comprises operations such as image segmentation and normalization;
step 2, constructing a target detection network, and training network parameters by using single CT images in a target detection training set to realize a target area extraction network; the method specifically comprises the following steps:
step 2.1, inputting a single CT image into a group of convolutional layers to extract image characteristics;
step 2.2, processing the training set by using an iterative self-organizing analysis algorithm; clustering is carried out by an iterative self-organizing analysis algorithm according to the sample size and shape distribution in the training set, and a group of parameters for generating the suggestion region are generated, wherein the parameters comprise the size of the generated suggestion region and the shape of the generated suggestion region;
step 2.3, generating a suggested region on each point of the image characteristics generated by the last layer of convolution layer by using parameters generated by an iterative self-organizing analysis algorithm;
for each generated suggestion region, predicting whether the suggestion region contains a target object by using a classifier, and giving the probability that the suggestion region contains the target object, namely a predicted value; a first type of loss is calculated between the predicted value and the true value of whether the suggestion region contains the target object.
For each generated suggestion region, predicting the coordinates of the suggestion region containing the target object; a second type of loss is calculated, measuring how well the suggestion region coordinates fit the coordinates of each target in the training set.
2.4, mapping the suggestion region containing the target to the image feature to form a detection region on the image feature, mapping the detection region to a vector with the same length, and further classifying the vector through a target position regression module and a target classification module;
for each mapped vector, predicting the category to which the vector belongs, the categories comprising each class defined in the training set and the background; giving the probability that the vector belongs to each category, namely outputting a predicted value whose length is the number of categories; a third type of loss is calculated between the predicted and true values.
For each mapped vector, predicting the position coordinates of the corresponding target in the vector; a fourth type of loss is calculated, measuring how well the predicted coordinates fit the real target coordinates;
step 2.5 loss and parameter update. Combining the four types of losses calculated in the steps 2.3 and 2.4 to form a loss function of the target area extraction network; updating parameters of the target area extraction network by optimizing the loss function;
continuously using the training samples in the target detection training set in the CT image, and updating parameters through the steps 2.1-2.5 until the loss function value reaches the optimum value to obtain a trained target region extraction network;
step 3, constructing a false positive target removal network: the CT images used for training in step 2 are passed through the step-2 network to form detection regions; the CT image where each detection region lies is spliced with its adjacent CT images to form a three-dimensional sequence, and the detection region is taken out of the three-dimensional sequence to form a three-dimensional image block serving as a training sample of the network; false detection regions are taken as negative training samples and correct detection regions as positive samples, and both are fed into a three-dimensional convolutional neural network to extract three-dimensional image features; the extracted image features pass through a fully connected layer to obtain the predicted value of the target, namely the probability that the target is a real target. The loss is calculated from this probability and the true value of the target, and the parameters of the fully connected layer and the three-dimensional convolutional neural network are updated by optimizing the loss function, realizing the false positive target removal network; the constructed false positive target removal network is trained with the loss function Focal Loss.
According to the computer device of the third aspect of the present application, when executing the program the processor further implements step 4: when detecting targets in a CT image, the CT image to be detected passes through the target region extraction network, and the detection regions output by that network are extracted, comprising correct detection regions and a large number of false detection regions; each detection region is spliced with the adjacent CT images to form a three-dimensional image block; the three-dimensional image block is sent into the false positive target removal network to remove false detection regions, yielding the region and type of the target to be detected.
According to a fourth aspect of the present application, there is provided an automatic target detection system for CT images, based on a Faster R-CNN improved by an iterative self-organizing analysis algorithm and a three-dimensional convolutional neural network improved by Focal Loss, comprising:
the training set constructing unit is used for constructing a target detection training set in the CT image;
the target detection network construction unit is used for constructing a target detection network, and training network parameters by using single CT images in a target detection training set to realize a target area extraction network;
a false positive target removal network construction unit, used for splicing, for each detection region formed by the target detection network construction unit, the CT image where the detection region lies with its adjacent CT images to form a three-dimensional sequence, and taking the detection region out of the three-dimensional sequence to form a three-dimensional image block serving as a training sample of the false positive target removal network; false detection regions are taken as negative training samples and correct detection regions as positive samples, and both are fed into a three-dimensional convolutional neural network to extract three-dimensional image features; the extracted image features pass through a fully connected layer to obtain the predicted value of a target, namely the probability that the target is a real target; the loss is calculated from this probability and the true value of the target, and the parameters of the fully connected layer and the three-dimensional convolutional neural network are updated by optimizing the loss function, realizing the false positive target removal network;
the detection unit is used for extracting a detection area output by the target area extraction network when the CT image to be detected passes through the target area extraction network during detection, wherein the detection area comprises a correct detection area and a large number of false detection areas, and the detection area and the adjacent CT image are spliced to form a three-dimensional image block; and sending the three-dimensional image block into a false positive target removing network to remove a false detection area, and obtaining the area and the type of the target to be detected.
The invention has the beneficial effects that:
1. In practical target detection tasks on CT images, targets are widely distributed, variable in size, uncertain in density, and diverse in shape. Owing to this morphological complexity, it is often difficult for examiners and for conventional automated detection algorithms to distinguish targets from other interfering objects effectively. To address the morphological diversity of targets, the invention adopts a deep learning approach built on the Faster R-CNN architecture, which has leading performance in target detection. The architecture automatically learns and extracts deep features of the CT image and detects targets based on those features. Into the Faster R-CNN structure, the invention introduces the iterative self-organizing analysis (ISODATA) clustering algorithm, which automatically learns the size distribution of targets and greatly improves the sensitivity of the detection network to targets of various sizes in CT images.
2. In the existing target detection research based on deep learning, due to the characteristics of a target detection network, a large number of false positive samples are generated in the target region extraction process, so that the number of positive samples and negative samples is extremely unbalanced in the false positive removal process, the training difficulty is high, and the number of false positive targets is difficult to effectively reduce. In addition, the target detection model in most CT images does not consider the three-dimensional characteristics of the CT image sequence itself, and neglects the role of the three-dimensional spatial information in target detection. The invention realizes the extraction of the three-dimensional information of the CT image through the three-dimensional convolution neural network, and solves the problem of unbalance of positive and negative samples by adopting FocalLoss, thereby achieving the aim of accurately reducing the false positive target.
Drawings
Fig. 1 shows the detailed steps of the method for detecting targets in CT images proposed in this disclosure. Fig. 1A illustrates the training steps, and Fig. 1B the detection steps.
FIG. 2 is a schematic diagram of the structure of the method of this patent.
Fig. 3 shows a specific structure of the false positive removal network proposed in this patent. Where Conv denotes the convolutional layer, Pooling denotes the max Pooling layer, and FC denotes the fully-connected layer.
FIG. 4 is a diagram illustrating the steps of generating proposed regions using the iterative self-organizing analysis algorithm used in this patent.
Fig. 5 is a schematic structural diagram of a target detection system in a CT image.
Fig. 6 is a schematic structural diagram of a target detection implementation device in a CT image.
Detailed Description
The present invention will be described more specifically with reference to examples. However, the present invention is not limited to the following examples, so long as they do not depart from its concept.
An embodiment of the invention is used for detecting lung nodules in CT images, i.e. the detected object is a nodule region of the lung. The detailed steps of the proposed method for object detection in CT images are shown in fig. 1. FIG. 1A illustrates a training process and FIG. 1B illustrates a detection process. On the whole, the method for detecting the target in the CT image comprises four steps of CT image target detection training set construction, target area extraction network construction, false positive target removal network construction and target detection in the CT image. The following is set forth in connection with specific embodiments.
Step 1, collecting a chest CT image, preprocessing the chest CT image, labeling a lung nodule region in the image, and constructing a lung nodule detection training set.
The data of this example come from the open-source Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset provided by the National Cancer Institute, which contains a total of 1018 low-dose lung CT scans from 888 patients. The acquired values of this dataset are X-ray attenuation values in HU, calculated by the formula
HU = 1000 × (μ_X - μ_water) / (μ_water - μ_air),

where μ_X is the linear attenuation coefficient, related to the X-ray intensity; under this definition the HU value of water (attenuation coefficient μ_water) is 0 and the HU value of air (attenuation coefficient μ_air) is -1000.
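Readers starting from raw scanner files rather than HU values typically apply the standard DICOM rescale first; a sketch follows (the patent itself assumes HU input, and the function name is illustrative):

```python
import pydicom

def dicom_to_hu(path: str):
    """Convert a raw DICOM CT slice to Hounsfield Units.

    Uses the standard DICOM RescaleSlope/RescaleIntercept fields;
    this is the usual step before the HU windowing described above.
    """
    ds = pydicom.dcmread(path)
    return (ds.pixel_array.astype("float32") * float(ds.RescaleSlope)
            + float(ds.RescaleIntercept))
```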
The image preprocessing mode adopted by the invention comprises lung parenchyma segmentation and image normalization. In a lung CT image, the present invention considers the HU value within [ -1200,600] as the lung parenchymal region. The lung parenchymal segmentation means that a lung parenchymal region is reserved in the CT image, and an image part outside the lung parenchymal region is filled with an irrelevant value. Image normalization refers to mapping the CT image values to the [0,1] interval by a 0-1 normalization method.
And 2, constructing a pulmonary nodule detection network, and training network parameters by using the CT images in the pulmonary nodule detection training set. The network structure is a Faster R-CNN network structure shown based on the target area extraction network in FIG. 2.
2.1 Adjacent lung CT slices are combined and sent into the Faster R-CNN model. The Faster R-CNN model extracts deep image features through a convolutional neural network (this embodiment adopts VGG16, a classic deep learning model for image classification). To ensure that the Faster R-CNN model adapts effectively to lung nodules of different sizes and shapes, the input images are sent to an iterative self-organizing analysis algorithm to calculate the parameters of the suggested regions, including their sizes and aspect ratios. The steps of the iterative self-organizing analysis algorithm are shown in Fig. 4.
2.2 The image features generated by the last convolutional layer and the parameters generated by the iterative self-organizing analysis algorithm are sent to the region proposal network to generate suggested regions suspected of containing lung nodules. The region proposal network slides a 3 × 3 window over the feature map generated by the last layer of the convolutional neural network, generating at each position suggested regions whose sizes and aspect ratios are given by the input parameters.
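A sketch of this proposal-placement step, assuming the anchor sizes come from the ISODATA clustering sketched earlier; the function name, argument names, and box format are illustrative:

```python
import numpy as np

def generate_proposals(feat_h, feat_w, stride, anchor_wh):
    """Place one box per clustered (width, height) at every feature-map
    location, mirroring the 3x3 sliding-window step described above.

    stride is the image-to-feature-map downsampling factor (16 for the
    last VGG16 conv layer). Returns (N, 4) boxes as (x1, y1, x2, y2)
    in input-image coordinates.
    """
    ys, xs = np.meshgrid(np.arange(feat_h), np.arange(feat_w), indexing="ij")
    cx = (xs.ravel() + 0.5) * stride  # box centers in image space
    cy = (ys.ravel() + 0.5) * stride
    boxes = [np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
             for w, h in anchor_wh]
    return np.concatenate(boxes, axis=0)
```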
For each generated suggested region, the suggested region classification module uses a classifier to predict whether the region contains a lung nodule, giving the probability that it does, i.e., a predicted value. The true value is 1 if the suggested region contains a lung nodule and 0 otherwise. The suggested region classification module computes the classification loss between the predicted value and the true value, where the loss function L_cls is defined as follows:

L_cls(p̂, p) = -log(p̂·p + (1-p̂)·(1-p)),

where p̂ and p represent the predicted value and the true value of the suggested region being a lung nodule region, respectively.
For each generated suggested region, the suggested region regression module uses a regressor to predict the coordinates of the region containing the target object, and computes the regression loss measuring how well the suggested region coordinates fit the coordinates of the real target. The loss function L_reg is defined as follows:

L_reg(t̂, t_p) = smooth_L1(t̂ - t_p),

where t̂ and t_p represent the predicted and the real coordinate correction values of the suggested region, respectively, and smooth_L1 is the smooth L1 loss function:

smooth_L1(x) = 0.5·x², if |x| < 1; |x| - 0.5, otherwise.

Through these two loss functions, the region proposal network computes its total loss L_rpn, defined as follows:

L_rpn = (1/N_cls)·Σ_i L_cls(p̂_i, p_i) + (1/N_reg)·Σ_i I(p_i ≥ 1)·L_reg(t̂_i, t_p,i),

where N_cls denotes the number of generated suggested regions, N_reg the total number of positive and negative suggested regions, and I(p ≥ 1) indicates that the position regression loss is computed only when a lung nodule is present in the suggested region.
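Putting the loss definitions above together, a PyTorch sketch follows; variable names are mine, and the positive/negative proposal sampling that feeds it is omitted:

```python
import torch

def smooth_l1(x):
    """Elementwise smooth L1, as defined above."""
    absx = x.abs()
    return torch.where(absx < 1.0, 0.5 * x ** 2, absx - 0.5)

def rpn_loss(p_hat, p, t_hat, t_p, n_cls, n_reg):
    """L_rpn from the text. p_hat/p: predicted/true objectness in [0, 1];
    t_hat/t_p: predicted/true coordinate corrections, shape (N, 4)."""
    eps = 1e-7
    l_cls = -torch.log((p_hat * p + (1 - p_hat) * (1 - p)).clamp(min=eps)).sum() / n_cls
    pos = (p >= 1).float()  # I(p >= 1): regress positive proposals only
    l_reg = (pos[:, None] * smooth_l1(t_hat - t_p)).sum() / n_reg
    return l_cls + l_reg
```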
And 2.4, the suggested regions output by the region proposal network are combined with the lung nodule image features through a suggested-region pooling layer and uniformly mapped to vectors of the same length, which are further classified by the target position regression module and the target classification module. For each mapped vector, the target classification module uses a classifier to predict whether the vector is a lung nodule, giving the probabilities that the vector belongs to the lung nodule class and to the background, i.e., outputting a predicted value of length 2. The true value is defined such that if the vector belongs to a lung nodule, the value of that class in the label is 1 and the background value is 0; otherwise the background value is 1 and the lung nodule value is 0. The target classification module computes the loss between the predicted value and the true value; the loss function L_cls(p, u) is defined as follows:

L_cls(p, u) = -log(p·u + (1-p)·(1-u)),

where p is the predicted value and u is the true value.
For each mapped vector, the target position regression module uses a regressor to predict the position coordinates of the corresponding target, and computes the loss measuring how well the predicted coordinates fit the coordinates of the real target. The loss function L_loc(t^u, v) is defined as follows:

L_loc(t^u, v) = smooth_L1(t^u - v),

where t^u denotes the predicted coordinate correction value and v the real coordinate correction value. Through these two loss functions, the network computes the total loss L_rcnn and updates the network parameters in reverse by optimizing it; training samples are input continuously over multiple iterations to obtain a trained network. The loss function L_rcnn is defined as follows:

L_rcnn(p, u, t^u, v) = L_cls(p, u) + I(u ≥ 1)·L_loc(t^u, v),

where I(u ≥ 1) indicates that the position regression loss is computed only when a lung nodule is present in the suggested region.
The final loss function of the target region extraction network can be written as

L = L_rpn + L_rcnn.
By optimizing the loss function, parameters of the target area extraction network are updated. And continuously predicting and updating parameters by using the training samples in the lung nodule detection training set in the CT image until the loss function value reaches the optimum value, and obtaining a trained target region extraction network.
And 3, constructing the false positive target removal network. The regions detected in step 2 are spliced with the patient's adjacent CT images to form three-dimensional image blocks. To effectively exploit the three-dimensional nature of the lung CT sequence, in this embodiment each lung nodule region detected in step 2 is extracted from its CT image and combined with the corresponding regions in the 19 adjacent CT slices before and after it; the resulting three-dimensional block is scaled to 36 × 36 pixels per slice, forming a block of 36 × 36 × 20 pixels, as sketched below. According to the training labels, the false detection regions of step 2 are taken as negative training samples and the correct detection regions as positive samples.
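A sketch of this block extraction under the stated 36 × 36 × 20 geometry; border clamping is an assumption the text does not specify, and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def extract_3d_block(volume, z, x1, y1, x2, y2, depth=20, out_hw=36):
    """Cut a detected region plus its adjacent slices out of the CT
    volume and rescale it to a (depth, 36, 36) block, as in step 3.

    volume: (num_slices, H, W) CT sequence; z: slice index of the
    detection; (x1, y1, x2, y2): detected box.
    """
    z0 = max(0, min(z - depth // 2, volume.shape[0] - depth))  # clamp at borders
    patch = volume[z0:z0 + depth, y1:y2, x1:x2].astype(np.float32)
    scale = (1.0, out_hw / patch.shape[1], out_hw / patch.shape[2])
    return zoom(patch, scale, order=1)  # -> (20, 36, 36) image block
```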
Features are extracted from the positive and negative training samples by the three-dimensional convolutional neural network, and the probability of being a correct detection region, i.e., the predicted value, is obtained through a fully connected layer and an output layer (softmax layer). If the detection region is a correct detection region, the true value is 1; otherwise it is 0.
The loss between the predicted value and the true value of each sample is then calculated. Because the suspected lung nodule regions proposed by the target region extraction network contain a large number of false positive regions, the numbers of positive and negative samples are greatly unbalanced during false positive removal, training is difficult, and the number of false positive lung nodules is hard to reduce effectively. To solve this problem, the loss function of the false positive target removal network adopts Focal Loss, as follows.
FL(ŷ, y) = -α·y·(1-ŷ)^γ·log(ŷ) - (1-α)·(1-y)·ŷ^γ·log(1-ŷ),

where ŷ indicates the predicted value, y the true value, α is a weighting factor in the interval [0,1], and γ is an adjustable parameter for weighing the difficult and easy samples. The network updates its parameters by optimizing this loss function. The constructed three-dimensional training samples are used continuously for prediction and parameter updates until the loss function value reaches its optimum, yielding a trained false positive target removal network.
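A hypothetical training step wiring together the focal_loss and FalsePositiveRemovalNet sketches from earlier sections, with random placeholder data standing in for the three-dimensional training samples:

```python
import torch

model = FalsePositiveRemovalNet()
blocks = torch.randn(8, 1, 20, 36, 36)       # batch of 36x36x20 blocks
labels = torch.randint(0, 2, (8,)).float()   # 1 = correct detection

model(blocks)  # dry run so the lazy linear layer gets its input shape
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

optimizer.zero_grad()
loss = focal_loss(model(blocks), labels, alpha=0.25, gamma=2.0)
loss.backward()
optimizer.step()
```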
And 4, after training of the target region extraction network and the false positive target removal network is finished, the target detection system for CT images is obtained. During detection, the CT image to be detected passes through the target region extraction network, and the detection regions output by that network are extracted; the detection regions comprise correct detection regions and a large number of false detection regions. Each detection region is spliced with the adjacent CT images to form a three-dimensional image block, which is sent into the false positive target removal network to remove false detection regions, yielding the region and type of the target to be detected.
Fig. 5 is a schematic structural diagram of a target detection system in a CT image according to an embodiment of the present invention.
Referring to fig. 5, the system for detecting a target in a CT image includes:
a training set construction unit, used for acquiring a CT image sequence and the corresponding labels, and processing the CT images with the image preprocessing method to generate the training set;
a target region extraction network construction unit, used for constructing the target region extraction network and updating its parameters with the training samples in the training set to obtain a trained target region extraction network;
a false positive target removal network construction unit, used for constructing the false positive target removal network, extracting the target regions predicted by the target region extraction network to form three-dimensional image blocks, and using these blocks as training samples to update the network parameters, obtaining a trained false positive target removal network;
and an image detection unit, used for detecting the target objects in CT images once network training is finished.
Fig. 6 is a schematic structural diagram of a target detection implementation apparatus in a CT image according to an embodiment of the present invention.
Referring to fig. 6, the apparatus includes a memory, a processor, and a graphic processor; the memory is used for storing one or more computer instructions and network parameters, the processor is used for executing one or more computer instructions, and the graphics processor is used for accelerating the calculation of related numerical values in the network training process so as to realize the target detection method in the CT image.
Further, the device shown in fig. 6 also comprises a bus and a communication interface; the processor, the communication interface, and the memory are connected through the bus.
The memory may comprise high-speed random access memory (RAM) or non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through the communication interface, over the Internet, a wide area network, a local area network, a metropolitan area network, or the like. The bus may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, and a control bus. For ease of illustration, only one double-headed arrow is shown in fig. 6, but this does not indicate only one bus or one type of bus.

Claims (5)

1. A method for automatically detecting targets in CT images, based on a Faster R-CNN improved by an iterative self-organizing analysis algorithm and a three-dimensional convolutional neural network improved by Focal Loss, characterized by comprising the following steps:
step 1, constructing a target detection training set in the CT image. The method comprises the steps of CT image acquisition and image preprocessing. The acquired CT data comprise a complete CT image sequence and the category and position labels of the target objects in each CT image. Image preprocessing includes, but is not limited to, image segmentation and normalization. Image segmentation refers to removing regions of the CT image that are irrelevant to detection, which simplifies the detection task. For example, in a lung CT image, the region with Hounsfield Unit (HU) values in the range [-1200, 600] is generally regarded as the lung parenchyma, and other regions are regarded as background. Normalization refers to mapping the pixel values of each image into the [0,1] interval; these preprocessing steps facilitate the training of the deep learning method.
And 2, constructing a target detection network, and training the network parameters with single CT images from the target detection training set to realize the target region extraction network. To cope with the large variation in target size and improve the network's adaptation to targets of different sizes, the constructed target detection network is a Faster R-CNN network improved with an iterative self-organizing analysis algorithm. The network comprises a suggested region generation network and a target classification network, which share a set of convolutional layers for extracting image features. The method specifically comprises the following steps:
and 2.1, inputting a single CT image into a group of convolutional layers to extract image characteristics.
Step 2.2, process the training set using an iterative self-organizing analysis algorithm. The algorithm clusters according to the size and shape distribution of the samples in the training set and generates a group of parameters for proposal generation, comprising the sizes and the shapes of the suggested regions to be generated.
Step 2.3, the suggested region generation network. This network comprises three modules: suggested region generation, suggested region classification, and suggested region regression. The generation module places a suggested region at each point of the image features produced by the last convolutional layer, using the parameters generated by the iterative self-organizing analysis algorithm.
For each generated suggested region, the suggested region classification module uses a classifier to predict whether the region contains a target object, giving the probability that it does, i.e., a predicted value. The true value of whether the suggested region contains the target object is 1 if it does and 0 otherwise. The suggested region classification module computes a first type of loss between the predicted value and this true value.
For each generated suggested region, the suggested region regression module uses a regressor to predict the coordinates of the region containing the target object, and computes a second type of loss measuring how well the suggested region coordinates fit the coordinates of each target in the training set.
Step 2.4 target classification network. And for the proposed region containing the target, the target classification network maps the proposed region to the image features to form a detection region on the image features, maps the detection region to vectors with the same length, and further classifies the vectors through a target position regression module and a target classification module.
For each mapped vector, the target classification module uses a classifier to predict the category to which the vector belongs, the categories comprising each class defined in the training set and the background. The module gives the probability that the vector belongs to each category, i.e., outputs a predicted value whose length is the number of categories. The true value is defined such that if the vector belongs to a certain class, the value of that class in the label is 1 and the values of the other classes are 0. The target classification module computes a third type of loss between the predicted value and the true value.
For each mapped vector, the target position regression module uses a regressor to predict the position coordinates of the corresponding target, and computes a fourth type of loss measuring how well the predicted coordinates fit the real target coordinates.
Step 2.5, loss and parameter update. The four types of losses calculated in steps 2.3 and 2.4 are combined to form the loss function of the target region extraction network. By optimizing this loss function, the parameters of the target region extraction network are updated. The training samples in the CT image target detection training set are used continuously, with parameters updated through steps 2.1-2.5, until the loss function value reaches its optimum, giving the trained target region extraction network.
And 3, constructing the false positive target removal network. The CT images used for training in step 2 are passed through the step-2 network to form detection regions. The CT image where each detection region lies is spliced with its adjacent CT images to form a three-dimensional sequence, and the detection region is taken out of the three-dimensional sequence to form a three-dimensional image block serving as a training sample of the network. False detection regions are taken as negative training samples and correct detection regions as positive samples, and both are fed into a three-dimensional convolutional neural network to extract three-dimensional image features. The extracted image features pass through a fully connected layer to obtain the predicted value of the target, namely the probability that the target is a real target. The loss is calculated from this probability and the true value of the target, and the parameters of the fully connected layer and the three-dimensional convolutional neural network are updated by optimizing the loss function, realizing the false positive target removal network. Because the output of the target region extraction network suffers from sample imbalance, with too few positive samples and too many negative samples, the constructed false positive target removal network is trained with the loss function Focal Loss.
And 4, after the training of the target area extraction network and the false positive target removal network is finished, the target area extraction network and the false positive target removal network can be used for detecting the target in the CT image. During detection, the CT image to be detected passes through a target area extraction network, and a detection area output by the target area extraction network is extracted, wherein the detection area comprises a correct detection area and a large number of false detection areas; the detection area and the adjacent CT images are spliced to form a three-dimensional image block; and sending the three-dimensional image block into a false positive target removing network to remove a false detection area, and obtaining the area and the type of the target to be detected.
2. The method of claim 1, wherein the target region extraction network uses the improved Faster R-CNN network, i.e., an iterative self-organizing analysis algorithm generates the proposal parameters for the region proposal network of the Faster R-CNN, so as to adapt to targets of different shapes.
3. The method of claim 1, wherein the false positive target removal network is trained using the loss function Focal Loss. Focal Loss balances positive against negative samples and difficult against easy samples in the deep neural network; it is expressed as follows:

FL(ŷ, y) = -α·y·(1-ŷ)^γ·log(ŷ) - (1-α)·(1-y)·ŷ^γ·log(1-ŷ),

where ŷ indicates the predicted value, y the true value, α is a weighting factor in the interval [0,1], and γ is an adjustable parameter for weighing the difficult and easy samples. This loss function better handles the imbalance between positive and negative samples in the false positive removal task. The network optimizes its parameters by minimizing the loss function, realizing the training of the false positive target removal network.
4. A computer device for automatic target detection in CT images, based on a Faster R-CNN improved by an iterative self-organizing analysis algorithm and a three-dimensional convolutional neural network improved by Focal Loss, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the following steps when executing the program:
step 1, constructing a target detection training set in the CT image. The method comprises the steps of CT image acquisition and image preprocessing; the acquired CT data comprise a complete CT image sequence and the category and position labels of the target objects in each CT image; the image preprocessing comprises operations such as image segmentation and normalization;
step 2, constructing a target detection network, and training network parameters by using single CT images in a target detection training set to realize a target area extraction network; the method specifically comprises the following steps:
step 2.1, inputting a single CT image into a group of convolutional layers to extract image characteristics;
step 2.2, processing the training set by using an iterative self-organizing analysis algorithm; clustering is carried out by an iterative self-organizing analysis algorithm according to the sample size and shape distribution in the training set, and a group of parameters for generating the suggestion region are generated, wherein the parameters comprise the size of the generated suggestion region and the shape of the generated suggestion region;
step 2.3, generating a suggested region on each point of the image characteristics generated by the last layer of convolution layer by using parameters generated by an iterative self-organizing analysis algorithm;
for each generated suggestion region, predicting whether the suggestion region contains a target object by using a classifier, and giving the probability that the suggestion region contains the target object, namely a predicted value; a first type of loss is calculated between the predicted value and the true value of whether the suggestion region contains the target object.
For each generated suggestion region, predicting the coordinates of the suggestion region containing the target object; a second type of loss is calculated, measuring how well the suggestion region coordinates fit the coordinates of each target in the training set.
step 2.4, mapping each suggestion region containing a target onto the image features to form a detection region on the image features, mapping each detection region to a vector of the same length, and further processing the vector through a target position regression module and a target classification module;
for each mapped vector, predicting the category to which the vector belongs, the categories comprising every category defined in the training set plus the background; giving the probability that the vector belongs to each category, i.e. outputting a predicted value whose length equals the number of categories; a third loss is calculated between this predicted value and the true value;
for each mapped vector, predicting the position coordinates of the target corresponding to the vector; a fourth loss is calculated as the degree of fit between the predicted coordinates and the true target coordinates;
step 2.5, loss computation and parameter update: combining the four losses calculated in steps 2.3 and 2.4 into the loss function of the target region extraction network (see the combined-loss formula after this claim), and updating the parameters of the target region extraction network by optimizing this loss function;
the training samples in the CT image target detection training set are used repeatedly, and the parameters are updated through steps 2.1-2.5 until the loss function value reaches its optimum, yielding the trained target region extraction network;
step 3, constructing a false positive target removal network. The CT images used for training in step 2 are passed through step 2 to form detection regions; each CT image carrying a detection region is stacked with its adjacent CT images to form a three-dimensional sequence, and the detection region is cropped out of this sequence to form a three-dimensional image block serving as a training sample of the network (see the patch-extraction sketch after this claim). False detection regions serve as negative training samples and correct detection regions as positive samples; both are fed into a three-dimensional convolutional neural network to extract three-dimensional image features. The extracted features pass through a fully connected layer to yield a predicted value for each target, namely the probability that it is a real target. A loss is calculated from this probability and the true value of the target, and the parameters of the fully connected layer and of the three-dimensional convolutional neural network are updated by optimizing the loss function, thereby realizing the false positive target removal network; the constructed false positive target removal network is trained with the Focal Loss function.
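The claims leave the image preprocessing of step 1 open. A common choice for CT data, shown here purely as an illustrative assumption, is to clip the Hounsfield-unit values to a fixed window and rescale them to [0, 1]:

```python
import numpy as np

def normalize_ct(slice_hu: np.ndarray,
                 hu_min: float = -1000.0, hu_max: float = 400.0) -> np.ndarray:
    """Clip a CT slice to a Hounsfield-unit window and rescale to [0, 1].

    The window bounds are illustrative defaults; suitable values depend on
    the anatomy and the targets being detected.
    """
    clipped = np.clip(slice_hu.astype(np.float32), hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)
```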
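The claim does not fix an implementation for the clustering of step 2.2. The sketch below is a simplified, ISODATA-flavoured clustering of the ground-truth box widths and heights, with split and merge steps; all names, defaults, and thresholds are illustrative assumptions, not the patented procedure itself.

```python
import numpy as np

def isodata_anchor_params(boxes_wh: np.ndarray, k_init: int = 6,
                          n_iter: int = 20, split_std: float = 8.0,
                          merge_dist: float = 4.0) -> np.ndarray:
    """Cluster ground-truth (width, height) pairs to obtain the sizes and
    shapes of the suggestion regions, in the spirit of an iterative
    self-organizing (ISODATA) analysis: spread-out clusters are split,
    nearby cluster centres are merged."""
    rng = np.random.default_rng(0)
    centers = boxes_wh[rng.choice(len(boxes_wh), size=k_init, replace=False)]
    for _ in range(n_iter):
        # assignment: each box goes to its nearest centre
        dists = np.linalg.norm(boxes_wh[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = []
        for c in range(len(centers)):
            members = boxes_wh[labels == c]
            if len(members) == 0:
                continue  # drop empty clusters
            mu, sd = members.mean(axis=0), members.std(axis=0)
            if sd.max() > split_std and len(members) > 2:
                # split: replace the centre by two centres offset by half the spread
                new_centers.extend([mu + sd / 2.0, mu - sd / 2.0])
            else:
                new_centers.append(mu)
        # merge: keep only centres farther apart than merge_dist
        kept = []
        for c in new_centers:
            if all(np.linalg.norm(c - k) >= merge_dist for k in kept):
                kept.append(c)
        centers = np.array(kept)
    return centers  # one (width, height) pair per suggestion-region shape
```

Each returned (width, height) pair then parameterizes the suggestion regions laid down at every position of the final feature map in step 2.3, in place of the hand-chosen anchor scales and aspect ratios of the original Faster R-CNN.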
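Step 2.5 combines the four losses of steps 2.3 and 2.4. The claim does not spell out the combination; in the standard Faster R-CNN formulation, which the claim appears to follow, the overall objective is a weighted sum (the weights $\lambda_i$ are an assumption):

```latex
L \;=\; \lambda_1 L^{\mathrm{prop}}_{\mathrm{cls}}
      + \lambda_2 L^{\mathrm{prop}}_{\mathrm{reg}}
      + \lambda_3 L^{\mathrm{det}}_{\mathrm{cls}}
      + \lambda_4 L^{\mathrm{det}}_{\mathrm{reg}}
```

where $L^{\mathrm{prop}}_{\mathrm{cls}}$ and $L^{\mathrm{prop}}_{\mathrm{reg}}$ are the first and second losses computed on the suggestion regions, and $L^{\mathrm{det}}_{\mathrm{cls}}$ and $L^{\mathrm{det}}_{\mathrm{reg}}$ are the third and fourth losses computed on the mapped detection vectors.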
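For step 3, one plausible construction of the three-dimensional training samples is to stack the slice carrying a detection region with its adjacent slices and crop the region out of the stack. The sketch below assumes integer pixel coordinates and a stack of 2·depth+1 slices; the function name and the default depth are illustrative.

```python
import numpy as np

def extract_3d_block(volume: np.ndarray, z: int, box: tuple,
                     depth: int = 3) -> np.ndarray:
    """Crop a (2*depth + 1, h, w) image block around one detection region.

    volume: full CT sequence as a (num_slices, H, W) array.
    z:      index of the slice on which the region was detected.
    box:    (x1, y1, x2, y2) integer coordinates of the detection region.
    depth:  number of adjacent slices taken on each side (an assumption).
    """
    x1, y1, x2, y2 = box
    z1, z2 = max(0, z - depth), min(volume.shape[0], z + depth + 1)
    block = volume[z1:z2, y1:y2, x1:x2].astype(np.float32)
    # replicate border slices when the region lies near either end of the scan
    pad_before = max(0, depth - z)
    pad_after = max(0, z + depth + 1 - volume.shape[0])
    if pad_before or pad_after:
        block = np.pad(block, ((pad_before, pad_after), (0, 0), (0, 0)),
                       mode="edge")
    return block
```

Blocks built this way from false detection regions serve as negative samples, and blocks from correct detection regions as positive samples, for the three-dimensional convolutional neural network.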
5. The computer device according to claim 4, wherein the processor further implements step 4 when executing the program: when detecting targets in a CT image, the CT image to be detected is passed through the target region extraction network, and the detection regions output by that network are extracted, the detection regions comprising correct detection regions and a large number of false detection regions; each detection region is stacked with its adjacent CT images to form a three-dimensional image block; the three-dimensional image blocks are fed into the false positive target removal network to remove the false detection regions, yielding the regions and categories of the targets to be detected (a pipeline sketch follows this claim).
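A high-level sketch of this two-stage inference, reusing `extract_3d_block` from the sketch above and assuming hypothetical `region_net` and `fp_removal_net` callables with the interfaces described in the claims:

```python
def detect_targets(volume, region_net, fp_removal_net, threshold: float = 0.5):
    """Two-stage detection: per-slice region extraction, then 3D false
    positive removal. The interfaces of both networks are assumptions.

    region_net(slice)      -> iterable of (box, category) detections,
                              including many false detection regions.
    fp_removal_net(block)  -> probability that the detection is a real target.
    """
    results = []
    for z in range(volume.shape[0]):
        for box, category in region_net(volume[z]):
            block = extract_3d_block(volume, z, box)   # 3D image block
            if fp_removal_net(block) >= threshold:     # keep real targets only
                results.append({"slice": z, "box": box, "category": category})
    return results
```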
CN201911064984.9A 2019-11-01 2019-11-01 CT image automatic detection method and system Pending CN110827310A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911064984.9A CN110827310A (en) 2019-11-01 2019-11-01 CT image automatic detection method and system

Publications (1)

Publication Number Publication Date
CN110827310A true CN110827310A (en) 2020-02-21

Family

ID=69552255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911064984.9A Pending CN110827310A (en) 2019-11-01 2019-11-01 CT image automatic detection method and system

Country Status (1)

Country Link
CN (1) CN110827310A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599939A (en) * 2016-12-30 2017-04-26 深圳市唯特视科技有限公司 Real-time target detection method based on region convolutional neural network
CN108986073A (en) * 2018-06-04 2018-12-11 东南大学 A kind of CT image pulmonary nodule detection method based on improved Faster R-CNN frame
CN110135296A (en) * 2019-04-30 2019-08-16 上海交通大学 Airfield runway FOD detection method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ren S., He K., Girshick R., et al.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", Neural Information Processing Systems *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969622A (en) * 2020-02-28 2020-04-07 南京安科医疗科技有限公司 Image processing method and system for assisting pneumonia diagnosis
US11688063B2 (en) 2020-10-30 2023-06-27 Guerbet Ensemble machine learning model architecture for lesion detection
US11749401B2 (en) 2020-10-30 2023-09-05 Guerbet Seed relabeling for seed-based segmentation of a medical image
US11694329B2 (en) 2020-10-30 2023-07-04 International Business Machines Corporation Logistic model to determine 3D z-wise lesion connectivity
WO2022089473A1 (en) * 2020-10-30 2022-05-05 International Business Machines Corporation Multiple operating point false positive removal for lesion identification
US11688517B2 (en) 2020-10-30 2023-06-27 Guerbet Multiple operating point false positive removal for lesion identification
US11688065B2 (en) 2020-10-30 2023-06-27 Guerbet Lesion detection artificial intelligence pipeline computing system
US11436724B2 (en) 2020-10-30 2022-09-06 International Business Machines Corporation Lesion detection artificial intelligence pipeline computing system
US11587236B2 (en) 2020-10-30 2023-02-21 International Business Machines Corporation Refining lesion contours with combined active contour and inpainting
CN112785565A (en) * 2021-01-15 2021-05-11 上海商汤智能科技有限公司 Target detection method and device, electronic equipment and storage medium
CN112785565B (en) * 2021-01-15 2024-01-05 上海商汤智能科技有限公司 Target detection method and device, electronic equipment and storage medium
CN113222038B (en) * 2021-05-24 2021-10-22 北京安德医智科技有限公司 Breast lesion classification and positioning method and device based on nuclear magnetic image
CN113222038A (en) * 2021-05-24 2021-08-06 北京安德医智科技有限公司 Breast lesion classification and positioning method and device based on nuclear magnetic image
CN113256634A (en) * 2021-07-13 2021-08-13 杭州医策科技有限公司 Cervical carcinoma TCT slide negative-screening method and system based on deep learning
CN114005001A (en) * 2021-11-05 2022-02-01 西安交通大学 X-ray image detection method and system based on deep learning
CN114005001B (en) * 2021-11-05 2024-04-09 西安交通大学 X-ray image detection method and system based on deep learning
CN114677383A (en) * 2022-03-03 2022-06-28 西北工业大学 Pulmonary nodule detection and segmentation method based on multi-task learning
CN114677383B (en) * 2022-03-03 2024-03-15 西北工业大学 Pulmonary nodule detection and segmentation method based on multitask learning
CN114937502A (en) * 2022-07-07 2022-08-23 西安交通大学 Method and system for evaluating osteoporotic vertebral compression fracture based on deep learning

Similar Documents

Publication Publication Date Title
CN110827310A (en) CT image automatic detection method and system
CN110599448B (en) Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
CN109685776B (en) Pulmonary nodule detection method and system based on CT image
Gamarra et al. Split and merge watershed: A two-step method for cell segmentation in fluorescence microscopy images
WO2021203795A1 (en) Pancreas ct automatic segmentation method based on saliency dense connection expansion convolutional network
CN110717896B (en) Plate strip steel surface defect detection method based on significance tag information propagation model
Ramesh et al. Isolation and two-step classification of normal white blood cells in peripheral blood smears
CN111815564B (en) Method and device for detecting silk ingots and silk ingot sorting system
WO2022012110A1 (en) Method and system for recognizing cells in embryo light microscope image, and device and storage medium
CN106340016B (en) A kind of DNA quantitative analysis method based on microcytoscope image
CN112085714B (en) Pulmonary nodule detection method, model training method, device, equipment and medium
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
CN111079620B (en) White blood cell image detection and identification model construction method and application based on transfer learning
Pan et al. Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks
CN112819821B (en) Cell nucleus image detection method
CN110648322A (en) Method and system for detecting abnormal cervical cells
CN109685765B (en) X-ray film pneumonia result prediction device based on convolutional neural network
CN105389821B Medical image segmentation method based on the combination of cloud model and graph cut
CN110705565A (en) Lymph node tumor region identification method and device
Jia et al. Multi-layer segmentation framework for cell nuclei using improved GVF Snake model, Watershed, and ellipse fitting
CN112132166A (en) Intelligent analysis method, system and device for digital cytopathology image
CN109671055B (en) Pulmonary nodule detection method and device
CN115880266B (en) Intestinal polyp detection system and method based on deep learning
Wen et al. Review of research on the instance segmentation of cell images
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2020-02-21)