CN111126424A - Ultrasonic image classification method based on convolutional neural network - Google Patents

Ultrasonic image classification method based on convolutional neural network

Info

Publication number
CN111126424A
Authority
CN
China
Prior art keywords
network
image
training
neural network
loss
Prior art date
Legal status
Granted
Application number
CN201811315959.9A
Other languages
Chinese (zh)
Other versions
CN111126424B (en)
Inventor
袁杰
汤键
金志斌
吴敏
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University
Priority: CN201811315959.9A
Publication of CN111126424A
Application granted
Publication of CN111126424B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses an ultrasonic image classification method based on a convolutional neural network. The method comprises the following steps: augmenting the data set by multiple cropping, adding Gaussian noise and applying histogram equalization to the images; training a target detection neural network with a cross-training strategy, then validating and testing it; optimizing the convolutional neural network by transfer learning, namely loading a pre-trained model and fine-tuning, then validating and testing it; and locating the target region in the ultrasound image with the target detection neural network while classifying and evaluating the ultrasound image with the convolutional neural network.

Description

Ultrasonic image classification method based on convolutional neural network
Technical Field
The invention relates to the field of ultrasonic image analysis, in particular to an ultrasonic image classification method based on a convolutional neural network.
Background
Ultrasonic imaging is an important modality in medical imaging, and automatic diagnosis of ultrasound images helps assist clinical diagnosis and physician training. The traditional ultrasound image classification approach has several disadvantages: 1. training a qualified examiner takes a long time and is financially costly; 2. the examination time per patient is long; 3. accuracy, reliability and consistency problems inevitably arise from the human factors of the examining physician.
Disclosure of Invention
Purpose of the invention: the invention aims to solve the technical problem that existing ultrasound image classification methods have poor consistency and reliability. It provides an ultrasound image classification method that combines a convolutional neural network from deep learning with basic image processing, realizing both the localization of the target region in the image and the classification and evaluation of the ultrasound image.
In order to solve the technical problem, the invention discloses an ultrasonic image classification method based on a convolutional neural network, which comprises the following steps:
step 1, augmenting the data set by multiple cropping, adding Gaussian noise and applying histogram equalization to the images;
step 2, training a target detection neural network with a cross-training strategy, then validating and testing it;
step 3, optimizing the convolutional neural network by transfer learning, namely loading a pre-trained model and fine-tuning, then validating and testing it;
and step 4, locating the target region in the ultrasound image with the target detection neural network, and classifying and evaluating the ultrasound image with the convolutional neural network.
In step 1, the images used for data set augmentation are image sub-blocks containing the target region, cropped multiple times from the original image; the purpose is to remove the invalid black regions and the labeling information and prevent them from interfering with network training. Each image is cropped four times: at the central target region and at the regions to its left, right and below.
In step 1, one of the data set augmentation methods is to add white Gaussian noise to the ultrasound image; its histogram curve follows a one-dimensional Gaussian distribution:

f(x) = 1/(√(2π)·σ) · exp(−(x − μ)² / (2σ²))

where σ is the standard deviation and μ is the mean.
In step 1, one of the data set augmentation methods is to perform histogram equalization on the ultrasound image, i.e. to apply a mapping transformation to the pixel gray levels of the original image so that the probability density of the transformed gray levels is uniformly distributed. For a discrete image, let the total number of pixels in the digital image be N, the total number of gray levels be M, the value of the k-th gray level be r_k, and the number of pixels with gray level r_k be n_k. Then the occurrence probability of gray level r_k in the image is:

P(r_k) = n_k / N, k = 0, 1, …, M − 1

The equalizing transformation function is:

s_k = T(r_k) = Σ_{j=0}^{k} n_j / N = Σ_{j=0}^{k} P(r_j)

Applying this transformation to the gray levels of the image yields the histogram-equalized image.
In step 2, the adopted target detection network is Faster R-CNN, which is composed of an RPN (Region Proposal Network) and a Fast R-CNN network; the RPN extracts candidate regions, and Fast R-CNN uses the candidate regions provided by the RPN to generate the final localization result.
The loss function of Faster R-CNN is a multi-task joint loss used to train target classification and bounding-box regression simultaneously; it is computed as:

L(p, u, t^u, v) = (1/N_cls) · L_cls(p, u) + λ · (1/N_reg) · [u ≥ 1] · L_loc(t^u, v)

L_cls(p, u) = −log p_u

L_loc(t^u, v) = Σ_{i∈{x,y,w,h}} smooth_L1(t_i^u − v_i)

smooth_L1(x) = 0.5x² if |x| < 1, and |x| − 0.5 otherwise

where p_u is the probability that the candidate box is the target, L_cls is the logarithmic loss of the true class u, and L_loc is the localization loss, defined jointly by the ground-truth box of the true class u, v = (v_x, v_y, v_w, v_h), and the predicted box coordinates t^u = (t_x^u, t_y^u, t_w^u, t_h^u). The background class is labeled u = 0, i.e. background candidate boxes do not enter the L_loc calculation (the indicator [u ≥ 1] is 1 for u ≥ 1 and 0 otherwise). L_cls is normalized by the training batch size N_cls, L_loc by the number of candidate box positions N_reg, and λ is the balance parameter between L_loc and L_cls. The bounding-box regression loss is computed with the smooth_L1 function.
In step 3, the adopted convolutional neural network is GoogLeNet; the network design uses the Inception structure, i.e. convolution kernels of different sizes and a max-pooling layer whose outputs are concatenated.
In step 3, the loss function of the adopted network is computed as:

Loss_total = a · loss_1 + b · loss_2 + loss_3

where loss_1 and loss_2 are the auxiliary loss functions of the intermediate layers, multiplied by discount weights a and b respectively, a and b each being a constant between 0 and 1, and loss_3 is the loss function of the last layer. Each loss function is computed as the cross entropy:

p_{y_i} = e^{f_{y_i}} / Σ_j e^{f_j}

loss = −(1/N) · Σ_{i=1}^{N} log p_{y_i}

where f_j is the j-th element of the output vector of the last fully connected layer, y_i is the true class label of the input image x_i, and W is the weight matrix of the whole network on which the outputs f_j depend.
In step 3, the adopted pre-trained model is pre-trained on the natural image data set ImageNet. During fine-tuning, the overall network learning rate is reduced while the learning rate of the final fully connected layer is kept unchanged; the fully connected layer is randomly initialized while all the other layers copy the weights of the pre-trained model.
In steps 2 and 3, five-fold cross-validation is adopted to test network performance: stratified sampling divides the data set into 5 mutually exclusive subsets of the same size:

D = D1 ∪ D2 ∪ D3 ∪ D4 ∪ D5, Di ∩ Dj = ∅ (i ≠ j)

where D is the data set and D1, D2, D3, D4 and D5 are the mutually exclusive subsets obtained by stratified sampling.

Each time, the union of four subsets is used as the training set and the remaining subset as the test set, producing 5 pairs of training and test sets. The final test result is the average of the 5 test results.
In step 4, the target detection neural network locates the target region in the ultrasound image and the convolutional neural network classifies and evaluates it: the trained neural network models take the selected images as input and perform forward inference, yielding a classification result and a localization result for each image.
The invention provides an ultrasound image classification method based on a convolutional neural network, exploiting the high precision and high stability of convolutional neural networks. Applying a convolutional neural network to the evaluation and analysis of ultrasound images can improve the accuracy, reliability and consistency of ultrasound image analysis, thereby eliminating human interference factors from the standardized interpretation of ultrasound images.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic diagram of the system of the present invention.
Fig. 2 is a schematic diagram of data augmentation.
FIG. 3 is a diagram of the Faster R-CNN network architecture.
FIG. 4 is a diagram of loading a pre-trained model and fine-tuning the network.
FIG. 5 is a schematic diagram of five-fold cross validation.
Fig. 6 is a schematic diagram of the Inception module structure in the GoogLeNet network.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
As shown in fig. 1, the invention discloses an ultrasound image classification method based on a convolutional neural network, which comprises the following steps:
step 1, augmenting the data set by multiple cropping, adding Gaussian noise and applying histogram equalization to the images;
step 2, training a target detection neural network with a cross-training strategy, then validating and testing it;
step 3, optimizing the convolutional neural network by transfer learning, namely loading a pre-trained model and fine-tuning, then validating and testing it;
and step 4, locating the target region in the ultrasound image with the target detection neural network, and classifying and evaluating the ultrasound image with the convolutional neural network.
In this example, the ultrasound images input in step 1 were acquired with Philips iU22 and Philips EPIQ 7 ultrasound diagnostic systems; the ultrasound probe used was of type L12-5, the scan depth was 2.5 cm, and the two-dimensional gain was adjusted to 50%-60%.
In this example, the images used for data set augmentation in step 1 are 520 × 120 image blocks containing the target region, cropped from the 1024 × 768 original images; the purpose is to remove the invalid black regions and the medical marker information and prevent them from interfering with network training. Each image is cropped four times: at the central target region and at the regions to its left, right and below, as sketched below.
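The following Python sketch illustrates this four-crop scheme. The 520 × 120 crop size and 1024 × 768 frame size come from the text above; the center position and the 60-pixel shift between crops are illustrative assumptions.

```python
import numpy as np

CROP_W, CROP_H = 520, 120  # sub-block size given in the text

def four_crops(image, cx, cy, shift=60):
    """Cut four CROP_W x CROP_H blocks from `image` (H x W array):
    centered on (cx, cy), then shifted left, right and down."""
    crops = []
    for dx, dy in [(0, 0), (-shift, 0), (shift, 0), (0, shift)]:
        x0 = cx + dx - CROP_W // 2
        y0 = cy + dy - CROP_H // 2
        crops.append(image[y0:y0 + CROP_H, x0:x0 + CROP_W])
    return crops

# Example on a dummy 1024 x 768 grayscale frame (rows x cols = 768 x 1024).
frame = np.zeros((768, 1024), dtype=np.uint8)
blocks = four_crops(frame, cx=512, cy=384)
assert all(b.shape == (CROP_H, CROP_W) for b in blocks)
```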
In this example, one of the data set augmentation methods in step 1 is to add white Gaussian noise to the ultrasound image; its histogram curve follows a one-dimensional Gaussian distribution:

f(x) = 1/(√(2π)·σ) · exp(−(x − μ)² / (2σ²))

where σ is the standard deviation and μ is the mean.
In this example, one of the data set augmentation means in step 1 is to perform histogram equalization on the ultrasound image, i.e. to apply a mapping transformation to the pixel gray levels of the original image so that the probability density of the transformed gray levels is uniformly distributed. For a discrete image, let the total number of pixels in the digital image be N, the total number of gray levels be M, the value of the k-th gray level be r_k, and the number of pixels with gray level r_k be n_k. Then the occurrence probability of gray level r_k in the image is:

P(r_k) = n_k / N, k = 0, 1, …, M − 1

The equalizing transformation function is:

s_k = T(r_k) = Σ_{j=0}^{k} n_j / N = Σ_{j=0}^{k} P(r_j)

Applying this transformation to the gray levels of the image yields the histogram-equalized image.
In this example, the target detection network used in step 2 is Faster R-CNN, which is composed of an RPN (Region Proposal Network) and a Fast R-CNN network; the RPN extracts candidate regions, and Fast R-CNN uses the candidate regions provided by the RPN to generate the final localization result.
The loss function of Faster R-CNN is a multi-task joint loss used to train target classification and bounding-box regression simultaneously; it is computed as:

L(p, u, t^u, v) = (1/N_cls) · L_cls(p, u) + λ · (1/N_reg) · [u ≥ 1] · L_loc(t^u, v)

L_cls(p, u) = −log p_u

L_loc(t^u, v) = Σ_{i∈{x,y,w,h}} smooth_L1(t_i^u − v_i)

smooth_L1(x) = 0.5x² if |x| < 1, and |x| − 0.5 otherwise

where p_u is the probability that the candidate box is the target, L_cls is the logarithmic loss of the true class u, and L_loc is the localization loss, defined jointly by the ground-truth box of the true class u, v = (v_x, v_y, v_w, v_h), and the predicted box coordinates t^u = (t_x^u, t_y^u, t_w^u, t_h^u). The background class is labeled u = 0, i.e. background candidate boxes do not enter the L_loc calculation. L_cls is normalized by the training batch size N_cls, L_loc by the number of candidate box positions N_reg, and the balance parameter λ defaults to 10. The bounding-box regression loss is computed with the smooth_L1 function.
In this example, the network adopted in step 2 is trained on the deep learning framework Caffe, using a 4-step cross-training strategy (sketched schematically after the list):
(1) initialize the RPN with a ZFNet model pre-trained on ImageNet, and train for 8000 iterations;
(2) initialize the Fast R-CNN network with a ZFNet model pre-trained on ImageNet, feed it the candidate regions generated by the RPN of step (1), and train for 4000 iterations;
(3) re-initialize the RPN with the weights of the Fast R-CNN network of step (2), fix the weights of the shared convolutional layers while fine-tuning the RPN-specific layers, and train for 8000 iterations;
(4) keep the weights of the shared convolutional layers fixed while fine-tuning the Fast-R-CNN-specific layers, and train for 4000 iterations.
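The sequencing can be summarized in the self-contained sketch below; train() is a stand-in for a Caffe solver run, and only the initialization sources, frozen layers and iteration counts mirror the text.

```python
def train(net, init, iters, freeze_shared_conv=False):
    # Stand-in for one Caffe solver run; returns a label for the weights.
    print(f"train {net}: init={init}, iters={iters}, "
          f"freeze_shared_conv={freeze_shared_conv}")
    return f"{net}_weights"

zf = "imagenet_zfnet"                              # pre-trained ZFNet
rpn = train("RPN", init=zf, iters=8000)            # step (1)
frcnn = train("FastRCNN", init=zf, iters=4000)     # step (2): fed with
                                                   # step-(1) proposals
rpn = train("RPN", init=frcnn, iters=8000,         # step (3)
            freeze_shared_conv=True)
frcnn = train("FastRCNN", init=frcnn, iters=4000,  # step (4)
              freeze_shared_conv=True)
```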
In this example, the convolutional neural network used in step 3 is GoogLeNet; the network design uses the Inception structure, i.e. the concatenated outputs of a 1 × 1 convolution module, a 3 × 3 convolution module, a 5 × 5 convolution module and a 3 × 3 max-pooling layer, with an additional 1 × 1 convolution module placed before the 3 × 3 and 5 × 5 convolution modules and after the 3 × 3 max-pooling layer.
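A compact PyTorch sketch of such an Inception module is given below for illustration (the example itself deploys GoogLeNet on Caffe); the channel counts are those of GoogLeNet's inception(3a) stage, and ReLU activations are omitted for brevity.

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, kernel_size=1)          # 1x1 branch
        self.b3 = nn.Sequential(                               # 1x1 -> 3x3
            nn.Conv2d(in_ch, c3r, kernel_size=1),
            nn.Conv2d(c3r, c3, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(                               # 1x1 -> 5x5
            nn.Conv2d(in_ch, c5r, kernel_size=1),
            nn.Conv2d(c5r, c5, kernel_size=5, padding=2))
        self.bp = nn.Sequential(                               # pool -> 1x1
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1))

    def forward(self, x):
        # Concatenate the four branch outputs along the channel axis.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], 1)

m = Inception(192, 64, 96, 128, 16, 32, 32)   # inception(3a): 192 -> 256
y = m(torch.randn(1, 192, 28, 28))            # y.shape: (1, 256, 28, 28)
```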
In this example, the loss function of the network used in step 3 is computed as:

Loss_total = 0.3 · loss_1 + 0.3 · loss_2 + loss_3

where loss_1 and loss_2 are the auxiliary loss functions of the intermediate layers, each multiplied by a discount weight of 0.3, and loss_3 is the loss function of the last layer. Each loss function is computed as the cross entropy:

p_{y_i} = e^{f_{y_i}} / Σ_j e^{f_j}

loss = −(1/N) · Σ_{i=1}^{N} log p_{y_i}

where f_j is the j-th element of the output vector of the last fully connected layer, y_i is the true class label of the input image x_i, and W is the weight matrix of the whole network on which the outputs f_j depend.
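A minimal NumPy sketch of the softmax cross entropy and the weighted total loss follows; `logits` plays the role of the fully connected outputs f.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Softmax cross entropy: -(1/N) * sum_i log p_{y_i}."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def total_loss(aux1, aux2, final, labels):
    """Loss_total = 0.3*loss_1 + 0.3*loss_2 + loss_3, from the logits of
    the two auxiliary classifier heads and the final head."""
    return (0.3 * cross_entropy(aux1, labels)
            + 0.3 * cross_entropy(aux2, labels)
            + cross_entropy(final, labels))
```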
In this example, the network used in step 3 is trained on the deep learning framework Caffe, and the GoogLeNet model pre-trained on ImageNet is downloaded from the Caffe Model Zoo (http://caffe.berkeleyvision.org/model_zoo.html). During fine-tuning, the network learning rate is reduced while the learning rate of the final fully connected layer is kept unchanged; the name of the fully connected layer is changed so that it is randomly initialized during training, while all the other layers copy the weights of the pre-trained model.
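A PyTorch analogue of this recipe is sketched below for illustration (the example itself renames the layer in the Caffe prototxt); the two-class head and the specific learning rates are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision

# Load an ImageNet-pretrained GoogLeNet and replace the final fully
# connected layer, which is thereby randomly re-initialized.
model = torchvision.models.googlenet(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

head = list(model.fc.parameters())
head_ids = {id(p) for p in head}
body = [p for p in model.parameters() if id(p) not in head_ids]

# Copied layers get a reduced learning rate; the new head keeps the base
# rate, mirroring "reduce the network learning rate while keeping the
# final fully connected layer's rate unchanged".
optimizer = torch.optim.SGD([{"params": body, "lr": 1e-4},
                             {"params": head, "lr": 1e-3}], momentum=0.9)
```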
In this example, five-fold cross-validation is adopted in steps 2 and 3 to test network performance: stratified sampling divides the data set into 5 mutually exclusive subsets of the same size:

D = D1 ∪ D2 ∪ D3 ∪ D4 ∪ D5, Di ∩ Dj = ∅ (i ≠ j)

where D is the data set and D1, D2, D3, D4 and D5 are the mutually exclusive subsets obtained by stratified sampling.

Each time, the union of four subsets is used as the training set and the remaining subset as the test set, producing 5 pairs of training and test sets. The final test result is the average of the 5 test results.
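A minimal sketch of this protocol with scikit-learn's stratified splitter follows; evaluate_fold is a placeholder for one train/test run.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def evaluate_fold(train_idx, test_idx):
    return 0.9  # placeholder for training on 4 folds and testing on 1

X = np.arange(100).reshape(-1, 1)   # dummy samples
y = np.tile([0, 1], 50)             # dummy binary labels (stratified target)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = [evaluate_fold(tr, te) for tr, te in skf.split(X, y)]
print("final result:", np.mean(scores))  # average of the 5 test results
```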
In this example, in step 4 the target detection neural network locates the target region in the ultrasound image and the convolutional neural network classifies and evaluates it: the trained neural network models take the selected images as input and perform forward inference, yielding a classification result and a localization result for each image.
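Forward inference itself is a single pass through the trained model, as the sketch below illustrates with an untrained stand-in network; in the method this would be the fine-tuned GoogLeNet (and, analogously, the trained Faster R-CNN for the localization result).

```python
import torch
import torchvision

model = torchvision.models.googlenet(weights=None, init_weights=True)
model.eval()                                  # inference mode
with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)       # stand-in preprocessed image
    pred = model(image).argmax(dim=1).item()  # classification result
print("predicted class:", pred)
```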
The present invention provides an ultrasound image classification method based on a convolutional neural network, and there are many ways to implement it; the above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the invention, and these should also be regarded as falling within the protection scope of the invention. Any component not specified in this embodiment can be realized with the prior art.

Claims (10)

1. An ultrasound image classification method based on a convolutional neural network is characterized by comprising the following steps:
step 1, using a method of cutting for many times, adding Gaussian noise and carrying out histogram equalization on an image to augment a data set;
step 2, training a target detection neural network by using a cross training strategy, and verifying and testing;
step 3, optimizing the convolutional neural network by using a transfer learning method of loading a pre-training model and fine tuning, and verifying and testing;
and 4, positioning a target area in the ultrasonic image by using the target detection neural network, and performing classification evaluation on the ultrasonic image by using the convolutional neural network.
2. The method according to claim 1, wherein the images used for data set augmentation in step 1 are image sub-blocks containing the target region, cropped multiple times from the original image, the purpose being to remove invalid black regions and labeling information and prevent them from interfering with network training; each image is cropped four times: at the central target region and at the regions to its left, right and below.
3. The method of claim 1, wherein one of the data set augmentation measures in step 1 is to add white Gaussian noise to the ultrasound image, whose histogram curve follows a one-dimensional Gaussian distribution:

f(x) = 1/(√(2π)·σ) · exp(−(x − μ)² / (2σ²))

where σ is the standard deviation and μ is the mean.
4. The method of claim 1, wherein one of the data set expansion means in step 1 is histogram equalization of the ultrasound image, i.e. a mapping transformation applied to the pixel gray levels of the original image so that the probability density of the transformed gray levels is uniformly distributed. For a discrete image, let the total number of pixels in the digital image be N, the total number of gray levels be M, the value of the k-th gray level be r_k, and the number of pixels with gray level r_k be n_k. Then the occurrence probability of gray level r_k in the image is:

P(r_k) = n_k / N, k = 0, 1, …, M − 1

The equalizing transformation function is:

s_k = T(r_k) = Σ_{j=0}^{k} n_j / N = Σ_{j=0}^{k} P(r_j)

Applying this transformation to the gray levels of the image yields the histogram-equalized image.
5. The method according to claim 1, wherein the target detection network used in step 2 is Faster R-CNN (Faster Region-based Convolutional Neural Network), which is composed of an RPN (Region Proposal Network) and a Fast R-CNN network; the RPN extracts candidate regions, and Fast R-CNN uses the candidate regions provided by the RPN to generate the final localization result.

The loss function of Faster R-CNN is a multi-task joint loss used to train target classification and bounding-box regression simultaneously; it is computed as:

L(p, u, t^u, v) = (1/N_cls) · L_cls(p, u) + λ · (1/N_reg) · [u ≥ 1] · L_loc(t^u, v)

L_cls(p, u) = −log p_u

L_loc(t^u, v) = Σ_{i∈{x,y,w,h}} smooth_L1(t_i^u − v_i)

smooth_L1(x) = 0.5x² if |x| < 1, and |x| − 0.5 otherwise

where p_u is the probability that the candidate box is the target, L_cls is the logarithmic loss of the true class u, and L_loc is the localization loss, defined jointly by the ground-truth box of the true class u, v = (v_x, v_y, v_w, v_h), and the predicted box coordinates t^u = (t_x^u, t_y^u, t_w^u, t_h^u). The background class is labeled u = 0, i.e. background candidate boxes do not enter the L_loc calculation. L_cls is normalized by the training batch size N_cls, L_loc by the number of candidate box positions N_reg, and λ is the balance parameter between L_loc and L_cls. The bounding-box regression loss is computed with the smooth_L1 function.
6. The method of claim 1, wherein the training method used in step 2 is a cross-training strategy:
(1) initialize the RPN (Region Proposal Network) with a ZFNet network model pre-trained on ImageNet, and train;
(2) initialize the Fast R-CNN network with a ZFNet network model pre-trained on ImageNet, feed it the candidate regions generated by the RPN of step (1), and train;
(3) re-initialize the RPN with the weights of the Fast R-CNN network of step (2), fix the weights of the shared convolutional layers while fine-tuning the RPN-specific layers, and train;
(4) keep the weights of the shared convolutional layers fixed while fine-tuning the Fast-R-CNN-specific layers, and train.
7. The method according to claim 1, wherein the convolutional neural network used in step 3 is GoogLeNet, and the network design adopts the Inception structure, i.e. convolution kernels of different sizes and a max-pooling layer whose outputs are concatenated.
8. The method of claim 1, wherein the loss function of the network used in step 3 is computed as:

Loss_total = a · loss_1 + b · loss_2 + loss_3

where loss_1 and loss_2 are the auxiliary loss functions of the intermediate layers, multiplied by discount weights a and b respectively, a and b each being a constant between 0 and 1, and loss_3 is the loss function of the last layer. Each loss function is computed as the cross entropy:

p_{y_i} = e^{f_{y_i}} / Σ_j e^{f_j}

loss = −(1/N) · Σ_{i=1}^{N} log p_{y_i}

where f_j is the j-th element of the output vector of the last fully connected layer, y_i is the true class label of the input image x_i, and W is the weight matrix of the network on which the outputs f_j depend.
9. The method according to claim 1, wherein the pre-trained model used in step 3 is pre-trained on the natural image data set ImageNet; during fine-tuning, the overall network learning rate is reduced while the learning rate of the final fully connected layer is kept unchanged, and the fully connected layer is randomly initialized while all the other layers copy the weights of the pre-trained model.
10. The method of claim 1, wherein five-fold cross-validation is used to test network performance in steps 2 and 3: stratified sampling divides the data set into 5 mutually exclusive subsets of the same size:

D = D1 ∪ D2 ∪ D3 ∪ D4 ∪ D5, Di ∩ Dj = ∅ (i ≠ j)

where D is the data set and D1, D2, D3, D4 and D5 are the mutually exclusive subsets obtained by stratified sampling.

Each time, the union of four subsets is used as the training set and the remaining subset as the test set, producing 5 pairs of training and test sets. The final test result is the average of the 5 test results.
CN201811315959.9A 2018-11-01 2018-11-01 Ultrasonic image classification method based on convolutional neural network Active CN111126424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811315959.9A CN111126424B (en) 2018-11-01 2018-11-01 Ultrasonic image classification method based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN111126424A true CN111126424A (en) 2020-05-08
CN111126424B CN111126424B (en) 2023-06-23

Family

ID=70495016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811315959.9A Active CN111126424B (en) 2018-11-01 2018-11-01 Ultrasonic image classification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN111126424B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651830A (en) * 2016-09-28 2017-05-10 华南理工大学 Image quality test method based on parallel convolutional neural network
CN107274451A (en) * 2017-05-17 2017-10-20 北京工业大学 Isolator detecting method and device based on shared convolutional neural networks
CN107480730A (en) * 2017-09-05 2017-12-15 广州供电局有限公司 Power equipment identification model construction method and system, the recognition methods of power equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652496A (en) * 2020-05-28 2020-09-11 中国能源建设集团广东省电力设计研究院有限公司 Operation risk assessment method and device based on network security situation awareness system
CN111652496B (en) * 2020-05-28 2023-09-05 中国能源建设集团广东省电力设计研究院有限公司 Running risk assessment method and device based on network security situation awareness system
CN112102337A (en) * 2020-09-16 2020-12-18 哈尔滨工程大学 Bone surface segmentation method under ultrasonic imaging

Also Published As

Publication number Publication date
CN111126424B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
Lee et al. Automated mammographic breast density estimation using a fully convolutional network
Sori et al. DFD-Net: lung cancer detection from denoised CT scan image using deep learning
CN112241766B (en) Liver CT image multi-lesion classification method based on sample generation and transfer learning
CN111275714B (en) Prostate MR image segmentation method based on attention mechanism 3D convolutional neural network
CN108335303B (en) Multi-scale palm skeleton segmentation method applied to palm X-ray film
Fang et al. Automatic breast cancer detection based on optimized neural network using whale optimization algorithm
CN109949276B (en) Lymph node detection method for improving SegNet segmentation network
CN110705555A (en) Abdomen multi-organ nuclear magnetic resonance image segmentation method, system and medium based on FCN
Aranguren et al. Improving the segmentation of magnetic resonance brain images using the LSHADE optimization algorithm
CN108629785B (en) Three-dimensional magnetic resonance pancreas image segmentation method based on self-learning
CN110706214B (en) Three-dimensional U-Net brain tumor segmentation method fusing condition randomness and residual error
WO2020168648A1 (en) Image segmentation method and device, and computer-readable storage medium
Tang et al. Cmu-net: a strong convmixer-based medical ultrasound image segmentation network
CN111291825A (en) Focus classification model training method and device, computer equipment and storage medium
CN111325750A (en) Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
CN117078692B (en) Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion
CN113034507A (en) CCTA image-based coronary artery three-dimensional segmentation method
CN112750137A (en) Liver tumor segmentation method and system based on deep learning
CN111080658A (en) Cervical MRI image segmentation method based on deformable registration and DCNN
CN111080592B (en) Rib extraction method and device based on deep learning
CN111126424A (en) Ultrasonic image classification method based on convolutional neural network
CN111862071B (en) Method for measuring CT value of lumbar 1 vertebral body based on CT image
CN116563285B (en) Focus characteristic identifying and dividing method and system based on full neural network
CN108596900B (en) Thyroid-associated ophthalmopathy medical image data processing device and method, computer-readable storage medium and terminal equipment
CN116309806A (en) CSAI-Grid RCNN-based thyroid ultrasound image region of interest positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant