CN111325236A - Ultrasonic image classification method based on convolutional neural network - Google Patents


Info

Publication number
CN111325236A
CN111325236A
Authority
CN
China
Prior art keywords
image
network
generator
data set
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010070699.4A
Other languages
Chinese (zh)
Other versions
CN111325236B (en)
Inventor
金志斌
周雪
程裕家
袁杰
彭成磊
李睿钦
张玮婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University
Priority to CN202010070699.4A
Publication of CN111325236A
Application granted
Publication of CN111325236B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Abstract

The invention discloses an ultrasound image classification method based on a convolutional neural network. The method comprises the following steps: delimiting a region of interest in an original image and cropping it to obtain a cropped image; performing data augmentation on the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set; training a generative adversarial network with the augmented data set, with validation and testing, to obtain a trained generator; loading the trained generator, generating images from random noise, and assigning labels to the generated images; and adding the images produced by the generator to the classification data set, retraining the convolutional neural network to classify ultrasound images, and outputting accuracy and recall to evaluate network performance. When classifying ultrasound images, the method alleviates the problem of insufficient training data for the neural network and improves the generalization performance of the network.

Description

Ultrasonic image classification method based on convolutional neural network
Technical Field
The invention relates to the field of ultrasound image analysis, and in particular to an ultrasound image classification method based on a convolutional neural network.
Background
Deep-learning image classification typically relies on large-scale data sets to avoid overfitting. When the amount of image data is insufficient, or the number of images is unevenly distributed across classes, the data are usually augmented with traditional methods such as multiple cropping, Gaussian noise addition, and gray-level equalization.
Although these traditional augmentation methods can expand an existing data set, they also bring the problem of network overfitting: they can only produce images extremely similar to the originals, so as the augmented data volume grows, near-duplicate items accumulate in the data set. The network then overfits and generalizes poorly, that is, it can distinguish only the images in its own data set and classifies new, differently shaped images badly.
Disclosure of Invention
Purpose of the invention: in the deep learning field, the amount of image data is often insufficient or the variety of images limited, and a good image augmentation method can yield outsized, even decisive, gains; at the same time, a single augmentation method can cause the network to overfit, i.e. to achieve good classification only on the current training set while generalizing poorly. The invention addresses the technical problem of expanding the classification data set with images synthesized by a generative adversarial network together with traditional image augmentation, thereby improving the ultrasound image classification performance of the convolutional neural network.
To solve this technical problem, an ultrasound image classification method based on a convolutional neural network is provided, comprising the following steps:
step 1, delimiting a region of interest in an original image and cropping it to obtain a cropped image;
step 2, performing data augmentation on the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set;
step 3, training a generative adversarial network with the augmented data set, with validation and testing, to obtain a trained generator;
step 4, loading the trained generator, generating images from random noise, and assigning labels to the generated images;
and step 5, adding the images produced by the generator to the classification data set, retraining the convolutional neural network to classify ultrasound images, outputting accuracy and recall, and evaluating network performance.
Further, in one implementation, step 1 comprises: selecting image sub-blocks containing a target region from the original image and cropping them to obtain cropped images of a uniform size that contain the target region, the image sub-blocks containing the target region being the region of interest of the original image.
Further, in one implementation, step 2 comprises:
adding Gaussian white noise to the cropped image so that the histogram of the noisy image follows a one-dimensional Gaussian distribution;
and performing histogram equalization on the cropped image so that the pixel values of the mapped image follow a uniform distribution.
Further, in one implementation, step 3 comprises:
step 3-1, adding the images of the data set obtained in step 2 to a real-image data set, feeding the real images of the real-image data set into the generative adversarial network, and using them together with the images produced by the generator as the discriminator's input, where real images are labeled true and generated images are labeled false;
step 3-2, cascading the discriminator after the generator; random noise is input and passed through the generator, the generated image is fed to the discriminator with its label now set to true, the loss value is back-propagated, and only the generator's network parameters are updated while the discriminator's parameters are kept fixed;
and step 3-3, saving the trained generator's network parameters as a generator weight file.
Further, in one implementation, step 4 comprises:
step 4-1, loading the generator weight file from step 3 directly into the generator network for inference;
and step 4-2, generating images with the generator and assigning labels to the images generated by the generator.
Further, in one implementation, step 5 comprises:
step 5-1, merging the labeled generated images from step 4 with the original data set as the training set of a residual classification network;
step 5-2, the training process of the residual classification network is divided into a training phase and a validation phase; each full pass over the data set (one epoch) is followed by one validation pass, the best-performing network model, i.e. the model with the highest validation accuracy, is tracked as parameters are updated, and that model is returned when training ends;
after training, a labeled test data set is fed into the trained network, and the proportion of correctly classified samples among all samples of the test data set gives the accuracy of the residual classification network; the higher the accuracy, the better the network performance. The recall is also output, computed as the proportion of correctly classified samples among all samples of the training data set after it passes through the residual classification network; the higher the recall, the better the network performance.
As can be seen from the above technical solutions, an embodiment of the present invention provides an ultrasound image classification method based on a convolutional neural network, comprising: step 1, delimiting a region of interest in an original image and cropping it to obtain a cropped image; step 2, performing data augmentation on the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set; step 3, training a generative adversarial network with the augmented data set, with validation and testing, to obtain a trained generator; step 4, loading the trained generator, generating images from random noise, and assigning labels to the generated images; and step 5, adding the images produced by the generator to the classification data set, retraining the convolutional neural network to classify ultrasound images, outputting accuracy and recall, and evaluating network performance.
In the prior art, traditional image augmentation can expand an existing data set but also causes network overfitting, so classification of new, differently shaped images is poor. With the present method, a generative adversarial network produces sample images, yielding a large number of training samples; this alleviates the shortage of image training samples and at the same time extends the repertoire of data augmentation methods.
Specifically, the valid generated images are added to the classification data set, and the classification model is retrained, validated, and tested with a residual classification network, improving classification accuracy and reliability. Compared with the prior art, the method overcomes the shortage of training data when deep learning uses only existing image samples, and avoids the network overfitting caused by the limitations of traditional augmentation; at the same time, adding the generator's valid images to the classification data set and retraining the classification model with the residual classification network, in combination, raises the classification accuracy of the trained network, remedies the tendency of prior work to classify well only on its own data set, and improves the generalization performance of the network.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic workflow diagram of the generative adversarial network in the ultrasound image classification method based on a convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the discriminator's neural network architecture in the ultrasound image classification method based on a convolutional neural network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the generator's neural network architecture in the convolutional neural network-based ultrasound image classification method according to an embodiment of the present invention;
FIG. 4a is an image generated by the adversarial network in the convolutional neural network-based ultrasound image classification method according to the embodiment of the present invention;
FIG. 4b shows a corresponding original image in the convolutional neural network-based ultrasound image classification method according to the embodiment of the present invention;
fig. 5 is a schematic diagram of a residual classification network module in the ultrasound image classification method based on a convolutional neural network according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The embodiment of the invention discloses an ultrasound image classification method based on a convolutional neural network, applied to the grading of arthritis ultrasound images.
The ultrasound image classification method based on the convolutional neural network described in the embodiment includes the following steps:
Step 1, delimit a region of interest in the original image and crop it to obtain a cropped image; in this embodiment, drawing software may be used to outline the region of interest in the original image and crop it to a fixed size, yielding the cropped image.
Step 2, perform data augmentation on the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set;
Step 3, train a generative adversarial network with the augmented data set, with validation and testing, to obtain a trained generator; in this embodiment, the generative adversarial network (GAN) is a combined network formed by cascading a generator and a discriminator.
Step 4, load the trained generator, generate images from random noise, and assign labels to the generated images; in this step a large number of images are inferred from noise, and each batch of inferred images is added to the original data set to form a new data set used to test the generalization performance of the network trained on the training set; the amount of inference is increased until the network performance meets expectations.
Step 5, add the images produced by the generator to the classification data set, retrain the convolutional neural network to classify ultrasound images, and output accuracy and recall to evaluate network performance. In this step, the classification data set is the total data set obtained from steps 1, 2, and 4. In this embodiment, the ultrasound images were acquired by a hospital with professional equipment.
In the ultrasound image classification method of this embodiment, step 1 comprises: selecting image sub-blocks containing a target region from the original image and cropping them to obtain cropped images of a uniform size that contain the target region, the image sub-blocks containing the target region being the region of interest of the original image.
Specifically, in this step the cropped images have a uniform size that includes the target region, and subsequent processing is restricted to the region of interest to reduce processing time and improve accuracy. In this embodiment, the original images are images of arthritic regions acquired by a medical ultrasound imaging device, and the imaging depth differs with the acquisition device. The resolution of the original image is 1024 × 768; to remove the invalid regions of the original image, reduce the computation and run time of the generative adversarial network and the residual classification network, and improve classification accuracy and reliability, the original image is cropped to a 520 × 120 image used as a training sample, the target region being the location of the synovium.
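The fixed-size cropping described above can be sketched in a few lines of Python with NumPy. The helper name `crop_roi` and the ROI coordinates are illustrative assumptions; the patent specifies only the 1024 × 768 source and 520 × 120 target sizes:

```python
import numpy as np

def crop_roi(image, top, left, height=120, width=520):
    """Crop a fixed-size region of interest from an ultrasound frame.

    In practice the (top, left) corner would come from the manually
    outlined region containing the target (synovium) area.
    """
    roi = image[top:top + height, left:left + width]
    if roi.shape != (height, width):
        raise ValueError("ROI exceeds image bounds")
    return roi

# A dummy 768 x 1024 grayscale frame standing in for a real ultrasound image.
frame = np.zeros((768, 1024), dtype=np.uint8)
patch = crop_roi(frame, top=300, left=200)
print(patch.shape)  # (120, 520)
```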
In the ultrasound image classification method of this embodiment, step 2 comprises:
adding Gaussian white noise to the cropped image so that the histogram of the noisy image follows a one-dimensional Gaussian distribution; specifically, in this embodiment the added white noise follows the Gaussian density
f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)
where x is the input, \mu the mean, and \sigma the standard deviation;
performing histogram equalization on the cropped image so that the pixel values of the mapped image follow a uniform distribution; specifically, in this embodiment the mapping is
s_k = \sum_{i=0}^{k} \frac{n_i}{n}
where s_k is the cumulative probability up to gray level k, n is the total number of pixels in the image, L is the total number of possible gray levels, and n_i is the number of pixels at the i-th gray level; the output gray level is obtained by scaling s_k by L-1.
Specifically, in this embodiment, Gaussian white noise may be added to the cropped image and histogram equalization performed afterwards, or histogram equalization may be performed first and the noise added afterwards. In this embodiment, the cropped images obtained in step 1 are augmented by histogram equalization and by adding Gaussian white noise, tripling the number of image samples.
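A minimal NumPy sketch of the two augmentation operations above. The function names and the noise level (sigma = 10) are assumptions, since the patent does not give numeric parameters:

```python
import numpy as np

def add_gaussian_noise(img, mu=0.0, sigma=10.0, rng=None):
    # Pixel-wise additive white Gaussian noise, clipped back to [0, 255].
    rng = rng if rng is not None else np.random.default_rng(0)
    noisy = img.astype(np.float64) + rng.normal(mu, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def equalize_histogram(img, levels=256):
    # Map gray level k to (L - 1) * s_k, where s_k is the cumulative
    # probability up to level k (see the mapping formula in step 2).
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size              # s_k
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(42)
original = rng.integers(0, 256, size=(120, 520), dtype=np.uint8)

# Each operation is applied to the cropped image separately,
# tripling the sample count: original + noisy + equalized.
augmented = [original, add_gaussian_noise(original), equalize_histogram(original)]
print(len(augmented))  # 3
```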
In the ultrasound image classification method of this embodiment, step 3 comprises:
step 3-1, adding the images of the data set obtained in step 2 to a real-image data set, feeding the real images into the generative adversarial network, and using them together with the images produced by the generator as the discriminator's input, where real images are labeled true and generated images are labeled false. In this embodiment, the real-image data set is the data set obtained through steps 1 and 2, and it is used only in step 3. In this embodiment, the generative adversarial network is a combined network formed by cascading a generator and a discriminator. In addition, true and false are distinguished only while training the generative adversarial network, not in the classification data set: the images produced by the trained generator of step 3, i.e. the images labeled false, also become part of the classification data set.
step 3-2, cascading the discriminator after the generator; random noise is input and passed through the generator, the generated image is fed to the discriminator with its label now set to true, the loss value is back-propagated, and only the generator's network parameters are updated while the discriminator's parameters are kept fixed;
in this embodiment, the loss function of the discriminator includes two parts, which are the sum of the error calculation result for the real image and the error calculation result for the generated image. In the Pytorch, the calculation method of the loss function is BCEloss:
lossreal=criterion(realout,reallabel)
lossfake=criterion(fakeout,fakelabel)
lossd=lossreal+lossfake
therein, lossrealLoss function value, loss, derived for the discriminator on the real imagefakeValue of loss function, real, derived for the arbiter to generate the imagelabelFor labels of real images, realoutA specific image which is a real image; fakeoutTo generate labels for images, fakelabelFor generating specific images of an image, lossdThe method is an overall loss function of a discriminator obtained after summarizing results of generated images and real images, and criterion represents a calculation method of the loss function, and is essentially an imitation function.
The generator's loss is computed by combining the real label with the generated image, again using BCELoss; in this embodiment the real label is 1:
loss_g = criterion(output, real_label)
where loss_g is the generator's loss, output denotes the discriminator's output for the generated image, real_label the real label, and criterion denotes the loss computation, essentially a callable functor.
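The loss computations above can be mirrored with a plain NumPy binary cross-entropy; the discriminator outputs below are made-up probabilities for illustration, not values from the patent:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Mean binary cross-entropy, matching what torch.nn.BCELoss computes.
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))

# Hypothetical discriminator outputs (probabilities) for a batch of
# three real images and three generated images.
real_out = np.array([0.9, 0.8, 0.95])
fake_out = np.array([0.2, 0.1, 0.3])

loss_real = bce(real_out, np.ones(3))   # criterion(real_out, real_label)
loss_fake = bce(fake_out, np.zeros(3))  # criterion(fake_out, fake_label)
loss_d = loss_real + loss_fake          # total discriminator loss

# Generator step: the same fake outputs, but now labeled as real (1).
loss_g = bce(fake_out, np.ones(3))
print(loss_d < loss_g)  # True here: D classifies well, so G's loss is large
```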
In addition, as required for convolutional neural networks, both the generator and the discriminator need a suitable optimization algorithm that keeps the loss converging toward an extremum while preventing the loss values from diverging. In this implementation, both the generator and the discriminator update their parameters with the Adam optimizer, using a learning rate of 0.0003 to prevent the oscillation caused by an overly large learning rate.
And 3-3, generating a generator weight file according to the trained network parameters of the generator.
In this embodiment, all the samples augmented in step 2 are used in step 3 to train the generative adversarial network. The basic flow of the generative adversarial network is shown in Fig. 1, the discriminator's neural network architecture in Fig. 2, and the generator's in Fig. 3. Training on all samples yields a discriminator-generator pair; the discriminator's network parameters are shown in Table 1 and the generator's in Table 2.
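The alternating-update rule of step 3-2 (label the generated batch as real, back-propagate, update only the generator while the discriminator stays frozen) can be illustrated with a toy one-dimensional GAN. The finite-difference gradient, model shapes, and all numbers here are illustrative, not the patent's training code:

```python
import numpy as np

def bce(p, t, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(t * np.log(p) + (1 - t) * np.log(1 - p))))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN: generator g(z) = w*z + b, discriminator d(x) = sigmoid(u*x + v).
g_init = np.array([0.5, 0.0])     # generator parameters (w, b)
d_params = np.array([1.0, -1.0])  # discriminator parameters (u, v), kept frozen

def generate(z, gp):
    return gp[0] * z + gp[1]

def discriminate(x):
    return sigmoid(d_params[0] * x + d_params[1])

def generator_loss(gp, z):
    # Key trick of step 3-2: generated samples are labeled "real" (1).
    return bce(discriminate(generate(z, gp)), np.ones_like(z))

rng = np.random.default_rng(0)
z = rng.normal(size=16)

# One generator update using finite-difference gradients; the discriminator
# parameters are never touched, mimicking "update G only, keep D fixed".
lr, h = 0.1, 1e-5
grad = np.zeros_like(g_init)
for i in range(len(g_init)):
    bump = np.zeros_like(g_init)
    bump[i] = h
    grad[i] = (generator_loss(g_init + bump, z)
               - generator_loss(g_init - bump, z)) / (2 * h)

d_before = d_params.copy()
g_params = g_init - lr * grad

assert np.array_equal(d_params, d_before)  # discriminator untouched
```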
Table 1. Discriminator network parameters
[The values of Table 1 appear only as images in the source and are not reproduced here.]
Table 2. Generator network parameters

Network layer type        Network output size    Number of parameters
Linear-1                  [-1, 1, 249600]        25,209,600
ReLU with BatchNorm2d-2   [-1, 1, 240, 1040]     2
Conv2d-3                  [-1, 50, 240, 1040]    500
ReLU with BatchNorm2d-4   [-1, 50, 240, 1040]    100
Conv2d-5                  [-1, 25, 240, 1040]    11,725
ReLU with BatchNorm2d-6   [-1, 25, 240, 1040]    50
Conv2d-7                  [-1, 1, 120, 520]      226
Tanh-8                    [-1, 1, 120, 520]      0
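Several Table 2 entries can be cross-checked with the standard parameter-count formulas. The 100-dimensional noise input is inferred here from the Linear-1 count and is not stated in the source; Conv2d-5's 11,725 parameters do not match a plain 3 × 3 kernel, so that row is left unchecked:

```python
def linear_params(in_features, out_features, bias=True):
    # Fully connected layer: weights plus optional per-output bias.
    return in_features * out_features + (out_features if bias else 0)

def conv2d_params(in_ch, out_ch, k, bias=True):
    # 2-D convolution with a k x k kernel and optional per-channel bias.
    return in_ch * out_ch * k * k + (out_ch if bias else 0)

# Linear-1 maps the latent noise vector to a 1 x 240 x 1040 feature map:
# 25,209,600 = (100 + 1) * 249,600, consistent with a 100-dim noise input.
assert linear_params(100, 240 * 1040) == 25_209_600

# Conv2d-3: 1 -> 50 channels with 3x3 kernels: 1*50*9 + 50 = 500.
assert conv2d_params(1, 50, 3) == 500

# Conv2d-7: 25 -> 1 channel with a 3x3 kernel: 25*1*9 + 1 = 226.
assert conv2d_params(25, 1, 3) == 226

print("parameter counts consistent with Table 2")
```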
In the ultrasound image classification method of this embodiment, step 4 comprises:
step 4-1, loading the generator weight file from step 3 directly into the generator network architecture for inference;
step 4-2, generating images with the generator and assigning labels to them; specifically, in this embodiment, the labels may be assigned to the generated images according to disease severity.
In this embodiment, step 4 performs inference with the generator model obtained in step 3: random noise is input, and any number of synthetic images of arthritis-affected regions can be generated by cyclic iteration, increasing the number of samples. A pair consisting of an original image and a generated image is shown in Figs. 4a and 4b.
In the ultrasound image classification method of this embodiment, step 5 comprises:
step 5-1, merging the labeled generated images from step 4 with the original data set as the training set of a residual classification network;
step 5-2, the training process of the residual classification network is divided into a training phase and a validation phase; each full pass over the data set (one epoch) is followed by one validation pass, the best-performing network model, i.e. the model with the highest validation accuracy, is tracked as parameters are updated, and that model is returned when training ends. After training, a labeled test data set is fed into the trained network, and the proportion of correctly classified samples among all samples of the test data set gives the accuracy of the residual classification network (ResNet); the higher the accuracy, the better the network performance. The recall is also output: the proportion of correctly classified samples among all samples of the training data set after it passes through the residual classification network; the higher the recall, the better the network performance.
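The accuracy and recall described above can be computed as follows; the arrays are toy predictions over the four severity grades used in this embodiment, not real results:

```python
import numpy as np

def accuracy(pred, true):
    # Proportion of correctly classified samples over all samples.
    return float(np.mean(pred == true))

def per_class_recall(pred, true, n_classes=4):
    # Recall for grade c: correctly classified grade-c samples / all grade-c samples.
    return [float(np.mean(pred[true == c] == c)) for c in range(n_classes)]

true = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # ground-truth severity grades
pred = np.array([0, 0, 1, 2, 2, 2, 3, 1])  # hypothetical network outputs

print(accuracy(pred, true))          # 0.75
print(per_class_recall(pred, true))  # [1.0, 0.5, 1.0, 0.5]
```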
In this embodiment, step 5 takes the original images, the images augmented in step 2, and the images inferred by the generator in step 4 as the total sample set, divided into four grades, 0, 1, 2, and 3, according to disease severity: 0 means no disease, 1 mild disease, 2 moderate disease, and 3 severe disease. The residual classification network is trained on these classes to obtain a network model, and new ultrasound images of arthritis-affected regions obtained from a hospital are used to verify the model's performance.
The residual classification network is built from conventional residual modules and improved bottleneck residual modules. A conventional residual module stacks two 3 × 3 convolution modules; the improved bottleneck residual module stacks 1 × 1, 3 × 3, and 1 × 1 convolution modules in sequence. The 1 × 1 convolution in the bottleneck module also reduces the dimensionality, so the 3 × 3 convolution operates in a lower dimension, reducing the amount of computation and improving computational efficiency.
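The computational saving of the bottleneck design can be checked with simple arithmetic; the 256 and 64 channel widths are the usual ResNet convention, assumed here for illustration rather than taken from the patent:

```python
def conv_params(in_ch, out_ch, k):
    # Weight count of a k x k convolution; bias terms ignored for clarity.
    return in_ch * out_ch * k * k

channels, mid = 256, 64  # bottleneck reduces 256 -> 64 before the 3x3 conv

# Conventional residual module: two stacked 3x3 convolutions at full width.
conventional = 2 * conv_params(channels, channels, 3)

# Bottleneck module: 1x1 reduce, 3x3 at reduced width, 1x1 expand.
bottleneck = (conv_params(channels, mid, 1)
              + conv_params(mid, mid, 3)
              + conv_params(mid, channels, 1))

print(conventional)  # 1179648
print(bottleneck)    # 69632, roughly 17x fewer parameters
```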
Specifically, the valid generated images are added to the classification data set, and the classification model is retrained, validated, and tested with a residual classification network, improving classification accuracy and reliability. Compared with the prior art, the method overcomes the shortage of training data when deep learning uses only existing image samples, and avoids the network overfitting caused by the limitations of traditional augmentation; at the same time, adding the generator's valid images to the classification data set and retraining the classification model with the residual classification network, in combination, raises the classification accuracy of the trained network, remedies the tendency of prior work to classify well only on its own data set, and improves the generalization performance of the network.
The invention provides a method for improving the classification performance of a neural network based on a generative adversarial network. It should be noted that the type of ultrasound equipment required does not limit this patent, nor do the resolution or the content of the acquired ultrasound images. Various modifications and adaptations may occur to those skilled in the art without departing from the principles of the invention and should be considered within its scope. In addition, components not specified in the embodiment can be implemented with the prior art.

Claims (6)

1. An ultrasound image classification method based on a convolutional neural network, characterized by comprising the following steps:
step 1, delimiting a region of interest in an original image and cropping it to obtain a cropped image;
step 2, performing data augmentation on the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set;
step 3, training a generative adversarial network with the augmented data set, with validation and testing, to obtain a trained generator;
step 4, loading the trained generator, generating images from random noise, and assigning labels to the generated images;
and step 5, adding the images produced by the generator to the classification data set, retraining the convolutional neural network to classify ultrasound images, outputting accuracy and recall, and evaluating network performance.
2. The ultrasound image classification method based on the convolutional neural network according to claim 1, wherein step 1 comprises: selecting image sub-blocks containing a target region from the original image and cropping them to obtain cropped images of a uniform size that contain the target region, the image sub-blocks containing the target region being the region of interest of the original image.
3. The ultrasound image classification method based on the convolutional neural network according to claim 1, wherein step 2 comprises:
adding Gaussian white noise to the cropped image so that the histogram of the noisy image follows a one-dimensional Gaussian distribution;
and performing histogram equalization on the cropped image so that the pixel values of the mapped image follow a uniform distribution.
4. The ultrasonic image classification method based on the convolutional neural network as claimed in claim 1, wherein said step 3 comprises:
step 3-1, adding the images of the data set obtained in step 2 to a real-image data set, inputting the real images of the real-image data set into the generative adversarial network and using them, together with the generated images inferred by the generator, as the input images of the discriminator, wherein the real images are labeled true and the generated images are labeled false;
step 3-2, connecting the discriminator in series after the generator, inputting random noise, passing it through the generator, and feeding the generated image to the discriminator with its label set to true at this stage, back-propagating the loss value, and updating only the network parameters of the generator while keeping the network parameters of the discriminator unchanged;
and step 3-3, saving the trained network parameters of the generator as a generator weight file.
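The alternating update of steps 3-1 and 3-2 — real samples labeled true and generated samples false for the discriminator step, then generated samples relabeled true with the discriminator frozen for the generator step — can be illustrated with a deliberately tiny 1-D stand-in. The linear generator, logistic discriminator, learning rate, and target distribution below are all illustrative assumptions, not the claimed networks.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=300, lr=0.05, real_mean=3.0, seed=0):
    """Toy GAN: generator G(z) = wg*z + bg, discriminator
    D(x) = sigmoid(wd*x + bd), trained with the alternating
    label scheme of claim 4."""
    rng = np.random.default_rng(seed)
    wg, bg = 0.1, 0.0
    wd, bd = 0.1, 0.0
    for _ in range(steps):
        # Discriminator step: real -> label 1 (true), fake -> label 0 (false).
        z = rng.standard_normal(32)
        x_real = real_mean + 0.1 * rng.standard_normal(32)
        x_fake = wg * z + bg
        p_real = sigmoid(wd * x_real + bd)
        p_fake = sigmoid(wd * x_fake + bd)
        wd -= lr * (np.mean((p_real - 1.0) * x_real) + np.mean(p_fake * x_fake))
        bd -= lr * (np.mean(p_real - 1.0) + np.mean(p_fake))
        # Generator step: fake samples relabeled 1; the gradient flows
        # through the discriminator, but only generator parameters move.
        z = rng.standard_normal(32)
        x_fake = wg * z + bg
        p_fake = sigmoid(wd * x_fake + bd)
        grad = (p_fake - 1.0) * wd
        wg -= lr * np.mean(grad * z)
        bg -= lr * np.mean(grad)
    return wg, bg, wd, bd
```

In step 3-3 the trained generator parameters (`wg`, `bg` in this toy) would be serialized to the generator weight file; in a real framework that is the usual save-weights call.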
5. The ultrasonic image classification method based on the convolutional neural network as claimed in claim 1, wherein said step 4 comprises:
step 4-1, loading the generator weight file of step 3 directly into the generator for inference;
and step 4-2, generating images with the generator and assigning a label to each image the generator produces.
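Steps 4-1 and 4-2 amount to a forward pass from random noise followed by label calibration of the output batch. In the sketch below the weight-file contents (keys `w` and `b`) and the linear forward pass are hypothetical stand-ins for the real generator.

```python
import numpy as np

def generate_labeled_batch(weights, n, label, rng=None):
    """Infer n images from random noise with a (toy) linear generator
    loaded from a weight file, then calibrate every generated sample
    with the given class label."""
    rng = rng or np.random.default_rng(0)
    w, b = weights["w"], weights["b"]      # hypothetical weight-file entries
    z = rng.standard_normal((n, w.shape[0]))
    images = z @ w + b                     # stand-in for the generator forward pass
    labels = np.full(n, label)             # one calibrated label per generated image
    return images, labels
```

The returned `(images, labels)` pairs are what step 5 merges into the classification training set.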
6. The method for classifying ultrasonic images based on the convolutional neural network as claimed in claim 1, wherein said step 5 comprises:
step 5-1, merging the labeled generated images of step 4 with the original data set to serve as the training set of a residual classification network;
step 5-2, dividing the training process of the residual classification network into a training phase and a validation phase, wherein each complete pass over the data set is followed by one validation, and as the parameters are updated, the best-performing network model, namely the model with the highest validation accuracy, is tracked and returned when training ends;
after training, inputting a labeled test data set into the trained network and computing the ratio of the number of correctly classified samples to the total number of samples in the test data set to obtain the accuracy of the residual classification network, wherein a higher accuracy indicates better network performance;
and at the same time outputting the recall, namely the ratio of the number of correctly classified samples to the total number of samples after the training data set passes through the residual classification network, wherein a higher recall indicates better network performance.
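The two evaluation metrics of claim 6 can be computed as below. Note that the claim's wording describes recall over the training set; the sketch uses the conventional per-class definition (true positives over all actual positives) as an assumption.

```python
def accuracy(y_true, y_pred):
    """Share of correctly classified samples over the whole set."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def recall(y_true, y_pred, positive):
    """Of all samples whose true class is `positive`, the share the
    network actually recovered: TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0
```

For example, with true labels `[1, 1, 0, 0]` and predictions `[1, 0, 0, 0]`, the accuracy is 0.75 while the recall of class 1 is only 0.5 — which is why the claim reports both metrics.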
CN202010070699.4A 2020-01-21 2020-01-21 Ultrasonic image classification method based on convolutional neural network Active CN111325236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010070699.4A CN111325236B (en) 2020-01-21 2020-01-21 Ultrasonic image classification method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN111325236A true CN111325236A (en) 2020-06-23
CN111325236B CN111325236B (en) 2023-04-18

Family

ID=71173236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010070699.4A Active CN111325236B (en) 2020-01-21 2020-01-21 Ultrasonic image classification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN111325236B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190012768A1 (en) * 2015-12-14 2019-01-10 Motion Metrics International Corp. Method and apparatus for identifying fragmented material portions within an image
CN109614979A (en) * 2018-10-11 2019-04-12 北京大学 A kind of data augmentation method and image classification method based on selection with generation
US20190197368A1 (en) * 2017-12-21 2019-06-27 International Business Machines Corporation Adapting a Generative Adversarial Network to New Data Sources for Image Classification
CN110516561A (en) * 2019-08-05 2019-11-29 西安电子科技大学 SAR image target recognition method based on DCGAN and CNN

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861924A (en) * 2020-07-23 2020-10-30 成都信息工程大学 Cardiac magnetic resonance image data enhancement method based on evolved GAN
CN111858351A (en) * 2020-07-23 2020-10-30 深圳慕智科技有限公司 Deep learning inference engine test method based on differential evaluation
CN111861924B (en) * 2020-07-23 2023-09-22 成都信息工程大学 Cardiac magnetic resonance image data enhancement method based on evolutionary GAN
CN111860664A (en) * 2020-07-24 2020-10-30 大连东软教育科技集团有限公司 Ultrasonic plane wave composite imaging method, device and storage medium
CN111860664B (en) * 2020-07-24 2024-04-26 东软教育科技集团有限公司 Ultrasonic plane wave composite imaging method, device and storage medium
CN112336357A (en) * 2020-11-06 2021-02-09 山西三友和智慧信息技术股份有限公司 RNN-CNN-based EMG signal classification system and method
WO2022105308A1 (en) * 2020-11-20 2022-05-27 南京大学 Method for augmenting image on the basis of generative adversarial cascaded network
CN112396110B (en) * 2020-11-20 2024-02-02 南京大学 Method for generating augmented image of countermeasure cascade network
CN112396110A (en) * 2020-11-20 2021-02-23 南京大学 Method for generating anti-cascade network augmented image
CN112507881A (en) * 2020-12-09 2021-03-16 山西三友和智慧信息技术股份有限公司 sEMG signal classification method and system based on time convolution neural network
CN113361443A (en) * 2021-06-21 2021-09-07 广东电网有限责任公司 Method and system for power transmission line image sample counterstudy augmentation
CN114201632A (en) * 2022-02-18 2022-03-18 南京航空航天大学 Label noisy data set amplification method for multi-label target detection task
CN114529484A (en) * 2022-04-25 2022-05-24 征图新视(江苏)科技股份有限公司 Deep learning sample enhancement method for direct current component change in imaging
CN114529484B (en) * 2022-04-25 2022-07-12 征图新视(江苏)科技股份有限公司 Deep learning sample enhancement method for direct current component change in imaging
CN115349834A (en) * 2022-10-18 2022-11-18 合肥心之声健康科技有限公司 Electrocardiogram screening method and system for asymptomatic severe coronary artery stenosis

Also Published As

Publication number Publication date
CN111325236B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN111325236B (en) Ultrasonic image classification method based on convolutional neural network
CN109035149B (en) License plate image motion blur removing method based on deep learning
CN112396110B (en) Method for generating augmented image of countermeasure cascade network
CN109190665B (en) Universal image classification method and device based on semi-supervised generation countermeasure network
CN109063724B (en) Enhanced generation type countermeasure network and target sample identification method
CN107784288B (en) Iterative positioning type face detection method based on deep neural network
CN111353373B (en) Related alignment domain adaptive fault diagnosis method
CN114240947B (en) Construction method and device of sweep image database and computer equipment
CN112613350A (en) High-resolution optical remote sensing image airplane target detection method based on deep neural network
CN113723295A (en) Face counterfeiting detection method based on image domain frequency domain double-flow network
CN113923104A (en) Network fault diagnosis method, equipment and storage medium based on wavelet neural network
CN113628297A (en) COVID-19 deep learning diagnosis system based on attention mechanism and transfer learning
CN111915595A (en) Image quality evaluation method, and training method and device of image quality evaluation model
Luo et al. SMD anomaly detection: a self-supervised texture–structure anomaly detection framework
CN114758272A (en) Forged video detection method based on frequency domain self-attention
CN113592008A (en) System, method, equipment and storage medium for solving small sample image classification based on graph neural network mechanism of self-encoder
CN112200182A (en) Deep learning-based wafer ID identification method and device
CN115294424A (en) Sample data enhancement method based on generation countermeasure network
CN112346126B (en) Method, device, equipment and readable storage medium for identifying low-order faults
CN115330697A (en) Tire flaw detection domain self-adaption method based on migratable Swin transducer
Celona et al. CNN-based image quality assessment of consumer photographs
CN114004295A (en) Small sample image data expansion method based on countermeasure enhancement
Villaret Promising depth map prediction method from a single image based on conditional generative adversarial network
Lim et al. Analyzing deep neural networks with noisy labels
CN112541555A (en) Deep learning-based classifier model training method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant