CN111553873A - Automatic brain neuron detection method based on multi-scale convolutional neural network - Google Patents

Automatic brain neuron detection method based on multi-scale convolutional neural network

Info

Publication number
CN111553873A
CN111553873A (application CN202010051615.2A)
Authority
CN
China
Prior art keywords
scale
convolution
pixels
neural network
feature map
Prior art date
Legal status
Granted
Application number
CN202010051615.2A
Other languages
Chinese (zh)
Other versions
CN111553873B (en)
Inventor
尤珍臻
姜明
石争浩
石程
梁继民
都双丽
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN202010051615.2A
Publication of CN111553873A
Application granted
Publication of CN111553873B
Legal status: Active
Anticipated expiration

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06T — Image data processing or generation, in general
    • G06T7/00 — Image analysis
    • G06T7/0002 — Inspection of images, e.g. flaw detection
    • G06T7/0012 — Biomedical image inspection
    • G06T7/60 — Analysis of geometric attributes
    • G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10056 — Microscopic image
    • G06T2207/20076 — Probabilistic image processing
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30004 — Biomedical image processing
    • G06T2207/30016 — Brain

Abstract

The invention discloses an automatic brain neuron detection method based on a multi-scale convolutional neural network. First, a database is established; then the original color images are preprocessed; third, a multi-scale convolutional neural network is constructed to predict the neuron centroid probability. The network parameters are trained on the training set by back propagation and stochastic gradient descent under the minimum-cross-entropy principle, and the accuracy of the network is verified on the test set; finally, neuron centroids are detected by computing local extrema. The invention overcomes the strong limitations of prior-art neuron detection methods.

Description

Automatic brain neuron detection method based on multi-scale convolutional neural network
Technical Field
The invention belongs to the technical field of computer science and biomedicine, and particularly relates to a brain neuron automatic detection method based on a multi-scale convolutional neural network.
Background
Currently, most cell detection algorithms target specific anatomical regions with few adherent cells or low cell density. In such regions, single cells can be detected well by classical methods such as threshold segmentation, mathematical morphology, concave-point detection, region growing, the watershed algorithm, active contour models, and Gaussian mixture models. However, in high-density anatomical regions such as the dentate gyrus of the hippocampus, where thousands of neurons adhere to one another, these methods no longer apply and readily cause over-detection and under-detection of neurons. In recent years, deep learning methods widely applied to histological microscopic images (CNN, FCRN, U-net, etc.) can effectively solve the automatic detection of some adherent neurons, but because these network structures use a receptive field of fixed size, they remain limited when detecting large numbers of adherent neurons in high-density anatomical regions. The present invention solves these problems and automatically detects large numbers of adherent neurons in high-density anatomical regions.
Disclosure of Invention
The invention aims to provide an automatic brain neuron detection method based on a multi-scale convolutional neural network, which overcomes the strong limitations of prior-art neuron detection methods.
The technical scheme adopted by the invention is an automatic brain neuron detection method based on a multi-scale convolutional neural network, implemented according to the following steps:
step 1, establishing a database, randomly dividing images in the database into a training set and a test set, and constructing a corresponding training set truth value diagram and a corresponding test set truth value diagram;
step 2, preprocessing the training set and the test set established in the step 1 to obtain a normalized training set image and a normalized test set image;
step 3, constructing the multi-scale convolutional neural network: train and update the network parameters using the training set images of step 2 as the input and the training set truth maps of step 1 as the output of the multi-scale convolutional neural network, obtaining the multi-scale convolutional neural network model;
step 4, predicting the neuron centroid probability: send the test set images of step 2 to the input of the model trained in step 3; the network output is the predicted probability map of the neuron centroids in the test set;
step 5, detecting the neuron centroids: in the probability map of step 4, extract every pixel whose probability is greater than T (T = 0.15) and is a local maximum within the disk of radius R centred on that pixel; compute the connected components of all extracted pixels; the centre of gravity of each connected component is a detected neuron centroid.
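The local-extremum detection of step 5 can be sketched as follows. This is a pure-NumPy illustration under stated assumptions, not code prescribed by the patent: a square window approximates the disk of radius R, and 4-connectivity is used for the connected components.

```python
import numpy as np

def detect_centroids(prob_map, radius=5, threshold=0.15):
    """Keep pixels whose probability exceeds `threshold` and is maximal
    within a (2*radius+1)-sized window, then return the centre of gravity
    of each 4-connected component of the kept pixels."""
    h, w = prob_map.shape
    keep = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            p = prob_map[y, x]
            if p <= threshold:
                continue
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            if p >= prob_map[y0:y1, x0:x1].max():  # local maximum in window
                keep[y, x] = True
    # Flood-fill labelling of kept pixels, then centres of gravity.
    centroids, seen = [], np.zeros_like(keep)
    for y in range(h):
        for x in range(w):
            if keep[y, x] and not seen[y, x]:
                stack, pixels = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and keep[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids
```

A pixel that exceeds the threshold but sits next to a stronger pixel is rejected by the local-maximum test, which is what suppresses duplicate detections inside one neuron.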
The present invention is also characterized in that,
the step 1 is as follows:
randomly select N of the M images in the database as the training set and use the remaining M−N images as the test set; in all M images, manually mark a disc at the centre position, i.e. the centroid, of each neuron to identify it, thereby constructing the truth maps.
The disc radius is 5 pixels.
The step 2 is as follows:
preprocessing the database image established in the step 1 to obtain a normalized image I:
I(x,y)=(R(x,y)+G(x,y)+B(x,y))/3/255 (1)
where I(x, y) is the normalized grey value of pixel (x, y) in image I and ranges from 0 to 1; the database images of step 1 are colour images composed of red (R), green (G) and blue (B) components, and R(x, y), G(x, y) and B(x, y) are the grey values of pixel (x, y) in the R, G and B components, respectively.
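The preprocessing of equation (1) can be sketched in a few lines of NumPy (an illustrative sketch, not part of the patent; the function name is hypothetical):

```python
import numpy as np

def normalize_rgb(image_rgb):
    """Average the R, G, B components and scale 8-bit values to [0, 1],
    per equation (1): I(x, y) = (R + G + B) / 3 / 255."""
    image_rgb = image_rgb.astype(np.float64)
    gray = (image_rgb[..., 0] + image_rgb[..., 1] + image_rgb[..., 2]) / 3.0
    return gray / 255.0

# A white pixel normalizes to 1.0 and a black pixel to 0.0.
img = np.array([[[255, 255, 255], [0, 0, 0]]], dtype=np.uint8)
I = normalize_rgb(img)
```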
The step 3 is as follows:
step 3.1, constructing a multi-scale encoder network;
step 3.2, constructing a decoder network;
and 3.3, take the training set images of step 2 as the input of the multi-scale encoder network of step 3.1 and the training set truth maps of step 1 as the output of the decoder network of step 3.2, and train and update the network parameters by back propagation and stochastic gradient descent under the minimum-cross-entropy principle to obtain the multi-scale convolutional neural network model.
The multi-scale encoder network of step 3.1 is composed of max-pooling layers, convolutional layers and ReLU layers, as follows:
step 3.1.1, first construct 3 scales for extracting neuron features, as follows:
a1. As the first scale, apply a max-pooling operation directly to the training set image of step 2 to obtain feature map F_a1 ∈ R^(m×n×d), where m and n are the length and width of the feature map, d is the length of its third dimension (i.e. the number of feature maps), and the subscript a1 is the index of the feature map;
a2. As the second scale, convolve the training set image of step 2 once with 64 convolution kernels of size 3 × 3 pixels (the weights of these 64 kernels are among the trainable parameters of the multi-scale convolutional neural network), then apply a max-pooling operation to obtain feature map F_a2, where a2 is the index of the feature map;
a3. As the third scale, apply two consecutive convolutions to the training set image of step 2, each with 64 kernels of size 3 × 3 pixels (the weights of the 128 kernels used by the two convolutions are trainable parameters), then apply a max-pooling operation to obtain feature map F_a3, where a3 is the index of the feature map;
step 3.1.2, concatenate the three-scale feature maps F_a1, F_a2 and F_a3 of step 3.1.1 into feature map F_m1, where m1 is the index of the feature map; at this point the receptive field sizes are 2 × 2, 4 × 4 and 6 × 6 pixels;
step 3.1.3, then continue constructing 3 scales for extracting neuron features, as follows:
b1. Convolve the feature map F_m1 of step 3.1.2 once with 1 convolution kernel of size 1 × 1 pixel (its weight is a trainable parameter), then, as the first scale, apply one max-pooling operation to obtain feature map F_b1, where b1 is the index of the feature map;
b2. Convolve F_m1 once with 256 kernels of size 3 × 3 pixels (their weights are trainable parameters), then, as the second scale, apply one max-pooling operation to obtain feature map F_b2, where b2 is the index of the feature map;
b3. Apply two consecutive convolutions to F_m1, each with 256 kernels of size 3 × 3 pixels (the weights of the 512 kernels used by the two convolutions are trainable parameters), then, as the third scale, apply one max-pooling operation to obtain feature map F_b3, where b3 is the index of the feature map;
step 3.1.4, concatenate the three-scale feature maps F_b1, F_b2 and F_b3 of step 3.1.3 into feature map F_m2, where m2 is the index of the feature map; at this point the receptive field sizes are 4 × 4, 6 × 6, 8 × 8, 10 × 10, 12 × 12, 14 × 14 and 16 × 16 pixels;
step 3.1.5, apply two convolutions to the feature map F_m2 of step 3.1.4, each with 1024 kernels of size 3 × 3 pixels (the weights of the 2048 kernels used by the two convolutions are trainable parameters); the two convolutions strengthen the extraction of neuron-centroid detail and yield feature map F_c, where c is the index of the feature map. This completes a multi-scale encoder network with 7 different receptive field sizes: 20 × 20, 22 × 22, 24 × 24, 26 × 26, 28 × 28, 30 × 30 and 32 × 32 pixels.
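The branch-and-concatenate structure of steps 3.1.1-3.1.2 can be sketched shape-wise in NumPy. This is an illustration only: the trained 3 × 3 convolutions are replaced by channel-repeat stand-ins so the sketch stays self-contained, the image is a small stand-in for the 512 × 512 training images, and the channel counts (1, 64, 64) follow the kernel counts in the text but are otherwise illustrative.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling on an (H, W, C) array with even H and W."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

# Three branches process the same input to different depths, are pooled to
# the same spatial size, and are concatenated along the channel axis.
image = np.random.rand(64, 64, 1)                     # stand-in input image
branch1 = max_pool_2x2(image)                         # scale 1: pooling only
branch2 = max_pool_2x2(np.repeat(image, 64, axis=2))  # stand-in for one 3x3 conv, 64 kernels
branch3 = max_pool_2x2(np.repeat(image, 64, axis=2))  # stand-in for two 3x3 convs
m1 = np.concatenate([branch1, branch2, branch3], axis=2)
```

The point of the sketch is that all three scales keep the same spatial resolution after pooling, so they can be concatenated along the channel axis; in the real network the stand-ins are trained convolutions followed by ReLU.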
In step 3.2, the decoder network consists of two groups of upsampling, convolutional and ReLU layers corresponding to the encoder network, as follows:
step 3.2.1, upsample the result of step 3.1 once to obtain feature map F_q, where q is the index of the feature map;
step 3.2.2, convolve the feature map F_q of step 3.2.1 once with 512 kernels of size 2 × 2 pixels (their weights are trainable parameters) to obtain feature map F_e, where e is the index of the feature map;
step 3.2.3, convolve the feature map F_e of step 3.2.2 once with 512 kernels of size 3 × 3 pixels (their weights are trainable parameters) to obtain feature map F_g, where g is the index of the feature map;
step 3.2.4, upsample the feature map F_g of step 3.2.3 once to obtain feature map F_h, where h is the index of the feature map;
step 3.2.5, convolve the feature map F_h of step 3.2.4 once with 256 kernels of size 2 × 2 pixels (their weights are trainable parameters) to obtain feature map F_k, where k is the index of the feature map;
step 3.2.6, convolve the feature map F_k of step 3.2.5 once with 256 kernels of size 3 × 3 pixels (their weights are trainable parameters) to obtain feature map F_l, where l is the index of the feature map;
thus the feature map F_l of step 3.2.6 is restored to the 512 × 512 size of the training set images of step 2, with 256 feature maps;
step 3.2.7, convolve the result of step 3.2.6 once with 2 kernels of size 3 × 3 pixels (their weights are trainable parameters) to obtain feature map F_n, where n is the index of the feature map; the 2 feature maps correspond to the 2 classes of the truth map of step 1, neuron centroid and non-centroid;
step 3.2.8, apply a sigmoid activation function to the result of step 3.2.7 to obtain the neuron centroid probability map P, of size 512 × 512 pixels, corresponding to the training set image I of step 2; in P, the larger the probability value of a pixel, the more likely that pixel is a neuron centroid.
Step 3.3 is specifically as follows:
Set the learning rate to 0.0001, choose Adam as the optimizer, set the loss function to binary cross-entropy, and minimize the loss by back propagation and stochastic gradient descent; the trained network parameters, i.e. the weights of all convolution kernels, constitute the multi-scale convolutional neural network model.
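The training principle of step 3.3, minimizing binary cross-entropy by gradient descent with learning rate 0.0001, can be illustrated on a toy logistic unit. The data, model and iteration count here are hypothetical stand-ins, and plain gradient descent replaces Adam for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(p, t):
    """Mean binary cross-entropy between predictions p and targets t."""
    eps = 1e-12
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

# One logistic unit trained by gradient descent with learning rate 0.0001.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
t = (X @ np.ones(4) > 0).astype(float)   # synthetic binary targets
w = np.zeros(4)

lr = 0.0001
loss_before = binary_cross_entropy(sigmoid(X @ w), t)
for _ in range(200):
    p = sigmoid(X @ w)
    grad = X.T @ (p - t) / len(t)        # gradient of BCE w.r.t. w
    w -= lr * grad
loss_after = binary_cross_entropy(sigmoid(X @ w), t)
```

The gradient of the sigmoid-plus-cross-entropy pair reduces to the simple residual form X.T @ (p - t) / n, which is the same quantity back propagation pushes through the convolutional layers of the full network.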
The method has the following advantages: a data set is constructed for the hippocampal region and the neuron centroid positions are manually annotated as truth maps, expanding the databases available for deep learning in the medical field; the constructed multi-scale convolutional neural network automatically, effectively and accurately detects large numbers of adherent neurons in high-density anatomical regions; and the trained model can be applied directly to new images, effectively shortening the detection time for neurons in large brain microscopic images.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a multi-scale convolutional neural network structure constructed by the present invention;
FIG. 3(a) is an image of the experimental use of the present invention;
FIG. 3(b) is a true value plot of an experimental image used in the present invention;
FIG. 4(a1) is a low density image with a small number of neurons;
FIG. 4(a2) is a higher density image with many adherent neurons;
FIG. 4(a3) is an extremely high density image with a large number of adherent neurons;
FIG. 4(b1) is a probability map of neuronal center of mass obtained by applying a multi-scale convolutional neural network to FIG. 4(a 1);
FIG. 4(b2) is a probability map of neuronal centroid obtained by applying the multi-scale convolutional neural network to FIG. 4(a 2);
FIG. 4(b3) is a probability map of neuronal centroid obtained by applying the multi-scale convolutional neural network to FIG. 4(a 3);
FIG. 4(c1) is a diagram of the neuronal centroids detected in FIG. 4(a 1);
FIG. 4(c2) is a diagram of the neuronal centroids detected in FIG. 4(a 2);
fig. 4(c3) is a diagram of the neuron centroids detected in fig. 4(a 3).
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The database used in the present invention is derived from brain tissue microscopic images of macaques provided by the French Alternative Energies and Atomic Energy Commission (CEA), a cooperating institution. The invention uses 864 images (each 512 × 512 pixels) of the hippocampus taken from the tissue microscopic image of the 91st coronal slice (about 145 GB).
The invention discloses an automatic brain neuron detection method based on a multi-scale convolutional neural network; its flow chart is shown in FIG. 1, and the method is implemented according to the following steps:
step 1, establishing a database, randomly dividing images in the database into a training set and a test set, and constructing a corresponding training set truth value diagram and a corresponding test set truth value diagram;
step 2, preprocessing the training set and the test set established in the step 1 to obtain a normalized training set image and a normalized test set image;
step 3, constructing the multi-scale convolutional neural network: train and update the network parameters using the training set images of step 2 as the input and the training set truth maps of step 1 as the output of the multi-scale convolutional neural network, obtaining the multi-scale convolutional neural network model;
step 4, predicting the neuron centroid probability: send the test set images of step 2 to the input of the model trained in step 3; the network output is the predicted probability map of the neuron centroids in the test set;
step 5, detecting the neuron centroids: in the probability map of step 4, extract every pixel whose probability is greater than T (T = 0.15) and is a local maximum within the disk of radius R centred on that pixel; compute the connected components of all extracted pixels; the centre of gravity of each connected component is a detected neuron centroid.
Wherein, the step 1 is as follows:
randomly select N of the M images in the database as the training set and use the remaining M−N images as the test set; in all M images, manually mark a disc at the centre position, i.e. the centroid, of each neuron to identify it, thereby constructing the truth maps. FIG. 3(a) is an image used in the experiments of the invention; dark areas represent neurons, characterized by a darker colour at the neuron centre that gradually brightens from the centre towards the neuron boundary. FIG. 3(b) is the truth map of the experimental image, containing 2 classes, neuron centroid and non-centroid: a white disc of radius 5 pixels represents a manually labelled neuron centroid, and the remaining black area is the non-centroid region;
the step 2 is as follows:
preprocessing the database image established in the step 1 to obtain a normalized image I:
I(x,y)=(R(x,y)+G(x,y)+B(x,y))/3/255 (1)
where I(x, y) is the normalized grey value of pixel (x, y) in image I and ranges from 0 to 1; the database images of step 1 are colour images composed of red (R), green (G) and blue (B) components, and R(x, y), G(x, y) and B(x, y) are the grey values of pixel (x, y) in the R, G and B components, respectively.
The step 3 is as follows:
step 3.1, constructing a multi-scale encoder network;
step 3.2, constructing a decoder network;
and 3.3, take the training set images of step 2 as the input of the multi-scale encoder network of step 3.1 and the training set truth maps of step 1 as the output of the decoder network of step 3.2, and train and update the network parameters by back propagation and stochastic gradient descent under the minimum-cross-entropy principle to obtain the multi-scale convolutional neural network model.
The multi-scale encoder network of step 3.1 is composed of max-pooling layers, convolutional layers and ReLU layers, as follows:
step 3.1.1, first construct 3 scales for extracting neuron features, as follows:
a1. As the first scale, apply a max-pooling operation directly to the training set image of step 2 to obtain feature map F_a1 ∈ R^(m×n×d), where m and n are the length and width of the feature map, d is the length of its third dimension (i.e. the number of feature maps), and the subscript a1 is the index of the feature map;
a2. As the second scale, convolve the training set image of step 2 once with 64 convolution kernels of size 3 × 3 pixels (the weights of these 64 kernels are among the trainable parameters of the multi-scale convolutional neural network), then apply a max-pooling operation to obtain feature map F_a2, where a2 is the index of the feature map;
a3. As the third scale, apply two consecutive convolutions to the training set image of step 2, each with 64 kernels of size 3 × 3 pixels (the weights of the 128 kernels used by the two convolutions are trainable parameters), then apply a max-pooling operation to obtain feature map F_a3, where a3 is the index of the feature map;
step 3.1.2, concatenate the three-scale feature maps F_a1, F_a2 and F_a3 of step 3.1.1 into feature map F_m1, where m1 is the index of the feature map; at this point the receptive field sizes are 2 × 2, 4 × 4 and 6 × 6 pixels;
step 3.1.3, then continue constructing 3 scales for extracting neuron features, as follows:
b1. Convolve the feature map F_m1 of step 3.1.2 once with 1 convolution kernel of size 1 × 1 pixel (its weight is a trainable parameter), then, as the first scale, apply one max-pooling operation to obtain feature map F_b1, where b1 is the index of the feature map;
b2. Convolve F_m1 once with 256 kernels of size 3 × 3 pixels (their weights are trainable parameters), then, as the second scale, apply one max-pooling operation to obtain feature map F_b2, where b2 is the index of the feature map;
b3. Apply two consecutive convolutions to F_m1, each with 256 kernels of size 3 × 3 pixels (the weights of the 512 kernels used by the two convolutions are trainable parameters), then, as the third scale, apply one max-pooling operation to obtain feature map F_b3, where b3 is the index of the feature map;
step 3.1.4, concatenate the three-scale feature maps F_b1, F_b2 and F_b3 of step 3.1.3 into feature map F_m2, where m2 is the index of the feature map; at this point the receptive field sizes are 4 × 4, 6 × 6, 8 × 8, 10 × 10, 12 × 12, 14 × 14 and 16 × 16 pixels;
step 3.1.5, apply two convolutions to the feature map F_m2 of step 3.1.4, each with 1024 kernels of size 3 × 3 pixels (the weights of the 2048 kernels used by the two convolutions are trainable parameters); the two convolutions strengthen the extraction of neuron-centroid detail and yield feature map F_c, where c is the index of the feature map. This completes a multi-scale encoder network with 7 different receptive field sizes: 20 × 20, 22 × 22, 24 × 24, 26 × 26, 28 × 28, 30 × 30 and 32 × 32 pixels.
In step 3.2, the decoder network consists of two groups of upsampling, convolutional and ReLU layers corresponding to the encoder network, as follows:
step 3.2.1, upsample the result of step 3.1 once to obtain feature map F_q, where q is the index of the feature map;
step 3.2.2, convolve the feature map F_q of step 3.2.1 once with 512 kernels of size 2 × 2 pixels (their weights are trainable parameters) to obtain feature map F_e, where e is the index of the feature map;
step 3.2.3, convolve the feature map F_e of step 3.2.2 once with 512 kernels of size 3 × 3 pixels (their weights are trainable parameters) to obtain feature map F_g, where g is the index of the feature map;
step 3.2.4, upsample the feature map F_g of step 3.2.3 once to obtain feature map F_h, where h is the index of the feature map;
step 3.2.5, convolve the feature map F_h of step 3.2.4 once with 256 kernels of size 2 × 2 pixels (their weights are trainable parameters) to obtain feature map F_k, where k is the index of the feature map;
step 3.2.6, convolve the feature map F_k of step 3.2.5 once with 256 kernels of size 3 × 3 pixels (their weights are trainable parameters) to obtain feature map F_l, where l is the index of the feature map;
thus the feature map F_l of step 3.2.6 is restored to the 512 × 512 size of the training set images of step 2, with 256 feature maps;
step 3.2.7, perform one convolution operation on the result of step 3.2.6 using 2 convolution kernels of size 3 × 3 pixels, obtaining the feature map F_n, where n is the serial number of the feature map; the weights of these 2 convolution kernels are among the parameters of the multi-scale convolutional neural network to be trained, and the 2 feature maps correspond to the 2 categories of the truth map in step 1, namely neuron centroid and non-centroid;
step 3.2.8, apply a sigmoid activation function to the result of step 3.2.7, obtaining the neuron centroid probability map P corresponding to the training set image I of step 2; P is 512 × 512 pixels in size, and the larger the probability value of a pixel in P, the more likely that pixel is a neuron centroid.
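The sigmoid activation of step 3.2.8 can be illustrated numerically. The sketch below is a minimal NumPy illustration only: the tiny 4 × 4 map (instead of the full 512 × 512 output), the random logits, and the assumption that channel 0 is the centroid channel are all choices made for this example, not values from the trained network.

```python
import numpy as np

def sigmoid(x):
    """Element-wise logistic function, mapping logits to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def centroid_probability_map(logits):
    """Apply a sigmoid to the 2-channel output of step 3.2.7 and keep
    the centroid channel (channel ordering is an assumption here).

    logits: array of shape (H, W, 2), channel 0 = centroid, 1 = non-centroid.
    """
    probs = sigmoid(logits)
    return probs[..., 0]  # probability map P: higher value = more likely a centroid

# Tiny 4 x 4 example in place of the full 512 x 512 map
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 2))
P = centroid_probability_map(logits)
assert P.shape == (4, 4)
assert np.all((P > 0.0) & (P < 1.0))  # every value is a valid probability
```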
Step 3.3 is specifically as follows:
set the learning rate to 0.0001, select Adam as the optimizer, and set the loss function to binary cross-entropy (binary_crossentropy); minimize the loss function by back-propagation and stochastic gradient descent to obtain the trained network parameters, i.e. the weights of all convolution kernels, which together constitute the multi-scale convolutional neural network model.
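The binary cross-entropy loss named in step 3.3 can be written out directly. The NumPy sketch below illustrates the quantity being minimized, not the patent's training code; the clipping constant `eps` and the toy targets are assumptions for numerical stability and demonstration.

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy between a true centroid map and a
    predicted probability map; predictions are clipped away from 0/1
    so the logarithms stay finite."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1.0 - y_true) * np.log(1.0 - y_pred))))

# A near-perfect prediction gives a near-zero loss; a wrong one a large loss.
y_true = np.array([1.0, 0.0, 1.0, 0.0])
good = binary_crossentropy(y_true, np.array([0.99, 0.01, 0.99, 0.01]))
bad = binary_crossentropy(y_true, np.array([0.01, 0.99, 0.01, 0.99]))
assert good < 0.02 < bad
```

Training drives the convolution-kernel weights toward the `good` regime by gradient descent on this quantity.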
FIG. 2 shows the multi-scale convolutional neural network structure constructed by the present invention: conv denotes a convolution operation, ReLU the activation function, max pool a maximum pooling operation, up an up-sampling operation, concat the concatenation (cascade) operation, and sigmoid the output activation function. The number at the bottom left of each square is the image size, and the number directly above each square is the length of the image's third dimension (the number of feature maps).
FIG. 4 shows the neuron centroid detection results obtained by the present invention on images of different neuron densities. FIGS. 4(a1)-(a3) are grayscale versions of the original color images provided by the cooperating unit, in which disks of radius 5 pixels mark the manually labeled neuron centroids: FIG. 4(a1) is a low-density image with few neurons, FIG. 4(a2) is a higher-density image with many adherent neurons, and FIG. 4(a3) is an extremely high-density image with a large number of adherent neurons. FIGS. 4(b1)-(b3) are the neuron centroid probability maps obtained by applying the invented multi-scale convolutional neural network to FIGS. 4(a1)-(a3), respectively; the brighter a pixel, the greater the probability that it is a neuron centroid. FIGS. 4(c1)-(c3) are the neuron centroids detected in FIGS. 4(a1)-(a3), respectively: in FIGS. 4(b1)-(b3), the connected components of pixels whose probability is greater than T (T = 0.15) and which are local maxima are computed; the center of gravity of each component is a neuron centroid detected by the present invention, drawn as a disk of radius R.
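The post-processing just described (threshold T = 0.15, local maxima, connected components, centers of gravity) can be sketched with SciPy's `ndimage` module. This is an illustrative implementation under assumptions: the 3 × 3 local-maximum window stands in for the disk of radius R, whose exact value the text leaves as a parameter.

```python
import numpy as np
from scipy import ndimage

def detect_centroids(prob_map, T=0.15, window=3):
    """Detect neuron centroids from a centroid probability map.

    Keeps pixels that (a) exceed the threshold T and (b) are local maxima
    within a window x window neighborhood, then labels the connected
    components of those pixels and returns each component's center of gravity.
    """
    local_max = prob_map == ndimage.maximum_filter(prob_map, size=window)
    candidates = (prob_map > T) & local_max
    labels, n = ndimage.label(candidates)
    return ndimage.center_of_mass(candidates, labels, range(1, n + 1))

# Synthetic probability map with two isolated peaks
P = np.zeros((20, 20))
P[5, 5] = 0.9
P[15, 12] = 0.6
centroids = detect_centroids(P)
assert len(centroids) == 2  # both peaks recovered as centroids
```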
The accuracy of the constructed multi-scale convolutional neural network's predictions is verified on the test set: the test set images of step 2 are fed into the trained network model to obtain the neuron probability maps, the number of predicted neurons is computed from the neuron centroid maps of step 5, and the performance of the proposed network is evaluated quantitatively by comparing the relative counting error against the number of neurons in the test set truth maps of step 1. Formula (2) defines the relative error.
relative error = |N_a − N_e| / N_e (2)
In the formula, N_a is the number of neuron centroids detected by the automatic method and N_e is the number of neuron centroids marked by the expert; the smaller the relative error, the better the automatic detection method performs. As shown in Table 1, FCRN, U-net, and the neural network of the present invention were each applied to the test set images of step 2, and the mean relative error and standard deviation of the resulting neuron counts were computed against the neuron counts of the test set truth maps of step 1.
TABLE 1 Comparison of relative error ± standard deviation for the different methods

Method                                    Relative error ± standard deviation
FCRN                                      1.428 ± 1.255
U-net                                     0.169 ± 0.214
Neural network of the present invention   0.135 ± 0.189
In Table 1, FCRN: W. Xie, J. A. Noble, and A. Zisserman, "Microscopy cell counting and detection with fully convolutional regression networks," Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 6, no. 3, pp. 283–292, 2016.
U-net: T. Falk et al., "U-Net: deep learning for cell counting, detection, and morphometry," Nature Methods, vol. 16, no. 1, pp. 67–70, 2019, doi: 10.1038/s41592-018-0261-2.
As can be seen from Table 1, the mean relative error and the standard deviation obtained by applying the present invention to the test set are both smaller than those of the reference methods. The mean relative error of the neuron counts obtained with the invented network is the smallest: compared with FCRN and U-net, the neuron centroid detection accuracy is improved by 90.5% and 20.1%, respectively. The standard deviation of the neuron counts is also the smallest, showing that the invented network is more robust than the two reference methods on neuron tissue microscopy images of different densities.
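The relative counting error of formula (2) reduces to a one-line helper; the example counts below (26 detections against 30 expert marks) are hypothetical numbers for illustration, not values from the experiments above.

```python
def relative_error(n_auto, n_expert):
    """Relative counting error of formula (2): |Na - Ne| / Ne,
    where Na is the automatic count and Ne the expert count."""
    return abs(n_auto - n_expert) / n_expert

# e.g. 26 detected centroids against 30 expert-marked ones
assert abs(relative_error(26, 30) - 4 / 30) < 1e-12
# a perfect detector has zero relative error
assert relative_error(30, 30) == 0.0
```

Note the metric is symmetric in over- and under-counting: missing 4 neurons and hallucinating 4 extra ones score the same.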

Claims (8)

1. The method for automatically detecting the cerebral neurons based on the multi-scale convolutional neural network is characterized by comprising the following steps of:
step 1, establishing a database, randomly dividing images in the database into a training set and a test set, and constructing a corresponding training set truth value diagram and a corresponding test set truth value diagram;
step 2, preprocessing the training set and the test set established in the step 1 to obtain a normalized training set image and a normalized test set image;
step 3, constructing a multi-scale convolution neural network: training and updating network parameters by respectively using the training set image in the step 2 and the training set truth value diagram in the step 1 as the input and the output of the multi-scale convolutional neural network, so as to obtain a model of the multi-scale convolutional neural network;
step 4, predicting the neuron centroid probability: sending the test set image in the step 2 to the input end of the multi-scale convolutional neural network model trained in the step 3, wherein the output result obtained by the network is a predicted probability graph of the neuron centroid in the test set;
step 5, detecting the neuron centroids: according to the neuron centroid probability map of step 4, for each pixel in the map, extract the pixels whose probability within the disk of radius R centered at the pixel is greater than T (T = 0.15) and which are local maxima; compute the connected components of all extracted pixels, and the center of gravity of each connected component is a neuron centroid detected by the invention.
2. The method for automatically detecting neurons in brain based on multi-scale convolutional neural network of claim 1, wherein the step 1 is as follows:
randomly selecting N images from the M images in the database as the training set and using the remaining M-N images as the test set; in all M images, manually marking a disc at the center position, i.e. the centroid, of each neuron to identify it, thereby constructing the truth maps.
3. The method of claim 2, wherein the disk radius is 5 pixels.
4. The method for automatically detecting neurons in brain based on multi-scale convolutional neural network of claim 2, wherein the step 2 is as follows:
preprocessing the database image established in the step 1 to obtain a normalized image I:
I(x,y)=(R(x,y)+G(x,y)+B(x,y))/3/255 (1)
wherein I (x, y) is a gray scale normalized value of the pixel (x, y) in the image I, I (x, y) ranges from 0 to 1, the database image of step 1 is a color image and is composed of red R, green G, and blue B components, R (x, y) is a gray scale of the pixel (x, y) in the R component, G (x, y) is a gray scale of the pixel (x, y) in the G component, and B (x, y) is a gray scale of the pixel (x, y) in the B component.
5. The method for automatically detecting neurons in brain based on multi-scale convolutional neural network of claim 4, wherein the step 3 is as follows:
step 3.1, constructing a multi-scale encoder network;
step 3.2, constructing a decoder network;
and 3.3, taking the training set image in the step 2 as the input end of the multi-scale encoder network constructed in the step 3.1, taking the training set truth diagram in the step 1 as the output end of the multi-scale decoder network constructed in the step 3.2, and training and updating network parameters by using a back propagation and random gradient descent method according to a minimum cross entropy principle to obtain a multi-scale convolutional neural network model.
6. The method for automatically detecting neurons in brain based on multi-scale convolutional neural network of claim 5, wherein the multi-scale encoder network in step 3.1 is composed of max pooling layer, convolutional layer and ReLU layer, specifically as follows:
step 3.1.1, first construct 3 scales for extracting neuron features, specifically as follows:
a1. directly perform one maximum pooling operation on the training set images of step 2 as the first scale, obtaining the feature map F_a1 with dimensions m × n × d, where m and n are the length and width of the feature map, d is the length of its third dimension (i.e. the number of feature maps), and a1 is the serial number of the feature map;
a2. perform one convolution operation on the training set images of step 2 using 64 convolution kernels of size 3 × 3 pixels, the weights of these 64 convolution kernels being among the parameters of the multi-scale convolutional neural network to be trained, and then perform one maximum pooling operation as the second scale, obtaining the feature map F_a2, where a2 is its serial number;
a3. perform two successive convolution operations on the training set images of step 2 using 64 convolution kernels of size 3 × 3 pixels each, the weights of the 128 convolution kernels used by the two operations being among the parameters of the multi-scale convolutional neural network to be trained, and then perform one maximum pooling operation as the third scale, obtaining the feature map F_a3, where a3 is its serial number;
step 3.1.2, concatenate the three feature maps F_a1, F_a2 and F_a3 of step 3.1.1 together, obtaining the feature map F_m1, where m1 is its serial number; at this point the receptive field sizes are 2 × 2 pixels, 4 × 4 pixels and 6 × 6 pixels;
step 3.1.3, then continue to construct 3 scales for extracting neuron features, specifically as follows:
b1. perform one convolution operation on the feature map F_m1 obtained in step 3.1.2 using 1 convolution kernel of size 1 × 1 pixel, the weight of this kernel being among the parameters of the multi-scale convolutional neural network to be trained, and then perform one maximum pooling operation as the first scale, obtaining the feature map F_b1, where b1 is its serial number;
b2. perform one convolution operation on the feature map F_m1 obtained in step 3.1.2 using 256 convolution kernels of size 3 × 3 pixels, the weights of these 256 kernels being among the parameters of the multi-scale convolutional neural network to be trained, and then perform one maximum pooling operation as the second scale, obtaining the feature map F_b2, where b2 is its serial number;
b3. perform two successive convolution operations on the feature map F_m1 obtained in step 3.1.2 using 256 convolution kernels of size 3 × 3 pixels each, the weights of the 512 convolution kernels used by the two operations being among the parameters of the multi-scale convolutional neural network to be trained, and then perform one maximum pooling operation as the third scale, obtaining the feature map F_b3, where b3 is its serial number;
step 3.1.4, concatenate the three feature maps F_b1, F_b2 and F_b3 of step 3.1.3 together, obtaining the feature map F_m2, where m2 is its serial number; at this point the receptive field sizes are 4 × 4 pixels, 6 × 6 pixels, 8 × 8 pixels, 10 × 10 pixels, 12 × 12 pixels, 14 × 14 pixels and 16 × 16 pixels;
step 3.1.5, perform two convolution operations on the feature map F_m2 obtained in step 3.1.4 using 1024 convolution kernels of size 3 × 3 pixels each, the weights of the 2048 convolution kernels used by the two operations being among the parameters of the multi-scale convolutional neural network to be trained; the two convolution operations strengthen the extraction of neuron centroid detail features, yielding the feature map F_c, where c is its serial number. This completes a multi-scale encoder network composed of 7 different scales with receptive field sizes of 20 × 20 pixels, 22 × 22 pixels, 24 × 24 pixels, 26 × 26 pixels, 28 × 28 pixels, 30 × 30 pixels and 32 × 32 pixels.
7. The method for automatic detection of neurons in brain based on multi-scale convolutional neural network as claimed in claim 6, wherein the decoder network in step 3.2 is composed of two sets of upsampling layer, convolutional layer and ReLU layer corresponding to the encoder network, specifically as follows:
step 3.2.1, perform one up-sampling operation on the result of step 3.1, obtaining the feature map F_q, where q is the serial number of the feature map;
step 3.2.2, perform one convolution operation on the feature map F_q obtained in step 3.2.1 using 512 convolution kernels of size 2 × 2 pixels, obtaining the feature map F_e, where e is the serial number of the feature map; the weights of these 512 convolution kernels are among the parameters of the multi-scale convolutional neural network to be trained;
step 3.2.3, perform one convolution operation on the feature map F_e obtained in step 3.2.2 using 512 convolution kernels of size 3 × 3 pixels, obtaining the feature map F_g, where g is the serial number of the feature map; the weights of these 512 convolution kernels are among the parameters of the multi-scale convolutional neural network to be trained;
step 3.2.4, perform one up-sampling operation on the feature map F_g obtained in step 3.2.3, obtaining the feature map F_h, where h is the serial number of the feature map;
step 3.2.5, perform one convolution operation on the feature map F_h obtained in step 3.2.4 using 256 convolution kernels of size 2 × 2 pixels, obtaining the feature map F_k, where k is the serial number of the feature map; the weights of these 256 convolution kernels are among the parameters of the multi-scale convolutional neural network to be trained;
step 3.2.6, perform one convolution operation on the feature map F_k obtained in step 3.2.5 using 256 convolution kernels of size 3 × 3 pixels, obtaining the feature map F_l, where l is the serial number of the feature map; the weights of these 256 convolution kernels are among the parameters of the multi-scale convolutional neural network to be trained;
at this point, the feature map F_l obtained in step 3.2.6 has been restored to the 512 × 512 size of the training set images of step 2, and the number of feature maps is 256;
step 3.2.7, perform one convolution operation on the result of step 3.2.6 using 2 convolution kernels of size 3 × 3 pixels, obtaining the feature map F_n, where n is the serial number of the feature map; the weights of these 2 convolution kernels are among the parameters of the multi-scale convolutional neural network to be trained, and the 2 feature maps correspond to the 2 categories of the truth map in step 1, namely neuron centroid and non-centroid;
step 3.2.8, apply a sigmoid activation function to the result of step 3.2.7, obtaining the neuron centroid probability map P corresponding to the training set image I of step 2; P is 512 × 512 pixels in size, and the larger the probability value of a pixel in P, the more likely that pixel is a neuron centroid.
8. The method for automatically detecting neurons in brain based on multi-scale convolutional neural network of claim 7, wherein the step 3.3 is as follows:
set the learning rate to 0.0001, select Adam as the optimizer, and set the loss function to binary cross-entropy (binary_crossentropy); minimize the loss function by back-propagation and stochastic gradient descent to obtain the trained network parameters, i.e. the weights of all convolution kernels, which together constitute the multi-scale convolutional neural network model.
CN202010051615.2A 2020-01-17 2020-01-17 Automatic detection method for brain neurons based on multi-scale convolution neural network Active CN111553873B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010051615.2A CN111553873B (en) 2020-01-17 2020-01-17 Automatic detection method for brain neurons based on multi-scale convolution neural network


Publications (2)

Publication Number Publication Date
CN111553873A true CN111553873A (en) 2020-08-18
CN111553873B CN111553873B (en) 2023-03-14

Family

ID=72005446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010051615.2A Active CN111553873B (en) 2020-01-17 2020-01-17 Automatic detection method for brain neurons based on multi-scale convolution neural network

Country Status (1)

Country Link
CN (1) CN111553873B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2913432A1 (en) * 2015-11-26 2016-01-27 Robert Zakaluk System and method for identifying, analyzing, and reporting on players in a game from video
WO2018052586A1 (en) * 2016-09-14 2018-03-22 Konica Minolta Laboratory U.S.A., Inc. Method and system for multi-scale cell image segmentation using multiple parallel convolutional neural networks
CN108846473A (en) * 2018-04-10 2018-11-20 杭州电子科技大学 Light field depth estimation method based on direction and dimension self-adaption convolutional neural networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN BAILI ET AL.: "Traffic sign recognition based on convolutional neural networks", Computer and Modernization *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070690A (en) * 2020-08-25 2020-12-11 西安理工大学 Single image rain removing method based on convolutional neural network double-branch attention generation
CN112070690B (en) * 2020-08-25 2023-04-25 西安理工大学 Single image rain removing method based on convolution neural network double-branch attention generation
CN113240620A (en) * 2021-01-29 2021-08-10 西安理工大学 Highly adhesive and multi-size brain neuron automatic segmentation method based on point markers
CN113240620B (en) * 2021-01-29 2023-09-12 西安理工大学 Highly-adhesive and multi-size brain neuron automatic segmentation method based on point marking
CN113920124A (en) * 2021-06-22 2022-01-11 西安理工大学 Brain neuron iterative segmentation method based on segmentation and error guidance
CN113674207A (en) * 2021-07-21 2021-11-19 电子科技大学 Automatic PCB component positioning method based on graph convolution neural network
CN113674207B (en) * 2021-07-21 2023-04-07 电子科技大学 Automatic PCB component positioning method based on graph convolution neural network
CN115578335A (en) * 2022-09-29 2023-01-06 西安理工大学 Vocal cord white spot image classification method based on multi-scale feature extraction

Also Published As

Publication number Publication date
CN111553873B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
CN111553873B (en) Automatic detection method for brain neurons based on multi-scale convolution neural network
CN107506761B (en) Brain image segmentation method and system based on significance learning convolutional neural network
CN111179229B (en) Industrial CT defect detection method based on deep learning
CN110097554B (en) Retina blood vessel segmentation method based on dense convolution and depth separable convolution
CN110136154B (en) Remote sensing image semantic segmentation method based on full convolution network and morphological processing
CN112132817B (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN112465830B (en) Automatic segmentation method for polished glass-like lung nodule and computer equipment
CN110781901B (en) Instrument ghost character recognition method based on BP neural network prediction threshold
CN107871316B (en) Automatic X-ray film hand bone interest area extraction method based on deep neural network
CN110751644B (en) Road surface crack detection method
CN111275660B (en) Flat panel display defect detection method and device
CN108549912A (en) A kind of medical image pulmonary nodule detection method based on machine learning
CN107145885A (en) A kind of individual character figure character recognition method and device based on convolutional neural networks
WO2022127500A1 (en) Multiple neural networks-based mri image segmentation method and apparatus, and device
CN113449784B (en) Image multi-classification method, device, equipment and medium based on priori attribute map
CN111815563B (en) Retina optic disc segmentation method combining U-Net and region growing PCNN
CN113240620B (en) Highly-adhesive and multi-size brain neuron automatic segmentation method based on point marking
WO2020119624A1 (en) Class-sensitive edge detection method based on deep learning
CN113221731B (en) Multi-scale remote sensing image target detection method and system
CN110689060A (en) Heterogeneous image matching method based on aggregation feature difference learning network
CN113762151A (en) Fault data processing method and system and fault prediction method
CN111401209B (en) Action recognition method based on deep learning
CN111210398A (en) White blood cell recognition system based on multi-scale pooling
CN113344933A (en) Glandular cell segmentation method based on multi-level feature fusion network
CN112613354A (en) Heterogeneous remote sensing image change detection method based on sparse noise reduction self-encoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant