CN111931663A - Fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning - Google Patents

Fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning

Info

Publication number
CN111931663A
CN111931663A
Authority
CN
China
Prior art keywords
image
layer
size
peak
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010807072.2A
Other languages
Chinese (zh)
Inventor
张栋
杜康
刘新全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Boom Science Co ltd
Original Assignee
Tianjin Boom Science Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Boom Science Co ltd filed Critical Tianjin Boom Science Co ltd
Priority to CN202010807072.2A priority Critical patent/CN111931663A/en
Publication of CN111931663A publication Critical patent/CN111931663A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 - Feature extraction
    • G06F2218/10 - Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 - Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning, which comprises the following steps: collecting a large amount of fluorescence immunochromatography quantitative image data; marking the peak-point positions in the collected fluorescence immunochromatographic quantitative images to obtain label information for the image data; carrying out standardized preprocessing on the images and their label information and establishing an algorithm training set; establishing the first-layer convolutional neural network of the cascade algorithm, which locates the peak points within a very small error range; establishing the second-layer convolutional neural network of the cascade algorithm, which makes the result more accurate; and, after standardized preprocessing of the test data, establishing a test set, inputting it into the trained algorithm network, and testing the peak-finding accuracy on fluorescence immunochromatographic images. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning can identify the correct peak points and output accurate peak-point coordinate data.

Description

Fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning
Technical Field
The invention belongs to the field of image processing, and particularly relates to a fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning.
Background
The peak searching methods in current use include the direct peak searching method, the half-peak searching method, general polynomial fitting, the Monte Carlo algorithm, Gaussian-polynomial fitting, the three-point peak searching algorithm, and others. The direct and half-peak searching methods are mainly applied through first-order numerical differentiation, finding peaks by differentiating the global image; the calculation is simple, but these methods are only suitable for isolated peaks, and their accuracy on complex images with large curve fluctuations is too low. General polynomial fitting fits a general polynomial and judges the result by least squares; it is simple and easy to implement, but its peak searching accuracy is low. The Monte Carlo algorithm, also called the centroid detection method, is a statistical simulation algorithm; its calculation speed is high, but its linearity is not ideal, so its peak searching precision is low. Gaussian-polynomial fitting applies a Gaussian-polynomial transformation to the fluorescence peak image before searching for the peak; its accuracy is higher than that of general polynomial fitting, but it is more sensitive to the waveform, has weaker noise resistance, places higher demands on the peak shape of the image, and has low accuracy in identifying peak shapes affected by interference peaks. Finally, the three-point peak searching algorithm improves peak searching accuracy markedly, but it imposes strict requirements on the regularity of the image peak shape.
In general, the existing methods cannot accurately locate the peak points of images that contain interference peaks; to address this problem, the present invention provides a convolutional neural network algorithm built on a cascade design.
Disclosure of Invention
In view of the above, the present invention aims to provide a fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning, so as to solve the problem of inaccurate peak finding in fluorescence immunochromatographic quantitative detection.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning comprises the following steps:
step S1, collecting fluorescence immunochromatography quantitative image data from a large number of samples covering different test items;
step S2, manually labeling the peak-point positions in the fluorescence immunochromatography quantitative images collected in step S1 to obtain the label information of the images;
step S3, carrying out standardized preprocessing on the fluorescence immunochromatographic quantitative image and the corresponding label information, and establishing an algorithm training set;
step S4, establishing the first-layer convolutional neural network, 39-net, inputting the preprocessed data set as the training set for training, performing global regression prediction on the image with the first-layer network, and locating the peak points within a very small error range;
step S5, establishing the second-layer convolutional neural network, 15-net, building a patch centered on each result output by the first layer, inputting these patches as a training set into the second-layer algorithm for training, and repeatedly fine-tuning the first-layer results to finally obtain accurate coordinate prediction information;
and step S6, after the data of the test set is subjected to standardized preprocessing, the test set is established and input into the trained algorithm network, and the peak searching accuracy of the fluorescence immunochromatographic image is tested.
Further, the image data in step S1 are peak-shaped images of the concentrations of markers in the human body that characterize physiological indicators.
Further, the step S3 includes the following steps:
step S31, carrying out gray processing on the collected fluorescence immunochromatography quantitative image;
step S32, data enhancement is carried out on the image data, and the scale of the data set is expanded;
step S33, cutting and compressing the image data to make the image size fit the image size input by the convolution neural network algorithm;
step S34, standardizing the image data, normalizing the label data value to a small range, and converting the image data into a two-dimensional data set.
Further, in step S31, the pixel value of the fluorescence immunochromatographic image after the gradation processing is between 0 and 255.
Further, the specific method of step S32 is as follows: data enhancement is performed on the fluorescence immunochromatography quantitative images, and the image data set is expanded by mirror (horizontal) flipping.
Further, the specific method of step S33 is as follows: a prior cropping operation is performed on the image and the cropped data are compressed; the cropped size is the input size accepted by the first-layer convolutional neural network, and the crops serve as candidate training regions.
Further, in step S34, the specific method for standardizing the image data is as follows: the pixel values of all pixel points in the fluorescence immunochromatography quantitative image are standardized by subtracting the mean pixel value from every point and dividing by the standard deviation, so that the pixel values are normalized to approximately zero mean and unit variance. The standardization formula is:

$$\text{Normal\_X}_i = \frac{X_i - \text{Mean}}{\text{Std}}$$

where Normal_X_i is the standardized pixel value of each pixel point in the image, X_i is the original pixel value of each pixel point, Mean is the mean pixel value of all pixel points in the image, and Std is the standard deviation of the pixel values.
Further, the specific method of step S4 is as follows:
The first-layer convolutional neural network, 39-net, is responsible for extracting and combining global image features and comprises four convolutional layers C11, C12, C13 and C14, four pooling layers P11, P12, P13 and P14, four ReLU activation layers R11, R12, R13 and R14, and two fully connected layers F11 and F12. The first-layer network takes 39 × 39 images as input. Convolutional layer C11 convolves the input image with 20 convolution kernels of size 4 × 4; pooling layer P11 down-samples the feature maps output by C11 over 2 × 2 regions and obtains 18 × 18 feature maps by max pooling. Convolutional layer C12 applies 40 convolution kernels of size 3 × 3; pooling layer P12 down-samples the feature maps output by C12 over 2 × 2 regions and obtains 8 × 8 feature maps by max pooling. Convolutional layer C13 applies 60 convolution kernels of size 3 × 3; pooling layer P13 down-samples the feature maps output by C13 over 2 × 2 regions and obtains 3 × 3 feature maps by max pooling. Convolutional layer C14 applies 80 convolution kernels of size 2 × 2; pooling layer P14 down-samples the feature maps output by C14 over 2 × 2 regions and obtains 1 × 1 feature maps by max pooling. The fully connected layers F11 and F12 have 120 and 4 neurons respectively, the 4 outputs corresponding to the peak-point coordinates from which the input patches for the second layer are built.
Further, the specific method of step S5 is as follows: the second-layer convolutional neural network, 15-net, is responsible for locally and precisely fine-tuning the results output by the first-layer algorithm and mainly comprises two convolutional layers C21 and C22, two pooling layers P21 and P22, and two fully connected layers F21 and F22. The second-layer network takes 15 × 15 image patches as input. Convolutional layer C21 convolves the input patch with 40 convolution kernels of size 2 × 2; pooling layer P21 down-samples the feature maps output by C21 over 2 × 2 regions and obtains 6 × 6 feature maps by max pooling. Convolutional layer C22 applies 40 convolution kernels of size 3 × 3; pooling layer P22 down-samples the feature maps output by C22 over 2 × 2 regions and obtains 2 × 2 feature maps by max pooling. The fully connected layers F21 and F22 have 60 and 2 neurons respectively, and each such network is responsible for predicting the coordinates of one peak point.
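For illustration, the two networks described in steps S4 and S5 can be sketched in PyTorch as follows. The framework choice, the single-channel grayscale input, and the placement of each ReLU between its convolution and pooling layer are assumptions made for this sketch; the patent itself only specifies the kernel counts, kernel sizes, neuron counts and feature-map sizes listed above.

```python
import torch
import torch.nn as nn

class Net39(nn.Module):
    """Sketch of the first-level 39-net: a 1 x 39 x 39 image in,
    four coarse coordinates out (x, y for the C peak and the T peak)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=4), nn.ReLU(), nn.MaxPool2d(2),   # 39 -> 36 -> 18
            nn.Conv2d(20, 40, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # 18 -> 16 -> 8
            nn.Conv2d(40, 60, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # 8  -> 6  -> 3
            nn.Conv2d(60, 80, kernel_size=2), nn.ReLU(), nn.MaxPool2d(2),  # 3  -> 2  -> 1
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(80, 120), nn.ReLU(),
            nn.Linear(120, 4),          # (x_C, y_C, x_T, y_T)
        )

    def forward(self, x):
        return self.regressor(self.features(x))

class Net15(nn.Module):
    """Sketch of one second-level 15-net: a 1 x 15 x 15 patch in,
    one refined (x, y) coordinate out; one such network per peak point."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 40, kernel_size=2), nn.ReLU(), nn.MaxPool2d(2),   # 15 -> 14 -> 7
            # (the text quotes 6 x 6 after this pooling, which implies a slightly
            #  different pooling convention; with these defaults the map is 7 x 7)
            nn.Conv2d(40, 40, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # 7 -> 5 -> 2
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(40 * 2 * 2, 60), nn.ReLU(),
            nn.Linear(60, 2),
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# shape check: Net39()(torch.randn(1, 1, 39, 39)).shape -> torch.Size([1, 4])
#              Net15()(torch.randn(1, 1, 15, 15)).shape -> torch.Size([1, 2])
```

With these layer settings the flattened first-level feature vector has 80 elements and the second-level one 160 elements, which is what the 120-neuron and 60-neuron fully connected layers receive.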
Further, the specific steps of step S6 are as follows:
s61, an algorithm testing stage, namely firstly, cutting and compressing an image to be tested, then loading trained algorithm parameters, importing a data set to be tested, carrying out a first-layer algorithm, extracting and combining image characteristics through a convolution layer, sensing local fields, carrying out down-sampling on image data through a plurality of pooling layers, carrying out dimensionality reduction on the data, removing redundant information, carrying out peak point prediction on a global image, and finally outputting two peak point coordinates with relatively rough precision;
and S62, expanding the coordinate information output by the first layer into patches, cutting and compressing the patches, establishing a new training set, inputting the new training set into a second layer of convolutional neural network, wherein the second layer comprises four network models, two parallel networks are responsible for accurate fine tuning of a peak point, and the results of the parallel networks are weighted and averaged, so that regression accuracy can be further improved, and finally each parallel network outputs a peak point regression coordinate.
Compared with the prior art, the fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning has the following advantages:
(1) the fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning uses a fluorescence immunochromatographic image with a marked peak position as input, uses a multilayer convolutional neural network algorithm with a cascade structure as a core, and uses a parallel structure for each layer of algorithm, so that the accurate peak-finding of the fluorescence immunochromatographic image is realized, and the accurate coordinate positions of a C peak and a T peak are obtained.
(2) The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning provides an accurate and efficient peak-finding algorithm for fluorescence immunochromatographic quantitative detection, can identify correct peak points and output accurate peak point coordinate data, and further plays a better auxiliary role in the aspect of rapid quantitative detection of substance concentration.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart of a fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning according to an embodiment of the present invention;
FIG. 2 is a flow chart of image preprocessing according to an embodiment of the present invention;
FIG. 3 is a diagram of a first layer convolutional neural network according to an embodiment of the present invention;
FIG. 4 is a diagram of a second layer convolutional neural network structure according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
A fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning is shown in figures 1 to 4 and comprises the following steps:
step S1, collecting fluorescence immunochromatography quantitative image data from a large number of samples covering different test items;
the fluorescence immunochromatographic image acquired by the scheme covers peak image images obtained by detecting a plurality of different sample items, wherein the peak image images include normal peak shape images and peak image images doped with various interference peaks caused by various unavoidable factors, and the data set comprises detection peak images representing a plurality of item samples such as human ferritin, vitamin D3, parathyroid hormone, D-dimer, whole-course C-reactive protein and the like.
Step S2, manually labeling the peak point position in the fluorescence immunochromatography quantitative image collected in the step 1 to obtain the label information of the image;
the invention needs to label the acquired image data, mainly carries out coordinate labeling on the C peak and the T peak in the fluorescence immunochromatographic image detected by various samples respectively, and labels the pixel coordinate information of two peak points in the global image.
Step S3, carrying out standardized preprocessing on the fluorescence immunochromatographic quantitative image and the corresponding label information, and establishing an algorithm training set;
step S4, establishing the first-layer convolutional neural network, 39-net, inputting the preprocessed data set as the training set for training, performing global regression prediction on the image with the first-layer network, and locating the peak points within a very small error range;
step S5, establishing the second-layer convolutional neural network, 15-net, building a patch centered on each result output by the first layer, inputting these patches as a training set into the second-layer algorithm for training, and repeatedly fine-tuning the first-layer results to finally obtain accurate coordinate prediction information;
and step S6, after the data of the test set is subjected to standardized preprocessing, the test set is established and input into the trained algorithm network, and the peak searching accuracy of the fluorescence immunochromatographic image is tested.
The image data in step S1 are peak-shaped images of the concentrations of markers in the human body that characterize certain physiological indicators.
The step S3 includes the steps of:
step S31, carrying out gray processing on the collected fluorescence immunochromatography quantitative image;
step S32, data enhancement is carried out on the image data, and the scale of the data set is expanded;
step S33, cutting and compressing the image data to make the image size fit the image size input by the convolution neural network algorithm;
step S34, standardizing the image data, normalizing the label data value to a small range, and converting the image data into a two-dimensional data set.
In step S31, the grayed fluorescence immunochromatographic image has a pixel value of 0 to 255.
The specific method of step S32 is as follows: data enhancement is performed on the fluorescence immunochromatography quantitative images, and the image data set is expanded by mirror (horizontal) flipping.
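As a concrete illustration of this augmentation, a minimal NumPy sketch follows; the function name and the assumption that each label is an (x, y) pixel-coordinate pair for a peak point are illustrative, not taken from the patent. Flipping the strip horizontally also requires mirroring the x coordinate of each labeled peak so that image and label stay consistent.

```python
import numpy as np

def mirror_flip_augment(image, peak_coords):
    """Horizontally flip a grayscale strip image together with its peak labels.

    image       : 2-D array (H, W) of pixel values
    peak_coords : iterable of (x, y) pixel coordinates, e.g. for the C and T peaks
    """
    flipped = np.fliplr(image)
    w = image.shape[1]
    # after a horizontal flip only the x coordinate changes: x -> W - 1 - x
    flipped_coords = [(w - 1 - x, y) for (x, y) in peak_coords]
    return flipped, flipped_coords
```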
The specific method of step S33 is as follows: a prior cropping operation is performed on the image and the cropped data are compressed; the cropped size, 39 × 39, is the input size accepted by the first-layer convolutional neural network, and the crops serve as candidate training regions.
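A minimal sketch of this step, assuming OpenCV is available for the resizing; the centre crop used here is only a placeholder for the prior cropping operation, whose exact region the patent does not specify.

```python
import cv2  # OpenCV, used only for the resize ("compression") step

def crop_and_compress(image, target=39):
    """Crop a square region from a grayscale image and compress it to the
    39 x 39 input size accepted by the first-level network.  The centre
    crop is an assumption; only the 39 x 39 target size comes from the text."""
    h, w = image.shape[:2]
    side = min(h, w)
    y0, x0 = (h - side) // 2, (w - side) // 2
    roi = image[y0:y0 + side, x0:x0 + side]
    return cv2.resize(roi, (target, target), interpolation=cv2.INTER_AREA)
```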
Preferably, in step S34, the specific method for standardizing the image data is as follows: the pixel values of all pixel points in the fluorescence immunochromatography quantitative image are standardized by subtracting the mean pixel value from every point and dividing by the standard deviation, so that the pixel values are normalized to approximately zero mean and unit variance. The standardization formula is:

$$\text{Normal\_X}_i = \frac{X_i - \text{Mean}}{\text{Std}}$$

where Normal_X_i is the standardized pixel value of each pixel point in the image, X_i is the pixel value of each pixel point in the fluorescence immunochromatographic image (taken column by column), Mean is the mean pixel value of all pixel points, and Std is the standard deviation of the pixel values.
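In Python this standardization amounts to the following sketch; NumPy is assumed, and the small epsilon guarding against a constant image is an addition not mentioned in the text.

```python
import numpy as np

def standardize_image(image):
    """Zero-mean, unit-variance standardization of one grayscale image,
    i.e. Normal_X_i = (X_i - Mean) / Std applied to every pixel."""
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + 1e-8)
```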
The specific method of step S4 is as follows:
The first-layer convolutional neural network, 39-net, is responsible for extracting and combining global image features, and comprises four convolutional layers C11, C12, C13 and C14, four pooling layers P11, P12, P13 and P14, four ReLU activation layers R11, R12, R13 and R14, and two fully connected layers F11 and F12.
The first-layer network takes 39 × 39 images as input. Convolutional layer C11 convolves the input image with 20 convolution kernels of size 4 × 4; pooling layer P11 down-samples the feature maps output by C11 over 2 × 2 regions and obtains 18 × 18 feature maps by max pooling. Convolutional layer C12 applies 40 convolution kernels of size 3 × 3; pooling layer P12 down-samples the feature maps output by C12 over 2 × 2 regions and obtains 8 × 8 feature maps by max pooling. Convolutional layer C13 applies 60 convolution kernels of size 3 × 3; pooling layer P13 down-samples the feature maps output by C13 over 2 × 2 regions and obtains 3 × 3 feature maps by max pooling. Convolutional layer C14 applies 80 convolution kernels of size 2 × 2; pooling layer P14 down-samples the feature maps output by C14 over 2 × 2 regions and obtains 1 × 1 feature maps by max pooling. The fully connected layers F11 and F12 have 120 and 4 neurons respectively, the 4 outputs corresponding to the peak-point coordinates from which the input patches for the second layer are built.
The specific method of step S5 is as follows:
The second-layer convolutional neural network, 15-net, is responsible for locally and precisely fine-tuning the results output by the first-layer algorithm, and mainly comprises two convolutional layers C21 and C22, two pooling layers P21 and P22, and two fully connected layers F21 and F22.
As shown in FIG. 4, the second-layer network takes 15 × 15 image patches as input. Convolutional layer C21 convolves the input patch with 40 convolution kernels of size 2 × 2; pooling layer P21 down-samples the feature maps output by C21 over 2 × 2 regions and obtains 6 × 6 feature maps by max pooling. Convolutional layer C22 applies 40 convolution kernels of size 3 × 3; pooling layer P22 down-samples the feature maps output by C22 over 2 × 2 regions and obtains 2 × 2 feature maps by max pooling. The fully connected layers F21 and F22 have 60 and 2 neurons respectively, and each such network is responsible for predicting the coordinates of one peak point.
The specific working principle for the first layer of convolutional neural network and the second layer of convolutional neural network is as follows:
step A1: the formula for forward propagation is as follows:
$$z_i^{l+1} = \sum_{j} W_{ij}^{l}\, a_j^{l} + b_i^{l}$$

$$a_i^{l+1} = f\left(z_i^{l+1}\right)$$

where l denotes the layer index of the neural network, W_ij^l is the weight of the connection between the j-th neuron in layer l and the i-th neuron in layer l+1, b_i^l is the bias term of the i-th neuron in layer l+1, z_i^l is the weighted input (including the bias) of the i-th neuron in layer l, a_i^l is the activation value (i.e., the output) of the i-th neuron in layer l, and f is the activation function.
Step A2: the formula for the loss function is as follows:
$$J = \frac{1}{2}\sum_{i=1}^{c}\left(X_i - x_i\right)^{2}$$

where c is the number of output-layer nodes, X_i is the label data of the training sample, x_i is the actual output of the model, and J is the loss of one forward pass of the algorithm.
Step A3: the formula for the weight update iteration is as follows:
$$W_{ij}^{l} := W_{ij}^{l} - \alpha\,\frac{\partial J}{\partial W_{ij}^{l}}$$

$$b_{i}^{l} := b_{i}^{l} - \alpha\,\frac{\partial J}{\partial b_{i}^{l}}$$

where W_ij^l is the weight of the connection between the j-th neuron in layer l and the i-th neuron in layer l+1, J is the loss function, b_i^l is the bias term of the i-th neuron in layer l+1, and α is the learning rate.
Step A4: the specific back propagation process is as follows:
a41: obtaining the output activation value of each node through forward propagation
A42: for output unit i of the output layer, the gradient value is calculated, and the error term of each layer is propagated backwards through the loss function, and the formula is as follows:
$$\delta_i^{n_l} = \frac{\partial J}{\partial z_i^{n_l}} = -\left(X_i - a_i^{n_l}\right) f'\left(z_i^{n_l}\right)$$

where n_l denotes the output layer.
a43: for each layer, the gradient formula for the ith node of the l-th layer is as follows:
$$\delta_i^{l} = \left(\sum_{j} W_{ji}^{l}\, \delta_j^{l+1}\right) f'\left(z_i^{l}\right)$$
the partial derivatives of the loss function with respect to each parameter w and b are solved successively from back to front by a back propagation algorithm.
A44: with respect to calculating the partial derivatives of the parameters w and b, the formula is as follows:
$$\frac{\partial J}{\partial W_{ij}^{l}} = a_j^{l}\, \delta_i^{l+1}$$

$$\frac{\partial J}{\partial b_i^{l}} = \delta_i^{l+1}$$
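To make steps A1 to A4 concrete, the following plain NumPy sketch performs one forward/backward pass for a small two-layer fully connected regression network. This is a simplification: the patent applies the same rules to convolutional networks, and the linear output layer assumed here makes the output-layer f'(z) equal to 1.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    return (z > 0).astype(z.dtype)

def train_step(x, label, W1, b1, W2, b2, lr=0.01):
    """One forward/backward pass following steps A1-A4: forward propagation,
    squared-error loss, backpropagated error terms, gradient-descent update."""
    # A1: forward propagation  z^(l+1) = W^l a^l + b^l,  a^(l+1) = f(z^(l+1))
    z1 = W1 @ x + b1
    a1 = relu(z1)
    out = W2 @ a1 + b2               # linear output layer for coordinate regression

    # A2: squared-error loss  J = 1/2 * sum_i (X_i - x_i)^2
    loss = 0.5 * np.sum((label - out) ** 2)

    # A42/A43: backpropagated error terms for the output and hidden layers
    delta2 = out - label                      # f'(z) = 1 at the linear output
    delta1 = (W2.T @ delta2) * relu_grad(z1)

    # A44 and the weight-update rule: dJ/dW = a * delta, dJ/db = delta
    W2 -= lr * np.outer(delta2, a1)
    b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x)
    b1 -= lr * delta1
    return loss
```

Here W1 has shape (hidden, input) and W2 shape (output, hidden); the in-place updates play the role of the iterative weight-update formula given above.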
and calculating partial derivatives of each layer w and b through a back propagation algorithm, and iteratively updating each weight parameter.
Preferably, the specific implementation method of step S6 is as follows:
S61, algorithm testing stage: the image to be tested is first cropped and compressed, the trained algorithm parameters are loaded, and the data set to be tested is imported into the first-layer algorithm, which extracts and combines image features through the convolutional layers over local receptive fields, down-samples the image data through several pooling layers to reduce its dimensionality and remove redundant information, performs peak-point prediction on the global image, and finally outputs two peak-point coordinates of relatively coarse precision;
the algorithm training data comprises 10000 groups of data, and the test set comprises 2000 groups of data, wherein the data comprises 1900 groups of normal peak shape images and 100 groups of abnormal peak shape images. In the aspect of prediction accuracy, the predicted error value is within 1 pixel point through statistics; for the 2000 test images, the total run time was 47.48s, and the average predicted time was 42/s.
In the algorithm testing stage, the main evaluation indexes are the running speed and the prediction accuracy, and the test standard of the accuracy is as follows:
$$\text{AbsoluteError} = \left|\, x_i - X_i \,\right|$$

where AbsoluteError is the prediction error value, x_i is the actual predicted value of the algorithm, and X_i is the label data of the fluorescence immunochromatographic image;
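A small evaluation sketch for this criterion follows; the (N, 4) layout of predictions and labels and the 1-pixel tolerance used for the accuracy figure quoted above are assumptions about how the statistics were compiled.

```python
import numpy as np

def peak_error_stats(pred, label, tol=1.0):
    """Mean absolute error and the fraction of test images whose peak-point
    predictions are all within `tol` pixels of the labels.

    pred, label : arrays of shape (N, 4) holding (x_C, y_C, x_T, y_T)."""
    abs_err = np.abs(pred - label)                    # AbsoluteError = |x_i - X_i|
    within_tol = float((abs_err <= tol).all(axis=1).mean())
    return abs_err.mean(), within_tol
```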
S62, the coordinate information output by the first layer is expanded into patches, which are cropped and compressed to build a new training set that is input to the second-layer convolutional neural network; the second layer comprises four network models, with two parallel networks responsible for the precise fine-tuning of each peak point, and the results of the parallel networks are combined by a weighted average, which further improves the regression accuracy; finally, each parallel network outputs a regressed peak-point coordinate.
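A sketch of this second-stage refinement for a single peak point, assuming the 15-nets regress patch-local coordinates (the patent does not state whether they output local coordinates or offsets) and using equal weights as a stand-in for the weighted average.

```python
import torch

def refine_peak(image, coarse_xy, nets, patch=15, weights=(0.5, 0.5)):
    """Refine one coarse peak coordinate with the parallel second-level networks.

    image     : 2-D float tensor (H, W), already standardized
    coarse_xy : (x, y) output by the first-level network for this peak
    nets      : the parallel 15-nets responsible for this peak point
    """
    h, w = image.shape
    half = patch // 2
    cx, cy = int(round(coarse_xy[0])), int(round(coarse_xy[1]))
    # clamp so the 15 x 15 patch stays inside the image
    x0 = min(max(cx - half, 0), w - patch)
    y0 = min(max(cy - half, 0), h - patch)
    crop = image[y0:y0 + patch, x0:x0 + patch].unsqueeze(0).unsqueeze(0)  # 1x1x15x15

    with torch.no_grad():
        preds = [net(crop).squeeze(0) for net in nets]      # each: tensor([x, y])
    local = sum(wgt * p for wgt, p in zip(weights, preds))  # weighted average
    # map the patch-local prediction back to global image coordinates
    return float(local[0]) + x0, float(local[1]) + y0
```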
The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning uses a fluorescence immunochromatographic image with a marked peak position as input, uses a multilayer convolutional neural network algorithm with a cascade structure as a core, and uses a parallel structure for each layer of algorithm, so that the accurate peak-finding of the fluorescence immunochromatographic image is realized, and the accurate coordinate positions of a C peak and a T peak are obtained. The scheme can be used in the technologies of colloidal gold immunochromatography, quantum dot immunochromatography, up-conversion nanoparticle immunochromatography and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning is characterized in that: the method comprises the following steps:
step S1, collecting a large amount of fluorescence immunochromatography quantitative image data;
step S2, manually labeling the peak-point positions in the fluorescence immunochromatography quantitative images collected in step S1 to obtain the label information of the images;
step S3, carrying out standardized preprocessing on the fluorescence immunochromatographic quantitative image and the corresponding label information, and establishing an algorithm training set;
step S4, establishing a first layer of convolutional neural network of a cascade algorithm, training the convolutional neural network algorithm by using a preprocessed data set, performing global regression prediction on an image by using the first layer of network, and positioning a peak point in a minimum error range;
step S5, establishing a second layer convolutional neural network of the cascade algorithm, taking the result output by the first layer as the center, establishing a corresponding patch for each result, taking the patch as a training set to input the second layer algorithm for training, and repeatedly fine-tuning the first layer result to obtain a more accurate prediction result;
and step S6, after the data of the test set is subjected to standardized preprocessing, the test set is established and input into the trained algorithm network, and the peak searching accuracy of the fluorescence immunochromatographic image is tested.
2. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 1, which is characterized in that: the image data in step S1 is a peak image of the concentrations of the markers characterizing the physiological index in the human body.
3. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 1, which is characterized in that: the step S3 includes the steps of:
step S31, carrying out gray processing on the collected fluorescence immunochromatography quantitative image;
step S32, data enhancement is carried out on the image data, and the scale of the data set is expanded;
step S33, cutting and compressing the image data to make the image size fit the image size input by the convolution neural network algorithm;
step S34, standardizing the image data, normalizing the label data value to a small range, and converting the image data into a two-dimensional data set.
4. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 3, wherein: in step S31, the grayed fluorescence immunochromatographic image has a pixel value of 0 to 255.
5. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 3, wherein: the specific method of step S32 is as follows: data enhancement is performed on the fluorescence immunochromatography quantitative image, and the image data set is expanded by mirror (horizontal) flipping.
6. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 3, wherein: the specific method of step S33 is as follows: a prior cropping operation is performed on the image and the cropped data are compressed; the cropped size is the input size accepted by the first-layer convolutional neural network, and the crops serve as candidate training regions.
7. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 3, wherein: in step S34, the specific method of standardizing the image data is as follows: the pixel values of all pixel points in the fluorescence immunochromatography quantitative image are standardized by subtracting the mean pixel value from every point and dividing by the standard deviation, so that the pixel values are normalized to approximately zero mean and unit variance; the standardization formula is:

$$\text{Normal\_X}_i = \frac{X_i - \text{Mean}}{\text{Std}}$$

wherein Normal_X_i is the standardized pixel value of each pixel point in the image, X_i is the original pixel value of each pixel point, Mean is the mean pixel value of all pixel points in the image, and Std is the standard deviation of the pixel values.
8. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 1, which is characterized in that: the specific method of step S4 is as follows:
The first-layer convolutional neural network, 39-net, is responsible for extracting and combining global image features and comprises four convolutional layers C11, C12, C13 and C14, four pooling layers P11, P12, P13 and P14, four ReLU activation layers R11, R12, R13 and R14, and two fully connected layers F11 and F12. The first-layer network takes 39 × 39 images as input. Convolutional layer C11 convolves the input image with 20 convolution kernels of size 4 × 4; pooling layer P11 down-samples the feature maps output by C11 over 2 × 2 regions and obtains 18 × 18 feature maps by max pooling. Convolutional layer C12 applies 40 convolution kernels of size 3 × 3; pooling layer P12 down-samples the feature maps output by C12 over 2 × 2 regions and obtains 8 × 8 feature maps by max pooling. Convolutional layer C13 applies 60 convolution kernels of size 3 × 3; pooling layer P13 down-samples the feature maps output by C13 over 2 × 2 regions and obtains 3 × 3 feature maps by max pooling. Convolutional layer C14 applies 80 convolution kernels of size 2 × 2; pooling layer P14 down-samples the feature maps output by C14 over 2 × 2 regions and obtains 1 × 1 feature maps by max pooling. The fully connected layers F11 and F12 have 120 and 4 neurons respectively, the 4 outputs corresponding to the peak-point coordinates from which the input patches for the second layer are built.
9. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 1, which is characterized in that: the specific method of step S5 is as follows:
The second-layer convolutional neural network, 15-net, is responsible for locally and precisely fine-tuning the results output by the first-layer algorithm and mainly comprises two convolutional layers C21 and C22, two pooling layers P21 and P22, and two fully connected layers F21 and F22. The second-layer network takes 15 × 15 image patches as input. Convolutional layer C21 convolves the input patch with 40 convolution kernels of size 2 × 2; pooling layer P21 down-samples the feature maps output by C21 over 2 × 2 regions and obtains 6 × 6 feature maps by max pooling. Convolutional layer C22 applies 40 convolution kernels of size 3 × 3; pooling layer P22 down-samples the feature maps output by C22 over 2 × 2 regions and obtains 2 × 2 feature maps by max pooling. The fully connected layers F21 and F22 have 60 and 2 neurons respectively, and each such network is responsible for predicting the coordinates of one peak point.
10. The fluorescence immunochromatographic quantitative image peak-finding algorithm based on deep learning of claim 1, which is characterized in that: the specific steps of step S6 are as follows:
s61, an algorithm testing stage, namely firstly, cutting and compressing an image to be tested, then loading trained algorithm parameters, importing a data set to be tested, carrying out a first-layer algorithm, extracting and combining image characteristics through a convolution layer, sensing local fields, carrying out down-sampling on image data through a plurality of pooling layers, carrying out dimensionality reduction on the data, removing redundant information, carrying out peak point prediction on a global image, and finally outputting two peak point coordinates with relatively rough precision;
and S62, expanding the coordinate information output by the first layer into patches, cutting and compressing the patches, establishing a new training set, inputting the new training set into a second layer of convolutional neural network, wherein the second layer comprises four network models, two parallel networks are responsible for accurate fine tuning of a peak point, and the results of the parallel networks are weighted and averaged, so that regression accuracy can be further improved, and finally each parallel network outputs a peak point regression coordinate.
CN202010807072.2A 2020-08-12 2020-08-12 Fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning Pending CN111931663A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010807072.2A CN111931663A (en) 2020-08-12 2020-08-12 Fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010807072.2A CN111931663A (en) 2020-08-12 2020-08-12 Fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning

Publications (1)

Publication Number Publication Date
CN111931663A true CN111931663A (en) 2020-11-13

Family

ID=73310770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010807072.2A Pending CN111931663A (en) 2020-08-12 2020-08-12 Fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning

Country Status (1)

Country Link
CN (1) CN111931663A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239965A (en) * 2021-04-12 2021-08-10 北京林业大学 Bird identification method based on deep neural network and electronic equipment
CN113313109A (en) * 2021-05-13 2021-08-27 中国计量大学 Semi-quantitative analysis method of fluorescence immunochromatographic test paper
CN113362279A (en) * 2021-05-13 2021-09-07 北京理工大学 Intelligent concentration detection method of immunochromatographic test paper

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106483285A (en) * 2016-09-22 2017-03-08 天津博硕东创科技发展有限公司 A kind of checking matter density calculating method for test strips Fast Detection Technique
CN110060236A (en) * 2019-03-27 2019-07-26 天津大学 Stereo image quality evaluation method based on depth convolutional neural networks
CN111311522A (en) * 2020-03-26 2020-06-19 重庆大学 Two-photon fluorescence microscopic image restoration method based on neural network and storage medium
CN111507884A (en) * 2020-04-19 2020-08-07 衡阳师范学院 Self-adaptive image steganalysis method and system based on deep convolutional neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106483285A (en) * 2016-09-22 2017-03-08 天津博硕东创科技发展有限公司 A kind of checking matter density calculating method for test strips Fast Detection Technique
CN110060236A (en) * 2019-03-27 2019-07-26 天津大学 Stereo image quality evaluation method based on depth convolutional neural networks
CN111311522A (en) * 2020-03-26 2020-06-19 重庆大学 Two-photon fluorescence microscopic image restoration method based on neural network and storage medium
CN111507884A (en) * 2020-04-19 2020-08-07 衡阳师范学院 Self-adaptive image steganalysis method and system based on deep convolutional neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LUCAS NEGRI et al.: "Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement", Sensors, vol. 11, no. 4, pages 3466-3482 *
YI SUN et al.: "Deep Convolutional Network Cascade for Facial Point Detection", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3476-3483 *
ZHANG Dong et al.: "Research on a peak point localization method for fluorescence immunochromatography images based on cascaded convolutional neural networks" (基于级联卷积神经网络的荧光免疫层析图像峰值点定位方法研究), Chinese Journal of Scientific Instrument (仪器仪表学报), vol. 42, no. 01, pages 217-227 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239965A (en) * 2021-04-12 2021-08-10 北京林业大学 Bird identification method based on deep neural network and electronic equipment
CN113239965B (en) * 2021-04-12 2023-05-02 北京林业大学 Bird recognition method based on deep neural network and electronic equipment
CN113313109A (en) * 2021-05-13 2021-08-27 中国计量大学 Semi-quantitative analysis method of fluorescence immunochromatographic test paper
CN113362279A (en) * 2021-05-13 2021-09-07 北京理工大学 Intelligent concentration detection method of immunochromatographic test paper

Similar Documents

Publication Publication Date Title
CN111931663A (en) Fluorescence immunochromatography quantitative image peak-finding algorithm based on deep learning
CN111079602B (en) Vehicle fine granularity identification method and device based on multi-scale regional feature constraint
CN108447057B (en) SAR image change detection method based on significance and depth convolution network
CN111863244B (en) Functional connection mental disease classification method and system based on sparse pooling graph convolution
CN111090764B (en) Image classification method and device based on multitask learning and graph convolution neural network
CN109978872B (en) White matter microstructure characteristic screening system and method based on white matter fiber tracts
CN104573699B (en) Trypetid recognition methods based on middle equifield intensity magnetic resonance anatomy imaging
CN113283419B (en) Convolutional neural network pointer instrument image reading identification method based on attention
CN111950488B (en) Improved Faster-RCNN remote sensing image target detection method
CN109993230A (en) A kind of TSK Fuzzy System Modeling method towards brain function MRI classification
CN109509170A (en) A kind of die casting defect inspection method and device
CN113392931A (en) Hyperspectral open set classification method based on self-supervision learning and multitask learning
CN112365497A (en) High-speed target detection method and system based on Trident Net and Cascade-RCNN structures
CN110264454A (en) Cervical cancer tissues pathological image diagnostic method based on more hidden layer condition random fields
CN113298780A (en) Child bone age assessment method and system based on deep learning
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN108805181B (en) Image classification device and method based on multi-classification model
CN114596253A (en) Alzheimer's disease identification method based on brain imaging genome features
CN114881286A (en) Short-time rainfall prediction method based on deep learning
CN112215044A (en) Driving tendency identification method based on probabilistic neural network
CN113344046A (en) Method for improving SAR image ship classification precision
CN110781828A (en) Fatigue state detection method based on micro-expression
CN116310618A (en) Registration network training device and method for multimode images and registration method
CN113889274B (en) Method and device for constructing risk prediction model of autism spectrum disorder
CN113095265B (en) Fungal target detection method based on feature fusion and attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination