CN114792386B - Method for classifying brightness and darkness of microbeads of high-density gene chip, terminal and storage medium - Google Patents

Method for classifying brightness and darkness of microbeads of high-density gene chip, terminal and storage medium Download PDF

Info

Publication number
CN114792386B
CN114792386B (application CN202210714565.0A)
Authority
CN
China
Prior art keywords
self
expression
encoder
matrix
microbeads
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210714565.0A
Other languages
Chinese (zh)
Other versions
CN114792386A (en)
Inventor
刘超钧
刘若愚
许心意
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Lasso Biochip Technology Co ltd
Original Assignee
Suzhou Lasso Biochip Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Lasso Biochip Technology Co ltd filed Critical Suzhou Lasso Biochip Technology Co ltd
Priority to CN202210714565.0A priority Critical patent/CN114792386B/en
Publication of CN114792386A publication Critical patent/CN114792386A/en
Application granted granted Critical
Publication of CN114792386B publication Critical patent/CN114792386B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/24 Pattern recognition: classification techniques
    • G06F18/22 Pattern recognition: matching criteria, e.g. proximity measures
    • G06F18/23213 Pattern recognition: non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06N3/04 Neural networks: architecture, e.g. interconnection topology
    • G06N3/08 Neural networks: learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, terminal and storage medium for classifying the brightness and darkness of microbeads on a high-density gene chip. The classification method is based on an auto-encoder and clustering in a high-dimensional space: it automatically extracts high-dimensional features of the microbeads that serve as nucleic-acid-probe carriers in the high-density gene chip and clusters the extracted features directly, yielding the light/dark classification of the beads, and the whole process is highly robust. In addition, the scheme extracts richer information from the complete image of each microbead, which guarantees the accuracy of the extracted bead features.

Description

Microbead brightness and darkness classification method of high-density gene chip, terminal and storage medium
Technical Field
The invention relates to the technical field of biochips, and in particular to a method for classifying the brightness and darkness of microbeads on a gene chip.
Background
In the production and manufacturing of gene chips, the chips need to be decoded. In the decoding process, the gene chip is scanned into an image, image features are extracted, the light or dark state of each microbead on the chip is identified, and the type of probe carried by each bead is then decoded against a reference table, which completes the decoding of the chip. For the scanned chip image, conventional feature extraction follows fixed rules specified by a technician: for example, centered on the center point of each bead, the mean gray value within a rectangular or circular region is extracted as the gray value of that bead, and subsequent analysis is performed on it. However, such idealized, hand-crafted rules do not always describe the beads well, because a microbead is not always a perfectly uniform circle. For example, two microbeads may have the same brightness, yet one darkens in a small central region because of some disturbance such as dust or bead breakage; if the mean of the central rectangular area is still taken as the indication of the bead's gray value, the two beads will appear different. The rigidity of manually selected features therefore limits the effect of feature extraction in many cases, and a more robust and adaptable method is needed.
Disclosure of Invention
In order to overcome the technical defects, the first aspect of the invention provides a method for classifying the brightness of microbeads of a high-density gene chip, which comprises the following steps:
step S1: taking the gray-scale image of each microbead as both the input image and the result label of an auto-encoder model, and pre-training the encoder and decoder of the auto-encoder model until the model learns to automatically extract high-dimensional features from each bead image and can reconstruct an image identical to the input from those extracted features, at which point the pre-training of the auto-encoder model is complete;
step S2: after the pre-training of the automatic encoder model is completed, the output of the encoder is used as the input of a subsequent model, and a decoder is used for calculating the reconstruction loss so as to optimize the model;
it is worth noting that after the pre-training of the auto-encoder model is completed, the output of the encoder is used as the input of the subsequent model; the decoder is retained only to calculate the reconstruction loss during model optimization and plays no other role in the subsequent process;
step S3: adding a fully-connected layer, called the self-expression layer, to the neural network after the encoding layer of the auto-encoder, and further training the auto-encoder together with the self-expression layer until the high-dimensional features output by the encoder, taken as the input of the self-expression layer, are self-expressed through the layer's expression-coefficient matrix; training of the auto-encoder and the self-expression layer is then complete, and in the process the expression-coefficient matrix in the self-expression layer has been constructed;
step S4: inputting the gray-scale image of each microbead to be classified into the trained encoder to obtain the current high-dimensional features;
step S5: inputting the current high-dimensional features into the trained self-expression layer to obtain the current expression-coefficient matrix;
step S6: clustering with a spectral clustering algorithm, using the current expression-coefficient matrix of the self-expression layer as the similarity matrix in the algorithm, so as to separate the bright beads from the dark beads.
The code extracted by the encoder is a set of abstract high-dimensional features. Because of the black-box nature of neural networks and because the code lives in a high-dimensional space, these features have no human-interpretable meaning such as a "mean". However, the code can be successfully reconstructed back into the original input image by the decoding layer of the auto-encoder, something that features such as "mean" or "variance" cannot achieve, which shows that the code carries much richer information.
Further, in step S6, the similarity matrix is partitioned using a spectral clustering algorithm, and samples that can linearly express one another are found and clustered as one class, thereby obtaining the light/dark classification result of the microbeads.
Further, clustering using a spectral clustering algorithm comprises the steps of: (1) taking the expression-coefficient matrix in the self-expression layer as the similarity matrix W; (2) constructing the normalized Laplacian matrix from the similarity matrix W; (3) calculating the first k eigenvalues and eigenvectors of the normalized Laplacian matrix to construct the eigenvector matrix Q; (4) clustering the eigenvectors in the matrix Q with the K-means clustering algorithm, which correspondingly yields the category of each row object in the similarity matrix W.
Further, in step S3, if the high-dimensional feature obtained by the encoder is Z and the expression-coefficient matrix in the self-expression layer is C, the self-expression layer is trained such that its output for the input Z is also Z, i.e. ZC = Z, so that the input Z successfully expresses itself by linear combination, the so-called "self-expression property".
Further, step S2 further comprises: calculating the reconstruction loss of the auto-encoder using the following loss function:

Loss₁ = λ₁‖X − X̂‖²

where X is the original image input to the auto-encoder, X̂ is the reconstructed image output by the auto-encoder, and λ₁ is the weight coefficient of the auto-encoder's reconstruction loss.
Further, step S3 further includes: calculating the self-expression loss of the self-expression layer using the following loss function:

Loss₂ = λ₂‖ZC − Z‖²

where Z is the output of the encoder in the auto-encoder and also the input of the self-expression layer, C is the expression-coefficient matrix in the self-expression layer, and λ₂ is the weight coefficient of the self-expression loss.
Further, step S3 further includes: imposing a regularization constraint on the weights of the self-expression layer using the following loss function, so as to calculate the regularization loss of the similarity matrix:

Loss₃ = λ₃‖C‖

where C is the expression-coefficient matrix in the self-expression layer and λ₃ is the weight coefficient of the similarity-matrix regularization loss.
A second aspect of the present invention provides a terminal, comprising:
a memory for storing executable program code; and
a processor for reading the executable program code stored in the memory to execute the bead light and dark classification method of the high-density gene chip.
The third aspect of the present invention provides a storage medium, wherein the storage medium stores computer program instructions, and the computer program instructions, when executed by a processor, implement the method for classifying the light and dark of the microbeads of the high-density gene chip.
After the technical scheme is adopted, compared with the prior art, the method has the following beneficial effects:
the invention provides a gene chip microbead brightness and darkness classification method based on an automatic encoder and high-dimensional space clustering, which is used for automatically extracting high-dimensional characteristics of microspheres serving as nucleic acid probe fixing carriers in a high-density gene chip and directly clustering the extracted high-dimensional characteristics, so that a microbead brightness and darkness classification result is obtained, and the whole process has better robustness. In addition, according to the technical scheme, richer information can be extracted from the complete image of the microbeads, so that the accuracy of the extracted characteristics of the microbeads is guaranteed.
Drawings
FIG. 1 is a flowchart of a bead light-dark classification method for a high-density gene chip according to an embodiment of the present application;
FIG. 2 shows ten pairs of example images from the auto-encoder; in each pair, read left to right, the left image is the original and the right image is the auto-encoder's reconstruction;
FIG. 3 is a diagram illustrating the structural relationship among an encoding layer, a self-expression layer and a decoding layer;
FIG. 4 is a graph showing the effect of classification using the bead light/dark classification method of the present application, where a black "+" marks a bead classified as bright and a white "-" marks a bead classified as dark.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
This embodiment provides an intelligent terminal, and this intelligent terminal includes: a memory for storing executable program code; and a processor for reading the executable program code stored in the memory to execute the bead light and dark classification method of the high-density gene chip.
As shown in FIG. 1, the method for classifying the light and dark of the microbeads of the high-density gene chip comprises the following steps 1-6:
step 1: pre-training the autoencoder: taking the gray-scale image of each microbead as an input image and a result label of an automatic encoder model, and pre-training an encoder and a decoder of the automatic encoder model until the automatic encoder model learns to automatically extract high-dimensional features of the microbead from the microbead and can reconstruct an image which is the same as the input image from the extracted high-dimensional features of the microbead, so that the pre-training of the automatic encoder model is completed;
the code extracted by the encoder is a set of abstract, high-dimensional features. Since the black-box property of neural networks and the coding exist in a high-dimensional space, they do not have a human-understandable meaning as "mean". However, the encoding can be successfully reconstructed back to the original input image by the decoding layer of the automatic encoder, and the feature of 'mean value', 'variance', etc. cannot be used, which indicates that the encoding contains more abundant information.
As a neural-network model, the auto-encoder is very robust and can still output correct results under various disturbances. Its structure consists of two parts: an encoder and a decoder. In the encoder, the image is converted from its original gray-level information into a set of codes that can effectively represent the input image. The encoding process can also be understood as feature extraction: each layer of the encoder acts as a feature extractor, extracting a variety of rich features. The decoder reconstructs the original image from the features extracted by the encoder; if the decoder can reconstruct the original image, the encoder must be good, because it has successfully extracted an information-rich code. Higher-dimensional and richer features can also be obtained by changing the depth and width of the encoder. The auto-encoder model can therefore be used for feature extraction from the microbeads; a minimal sketch of such a model is given below.
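For concreteness, the following is a minimal sketch of such an auto-encoder in PyTorch. The patent does not specify a framework or architecture; the convolutional layers, the 128-dimensional code and the 32×32 gray-scale input size below are illustrative assumptions.

```python
# Minimal auto-encoder sketch for bead images (illustrative assumptions:
# PyTorch, 32x32 gray-scale crops, two conv layers, 128-dim code).
import torch
import torch.nn as nn

class BeadAutoEncoder(nn.Module):
    def __init__(self, code_dim: int = 128):
        super().__init__()
        # Encoder: each layer acts as a feature extractor; the final linear
        # layer produces the abstract high-dimensional code Z.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, code_dim),
        )
        # Decoder: reconstructs the input image from the code Z.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)          # code Z, shape (N, code_dim)
        return z, self.decoder(z)    # code and reconstructed image
```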
Specifically, the image of each bead is used as the input of the auto-encoder model, and the same image also serves as the label, i.e. the ground truth of model training: the model learns to automatically extract features from the bead image and then reconstruct an identical image from those extracted features.
Since the model can successfully reconstruct an image nearly identical to the input from the automatically extracted features, those features can be considered highly efficient and information-rich. If the feature extraction step were done by hand, it would be difficult to restore anything close to the original image from extracted quantities such as "mean", "variance" or "image entropy"; the features extracted by the auto-encoder can do exactly that, as shown in FIG. 2, which illustrates that they are the more effective representation. A sketch of this pre-training loop follows.
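The loop below sketches this pre-training, reusing the BeadAutoEncoder sketch above. The dummy data, batch size, optimizer and epoch count are illustrative assumptions; in practice the loader would yield real bead crops.

```python
from torch.utils.data import DataLoader, TensorDataset

# Dummy data for illustration only; replace with real 32x32 bead crops.
beads = torch.rand(1024, 1, 32, 32)
loader = DataLoader(TensorDataset(beads), batch_size=64, shuffle=True)

model = BeadAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):
    for (images,) in loader:                      # batches of bead images
        _, recon = model(images)
        loss = torch.sum((images - recon) ** 2)   # the input is its own label
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```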
Preferably, in order to ensure that the features extracted by the auto-encoder are accurate and efficient, the method further comprises the following step: calculating the reconstruction error of the auto-encoder using a reconstruction error function.
Step 2: after the pre-training of the auto-encoder model is completed, the output of the encoder is used as the input of the subsequent model, and the decoder is used to calculate the reconstruction loss so as to optimize the model;
after the model training is completed and the image can be perfectly reconstructed, the input image can be coded into the characteristics which are wanted by people, and the characteristics are used for representing the image. It is noted that after the pre-training of the auto-encoder model is completed, the output of the encoder will be used as input for the subsequent model, and the decoder will only be retained as calculating the reconstruction loss during model optimization, except that the decoder will not play any other role in the subsequent process.
This step also includes calculating the reconstruction loss of the auto-encoder using the following loss function:
Loss₁ = λ₁‖X − X̂‖²

where X is the original image input to the auto-encoder, X̂ is the reconstructed image output by the auto-encoder, and λ₁ is the weight coefficient of the auto-encoder's reconstruction loss.
Step 3: add and train the self-expression layer: a fully-connected layer, called the self-expression layer, is added to the neural network after the encoding layer of the auto-encoder, and the auto-encoder and the self-expression layer are trained further until the high-dimensional features output by the encoder, taken as the input of the self-expression layer, are self-expressed through the layer's expression-coefficient matrix; training of the auto-encoder and the self-expression layer is then complete, and in the process the expression-coefficient matrix in the self-expression layer has been constructed;
after the extracted features of the automatic encoder are obtained, we need to classify the beads according to the extracted features. Here we use a clustering approach to classify samples that are similar to each other into a class. Since the automatic encoder has already extracted features for us, the most straightforward idea is to use these codes extracted from each input sample as the features of the sample, and perform clustering using the well-known algorithms of KMEANS, DBSCAN, etc. However, in practical tests, the results thus obtained were not good. This can also be explained by: traditional clustering algorithms such as KMEANS are developed based on lower dimensional data, and the measurement adopted in the distance calculation is more suitable for the lower dimensional data; in a high-dimensional space, data points with similar distances (such as Euclidean distance) may not belong to a class, and due to sparsity in the high-dimensional space, no data cluster exists in the high-dimensional space. Therefore, we need methods that are specific to high-dimensional data.
Deep subspace clustering fits this case exactly. Its basic idea is as follows: in a high-dimensional space, data of the same class belong to the same subspace, and data points of the same class have the self-expression property: a data point can be represented as a linear combination of other data points in the same subspace. The structure of a fully-connected layer in a neural network matches this exactly: if each neuron represents one sample, the weighted linear connections between neurons are precisely linear representations of the samples by one another. To this end, a fully-connected layer called the "self-expression layer" is added after the encoding layer of the auto-encoder, as shown in FIG. 3, to obtain a similarity matrix recording the expression coefficients of each sample point. Specifically, if the code obtained by the encoder is Z and the expression-coefficient matrix in the self-expression layer is C, the self-expression layer is trained so that its output is also Z, i.e. ZC = Z. The input Z thus successfully expresses itself by linear combination, the so-called "self-expression property". After the similarity matrix is obtained through the self-expression layer, it can be partitioned by spectral clustering, and samples that can linearly express one another are found and clustered as one class, giving the light/dark classification result of the microbeads. A minimal sketch of such a layer is shown below.
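The following is a minimal sketch of such a self-expression layer, assuming the rows of Z are samples, so that the patent's product ZC corresponds to left-multiplying by C here; the near-zero initialization of C is also an assumption.

```python
# Self-expression layer sketch: a fully-connected layer with no bias whose
# weight is the N x N expression-coefficient matrix C. With rows of Z as
# samples, the output C @ Z corresponds to the patent's ZC; training drives
# C @ Z toward Z so each sample is a linear combination of the others.
class SelfExpression(nn.Module):
    def __init__(self, num_samples: int):
        super().__init__()
        # Near-zero initialization (an assumption) keeps C sparse at the start.
        self.C = nn.Parameter(1e-4 * torch.randn(num_samples, num_samples))

    def forward(self, z):               # z: (N, code_dim)
        return self.C @ z               # self-expressed codes, one row per sample
```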
To obtain a sparse matrix with the "subspace-preserving" property, i.e. for any sample point, the sample points corresponding to its non-zero expression coefficients belong to the same subspace, the obtained solution should be sparse; when training the model, a regularization constraint is therefore imposed on the weights of the self-expression layer. Furthermore, to make the self-expression property hold more exactly in the self-expression layer, so that the subsequent spectral clustering obtains a more accurate similarity matrix, a loss is computed between the output and the input of the self-expression layer. Preferably, this input/output loss of the self-expression layer is calculated with a loss function, and the weights of the self-expression layer are regularized with a regularization loss function to compute the similarity-matrix regularization loss.
This step also includes calculating a self-expression loss for the self-expression layer using the following loss function:
Loss₂ = λ₂‖ZC − Z‖²

where Z is the output of the encoder in the auto-encoder and also the input of the self-expression layer, C is the expression-coefficient matrix in the self-expression layer, and λ₂ is the weight coefficient of the self-expression loss.
The method also comprises imposing a regularization constraint on the weights of the self-expression layer using the following loss function, thereby calculating the regularization loss of the similarity matrix:

Loss₃ = λ₃‖C‖

where C is the expression-coefficient matrix in the self-expression layer and λ₃ is the weight coefficient of the similarity-matrix regularization loss.
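Combining the three losses, one joint optimization step might look like the sketch below, reusing the model and layer sketches above. The values of λ₁, λ₂, λ₃ are unspecified hyperparameters, and reading ‖C‖ as an L1 norm, to encourage the sparsity discussed above, is an assumption.

```python
# One joint optimization step over Loss1 + Loss2 + Loss3 (sketch).
def training_step(model, self_expr, images, optimizer,
                  lam1=1.0, lam2=1.0, lam3=0.1):     # lambda values assumed
    z, recon = model(images)                         # encoder and decoder
    z_se = self_expr(z)                              # self-expressed codes
    loss1 = lam1 * torch.sum((images - recon) ** 2)  # reconstruction loss
    loss2 = lam2 * torch.sum((z_se - z) ** 2)        # self-expression loss
    loss3 = lam3 * self_expr.C.abs().sum()           # ||C||, L1 norm assumed
    loss = loss1 + loss2 + loss3
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```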
Step 4: inputting the gray-scale image of each microbead to be classified into the trained encoder to obtain the current high-dimensional features;
after all training steps of the whole model are completed through the steps 1 to 3, the trained model can be adopted to classify the brightness of the microbeads on the high-density gene chip. And the automatic encoder acquires a gray image of each microbead to be classified and outputs a code serving as the current high-dimensional characteristic.
It should be noted that in the technical solution of the present application, before each bead classification using the model, the model needs to be trained according to steps 1 to 3.
Step 5: inputting the current high-dimensional features into the trained self-expression layer to obtain the current expression-coefficient matrix;
and taking the current high-dimensional features (namely codes) output by the coding layer as the input of the self-expression layer, namely acquiring the current high-dimensional features output by the coding layer from the self-expression layer and obtaining a current expression coefficient matrix.
Step 6: clustering with a spectral clustering algorithm, using the current expression-coefficient matrix of the self-expression layer as the similarity matrix in the algorithm, so as to separate the bright beads from the dark beads.
FIG. 4 shows the effect of classifying the light and dark microbeads on a high-density gene chip. The similarity matrix is partitioned with a spectral clustering algorithm, and samples that can linearly express one another are found and clustered as one class, thereby obtaining the light/dark classification result of the microbeads. Clustering with the spectral clustering algorithm comprises the following steps: (1) take the expression-coefficient matrix in the self-expression layer as the similarity matrix W; (2) construct the normalized Laplacian matrix from the similarity matrix W; (3) calculate the first k eigenvalues and eigenvectors of the normalized Laplacian matrix and build the eigenvector matrix Q; (4) cluster the eigenvectors in the matrix Q with the K-means clustering algorithm, which correspondingly yields the category of each row object in the similarity matrix W. A sketch of these steps follows.
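The sketch below carries out steps (1) to (4) with NumPy and scikit-learn. Symmetrizing the coefficient matrix as (|C| + |C|ᵀ)/2 when forming W, and taking the eigenvectors of the k smallest eigenvalues of the normalized Laplacian, are common conventions assumed here; k = 2 corresponds to the bright/dark split.

```python
# Spectral clustering over the expression-coefficient matrix C (sketch).
import numpy as np
from sklearn.cluster import KMeans

def classify_beads(C: np.ndarray, k: int = 2) -> np.ndarray:
    W = 0.5 * (np.abs(C) + np.abs(C).T)                    # (1) similarity matrix W
    d = W.sum(axis=1)                                      # degree of each sample
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # (2) normalized Laplacian
    _, vecs = np.linalg.eigh(L_sym)                        # (3) eigendecomposition, ascending
    Q = vecs[:, :k]                                        # first k eigenvectors
    Q /= np.linalg.norm(Q, axis=1, keepdims=True) + 1e-12  # row-normalize Q
    return KMeans(n_clusters=k, n_init=10).fit_predict(Q)  # (4) K-means labels
```

Calling classify_beads on the trained layer's coefficient matrix (for example, self_expr.C.detach().numpy()) would yield one bright/dark label per bead.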
In another embodiment of the present application, a computer-readable storage medium is further provided, on which computer program instructions are stored; when the instructions are executed by a processor, steps 1 to 6 of the bead light/dark classification method of the high-density gene chip are carried out. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that the embodiments of the present invention are described by way of preferred examples and are not limited thereto; those skilled in the art may adapt the above-disclosed embodiments into equivalent embodiments without departing from the scope of the present invention.

Claims (9)

1. A method for classifying brightness of microbeads of a high-density gene chip is characterized by comprising the following steps:
step S1: taking the gray-scale image of each microbead as both the input image and the result label of an auto-encoder model, and pre-training the encoder and decoder of the auto-encoder model until the model learns to automatically extract high-dimensional features from each bead image and can reconstruct an image identical to the input from those extracted features, at which point the pre-training of the auto-encoder model is complete;
step S2: after the pre-training of the automatic encoder model is completed, the output of the encoder is used as the input of a subsequent model, and a decoder is used for calculating the reconstruction loss so as to optimize the model;
step S3: adding a fully-connected layer, called the self-expression layer, to the neural network after the encoding layer of the auto-encoder, and further training the auto-encoder together with the self-expression layer until the high-dimensional features output by the encoder, taken as the input of the self-expression layer, are self-expressed through the layer's expression-coefficient matrix; training of the auto-encoder and the self-expression layer is then complete, and in the process the expression-coefficient matrix in the self-expression layer has been constructed;
step S4: inputting the gray-scale image of each microbead to be classified into the trained encoder to obtain the current high-dimensional features;
step S5: inputting the current high-dimensional features into the trained self-expression layer to obtain the current expression-coefficient matrix;
step S6: clustering with a spectral clustering algorithm, using the current expression-coefficient matrix of the self-expression layer as the similarity matrix in the algorithm, so as to separate the bright beads from the dark beads.
2. The method for classifying the brightness of microbeads of a high-density gene chip as claimed in claim 1, wherein in step S6, the similarity matrix is partitioned using a spectral clustering algorithm, and samples that can linearly express one another are found and clustered as one class, thereby obtaining the light/dark classification result of the microbeads.
3. The method for classifying the brightness of microbeads of a high-density gene chip as set forth in claim 2, wherein the clustering using a spectral clustering algorithm comprises the steps of: (1) taking the expression-coefficient matrix in the self-expression layer as the similarity matrix W; (2) constructing the normalized Laplacian matrix from the similarity matrix W; (3) calculating the first k eigenvalues and eigenvectors of the normalized Laplacian matrix to construct the eigenvector matrix Q; (4) clustering the eigenvectors in the matrix Q with the K-means clustering algorithm, which correspondingly yields the category of each row object in the similarity matrix W.
4. The method for classifying the brightness of microbeads of a high-density gene chip as set forth in claim 1, wherein in step S3, if the high-dimensional feature obtained by the encoder is Z and the expression-coefficient matrix in the self-expression layer is C, the self-expression layer is trained such that its output for the input Z is also Z, i.e. ZC = Z, so that the input Z successfully expresses itself by linear combination, the so-called "self-expression property".
5. The method for classifying the brightness of microbeads according to claim 1, wherein step S2 further comprises: calculating the reconstruction loss of the auto-encoder using the following loss function:

Loss₁ = λ₁‖X − X̂‖²

where X is the original image input to the auto-encoder, X̂ is the reconstructed image output by the auto-encoder, and λ₁ is the weight coefficient of the auto-encoder's reconstruction loss.
6. The method for classifying the brightness of microbeads of a high-density gene chip as set forth in claim 1, wherein step S3 further comprises: calculating the self-expression loss of the self-expression layer using the following loss function:

Loss₂ = λ₂‖ZC − Z‖²

where Z is the output of the encoder in the auto-encoder and also the input of the self-expression layer, C is the expression-coefficient matrix in the self-expression layer, and λ₂ is the weight coefficient of the self-expression loss.
7. The method for classifying the brightness of microbeads of a high-density gene chip as set forth in claim 1, wherein step S3 further includes: imposing a regularization constraint on the weights of the self-expression layer using the following loss function, so as to calculate the regularization loss of the similarity matrix:

Loss₃ = λ₃‖C‖

where C is the expression-coefficient matrix in the self-expression layer and λ₃ is the weight coefficient of the similarity-matrix regularization loss.
8. A terminal, comprising:
a memory for storing executable program code; and
a processor for reading the executable program code stored in the memory to execute the bead light and dark classification method of the high-density gene chip according to any one of claims 1 to 7.
9. A storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method for bead light-dark classification of a high-density gene chip according to any one of claims 1 to 7.
CN202210714565.0A 2022-06-23 2022-06-23 Method for classifying brightness and darkness of microbeads of high-density gene chip, terminal and storage medium Active CN114792386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210714565.0A CN114792386B (en) 2022-06-23 2022-06-23 Method for classifying brightness and darkness of microbeads of high-density gene chip, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210714565.0A CN114792386B (en) 2022-06-23 2022-06-23 Method for classifying brightness and darkness of microbeads of high-density gene chip, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN114792386A CN114792386A (en) 2022-07-26
CN114792386B (en) 2022-10-11

Family

ID=82463080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210714565.0A Active CN114792386B (en) 2022-06-23 2022-06-23 Method for classifying brightness and darkness of microbeads of high-density gene chip, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114792386B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012088972A (en) * 2010-10-20 2012-05-10 Nippon Telegr & Teleph Corp <Ntt> Data classification device, data classification method and data classification program
CN111144463A (en) * 2019-12-17 2020-05-12 中国地质大学(武汉) Hyperspectral image clustering method based on residual subspace clustering network
CN113971735A (en) * 2021-09-16 2022-01-25 西安电子科技大学 Depth image clustering method, system, device, medium and terminal

Also Published As

Publication number Publication date
CN114792386A (en) 2022-07-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant