CN110969086A - Handwritten image recognition method based on multi-scale CNN (CNN) features and quantum flora optimization KELM - Google Patents

Handwritten image recognition method based on multi-scale CNN (CNN) features and quantum flora optimization KELM

Info

Publication number
CN110969086A
CN110969086A (application CN201911050472.7A); granted publication CN110969086B
Authority
CN
China
Prior art keywords
layer
kelm
cnn
image
quantum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911050472.7A
Other languages
Chinese (zh)
Other versions
CN110969086B (en)
Inventor
廖一鹏
张进
陈诗媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201911050472.7A priority Critical patent/CN110969086B/en
Publication of CN110969086A publication Critical patent/CN110969086A/en
Application granted granted Critical
Publication of CN110969086B publication Critical patent/CN110969086B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/30 Writer recognition; Reading and verifying signatures
    • G06V40/33 Writer recognition; Reading and verifying signatures based only on signature image, e.g. static signature recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Abstract

The invention relates to a handwritten image recognition method based on multi-scale CNN features and quantum flora optimization of a KELM, which comprises the following steps: firstly, NSST decomposition is performed on the handwritten image to obtain a multi-scale image with three scales; secondly, the detail features of each scale are extracted with a CNN model and fused by taking the element-wise maximum; then, the parameters of the KELM model are optimized with a quantum flora algorithm to obtain the optimal KELM model; finally, the fused features are taken as the input of the KELM and recognized, completing handwritten image recognition. The invention enhances recognition accuracy and generalization capability while adopting a KELM model that requires little training time; the KELM only needs two parameters to be optimized, and the CNN-KELM combination extracts richer detail features, thereby greatly reducing the workload of handwritten image recognition and improving recognition accuracy.

Description

Handwritten image recognition method based on multi-scale CNN (CNN) features and quantum flora optimization KELM
Technical Field
The invention relates to the field of image recognition, in particular to a handwritten image recognition method based on multi-scale CNN characteristics and quantum flora optimization KELM.
Background
Handwritten images are diverse and complex, which makes them inconvenient for machines to recognize and hard to recognize correctly; in addition, their quality is uneven, so poor-quality handwritten images are common. Recognizing handwritten images is necessary for the development of science and technology and addresses a shortcoming of image recognition. A method that recognizes handwritten images quickly and accurately would provide a new approach to image recognition and would benefit people's daily life and learning.
In recent years, several methods for recognizing handwritten images have emerged. Among the traditional methods, the nearest neighbor algorithm is simple to implement and fast, but it cannot accurately recognize complex handwritten images and its generalization ability is insufficient; the support vector machine improves generalization but suffers from low recognition accuracy; sparse coding improves recognition accuracy, yet accuracy on poor-quality handwritten images remains low. In deep learning, neural network methods solve the problem that complex handwritten images are difficult to recognize, but neural networks converge with difficulty; the convolutional neural network (CNN) alleviates the convergence problem, but training a CNN model takes too long; the CNN-ELM model was then proposed to shorten training time and improve recognition accuracy, but the extreme learning machine (ELM) has too many parameters to optimize and the optimal values are hard to find.
Disclosure of Invention
In view of this, an object of the present invention is to provide a handwritten image recognition method based on multi-scale CNN features and quantum flora optimization KELM, which can quickly recognize a handwritten image while ensuring extremely high accuracy.
The invention is realized by adopting the following scheme: a handwritten image recognition method based on multi-scale CNN features and quantum flora optimization KELM comprises the following steps:
step S1: establishing an NSST-CNN image decomposition and feature extraction model, obtaining a handwritten image, and performing NSST multi-scale decomposition on the handwritten image to obtain one low-frequency image and several high-frequency images, wherein the high-frequency images are not further decomposed into directional sub-band images;
step S2: respectively extracting characteristic values of the low-frequency image and the high-frequency images subjected to NSST decomposition in the step S1 through a CNN model; the CNN network structure passed by each scale comprises 3 convolutional layers, 3 pooling layers and 1 full-connection layer;
step S3: carrying out feature fusion on the feature values extracted by the CNN at each scale by taking the element-wise maximum, forming 1024 fused feature values; the fusion combines the feature values corresponding to each scale, and four fusion modes are possible: taking the maximum value, the minimum value, the average value or the median (a fusion sketch is given after step S7).
Step S4: taking the 1024 fused feature values as the input of a kernel extreme learning machine (KELM), and taking the digit corresponding to each handwritten image as the output of the KELM; 80% of all input data is used to train the KELM model and 20% is used to test the trained model, yielding the test accuracy;
step S5: optimizing the two KELM parameters, namely the penalty coefficient C and the kernel parameter σ, with an improved quantum flora algorithm, wherein the two parameters are the variables of the algorithm and the handwritten image recognition accuracy is used as its fitness value;
step S6: constructing a KELM model with the optimal penalty coefficient C and kernel parameter σ obtained by the improved quantum flora algorithm to obtain the optimal KELM recognition model;
step S7: substituting the feature values of the test images obtained through the NSST-CNN model into the optimal KELM model to obtain the test accuracy, and predicting the handwritten digit images to be tested through steps S1 to S6 to obtain the recognition result, thereby completing handwritten image recognition.
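As an illustration of the maximum-value fusion of step S3, the following NumPy sketch fuses the 1024-dimensional feature vectors produced by the three per-scale CNN branches element-wise; it is a minimal example, and the array names, random values and shapes are assumptions made only for demonstration.

```python
import numpy as np

# Assumed inputs: one 1024-dimensional feature vector per decomposition scale.
feat_low   = np.random.rand(1024)   # features of the low-frequency image
feat_high1 = np.random.rand(1024)   # features of the first high-frequency image
feat_high2 = np.random.rand(1024)   # features of the second high-frequency image

stacked = np.stack([feat_low, feat_high1, feat_high2], axis=0)   # shape (3, 1024)

# The four candidate fusion rules named in step S3.
fused_max    = stacked.max(axis=0)      # element-wise maximum (the rule adopted)
fused_min    = stacked.min(axis=0)
fused_mean   = stacked.mean(axis=0)
fused_median = np.median(stacked, axis=0)

print(fused_max.shape)   # (1024,) -> the input vector fed to the KELM
```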
Further, the specific content of establishing the NSST-CNN image decomposition and feature extraction model in step S1 is as follows: the handwritten image is decomposed in the NSST (non-subsampled shearlet transform) domain; multi-scale decomposition is performed with a non-subsampled pyramid (NSP), and k NSP decompositions of the image yield 1 low-frequency image and k layers of high-frequency sub-band images; a shearing filter (SF) can then perform l-level multi-directional decomposition of a high-frequency sub-band to obtain 2^l multi-directional sub-bands of the same size as the original image;
the formula for two-dimensional image processing by NSST is shown in formula (1):
ψ_{k,l,b}(x) = |det A|^(k/2) ψ(S^l A^k x - b)   (1)
where ψ ∈ L²(R²) is the basis function of the affine system; k, l and b are respectively the scale coefficient, direction coefficient and translation coefficient of the decomposition; A is the anisotropic dilation matrix controlling the NSST scale decomposition, and S is the shear matrix controlling the NSST direction decomposition;
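Formula (1) describes the multi-scale, multi-directional analysis. For intuition, the sketch below performs a greatly simplified non-subsampled multi-scale decomposition in which every sub-image keeps the size of the original, using a Gaussian difference pyramid as a stand-in for the NSP stage; the function name, the σ values and the 28x28 image size are illustrative assumptions, and the shear-filter directional decomposition is omitted, consistent with the method's choice not to split the high-frequency images into directional sub-bands.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_decompose(img, sigmas=(1.0, 2.0)):
    """Simplified non-subsampled multi-scale decomposition (a stand-in for the
    NSP stage of NSST): every output keeps the size of the input image."""
    blur1 = gaussian_filter(img, sigmas[0])
    blur2 = gaussian_filter(img, sigmas[1])
    low   = blur2            # low-frequency (coarse) image
    high1 = img - blur1      # finest high-frequency detail layer
    high2 = blur1 - blur2    # coarser high-frequency detail layer
    return low, high1, high2

img = np.random.rand(28, 28)              # e.g. one handwritten digit image
low, high1, high2 = multiscale_decompose(img)
print(low.shape, high1.shape, high2.shape)  # all (28, 28), same size as the input
```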
extracting detail features from the multi-scale images obtained by NSST decomposition with a convolutional neural network; in the CNN, a neuron of a convolutional layer is connected to only part of the neuron nodes of the previous layer, i.e. the connections between neurons are not fully connected, and the weights w and offsets b of the connections between some neurons in the same layer are shared, which reduces the number of parameters to be trained; the CNN structure comprises an input layer, convolutional layers, excitation layers, pooling layers, a fully connected layer and an output layer; the input layer is used for data input, and pictures enter the CNN structure in the form of data; the convolutional layers use convolution kernels for feature extraction and feature mapping, extracting different feature maps by repeatedly convolving the input image; each unit of the current layer is then connected to a local block of the feature map of the preceding layer through weights, and local weighting and nonlinear transformation are performed;
the excitation layer applies a nonlinear mapping to the output of the convolutional layer; the excitation function used is the ReLU function, f(x) = max(x, 0);
the specific operation of the pooling layer is as follows: first, the position of each feature is coarse-grained; second, each typical pooling unit computes the maximum of a local block within one or several feature maps; adjacent units take their input from blocks shifted by rows or columns, which reduces the dimensionality of the feature representation and provides robustness to small deformations such as translation and distortion; the fully connected layer is fitted at the tail of the CNN to reduce the loss of feature information; the output layer outputs the result.
Further, the specific content of step S5 is as follows:
firstly, quantum computation is added to the flora algorithm. Qubits are the physical media used to store information, and one qubit is represented as |φ> = α|0> + β|1>, where (α, β) are two amplitude constants with |α|² + |β|² = 1, and |0> and |1> represent spin states; a qubit therefore simultaneously contains the information of the two states |0> and |1>. The gene that quantum-encodes n parameters is shown in formula (2):
q = [α1, α2, …, αn; β1, β2, …, βn]   (2)
then, a nonlinear adaptive rotation angle is added to the flora algorithm; the specific formula is:
θi = -sgn(Ai)·Δθi   (3)
where -sgn(Ai) indicates the direction of the rotation angle; (α0, β0) is the probability amplitude corresponding to a qubit of the currently best bacterium, and (αi, βi) is the probability amplitude corresponding to the same qubit of the current bacterium; Δθi, the magnitude of the rotation angle, is calculated by formula (4):
Figure BDA0002255151940000054
where θmax is the maximum value of the rotation angle; θmin is the minimum value of the rotation angle; θ0 and θi are the angles of a qubit of the current best bacterium and of the current bacterium on the unit circle, respectively; C is a constant representing the maximum of the angle difference |θ0 - θi|; and m is the nonlinear modulation index of the current evolution step.
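To make the quantum rotation gate update concrete, the sketch below applies a rotation of angle θ to the amplitude pair (α, β) of one qubit; the gate matrix is the standard single-qubit rotation used in quantum-inspired evolutionary algorithms, and since the adaptive angle of formulas (3)-(4) is only reproduced as an image here, the angle is simply passed in as a parameter.

```python
import numpy as np

def rotate_qubit(alpha, beta, theta):
    """Apply the standard quantum rotation gate
        [cos(theta)  -sin(theta)]
        [sin(theta)   cos(theta)]
    to one qubit's probability amplitudes (alpha, beta)."""
    new_alpha = np.cos(theta) * alpha - np.sin(theta) * beta
    new_beta  = np.sin(theta) * alpha + np.cos(theta) * beta
    return new_alpha, new_beta

# Example: a qubit initialised in equal superposition, |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
alpha, beta = rotate_qubit(alpha, beta, theta=0.05 * np.pi)
print(alpha**2 + beta**2)   # still 1.0, the rotation preserves normalisation
```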
The specific steps for realizing the improved quantum flora algorithm are as follows (a structural sketch of the loop nesting is given after step SH):
step SA: initialize the dimension of the parameter search space, the bacterial population size S, the number Nc of tropism (chemotaxis) operations, the maximum number Ns of steps advanced in one direction during a tropism operation, the number Nre of replication operations, the number Ned of migration operations, the migration probability Ped, and the forward swimming step length C(i) (i = 1, 2, …, S); the penalty coefficient C and the kernel parameter σ, which form the position θ of a bacterium, are generated randomly within their value ranges;
step SB: migration operation cycle, l = l + 1;
step SC: replication operation cycle, k = k + 1;
step SD: tropism operation cycle, j = j + 1;
step SE: if j < Nc, return to step SD and continue the tropism operation.
Step SF: replication: for the given k, l and each i = 1, 2, …, S, sort the bacterial energy values Jhealth in ascending order; the Sr bacteria with smaller energy (half of the population) are eliminated, the Sr bacteria with larger energy are selected, quantum computing is introduced for them, they are encoded with qubits, and the nonlinear adaptive quantum rotation gate operation is performed.
Step SG: if k < Nre, return to step SC.
Step SH: migration: after several generations of replication operations, each bacterium is redistributed randomly into the search space with probability Ped; if l < Ned, return to step SB; otherwise, end the optimization.
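The nesting of the three loops in steps SA-SH can be summarised by the following Python skeleton. It is a structural sketch only: the tumble-and-swim move and the quantum rotation step are reduced to comments or simple array operations, the fitness is assumed to be the recognition accuracy to be maximised, and all parameter values are placeholders rather than values taken from the patent.

```python
import numpy as np

def quantum_flora_optimize(fitness, bounds, S=20, Nc=10, Nre=4, Ned=2, Ped=0.25):
    """Skeleton of the improved quantum flora optimizer.  `fitness(theta)` returns
    the value to maximise; `bounds` lists (low, high) pairs, here the ranges of
    the penalty coefficient C and the kernel parameter sigma."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    theta = lo + np.random.rand(S, dim) * (hi - lo)          # step SA: random positions
    best_theta, best_fit = theta[0].copy(), -np.inf

    for l in range(Ned):                                     # step SB: migration loop
        for k in range(Nre):                                 # step SC: replication loop
            health = np.zeros(S)
            for j in range(Nc):                              # steps SD/SE: chemotaxis loop
                for i in range(S):
                    # the tumble-and-swim move of steps SD1-SD8 would update theta[i] here
                    fit = fitness(theta[i])
                    health[i] += fit
                    if fit > best_fit:
                        best_fit, best_theta = fit, theta[i].copy()
            # step SF: keep and duplicate the healthier half; the duplicated half is
            # where the qubit encoding and quantum rotation gate would be applied
            order = np.argsort(-health)
            theta = np.concatenate([theta[order[:S // 2]], theta[order[:S // 2]]])
        migrate = np.random.rand(S) < Ped                    # step SH: random re-dispersal
        theta[migrate] = lo + np.random.rand(int(migrate.sum()), dim) * (hi - lo)
    return best_theta, best_fit
```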
Further, the step SD specifically includes the following steps (a sketch of the tumble-and-swim move is given after step SD8):
step SD1: let each bacterium i = 1, 2, …, S take one chemotactic step as follows;
step SD2: calculate the fitness value function J(i, j, k, l) using formula (5);
J(i, j, k, l) = J(i, j, k, l) + Jcc(θi(j, k, l), P(j, k, l))   (5)
step SD3: let Jlast = J(i, j, k, l), stored as the best fitness value of bacterium i so far;
step SD4: rotation: generate a random vector Δ(i) ∈ R^P, each element Δm(i) (m = 1, 2, …, P) of which is a random number distributed in [-1, 1];
step SD5: movement: let
θi(j+1, k, l) = θi(j, k, l) + C(i)·Δ(i)/√(Δᵀ(i)Δ(i))   (6)
where C(i) is the size of one step taken by bacterium i along the direction randomly generated after rotation;
step SD6: calculate J(i, j+1, k, l), and let
J(i, j+1, k, l) = J(i, j, k, l) + Jcc(θi(j+1, k, l), P(j+1, k, l))   (7)
Step SD7: swimming: first let m = 0; then, while m < Ns, let m = m + 1; if J(i, j+1, k, l) < Jlast, let Jlast = J(i, j+1, k, l), compute θi(j+1, k, l) by formula (6), and return to step SD6 to calculate the new J(i, j+1, k, l); otherwise, let m = Ns;
Step SD8: return to step SD2 and process the next bacterium i + 1.
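A minimal Python sketch of one tumble-and-swim move from steps SD4-SD7, using the movement rule of formula (6). The variable names and the toy cost function are illustrative assumptions, and the comparison is written for minimisation exactly as in step SD7; to maximise the recognition accuracy used as fitness in step S5, one can minimise 1 - accuracy.

```python
import numpy as np

def chemotaxis_step(theta_i, C_i, cost, Ns=4):
    """One tumble followed by up to Ns swim steps for a single bacterium
    (minimisation form, matching the J(...) < Jlast test in step SD7)."""
    J_last = cost(theta_i)                                       # step SD3
    delta = np.random.uniform(-1.0, 1.0, size=theta_i.shape)     # step SD4: tumble direction
    direction = delta / np.sqrt(delta @ delta)
    theta_new = theta_i + C_i * direction                        # step SD5: move one step
    m = 0
    while m < Ns:                                                # step SD7: keep swimming
        J_new = cost(theta_new)
        if J_new < J_last:                                       # while the cost improves
            J_last, theta_i = J_new, theta_new
            theta_new = theta_i + C_i * direction
            m += 1
        else:
            break
    return theta_i, J_last

# Toy usage: minimise the squared distance to an arbitrary target (C, sigma) pair.
pos, val = chemotaxis_step(np.array([50.0, 40.0]), 1.0,
                           lambda t: np.sum((t - np.array([97.0, 82.0])) ** 2))
print(pos, val)
```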
Further, the specific content of the kernel extreme learning machine (KELM) used in step S4 is as follows:
in the training process of the extreme learning machine, the connection weights ω between the input layer and the hidden layer and the biases b of the hidden-layer neurons are selected so that the resulting hidden-layer output matrix H is unique; the whole learning process amounts to solving the linear system Hβ = T, so the connection weights β between the hidden layer and the output layer are calculated by formula (8);
β = H⁺T   (8)
where H⁺ represents the generalized inverse of the matrix H; the generalized inverse of H is calculated by orthogonal projection, as shown in formula (9):
H⁺ = Hᵀ(HHᵀ)⁻¹   (9)
if a positive number I/C is added to the diagonal of HHᵀ, the solution is more stable and the generalization performance is stronger, giving formula (10):
β = Hᵀ(I/C + HHᵀ)⁻¹T   (10)
therefore, the output function of the extreme learning machine can be expressed as shown in formula (11):
f(x) = h(x)β = h(x)Hᵀ(I/C + HHᵀ)⁻¹T   (11)
where I is the identity matrix and C is the penalty coefficient.
To eliminate the influence of the uncertainty of the hidden-layer function h(x), a kernel function replaces HHᵀ; the kernel matrix is defined according to the Mercer condition, as shown in formulas (12) and (13):
Ω_ELM = HHᵀ   (12)
Ω_ELM(i, j) = h(xi)·h(xj) = K(xi, xj)   (13)
therefore, the output of the kernel extreme learning machine is expressed as shown in formula (14):
f(x) = [K(x, x1), …, K(x, xN)]·(I/C + Ω_ELM)⁻¹T   (14)
the RBF kernel is selected as the kernel function of the KELM, as shown in formula (15):
K(xi, xj) = exp(-σ‖xi - xj‖²), σ > 0   (15).
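A minimal NumPy reading of formulas (10)-(15): training reduces to one regularised linear solve against the RBF kernel matrix, and prediction evaluates kernel values against the stored training samples. This is an illustrative sketch, not the patented implementation; the default C and σ below are placeholders, and the random data exist only to show the shapes involved.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Formula (15): K(x_i, x_j) = exp(-sigma * ||x_i - x_j||^2)."""
    sq = (X1**2).sum(1)[:, None] + (X2**2).sum(1)[None, :] - 2.0 * X1 @ X2.T
    return np.exp(-sigma * np.maximum(sq, 0.0))

class KELM:
    def __init__(self, C=1.0, sigma=0.01):     # placeholder parameter values
        self.C, self.sigma = C, sigma

    def fit(self, X, T):
        """Formulas (12)-(14): solve (I/C + Omega) a = T, Omega being the kernel matrix."""
        self.X = X
        omega = rbf_kernel(X, X, self.sigma)
        n = X.shape[0]
        self.a = np.linalg.solve(np.eye(n) / self.C + omega, T)
        return self

    def predict(self, Xnew):
        """f(x) = [K(x, x_1), ..., K(x, x_N)] (I/C + Omega)^-1 T."""
        return rbf_kernel(Xnew, self.X, self.sigma) @ self.a

# Toy usage with random 1024-dimensional fused features and one-hot digit labels.
X = np.random.rand(100, 1024)
T = np.eye(10)[np.random.randint(0, 10, 100)]
model = KELM().fit(X, T)
print(model.predict(X[:5]).argmax(axis=1))   # predicted digits for five samples
```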
Compared with the prior art, the invention has the following beneficial effects:
(1) the invention uses the NSST-CNN model to extract the detail features of the handwritten image and the KELM to recognize these features, so that handwritten images can be recognized quickly while extremely high accuracy is guaranteed; this provides a new method for image recognition and at the same time benefits people's daily life and learning.
(2) The invention enhances recognition accuracy and generalization capability while adopting a KELM model that takes little time to train; the KELM only needs two parameters to be optimized, and the CNN-KELM combination extracts richer detail features, thereby greatly reducing the workload of handwritten image recognition and improving recognition accuracy.
Drawings
FIG. 1 is a flow chart of the NSST-CNN model according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating classification and identification of feature information according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating a handwritten image recognition process according to an embodiment of the present invention.
FIG. 4 is a comparison of the optimization algorithms on a bivariate test function according to an embodiment of the present invention.
FIG. 5 shows part of the handwritten images of an embodiment of the invention.
Fig. 6 is a comparison of experiments on the MNIST handwriting data set with different decomposition scales and fusion modes according to an embodiment of the present invention.
Fig. 7 is a visualization diagram of each layer of the CNN model according to the embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an", and/or "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of the features, steps, operations, devices, components, and/or combinations thereof.
The embodiment provides a handwritten image recognition method based on multi-scale CNN characteristics and quantum flora optimization KELM, which comprises the following steps:
step S1: establishing an NSST-CNN image decomposition and feature extraction model, obtaining a handwritten image, and performing NSST multi-scale decomposition on the handwritten image to obtain one low-frequency image and several high-frequency images, wherein the high-frequency images are not further decomposed into sub-band images;
step S2: respectively extracting characteristic values of the low-frequency image and the high-frequency images subjected to NSST decomposition in the step S1 through a CNN model; the CNN network structure passed by each scale comprises 3 convolutional layers, 3 pooling layers and 1 full-connection layer;
step S3: carrying out feature fusion on the feature values extracted by the CNN at each scale by taking the element-wise maximum, forming 1024 fused feature values; the fusion combines the feature values corresponding to each scale, and four fusion modes are possible: taking the maximum value, the minimum value, the average value or the median. Experiments verify that taking the maximum value retains the most feature information of each scale.
Step S4: taking the 1024 fused feature values as the input of a kernel extreme learning machine (KELM), and taking the digit corresponding to each handwritten image as the output of the KELM; 80% of all input data is used to train the KELM model and 20% is used to test the trained model, yielding the test accuracy;
step S5: optimizing the two KELM parameters, namely the penalty coefficient C and the kernel parameter σ, with an improved quantum flora algorithm, wherein the two parameters are the variables of the algorithm and the handwritten image recognition accuracy is used as its fitness value;
step S6: constructing a KELM model with the optimal penalty coefficient C and kernel parameter σ obtained by the improved quantum flora algorithm to obtain the optimal KELM recognition model;
step S7: substituting the feature values of the test images obtained through the NSST-CNN model into the optimal KELM model to obtain the test accuracy, and predicting the handwritten digit images to be tested through steps S1 to S6 to obtain the recognition result, thereby completing handwritten image recognition (the overall data flow of these steps is sketched below).
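The following Python sketch summarises the training-time data flow of steps S1-S7. It is pseudocode-level: nsst_decompose, cnn_branches, kelm_factory and quantum_flora_optimize are hypothetical names standing for the components described in this embodiment, not an existing API.

```python
import numpy as np

def extract_fused_features(images, nsst_decompose, cnn_branches):
    """Steps S1-S3: three-scale NSST decomposition, one CNN branch per scale,
    element-wise maximum fusion into one 1024-dimensional vector per image."""
    feats = []
    for img in images:
        low, high1, high2 = nsst_decompose(img)                                   # step S1
        per_scale = [cnn(s) for cnn, s in zip(cnn_branches, (low, high1, high2))]  # step S2
        feats.append(np.max(np.stack(per_scale), axis=0))                         # step S3
    return np.array(feats)

def train_recognizer(images, labels_onehot, nsst_decompose, cnn_branches,
                     kelm_factory, quantum_flora_optimize, bounds):
    X = extract_fused_features(images, nsst_decompose, cnn_branches)
    split = int(0.8 * len(X))                                   # step S4: 80%/20% split
    Xtr, Ttr = X[:split], labels_onehot[:split]
    Xte, Tte = X[split:], labels_onehot[split:]

    def fitness(params):                                        # step S5: accuracy as fitness
        C, sigma = params
        model = kelm_factory(C, sigma).fit(Xtr, Ttr)
        return (model.predict(Xte).argmax(1) == Tte.argmax(1)).mean()

    (C_opt, sigma_opt), best_acc = quantum_flora_optimize(fitness, bounds)
    final_model = kelm_factory(C_opt, sigma_opt).fit(Xtr, Ttr)  # step S6: optimal KELM
    return final_model, best_acc                                # step S7: use on new images
```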
Preferably, in this embodiment, for feature extraction from the handwritten image, the detailed operation flowchart of the adopted NSST-CNN model is shown in FIG. 1. The handwritten image first needs to be decomposed in the NSST domain. The non-subsampled shearlet transform is an optimized improvement of the shearlet transform: the downsampling step is removed by a non-subsampled pyramid filter bank (NSLP), so that the representation is translation invariant and the pseudo-Gibbs effect is suppressed, while multi-directional decomposition of the high-frequency sub-bands is achieved with shearing filters (SF).
In this embodiment, the specific content of establishing the NSST-CNN image decomposition and feature extraction model in step S1 is as follows: the handwritten image is decomposed in the NSST domain; multi-scale decomposition is performed with a non-subsampled pyramid (NSP), and k NSP decompositions of the image yield 1 low-frequency image and k layers of high-frequency sub-band images; a shearing filter (SF) can then perform l-level multi-directional decomposition of a high-frequency sub-band to obtain 2^l multi-directional sub-bands of the same size as the original image; the invention does not decompose the multi-directional sub-bands.
The formula for two-dimensional image processing by NSST is shown in formula (1):
ψ_{k,l,b}(x) = |det A|^(k/2) ψ(S^l A^k x - b)   (1)
where ψ ∈ L²(R²) is the basis function of the affine system; k, l and b are respectively the scale coefficient, direction coefficient and translation coefficient of the decomposition; A is the anisotropic dilation matrix controlling the NSST scale decomposition, and S is the shear matrix controlling the NSST direction decomposition;
the multiscale image after NSST decomposition is then used to extract detail features using a Convolutional Neural Network (CNN). In the artificial fully-connected neural network, each neuron between every two adjacent layers is connected with edges. When the feature dimension of the input layer becomes very high, the parameters to be trained of the fully-connected network are increased greatly, the calculation speed becomes very slow, and in the CNN, the neurons of the convolutional layer are connected with only part of the neuron nodes of the previous layer, namely, the connections among the neurons are not fully connected, and the weights w and the offsets b of the connections among some neurons in the same layer are shared, so that the quantity of the parameters to be trained is greatly reduced. The structure of CNN generally contains these several layers: input layer, convolution layer, excitation layer, pooling layer, full-link layer and output layer.
The input layer is used for data input, and pictures are input to the CNN structure in the form of data.
The convolution layer uses convolution kernel to extract and map features, and extracts different feature maps by performing convolution operation on an input image for multiple times; and then, each unit of the current layer is connected with the local block of the feature map of the precursor layer through a weight value, and local weighting and nonlinear transformation are carried out.
The excitation layer mainly performs a nonlinear mapping on the output of the convolutional layer, because the convolution itself is a linear operation; the excitation function used is generally the ReLU function, as shown in the following formula
f(x)=max(x,0)
The pooling layer performs downsampling, sparsifies the feature maps and reduces the amount of data computation. The specific operation of pooling is as follows: first, the position of each feature is coarse-grained, so that small shifts in the relative positions of the motifs they form do not matter, enabling reliable detection of the motif; second, each typical pooling unit computes the maximum of a local block within one or several feature maps; adjacent units take their input from blocks shifted by rows or columns, which reduces the dimensionality of the feature representation and provides robustness to small deformations such as translation and distortion.
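For intuition, the tiny NumPy example below performs 2x2 max pooling on one feature map, taking the maximum of each local block as described above; the array values are arbitrary and only illustrate the operation.

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max pooling: each output unit is the maximum of one local block,
    halving the spatial size of the feature map."""
    h, w = fmap.shape
    return fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(fmap))   # [[ 5.  7.] [13. 15.]]
```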
The fully connected layer is usually fitted at the tail of the CNN to reduce the loss of feature information; several convolution, nonlinear transformation and pooling stages can be connected in series as actually needed. The output layer outputs the result.
In the CNN model of the invention, the parameters of each layer are set differently for each scale, as shown in Table 1, where C1, C3 and C5 denote convolutional layers and P2, P4 and P6 denote pooling layers.
TABLE 1 CNN model parameters
Figure BDA0002255151940000131
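Table 1 above is only reproduced as an image, so the exact channel counts and kernel sizes are not shown here. The sketch below illustrates the stated per-scale structure of 3 convolutional layers, 3 pooling layers and one 1024-unit fully connected layer in Keras; the layer parameters are purely illustrative stand-ins for Table 1.

```python
import tensorflow as tf

def build_scale_branch(input_shape=(28, 28, 1)):
    """One per-scale CNN branch: 3 convolutional layers, 3 pooling layers and a
    1024-unit fully connected layer, as stated in step S2.  Channel counts and
    kernel sizes are assumptions, not the values of Table 1."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),    # C1
        tf.keras.layers.MaxPooling2D(2),                                     # P2
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),    # C3
        tf.keras.layers.MaxPooling2D(2),                                     # P4
        tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),   # C5
        tf.keras.layers.MaxPooling2D(2),                                     # P6
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation="relu"),                      # fully connected
    ])

branch = build_scale_branch()
branch.summary()   # final output: one 1024-dimensional feature vector per image
```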
After feature extraction with the NSST-CNN model, the extracted features need to be classified and recognized; the flowchart is shown in FIG. 2.
Firstly, the quantum flora algorithm is used to train the parameters of the KELM. The quantum flora algorithm is an improvement of the flora algorithm, an intelligent random search algorithm inspired by the intelligent behaviour exhibited by Escherichia coli foraging in the human intestinal tract; research shows that bacterial foraging comprises four typical behaviours, namely tropism, aggregation, replication and migration. The flora algorithm is insensitive to initial values and parameter selection and has strong robustness, but its ability to find the optimal fitness still needs improvement.
Quantum computation is first added to the flora algorithm. Qubits are the physical media used to store information, and one qubit is represented as |φ> = α|0> + β|1>, where (α, β) are two amplitude constants with |α|² + |β|² = 1, and |0> and |1> represent spin states; a qubit therefore simultaneously contains the information of the two states |0> and |1>. The gene that quantum-encodes n parameters is shown in formula (2):
q = [α1, α2, …, αn; β1, β2, …, βn]   (2)
The quantum rotation gate is used to update the position of the chromosome, and the choice of rotation phase strongly affects the convergence speed and optimization ability of the algorithm. In the actual search process of the flora, the rotation angle should not be a constant: a fixed rotation phase is not conducive to convergence, a larger rotation phase helps explore new regions and accelerates convergence, and a smaller rotation phase encourages the bacteria to search finely in a local region to find the optimal solution. Therefore, this embodiment proposes a nonlinear adaptive rotation angle to improve the performance of the flora algorithm.
The nonlinear adaptive rotation angle added to the flora algorithm is given by the following formula:
θi = -sgn(Ai)·Δθi   (3)
where -sgn(Ai) indicates the direction of the rotation angle; (α0, β0) is the probability amplitude corresponding to a qubit of the currently best bacterium, and (αi, βi) is the probability amplitude corresponding to the same qubit of the current bacterium; Δθi, the magnitude of the rotation angle, is calculated by formula (4):
Figure BDA0002255151940000145
where θmax is the maximum value of the rotation angle; θmin is the minimum value of the rotation angle; θ0 and θi are the angles of a qubit of the current best bacterium and of the current bacterium on the unit circle, respectively; C is a constant representing the maximum of the angle difference |θ0 - θi|; and m is the nonlinear modulation index of the current evolution step.
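Formula (2) is only reproduced as an image above; the usual quantum-evolutionary encoding it refers to stores one (α, β) amplitude pair per parameter. The sketch below shows such an encoding and one simple way to decode amplitudes back to real parameter values inside the search range; the decoding convention is an assumption made purely for illustration.

```python
import numpy as np

def init_quantum_gene(n_params):
    """One quantum gene: an (alpha, beta) amplitude pair per parameter,
    initialised to the equal-superposition state 1/sqrt(2)."""
    return np.full((2, n_params), 1.0 / np.sqrt(2))

def decode(gene, bounds):
    """Map each qubit to a real value inside its search range using the
    observation probability |beta|^2 (an illustrative convention only)."""
    prob_one = gene[1] ** 2                       # probability of measuring |1>
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    return lo + prob_one * (hi - lo)

gene = init_quantum_gene(2)                       # two parameters: C and sigma
print(decode(gene, [(1.0, 100.0), (0.001, 100.0)]))   # midpoint of each range at start
```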
The specific steps for realizing the improved quantum flora algorithm are as follows:
step SA: initialize the dimension of the parameter search space, the bacterial population size S, the number Nc of tropism (chemotaxis) operations, the maximum number Ns of steps advanced in one direction during a tropism operation, the number Nre of replication operations, the number Ned of migration operations, the migration probability Ped, and the forward swimming step length C(i) (i = 1, 2, …, S); the penalty coefficient C and the kernel parameter σ, which form the position θ of a bacterium, are generated randomly within their value ranges;
step SB: migration operation cycle, l = l + 1;
step SC: replication operation cycle, k = k + 1;
step SD: tropism operation cycle, j = j + 1;
step SE: if j < Nc, return to step SD and continue the tropism operation.
Step SF: replication: for the given k, l and each i = 1, 2, …, S, sort the bacterial energy values Jhealth in ascending order; the Sr bacteria with smaller energy (half of the population) are eliminated, the Sr bacteria with larger energy are selected, quantum computing is introduced for them, they are encoded with qubits, and the nonlinear adaptive quantum rotation gate operation is performed (a sketch of this replication step is given after step SH).
Step SG: if k < Nre, return to step SC.
Step SH: migration: after several generations of replication operations, each bacterium is redistributed randomly into the search space with probability Ped; if l < Ned, return to step SB; otherwise, end the optimization.
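A minimal sketch of the replication step SF: the bacteria are ranked by accumulated energy, the worse half is discarded and the better half is duplicated so the population size stays constant; in the full algorithm the duplicated half would then be refined with the quantum rotation gate. Array shapes, the larger_is_better flag and the random values are illustrative assumptions.

```python
import numpy as np

def reproduce(theta, health, larger_is_better=True):
    """Step SF: keep the healthier half of the population and duplicate it so
    the population size S stays constant; the duplicated half is the part that
    would receive the qubit encoding and quantum rotation gate."""
    S = theta.shape[0]
    order = np.argsort(-health if larger_is_better else health)
    survivors = theta[order[: S // 2]]
    return np.concatenate([survivors, survivors.copy()], axis=0)

theta = np.random.rand(6, 2) * 100.0     # 6 bacteria, each a (C, sigma) position
health = np.random.rand(6)               # accumulated fitness of each bacterium
print(reproduce(theta, health).shape)    # (6, 2): population size unchanged
```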
In this embodiment, the step SD specifically includes the following steps:
step SD1: let each bacterium i = 1, 2, …, S take one chemotactic step as follows;
step SD2: calculate the fitness value function J(i, j, k, l) using formula (5);
J(i, j, k, l) = J(i, j, k, l) + Jcc(θi(j, k, l), P(j, k, l))   (5)
step SD3: let Jlast = J(i, j, k, l), stored as the best fitness value of bacterium i so far;
step SD4: rotation: generate a random vector Δ(i) ∈ R^P, each element Δm(i) (m = 1, 2, …, P) of which is a random number distributed in [-1, 1];
step SD5: movement: let
θi(j+1, k, l) = θi(j, k, l) + C(i)·Δ(i)/√(Δᵀ(i)Δ(i))   (6)
where C(i) is the size of one step taken by bacterium i along the direction randomly generated after rotation;
step SD6: calculate J(i, j+1, k, l), and let
J(i, j+1, k, l) = J(i, j, k, l) + Jcc(θi(j+1, k, l), P(j+1, k, l))   (7)
Step SD7: swimming: first let m = 0; then, while m < Ns, let m = m + 1; if J(i, j+1, k, l) < Jlast, let Jlast = J(i, j+1, k, l), compute θi(j+1, k, l) by formula (6), and return to step SD6 to calculate the new J(i, j+1, k, l); otherwise, let m = Ns;
Step SD8: return to step SD2 and process the next bacterium i + 1.
In this embodiment, the specific content of the kernel extreme learning machine (KELM) used in step S4 is as follows:
the kernel extreme learning machine is an improvement of the extreme learning machine. When the extreme learning machine is used, the number of hidden-layer nodes has to be set manually, while the input weights and the hidden-layer biases can be selected randomly, so a unique optimal solution can be obtained directly, overcoming the shortcomings of feedforward neural network methods.
In the training process of the extreme learning machine, the connection weights ω between the input layer and the hidden layer and the biases b of the hidden-layer neurons are selected so that the resulting hidden-layer output matrix H is unique; the whole learning process amounts to solving the linear system Hβ = T, so the connection weights β between the hidden layer and the output layer are calculated by formula (8);
β = H⁺T   (8)
where H⁺ represents the generalized inverse of the matrix H; the generalized inverse of H can be computed in several ways, such as the orthogonal projection method, the singular value decomposition method and iterative methods; it is usually computed by orthogonal projection, as shown in formula (9):
H⁺ = Hᵀ(HHᵀ)⁻¹   (9)
if a positive number I/C is added to the diagonal of HHᵀ, the solution is more stable and the generalization performance is stronger, giving formula (10):
β = Hᵀ(I/C + HHᵀ)⁻¹T   (10)
therefore, the output function of the extreme learning machine can be expressed as shown in formula (11):
f(x) = h(x)β = h(x)Hᵀ(I/C + HHᵀ)⁻¹T   (11)
where I is the identity matrix and C is the penalty coefficient.
To eliminate the influence of the uncertainty of the hidden-layer function h(x), a kernel function replaces HHᵀ; the kernel matrix is defined according to the Mercer condition, as shown in formulas (12) and (13):
Ω_ELM = HHᵀ   (12)
Ω_ELM(i, j) = h(xi)·h(xj) = K(xi, xj)   (13)
therefore, the output of the kernel extreme learning machine is expressed as shown in formula (14):
f(x) = [K(x, x1), …, K(x, xN)]·(I/C + Ω_ELM)⁻¹T   (14)
In the KELM algorithm there is no need to worry about the influence of parameters such as the hidden-layer function h(x), the connection weights ω between the input layer and the hidden layer, the biases b of the hidden-layer neurons and the number L of hidden nodes; only a suitable kernel function has to be selected. The RBF kernel is usually chosen as the kernel function of the KELM, as shown in formula (15):
K(xi, xj) = exp(-σ‖xi - xj‖²), σ > 0   (15).
Although the KELM introduces the RBF kernel function and the penalty parameter to solve the problem of random initialization of the ELM input weights and to enhance the generalization performance of the algorithm, its performance is easily affected by the penalty coefficient C and the kernel parameter σ, because the ridge-regression parameter C and the RBF kernel parameter σ are typically chosen by trial and error or by experience, and different learning parameter combinations (C, σ) all affect the fitting performance of the KELM. Therefore, to obtain the optimal classification performance, this embodiment uses the quantum flora algorithm to optimize the two KELM parameters, namely the penalty coefficient C and the kernel parameter σ.
To verify the parameter-optimization performance of the quantum flora algorithm proposed in this embodiment, a complex bivariate nonlinear function is chosen and its maximum is sought. The bivariate nonlinear function is:
max f(x, y) = x·sin(4πx) + y·sin(20πy)
where the value ranges of x and y are as follows:
Figure BDA0002255151940000181
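To see why this function is a demanding benchmark, the short script below evaluates it on a grid; since the actual value ranges of x and y are given above only as an image, the bounds in the script are placeholders chosen purely to visualise the many local maxima.

```python
import numpy as np

def f(x, y):
    """The bivariate test function f(x, y) = x*sin(4*pi*x) + y*sin(20*pi*y)."""
    return x * np.sin(4 * np.pi * x) + y * np.sin(20 * np.pi * y)

# Placeholder bounds only; the true ranges (and hence the stated maximum of
# 17.9213) come from the original document's image and are not reproduced here.
x = np.linspace(0.0, 10.0, 501)
y = np.linspace(0.0, 5.0, 501)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
print(Z.max())   # best value on this placeholder grid, not the true maximum
```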
under the condition of using the quadratic function and the same threshold, the quantum flora algorithm, the quantum harmony search algorithm and the quantum genetic algorithm are tested, and the three algorithms are tested through the test functions, so that the obtained results are shown in fig. 4.
It can be found from fig. 4 that the convergence rate of the quantum flora algorithm is much faster than that of the quantum genetic algorithm and the quantum harmony search algorithm, and the convergence effect is also better than that of the quantum genetic algorithm, the quantum harmony search algorithm and the flora algorithm, and meanwhile, the convergence accuracy of the quantum flora algorithm is much better than that of the quantum genetic algorithm, the quantum harmony search algorithm and the flora algorithm. The four algorithms are respectively used for 10 times of test experiments, the optimal solution average value and the final algebraic average value of the three algorithms are calculated, and the obtained results are as follows: the two values for the quantum harmony search algorithm are 17.0035 and 50.1, the two values for the quantum genetic algorithm are 17.3503 and 96.9, the two values for the flora algorithm are 17.0012 and 65.3, and the two values for the quantum flora algorithm are 17.8024 and 94.8. The maximum value of the binary function is 17.9213, and it can be seen from the above results that the average value of the optimal solution calculated by the quantum flora algorithm is the maximum value closest to the binary function, and the optimal solution is not easy to fall into the local optimal solution while the convergence rate is high, which can indicate to a certain extent that the quantum flora algorithm can not only approach the global optimal solution well, but also improve the convergence rate, and has higher robustness.
Meanwhile, the value ranges of the two KELM parameters optimized by the quantum bacterial foraging algorithm, namely the penalty coefficient C and the kernel parameter σ, are as follows.
Figure BDA0002255151940000191
Based on the multi-scale CNN features and the quantum flora optimized KELM, an example is given below: 55,000 training images, 5,000 validation images and 10,000 test images are selected from the MNIST handwriting data set of the National Institute of Standards and Technology; part of the handwritten images are shown in FIG. 5.
After decomposition in the NSST domain, the image is decomposed into pictures at several scales. To determine which number of scales is most suitable for feature extraction and classification on the MNIST handwriting data set, experiments were performed without NSST and with two, three, four and five scales. After the feature values of each scale are extracted by the CNN, the four fusion modes, namely maximum, minimum, average and median, were compared to determine which is most suitable. The results of these experiments on decomposition scale and fusion mode are shown in FIG. 6: the test accuracy is highest, reaching 99.95%, with three scales and maximum-value feature fusion. Different decomposition modes yield different test accuracies: the accuracy without NSST decomposition is the lowest, showing that NSST decomposition plays an important role in extracting the detail features of the image; among the NSST decompositions, three scales give the highest accuracy, followed by two, four and five scales. Different fusion modes also yield different accuracies: taking the maximum value gives the highest accuracy, followed by the minimum, the median and the average. Therefore, three scales with maximum-value fusion is the optimal configuration. In this optimal state, the two parameters found by the quantum flora algorithm are a penalty coefficient C of 97.1309 and a kernel parameter σ of 82.3399.
A handwritten picture is selected, decomposed into three scales by the NSST multi-scale decomposition, and passed through the CNN model; the output of each CNN layer is shown in FIG. 7. The pictures passing through the convolutional and pooling layers show a continuously changing trend and layered characteristics, similar to the human visual system, so more details of the handwriting are extracted.
Finally, the method of the invention greatly improves the test accuracy and the experiment time of handwritten image recognition. The nearest neighbor algorithm, the support vector machine method, the CNN method and the CNN-ELM method were applied to the handwritten images and compared with the method of the invention; the training accuracy, training time, test accuracy and test time are shown in Table 2. In the handwritten image recognition experiments, the nearest neighbor algorithm is limited by being too simple, so its training and test accuracies are not high; the support vector machine is limited by its lack of generalization ability, and although its training and test accuracies improve, they remain relatively low; the CNN method is limited by the long time the CNN model takes to train; the CNN-ELM method improves greatly in training and test accuracy, but both remain lower than those of the method of the invention. In conclusion, the handwritten image recognition method based on multi-scale CNN features and quantum flora optimization of the KELM proposed in this embodiment has high recognition accuracy and meets the requirements of complex handwritten image recognition.
TABLE 2 comparison of handwritten image recognition results for different methods
Figure BDA0002255151940000211
Traditional handwritten image recognition methods are simple and easy to implement, but lack generalization ability and have low recognition accuracy on complex handwritten images. Deep learning methods for handwritten image recognition enhance generalization ability and recognition accuracy, but suffer from over-long model training time or too many parameters to optimize. In the invention, the handwritten image is first decomposed by NSST to obtain a multi-scale image with three scales; secondly, the detail features of each scale are extracted with a CNN model and fused by taking the maximum value; then, the parameters of the KELM model are optimized with the quantum flora algorithm to obtain the optimal KELM model; finally, the fused features are taken as the input of the KELM and recognized, completing handwritten image recognition.
According to this embodiment, recognition accuracy and generalization ability are enhanced while a KELM model with short training time is adopted; the KELM only needs two parameters to be optimized, and the CNN-KELM combination extracts more detail features, thereby greatly reducing the workload of handwritten image recognition and improving recognition accuracy.
Handwritten images are diverse and complex, their quality is uneven, and they are difficult to recognize quickly and accurately. In this embodiment, the NSST-CNN model extracts the detail features of the handwritten image and the KELM recognizes the features, so handwritten images can be recognized quickly while extremely high accuracy is guaranteed; this provides a new method for image recognition and benefits people's daily life and learning.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (5)

1. A handwritten image recognition method based on multi-scale CNN features and quantum flora optimization KELM is characterized in that: the method comprises the following steps:
step S1: establishing an NSST-CNN image decomposition and feature extraction model, obtaining a handwritten image, and performing NSST multi-scale decomposition on the handwritten image to obtain one low-frequency image and several high-frequency images, wherein the high-frequency images are not further decomposed into sub-band images;
step S2: respectively extracting characteristic values of the low-frequency image and the high-frequency images subjected to NSST decomposition in the step S1 through a CNN model; the CNN network structure passed by each scale comprises 3 convolutional layers, 3 pooling layers and 1 full-connection layer;
step S3: carrying out feature fusion on the feature values extracted by the CNN at each scale by taking the element-wise maximum, forming 1024 fused feature values; the fusion combines the feature values corresponding to each scale, and four fusion modes are possible: taking the maximum value, the minimum value, the average value or the median;
step S4: taking the 1024 fused feature values as the input of a kernel extreme learning machine (KELM), and taking the digit corresponding to each handwritten image as the output of the KELM; 80% of all input data is used to train the KELM model and 20% is used to test the trained model, yielding the test accuracy;
step S5: optimizing the two KELM parameters, namely the penalty coefficient C and the kernel parameter σ, with an improved quantum flora algorithm, wherein the two parameters are the variables of the algorithm and the handwritten image recognition accuracy is used as its fitness value;
step S6: constructing a KELM model with the optimal penalty coefficient C and kernel parameter σ obtained by the improved quantum flora algorithm to obtain the optimal KELM recognition model;
step S7: substituting the feature values of the test images obtained through the NSST-CNN model into the optimal KELM model to obtain the test accuracy, and predicting the handwritten digit images to be tested through steps S1 to S6 to obtain the recognition result, thereby completing handwritten image recognition.
2. The method of claim 1, characterized in that: the specific content of establishing the NSST-CNN image decomposition and feature extraction model in step S1 is as follows: the handwritten image is decomposed in the NSST domain; a non-subsampled pyramid performs multi-scale decomposition, and k decompositions of the image yield 1 low-frequency image and k layers of high-frequency sub-band images; a shearing filter then performs l-level multi-directional decomposition of the high-frequency sub-band to obtain 2^l multi-directional sub-bands of the same size as the original image;
the formula for two-dimensional image processing by NSST is shown in formula (1):
ψ_{k,l,b}(x) = |det A|^(k/2) ψ(S^l A^k x - b)   (1)
where ψ ∈ L²(R²) is the basis function of the affine system; k, l and b are respectively the scale coefficient, direction coefficient and translation coefficient of the decomposition; A is the anisotropic dilation matrix controlling the NSST scale decomposition, and S is the shear matrix controlling the NSST direction decomposition;
extracting detail features from the multi-scale images obtained by NSST decomposition with a convolutional neural network; in the CNN, the neurons of a convolutional layer are connected to only some of the neuron nodes of the previous layer, i.e. the connections between neurons are not fully connected, and the weights w and offsets b of the connections between some neurons in the same layer are shared, so as to reduce the number of parameters to be trained; the CNN structure comprises an input layer, convolutional layers, excitation layers, pooling layers, a fully connected layer and an output layer; the input layer is used for data input, and pictures enter the CNN structure in the form of data; the convolutional layers use convolution kernels for feature extraction and feature mapping, extracting different feature maps by repeatedly convolving the input image; each unit of the current layer is then connected to a local block of the feature map of the preceding layer through weights, and local weighting and nonlinear transformation are performed;
the excitation layer applies a nonlinear mapping to the output of the convolutional layer; the excitation function used is the ReLU function, f(x) = max(x, 0);
the specific operation of the pooling layer is as follows: first, the position of each feature is coarse-grained; second, each typical pooling unit computes the maximum of a local block within one or several feature maps; adjacent units take their input from blocks shifted by rows or columns, so as to reduce the dimensionality of the feature representation and provide robustness to small deformations such as translation and distortion; the fully connected layer is fitted at the tail of the CNN to reduce the loss of feature information; the output layer outputs the result.
3. The method of claim 1, characterized in that: the specific content of step S5 is as follows:
firstly, quantum computation is added to the flora algorithm. Qubits are the physical media used to store information, and one qubit is represented as |φ> = α|0> + β|1>, where (α, β) are two amplitude constants with |α|² + |β|² = 1, and |0> and |1> represent spin states; a qubit therefore simultaneously contains the information of the two states |0> and |1>. The gene that quantum-encodes n parameters is shown in formula (2):
q = [α1, α2, …, αn; β1, β2, …, βn]   (2)
then, a nonlinear adaptive rotation angle is added to the flora algorithm; the specific formula is:
θi = -sgn(Ai)·Δθi   (3)
where -sgn(Ai) indicates the direction of the rotation angle; (α0, β0) is the probability amplitude corresponding to a qubit of the currently optimal bacterium, and (αi, βi) is the probability amplitude corresponding to the same qubit of the current bacterium; Δθi, the magnitude of the rotation angle, is calculated by formula (4):
Figure FDA0002255151930000044
where θmax is the maximum value of the rotation angle; θmin is the minimum value of the rotation angle; θ0 and θi are the angles of a qubit of the current optimal bacterium and of the current bacterium on the unit circle, respectively; C is a constant representing the maximum of the angle difference |θ0 - θi|; and m is the nonlinear modulation index of the current evolution step.
The specific steps for realizing the improved quantum flora algorithm are as follows:
step SA: initialize the dimension of the parameter search space, the bacterial population size S, the number Nc of tropism (chemotaxis) operations, the maximum number Ns of steps advanced in one direction during a tropism operation, the number Nre of replication operations, the number Ned of migration operations, the migration probability Ped, and the forward swimming step length C(i) (i = 1, 2, …, S); the penalty coefficient C and the kernel parameter σ, which form the position θ of a bacterium, are generated randomly within their value ranges;
step SB: migration operation cycle, l = l + 1;
step SC: replication operation cycle, k = k + 1;
step SD: tropism operation cycle, j = j + 1;
step SE: if j < Nc, return to step SD and continue the tropism operation.
Step SF: replication: for the given k, l and each i = 1, 2, …, S, sort the bacterial energy values Jhealth in ascending order; the Sr bacteria with smaller energy (half of the population) are eliminated, the Sr bacteria with larger energy are selected, quantum computing is introduced for them, they are encoded with qubits, and the nonlinear adaptive quantum rotation gate operation is performed.
Step SG: if k < Nre, return to step SC.
Step SH: migration: after several generations of replication operations, each bacterium is redistributed randomly into the search space with probability Ped; if l < Ned, return to step SB; otherwise, end the optimization.
4. The method of claim 3, wherein step SD specifically comprises the following steps:
step SD1: let bacterium i, i = 1, 2, …, S, advance one step as follows;
step SD2: calculate the fitness value function J(i, j, k, l) using equation (5):
J(i, j, k, l) = J(i, j, k, l) + J_cc(θ_i(j, k, l), P(j, k, l))   (5)
step SD3: let J_last = J(i, j, k, l), stored as the current best fitness value of bacterium i;
step SD4: rotation: generate a random vector Δ(i) ∈ R^P, each element Δ_m(i) (m = 1, 2, …, P) of which is a random number distributed in [-1, 1];
step SD5: movement: let
θ_i(j+1, k, l) = θ_i(j, k, l) + C(i)·Δ(i)/√(Δ^T(i)Δ(i))   (6)
where C(i) is the size of one step taken by bacterium i swimming along the direction randomly generated after rotation;
step SD6: calculate J(i, j+1, k, l), and let
J(i, j+1, k, l) = J(i, j+1, k, l) + J_cc(θ_i(j+1, k, l), P(j+1, k, l))   (7)
step SD7: swimming: first let m = 0; then, while m < N_s, let m = m + 1; if J(i, j+1, k, l) < J_last, let J_last = J(i, j+1, k, l), take a further swimming step
θ_i(j+1, k, l) = θ_i(j+1, k, l) + C(i)·Δ(i)/√(Δ^T(i)Δ(i))
and return to step SD6 to calculate a new J(i, j+1, k, l) from this θ_i(j+1, k, l); otherwise, let m = N_s;
step SD8: return to step SD2 and process the next bacterium i + 1.
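As a concrete reading of steps SD1–SD8 for one bacterium, the sketch below (Python, assuming NumPy) evaluates the fitness with the cell-to-cell interaction term J_cc of equations (5) and (7), tumbles along a random direction as in equation (6), and swims while the value keeps improving; the attraction/repulsion form and coefficients of J_cc follow the standard bacterial foraging formulation and are assumptions, since the claim does not reproduce them.

import numpy as np

def j_cc(theta_i, population, d_att=0.1, w_att=0.2, h_rep=0.1, w_rep=10.0):
    # Cell-to-cell attraction/repulsion term J_cc(theta_i, P) used in (5) and (7);
    # the coefficients are illustrative defaults.
    diff2 = np.sum((population - theta_i) ** 2, axis=1)
    return float(np.sum(-d_att * np.exp(-w_att * diff2) + h_rep * np.exp(-w_rep * diff2)))

def chemotaxis_step(i, theta, cost_fn, C_i, Ns):
    # One tropism step for bacterium i over the population matrix theta (S x dim).
    dim = theta.shape[1]
    J_last = cost_fn(theta[i]) + j_cc(theta[i], theta)      # equation (5), step SD3
    delta = np.random.uniform(-1.0, 1.0, dim)               # step SD4: rotation
    direction = delta / np.sqrt(delta @ delta)
    theta[i] = theta[i] + C_i * direction                   # equation (6), step SD5
    J_new = cost_fn(theta[i]) + j_cc(theta[i], theta)       # equation (7), step SD6
    m = 0
    while m < Ns:                                           # step SD7: swimming
        m += 1
        if J_new < J_last:
            J_last = J_new
            theta[i] = theta[i] + C_i * direction           # further swimming step
            J_new = cost_fn(theta[i]) + j_cc(theta[i], theta)
        else:
            m = Ns
    return J_last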
5. The method of claim 1, wherein the specific content of the kernel extreme learning machine, i.e., the KELM, obtained in step S4 is as follows:
in the whole training process of the extreme learning machine, once the connection weights ω between the input layer and the hidden layer and the biases b of the hidden-layer neurons are selected, the resulting hidden-layer output matrix H is guaranteed to be unique, and the whole learning process amounts to solving the linear system Hβ = T, so that the connection weights β between the hidden layer and the output layer are calculated by formula (8):
β = H^+ T   (8)
in the above formula, H^+ denotes the generalized inverse of the matrix H; the generalized inverse of the matrix H is computed by the orthogonal projection method, as shown in equation (9):
H^+ = H^T(HH^T)^(-1)   (9)
if a positive number 1/C is added to the diagonal of HH^T, the solution is more stable and the generalization performance is stronger, which gives formula (10):
β = H^T(I/C + HH^T)^(-1) T   (10)
therefore, the output function of the extreme learning machine can be expressed as shown in equation (11):
f(x) = h(x)β = h(x)H^T(I/C + HH^T)^(-1) T   (11)
in the above formula, I is the identity matrix and C is the penalty coefficient.
To eliminate the effect of the randomness of the hidden-layer mapping h(x), HH^T is replaced by a kernel function, and the kernel matrix is defined according to the Mercer condition, as shown in equations (12) and (13):
Ω_ELM = HH^T   (12)
Ω_ELM(i, j) = h(x_i)·h(x_j) = K(x_i, x_j)   (13)
therefore, the output of the kernel extreme learning machine is expressed as shown in formula (14):
f(x) = [K(x, x_1), …, K(x, x_N)](I/C + Ω_ELM)^(-1) T   (14)
the RBF kernel is selected as the kernel function of the KELM, as shown in equation (15):
K(x_i, x_j) = exp(−σ‖x_i − x_j‖^2), σ > 0   (15).
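As a compact illustration of equations (12)–(15), the following Python sketch (assuming NumPy) trains and applies a KELM classifier in closed form; the one-hot target encoding, the argmax decoding, and the synthetic feature matrix standing in for the fused multi-scale CNN features are assumptions made for the example, not part of the claim.

import numpy as np

def rbf_kernel(X, Z, sigma):
    # RBF kernel of equation (15): K(x_i, x_j) = exp(-sigma * ||x_i - x_j||^2).
    d2 = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Z ** 2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-sigma * np.maximum(d2, 0.0))

class KELM:
    # Kernel extreme learning machine solved in closed form (equation (14)).
    def __init__(self, C=1.0, sigma=1.0):
        self.C = C            # penalty coefficient
        self.sigma = sigma    # RBF kernel parameter

    def fit(self, X, y, n_classes):
        self.X_train = X
        T = np.eye(n_classes)[y]                     # one-hot targets
        omega = rbf_kernel(X, X, self.sigma)         # kernel matrix, equations (12)-(13)
        n = X.shape[0]
        # alpha = (I/C + Omega)^(-1) T, so that f(x) = [K(x, x_1), ..., K(x, x_N)] alpha.
        self.alpha = np.linalg.solve(np.eye(n) / self.C + omega, T)
        return self

    def predict(self, X):
        scores = rbf_kernel(X, self.X_train, self.sigma) @ self.alpha
        return np.argmax(scores, axis=1)

# Example on synthetic features standing in for the fused multi-scale CNN features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 10, size=200)
model = KELM(C=10.0, sigma=0.05).fit(X, y, n_classes=10)
print("training accuracy:", (model.predict(X) == y).mean())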
CN201911050472.7A 2019-10-31 2019-10-31 Handwritten image recognition method based on multi-scale CNN (CNN) features and quantum flora optimization KELM Active CN110969086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911050472.7A CN110969086B (en) 2019-10-31 2019-10-31 Handwritten image recognition method based on multi-scale CNN (CNN) features and quantum flora optimization KELM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911050472.7A CN110969086B (en) 2019-10-31 2019-10-31 Handwritten image recognition method based on multi-scale CNN (CNN) features and quantum flora optimization KELM

Publications (2)

Publication Number Publication Date
CN110969086A true CN110969086A (en) 2020-04-07
CN110969086B CN110969086B (en) 2022-05-13

Family

ID=70030221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911050472.7A Active CN110969086B (en) 2019-10-31 2019-10-31 Handwritten image recognition method based on multi-scale CNN (CNN) features and quantum flora optimization KELM

Country Status (1)

Country Link
CN (1) CN110969086B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190306526A1 (en) * 2018-04-03 2019-10-03 Electronics And Telecommunications Research Institute Inter-prediction method and apparatus using reference frame generated based on deep learning
CN109165610A (en) * 2018-08-31 2019-01-08 昆明理工大学 A kind of Handwritten Digital Recognition detection method evolved based on simple form
CN110246106A (en) * 2019-06-22 2019-09-17 福州大学 The enhancing of the domain NSST floatation foam image and denoising method based on quantum harmony search fuzzy set
CN110287975A (en) * 2019-06-28 2019-09-27 福州大学 Flotation dosing abnormity detection method based on NSST morphological characteristics and depth KELM

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU SHUAIQI ET AL.: "CNN in Non-subsampled Shearlet Domain", 《JOURNAL OF ZHENGZHOU UNIVERSITY (ENGINEERING SCIENCE)》 *
张进 ET AL.: "Flotation dosing state recognition based on multi-scale CNN features and RAE-KELM", 《激光与光电子学进展》 (Laser & Optoelectronics Progress) *
陈建原: "Research on new image classification algorithms based on the extreme learning machine", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459140A (en) * 2020-04-10 2020-07-28 北京工业大学 Fermentation process fault monitoring method based on HHT-DCNN
CN112861594A (en) * 2020-07-17 2021-05-28 宁夏大学 Online handwritten digit recognition method based on incremental semi-supervised kernel extreme learning machine
CN112861594B (en) * 2020-07-17 2023-07-28 宁夏大学 Online handwriting digital recognition method based on incremental semi-supervised kernel extreme learning machine
CN112037868A (en) * 2020-11-04 2020-12-04 腾讯科技(深圳)有限公司 Training method and device for neural network for determining molecular reverse synthetic route
CN112037868B (en) * 2020-11-04 2021-02-12 腾讯科技(深圳)有限公司 Training method and device for neural network for determining molecular reverse synthetic route
CN112507863B (en) * 2020-12-04 2023-04-07 西安电子科技大学 Handwritten character and picture classification method based on quantum Grover algorithm
CN112507863A (en) * 2020-12-04 2021-03-16 西安电子科技大学 Handwritten character and picture classification method based on quantum Grover algorithm
CN113361664B (en) * 2021-08-10 2021-11-05 北京航空航天大学 Image recognition system and method based on quantum convolution neural network
CN113361664A (en) * 2021-08-10 2021-09-07 北京航空航天大学 Image recognition system and method based on quantum convolution neural network
CN113673415B (en) * 2021-08-18 2022-03-04 山东建筑大学 Handwritten Chinese character identity authentication method and system
CN113673415A (en) * 2021-08-18 2021-11-19 山东建筑大学 Handwritten Chinese character identity authentication method and system
CN114219076A (en) * 2021-12-15 2022-03-22 北京百度网讯科技有限公司 Quantum neural network training method and device, electronic device and medium
CN114219076B (en) * 2021-12-15 2023-06-20 北京百度网讯科技有限公司 Quantum neural network training method and device, electronic equipment and medium
CN117809230A (en) * 2024-02-29 2024-04-02 四川省水利科学研究院 Water flow velocity identification method based on image identification and related products

Also Published As

Publication number Publication date
CN110969086B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN110969086B (en) Handwritten image recognition method based on multi-scale CNN (CNN) features and quantum flora optimization KELM
Hu et al. Nas-count: Counting-by-density with neural architecture search
Liu et al. Scene classification via triplet networks
Sau et al. Deep model compression: Distilling knowledge from noisy teachers
CN111723674B (en) Remote sensing image scene classification method based on Markov chain Monte Carlo and variation deduction and semi-Bayesian deep learning
Huang et al. Deep and wide multiscale recursive networks for robust image labeling
Zhang et al. Rotation invariant local binary convolution neural networks
Yee et al. DeepScene: Scene classification via convolutional neural network with spatial pyramid pooling
CN115731441A (en) Target detection and attitude estimation method based on data cross-modal transfer learning
CN114067385A (en) Cross-modal face retrieval Hash method based on metric learning
CN111931801B (en) Dynamic route network learning method based on path diversity and consistency
Wickramasinghe et al. Parallalizable deep self-organizing maps for image classification
CN116188900A (en) Small sample image classification method based on global and local feature augmentation
Love et al. Topological deep learning
CN111310820A (en) Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration
CN114492581A (en) Method for classifying small sample pictures based on transfer learning and attention mechanism element learning application
Roy et al. Classification of massive noisy image using auto-encoders and convolutional neural network
Luo et al. Piecewise linear regression-based single image super-resolution via Hadamard transform
Chacon-Murguia et al. Moving object detection in video sequences based on a two-frame temporal information CNN
Zhang et al. Effective traffic signs recognition via kernel PCA network
CN114882288B (en) Multi-view image classification method based on hierarchical image enhancement stacking self-encoder
CN115222998A (en) Image classification method
Chen et al. Understanding the role of self-supervised learning in out-of-distribution detection task
Lin et al. Ml-capsnet meets vb-di-d: A novel distortion-tolerant baseline for perturbed object recognition
Wang et al. Adaptive normalized risk-averting training for deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant