CN110110845B - Learning method based on parallel multi-level width neural network - Google Patents
Learning method based on parallel multi-level width neural network
- Publication number
- CN110110845B CN110110845B CN201910331708.8A CN201910331708A CN110110845B CN 110110845 B CN110110845 B CN 110110845B CN 201910331708 A CN201910331708 A CN 201910331708A CN 110110845 B CN110110845 B CN 110110845B
- Authority
- CN
- China
- Prior art keywords
- neural network
- level
- test
- width
- sample set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Investigating Or Analysing Biological Materials (AREA)
Abstract
The invention discloses a learning method based on a parallel multi-level width neural network, which comprises the following steps: obtaining verification sets and constructing base classifiers; training and verifying each level of the parallel M-level width neural network to obtain the trained parallel M-level width neural network and the verification output corresponding to each level of width neural network; obtaining the decision threshold of each level of width neural network through statistical calculation; and testing the verified parallel multi-level width neural network on a test set. The neural network has a multi-level structure, each level learns a different part of the data, and training and testing can be parallelized. Each level uses a width neural network to perform feature learning in the width direction; connecting several width neural networks as base classifiers in the width direction integrates the classifiers along two width directions; incremental learning of the network is realized by adding a new level of width neural network; and testing can be parallelized.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence and machine learning, and particularly relates to a learning method based on a parallel multi-level width neural network.
Background
With the great success of learning models based on deep neural networks in fields such as large-scale image processing and machine vision, the complexity of these models has grown rapidly, and large amounts of high-dimensional data are required for training, so the required computing resources and computing time have increased greatly. In addition, real data are often not homogeneous: some samples are very easy to classify, but many are difficult. Most classification errors occur on inputs that are hard to classify, such as samples from imbalanced distributions, abnormally acquired samples, and samples close to classification boundaries or that are linearly inseparable.
Existing deep learning models process simple and complex samples in the same way, which reduces the efficiency with which computing resources are used. Moreover, existing deep networks such as convolutional neural networks often have many layers, all samples must pass through all layers, and generalizing or testing the network is time-consuming. Early parallel multi-stage networks let each stage receive only nonlinearly transformed versions of the samples rejected by the previous stage, mapping them to other spaces where they are easier to classify before classifying them again. However, the problem of how to adjust and allocate computing resources for high-dimensional data according to the difficulty of the samples, so as to improve the speed and efficiency of learning and classification, has not been well solved.
Disclosure of Invention
In view of the above drawbacks, the present invention provides a learning method based on a parallel multi-level width neural network. The neural network of the invention has a multi-level structure, each level learns a different part of the data, and training and testing can be parallelized. Each level uses a width neural network to perform feature learning in the width direction; connecting several width neural networks as base classifiers in the width direction integrates the classifiers along two width directions; incremental learning of the network is realized by adding a new level of width neural network; and testing can be parallelized, which greatly shortens the learning and classification time for complex samples and improves the operating efficiency of the network.
In order to achieve the above object, the present invention adopts the following technical solutions.
A learning method based on a parallel multi-level width neural network, wherein the parallel multi-level width neural network comprises multiple levels of width neural networks, each level comprising an input layer, a hidden layer, a decision layer and an output layer connected in sequence, the decision layer being used to determine whether each test sample is output by the current level; the learning method comprises the following steps:
Here the total number of samples of the original training sample set is N_tr.
Step 4: acquire a test set and input it in parallel, as the input data of the parallel M-level width neural network determined by the decision thresholds, to each level of the network determined by the decision thresholds for testing, obtaining the output of each such level; obtain the error vector of each level of width neural network and judge the output of each level determined by the decision thresholds, thereby obtaining the label y_test_ind_m corresponding to the test output of each level of width neural network determined by the decision thresholds.
The technical scheme of the invention has the characteristics and further improvements that:
(1) In step 1, the data transformation compresses or deforms the samples in the original sample set through elastic transformation, or rotates, flips, zooms in or zooms out the samples in the original sample set through affine transformation.
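As an illustration of this data transformation, the sketch below shows one common way to realize an elastic and an affine transformation with NumPy/SciPy. The function names and parameter values (e.g. `alpha`, `sigma`, the rotation and zoom ranges) are illustrative assumptions, not part of the patented method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates, affine_transform

def elastic_transform(image, alpha=34.0, sigma=4.0, rng=None):
    """Elastic deformation: a smooth random displacement field applied to the pixel grid."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.vstack([(y + dy).ravel(), (x + dx).ravel()])
    return map_coordinates(image, coords, order=1).reshape(h, w)

def random_affine(image, rng=None):
    """Affine transformation: small random rotation and zoom about the image centre."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(-np.pi / 12, np.pi / 12)          # rotation angle
    s = rng.uniform(0.9, 1.1)                             # zoom in / zoom out factor
    m = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]]) / s
    centre = np.array(image.shape) / 2.0
    offset = centre - m @ centre                          # keep the centre fixed
    return affine_transform(image, m, offset=offset, order=1)
```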
(2) In step 2, using the original training sample set and the M verification sets x_v_1, …, x_v_m, …, x_v_M to train and verify each level of the parallel M-level width neural network respectively comprises the following sub-steps:
Sub-step 2.1: use the original training sample set as the input of the 1st-level width neural network Net_1 and train Net_1 to obtain the trained 1st-level width neural network.
Sub-step 2.2: use the first verification set x_v_1 to verify the trained 1st-level width neural network and obtain the misclassified sample set y_vw_1 of the verification set of the 1st-level width neural network.
Sub-step 2.3: use the misclassified sample set y_vw_1 of the 1st-level width neural network as input samples A_v_1 of the 2nd-level width neural network; then randomly extract a training sample set A_v_2 from the original training sample set so that the number of samples in the total input sample set {A_v_1 + A_v_2} equals the number of samples in the original training sample set, and use {A_v_1 + A_v_2} as the input samples of the 2nd-level width neural network.
Sub-step 2.4: use the total input sample set {A_v_1 + A_v_2} to train the 2nd-level width neural network and obtain the trained 2nd-level width neural network; use the second verification set x_v_2 to verify it and obtain the misclassified sample set y_vw_2 of the verification set of the 2nd-level width neural network.
By analogy, the 3rd- to Mth-level width neural networks are trained in turn to obtain the trained parallel M-level width neural network and the verification output y_v_m (m = 1, 2, …, M) corresponding to each level.
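A minimal sketch of the cascade in sub-steps 2.1–2.4, assuming a generic classifier object with `fit`/`predict` methods (the interface, and padding by random sampling without replacement, are assumptions; any width neural network could be substituted as the base classifier):

```python
import numpy as np

def train_cascade(levels, x_train, y_train, val_sets):
    """Train the parallel M-level network: level m+1 is trained on the samples the
    previous level's verification misclassified, topped up from the original set."""
    x_in, y_in = x_train, y_train                     # inputs of the current level
    n_tr = len(x_train)
    misclassified = []                                # (x, y) rejected by each level
    for net, (x_val, y_val) in zip(levels, val_sets):
        net.fit(x_in, y_in)
        pred = net.predict(x_val)
        wrong = pred != y_val                         # misclassified verification samples
        xw, yw = x_val[wrong], y_val[wrong]
        misclassified.append((xw, yw))
        # pad with random original training samples so the next level sees about N_tr samples
        pad = np.random.choice(n_tr, max(n_tr - len(xw), 0), replace=False)
        x_in = np.concatenate([xw, x_train[pad]])
        y_in = np.concatenate([yw, y_train[pad]])
    return levels, misclassified
```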
(3) In step 2, the minimum error method is as follows:
First, let the total number of classes of the original training sample set be C and construct reference matrices R_j (1 ≤ j ≤ C).
Each reference matrix R_j has 1 for the elements in its jth row and 0 for the remaining elements, and the dimension of each R_j is C × N_tr.
Second, from the verification output y_v_m of each level of the trained width neural network, obtain the error vector between y_v_m and each reference matrix R_j of that level:
J_v_mj = ||softmax(y_v_m) − R_j||_2, 1 ≤ j ≤ C;
where the dimension of J_v_mj is 1 × N_tr and the dimension of y_v_m is C × N_tr.
Finally, take the minimum over j of the error vectors J_v_mj between the verification output y_v_m and the reference matrices R_j of that level to obtain the class label corresponding to each level of the trained width neural network: y_v_ind_m = argmin_j(J_v_mj);
where the dimension of y_v_ind_m is 1 × N_tr.
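The minimum-error labelling above can be written compactly as in the sketch below; y_v_m is the C × N_tr verification output of level m, and the softmax and column-wise 2-norm follow the formulas just given (the numerically stabilised softmax is an implementation assumption).

```python
import numpy as np

def softmax(y):
    """Column-wise softmax of a C x N output matrix (shifted for numerical stability)."""
    e = np.exp(y - y.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def min_error_labels(y_v_m, num_classes):
    """Return the error vectors J_v_mj for every class j and the argmin labels y_v_ind_m."""
    p = softmax(y_v_m)                          # C x N_tr
    J = np.empty((num_classes, p.shape[1]))     # J[j] is the 1 x N_tr error vector for R_j
    for j in range(num_classes):
        r_j = np.zeros_like(p)
        r_j[j, :] = 1.0                         # reference matrix R_j: ones in row j
        J[j] = np.linalg.norm(p - r_j, axis=0)  # per-sample 2-norm
    labels = J.argmin(axis=0)                   # y_v_ind_m, dimension 1 x N_tr
    return J, labels
```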
(4) In step 3, the statistical calculation comprises the following substeps:
Sub-step 3.1: let the correctly classified sample set and the misclassified sample set of the mth-level width neural network of the trained parallel M-level width neural network be y_vc_m and y_vw_m, and let the total numbers of samples in these two sets be N_vc_m and N_vw_m respectively, with N_vc_m + N_vw_m = N_tr. The errors of the correctly classified and misclassified sample sets are then, respectively:
e_vc_m = ||softmax(y_vc_m) − t_vc_m||_2;
e_vw_m = ||softmax(y_vw_m) − t_vw_m||_2;
where t_vc_m is the true label corresponding to the correctly classified samples y_vc_m of the mth-level width neural network and t_vw_m is the true label corresponding to the misclassified samples y_vw_m.
Sub-step 3.2: for the correctly classified sample set y_vc_m and the misclassified sample set y_vw_m, compute the mean and variance of y_vc_m, denoted u_c and σ_c, and the mean and variance of y_vw_m, denoted u_w and σ_w; the Gaussian distributions corresponding to y_vc_m and y_vw_m are then N(u_c, σ_c²) and N(u_w, σ_w²).
The Gaussian probability density functions corresponding to the correctly classified sample set y_vc_m and the misclassified sample set y_vw_m are f_c(e) = exp(−(e − u_c)²/(2σ_c²))/(√(2π)·σ_c) and f_w(e) = exp(−(e − u_w)²/(2σ_w²))/(√(2π)·σ_w).
Sub-step 3.3: from the error e_vw_m of the misclassified sample set y_vw_m and the variance σ_w, obtain the decision threshold of the mth-level width neural network: T_m = min(e_vw_m) − ασ_w;
where α is a constant that provides a margin so that all misclassified samples y_vw_m are rejected at the current level.
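A sketch of this statistical calculation, reusing the `softmax` helper from the previous sketch; here the per-sample errors are taken against the one-hot true labels t, the Gaussian statistics are fitted to those errors, and the value of α is an assumption:

```python
import numpy as np

def decision_threshold(y_vc_m, t_vc_m, y_vw_m, t_vw_m, alpha=0.5):
    """T_m = min(e_vw_m) - alpha * sigma_w, computed from the verification errors."""
    e_vc = np.linalg.norm(softmax(y_vc_m) - t_vc_m, axis=0)   # errors of correct samples
    e_vw = np.linalg.norm(softmax(y_vw_m) - t_vw_m, axis=0)   # errors of misclassified samples
    u_c, sigma_c = e_vc.mean(), e_vc.std()                    # Gaussian statistics, correct set
    u_w, sigma_w = e_vw.mean(), e_vw.std()                    # Gaussian statistics, wrong set
    t_m = e_vw.min() - alpha * sigma_w                        # margin so all wrong samples are rejected
    return t_m, (u_c, sigma_c), (u_w, sigma_w)
```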
(5) In step 4, the test set is acquired as follows: obtain the original test sample set x_test; through M data expansions, correspondingly obtain M groups of test sample sets x_test_1, …, x_test_m, …, x_test_M, i.e., the test set.
Further, the data expansion is: perform N_testD data transformations on each sample in the original test sample set x_test to obtain N_testD test sample sets, which serve as the test set x_test_m of the mth-level width neural network of the parallel M-level width neural network determined by the decision thresholds.
Here the total number of test samples in the original test sample set x_test is N_test_samples.
(6) In step 4, obtaining the error vector of each level of width neural network comprises the following sub-steps:
Sub-step 4.1: input the M groups of test sample sets x_test_1, x_test_2, …, x_test_M in parallel to the parallel M-level width neural network determined by the decision thresholds, and correspondingly obtain the N_testD outputs y_test_m_d (d = 1, 2, …, N_testD) of each level of width neural network determined by the decision thresholds.
Sub-step 4.2: average the N_testD outputs y_test_m_d (d = 1, 2, …, N_testD) of each level determined by the decision thresholds to obtain the test output y_test_m of each level of width neural network determined by the decision thresholds.
Sub-step 4.3: let the total number of classes of the test set be C and construct reference matrices R_j (1 ≤ j ≤ C); obtain the error vector between the test output y_test_m and the reference matrix R_j of that level:
J_test_mj = ||softmax(y_test_m) − R_j||_2, 1 ≤ j ≤ C;
where each reference matrix R_j has 1 for the elements in its jth row and 0 for the remaining elements and the dimension of each R_j is C × N_test_samples; the dimension of J_test_mj is 1 × N_test_samples and the dimension of y_test_m is C × N_test_samples.
(7) The output of each level of width neural network determined by the decision thresholds is judged as follows:
when the minimum error of the current-level width neural network is less than or equal to the current-level decision threshold, the current level is judged to be the correct classification output level for that output:
min(J_test_mj) ≤ T_m.
When the minimum error of the current-level width neural network is greater than the current-level decision threshold, it is judged that the current level cannot correctly classify that output; the output is transferred to the next-level width neural network for testing, and this is repeated until the output finds its correct classification output level:
min(J_test_mj) > T_m.
(8) In step 4, the label y_test_ind_m corresponding to the test output of each level of width neural network determined by the decision thresholds is obtained as y_test_ind_m = argmin_j(J_test_mj);
where the dimension of y_test_ind_m is 1 × N_test_samples.
Compared with the prior art, the invention has the beneficial effects that:
(1) The neural network provided by the invention has multiple levels of base classifiers, each level learning a different part of the data set; the structure of the network can be determined adaptively according to the problem and the complexity of the data set, so that computing resources are used optimally.
(2) The neural network has the advantage of incremental learning: when new training data become available, the current network first judges whether the newly added data can be classified correctly, and if they cannot, a new width radial basis function network is added as a new level of the network to learn the new samples, without retraining the whole network.
(3) The neural network of the invention can carry out parallel test during testing, namely test data are simultaneously sent to all stages of the network, and the decision threshold value of each stage obtained in the training process is used for deciding which stage of the neural network each test sample is finally output by, so that the waiting time during actual use of the network is greatly reduced in the parallel test process.
(4) The neural network can be used as a universal learning framework, has strong flexibility, and can use a BP neural network, a convolutional neural network or other types of classifiers according to actual needs at each stage.
Drawings
The invention is described in further detail below with reference to the figures and specific embodiments.
FIG. 1 is a schematic diagram of a parallel multi-stage neural network of the present invention and its training test process; wherein, FIG. 1(a) is a schematic diagram of a parallel multi-level width neural network of the present invention; FIG. 1(b) is a schematic diagram of the training and validation process of the parallel multi-level width neural network of the present invention; FIG. 1(c) is a schematic diagram of the testing process of the parallel multi-level width neural network of the present invention.
FIG. 2 is a block diagram of a parallel multi-level width neural network of the present invention.
FIG. 3(a) is an error profile of a validation set of a parallel multi-level width neural network of the present invention at one of the levels; fig. 3(b) is a gaussian probability density function of the statistical parameters in fig. 3 (a).
Fig. 4 is a comparison graph of the test result of the parallel 26-level width neural network on the MNIST data set and the classification result of the existing learning model in the embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to examples, but those skilled in the art will appreciate that the following examples are only illustrative of the present invention and should not be construed as limiting the scope of the present invention.
The MNIST handwriting data set is used. Each image in the data set is an 8-bit grayscale image of a handwritten digit 0–9 of size 28 × 28, giving 10 classes in total; the original training sample set contains 60000 images and 10000 images serve as the test set. MNIST is one of the important general image data sets for training and testing new learning models. For this data set, referring to FIG. 1 and FIG. 2, this embodiment uses a width radial basis function network as the base classifier, i.e., each level of the parallel multi-level width neural network uses a width radial basis function network, and the number of levels of the parallel width neural network is chosen to be 26.
(1) Obtain the verification sets and construct the base classifiers.
First, elastic transformation is applied 26 times to the N_tr = 60000 image samples of the original training sample set to obtain M = 26 verification sets x_v_1, x_v_2, …, x_v_26. In this embodiment, to ensure that the verification sets contain enough misclassified samples, each verification set includes N_val = 10 data sets transformed from the original training set, i.e., the number of samples in each verification set is N_val = 10 times that of the original training sample set.
Second, a parallel multi-level width neural network is designed with width radial basis function networks as base classifiers: M = 26 width radial basis function networks are connected together to form the parallel multi-level width neural network Net_1, Net_2, …, Net_M; each base classifier, as one level, focuses on a different portion of the data set.
Finally, the width radial basis function network is constructed. The specific process is as follows:
A radial basis function network containing N_0k = 1000 Gaussian basis functions is constructed; its centers are a subset randomly taken from the original training sample set and its standard deviation is a constant. A sliding window is used to obtain several groups of local feature images of each image sample in the original training sample set, giving several groups of local feature matrices; these local feature matrices are used as the input data of the Gaussian basis functions to obtain multiple radial basis function networks, which together form the width radial basis function network.
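A sketch of the construction just described: 13 × 13 sliding-window patches are extracted from every image, flattened into local feature matrices x_k, and each x_k is passed through N_0k Gaussian basis functions whose centres are randomly drawn training patches. The sizes follow the embodiment, but the helper names and the value of the width parameter `sigma` are assumptions.

```python
import numpy as np

def extract_patches(images, win=13, step=1):
    """Return a list of local feature matrices x_k (one per window position),
    each of shape (win*win, N) with one column per image sample."""
    n, h, w = images.shape
    mats = []
    for i in range(0, h - win + 1, step):
        for j in range(0, w - win + 1, step):
            patch = images[:, i:i + win, j:j + win].reshape(n, -1)  # N x r
            mats.append(patch.T)                                    # r x N, one column per sample
    return mats                                                     # K = 256 matrices for 28x28, win=13

def gaussian_basis_outputs(x_k, centers, sigma=1.0):
    """Phi_k[i, n] = exp(-||x_n - c_i||^2 / (2 sigma^2)) for the N_0k centres c_i.
    Written for clarity, not memory efficiency."""
    d2 = ((x_k.T[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # N x N_0k squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2)).T                        # N_0k x N
```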
(2) Train and verify each level of the parallel M-level width neural network to obtain the trained parallel M-level width neural network and the verification output y_v_m (m = 1, 2, …, M) corresponding to each level.
The 1st-level width radial basis function network is trained on the original training sample set, after which the misclassified training samples are sent to the 2nd-level width radial basis function network as part of the second training set to train the 2nd-level network. The verification set obtained in step (1) is used to verify the current level's trained network while also providing more misclassified samples as part of the next level's training set. As shown in FIG. 1(a) and (b), this specifically comprises the following sub-steps:
Sub-step 2.1: use the original training sample set as the input of the 1st-level width neural network Net_1 and train Net_1 to obtain the trained 1st-level width neural network.
Sub-step 2.2: use the first verification set x_v_1 to verify the trained 1st-level width neural network and obtain its misclassified sample set y_vw_1.
Sub-step 2.3: use the misclassified sample set y_vw_1 of the 1st-level width neural network as input samples A_v_1 of the 2nd-level width neural network; then randomly extract a training sample set A_v_2 from the original training sample set so that the number of samples in the total input sample set {A_v_1 + A_v_2} equals the number of samples in the original training sample set, and use {A_v_1 + A_v_2} as the input samples of the 2nd-level width neural network.
Sub-step 2.4: use the total input sample set {A_v_1 + A_v_2} to train the 2nd-level width neural network and obtain the trained 2nd-level width neural network; use the second verification set x_v_2 to verify it and obtain its misclassified sample set y_vw_2.
Repeat sub-steps 2.3 and 2.4 to train the 3rd- to Mth-level width neural networks, obtaining the trained parallel M-level width neural network and the verification output y_v_m (m = 1, 2, …, M) corresponding to each level.
The specific training and verification process of the above-mentioned wide radial basis function network is as follows:
The image samples in the original training sample set are used as input data; the image size is M_1 × M_2 = 28 × 28 and the sliding window size is r = 13 × 13. The initial position of the sliding window is set at the top-left corner of each image sample, the sliding step is 1 pixel, and the window slides from left to right and from top to bottom in turn. The 3-dimensional image blocks of the 60000 image samples under the sliding window are stretched into a matrix x_k ∈ R^(r×N): each local feature image is formed into a corresponding original matrix by pixels, the 2nd to last columns of each original matrix are appended in sequence below the 1st column to form a column vector, and the N column vectors are arranged in order to form the local feature matrix x_k (1 ≤ k ≤ K) of a group of training image samples, each column of which represents one sample. The local feature matrix x_k is then input to the network containing N_0k = 1000 Gaussian basis functions and the output is denoted Φ_k.
Each position of the sliding window corresponds to one radial basis function network; after the sliding is finished, K = (M_1 − m + 1)(M_2 − m + 1) = (28 − 13 + 1) × (28 − 13 + 1) = 256 radial basis function networks are obtained.
For each radial basis function network, sorting and downsampling are introduced on the nonlinearly transformed output data Φ_k of its Gaussian basis functions. Each row of the output data Φ_k of the width radial basis function network is summed to obtain a row vector, each element of which is the pixel sum of the local specific position of one image to be processed; these pixel sums are arranged in descending order to obtain a descending-order vector a_k, and the index s_k marks the original position corresponding to the local specific position of each image to be processed, giving the sorted output data Φ'_k = sort(Φ_k, s_k).
The sorted output data are downsampled: with downsampling interval N_kS, the number of sampled outputs is 20.
The sampled output is Φ_kS = subsample(Φ'_k, N_kS), the total number of outputs of the width radial basis function network is therefore 20 × K, and the output of the Gaussian basis functions is Φ = [Φ_1S, Φ_2S, …, Φ_KS].
The desired output is set to D = [D_1, D_2, …, D_C]; the Gaussian basis function outputs of the width radial basis function network are connected by a linear layer whose weights are W = [W_1, W_2, …, W_C];
where C = 10 is the total number of classes of the original samples.
The class output of the width radial basis function network is Y = [Y_1, Y_2, …, Y_C] = ΦW; specifically, the least-mean-square estimate of the linear-layer weights is computed by minimizing the squared error, as follows:
the least-mean-square estimate of the linear-layer weights is obtained through the pseudo-inverse matrix of the Gaussian basis function output Φ of the width radial basis function network, Ŵ = Φ⁺D;
where Φ⁺ is the pseudo-inverse matrix of the Gaussian basis function output Φ of the width radial basis function network.
Finally, the class output of the width radial basis function network is computed as Y = ΦŴ.
This yields a trained width radial basis function network; the trained width radial basis function network of each level is verified with the corresponding verification set, giving the verification output y_v_m (m = 1, 2, …, M) corresponding to each level.
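The sorting/downsampling and the closed-form linear layer of this training procedure can be sketched as follows; the one-hot desired output D and the pseudo-inverse solution Ŵ = Φ⁺D follow the text, while the helper names and the way the downsampling interval is chosen are assumptions.

```python
import numpy as np

def sort_and_subsample(phi_k, n_out=20):
    """Order the rows of Phi_k by descending row sums and keep n_out evenly spaced rows."""
    s_k = np.argsort(phi_k.sum(axis=1))[::-1]       # index s_k of the descending-order row sums
    phi_sorted = phi_k[s_k]                         # Phi'_k = sort(Phi_k, s_k)
    interval = max(phi_sorted.shape[0] // n_out, 1) # downsampling interval N_kS
    return phi_sorted[::interval][:n_out]           # Phi_kS = subsample(Phi'_k, N_kS)

def fit_linear_layer(phi, labels, num_classes):
    """Least-mean-square linear layer: W_hat = pinv(Phi) D, class output Y = Phi W_hat.
    Here phi is arranged as samples x features."""
    d = np.eye(num_classes)[labels]                 # desired output D, one-hot, N x C
    w_hat = np.linalg.pinv(phi) @ d                 # pseudo-inverse solution
    return w_hat, phi @ w_hat                       # weights W_hat and class outputs Y

# Example of assembling the full feature matrix from the per-window outputs phi_list:
# phi = np.hstack([sort_and_subsample(p).T for p in phi_list])   # N x (20 * K)
```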
From the verification outputs y_v_m (m = 1, 2, …, M), the class label y_v_ind_m corresponding to each verification output y_v_m is obtained as follows:
First, let the total number of classes of the original training sample set be C and construct reference matrices R_j (1 ≤ j ≤ C).
Each reference matrix R_j has 1 for the elements in its jth row and 0 for the remaining elements, and the dimension of each R_j is C × N_tr.
Second, from the verification output y_v_m of each level of the trained width neural network, obtain the error vector between y_v_m and each reference matrix R_j of that level:
J_v_mj = ||softmax(y_v_m) − R_j||_2, 1 ≤ j ≤ C;
where the dimension of J_v_mj is 1 × N_tr and the dimension of y_v_m is C × N_tr.
Finally, take the minimum over j of the error vectors J_v_mj to obtain the class label corresponding to each level of the trained width neural network: y_v_ind_m = argmin_j(J_v_mj);
where the dimension of y_v_ind_m is 1 × N_tr.
Comparing the class label y_v_ind_m corresponding to each level of the trained width neural network with the verification output y_v_m of each level yields the correctly classified sample set y_vc_m and the misclassified sample set y_vw_m of each level of width neural network.
(3) Obtain the decision threshold T_m of each level of width neural network through statistical calculation.
The more difficult part of the present network is determining the decision threshold of each level, which is used at test time to decide by which level of the network each sample should be output. After training and verification, statistical calculations are performed on the correctly classified sample set and the misclassified sample set separately. Assume that in the mth-level width neural network the correctly classified sample set and the misclassified sample set are y_vc_m and y_vw_m, the total numbers of samples in the two sets are N_vc_m and N_vw_m, and N_vc_m + N_vw_m = N_tr.
During verification, to ensure that there are enough misclassified samples in the end, each verification set may include N_val data-transformed copies of the original training sample set, i.e., each verification set contains N_val groups of verification samples, so each verification set has N_val times as many samples as the original training sample set.
The error for both sample sets is calculated by:
e_vc_m = ||softmax(y_vc_m) − t_vc_m||_2;
e_vw_m = ||softmax(y_vw_m) − t_vw_m||_2;
where t_vc_m and t_vw_m are the true labels corresponding to the correctly classified samples y_vc_m and the misclassified samples y_vw_m at level m. Let the means and variances of the statistics of the correctly classified and misclassified samples be u_c, u_w, σ_c and σ_w; the two corresponding Gaussian distributions are N(u_c, σ_c²) and N(u_w, σ_w²).
The Gaussian probability density functions are, respectively, f_c(e) = exp(−(e − u_c)²/(2σ_c²))/(√(2π)·σ_c) and f_w(e) = exp(−(e − u_w)²/(2σ_w²))/(√(2π)·σ_w).
For one level of the parallel multi-level width neural network, the verification-set error distribution and its probability density functions are shown in FIG. 3(a) and (b); the decision threshold of the mth-level width neural network is then:
T_m = min(e_vw_m) − ασ_w;
where α is a constant that provides a margin so that all misclassified samples y_vw_m are rejected at the current level.
(4) Test the parallel multi-level width neural network determined by the decision thresholds on the test set.
As shown in fig. 1(c), the specific test procedure is as follows:
First, the test set is obtained. The specific process is: obtain the original test sample set x_test; through M data expansions, correspondingly obtain M groups of test sample sets x_test_1, …, x_test_m, …, x_test_M, i.e., the test set. The total number of test samples in the original test sample set x_test is N_test_samples.
The data expansion is: perform N_testD data transformations on each sample in the original test sample set x_test to obtain N_testD test sample sets, which serve as the test set x_test_m of the mth-level width neural network of the parallel M-level width neural network determined by the decision thresholds.
This way of acquiring the test set improves the stability of the subsequent test process.
Second, the M groups of test sample sets x_test_1, …, x_test_m, …, x_test_M are input in parallel to the parallel M-level width neural network determined by the decision thresholds; that is, each group of test sets is input to the corresponding level of width neural network determined by the decision thresholds for testing, and the N_testD test-sample-set outputs of each such level are obtained correspondingly. Averaging over the N_testD outputs gives the test output y_test_m of each level of width neural network determined by the decision thresholds.
Third, let the total number of classes of the test set be C and construct reference matrices R_j (1 ≤ j ≤ C); obtain the error vector between the test output y_test_m and the reference matrix R_j of that level:
J_test_mj = ||softmax(y_test_m) − R_j||_2, 1 ≤ j ≤ C;
where each reference matrix R_j has 1 for the elements in its jth row and 0 for the remaining elements and the dimension of each R_j is C × N_test_samples; the dimension of J_test_mj is 1 × N_test_samples and the dimension of y_test_m is C × N_test_samples.
Finally, the output of each level of width neural network determined by the decision thresholds is judged. Specifically, when the minimum error of the current-level width neural network is less than or equal to the current-level decision threshold, i.e., min(J_test_mj) ≤ T_m, the current level is judged to be the correct classification output level for that output.
When the minimum error of the current-level width neural network is greater than the current-level decision threshold, i.e., min(J_test_mj) > T_m, it is judged that the current level cannot correctly classify that output; the output is transferred to the next-level width neural network for testing, and this is repeated until the output finds its correct classification output level. The label corresponding to the test output of each level of width neural network determined by the decision thresholds is then obtained as y_test_ind_m = argmin_j(J_test_mj), where the dimension of y_test_ind_m is 1 × N_test_samples.
If the test sample cannot be output at the previous 25 stages, it is directly output at the last 26 th stage.
Finally, the output L_test of the test set over the whole network is obtained; the correctly and incorrectly classified samples can be counted, and the sample classification accuracy of the parallel multi-level width neural network can then be obtained.
Comparative example
Using the same original training sample set, verification set and test set as in the above embodiment, learning and classification were performed respectively with a random forest (RF), a multilayer perceptron (MP), a traditional radial basis function network (RBF), a support vector machine (SVM), a broad learning system (BLS), a conditional deep learning model (CDL), a deep belief network (DBN), the convolutional neural network LeNet-5, a deep Boltzmann machine (DBM) and the deep forest (gcForest); the data-classification accuracy finally obtained by the various learning methods is shown in FIG. 4.
As can be seen from FIG. 4, the proposed method is compared with the current mainstream learning models: random forest (RF), multilayer perceptron (MP), traditional radial basis function network (RBF), support vector machine (SVM), broad learning system (BLS), conditional deep learning model (CDL), deep belief network (DBN), convolutional neural network LeNet-5, deep Boltzmann machine (DBM) and deep forest (gcForest). Compared with the deep forest learning model, the neural network of the invention has multiple levels of base neural networks, each level learning a different part of the data set; the structure of the network can be determined adaptively according to the problem and the complexity of the data set, so that computing resources are used optimally. Moreover, the neural network of the invention can be tested in parallel: the test data are sent to all levels of the network simultaneously, and the decision threshold of each level obtained during training decides by which level of the network each test sample is finally output, so the waiting time in actual use of the network is greatly reduced by the parallel testing process.
In addition, the parallel multi-level width neural network can realize incremental learning: when new data arrive, a new width radial basis function network can be added to learn the new features without retraining the whole parallel multi-level width neural network, which means the proposed network can learn new knowledge without forgetting old knowledge. The new training data are input to the current M-level network; if there are misclassified samples, they and the data-expanded original training set together form a new training data set, a new width radial basis function network is trained and verified with a new verification set, and its decision threshold is calculated, thereby establishing the (M+1)th level. The new parallel multi-level width neural network then consists of M+1 width radial basis function networks. Meanwhile, the parallel multi-level width neural network designed by the invention can be tested in parallel: all test samples are sent to all width radial basis function networks, and the decision thresholds determine to which width radial basis function network each test sample is allocated. This process does not need to wait for the outputs of the other levels, so testing is parallelized and accelerated.
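The incremental-learning step just described can be sketched as follows: new data are first judged by the existing M levels, and only if some of them are misclassified is an (M+1)th width network trained on those samples plus expanded original data and appended with its own threshold. The helpers `predict_through_cascade` and `threshold_from_validation` are assumed wrappers around the routing and threshold sketches shown earlier, and the interface is an assumption.

```python
import numpy as np

def add_incremental_level(levels, thresholds, new_net, x_new, y_new,
                          x_train_aug, y_train_aug, x_val_new, y_val_new, alpha=0.5):
    """Append a new level only when the current network misclassifies part of x_new."""
    pred = predict_through_cascade(levels, thresholds, x_new)   # route x_new through levels 1..M
    wrong = pred != y_new
    if not wrong.any():
        return levels, thresholds                               # old levels already cover x_new
    x_m1 = np.concatenate([x_new[wrong], x_train_aug])          # training set for level M+1
    y_m1 = np.concatenate([y_new[wrong], y_train_aug])
    new_net.fit(x_m1, y_m1)
    t_m1 = threshold_from_validation(new_net, x_val_new, y_val_new, alpha)  # as in step 3
    return levels + [new_net], thresholds + [t_m1]              # no retraining of old levels
```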
Each level of the parallel multi-level width neural network can be a width radial basis function network, a BP neural network, a convolutional neural network or another classifier, and the base classifier types of the different levels can differ.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such changes and modifications of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to include such changes and modifications.
Claims (9)
1. A learning method based on a parallel multi-level width neural network, the parallel multi-level width neural network comprising multiple levels of width neural networks, each level of width neural network comprising an input layer, a hidden layer and an output layer connected in sequence, characterized in that the learning method comprises the following steps:
Step 1: obtain an original training sample set and construct a parallel M-level width neural network Net_1, …, Net_m, …, Net_M, m = 1, 2, …, M, each level of width neural network serving as the base classifier of the corresponding level; perform M data transformations on the original training sample set to correspondingly obtain M verification sets x_v_1, …, x_v_m, …, x_v_M;
wherein the total number of samples of the original training sample set is N_tr and each training sample is an image sample to be learned; the specific process of constructing the parallel M-level width neural network Net_1, …, Net_m, …, Net_M is as follows:
design a parallel multi-level width neural network with width radial basis function networks as base classifiers: M width radial basis function networks are connected together to form the parallel multi-level width neural network Net_1, Net_2, …, Net_M; each base classifier serves as one level;
the specific process of constructing a width radial basis function network comprises:
construct a network containing N_0k Gaussian basis functions, whose centers are a subset randomly taken from the original training sample set and whose standard deviation is a constant; use a sliding window to obtain several groups of local feature images of each image sample to be learned in the original training sample set, thereby obtaining several groups of local feature matrices; use these local feature matrices as the input data of the Gaussian basis functions to obtain multiple radial basis function networks, i.e., the width radial basis function network;
Step 2: use the original training sample set and the M verification sets x_v_1, …, x_v_m, …, x_v_M to train and verify each level of the parallel M-level width neural network respectively, obtaining the trained parallel M-level width neural network and the verification output y_v_m (m = 1, 2, …, M) corresponding to each level of width neural network; use the minimum error method to obtain the label y_v_ind_m corresponding to each verification output y_v_m, thereby obtaining the correctly classified sample set y_vc_m and the misclassified sample set y_vw_m of the verification set of each level of width neural network of the trained parallel M-level width neural network;
using the original training sample set and the M verification sets x_v_1, …, x_v_m, …, x_v_M to train and verify each level of the parallel M-level width neural network respectively comprises the following sub-steps:
sub-step 2.1: use the original training sample set as the input of the 1st-level width neural network Net_1 and train Net_1 to obtain the trained 1st-level width neural network;
sub-step 2.2: use the first verification set x_v_1 to verify the trained 1st-level width neural network and obtain the misclassified sample set y_vw_1 of the verification set of the 1st-level width neural network;
sub-step 2.3: use the misclassified sample set y_vw_1 of the 1st-level width neural network as input samples A_v_1 of the 2nd-level width neural network; then randomly extract a training sample set A_v_2 from the original training sample set so that the number of samples in the total input sample set {A_v_1 + A_v_2} equals the number of samples in the original training sample set, and use {A_v_1 + A_v_2} as the input samples of the 2nd-level width neural network;
sub-step 2.4: use the total input sample set {A_v_1 + A_v_2} to train the 2nd-level width neural network and obtain the trained 2nd-level width neural network; use the second verification set x_v_2 to verify it and obtain the misclassified sample set y_vw_2 of the verification set of the 2nd-level width neural network;
by analogy, train the 3rd- to Mth-level width neural networks respectively to obtain the trained parallel M-level width neural network and the verification output y_v_m corresponding to each level of width neural network;
The training process of each stage of the width neural network comprises the following steps:
(a) use the image samples to be learned in the original training sample set as input data; set the initial position of the sliding window at the top-left corner of each image sample to be learned, select a sliding step of 1 pixel, and slide the window from left to right and from top to bottom in turn; stretch the 3-dimensional image blocks of all the image samples under the sliding window into a matrix x_k: form each local feature image into a corresponding original matrix by pixels, append the 2nd to last columns of each original matrix in sequence below the 1st column to form a column vector, and arrange the N column vectors in order to form the local feature matrix x_k (1 ≤ k ≤ K) of a group of training image samples, each column of which represents one image sample to be learned;
(b) input the local feature matrix x_k to the network containing N_0k Gaussian basis functions and denote the output as Φ_k;
each position of the sliding window corresponds to one radial basis function network, and K radial basis function networks are obtained after the sliding is finished;
(c) for each radial basis function network, introduce sorting and downsampling on the nonlinearly transformed output data Φ_k of the Gaussian basis functions:
sum each row of the output data Φ_k of the width radial basis function network to obtain a row vector, each element of which is the pixel sum of the local specific position of one image to be learned; arrange these pixel sums in descending order to obtain a descending-order vector a_k, and use the index s_k to mark the original position corresponding to the local specific position of each image to be learned, giving the sorted output data Φ'_k = sort(Φ_k, s_k);
downsample the sorted output data: with downsampling interval N_kS, the sampled output is Φ_kS = subsample(Φ'_k, N_kS), and the output of the Gaussian basis functions is Φ = [Φ_1S, Φ_2S, …, Φ_KS];
(d) set the desired output to D = [D_1, D_2, …, D_C]; connect the Gaussian basis function outputs of the width radial basis function network with a linear layer whose weights are W = [W_1, W_2, …, W_C];
wherein C is the total number of classes of the original samples;
the class output of the width radial basis function network is Y = [Y_1, Y_2, …, Y_C] = ΦW; specifically, the least-mean-square estimate of the linear-layer weights is computed by minimizing the squared error, as follows:
the least-mean-square estimate of the linear-layer weights is obtained through the pseudo-inverse matrix of the Gaussian basis function output Φ of the width radial basis function network, Ŵ = Φ⁺D;
wherein Φ⁺ is the pseudo-inverse matrix of the Gaussian basis function output Φ of the width radial basis function network;
finally, the class output of the width radial basis function network is computed as Y = ΦŴ;
a trained width radial basis function network is thereby obtained and the training process of each level of width neural network is completed;
Step 3: perform statistical calculation on the correctly classified sample set y_vc_m and the misclassified sample set y_vw_m of the verification set of each level of width neural network of the trained parallel M-level width neural network to correspondingly obtain the decision threshold T_m of each trained level of width neural network; the decision threshold T_m of each level of width neural network serves as the decision basis of the corresponding level, giving the parallel M-level width neural network determined by the decision thresholds;
Step 4: obtain a test set and input it in parallel, as the input data of the parallel M-level width neural network determined by the decision thresholds, to each level of width neural network determined by the decision thresholds for testing, obtaining the output of each such level; obtain the error vector of each level of width neural network and judge the output of each level of width neural network determined by the decision thresholds, thereby obtaining the label y_test_ind_m corresponding to the test output of each level of width neural network determined by the decision thresholds.
2. The parallel multi-level width neural network-based learning method of claim 1, wherein in step 1, the data transformation compresses or deforms the samples in the original sample set by elastic transformation; or the data transformation rotates, flips, zooms in, or zooms out samples in the original sample set through affine transformation.
3. The learning method based on the parallel multi-level width neural network as claimed in claim 1, wherein in step 2, the minimum error method is:
first, let the total number of classes of the original training sample set be C and construct reference matrices R_j, 1 ≤ j ≤ C;
wherein each reference matrix R_j has 1 for the elements in its jth row and 0 for the remaining elements, and the dimension of each R_j is C × N_tr;
second, from the verification output y_v_m of each level of the trained width neural network, obtain the error vector between y_v_m and each reference matrix R_j of that level:
J_v_mj = ||softmax(y_v_m) − R_j||_2, 1 ≤ j ≤ C;
wherein || · ||_2 denotes the 2-norm of a matrix and softmax() is the normalized exponential function; the dimension of J_v_mj is 1 × N_tr and the dimension of y_v_m is C × N_tr;
finally, take the minimum over j of the error vectors J_v_mj between the verification output y_v_m and the reference matrices R_j of that level to obtain the class label corresponding to each level of the trained width neural network: y_v_ind_m = argmin_j(J_v_mj);
wherein the dimension of y_v_ind_m is 1 × N_tr.
4. The parallel multi-level width neural network-based learning method of claim 1, wherein in step 3, the statistical calculation comprises the following sub-steps:
sub-step 3.1: let the correctly classified sample set and the misclassified sample set of the mth-level width neural network of the trained parallel M-level width neural network be y_vc_m and y_vw_m, and let the total numbers of samples in these two sets be N_vc_m and N_vw_m respectively, with N_vc_m + N_vw_m = N_tr; the errors of the correctly classified and misclassified sample sets are then, respectively:
e_vc_m = ||softmax(y_vc_m) − t_vc_m||_2;
e_vw_m = ||softmax(y_vw_m) − t_vw_m||_2;
wherein t_vc_m is the true label corresponding to the correctly classified samples y_vc_m of the mth-level width neural network and t_vw_m is the true label corresponding to the misclassified samples y_vw_m;
sub-step 3.2: for the correctly classified sample set y_vc_m and the misclassified sample set y_vw_m, compute the mean and variance of y_vc_m, denoted u_c and σ_c, and the mean and variance of y_vw_m, denoted u_w and σ_w; the Gaussian distributions corresponding to y_vc_m and y_vw_m are then N(u_c, σ_c²) and N(u_w, σ_w²);
the Gaussian probability density functions corresponding to the correctly classified sample set y_vc_m and the misclassified sample set y_vw_m are f_c(e) = exp(−(e − u_c)²/(2σ_c²))/(√(2π)·σ_c) and f_w(e) = exp(−(e − u_w)²/(2σ_w²))/(√(2π)·σ_w);
sub-step 3.3: from the error e_vw_m of the misclassified sample set y_vw_m and the variance σ_w, obtain the decision threshold of the mth-level width neural network T_m = min(e_vw_m) − ασ_w;
wherein α is a constant that provides a margin so that all misclassified samples y_vw_m are rejected at the current level.
5. The learning method based on the parallel multi-level width neural network according to claim 2, characterized in that in step 4 the test set is acquired as follows: obtain an original test sample set x_test; through M data expansions, correspondingly obtain M groups of test sample sets x_test_1, …, x_test_m, …, x_test_M, i.e., the test set.
6. The learning method based on the parallel multi-level width neural network according to claim 5, characterized in that the data expansion is: perform N_testD data transformations on each sample in the original test sample set x_test to obtain N_testD test sample sets, which serve as the test set x_test_m of the mth-level width neural network of the parallel M-level width neural network determined by the decision thresholds;
wherein the total number of test samples in the original test sample set x_test is N_test_samples.
7. The parallel multi-level width neural network-based learning method of claim 1, wherein in step 4, the obtaining the error vector of each level of width neural network comprises the following sub-steps:
sub-step 4.1: input the M groups of test sample sets x_test_1, x_test_2, …, x_test_M in parallel to the parallel M-level width neural network determined by the decision thresholds, and correspondingly obtain the N_testD outputs y_test_m_d, d = 1, 2, …, N_testD, of each level of width neural network determined by the decision thresholds;
sub-step 4.2: average the N_testD outputs y_test_m_d, d = 1, 2, …, N_testD, of each level of width neural network determined by the decision thresholds to obtain the test output y_test_m of each level of width neural network determined by the decision thresholds;
sub-step 4.3: let the total number of classes of the test set be C and construct reference matrices R_j, 1 ≤ j ≤ C; obtain the error vector between the test output y_test_m and the reference matrix R_j of that level:
J_test_mj = ||softmax(y_test_m) − R_j||_2, 1 ≤ j ≤ C;
wherein each reference matrix R_j has 1 for the elements in its jth row and 0 for the remaining elements and the dimension of each R_j is C × N_test_samples; the dimension of J_test_mj is 1 × N_test_samples and the dimension of y_test_m is C × N_test_samples.
8. The learning method based on the parallel multi-level width neural network according to claim 7, characterized in that the output of each level of width neural network determined by the decision thresholds is judged as follows:
when the minimum error of the current-level width neural network is less than or equal to the current-level decision threshold, the current level is judged to be the correct classification output level for that output:
min(J_test_mj) ≤ T_m;
when the minimum error of the current-level width neural network is greater than the current-level decision threshold, it is judged that the current level cannot correctly classify that output; the output is transferred to the next-level width neural network for testing, and this is repeated until the output finds its correct classification output level:
min(J_test_mj) > T_m.
9. The learning method based on the parallel multi-level width neural network according to claim 8, characterized in that in step 4 the label corresponding to the test output of each level of width neural network determined by the decision thresholds is obtained as y_test_ind_m = argmin_j(J_test_mj);
wherein the dimension of y_test_ind_m is 1 × N_test_samples.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910331708.8A CN110110845B (en) | 2019-04-24 | 2019-04-24 | Learning method based on parallel multi-level width neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910331708.8A CN110110845B (en) | 2019-04-24 | 2019-04-24 | Learning method based on parallel multi-level width neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110110845A CN110110845A (en) | 2019-08-09 |
CN110110845B true CN110110845B (en) | 2020-09-22 |
Family
ID=67486407
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910331708.8A Active CN110110845B (en) | 2019-04-24 | 2019-04-24 | Learning method based on parallel multi-level width neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110110845B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008647B (en) * | 2019-11-06 | 2022-02-08 | 长安大学 | Sample extraction and image classification method based on void convolution and residual linkage |
CN111340184B (en) * | 2020-02-12 | 2023-06-02 | 北京理工大学 | Deformable reflector surface shape control method and device based on radial basis function |
CN113449569B (en) * | 2020-03-27 | 2023-04-25 | 威海北洋电气集团股份有限公司 | Mechanical signal health state classification method and system based on distributed deep learning |
CN112966761B (en) * | 2021-03-16 | 2024-03-19 | 长安大学 | Extensible self-adaptive width neural network learning method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107784312A (en) * | 2016-08-24 | 2018-03-09 | 腾讯征信有限公司 | Machine learning model training method and device |
CN108351985A (en) * | 2015-06-30 | 2018-07-31 | 亚利桑那州立大学董事会 | Method and apparatus for large-scale machines study |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9811775B2 (en) * | 2012-12-24 | 2017-11-07 | Google Inc. | Parallelizing neural networks during training |
US10242313B2 (en) * | 2014-07-18 | 2019-03-26 | James LaRue | Joint proximity association template for neural networks |
-
2019
- 2019-04-24 CN CN201910331708.8A patent/CN110110845B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108351985A (en) * | 2015-06-30 | 2018-07-31 | 亚利桑那州立大学董事会 | Method and apparatus for large-scale machines study |
CN107784312A (en) * | 2016-08-24 | 2018-03-09 | 腾讯征信有限公司 | Machine learning model training method and device |
Also Published As
Publication number | Publication date |
---|---|
CN110110845A (en) | 2019-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110110845B (en) | Learning method based on parallel multi-level width neural network | |
CN110443143B (en) | Multi-branch convolutional neural network fused remote sensing image scene classification method | |
CN109063724B (en) | Enhanced generation type countermeasure network and target sample identification method | |
CN111414849B (en) | Face recognition method based on evolution convolutional neural network | |
CN105138973B (en) | The method and apparatus of face authentication | |
EP0464327A2 (en) | Neutral network apparatus and method for pattern recognition | |
JPH07296117A (en) | Constitution method of sort weight matrix for pattern recognition system using reduced element feature section set | |
CN112465120A (en) | Fast attention neural network architecture searching method based on evolution method | |
CN113222011B (en) | Small sample remote sensing image classification method based on prototype correction | |
CN106096661B (en) | The zero sample image classification method based on relative priority random forest | |
JPH06176202A (en) | Method and device for controlled training- increasing polynominal for character recognition | |
CN112633382A (en) | Mutual-neighbor-based few-sample image classification method and system | |
CN104850890A (en) | Method for adjusting parameter of convolution neural network based on example learning and Sadowsky distribution | |
CN110310345A (en) | A kind of image generating method generating confrontation network based on hidden cluster of dividing the work automatically | |
CN110929798A (en) | Image classification method and medium based on structure optimization sparse convolution neural network | |
CN112819063B (en) | Image identification method based on improved Focal loss function | |
CN111582396A (en) | Fault diagnosis method based on improved convolutional neural network | |
CN112364974B (en) | YOLOv3 algorithm based on activation function improvement | |
Cho et al. | Virtual sample generation using a population of networks | |
CN114639000A (en) | Small sample learning method and device based on cross-sample attention aggregation | |
CN104598898B (en) | A kind of Aerial Images system for rapidly identifying and its method for quickly identifying based on multitask topology learning | |
CN111371611A (en) | Weighted network community discovery method and device based on deep learning | |
Cho et al. | Genetic evolution processing of data structures for image classification | |
US6934405B1 (en) | Address reading method | |
Gagula-Palalic et al. | Human chromosome classification using competitive neural network teams (CNNT) and nearest neighbor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |