CN105701506B - An improved method based on the extreme learning machine and sparse representation classification - Google Patents

An improved method based on the extreme learning machine and sparse representation classification

Info

Publication number
CN105701506B
CN105701506B (application CN201610018444.7A)
Authority
CN
China
Prior art keywords
output
hidden node
classification
matrix
output vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610018444.7A
Other languages
Chinese (zh)
Other versions
CN105701506A (en)
Inventor
曹九稳
郝娇平
张凯
曾焕强
赖晓平
赵雁飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN201610018444.7A
Publication of CN105701506A
Application granted
Publication of CN105701506B
Status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks

Abstract

The invention discloses an improved method based on the extreme learning machine and sparse representation classification. The steps of the invention are as follows: 1. the hidden-node parameters are randomly generated; 2. the hidden-layer output matrix is computed; 3. according to the relative sizes of L and N, the output weights connecting the hidden nodes and the output neurons are computed with different formulas; 4. the output vector of the query picture y is computed; 5. the difference between the largest value o_f and the second-largest value o_s of the ELM output vector o is examined: if the difference is greater than a set value, the index corresponding to the largest value in the output vector is the class to which the query picture belongs; otherwise, go to step 6; 6. using the training samples corresponding to the k largest values in the output vector o, a sub-dictionary is constructed, the linear representation coefficients of picture y are computed with a sparse reconstruction algorithm, the residuals are computed, and the class corresponding to the smallest residual determines the class of the query picture. The computational load of the present invention is greatly reduced: a higher recognition rate is achieved while the computational complexity is also substantially lowered.

Description

An improved method based on the extreme learning machine and sparse representation classification
Technical field
The invention belongs to the field of image classification, and in particular to an improved method based on the extreme learning machine and sparse representation classification.
Background technique
Image classification, that is, automatically assigning an input picture to a specific category, has attracted more and more attention, especially because of its applications in many fields such as security systems, medical diagnosis and human-computer interaction. In the past several years, techniques developed in machine learning have also had a great influence on the image classification field. In fact, almost every method proposed in the past has its own advantages and disadvantages, and an unavoidable problem is the compromise between computational complexity and classification accuracy. In other words, no single method can be designed that is best in both efficiency and recognition rate across all applications. To solve this problem, hybrid systems have come into being, which integrate the advantages of various distinct methods to form a more effective method.
One vital factor of a successful image classification system is the classifier. A well-designed classifier should not be strongly affected by other factors, such as the choice of feature extraction method. In the past few decades, artificial neural networks have benefited greatly from the fact that their input parameters can be set arbitrarily: they not only learn quickly but also achieve good generalization performance. Among them, the extreme learning machine (Extreme Learning Machine, ELM) has been widely studied. The ELM is popular because of its fast learning speed, its real-time processing ability and the scalability of the neural network. Besides the ELM, sparse representation based classification (Sparse Representation based Classification, SRC) has also attracted the attention of research institutions. SRC was originally developed to study the sparse behaviour of human visual neurons, and was later found to perform well in face recognition, machine vision, direction estimation and other tasks. SRC tries to find the connections between samples of the same class and builds the sparse representation coefficients of the picture to be queried by linear regression. Although ELM and SRC each have outstanding advantages, they still have drawbacks that limit their development in practical applications. Experiments show that the ELM learns quickly but cannot handle noise well, while SRC handles noise well but pays a large computational cost. It should also be noted that a good classifier must deliver not only a high recognition rate but also a fast recognition speed. Since the ELM and SRC have complementary advantages, designing a hybrid classifier is reasonable. Experiments show that ELM-SRC outperforms the ELM in recognition rate and reduces the computational complexity compared with SRC; however, because it uses the complete dictionary, the computational complexity of ELM-SRC (Extreme Learning Machine with Sparse Representation based Classification) is still very high.
An artificial neural network (Artificial Neural Network), also simply called a neural network (NN), is an algorithmic mathematical model that imitates the behavioural characteristics of animal neural networks and performs distributed parallel information processing. Such a network achieves its information-processing purpose by adjusting the interconnections among a large number of internal nodes, depending on the complexity of the system. An artificial neural network processes information with a structure whose couplings resemble the synapses of the brain; in engineering and academia it is often referred to directly as a neural network. Each neuron of a feedforward network receives the input of the preceding stage and outputs to the next stage; there is no feedback, so the network can be represented by a directed acyclic graph. The nodes of the graph fall into two classes, input nodes and computing units. Each computing unit may have any number of inputs but only one output, and that output may be connected to the inputs of any number of other nodes. A feedforward neural network is generally divided into layers, the input of the $i$-th layer being connected only to the output of the $(i-1)$-th layer. The input and output nodes, which can be connected with the outside world and are directly affected by the environment, are called visible layers, while the intermediate layers are called hidden layers. A single-hidden-layer feedforward neural network (Single-hidden Layer Feedforward neural Network, SLFN), as the name suggests, is a feedforward neural network with exactly one hidden layer. For $N$ arbitrary distinct samples $(x_i, t_i)$, where $x_i=[x_{i1},x_{i2},\dots,x_{in}]^T$ and $t_i=[t_{i1},t_{i2},\dots,t_{im}]^T$, the mathematical model of a standard SLFN with $M$ hidden nodes is
$$\sum_{i=1}^{M}\beta_i\,g(w_i\cdot x_j+b_i)=o_j,\qquad j=1,2,\dots,N,$$
where $w_i$ is the weight between the input nodes and the $i$-th hidden node, $\beta_i$ is the weight between the $i$-th hidden node and the output nodes, $b_i$ is the hidden-layer bias, and $g(x)$ is the activation function.
An SLFN with $M$ hidden nodes can approximate these $N$ samples with zero error, meaning that $\sum_{j=1}^{N}\lVert o_j-t_j\rVert=0$, namely that there exist $\beta_i$, $w_i$ and $b_i$ such that $\sum_{i=1}^{M}\beta_i\,g(w_i\cdot x_j+b_i)=t_j$ for $j=1,\dots,N$. The $N$ equations above can also be written as $H\beta=T$, where
$$H=\begin{bmatrix} g(w_1\cdot x_1+b_1) & \cdots & g(w_M\cdot x_1+b_M)\\ \vdots & \ddots & \vdots\\ g(w_1\cdot x_N+b_1) & \cdots & g(w_M\cdot x_N+b_M)\end{bmatrix}_{N\times M},\qquad \beta=\begin{bmatrix}\beta_1^T\\ \vdots\\ \beta_M^T\end{bmatrix},\qquad T=\begin{bmatrix}t_1^T\\ \vdots\\ t_N^T\end{bmatrix}.$$
$H$ is the hidden-layer output matrix. Experiments show that the $N$ distinct observations can be fitted exactly even when the input weights and hidden-layer biases are selected at random. It turns out that SLFNs not only learn fast but also have good generalization performance.
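The ELM training procedure implied by the equations above can be summarized in a short sketch. This is a minimal illustration only; the sigmoid activation, the data shapes and the NumPy pseudoinverse are common ELM choices assumed here, not prescribed by the patent.

```python
import numpy as np

def elm_train(X, T, L, rng=None):
    """Basic ELM: random hidden layer, least-squares output weights.

    X: (N, n) training inputs; T: (N, m) one-hot targets; L: hidden nodes.
    """
    rng = rng or np.random.default_rng(0)
    n = X.shape[1]
    W = rng.standard_normal((L, n))            # input weights w_i (random, never trained)
    b = rng.standard_normal(L)                 # hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))   # hidden-layer output matrix, N x L
    beta = np.linalg.pinv(H) @ T               # solve H beta = T with the pseudoinverse
    return W, b, beta

def elm_output(x, W, b, beta):
    """Output vector o = h(x) beta for one query sample x of shape (n,)."""
    h = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return h @ beta
```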
Summary of the invention
The purpose of the present invention is to provide, in view of the shortcomings of existing methods, an improved method based on the extreme learning machine and sparse representation classification. The method is an improved Extreme Learning Machine and Adaptive Sparse Representation based Classification (EA-SRC) algorithm built on the Extreme Learning Machine with Sparse Representation based Classification (Extreme Learning Machine-Sparse Representation based Classification, ELM-SRC). To achieve the above goal, the present invention adopts the following scheme.
The technical solution adopted by the present invention to solve the technical problem includes the following steps:
Step 1: randomly generate the hidden-node parameters $(w_i,b_i)$, $i=1,2,\dots,L$, where $w_i$ is the input weight connecting the $i$-th hidden node and the input neurons, $b_i$ is the bias of the $i$-th hidden node, and $L$ is the number of hidden nodes.
Step 2: compute the hidden-layer output matrix $H(w_1,\dots,w_L,x_1,\dots,x_N,b_1,\dots,b_L)$,
$$H=\begin{bmatrix} g(w_1\cdot x_1+b_1) & \cdots & g(w_L\cdot x_1+b_L)\\ \vdots & \ddots & \vdots\\ g(w_1\cdot x_N+b_1) & \cdots & g(w_L\cdot x_N+b_L)\end{bmatrix}_{N\times L},$$
where $w$ denotes the input weights connecting the input neurons and the hidden nodes, $x_1,\dots,x_N$ are the training-sample inputs, $N$ is the number of training samples, $b_i$ is the bias of the $i$-th hidden node, and $g(\cdot)$ denotes the activation function.
Step 3: according to the relative sizes of $L$ and $N$, compute the output weights $\beta$ connecting the hidden nodes and the output neurons, using a different formula in each case.
Step 4: compute the output vector of the query picture $y$, $o=h(y)\beta$, where $h(y)=[g(w_1\cdot y+b_1),\dots,g(w_L\cdot y+b_L)]$ is the hidden-layer output for $y$.
Step 5: compare the difference between the largest value $o_f$ and the second-largest value $o_s$ of the ELM output vector $o$, and determine the class of the query picture accordingly (as sketched below).
If the difference between the largest value $o_f$ and the second-largest value $o_s$ of the ELM output vector $o$ is greater than the preset threshold $\sigma$, i.e. $o_f-o_s>\sigma$, the neural network trained by the extreme learning machine (ELM) is used directly: the index of the largest value in the output vector is the class to which the query picture belongs.
If the difference between the largest value $o_f$ and the second-largest value $o_s$ is less than the preset threshold, i.e. $o_f-o_s<\sigma$, the picture is considered to contain more noise, and classification is performed with the sparse representation classification algorithm.
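The gating rule of steps 4 and 5 can be sketched as follows. The helper names are hypothetical; `elm_output` is the sketch given earlier, and `src_classify` stands for the sparse representation fallback described later for step 5.

```python
import numpy as np

def ea_src_predict(y, W, b, beta, sigma, src_classify):
    """Gate between the direct ELM label and the SRC fallback (steps 4-5)."""
    o = elm_output(y, W, b, beta)      # ELM output vector of the query picture y
    o_s, o_f = np.sort(o)[-2:]         # second-largest and largest entries
    if o_f - o_s > sigma:              # confident prediction: use the ELM label
        return int(np.argmax(o))
    return src_classify(y, o)          # ambiguous (noisy) picture: adaptive SRC
```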
In step 3, if $L\le N$, i.e. the number of hidden nodes is less than or equal to the number of samples, then in order to improve the computational efficiency a singular value decomposition of the hidden-layer output matrix is carried out (see the sketch after these sub-steps), specifically:
3-1. Singular value decomposition $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\dots,d_i,\dots,d_L\}$ is a diagonal matrix and $d_i$ is the $i$-th singular value of the matrix $H$; then $H^TH=VD^2V^T$. Here $U$ and $V$ are unitary matrices, with $UU^T=U^TU=I$ and $VV^T=V^TV=I$.
3-2. Set the upper limit $\lambda_{max}$ and the lower limit $\lambda_{min}$ of the regularization parameter $\lambda$. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the regularized form $\mathrm{HAT}_r$ of the orthogonal projection matrix $\mathrm{HAT}$,
$$\mathrm{HAT}_r=HV(D^2+\lambda_i I)^{-1}V^TH^T,\qquad \mathrm{HAT}=HH^{+}=H(H^TH)^{-1}H^T.$$
3-3. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the corresponding mean square error $\mathrm{MSE}(\lambda_i)$; using the diagonal entries of $\mathrm{HAT}_r$ this takes the leave-one-out (PRESS) form
$$\mathrm{MSE}(\lambda_i)=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-\mathrm{HAT}_{r,jj}}\right)^2,$$
where $t_j$ is the desired output and $o_j$ is the actual output.
3-4. Take the $\lambda$ corresponding to the minimum mean square error as $\lambda_{opt}$. This yields good generalization performance and also maximizes the classification margin.
3-5. Compute the output weights connecting the hidden nodes and the output neurons,
$$\beta=V(D^2+\lambda_{opt}I)^{-1}V^TH^TT=(H^TH+\lambda_{opt}I)^{-1}H^TT.$$
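A sketch of the $L\le N$ branch (steps 3-1 to 3-5) is given below. The PRESS form of the mean square error and the candidate grid for $\lambda$ are assumptions consistent with the use of the diagonal of $\mathrm{HAT}_r$; the patent text does not fix either choice.

```python
import numpy as np

def ridge_beta_small_L(H, T, lambdas):
    """L <= N: pick lambda by leave-one-out (PRESS) MSE, then
    beta = (H^T H + lam I)^{-1} H^T T, all through one economy SVD."""
    U, d, Vt = np.linalg.svd(H, full_matrices=False)   # H = U D V^T
    best_mse, lam_opt = np.inf, None
    for lam in lambdas:
        f = d**2 / (d**2 + lam)                # filter factors of the ridge smoother
        hat_diag = (U**2) @ f                  # diag(HAT_r) = sum_k f_k * U[j,k]^2
        O = U @ (f[:, None] * (U.T @ T))       # actual outputs o_j = (HAT_r T)_j
        press = (T - O) / (1.0 - hat_diag)[:, None]
        mse = np.mean(press**2)                # PRESS / leave-one-out MSE
        if mse < best_mse:
            best_mse, lam_opt = mse, lam
    # beta = V (D^2 + lam_opt I)^{-1} D U^T T, reusing the same SVD factors
    beta = Vt.T @ ((d / (d**2 + lam_opt))[:, None] * (U.T @ T))
    return beta, lam_opt
```

The same economy-size SVD serves both the $\lambda$ search and the final weights, which is the efficiency gain the patent attributes to the decomposition.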
In step 3, if $L>N$, i.e. the number of hidden nodes is greater than the number of samples, then in order to improve the computational efficiency a singular value decomposition of the hidden-layer output matrix is likewise carried out (see the note after these sub-steps), specifically:
3-6. Singular value decomposition $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\dots,d_i,\dots,d_N\}$ is a diagonal matrix and $d_i$ is the $i$-th singular value of the matrix $H$; then $HH^T=UD^2U^T$, and $UU^T=U^TU=I$, $VV^T=V^TV=I$.
3-7. Set the upper limit $\lambda_{max}$ and the lower limit $\lambda_{min}$ of the regularization parameter $\lambda$. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the regularized form of the orthogonal projection matrix,
$$\mathrm{HAT}_r=HH^TU(D^2+\lambda_i I)^{-1}U^T,\qquad \mathrm{HAT}=HH^{+}=H(H^TH)^{-1}H^T.$$
3-8. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the corresponding mean square error $\mathrm{MSE}(\lambda_i)$ in the same leave-one-out (PRESS) form as in step 3-3,
$$\mathrm{MSE}(\lambda_i)=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-\mathrm{HAT}_{r,jj}}\right)^2,$$
where $t_j$ is the desired output and $o_j$ is the actual output.
3-9. Take the $\lambda$ corresponding to the minimum mean square error as $\lambda_{opt}$. This yields good generalization performance and also maximizes the classification margin.
3-10. Compute the output weights connecting the hidden nodes and the output neurons,
$$\beta=H^TU(D^2+\lambda_{opt}I)^{-1}U^TT=H^T(HH^T+\lambda_{opt}I)^{-1}T.$$
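In the economy-size SVD parametrization the two branches coincide: both $\beta$ formulas reduce to $VD(D^2+\lambda_{opt}I)^{-1}U^TT$ and both $\mathrm{HAT}_r$ matrices reduce to $UD^2(D^2+\lambda_iI)^{-1}U^T$, so the sketch given for the $L\le N$ case already covers $L>N$. A thin wrapper (hypothetical naming) makes the dispatch of step 3 explicit:

```python
def ridge_beta(H, T, lambdas):
    """Step 3 dispatch on L vs N. With an economy SVD, the closed forms
    (H^T H + lam I)^{-1} H^T T  (L <= N)  and  H^T (H H^T + lam I)^{-1} T  (L > N)
    are both evaluated as V D (D^2 + lam I)^{-1} U^T T, so a single
    implementation serves both branches."""
    return ridge_beta_small_L(H, T, lambdas)
```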
The sparse representation classification algorithm in step 5 first establishes an adaptive sub-dictionary from the first $k$ largest values of the output vector $o$, then reconstructs the sparse representation coefficients over the training samples, computes the corresponding residuals, and finally takes the index of the minimum residual as the class to which the picture belongs (a sketch follows this description), specifically:
First find the classes corresponding to the first $k$ largest values of the output vector $o$; then, from the training samples corresponding to those $k$ largest values, establish the adaptive sub-dictionary
$$\tilde{A}=[A_{m(1)},A_{m(2)},\dots,A_{m(k)}],\qquad m(i)\in\{1,2,\dots,m\}.$$
Then reconstruct the sparse representation coefficients by the $\ell_1$-regularized least-squares problem
$$\hat{x}=\arg\min_x \lVert y-\tilde{A}x\rVert_2^2+\tau\lVert x\rVert_1,$$
where $\tau$ is the regularization coefficient.
Finally compute the residual of each candidate class,
$$r_d(y)=\lVert y-A_d\hat{x}_d\rVert_2,$$
where $A_d$ is the matrix of training samples of the $d$-th class and $\hat{x}_d$ is the part of the sparse representation coefficients corresponding to the $d$-th class.
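The sketch below illustrates this adaptive SRC step. It assumes class-wise training matrices `A[c]` of shape (n, N_c) with l2-normalized columns, and uses scikit-learn's `Lasso` as one possible l1 solver; the patent does not prescribe a solver, and scikit-learn scales the quadratic term by 1/(2 n_samples), so its `alpha` corresponds to $\tau$ only up to that factor.

```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_src(y, o, A, k, tau):
    """Sub-dictionary from the k top ELM scores, l1 coding, class residuals.

    y: (n,) query; o: (m,) ELM output vector;
    A: dict mapping class -> (n, N_c) matrix of that class's training samples.
    """
    top = np.argsort(o)[::-1][:k]              # classes of the k largest scores
    D = np.hstack([A[c] for c in top])         # adaptive sub-dictionary A~
    lasso = Lasso(alpha=tau, fit_intercept=False, max_iter=10000)
    x = lasso.fit(D, y).coef_                  # approx. min ||y - A~x||^2 + tau||x||_1
    best_class, best_r, start = None, np.inf, 0
    for c in top:                              # residual r_d = ||y - A_d x_d||_2
        n_c = A[c].shape[1]
        r = np.linalg.norm(y - A[c] @ x[start:start + n_c])
        if r < best_r:
            best_class, best_r = c, r
        start += n_c
    return best_class
```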
The beneficial effects of the present invention are as follows:
This improved adaptive extreme learning machine and sparse representation classification (EA-SRC) algorithm, built on ELM-SRC, not only has a higher recognition rate but also a faster learning speed, greatly reducing the computational complexity.
Detailed description of the invention
Fig. 1 is a flow diagram of the present invention;
Fig. 2 is a schematic diagram of the single-hidden-layer feedforward neural network of the present invention.
Specific embodiment
The improvements of the invention are verified below with reference to a specific experiment. The following description serves only as demonstration and explanation and is not intended to limit the present invention in any way.
As shown in Fig. 1 and Fig. 2, choose any database. First, generate the $L$ hidden-node parameters $(w_i,b_i)$, $i=1,2,\dots,L$, with a random function, where $w_i$ is the input weight connecting the $i$-th hidden node and the input neurons, $b_i$ is the bias of the $i$-th hidden node, and $L$ is the number of hidden nodes. Compute the hidden-layer output matrix $H$ as in step 2.
(1) If the number of hidden nodes is less than or equal to the number of samples, i.e. $L\le N$: in order to improve the computational efficiency, carry out a singular value decomposition of the hidden-layer output matrix, $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\dots,d_i,\dots,d_L\}$ is a diagonal matrix and $d_i$ is the $i$-th singular value of $H$; then $H^TH=VD^2V^T$, with $VV^T=V^TV=I$. The upper limit $\lambda_{max}$ and the lower limit $\lambda_{min}$ of the regularization parameter $\lambda$ are set in advance. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the regularized projection matrix $\mathrm{HAT}_r=HV(D^2+\lambda_iI)^{-1}V^TH^T$, where $\mathrm{HAT}=HH^{+}=H(H^TH)^{-1}H^T$, and the corresponding mean square error $\mathrm{MSE}(\lambda_i)=\frac{1}{N}\sum_{j=1}^{N}\big(\frac{t_j-o_j}{1-\mathrm{HAT}_{r,jj}}\big)^2$, where $t_j$ is the desired output and $o_j$ is the actual output. The $\lambda$ attaining the minimum mean square error is returned as $\lambda_{opt}$; it yields good generalization performance and also maximizes the classification margin. Then compute the output weights connecting the hidden nodes and the output neurons, $\beta=V(D^2+\lambda_{opt}I)^{-1}V^TH^TT$.
(2) If the number of hidden nodes is greater than the number of samples, i.e. $L>N$: carry out a singular value decomposition of the hidden-layer output matrix, $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\dots,d_i,\dots,d_N\}$ is a diagonal matrix and $d_i$ is the $i$-th singular value of $H$; then $HH^T=UD^2U^T$, with $UU^T=U^TU=I$. The limits $\lambda_{max}$ and $\lambda_{min}$ are set in advance. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute $\mathrm{HAT}_r=HH^TU(D^2+\lambda_iI)^{-1}U^T$, where $\mathrm{HAT}=HH^{+}=H(H^TH)^{-1}H^T$, and then the corresponding mean square error in the same form as above. The $\lambda$ attaining the minimum mean square error is returned as $\lambda_{opt}$; it yields good generalization performance and also maximizes the classification margin. Compute the output weights connecting the hidden nodes and the output neurons, $\beta=H^TU(D^2+\lambda_{opt}I)^{-1}U^TT$.
Then compute the output vector $o=h(y)\beta$ of the query picture $y$. If the difference between the largest and the second-largest value of the output vector $o$ is greater than the preset threshold, i.e. $o_f-o_s>\sigma$, the neural network trained by the extreme learning machine (ELM) is used directly, and the index of the largest value in the output vector is the class of the query picture. If the difference between the largest and the second-largest value of the ELM output vector $o$ is less than the preset threshold, i.e. $o_f-o_s<\sigma$, first find the indices corresponding to the first $k$ largest values in the output vector $o$, with class labels $d\in\{1,2,\dots,m\}$, then establish the adaptive sub-dictionary $\tilde{A}=[A_{m(1)},\dots,A_{m(k)}]$ from the corresponding training vectors. The reconstructed sparse representation coefficients are $\hat{x}=\arg\min_x\lVert y-\tilde{A}x\rVert_2^2+\tau\lVert x\rVert_1$, where $\tau$ is the regularization coefficient. For each candidate class $d$, find the corresponding $A_d$ and $\hat{x}_d$, compute the residual $r_d(y)=\lVert y-A_d\hat{x}_d\rVert_2$, and take the class corresponding to the minimum residual as the class to which the picture $y$ belongs.
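Tying the embodiment together, an end-to-end sketch under the assumptions of the previous sketches. The glue code, the synthetic data, and the settings of sigma, k, tau and the lambda grid are illustrative choices only; the patent leaves these to the practitioner.

```python
import numpy as np

# --- training (steps 1-3), using the helpers sketched earlier ---
rng = np.random.default_rng(0)
N, n, m, L = 600, 256, 10, 400            # samples, input dim, classes, hidden nodes
X = rng.standard_normal((N, n))           # stand-in for real image features
labels = rng.integers(0, m, N)
T = np.eye(m)[labels]                     # one-hot target matrix
W = rng.standard_normal((L, n))
b = rng.standard_normal(L)
H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # hidden-layer output matrix
beta, lam_opt = ridge_beta(H, T, lambdas=np.logspace(-6, 2, 30))

# --- class-wise dictionaries for the SRC fallback (step 5 / claim 2) ---
A = {c: X[labels == c].T for c in range(m)}

# --- prediction (steps 4-5) on one query picture ---
y = X[0]
label = ea_src_predict(
    y, W, b, beta, sigma=0.2,
    src_classify=lambda y, o: adaptive_src(y, o, A, k=3, tau=0.01),
)
print("predicted class:", label)
```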

Claims (2)

1. An improved method based on the extreme learning machine and sparse representation classification, characterized by comprising the following steps:
Step 1: randomly generate the hidden-node parameters $(w_i,b_i)$, $i=1,2,\dots,L$, where $w_i$ is the input weight connecting the $i$-th hidden node and the input neurons, $b_i$ is the bias of the $i$-th hidden node, and $L$ is the number of hidden nodes;
Step 2: compute the hidden-layer output matrix $H(w_1,\dots,w_L,x_1,\dots,x_N,b_1,\dots,b_L)$,
$$H=\begin{bmatrix} g(w_1\cdot x_1+b_1) & \cdots & g(w_L\cdot x_1+b_L)\\ \vdots & \ddots & \vdots\\ g(w_1\cdot x_N+b_1) & \cdots & g(w_L\cdot x_N+b_L)\end{bmatrix}_{N\times L},$$
where $w$ denotes the input weights connecting the input neurons and the hidden nodes, $x_1,\dots,x_N$ are the training-sample inputs, $N$ is the number of training samples, $b_i$ is the bias of the $i$-th hidden node, and $g(\cdot)$ denotes the activation function;
Step 3: according to the relative sizes of $L$ and $N$, compute the output weights $\beta$ connecting the hidden nodes and the output neurons, using a different formula in each case;
Step 4: compute the output vector $o=h(y)\beta$ of the query picture $y$;
Step 5: compare the difference between the largest value $o_f$ and the second-largest value $o_s$ of the ELM output vector $o$, and find the class of the query picture accordingly;
if the difference between the largest value $o_f$ and the second-largest value $o_s$ of the ELM output vector $o$ is greater than the preset threshold $\sigma$, i.e. $o_f-o_s>\sigma$, the neural network trained by the extreme learning machine (ELM) is used directly, and the index of the largest value in the output vector is the class to which the query picture belongs;
if the difference between the largest value $o_f$ and the second-largest value $o_s$ is less than the preset threshold, i.e. $o_f-o_s<\sigma$, the picture is considered to contain more noise, and classification is performed with the sparse representation classification algorithm;
in step 3, if $L\le N$, i.e. the number of hidden nodes is less than or equal to the number of samples, then in order to improve the computational efficiency a singular value decomposition of the hidden-layer output matrix is carried out, specifically:
3-1. singular value decomposition $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\dots,d_i,\dots,d_L\}$ is a diagonal matrix and $d_i$ is the $i$-th singular value of the matrix $H$; then $H^TH=VD^2V^T$, where $U$ and $V$ are unitary matrices, with $UU^T=U^TU=I$ and $VV^T=V^TV=I$;
3-2. set the upper limit $\lambda_{max}$ and the lower limit $\lambda_{min}$ of the regularization parameter $\lambda$; for each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the regularized form $\mathrm{HAT}_r$ of the orthogonal projection matrix $\mathrm{HAT}$,
$$\mathrm{HAT}_r=HV(D^2+\lambda_iI)^{-1}V^TH^T,\qquad \mathrm{HAT}=HH^{+}=H(H^TH)^{-1}H^T;$$
3-3. for each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the statistical mean square error corresponding to each regularization parameter $\lambda$,
$$\mathrm{MSE}(\lambda_i)=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-\mathrm{HAT}_{r,jj}}\right)^2,$$
where $t_j$ is the desired output and $o_j$ is the actual output;
3-4. take the $\lambda$ corresponding to the minimum statistical mean square error as $\lambda_{opt}$; this yields good generalization performance and also maximizes the classification margin;
3-5. compute the output weights connecting the hidden nodes and the output neurons,
$$\beta=V(D^2+\lambda_{opt}I)^{-1}V^TH^TT;$$
in step 3, if $L>N$, i.e. the number of hidden nodes is greater than the number of samples, then in order to improve the computational efficiency a singular value decomposition of the hidden-layer output matrix is carried out, specifically:
3-6. singular value decomposition $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\dots,d_i,\dots,d_N\}$ is a diagonal matrix and $d_i$ is the $i$-th singular value of the matrix $H$; then $HH^T=UD^2U^T$, with $UU^T=U^TU=I$ and $VV^T=V^TV=I$;
3-7. set the upper limit $\lambda_{max}$ and the lower limit $\lambda_{min}$ of the regularization parameter $\lambda$; for each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the regularized form of the orthogonal projection matrix,
$$\mathrm{HAT}_r=HH^TU(D^2+\lambda_iI)^{-1}U^T,\qquad \mathrm{HAT}=HH^{+}=H(H^TH)^{-1}H^T;$$
3-8. for each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the statistical mean square error corresponding to each regularization parameter $\lambda$ in the same form as in step 3-3, where $t_j$ is the desired output and $o_j$ is the actual output;
3-9. take the $\lambda$ corresponding to the minimum statistical mean square error as $\lambda_{opt}$; this yields good generalization performance and also maximizes the classification margin;
3-10. compute the output weights connecting the hidden nodes and the output neurons,
$$\beta=H^TU(D^2+\lambda_{opt}I)^{-1}U^TT.$$
2. The improved method based on the extreme learning machine and sparse representation classification of claim 1, characterized in that the sparse representation classification algorithm of step 5 first establishes an adaptive sub-dictionary from the first $k$ largest values of the output vector $o$, then reconstructs the sparse representation coefficients of the training samples, computes the corresponding residuals, and finally takes the index corresponding to the minimum residual as the class to which the picture belongs, specifically:
first find the classes corresponding to the first $k$ largest values of the output vector $o$, and then, from the vectors corresponding to those $k$ largest values, establish the adaptive sub-dictionary
$$\tilde{A}=[A_{m(1)},A_{m(2)},\dots,A_{m(k)}],\qquad m(i)\in\{1,2,\dots,m\};$$
then reconstruct the sparse representation coefficients,
$$\hat{x}=\arg\min_x \lVert y-\tilde{A}x\rVert_2^2+\tau\lVert x\rVert_1,$$
where $\tau$ is the regularization coefficient;
finally compute the residual of each class,
$$r_d(y)=\lVert y-A_d\hat{x}_d\rVert_2,$$
where $A_d$ is the matrix of training samples of the $d$-th class and $\hat{x}_d$ is the sparse representation coefficient corresponding to the $d$-th class.
CN201610018444.7A 2016-01-12 2016-01-12 An improved method based on the extreme learning machine and sparse representation classification Active CN105701506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610018444.7A CN105701506B (en) 2016-01-12 2016-01-12 An improved method based on the extreme learning machine and sparse representation classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610018444.7A CN105701506B (en) 2016-01-12 2016-01-12 An improved method based on the extreme learning machine and sparse representation classification

Publications (2)

Publication Number Publication Date
CN105701506A CN105701506A (en) 2016-06-22
CN105701506B true CN105701506B (en) 2019-01-18

Family

ID=56226315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610018444.7A Active CN105701506B (en) An improved method based on the extreme learning machine and sparse representation classification

Country Status (1)

Country Link
CN (1) CN105701506B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326677B * 2016-09-12 2018-12-25 北京化工大学 A soft-sensing method for the acetic acid consumption of a PTA plant
CN106779091B * 2016-12-23 2019-02-12 杭州电子科技大学 A periodic vibration signal localization method based on the extreme learning machine and arrival distance
US10325340B2 * 2017-01-06 2019-06-18 Google Llc Executing computational graphs on graphics processing units
CN106897737B * 2017-01-24 2019-10-11 北京理工大学 A hyperspectral remote sensing image terrain classification method based on the extreme learning machine
CN107369147B * 2017-07-06 2020-12-25 江苏师范大学 Image fusion method based on self-supervised learning
CN107820093B * 2017-11-15 2019-09-03 深圳大学 Information detection method, apparatus and receiving device based on grouped energy differences
CN108470337A * 2018-04-02 2018-08-31 江门市中心医院 A quantitative analysis method and system for subsolid pulmonary nodules based on deep image features
CN109902644A * 2019-03-07 2019-06-18 北京海益同展信息科技有限公司 Face recognition method, apparatus, device and computer-readable medium
CN109934295B * 2019-03-18 2022-04-22 重庆邮电大学 Image classification and reconstruction method based on an extreme hidden feature learning model
CN109934304B * 2019-03-25 2022-03-29 重庆邮电大学 Blind-domain image sample classification method based on an extreme hidden feature model
CN110533101A * 2019-08-29 2019-12-03 西安宏规电子科技有限公司 An image classification method based on deep neural network subspace coding

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104980442A * 2015-06-26 2015-10-14 四川长虹电器股份有限公司 Network intrusion detection method based on meta-sample sparse representation
CN104992165A (en) * 2015-07-24 2015-10-21 天津大学 Extreme learning machine based traffic sign recognition method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9330315B2 (en) * 2012-08-22 2016-05-03 International Business Machines Corporation Determining foregroundness of an object in surveillance video data


Also Published As

Publication number Publication date
CN105701506A (en) 2016-06-22


Legal Events

Code Title
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant