CN108460426A - Image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder - Google Patents

Image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder Download PDF

Info

Publication number
CN108460426A
Authority
CN
China
Prior art keywords
pseudoinverse
learning
autoencoder
training
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810269829.XA
Other languages
Chinese (zh)
Inventor
尹乾 (Yin Qian)
冯思博 (Feng Sibo)
郭平 (Guo Ping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Normal University
Original Assignee
Beijing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Normal University
Priority to CN201810269829.XA
Publication of CN108460426A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses an image classification method based on a histogram of oriented gradients combined with a stacked autoencoder trained by pseudoinverse learning. The method comprises: (1) extracting image gradient features with the histogram of oriented gradients (HOG): the orientation map of the image is computed, the HOG operator accumulates the direction statistics of several overlapping local regions, and the HOG features of the image are obtained. Several HOG operators with different parameter settings are applied, and the resulting features are fused into one high-dimensional feature vector. (2) Training a stacked autoencoder with the pseudoinverse learning algorithm (PILAE): the high-dimensional features fused in the previous step are fed into the PILAE, which continues to learn features. (3) Feeding the features learned by the PILAE into a classifier for classification. HOG extracts the information of a two-dimensional image, and the pseudoinverse learning algorithm is a non-iterative method for training multilayer feedforward neural networks. Compared with other models, the proposed model has an advantage in training time, and most hyperparameters are determined by the input data and the network structure rather than being set manually.

Description

Image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder
Technical field
The invention belongs to the field of artificial intelligence and relates to a method that combines image feature extraction with fast training of deep neural networks, in particular to a model combining the histogram of oriented gradients with a stacked autoencoder trained by pseudoinverse learning, applied to image classification tasks.
Background technology
In recent years, the development of deep learning has rekindled the enthusiasm for artificial intelligence research. Deep learning originates from the artificial neural network models first recorded in the 1940s. The first neural network model, the perceptron, could be trained to classify certain input patterns, but it was later shown that the capability of the simple perceptron is limited: it cannot handle linearly inseparable problems, and research on artificial neural networks entered its first period of decline. In recent years, rapid advances in computer hardware have greatly improved computing power, enabling the resurgence of deep learning.
Deep learning processes data in a way loosely modeled on the human brain: from the provided training data, models composed of multiple processing layers and multiple nonlinear transformations extract increasingly abstract representations. Deep learning has made breakthrough progress in image, speech and text recognition and has stronger learning ability than shallow learning algorithms. Besides powerful function-fitting ability and good generalization, each layer of a deep model exhibits a learned representation, progressing from low-level features to high-level features.
In the Internet era of exploding data, large amounts of unlabeled data are generated every day, while training deep models usually requires large amounts of labeled data, and creating a complete labeled data set costs considerable manpower and time. Currently, most deep learning models are supervised, such as labeled multilayer feedforward neural networks and convolutional neural networks; the main unsupervised model is the autoencoder. The advantage of unsupervised learning is that the data need not be labeled and the model can learn features automatically; therefore, developing unsupervised deep learning is a future direction of deep learning.
Summary of the invention
(1) In view of this, the technical problem to be solved by the present invention is to provide a model, composed of a stacked autoencoder trained by pseudoinverse learning on top of histogram-of-oriented-gradients features, for image classification. The model first extracts features with the hand-crafted histogram of oriented gradients descriptor, then connects a stacked autoencoder trained by pseudoinverse learning to extract features further, and finally feeds the features into a classifier for classification.
(2) To solve the above problem, the present invention proposes the following technical solution:
1) Extract the gradient features of the image using the histogram of oriented gradients (HOG). The histogram of oriented gradients is a feature descriptor used in computer vision for object detection. HOG captures the shape and contour of the whole image by accumulating statistics of local gradient directions. HOG first divides the image into several blocks, each composed of cells, and computes gradient information in each block; to describe the image gradient features better, the blocks are computed with local overlap. Since different cell sizes, block sizes and sliding strides extract different features, we use several combinations of cell size, block size and sliding stride to extract different HOG features and then concatenate these features into one high-dimensional feature vector. This idea is similar to using different convolution kernels to extract different feature maps in a convolutional neural network. A minimal sketch of this fusion step follows.
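By way of illustration, a minimal Python sketch of this multi-descriptor fusion (the skimage hog implementation and the concrete parameter triples are assumptions for illustration, not values fixed by the invention):

```python
import numpy as np
from skimage.feature import hog

# Hypothetical descriptor parameters: (orientations, pixels_per_cell, cells_per_block).
# The invention varies cell size, block size and stride; these values are illustrative.
HOG_CONFIGS = [
    (9, (8, 8), (2, 2)),
    (9, (4, 4), (2, 2)),
    (6, (4, 4), (3, 3)),
]

def fused_hog_features(image):
    """Extract one HOG vector per configuration and concatenate them."""
    parts = [
        hog(image, orientations=o, pixels_per_cell=pc, cells_per_block=cb,
            block_norm='L2-Hys', feature_vector=True)
        for o, pc, cb in HOG_CONFIGS
    ]
    return np.concatenate(parts)   # the fused high-dimensional feature vector
```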
2) Train a stacked autoencoder with the pseudoinverse learning algorithm (Pseudoinverse Learning Algorithm, PIL). The high-dimensional feature vector obtained in the previous step is fed into the autoencoder, which continues to learn features. An autoencoder is trained in an unsupervised manner: the output of the network approximates its input, and by controlling the number of hidden units the input vector can be mapped to a higher dimension (more hidden units) or a lower dimension (fewer hidden units). Here we reduce the number of hidden units so that the autoencoder learns features automatically, and we add a weight-decay term to the loss function to prevent the autoencoder from learning the identity mapping. An autoencoder is usually trained by defining a loss function and iteratively optimizing it with gradient descent; to save training time, we instead train the stacked autoencoder with the pseudoinverse learning algorithm. The pseudoinverse learning algorithm was proposed by Professor Guo Ping in 1995 as an efficient method for training single-hidden-layer feedforward neural networks (Guo et al., "An Exact Supervised Learning for a Three-Layer Supervised Neural Network", ICONIP'95, pp. 1041-1044, 1995) and was extended to multilayer neural networks in 2001 (Guo et al., "Pseudoinverse Learning Algorithm for Feedforward Neural Networks", in Mastorakis (Ed.), Advances in Neural Networks and Applications, WSES Press (Athens), pp. 321-326, 2001). The idea of PIL is to compute the weights from the pseudoinverse of the input matrix, replacing gradient descent. PIL trains a multilayer neural network quickly through matrix operations and is more efficient than gradient descent. Specifically, suppose a multilayer neural network has N samples, each of dimension m, with input matrix X and expected target O; the purpose of training the network is to find a set of parameters that minimizes the loss function:
E(Θ) = || g(X, Θ) − O ||²,
where g(X, Θ) is the mapping function of the neural network and Θ is the parameter set. The relationship between layer l and layer l+1 of a multilayer neural network can be expressed by the following formula:
Y^(l+1) = σ(Y^l W^l),
where σ(·) is the activation function and Y^l denotes the output of layer l. The output of the last layer of the network can then be expressed as:
G = Y^L W^L,
where G is the target output and W^L denotes the weight of the last layer. Combining the three formulas above, the loss function can be rewritten as:
E = || Y^L W^L − O ||².
In this way, the problem becomes a linear least-squares problem. The optimal pseudoinverse solution of the above formula is W^L = (Y^L)^+ O; substituting this optimal solution into the loss function, it can be rewritten as:
E = || Y^L (Y^L)^+ O − O ||².
From the above equation, the loss function reaches its optimum as soon as Y^L (Y^L)^+ approaches the identity matrix I. We therefore reset the optimization target to
|| Y^l (Y^l)^+ − I ||² < e,
where e is the error threshold we set: as long as the squared error is below this threshold, the optimization target is considered reached. During training, || Y^l (Y^l)^+ − I ||² is computed for every layer; if it is below the set threshold e, training is complete and the algorithm terminates, otherwise a hidden layer is added and the squared error is computed again, until it converges into the set threshold range or the number of trained hidden layers reaches the set depth.
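By way of illustration, a minimal numpy sketch of this layer-wise criterion (the concrete threshold value is our assumption, and the helper is practical only for moderate sample counts):

```python
import numpy as np

def layer_criterion(Y):
    """Squared error ||Y Y^+ - I||^2 for a layer output Y (one row per sample)."""
    n = Y.shape[0]
    return np.linalg.norm(Y @ np.linalg.pinv(Y) - np.eye(n)) ** 2

# Hidden layers are added while layer_criterion(Y) >= e (e.g. e = 1e-2, an
# assumed threshold) and the user-set depth has not been reached.
```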
During training, we add a regularization term to the loss function to avoid overfitting caused by insufficient data. After introducing weight-decay regularization, the optimization target becomes:
E = || Y^L W^L − O ||² + λ || W^L ||²,
where λ > 0 is the regularization coefficient. From the above formula it can be deduced that
(Y^L)^+ = ((Y^L)^T Y^L + λI)^(−1) (Y^L)^T.
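A minimal numpy sketch of this regularized pseudoinverse (the default value of λ is an assumption):

```python
import numpy as np

def ridge_pinv(Y, lam=1e-3):
    """Regularized pseudoinverse ((Y^T Y + lam I)^-1) Y^T of a layer output
    Y (one row per sample); lam plays the role of the coefficient lambda above."""
    h = Y.shape[1]
    return np.linalg.solve(Y.T @ Y + lam * np.eye(h), Y.T)

# Last-layer weights would then be W_L = ridge_pinv(Y_L) @ O for targets O.
```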
(3) The advantages of the invention are:
Almost no hyperparameters need to be set: in the present invention only the number of HOG operators and the cell and block sizes are set by hand. In the PILAE, the number of hidden layers is determined by the loss function: training stops when the error rises or the depth set by the user is reached. The number of hidden units is determined by the rank of the input matrix: usually the feature dimension of the input matrix exceeds its rank, and we reduce the dimension of the matrix to eliminate redundant data and reduce correlated features; when the feature dimension of the input matrix equals its rank, we force the dimension to be reduced by a fixed proportion to achieve feature learning. Since the PILAE needs neither gradient-based optimization nor iterative refinement, no learning rate or number of training epochs needs to be set. The weights of the network are pseudoinverse matrices of the input matrices, so no weight initialization is needed.
Short training time: the present invention extracts different HOG features with different HOG operators and finally forms a high-dimensional feature vector. Once the HOG operator parameters are given, the features extracted from each image only need to be concatenated into the high-dimensional feature. The PILAE connected afterwards performs further feature learning; it requires no iterative optimization, and training is completed with linear-algebra computations alone. Compared with other neural network models that iterate and update parameters repeatedly, the present invention has an advantage in training time. Moreover, the model needs no hyperparameter tuning, so the user not only trains quickly but also saves the considerable time usually spent tuning hyperparameters.
AI democratization: nowadays many network models achieve very small error rates on image classification data sets, but when users apply these models to their own practical scenarios, the results are often unsatisfactory, and many network parameters still have to be tuned over long training runs. Tuning neural network parameters is difficult even for professionals, let alone for people without a specialist background. The present invention is simple to use and requires no complicated tuning, which is more conducive to the democratization of AI.
Description of the drawings
Fig. 1: Structural schematic diagram of the histogram of oriented gradients combined with the pseudoinverse-learning-trained stacked autoencoder
Detailed description of embodiments
(1) A preferred embodiment of the present invention is described in detail below with reference to the accompanying drawing:
The present invention proposes an image classification method based on a histogram of oriented gradients combined with a stacked autoencoder trained by pseudoinverse learning. To make the purpose, technical solution and advantages of the present invention clearer, the method is described in further detail below in conjunction with a specific implementation example and the accompanying drawing. It should be understood that the specific implementation example described herein only explains the present invention and does not limit it.
Specifically, Fig. 1 shows an embodiment of the present invention: a method combining a histogram of oriented gradients with a pseudoinverse-learning-trained stacked autoencoder, applied to handwritten digit classification. Given N images of n × n pixels forming the training sample set X, expressed as the matrix X = [x_1, x_2, …, x_N], the image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder of this embodiment comprises the following basic steps:
Step 1) Select t HOG descriptors and extract HOG features from each image one by one; the parameters of these descriptors (cell size, block size and gradient orientations) differ. The feature vectors extracted by the t descriptors are concatenated into an m-dimensional feature vector, and these vectors form the feature matrix F used to train the PILAE, expressed as F = [f_1, f_2, …, f_N].
Step 2) Take the above feature matrix F as the input matrix of the autoencoder and solve its pseudoinverse matrix F^+. First apply singular value decomposition to F, obtaining
F = U Σ V^T.
The rank of the input matrix is r = Rank(Σ), where Rank(·) counts the nonzero elements of Σ, and the dimension of the input feature vector is m = Dim(f), where Dim(·) counts the number of features of the feature vector. We set the number p of hidden units of the autoencoder such that r < p < m. If the rank r of the matrix is less than the dimension m of the feature vector, p is set between r and m:
p = r + α(m − r),
where α is a user-defined parameter. When the matrix has full rank, i.e. the rank equals the feature dimension, we force the feature dimension to be reduced for the sake of feature learning, so that p < m:
p = β m,
where β is a user-defined parameter. A sketch of this rule follows.
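A small numpy sketch of this width rule (the default values of α and β are assumptions, since the invention leaves them user-defined):

```python
import numpy as np

def hidden_units(F, alpha=0.5, beta=0.8):
    """Choose the hidden-layer width p from the feature matrix F
    (m x N, one column per sample) following the two rules above."""
    m = F.shape[0]                    # feature dimension of the input
    r = np.linalg.matrix_rank(F)      # rank of the input matrix
    if r < m:
        return int(r + alpha * (m - r))   # r < p < m
    return int(beta * m)                  # full rank: force p < m
```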
Step 3) According to the PIL algorithm, first apply SVD to F, obtaining its pseudoinverse
F^+ = V Σ' U^T,
where Σ' is obtained by replacing every nonzero element of Σ with its reciprocal. V is truncated to its first p components,
V = [v_1, v_2, …, v_p, …, v_m]^T,
where p is the number of hidden units set in step 2). Then let W_e = F^+ and map the matrix into the feature space of the hidden layer:
H = σ(W_e F),
where σ(·) is the activation function.
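Because the truncation of V is stated only loosely above, the following numpy sketch adopts one dimensionally consistent reading of the truncated-pseudoinverse encoder: keep the p largest singular components of F = U Σ V^T and set W_e from them, so that W_e F = V_p^T; using tanh as σ(·) is likewise our assumption:

```python
import numpy as np

def encoder_weights(F, p):
    """Truncated-pseudoinverse encoder: with F = U S V^T (F is m x N, one
    column per sample), set W_e = S_p^{-1} U_p^T so that W_e F = V_p^T."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    s_inv = np.where(s[:p] > 1e-12, 1.0 / s[:p], 0.0)  # invert nonzero singular values only
    return (U[:, :p] * s_inv).T                        # W_e, shape (p, m)

def encode(F, W_e):
    """Hidden representation H = sigma(W_e F); tanh is the assumed sigma."""
    return np.tanh(W_e @ F)
```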
Step 4) Solve the decoder weights by pseudoinverse learning. The decoder of the autoencoder satisfies W_d H = X, and by least squares there exists an optimal pseudoinverse approximate solution W_d = X H^+, so the pseudoinverse H^+ of the hidden-layer output H is computed. The loss function of pseudoinverse learning is defined as:
min E = || X − W H ||².
To avoid overfitting of the model, we add a weight-decay regularization term, and the loss function is amended as follows:
min E = || X − W H ||² + k || W ||².
Setting the derivative of the loss function to zero yields:
−(X − W H) H^T + k W = 0,
W = X H^T (H H^T + k I)^(−1).
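A numpy sketch of this decoder solution (the default weight-decay coefficient k is an assumption):

```python
import numpy as np

def decoder_weights(X, H, k=1e-3):
    """W_d = X H^T (H H^T + k I)^{-1}, the regularized least-squares
    solution derived above, for X (m x N) and H (p x N)."""
    p = H.shape[0]
    A = H @ H.T + k * np.eye(p)            # symmetric positive definite (p x p)
    return np.linalg.solve(A, H @ X.T).T   # equals X H^T A^{-1} since A = A^T
```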
Step 5) Having obtained the decoder weights W_d in step 4), use the transpose of the decoder weights as the encoder weights, W_e = W_d^T. In this way, the hidden-layer output of the autoencoder, H = σ(W_e F), represents the features of the original data; this hidden output is taken as the input data of the next autoencoder, and the above steps are repeated to train the next autoencoder. When the user's requirement is reached, training stops; the trained autoencoders are stacked to form the stacked autoencoder, the decoder parts are removed, the last output is the feature representation of the original data, and it is then fed into a classifier for classification. A sketch of the stacking loop follows.
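A sketch of the stacking loop, reusing the helpers sketched above (hidden_units, encoder_weights, encode, decoder_weights); the fixed depth and all default values are assumptions, since the invention stops either by the layer-wise error criterion or by a user-set depth:

```python
import numpy as np

def train_pilae(F, depth=3, alpha=0.5, beta=0.8, k=1e-3):
    """Train the stack layer by layer; returns the encoder weight of each layer."""
    weights = []
    X = F                                      # m x N, one column per sample
    for _ in range(depth):
        p = hidden_units(X, alpha, beta)
        W_e = encoder_weights(X, p)            # pseudoinverse-based encoder
        H = encode(X, W_e)
        W_d = decoder_weights(X, H, k)         # decoder, discarded after training
        weights.append(W_d.T)                  # step 5: encoder weight := W_d^T
        X = np.tanh(W_d.T @ X)                 # hidden output feeds the next layer
    return weights                             # stacked encoder weights
```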
(2) embodiment
To demonstrate the practicality of the present invention, we test the performance of the model on a common machine learning data set and compare it experimentally with related models.
The database used in the experiment is the MNIST database of handwritten digits (THE MNIST DATABASE of handwritten digits), a standard data set widely used in the field to evaluate classification algorithms; we use MNIST to test the performance of the model of the present invention. MNIST, created by Yann LeCun et al., contains images of the handwritten digits 0-9: 70,000 handwritten digit images in total, of which 60,000 are training images and 10,000 are test images; every image has been background-preprocessed and normalized to 28 × 28 = 784 pixels. We compare classical machine learning models and neural network models with the model of the present invention, based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder; the results are shown in Table 1.
Comparing with the other models, it can be seen that the model proposed by the present invention has a clear advantage in training time while achieving good recognition accuracy.
Model       Training time (s)   Training accuracy (%)   Test accuracy (%)
SAE         298.43              97.53                   96.72
Lenet-5     523.43              100.00                  98.33
SVM         2583.82             98.72                   96.46
HOG         30.83               94.88                   94.32
PILAE       62.32               97.32                   96.39
HOG+PILAE   92.58               98.82                   98.01

Table 1
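For completeness, a hedged end-to-end sketch of the evaluated pipeline, reusing the functions sketched above. Loading MNIST through sklearn's fetch_openml, the positional train/test split, and logistic regression as the classifier are assumptions; the invention does not fix a particular classifier implementation:

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression

# MNIST: 70,000 28x28 images; the first 60,000 are assumed to be the training
# split and the last 10,000 the test split (the usual ordering of mnist_784).
X_raw, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
images = X_raw.reshape(-1, 28, 28) / 255.0

# Step 1: fused HOG features, one column per sample (feature matrix F, m x N).
F = np.stack([fused_hog_features(img) for img in images], axis=1)

# Steps 2-5: train the stacked autoencoder, then forward the data through it.
H = F
for W_e in train_pilae(F):
    H = np.tanh(W_e @ H)

# Final step: feed the learned features into a classifier (assumed choice).
clf = LogisticRegression(max_iter=200)
clf.fit(H[:, :60000].T, y[:60000])
print('test accuracy:', clf.score(H[:, 60000:].T, y[60000:]))
```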
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (5)

1. An image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder, comprising the following steps:
1) extracting the gradient features of an image using the histogram of oriented gradients, wherein HOG operators with several different parameter settings extract different gradient features, and these different features are then cascaded into a high-dimensional feature;
2) training a stacked autoencoder with the pseudoinverse learning algorithm (PILAE), taking the high-dimensional feature of step 1) as the model input and learning features further;
3) inputting the features trained by the PILAE into a classifier to perform image classification.
2. The method according to claim 1, wherein the histogram of oriented gradients extracts image gradient information, characterized in that the histogram of oriented gradients used in step 1) extracts the gradient features of the image; its main feature is that the parameters of multiple HOG descriptors are set to extract different gradient features, and these features are fused into a high-dimensional feature vector.
3. The image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder according to claim 1, characterized in that the pseudoinverse learning algorithm employed in step 2) is a non-iterative algorithm for training multilayer feedforward neural networks; its main features are that it needs no backpropagation and no iterative optimization, the weights of the network are determined by the pseudoinverse matrices of the input matrices, the mean squared error is used as the loss function, and the weight of the last layer of the network is solved by least squares.
4. The method according to claim 1, wherein the number of hidden units of the autoencoder trained with the pseudoinverse learning algorithm is determined by the rank of the input matrix; the dimension of the input feature vector is m = Dim(f), where Dim(·) counts the number of features of the feature vector, and the rank of the input matrix is r = Rank(Σ), where Rank(·) counts the nonzero elements of Σ; relating m and r, the number p of hidden units of the autoencoder is set such that r < p < m: if the rank r of the matrix is less than the dimension m of the feature vector, p is set between r and m as p = r + α(m − r), where α is a user-defined parameter; when the matrix has full rank, i.e. the rank equals the feature dimension, the feature dimension is forcibly reduced for feature learning so that p < m, as p = β m, where β is a user-defined parameter.
5. The method according to claim 1, wherein the number of hidden layers of the stacked autoencoder trained with the pseudoinverse learning algorithm is confirmed automatically by the defined criterion || Y^l (Y^l)^+ − I ||² < e, where e is the set threshold; the training error of every layer is computed, and training stops when it falls below the set threshold.
CN201810269829.XA 2018-03-29 2018-03-29 Image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder Pending CN108460426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810269829.XA CN108460426A (en) 2018-03-29 2018-03-29 Image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810269829.XA CN108460426A (en) 2018-03-29 2018-03-29 Image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder

Publications (1)

Publication Number Publication Date
CN108460426A 2018-08-28

Family

ID=63237188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810269829.XA Pending CN108460426A (en) 2018-03-29 2018-03-29 Image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder

Country Status (1)

Country Link
CN (1) CN108460426A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170328194A1 (en) * 2016-04-25 2017-11-16 University Of Southern California Autoencoder-derived features as inputs to classification algorithms for predicting failures
CN106127804A (en) * 2016-06-17 2016-11-16 淮阴工学院 The method for tracking target of RGB D data cross-module formula feature learning based on sparse depth denoising own coding device
CN106874879A (en) * 2017-02-21 2017-06-20 华南师范大学 Handwritten Digit Recognition method based on multiple features fusion and deep learning network extraction
CN107480777A (en) * 2017-08-28 2017-12-15 北京师范大学 Sparse self-encoding encoder Fast Training method based on pseudo- reversal learning
CN107609637A (en) * 2017-09-27 2018-01-19 北京师范大学 A kind of combination data represent the method with the raising pattern-recognition precision of pseudo- reversal learning self-encoding encoder

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211205A (en) * 2019-06-14 2019-09-06 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
WO2020248898A1 (en) * 2019-06-14 2020-12-17 腾讯科技(深圳)有限公司 Image processing method, apparatus and device, and storage medium
CN110211205B (en) * 2019-06-14 2022-12-13 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
US11663819B2 (en) 2019-06-14 2023-05-30 Tencent Technology (Shenzhen) Company Limited Image processing method, apparatus, and device, and storage medium
CN115908311A (en) * 2022-11-16 2023-04-04 湖北华鑫光电有限公司 Lens forming detection equipment based on machine vision and method thereof
CN115908311B (en) * 2022-11-16 2023-10-20 湖北华鑫光电有限公司 Lens forming detection equipment and method based on machine vision

Similar Documents

Publication Publication Date Title
Mascarenhas et al. A comparison between VGG16, VGG19 and ResNet50 architecture frameworks for Image Classification
CN108717568B (en) A kind of image characteristics extraction and training method based on Three dimensional convolution neural network
CN105184298B (en) A kind of image classification method of quick local restriction low-rank coding
CN111753828B (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN106295507B (en) A kind of gender identification method based on integrated convolutional neural networks
CN113627472B (en) Intelligent garden leaf feeding pest identification method based on layered deep learning model
Tang et al. Deep fishernet for object classification
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN109785344A (en) The remote sensing image segmentation method of binary channel residual error network based on feature recalibration
CN110334715A (en) A kind of SAR target identification method paying attention to network based on residual error
CN108537121A (en) The adaptive remote sensing scene classification method of environment parament and image information fusion
CN107767416A Method for recognizing the direction of a pedestrian in a low-resolution image
CN109344898A (en) Convolutional neural networks image classification method based on sparse coding pre-training
CN108268890A (en) A kind of hyperspectral image classification method
CN105631478A (en) Plant classification method based on sparse expression dictionary learning
Singh et al. Leaf identification using feature extraction and neural network
Gao et al. Natural scene recognition based on convolutional neural networks and deep Boltzmann machines
Sharma et al. Recognition of plant species based on leaf images using multilayer feed forward neural network
CN112800927A (en) AM-Softmax loss-based butterfly image fine granularity identification method
CN112766283A (en) Two-phase flow pattern identification method based on multi-scale convolution network
CN115222998A (en) Image classification method
CN114913379A (en) Remote sensing image small sample scene classification method based on multi-task dynamic contrast learning
CN108460426A (en) Image classification method based on a histogram of oriented gradients combined with a pseudoinverse-learning-trained stacked autoencoder
Manzari et al. A robust network for embedded traffic sign recognition
CN108805280A (en) A kind of method and apparatus of image retrieval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2018-08-28)