CN106127230A - Image-recognizing method based on human visual perception - Google Patents
Image-recognizing method based on human visual perception
- Publication number
- CN106127230A CN106127230A CN201610427497.4A CN201610427497A CN106127230A CN 106127230 A CN106127230 A CN 106127230A CN 201610427497 A CN201610427497 A CN 201610427497A CN 106127230 A CN106127230 A CN 106127230A
- Authority
- CN
- China
- Prior art keywords
- image
- training
- hmax
- ffnn
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 59
- 230000016776 visual perception Effects 0.000 title claims abstract description 6
- 238000012549 training Methods 0.000 claims abstract description 52
- 238000012360 testing method Methods 0.000 claims description 15
- 230000006870 function Effects 0.000 claims description 13
- 230000005284 excitation Effects 0.000 claims description 4
- 238000003709 image segmentation Methods 0.000 claims description 3
- 239000000203 mixture Substances 0.000 claims description 3
- 238000013461 design Methods 0.000 claims description 2
- 230000011218 segmentation Effects 0.000 claims 1
- 238000004422 calculation algorithm Methods 0.000 abstract description 7
- 230000004438 eyesight Effects 0.000 abstract description 6
- 238000000605 extraction Methods 0.000 abstract description 5
- 230000000007 visual effect Effects 0.000 abstract description 4
- 238000002203 pretreatment Methods 0.000 abstract description 2
- 230000007423 decrease Effects 0.000 abstract 1
- 230000003247 decreasing effect Effects 0.000 abstract 1
- 238000004088 simulation Methods 0.000 abstract 1
- 230000008569 process Effects 0.000 description 12
- 230000008901 benefit Effects 0.000 description 4
- 239000000284 extract Substances 0.000 description 4
- 239000013535 sea water Substances 0.000 description 4
- 239000011159 matrix material Substances 0.000 description 3
- 238000011160 research Methods 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000002474 experimental method Methods 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 230000007935 neutral effect Effects 0.000 description 2
- 238000003909 pattern recognition Methods 0.000 description 2
- 230000008447 perception Effects 0.000 description 2
- 210000000857 visual cortex Anatomy 0.000 description 2
- 241000288906 Primates Species 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000001066 destructive effect Effects 0.000 description 1
- 239000003814 drug Substances 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000001953 sensory effect Effects 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image recognition method based on human visual perception. On the foundation of deep learning, the method constructs an image recognition architecture that, like human vision, can recognize targets across problem domains: a single model can identify images from multiple domains, a further step toward simulating the human visual system. The method uses HMAX, a feature model of human visual perception, to extract features directly from the original image, eliminating complex preprocessing steps and improving the computational efficiency and practicality of the method. A stacked denoising autoencoder (SDA) reduces the number of parameters in the deep learning stage, improving the generality of the algorithm and the training performance of an ordinary feed-forward back-propagation network. Experimental results show that the classification accuracy of the method also exceeds that of other classification methods. The method is therefore an efficient and practical image recognition approach with broad applicability in the field of image recognition.
Description
Technical field
The present invention relates to pattern recognition, artificial intelligence, computer vision, and stacked autoencoders, and in particular to the combination of the feature extraction model HMAX with the stacked denoising autoencoder (SDA) under a deep learning model.
Background technology
Accurate image recognition is of great research significance, and image recognition technology plays an important role in medicine, aerospace, military, industrial, and agricultural applications. Most current image recognition methods rely on manually extracted features, which is not only time-consuming and laborious but also difficult. Since the revival of deep learning, it has become part of the state of the art in many disciplines, particularly in computer vision. At present, deep neural networks have proved to be nearly the best-performing structures among deep learning architectures.
Deep learning can proceed as unsupervised learning: the label values of the samples need not be known during training, and features can be extracted without manual involvement. In recent years, applying deep learning to image recognition has become a research hotspot in the field; good results have already been achieved, and there remains wide room for research.
The deep belief network (DBN) proposed by Hinton et al. applies an unsupervised pre-training method that successfully addresses this problem. A DBN, built from restricted Boltzmann machine (RBM) modules, is a generative network model with a multi-layer structure. Through bottom-up layer-wise training, the model parameters are confined to numerical ranges favorable to the next stage of learning. This idea of unsupervised pre-training followed by supervised fine-tuning has had a tremendous influence on the machine learning field. DBNs are widely used in image recognition, for example in face recognition, handwriting recognition, and natural scene recognition. However, the recognition rate of DBNs is slightly lower than that of convolutional neural networks (CNNs), and because of their large number of parameters, DBNs have no advantage in training time unless the parameters are finely tuned.
Current image recognition algorithms based on human perception still have the following problems to be solved:

1. Before feature extraction, the original image must undergo a complex preprocessing pipeline, including filtering, segmentation, registration, and similar operations, as shown in Fig. 1.

This preprocessing has several drawbacks:

a) Any noise-reduction method inevitably loses useful information from the original image.

b) An inappropriate segmentation method strongly affects the shape, edge, and texture features of image targets, hindering subsequent recognition.

c) Some segmentation methods demand a high image resolution, which runs counter to the imaging principles of most real images.

d) The same preprocessing method does not generalize across scenes, so a given method yields low recognition accuracy when applied to different scenes.

2. Classical perception-based classification methods (e.g., DBN) have too many training parameters, and searching an extremely high-dimensional parameter space for the optimal result is a very complicated process. This increases the complexity of using such a method and lowers its computational performance.

3. Most current methods require the target or background regions of the original image to be labeled. This demands extensive computation and manual intervention, so practicality is low.
Summary of the invention
Because the feature dimensionality used in standard vision-based models is high and the number of training images is large, the computational load is heavy and the training time is long, strongly limiting practical application.

In view of this, the present method strives to simplify the feature dimensionality, reduce computational complexity, and shorten training time while guaranteeing classification accuracy, with the goal of improving the computational efficiency and usability of the algorithm. Since the human visual perception system can capture the key information of most images in a perceivable form, this method distinguishes image content by simulating the way humans identify targets in an image.
First, the method extracts HMAX features from the original image. HMAX is a general object recognition model whose foundation is biological research on the object recognition mechanism of the visual cortex; it was created by modeling the neural activity of the primate visual cortex during object recognition. In pattern recognition, the HMAX model is mainly used for feature extraction of the objects to be recognized; the extracted features are called HMAX features.
The advantages of HMAX are:

1) HMAX simulates the human visual perception mechanism and can extract features directly from the original image, avoiding image preprocessing.

2) The key features extracted by HMAX reduce the dimensionality of the recognition problem, which improves the computational performance of the subsequent classification stage.
Second, a stacked denoising autoencoder (Stacked Denoising Autoencoder, SDA) is trained on the obtained HMAX features. The SDA is a variant structure in deep learning with the same strong capability as a DBN for learning deep features of data.

The advantages of the SDA are:

1) SDA training is an unsupervised process that deliberately corrupts the data, so it can learn the features and structure of the data set, further reduce feature dimensionality, and produce a latent representation better suited to supervised classification.

2) Because the SDA does not require Gibbs sampling, it outperforms the deep belief network (DBN) in most cases and is easier to train.
Finally, a feed-forward neural network (FFNN) is applied to recognize the targets in the test image.
The method detects targets without preprocessing the original image, and its classification accuracy reaches the level of human target recognition. In summary, the method applies a model of the human visual system to extract target features from the image, and uses the SDA method from deep learning to improve the computational efficiency and classification accuracy of the algorithm, achieving image classification and discrimination.
Compared with existing methods, the advantages of this method include:

1) HMAX features can be computed directly on the original image, avoiding tedious preprocessing and improving the usability of the method.

2) It simulates human vision and grasps the essential feature values of the targets in the image, which acts as dimensionality reduction for the subsequent classification step.

3) The stacked denoising autoencoder (SDA) is a true multi-layer structure learning algorithm; it exploits relative spatial relationships to reduce the number of training parameters and thus improves the training performance of an ordinary feed-forward back-propagation network. The SDA has achieved good performance in many experiments.

4) Just as stacking restricted Boltzmann machines forms a deep belief network, stacking denoising autoencoders forms a stacked denoising autoencoder. Each layer is trained with noise-corrupted input, so each trained encoder has good robustness and fault tolerance as a feature extractor, as does the learned feature representation; this also helps improve classification accuracy.
The image recognition method based on human visual perception is divided into 8 parts, with the following steps:

(1) Original training image input

Obtain the original training images (TIF or JPG). Larger images are simply partitioned: each scene image is segmented into sub-images of 200*200 pixels.
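The partitioning in step (1) can be sketched as follows. This is a minimal illustration in numpy; the handling of partial tiles at the edges is an assumption, since the patent only says the image is simply cut into 200*200 sub-images.

```python
import numpy as np

def tile_image(img, size=200):
    """Partition a scene image into non-overlapping size x size sub-images,
    dropping any partial tiles at the right/bottom edges (an assumption)."""
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

scene = np.zeros((450, 620), dtype=np.uint8)  # stand-in for a SAR scene image
tiles = tile_image(scene)
print(len(tiles), tiles[0].shape)  # 6 tiles of (200, 200)
```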
(2) Generate the feature matrix with the improved HMAX feature template extraction method.

Input: the set of module training pictures.

Parameters: this method partially modifies the classical HMAX settings. First, the number of orientation parameters of the Gabor filters is 8 (0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4) with 16 scales in total, giving 128 filters. In the Gabor function, the aspect ratio is set to 0.7, which determines the ellipticity of the visual receptive field; the ratio of the standard deviation of the Gaussian factor to the wavelength is set to 0.65, which determines the spatial-frequency bandwidth; the orientation angles lie in [0, 2π), i.e., the method chooses 8 orientations; since the phases of the 8 orientations are themselves symmetric, the phase offset is set to 0.

The number of templates in the HMAX training stage is 10.

Output: the HMAX feature matrix.
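A Gabor filter bank with these counts (8 orientations x 16 scales = 128 filters) might be built as in the sketch below. The specific filter sizes and wavelengths are assumptions for illustration; the patent fixes only the orientation count, the scale count, the aspect ratio (0.7), and the σ/λ ratio (0.65).

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma_ratio=0.65, gamma=0.7):
    """One Gabor filter: gamma is the aspect ratio, sigma = sigma_ratio * wavelength."""
    sigma = sigma_ratio * wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
         * np.cos(2 * np.pi * xr / wavelength))     # phase offset 0, as in the method
    g -= g.mean()               # zero mean
    g /= np.linalg.norm(g)      # unit energy
    return g

# 8 orientations x 16 scales -> 128 filters, as in the method
thetas = [k * np.pi / 4 for k in range(8)]
sizes = range(7, 38, 2)  # 16 odd filter sizes, 7..37 (an assumption)
bank = [gabor_kernel(s, wavelength=s / 2.0, theta=t) for s in sizes for t in thetas]
print(len(bank))  # 128
```

In the full HMAX pipeline these filters form the S1 layer; their responses are then max-pooled and compared against the 10 stored templates to produce the feature matrix.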
This method employed several classifiers (SDA, SVM, and DBN) for classification learning in order to compare their recognition performance. The results show that the running speed of the SDA is twice that of the SVM and 125 times that of the DBN. The SDA is therefore used to train on and recognize the target objects, with the following steps:
(3) SDA training

Input: the feature values of the image objects obtained by the HMAX method above. A number of labeled training samples are used to train the feed-forward network; let SAE(TrainingSet, G1vAll) be the training function, where TrainingSet is the sample set and G1vAll is the number of classes. The specific parameters are as follows:

The SAE has 3 layers (picSize, 100)
The activation function is the sigmoid function
The number of SAE iterations is set to 2
The training batch size is 100
The learning rate is 1
The noise coefficient is 0.5

Output: the network weights (Net Weight) obtained from SDA training.
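One layer of the SDA training described above can be sketched as a denoising autoencoder with tied weights and masking noise. This is a minimal numpy illustration of the stated hyperparameters (sigmoid units, 2 iterations, batch size 100, learning rate 1, noise coefficient 0.5), not the patent's exact implementation; the input dimensionality is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae(X, n_hidden, noise=0.5, lr=1.0, epochs=2, batch=100):
    """Train one denoising-autoencoder layer with tied weights on data X."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_in)
    for _ in range(epochs):
        for i in range(0, len(X), batch):
            x = X[i:i + batch]
            x_noisy = x * (rng.random(x.shape) > noise)   # masking noise
            h = sigmoid(x_noisy @ W + b_h)                # encode
            r = sigmoid(h @ W.T + b_v)                    # decode (tied weights)
            d_r = (r - x) * r * (1 - r)                   # squared-error gradient
            d_h = (d_r @ W) * h * (1 - h)
            W -= lr * (x_noisy.T @ d_h + (h.T @ d_r).T) / len(x)
            b_h -= lr * d_h.mean(axis=0)
            b_v -= lr * d_r.mean(axis=0)
    return W, b_h

# Stacking: each layer's hidden code feeds the next autoencoder
X = rng.random((200, 64))           # stand-in for the HMAX feature matrix
W1, b1 = train_dae(X, 100)
H1 = sigmoid(X @ W1 + b1)
print(W1.shape, H1.shape)
```

The weights learned this way play the role of the patent's Net Weight output.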
(4) Initialize the FFNN

The trained network weights (Net Weight) are used as the initial weights of the FFNN network.

The layer sizes are [picsize, 100, 10]
The activation function is the sigmoid function
Iterations: 1
Batch size: 100
(5) Train the FFNN

Input: the FFNN network is trained on the training-sample HMAX feature values from step (2).

Output: the trained network weights of the FFNN (FFNN Net Weight).
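Steps (4) and (5) — initializing the FFNN from the pre-trained weights and fine-tuning it with back-propagation — can be sketched as follows. The shapes and the random stand-in for the SDA weights are assumptions; the hyperparameters (sigmoid units, 1 iteration, batch size 100, learning rate 1) follow the text above.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes following the patent: picSize inputs, 100 hidden, 10 classes
pic_size, n_hidden, n_classes = 64, 100, 10
W1 = rng.normal(0, 0.1, (pic_size, n_hidden))   # in practice: the SDA's Net Weight
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_classes))  # output layer, randomly initialized
b2 = np.zeros(n_classes)

def train_ffnn(X, Y, lr=1.0, epochs=1, batch=100):
    """Mini-batch back-propagation fine-tuning of the two-layer FFNN."""
    global W1, b1, W2, b2
    for _ in range(epochs):
        for i in range(0, len(X), batch):
            x, y = X[i:i + batch], Y[i:i + batch]
            h = sigmoid(x @ W1 + b1)
            o = sigmoid(h @ W2 + b2)
            d_o = (o - y) * o * (1 - o)
            d_h = (d_o @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_o / len(x); b2 -= lr * d_o.mean(axis=0)
            W1 -= lr * x.T @ d_h / len(x); b1 -= lr * d_h.mean(axis=0)

X = rng.random((300, pic_size))                         # stand-in HMAX features
Y = np.eye(n_classes)[rng.integers(0, n_classes, 300)]  # one-hot labels
train_ffnn(X, Y)
print(W1.shape, W2.shape)
```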
(6) Test (validation) images

Obtain the original test (validation) images (TIF or JPG). Larger images are simply cut: each scene image is segmented into sub-images of 200*200 pixels.
(7) Generate the HMAX feature matrix of the test image's sub-image set; the method is the same as in step two.
(8) FFNN classification

Input: the trained FFNN network weights are applied to classify the HMAX feature matrix generated in step 7.

Output: the result [label, Score], where label is the class label and Score is the classification confidence.
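The classification step can be sketched as a forward pass that returns a label and a confidence per feature row. How the patent computes Score is not specified; normalizing the output activations, as below, is an assumption, and the weights here are random stand-ins for the trained FFNN Net Weight.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def ffnn_predict(F, W1, b1, W2, b2):
    """Forward pass of the trained FFNN: for each HMAX feature row, return a
    class label and a confidence score (the normalized output activation)."""
    h = sigmoid(F @ W1 + b1)
    o = sigmoid(h @ W2 + b2)
    p = o / o.sum(axis=1, keepdims=True)   # crude normalization, an assumption
    return p.argmax(axis=1), p.max(axis=1)

rng = np.random.default_rng(2)
F = rng.random((5, 64))                          # stand-in HMAX features
W1, b1 = rng.normal(size=(64, 100)), np.zeros(100)
W2, b2 = rng.normal(size=(100, 3)), np.zeros(3)  # 3 classes as in the embodiment
labels, scores = ffnn_predict(F, W1, b1, W2, b2)
print(labels.shape, scores.shape)
```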
Brief description of the drawings

Fig. 1: Preprocessing steps of existing recognition methods
Fig. 2: Structure of the image recognition method based on human visual perception
Fig. 3: 200 × 200 oil slick sample image
Fig. 4: 200 × 200 look-alike slick sample image
Fig. 5: 200 × 200 seawater sample image
Fig. 6: SAR original image
Fig. 7: Oil slick location image
Fig. 8: Look-alike slick location image
Because the SDA itself has no classification function — it is a feature extractor — a classifier must be added at the end of the network to perform classification. Comparison experiments show that the feed-forward neural network outperforms the SVM and Bayes classifiers in speed while its classification accuracy is essentially the same, so this method uses a feed-forward neural network (FFNN) for image target classification. The SDA in this method uses 3 hidden layers.

Fig. 2 shows the training and test classification workflow for image targets, background, and so on. First, the HMAX visual feature values of the classification samples are assembled into a feature matrix and used to train the SDA, yielding the SDA network model. Then the network parameters obtained by the SDA (Net Weight) initialize the FFNN weights, and the FFNN is trained, giving the trained FFNN parameters (FFNN Net Weight). Next, the HMAX feature values of the test image targets are computed, and the FFNN predict function together with the trained FFNN Net Weight judges these feature values. Finally, the class label of the test image and its confidence are produced.
Embodiment: recognition of marine oil spills in synthetic aperture radar (SAR) images

Marine oil spill images have complex shapes and are not easy to recognize. The classification performance of the present method exceeds the accuracy of human experts directly identifying oil spills.

Figs. 3, 4, and 5 show the three sample classes: oil slick, look-alike slick, and seawater. Figs. 6 and 7 show the positions of oil slicks and look-alikes in the original image. First, the HMAX visual feature values are assembled into a feature matrix and the SDA is trained to obtain the initial weights of the FFNN. Then, the HMAX feature values of the three sample classes are used to continue training the FFNN, producing the network parameters FFNNStructure. Finally, this FFNNStructure is applied to detect the test images and obtain the classification results.

Table 1: Confusion matrix of the SDA classification results; 1 denotes look-alike slick, 2 denotes oil slick, 3 denotes seawater

In Table 1, class 1 denotes look-alike slicks, class 2 oil slicks, and class 3 seawater. The confusion matrix shows that the overall classification accuracy reaches 100%. Although the number of test samples is small, this result is clearly above the judgment level of human experts. In terms of performance, the SDA needs only 2 iterations for the classification accuracy to exceed 90%.
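The overall accuracy quoted above is the trace of the confusion matrix divided by its total. A small sketch (the per-class counts below are hypothetical; the patent reports only the 100% aggregate on its small test set, i.e. a diagonal matrix):

```python
import numpy as np

def confusion_accuracy(cm):
    """Overall accuracy = trace / total, for a confusion matrix whose rows are
    true classes and columns predicted classes."""
    cm = np.asarray(cm)
    return cm.trace() / cm.sum()

# Hypothetical counts for the three classes (look-alike, oil slick, seawater):
cm = [[4, 0, 0],
      [0, 5, 0],
      [0, 0, 6]]
print(confusion_accuracy(cm))  # 1.0
```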
Claims (1)
1. An image recognition method based on human visual perception, characterized by comprising the following steps:

One: input the original training images and generate the module training sub-image set

Obtain the original training images; larger original images are cut, segmenting each scene image into a module training sub-image set with a resolution of 200*200;
Two: generate the HMAX feature matrix from the training samples

The input is the module training picture set; the number of orientation parameters of the Gabor filters is 8 (0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4), with 16 scales in total, giving 128 filters. In the Gabor function the aspect ratio is set to 0.7; the ratio of the standard deviation of the Gaussian factor to the wavelength is set to 0.65; the orientation angles lie in [0, 2π), i.e., the method chooses 8 orientations; the phase offset is set to 0. The number of templates in the HMAX training stage is 10. The output is the HMAX feature matrix;
Three: SDA training

Input the HMAX feature matrix of the original training images and train a feed-forward network with a number of labeled training samples; the training function is SAE(TrainingSet, G1vAll), where TrainingSet is the sample set and G1vAll is the number of classes. The specific parameters are: the SAE has 3 layers (picSize, 100), the activation function is the sigmoid function, the number of SAE iterations is 2, the training batch size is 100, the learning rate is 1, and the noise coefficient is 0.5. The output is the network weights (Net Weight) obtained from SDA training;
Four: initialize the FFNN

The trained network weights (Net Weight) serve as the initial weights of the FFNN; the layer sizes are [picsize, 100, 10], the activation function is the sigmoid function, the number of iterations is 1, and the batch size is 100;
Five: train the FFNN

Input the HMAX feature matrix of the original training images to train the FFNN network; output the trained FFNN network weights (FFNN Net Weight);
Six: generate the module training sub-image set of the test (validation) images

Obtain the original test (validation) images; larger original images are simply cut, segmenting each scene image into a test-image module training sub-image set with a resolution of 200*200;
Seven: generate the HMAX feature matrix of the test image's sub-image set; the method is the same as in step two;
Eight: FFNN classification

Input: the trained FFNN network weights are applied to classify the HMAX feature matrix generated in step seven; the output is [label, Score], where label is the class label and Score is the classification confidence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610427497.4A CN106127230B (en) | 2016-06-16 | 2016-06-16 | Image-recognizing method based on human visual perception |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610427497.4A CN106127230B (en) | 2016-06-16 | 2016-06-16 | Image-recognizing method based on human visual perception |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106127230A true CN106127230A (en) | 2016-11-16 |
CN106127230B CN106127230B (en) | 2019-10-01 |
Family
ID=57469708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610427497.4A Expired - Fee Related CN106127230B (en) | 2016-06-16 | 2016-06-16 | Image-recognizing method based on human visual perception |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106127230B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780499A (en) * | 2016-12-07 | 2017-05-31 | 电子科技大学 | A kind of multi-modal brain tumor image partition method based on stacking autocoding network |
CN106991429A (en) * | 2017-02-27 | 2017-07-28 | 陕西师范大学 | The construction method of image recognition depth belief network structure |
CN107657250A (en) * | 2017-10-30 | 2018-02-02 | 四川理工学院 | Bearing fault detection and localization method and detection location model realize system and method |
CN107729992A (en) * | 2017-10-27 | 2018-02-23 | 深圳市未来媒体技术研究院 | A kind of deep learning method based on backpropagation |
CN108133233A (en) * | 2017-12-18 | 2018-06-08 | 中山大学 | A kind of multi-tag image-recognizing method and device |
CN109271898A (en) * | 2018-08-31 | 2019-01-25 | 电子科技大学 | Solution cavity body recognizer based on optimization convolutional neural networks |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103955702A (en) * | 2014-04-18 | 2014-07-30 | 西安电子科技大学 | SAR image terrain classification method based on depth RBF network |
CN104751172A (en) * | 2015-03-12 | 2015-07-01 | 西安电子科技大学 | Method for classifying polarized SAR (Synthetic Aperture Radar) images based on de-noising automatic coding |
CN104966075A (en) * | 2015-07-16 | 2015-10-07 | 苏州大学 | Face recognition method and system based on two-dimensional discriminant features |
CN105139028A (en) * | 2015-08-13 | 2015-12-09 | 西安电子科技大学 | SAR image classification method based on hierarchical sparse filtering convolutional neural network |
WO2015191396A1 (en) * | 2014-06-09 | 2015-12-17 | Tyco Fire & Security Gmbh | Acoustic-magnetomechanical marker having an enhanced signal amplitude and the manufacture thereof |
CN105224948A (en) * | 2015-09-22 | 2016-01-06 | 清华大学 | A kind of generation method of the largest interval degree of depth generation model based on image procossing |
CN105513019A (en) * | 2015-11-27 | 2016-04-20 | 西安电子科技大学 | Method and apparatus for improving image quality |
Non-Patent Citations (5)
Title |
---|
QUANXUE GAO et al.: "Enhanced fisher discriminant criterion for image recognition", 《PATTERN RECOGNITION》 *
UMAPADA PAL et al.: "Handwriting Recognition in Indian Regional Scripts: A Survey of Offline Techniques", 《ASIAN LANGUAGE INFORMATION PROCESSING》 *
初秀民 et al.: "Research on image recognition of asphalt pavement damage based on neural networks", 《Journal of Wuhan University of Technology (Transportation Science & Engineering)》 *
朱庆: "A hierarchical citrus canker recognition method based on HMAX features", 《Computer Science》 *
梁鑫 et al.: "A SAR image target recognition algorithm based on deep learning neural networks", 《Journal of Jianghan University》 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780499A (en) * | 2016-12-07 | 2017-05-31 | 电子科技大学 | A kind of multi-modal brain tumor image partition method based on stacking autocoding network |
CN106991429A (en) * | 2017-02-27 | 2017-07-28 | 陕西师范大学 | The construction method of image recognition depth belief network structure |
CN106991429B (en) * | 2017-02-27 | 2018-10-23 | 陕西师范大学 | The construction method of image recognition depth belief network structure |
CN107729992A (en) * | 2017-10-27 | 2018-02-23 | 深圳市未来媒体技术研究院 | A kind of deep learning method based on backpropagation |
CN107729992B (en) * | 2017-10-27 | 2020-12-29 | 深圳市未来媒体技术研究院 | Deep learning method based on back propagation |
CN107657250A (en) * | 2017-10-30 | 2018-02-02 | 四川理工学院 | Bearing fault detection and localization method and detection location model realize system and method |
CN107657250B (en) * | 2017-10-30 | 2020-11-24 | 四川理工学院 | Bearing fault detection and positioning method and detection and positioning model implementation system and method |
CN108133233A (en) * | 2017-12-18 | 2018-06-08 | 中山大学 | A kind of multi-tag image-recognizing method and device |
CN109271898A (en) * | 2018-08-31 | 2019-01-25 | 电子科技大学 | Solution cavity body recognizer based on optimization convolutional neural networks |
Also Published As
Publication number | Publication date |
---|---|
CN106127230B (en) | 2019-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106127230A (en) | Image-recognizing method based on human visual perception | |
CN107609601B (en) | Ship target identification method based on multilayer convolutional neural network | |
CN106845510B (en) | Chinese traditional visual culture symbol recognition method based on depth level feature fusion | |
CN107145830B (en) | Hyperspectral image classification method based on spatial information enhancing and deepness belief network | |
Lim et al. | Sketch tokens: A learned mid-level representation for contour and object detection | |
CN103258204B (en) | A kind of automatic micro-expression recognition method based on Gabor and EOH feature | |
CN108304873A (en) | Object detection method based on high-resolution optical satellite remote-sensing image and its system | |
CN107563439A (en) | A kind of model for identifying cleaning food materials picture and identification food materials class method for distinguishing | |
CN106570521B (en) | Multilingual scene character recognition method and recognition system | |
CN107945153A (en) | A kind of road surface crack detection method based on deep learning | |
CN111339935B (en) | Optical remote sensing picture classification method based on interpretable CNN image classification model | |
CN103996056A (en) | Tattoo image classification method based on deep learning | |
CN104268593A (en) | Multiple-sparse-representation face recognition method for solving small sample size problem | |
CN110503613A (en) | Based on the empty convolutional neural networks of cascade towards removing rain based on single image method | |
CN106529570B (en) | Image classification method based on depth ridge ripple neural network | |
CN115170805A (en) | Image segmentation method combining super-pixel and multi-scale hierarchical feature recognition | |
CN105718955B (en) | A kind of vision landform classification method based on multiple encoding and Fusion Features | |
CN104636732A (en) | Sequence deeply convinced network-based pedestrian identifying method | |
Nguyen et al. | Satellite image classification using convolutional learning | |
CN105956610B (en) | A kind of remote sensing images classification of landform method based on multi-layer coding structure | |
Steinberg et al. | A Bayesian nonparametric approach to clustering data from underwater robotic surveys | |
CN107341505A (en) | A kind of scene classification method based on saliency Yu Object Bank | |
Sarigül et al. | Comparison of different deep structures for fish classification | |
CN117079097A (en) | Sea surface target identification method based on visual saliency | |
Soumya et al. | Emotion recognition from partially occluded facial images using prototypical networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20191001 |
|
CF01 | Termination of patent right due to non-payment of annual fee |