CN107480723A - Texture recognition method based on a local binary threshold learning network - Google Patents
- Publication number
- CN107480723A CN107480723A CN201710726395.7A CN201710726395A CN107480723A CN 107480723 A CN107480723 A CN 107480723A CN 201710726395 A CN201710726395 A CN 201710726395A CN 107480723 A CN107480723 A CN 107480723A
- Authority
- CN
- China
- Prior art keywords
- threshold
- local binary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The present invention relates to a texture recognition method based on a local binary threshold learning network, comprising the following steps. Step 1: prepare a texture image dataset D to be classified and divide D into a training set D_t and a test set D_v. Step 2: build the local binary threshold learning network, input the training set D_t, and train the network by error-sensitivity backpropagation with stochastic gradient descent; the local binary threshold learning network comprises 1 input layer, 1 threshold coding layer, 2 convolutional layers, 3 down-sampling layers, 1 fully connected layer, and 1 output layer. Step 3: input the test set D_v into the trained local binary threshold learning network and verify the training result. The invention proposes a local binary threshold network with learnable thresholds for texture image classification; by learning the structural information of texture features, it is suitable for recognizing texture images under small-sample conditions.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a method that uses a local binary network with learnable thresholds to classify texture images.
Background art
Texture analysis has long been an active research topic in computer vision, playing an important role in fields such as object recognition, remote sensing analysis, and content-based image retrieval. Texture features carry a series of important cues, including the structural distribution of an object's surface and its relation to the surrounding environment; they are of great significance for computer image applications, especially image classification, where effective classification can rarely be separated from a description of image texture.
Traditional image classification methods generally include three steps: feature extraction, feature representation, and classifier selection. Low-level feature extraction has always been the core of image classification technology, and many mature algorithms now exist at home and abroad. The LBP operator proposed by Ojala and D. Harwood in 1994 is widely used to describe and enhance local texture features of images, owing to notable advantages such as rotation invariance and grayscale invariance. However, such low-level features alone are insufficient to describe the full information of an image, and with image volumes and scales growing exponentially, the weaknesses of hand-crafted feature engineering have become increasingly apparent.
In recent years, deep learning has become the mainstream direction of image processing. Convolutional neural networks, currently among the most popular deep learning models, eliminate the process of manually selecting features and instead learn features autonomously through the designed network. Thanks to the strong expressive power of their learned parameters, convolutional neural networks have achieved breakthrough progress in image classification. However, while delivering practical results, networks with up to a hundred layers trained on tens of millions of samples cannot do without expensive hardware, and they also consume a large amount of computation time. More importantly, in many real applications it is simply impossible to obtain enough training samples, or sufficient hardware to keep the time cost acceptable. Under these conditions, feature learning often loses its advantage.
The threshold LBP features provided by the invention, based on threshold learning, retain the structural information of texture images well: by adaptively learning the local binary coding thresholds, the descriptive power for texture structure is improved. Attaching the structural information of hand-crafted texture features to the feature learning process of a convolutional neural network overcomes the dependence on training data and improves the classification of texture images under small-sample conditions. The method provides two ways of threshold learning, with 8 channels or with 16 channels, to generate threshold LBP features. By introducing a convolutional neural network to repeatedly learn and update the coding thresholds, adaptive LBP coding features are obtained; combining the structural information of hand-crafted features with the learning ability of neural networks realizes texture image classification under small-sample conditions. This local binary threshold network classification algorithm based on threshold learning makes deep learning more targeted, alleviates excessive dependence on training data, and yields learned features with stronger texture structure characteristics.
Summary of the invention
The present invention aims to combine the respective advantages of traditional hand-crafted features and feature learning, finding a balance between the two. On the one hand, the design philosophy of hand-crafted features guides the feature learning process of the neural network; on the other hand, the learning ability of the neural network improves the expressive power of traditional hand-crafted features, thereby improving texture image recognition accuracy under small-sample conditions. A local binary threshold network classification algorithm based on threshold learning is proposed; the image features learned by this method better preserve the texture characteristics of images, and under small-sample conditions the classification of texture images is noticeably improved.
The technical scheme of the invention is a method that uses a local binary threshold network based on threshold learning to classify texture images, comprising the following steps:
Step 1: prepare a texture image dataset D to be classified, and divide D into a training set D_t and a test set D_v;
Step 2: build the local binary threshold learning network, input the training set D_t, and train the network by error-sensitivity backpropagation with stochastic gradient descent. The local binary threshold learning network comprises 1 input layer, 1 threshold coding layer, 2 convolutional layers, 3 down-sampling layers, 1 fully connected layer, and 1 output layer, where the threshold coding layer encodes the image in either an 8-channel or a 16-channel threshold coding mode, as follows.
(1) 8-channel threshold coding mode:
a. set up 8 convolutional channels of size 3 × 3;
b. fix the centre parameter of each convolution kernel to −1, set one directional threshold parameter w_i around it, and fix the remaining 7 points to 0; the coding formula is
y = Σ_{i=0}^{7} sgn(w_i x_i − c) · 2^i    (1)
where c is the median of the 3 × 3 image block, x_i denotes the pixel values around the block, i is the pixel index, w_i is the threshold parameter, and sgn(·) is the sign function, i.e.
sgn(z) = 1 if z ≥ 0, and 0 if z < 0    (2)
c. the 8 convolution kernels take the inner product with the input according to formula (1), giving the local binary threshold coding result.
(2) 16-channel threshold coding mode:
a. set up 16 convolutional channels of size 1 × 1;
b. each convolution kernel learns one of the 16 threshold parameters w_0, w_1, …, w_7 and w_{c0}, w_{c1}, …, w_{c7}; the coding formula is
y = Σ_{i=0}^{7} sgn(w_i x_i − w_{ci} c) · 2^i    (3)
where c is the median of the 3 × 3 image block, x_i denotes the pixel values around the block, i is the pixel index, and w_i and w_{ci} are threshold parameters;
c. the 16 convolution kernels take the inner product with the input according to formula (3), giving the local binary threshold coding result.
Step 3: input the test set D_v into the trained local binary threshold learning network and verify the training result.
Further, the output layer is a Softmax classifier.
The threshold LBP coding features of the invention combine the structural information of LBP coding with the learning ability of convolutional neural networks. The convolutional neural network updates the corresponding difference thresholds at every coding pass; relative to the binary pattern of original LBP coding, which takes only the values 0 and 1, the network learns an adaptive multi-threshold LBP feature representation, and then learns abstract features on top of it. The invention thus proposes a local binary threshold network with learnable thresholds for texture image classification; by learning the structural information of texture features, it is suitable for recognizing texture images under small-sample conditions.
Brief description of the drawings
Fig. 1 is a schematic diagram of the 8-channel threshold learning principle of an embodiment of the invention.
Fig. 2 is a schematic diagram of the 16-channel threshold learning principle of an embodiment of the invention.
Fig. 3 shows the 8-channel local binary threshold network structure of an embodiment of the invention.
Fig. 4 shows the 16-channel local binary threshold network structure of an embodiment of the invention.
Embodiment
The technical solution of the invention is described in detail below with reference to the drawings and an embodiment.
The invention provides two threshold learning schemes, with 8 channels and with 16 channels; their principles are shown in Fig. 1 and Fig. 2, and the corresponding network structures in Fig. 3 and Fig. 4. The flow of the embodiment comprises the following steps.
Step 1: prepare the texture image dataset to be classified, implemented as follows.
Before execution, M texture images to be classified are collected into a dataset D, which is divided into two non-overlapping subsets: a training set D_t and a test set D_v, used for training and cross-validation respectively. All images in the dataset are n × n pixels.
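As an illustration only, the split in step 1 can be sketched as follows; the function name `split_dataset`, the 80/20 ratio, and the list-of-pairs data representation are assumptions for the sketch, not details taken from the patent:

```python
import random

def split_dataset(samples, train_fraction=0.8, seed=0):
    """Split a list of (image, label) pairs into two non-overlapping
    subsets: a training set D_t and a test set D_v."""
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    cut = int(len(samples) * train_fraction)
    d_t = [samples[i] for i in indices[:cut]]   # training set D_t
    d_v = [samples[i] for i in indices[cut:]]   # test set D_v
    return d_t, d_v
```

Because the two index ranges do not overlap, D_t and D_v are disjoint, as the embodiment requires.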
Step 2: build the local binary threshold learning network, input the training set D_t to train it, and classify with a Softmax classifier.
The network structures of Fig. 3 and Fig. 4 comprise 1 input layer, 1 threshold coding layer, 2 convolutional layers, 3 down-sampling layers, 1 fully connected layer, and 1 output layer; the local binary threshold network of the invention thus has 9 layers in total, with the per-layer parameters set as follows:
(1) Input layer: the input data is a texture image of n × n pixels.
(2) Threshold coding layer: the local binary threshold coding layer; after learning the corresponding coding coefficients (i.e. the threshold parameters w_i, or w_i and w_{ci}), it outputs the encoded image.
(3) S1 layer: a down-sampling layer with window size 2 × 2 and sliding step 1.
(4) C1 layer: a convolutional layer with kernel size 9 × 9 and convolution depth 20.
(5) S2 layer: a down-sampling layer with window size 2 × 2 and sliding step 1.
(6) C2 layer: a convolutional layer with kernel size 5 × 5 and convolution depth 50.
(7) S3 layer: a down-sampling layer with window size 2 × 2 and sliding step 1.
(8) FC layer: a fully connected layer.
(9) Output layer: composed of 10 Euclidean radial basis functions.
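The spatial sizes produced by this layer stack can be traced with a short sketch. It assumes valid (unpadded) convolutions, the 8-channel coding variant (3 × 3 kernels), and that a 2 × 2 pooling window with sliding step 1 shrinks each side by one pixel; the patent does not state these padding details, so the exact numbers are illustrative:

```python
def trace_shapes(n):
    """Trace the spatial size of an n-by-n input through the nine-layer
    stack described above. Returns (layer, side length, channels) tuples."""
    shapes = []
    size = n
    shapes.append(("input", size, 1))
    size -= 2                              # threshold coding: 3x3 kernels, no padding
    shapes.append(("threshold", size, 8))  # 8-channel variant
    size -= 1                              # S1: 2x2 pooling, sliding step 1
    shapes.append(("S1", size, 8))
    size -= 8                              # C1: 9x9 convolution, depth 20
    shapes.append(("C1", size, 20))
    size -= 1                              # S2: 2x2 pooling, sliding step 1
    shapes.append(("S2", size, 20))
    size -= 4                              # C2: 5x5 convolution, depth 50
    shapes.append(("C2", size, 50))
    size -= 1                              # S3: 2x2 pooling, sliding step 1
    shapes.append(("S3", size, 50))
    return shapes
```

For a 64 × 64 input this yields 47 × 47 × 50 activations entering the fully connected layer, under the stated assumptions.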
After the network structure is set, the network is trained by error-sensitivity backpropagation with stochastic gradient descent (implemented with existing techniques, not detailed here), learning the unknown parameters in the network, which include the convolution kernel coefficients. By iterating the training of the threshold parameters, a feature expression that fully captures the texture information is obtained.
The threshold coding layer realizes the local binary threshold coding of the texture image to be classified, thereby extracting the image's texture information. The specific implementation is as follows.
The local binary threshold coding in the embodiment has two implementations, with 8 channels and with 16 channels; taking a 3 × 3 local image block as an example, the corresponding coding principles are shown in Fig. 1 and Fig. 2 respectively.
(1) For the 8-channel threshold mode, the processing formula is
y = Σ_{i=0}^{7} sgn(w_i x_i − c) · 2^i    (1)
where c is the median of the 3 × 3 image block, x_i denotes the pixel values around the block, i is the pixel index taking values 0 to 7, w_i is the threshold parameter, and sgn(·) is the sign function given by
sgn(z) = 1 if z ≥ 0, and 0 if z < 0    (2)
The specific implementation, shown in Fig. 1, comprises the following 3 steps:
1) set up 8 convolutional channels of size 3 × 3;
2) fix the centre parameter of each convolution kernel to −1, set one directional threshold parameter w_i around it, and fix the remaining 7 points to 0;
3) the 8 convolution kernels take the inner product with the input according to formula (1), giving the local binary threshold coding result.
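A minimal numpy sketch of formula (1) applied to one 3 × 3 block. The clockwise neighbour ordering and the use of the block median for c are assumptions made for the sketch; the patent realizes this step as a bank of 3 × 3 convolutions over the whole image rather than an explicit per-block loop:

```python
import numpy as np

def encode_block_8ch(block, w):
    """8-channel local binary threshold coding of one 3x3 block:
    y = sum_i sgn(w_i * x_i - c) * 2**i, with sgn(z) = 1 if z >= 0 else 0.
    `block` is a 3x3 array; `w` holds the eight threshold parameters w_0..w_7;
    c is taken as the median of the block, following the description."""
    c = np.median(block)
    # neighbours x_0..x_7 visited clockwise around the centre (assumed order)
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    y = 0
    for i, (r, col) in enumerate(idx):
        if w[i] * block[r, col] - c >= 0:   # sgn(.) contributes bit 2**i
            y += 2 ** i
    return y
```

With all w_i fixed to 1 this reduces to ordinary median-thresholded LBP; learning the w_i is what gives the adaptive coding.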
(2) For the 16-channel mode, the coding formula is
y = Σ_{i=0}^{7} sgn(w_i x_i − w_{ci} c) · 2^i    (3)
where c is the median of the 3 × 3 image block, x_i denotes the pixel values around the block, i is the pixel index taking values 0 to 7, and w_i and w_{ci} are threshold parameters.
The specific implementation, shown in Fig. 2, comprises the following steps:
1) set up 16 convolutional channels of size 1 × 1;
2) each convolution kernel learns one of the 16 threshold parameters w_0, w_1, …, w_7 and w_{c0}, w_{c1}, …, w_{c7};
3) the input data passes through the 16 channels to give the local binary threshold coding result.
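The 16-channel variant of formula (3) differs from the 8-channel mode only in comparing against w_{ci} · c instead of c, so each neighbour also gets its own learnable centre weight. A per-block sketch under the same hedged assumptions (clockwise neighbour order, block median for c):

```python
import numpy as np

def encode_block_16ch(block, w, wc):
    """16-channel local binary threshold coding of one 3x3 block:
    y = sum_i sgn(w_i * x_i - w_ci * c) * 2**i.
    `w` and `wc` each hold eight parameters (16 thresholds in total)."""
    c = np.median(block)
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    y = 0
    for i, (r, col) in enumerate(idx):
        if w[i] * block[r, col] - wc[i] * c >= 0:   # per-neighbour centre weight
            y += 2 ** i
    return y
```

Setting every w_{ci} to 1 recovers the 8-channel formula (1), so the 16-channel mode strictly generalizes it.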
Step 3: input the test set D_v into the trained local binary threshold learning network and verify the training result. Good accuracy is generally obtained after training; in extreme cases the network needs to be retrained.
The specific embodiment described herein merely illustrates the spirit of the invention. Persons skilled in the art may make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the invention or exceeding the scope defined by the appended claims.
Claims (2)
1. A texture recognition method based on a local binary threshold learning network, characterized by comprising the following steps:
Step 1: prepare a texture image dataset D to be classified, and divide D into a training set D_t and a test set D_v;
Step 2: build the local binary threshold learning network, input the training set D_t, and train the network by error-sensitivity backpropagation with stochastic gradient descent; the local binary threshold learning network comprises 1 input layer, 1 threshold coding layer, 2 convolutional layers, 3 down-sampling layers, 1 fully connected layer, and 1 output layer, where the threshold coding layer encodes the image in either an 8-channel or a 16-channel threshold coding mode, as follows:
(1) 8-channel threshold coding mode:
a. set up 8 convolutional channels of size 3 × 3;
b. fix the centre parameter of each convolution kernel to −1, set one directional threshold parameter w_i around it, and fix the remaining 7 points to 0; the coding formula is
y = Σ_{i=0}^{7} sgn(w_i x_i − c) · 2^i    (1)
where c is the median of the 3 × 3 image block, x_i denotes the pixel values around the block, i is the pixel index, w_i is the threshold parameter, and sgn(·) is the sign function, i.e.
sgn(z) = 1 if z ≥ 0, and 0 if z < 0    (2)
c. the 8 convolution kernels take the inner product with the input according to formula (1), giving the local binary threshold coding result;
(2) 16-channel threshold coding mode:
a. set up 16 convolutional channels of size 1 × 1;
b. each convolution kernel learns one of the 16 threshold parameters w_0, w_1, …, w_7 and w_{c0}, w_{c1}, …, w_{c7}; the coding formula is
y = Σ_{i=0}^{7} sgn(w_i x_i − w_{ci} c) · 2^i = sgn(w_0 x_0 − w_{c0} c)·2^0 + sgn(w_1 x_1 − w_{c1} c)·2^1 + … + sgn(w_7 x_7 − w_{c7} c)·2^7    (3)
where c is the median of the 3 × 3 image block, x_i denotes the pixel values around the block, i is the pixel index, and w_i and w_{ci} are threshold parameters;
c. the 16 convolution kernels take the inner product with the input according to formula (3), giving the local binary threshold coding result;
Step 3: input the test set D_v into the trained local binary threshold learning network and verify the training result.
2. The texture recognition method based on a local binary threshold learning network according to claim 1, characterized in that the output layer is a Softmax classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710726395.7A CN107480723B (en) | 2017-08-22 | 2017-08-22 | Texture recognition method based on a local binary threshold learning network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107480723A true CN107480723A (en) | 2017-12-15 |
CN107480723B CN107480723B (en) | 2019-11-08 |
Family
ID=60601297
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710726395.7A Active CN107480723B (en) | Texture recognition method based on a local binary threshold learning network | 2017-08-22 | 2017-08-22 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107480723B (en) |
- 2017-08-22 CN CN201710726395.7A patent/CN107480723B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103778434A (en) * | 2014-01-16 | 2014-05-07 | 重庆邮电大学 | Face recognition method based on multi-resolution multi-threshold local binary pattern |
CN105550658A (en) * | 2015-12-24 | 2016-05-04 | 蔡叶荷 | Face comparison method based on high-dimensional LBP (Local Binary Patterns) and convolutional neural network feature fusion |
CN106682616A (en) * | 2016-12-28 | 2017-05-17 | 南京邮电大学 | Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning |
Non-Patent Citations (2)
Title |
---|
CHUN-YUAN WANG, YE ZHANG: "Affine invariant feature extraction algorithm based on multiscale autoconvolution combining with texture structure analysis", 《OPTIK》 * |
FELIX JUEFEI-XU ET AL.: "Local Binary Convolutional Neural Networks", 《2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108197087A (en) * | 2018-01-18 | 2018-06-22 | 北京奇安信科技有限公司 | Character code recognition methods and device |
US11189015B2 (en) | 2018-05-30 | 2021-11-30 | Samsung Electronics Co., Ltd. | Method and apparatus for acquiring feature data from low-bit image |
US11636575B2 (en) | 2018-05-30 | 2023-04-25 | Samsung Electronics Co., Ltd. | Method and apparatus for acquiring feature data from low-bit image |
US11893497B2 (en) | 2018-05-30 | 2024-02-06 | Samsung Electronics Co., Ltd. | Method and apparatus for acquiring feature data from low-bit image |
CN110222647A (en) * | 2019-06-10 | 2019-09-10 | 大连民族大学 | A kind of human face in-vivo detection method based on convolutional neural networks |
CN110222647B (en) * | 2019-06-10 | 2022-05-10 | 大连民族大学 | Face in-vivo detection method based on convolutional neural network |
CN110781936A (en) * | 2019-10-16 | 2020-02-11 | 武汉大学 | Construction method of threshold learnable local binary network based on texture description and deep learning and remote sensing image classification method |
CN110781936B (en) * | 2019-10-16 | 2022-11-18 | 武汉大学 | Construction method of threshold learnable local binary network based on texture description and deep learning and remote sensing image classification method |
CN112353368A (en) * | 2020-05-08 | 2021-02-12 | 北京理工大学 | Multi-input signal epileptic seizure detection system based on feedback adjustment |
Also Published As
Publication number | Publication date |
---|---|
CN107480723B (en) | 2019-11-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||