CN107451565A - A kind of semi-supervised small sample deep learning image model classifying identification method - Google Patents
- Publication number: CN107451565A (application CN201710647312.5A; granted as CN107451565B)
- Authority
- CN
- China
- Prior art keywords
- feature
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The present invention relates to a semi-supervised small-sample deep learning method for image pattern classification and recognition, belonging to the field of image recognition. The method comprises the steps: S1: pre-process the image samples; S2: feed the pre-processed data into the trained network, which extracts features through 3D convolutional layers to obtain feature maps; S3: follow each convolutional layer with a pooling layer that shrinks the feature maps to reduce the number of parameters in the network; S4: connect the features extracted by the stacked convolutional and pooling layers to a fully connected layer that extracts and rearranges the features needed for classification; this layer introduces a locality-preserving regularization operation; S5: input the samples under test and obtain the classification accuracy. The present invention exploits the positional correlation among a large collection of unlabeled samples, improving the applicability and accuracy of the algorithm on small sample sets.
Description
Technical field
The invention belongs to the field of image recognition and relates to a semi-supervised small-sample deep learning method for image pattern classification and recognition.
Background technology
When labeled samples are plentiful, deep learning methods can adaptively extract image features by building hierarchical models, significantly improving the accuracy of pattern classification and recognition. Deep convolutional neural networks (CNNs) reach 95%–99% accuracy on some remote sensing image datasets. The article published by Yushi Chen on TGRS in 2016 proposed a feature extraction model based on 3D convolutional neural networks (3D CNN) and achieved classification accuracy above 98%. However, because the model has a large number of parameters, training it demands a large quantity of labeled samples.
Supervised deep learning models such as CNNs (which require labeled training samples) have many parameters and need a large number of labeled samples to reach high classification accuracy. In some fields, however, sample collection is difficult and labeling is costly, so pattern classification and recognition methods based on deep learning face a shortage of labeled samples. Current deep-learning-based hyperspectral image processing typically needs 10³–10⁴ labels per class to reach optimal performance, far exceeding the labeled-sample scale available in practical applications such as mineral identification.
The content of the invention
In view of this, the object of the present invention is to provide a semi-supervised small-sample deep learning image pattern classification and recognition method. On the basis of 3D CNN, it combines unsupervised pattern recognition methods (which need no labeled training samples) and uses a large amount of easily obtained unlabeled sample data, reducing the dependence of deep learning methods on labeled samples and improving the accuracy of deep-learning-based pattern classification and recognition.
To achieve the above object, the present invention provides the following technical scheme:
A semi-supervised small-sample deep learning image pattern classification and recognition method comprises the following steps:
S1: pre-process the image samples;
S2: feed the pre-processed data into the trained network, which extracts features through 3D convolutional layers to obtain feature maps;
S3: follow each convolutional layer with a pooling layer that shrinks the feature maps to reduce the number of parameters in the network;
S4: connect the features extracted by the stacked convolutional and pooling layers to a fully connected layer that extracts and rearranges the features needed for classification; this layer introduces a locality-preserving regularization operation that reduces the feature difference between adjacent samples, thereby reducing the drop in classification accuracy caused by the lack of labeled samples;
S5: input the samples under test and obtain the classification accuracy.
Further, S1 is specifically: select A target pixel points and their corresponding labels as training samples; around each target pixel point, A pixels, including the target pixel point itself, form a neighborhood matrix; the labels and neighborhood matrices of all target pixel points serve as the input data of the 3D convolutional neural network (Convolutional Neural Network, CNN).
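As an illustration of the S1 pre-processing, the sketch below (not the patent's code; the cubic patch shape, function and variable names are my own assumptions) cuts a neighborhood cube around each chosen target pixel of a small synthetic image:

```python
import numpy as np

def extract_neighborhoods(image, coords, k=3):
    """Cut a k x k x k cube around each chosen target pixel.

    `image` is a (height, width, bands) array; `coords` lists (row, col)
    target-pixel positions. The band axis is cropped to k so the patch is
    cubic. The cube shape and all names are illustrative assumptions,
    not taken from the patent text.
    """
    r = k // 2
    # Pad spatially with edge values so border pixels get full neighborhoods.
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="edge")
    patches = [padded[row:row + k, col:col + k, :k] for (row, col) in coords]
    return np.stack(patches)

img = np.arange(5 * 5 * 4, dtype=float).reshape(5, 5, 4)
cubes = extract_neighborhoods(img, [(0, 0), (2, 2)], k=3)
print(cubes.shape)  # (2, 3, 3, 3): one neighborhood matrix per target pixel
```

The stacked cubes, paired with the target-pixel labels, would play the role of the 3D CNN input data described above.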
Further, the specific algorithm of S2 is:

$$v_{ij}^{xyz} = g\left(\sum_m \sum_{p=0}^{P_i-1} \sum_{q=0}^{Q_i-1} \sum_{r=0}^{R_i-1} w_{ijm}^{pqr}\, v_{(i-1)m}^{(x+p)(y+q)(z+r)} + b_{ij}\right)$$

where $v_{ij}^{xyz}$ denotes the output value of the neuron at position $(x, y, z)$ of the $j$-th feature map in layer $i$, $m$ indexes the feature maps of layer $i-1$ connected to the $j$-th feature map, $P_i$, $Q_i$ and $R_i$ denote the height, width and depth of the layer-$i$ convolution kernel, $w_{ijm}^{pqr}$ is the kernel weight at position $(p, q, r)$ for the $m$-th feature map, $b_{ij}$ is the bias of the $j$-th feature map, and $g(\cdot)$ is the activation function.
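The S2 convolution formula can be evaluated directly with nested loops. The sketch below is a minimal illustration under assumed shapes, not an efficient or authoritative implementation; taking g = tanh is my own choice, since the patent leaves the activation function unspecified:

```python
import numpy as np

def conv3d_single(prev_maps, kernels, bias, g=np.tanh):
    """Evaluate the layer formula for one output feature map j.

    prev_maps: (M, X, Y, Z) feature maps of layer i-1, indexed by m.
    kernels:   (M, P, Q, R) weights w_{ijm}^{pqr}.
    The choice g = tanh is an assumption; the patent leaves g(.) open.
    """
    M, X, Y, Z = prev_maps.shape
    _, P, Q, R = kernels.shape
    out = np.empty((X - P + 1, Y - Q + 1, Z - R + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                # Sum over connected maps m and kernel offsets (p, q, r).
                acc = 0.0
                for m in range(M):
                    acc += np.sum(kernels[m] * prev_maps[m, x:x + P, y:y + Q, z:z + R])
                out[x, y, z] = g(acc + bias)
    return out

v = np.ones((2, 4, 4, 4))        # two feature maps from the previous layer
w = np.full((2, 2, 2, 2), 0.1)   # a 2x2x2 kernel per input map
out = conv3d_single(v, w, bias=0.0)
print(out.shape)  # (3, 3, 3)
```

A real layer would hold one such kernel set per output feature map j and use an optimized convolution routine rather than explicit loops.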
Further, the specific algorithm of S3 is:

$$\alpha_{i,j,m} = \max_{n \times n \times n}\, \alpha_{i,j,m} * u(n \times n \times n)$$

where $u(n \times n \times n)$ denotes the three-dimensional window applied to the convolutional-layer output features and $\alpha_{i,j,m}$ denotes the maximum feature value within the neighborhood.
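A minimal sketch of the S3 max-pooling operation, assuming a single feature map whose dimensions are divisible by the window size n (an assumption of the sketch, not a condition stated in the patent):

```python
import numpy as np

def max_pool3d(feature, n=2):
    """Non-overlapping n x n x n max pooling over one 3D feature map.

    Assumes each dimension is divisible by n (a simplification, not a
    condition stated in the patent).
    """
    X, Y, Z = feature.shape
    # Split each axis into (blocks, within-block) pairs, then take the max
    # over the three within-block axes.
    blocks = feature.reshape(X // n, n, Y // n, n, Z // n, n)
    return blocks.max(axis=(1, 3, 5))

f = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
pooled = max_pool3d(f, n=2)
print(pooled.shape)     # (2, 2, 2)
print(pooled[1, 1, 1])  # 63.0: the max of the last 2x2x2 window
```

Each output cell keeps only the strongest response of its window, which is what shrinks the feature maps and cuts the parameter count downstream.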
Further, the specific algorithm of S4 is:

$$s_{ij} = \exp\left(-\left\|g^{(i)} - g^{(j)}\right\|_2^2 / \sigma\right)$$
$$R(W_f) = \frac{1}{2}\sum_{ij}\left\|W_f v_f^{(i)} - W_f v_f^{(j)}\right\|_2^2 s_{ij}$$
$$h_f^{(i)} = \mathrm{sig}\left(W_f v_f^{(i)} + b_f\right)$$

where $s_{ij}$ is a similarity weight based on the distance between the $i$-th and $j$-th training samples, $g^{(i)}$ and $g^{(j)}$ denote the coordinates of the $i$-th and $j$-th training samples, and $h_f^{(i)}$ and $h_f^{(j)}$ denote the features extracted from the $i$-th and $j$-th training samples; $R(W_f)$ denotes the regularization term. When training samples $i$ and $j$ are adjacent, $s_{ij}$ is larger, so the gap between $W_f v_f^{(i)}$ and $W_f v_f^{(j)}$ becomes smaller, making the features $h_f^{(i)}$ and $h_f^{(j)}$ adjacent as well.
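To make the locality-preserving regularization of S4 concrete, the sketch below (illustrative names, shapes and random data, not the patent's code) evaluates the similarity weights and the regularizer R(W_f) by direct pairwise summation:

```python
import numpy as np

def similarity(g_i, g_j, sigma=1.0):
    """s_ij = exp(-||g_i - g_j||_2^2 / sigma): close samples get weights near 1."""
    return np.exp(-np.sum((g_i - g_j) ** 2) / sigma)

def locality_regularizer(W, V, coords, sigma=1.0):
    """R(W) = 1/2 * sum_ij s_ij * ||W v_i - W v_j||^2 over all sample pairs."""
    N = V.shape[1]
    total = 0.0
    for i in range(N):
        for j in range(N):
            s = similarity(coords[i], coords[j], sigma)
            diff = W @ V[:, i] - W @ V[:, j]
            total += 0.5 * s * np.dot(diff, diff)
    return total

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))        # fully connected weights W_f (illustrative size)
V = rng.standard_normal((5, 4))        # columns: flattened inputs v_f^{(i)}
coords = rng.standard_normal((4, 2))   # sample coordinates g^{(i)}
reg = locality_regularizer(W, V, coords)
print(reg >= 0.0)  # True: a nonnegative weighted sum of squared distances
```

Minimizing this term pulls the projections of spatially adjacent samples together, which is exactly the effect described above, and it needs only sample positions, not labels.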
Further, $R(W_f)$ satisfies:

$$\begin{aligned} R(W_f) &= \frac{1}{2}\sum_{ij} \mathrm{tr}\left[W_f^T \left(v_f^{(i)} - v_f^{(j)}\right)\left(v_f^{(i)} - v_f^{(j)}\right)^T W_f\right] s_{ij} \\ &= \mathrm{tr}\left[W_f^T \left(\sum_i v_f^{(i)} d_{ii}\, v_f^{(i)T}\right) W_f\right] - \mathrm{tr}\left[W_f^T \left(\sum_{ij} v_f^{(i)} s_{ij}\, v_f^{(j)T}\right) W_f\right] \\ &= \mathrm{tr}\left(W_f^T V_f D V_f^T W_f\right) - \mathrm{tr}\left(W_f^T V_f S V_f^T W_f\right) \\ &= \mathrm{tr}\left(W_f^T V_f P V_f^T W_f\right) \end{aligned}$$

where $V_f = [v_f^{(1)}, \ldots, v_f^{(L)}, \ldots, v_f^{(N)}]$ denotes the matrix form of the fully connected layer's input vectors, $D$ is the diagonal matrix with $d_{ii} = \sum_j s_{ij}$, and $P = D - S$;

Differentiating the regularization term gives $\partial R(W_f)/\partial W_f = 2 V_f P V_f^T W_f$.
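The equivalence between the pairwise sum and the trace form can be checked numerically. The sketch below (illustrative, with the projection written as W^T v to match the trace expression exactly) assumes a symmetric similarity matrix S, which holds for the s_ij defined above:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d_in, d_out = 6, 4, 3
V = rng.standard_normal((d_in, N))       # columns are the inputs v_f^{(i)}
W = rng.standard_normal((d_in, d_out))   # projection applied as W^T v
S = rng.random((N, N))
S = (S + S.T) / 2                        # s_ij is symmetric by construction

# Pairwise form: 1/2 * sum_ij s_ij * ||W^T v_i - W^T v_j||^2
pairwise = 0.0
for i in range(N):
    for j in range(N):
        d = W.T @ (V[:, i] - V[:, j])
        pairwise += 0.5 * S[i, j] * np.dot(d, d)

# Matrix form: tr(W^T V P V^T W), with P = D - S and d_ii = sum_j s_ij
D = np.diag(S.sum(axis=1))
P = D - S
matrix_form = np.trace(W.T @ V @ P @ V.T @ W)
print(np.isclose(pairwise, matrix_form))  # True
```

This is the standard graph-Laplacian identity: P = D − S plays the role of a Laplacian built from the sample similarities.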
The beneficial effects of the present invention are: on the basis of the existing 3D CNN algorithm, the present invention proposes a deep learning model for small labeled-sample sets that exploits the positional correlation among a large collection of unlabeled samples, improving the applicability and accuracy of the algorithm on small sample sets.
Brief description of the drawings
To make the purpose, technical scheme and beneficial effects of the present invention clearer, the present invention provides the following drawings for explanation:
Fig. 1 is a block diagram of the present invention;
Fig. 2 is a schematic diagram of an implementation of the present invention.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In a great many image pattern recognition applications, correlation among neighboring samples is widespread. For example, in scene recognition, remote sensing object recognition and medical image recognition, the labels of adjacent pixel samples are often correlated. Accordingly, we propose a pattern classification and recognition method based on a Locality Preserving Convolutional Neural Network, which exploits the positional correlation among a large collection of unlabeled samples and improves the applicability and accuracy of the algorithm on small sample sets. As shown in Fig. 1, the specific steps are as follows:
1. Data preprocessing
A certain number of target pixel points and their corresponding labels are chosen as training samples. Around each target pixel point, the same number of pixels form a neighborhood matrix (including the target pixel point). The labels and neighborhood matrices of all target pixel points serve as the input data of the 3D CNN.
2. Feature extraction
(1) Convolutional layers (multiple)
The input data are fed into the trained network, which extracts features through 3D convolutional layers. Each layer contains a varying number of 3D convolution kernels, which extract features from the input data to generate different feature maps.
The specific algorithm is as follows:

$$v_{ij}^{xyz} = g\left(\sum_m \sum_{p=0}^{P_i-1} \sum_{q=0}^{Q_i-1} \sum_{r=0}^{R_i-1} w_{ijm}^{pqr}\, v_{(i-1)m}^{(x+p)(y+q)(z+r)} + b_{ij}\right)$$

where $v_{ij}^{xyz}$ denotes the output value of the neuron at position $(x, y, z)$ of the $j$-th feature map in layer $i$, $m$ indexes the feature maps of layer $i-1$ connected to the $j$-th feature map, $P_i$ and $Q_i$ denote the height and width of the layer-$i$ convolution kernel and $R_i$ its depth, $w_{ijm}^{pqr}$ is the kernel weight at position $(p, q, r)$ for the $m$-th feature map, $b_{ij}$ is the bias of the $j$-th feature map, and $g(\cdot)$ is the activation function.
(2) Pooling layers (multiple)
Each convolutional layer is followed by a pooling layer, whose role is to shrink the feature maps so as to reduce the number of parameters in the network. The specific algorithm of the most common pooling operation (max pooling) is as follows:

$$\alpha_{i,j,m} = \max_{n \times n \times n}\, \alpha_{i,j,m} * u(n \times n \times n)$$

where $u(n \times n \times n)$ denotes the three-dimensional window applied to the convolutional-layer output features and $\alpha_{i,j,m}$ denotes the maximum feature value within the neighborhood.
(3) Fully connected layer with locality-preserving regularization
The features extracted by the stacked convolutional and pooling layers are connected to a fully connected layer that extracts and rearranges the features needed for classification. This layer introduces a locality-preserving regularization operation that reduces the feature difference between adjacent samples, thereby reducing the drop in classification accuracy caused by the lack of labeled samples.
The specific algorithm is as follows:

$$s_{ij} = \exp\left(-\left\|g^{(i)} - g^{(j)}\right\|_2^2 / \sigma\right)$$
$$R(W_f) = \frac{1}{2}\sum_{ij}\left\|W_f v_f^{(i)} - W_f v_f^{(j)}\right\|_2^2 s_{ij}$$
$$h_f^{(i)} = \mathrm{sig}\left(W_f v_f^{(i)} + b_f\right)$$

where $s_{ij}$ is a similarity weight based on the distance between the $i$-th and $j$-th training samples, $g^{(i)}$ and $g^{(j)}$ denote the coordinates of the $i$-th and $j$-th training samples, and $h_f^{(i)}$ and $h_f^{(j)}$ denote the extracted features of the $i$-th and $j$-th training samples.
When training samples $i$ and $j$ are adjacent, $s_{ij}$ is larger, so the gap between $W_f v_f^{(i)}$ and $W_f v_f^{(j)}$ becomes smaller, making the features $h_f^{(i)}$ and $h_f^{(j)}$ adjacent as well.
The regularization term in the above equation can be expressed in matrix form as follows:

$$\begin{aligned} R(W_f) &= \frac{1}{2}\sum_{ij} \mathrm{tr}\left[W_f^T \left(v_f^{(i)} - v_f^{(j)}\right)\left(v_f^{(i)} - v_f^{(j)}\right)^T W_f\right] s_{ij} \\ &= \mathrm{tr}\left(W_f^T V_f D V_f^T W_f\right) - \mathrm{tr}\left(W_f^T V_f S V_f^T W_f\right) \\ &= \mathrm{tr}\left(W_f^T V_f P V_f^T W_f\right) \end{aligned}$$

where $V_f = [v_f^{(1)}, \ldots, v_f^{(L)}, \ldots, v_f^{(N)}]$ denotes the matrix form of the fully connected layer's input vectors, $D$ is the diagonal matrix with $d_{ii} = \sum_j s_{ij}$, and $P = D - S$.
Differentiating the regularization term gives $\partial R(W_f)/\partial W_f = 2 V_f P V_f^T W_f$; therefore, the regularization term can be reduced using standard gradient descent methods.
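The claim that standard gradient descent can reduce the regularization term rests on its gradient, which from the trace form is 2 V_f P V_f^T W_f. The sketch below (illustrative shapes and random data, not the patent's code) checks this analytic gradient against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d_in, d_out = 5, 3, 2
V = rng.standard_normal((d_in, N))
W = rng.standard_normal((d_in, d_out))
S = rng.random((N, N))
S = (S + S.T) / 2
P = np.diag(S.sum(axis=1)) - S           # graph Laplacian P = D - S

def R(Wx):
    """Regularizer in matrix form: tr(W^T V P V^T W)."""
    return np.trace(Wx.T @ V @ P @ V.T @ Wx)

# Analytic gradient: with A = V P V^T symmetric, d tr(W^T A W)/dW = 2 A W.
analytic = 2 * V @ P @ V.T @ W

# Central finite differences, one weight at a time.
numeric = np.zeros_like(W)
eps = 1e-6
for a in range(d_in):
    for b in range(d_out):
        E = np.zeros_like(W)
        E[a, b] = eps
        numeric[a, b] = (R(W + E) - R(W - E)) / (2 * eps)
print(np.allclose(analytic, numeric, atol=1e-4))  # True
```

Because R is quadratic in W, the central-difference estimate agrees with the analytic gradient up to floating-point rounding.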
3. Classification layer
The samples under test are input; the output width equals the number of sample label classes, and the classification accuracy is obtained.
As shown in Fig. 2, suppose that A1 and A2, and B1 and B2, are associated with different labels. Considering the correlation of neighboring samples, the output features extracted for classification also retain this correlation.
Finally, it should be noted that the above preferred embodiments merely illustrate the technical scheme of the present invention and do not restrict it. Although the present invention has been described in detail through the above preferred embodiments, those skilled in the art should understand that various changes in form and detail may be made without departing from the scope defined by the claims of the present invention.
Claims (6)
- 1. A semi-supervised small-sample deep learning image pattern classification and recognition method, characterized in that the method comprises the following steps: S1: pre-process the image samples; S2: feed the pre-processed data into the trained network, which extracts features through 3D convolutional layers to obtain feature maps; S3: follow each convolutional layer with a pooling layer that shrinks the feature maps to reduce the number of parameters in the network; S4: connect the features extracted by the stacked convolutional and pooling layers to a fully connected layer that extracts and rearranges the features needed for classification, this layer introducing a locality-preserving regularization operation that reduces the feature difference between adjacent samples and thereby reduces the drop in classification accuracy caused by the lack of labeled samples; S5: input the samples under test and obtain the classification accuracy.
- 2. The semi-supervised small-sample deep learning image pattern classification and recognition method as claimed in claim 1, characterized in that S1 is specifically: select A target pixel points and their corresponding labels as training samples; around each target pixel point, A pixels, including the target pixel point, form a neighborhood matrix; the labels and neighborhood matrices of all target pixel points serve as the input data of the 3D convolutional neural network (Convolutional Neural Network, CNN).
- 3. The semi-supervised small-sample deep learning image pattern classification and recognition method as claimed in claim 1, characterized in that the specific algorithm of S2 is:

  $$v_{ij}^{xyz} = g\left(\sum_m \sum_{p=0}^{P_i-1} \sum_{q=0}^{Q_i-1} \sum_{r=0}^{R_i-1} w_{ijm}^{pqr}\, v_{(i-1)m}^{(x+p)(y+q)(z+r)} + b_{ij}\right)$$

  where $v_{ij}^{xyz}$ denotes the output value of the neuron at position $(x, y, z)$ of the $j$-th feature map in layer $i$, $m$ indexes the feature maps of layer $i-1$ connected to the $j$-th feature map, $P_i$, $Q_i$ and $R_i$ denote the height, width and depth of the layer-$i$ convolution kernel, $w_{ijm}^{pqr}$ is the kernel weight at position $(p, q, r)$ for the $m$-th feature map, $b_{ij}$ is the bias of the $j$-th feature map, and $g(\cdot)$ is the activation function.
- 4. The semi-supervised small-sample deep learning image pattern classification and recognition method as claimed in claim 1, characterized in that the specific algorithm of S3 is:

  $$\alpha_{i,j,m} = \max_{n \times n \times n}\, \alpha_{i,j,m} * u(n \times n \times n)$$

  where $u(n \times n \times n)$ denotes the three-dimensional window applied to the convolutional-layer output features and $\alpha_{i,j,m}$ denotes the maximum feature value within the neighborhood.
- 5. The semi-supervised small-sample deep learning image pattern classification and recognition method as claimed in claim 1, characterized in that the specific algorithm of S4 is:

  $$s_{ij} = \exp\left(-\left\|g^{(i)} - g^{(j)}\right\|_2^2 / \sigma\right)$$
  $$R(W_f) = \frac{1}{2}\sum_{ij}\left\|W_f v_f^{(i)} - W_f v_f^{(j)}\right\|_2^2 s_{ij}$$
  $$h_f^{(i)} = \mathrm{sig}\left(W_f v_f^{(i)} + b_f\right)$$

  where $s_{ij}$ is a similarity weight based on the distance between the $i$-th and $j$-th training samples, $g^{(i)}$ and $g^{(j)}$ denote the coordinates of the $i$-th and $j$-th training samples, and $h_f^{(i)}$ and $h_f^{(j)}$ denote the extracted features of the $i$-th and $j$-th training samples; $R(W_f)$ denotes the regularization term. When training samples $i$ and $j$ are adjacent, $s_{ij}$ is larger, so the gap between $W_f v_f^{(i)}$ and $W_f v_f^{(j)}$ becomes smaller, making the features $h_f^{(i)}$ and $h_f^{(j)}$ adjacent as well.
- 6. The semi-supervised small-sample deep learning image pattern classification and recognition method as claimed in claim 5, characterized in that $R(W_f)$ satisfies:

  $$\begin{aligned} R(W_f) &= \frac{1}{2}\sum_{ij} \mathrm{tr}\left[W_f^T \left(v_f^{(i)} - v_f^{(j)}\right)\left(v_f^{(i)} - v_f^{(j)}\right)^T W_f\right] s_{ij} \\ &= \mathrm{tr}\left[W_f^T \left(\sum_i v_f^{(i)} d_{ii}\, v_f^{(i)T}\right) W_f\right] - \mathrm{tr}\left[W_f^T \left(\sum_{ij} v_f^{(i)} s_{ij}\, v_f^{(j)T}\right) W_f\right] \\ &= \mathrm{tr}\left(W_f^T V_f D V_f^T W_f\right) - \mathrm{tr}\left(W_f^T V_f S V_f^T W_f\right) \\ &= \mathrm{tr}\left(W_f^T V_f P V_f^T W_f\right) \end{aligned}$$

  where $V_f = [v_f^{(1)}, \ldots, v_f^{(L)}, \ldots, v_f^{(N)}]$ denotes the matrix form of the fully connected layer's input vectors, $D$ is the diagonal matrix with $d_{ii} = \sum_j s_{ij}$, and $P = D - S$; differentiating the regularization term gives $\partial R(W_f)/\partial W_f = 2 V_f P V_f^T W_f$.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710647312.5A CN107451565B (en) | 2017-08-01 | 2017-08-01 | Semi-supervised small sample deep learning image mode classification and identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107451565A true CN107451565A (en) | 2017-12-08 |
CN107451565B CN107451565B (en) | 2020-12-11 |
Family
ID=60490637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710647312.5A Active CN107451565B (en) | 2017-08-01 | 2017-08-01 | Semi-supervised small sample deep learning image mode classification and identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107451565B (en) |
- 2017-08-01: application CN201710647312.5A filed in CN; patent CN107451565B active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102346847A (en) * | 2011-09-26 | 2012-02-08 | 青岛海信网络科技股份有限公司 | License plate character recognizing method of support vector machine |
CN104809426A (en) * | 2014-01-27 | 2015-07-29 | 日本电气株式会社 | Convolutional neural network training method and target identification method and device |
CN106204587A (en) * | 2016-05-27 | 2016-12-07 | 孔德兴 | Multiple organ dividing method based on degree of depth convolutional neural networks and region-competitive model |
CN106778494A (en) * | 2016-11-21 | 2017-05-31 | 河海大学 | A kind of target in hyperspectral remotely sensed image feature extracting method based on SIFT LPP |
Non-Patent Citations (6)
Title |
---|
HEECHUL JUNG et al.: "Joint Fine-Tuning in Deep Neural Networks for Facial Expression Recognition", 2015 IEEE International Conference on Computer Vision (ICCV) |
LI, SHAN, W. DENG, and J. P. DU: "Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |
WANJIANG XU, CAN LUO, AIMING JI: "Coupled locality preserving projections for cross-view gait recognition", Neurocomputing |
XIAOFEI HE, PARTHA NIYOGI: "Locality Preserving Projections", Advances in Neural Information Processing Systems 16 |
ZHAI Dongling, WANG Zhengqun, XU Chunlin: "Regularized neighborhood preserving embedding algorithm based on QR decomposition", Journal of Computer Applications |
YUAN Zhi et al.: "A fall recognition method based on two-stream convolutional neural networks", Journal of Henan Normal University (Natural Science Edition) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108009284A (en) * | 2017-12-22 | 2018-05-08 | 重庆邮电大学 | Using the Law Text sorting technique of semi-supervised convolutional neural networks |
CN108171200A (en) * | 2018-01-12 | 2018-06-15 | 西安电子科技大学 | SAR image sorting technique based on SAR image statistical distribution and DBN |
CN108171200B (en) * | 2018-01-12 | 2022-05-17 | 西安电子科技大学 | SAR image classification method based on SAR image statistical distribution and DBN |
CN108537119B (en) * | 2018-03-06 | 2020-07-10 | 北京大学 | Small sample video identification method |
CN108537119A (en) * | 2018-03-06 | 2018-09-14 | 北京大学 | A kind of small sample video frequency identifying method |
CN109657697A (en) * | 2018-11-16 | 2019-04-19 | 中山大学 | Classified optimization method based on semi-supervised learning and fine granularity feature learning |
CN109657697B (en) * | 2018-11-16 | 2023-01-06 | 中山大学 | Classification optimization method based on semi-supervised learning and fine-grained feature learning |
CN109685135B (en) * | 2018-12-21 | 2022-03-25 | 电子科技大学 | Few-sample image classification method based on improved metric learning |
CN109685135A (en) * | 2018-12-21 | 2019-04-26 | 电子科技大学 | A kind of few sample image classification method based on modified metric learning |
CN110245714A (en) * | 2019-06-20 | 2019-09-17 | 厦门美图之家科技有限公司 | Image-recognizing method, device and electronic equipment |
CN111024912A (en) * | 2019-12-27 | 2020-04-17 | 重庆国环绿源科技有限公司 | Ship wastewater pretreatment type detection device |
CN111259366A (en) * | 2020-01-22 | 2020-06-09 | 支付宝(杭州)信息技术有限公司 | Verification code recognizer training method and device based on self-supervision learning |
CN111353583A (en) * | 2020-02-20 | 2020-06-30 | 南京工程学院 | Deep learning network based on group convolution characteristic topological space and training method thereof |
CN111353583B (en) * | 2020-02-20 | 2023-04-07 | 南京工程学院 | Deep learning network based on group convolution characteristic topological space and training method thereof |
CN111639714A (en) * | 2020-06-01 | 2020-09-08 | 贝壳技术有限公司 | Method, device and equipment for determining attributes of users |
CN111639714B (en) * | 2020-06-01 | 2021-07-23 | 贝壳找房(北京)科技有限公司 | Method, device and equipment for determining attributes of users |
Also Published As
Publication number | Publication date |
---|---|
CN107451565B (en) | 2020-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107451565A (en) | Semi-supervised small-sample deep learning image pattern classification and recognition method | |
Cheng et al. | Pest identification via deep residual learning in complex background | |
Lee et al. | Contextual deep CNN based hyperspectral classification | |
CN104217214B (en) | RGB-D human activity recognition method based on configurable convolutional neural networks | |
CN113239784B (en) | Pedestrian re-identification system and method based on space sequence feature learning | |
CN106023065A (en) | Tensor hyperspectral image spectrum-space dimensionality reduction method based on deep convolutional neural network | |
CN106991382A (en) | A kind of remote sensing scene classification method | |
CN104866810A (en) | Face recognition method of deep convolutional neural network | |
CN107016357A (en) | A kind of video pedestrian detection method based on time-domain convolutional neural networks | |
CN105574534A (en) | Significant object detection method based on sparse subspace clustering and low-order expression | |
CN106023145A (en) | Remote sensing image segmentation and identification method based on superpixel marking | |
CN108734719A (en) | Automatic foreground-background segmentation method for lepidopteran insect images based on fully convolutional neural networks | |
CN107463919A (en) | Facial expression recognition method based on deep 3D convolutional neural networks | |
CN107767416B (en) | Method for identifying pedestrian orientation in low-resolution image | |
CN110991257B (en) | Polarized SAR oil spill detection method based on feature fusion and SVM | |
Yang et al. | A deep multiscale pyramid network enhanced with spatial–spectral residual attention for hyperspectral image change detection | |
CN113487576B (en) | Insect pest image detection method based on channel attention mechanism | |
CN104462494A (en) | Remote sensing image retrieval method and system based on non-supervision characteristic learning | |
CN110222760A (en) | A kind of fast image processing method based on winograd algorithm | |
CN107545571A (en) | A kind of image detecting method and device | |
CN104298974A (en) | Human body behavior recognition method based on depth video sequence | |
CN108197650A (en) | The high spectrum image extreme learning machine clustering method that local similarity is kept | |
CN113344045B (en) | Method for improving SAR ship classification precision by combining HOG characteristics | |
CN113011386B (en) | Expression recognition method and system based on equally divided characteristic graphs | |
CN110334656A (en) | Multi-source remote sensing image water body extraction method and device based on information source probability weighting | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||