CN107392230A - Semi-supervised image classification method with maximized knowledge utilization capability - Google Patents
Semi-supervised image classification method with maximized knowledge utilization capability
- Publication number
- CN107392230A (application CN201710483627.0A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
Abstract
The invention discloses a semi-supervised image classification method with maximized knowledge utilization capability. Noting that general image classification methods concentrate on image data pre-processing and feature selection while making no breakthrough in the classifier itself, the invention proposes a semi-supervised image classification method with maximized knowledge utilization capability. Considering first that labelling images is costly in practice, the method starts from semi-supervised learning and then mines the labelled image data to the greatest possible extent, extracting knowledge from both labelled and unlabelled images. At the same time, for the pre-processing and feature selection of the image data, widely applicable normalization and principal component analysis (PCA) dimensionality reduction are adopted, which fully preserves the integrity of the image data information.
Description
Technical field
The invention belongs to the field of image processing and its applications, and specifically relates to a semi-supervised image classification method with maximized knowledge utilization capability.
Background technology
The support vector machine (Support Vector Machine, SVM) is a machine learning method based on statistical learning theory that was gradually developed after the 1990s. Its solid theoretical foundation allows it to successfully address the "curse of dimensionality" and "over-fitting" problems that are pervasive in machine learning, and it has good generalization ability, showing promising applications in many practical engineering fields. However, as a supervised learning method, a traditional SVM can only learn from a small number of labelled samples, which makes the learning insufficient to a certain extent and impairs the method's ability to recognize specific patterns. Introducing the idea of semi-supervised learning into the support vector machine can make up for this deficiency of the standard SVM and obtain a better classification effect.
Traditional machine learning techniques fall into two classes: unsupervised learning and supervised learning. Unsupervised learning uses only unlabelled sample sets, while supervised learning learns only from labelled sample sets. In many practical problems, however, only a small amount of labelled data is available, because labelling samples with categories usually consumes a great deal of manpower and material resources and the cost is sometimes very high, whereas large amounts of unlabelled data are easy to obtain. This has driven the rapid development of semi-supervised learning techniques that use labelled and unlabelled samples simultaneously. Semi-supervised support vector machine algorithms, which extend the support vector machine, have shown good results in this setting.
Image classification assigns images to predefined categories. For a computer this is rather difficult, for two reasons. First, image data is large and complex: images contain varied objects that are hard to describe, and a pattern recognition method sees the original image only through specific data processing and feature selection procedures. Second, there is the choice of classification method: classification methods are varied and each has strengths and weaknesses. For image classification, the common practice is to approach the problem from the first aspect, finding a suitable feature selection method during data pre-processing and feature selection and then classifying the images with a classical method such as SVM; the premise of this approach, however, is that as much well-defined labelled data and feature selection as possible are needed to train the classifier, and this cost is often high. Approaching the problem from the choice of classification method instead, a semi-supervised support vector machine often yields a better classification effect.
The content of the invention
The purpose of the present invention is to improve the image classification ability of the machine by mining the labelled and unlabelled image sample data more effectively and more deeply.
According to the technical scheme provided by the invention, the semi-supervised image classification method with maximized knowledge utilization capability comprises the following definitions and steps:
Definitions:
Definition 1: the data set X denotes the set of classifier training samples, where d is the data dimension, l is the number of labelled samples and u is the number of unlabelled samples;
Definition 2: y_i ∈ {+1, -1} (i = 1, ..., l) denote the sample labels corresponding to the l labelled samples of data set X;
Definition 3: f(·) denotes the classification decision function (cf. formula (7)), where α_i ≥ 0 are the support vector coefficients, b is a constant, and K(·,·) is the kernel function, usually taken as the radial basis function kernel K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2)), i ∈ [1, l], j ∈ [1, l+u], with σ the kernel bandwidth;
Definition 4: f = [f_1, ..., f_l, f_{l+1}, ..., f_{l+u}]^T is the vector of predicted values obtained for data set X from the classification decision function;
Definition 5: the pairwise constraint sets MS (Must-Link Set) and CS (Cannot-Link Set) are converted from the given sample labels; the conversion is illustrated in Fig. 2;
Definition 6: L = D - W is the graph Laplacian matrix, where W = [W_ij]_{(u+l)×(u+l)} is the adjacency matrix of data set X and D is the corresponding diagonal degree matrix;
Definition 7: define the matrix Z = H - Q, where Q = [Q_ij]_{(u+l)×(u+l)} is the matrix representing the pairwise constraint relations between labelled samples, whose elements Q_ij are computed as in formula (2); H is the diagonal matrix H = diag(Q·1_{(l+u)×1}), where 1_{(l+u)×1} is the (l+u)×1 vector whose elements are all 1; |MS| denotes the number of records in the must-link set and |CS| the number of records in the cannot-link set (a construction sketch for MS, CS and the matrices W, L, Q, H and Z follows the definitions below);
Definition 8: the manifold regularization (Manifold Regularization) term takes the form of formula (3);
Definition 9: the pairwise constraint regularization term takes the form of formula (4), where i, j, p, q are sample indices in data set X, i, j, p, q ∈ [1, l+u], <i,j> denotes any pair in the MS set, <p,q> denotes any pair in the CS set, and |MS| and |CS| denote the numbers of elements of the MS set and the CS set respectively; formula (4) can accordingly be rewritten as formula (5);
Definition 10: the concrete form of the matrix P is
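To make Definitions 5-7 concrete, the following is a minimal sketch (not part of the patent itself) of how the pairwise constraint sets and the matrices W, L, Q, H and Z could be assembled. The k-nearest-neighbour connectivity graph for W is an illustrative assumption (the patent only calls W an adjacency matrix of the data set), and since formula (2) is not reproduced here, the ±1/|MS|, ∓1/|CS| entries of Q are chosen so that f^T Z f reproduces formulas (4)-(5).

```python
import numpy as np
from itertools import combinations
from sklearn.neighbors import kneighbors_graph

def build_constraint_sets(y_labeled):
    """Convert labels of the l labelled samples into must-link / cannot-link pairs (cf. Fig. 2)."""
    MS, CS = [], []
    for i, j in combinations(range(len(y_labeled)), 2):
        (MS if y_labeled[i] == y_labeled[j] else CS).append((i, j))
    return MS, CS

def build_graph_matrices(X, MS, CS, n_neighbors=5):
    """Return W, L, Q, H, Z for the (l+u) samples in X."""
    n = X.shape[0]
    # Adjacency matrix W (assumption: symmetric k-NN graph with 0/1 weights).
    W = kneighbors_graph(X, n_neighbors, mode='connectivity').toarray()
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))          # diagonal degree matrix
    L = D - W                           # graph Laplacian, Definition 6
    # Pairwise constraint matrix Q (assumed entries, chosen to match formulas (4)-(5)).
    Q = np.zeros((n, n))
    for i, j in MS:
        Q[i, j] = Q[j, i] = 1.0 / max(len(MS), 1)
    for p, q in CS:
        Q[p, q] = Q[q, p] = -1.0 / max(len(CS), 1)
    H = np.diag(Q @ np.ones(n))         # H = diag(Q * 1)
    Z = H - Q                           # Definition 7
    return W, L, Q, H, Z
```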
The semi-supervised image classification method with maximized knowledge utilization capability comprises the following steps:
Step 1: unify the size of all original images to the same format and take all pixels of each image as the features of one sample, thereby obtaining preliminary image data;
Step 2: apply data normalization and feature dimensionality reduction (principal component analysis) to the data obtained in Step 1 to obtain the corresponding image data;
Step 3: build the semi-supervised classification model with maximized knowledge utilization capability, as shown in formula (6):
In the above formula, the data set X denotes the l labelled samples and u unlabelled samples required for classifier training, the sample dimension is d, y_i ∈ {+1, -1} (i = 1, ..., l) are the sample labels of the l labelled samples, f(·) denotes the classification decision function, H_K is the reproducing kernel Hilbert space (Reproducing Kernel Hilbert Space, RKHS), K is the kernel matrix computed by the kernel function K, f = [f_1, ..., f_l, f_{l+1}, ..., f_{l+u}]^T is the vector of predicted values f_i (i = 1, ..., l+u) obtained for data set X from the classification decision function, the classification decision function is given by formula (7), γ_A > 0, γ_I > 0, γ_D > 0 are three regularization coefficients, and L and Z are the graph Laplacian matrix and the pairwise constraint matrix respectively;
The semi-supervised classification model with maximized knowledge utilization capability in formula (6) can be divided into three parts. The first part, formula (6-1), controls the empirical risk and is expressed as the hinge loss function (hinge loss function); the purpose of the second part, formula (6-2), is to avoid over-fitting; and the third part, the last term of formula (6), expresses the joint manifold and pairwise constraint regularization framework, in which the parameters γ_I and γ_D adjust the respective influence of the manifold regularization and the pairwise constraint regularization on the whole; a sketch evaluating these three terms is given below;
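As an illustration only, the following sketch evaluates the three parts of objective (6) in its parameterized form (8) for a given coefficient vector α and bias b. The function and variable names are hypothetical; it assumes the kernel matrix K, the label array y (values ±1) and the matrices L and Z have already been built.

```python
import numpy as np

def objective_terms(alpha, b, K, y, L, Z, gamma_A, gamma_I, gamma_D):
    """Return the three parts of objective (6)/(8): hinge loss, RKHS norm term, joint regularization."""
    l = len(y)
    f = K @ alpha + b                                    # predicted values on all l+u samples, cf. (7)
    hinge = np.mean(np.maximum(0.0, 1.0 - y * f[:l]))    # formula (6-1): empirical risk on labelled samples
    rkhs = gamma_A * alpha @ K @ alpha                   # formula (6-2): gamma_A * ||f||_K^2
    joint = alpha @ K @ (gamma_I * L + gamma_D * Z) @ K @ alpha  # manifold + pairwise constraint term
    return hinge, rkhs, joint
```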
Substituting formulas (6-1) and (6-2) into formula (6) gives the concrete semi-supervised classification model with maximized knowledge utilization capability, formula (8):
Step 4: solve the classification model of Step 3 to obtain the final solution α* and b* required by the classification decision function f(·), form the classifier required for image classification, import the image data to be predicted, processed in the data pre-processing stage, into the classifier model, and obtain the classification results of the predicted images.
Further, the optimization steps for obtaining the optimal solution α* and b* required by the classification decision function f(·) in Step 4 include:
(1) Convert the semi-supervised classification model with maximized knowledge utilization capability, i.e. formula (8), into the form of a quadratic programming problem. Specifically, introducing Lagrange coefficients β = (β_1, β_2, ..., β_l) and γ = (γ_1, γ_2, ..., γ_l) gives the corresponding Lagrangian L(α, b, ξ, β, γ):
According to the Karush-Kuhn-Tucker (KKT) conditions, applying the classical Lagrange conditional extremum method and setting the corresponding partial derivatives to zero yields the following formulas (10)-(12):
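The stationarity conditions themselves are not reproduced in the source text; from the Lagrangian (9) and their later use in formulas (13) and (14), they presumably take the following form (a reconstruction, not a verbatim quotation of formulas (10)-(12)):

$$\frac{\partial L}{\partial \alpha} = 0 \;\Rightarrow\; \alpha = \frac{1}{2}\bigl(\gamma_A K + K(\gamma_I L + \gamma_D Z)K\bigr)^{-1} P\beta, \qquad (10)$$

$$\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^{l}\beta_i y_i = 0, \qquad (11)$$

$$\frac{\partial L}{\partial \xi_i} = 0 \;\Rightarrow\; \frac{1}{l} - \beta_i - \gamma_i = 0,\quad i = 1,\ldots,l. \qquad (12)$$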
Substituting formulas (10)-(12) into formula (9) yields the corresponding quadratic programming problem, formula (13), where S = P^T(γ_A K + K(γ_I L + γ_D Z)K)^{-1}P;
(2) Solve the quadratic programming problem (13) to obtain the optimal solution β*, and substitute β* into the solution form of α in formula (10) to obtain the final solution α*, i.e. formula (14):
(3) Substitute the final solution α* obtained in (2) into formula (15) to obtain the final solution b* of b; a solver sketch follows below.
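For orientation only, here is a minimal sketch of solution steps (1)-(3) using a generic quadratic programming solver (cvxopt is an illustrative choice, not mandated by the patent). The matrix P of Definition 10 is passed in as given, since its concrete form is specified elsewhere in the patent; dimensional consistency with formulas (13)-(14) only requires it to be of size (l+u)×l.

```python
import numpy as np
from cvxopt import matrix, solvers

def solve_classifier(K, y, L, Z, P, gamma_A, gamma_I, gamma_D):
    """Solve QP (13) for beta*, then recover alpha* via (14) and b* via (15)."""
    l = len(y)
    M = gamma_A * K + K @ (gamma_I * L + gamma_D * Z) @ K
    S = P.T @ np.linalg.inv(M) @ P                       # S used in formula (13)
    S = 0.5 * (S + S.T)                                  # symmetrize against round-off

    # max  sum(beta) - 1/4 beta^T S beta   <=>   min  1/2 beta^T (S/2) beta - 1^T beta
    sol = solvers.qp(matrix(0.5 * S), matrix(-np.ones(l)),
                     matrix(np.vstack([-np.eye(l), np.eye(l)])),            # 0 <= beta_i <= 1/l
                     matrix(np.hstack([np.zeros(l), np.full(l, 1.0 / l)])),
                     matrix(y.astype(float).reshape(1, l)),                 # sum_i beta_i y_i = 0
                     matrix(np.zeros(1)))
    beta = np.array(sol['x']).ravel()

    alpha = 0.5 * np.linalg.inv(M) @ (P @ beta)          # formula (14)
    b = float(np.mean(y - K[:l] @ alpha))                # formula (15)
    return alpha, b
```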
The advantages of the invention are as follows. First, in the choice of classifier the invention has better learning ability than the traditional classification method SVM: the construction of the joint manifold and pairwise constraint regularization framework contains both a further mining of the knowledge in the small number of labelled samples (through mining the MS and CS sets) and an exploration of the knowledge in the large number of unlabelled samples (through the manifold regularization). Second, for the pre-processing of the image data, widely applicable feature normalization and feature dimensionality reduction (principal component analysis) are selected, fully preserving the integrity of the data information.
Brief description of the drawings
Fig. 1 is the flow chart of the semi-supervised image classification method with maximized knowledge utilization capability of the present invention.
Fig. 2 is a schematic diagram of the process of converting sample labels into the pairwise constraint sets MS and CS.
Fig. 3 shows the two classes of images required for classifier training and prediction; a portion of them is displayed.
Fig. 4 is the trend chart of the classifier's prediction accuracy under different numbers of labels.
Embodiment
The invention will be further described below with reference to the accompanying drawings and embodiments.
For convenience, the terms involved in the method of the invention are defined as follows:
Definition 1: the data set X denotes the l labelled samples and u unlabelled samples required for classifier training, and y_i ∈ {+1, -1} (i = 1, ..., l) are the sample labels of the l labelled samples;
Definition 2: f(·) denotes the classification decision function, H_K is the reproducing kernel Hilbert space (Reproducing Kernel Hilbert Space, RKHS), and K is the kernel function, here the radial basis function kernel, computed as K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2)), with σ the kernel bandwidth;
Definition 3: f = [f_1, ..., f_l, f_{l+1}, ..., f_{l+u}]^T is the vector of predicted values f_i (i = 1, ..., l+u) obtained for data set X from the classification decision function f(·);
Definition 4: the pairwise constraint sets MS and CS are converted from the given sample labels; the conversion is illustrated in Fig. 2;
Definition 5: W_ij ∈ W (i, j = 1, ..., u+l) are the edge weights of the adjacency matrix of data set X, L = D - W is the graph Laplacian matrix, and D is the corresponding diagonal matrix;
Definition 6: Q_ij ∈ Q (i, j = 1, ..., u+l) describe the pairwise constraint relations between labelled samples, with the elements Q_ij computed as in formula (2); Z = H - Q has a form similar to the graph Laplacian matrix, where H is the diagonal matrix H = diag(Q·1_{(l+u)×1}) and 1_{(l+u)×1} is the (l+u)×1 matrix whose elements are all 1;
Definition 7: the manifold regularization (Manifold Regularization) term takes the form of formula (3);
Definition 8: the pairwise constraint regularization term takes the form of formula (4),
where i, j, p, q are sample indices in X, i, j, p, q ∈ [1, l+u], <i,j> denotes any pair in the MS set, <p,q> denotes any pair in the CS set, and |MS| and |CS| denote the numbers of elements of the MS set and the CS set respectively; formula (4) can accordingly be rewritten as formula (5);
Definition 9: the concrete form of the matrix P is
As shown in Fig. 1, the semi-supervised image classification method with maximized knowledge utilization capability is implemented, on the basis of the foregoing definitions, according to the following steps:
Step 1: unify the size of all original images to the same format and take all pixels of each image as the features of one sample, thereby obtaining preliminary image data;
Step 2: apply data normalization and feature dimensionality reduction (principal component analysis) to the data obtained in Step 1 to obtain the corresponding image data;
Step 3: build the semi-supervised classification model with maximized knowledge utilization capability, as shown in formula (6):
In the above formula, the data set X denotes the l labelled samples and u unlabelled samples required for classifier training, the sample dimension is d, y_i ∈ {+1, -1} (i = 1, ..., l) are the sample labels of the l labelled samples, f(·) denotes the classification decision function, H_K is the reproducing kernel Hilbert space (Reproducing Kernel Hilbert Space, RKHS), K is the kernel matrix computed by the kernel function K, here the radial basis function kernel K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2)) with σ the kernel bandwidth, f = [f_1, ..., f_l, f_{l+1}, ..., f_{l+u}]^T is the vector of predicted values f_i (i = 1, ..., l+u) obtained for data set X from the classification decision function f(·), the classification decision function is given by formula (7), γ_A > 0, γ_I > 0, γ_D > 0 are three regularization coefficients, and L and Z are the graph Laplacian matrix and the pairwise constraint matrix respectively;
The semi-supervised classification model with maximized knowledge utilization capability in formula (6) can be divided into three parts. The first part, formula (6-1), controls the empirical risk and is expressed as the hinge loss function (hinge loss function); the purpose of the second part, formula (6-2), is to avoid over-fitting; and the third part, the last term of formula (6), expresses the joint manifold and pairwise constraint regularization framework, in which the parameters γ_I and γ_D adjust the respective influence of the manifold regularization and the pairwise constraint regularization on the whole;
Substituting formulas (6-1) and (6-2) into formula (6) gives the concrete semi-supervised classification model with maximized knowledge utilization capability, formula (8):
Step 4: solve the classification model of Step 3 to obtain the final solution α* and b* required by the classification decision function f(·), form the classifier required for image classification, import the image data to be predicted, processed in the data pre-processing stage, into the classifier model, and obtain the classification results of the predicted images.
Further, the optimization steps for obtaining the optimal solution α* and b* required by the classification decision function f(·) in Step 4 include:
(1) Convert the semi-supervised classification model with maximized knowledge utilization capability, i.e. formula (8), into the form of a quadratic programming problem. Specifically, introducing Lagrange coefficients β = (β_1, β_2, ..., β_l) and γ = (γ_1, γ_2, ..., γ_l) gives the corresponding Lagrangian L(α, b, ξ, β, γ):
According to the Karush-Kuhn-Tucker (KKT) conditions, applying the classical Lagrange conditional extremum method and setting the corresponding partial derivatives to zero yields formulas (10)-(12):
Substituting formulas (10)-(12) into formula (9) yields the corresponding quadratic programming problem, formula (13), where S = P^T(γ_A K + K(γ_I L + γ_D Z)K)^{-1}P;
(2) Solve the quadratic programming problem (13) to obtain the optimal solution β*, and substitute β* into the solution form of α in formula (10) to obtain the final solution α*, i.e. formula (14):
(3) Substitute the final solution α* of formula (14) into formula (15) to obtain the final solution b* of b.
A detailed implementation process is given below.
1. Raw image data pre-processing:
1) The sizes of all original images (including all training and test images) are unified to the same format, with a resolution of 256*256, and all pixels of each image are taken as the features of one sample, thereby obtaining a preliminary image data set;
2) The preliminary image data set obtained in 1) is normalized to obtain the normalized image data set;
3) Feature dimensionality reduction with principal component analysis is applied to the normalized image data set obtained in 2), keeping the leading 1000 dimensions with the largest eigenvalues (the cumulative contribution rate then reaches 94%, which preserves the integrity of the data information), thereby completing the feature extraction; a pre-processing sketch follows below.
2. Classifier training and prediction stage:
1) From the training data obtained by the data pre-processing and the data labels, generate the MS set and the CS set, and obtain the graph Laplacian matrix L and the pairwise constraint matrix Z;
2) Map the given training data set into the high-dimensional space with the radial basis function kernel to obtain the kernel matrix K required for classifier training, and compute the matrices P and S;
3) Solve the quadratic programming problem, formula (13), with a quadratic programming toolbox and obtain the optimal solution β*;
4) Substitute β* into the solution form of α in formula (10) to obtain the final solution α*, and substitute the solved α* into formula (15) to obtain the final solution b*, thereby obtaining the classifier model required for image classification, i.e. formula (7);
5) Import the prediction data obtained from the data pre-processing into formula (7) and classify the images according to the sign of the predicted value: positive values correspond to the positive class and negative values to the negative class. Through the above two stages, the image classification accuracy of the semi-supervised image classification method with the optimal maximized knowledge utilization capability is finally obtained; a prediction-stage sketch follows below.
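Purely as an illustration, the prediction stage sketched below applies the decision function of formula (7) to new images; the helper names are hypothetical, and it assumes α*, b*, the RBF bandwidth σ and the training samples X_train are already available from the training stage.

```python
import numpy as np

def rbf_kernel_matrix(A, B, sigma):
    """K(a, b) = exp(-||a - b||^2 / (2 * sigma^2)) between the rows of A and B."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def predict(X_new, X_train, alpha_star, b_star, sigma):
    """Formula (7): f*(x) = sum_i alpha_i* K(x, x_i) + b*; positive f -> positive class."""
    f = rbf_kernel_matrix(X_new, X_train, sigma) @ alpha_star + b_star
    return np.where(f >= 0, +1, -1)
```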
Embodiment 1
Fig. 3(a)-3(d) show part of the samples of each class of images used for training and testing; 400 images were used for training and 220 images for testing. Fig. 4 is the trend chart of the image classification accuracy under different numbers of labelled samples. In this embodiment the parameters are selected by cross validation, with the range of each of γ_A, γ_I, γ_D set to {10^-5, 10^-4, 10^-3, 10^-2, 10^-1, 10^1, 10^2}; the classification accuracy reported in the embodiment is the result obtained by cross validation with the optimal parameters (a parameter-selection sketch is given after this paragraph). The invention should not, however, be limited to what is disclosed in the embodiment and the accompanying drawings; any equivalent or modification that does not depart from the spirit disclosed by the invention falls within the protection scope of the invention.
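As a non-authoritative illustration of this parameter selection, the grid search below scores each (γ_A, γ_I, γ_D) combination by k-fold cross validation over the labelled samples; `train_and_score` is a hypothetical stand-in for training the model of formula (8) and measuring its accuracy on held-out labelled images.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold

GRID = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1e1, 1e2]   # parameter range used in Embodiment 1

def select_parameters(X_labeled, y_labeled, X_unlabeled, train_and_score, n_splits=5):
    """Return the (gamma_A, gamma_I, gamma_D) triple with the best cross-validated accuracy."""
    best, best_acc = None, -np.inf
    for gamma_A, gamma_I, gamma_D in product(GRID, repeat=3):
        accs = []
        for tr, va in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X_labeled):
            accs.append(train_and_score(X_labeled[tr], y_labeled[tr], X_unlabeled,
                                        X_labeled[va], y_labeled[va],
                                        gamma_A, gamma_I, gamma_D))
        if np.mean(accs) > best_acc:
            best, best_acc = (gamma_A, gamma_I, gamma_D), np.mean(accs)
    return best, best_acc
```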
The above is a preferred embodiment of the invention; those skilled in the art can also change and modify the above embodiment. Therefore, the invention is not limited to the above embodiment, and any obvious improvement, replacement or modification made by those skilled in the art on the basis of the invention belongs to the protection scope of the invention.
Claims (3)
1. A semi-supervised image classification method with maximized knowledge utilization capability, characterized in that it is implemented according to the following definitions and steps:
Definition 1: the data set X denotes the l labelled samples and u unlabelled samples required for classifier training, and y_i ∈ {+1, -1} (i = 1, ..., l) are the sample labels of the l labelled samples;
Definition 2: f(·) denotes the classification decision function, H_K is the reproducing kernel Hilbert space (Reproducing Kernel Hilbert Space, RKHS), and K is the kernel function, here the radial basis function kernel, computed as K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2)), with σ the kernel bandwidth;
Definition 3: f = [f_1, ..., f_l, f_{l+1}, ..., f_{l+u}]^T is the vector of predicted values f_i (i = 1, ..., l+u) obtained for data set X from the classification decision function f(·);
Definition 4: the pairwise constraint sets MS and CS are converted from the given sample labels;
Definition 5: W_ij ∈ W (i, j = 1, ..., u+l) are the edge weights of the adjacency matrix of data set X, L = D - W is the graph Laplacian matrix, and D is the corresponding diagonal matrix;
Definition 6: Q_ij ∈ Q (i, j = 1, ..., u+l) describe the pairwise constraint relations between labelled samples, with the elements Q_ij computed as in formula (2); Z = H - Q has a form similar to the graph Laplacian matrix, where H is the diagonal matrix H = diag(Q·1_{(l+u)×1}) and 1_{(l+u)×1} is the (l+u)×1 matrix whose elements are all 1;
Definition 7: the manifold regularization (Manifold Regularization) term takes the form:
$$\|f\|_I^2 = \frac{1}{(u+l)^2}\sum_{i,j=1}^{l+u}\bigl(f(x_i)-f(x_j)\bigr)^2 W_{ij} = \frac{1}{(u+l)^2}\,f^{T}Lf, \qquad (3)$$
Definition 8: the pairwise constraint regularization term takes the form:
$$\min_{f}\left(\frac{\sum_{\langle i,j\rangle\in MS}(f_i-f_j)^2}{|MS|}-\frac{\sum_{\langle p,q\rangle\in CS}(f_p-f_q)^2}{|CS|}\right), \qquad (4)$$
where i, j, p, q are sample indices in X, i, j, p, q ∈ [1, l+u], <i,j> denotes any pair in the MS set, <p,q> denotes any pair in the CS set, and |MS| and |CS| denote the numbers of elements of the MS set and the CS set respectively; formula (4) can accordingly be rewritten as:
$$\min_{f}\bigl(f^{T}Zf\bigr), \qquad (5)$$
Definition 9: the concrete form of the matrix P is
Step 1: unify the size of all original images to the same format and take all pixels of each image as the features of one sample, thereby obtaining preliminary image data;
Step 2: apply data normalization and feature dimensionality reduction (principal component analysis) to the data obtained in Step 1 to obtain the corresponding image data;
Step 3: build the semi-supervised classification model with maximized knowledge utilization capability, as shown in formula (6):
$$\min_{f\in H_K}\left(\frac{1}{l}\sum_{i=1}^{l}\bigl(1-y_i f(x_i)\bigr)_{+}+\gamma_A\|f\|_K^2+f^{T}(\gamma_I L+\gamma_D Z)f\right) \qquad (6)$$
In the above formula,
$$\frac{1}{l}\sum_{i=1}^{l}\bigl(1-y_i f(x_i)\bigr)_{+}=\frac{1}{l}\sum_{i=1}^{l}\xi_i \qquad (6\text{-}1)$$
$$\|f\|_K^2=\alpha^{T}K\alpha \qquad (6\text{-}2)$$
where the data set X denotes the l labelled samples and u unlabelled samples required for classifier training, the sample dimension is d, y_i ∈ {+1, -1} (i = 1, ..., l) are the sample labels of the l labelled samples, f(·) denotes the classification decision function, H_K is the reproducing kernel Hilbert space (Reproducing Kernel Hilbert Space, RKHS), K is the kernel matrix computed by the kernel function K, f = [f_1, ..., f_l, f_{l+1}, ..., f_{l+u}]^T is the vector of predicted values f_i (i = 1, ..., l+u) obtained for data set X from the classification decision function f(·), the classification decision function is given by formula (7), γ_A > 0, γ_I > 0, γ_D > 0 are three regularization coefficients, and L and Z are the graph Laplacian matrix and the pairwise constraint matrix respectively;
$$f^{*}(x)=\sum_{i=1}^{l+u}\alpha_i^{*}K(x,x_i)+b^{*} \qquad (7)$$
Substituting formulas (6-1) and (6-2) into formula (6) gives the concrete semi-supervised classification method model with maximized knowledge utilization capability:
$$\begin{aligned}
\min_{\alpha\in R^{l+u},\,\xi_i\in R}\;&\left(\frac{1}{l}\sum_{i=1}^{l}\xi_i+\gamma_A\alpha^{T}K\alpha+\alpha^{T}K(\gamma_I L+\gamma_D Z)K\alpha\right),\\
\text{s.t.}\;&y_i\left(\sum_{j=1}^{l+u}\alpha_j K(x_i,x_j)+b\right)\ge 1-\xi_i,\quad i=1,\ldots,l,\\
&\xi_i\ge 0,\quad i=1,\ldots,l. \qquad (8)
\end{aligned}$$
Step 4: use the final solution required by the classification decision function, obtained by solving the model of Step 3, to form the classifier required for image classification, import the image data to be predicted, processed in the data pre-processing stage, into the classifier model, and obtain the classification results of the predicted images.
2. The semi-supervised image classification method with maximized knowledge utilization capability according to claim 1, characterized in that the optimization steps for obtaining the optimal solution α* and b* required by the classification decision function f(·) include:
(1) Convert the semi-supervised classification model with maximized knowledge utilization capability, i.e. formula (8), into the form of a quadratic programming problem. Specifically, introducing Lagrange coefficients β = (β_1, β_2, ..., β_l) and γ = (γ_1, γ_2, ..., γ_l) gives the corresponding Lagrangian L(α, b, ξ, β, γ):
$$\begin{aligned}
L(\alpha,b,\xi,\beta,\gamma)={}&\frac{1}{l}\sum_{i=1}^{l}\xi_i+\alpha^{T}\bigl(\gamma_A K+K(\gamma_I L+\gamma_D Z)K\bigr)\alpha\\
&-\sum_{i=1}^{l}\beta_i\left(y_i\left(\sum_{j=1}^{l+u}\alpha_j K(x_i,x_j)+b\right)-1+\xi_i\right)-\sum_{i=1}^{l}\gamma_i\xi_i. \qquad (9)
\end{aligned}$$
According to the Karush-Kuhn-Tucker (KKT) conditions, applying the classical Lagrange conditional extremum method and setting the corresponding partial derivatives to zero yields the following formulas:
Substituting formulas (10)-(12) into formula (9) yields the corresponding quadratic programming problem,
$$\begin{aligned}
\max_{\beta\in R^{l}}\;&\left(\sum_{i=1}^{l}\beta_i-\frac{1}{4}\beta^{T}S\beta\right),\\
\text{s.t.}\;&\sum_{i=1}^{l}\beta_i y_i=0,\\
&0\le\beta_i\le\frac{1}{l},\quad i=1,\ldots,l. \qquad (13)
\end{aligned}$$
where in formula (13) S = P^T(γ_A K + K(γ_I L + γ_D Z)K)^{-1}P;
(2) Solve the quadratic programming problem (13) to obtain the optimal solution β*, and substitute β* into the solution form of α in formula (10) to obtain the final solution α*, i.e. formula (14):
$$\alpha^{*}=\frac{1}{2}\bigl(\gamma_A K+K(\gamma_I L+\gamma_D Z)K\bigr)^{-1}P\beta^{*}, \qquad (14)$$
(3) Substitute the final solution α* obtained in (2) into formula (15) to obtain the final solution b* of b.
$$b^{*}=\frac{1}{l}\sum_{i=1}^{l}\left(y_i-\sum_{j=1}^{l+u}\alpha_j^{*}K(x_i,x_j)\right). \qquad (15)$$
3. The semi-supervised classification method with maximized knowledge utilization capability according to claim 1, characterized in that the semi-supervised classification model with maximized knowledge utilization capability, formula (6), can be divided into three parts: the first part, formula (6-1), controls the empirical risk and is expressed as the hinge loss function (hinge loss function); the purpose of the second part, formula (6-2), is to avoid over-fitting; and the third part, the last term of formula (6), expresses the joint manifold and pairwise constraint regularization framework, in which the parameters γ_I and γ_D adjust the respective influence of the manifold regularization and the pairwise constraint regularization on the whole.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710483627.0A CN107392230A (en) | 2017-06-22 | 2017-06-22 | Semi-supervised image classification method with maximized knowledge utilization capability |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710483627.0A CN107392230A (en) | 2017-06-22 | 2017-06-22 | Semi-supervised image classification method with maximized knowledge utilization capability |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107392230A true CN107392230A (en) | 2017-11-24 |
Family
ID=60333598
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710483627.0A Pending CN107392230A (en) | 2017-06-22 | 2017-06-22 | Semi-supervised image classification method with maximized knowledge utilization capability |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107392230A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109299732A (en) * | 2018-09-12 | 2019-02-01 | 北京三快在线科技有限公司 | The method, apparatus and electronic equipment of unmanned behaviour decision making and model training |
CN109711456A (en) * | 2018-12-21 | 2019-05-03 | 江南大学 | A kind of semi-supervised image clustering method having robustness |
CN110781942A (en) * | 2019-10-18 | 2020-02-11 | 中国科学技术大学 | Semi-supervised classification method and system |
CN111046951A (en) * | 2019-12-12 | 2020-04-21 | 安徽威奥曼机器人有限公司 | Medical image classification method |
CN111126297A (en) * | 2019-12-25 | 2020-05-08 | 淮南师范学院 | Experience analysis method based on learner expression |
CN111898710A (en) * | 2020-07-15 | 2020-11-06 | 中国人民解放军火箭军工程大学 | Method and system for selecting characteristics of graph |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426923A (en) * | 2015-12-14 | 2016-03-23 | 北京科技大学 | Semi-supervised classification method and system |
CN104463202B (en) * | 2014-11-28 | 2017-09-19 | 苏州大学 | A kind of multiclass image semisupervised classification method and system |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104463202B (en) * | 2014-11-28 | 2017-09-19 | 苏州大学 | A kind of multiclass image semisupervised classification method and system |
CN105426923A (en) * | 2015-12-14 | 2016-03-23 | 北京科技大学 | Semi-supervised classification method and system |
Non-Patent Citations (1)
Title |
---|
奚臣等 (XI Chen et al.), "流形与成对约束联合正则化半监督分类方法" (Semi-supervised classification method with joint manifold and pairwise constraint regularization), 《计算机科学与探索》 (Journal of Frontiers of Computer Science and Technology) * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109299732A (en) * | 2018-09-12 | 2019-02-01 | 北京三快在线科技有限公司 | The method, apparatus and electronic equipment of unmanned behaviour decision making and model training |
US11983245B2 (en) | 2018-09-12 | 2024-05-14 | Beijing Sankuai Online Technology Co., Ltd | Unmanned driving behavior decision-making and model training |
CN109711456A (en) * | 2018-12-21 | 2019-05-03 | 江南大学 | A kind of semi-supervised image clustering method having robustness |
CN109711456B (en) * | 2018-12-21 | 2023-04-28 | 江南大学 | Semi-supervised image clustering method with robustness |
CN110781942A (en) * | 2019-10-18 | 2020-02-11 | 中国科学技术大学 | Semi-supervised classification method and system |
CN111046951A (en) * | 2019-12-12 | 2020-04-21 | 安徽威奥曼机器人有限公司 | Medical image classification method |
CN111126297A (en) * | 2019-12-25 | 2020-05-08 | 淮南师范学院 | Experience analysis method based on learner expression |
CN111126297B (en) * | 2019-12-25 | 2023-10-31 | 淮南师范学院 | Experience analysis method based on learner expression |
CN111898710A (en) * | 2020-07-15 | 2020-11-06 | 中国人民解放军火箭军工程大学 | Method and system for selecting characteristics of graph |
CN111898710B (en) * | 2020-07-15 | 2023-09-29 | 中国人民解放军火箭军工程大学 | Feature selection method and system of graph |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107392230A (en) | Semi-supervised image classification method with maximized knowledge utilization capability | |
CN110059694B (en) | Intelligent identification method for character data in complex scene of power industry | |
CN109492099B (en) | Cross-domain text emotion classification method based on domain impedance self-adaption | |
Wang et al. | Object-scale adaptive convolutional neural networks for high-spatial resolution remote sensing image classification | |
CN105005794B (en) | Merge the image pixel semanteme marking method of more granularity contextual informations | |
CN111652232B (en) | Bill identification method and device, electronic equipment and computer readable storage medium | |
CN104809187A (en) | Indoor scene semantic annotation method based on RGB-D data | |
CN105184226A (en) | Digital identification method, digital identification device, neural network training method and neural network training device | |
CN111626279B (en) | Negative sample labeling training method and highly-automatic bill identification method | |
CN105787516A (en) | High-spectral image classification method base on space spectral locality low-rank hypergraph learning | |
CN103745233B (en) | The hyperspectral image classification method migrated based on spatial information | |
CN111598101A (en) | Urban area intelligent extraction method, system and equipment based on remote sensing image scene segmentation | |
CN112070135A (en) | Power equipment image detection method and device, power equipment and storage medium | |
CN102855486A (en) | Generalized image target detection method | |
CN104484886A (en) | Segmentation method and device for MR image | |
CN107330448A (en) | A kind of combination learning method based on mark covariance and multiple labeling classification | |
CN114863091A (en) | Target detection training method based on pseudo label | |
CN114782967B (en) | Software defect prediction method based on code visual chemistry | |
CN109034213A (en) | Hyperspectral image classification method and system based on joint entropy principle | |
CN110414575A (en) | A kind of semi-supervised multiple labeling learning distance metric method merging Local Metric | |
Wang et al. | An improved U-Net-based network for multiclass segmentation and category ratio statistics of ore images | |
CN105787045A (en) | Precision enhancing method for visual media semantic indexing | |
Zhang et al. | License plate recognition model based on CNN+ LSTM+ CTC | |
CN115359304A (en) | Single image feature grouping-oriented causal invariance learning method and system | |
CN108197663A (en) | Based on the calligraphy work image classification method to pairing set Multi-label learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20171124