CN108229552A - Model processing method, apparatus and storage medium - Google Patents
Model processing method, apparatus and storage medium
- Publication number
- CN108229552A CN108229552A CN201711475434.7A CN201711475434A CN108229552A CN 108229552 A CN108229552 A CN 108229552A CN 201711475434 A CN201711475434 A CN 201711475434A CN 108229552 A CN108229552 A CN 108229552A
- Authority
- CN
- China
- Prior art keywords
- objects
- library
- sample
- model
- object library
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a model processing method, including: extracting first object features from a source library and second object features from a target library, where the first object features and the second object features each characterize the sample set of the database to which they belong; mapping the first object features and the second object features to a common feature space, obtaining mapped first object features and mapped second object features; determining, from the mapped first object features and the mapped second object features, the transformation matrix at which the difference between the source library and the target library satisfies a preset minimum-difference condition; converting the mapped first object features with the transformation matrix to obtain third object features; and applying the third object features, together with their labels, to a model that classifies the samples in the target library. The present invention also provides a model processing apparatus and a storage medium.
Description
Technical field
The present invention relates to electric digital data processing technology, and more particularly to a model processing method, apparatus, and storage medium.
Background technology
With the arrival of the big-data era, people can acquire massive amounts of data more easily. At the same time, as the field of machine learning continues to develop, the questions of how to give computers the ability to generalize from one case to others and how to put massive data to better use have become practical and valuable. To address these problems, transfer learning has been proposed and has attracted growing attention.

Conventional machine learning rests on an important assumption: the samples in the source library and the samples in the target library must follow the same distribution, i.e., come from the same feature space. In real life, however, this assumption is hard to satisfy. Concretely, for a classification problem, if the samples in the source library and the samples in the target library do not follow the same distribution (roughly, the source library and the target library are not the same library), they can be understood as not sharing the same feature space. In the field of image recognition, a model trained on one image library achieves good recognition results when applied to that same library, but often performs poorly when applied to other image libraries or to real environments.

In summary, because in the prior art the source library and the target library do not share the same feature space, a model trained on the source library cannot achieve good recognition results when applied to the target library.
Summary of the invention

In view of this, embodiments of the present invention provide a model processing method, apparatus, and storage medium that overcome the loss of model accuracy caused by the difference between the feature spaces of the source library and the target library.

To achieve the above objective, the technical solutions of the embodiments of the present invention are realized as follows:
An embodiment of the present invention provides a model processing method, the method including:

extracting first object features from a source library and second object features from a target library, where the first object features and the second object features each characterize the sample set of the database to which they belong;

mapping the first object features and the second object features to a common feature space, obtaining mapped first object features and mapped second object features;

determining, from the mapped first object features and the mapped second object features, the transformation matrix at which the difference between the source library and the target library satisfies a preset minimum-difference condition;

converting the mapped first object features with the transformation matrix to obtain third object features; and

applying the third object features, together with their labels, to a model that classifies the samples in the target library.
An embodiment of the present invention also provides a model processing apparatus, the apparatus including an extraction module, a mapping module, a determining module, a converting module, and an application module, where:

the extraction module is configured to extract first object features from a source library and second object features from a target library, where the first object features and the second object features each characterize the sample set of the database to which they belong;

the mapping module is configured to map the first object features and the second object features to a common feature space, obtaining mapped first object features and mapped second object features;

the determining module is configured to determine, from the mapped first object features and the mapped second object features, the transformation matrix at which the difference between the source library and the target library satisfies a preset minimum-difference condition;

the converting module is configured to convert the mapped first object features with the transformation matrix to obtain third object features; and

the application module is configured to apply the third object features, together with their labels, to a model that classifies the samples in the target library.
An embodiment of the present invention also provides a storage medium on which an executable program is stored; when executed by a processor, the executable program implements any one of the foregoing model processing methods.

An embodiment of the present invention also provides a model processing apparatus including a memory, a processor, and an executable program stored in the memory and runnable by the processor; when running the executable program, the processor performs any one of the foregoing model processing methods.
With the model processing method, apparatus, and storage medium provided by the embodiments of the present invention, first object features are extracted from a source library and second object features from a target library, the first object features and the second object features each characterizing the sample set of the database to which they belong; the first object features and the second object features are mapped to a common feature space, yielding mapped first object features and mapped second object features; from these, the transformation matrix at which the difference between the source library and the target library satisfies a preset minimum-difference condition is determined; the mapped first object features are converted with the transformation matrix to obtain third object features; and the third object features, together with their labels, are applied to a model that classifies the samples in the target library. In this way, a model suited to the target library is trained from the source library, so that the model trained on the source library achieves good recognition results when applied to the target library, improving the accuracy of the model.
Description of the drawings
Fig. 1 is a schematic flowchart of the model processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a specific implementation of the model processing method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of face landmark calibration provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the Gabor space provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of AlexNet, a deep-learning image classification model, provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the composition of the model processing apparatus provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the hardware architecture of the model processing apparatus provided by an embodiment of the present invention.
Detailed description

To enable a fuller understanding of the features and technical content of the embodiments of the present invention, their implementation is described in detail below with reference to the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the present invention.
Fig. 1 shows a model processing method provided by an embodiment of the present invention. As shown in Fig. 1, the implementation flow of the model processing method in the embodiment of the present invention may include the following steps:

Step 101: Extract first object features from a source library and second object features from a target library, where the first object features and the second object features each characterize the sample set of the database to which they belong.
In some embodiments, when extracting the first object features from the source library and the second object features from the target library, object features of different types may be extracted from the source library and combined to form the first object features of the source library; likewise, object features of different types may be extracted from the target library and combined to form the second object features of the target library.

The first object features extracted from the source library and the second object features extracted from the target library are then reduced in dimension. The dimensionality reduction may proceed as follows: the first object features and the second object features are each normalized; then specific object features of a preset dimension are extracted from the first object features and the second object features respectively, serving as the features of the source library and the target library to be mapped. The preset dimension is smaller than the corresponding dimension before extraction. Extracting the specific object features of the preset dimension from the first object features and the second object features respectively may include: from the feature vectors of the first object features and the feature vectors of the second object features, selecting the object features corresponding to the d largest eigenvalues as the respective specific object features.
Step 102: Map the first object features and the second object features to a common feature space, obtaining mapped first object features and mapped second object features.

In some embodiments, to avoid the information loss caused by mapping the source library and the target library directly to a common feature space, the first object features and the second object features are each multiplied by a transition matrix and projected to their respective subspaces, so that the space mapping yields mapped first object features and mapped second object features of the same dimensionality.
Step 103: Determine, from the mapped first object features and the mapped second object features, the transformation matrix at which the difference between the source library and the target library satisfies the preset minimum-difference condition.

In some embodiments, a difference function expressing the distance between the source library and the target library may be built with the mapped first object features and the mapped second object features as factors; by minimizing this difference function, the transformation matrix at which the difference between the source library and the target library satisfies the preset minimum-difference condition is determined. Here, the difference between the product of the mapped first object features and the transformation matrix, on the one hand, and the mapped second object features, on the other, may be taken as the factor, and a function computing the norm of this factor may be built.
Step 104: Convert the mapped first object features with the transformation matrix to obtain third object features.

In some embodiments, the transformation matrix converts the subspace coordinate system of the mapped first object features into the subspace coordinate system of the mapped second object features by aligning the two sets of features, obtaining the third object features.
Step 105: Apply the third object features, together with their labels, to a model that classifies the samples in the target library.

In some embodiments, the third object features and their labels may be used for model training, specifically: take the third object features and their labels as new samples; solve for the update of the model parameters with respect to the new samples; extract object features from the samples in the target library; compute, with the model, the probability that a sample in the target library carries each label; and choose the label whose probability satisfies a probability condition as the label of the sample in the target library.

In some embodiments, nearest-neighbour classification may be performed on the samples in the target library based on a similarity function with the third object features as a factor, obtaining the labels of the samples in the target library.

In some embodiments, when at least two types of model are applied, each model separately determines an output result for a sample in the target library; the output results of the models are compared, and the label of the sample in the target library is determined from the comparison result.

For example, when the output results of all models are identical, the output result of any one model is chosen as the label of the sample in the target library; when the output results of the models differ and include no specific label, the output result of any one model is chosen as the label of the sample in the target library; when the output results of the models differ and include a specific label, the specific label is chosen as the label of the sample in the target library.
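A minimal sketch of this comparison rule, assuming the set of labels that count as "specific" is supplied by the caller (the text does not pin this down):

```python
def combine_outputs(outputs, specific_labels):
    """Combine per-model output results for one target-library sample.

    outputs: list of labels, one per applied model.
    specific_labels: labels treated as 'specific' (an assumed parameter).
    """
    if len(set(outputs)) == 1:
        return outputs[0]                # identical results: take any one
    specific = [o for o in outputs if o in specific_labels]
    if specific:
        return specific[0]               # results differ, a specific label is present
    return outputs[0]                    # results differ, none specific: take any one
```

For instance, with outputs ["happy", "sad"] and "sad" designated as specific, the combined label is "sad".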
The specific implementation of the model processing method of the embodiment of the present invention is described in further detail below.

Fig. 2 gives a schematic flowchart of a specific implementation of the model processing method of the embodiment of the present invention. As shown in Fig. 2, the model processing method of the embodiment of the present invention includes the following steps:

Step 201: Extract first object features from a source library and second object features from a target library, where the first object features and the second object features each characterize the sample set of the database to which they belong.
In the embodiment of the present invention, the example is extracting facial expression features S from the face pictures in the source library and facial expression features T from the face pictures in the target library; the facial expression features S extracted from the source library can be understood as the first object features, and the facial expression features T extracted from the target library as the second object features.

For example, facial expression features of different types may be extracted from the source library and combined to form the facial expression features S of the source library; facial expression features of different types may be extracted from the target library and combined to form the facial expression features T of the target library.

For both the source library and the target library, the following two ways of extracting facial expression features of different types may be used.
1) Gabor features g may be extracted.

For example, the face-alignment tool Intraface may be used for face calibration. Intraface places 49 landmark points on the eyebrows, eyes, nose, and mouth of a person; these 49 points locate the facial features well. A concrete example is shown in Fig. 3. Other tools may also be used to calibrate the face pictures; the number of landmark points differs between tools, and the embodiment of the present invention is not limited to the 49-point Intraface tool.

Features are then sampled at the calibrated points to obtain the facial expression features. For example, after calibration, Gabor features may be extracted at the calibrated regions to discriminate the basic emotions expressed by faces, such as anger, disgust, fear, happiness, sadness, and surprise. Gabor features are features that describe the texture information of an image; the frequency and orientation selectivity of Gabor filters resemble the human visual system, making them particularly suitable for texture representation and discrimination.

The Gabor features Sg and Tg are obtained by convolving the matrices Bs and Bt, which represent the face pictures, with the Gabor filter g:

Bs * g = Sg;  Bt * g = Tg;
The filter is computed as follows:

g(x, y) = exp(-(x'^2 + γ^2 y'^2) / (2σ^2)) · exp(i(2π x'/λ + ψ)),
where x' = x cos θ + y sin θ and y' = -x sin θ + y cos θ;

here λ represents the wavelength, θ the rotation angle, ψ the phase offset, σ the standard deviation of the Gaussian, and γ the spatial aspect ratio, while x and y index the pixels. From the formula above, the Gabor filter can be divided into a real part R and an imaginary part I, which a trigonometric transformation gives as:

R(x, y) = exp(-(x'^2 + γ^2 y'^2) / (2σ^2)) · cos(2π x'/λ + ψ);
I(x, y) = exp(-(x'^2 + γ^2 y'^2) / (2σ^2)) · sin(2π x'/λ + ψ);

The real part of the Gabor filter can be seen as an edge-detection operator for each direction; using this property, the Gabor space can be obtained, as shown in Fig. 4.
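The real part of the Gabor filter can be sketched directly in NumPy; the kernel size and parameter values below are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

def gabor_kernel(ksize, lam, theta, psi, sigma, gamma):
    """Real part of the Gabor filter (the edge-detection operator).

    ksize is assumed odd; lam, theta, psi, sigma, gamma follow the
    symbols λ, θ, ψ, σ, γ named in the text.
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)    # rotate the coordinate frame
    yp = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xp ** 2 + (gamma * yp) ** 2) / (2.0 * sigma ** 2))
    return gauss * np.cos(2.0 * np.pi * xp / lam + psi)

# A small bank over several orientations, as used to build the Gabor space.
bank = [gabor_kernel(31, lam=8.0, theta=t, psi=0.0, sigma=4.0, gamma=0.5)
        for t in np.linspace(0, np.pi, 8, endpoint=False)]
```

Convolving a face picture with each kernel in the bank yields one Gabor response per orientation.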
2) Deep-learning features may be extracted from the whole face. A convolutional neural network can be regarded as a specific type of feedforward network; such models are designed to imitate the behaviour of the visual cortex. Convolutional neural networks (CNNs) perform very well on visual recognition tasks: their special convolutional and pooling layers allow the network to encode certain image properties. Through a convolutional neural network model, the CNN features Sc and Tc can be extracted.

Here, AlexNet, a deep-learning image classification model, is composed of an 11-layer CNN with the architecture shown in Fig. 5.
The Gabor features Sg and the convolutional neural network (CNN) features Sc extracted from the source library are combined in series, and the resulting new feature serves as the object features of the source library, i.e., the facial expression features S; the Gabor features Tg and the CNN features Tc extracted from the target library are combined in series, and the resulting new feature serves as the object features of the target library, i.e., the facial expression features T:

The facial expression features of the face pictures in the source library: S = [Sg, Sc];

The facial expression features of the face pictures in the target library: T = [Tg, Tc];

where Sg and Sc have the same number of rows (the number of pictures), and likewise Tg and Tc.

Traditional features characterize the details of the face well, while deep-learning features better capture the overall properties of the face; combining the two kinds of feature therefore portrays facial expressions more effectively.
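The series combination S = [Sg, Sc] and T = [Tg, Tc] is row-wise concatenation, one combined row per picture; the feature dimensions below are illustrative assumptions:

```python
import numpy as np

# n_s source pictures, n_t target pictures; the per-picture Gabor and CNN
# feature widths (640 and 4096) are illustrative assumptions.
rng = np.random.default_rng(0)
S_g, S_c = rng.normal(size=(100, 640)), rng.normal(size=(100, 4096))
T_g, T_c = rng.normal(size=(80, 640)), rng.normal(size=(80, 4096))

# Series combination: concatenate along the feature axis, keeping one
# combined feature row per picture (same row counts, wider rows).
S = np.hstack([S_g, S_c])
T = np.hstack([T_g, T_c])
```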
Step 202: Reduce the dimensionality of the first object features extracted from the source library and the second object features extracted from the target library.

In the embodiment of the present invention, the first object features extracted from the source library and the second object features extracted from the target library may each be normalized.

For example, take the dimensionality reduction of the facial expression features S extracted from the source library and the facial expression features T extracted from the target library. S and T may each be normalized; since S and T have the same dimension, they both live in a given D-dimensional space. To obtain a more robust representation, and to make the difference between the two image libraries obtainable, S and T can both be converted into D-dimensional normalized vectors (e.g., zero mean and unit standard deviation).
In the embodiment of the present invention, specific object features of a preset dimension may be extracted from the first object features and the second object features respectively, serving as the features of the source library and the target library to be mapped. The preset dimension is smaller than the corresponding dimension before extraction. Extracting the specific object features of the preset dimension from the first object features and the second object features respectively may include: from the feature vectors of the first object features and the feature vectors of the second object features, selecting the object features corresponding to the d largest eigenvalues as the respective specific object features.

For example, kernel principal component analysis (KPCA) may be used to select the d largest eigenvalues of the feature vectors in the source library and the target library, with a Gaussian kernel.
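This normalize-then-KPCA step can be sketched with scikit-learn; the Gaussian (RBF) kernel matches the text, while the feature sizes and the value of d are illustrative assumptions:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
S = rng.normal(size=(100, 500))   # illustrative source-library features
T = rng.normal(size=(80, 500))    # illustrative target-library features

d = 20  # preset dimension, smaller than the original 500

def reduce(features, d):
    """Normalize (zero mean, unit standard deviation), then keep the
    components with the d largest eigenvalues under a Gaussian kernel."""
    normalized = StandardScaler().fit_transform(features)
    return KernelPCA(n_components=d, kernel="rbf").fit_transform(normalized)

S_red, T_red = reduce(S, d), reduce(T, d)
```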
Step 203: Map the first object features and the second object features to a common feature space, obtaining mapped first object features and mapped second object features.

In the embodiment of the present invention, to avoid the information loss caused by mapping the source library and the target library directly to a common feature space, the first object features and the second object features are each multiplied by a transition matrix and projected to their respective subspaces, so that the space mapping yields mapped first object features and mapped second object features of the same dimensionality.

For example, denote the feature vectors serving as the bases of the source library and the target library by Xs and Xt (Xs, Xt ∈ R^(D×d)). Note that Xs and Xt are orthonormal, so Xs^T Xs = Id and Xt^T Xt = Id, where Id is the d×d identity matrix. To avoid the information loss caused by mapping the source library and the target library directly to a common feature space, the facial expression features S and the facial expression features T are multiplied by transition matrices As, At ∈ R^(1×D) respectively and projected to the corresponding subspaces, so that the space mapping yields the mapped facial expression features Xs and Xt of the same dimensionality.
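One common way to obtain the orthonormal bases Xs and Xt and project each library onto its own subspace is plain PCA via the SVD; treating the transition matrices as PCA bases is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(100, 50))   # reduced source-library features (n_s x D)
T = rng.normal(size=(80, 50))    # reduced target-library features (n_t x D)
d = 10

def subspace_basis(features, d):
    """Top-d principal directions as an orthonormal D x d basis."""
    centered = features - features.mean(axis=0)
    # Rows of vt are the eigenvectors of the covariance matrix,
    # ordered by decreasing eigenvalue.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d].T                      # shape (D, d)

Xs, Xt = subspace_basis(S, d), subspace_basis(T, d)
assert np.allclose(Xs.T @ Xs, np.eye(d))   # orthonormal, as the text notes

# Projection of each library onto its own subspace:
S_sub, T_sub = S @ Xs, T @ Xt            # shapes (n_s, d), (n_t, d)
```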
Step 204: Determine, from the mapped first object features and the mapped second object features, the transformation matrix at which the difference between the source library and the target library satisfies the preset minimum-difference condition.

In the embodiment of the present invention, a difference function expressing the distance between the source library and the target library may be built with the mapped first object features and the mapped second object features as factors; by minimizing this difference function, the transformation matrix at which the difference between the source library and the target library satisfies the preset minimum-difference condition is determined. Here, the difference between the product of the mapped first object features and the transformation matrix, on the one hand, and the mapped second object features, on the other, may be taken as the factor, and a function computing the norm of this factor may be built.

For example, with the mapped facial expression features Xs and Xt as factors, a difference function expressing the distance between the facial expression features Xs and the facial expression features Xt may be built, and the corresponding transformation matrix M determined by minimizing the Bregman matrix divergence.

To achieve this, the method of space alignment may be used: the base vectors are aligned by a transformation matrix M from Xs to Xt, and M is learned by minimizing the Bregman divergence:

F(M) = ||Xs M - Xt||_F^2 (4)

M* = argmin_M (F(M)) (5)

where ||·||_F^2 is the Frobenius norm. Since Xs and Xt are generated from the d leading eigenvectors, they are intrinsically regularized; and since the Frobenius norm is invariant under orthonormal operations, formula (4) can be rewritten in the following form:

F(M) = ||Xs^T Xs M - Xs^T Xt||_F^2 = ||M - Xs^T Xt||_F^2 (6)

From formula (6) it can be seen that the optimal M* can be computed as M* = Xs^T Xt. If the source library and the target library are identical, then Xs = Xt, and M* is the identity matrix.
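A sketch of the closed-form alignment matrix, under the subspace-alignment solution M* = Xs^T Xt that the derivation points to; it also checks the identity property for identical libraries:

```python
import numpy as np

def alignment_matrix(Xs, Xt):
    """Optimal transformation M* = Xs^T Xt minimizing ||Xs M - Xt||_F."""
    return Xs.T @ Xt

# An orthonormal D x d basis for illustration (columns of Q).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(50, 10)))   # D = 50, d = 10

# Identical source and target bases yield the identity matrix, as stated.
M = alignment_matrix(Q, Q)
assert np.allclose(M, np.eye(10))

# The aligned source basis Xs M* then lives in the target coordinate system:
Xa = Q @ M
```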
Step 205: Convert the mapped first object features with the transformation matrix to obtain the third object features.

In the embodiment of the present invention, the transformation matrix converts the subspace coordinate system of the mapped first object features into the subspace coordinate system of the mapped second object features by aligning the two sets of features, obtaining the third object features.

For example, the transformation matrix M converts the source image-space coordinate system into the target image-space coordinate system by aligning the mapped facial expression features Xs of the source library with the mapped facial expression features Xt of the target library; the transformed facial expression features are obtained by multiplying the mapped facial expression features Xs by the transformation matrix M. If the mapped features Xs of the source library and the mapped features Xt of the target library are orthogonal, M is insignificant; features Xs that are well aligned with the mapped features Xt of the target library are given high weight.
Step 206:By third characteristics of objects and corresponding label, applied to the mould classified to sample in object library
Type.
In the embodiment of the present invention, model training can be performed with the third characteristics of objects and the corresponding labels, specifically: taking the third characteristics of objects and the corresponding labels as new samples; solving the update components of the model parameters with respect to the new samples; extracting characteristics of objects from samples in the object library, and calculating, based on the model, the probability values of the samples in the object library having different labels; and choosing the label that meets a probability condition as the label of a sample in the object library.
For example, a support vector machine (SVM) model can be chosen, or another type of model such as a neural network can be chosen, for training: the transformed facial expression features and the corresponding labels are taken as new samples and fed into the SVM model for training; the update components of the model parameters with respect to the new samples are solved; facial expression features are extracted from samples in the object library, and the probability values of the samples in the object library having different labels are calculated based on the model; the label meeting the probability condition is chosen as the label of a sample in the object library.
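A minimal sketch of this SVM branch, using scikit-learn as an assumed concrete library (the patent names no implementation) and synthetic data with invented shapes and a six-class label set:

```python
import numpy as np
from sklearn.svm import SVC   # scikit-learn is an assumed choice here

# The aligned third object features and their source labels serve as
# new training samples; the trained model then yields per-label
# probability values for target-library samples.
rng = np.random.default_rng(1)
k = 10                                        # subspace dim (assumed)
third_feats = rng.standard_normal((60, k))    # aligned source features
labels = np.arange(60) % 6                    # 6 expression classes

clf = SVC(probability=True, random_state=0).fit(third_feats, labels)

target_feats = rng.standard_normal((5, k))    # mapped target samples
proba = clf.predict_proba(target_feats)       # probability of each label
pred = proba.argmax(axis=1)                   # label meeting the
                                              # probability condition
```

Taking the argmax of the probabilities is one way to realise "choosing the label meeting the probability condition"; a threshold test would be another.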
In the embodiment of the present invention, nearest-neighbour classification can also be performed on samples in the object library based on a similarity function taking the third characteristics of objects as a factor, to obtain the labels of the samples in the object library.
For example, for the k-nearest-neighbour criterion model applied to classify samples in the object library, comparing As, corresponding to a facial expression feature S in the source library, with At, corresponding to a facial expression feature S in the object library, requires a similarity function Sim(As, At). As and At are projected onto the respective subspaces Xs and Xt and the optimal transformation matrix M* is applied, so that nearest-neighbour classification can be carried out through this similarity function submodel, where the similarity function can be defined through a weighting term that represents the relative contribution of the different components of the coding vector in its original space.
Through the optimal M* obtained by learning, the source images are converted, and the similarity function Sim can be used directly as the metric of the nearest-neighbour classifier (KNN); the KNN here requires no model training and classifies directly. Suppose the source library contains expression categories 1-6, with 10 expression features in each category; then, after any sample in the object library is substituted into the Sim formula, each of the expression categories 1-6 yields a value, and the label corresponding to the maximum value (i.e., the most similar category) is taken as the expression label of that sample, where the value of each expression category is determined by the average of the values obtained after substituting the 10 expression features it contains.
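The per-category averaging above can be sketched as follows. The similarity is assumed here to take the standard subspace-alignment form Sim(As, At) = As · (Xs M Xt^T) · At; the patent's exact weighting term is not reproduced, and all data is synthetic.

```python
import numpy as np

# Nearest-neighbour branch: score a target sample against each of the
# 6 expression categories by its mean similarity to the category's 10
# source features, and take the most similar category as the label.
rng = np.random.default_rng(2)
d, k = 100, 10
Xs, _ = np.linalg.qr(rng.standard_normal((d, k)))
Xt, _ = np.linalg.qr(rng.standard_normal((d, k)))
M = Xs.T @ Xt                            # learned optimal transform M*
A = Xs @ M @ Xt.T                        # fixed (d, d) similarity kernel

def sim(a_s, a_t):
    """Similarity between a raw source and a raw target feature."""
    return a_s @ A @ a_t

# Expression categories 1-6, each with 10 source expression features.
source_feats = rng.standard_normal((6, 10, d))
target_sample = rng.standard_normal(d)

# Each category's value is the average over its 10 features.
scores = np.array([np.mean([sim(f, target_sample) for f in cls])
                   for cls in source_feats])
expression_label = int(scores.argmax()) + 1   # categories numbered 1-6
```

As the text notes, no training is needed here: once M* is learned, Sim is a fixed metric and classification is direct.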
In the embodiment of the present invention, when the applied models include two types (here, suppose the applied models are a k-nearest-neighbour criterion model and an SVM model), the output results for a sample in the object library are determined based on the two models respectively; the output results of the two models are compared, and the label of the sample in the object library is determined according to the comparison result.
For example, when the output results of the above two models are identical, the output result of either model is chosen as the expression label in the object library; when the output results of the two models differ and include no particular expression label, the output result of either model is chosen as the expression label in the object library, where the particular expression labels can include fear, sadness and disgust; when the output results of the two models differ and include a particular label, the particular label is chosen as the expression label in the object library, where sadness, disgust and fear can be chosen successively in order of priority.
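The two-model decision rule just described can be sketched as a small function. The label names and the priority order (sadness, then disgust, then fear) follow the text; the function itself is an illustrative assumption, not the patent's code.

```python
# Decision fusion for the KNN and SVM output labels.
SPECIFIC_PRIORITY = ["sad", "disgust", "fear"]   # priority from the text

def fuse_labels(knn_label: str, svm_label: str) -> str:
    if knn_label == svm_label:
        return knn_label              # identical outputs: take either one
    outputs = {knn_label, svm_label}
    for tag in SPECIFIC_PRIORITY:     # differing outputs that include a
        if tag in outputs:            # particular label: choose it in
            return tag                # priority order
    return knn_label                  # differing, no particular label:
                                      # either model's output may be used
```

For instance, fuse_labels("happy", "fear") returns "fear", while fuse_labels("sad", "disgust") returns "sad" because sadness outranks disgust in the stated priority.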
This judgement mode is used because the expression labels happiness and surprise belong to relatively strong positive emotions and are easily distinguished by the trained model, while the intuitively similar expression labels fear, sadness and disgust achieve only a fairly general degree of discrimination; when recognising the expressions fear, sadness and disgust, they are easily identified as positive emotions such as happiness, so the error probability is larger, and the above mode alleviates the bias of the classification model. With the above transfer method, the accuracy of the test results is significantly higher than the accuracy obtained by directly applying the model trained on the source library to the object library.
To implement the above method, the embodiment of the present invention further provides a model treatment device. As shown in Fig. 6, the device includes an extraction module 601, a mapping module 602, a determining module 603, a conversion module 604 and an application module 605, wherein:
The extraction module 601 is configured to extract the first characteristics of objects from the source library and extract the second characteristics of objects from the object library respectively, where the first characteristics of objects and the second characteristics of objects are respectively sets of samples in their respective databases.
The extraction module 601 is specifically configured to: extract different types of characteristics of objects from the source library and, after combining the extracted different types of characteristics of objects, take the combination as the first characteristics of objects of the source library; and extract different types of characteristics of objects from the object library and, after combining the extracted different types of characteristics of objects, take the combination as the second characteristics of objects of the object library.
The mapping module 602 is configured to map the first characteristics of objects and the second characteristics of objects to the same feature space respectively, correspondingly obtaining the mapped first characteristics of objects and the mapped second characteristics of objects.
The determining module 603 is configured to determine, according to the mapped first characteristics of objects and the mapped second characteristics of objects, the transformation matrix at which the difference between the source library and the object library meets a preset minimum difference condition.
The determining module 603 is specifically configured to: build, with the mapped first characteristics of objects and the mapped second characteristics of objects as factors, a difference function representing the distance between the source library and the object library; and determine, by solving the minimum value of the difference function, the transformation matrix at which the difference between the source library and the object library meets the preset minimum difference condition.
The determining module 603 is specifically configured to: take, as the factor, the difference between the product of the mapped first characteristics of objects and the transformation matrix, and the mapped second characteristics of objects; and build a function that calculates a norm based on the factor.
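Collecting the mapped features into matrices, the factor and norm function just described can be written, under the assumption of a Frobenius norm and orthonormal subspace bases, as:

```latex
F(M) = \left\lVert X_s M - X_t \right\rVert_F^2 , \qquad
M^{*} = \arg\min_{M} F(M) = X_s^{\top} X_t ,
```

where Xs and Xt denote the mapped first and second characteristics of objects and the closed form for M* holds when the columns of Xs are orthonormal (e.g., PCA bases); the patent itself only requires that the minimum of the difference function be solved.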
The conversion module 604 is configured to convert the mapped first characteristics of objects by the transformation matrix to obtain the third characteristics of objects.
The application module 605 is configured to apply the third characteristics of objects and the corresponding labels to the model that classifies samples in the object library.
The application module 605 is specifically configured to: take the third characteristics of objects and the corresponding labels as new samples; solve the update components of the model parameters with respect to the new samples; extract characteristics of objects from samples in the object library, and calculate, based on the model, the probability values of the samples in the object library having different labels; and choose the label meeting a probability condition as the label of a sample in the object library.
The application module 605 is specifically configured to: perform nearest-neighbour classification on samples in the object library based on a similarity function taking the third characteristics of objects as a factor, to obtain the labels of the samples in the object library.
The device further includes a dimensionality reduction module 606, configured to: perform dimensionality reduction on the first characteristics of objects extracted from the source library and the second characteristics of objects extracted from the object library.
The dimensionality reduction module 606 is specifically configured to: normalise the first characteristics of objects extracted from the source library and the second characteristics of objects extracted from the object library respectively; and extract specific characteristics of objects of a preset dimension respectively, the preset dimension being less than the corresponding dimension before extraction.
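The two operations of the dimensionality reduction module can be sketched as follows. PCA via SVD is an assumed concrete choice; the patent only requires normalisation followed by extraction at a preset dimension lower than the original one. Data and shapes are synthetic.

```python
import numpy as np

# Sketch of module 606: normalise, then reduce to a preset dimension.
rng = np.random.default_rng(3)
feats = rng.standard_normal((200, 100))     # 200 samples, 100 dims

# Normalisation: zero mean and unit variance per dimension.
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)

preset_dim = 10                             # preset dimension < 100
# PCA: project onto the top right singular vectors of the data.
_, _, Vt = np.linalg.svd(feats, full_matrices=False)
reduced = feats @ Vt[:preset_dim].T
```

Applying the same reduction to both libraries yields the lower-dimensional first and second characteristics of objects that the mapping module then places in a common feature space.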
The device further includes a comparison module 607, configured to: when the applied models include at least two types, determine, based on each model respectively, the output result for a sample in the object library; and compare the output results of the models and determine the label of the sample in the object library according to the comparison result.
The comparison module 607 is specifically configured to: when the output results of the models are identical, choose the output result of any one model as the label of the sample in the object library; when the output results of the models differ and include no specific label, choose the output result of any one model as the label of the sample in the object library; and when the output results of the models differ and include a specific label, choose the specific label as the label of the sample in the object library.
In practical applications, the extraction module 601, the mapping module 602, the determining module 603, the conversion module 604, the application module 605, the dimensionality reduction module 606 and the comparison module 607 can be implemented by a central processing unit (CPU, Central Processing Unit), a microprocessor (MPU, Micro Processor Unit), a digital signal processor (DSP, Digital Signal Processor), a field programmable gate array (FPGA, Field Programmable Gate Array) or the like located on a computer device.
It should be noted that, when the model treatment device provided in the above embodiment performs model treatment, the division into the above program modules is merely illustrative; in practical applications, the above processing can be allocated to different program modules as needed, that is, the internal structure of the device can be divided into different program modules to complete all or part of the processing described above. In addition, the model treatment device provided in the above embodiment belongs to the same concept as the model treatment method embodiment; for the specific implementation process, refer to the method embodiment, which is not repeated here.
To implement the above model treatment method, the embodiment of the present invention further provides a hardware structure of a model treatment device. The implementation of the model treatment device of the embodiment of the present invention is described below with reference to the drawings; the model treatment device can be implemented as a terminal device, for example a computer device such as a smartphone, a tablet computer or a palmtop computer. The hardware structure of the model treatment device provided by the embodiment of the present invention is further described below; it should be understood that Fig. 7 shows only an exemplary structure of the model treatment device rather than the entire structure, and part or all of the structure shown in Fig. 7 can be implemented as needed.
Referring to Fig. 7, Fig. 7 is a hardware structure diagram of a model treatment device provided by an embodiment of the present invention, which in practical applications can be applied to the aforementioned terminal device running an application program. The model treatment device 700 shown in Fig. 7 includes: at least one processor 701, a memory 702, a user interface 703 and at least one network interface 704. The components in the model treatment device 700 are coupled through a bus system 705. It can be understood that the bus system 705 is used to implement connection communication between these components. In addition to a data bus, the bus system 705 further includes a power bus, a control bus and a status signal bus. However, for the sake of clear explanation, the various buses are all designated as the bus system 705 in Fig. 7.
The user interface 703 can include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, a touch screen or the like.
It can be understood that the memory 702 can be a volatile memory or a non-volatile memory, and can also include both volatile and non-volatile memories.
The memory 702 in the embodiment of the present invention is used to store various types of data to support the operation of the model treatment device 700. Examples of these data include: any computer program used to operate on the model treatment device 700, such as an executable program 7021 and an operating system 7022; a program implementing the model treatment method of the embodiment of the present invention can be contained in the executable program 7021.
The model treatment method disclosed in the embodiment of the present invention can be applied in the processor 701 or implemented by the processor 701. The processor 701 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above model treatment method can be completed by an integrated logic circuit of hardware in the processor 701 or by instructions in the form of software. The above processor 701 can be a general-purpose processor, a DSP, or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. The processor 701 can implement or perform each model treatment method, step and logic diagram provided in the embodiment of the present invention. The general-purpose processor can be a microprocessor or any conventional processor. The steps of the model treatment method provided by the embodiment of the present invention can be directly embodied as being performed and completed by a hardware decoding processor, or performed and completed by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium, the storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the foregoing model treatment method in combination with its hardware.
The embodiment of the present invention further provides a hardware structure of a model treatment device. The model treatment device 700 includes a memory 702, a processor 701 and an executable program 7021 that is stored on the memory 702 and can be run by the processor 701. When running the executable program 7021, the processor 701 implements:
extracting the first characteristics of objects from the source library and extracting the second characteristics of objects from the object library respectively, where the first characteristics of objects and the second characteristics of objects are respectively sets of samples in their respective databases; mapping the first characteristics of objects and the second characteristics of objects to the same feature space respectively, correspondingly obtaining the mapped first characteristics of objects and the mapped second characteristics of objects; determining, according to the mapped first characteristics of objects and the mapped second characteristics of objects, the transformation matrix at which the difference between the source library and the object library meets the preset minimum difference condition; converting the mapped first characteristics of objects by the transformation matrix to obtain the third characteristics of objects; and applying the third characteristics of objects and the corresponding labels to the model that classifies samples in the object library.
In some embodiments, when running the executable program 7021, the processor 701 implements:
extracting different types of characteristics of objects from the source library and, after combining the extracted different types of characteristics of objects, taking the combination as the first characteristics of objects of the source library; and extracting different types of characteristics of objects from the object library and, after combining the extracted different types of characteristics of objects, taking the combination as the second characteristics of objects of the object library.
In some embodiments, when running the executable program 7021, the processor 701 implements:
performing dimensionality reduction on the first characteristics of objects extracted from the source library and the second characteristics of objects extracted from the object library.
In some embodiments, when running the executable program 7021, the processor 701 implements:
normalising the first characteristics of objects extracted from the source library and the second characteristics of objects extracted from the object library respectively; and extracting specific characteristics of objects of a preset dimension respectively, the preset dimension being less than the corresponding dimension before extraction.
In some embodiments, when running the executable program 7021, the processor 701 implements:
building, with the mapped first characteristics of objects and the mapped second characteristics of objects as factors, a difference function representing the distance between the source library and the object library; and determining, by solving the minimum value of the difference function, the transformation matrix at which the difference between the source library and the object library meets the preset minimum difference condition.
In some embodiments, when running the executable program 7021, the processor 701 implements:
taking, as the factor, the difference between the product of the mapped first characteristics of objects and the transformation matrix, and the mapped second characteristics of objects; and building a function that calculates a norm based on the factor.
In some embodiments, when running the executable program 7021, the processor 701 implements:
taking the third characteristics of objects and the corresponding labels as new samples; solving the update components of the model parameters with respect to the new samples; extracting characteristics of objects from samples in the object library, and calculating, based on the model, the probability values of the samples in the object library having different labels; and choosing the label meeting a probability condition as the label of a sample in the object library.
In some embodiments, when running the executable program 7021, the processor 701 implements:
performing nearest-neighbour classification on samples in the object library based on a similarity function taking the third characteristics of objects as a factor, to obtain the labels of the samples in the object library.
In some embodiments, when running the executable program 7021, the processor 701 implements:
when the applied models include at least two types, determining, based on each model respectively, the output result for a sample in the object library; and comparing the output results of the models and determining the label of the sample in the object library according to the comparison result.
In some embodiments, when running the executable program 7021, the processor 701 implements:
when the output results of the models are identical, choosing the output result of any one model as the label of the sample in the object library; when the output results of the models differ and include no specific label, choosing the output result of any one model as the label of the sample in the object library; and when the output results of the models differ and include a specific label, choosing the specific label as the label of the sample in the object library.
The embodiment of the present invention further provides a storage medium; the storage medium can be a storage medium such as an optical disc, a flash memory or a magnetic disk, and can be chosen as a non-transitory storage medium. The storage medium stores an executable program 7021, and when the executable program 7021 is executed by the processor 701, the following is implemented:
extracting the first characteristics of objects from the source library and extracting the second characteristics of objects from the object library respectively, where the first characteristics of objects and the second characteristics of objects are respectively sets of samples in their respective databases; mapping the first characteristics of objects and the second characteristics of objects to the same feature space respectively, correspondingly obtaining the mapped first characteristics of objects and the mapped second characteristics of objects; determining, according to the mapped first characteristics of objects and the mapped second characteristics of objects, the transformation matrix at which the difference between the source library and the object library meets the preset minimum difference condition; converting the mapped first characteristics of objects by the transformation matrix to obtain the third characteristics of objects; and applying the third characteristics of objects and the corresponding labels to the model that classifies samples in the object library.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
extracting different types of characteristics of objects from the source library and, after combining the extracted different types of characteristics of objects, taking the combination as the first characteristics of objects of the source library; and extracting different types of characteristics of objects from the object library and, after combining the extracted different types of characteristics of objects, taking the combination as the second characteristics of objects of the object library.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
performing dimensionality reduction on the first characteristics of objects extracted from the source library and the second characteristics of objects extracted from the object library.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
normalising the first characteristics of objects extracted from the source library and the second characteristics of objects extracted from the object library respectively; and extracting specific characteristics of objects of a preset dimension respectively, the preset dimension being less than the corresponding dimension before extraction.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
building, with the mapped first characteristics of objects and the mapped second characteristics of objects as factors, a difference function representing the distance between the source library and the object library; and determining, by solving the minimum value of the difference function, the transformation matrix at which the difference between the source library and the object library meets the preset minimum difference condition.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
taking, as the factor, the difference between the product of the mapped first characteristics of objects and the transformation matrix, and the mapped second characteristics of objects; and building a function that calculates a norm based on the factor.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
taking the third characteristics of objects and the corresponding labels as new samples; solving the update components of the model parameters with respect to the new samples; extracting characteristics of objects from samples in the object library, and calculating, based on the model, the probability values of the samples in the object library having different labels; and choosing the label meeting a probability condition as the label of a sample in the object library.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
performing nearest-neighbour classification on samples in the object library based on a similarity function taking the third characteristics of objects as a factor, to obtain the labels of the samples in the object library.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
when the applied models include at least two types, determining, based on each model respectively, the output result for a sample in the object library; and comparing the output results of the models and determining the label of the sample in the object library according to the comparison result.
In some embodiments, when the executable program 7021 is executed by the processor 701, the following is implemented:
when the output results of the models are identical, choosing the output result of any one model as the label of the sample in the object library; when the output results of the models differ and include no specific label, choosing the output result of any one model as the label of the sample in the object library; and when the output results of the models differ and include a specific label, choosing the specific label as the label of the sample in the object library.
In conclusion model treatment method, apparatus and storage medium that the embodiment of the present invention is provided, respectively from the library of source
It extracts the first characteristics of objects and the second characteristics of objects is extracted from object library, wherein, first characteristics of objects and described
Two characteristics of objects are respectively sample set in respective affiliated database;By first characteristics of objects and second characteristics of objects
Same feature space is respectively mapped to, corresponds to the first characteristics of objects after being mapped and the second characteristics of objects after mapping;Root
According to the first characteristics of objects after the mapping and the second characteristics of objects after the mapping, the source library and the object library are determined
Between transformation matrix of differentiation when meeting default minimum difference condition;By the transformation matrix by after the mapping
An object feature is converted, and obtains third characteristics of objects;By the third characteristics of objects and corresponding label, applied to pair
The model that sample is classified in the object library.In this way, the model of suitable object library has been trained by source library, therefore, by
The model use that source library trains can obtain good recognition result when object library, so as to improve the accuracy of model.
Those skilled in the art should understand that the embodiments of the present invention can be provided as a method, a system or an executable program product. Therefore, the present invention can take the form of a hardware embodiment, a software embodiment or an embodiment combining software and hardware. Moreover, the present invention can take the form of an executable program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage and optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the executable program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by executable program instructions. These executable program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the computer or the processor of the other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These executable program instructions can also be stored in a computer-readable memory that can guide a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufacture including an instruction device, the instruction device implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These executable program instructions can also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing; the instructions executed on the computer or the other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (22)
- 1. A model treatment method, characterized in that the method includes: extracting first characteristics of objects from a source library and extracting second characteristics of objects from an object library respectively, where the first characteristics of objects and the second characteristics of objects are respectively sets of samples in their respective databases; mapping the first characteristics of objects and the second characteristics of objects to a same feature space respectively, correspondingly obtaining mapped first characteristics of objects and mapped second characteristics of objects; determining, according to the mapped first characteristics of objects and the mapped second characteristics of objects, a transformation matrix at which a difference between the source library and the object library meets a preset minimum difference condition; converting the mapped first characteristics of objects by the transformation matrix to obtain third characteristics of objects; and applying the third characteristics of objects and corresponding labels to a model that classifies samples in the object library.
- 2. The method according to claim 1, characterized in that extracting the first characteristics of objects from the source library and extracting the second characteristics of objects from the object library respectively includes: extracting different types of characteristics of objects from the source library and, after combining the extracted different types of characteristics of objects, taking the combination as the first characteristics of objects of the source library; and extracting different types of characteristics of objects from the object library and, after combining the extracted different types of characteristics of objects, taking the combination as the second characteristics of objects of the object library.
- 3. The method according to claim 1, characterized in that the method further comprises:
before the first object features and the second object features are mapped to the same feature space, performing dimensionality reduction on the first object features extracted from the source library and the second object features extracted from the target library.
- 4. The method according to claim 3, characterized in that performing dimensionality reduction on the first object features extracted from the source library and the second object features extracted from the target library comprises:
normalizing, respectively, the first object features extracted from the source library and the second object features extracted from the target library; and
extracting, from each, specific object features of a preset dimension, the preset dimension being lower than the corresponding dimension before extraction.
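The normalize-then-reduce step of claim 4 can be sketched as follows. This is an illustrative reading, not the patented implementation: PCA is one common way to extract a preset, lower number of feature dimensions, and the function name `reduce_features` is hypothetical. NumPy is assumed.

```python
import numpy as np

def reduce_features(features, preset_dim):
    """Normalize object features, then keep `preset_dim` dimensions via PCA.

    features: (n_samples, n_dims) array of extracted object features.
    preset_dim: target dimensionality, lower than n_dims.
    """
    # per-dimension z-score normalization
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8      # avoid division by zero
    normed = (features - mean) / std
    # principal components from the SVD of the normalized matrix
    _, _, vt = np.linalg.svd(normed, full_matrices=False)
    # project onto the top `preset_dim` components
    return normed @ vt[:preset_dim].T
```

The same routine would be applied to the source-library and target-library features separately, as the claim's "respectively" suggests.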
- 5. The method according to claim 1, characterized in that determining, according to the mapped first object features and the mapped second object features, the transformation matrix for which the difference between the source library and the target library satisfies the preset minimum-difference condition comprises:
constructing a difference function representing the distance between the source library and the target library, with the mapped first object features and the mapped second object features as factors; and
determining the transformation matrix for which the difference between the source library and the target library satisfies the preset minimum-difference condition by solving for the minimum value of the difference function.
- 6. The method according to claim 5, characterized in that constructing the difference function representing the distance between the source library and the target library comprises:
taking, as a factor, the difference between the product of the mapped first object features with the transformation matrix and the mapped second object features; and
constructing a function that computes a norm based on that factor.
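The difference function of claims 5 and 6 (a norm of "mapped source features times the transformation matrix, minus mapped target features") matches the subspace-alignment formulation in the cited non-patent literature (Fernando et al., "Unsupervised Visual Domain Adaptation Using Subspace Alignment"): minimizing ||Bs·M − Bt||_F over M, where Bs and Bt are orthonormal bases of the mapped source and target features, has the closed-form solution M = Bsᵀ·Bt. A minimal sketch under that assumption (function names are hypothetical; NumPy assumed):

```python
import numpy as np

def alignment_matrix(Bs, Bt):
    """Transformation matrix M minimizing ||Bs @ M - Bt||_F.

    Bs, Bt: orthonormal bases (e.g. top PCA components, shape (d, k)) of
    the mapped source and target features. With orthonormal Bs, the
    least-squares minimizer is M = Bs.T @ Bt in closed form.
    """
    return Bs.T @ Bt

def transform_source(source_feats, Bs, Bt):
    """Project source features into the target-aligned subspace,
    yielding the 'third object features' of claim 1."""
    return source_feats @ Bs @ alignment_matrix(Bs, Bt)
```

Because M is the minimizer, the aligned source basis Bs·M is never farther from Bt (in Frobenius norm) than the unaligned Bs.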
- 7. The method according to claim 1, characterized in that applying the third object features and the corresponding labels to the model that classifies samples in the target library comprises:
taking the third object features and the corresponding labels as new samples;
solving for the update components of the model parameters with respect to the new samples;
extracting object features from the samples in the target library, and computing, based on the model, the probability values of the samples in the target library having different labels; and
selecting the label that satisfies a probability condition as the label of a sample in the target library.
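The final two steps of claim 7 — computing per-label probability values for each target-library sample and selecting the label that satisfies a probability condition — can be sketched with a softmax over linear scores. The linear model, the "most probable label" condition, and the function name are illustrative assumptions, not details from the claims:

```python
import numpy as np

def labels_by_probability(weights, target_feats):
    """Probability of each label for each target-library sample, with
    the most probable label selected per sample.

    weights: (n_dims, n_labels) parameters of an assumed linear model.
    target_feats: (n_samples, n_dims) object features of target samples.
    """
    scores = target_feats @ weights                 # (n_samples, n_labels)
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)       # softmax probabilities
    return probs.argmax(axis=1)                     # label per sample
```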
- 8. The method according to claim 1, characterized in that applying the third object features and the corresponding labels to the model that classifies samples in the target library comprises:
performing nearest-neighbor classification on the samples in the target library based on a similarity function with the third object features as a factor, to obtain the labels of the samples in the target library.
- 9. The method according to claim 7 or 8, characterized in that the method further comprises:
when the applied models comprise at least two types,
determining the output result for a sample in the target library based on each model respectively; and
comparing the output results of the models, and determining the label of the sample in the target library according to the comparison result.
- 10. The method according to claim 9, characterized in that comparing the output results of the models and determining the label of the sample in the target library according to the comparison result comprises:
when the output results of the models are identical, selecting the output result of any one model as the label of the sample in the target library;
when the output results of the models differ and do not include a specific label, selecting the output result of any one model as the label of the sample in the target library; and
when the output results of the models differ and include the specific label, selecting the specific label as the label of the sample in the target library.
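The three-branch comparison rule of claims 9 and 10 can be sketched directly (an illustrative reading; in practice the "specific label" might be something like an "uncertain" or safety-critical class that should win any disagreement):

```python
def combine_outputs(outputs, specific_label):
    """Merge per-model output results for one target-library sample.

    outputs: list of labels, one per applied model (at least two).
    specific_label: the label that takes priority on disagreement.
    """
    first = outputs[0]
    if all(o == first for o in outputs):
        return first                    # all models agree: take any one
    if specific_label in outputs:
        return specific_label           # disagreement including it: prefer it
    return first                        # disagreement without it: take any one
```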
- 11. A model processing apparatus, characterized in that the apparatus comprises: an extraction module, a mapping module, a determination module, a conversion module, and an application module; wherein
the extraction module is configured to extract first object features from a source library and second object features from a target library, wherein the first object features and the second object features each characterize the sample set of the database to which they belong;
the mapping module is configured to map the first object features and the second object features to a same feature space, to obtain corresponding mapped first object features and mapped second object features;
the determination module is configured to determine, according to the mapped first object features and the mapped second object features, a transformation matrix for which the difference between the source library and the target library satisfies a preset minimum-difference condition;
the conversion module is configured to transform the mapped first object features by the transformation matrix to obtain third object features; and
the application module is configured to apply the third object features and their corresponding labels to a model that classifies samples in the target library.
- 12. The apparatus according to claim 11, characterized in that the extraction module is specifically configured to:
extract object features of different types from the source library and combine them to serve as the first object features of the source library; and
extract object features of different types from the target library and combine them to serve as the second object features of the target library.
- 13. The apparatus according to claim 11, characterized in that the apparatus further comprises a dimensionality-reduction module configured to:
perform dimensionality reduction on the first object features extracted from the source library and the second object features extracted from the target library.
- 14. The apparatus according to claim 13, characterized in that the dimensionality-reduction module is specifically configured to:
normalize, respectively, the first object features extracted from the source library and the second object features extracted from the target library; and
extract, from each, specific object features of a preset dimension, the preset dimension being lower than the corresponding dimension before extraction.
- 15. The apparatus according to claim 11, characterized in that the determination module is specifically configured to:
construct a difference function representing the distance between the source library and the target library, with the mapped first object features and the mapped second object features as factors; and
determine the transformation matrix for which the difference between the source library and the target library satisfies the preset minimum-difference condition by solving for the minimum value of the difference function.
- 16. The apparatus according to claim 15, characterized in that the determination module is specifically configured to:
take, as a factor, the difference between the product of the mapped first object features with the transformation matrix and the mapped second object features; and
construct a function that computes a norm based on that factor.
- 17. The apparatus according to claim 11, characterized in that the application module is specifically configured to:
take the third object features and the corresponding labels as new samples;
solve for the update components of the model parameters with respect to the new samples;
extract object features from the samples in the target library, and compute, based on the model, the probability values of the samples in the target library having different labels; and
select the label that satisfies a probability condition as the label of a sample in the target library.
- 18. The apparatus according to claim 11, characterized in that the application module is specifically configured to:
perform nearest-neighbor classification on the samples in the target library based on a similarity function with the third object features as a factor, to obtain the labels of the samples in the target library.
- 19. The apparatus according to claim 17 or 18, characterized in that the apparatus further comprises a comparison module configured to:
when the applied models comprise at least two types,
determine the output result for a sample in the target library based on each model respectively; and
compare the output results of the models, and determine the label of the sample in the target library according to the comparison result.
- 20. The apparatus according to claim 19, characterized in that the comparison module is specifically configured to:
when the output results of the models are identical, select the output result of any one model as the label of the sample in the target library;
when the output results of the models differ and do not include a specific label, select the output result of any one model as the label of the sample in the target library; and
when the output results of the models differ and include the specific label, select the specific label as the label of the sample in the target library.
- 21. A storage medium having an executable program stored thereon, characterized in that the executable program, when executed by a processor, implements the model processing method according to any one of claims 1 to 10.
- 22. A model processing apparatus, comprising a memory, a processor, and an executable program stored in the memory and runnable by the processor, characterized in that the processor, when running the executable program, performs the model processing method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711475434.7A CN108229552B (en) | 2017-12-29 | 2017-12-29 | Model processing method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108229552A true CN108229552A (en) | 2018-06-29 |
CN108229552B CN108229552B (en) | 2021-07-09 |
Family
ID=62646962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711475434.7A Active CN108229552B (en) | 2017-12-29 | 2017-12-29 | Model processing method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108229552B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62184586A (en) * | 1986-02-07 | 1987-08-12 | Matsushita Electric Ind Co Ltd | Character recognizing device |
CN103093235A (en) * | 2012-12-30 | 2013-05-08 | 北京工业大学 | Handwriting digital recognition method based on improved distance core principal component analysis |
CN104574638A (en) * | 2014-09-30 | 2015-04-29 | 上海层峰金融设备有限公司 | Method for identifying RMB |
CN104616319A (en) * | 2015-01-28 | 2015-05-13 | 南京信息工程大学 | Multi-feature selection target tracking method based on support vector machine |
CN104700089A (en) * | 2015-03-24 | 2015-06-10 | 江南大学 | Face identification method based on Gabor wavelet and SB2DLPP |
CN104751191A (en) * | 2015-04-23 | 2015-07-01 | 重庆大学 | Sparse self-adaptive semi-supervised manifold learning hyperspectral image classification method |
CN105069447A (en) * | 2015-09-23 | 2015-11-18 | 河北工业大学 | Facial expression identification method |
CN105139039A (en) * | 2015-09-29 | 2015-12-09 | 河北工业大学 | Method for recognizing human face micro-expressions in video sequence |
CN105512331A (en) * | 2015-12-28 | 2016-04-20 | 海信集团有限公司 | Video recommending method and device |
CN106548145A (en) * | 2016-10-31 | 2017-03-29 | 北京小米移动软件有限公司 | Image-recognizing method and device |
CN106599854A (en) * | 2016-12-19 | 2017-04-26 | 河北工业大学 | Method for automatically recognizing face expressions based on multi-characteristic fusion |
CN106803098A (en) * | 2016-12-28 | 2017-06-06 | 南京邮电大学 | A kind of three mode emotion identification methods based on voice, expression and attitude |
CN107292246A (en) * | 2017-06-05 | 2017-10-24 | 河海大学 | Infrared human body target identification method based on HOG PCA and transfer learning |
CN107346434A (en) * | 2017-05-03 | 2017-11-14 | 上海大学 | A kind of plant pest detection method based on multiple features and SVMs |
Non-Patent Citations (4)
Title |
---|
BASURA FERNANDO: "Unsupervised Visual Domain Adaptation Using Subspace Alignment", 《IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION》 * |
LIJUN YIN: "A 3D facial expression database for facial behavior research", 《7TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FGR06)》 * |
WU Songsong et al.: "Unsupervised Domain Adaptation Based on Kernel Subspace Alignment", Journal of Nanjing University of Posts and Telecommunications * |
DU Xing: "Research on Face Recognition Methods Inspired by Visual Perception Mechanisms", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109214421A (en) * | 2018-07-27 | 2019-01-15 | 阿里巴巴集团控股有限公司 | A kind of model training method, device and computer equipment |
CN109214421B (en) * | 2018-07-27 | 2022-01-28 | 创新先进技术有限公司 | Model training method and device and computer equipment |
CN110097873A (en) * | 2019-05-14 | 2019-08-06 | 苏州沃柯雷克智能系统有限公司 | A kind of method, apparatus, equipment and storage medium confirming the degree of lip-rounding by sound |
CN110097873B (en) * | 2019-05-14 | 2021-08-17 | 苏州沃柯雷克智能系统有限公司 | Method, device, equipment and storage medium for confirming mouth shape through sound |
Also Published As
Publication number | Publication date |
---|---|
CN108229552B (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | Combining data-driven and model-driven methods for robust facial landmark detection | |
Bose et al. | Efficient inception V2 based deep convolutional neural network for real‐time hand action recognition | |
CN109934173A (en) | Expression recognition method, device and electronic equipment | |
US10339369B2 (en) | Facial expression recognition using relations determined by class-to-class comparisons | |
CN110084313A (en) | A method of generating object detection model | |
CN109145759A (en) | Vehicle attribute recognition methods, device, server and storage medium | |
Cheong et al. | Defects and components recognition in printed circuit boards using convolutional neural network | |
Patten et al. | Dgcm-net: dense geometrical correspondence matching network for incremental experience-based robotic grasping | |
Shang et al. | Facilitating efficient mars terrain image classification with fuzzy-rough feature selection | |
CN112132739A (en) | 3D reconstruction and human face posture normalization method, device, storage medium and equipment | |
CN113705297A (en) | Training method and device for detection model, computer equipment and storage medium | |
Ghazaei et al. | Dealing with ambiguity in robotic grasping via multiple predictions | |
Zafeiriou et al. | Discriminant graph structures for facial expression recognition | |
Hu et al. | A grasps-generation-and-selection convolutional neural network for a digital twin of intelligent robotic grasping | |
WO2021010342A1 (en) | Action recognition device, action recognition method, and action recognition program | |
Zhai et al. | Face verification across aging based on deep convolutional networks and local binary patterns | |
CN108229552A (en) | A kind of model treatment method, apparatus and storage medium | |
CN111126358A (en) | Face detection method, face detection device, storage medium and equipment | |
CN113822144A (en) | Target detection method and device, computer equipment and storage medium | |
Abid et al. | Dynamic hand gesture recognition from Bag-of-Features and local part model | |
Zhu et al. | One-shot texture retrieval using global grouping metric | |
Križaj et al. | Simultaneous multi-descent regression and feature learning for facial landmarking in depth images | |
CN113139540B (en) | Backboard detection method and equipment | |
US20220092448A1 (en) | Method and system for providing annotation information for target data through hint-based machine learning model | |
Liu et al. | Research on vision of intelligent car based on broad learning system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||