CN112966585A - Face image relative relationship verification method for relieving information island influence - Google Patents
- Publication number
- CN112966585A (application CN202110227220.8A)
- Authority
- CN
- China
- Prior art keywords
- relativity
- relatives
- relieving
- vector
- verifying
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention relates to the technical field of kinship verification and discloses a face image kinship (relative relationship) verification method for alleviating the influence of information islands, which comprises the following steps. S1: learning of the spatial structure. First, a face image representation method is applied to the q-th (q = 1, 2, ..., Q) kinship class and facial features are extracted, yielding M sample pairs under that kinship class: D_q = {(p_qi, c_qi, y_i) | i = 1, 2, ..., M}. By exploiting the correlation information among different kinship types, the method avoids the information-island problem in the kinship verification process and gives the learner a greater chance to use more discriminative information to improve generalization performance. Because the support vector data description (SVDD) model can be regarded as a one-class classifier, the method can also alleviate the influence of label noise on the classifier, giving the learner a greater chance to learn the similarity characteristics between kinship samples from cleaner data.
Description
Technical Field
The invention relates to the technical field of kinship verification, and in particular to a face image kinship verification method for alleviating the influence of information islands.
Background
The objective of kinship verification based on face images is to learn a classifier that judges whether a given pair of face images has a kinship relation; kinship here refers to the relation between parents and children. The kinship information learned by such a classification model can be applied to face recognition, social media analysis, face labeling, image tracking, and the like. Current methods for kinship verification on face images fall into two groups: feature-representation-based methods and model-learning-based methods. Feature-representation-based methods aim to extract a stable feature representation from a face image that carries the discriminative information for deciding whether the given face images are kin. Proposed features include local or global texture features, gradient orientation pyramids, dense matching, gated autoencoders, prototype-based discriminative feature learning, spatial-pyramid-based feature representations, dynamic spatio-temporal features, attributes, fusion of multiple different features, soft-voting-based feature selection, and so on. Model-learning-based methods follow an entirely different idea: they search for a discriminative space by means of statistical analysis or machine learning. Two main approaches currently exist. The first uses transfer learning: based on the observation that parents' facial appearance at a young age is more similar to their children's, it relies on young-parent images as a bridge to reduce the difference in facial appearance between old-parent images and young-child images.
The second uses metric learning: the goal is to learn a similarity metric from given kinship data so that, in the new metric space, the distances between non-kin sample pairs are larger than those between kin pairs. Generally speaking, existing kinship verification methods achieve good generalization performance on face image kinship data sets such as KinFaceW, but all of them process data of one fixed kinship type (such as the father-son relation) in isolation. This information-island way of solving the problem focuses on a single kinship type only, so it cannot explore and exploit the correlation among different kinship types, and the classifier cannot use the extra discriminative information. Against this background, the invention discards the conventional idea of processing a fixed kinship type as an information island and studies how to mine and exploit the correlation among different kinship types to improve the generalization performance of metric learning.
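The metric-learning idea above can be sketched numerically. The snippet below is an illustrative example only: the metric matrix M and all feature vectors are synthetic stand-ins, not the patent's learned metric. It measures pair distances under a positive semi-definite metric and checks that a kin pair ends up closer than a non-kin pair.

```python
import numpy as np

def metric_distance(x, y, M):
    """Squared distance (x - y)^T M (x - y) under a PSD metric matrix M."""
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(0)
dim = 4
# Toy "learned" metric: identity plus a rank-1 term emphasising one direction.
v = rng.normal(size=dim)
M = np.eye(dim) + np.outer(v, v)

parent = rng.normal(size=dim)
kin_child = parent + 0.1 * rng.normal(size=dim)   # kin: similar appearance features
nonkin_child = rng.normal(size=dim)               # non-kin: unrelated features

d_kin = metric_distance(parent, kin_child, M)
d_nonkin = metric_distance(parent, nonkin_child, M)
print(d_kin < d_nonkin)
```

Under a well-learned metric the kin-pair distance should come out smaller, which is exactly the ordering constraint the text describes.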
Disclosure of Invention
The invention aims to provide a face image kinship verification method that alleviates the influence of information islands and solves the problems raised in the background above.
In order to achieve this purpose, the invention provides the following technical scheme: a face image kinship verification method for alleviating the influence of information islands, comprising the following steps:
s1: learning of spatial structure
Firstly, a certain face image representation method is adopted for the Q (Q is 1,2, Q) th relativity face image, the operation of extracting the face features is carried out, and M pairs of samples D under the relativity can be obtainedq={(pqi,cqi,yi)|i=1,2,...,M};
For each kind of relativity relation sample, using algorithm I to mine its space structure, inputting { DqAnd (1, 2, a., Q), and then outputting the sphere center and the radius of the support vector data and the screened sample setFirst, for each DqDigging its space structure, taking out all the positive example sample pairs to form set PqThen, the absolute value of the difference of the feature vectors is used as the feature of all the image pairs, and the dual model of the support vector data model is solved to obtain the center a of the sphereqRadius RqAnd a sample set covered by SVDD spheresFinally, carrying out sample screening and output operation;
s2: mining of auxiliary discrimination information
Firstly, for each kind of relatives Q (Q is 1,2.. multidot.q.) to be learned, the algorithm two is used to calculate the relativity with other kinds of relatives, and a is inputq、RqAndoutputting a correlation vector UqAnd a category number vector Iq, and finally, integrating auxiliary information, specifically, a setting parameter T for indicating and indicating the current categoryThe category sequence number vector I is used for learning T relatives with larger correlation among the relativesqThe T relatives sample sets with the top rank in the middle are merged intoThen use the vector UqTaking out the middle and front T values for normalization processing to obtain a new correlation weight vector Kq=[β1,β2,...,βT];
S3: learning of classifiers
Firstly, a metric matrix is learned, different relatives samples are respectively regarded as a single learning task, and the T learning tasks can be respectively used as a bilinear function ft:
S4: optimization
Preferably, p_qi, c_qi ∈ R^d in step S1 are the feature vectors extracted from the parent and child face images of the i-th kinship pair, and y_i ∈ {+1, −1} indicates whether the corresponding pair has a kinship relation.
Preferably, algorithm one in step S1 is based on learning the kinship spatial structure with the support vector data description model. The sample screening operation in step S1 takes out all negative sample pairs in D_q to form a set N_q; using the absolute value of the feature-vector difference as the feature of each image pair, the distance between every negative image pair and the sphere center a_q is computed, and the negative pairs farthest from the center are selected to form a set; this set is merged with the set covered by the SVDD sphere as a new training sample set. The output operation in step S1 outputs the sphere center a_q, the radius R_q, and the screened sample set.
Preferably, algorithm two in step S2 computes the correlation, with the following steps:
a1: for i = 1 : Q;
a2: for j = 1 : Q;
a3: compute C(i, j) = ||a_i − a_j||; finally, the correlation matrix C is obtained.
Preferably, the correlation vector in step S2 is obtained as follows:
b1: sort the q-th row C(q, :) of C in ascending order;
b2: take out the sorted values and store them in a vector U_q;
b3: store, in order, the class indices corresponding to the values in U_q to obtain the class index vector I_q.
Preferably, W_t* ∈ R^(d×d) in step S3 is the metric transformation matrix to be learned, with W_t* = W_0 + W_t, where W_0 characterizes the features shared by all kinship samples, and W_t is exclusive to one kinship class and characterizes its genetic features.
Preferably, L(·,·) in step S4 is an empirical loss function implemented with a logistic regression model, and R(W_0, W_t) is a regularization term that controls the influence of the shared and exclusive metric transformation matrices on the model.
The invention provides a face image kinship verification method that alleviates the influence of information islands, with the following beneficial effects:
By exploiting the correlation information among different kinship types, the method avoids the information-island problem in the kinship verification process and gives the learner a greater chance to use more discriminative information to improve generalization performance. Because the support vector data description model can be regarded as a one-class classifier, the influence of label noise on the classifier in the kinship verification process can be alleviated, and the learner has a greater chance to learn the similarity characteristics between kinship samples from cleaner data.
Drawings
FIG. 1 is a schematic view of the flow structure of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
As shown in FIG. 1, the present invention provides the following technical solution: a face image kinship verification method for alleviating the influence of information islands, comprising the following steps:
s1: learning of spatial structure
Firstly, a certain face image representation method is adopted for the Q (Q is 1,2, Q) th relativity face image, the operation of extracting the face features is carried out, and M pairs of samples D under the relativity can be obtainedq={(pqi,cqi,yi)|i=1,2,...,M},pqi,cqi∈RdIs a feature vector extracted from face images of parents and children of the ith pair of relatives, and yiE { +1, -1} represents whether the corresponding sample has a relationship;
for each kind of relativity relation sample, using algorithm I to mine its space structure, inputting { DqAnd (1, 2, a., Q), and then outputting the sphere center and the radius of the support vector data and the screened sample setFirst, for each DqDigging its space structure, taking out all the positive example sample pairs to form set PqThen, the absolute value of the difference of the feature vectors is used as the feature of all the image pairs, and the dual model of the support vector data model is solved to obtain the center a of the sphereqRadius RqAnd a sample set covered by SVDD spheresFinally, sample screening and output operation are carried out, the algorithm is based on learning of the relationship space structure of the support vector data model, and D is required to be taken out in sample screening operationqAll negative example sample pairs in (A) form a set NqFinally, all negative example image pairs and the spherical center a are calculated by using the absolute value of the difference of the feature vectors as the features of all the image pairsqIs selected before the distance is largerSamples, forming a setMergingAndas a new training sample setThe output operation requires the output of the center of sphere aqRadius RqAnd the screened sample set
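A minimal numerical sketch of step S1 follows. It is not the patent's algorithm one: instead of solving the SVDD dual (a quadratic program), it approximates the sphere center a_q by the centroid of the positive-pair features and the radius R_q by a distance quantile; the screening of far-away negative pairs then follows the step's description. All function names here are hypothetical.

```python
import numpy as np

def pair_feature(p, c):
    # |p - c|: absolute difference of the two face feature vectors, as in step S1.
    return np.abs(p - c)

def svdd_like_sphere(pos_feats, quantile=0.9):
    """Crude stand-in for solving the SVDD dual: centroid as sphere center a_q,
    a distance quantile as radius R_q. A real implementation would solve the
    SVDD quadratic program with a QP solver."""
    a_q = pos_feats.mean(axis=0)
    dists = np.linalg.norm(pos_feats - a_q, axis=1)
    R_q = np.quantile(dists, quantile)
    covered = pos_feats[dists <= R_q]          # positive pairs inside the sphere
    return a_q, R_q, covered

def screen_negatives(neg_feats, a_q, keep):
    """Keep the `keep` negative pairs farthest from the sphere center a_q."""
    dists = np.linalg.norm(neg_feats - a_q, axis=1)
    idx = np.argsort(dists)[::-1][:keep]
    return neg_feats[idx]

rng = np.random.default_rng(1)
pos = pair_feature(rng.normal(size=(50, 8)), rng.normal(size=(50, 8)))
neg = pair_feature(rng.normal(size=(50, 8)), rng.normal(size=(50, 8)))

a_q, R_q, covered = svdd_like_sphere(pos)
hard_negs = screen_negatives(neg, a_q, keep=10)
train_set = np.vstack([covered, hard_negs])    # screened training sample set
print(train_set.shape)
```

The screened set merges the sphere-covered positives with the most distant negatives, mirroring the merge described in the sample screening operation above.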
S2: mining of auxiliary discrimination information
First, for each kinship class q (q = 1, 2, ..., Q) to be learned, algorithm two is used to compute its correlation with the other kinship classes: the inputs are a_q, R_q and the screened sample sets, and the outputs are a correlation vector U_q and a class index vector I_q. A parameter T is set, denoting the T kinship classes most correlated with the current class to be learned; the T top-ranked kinship sample sets in the class index vector I_q are merged; the first T values of the vector U_q are then taken out and normalized to obtain a new correlation weight vector K_q = [β_1, β_2, ..., β_T]. Algorithm two computes the correlation with the following steps:
a1: for i = 1 : Q;
a2: for j = 1 : Q;
a3: compute C(i, j) = ||a_i − a_j||; finally, the correlation matrix C is obtained. The correlation vector is then obtained as follows:
b1: sort the q-th row C(q, :) of C in ascending order;
b2: take out the sorted values and store them in a vector U_q;
b3: store, in order, the class indices corresponding to the values in U_q to obtain the class index vector I_q;
S3: learning of classifiers
First, a metric matrix is learned, anddifferent relatives samples are respectively regarded as a single learning task, and the T learning tasks can be regarded as a bilinear function ft:Wt *∈Rd×dIs the metric transformation matrix to be learned and Wt *=W0+WtWherein W is0For characterizing the shared characteristics of all relatives samples, WtThe sample is exclusively shared by a certain relativity relation sample and is used for describing the genetic characteristics of the sample;
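The shared-plus-exclusive decomposition W_t* = W_0 + W_t can be illustrated as follows. Since the exact form of the bilinear function f_t appears in the patent only as a figure, the form p^T W_t* c used here is an assumption for illustration.

```python
import numpy as np

def bilinear_score(p, c, W0, Wt):
    """Bilinear function f_t for task t with combined metric transformation
    matrix W_t* = W0 + Wt (shared part + task-exclusive part)."""
    W_star = W0 + Wt
    return float(p @ W_star @ c)

d = 6
rng = np.random.default_rng(3)
W0 = np.eye(d)                        # shared across all kinship classes
Wt = 0.1 * rng.normal(size=(d, d))    # exclusive to kinship class t
p, c = rng.normal(size=d), rng.normal(size=d)

shared_only = bilinear_score(p, c, W0, np.zeros((d, d)))
combined = bilinear_score(p, c, W0, Wt)
print(type(combined).__name__)
```

Setting Wt to zero recovers a score driven purely by the shared matrix W0, which is the multi-task intuition behind the decomposition: common kinship structure in W0, class-specific genetic structure in Wt.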
S4: Optimization
L(·,·) is an empirical loss function implemented with a logistic regression model, and R(W_0, W_t) is a regularization term that controls the influence of the shared and exclusive metric transformation matrices on the model.
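A sketch of the step-S4 objective follows: mean logistic loss plus a regulariser R(W_0, W_t). The squared Frobenius norms are an assumed choice of R, since the patent does not spell out its form here.

```python
import numpy as np

def logistic_loss(margin):
    # Empirical loss L(.,.) realised with the logistic model: log(1 + exp(-m)),
    # written in a numerically stable form.
    return np.logaddexp(0.0, -margin)

def objective(scores, labels, W0, Wt, lam=0.1):
    """Sketch of the step-S4 objective: empirical logistic loss plus a
    regulariser balancing the shared and exclusive matrices (assumed to be
    squared Frobenius norms)."""
    emp = logistic_loss(labels * scores).mean()
    reg = lam * (np.linalg.norm(W0) ** 2 + np.linalg.norm(Wt) ** 2)
    return float(emp + reg)

rng = np.random.default_rng(4)
scores = rng.normal(size=20)                  # f_t outputs for 20 sample pairs
labels = rng.choice([-1.0, 1.0], size=20)     # kin / non-kin labels y_i
W0, Wt = np.eye(3), 0.1 * np.eye(3)
val = objective(scores, labels, W0, Wt)
print(val > 0)
```

Minimising this objective over W_0 and W_t trades off fitting the kin/non-kin labels against keeping both transformation matrices small, which is the role the regularization term plays in the text.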
When the face image kinship verification method for alleviating the influence of information islands is used, the correlation information among different kinship types is exploited, the information-island problem in the kinship verification process is avoided, and the learner has a greater chance to use more discriminative information to improve generalization performance. Because the support vector data description model can be regarded as a one-class classifier, the influence of label noise on the classifier in the kinship verification process can be alleviated, and the learner has a greater chance to learn the similarity characteristics between kinship samples from cleaner data.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. A face image relative relationship verification method for relieving information island influence, characterized by comprising the following steps:
s1: learning of spatial structure
Firstly, a certain face image representation method is adopted for the Q (Q is 1,2, Q) th relativity face image, the operation of extracting the face features is carried out, and M pairs of samples D under the relativity can be obtainedq={(pqi,cqi,yi)|i=1,2,...,M};
For each kind of relativity relation sample, using algorithm I to mine its space structure, inputting { DqAnd (1, 2, a., Q), and then outputting the sphere center and the radius of the support vector data and the screened sample setFirst, for each DqDigging its space structure, taking out all the positive example sample pairs to form set PqThen, the absolute value of the difference of the feature vectors is used as the feature of all the image pairs, and the dual model of the support vector data model is solved to obtain the center a of the sphereqRadius RqAnd a sample set covered by SVDD spheresFinally, carrying out sample screening and output operation;
s2: mining of auxiliary discrimination information
Firstly, for each kind of relatives Q (Q is 1,2.. multidot.q.) to be learned, the algorithm two is used to calculate the relativity with other kinds of relatives, and a is inputq、RqAndoutputting a correlation vector UqAnd class order vector Iq, and finally, integerCombining auxiliary information, specifically, setting a parameter T for representing T relatives with larger correlation with the current relatives to be learned, and combining the category sequence number vector IqThe T relatives sample sets with the top rank in the middle are merged intoThen use the vector UqTaking out the middle and front T values for normalization processing to obtain a new correlation weight vector Kq=[β1,β2,...,βT];
S3: learning of classifiers
Firstly, a metric matrix is learned, different relatives samples are respectively regarded as a single learning task, and the T learning tasks can be respectively used as a bilinear function ft:
S4: optimization
2. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: p_qi, c_qi ∈ R^d in step S1 are the feature vectors extracted from the parent and child face images of the i-th kinship pair, and y_i ∈ {+1, −1} indicates whether the corresponding pair has a kinship relation.
3. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: algorithm one in step S1 is based on learning the kinship spatial structure with the support vector data description model; the sample screening operation in step S1 takes out all negative sample pairs in D_q to form a set N_q; using the absolute value of the feature-vector difference as the feature of each image pair, the distance between every negative image pair and the sphere center a_q is computed, and the negative pairs farthest from the center are selected to form a set; this set is merged with the set covered by the SVDD sphere as a new training sample set; the output operation in step S1 outputs the sphere center a_q, the radius R_q, and the screened sample set.
4. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: algorithm two in step S2 computes the correlation, with the following steps:
a1: for i = 1 : Q;
a2: for j = 1 : Q;
a3: compute C(i, j) = ||a_i − a_j||; finally, the correlation matrix C is obtained.
5. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: the correlation vector in step S2 is obtained as follows:
b1: sort the q-th row C(q, :) of C in ascending order;
b2: take out the sorted values and store them in a vector U_q;
b3: store, in order, the class indices corresponding to the values in U_q to obtain the class index vector I_q.
6. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: W_t* ∈ R^(d×d) in step S3 is the metric transformation matrix to be learned, with W_t* = W_0 + W_t, where W_0 characterizes the features shared by all kinship samples, and W_t is exclusive to one kinship class and characterizes its genetic features.
7. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: L(·,·) in step S4 is an empirical loss function implemented with a logistic regression model, and R(W_0, W_t) is a regularization term that controls the influence of the shared and exclusive metric transformation matrices on the model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110227220.8A CN112966585A (en) | 2021-03-01 | 2021-03-01 | Face image relative relationship verification method for relieving information island influence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110227220.8A CN112966585A (en) | 2021-03-01 | 2021-03-01 | Face image relative relationship verification method for relieving information island influence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112966585A true CN112966585A (en) | 2021-06-15 |
Family
ID=76277551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110227220.8A Pending CN112966585A (en) | 2021-03-01 | 2021-03-01 | Face image relative relationship verification method for relieving information island influence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112966585A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115205941A (en) * | 2022-07-13 | 2022-10-18 | 山西大学 | Generic multi-view graph embedding-based relationship verification method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102682478A (en) * | 2012-05-15 | 2012-09-19 | 北京航空航天大学 | Three-dimensional target multi-viewpoint view modeling method based on support vector data description |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
CN109344759A (en) * | 2018-06-12 | 2019-02-15 | 北京理工大学 | A kind of relatives' recognition methods based on angle loss neural network |
- 2021-03-01: CN application CN202110227220.8A filed; patent CN112966585A (en), status: active, Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102682478A (en) * | 2012-05-15 | 2012-09-19 | 北京航空航天大学 | Three-dimensional target multi-viewpoint view modeling method based on support vector data description |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
CN109344759A (en) * | 2018-06-12 | 2019-02-15 | 北京理工大学 | A kind of relatives' recognition methods based on angle loss neural network |
Non-Patent Citations (1)
Title |
---|
Qin Xiaoqian: "Research on Kinship Verification Based on Web Images", China Doctoral Dissertations Full-text Database (Information Science and Technology), pages 138 - 15 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115205941A (en) * | 2022-07-13 | 2022-10-18 | 山西大学 | Generic multi-view graph embedding-based relationship verification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||