CN112966585A - Face image relative relationship verification method for relieving information island influence - Google Patents

Face image relative relationship verification method for relieving information island influence

Info

Publication number
CN112966585A
CN112966585A (application CN202110227220.8A)
Authority
CN
China
Prior art keywords
relativity
relatives
relieving
vector
verifying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110227220.8A
Other languages
Chinese (zh)
Inventor
秦晓倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Normal University
Original Assignee
Huaiyin Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Normal University
Priority to CN202110227220.8A
Publication of CN112966585A
Legal status: Pending

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of kinship verification and discloses a face image relative relationship verification method that alleviates the influence of information islands, comprising the following steps. S1, learning of the spatial structure: for the q-th kinship type (q = 1, 2, ..., Q), a face image representation method is applied and facial features are extracted, yielding M sample pairs under that kinship type, D_q = {(p_qi, c_qi, y_i) | i = 1, 2, ..., M}. By exploiting the correlation information among different kinship types, the method avoids the information-island problem in kinship verification and gives the learner a greater chance to use more discriminative information, thereby improving generalization performance. Because the support vector data description (SVDD) model can be regarded as a one-class classifier, the method also alleviates the influence of label noise on the classifier during kinship verification, giving the learner a greater chance to learn the similarity characteristics between kinship samples from cleaner data.

Description

Face image relative relationship verification method for relieving information island influence
Technical Field
The invention relates to the technical field of kinship verification, and in particular to a face image relative relationship verification method that alleviates the influence of information islands.
Background
The objective of kinship verification based on face images is to learn a classifier that judges whether a given pair of face images share a kinship relation, here meaning the relation between parents and children. The kinship information learned by the classification model can be applied to face recognition, social media analysis, face annotation, image tracking, and so on. Existing methods for kinship verification on face images fall into two families: feature-representation-based methods and model-learning-based methods. Feature-representation-based methods aim to extract a stable feature representation from a face image that carries the information needed to judge whether the given face images share a kinship relation. Proposed features include local or global texture features, gradient orientation pyramids, dense matching, gated autoencoders, prototype-based discriminative feature learning, spatial-pyramid-based feature representations, dynamic spatio-temporal features, attributes, fused representations of multiple different features, soft-voting-based feature selection, and so on. Model-learning-based methods follow a completely different idea: they typically search for a discriminative space by means of statistical analysis or machine learning. Two main approaches exist at present. The first uses transfer learning: based on the observation that parents' facial appearance at a young age is more similar to that of their children, it relies on images of young parents as a bridge to reduce the difference in facial appearance between older parent images and young child images.
The second uses metric learning, whose goal is to learn a similarity metric from given kinship data such that, in the new metric space, the distances between sample pairs without a kinship relation are larger than those between pairs with one. Generally speaking, existing kinship verification methods achieve good generalization performance on face image kinship data sets such as KinFaceW, but they all process data of a single fixed kinship type (for example, father-son). This information-island way of solving the problem, focused on one fixed kinship type, lacks the ability to explore and exploit the correlation among different kinship types, so the classifier cannot use additional discriminative information. Against this background, the invention discards the conventional practice of processing a fixed kinship type as an information island, and studies how to mine and exploit the correlation among different kinship types to improve the generalization performance of conventional metric learning.
Disclosure of Invention
The invention aims to provide a face image kinship verification method that alleviates the influence of information islands and solves the problems identified in the background above.
To achieve this purpose, the invention provides the following technical scheme. The face image kinship verification method for relieving the information-island influence comprises the following steps:
S1: learning of the spatial structure
First, for the q-th kinship type (q = 1, 2, ..., Q), a face image representation method is applied and facial features are extracted, yielding M sample pairs under that kinship type, D_q = {(p_qi, c_qi, y_i) | i = 1, 2, ..., M}.
For each kinship sample set, Algorithm 1 is used to mine its spatial structure: the input is {D_q | q = 1, 2, ..., Q}, and the output is the sphere center and radius of the support vector data description (SVDD) model together with the screened sample set.
For each D_q, the spatial structure is mined as follows: all positive sample pairs are taken out to form a set P_q; the absolute value of the difference of the two feature vectors is used as the feature of every image pair; and the dual of the SVDD model is solved to obtain the sphere center a_q, the radius R_q, and the set of samples covered by the SVDD sphere. Finally, sample screening and output are performed.
S2: mining of auxiliary discrimination information
First, for each kinship type q (q = 1, 2, ..., Q) to be learned, Algorithm 2 is used to compute its correlation with the other kinship types: the input is a_q, R_q and the screened sample set, and the output is a correlation vector U_q and a category index vector I_q. Finally, the auxiliary information is integrated: a parameter T is set to denote the T kinship types most correlated with the current type to be learned; the T top-ranked kinship sample sets in the category index vector I_q are merged into a single auxiliary sample set; then the first T values of the vector U_q are taken out and normalized to obtain a correlation weight vector K_q = [β_1, β_2, ..., β_T].
S3: learning of classifiers
First, a metric matrix is learned. Different kinship sample sets are each regarded as a single learning task, and the T learning tasks can each be expressed as a bilinear function f_t.
S4: optimization
Figure BDA0002956960010000033
Preferably, in step S1, p_qi, c_qi ∈ R^d are the feature vectors extracted from the parent and child face images of the i-th kinship pair, and y_i ∈ {+1, −1} indicates whether the corresponding sample pair has a kinship relation.
Preferably, Algorithm 1 in step S1 learns the kinship spatial structure based on the support vector data description model. The sample screening operation in step S1 takes out all negative sample pairs in D_q to form a set N_q, uses the absolute value of the difference of the feature vectors as the feature of every image pair, computes the distance between every negative image pair and the sphere center a_q, and selects the negative samples with the largest distances to form a screened negative set. This screened negative set and the positive set covered by the SVDD sphere are merged as the new training sample set. The output operation in step S1 outputs the sphere center a_q, the radius R_q, and the screened sample set.
Preferably, Algorithm 2 in step S2 computes the correlation, with the following steps:
a1: for i = 1:Q;
a2: for j = 1:Q;
a3: compute C(i, j) = ||a_i − a_j||; finally, the correlation matrix C is obtained.
Preferably, the correlation vector in step S2 is obtained by:
b1: sorting the q-th row C(q, :) of C in ascending order;
b2: storing the sorted values in a vector U_q;
b3: recording, in order, the category indices corresponding to the values in U_q to obtain the category index vector I_q.
Preferably, W_t^* ∈ R^{d×d} in step S3 is the metric transformation matrix to be learned, with W_t^* = W_0 + W_t, where W_0 characterizes the features shared by all kinship sample sets, and W_t is exclusive to one kinship type and characterizes its genetic features.
Preferably, L(·,·) in step S4 is an empirical loss function, implemented with a logistic regression model, and R(W_0, W_t) is a regularization term controlling the influence of the shared and exclusive metric transformation matrices on the model.
The invention provides a face image kinship verification method that alleviates the influence of information islands, with the following beneficial effects.
according to the face image relative relation verification method for relieving the information island influence, the correlation information among different types of relative relations is utilized, the information island problem in the relative relation verification process can be avoided, the learner has a larger chance to utilize more discrimination information to improve the generalization performance, the support vector data description model can be regarded as a single-class classifier, the influence problem of label noise on the classifier in the relative relation verification process can be relieved, and the learner has a larger chance to utilize cleaner data to learn the similarity characteristics among the relative relation samples.
Drawings
FIG. 1 is a schematic flow diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
As shown in FIG. 1, the invention provides the following technical scheme. The face image kinship verification method for relieving the information-island influence comprises the following steps:
S1: learning of the spatial structure
First, for the q-th kinship type (q = 1, 2, ..., Q), a face image representation method is applied and facial features are extracted, yielding M sample pairs under that kinship type, D_q = {(p_qi, c_qi, y_i) | i = 1, 2, ..., M}, where p_qi, c_qi ∈ R^d are the feature vectors extracted from the parent and child face images of the i-th kinship pair, and y_i ∈ {+1, −1} indicates whether the corresponding sample pair has a kinship relation.
For each kinship sample set, Algorithm 1 is used to mine its spatial structure: the input is {D_q | q = 1, 2, ..., Q}, and the output is the sphere center and radius of the support vector data description (SVDD) model together with the screened sample set.
For each D_q, the spatial structure is mined as follows. All positive sample pairs are taken out to form a set P_q; the absolute value of the difference of the two feature vectors is used as the feature of every image pair; and the dual of the SVDD model is solved to obtain the sphere center a_q, the radius R_q, and the set of samples covered by the SVDD sphere. Finally, sample screening and output are performed. Algorithm 1 learns the kinship spatial structure based on the support vector data description model. The sample screening operation takes out all negative sample pairs in D_q to form a set N_q, uses the absolute value of the difference of the feature vectors as the feature of every image pair, computes the distance between every negative image pair and the sphere center a_q, and selects the negative samples with the largest distances to form a screened negative set. This screened negative set and the positive set covered by the SVDD sphere are merged as the new training sample set. The output operation outputs the sphere center a_q, the radius R_q, and the screened sample set.
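The S1 procedure above can be sketched in Python. The patent solves the true SVDD dual to obtain the sphere center a_q and radius R_q; the sketch below is a simplified stand-in that approximates the center by the mean of the positive-pair features and the radius by a distance quantile, then screens the negatives farthest from the center. The function name, the `keep_frac` parameter, and the choice to retain as many negatives as covered positives are illustrative assumptions, not the patent's method.

```python
import numpy as np

def learn_spatial_structure(parents, children, labels, keep_frac=0.9):
    """Sketch of step S1: approximate an SVDD sphere over the positive
    kinship pairs and screen the negative pairs farthest from it.
    Center/radius are approximations (mean and quantile), not the SVDD dual."""
    feats = np.abs(parents - children)          # |p - c| pair features
    pos, neg = feats[labels == 1], feats[labels == -1]
    a_q = pos.mean(axis=0)                      # sphere center (approximation)
    d_pos = np.linalg.norm(pos - a_q, axis=1)
    R_q = np.quantile(d_pos, keep_frac)         # radius covering keep_frac of positives
    covered = pos[d_pos <= R_q]                 # samples inside the sphere
    d_neg = np.linalg.norm(neg - a_q, axis=1)
    k = len(covered)                            # number of kept negatives (assumed)
    screened_neg = neg[np.argsort(-d_neg)[:k]]  # negatives farthest from a_q
    new_train = np.vstack([covered, screened_neg])
    return a_q, R_q, new_train
```

A proper implementation would replace the mean/quantile approximation with a solver for the SVDD dual (a quadratic program over the positive-pair features).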
S2: mining of auxiliary discrimination information
Firstly, for each kind of relatives Q (Q is 1,2.. multidot.q.) to be learned, the algorithm two is used to calculate the relativity with other kinds of relatives, and a is inputq、RqAnd
Figure BDA0002956960010000061
outputting a correlation vector UqAnd a category sequence number vector Iq, setting a parameter T for representing T relatives with larger correlation with the current relatives to be learned, and setting a category sequence number vector IqThe T relatives sample sets with the top rank in the middle are merged into
Figure BDA0002956960010000062
Then use the vector UqTaking out the middle and front T values for normalization processing to obtain a new correlation weight vector Kq=[β1,β2,...,βT]And the second algorithm is used for calculating the correlation, and the algorithm steps for calculating the correlation are as follows:
a1:for i=1:Q;
a2:for j=1:Q;
a3 calculates C (i, j) | | | ai-ajAnd finally obtaining a correlation matrix C, wherein the correlation vector is obtained by the following steps:
b 1: arranging the q-th row C (q,: of C) in ascending order;
b 2: taking out the sorted values and storing the values into a vector Uq
b 3: sequentially put U inqThe category serial numbers corresponding to the values in (1) are merged to obtain a category serial number vector Iq
S3: learning of classifiers
First, a metric matrix is learned. Different kinship sample sets are each regarded as a single learning task, and the T learning tasks can each be expressed as a bilinear function f_t, where W_t^* ∈ R^{d×d} is the metric transformation matrix to be learned and W_t^* = W_0 + W_t; W_0 characterizes the features shared by all kinship sample sets, while W_t is exclusive to one kinship sample set and characterizes its genetic features.
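The bilinear task function f_t appears only as a formula image in the source. A plausible instantiation, consistent with W_t^* = W_0 + W_t ∈ R^{d×d}, is the bilinear similarity p^T W_t^* c sketched below; the exact form used by the patent is an assumption here.

```python
import numpy as np

def f_t(p, c, W0, Wt):
    """Sketch of the bilinear task function in step S3.
    W0 is shared across all kinship types; Wt is exclusive to task t.
    The form p^T (W0 + Wt) c is an assumed instantiation, since the
    formula is given only as an image in the source."""
    W_star = W0 + Wt          # W_t^* = W_0 + W_t
    return float(p @ W_star @ c)
```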
S4: optimization
The shared matrix W_0 and the exclusive matrices W_t are obtained by minimizing an objective that combines the empirical loss with a regularization term: L(·,·) is an empirical loss function implemented with a logistic regression model, and R(W_0, W_t) is a regularization term controlling the influence of the shared and exclusive metric transformation matrices on the model.
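The step-S4 objective is likewise given only as an image. A sketch consistent with the surrounding text, a logistic empirical loss L(·,·) per task plus a regularizer R(W_0, W_t), might look like this; weighting each task by its correlation weight β_t and using a squared-Frobenius form for R are assumptions.

```python
import numpy as np

def objective(W0, Ws, tasks, betas, lam=0.1):
    """Sketch of the step-S4 objective: a correlation-weighted logistic
    loss over the T tasks plus a Frobenius regularizer R(W0, Wt).
    The beta-weighting and the squared-Frobenius R are assumptions."""
    total = 0.0
    for t, (P, C, y) in enumerate(tasks):   # each task: parent feats, child feats, labels
        W_star = W0 + Ws[t]                 # W_t^* = W_0 + W_t
        scores = np.einsum('id,de,ie->i', P, W_star, C)        # f_t(p_i, c_i)
        total += betas[t] * np.mean(np.log1p(np.exp(-y * scores)))  # logistic loss
    reg = lam * (np.sum(W0**2) + sum(np.sum(W**2) for W in Ws))
    return total + reg
```

In practice W_0 and the W_t would be found by gradient-based minimization of this objective; the coupling through the shared W_0 is what lets correlated kinship types exchange discriminative information.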
In use, the face image kinship verification method for relieving the information-island influence exploits the correlation information among different kinship types and thus avoids the information-island problem in kinship verification, giving the learner a greater chance to use more discriminative information and improve generalization performance. Because the support vector data description model can be regarded as a one-class classifier, the method also alleviates the influence of label noise on the classifier during kinship verification, giving the learner a greater chance to learn the similarity characteristics between kinship samples from cleaner data.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A face image relative relationship verification method for relieving information island influence, characterized by comprising the following steps:
S1: learning of the spatial structure
first, for the q-th kinship type (q = 1, 2, ..., Q), a face image representation method is applied and facial features are extracted, yielding M sample pairs under that kinship type, D_q = {(p_qi, c_qi, y_i) | i = 1, 2, ..., M};
for each kinship sample set, Algorithm 1 is used to mine its spatial structure: the input is {D_q | q = 1, 2, ..., Q}, and the output is the sphere center and radius of the support vector data description (SVDD) model together with the screened sample set;
for each D_q, all positive sample pairs are taken out to form a set P_q, the absolute value of the difference of the two feature vectors is used as the feature of every image pair, and the dual of the SVDD model is solved to obtain the sphere center a_q, the radius R_q, and the set of samples covered by the SVDD sphere; finally, sample screening and output are performed;
S2: mining of auxiliary discrimination information
first, for each kinship type q (q = 1, 2, ..., Q) to be learned, Algorithm 2 is used to compute its correlation with the other kinship types: the input is a_q, R_q and the screened sample set, and the output is a correlation vector U_q and a category index vector I_q; finally, the auxiliary information is integrated: a parameter T is set to denote the T kinship types most correlated with the current type to be learned, the T top-ranked kinship sample sets in I_q are merged into a single auxiliary sample set, and the first T values of U_q are taken out and normalized to obtain a correlation weight vector K_q = [β_1, β_2, ..., β_T];
S3: learning of classifiers
first, a metric matrix is learned; different kinship sample sets are each regarded as a single learning task, and the T learning tasks can each be expressed as a bilinear function f_t;
S4: optimization
the shared and exclusive metric transformation matrices are obtained by minimizing an objective that combines the empirical loss over all tasks with a regularization term.
2. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: in step S1, p_qi, c_qi ∈ R^d are the feature vectors extracted from the parent and child face images of the i-th kinship pair, and y_i ∈ {+1, −1} indicates whether the corresponding sample pair has a kinship relation.
3. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: Algorithm 1 in step S1 learns the kinship spatial structure based on the support vector data description model; the sample screening operation in step S1 takes out all negative sample pairs in D_q to form a set N_q, uses the absolute value of the difference of the feature vectors as the feature of every image pair, computes the distance between every negative image pair and the sphere center a_q, and selects the negative samples with the largest distances to form a screened negative set; this screened negative set and the positive set covered by the SVDD sphere are merged as the new training sample set; the output operation in step S1 outputs the sphere center a_q, the radius R_q, and the screened sample set.
4. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: Algorithm 2 in step S2 computes the correlation, with the following steps:
a1: for i = 1:Q;
a2: for j = 1:Q;
a3: compute C(i, j) = ||a_i − a_j||; finally, the correlation matrix C is obtained.
5. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: the correlation vector in step S2 is obtained by:
b1: sorting the q-th row C(q, :) of C in ascending order;
b2: storing the sorted values in a vector U_q;
b3: recording, in order, the category indices corresponding to the values in U_q to obtain the category index vector I_q.
6. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: in step S3, W_t^* ∈ R^{d×d} is the metric transformation matrix to be learned, with W_t^* = W_0 + W_t, where W_0 characterizes the features shared by all kinship sample sets, and W_t is exclusive to one kinship type and characterizes its genetic features.
7. The face image relative relationship verification method for relieving information island influence according to claim 1, characterized in that: L(·,·) in step S4 is an empirical loss function, implemented with a logistic regression model, and R(W_0, W_t) is a regularization term controlling the influence of the shared and exclusive metric transformation matrices on the model.
CN202110227220.8A 2021-03-01 2021-03-01 Face image relative relationship verification method for relieving information island influence Pending CN112966585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110227220.8A CN112966585A (en) 2021-03-01 2021-03-01 Face image relative relationship verification method for relieving information island influence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110227220.8A CN112966585A (en) 2021-03-01 2021-03-01 Face image relative relationship verification method for relieving information island influence

Publications (1)

Publication Number Publication Date
CN112966585A true CN112966585A (en) 2021-06-15

Family

ID=76277551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110227220.8A Pending CN112966585A (en) 2021-03-01 2021-03-01 Face image relative relationship verification method for relieving information island influence

Country Status (1)

Country Link
CN (1) CN112966585A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205941A (en) * 2022-07-13 2022-10-18 山西大学 Generic multi-view graph embedding-based relationship verification method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682478A (en) * 2012-05-15 2012-09-19 北京航空航天大学 Three-dimensional target multi-viewpoint view modeling method based on support vector data description
CN105005774A (en) * 2015-07-28 2015-10-28 中国科学院自动化研究所 Face relative relation recognition method based on convolutional neural network and device thereof
CN109344759A (en) * 2018-06-12 2019-02-15 北京理工大学 A kind of relatives' recognition methods based on angle loss neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682478A (en) * 2012-05-15 2012-09-19 北京航空航天大学 Three-dimensional target multi-viewpoint view modeling method based on support vector data description
CN105005774A (en) * 2015-07-28 2015-10-28 中国科学院自动化研究所 Face relative relation recognition method based on convolutional neural network and device thereof
CN109344759A (en) * 2018-06-12 2019-02-15 北京理工大学 A kind of relatives' recognition methods based on angle loss neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
秦晓倩: "Research on Kinship Verification Based on Web Images", China Doctoral Dissertations Full-text Database (Information Science and Technology), pages 138-15 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205941A (en) * 2022-07-13 2022-10-18 山西大学 Generic multi-view graph embedding-based relationship verification method

Similar Documents

Publication Publication Date Title
CN111814584B (en) Vehicle re-identification method based on multi-center measurement loss under multi-view environment
CN109583322B (en) Face recognition deep network training method and system
WO2020114118A1 (en) Facial attribute identification method and device, storage medium and processor
Lin et al. Cir-net: Automatic classification of human chromosome based on inception-resnet architecture
Zheng et al. Improving the generalization ability of deep neural networks for cross-domain visual recognition
CN113657561B (en) Semi-supervised night image classification method based on multi-task decoupling learning
CN106570477A (en) Vehicle model recognition model construction method based on depth learning and vehicle model recognition method based on depth learning
Li et al. On improving the accuracy with auto-encoder on conjunctivitis
CN110580268A (en) Credit scoring integrated classification system and method based on deep learning
CN109711426A (en) A kind of pathological picture sorter and method based on GAN and transfer learning
CN105894050A (en) Multi-task learning based method for recognizing race and gender through human face image
CN113609569B (en) Distinguishing type generalized zero sample learning fault diagnosis method
Bai et al. Correlative channel-aware fusion for multi-view time series classification
Gupta et al. Single attribute and multi attribute facial gender and age estimation
CN112580502A (en) SICNN-based low-quality video face recognition method
CN112883931A (en) Real-time true and false motion judgment method based on long and short term memory network
CN113052017A (en) Unsupervised pedestrian re-identification method based on multi-granularity feature representation and domain adaptive learning
Li et al. Sparse trace ratio LDA for supervised feature selection
CN116229179A (en) Dual-relaxation image classification method based on width learning system
CN114492634A (en) Fine-grained equipment image classification and identification method and system
Mo et al. Weighted pseudo labeled data and mutual learning for semi-supervised classification
CN114625908A (en) Text expression package emotion analysis method and system based on multi-channel attention mechanism
CN112966585A (en) Face image relative relationship verification method for relieving information island influence
CN112668633B (en) Adaptive graph migration learning method based on fine granularity field
CN112883930A (en) Real-time true and false motion judgment method based on full-connection network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination