CN111753641A - Gender prediction method based on high-dimensional features of human face - Google Patents


Info

Publication number
CN111753641A
CN111753641A (application CN202010378403.5A; granted as CN111753641B)
Authority
CN
China
Prior art keywords
gender
human face
face
model
Prior art date
Legal status
Granted
Application number
CN202010378403.5A
Other languages
Chinese (zh)
Other versions
CN111753641B (en)
Inventor
李梦婷
李翔
印鉴
刘威
余建兴
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202010378403.5A
Publication of CN111753641A
Application granted
Publication of CN111753641B
Active legal status (current)
Anticipated expiration legal status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a gender prediction method based on high-dimensional face features, derived from a face recognition model built on a 100-layer ResNet CNN architecture. A face recognition model is first trained on millions of face recognition images to obtain a high-dimensional feature representation of the face; a shallow network is then trained on hundreds of thousands of gender-labeled face photos to obtain a model that predicts gender from a face photo. The invention fully reuses the high-dimensional features already computed during face recognition, so the gender prediction result is obtained with only a small additional amount of computation while maintaining very high prediction accuracy.

Description

Gender prediction method based on high-dimensional features of human face
Technical Field
The invention relates to the field of image processing algorithms, and in particular to a gender prediction method based on high-dimensional features of a human face.
Background
Face recognition services, such as face-scan access control and face-scan payment, are ubiquitous in daily life and give users a better experience. Face-based gender prediction is usually offered as a sub-service of face recognition and helps people in various ways: for example, a clothing retail store can use the gender ratio of its face-based foot traffic to decide how to plan and lay out the clothes in the store. Therefore, in practical applications, face attributes such as gender are returned in addition to the recognized identity.
There are two classes of methods for predicting gender from a face. The first extracts features at several key points using traditional image processing and then classifies the extracted features with a shallow method to predict gender; such methods are fast but have very low accuracy. The second uses deep learning to train a deep neural network end to end, mapping the face into a high-dimensional space, representing it with a high-dimensional feature vector, and then classifying that vector into a gender category. In addition, to maintain high recognition accuracy, most current commercial face recognition models likewise use deep convolutional neural networks. Meanwhile, most convolutional neural network computation must be accelerated on a GPU graphics card, and GPUs are relatively expensive. Therefore, to greatly reduce computation cost and latency, the invention fully reuses the high-dimensional features already computed during face recognition, so that the gender prediction result is obtained with only a small additional amount of computation: after a face photo is submitted, a single pass through the deep network yields both the identity recognition and gender prediction results, while very high prediction accuracy is maintained.
Disclosure of Invention
The invention provides a high-accuracy gender prediction method based on high-dimensional features of a human face.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a gender prediction method based on human face high-dimensional features comprises the following steps:
s1: adopting 100 layers of convolutional neural networks ResNet as a backbone network, using a million face data set MS1M to train a face recognition model, generating a usable deep face model, and fixing the weight of parameters passing through 100 layers, so that each face picture can generate a high-dimensional vector after passing through the 100 layers of backbone network;
s2: based on the backbone network generated in S1, inputting 10 million face photos with gender labels into the backbone network, training a two-layer shallow network by taking a high-dimensional vector generated by the backbone network as a feature, and generating a two-classification model for predicting male and female;
s3: and deploying the two models generated in the S1 and the S2 into a server, providing face recognition service and gender prediction service at the same time, and returning the identity and gender attribute corresponding to the face photo in real time.
Further, the specific process of step S1 is:
s11: using an MTCNN model to extract and cut a human face from a human face data set, inputting original human face photo data, outputting 5 key coordinate points corresponding to each human face, including two eyes, two mouth corners and a nose, and cutting the human face corresponding to the 5 points according to a proportion of 112 x 112, wherein the used margin value is 16;
s12: inputting the cut human face and the corresponding identity label into a 100-layer convolutional neural network ResNet, and training a model by using a loss function based on Arcface, wherein the blocksize is 512, the initial learning rate is set to be 0.1, the iteration is performed for 18 ten thousand times in total, the learning rate is reduced to 0.01 when the iteration is performed for 10 ten thousand times, and the learning rate is reduced to 0.001 when the iteration is performed for 16 ten thousand times until the learning is finished.
Further, in step S12, after training ends, the final fully connected layer is removed and only the backbone network is kept; a face photo passed through the backbone network yields a 512-dimensional feature vector.
Further, the specific process of step S2 is:
s21: collecting 10 ten thousand face photos of each scene on the line, manually labeling the gender of each photo, so that 10 ten thousand pieces of data with labeling information exist, inputting the photos into a backbone network generated by S1, generating 10 ten thousand feature vectors with 512 dimensions, and labeling each feature vector with the corresponding gender;
s22: randomly dividing 10 ten thousand feature vectors into two parts according to the proportion of 8:2, wherein 8 ten thousand feature vectors are used as a training set, a shallow two-layer network is used, a loss function adopts a softmax function based on cross entropy, a two-class gender model is trained, and after the training is finished, the rest 2 ten thousand photos are used as test data of the gender model.
Further, the specific process of step S3 is:
s31: the quantization optimization of the model is carried out on the backbone network generated by S12, the corresponding calculated amount is reduced, and because the current mainstream training mode uses 32-bit floating point number, the 16-bit integer quantization mode is mainly adopted, and the corresponding calculated amount is greatly reduced under the condition of very small influence on the precision;
s32: the models generated by S31 and S22 are deployed on a line, when a photo comes, a 512-dimensional feature vector is obtained through the model of S31, the identity of the photo is judged by comparing the similarity of the existing face vectors in the face base library, the vector is input into the model of S22, the corresponding gender of the photo is returned, and the identified identity information and gender information are returned to the client for display.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the method can reduce half of the calculated amount and greatly reduce the time delay, and after a face photo is transmitted, two results of identity recognition and gender prediction can be obtained only by one-time deep network, and meanwhile, the extremely high gender prediction precision can be kept. For example, the server with 8 Telsa P40 display cards has a lease price of 45 ten thousand and 6 thousand yuan per year on Tencent cloud, which reduces the calculation amount by half, and each server can save 22 ten thousand and 8 thousand yuan per year.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
fig. 2 is a schematic diagram of the algorithm structure in embodiment 1.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
As shown in fig. 1-2, a gender prediction method based on human face high-dimensional features comprises the following steps:
s1: adopting 100 layers of convolutional neural networks ResNet as a backbone network, using a million face data set MS1M to train a face recognition model, generating a usable deep face model, and fixing the weight of parameters passing through 100 layers, so that each face picture can generate a high-dimensional vector after passing through the 100 layers of backbone network;
s2: based on the backbone network generated in S1, inputting 10 million face photos with gender labels into the backbone network, training a two-layer shallow network by taking a high-dimensional vector generated by the backbone network as a feature, and generating a two-classification model for predicting male and female;
s3: and deploying the two models generated in the S1 and the S2 into a server, providing face recognition service and gender prediction service at the same time, and returning the identity and gender attribute corresponding to the face photo in real time.
The specific process of step S1 is:
s11: using an MTCNN model to extract and cut a human face from a human face data set, inputting original human face photo data, outputting 5 key coordinate points corresponding to each human face, including two eyes, two mouth corners and a nose, and cutting the human face corresponding to the 5 points according to a proportion of 112 x 112, wherein the used margin value is 16;
s12: inputting the cut human face and the corresponding identity label into a 100-layer convolutional neural network ResNet, and training a model by using a loss function based on Arcface, wherein the blocksize is 512, the initial learning rate is set to be 0.1, the iteration is performed for 18 ten thousand times in total, the learning rate is reduced to 0.01 when the iteration is performed for 10 ten thousand times, and the learning rate is reduced to 0.001 when the iteration is performed for 16 ten thousand times until the learning is finished.
In step S12, after training ends, the final fully connected layer is removed and only the backbone network is kept; a face photo passed through the backbone network yields a 512-dimensional feature vector.
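The shape of this step can be sketched as follows, using torchvision's resnet101 as a stand-in for the patent's 100-layer ResNet (the exact architecture is not reproduced here) with its classification head replaced by a 512-dimensional embedding layer.

    import torch
    import torchvision

    # Stand-in backbone: swap the classifier for a 512-d embedding head.
    backbone = torchvision.models.resnet101(weights=None)
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, 512)
    backbone.eval()

    with torch.no_grad():
        crop = torch.randn(1, 3, 112, 112)  # placeholder for one aligned face crop
        feat = backbone(crop)               # -> tensor of shape (1, 512)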
The specific process of step S2 is:
s21: collecting 10 ten thousand face photos of each scene on the line, manually labeling the gender of each photo, so that 10 ten thousand pieces of data with labeling information exist, inputting the photos into a backbone network generated by S1, generating 10 ten thousand feature vectors with 512 dimensions, and labeling each feature vector with the corresponding gender;
s22: randomly dividing 10 ten thousand feature vectors into two parts according to the proportion of 8:2, wherein 8 ten thousand feature vectors are used as a training set, a shallow two-layer network is used, a loss function adopts a softmax function based on cross entropy, a two-class gender model is trained, and after the training is finished, the rest 2 ten thousand photos are used as test data of the gender model.
The specific process of step S3 is:
s31: the quantization optimization of the model is carried out on the backbone network generated by S12, the corresponding calculated amount is reduced, and because the current mainstream training mode uses 32-bit floating point number, the 16-bit integer quantization mode is mainly adopted, and the corresponding calculated amount is greatly reduced under the condition of very small influence on the precision;
s32: the models generated by S31 and S22 are deployed on a line, when a photo comes, a 512-dimensional feature vector is obtained through the model of S31, the identity of the photo is judged by comparing the similarity of the existing face vectors in the face base library, the vector is input into the model of S22, the corresponding gender of the photo is returned, and the identified identity information and gender information are returned to the client for display.
For the extraction of high-dimensional face features, the million-scale MS1M face dataset published by Microsoft is adopted. Because the face gender datasets publicly available on the web are poor in both quantity and quality, while training a machine learning model requires a large amount of high-quality data, we collected 100,000 face photos covering the various production scenarios and manually labeled the gender of each photo. Beyond this, there is no special requirement on the dataset as long as each face has a corresponding gender attribute. When a face photo is input, the model provided by this method outputs the corresponding identity and gender information.
The method comprises the following specific steps:
1. The MS1M dataset is preprocessed to remove photos with problematic labels or poor image quality.
2. Face detection is performed on the cleaned photos using MTCNN, and each photo is cropped and aligned to 112 x 112 according to the 5 output key points.
3. The aligned photos are input into the 100-layer Resnet network and trained with the Arcface loss function; after training, the final fully connected layer is removed.
4. The collected gender dataset is input into the 100-layer backbone network to obtain the corresponding high-dimensional feature vectors, on which the shallow gender recognition model is trained.
In the inference process, a new face photo goes through the preprocessing and feature extraction of operations 1-4, the generated 512-dimensional feature is input into the shallow gender model to obtain the corresponding gender attribute, and the face gallery is searched at the same time to match the corresponding identity.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (6)

1. A gender prediction method based on high-dimensional features of a human face, characterized by comprising the following steps:
s1: adopting 100 layers of convolutional neural networks ResNet as a backbone network, using a million face data set MS1M to train a face recognition model, generating a usable deep face model, and fixing the weight of parameters passing through 100 layers, so that each face picture can generate a high-dimensional vector after passing through the 100 layers of backbone network;
s2: based on the backbone network generated in S1, inputting 10 million face photos with gender labels into the backbone network, training a two-layer shallow network by taking a high-dimensional vector generated by the backbone network as a feature, and generating a two-classification model for predicting male and female;
s3: and deploying the two models generated in the S1 and the S2 into a server, providing face recognition service and gender prediction service at the same time, and returning the identity and gender attribute corresponding to the face photo in real time.
2. The gender prediction method based on high-dimensional features of a human face according to claim 1, wherein the specific process of step S1 is:
s11: using an MTCNN model to extract and cut a human face from a human face data set, inputting original human face photo data, outputting 5 key coordinate points corresponding to each human face, including two eyes, two mouth corners and a nose, and cutting the human face corresponding to the 5 points according to a proportion of 112 x 112, wherein the used margin value is 16;
s12: and inputting the cut human face and the corresponding identity label into a 100-layer convolutional neural network ResNet, and training a model by using a loss function based on Arcface until the learning rate is reduced to 0.001 until the learning is finished.
3. The gender prediction method based on high-dimensional features of a human face according to claim 2, wherein in step S12 the batch size used with the Arcface loss function is 512, the initial learning rate is set to 0.1, 180,000 iterations are performed in total, the learning rate is reduced to 0.01 at 100,000 iterations and to 0.001 at 160,000 iterations, at which point training ends.
4. The gender prediction method based on high-dimensional features of a human face according to claim 3, wherein in step S12, after training ends, the final fully connected layer is removed and only the backbone network is kept; a face photo passed through the backbone network yields a 512-dimensional feature vector.
5. The gender prediction method based on high-dimensional features of a human face according to claim 4, wherein the specific process of step S2 is:
s21: collecting 10 ten thousand face photos of each scene on the line, manually labeling the gender of each photo, so that 10 ten thousand pieces of data with labeling information exist, inputting the photos into a backbone network generated by S1, generating 10 ten thousand feature vectors with 512 dimensions, and labeling each feature vector with the corresponding gender;
s22: randomly dividing 10 ten thousand feature vectors into two parts according to the proportion of 8:2, wherein 8 ten thousand feature vectors are used as a training set, a shallow two-layer network is used, a loss function adopts a softmax function based on cross entropy, a two-class gender model is trained, and after the training is finished, the rest 2 ten thousand photos are used as test data of the gender model.
6. The gender prediction method based on high-dimensional features of a human face according to claim 5, wherein the specific process of step S3 is:
s31: the quantization optimization of the model is carried out on the backbone network generated by S12, the corresponding calculated amount is reduced, and because the current mainstream training mode uses 32-bit floating point number, the 16-bit integer quantization mode is mainly adopted, and the corresponding calculated amount is greatly reduced under the condition of very small influence on the precision;
s32: the models generated by S31 and S22 are deployed on a line, when a photo comes, a 512-dimensional feature vector is obtained through the model of S31, the identity of the photo is judged by comparing the similarity of the existing face vectors in the face base library, the vector is input into the model of S22, the corresponding gender of the photo is returned, and the identified identity information and gender information are returned to the client for display.
CN202010378403.5A 2020-05-07 2020-05-07 Gender prediction method based on high-dimensional characteristics of human face Active CN111753641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010378403.5A 2020-05-07 2020-05-07 Gender prediction method based on high-dimensional characteristics of human face (granted as CN111753641B)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010378403.5A 2020-05-07 2020-05-07 Gender prediction method based on high-dimensional characteristics of human face (granted as CN111753641B)

Publications (2)

Publication Number Publication Date
CN111753641A 2020-10-09
CN111753641B CN111753641B (en) 2023-07-18

Family

ID=72673167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010378403.5A 2020-05-07 2020-05-07 Gender prediction method based on high-dimensional characteristics of human face (Active; granted as CN111753641B)

Country Status (1)

Country Link
CN (1) CN111753641B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095833A (en) * 2014-05-08 2015-11-25 中国科学院声学研究所 Network constructing method for human face identification, identification method and system
CN106815566A (en) * 2016-12-29 2017-06-09 天津中科智能识别产业技术研究院有限公司 A kind of face retrieval method based on multitask convolutional neural networks
CN107038429A (en) * 2017-05-03 2017-08-11 四川云图睿视科技有限公司 A kind of multitask cascade face alignment method based on deep learning
CN107247947A (en) * 2017-07-07 2017-10-13 北京智慧眼科技股份有限公司 Face character recognition methods and device
CN107918780A (en) * 2017-09-01 2018-04-17 中山大学 A kind of clothes species and attributive classification method based on critical point detection
CN109522872A (en) * 2018-12-04 2019-03-26 西安电子科技大学 A kind of face identification method, device, computer equipment and storage medium
CN109800648A (en) * 2018-12-18 2019-05-24 北京英索科技发展有限公司 Face datection recognition methods and device based on the correction of face key point
CN109815801A (en) * 2018-12-18 2019-05-28 北京英索科技发展有限公司 Face identification method and device based on deep learning

Also Published As

Publication number Publication date
CN111753641B (en) 2023-07-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant