CN111753641B - Gender prediction method based on high-dimensional characteristics of human face - Google Patents

Gender prediction method based on high-dimensional characteristics of human face

Info

Publication number
CN111753641B
Authority
CN
China
Prior art keywords
face
gender
model
photo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010378403.5A
Other languages
Chinese (zh)
Other versions
CN111753641A (en)
Inventor
李梦婷 (Li Mengting)
李翔 (Li Xiang)
印鉴 (Yin Jian)
刘威 (Liu Wei)
余建兴 (Yu Jianxing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat-sen University
Original Assignee
Sun Yat-sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202010378403.5A
Publication of CN111753641A
Application granted
Publication of CN111753641B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a gender prediction method based on high-dimensional features of the human face. The method builds on a face recognition model with a 100-layer ResNet CNN backbone: a high-dimensional representation of the face is first learned by training on millions of face recognition samples, and a shallow network is then trained on hundreds of thousands of gender-labelled face photos to produce a model that judges gender from a face photo. Because the method fully reuses the high-dimensional features already computed during face recognition, it obtains the gender prediction result at very little extra computational cost while maintaining very high prediction accuracy.

Description

Gender prediction method based on high-dimensional characteristics of human face
Technical Field
The invention relates to the field of image processing algorithms, in particular to a gender prediction method based on high-dimensional characteristics of human faces.
Background
Face recognition services such as face-based access control and face-scan payment are ubiquitous in daily life and give users a better experience. Face-based gender prediction is usually offered as a sub-service of face recognition and assists people in many ways; for example, a clothing retailer can derive the gender ratio of its customer flow from face data, understand its consumers better, and plan the layout of goods in the store accordingly. Therefore, in practical applications, face attributes such as gender are returned in addition to the recognized identity.
There are two general approaches to predicting the gender attribute from a face. The first extracts features around a few key points with traditional image processing and then classifies those features with a shallow method to predict the gender; such methods are fast, but their accuracy is low. The second trains a deep neural network by deep learning, maps the face end-to-end into a high-dimensional space represented by a feature vector, and classifies that vector into a specific gender. In addition, to maintain high recognition accuracy, most commercial face recognition models currently use deep convolutional neural networks, and most of the convolution computation must be accelerated on GPUs, whose cost is relatively high. Therefore, to greatly reduce computational cost and latency, the invention fully reuses the high-dimensional features computed during face recognition and obtains the gender prediction with very little extra computation: after a face photo is submitted, a single pass through the deep network yields both the identity recognition and the gender prediction results, while very high prediction accuracy is maintained.
Disclosure of Invention
The invention provides a gender prediction method based on high-dimensional features of the human face that achieves high accuracy.
To achieve this technical effect, the technical solution of the invention is as follows:
a gender prediction method based on high-dimensional characteristics of human faces comprises the following steps:
s1: the method comprises the steps of adopting a convolutional neural network ResNet of 100 layers as a backbone network, training a face recognition model by using a million face data set MS1M, generating a usable deep face model, fixing parameter weights passing through 100 layers, and generating a vector with high dimensionality after each face photo passes through the backbone network of 100 layers;
s2: based on the backbone network generated in the step S1, 10 ten thousand face photos with gender labels are input into the backbone network, a two-layer shallow network is trained by taking a high-dimensional vector generated through the backbone network as a characteristic, and a two-class model for predicting men and women is generated;
s3: and deploying the two models generated in the S1 and the S2 into a server, simultaneously providing face recognition service and gender prediction service, and returning the corresponding identity and gender attribute of the face photo in real time.
Further, the specific process of step S1 is as follows:
S11: extract and crop the face dataset with an MTCNN model: input the original face photos, output the 5 key landmark coordinates of each face (the two eyes, the two mouth corners and the nose), and crop each face around these 5 points to 112×112 with a margin of 16;
S12: input the cropped faces and their identity labels into the 100-layer ResNet and train the model with an ArcFace-based loss function, where the batch size is 512 and the initial learning rate is set to 0.1; training runs for 180,000 iterations in total, the learning rate is reduced to 0.01 at 100,000 iterations and to 0.001 at 160,000 iterations, and training then ends.
Further, in step S12, after training ends, the last fully connected layer is removed so that only the backbone remains; a face photo computed through the backbone yields a 512-dimensional feature vector.
Further, the specific process of step S2 is as follows:
S21: collect 100,000 face photos from various online scenes and label the gender of each photo manually, obtaining 100,000 annotated samples; feeding these photos through the backbone generated in S1 produces 100,000 512-dimensional feature vectors, each with its corresponding gender label;
S22: randomly split the 100,000 feature vectors into two parts in an 8:2 ratio; use the 80,000 vectors of the training set to train a binary gender model with a shallow two-layer network and a cross-entropy-based softmax loss, and after training use the remaining 20,000 samples as test data for the gender model.
Further, the specific process of step S3 is as follows:
S31: apply model quantization to the backbone generated in step S12 to reduce its computation; since mainstream training currently uses 32-bit floating point, 16-bit integer quantization is adopted, which greatly reduces the computation with very little effect on accuracy;
S32: deploy the models generated in S31 and S22 online; when a photo arrives, obtain its 512-dimensional feature vector from the model of S31, determine its identity by comparing its similarity against the existing face vectors in the face database, input the same vector into the model of S22 to obtain the gender of the photo, and return the recognized identity and gender information to the client for display.
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
The method halves the computation and greatly reduces latency: after a face photo is submitted, a single pass through the deep network yields both the identity recognition and the gender prediction results, while very high gender prediction accuracy is maintained. For example, a server with 8 Tesla P40 GPUs leases for about 456,000 yuan per year on the cloud; halving the computation thus saves about 228,000 yuan per server per year.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic diagram of the algorithm structure in embodiment 1.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
As shown in FIG. 1 and FIG. 2, a gender prediction method based on high-dimensional features of the human face comprises the following steps:
S1: adopt a 100-layer convolutional neural network (ResNet) as the backbone, train a face recognition model on the million-scale face dataset MS1M to obtain a usable deep face model, and fix the weights of the 100 layers; each face photo passed through the 100-layer backbone then yields a high-dimensional vector;
S2: based on the backbone generated in step S1, feed 100,000 gender-labelled face photos through the backbone, use the resulting high-dimensional vectors as features to train a two-layer shallow network, and generate a binary classification model that predicts male or female;
S3: deploy the two models generated in S1 and S2 on a server, provide face recognition and gender prediction services simultaneously, and return the identity and gender attributes of a face photo in real time.
The specific process of step S1 is:
s11: extracting and cutting a face data set by using an MTCNN model, inputting original face photo data, outputting 5 key coordinate points corresponding to each face, including two eyes, two mouth angles and a nose, cutting the face corresponding to the 5 points according to a 112X 112 proportion, wherein the used margin value is 16;
s12: the face after clipping and the corresponding identity label are input into a convolutional neural network ResNet of 100 layers, model training is carried out by using a loss function based on Arcface, wherein the batch size is 512, the initial learning rate is set to be 0.1, the total iteration is 18 ten thousand times, the learning rate is reduced to 0.01 when the iteration is carried out for 10 ten thousand times, and the learning rate is reduced to 0.001 when the iteration is carried out for 16 ten thousand times, so that the learning is finished.
In step S12, after training ends, the last fully connected layer is removed so that only the backbone remains; a face photo computed through the backbone yields a 512-dimensional feature vector.
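Extracting the 512-dimensional vector with the truncated backbone might then look like this sketch; the linear stand-in for the trained 100-layer ResNet and the L2 normalization are assumptions:

```python
# Sketch: compute the 512-dimensional face feature with the frozen backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Linear(3 * 112 * 112, 512)  # stand-in for the truncated ResNet
face_tensor = torch.rand(3, 112, 112)     # stand-in for an aligned face from S11

backbone.eval()
with torch.no_grad():
    embedding = backbone(face_tensor.flatten().unsqueeze(0))  # shape (1, 512)
    embedding = F.normalize(embedding)  # L2-normalize for cosine comparisons
```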
The specific process of step S2 is:
s21: face photos of each scene on 10 ten thousand lines are collected, and the gender of each photo is marked manually, so that 10 ten thousand pieces of data with marking information are obtained, 10 ten thousand feature vectors with 512 dimensions are generated after the photos are input into a backbone network generated by S1, and each feature vector has corresponding gender marking;
s22: 10 ten thousand feature vectors are randomly divided into two parts according to the proportion of 8:2, 8 ten thousand feature vectors are used as training sets, a shallow two-layer network is used, a loss function adopts a softmax function based on cross entropy, a two-class gender model is trained, and after training is finished, the rest 2 ten thousand photos are used as test data of the gender model.
The specific process of step S3 is:
s31: the main network generated in the step S12 is subjected to quantization optimization of the model, so that the corresponding calculated amount is reduced, and because the current main stream training mode uses 32-bit floating point numbers, the main stream training mode mainly adopts a 16-bit integer quantization mode, and the corresponding calculated amount is greatly reduced under the condition of very little influence on the precision;
s32: the models generated in S31 and S22 are deployed on a line, when a photo comes, a 512-dimensional feature vector is obtained through the model in S31, the identity of the feature vector is judged through comparing the similarity of the existing face vectors in the face database, the vector is input into the model in S22, the gender corresponding to the photo is returned, and meanwhile the identified identity information and gender information are returned to a client for display.
In addition, because the face gender datasets publicly available on the internet are limited in both quantity and quality, while training a machine learning model requires a large amount of high-quality data, we collected 100,000 face photos of various online scenes ourselves and labelled the gender of each photo manually. Beyond having a gender attribute for each face, the dataset has no special requirements. After a face photo is input, the model provided by this method outputs the corresponding identity and gender information.
The specific method comprises the following steps:
1. Preprocess the MS1M dataset and clean out photos with labelling problems or poor image quality.
2. Perform face detection on the cleaned photos with MTCNN, and crop and align each face to 112×112 according to the 5 output key points.
3. Input the aligned photos into the 100-layer ResNet, train it with the ArcFace loss function, and remove the last fully connected layer after training.
4. Feed the collected gender dataset through the 100-layer backbone to obtain the corresponding high-dimensional feature vectors, and train the shallow gender recognition model on them.
The above is the training process. At inference time, operations 1-4 are applied to the new face photo, the generated 512-dimensional features are input into the shallow gender model to obtain the corresponding gender attribute, and the matching identity is retrieved from the face database.
The same or similar reference numerals correspond to the same or similar components;
the positional relationship depicted in the drawings is for illustrative purposes only and is not to be construed as limiting the present patent;
it is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (3)

1. A gender prediction method based on high-dimensional features of the human face, characterized by comprising the following steps:
S1: adopt a 100-layer convolutional neural network (ResNet) as the backbone, train a face recognition model on the million-scale face dataset MS1M to obtain a usable deep face model, and fix the weights of the 100 layers, so that each face photo passed through the 100-layer backbone yields a high-dimensional vector; the specific process of step S1 is as follows:
S11: extract and crop the face dataset with an MTCNN model: input the original face photos, output the 5 key landmark coordinates of each face (the two eyes, the two mouth corners and the nose), and crop each face around these 5 points to 112×112 with a margin of 16;
S12: input the cropped faces and their identity labels into the 100-layer ResNet, and train the model with an ArcFace-based loss function until the learning rate is reduced to 0.001, at which point training ends;
S2: based on the backbone generated in step S1, feed 100,000 gender-labelled face photos through the backbone, use the resulting high-dimensional vectors as features to train a two-layer shallow network, and generate a binary classification model that predicts male or female; the specific process of step S2 is as follows:
S21: collect 100,000 face photos from various online scenes and label the gender of each photo manually, obtaining 100,000 annotated samples; feeding these photos through the backbone generated in S1 produces 100,000 512-dimensional feature vectors, each with its corresponding gender label;
S22: randomly split the 100,000 feature vectors into two parts in an 8:2 ratio; use the 80,000 vectors of the training set to train a binary gender model with a shallow two-layer network and a cross-entropy-based softmax loss, and after training use the remaining 20,000 samples as test data for the gender model;
S3: deploy the two models generated in S1 and S2 on a server, provide face recognition and gender prediction services simultaneously, and return the identity and gender attributes of a face photo in real time;
the specific process of the step S3 is as follows:
s31: the main network generated in the step S12 is subjected to quantization optimization of the model, so that the corresponding calculated amount is reduced, and because the current main stream training mode uses 32-bit floating point numbers, the main stream training mode mainly adopts a 16-bit integer quantization mode, and the corresponding calculated amount is greatly reduced under the condition of very little influence on the precision;
s32: the models generated in S31 and S22 are deployed on a line, when a photo comes, a 512-dimensional feature vector is obtained through the model in S31, the identity of the feature vector is judged through comparing the similarity of the existing face vectors in the face database, the vector is input into the model in S22, the gender corresponding to the photo is returned, and meanwhile the identified identity information and gender information are returned to a client for display.
2. The gender prediction method based on high-dimensional features of the human face according to claim 1, wherein in step S12 the batch size of the model trained with the ArcFace loss is 512, the initial learning rate is set to 0.1, training runs for 180,000 iterations in total, the learning rate is reduced to 0.01 at 100,000 iterations and to 0.001 at 160,000 iterations, and training then ends.
3. The gender prediction method based on high-dimensional features of the human face according to claim 2, wherein in step S12, after training ends, the last fully connected layer is removed so that only the backbone remains, and a face photo computed through the backbone yields a 512-dimensional feature vector.
CN202010378403.5A 2020-05-07 2020-05-07 Gender prediction method based on high-dimensional characteristics of human face Active CN111753641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010378403.5A CN111753641B (en) 2020-05-07 2020-05-07 Gender prediction method based on high-dimensional characteristics of human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010378403.5A CN111753641B (en) 2020-05-07 2020-05-07 Gender prediction method based on high-dimensional characteristics of human face

Publications (2)

Publication Number Publication Date
CN111753641A CN111753641A (en) 2020-10-09
CN111753641B (en) 2023-07-18

Family

ID=72673167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010378403.5A Active CN111753641B (en) 2020-05-07 2020-05-07 Gender prediction method based on high-dimensional characteristics of human face

Country Status (1)

Country Link
CN (1) CN111753641B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095833A * 2014-05-08 2015-11-25 中国科学院声学研究所 Network construction method for face recognition, recognition method and system
CN106815566A * 2016-12-29 2017-06-09 天津中科智能识别产业技术研究院有限公司 Face retrieval method based on multi-task convolutional neural networks
CN107038429A * 2017-05-03 2017-08-11 四川云图睿视科技有限公司 Multi-task cascaded face alignment method based on deep learning
CN107247947A * 2017-07-07 2017-10-13 北京智慧眼科技股份有限公司 Face attribute recognition method and device
CN107918780A * 2017-09-01 2018-04-17 中山大学 Clothing category and attribute classification method based on key point detection
CN109522872A * 2018-12-04 2019-03-26 西安电子科技大学 Face recognition method, device, computer equipment and storage medium
CN109800648A * 2018-12-18 2019-05-24 北京英索科技发展有限公司 Face detection and recognition method and device based on face key point correction
CN109815801A * 2018-12-18 2019-05-28 北京英索科技发展有限公司 Face recognition method and device based on deep learning


Also Published As

Publication number Publication date
CN111753641A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN109635680B (en) Multitask attribute identification method and device, electronic equipment and storage medium
CN105930425A (en) Personalized video recommendation method and apparatus
CN113255694B (en) Training image feature extraction model and method and device for extracting image features
CN105426356B (en) A kind of target information recognition methods and device
CN110362677B (en) Text data category identification method and device, storage medium and computer equipment
US20220405607A1 (en) Method for obtaining user portrait and related apparatus
CN107357793B (en) Information recommendation method and device
KR102190897B1 (en) Method and Apparatus for analyzing fashion trend based on big data
CN113779308B (en) Short video detection and multi-classification method, device and storage medium
CN104572775B (en) Advertisement classification method, device and server
CN107169002A (en) A kind of personalized interface method for pushing and device recognized based on face
CN109903053B (en) Anti-fraud method for behavior recognition based on sensor data
CN112801425B (en) Method and device for determining information click rate, computer equipment and storage medium
CN110232331B (en) Online face clustering method and system
CN113793256A (en) Animation character generation method, device, equipment and medium based on user label
CN111177367A (en) Case classification method, classification model training method and related products
CN114387061A (en) Product pushing method and device, electronic equipment and readable storage medium
Zhang et al. Analysis of purchase history data based on a new latent class model for RFM analysis
CN111753641B (en) Gender prediction method based on high-dimensional characteristics of human face
CN113327132A (en) Multimedia recommendation method, device, equipment and storage medium
CN115983873B (en) User data analysis management system and method based on big data
CN116910294A (en) Image filter generation method based on emotion analysis
CN114820867B (en) Font generation method, font generation model training method and device
CN107590163B (en) The methods, devices and systems of text feature selection
CN112749711B (en) Video acquisition method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant