CN112766019A - Data processing method, information recommendation method and related device - Google Patents

Data processing method, information recommendation method and related device

Info

Publication number
CN112766019A
CN112766019A (application CN201911060037.2A)
Authority
CN
China
Prior art keywords
face
image data
face image
distance
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911060037.2A
Other languages
Chinese (zh)
Inventor
朱筱筱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201911060037.2A priority Critical patent/CN112766019A/en
Publication of CN112766019A publication Critical patent/CN112766019A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a data processing method, an information recommendation method and a related device, and relates to the technical field of computers. One embodiment of the method comprises: locating the position information of the feature points in the face region according to input face image data; calculating a distance feature vector of the face using the feature point position information, the distance feature vector being a vector formed by the distance features between feature points; determining a label value for each organ through a deep learning network according to the distance feature vector, wherein the negative label values among the label values are black label values; and calculating, from the distance feature vector and the black label values according to a preset rule, an evaluation value corresponding to the face image data, the evaluation value measuring the difference between the face image data and standard face image data corresponding to a standard score. This embodiment can score a face based on knowledge of facial aesthetics and aesthetic standards, can recommend information targeted to facial features, and is engaging and offers a good user experience.

Description

Data processing method, information recommendation method and related device
Technical Field
The invention relates to the technical field of computers, in particular to a data processing method and device and an information recommendation method and device.
Background
At present, various mobile applications based on face recognition are popular. Their main schemes fall into two types: the first takes a video stream as input, recognizes faces dynamically, locates facial key points, and applies dynamic stickers; the second has the user upload a photo, performs the face recognition and key point localization steps on the static image data stream, and then either estimates attributes such as the user's age and nationality algorithmically, or lets the user drag facial key points for portrait retouching.
Function points such as age and nationality estimation are too scattered to form a complete face scoring system. Existing face recognition applications are limited to portrait processing, do not integrate knowledge of facial aesthetics, and lack a scheme for recommending information targeted to facial features.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
currently, there is no face scoring scheme that integrates knowledge of facial aesthetics, so information cannot be recommended in a way targeted to facial features.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data processing method, an information recommendation method and related apparatus, which can score a face based on knowledge of facial aesthetics and aesthetic standards, and can recommend information targeted to facial features, making them engaging and offering a good user experience.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a data processing method.
A method of data processing, comprising: positioning the position information of the characteristic points in the face area according to the input face image data; calculating distance feature vectors of the human face by using the feature point position information, wherein the distance feature vectors are vectors formed by distance features between feature points; determining label values of the organs through a deep learning network according to the distance feature vectors, wherein negative label values in the label values are black label values; and calculating an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label value, wherein the evaluation value is used for evaluating the difference between the face image data and standard face image data corresponding to a standard score.
Optionally, the step of locating feature point position information in the face region according to the input face image data includes: performing principal component analysis processing on the input face image to obtain a characteristic face image; positioning a face area in the characteristic face image to obtain a face area image; and carrying out feature point positioning on the face region image through a first cascade convolution neural network to obtain feature point position information in the face region.
Optionally, the step of performing feature point positioning on the face region image through a first cascaded convolutional neural network to obtain feature point position information in the face region includes: inputting the face region image into a first layer network of the first cascade convolution neural network to obtain a minimum bounding box image of the face; obtaining a preset number of feature points of the minimum bounding box image through a second layer network of the first cascade convolution neural network; and in a third layer network of the first cascade convolutional neural network, cutting the organs of the minimum bounding box image by using the preset number of feature points, and outputting the position information of the feature points in the face region.
Optionally, the deep learning network is a trained second cascaded convolutional neural network, and the distance feature vector of the face and the evaluation value corresponding to the face image data are calculated by the second cascaded convolutional neural network. The step of calculating an evaluation value corresponding to the face image data according to a preset rule using the distance feature vector and the black label values comprises: summing the differences between each distance feature and its corresponding optimal distance, weighted by the scoring weight of each distance feature, to obtain a first deduction value; summing the black label values, weighted by the corresponding organ black label weights, to obtain a second deduction value; and deducting the first deduction value and the second deduction value from the standard value to obtain the evaluation value corresponding to the face image data, wherein the scoring weight of each distance feature, the organ black label weights and each optimal distance are obtained by training the second cascaded convolutional neural network.
Optionally, the step of training the second cascaded convolutional neural network comprises: collecting sample face image data required by training, and marking the sample face image data, wherein the marking comprises marking evaluation values and marking black label values on selected organs; and training the second cascade convolution neural network by using the marked sample face image data.
According to another aspect of the embodiments of the present invention, there is provided a method of recommending information.
A method of recommending information for a data processing result of the data processing method provided by the present invention, the data processing result including the black label values, the method comprising: searching for recommendation information using keywords matched to the organs carrying black labels, according to a matching relation between facial organs and keywords, and outputting the found recommendation information.
According to another aspect of the embodiments of the present invention, there is provided a method of recommending information.
A method for recommending information based on a data processing result of a data processing method according to the present invention, the data processing result including an evaluation value corresponding to the face image data, the method comprising: and outputting recommendation information corresponding to the numerical value interval according to the numerical value interval to which the evaluation value corresponding to the face image data belongs.
According to still another aspect of an embodiment of the present invention, there is provided a data processing apparatus.
A data processing apparatus comprising: the characteristic point positioning module is used for positioning the position information of the characteristic points in the face area according to the input face image data; the distance feature vector calculation module is used for calculating a distance feature vector of the human face by using the position information of the feature points, wherein the distance feature vector is a vector formed by distance features between the feature points; the label distribution determining module is used for determining label values of the organs through a deep learning network according to the distance feature vectors, wherein negative label values in the label values are black label values; and the evaluation value calculation module is used for calculating an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label value, wherein the evaluation value is used for evaluating the difference between the face image data and standard face image data corresponding to a standard score.
According to another aspect of the embodiments of the present invention, there is provided an apparatus for recommending information.
An apparatus for recommending information for a data processing result of the data processing method provided by the present invention, the data processing result including the black label values, the apparatus comprising: a first information recommendation module for searching for recommendation information using keywords matched to the organs carrying black labels, according to a matching relation between facial organs and keywords, and outputting the found recommendation information.
According to another aspect of the embodiments of the present invention, there is provided an apparatus for recommending information.
An apparatus for recommending information based on a data processing result of a data processing method according to the present invention, the data processing result including an evaluation value corresponding to the face image data, the apparatus comprising: and the second information recommendation module is used for outputting recommendation information corresponding to the numerical value interval according to the numerical value interval to which the evaluation value corresponding to the face image data belongs.
According to yet another aspect of an embodiment of the present invention, an electronic device is provided.
An electronic device, comprising: one or more processors; a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the data processing method or the method of recommending information provided by the present invention.
According to yet another aspect of an embodiment of the present invention, a computer-readable medium is provided.
A computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the data processing method or the method of recommending information provided by the present invention.
One embodiment of the above invention has the following advantages or benefits: the position information of the feature points in the face region is located according to the input face image data; a distance feature vector of the face is calculated using the feature point position information; a label value is determined for each organ through a deep learning network according to the distance feature vector, the negative label values among the label values being black label values; and an evaluation value corresponding to the face image data is calculated from the distance feature vector and the black label values according to a preset rule, the evaluation value measuring the difference between the face image data and standard face image data corresponding to a standard score. The face can thus be scored based on knowledge of facial aesthetics and aesthetic standards, information can be recommended targeted to the facial features, and the scheme is engaging and offers a good user experience.
Further effects of the above non-conventional alternatives are described below in connection with the specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a data processing method according to a first embodiment of the present invention;
FIG. 2 is a schematic flow chart of data processing according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of an interface for marking sample face image data according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating distance features of a human face according to an embodiment of the invention;
fig. 5 is a schematic diagram of the main steps of an information recommendation method according to a third embodiment of the present invention;
fig. 6 is a schematic diagram of the main steps of an information recommendation method according to a fourth embodiment of the present invention;
FIG. 7 is a schematic diagram of a face scoring and information recommendation interface according to an embodiment of the invention;
FIG. 8 is an interaction diagram of a system for face scoring and information recommendation according to a fifth embodiment of the present invention;
FIG. 9 is a schematic diagram of the main blocks of a data processing apparatus according to a sixth embodiment of the present invention;
fig. 10 is a schematic diagram of main blocks of an information recommendation apparatus according to a seventh embodiment of the present invention;
fig. 11 is a schematic diagram of main blocks of an information recommendation apparatus according to an eighth embodiment of the present invention;
FIG. 12 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 13 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
Fig. 1 is a schematic diagram of main steps of a data processing method according to a first embodiment of the present invention.
As shown in fig. 1, the data processing method according to the first embodiment of the present invention mainly includes steps S101 to S104 as follows.
Step S101: and positioning the position information of the characteristic points in the face area according to the input face image data.
Specifically, performing principal component analysis processing on an input face image to obtain a characteristic face image; positioning a face area in the characteristic face image to obtain a face area image; and carrying out feature point positioning on the obtained face region image through the first cascade convolution neural network to obtain feature point position information in the face region.
The step of performing feature point positioning on the obtained face region image through the first cascaded convolutional neural network to obtain feature point position information in the face region may specifically include: inputting the face region image into a first layer network of a first cascade convolution neural network to obtain a minimum bounding box image of the face; obtaining a preset number of feature points of the minimum bounding box image through a second layer network of the first cascade convolutional neural network; and in a third layer network of the first cascade convolutional neural network, cutting organs of the minimum bounding box image by using a preset number of characteristic points, and outputting the position information of the characteristic points in the face region. The three layers of the network of the first cascaded convolutional neural network are all Convolutional Neural Networks (CNNs).
Step S102: and calculating distance feature vectors of the human face by using the position information of the feature points, wherein the distance feature vectors are vectors formed by distance features between the feature points.
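A minimal sketch of this step is given below, assuming Euclidean distances between hypothetical feature-point pairs; the embodiment does not fix which point pairs form the vector, so the pairs here are illustrative only.

```python
import math

def distance_feature_vector(points, pairs):
    """Form a distance feature vector from the Euclidean distances
    between selected pairs of located feature points.

    points: list of (x, y) feature-point coordinates
    pairs:  list of (i, j) index pairs whose distances form the vector
    """
    return [math.dist(points[i], points[j]) for i, j in pairs]

# Hypothetical example: three feature points, two distance features.
points = [(0.0, 0.0), (3.0, 4.0), (6.0, 0.0)]
pairs = [(0, 1), (1, 2)]
print(distance_feature_vector(points, pairs))  # [5.0, 5.0]
```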
Step S103: and determining the label value of each organ through a deep learning network according to the distance feature vector, wherein the negative label value in the label values is a black label value.
The deep learning network may be a trained second cascaded convolutional neural network.
The label value of each organ may be 1 or -1, where -1 is the black label of one of the five sense organs (eyes, nose, mouth, skin, eyebrows). For example, if the label of the eyes is a black label with value -1, the eyes are an organ that lowers the evaluation value of the face image.
Step S104: and calculating an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label value.
The evaluation value measures the difference between the face image data and standard face image data, i.e. face image data corresponding to a standard score, which serves as the standard of facial beauty. The evaluation value corresponding to a given piece of face image data can therefore be used to evaluate the beauty of the face it depicts.
The steps S102, S103 and S104 may be performed by the trained second cascaded convolutional neural network. Inputting the position information of the feature points into a first layer network of a second cascade convolution neural network to obtain a distance feature vector of the face; inputting the distance characteristic vector of the human face into a second layer network of a second cascade convolution neural network to obtain the label value distribution of each organ; and inputting the distance characteristic vector of the human face and the label value distribution of each organ into a third-layer network of the second cascade convolutional neural network to obtain an evaluation value corresponding to the human face image data. The three layers of the network of the second cascaded convolutional neural network are all Convolutional Neural Networks (CNNs).
The step of training the second cascaded convolutional neural network may include: collecting sample face image data required by training, and marking the sample face image data, wherein the marking comprises marking evaluation values and marking black label values on selected organs; and training a second cascade convolution neural network by using the marked sample face image data. For organs that are not labeled with a black label value (-1), the label value may be defaulted to 1.
Step S104 specifically includes: summing the differences between each distance feature in the distance feature vector and its corresponding optimal distance, weighted by the scoring weight of each distance feature, to obtain a first deduction value; summing the black label values, weighted by the corresponding organ black label weights, to obtain a second deduction value; and deducting the first deduction value and the second deduction value from the standard value to obtain the evaluation value corresponding to the face image data.
The scoring weight, the organ black label weight and each optimal distance of each distance feature are obtained by training the second cascade convolution neural network. The optimal distance corresponding to the distance features is the optimal distance between the feature points.
Common distance features can be extracted from the sample face image data whose labeled evaluation value is above a preset value (for example, 90 points), using the evaluation values assigned during labeling; the second cascade convolutional neural network is then trained to obtain an optimal distance feature vector, which comprises the optimal distances between feature points.
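The preset scoring rule of step S104 can be sketched as follows. The scoring weights, organ black label weights and optimal distances are learned by the second cascade convolutional neural network in the actual scheme; the constants below are purely hypothetical placeholders.

```python
def evaluation_value(distances, optimal, dist_weights,
                     labels, black_weights, standard=100.0):
    """Score a face per the preset rule: start from the standard value,
    deduct the weighted deviations of each distance feature from its
    optimal distance (first deduction value), then deduct the weighted
    black labels of the organs (second deduction value).

    labels: per-organ label values, 1 (normal) or -1 (black label)
    """
    first = sum(w * abs(d - o)
                for d, o, w in zip(distances, optimal, dist_weights))
    second = sum(bw for lab, bw in zip(labels, black_weights) if lab == -1)
    return standard - first - second

# Hypothetical weights and optimal distances.
score = evaluation_value(
    distances=[5.0, 5.5], optimal=[5.0, 5.0], dist_weights=[2.0, 2.0],
    labels=[1, -1, 1, 1, 1],          # eyes, nose, mouth, skin, eyebrows
    black_weights=[4.0, 3.0, 3.0, 2.0, 2.0])
print(score)  # 100 - (2.0 * 0.5) - 3.0 = 96.0
```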
Fig. 2 is a schematic flow chart of data processing according to a second embodiment of the invention.
The data processing flow of the second embodiment of the present invention may include steps S201 to S206 as follows.
Step S201: and collecting sample face image data required by training, and marking the sample face image data.
Sample face image data can be collected from a given face image library, where each face corresponds to its own picture set containing several pictures of that face. Because face scoring is somewhat subjective, the sample size should be expanded as much as possible, and the face image library should be labeled by as many people as possible.
The labeling covers both the whole face and the five sense organs. Labeling the face means scoring its overall appearance, with scores ranging from 0 to 100 points. For the five sense organs, black labels for the eyes, nose, mouth, skin and eyebrows are offered for the labeler to select; the black label value is the negative label value among the label values. For example, when the labeler considers that a certain organ lowers the aesthetic score of a sample face, that organ is marked with a black label value.
A schematic diagram of the interface for labeling sample face image data is shown in fig. 3. The labeler enters a face score (a number between 0 and 100) and selects the organs (eyes, nose, mouth, skin, eyebrows) deemed to lower the score; the selected organs are labeled with black labels, e.g. -1, while unselected organs default to 1.
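One way to represent a single labeled sample is a record holding the face score and the per-organ label values, with -1 for the organs the labeler selected and 1 by default; the function and field names below are hypothetical, not part of the patent.

```python
ORGANS = ("eyes", "nose", "mouth", "skin", "eyebrows")

def make_label_record(score, black_organs):
    """Build one annotation record: a 0-100 face score plus per-organ
    label values (-1 for organs the labeler selected, 1 otherwise)."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return {"score": score,
            "labels": {o: (-1 if o in black_organs else 1) for o in ORGANS}}

rec = make_label_record(85, {"nose"})
print(rec["labels"]["nose"], rec["labels"]["eyes"])  # -1 1
```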
Step S202: and performing Principal Component Analysis (PCA) processing on the face image of the user to be evaluated to obtain a characteristic face image.
Principal component analysis (PCA) reduces the magnitude of the data handled during face data processing and CNN (convolutional neural network) training, thereby reducing the computational difficulty.
The PCA processing comprises the following steps: preprocess the face images; read in the face image library and train to form a feature subspace; project the training and test images onto the feature subspace; and select a distance function for recognition, continually adjusting the feature subspace so that the average mean-square projection error is as small as possible.
The PCA processing of the face image of the user to be evaluated proceeds as follows: vectorize each face picture in the face image library to obtain a column vector per picture; average the column vectors of all face pictures to obtain a mean vector; subtract the mean vector from each column vector to obtain the difference vectors, which together form a matrix A; compute the covariance matrix Σ of A; and compute the eigenvectors and eigenvalues of Σ by singular value decomposition. The eigenvector matrix formed by the eigenvectors of the k largest eigenvalues is the feature subspace. Multiplying (i.e. projecting) the n-dimensional column vector of the face image of the user to be evaluated by this feature subspace (the eigenvector matrix) yields a k-dimensional column vector, and the face image corresponding to this k-dimensional column vector is the characteristic face image.
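Under the assumption of a small toy image library (each "image" flattened to one column), the projection steps above can be sketched with NumPy; the SVD of the difference matrix A yields the eigenvectors of the covariance matrix A·Aᵀ directly.

```python
import numpy as np

def eigenface_subspace(images, k):
    """Build the k-dimensional eigenface subspace from a library of
    flattened face images (one image per column of `images`)."""
    mean = images.mean(axis=1, keepdims=True)
    A = images - mean                      # difference vectors
    # Left singular vectors of A are eigenvectors of A @ A.T:
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return mean, U[:, :k]                  # top-k eigenvectors

def project(face, mean, subspace):
    """Project an n-dimensional face vector onto the k-dim subspace."""
    return subspace.T @ (face - mean.ravel())

# Toy library: four 6-pixel "images" as columns.
rng = np.random.default_rng(0)
library = rng.random((6, 4))
mean, W = eigenface_subspace(library, k=2)
feat = project(library[:, 0], mean, W)
print(feat.shape)  # (2,)
```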
Step S203: and positioning the face area in the characteristic face image to obtain a face area image.
An SVM (support vector machine) classifier can be used to locate the face region in the characteristic face image. In the embodiment of the invention, cross-validation is used to determine the penalty parameter C of the SVM, and a radial basis function (RBF) is preferably used as the kernel to construct a multi-class SVM classifier. In the training stage of the multi-class SVM, binary SVMs are constructed from the samples; the training result (an SVMStruct structure) of each binary SVM is stored in a cell array CASVMStruct, and finally all information required for multi-class SVM classification is stored in the structure multiSVMStruct and returned. Applying the multiSVMStruct training result to a face sample performs face recognition, thereby locating the face region in the picture sample in preparation for facial feature point localization: specifically, the training result confirms whether a face is present in the sample, locates the position of the face in the picture, and captures the face region image at the located position. In this way, the face region in the characteristic face image of the user to be evaluated is located and the face region image obtained.
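The embodiment's MATLAB-style multi-class SVM is not reproduced here, but the RBF kernel it prefers can be sketched directly; `gamma` stands in for the kernel width parameter, which, like the penalty parameter C, would be chosen by cross-validation.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel K(x, y) = exp(-gamma * ||x - y||^2),
    the kernel preferred for the multi-class SVM in this embodiment."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # 1.0 (identical vectors)
print(round(rbf_kernel([0.0, 0.0], [1.0, 1.0]), 4))  # 0.3679 (= exp(-1))
```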
Step S204: and carrying out feature point positioning on the face region image of the user to be evaluated through the first cascade convolution neural network to obtain feature point position information in the face region.
The first cascaded convolutional neural network (DCNN) includes three levels of CNNs (convolutional neural networks), respectively denoted as the first level (level 1), the second level (level 2), and the third level (level 3). The feature point localization mainly covers 51 feature points of the facial organs within the face contour. The network input and network output of each level of CNN are as follows:
level 1):
network input: a face region image;
network output: the minimum bounding box of the 51 feature points, given by the coordinates of the upper-left and lower-right corners of a rectangle, i.e., a 4-dimensional vector;
level 2):
network input: a minimum bounding box image;
network output: 51 feature points, corresponding to 102 neurons;
level 3):
network input: facial-organ pictures cropped using the 51 points from level 2. The cropped facial organs need to be trained and predicted separately, using the labeled sample face image data.
network output: finer feature points of each organ. The position information of the feature points in the face region of the user to be evaluated is obtained using the trained first cascaded convolutional neural network.
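The three-level cascade can be sketched as a pipeline in which each level consumes the previous level's output. The stub functions below are placeholders for the trained CNNs, and the per-organ point groupings are illustrative assumptions:

```python
import numpy as np

# Stub "networks" standing in for the three trained CNN levels; a real
# implementation would run trained convolutional models here.
def level1_bbox(face_region):
    """Level 1: face region image -> 4-dim minimal bounding box (x1, y1, x2, y2)."""
    h, w = face_region.shape[:2]
    return np.array([0, 0, w - 1, h - 1], dtype=float)

def level2_points(bbox_crop):
    """Level 2: bounding-box crop -> 51 coarse feature points (102 values)."""
    rng = np.random.default_rng(3)
    return rng.random((51, 2)) * bbox_crop.shape[1]

def level3_refine(organ_crops):
    """Level 3: per-organ patches -> finer feature points (identity stub here)."""
    return {name: pts.copy() for name, pts in organ_crops.items()}

def locate_feature_points(face_region):
    x1, y1, x2, y2 = level1_bbox(face_region).astype(int)
    crop = face_region[y1:y2 + 1, x1:x2 + 1]
    coarse = level2_points(crop)
    # Hypothetical grouping of the 51 points into organ patches.
    organs = {"eyes": coarse[:12], "brows": coarse[12:22],
              "nose": coarse[22:31], "mouth": coarse[31:51]}
    return level3_refine(organs)

pts = locate_feature_points(np.zeros((64, 64)))
assert sum(len(v) for v in pts.values()) == 51
```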
Step S205: and constructing a face scoring model by using the second cascade convolution neural network, and training the face scoring model.
The face scoring model constructed using the second cascaded convolutional neural network (DCNN) comprises three levels of CNNs (convolutional neural networks), respectively denoted as the first level (level 1), the second level (level 2), and the third level (level 3). The network input and network output of each level of CNN are as follows:
level 1):
network input: position information of characteristic points in the face area;
network output: the distance feature vector of the human face;
In this CNN, the seventeen dimensions required by the aesthetic degree score are calculated from the position information of the feature points in the face region, yielding a 17×1 distance feature vector for each sample face, of which 11 are transverse distance features and 6 are longitudinal distance features; the distance feature vector is a vector formed by the distance features between the feature points.
The distance feature is also called a feature quantity, i.e., a distance between facial feature points of a human face. The distances between facial feature points have an important influence on the beauty of the face. Fig. 4 is a schematic diagram of the distance features of the human face, showing F1 to F17, 17 distance features in total, which constitute the distance feature vector of the face. A detailed description of the 17 distance features is given in Table 1. The parameters of the convolutional neural network are trained continuously, using the position information of the feature points in the face region of the sample face image data, until more accurately predicted distance feature vectors of the face are obtained.
TABLE 1
[In the original publication, Table 1 is reproduced only as images; it gives the detailed description of the 17 distance features F1 to F17.]
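Given located feature points, the distance feature vector reduces to Euclidean distances between selected landmark pairs. Since the exact F1–F17 pairings of Table 1 are not recoverable here, the pairs below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical landmark-index pairs standing in for F1..F17; the actual
# pairings are defined in Table 1 of the original document.
FEATURE_PAIRS = [(i, i + 1) for i in range(17)]

def distance_feature_vector(landmarks, pairs=FEATURE_PAIRS):
    """landmarks: (51, 2) array of (x, y) feature-point positions.
    Returns the 17-dimensional distance feature vector."""
    pts = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in pairs])

landmarks = np.random.default_rng(2).random((51, 2))
fv = distance_feature_vector(landmarks)
assert fv.shape == (17,)
```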
Level 2):
network input: the distance feature vector of the human face, comprising 17 face feature quantities;
network output: the label value distribution of each organ, obtained from the distance feature vector over the dimensions eyes, nose, mouth, skin, and eyebrows; each label value is 1 or -1, where -1 represents a black label for the corresponding facial organ.
The parameters of the convolutional neural network are trained continuously, using the black label values annotated on sample face images having various distance feature vectors, until more accurately predicted label value distributions of each organ are obtained.
Level 3):
network input: the distance feature vector and black label values of the human face;
network output: the face aesthetic degree score.
The distance feature vectors of the human face and the black labels of the facial organs are first used directly for separate training. Combined with the face scores (i.e., aesthetic degree scores) annotated during the labeling of the sample face image data, the common characteristics of the distance features are extracted from sample face image data whose face scores exceed 90 points, and network training then yields the optimal distance feature vector, comprising 17 optimal distances (corresponding to F1-F17 in Table 1).
When calculating the aesthetic degree score, the scoring weights corresponding to the 17 face distance features are denoted θ1 … θ17, the differences between the 17 distance features and the optimal distances are β1 … β17, and the black label weights of the five facial organs are φ1 … φ5.
The deduction introduced by the differences between the 17 face distance features and the optimal distances is: θ1×β1 + θ2×β2 + … + θ17×β17.
The deduction introduced by the black labels of the facial organs is: φ1 + φ2 + … + φ5 (assuming all five organs carry black labels, each black label being normalized to a value of 1).
Taking 100 points as the standard score (i.e., full marks) and subtracting the two deductions above from it, the face aesthetic degree score is 100 − (θ1×β1 + θ2×β2 + … + θ17×β17) − (φ1 + φ2 + … + φ5).
In the training stage, parameters such as the scoring weight of each distance feature and the organ black label weights are trained using the face scores and the black label values annotated on the sample face image data.
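The scoring rule above (standard score minus the distance-difference deduction and the black-label deduction) can be sketched as follows; the weights and inputs are illustrative values, not trained parameters:

```python
import numpy as np

def aesthetic_score(beta, theta, black_labels, phi, full_score=100.0):
    """beta:  differences between the 17 distance features and the optima.
    theta: scoring weights for the 17 distance features.
    black_labels: 5 organ labels in {1, -1}; -1 marks a black label.
    phi:   black-label weights for the 5 organs.
    """
    distance_penalty = float(np.dot(theta, beta))
    # Each black-labelled organ (label -1, normalized to 1) deducts phi_i.
    label_penalty = float(np.dot(phi, (np.asarray(black_labels) == -1)))
    return full_score - distance_penalty - label_penalty

score = aesthetic_score(
    beta=np.full(17, 0.5), theta=np.full(17, 2.0),
    black_labels=[1, -1, 1, 1, -1], phi=[3, 4, 5, 2, 1],
)
assert score == 100.0 - 17.0 - 5.0  # distance penalty 17, label penalty 4+1
```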
Step S206: and inputting the position information of the feature points in the face area of the user to be evaluated into the trained face scoring model so as to output the face aesthetic degree score of the user.
The algorithm for calculating the user's face aesthetic degree score is described in detail in step S205 and is not repeated here. The embodiment of the invention calculates the face aesthetic degree score using the distance feature vector and the black label values of the face, and can score the face based on knowledge of facial aesthetics and aesthetic standards, so that the scoring result accords with public aesthetics and the user experience is improved.
Fig. 5 is a schematic diagram of main steps of an information recommendation method according to a third embodiment of the present invention.
The information recommendation method according to the third embodiment of the present invention mainly includes steps S501 to S504 as follows.
Step S501: and positioning the position information of the characteristic points in the face area according to the input face image data.
Step S502: and calculating distance feature vectors of the human face by using the position information of the feature points, wherein the distance feature vectors are vectors formed by distance features between the feature points.
Step S503: and determining the label value of each organ through a deep learning network according to the distance feature vector, wherein the negative label value in the label values is a black label value.
Step S504: and searching recommendation information by using the keywords matched with the organs with the black labels according to the matching relation between the human face organs and the keywords, and outputting the searched recommendation information.
The recommended information may be, for example, information on products such as skin care and makeup products related to the organs having black labels, advertisement information displayed in the form of text, pictures, links, and the like, or skin care and makeup advice given for the user's facial features.
Taking recommended product information as an example, black labels can be mapped to the application sites (site keywords) of cosmetic SKU (stock keeping unit) products: a mapping table is established, and the recommended SKU products are queried by table lookup.
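A minimal sketch of such a table lookup, with a hypothetical organ-to-keyword mapping (the keyword names are assumptions, not an actual cosmetic catalog):

```python
# Hypothetical mapping from black-labelled organs to SKU site keywords;
# a real system would build this table from the cosmetic catalog.
ORGAN_TO_KEYWORD = {
    "eyes": "eye cream", "nose": "pore strip", "mouth": "lip balm",
    "skin": "moisturizer", "eyebrows": "brow pencil",
}

def recommend_keywords(black_labels):
    """black_labels: dict organ -> label value (1 or -1).
    Returns the site keywords for every black-labelled organ."""
    return [ORGAN_TO_KEYWORD[o] for o, v in black_labels.items() if v == -1]

kws = recommend_keywords(
    {"eyes": -1, "nose": 1, "skin": -1, "mouth": 1, "eyebrows": 1})
assert kws == ["eye cream", "moisturizer"]
```

The returned keywords would then drive the SKU search whose results are output as recommendation information.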
As an alternative embodiment, after step S503, the evaluation value corresponding to the face image data may also be calculated according to a preset rule by using the distance feature vector and the black label value.
Alternatively, step S504 may be executed when the evaluation value corresponding to the face image data is greater than a preset threshold, and skipped when the evaluation value is less than or equal to the preset threshold.
The evaluation value corresponding to the face image data may be used to evaluate the beauty of the face corresponding to the face image data.
Fig. 6 is a schematic diagram of main steps of an information recommendation method according to a fourth embodiment of the present invention.
The information recommendation method according to the fourth embodiment of the present invention mainly includes steps S601 to S605 as follows.
Step S601: positioning the position information of the characteristic points in the face area according to the input face image data;
step S602: calculating distance feature vectors of the human face by using the position information of the feature points, wherein the distance feature vectors are vectors formed by distance features between the feature points;
step S603: determining label values of all organs through a deep learning network according to the distance feature vectors, wherein negative label values in the label values are black label values;
step S604: calculating an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label value;
step S605: and outputting recommendation information corresponding to the numerical value interval according to the numerical value interval to which the evaluation value corresponding to the face image data belongs.
The input face image data can be a portrait of the user whose face aesthetic degree is to be scored.
The recommended information may be information on products such as skin care products and makeup products related to organs having black labels, or advertisement information displayed in the form of characters, pictures, links, and the like.
The evaluation value corresponding to the face image data may be used to evaluate the beauty of the face corresponding to the face image data.
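A minimal sketch of step S605 — mapping an evaluation value to the recommendation of the numerical interval it falls in. The interval bounds and recommendation categories are hypothetical:

```python
import bisect

# Hypothetical interval edges on the 0-100 score and their recommendations.
BOUNDS = [60, 80, 90]
MESSAGES = ["repair routine", "targeted care", "maintenance set", "premium picks"]

def recommend_for_score(score):
    """Return the recommendation of the interval the evaluation value belongs to."""
    return MESSAGES[bisect.bisect_right(BOUNDS, score)]

assert recommend_for_score(55) == "repair routine"   # interval (-inf, 60]
assert recommend_for_score(85) == "maintenance set"  # interval (80, 90]
```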
Fig. 7 is a schematic view of a face scoring and information recommendation interface according to an embodiment of the present invention. Taking recommended product information as an example, according to the above embodiments, the face image provided by the user can be scored, and cosmetics and skin care products can be recommended to the user according to the organ black labels or the final score, thereby realizing personalized recommendation.
According to the data processing method and the information recommendation method of the embodiments of the invention, the face aesthetic degree score can be calculated from the input face image data and personalized information recommendation can be performed according to the user's characteristics. This is highly engaging, provides a good user experience, and attracts users; the obtained scoring result can also be shared with other users or on a social platform, attracting more users and improving the order conversion rate of commodities (or the access rate of information such as advertisements).
Fig. 8 is an interaction diagram of a system for face scoring and information recommendation according to a fifth embodiment of the present invention.
The system for face scoring and information recommendation in the fifth embodiment of the invention comprises a client and a server, wherein:
a user is photographed in real time through the client, or a photo selected by the user is read, and the user's photo is uploaded to the server;
the server performs PCA dimension reduction on the user's picture, positions a face region, determines face feature point position information, and returns the face feature point position information (coordinates) to the client;
the client displays the position information of the face feature points, calculates face aesthetic degree scores according to the position information of the face feature points, and sends the face aesthetic degree scores to the server;
the server searches keywords according to the face aesthetic degree score and returns the found recommendation information;
the client displays the face aesthetic degree score and the recommendation information through the user interface, and displays a sharing score entrance so that the user can share the face aesthetic degree score.
As an exemplary application scenario, the system of the embodiment of the present invention may be embedded in an e-commerce platform system, which can collect the user's photo when the user logs in to the platform by face recognition, without separately taking a real-time picture of the user or reading a photo selected by the user. A photo-upload scoring function can also be provided to the user: a background interface of the e-commerce platform system can be called to process the user's picture file, including the face region and face key points, and detailed parameters such as facial-organ coordinates and the scoring result are returned through the interface. Through the embodiment of the invention, the user can receive highly personalized product recommendation services while enjoying the system's interactive appeal, and social interaction is enhanced.
Fig. 9 is a schematic diagram of main blocks of a data processing apparatus according to a sixth embodiment of the present invention.
The data processing apparatus 900 according to the sixth embodiment of the present invention mainly includes: a feature point positioning module 901, a distance feature vector calculation module 902, a label distribution determination module 903, and an evaluation value calculation module 904.
The feature point positioning module 901 is configured to position feature point position information in a face region according to input face image data.
A distance feature vector calculating module 902, configured to calculate a distance feature vector of the human face by using the feature point position information, where the distance feature vector is a vector formed by distance features between feature points.
And the label distribution determining module 903 is configured to determine label values of the organs through a deep learning network according to the distance feature vector, where a negative label value in the label values is a black label value.
And an evaluation value calculating module 904, configured to calculate, according to a preset rule, an evaluation value corresponding to the face image data by using the distance feature vector and the black label value, where the evaluation value is used to evaluate a difference between the face image data and standard face image data corresponding to the standard score.
Fig. 10 is a schematic diagram of main blocks of an information recommendation apparatus according to a seventh embodiment of the present invention.
The information recommendation device 1000 according to the seventh embodiment of the present invention mainly includes: the system comprises a feature point positioning module 1001, a distance feature vector calculation module 1002, a label distribution determination module 1003 and a first information recommendation module 1004.
The feature point positioning module 1001, the distance feature vector calculation module 1002, and the label distribution determination module 1003 correspond to and have the same function as the feature point positioning module 901, the distance feature vector calculation module 902, and the label distribution determination module 903, respectively, and therefore, the three modules refer to the descriptions of the feature point positioning module 901, the distance feature vector calculation module 902, and the label distribution determination module 903, and are not described herein again.
And the first information recommending module 1004 is used for searching the recommendation information by using the keywords matched with the organs with the black labels according to the matching relationship between the human face organs and the keywords, and outputting the searched recommendation information.
As an alternative embodiment, the information recommendation device 1000 may further include an evaluation value calculation module 1005, which is functionally identical to the evaluation value calculation module 904, and thus, will not be described again.
Also, alternatively, the corresponding function of the first information recommendation module 1004 may be executed if the evaluation value corresponding to the face image data obtained by the evaluation value calculation module 1005 is greater than a preset threshold, and the corresponding function of the first information recommendation module 1004 may not be executed if the evaluation value is less than or equal to the preset threshold.
Fig. 11 is a schematic diagram of main blocks of an information recommendation apparatus according to an eighth embodiment of the present invention.
The information recommendation apparatus 1100 according to the eighth embodiment of the present invention mainly includes: a feature point positioning module 1101, a distance feature vector calculation module 1102, a label distribution determination module 1103, an evaluation value calculation module 1104, and a second information recommendation module 1105.
The feature point positioning module 1101, the distance feature vector calculation module 1102, the label distribution determination module 1103, and the evaluation value calculation module 1104 are respectively corresponding to and have the same functions as the feature point positioning module 901, the distance feature vector calculation module 902, the label distribution determination module 903, and the evaluation value calculation module 904, and therefore, the four modules may refer to the descriptions of the feature point positioning module 901, the distance feature vector calculation module 902, the label distribution determination module 903, and the evaluation value calculation module 904, and are not described herein again.
The second information recommending module 1105 is configured to output recommendation information corresponding to a numerical value interval according to the numerical value interval to which the evaluation value corresponding to the face image data belongs.
The detailed implementation contents of the data processing device and the information recommendation device in the embodiment of the invention are already described in detail in the data processing method and the information recommendation method, so that repeated contents are not described herein.
Fig. 12 shows an exemplary system architecture 1200 to which the data processing method and the information recommendation method or the data processing apparatus and the information recommendation apparatus of the embodiments of the present invention can be applied.
As shown in fig. 12, the system architecture 1200 may include terminal devices 1201, 1202, 1203, a network 1204 and a server 1205. Network 1204 is the medium used to provide communication links between terminal devices 1201, 1202, 1203 and server 1205. Network 1204 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal devices 1201, 1202, 1203 to interact with a server 1205 through a network 1204 to receive or send messages, etc. The terminal devices 1201, 1202, 1203 may have installed thereon various messenger client applications such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 1201, 1202, 1203 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 1205 may be a server that provides various services, such as a background management server (for example only) that supports shopping websites browsed by users using the terminal devices 1201, 1202, 1203. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the data processing method and the information recommendation method provided by the embodiment of the present invention may be executed by the server 1205 or the terminal devices 1201, 1202, and 1203, and accordingly, the data processing apparatus and the information recommendation apparatus are generally disposed in the server 1205 or the terminal devices 1201, 1202, and 1203.
It should be understood that the number of terminal devices, networks, and servers in fig. 12 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 13, shown is a block diagram of a computer system 1300 suitable for use in implementing a terminal device or server of an embodiment of the present application. The terminal device or the server shown in fig. 13 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 13, the computer system 1300 includes a Central Processing Unit (CPU)1301 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)1302 or a program loaded from a storage section 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, various programs and data necessary for the operation of the system 1300 are also stored. The CPU 1301, the ROM 1302, and the RAM 1303 are connected to each other via a bus 1304. An input/output (I/O) interface 1305 is also connected to bus 1304.
The following components are connected to the I/O interface 1305: an input portion 1306 including a keyboard, a mouse, and the like; an output section 1307 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 1308 including a hard disk and the like; and a communication section 1309 including a network interface card such as a LAN card, a modem, or the like. The communication section 1309 performs communication processing via a network such as the internet. A drive 1310 is also connected to the I/O interface 1305 as needed. A removable medium 1311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1310 as necessary, so that a computer program read out therefrom is mounted into the storage portion 1308 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications component 1309 and/or installed from removable media 1311. The computer program executes the above-described functions defined in the system of the present application when executed by a Central Processing Unit (CPU) 1301.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprises a feature point positioning module, a distance feature vector calculation module, a label distribution determination module and an evaluation value calculation module. The names of these modules do not constitute a limitation to the module itself in some cases, and for example, the feature point location module may also be described as a "module for locating feature point location information in a face region from input face image data".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: positioning the position information of the characteristic points in the face area according to the input face image data; calculating distance feature vectors of the human face by using the feature point position information, wherein the distance feature vectors are vectors formed by distance features between feature points; determining label values of the organs through a deep learning network according to the distance feature vectors, wherein negative label values in the label values are black label values; and calculating an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label value, wherein the evaluation value is used for evaluating the difference between the face image data and standard face image data corresponding to a standard score.
According to the technical scheme of the embodiment of the invention, the position information of the characteristic point in the face area is positioned according to the input face image data; calculating distance feature vectors of the human face by using the position information of the feature points; determining label values of all organs through a deep learning network according to the distance feature vectors, wherein negative label values in the label values are black label values; and calculating an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label value, wherein the evaluation value is used for evaluating the difference between the face image data and standard face image data corresponding to the standard score. The face can be scored based on face aesthetic related knowledge and aesthetic standards, information can be specifically recommended according to face features, interestingness is high, and user experience is good.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A data processing method, comprising:
positioning the position information of the characteristic points in the face area according to the input face image data;
calculating distance feature vectors of the human face by using the feature point position information, wherein the distance feature vectors are vectors formed by distance features between feature points;
determining label values of the organs through a deep learning network according to the distance feature vectors, wherein negative label values in the label values are black label values;
and calculating an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label value, wherein the evaluation value is used for evaluating the difference between the face image data and standard face image data corresponding to a standard score.
2. The method of claim 1, wherein the step of locating position information of feature points in the face region according to the input face image data comprises:
performing principal component analysis on the input face image to obtain an eigenface image;
locating a face region in the eigenface image to obtain a face region image;
and performing feature point localization on the face region image through a first cascaded convolutional neural network to obtain the feature point position information in the face region.
3. The method according to claim 2, wherein the step of performing feature point localization on the face region image through the first cascaded convolutional neural network to obtain the feature point position information in the face region comprises:
inputting the face region image into a first-layer network of the first cascaded convolutional neural network to obtain a minimum bounding-box image of the face;
obtaining a preset number of feature points of the minimum bounding-box image through a second-layer network of the first cascaded convolutional neural network;
and in a third-layer network of the first cascaded convolutional neural network, cropping the organs of the minimum bounding-box image by using the preset number of feature points, and outputting the feature point position information in the face region.
4. The method according to claim 1, wherein the deep learning network is a trained second cascaded convolutional neural network, and the step of calculating an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label values comprises:
summing the differences between each distance feature and its corresponding optimal distance, weighted by the scoring weight of each distance feature, to obtain a first deduction value;
summing the black label values, weighted by the corresponding organ black-label weights, to obtain a second deduction value;
and subtracting the first deduction value and the second deduction value from the standard score to obtain the evaluation value corresponding to the face image data, wherein the scoring weight of each distance feature, the organ black-label weights, and the optimal distances are obtained by training the second cascaded convolutional neural network.
5. The method of claim 4, wherein the step of training the second cascaded convolutional neural network comprises:
collecting sample face image data required for training and annotating the sample face image data, wherein the annotation comprises labeling evaluation values and labeling black label values for selected organs;
and training the second cascaded convolutional neural network by using the annotated sample face image data.
6. A method of recommending information by using the data processing result of any one of claims 1 to 5, the data processing result including the black label values, the method comprising:
searching for recommendation information by using keywords matched to the organs carrying black labels, according to a matching relationship between facial organs and keywords, and outputting the found recommendation information.
7. A method of recommending information by using the data processing result of any one of claims 1 to 5, the data processing result including the evaluation value corresponding to the face image data, the method comprising:
outputting recommendation information corresponding to the value interval to which the evaluation value corresponding to the face image data belongs.
8. A data processing apparatus, comprising:
a feature point positioning module configured to locate position information of feature points in a face region according to input face image data;
a distance feature vector calculation module configured to calculate a distance feature vector of the face by using the feature point position information, wherein the distance feature vector is a vector formed by distance features between the feature points;
a label determination module configured to determine label values of each facial organ through a deep learning network according to the distance feature vector, wherein negative label values among the label values are black label values;
and an evaluation value calculation module configured to calculate an evaluation value corresponding to the face image data according to a preset rule by using the distance feature vector and the black label values, wherein the evaluation value is used for evaluating the difference between the face image data and standard face image data corresponding to a standard score.
9. An apparatus for recommending information by using the data processing result of any one of claims 1 to 5, the data processing result including the black label values, the apparatus comprising:
a first information recommendation module configured to search for recommendation information by using keywords matched to the organs carrying black labels, according to a matching relationship between facial organs and keywords, and to output the found recommendation information.
10. An apparatus for recommending information by using the data processing result of any one of claims 1 to 5, the data processing result including the evaluation value corresponding to the face image data, the apparatus comprising:
a second information recommendation module configured to output recommendation information corresponding to the value interval to which the evaluation value corresponding to the face image data belongs.
11. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
12. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
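The preset scoring rule of claim 4 amounts to subtracting two weighted deduction terms from a standard score. The sketch below illustrates that arithmetic; the weights, optimal distances, and the standard score of 100 are invented placeholders, since the patent obtains these values by training the second cascaded convolutional neural network.

```python
def evaluation_value(distances, optimal, score_weights,
                     black_labels, black_weights, standard=100.0):
    """Score a face as in claim 4: subtract two deduction terms from a standard score.

    First deduction: scoring-weighted sum of |distance feature - optimal distance|.
    Second deduction: black-label-weighted sum of the black (negative) label values.
    """
    first = sum(w * abs(d - o)
                for d, o, w in zip(distances, optimal, score_weights))
    second = sum(w * b for b, w in zip(black_labels, black_weights))
    return standard - first - second

# Placeholder numbers for illustration only.
score = evaluation_value(
    distances=[0.66, 0.42, 0.42],
    optimal=[0.60, 0.40, 0.40],
    score_weights=[50.0, 40.0, 40.0],
    black_labels=[1.0, 0.0],      # e.g. one organ flagged with a black label
    black_weights=[5.0, 3.0],
)
```

With these placeholder numbers the first deduction is 50·0.06 + 40·0.02 + 40·0.02 = 4.6 and the second is 5.0, giving an evaluation value of 90.4.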
CN201911060037.2A 2019-11-01 2019-11-01 Data processing method, information recommendation method and related device Pending CN112766019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911060037.2A CN112766019A (en) 2019-11-01 2019-11-01 Data processing method, information recommendation method and related device


Publications (1)

Publication Number Publication Date
CN112766019A 2021-05-07

Family

ID=75692175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911060037.2A Pending CN112766019A (en) 2019-11-01 2019-11-01 Data processing method, information recommendation method and related device

Country Status (1)

Country Link
CN (1) CN112766019A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022243498A1 (en) * 2021-05-20 2022-11-24 Ica Aesthetic Navigation Gmbh Computer-based body part analysis methods and systems



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination