CN115841432B - Method, device, equipment and medium for determining beauty special effect data and training a model


Info

Publication number
CN115841432B
Authority
CN
China
Prior art keywords
beauty
data
sample
feature
feature data
Prior art date
Legal status
Active
Application number
CN202310133083.0A
Other languages
Chinese (zh)
Other versions
CN115841432A (en)
Inventor
徐慎昆
宿华
王仲远
李雅子
施侃乐
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202310133083.0A
Publication of CN115841432A
Application granted
Publication of CN115841432B
Status: Active


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a method, apparatus, device and medium for determining beauty special effect data and for model training, relating to the field of computer technology. The method for determining beauty special effect data includes: acquiring picture feature data of a picture to be processed, wherein the picture feature data comprises face feature data and scene feature data; determining, according to the picture feature data, first beauty material data matched with the picture to be processed; and generating a target beauty set according to the first beauty material data, and determining the target beauty set as the beauty special effect data corresponding to the picture to be processed. The method and device provide an automated scheme for determining a target beauty set for the picture to be processed, and can improve the degree of matching between the determined beauty special effect data and the picture to be processed.

Description

Method, device, equipment and medium for determining beauty special effect data and training a model
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a method for determining beauty special effect data, a device for determining beauty special effect data, a method for training a beauty special effect data prediction model, a device for training a beauty special effect data prediction model, an electronic device, and a computer-readable storage medium.
Background
Beauty effects are a common portrait beautification tool used when taking photos, shooting videos, or live streaming. By using beauty special effects during photo taking, video shooting, or live streaming, a user can remove facial blemishes and accentuate facial features to improve the appearance of the person's face in the picture, bringing a better shooting experience.
In the related art, a shooting tool may provide various beauty materials, or beauty sets composed of beauty materials, for the user to select from in order to obtain beautified shot content.
However, beautifying directly based on beauty materials or beauty sets generally requires the user to adjust the beautification strength of each beauty material, or to select one set from a plurality of beauty sets, before shooting, which tends to increase the user's operation cost.
Disclosure of Invention
The present disclosure provides a method, device, equipment and medium for determining beauty special effect data and training a model, which can automatically determine beauty special effect data for a user and improve the degree of matching between the determined beauty special effect data and the picture to be processed. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a method for determining special effect data of beauty, including:
Acquiring picture feature data of a picture to be processed, wherein the picture feature data comprises face feature data and scene feature data;
determining, according to the picture feature data, first beauty material data matched with the picture to be processed, wherein the first beauty material data comprises a first beauty material and a beautification intensity value of the first beauty material;
and generating a target beauty set according to the first beauty material data, and determining the target beauty set as beauty special effect data corresponding to the picture to be processed.
According to a second aspect of the embodiments of the present disclosure, there is provided a method for training a beauty special effect data prediction model, including:
collecting sample picture feature data and sample labels of a sample picture, wherein the sample picture feature data comprises sample face feature data and sample scene feature data, the sample label comprises a sample beauty set, a sample beauty material and a sample beautification intensity value, the sample beauty set is the beauty set used when the sample picture was shot, the sample beauty material is the beauty material adjusted in the sample beauty set when the sample picture was shot, and the sample beautification intensity value is the beautification intensity value of the sample beauty material when the sample picture was shot;
and performing iterative training on the to-be-trained beauty special effect data prediction model according to the sample picture feature data and the sample label until the to-be-trained beauty special effect data prediction model converges, and determining that training of the to-be-trained beauty special effect data prediction model is complete.
According to a third aspect of the embodiments of the present disclosure, there is provided a beauty effect data determining apparatus, including:
an acquisition module configured to acquire picture feature data of a picture to be processed, wherein the picture feature data comprises face feature data and scene feature data;
a first determining module configured to determine, according to the picture feature data, first beauty material data matched with the picture to be processed, wherein the first beauty material data comprises a first beauty material and a beautification intensity value of the first beauty material;
and the second determining module is configured to generate a target beauty set according to the first beauty material data and determine the target beauty set as beauty special effect data corresponding to the picture to be processed.
According to a fourth aspect of embodiments of the present disclosure, there is provided a beauty special effect data prediction model training device, including:
a collection module configured to collect sample picture feature data and sample labels of a sample picture, wherein the sample picture feature data comprises sample face feature data and sample scene feature data, the sample label comprises a sample beauty set, a sample beauty material and a sample beautification intensity value, the sample beauty set is the beauty set used when the sample picture was shot, the sample beauty material is the beauty material adjusted in the sample beauty set when the sample picture was shot, and the sample beautification intensity value is the beautification intensity value of the sample beauty material when the sample picture was shot;
and a training module configured to perform iterative training on the to-be-trained beauty special effect data prediction model according to the sample picture feature data and the sample label until the to-be-trained beauty special effect data prediction model converges, and determine that training of the to-be-trained beauty special effect data prediction model is complete.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for determining beauty special effect data described in the first aspect, or to implement the method for training a beauty special effect data prediction model described in the second aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the method for determining beauty special effect data described in the first aspect, or the method for training a beauty special effect data prediction model described in the second aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the method, device, equipment and medium for determining beauty special effect data provided by the embodiments of the present disclosure, on the one hand, the face features and scene features in the picture to be processed are considered when determining the beauty special effect data matched with the picture to be processed, which improves how well the determined beauty special effect data fits the user's face and the current scene; on the other hand, the first beauty material to be used in shooting and its beautification intensity value can be determined directly from the picture feature data of the picture to be processed, sparing the user the steps of screening beauty materials and adjusting their beautification intensity values, which reduces the user's operation cost and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic architecture diagram of a beauty special effect data determination system, according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method for determining beauty special effect data, according to an exemplary embodiment;
FIG. 3 is a schematic diagram of a beauty special effect data prediction model, according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating a method for training a beauty special effect data prediction model, according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating another method for training a beauty special effect data prediction model, according to an exemplary embodiment;
FIG. 6 is a schematic diagram of a beauty set predictor model, according to an exemplary embodiment;
FIG. 7 is a schematic diagram of a beauty material predictor model, according to an exemplary embodiment;
FIG. 8 is a schematic diagram of a beautification intensity value predictor model, according to an exemplary embodiment;
FIG. 9 is a flowchart illustrating yet another method for training a beauty special effect data prediction model, according to an exemplary embodiment;
FIG. 10 is a flowchart illustrating yet another method for training a beauty special effect data prediction model, according to an exemplary embodiment;
FIG. 11 is a block diagram of a beauty special effect data determining apparatus, according to an exemplary embodiment;
FIG. 12 is a block diagram of a beauty special effect data prediction model training apparatus, according to an exemplary embodiment;
FIG. 13 is a block diagram of an electronic device, according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In the related art, when a user wants to shoot content, the user can select, from the various beauty materials provided by a shooting tool, the target materials to use for beautification. Alternatively, for the user's convenience, the shooting tool provider may combine different beauty materials into different beauty sets so that the user can directly select a beauty set for beautification. A beauty material is the smallest beautification unit, such as skin smoothing, face slimming, dark-circle removal, eye brightening, teeth whitening, facial contouring, eye enlarging, nose slimming, sharpening, whitening, nose shadow, or jaw reshaping.
However, directly providing various beauty materials to the user generally requires the user to manually select the target beauty materials and set their beautification intensity values, so the user's operation cost is high; moreover, the shooting tool usually displays the commonly used beauty materials first, so a large number of beauty materials have low exposure and usage rates and cannot deliver their intended beautification value. As for providing beauty sets, the sets are determined empirically by the shooting tool provider and usually suit most scenes, but a given set may not match each user's features and preferences, so the beautification effect may differ from the user's expectation or fail to meet the user's aesthetic.
In view of the foregoing, exemplary embodiments of the present disclosure provide a method for determining beauty special effect data, whose application scenarios include but are not limited to the following: during live streaming, picture feature data of a picture to be processed can be obtained, where the picture feature data includes face feature data and scene feature data; first beauty material data matched with the picture to be processed is determined according to the picture feature data, where the first beauty material data includes a first beauty material and a beautification intensity value of the first beauty material; further, a target beauty set is generated according to the first beauty material data, and the target beauty set is determined as the beauty special effect data corresponding to the picture to be processed. Because the beauty special effect data matched with the picture to be processed is determined according to the face features and scene features of the scene to be shot, the determined beauty special effect data better fits the user's facial features and the features of the scene the user is in, improving the attractiveness of the final shot picture.
In order to implement the above method for determining beauty special effect data, exemplary embodiments of the present disclosure provide a beauty special effect data determination system. Fig. 1 shows a schematic architecture diagram of this system. As shown in fig. 1, the beauty special effect data determination system 100 may include a server 110 and a terminal device 120. The server 110 is a background server deployed by the shooting tool provider, which may operate, for example, a short video application with a shooting function, a camera application, a shopping application, or a live streaming application. The terminal device 120 is a terminal device on which a shooting tool (e.g., a short video application, camera application, shopping application, or live streaming application) is installed; more specifically, the terminal device may be, for example, a smartphone, a personal computer, or a tablet computer. The server 110 and the terminal device 120 can connect over a network to determine the beauty special effect data.
It should be understood that the server 110 may be one server or may be a cluster formed by a plurality of servers, and the specific architecture of the server 110 is not limited in this disclosure.
In an optional implementation, the method for determining beauty special effect data provided in the embodiment of the present disclosure may be applied to the terminal device 120: the terminal device 120 may obtain the picture feature data of the picture to be processed, determine, according to the picture feature data, the first beauty material data matched with the picture to be processed, generate a target beauty set according to the first beauty material data, determine the target beauty set as the beauty special effect data corresponding to the picture to be processed, and display a preview shooting picture according to the target beauty set.
In an optional implementation, the method for determining beauty special effect data provided in the embodiment of the present disclosure may be applied to the server 110: the terminal device 120 may obtain the picture feature data of the picture to be processed, generate a beauty special effect data obtaining request according to the picture feature data, and send the request to the server 110; the server 110 may parse the beauty special effect data obtaining request to obtain the picture feature data, determine, according to the picture feature data, the first beauty material data matched with the picture to be processed, generate a target beauty set according to the first beauty material data, determine the target beauty set as the beauty special effect data corresponding to the picture to be processed, and send the target beauty set to the terminal device 120; after receiving the target beauty set, the terminal device 120 can display a preview shooting picture according to it.
It may be appreciated that, while displaying the preview shooting picture, the terminal device 120 shoots content according to the target beauty set upon receiving the user's shooting operation. The picture to be processed may be determined by the shooting scenario: for example, a live picture collected in a live streaming scenario, a picture collected in a photo-taking scenario, or a video picture collected in a video-shooting scenario.
Fig. 2 is a flowchart illustrating a method for determining beauty special effect data according to an exemplary embodiment. The embodiment of the present disclosure describes the method taking its application to a server as an example. As shown in fig. 2, the method includes the following steps:
step S201, obtaining picture characteristic data of a picture to be processed;
in an embodiment of the present disclosure, the picture feature data includes face feature data and scene feature data.
Step S202, determining first beauty material data matched with a picture to be processed according to picture characteristic data;
in an embodiment of the present disclosure, the first beauty material data includes a first beauty material, and a beauty intensity value of the first beauty material.
Step S203, a target beauty set is generated according to the first beauty material data, and the target beauty set is determined to be beauty special effect data corresponding to the picture to be processed.
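The following is a minimal sketch of the three-step flow of steps S201 to S203. All function names and the hard-coded return values are hypothetical stand-ins; the disclosure does not define a concrete API.

```python
from typing import Dict, List, Tuple

def extract_picture_features(picture) -> Dict[str, str]:
    # Step S201: obtain face feature data and scene feature data for the
    # picture to be processed (stubbed with fixed values here).
    return {"face_shape": "round", "age_range": "youth", "scene": "indoor"}

def predict_material_data(features: Dict[str, str]) -> List[Tuple[str, float]]:
    # Step S202: first beauty materials and their beautification intensity
    # values; a trained prediction model would stand here.
    return [("smooth_skin", 0.6), ("slim_face", 0.3)]

def determine_beauty_effect_data(picture) -> Dict[str, float]:
    # Step S203: assemble the predicted materials into the target beauty set,
    # which is returned as the beauty special effect data for the picture.
    features = extract_picture_features(picture)
    return dict(predict_material_data(features))

print(determine_beauty_effect_data(object()))  # {'smooth_skin': 0.6, 'slim_face': 0.3}
```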
In summary, according to the method for determining beauty special effect data provided by the embodiment of the present disclosure, on the one hand, the face features and scene features in the picture to be processed are considered when determining the matched beauty special effect data, which improves how well the determined beauty special effect data fits the user's face and the current scene; on the other hand, the first beauty material needed for shooting and its beautification intensity value can be determined directly from the picture feature data, sparing the user the steps of screening beauty materials and adjusting their beautification intensity values, which reduces the user's operation cost and improves the user experience.
Each step in fig. 2 is described in detail below:
in step S201, the server may acquire the picture feature data of the picture to be processed.
In an embodiment of the present disclosure, the picture feature data may include face feature data and scene feature data, where the face feature data characterizes features of the face in the picture to be processed, for example face contour features, facial features (eyes, nose, mouth, etc.), skin color features, age and/or gender; the scene feature data characterizes environmental features of the picture to be processed, e.g. outdoors, indoors, daytime, nighttime and/or wilderness.
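As a concrete, purely illustrative reading of the feature data described above, the fields below are hypothetical; the disclosure does not fix a schema.

```python
from dataclasses import dataclass

@dataclass
class PictureFeatures:
    face_contour: str     # e.g. "oval"
    facial_features: str  # e.g. "wide-set eyes"
    skin_tone: str        # e.g. "warm"
    age_range: str        # a bucketed range rather than an exact age
    gender: str
    scene: str            # e.g. "outdoor-day", "indoor-night"
```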
In an optional implementation, the picture to be processed may be a picture collected in real time: the terminal device used by the user may, in response to the user's operation of starting the camera, control the camera to collect the picture to be processed, recognize the picture to obtain the picture feature data, generate a beauty special effect data obtaining request according to the picture feature data, and send the request to the server.
In an alternative embodiment, in order to obtain a more accurate recognition result, the process in which the terminal device, in response to the user's operation of starting the camera, controls the camera to collect the picture to be processed and recognizes it to obtain the picture feature data may include: collecting multiple frames in response to the camera-start operation, and recognizing each frame separately to obtain a plurality of pieces of to-be-processed picture feature data; and clustering the to-be-processed picture feature data to obtain the picture feature data of the picture to be processed, as sketched below.
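A minimal sketch of aggregating per-frame recognition results, assuming categorical feature values; a simple per-field majority vote stands in for the clustering step described above.

```python
from collections import Counter
from typing import Dict, List

def aggregate_frame_features(per_frame: List[Dict[str, str]]) -> Dict[str, str]:
    # For each feature field, keep the value that the most frames agree on.
    keys = per_frame[0].keys()
    return {
        k: Counter(frame[k] for frame in per_frame).most_common(1)[0][0]
        for k in keys
    }

# Example: three frames disagree on the scene; the majority wins.
frames = [
    {"scene": "indoor", "age_range": "youth"},
    {"scene": "indoor", "age_range": "youth"},
    {"scene": "outdoor", "age_range": "youth"},
]
print(aggregate_frame_features(frames))  # {'scene': 'indoor', 'age_range': 'youth'}
```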
In an alternative embodiment, in order to protect user privacy, privacy protection processing may be performed on the face feature data. For example, the age in the face feature data is determined, the age range in which it falls is determined, and the identifier of that age range is used as the face feature data; for instance, ages may be divided into a plurality of ranges labeled infant, teenager, middle-aged, elderly, and the like, ordered from the smallest to the largest age values.
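A sketch of the privacy step above: an exact age is replaced with the identifier of its range. The cut-off values and labels are illustrative; the text specifies only that the ranges are ordered from the smallest to the largest age values.

```python
AGE_RANGES = [(0, 3, "infant"), (4, 12, "child"), (13, 17, "teenager"),
              (18, 40, "youth"), (41, 60, "middle-aged"), (61, 150, "elderly")]

def bucket_age(age: int) -> str:
    # Return the identifier of the age range the exact age falls into,
    # so the exact age itself never needs to be transmitted.
    for low, high, label in AGE_RANGES:
        if low <= age <= high:
            return label
    raise ValueError("age out of supported range")

print(bucket_age(35))  # youth
```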
In an optional implementation, the picture to be processed may be a picture from historical shot content: the terminal device used by the user may, in response to a loading operation on the historical shot content, obtain the historical shot content as the picture to be processed, recognize it to obtain the picture feature data, generate a beauty special effect data obtaining request according to the picture feature data, and send the request to the server. The historical shot content may be a previously taken photo, a live video, or a non-live video.
It will be appreciated that the process in which the server obtains the picture feature data corresponding to the picture to be processed may include: after receiving the beauty special effect data obtaining request sent by the terminal device, the server parses the request to obtain the picture feature data corresponding to the picture to be processed.
In step S202, the server may determine, according to the picture feature data, the first beauty material data matched with the picture to be processed.
In an embodiment of the disclosure, the first beauty material data includes a first beauty material and a beautification intensity value of the first beauty material, where the beautification intensity value characterizes the degree of beautification; for example, if the first beauty material is face slimming, its beautification intensity value characterizes how strongly the face is slimmed.
In an alternative embodiment, since the number of beauty materials is generally huge, directly determining the first beauty material data matched with the picture to be processed among the massive beauty materials consumes considerable time and data processing resources. Therefore, the process in which the server determines the first beauty material data matched with the picture to be processed according to the picture feature data may include: determining a second candidate beauty set matched with the picture to be processed according to the picture feature data; and determining, from the plurality of beauty material data of the second candidate beauty set and according to the picture feature data, the first beauty material matched with the picture to be processed and its beautification intensity value. The second candidate beauty set comprises a plurality of beauty materials and may be any one of the beauty sets constructed in advance by the shooting tool provider to supply beautification services. By first determining an established beauty set matched with the picture features and then determining the first beauty material data within that set, the consumption of data resources and time can be reduced and the efficiency of determining the first beauty material data improved.
The process in which the server determines, from the plurality of beauty material data of the second candidate beauty set and according to the picture feature data, the first beauty material matched with the picture to be processed and its beautification intensity value may be implemented based on a pre-trained beauty special effect data prediction model. As shown in fig. 3, the beauty special effect data prediction model includes a beauty set predictor model 301, a beauty material predictor model 302, and a beautification intensity value predictor model 303. The beauty set predictor model predicts, according to the picture feature data, the second candidate beauty set matched with the picture to be processed; the beauty material predictor model determines, according to the picture feature data and the second candidate beauty set, the first beauty material matched with the picture to be processed among the plurality of beauty material data of the second candidate beauty set; and the beautification intensity value predictor model determines, according to the picture feature data and the second candidate beauty set, the beautification intensity value of the first beauty material matched with the picture to be processed.
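A structural sketch of the three sub-models of FIG. 3, assuming each is an arbitrary PyTorch module and that the candidate-set prediction is concatenated with the picture features before being fed to sub-models 302 and 303. The dimensions and the nn.Linear stubs are illustrative only.

```python
import torch
import torch.nn as nn

class BeautyEffectPredictor(nn.Module):
    def __init__(self, set_predictor, material_predictor, intensity_predictor):
        super().__init__()
        self.set_predictor = set_predictor              # 301 in FIG. 3
        self.material_predictor = material_predictor    # 302
        self.intensity_predictor = intensity_predictor  # 303

    def forward(self, picture_features):
        # Predict the second candidate beauty set from the picture features.
        candidate_set = self.set_predictor(picture_features)
        cond = torch.cat([picture_features, candidate_set], dim=-1)
        # Pick materials within the candidate set, then their intensity values.
        return candidate_set, self.material_predictor(cond), self.intensity_predictor(cond)

model = BeautyEffectPredictor(nn.Linear(64, 8), nn.Linear(72, 18), nn.Linear(72, 18))
sets, materials, intensities = model(torch.randn(2, 64))
```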
It can be understood that, in the embodiment of the present disclosure, the beauty special effect data prediction model includes a beauty set predictor model because a shooting tool provider generally offers a huge number of beauty materials; making the model directly learn to determine different beauty material data for different pictures would make training difficult and make prediction consume more time and data processing resources. With the beauty set predictor model added, the beauty special effect data prediction model can learn the second candidate beauty set a user would likely select for different pictures, and then select, from the plurality of beauty material data of the second candidate beauty set, the first beauty material matched with the picture to be processed and its beautification intensity value, thereby obtaining the target beauty set matched with the picture to be processed. In this way the model learns to predict the target beauty set with the help of the second candidate beauty set, so the method for determining beauty special effect data provided by the embodiment of the present disclosure can determine the target beauty set according to the picture feature data of the picture to be processed without manually designing beauty sets, which improves the degree of matching between the beauty special effect data provided for the user's shooting and the user's features and scene features, and improves the efficiency of determining beauty special effect data.
In an alternative embodiment, as shown in fig. 4, the server may train the beauty special effect data prediction model through the following steps:
step S401, the server can collect sample picture feature data and sample labels of a sample picture;
in the embodiment of the disclosure, the sample picture feature data includes sample face feature data and sample scene feature data; the sample label includes a sample beauty set, a sample beauty material and a sample beautification intensity value, where the sample beauty set is the beauty set used when the sample picture was shot, the sample beauty material is the beauty material adjusted in the sample beauty set when the sample picture was shot, and the sample beautification intensity value is the beautification intensity value of the sample beauty material when the sample picture was shot. The sample picture may be a sample image or a sample video, and the sample video may be a sample live video.
In an alternative embodiment, the process in which the server collects the sample picture feature data and sample labels of the sample pictures may include: determining, in a user shooting information database, at least one sample user account in which shooting behavior occurred within each of a plurality of unit times of a historical period, and collecting the sample picture feature data and sample label of each sample picture associated with each sample user account. The historical period is a period of time before the model training moment, and the unit time can be determined based on actual needs, which is not limited by the embodiments of the present disclosure; for example, the unit time may be one day or two days. A sample user account is any user account that has used the shooting tool.
Step S402, performing iterative training on the to-be-trained beauty special effect data prediction model according to the sample picture feature data and the sample label until the to-be-trained beauty special effect data prediction model converges, and determining that its training is complete;
in the embodiment of the disclosure, through training on the sample face feature data and sample scene feature data, the beauty special effect data prediction model can learn the relationship between the face features and scene features in a picture and the beauty material matched with that picture together with its beautification intensity value, so that the trained beauty special effect data prediction model can be used to determine the first beauty material data matched with the picture to be processed.
Note that, in the embodiment of the present disclosure, the model structure of the beauty special effect data prediction model may be the network structure of any machine learning model, which is not limited in the embodiment of the present disclosure. In an alternative embodiment, the model structure of the beauty special effect data prediction model may be as shown in fig. 3; then, as shown in fig. 5, the process in which the server performs iterative training on the to-be-trained beauty special effect data prediction model according to the sample picture feature data and the sample label includes:
Step S501, inputting the sample picture feature data into the to-be-trained beauty set predictor model to obtain a predicted beauty set matched with the sample picture, and adjusting model parameters of the to-be-trained beauty set predictor model based on the predicted beauty set and the sample beauty set;
in the embodiment of the present disclosure, the model structure of the beauty set predictor model may be that of any machine learning model, which is not limited in the embodiment of the present disclosure. Optionally, as shown in fig. 6, the beauty set predictor model may include a first feature crossing module 601, a first feature processing module 602, and a first feature extraction module 603, where the first feature crossing module may be a Factorization Machine (FM) network module, the first feature processing module may be an attention-mechanism network module (SE), and the first feature extraction module may be a Multi-gate Mixture-of-Experts (MMoE) network comprising a plurality of expert networks and a gate network associated with the beauty set prediction task.
It should be noted that, in the beauty set predictor model shown in fig. 6, the number of expert networks may be determined based on actual needs, which is not limited in the embodiment of the present disclosure; as shown in fig. 6, the first feature extraction module may include 6 expert networks (Em). In an MMoE network, the number of gate networks is generally tied to the number of prediction tasks; since the beauty set determination task comprises a single prediction task (predicting the beauty set), the first feature extraction module may include 1 gate network (Gm).
In an alternative embodiment, if the model structure of the beauty set predictor model is as shown in fig. 6, the process of inputting the sample picture feature data into the to-be-trained beauty set predictor model to obtain the predicted beauty set matched with the sample picture may include: performing feature crossing on the sample picture feature data through the first feature crossing module to obtain a first sample feature data set, where the first sample feature data set comprises the sample picture feature data and combinations of the sample picture feature data; then, through the first feature processing module, performing feature extraction on the first sample feature data set to obtain second sample feature data for a preset number of feature channels, determining a first sample feature weight for each feature channel according to that channel's second sample feature data, and obtaining first sample weighted feature data from the second sample feature data and the first sample feature weight associated with each channel; next, through the first feature extraction module, feeding the first sample weighted feature data into the plurality of expert networks to obtain a plurality of third sample feature data, feeding the first sample weighted feature data into the gate network associated with the beauty set prediction task to obtain a first sample weight value for each third sample feature data under that task, and obtaining second sample weighted feature data from the plurality of third sample feature data and their first sample weight values; finally, performing candidate beauty set prediction based on the first sample feature data set and the second sample weighted feature data to obtain the predicted beauty set matched with the sample picture. Feature crossing enriches the sample picture feature data; the two subsequent feature extraction stages yield sample weighted features, i.e. the features the beauty set predictor model should focus on during prediction; and combining the crossed features with the extracted weighted features yields the predicted beauty set, making the sub-model's prediction more accurate.
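A compressed sketch of the beauty set predictor of FIG. 6 under several assumptions: second-order FM crossing, an SE-style per-field reweighting, and an MMoE with 6 experts and a single gate (one task). All dimensions and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class BeautySetPredictor(nn.Module):
    def __init__(self, n_fields=10, emb_dim=8, n_experts=6, n_sets=8):
        super().__init__()
        d = n_fields * emb_dim
        # SE-style block: squeeze each field to a scalar, excite to weights.
        self.se = nn.Sequential(nn.Linear(n_fields, n_fields // 2), nn.ReLU(),
                                nn.Linear(n_fields // 2, n_fields), nn.Sigmoid())
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 32), nn.ReLU()) for _ in range(n_experts))
        self.gate = nn.Linear(d, n_experts)    # one gate: a single task
        self.head = nn.Linear(32 + 1, n_sets)  # MMoE output + FM cross term

    def fm_cross(self, e):  # e: (batch, n_fields, emb_dim)
        # Second-order FM term: sum of pairwise dot products of field embeddings.
        s = e.sum(dim=1)
        return 0.5 * (s * s - (e * e).sum(dim=1)).sum(dim=1, keepdim=True)

    def forward(self, e):                       # e: (batch, n_fields, emb_dim)
        cross = self.fm_cross(e)                # feature-crossing module (FM)
        w = self.se(e.mean(dim=2))              # feature-processing module (SE)
        weighted = (e * w.unsqueeze(-1)).flatten(1)
        expert_out = torch.stack([ex(weighted) for ex in self.experts], dim=1)
        g = torch.softmax(self.gate(weighted), dim=-1)       # (batch, n_experts)
        mixed = (expert_out * g.unsqueeze(-1)).sum(dim=1)    # MMoE task features
        return self.head(torch.cat([mixed, cross], dim=-1))  # beauty set logits

logits = BeautySetPredictor()(torch.randn(4, 10, 8))  # (4, 8) set scores
```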
In an alternative embodiment, adjusting the model parameters of the to-be-trained beauty set predictor model based on the predicted beauty set and the sample beauty set may include: determining a first loss function value according to the predicted beauty set, the sample beauty set and a first loss function; and if the first loss function value is greater than a first preset threshold, adjusting the model parameters of the to-be-trained beauty set predictor model. The first loss function may be any loss function used for adjusting model parameters, which is not limited by the embodiment of the present disclosure; optionally, the first loss function is a loss function that re-weights positive and negative samples. By way of example, the first loss function may be a focal-loss-based cross-entropy loss function. The first preset threshold may be determined based on actual needs, which is not limited by the embodiment of the present disclosure. During training of the beauty special effect data prediction model, evaluating the convergence of the beauty set predictor model with a loss function that re-weights positive and negative samples mitigates sample imbalance, further improving the accuracy of the predictions the trained beauty set predictor model makes in practice.
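A standard focal-loss-weighted cross entropy, as one plausible form of the first loss function; the gamma and alpha values are conventional defaults, not given in the text.

```python
import torch
import torch.nn.functional as F

def focal_cross_entropy(logits, target, gamma=2.0, alpha=0.25):
    ce = F.cross_entropy(logits, target, reduction="none")
    p_t = torch.exp(-ce)  # probability assigned to the true class
    # Hard (misclassified) samples get up-weighted, easy ones down-weighted.
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()

loss = focal_cross_entropy(torch.randn(4, 8), torch.tensor([1, 0, 3, 7]))
```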
Step S502, determining a sample candidate beauty set according to the predicted beauty set;
in an alternative embodiment, the process of determining the sample candidate beauty set from the predicted beauty set may include: determining the predicted beauty set as the sample candidate beauty set, so that during training of the beauty special effect data prediction model, the beauty material predictor model and the beautification intensity value predictor model can be trained on the predicted beauty set together with the sample picture feature data.
In an alternative embodiment, the process of determining the sample candidate beauty set from the predicted beauty set may include: selecting, based on a scheduled sampling strategy (Scheduled Sampling), either the predicted beauty set or the sample beauty set as the sample candidate beauty set. The selection rule in the scheduled sampling strategy can be determined based on actual needs, which is not limited by the embodiment of the present disclosure; for example, the strategy may be: after each round of iterative training, increase the sampling probability of the predicted beauty set by 10% when selecting the candidate beauty set from the predicted beauty set and the sample beauty set. By selecting the sample candidate beauty set based on a scheduled sampling strategy and gradually increasing the probability of using the predicted beauty set, the beauty material predictor model and the beautification intensity value predictor model first learn to determine the beauty materials and their beautification intensity values from the sample beauty set, and then from the predicted beauty set; this prevents inaccurate predictions caused by the absence of a sample beauty set when the beauty special effect data prediction model is applied, and further improves the accuracy of the trained model's predictions.
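A sketch of the scheduled sampling rule described above: after each training round, the probability of feeding the predicted beauty set (rather than the sample beauty set) to the downstream sub-models grows by 10% until it reaches 1.0. The 10% step follows the example in the text; the function shape is an assumption.

```python
import random

def pick_candidate_set(predicted_set, sample_set, epoch, step=0.10):
    # Probability of using the model's own prediction grows with the epoch.
    p_predicted = min(1.0, step * epoch)
    return predicted_set if random.random() < p_predicted else sample_set

# Early epochs mostly use the ground-truth sample set; later epochs mostly
# use the model's own prediction, matching inference-time conditions.
```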
Step S503, inputting the sample picture feature data and the sample candidate beauty set into the to-be-trained beauty material predictor model to obtain a predicted beauty material matched with the sample picture, and adjusting model parameters of the to-be-trained beauty material predictor model based on the predicted beauty material and the sample beauty material;
in the embodiment of the present disclosure, the model structure of the beauty material predictor model may be that of any machine learning model, which is not limited in the embodiment of the present disclosure. Optionally, as shown in fig. 7, the beauty material predictor model may include a second feature crossing module 701, a second feature processing module 702, and a second feature extraction module 703, where the second feature crossing module may be a Factorization Machine (FM) network module, the second feature processing module may be an attention-mechanism network module (SE), and the second feature extraction module may be a Multi-gate Mixture-of-Experts (MMoE) network comprising a plurality of expert networks and gate networks respectively associated with a plurality of beauty material prediction tasks.
It should be noted that, in the beauty material predictor model shown in fig. 7, the number of expert networks may be determined based on actual needs, which is not limited in the embodiment of the present disclosure; as shown in fig. 7, the second feature extraction module may include 6 expert networks (E'm). In an MMoE network, the number of gate networks is generally tied to the number of prediction tasks; since beautification requires a plurality of beauty materials working together, the beauty material determination task comprises a plurality of beauty material prediction tasks, and the number of gate networks is correspondingly plural; by way of example, the second feature extraction module may include 18 gate networks (G'm).
In an alternative embodiment, if the model structure of the beauty material predictor model is as shown in fig. 7, the process of inputting the sample picture feature data and the sample candidate beauty set into the to-be-trained beauty material predictor model to obtain the predicted beauty material matched with the sample picture may include: performing feature crossing on the sample target feature data through the second feature crossing module to obtain a second sample feature data set, where the sample target feature data comprises the sample candidate beauty set and the sample picture feature data, and the second sample feature data set comprises the sample target feature data and combinations of the sample target feature data; then, through the second feature processing module, performing feature extraction on the second sample feature data set to obtain fourth sample feature data for a preset number of feature channels, determining a second sample feature weight for each feature channel according to that channel's fourth sample feature data, and obtaining third sample weighted feature data from the fourth sample feature data and the second sample feature weight associated with each channel; next, through the second feature extraction module, feeding the third sample weighted feature data into the plurality of expert networks to obtain a plurality of fifth sample feature data, feeding the third sample weighted feature data into the gate networks respectively associated with the plurality of beauty material prediction tasks to obtain a second sample weight value for each fifth sample feature data under each task, and obtaining fourth sample weighted feature data from the plurality of fifth sample feature data and their second sample weight values under each task; finally, performing beauty material prediction based on the second sample feature data set and the fourth sample weighted feature data, and determining, among the plurality of beauty material data of the sample candidate beauty set, the predicted beauty material matched with the sample picture. Feature crossing of the sample picture feature data with the sample candidate beauty set yields richer feature data; the two subsequent feature extraction stages yield the sample weighted features the beauty material predictor model should focus on; and combining the crossed features with the extracted weighted features yields a more accurate predicted beauty material.
In an alternative embodiment, the process of adjusting model parameters of the beauty material predictor model to be trained based on the predicted beauty Yan Sucai and the sample beauty material may include: determining a second loss function value based on the predicted beauty material, the sample beauty Yan Sucai, and the second loss function; and if the second loss function value is determined to be larger than a second preset threshold value, adjusting model parameters of the beauty material predictor model to be trained. Wherein, the second loss function may be any loss function for adjusting model parameters, which is not limited by the embodiments of the present disclosure; optionally, the second loss function is a loss function based on weight adjustment of the positive and negative samples; by way of example, the second loss function may be a log loss function (log loss) based on focal loss (focal loss); the second preset threshold may be determined based on actual needs, which is not limited by the embodiments of the present disclosure; in the training process of the beauty special effect data prediction model, the convergence condition of the beauty material prediction sub-model can be evaluated by utilizing a loss function based on weight adjustment of positive and negative samples, and the condition of sample imbalance can be solved, so that the accuracy of a prediction result determined in practical application of the beauty material prediction sub-model obtained through training is improved.
Step S504, inputting the sample picture feature data and the sample candidate beauty set into the to-be-trained beautification intensity value predictor model to obtain predicted beautification intensity values of the predicted beauty materials matched with the sample picture, and adjusting model parameters of the to-be-trained beautification intensity value predictor model based on the predicted beautification intensity values and the sample beautification intensity values;
in the embodiment of the present disclosure, the model structure of the beautification intensity value predictor model may be that of any machine learning model, which is not limited in the embodiment of the present disclosure. Optionally, as shown in fig. 8, the beautification intensity value predictor model may include a third feature crossing module 801, a third feature processing module 802, a third feature extraction module 803, and a masking layer 804, where the third feature crossing module may be a Factorization Machine (FM) network module, the third feature processing module may be an attention-mechanism network module (SE), and the third feature extraction module may be a Multi-gate Mixture-of-Experts (MMoE) network comprising a plurality of expert networks and gate networks respectively associated with the beautification intensity value prediction tasks of a plurality of beauty materials.
It should be noted that, in the beautification intensity value predictor model shown in fig. 8, the number of expert networks may be determined based on actual needs, which is not limited in the embodiment of the disclosure; as shown in fig. 8, the third feature extraction module may include 6 expert networks (E″m). In an MMoE network, the number of gate networks is generally tied to the number of prediction tasks; since beautification requires a plurality of beauty materials working together, the beautification intensity value determination task comprises the beautification intensity value prediction tasks of a plurality of beauty materials, and the number of gate networks is correspondingly plural; by way of example, the third feature extraction module may include 18 gate networks (G″m).
In an alternative embodiment, the process of inputting the sample picture feature data and the sample candidate beauty set into the to-be-trained beautification intensity value predictor model to obtain the predicted beautification intensity values of the predicted beauty materials matched with the sample picture may include: performing feature crossing on the sample target feature data through the third feature crossing module to obtain a second sample feature data set, where the sample target feature data comprises the sample candidate beauty set and the sample picture feature data, and the second sample feature data set comprises the sample target feature data and combinations of the sample target feature data; then, through the third feature processing module, performing feature extraction on the second sample feature data set to obtain sixth sample feature data for a preset number of feature channels, determining a third sample feature weight for each feature channel according to that channel's sixth sample feature data, and obtaining fifth sample weighted feature data from the sixth sample feature data and the third sample feature weight associated with each channel; next, through the third feature extraction module, feeding the fifth sample weighted feature data into the plurality of expert networks to obtain a plurality of seventh sample feature data, feeding the fifth sample weighted feature data into the gate networks respectively associated with the beautification intensity value prediction tasks of the plurality of beauty materials to obtain a third sample weight value for each seventh sample feature data under each task, and obtaining sixth sample weighted feature data from the plurality of seventh sample feature data and their third sample weight values under each task; then, inputting the second sample feature data set and the sixth sample weighted feature data into the masking layer, which masks, according to the sample candidate beauty set in the second sample feature data set, the feature data of sample beauty materials not contained in the sample candidate beauty set within the sixth sample weighted feature data, obtaining updated sixth sample weighted feature data; and finally, performing beautification intensity value prediction based on the second sample feature data set and the updated sixth sample weighted feature data to obtain the predicted beautification intensity values of the predicted beauty materials matched with the sample picture.
Feature crossing of the sample picture feature data with the sample candidate beauty set yields richer feature data; the two subsequent feature extraction stages yield the sample weighted features the beautification intensity value predictor model should focus on; masking the feature data of beauty materials not contained in the candidate beauty set within the weighted feature data eliminates their influence on the predicted beautification intensity values; and combining the crossed features with the masked weighted features yields the predicted beautification intensity values, improving the accuracy of the intensity values predicted by the beautification intensity value predictor model.
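A minimal sketch of the masking layer (804 in FIG. 8), assuming the candidate beauty set is available as a 0/1 membership vector over the materials: intensity features of materials absent from the candidate set are zeroed out so they cannot influence the prediction.

```python
import torch

def mask_by_candidate_set(weighted_features, candidate_mask):
    # weighted_features: (batch, n_materials); candidate_mask: (batch, n_materials)
    # Materials with mask 0 (not in the candidate set) contribute nothing.
    return weighted_features * candidate_mask

feats = torch.randn(2, 18)
mask = torch.zeros(2, 18); mask[:, :5] = 1.0   # only the first 5 materials kept
masked = mask_by_candidate_set(feats, mask)
```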
In an alternative embodiment, the process in which the server adjusts the model parameters of the to-be-trained beautification intensity value predictor model based on the predicted beautification intensity values and the sample beautification intensity values may include: determining a third loss function value based on the predicted beautification intensity values, the sample beautification intensity values and a third loss function; and if the third loss function value is greater than a third preset threshold, adjusting the model parameters of the to-be-trained beautification intensity value predictor model. The third loss function may be any loss function used for adjusting model parameters, which is not limited by the embodiment of the present disclosure; optionally, the third loss function is a mask-weighted mean squared error (MSE) loss function. The third preset threshold may be determined based on actual needs, which is not limited by the embodiment of the present disclosure.
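A sketch of a mask-weighted MSE as the third loss function: only materials present in the candidate beauty set contribute to the intensity-value error. The normalization by the mask sum is an assumption.

```python
import torch

def masked_mse(pred, target, mask):
    # Squared error counted only where the mask is 1; averaged over those entries.
    diff = (pred - target) ** 2 * mask
    return diff.sum() / mask.sum().clamp(min=1.0)

pred, target = torch.rand(2, 18), torch.rand(2, 18)
mask = torch.zeros(2, 18); mask[:, :5] = 1.0
loss = masked_mse(pred, target, mask)
```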
Step S505, repeating the above process until the to-be-trained beauty set predictor model, the to-be-trained beauty material predictor model and the to-be-trained beautification intensity value predictor model all converge, and determining that the to-be-trained beauty special effect data prediction model converges.
It should be noted that, in the embodiments of the present disclosure, the beauty set predictor model may be determined to have converged when the first loss function value is less than or equal to the first preset threshold; the beauty material predictor model may be determined to have converged when the second loss function value is less than or equal to the second preset threshold; and the beautification intensity value predictor model may be determined to have converged when the third loss function value is less than or equal to the third preset threshold.
It will be appreciated that after the beauty special effect data prediction model shown in fig. 3 is trained, in an alternative embodiment, the process of determining the second candidate beauty set matched with the picture to be processed according to the picture feature data may include: performing feature cross processing on the picture feature data to obtain a third feature data set, wherein the third feature data set comprises the picture feature data and combined data of the picture feature data; then, performing feature extraction on the third feature data set to obtain eighth feature data of a preset number of feature channels, determining a fourth feature weight of each feature channel according to the eighth feature data of that channel, and obtaining seventh weighted feature data according to the eighth feature data and the fourth feature weight associated with each feature channel; further, respectively inputting the seventh weighted feature data into a plurality of expert networks to obtain a plurality of ninth feature data, inputting the seventh weighted feature data into the gate network associated with the beauty set prediction task to obtain a fourth weight value of each ninth feature data in the beauty set prediction task, and obtaining eighth weighted feature data according to the plurality of ninth feature data and their fourth weight values; and finally, performing candidate beauty set prediction based on the third feature data set and the eighth weighted feature data to obtain the second candidate beauty set matched with the picture to be processed. The picture feature data can thus first undergo feature cross processing to obtain richer feature data, the crossed feature data then undergoes two rounds of feature extraction to obtain the weighted features on which the model should focus during candidate beauty set prediction, and finally the crossed features and weighted features are combined to obtain the second candidate beauty set, which can improve the accuracy of the determined second candidate beauty set.
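The cross-then-reweight stages recur throughout these predictors. Below is a minimal sketch in which the cross module forms pairwise element-wise products of embedded feature fields and the channel weights come from a small squeeze-and-excitation-style scorer; both operator choices are assumptions, since the disclosure leaves the cross and weighting functions abstract, and all field embeddings are assumed to share one dimension:

```python
import itertools
import torch
import torch.nn as nn

class FeatureCross(nn.Module):
    """Concatenate the input feature fields with their pairwise
    element-wise products, i.e. the features plus their 'combined data'."""
    def forward(self, fields: list) -> torch.Tensor:
        # fields: list of (batch, dim) tensors with a shared embedding dim
        crosses = [a * b for a, b in itertools.combinations(fields, 2)]
        return torch.cat(list(fields) + crosses, dim=-1)

class ChannelReweight(nn.Module):
    """Project the crossed features onto a preset number of feature
    channels, score each channel, and scale the channels by their weights."""
    def __init__(self, in_dim: int, channels: int, channel_dim: int):
        super().__init__()
        self.extract = nn.Linear(in_dim, channels * channel_dim)
        self.score = nn.Linear(channel_dim, 1)
        self.channels, self.channel_dim = channels, channel_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.extract(x).view(-1, self.channels, self.channel_dim)
        weights = torch.softmax(self.score(feats), dim=1)  # one weight per channel
        return (feats * weights).flatten(1)                # weighted feature data
```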
In an alternative embodiment, the process of determining, by the server, the first beauty material matched with the picture to be processed from the plurality of beauty material data of the second candidate beauty set according to the picture feature data may include: performing cross processing on second target feature data to obtain a fourth feature data set, wherein the second target feature data comprises the second candidate beauty set and the picture feature data, and the fourth feature data set comprises the second target feature data and combined data of the second target feature data; then, performing feature extraction on the fourth feature data set to obtain tenth feature data of a preset number of feature channels, determining a fifth feature weight of each feature channel according to the tenth feature data of that channel, and obtaining ninth weighted feature data according to the tenth feature data and the fifth feature weight associated with each feature channel; further, respectively inputting the ninth weighted feature data into a plurality of expert networks to obtain a plurality of eleventh feature data, inputting the ninth weighted feature data into the gate networks respectively associated with the plurality of beauty material prediction tasks to obtain a fifth weight value of each eleventh feature data in each beauty material prediction task, and obtaining tenth weighted feature data according to the plurality of eleventh feature data and those fifth weight values; and finally, performing beauty material prediction based on the fourth feature data set and the tenth weighted feature data, and determining, from the plurality of beauty material data of the second candidate beauty set, the first beauty material matched with the picture to be processed. The picture feature data can thus first undergo feature cross processing to obtain richer feature data, the crossed feature data then undergoes two rounds of feature extraction to obtain the weighted features on which the model should focus while determining the first beauty material, and finally the crossed features and weighted features are combined to obtain the first beauty material, which can improve the accuracy of the determined first beauty material.
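The expert/gate stage matches a multi-gate mixture-of-experts layout, with one gate per prediction task: a single gate for beauty set prediction, and one gate per beauty material for the material and intensity tasks. A sketch follows; the expert count and the shape of each expert network are assumptions:

```python
import torch
import torch.nn as nn

class MultiGateMoE(nn.Module):
    """Shared expert networks plus one gate network per prediction task."""
    def __init__(self, in_dim: int, expert_dim: int,
                 num_experts: int, num_tasks: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU())
            for _ in range(num_experts))
        self.gates = nn.ModuleList(
            nn.Linear(in_dim, num_experts) for _ in range(num_tasks))

    def forward(self, x: torch.Tensor) -> list:
        # (batch, num_experts, expert_dim): one feature vector per expert
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)
        per_task = []
        for gate in self.gates:
            w = torch.softmax(gate(x), dim=-1)              # weight per expert
            per_task.append((w.unsqueeze(-1) * expert_out).sum(dim=1))
        return per_task  # one weighted feature tensor per task
```

Feeding the weighted feature data from the previous stage through this layer yields the task-specific weighted features that the prediction heads consume.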
In an alternative embodiment, the process of determining, by the server, the beautification intensity value of the first beauty material matched with the picture to be processed from the plurality of beauty material data of the second candidate beauty set according to the picture feature data may include: performing cross processing on the second target feature data to obtain a fourth feature data set, wherein the second target feature data comprises the second candidate beauty set and the picture feature data, and the fourth feature data set comprises the second target feature data and combined data of the second target feature data; then, performing feature extraction on the fourth feature data set to obtain twelfth feature data of a preset number of feature channels, determining a sixth feature weight of each feature channel according to the twelfth feature data of that channel, and obtaining eleventh weighted feature data according to the twelfth feature data and the sixth feature weight associated with each feature channel; further, respectively inputting the eleventh weighted feature data into a plurality of expert networks to obtain a plurality of thirteenth feature data, inputting the eleventh weighted feature data into the gate networks respectively associated with the beautification intensity value prediction tasks of the plurality of beauty materials to obtain a sixth weight value of each thirteenth feature data in the beautification intensity value prediction task of each beauty material, and obtaining twelfth weighted feature data according to the plurality of thirteenth feature data and those sixth weight values; further, inputting the fourth feature data set and the twelfth weighted feature data into a masking layer, so as to mask, in the twelfth weighted feature data and according to the second candidate beauty set in the fourth feature data set, the feature data of beauty materials not contained in the second candidate beauty set, thereby obtaining updated twelfth weighted feature data; and finally, performing beautification intensity value prediction according to the fourth feature data set and the updated twelfth weighted feature data to obtain the beautification intensity value of the first beauty material matched with the picture to be processed. The picture feature data can thus first undergo feature cross processing to obtain richer feature data, the crossed feature data then undergoes two rounds of feature extraction to obtain the weighted features on which the model should focus while determining the beautification intensity value of the first beauty material, and finally the crossed features and weighted features are combined to obtain that intensity value, which can improve the accuracy of the determined beautification intensity value of the first beauty material.
In an alternative embodiment, to further improve the accuracy of the determined beauty special effect data, user historical behavior data may also be taken into account, where the user historical behavior data characterizes the user's beauty special effect usage habits. The process of determining, by the server, the first beauty material data matched with the picture to be processed according to the picture feature data may then include: determining the first beauty material data matched with the picture to be processed according to the picture feature data and the user historical behavior data associated with the target user account, wherein the user historical behavior data comprises at least one of a historical beauty set used by the target user account and second beauty material data associated with the historical beauty set; it is to be appreciated that the second beauty material data associated with the historical beauty set includes the historical beauty materials contained in the historical beauty set and/or the historical beautification intensity values of those materials. In this way, the process of determining the beauty special effect data matched with the picture to be processed considers not only the face features and scene features of the picture but also the user's usage behavior with respect to beauty special effects; on the basis of improving the match between the determined beauty special effect data and the user's face and current scene, this further improves the fit with the user's beauty special effect usage habits or preferences, and thus the user's satisfaction with the determined beauty special effect data.
In an alternative embodiment, the user historical behavior data associated with the target user account further includes: at least one of a beauty set display amount, a beauty set click amount, a beauty set use amount, a click rate of displayed beauty sets, and a use rate of clicked beauty sets associated with the target user account; each of these quantities is a statistic over a first preset time length associated with the target user account, where the first preset time length may be a certain duration before the current time, for example one week, ten days, or one month, which the embodiments of the present disclosure do not limit. In this way, the process of determining the beauty special effect data matched with the picture to be processed further considers the user's interaction behavior with beauty sets, on top of the face features, scene features and usage behavior already considered; this further improves the fit between the determined beauty special effect data and the user's beauty special effect usage habits or preferences, and thus the user's satisfaction with the determined beauty special effect data.
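A sketch of assembling those window statistics from raw interaction events; the event record layout and field names are hypothetical, chosen only for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BeautySetEvent:
    account: str
    kind: str          # "display", "click", or "use"
    timestamp: datetime

def behavior_stats(events: list, account: str,
                   window: timedelta = timedelta(days=7)) -> dict:
    """Count displays, clicks and uses for one account within the first
    preset time length, and derive the click rate of displayed beauty
    sets and the use rate of clicked beauty sets."""
    cutoff = datetime.now() - window
    recent = [e for e in events
              if e.account == account and e.timestamp >= cutoff]
    shown = sum(e.kind == "display" for e in recent)
    clicked = sum(e.kind == "click" for e in recent)
    used = sum(e.kind == "use" for e in recent)
    return {
        "display": shown, "click": clicked, "use": used,
        "click_rate": clicked / shown if shown else 0.0,
        "use_rate": used / clicked if clicked else 0.0,
    }
```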
In an alternative embodiment, since the number of beauty materials is generally huge and directly determining the first beauty material data matched with the picture to be processed among this massive pool would consume considerable time and data processing resources, the process of determining, by the server, the first beauty material data matched with the picture to be processed according to the picture feature data and the user historical behavior data associated with the target user account may include: determining a first candidate beauty set matched with the picture to be processed according to the to-be-processed feature data; and determining, from the plurality of beauty material data of the first candidate beauty set, the first beauty material matched with the picture to be processed and the beautification intensity value of the first beauty material according to the to-be-processed feature data. The first candidate beauty set comprises a plurality of beauty materials; it may be any of the beauty sets that the photographing tool service side had already constructed to provide beauty processing services before the determination method provided by the embodiments of the present disclosure existed. An established beauty set matched with the picture can thus be determined according to the picture feature data and the user historical behavior data, and the first beauty material data matched with the picture to be processed is then determined within that established set (see the sketch below), which reduces the time and data resources consumed in determining the first beauty material data and improves efficiency; and because the user historical behavior data is also considered, the fit between the determined first beauty material data and the user's beauty special effect usage preferences and habits improves, along with user satisfaction.
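A sketch of that two-stage, retrieve-then-refine flow, assuming the three predictors are available as callables (their names and signatures are illustrative only):

```python
def determine_beauty_special_effect_data(features, set_model,
                                         material_model, intensity_model):
    """Narrow the huge material pool to one candidate beauty set first,
    then pick materials and intensity values inside that set only."""
    candidate_set = set_model(features)                    # cheap first stage
    materials = material_model(features, candidate_set)    # restricted search
    intensities = intensity_model(features, candidate_set)
    return candidate_set, materials, intensities
```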
The server determines the first candidate beauty set matched with the picture to be processed according to the to-be-processed feature data, and then determines, from the plurality of beauty material data of the first candidate beauty set, the first beauty material matched with the picture to be processed and the beautification intensity value of the first beauty material according to the to-be-processed feature data; both steps can be realized with a pre-trained beauty special effect data prediction model, whose model structure may be as shown in fig. 3. In an alternative embodiment, as shown in fig. 9, the server may train the beauty special effect data prediction model by:
step S901, the server may collect sample picture feature data and sample labels of a sample picture, and collect sample user history behavior data associated with a sample target user account;
in the embodiments of the present disclosure, the sample target user account is the user account of the sample picture, and the sample user historical behavior data characterizes the sample user's beauty special effect usage habits; the sample user historical behavior data may include: at least one of the sample historical beauty set used by the sample target user account and the sample beauty material data associated with that sample historical beauty set, where the associated sample beauty material data may include the sample beauty materials contained in the sample historical beauty set and/or the sample beautification intensity values of those materials; the sample user historical behavior data may also include: at least one of a sample beauty set display amount, a sample beauty set click amount, a sample beauty set use amount, a sample click rate of displayed beauty sets, and a sample use rate of clicked beauty sets associated with the sample target user account; each of these quantities is a statistic over a second preset duration associated with the sample target user account, where the second preset duration may be a certain duration before the sampling time, for example one week, ten days, or one month, which the embodiments of the present disclosure do not limit.
In an optional implementation manner, the process of collecting the sample picture feature data and the sample label of the sample picture by the server may refer to step S401 above, which is not detailed again in the embodiments of the present disclosure; the process of collecting, by the server, the sample user historical behavior data associated with the sample target user account may include: collecting the sample user historical behavior data associated with the sample target user account from a user photographing information database.
In step S902, the server may perform iterative training on the to-be-trained beauty special effect data prediction model according to the sample picture feature data and the sample label, and the sample user historical behavior data associated with the sample target user account.
In the embodiments of the present disclosure, by training on the sample face feature data, the sample scene feature data and the sample user behavior data, the model can learn the face features, scene features and user behavior reflected in a picture, and thus accurately determine the beauty material matched with the picture and the beautification intensity value of that material, so that the first beauty material data matched with the picture to be processed can be determined accurately; moreover, since the influence of user behavior data on the determination of beauty special effect data is taken into account, the fit between the determined first beauty material data and the user's beauty special effect usage habits and preferences is further improved.
Note that, in the embodiments of the present disclosure, the model structure of the beauty special effect data prediction model may be the network structure of any machine learning model, which the embodiments of the present disclosure do not limit. In the case that the model structure of the beauty special effect data prediction model is as shown in fig. 3, and as shown in fig. 10, the process of iteratively training the to-be-trained beauty special effect data prediction model according to the sample picture feature data, the sample label, and the sample user historical behavior data associated with the sample target user account may include:
step S1001, inputting sample picture characteristic data and sample user historical behavior data into a beauty set prediction sub-model to be trained to obtain a prediction beauty set matched with a sample picture, and adjusting model parameters of the beauty set prediction sub-model to be trained based on the prediction beauty set and the sample beauty set;
step S1002, determining a sample candidate beauty set according to the predicted beauty set;
step S1003, inputting the sample picture feature data, the sample user historical behavior data and the sample candidate beauty set into the to-be-trained beauty material predictor model to obtain a predicted beauty material matched with the sample picture, and adjusting model parameters of the to-be-trained beauty material predictor model based on the predicted beauty material and the sample beauty material;
Step S1004, inputting sample picture feature data, sample user historical behavior data and sample candidate beauty set into a to-be-trained beauty intensity value predictor model to obtain a predicted beauty intensity value of a predicted beauty material matched with a sample picture, and adjusting model parameters of the to-be-trained beauty intensity value predictor model based on the predicted beauty intensity value and the sample beauty intensity value;
step S1005, repeating the above process until the to-be-trained beauty set predictor model, the to-be-trained beauty material predictor model and the to-be-trained beauty strength value predictor model converge, and determining that the to-be-trained beauty special effect data predictor model converges.
It should be noted that, in the embodiment of the present disclosure, the specific implementation process of step S1001 may refer to the specific implementation process of step S501, the specific implementation process of step S1002 may refer to the specific implementation process of step S502, the specific implementation process of step S1003 may refer to the specific implementation process of step S503, and the specific implementation process of step S1004 may refer to the specific implementation process of step S504, which is not described herein in detail.
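Steps S1001 through S1005 amount to the following cascaded training step. This is a sketch only: the model and loss callables, the batch field names, the fixed 0.5 sampling probability, and the single joint backward pass are all simplifying assumptions (the disclosure checks each loss against its own threshold and adjusts each submodel separately):

```python
import random

def cascaded_train_step(models, loss_fns, batch, optimizer, thresholds):
    """One pass over the three submodels (S1001-S1004); returns True once
    every loss value is at or below its preset threshold (S1005)."""
    set_model, material_model, intensity_model = models
    set_loss, material_loss, intensity_loss = loss_fns

    optimizer.zero_grad()
    pred_set = set_model(batch["features"], batch["behavior"])
    l1 = set_loss(pred_set, batch["sample_set"])

    # S1002: scheduled-sampling choice between prediction and ground truth
    candidate = batch["sample_set"] if random.random() < 0.5 else pred_set

    pred_material = material_model(batch["features"], batch["behavior"], candidate)
    l2 = material_loss(pred_material, batch["sample_material"])

    pred_value = intensity_model(batch["features"], batch["behavior"], candidate)
    l3 = intensity_loss(pred_value, batch["sample_value"])

    (l1 + l2 + l3).backward()
    optimizer.step()
    return all(l.item() <= t for l, t in zip((l1, l2, l3), thresholds))
```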
It will be appreciated that after the beauty special effect data prediction model shown in fig. 3 is trained, in an alternative embodiment, the process of determining, by the server, the first candidate beauty set matched with the picture to be processed according to the to-be-processed feature data may include: performing feature cross processing on the to-be-processed feature data to obtain a first feature data set, wherein the first feature data set comprises the to-be-processed feature data and combined data of the to-be-processed feature data; then, performing feature extraction on the first feature data set to obtain second feature data of a preset number of feature channels, determining a first feature weight of each feature channel according to the second feature data of that channel, and obtaining first weighted feature data according to the second feature data and the first feature weight associated with each feature channel; further, respectively inputting the first weighted feature data into a plurality of expert networks to obtain a plurality of third feature data, inputting the first weighted feature data into the gate network associated with the beauty set prediction task to obtain a first weight value of each third feature data in the beauty set prediction task, and obtaining second weighted feature data according to the plurality of third feature data and their first weight values; and finally, performing candidate beauty set prediction based on the first feature data set and the second weighted feature data to obtain the first candidate beauty set matched with the picture to be processed. The picture feature data and the user historical behavior data can thus first undergo feature cross processing to obtain richer feature data, the crossed feature data then undergoes two rounds of feature extraction to obtain the weighted features on which the model should focus while determining the first candidate beauty set, and finally the crossed features and weighted features are combined to obtain the first candidate beauty set; this improves the match between the determined first candidate beauty set and the picture to be processed and, because the user historical behavior data is considered, also the fit with the user's beauty set usage preferences and habits.
In an alternative embodiment, the process of determining the first beauty material matched with the picture to be processed from the plurality of beauty material data of the first candidate beauty set according to the to-be-processed feature data may include: performing cross processing on first target feature data to obtain a second feature data set, wherein the first target feature data comprises the first candidate beauty set and the to-be-processed feature data, and the second feature data set comprises the first target feature data and combined data of the first target feature data; then, performing feature extraction on the second feature data set to obtain fourth feature data of a preset number of feature channels, determining a second feature weight of each feature channel according to the fourth feature data of that channel, and obtaining third weighted feature data according to the fourth feature data and the second feature weight associated with each feature channel; further, respectively inputting the third weighted feature data into a plurality of expert networks to obtain a plurality of fifth feature data, inputting the third weighted feature data into the gate networks respectively associated with the plurality of beauty material prediction tasks to obtain a second weight value of each fifth feature data in each beauty material prediction task, and obtaining fourth weighted feature data according to the plurality of fifth feature data and those second weight values; and finally, performing beauty material prediction based on the second feature data set and the fourth weighted feature data, and determining, from the plurality of beauty material data of the first candidate beauty set, the first beauty material matched with the picture to be processed. The picture feature data, the user historical behavior data and the first candidate beauty set can thus first undergo feature cross processing to obtain richer feature data, the crossed feature data then undergoes two rounds of feature extraction to obtain the weighted features on which the model should focus while determining the first beauty material, and finally the crossed features and weighted features are combined to obtain the first beauty material; this improves the match between the determined first beauty material and the picture to be processed and, because the user historical behavior data is considered, also the fit with the user's beauty material usage preferences and habits.
In an alternative embodiment, the process of determining, by the server, the beautification intensity value of the first beauty material matched with the picture to be processed from the plurality of beauty material data of the first candidate beauty set according to the to-be-processed feature data may include: performing cross processing on the first target feature data to obtain a second feature data set, wherein the first target feature data comprises the first candidate beauty set and the to-be-processed feature data, and the second feature data set comprises the first target feature data and combined data of the first target feature data; then, performing feature extraction on the second feature data set to obtain sixth feature data of a preset number of feature channels, determining a third feature weight of each feature channel according to the sixth feature data of that channel, and obtaining fifth weighted feature data according to the sixth feature data and the third feature weight associated with each feature channel; further, respectively inputting the fifth weighted feature data into a plurality of expert networks to obtain a plurality of seventh feature data, inputting the fifth weighted feature data into the gate networks respectively associated with the beautification intensity value prediction tasks of the plurality of beauty materials to obtain a third weight value of each seventh feature data in the beautification intensity value prediction task of each beauty material, and obtaining sixth weighted feature data according to the plurality of seventh feature data and those third weight values; further, inputting the second feature data set and the sixth weighted feature data into a masking layer, so as to mask, in the sixth weighted feature data and according to the first candidate beauty set in the second feature data set, the weighted feature data of beauty materials not contained in the first candidate beauty set, thereby obtaining updated sixth weighted feature data; and finally, performing beautification intensity value prediction according to the second feature data set and the updated sixth weighted feature data to obtain the beautification intensity value of the first beauty material matched with the picture to be processed.
In this way, the picture feature data, the user historical behavior data and the first candidate beauty set first undergo feature cross processing to obtain richer feature data; the crossed feature data then undergoes two rounds of feature extraction to obtain the weighted features on which the model should focus during beautification intensity value prediction; the feature data of beauty materials not contained in the first candidate beauty set is masked out of the weighted feature data, eliminating its influence on the predicted beautification intensity values; and finally the crossed features and the influence-free weighted features are combined to obtain the predicted beautification intensity values. This improves the match between the obtained beautification intensity value of the first beauty material and the picture to be processed and, since the user historical behavior data is considered, also the fit with the user's intensity value preferences and habits for beauty materials.
In an optional implementation manner, in order to further improve the matching degree between the determined first beauty material data and the picture to be processed, the first beauty material data matched with the picture to be processed may also be determined directly from among the plurality of beauty materials provided by the shooting tool service side; the process of determining, by the server, the first beauty material data matched with the picture to be processed according to the picture feature data may then include: inputting the picture feature data into a pre-trained prediction model to perform beauty special effect data prediction, so as to obtain the first beauty material data matched with the picture to be processed. Determining the first beauty material data directly among the plurality of beauty materials improves the matching degree between the determined first beauty material data and the picture to be processed.
The process of inputting the picture feature data into the pre-trained prediction model to predict the beauty special effect data so as to obtain the first beauty material data matched with the picture to be processed may include: inputting the picture feature data into a beauty material predictor model to perform beauty material prediction to obtain the first beauty material, and inputting the picture feature data into a beautification intensity value predictor model to perform beautification intensity value prediction to obtain the beautification intensity value of the first beauty material. The beauty material predictor model and the beautification intensity value predictor model may each have the model structure of any machine learning model, which the embodiments of the present disclosure do not limit. For example, the beauty material predictor model may be the one shown in fig. 7, and the beautification intensity value predictor model the one shown in fig. 8.
In step S203, the server may generate a target beauty set according to the first beauty material data, and determine the target beauty set as beauty special effect data corresponding to the to-be-processed picture.
It should be noted that, in the embodiments of the present disclosure, when the user is not satisfied with the determined first beauty material data, the first beauty material data may be re-determined; here, in order to break the user's historical special effect habits and help the user explore beauty special effects that are richer and better match the user's characteristics, information different from that used for the original determination may be selected to determine updated first beauty material data. For example, if the server determined the first beauty material data according to the picture feature data of the picture to be processed alone, then on re-determination the updated first beauty material data may be determined according to the picture feature data together with the user historical behavior data; if the server determined the first beauty material data according to the picture feature data together with the historical beauty sets used by the target user account and the second beauty material data associated with those sets in the user historical behavior data, then on re-determination the updated first beauty material data may be determined according to the picture feature data together with at least one of the beauty set display amount, beauty set click amount, beauty set use amount, click rate of displayed beauty sets, and use rate of clicked beauty sets associated with the target user account.
In an alternative embodiment, if the server determined the first beauty material data according to the picture feature data of the picture to be processed, then in response to receiving a beauty material data replacement instruction, the server determines updated first beauty material data matched with the picture to be processed according to the picture feature data and the user historical behavior data associated with the target user account; an updated target beauty set is then generated according to the updated first beauty material data and determined as the updated beauty special effect data corresponding to the picture to be processed. The first beauty material data can thus be re-determined whenever the user is not satisfied with it, exploring richer beauty special effect data that matches the user's characteristics.
FIG. 4 is a flowchart illustrating a training method for a beauty special effect data prediction model according to an exemplary embodiment; as shown in fig. 4, the method comprises the following steps:
step S401, sample picture characteristic data and sample labels of a sample picture are collected;
the sample picture feature data comprises sample face feature data and sample scene feature data; the sample label comprises a sample beauty set, a sample beauty material and a sample beautification intensity value, wherein the sample beauty set is the beauty set used when the sample picture was shot, the sample beauty material is the beauty material adjusted within that set when the sample picture was shot, and the sample beautification intensity value is the beautification intensity value of the sample beauty material when the sample picture was shot;
It can be appreciated that the process of collecting the sample picture feature data and the sample label of the sample picture by the server may refer to the above embodiments, which is not detailed again in this disclosure.
Step S402, performing iterative training on the to-be-trained beauty special effect data prediction model according to the sample picture feature data and the sample label until the model converges, and determining that its training is completed.
It can be understood that, according to the sample picture feature data and the sample label, the process of performing iterative training on the to-be-trained beauty special effect data prediction model by the server may refer to the above embodiment, which is not described in detail in this disclosure.
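For concreteness, one collected training sample as defined in step S401 might be represented as follows; the field types and names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrainingSample:
    face_features: list          # sample face feature data
    scene_features: list         # sample scene feature data
    beauty_set_id: int           # sample beauty set used when shooting
    adjusted_material_ids: list  # sample beauty materials adjusted in that set
    intensity_values: list       # sample beautification intensity values
```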
It should be noted that, in the embodiments of the present disclosure, different beauty special effect data prediction models may be trained on different sample data and then used to determine the beauty special effect data matched with the picture to be processed; for example, one prediction model is trained on sample picture feature data alone; another on sample picture feature data together with the sample user historical behavior data associated with the sample target user account; another on sample picture feature data together with the historical beauty sets used by the account in that behavior data; and yet another on sample picture feature data together with the beauty materials used by the account and their beautification intensity values in that behavior data.
In an alternative embodiment, a plurality of pre-trained beauty special effect data prediction models may also be used to re-determine the beauty special effect data in such scenarios; for example, after obtaining the picture feature data of the picture to be processed, the server may determine the first beauty material data matched with the picture to be processed using a prediction model trained on sample picture feature data alone; after receiving a beauty material data replacement instruction, the server may further obtain the user historical behavior data associated with the target user account and, using a prediction model trained on sample picture feature data together with sample user historical behavior data, determine the updated first beauty material data matched with the picture to be processed.
Fig. 11 is a schematic diagram of a beauty special effect data determining apparatus according to an exemplary embodiment; as shown in fig. 11, the beauty special effect data determining apparatus 1100 comprises:
an acquisition module 1101 configured to acquire picture feature data of a picture to be processed, the picture feature data including face feature data and scene feature data;
a first determining module 1102 configured to determine first beauty material data matched with the to-be-processed picture according to the picture feature data, the first beauty material data including first beauty materials and beauty intensity values of the first beauty materials;
The second determining module 1103 is configured to generate a target beauty set according to the first beauty material data, and determine the target beauty set as beauty special effect data corresponding to the to-be-processed picture.
Optionally, the to-be-processed picture is a to-be-processed picture of the target user account;
a first determination module 1102 configured to:
and determining first beauty material data matched with the to-be-processed picture according to the picture characteristic data and user history behavior data associated with the target user account, wherein the user history behavior data comprises at least one of a history beauty set used by the target user account and second beauty material data associated with the history beauty set.
Optionally, the user historical behavior data associated with the target user account further includes: at least one of a beauty set display amount, a beauty set click amount, a beauty set use amount, a click rate of a displayed beauty set, and a use rate of a clicked beauty set associated with the target user account.
Optionally, the first determining module 1102 is configured to:
determining a first candidate beauty set matched with the to-be-processed picture according to the to-be-processed characteristic data, wherein the to-be-processed characteristic data comprises picture characteristic data and user historical behavior data, and the first candidate beauty set comprises a plurality of beauty materials;
And determining the first beauty material matched with the picture to be processed and the beauty intensity value of the first beauty material from a plurality of beauty material data of the first candidate beauty set according to the feature data to be processed.
Optionally, the first determining module 1102 is configured to:
performing feature cross processing on the feature data to be processed to obtain a first feature data set, wherein the first feature data set comprises the feature data to be processed and combined data of the feature data to be processed;
extracting features from the first feature data set to obtain second feature data of a preset number of feature channels, determining a first feature weight of each feature channel according to the second feature data of each feature channel, and obtaining first weighted feature data according to the second feature data and the first feature weight associated with each feature channel;
respectively inputting the first weighted feature data into a plurality of expert networks to obtain a plurality of third feature data, inputting the first weighted feature data into a gate network associated with a beauty set prediction task to obtain a first weight value of each third feature data in the beauty set prediction task, and obtaining second weighted feature data according to the plurality of third feature data and the first weight value of each third feature data;
And carrying out candidate beauty set prediction based on the first characteristic data set and the second weighted characteristic data to obtain a first candidate beauty set matched with the picture to be processed.
Optionally, the first determining module 1102 is configured to:
cross processing is carried out on the first target feature data to obtain a second feature data set, wherein the first target feature data comprises a first candidate beauty set and feature data to be processed, and the second feature data set comprises the first target feature data and combined data of the first target feature data;
extracting features from the second feature data set to obtain fourth feature data of a preset number of feature channels, determining second feature weights of each feature channel according to the fourth feature data of each feature channel, and obtaining third weighted feature data according to the fourth feature data and the second feature weights associated with each feature channel;
respectively inputting the third weighted feature data into a plurality of expert networks to obtain a plurality of fifth feature data, inputting the third weighted feature data into a gate network respectively associated with a plurality of beauty material prediction tasks to obtain a second weight value of each fifth feature data in each beauty material prediction task, and obtaining fourth weighted feature data according to the plurality of fifth feature data and the second weight value of each fifth feature data in each beauty material prediction task;
And carrying out beauty material prediction based on the second characteristic data set and the fourth weighted characteristic data, and determining a first beauty material matched with the picture to be processed from a plurality of beauty material data of the first candidate beauty set.
Optionally, the first determining module 1102 is configured to:
cross processing is carried out on the first target feature data to obtain a second feature data set, wherein the first target feature data comprises a first candidate beauty set and feature data to be processed, and the second feature data set comprises the first target feature data and combined data of the first target feature data;
extracting features from the second feature data set to obtain sixth feature data of a preset number of feature channels, determining third feature weights of each feature channel according to the sixth feature data of each feature channel, and obtaining fifth weighted feature data according to the sixth feature data and the third feature weights associated with each feature channel;
respectively inputting the fifth weighted feature data into a plurality of expert networks to obtain a plurality of seventh feature data, inputting the fifth weighted feature data into a gate network respectively associated with the beautifying intensity value prediction tasks of the plurality of beautifying materials to obtain a third weight value of each seventh feature data in the beautifying intensity value prediction task of each beautifying material, and obtaining sixth weighted feature data according to the plurality of seventh feature data and the third weight value of each seventh feature data in the beautifying intensity value prediction task of each beautifying material;
inputting the second feature data set and the sixth weighted feature data into a masking layer, so as to mask, in the sixth weighted feature data and according to the first candidate beauty set in the second feature data set, the weighted feature data of beauty materials not contained in the first candidate beauty set, thereby obtaining updated sixth weighted feature data;
and predicting the beautification intensity value according to the second characteristic data set and the updated sixth weighted characteristic data to obtain the beautification intensity value of the first beautification material matched with the picture to be processed.
Optionally, the first determining module 1102 is configured to:
determining a second candidate beauty set matched with the picture to be processed according to the picture feature data, wherein the second candidate beauty set comprises a plurality of beauty materials;
and determining the first beautifying material matched with the picture to be processed and the beautifying intensity value of the first beautifying material from a plurality of beautifying material data of the second candidate beautifying set according to the picture characteristic data.
Optionally, the first determining module 1102 is configured to:
performing feature cross processing on the picture feature data to obtain a third feature data set, wherein the third feature data set comprises the picture feature data and combined data of the picture feature data;
Feature extraction is carried out on the third feature data set to obtain eighth feature data of a preset number of feature channels, fourth feature weight of each feature channel is determined according to the eighth feature data of each feature channel, and seventh weighted feature data is obtained according to the eighth feature data and the fourth feature weight associated with each feature channel;
respectively inputting the seventh weighted feature data into a plurality of expert networks to obtain a plurality of ninth feature data, inputting the seventh weighted feature data into a gate network associated with the beauty set prediction task to obtain a fourth weight value of each ninth feature data in the beauty set prediction task, and obtaining eighth weighted feature data according to the plurality of ninth feature data and the fourth weight value of each ninth feature data;
and carrying out candidate beauty set prediction based on the third characteristic data set and the eighth weighted characteristic data to obtain a second candidate beauty set matched with the picture to be processed.
Optionally, the first determining module 1102 is configured to:
performing cross processing on the second target feature data to obtain a fourth feature data set, wherein the second target feature data comprises a second candidate beauty set and picture feature data, and the fourth feature data set comprises second target feature data and combined data of the second target feature data;
Feature extraction is carried out on the fourth feature data set to obtain tenth feature data of a preset number of feature channels, fifth feature weight of each feature channel is determined according to the tenth feature data of each feature channel, and ninth weighted feature data is obtained according to the tenth feature data and the fifth feature weight associated with each feature channel;
respectively inputting the ninth weighted feature data into a plurality of expert networks to obtain a plurality of eleventh feature data, inputting the ninth weighted feature data into a gate network respectively associated with a plurality of beauty material prediction tasks to obtain fifth weight values of each eleventh feature data in each beauty material prediction task, and obtaining tenth weighted feature data according to the plurality of eleventh feature data and the fifth weight values of each eleventh feature data in each beauty material prediction task;
and carrying out beauty material prediction based on the fourth characteristic data set and tenth weighted characteristic data, and determining a first beauty material matched with the picture to be processed in a plurality of beauty material data of the second candidate beauty set.
Optionally, the first determining module 1102 is configured to:
performing cross processing on the second target feature data to obtain a fourth feature data set, wherein the second target feature data comprises a second candidate beauty set and picture feature data, and the fourth feature data set comprises second target feature data and combined data of the second target feature data;
Carrying out feature extraction on the fourth feature data set to obtain twelfth feature data of a preset number of feature channels, determining sixth feature weights of each feature channel according to the twelfth feature data of each feature channel, and obtaining eleventh weighted feature data according to the twelfth feature data and the sixth feature weights associated with each feature channel;
respectively inputting eleventh weighted feature data into a plurality of expert networks to obtain a plurality of thirteenth feature data, inputting the eleventh weighted feature data into a gate network respectively associated with the beautifying intensity value prediction tasks of the plurality of beautifying materials to obtain a sixth weight value of each thirteenth feature data in the beautifying intensity value prediction task of each beautifying material, and obtaining twelfth weighted feature data according to the plurality of thirteenth feature data and the sixth weight value of each thirteenth feature data in the beautifying intensity value prediction task of each beautifying material;
inputting the fourth feature data set and the twelfth weighted feature data into a masking layer, so as to mask, in the twelfth weighted feature data and according to the second candidate beauty set in the fourth feature data set, the feature data of beauty materials not contained in the second candidate beauty set, thereby obtaining updated twelfth weighted feature data;
And predicting the beautification intensity value according to the fourth characteristic data set and the updated twelfth weighted characteristic data to obtain the beautification intensity value of the first beautification material matched with the picture to be processed.
Optionally, the apparatus further comprises an update module 1104 configured to:
in response to receiving a beauty material data replacement instruction, determining updated first beauty material data matched with a to-be-processed picture according to picture feature data and user history behavior data associated with a target user account;
and generating an updated target beauty set according to the updated first beauty material data, and determining the updated target beauty set as updated beauty special effect data corresponding to the picture to be processed.
Fig. 12 is a schematic diagram of a training apparatus for a beauty special effect data prediction model according to an exemplary embodiment; as shown in fig. 12, the training apparatus 1200 for a beauty special effect data prediction model comprises:
the collecting module 1201 is configured to collect sample picture feature data and sample labels of a sample picture, wherein the sample picture feature data comprises sample face feature data and sample scene feature data; the sample label comprises a sample beauty set, a sample beauty material and a sample beautification intensity value, the sample beauty set being the beauty set used when the sample picture was shot, the sample beauty material being the beauty material adjusted within that set when the sample picture was shot, and the sample beautification intensity value being the beautification intensity value of the sample beauty material when the sample picture was shot;
The training module 1202 is configured to iteratively train the to-be-trained beauty special effect data prediction model according to the sample picture feature data and the sample label until the to-be-trained beauty special effect data prediction model converges, and determine that its training is completed.
Optionally, the training module 1202 is configured to:
inputting sample picture feature data into the to-be-trained beauty set predictor model to obtain a predicted beauty set matched with the sample picture, and adjusting model parameters of the to-be-trained beauty set predictor model based on the predicted beauty set and the sample beauty set;
determining a sample candidate beauty set according to the predicted beauty set;
inputting the sample picture feature data and the sample candidate beauty set into the to-be-trained beauty material predictor model to obtain a predicted beauty material matched with the sample picture, and adjusting model parameters of the to-be-trained beauty material predictor model based on the predicted beauty material and the sample beauty material;
inputting the sample picture feature data and the sample candidate beauty set into the to-be-trained beautification intensity value predictor model to obtain a predicted beautification intensity value of the predicted beauty material matched with the sample picture, and adjusting model parameters of the to-be-trained beautification intensity value predictor model based on the predicted beautification intensity value and the sample beautification intensity value;
and repeating the above process until the to-be-trained beauty set predictor model, the to-be-trained beauty material predictor model and the to-be-trained beautification intensity value predictor model all converge, and determining that the to-be-trained beauty special effect data prediction model converges.
Optionally, the training module 1202 is configured to:
and selecting, based on a scheduled sampling strategy, the sample candidate beauty set from between the predicted beauty set and the sample beauty set.
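Scheduled sampling here means mixing ground truth and model output during training; a sketch with an exponentially decaying probability of choosing the ground-truth sample beauty set follows (the decay schedule is an assumption):

```python
import random

def pick_candidate(pred_set, sample_set, step: int, k: float = 0.999):
    """Scheduled sampling: early in training prefer the ground-truth
    sample beauty set; later, increasingly trust the predicted set."""
    p_truth = k ** step          # decays from 1.0 toward 0.0 over steps
    return sample_set if random.random() < p_truth else pred_set
```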
Optionally, the training module 1202 is configured to:
performing feature cross processing on the sample picture feature data to obtain a first sample feature data set, wherein the first sample feature data set comprises sample picture feature data and combined data of the sample picture feature data;
carrying out feature extraction on the first sample feature data set to obtain second sample feature data of a preset number of feature channels, determining first sample feature weights of each feature channel according to the second sample feature data of each feature channel, and obtaining first sample weighted feature data according to the second sample feature data and the first sample feature weights associated with each feature channel;
respectively inputting the first sample weighted feature data into a plurality of expert networks to obtain a plurality of third sample feature data, inputting the first sample weighted feature data into a gate network associated with a beauty set prediction task to obtain a first sample weight value of each third sample feature data in the beauty set prediction task, and obtaining second sample weighted feature data according to the plurality of third sample feature data and the first sample weight value of each third sample feature data;
And carrying out candidate beauty set prediction based on the first sample feature data set and the second sample weighted feature data to obtain a predicted beauty set matched with the sample picture; this sub-model is sketched below.
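The four steps above (feature crossing, channel-wise reweighting, expert networks, and a gate network) can be pictured with the hypothetical sketch below. All layer shapes, the pairwise-product feature cross, and the squeeze-and-excitation-style channel weighting are assumptions in the spirit of the cited multi-gate mixture-of-experts work, not this disclosure's exact architecture.

```python
import torch
import torch.nn as nn

class BeautySetPredictorSketch(nn.Module):
    """Hypothetical beauty set prediction sub-model; all dimensions assumed."""
    def __init__(self, in_dim=32, channels=16, num_experts=4, num_sets=100):
        super().__init__()
        cross_dim = in_dim + in_dim * in_dim        # features + pairwise crosses
        self.extract = nn.Linear(cross_dim, channels)
        # Channel weights derived from the channel features themselves.
        self.channel_weight = nn.Sequential(nn.Linear(channels, channels),
                                            nn.Sigmoid())
        self.experts = nn.ModuleList(
            [nn.Linear(channels, channels) for _ in range(num_experts)])
        # A single gate network for the single beauty set prediction task.
        self.gate = nn.Sequential(nn.Linear(channels, num_experts),
                                  nn.Softmax(dim=-1))
        self.head = nn.Linear(cross_dim + channels, num_sets)

    def forward(self, x):                           # x: (batch, in_dim)
        # Feature cross: original features plus all pairwise products.
        cross = torch.einsum("bi,bj->bij", x, x).flatten(1)
        first_set = torch.cat([x, cross], dim=-1)   # "first sample feature data set"
        chan = self.extract(first_set)              # per-channel feature data
        weighted = chan * self.channel_weight(chan) # first sample weighted features
        expert_out = torch.stack([e(weighted) for e in self.experts], dim=1)
        g = self.gate(weighted).unsqueeze(-1)       # weight of each expert output
        fused = (expert_out * g).sum(dim=1)         # second sample weighted features
        # Prediction uses both the crossed features and the fused features.
        return self.head(torch.cat([first_set, fused], dim=-1))
```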
Optionally, the training module 1202 is configured to:
determining a first loss function value according to the predicted beauty set, the sample beauty set and a first loss function, wherein the first loss function is a loss function based on weight adjustment of positive and negative samples;
and if the first loss function value is determined to be larger than a first preset threshold value, adjusting the model parameters of the beauty set predictor model to be trained; a sketch of this loss and the threshold-gated update follows.
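One common reading of a "loss function based on weight adjustment of positive and negative samples" is a binary cross-entropy whose positive and negative terms carry different weights. The weights and the threshold-gated update below are assumptions for illustration.

```python
import torch

def pn_weighted_bce(logits, targets, pos_weight=2.0, neg_weight=1.0):
    # Positive and negative samples contribute with different weights,
    # counteracting class imbalance (weights assumed for illustration).
    p = torch.sigmoid(logits)
    loss = -(pos_weight * targets * torch.log(p + 1e-8)
             + neg_weight * (1.0 - targets) * torch.log(1.0 - p + 1e-8))
    return loss.mean()

def maybe_update(loss, threshold, optimizer):
    # Per the text, model parameters are only adjusted while the loss value
    # still exceeds the preset threshold.
    if loss.item() > threshold:
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```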
Optionally, the training module 1202 is configured to:
performing cross processing on sample target feature data to obtain a second sample feature data set, wherein the sample target feature data comprises the sample candidate beauty set and the sample picture feature data, and the second sample feature data set comprises the sample target feature data and combined data of the sample target feature data;
carrying out feature extraction on the second sample feature data set to obtain fourth sample feature data of a preset number of feature channels, determining second sample feature weights of each feature channel according to the fourth sample feature data of each feature channel, and obtaining third sample weighted feature data according to the fourth sample feature data and the second sample feature weights associated with each feature channel;
Respectively inputting the third sample weighted feature data into a plurality of expert networks to obtain a plurality of fifth sample feature data, inputting the third sample weighted feature data into a gate network respectively associated with a plurality of beauty material prediction tasks to obtain a second sample weight value of each fifth sample feature data in each beauty material prediction task, and obtaining fourth sample weighted feature data according to the plurality of fifth sample feature data and the second sample weight value of each fifth sample feature data in each beauty material prediction task;
and carrying out beauty material prediction based on the second sample feature data set and the fourth sample weighted feature data, and determining the predicted beauty material matched with the sample picture among the plurality of pieces of beauty material data of the sample candidate beauty set; the per-task gate fusion used here is sketched below.
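The key difference from the beauty set sub-model is that each beauty material prediction task owns its own gate network over a shared pool of experts (a multi-gate mixture-of-experts). A hypothetical fusion layer with assumed shapes:

```python
import torch
import torch.nn as nn

class MultiGateFusion(nn.Module):
    """One gate per beauty material prediction task over shared experts."""
    def __init__(self, dim=16, num_experts=4, num_tasks=8):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_experts)])
        self.gates = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, num_experts), nn.Softmax(dim=-1))
             for _ in range(num_tasks)])

    def forward(self, weighted):                    # weighted: (batch, dim)
        expert_out = torch.stack([e(weighted) for e in self.experts], dim=1)
        # One fused representation per beauty material prediction task.
        fused = [(expert_out * g(weighted).unsqueeze(-1)).sum(dim=1)
                 for g in self.gates]
        return torch.stack(fused, dim=1)            # (batch, num_tasks, dim)
```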
Optionally, the training module 1202 is configured to:
determining a second loss function value according to the predicted beauty material, the sample beauty material and a second loss function, wherein the second loss function is a loss function based on weight adjustment of positive and negative samples;
and if the second loss function value is determined to be larger than a second preset threshold value, adjusting model parameters of the beauty material predictor model to be trained.
Optionally, the training module 1202 is configured to:
performing cross processing on sample target feature data to obtain a second sample feature data set, wherein the sample target feature data comprises the sample candidate beauty set and the sample picture feature data, and the second sample feature data set comprises the sample target feature data and combined data of the sample target feature data;
carrying out feature extraction on the second sample feature data set to obtain sixth sample feature data of a preset number of feature channels, determining third sample feature weights of each feature channel according to the sixth sample feature data of each feature channel, and obtaining fifth sample weighted feature data according to the sixth sample feature data and the third sample feature weights associated with each feature channel;
respectively inputting the fifth sample weighted feature data into a plurality of expert networks to obtain a plurality of seventh sample feature data, inputting the fifth sample weighted feature data into a gate network respectively associated with the beautifying intensity value prediction tasks of the plurality of beautifying materials to obtain a third sample weight value of each seventh sample feature data in the beautifying intensity value prediction task of each beautifying material, and obtaining sixth sample weighted feature data according to the plurality of seventh sample feature data and the third sample weight value of each seventh sample feature data in the beautifying intensity value prediction task of each beautifying material;
Inputting the second sample characteristic data set and the sixth sample weighted characteristic data into a masking layer to mask the characteristic data of the sample beautifying materials which are not contained in the sample candidate beautifying set in the sixth sample weighted characteristic data according to the sample candidate beautifying set in the second sample characteristic data set, so as to obtain updated sixth sample weighted characteristic data;
and predicting the beautification intensity value according to the second sample feature data set and the updated sixth sample weighted feature data to obtain a predicted beautification intensity value of the predicted beauty material matched with the sample picture; the masking step is sketched below.
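The masking layer can be read as zeroing, per sample, the weighted feature vectors of materials that are not in the candidate beauty set before intensity prediction. The sketch below assumes that reading and illustrative tensor shapes.

```python
import torch

def mask_by_candidate_set(task_feats, candidate_mask):
    # task_feats:     (batch, num_materials, dim) -- one weighted feature
    #                 vector per beautification intensity prediction task.
    # candidate_mask: (batch, num_materials) -- 1.0 where the material is in
    #                 the sample candidate beauty set, 0.0 elsewhere.
    # Zeroing non-member features is one simple interpretation of the
    # masking layer; the disclosure does not spell out the exact operation.
    return task_feats * candidate_mask.unsqueeze(-1)
```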
Optionally, the training module 1202 is configured to:
and carrying out iterative training on the beauty special effect data prediction model to be trained according to the sample picture characteristic data, the sample label and the sample user historical behavior data associated with the sample target user account, wherein the sample target user account is the user account of the sample picture.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be repeated here.
The exemplary embodiments of the present disclosure also provide an electronic device, which may be a terminal device or a server. The electronic device is described below with reference to fig. 13. It should be understood that the electronic device 1300 shown in fig. 13 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 13, the electronic device 1300 is embodied in the form of a general purpose computing device. The components of the electronic device 1300 may include, but are not limited to: at least one processing unit 1310, at least one memory unit 1320, and a bus 1330 connecting the different system components (including the memory unit 1320 and the processing unit 1310).
Wherein the storage unit stores program code that is executable by the processing unit 1310, such that the processing unit 1310 performs steps according to the various exemplary embodiments of the present disclosure described in the "exemplary method" section of this specification. For example, the processing unit 1310 may perform the method steps shown in fig. 2 or fig. 4.
The storage unit 1320 may include volatile storage units such as a Random Access Memory (RAM) 1321 and/or a cache memory 1322, and may further include a Read Only Memory (ROM) 1323.
The storage unit 1320 may also include a program/utility 1324 having a set (at least one) of program modules 1325, such program modules 1325 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1330 may include a data bus, an address bus, and a control bus.
The electronic device 1300 may also communicate with one or more external devices 1400 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.) via an input/output (I/O) interface 1340. The electronic device 1300 may also communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 1380. As shown, the network adapter 1380 communicates with the other modules of the electronic device 1300 over the bus 1330. It should be appreciated that, although not shown, other hardware and/or software modules may be used in connection with the electronic device 1300, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, the various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
In addition, the present disclosure also provides a computer-readable storage medium; instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the beauty special effect data determining method provided in the above embodiments, or to perform the beauty special effect data prediction model training method provided in the above embodiments.
In addition, the present disclosure also provides a computer program product comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the beauty special effect data determining method provided in the above embodiments, or the beauty special effect data prediction model training method provided in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (22)

1. A method for determining beauty special effect data, characterized by comprising:
acquiring picture characteristic data of a picture to be processed, wherein the picture characteristic data comprises face characteristic data and scene characteristic data;
determining first beautifying material data matched with the picture to be processed according to the picture characteristic data, wherein the first beautifying material data comprises first beautifying materials and beautifying intensity values of the first beautifying materials;
Generating a target beauty set according to the first beauty material data, and determining the target beauty set as beauty special effect data corresponding to the picture to be processed;
wherein the determining, according to the picture feature data, first beauty material data matched with the picture to be processed includes:
performing feature cross processing on the picture feature data to obtain a third feature data set, wherein the third feature data set comprises the picture feature data and combined data of the picture feature data;
performing feature extraction on the third feature data set to obtain eighth feature data of a preset number of feature channels, determining fourth feature weights of each feature channel according to the eighth feature data of each feature channel, and obtaining seventh weighted feature data according to the eighth feature data and the fourth feature weights associated with each feature channel;
respectively inputting the seventh weighted feature data into a plurality of expert networks to obtain a plurality of ninth feature data, inputting the seventh weighted feature data into a gate network associated with a beauty set prediction task to obtain a fourth weight value of each ninth feature data in the beauty set prediction task, and obtaining eighth weighted feature data according to the plurality of ninth feature data and the fourth weight value of each ninth feature data;
Performing candidate beauty set prediction based on the third characteristic data set and the eighth weighted characteristic data to obtain a second candidate beauty set matched with the picture to be processed, wherein the second candidate beauty set comprises a plurality of beauty materials;
and determining the first beauty material matched with the picture to be processed and the beauty intensity value of the first beauty material from a plurality of beauty material data of the second candidate beauty set according to the picture characteristic data.
2. The method for determining the special effect data of beauty according to claim 1, wherein the picture to be processed is a picture to be processed of a target user account;
the determining the first beauty material data matched with the to-be-processed picture according to the picture characteristic data comprises the following steps:
and determining first beauty material data matched with the to-be-processed picture according to the picture characteristic data and user history behavior data associated with the target user account, wherein the user history behavior data comprises at least one of a history beauty set used by the target user account and second beauty material data associated with the history beauty set.
3. The method of claim 2, wherein the user historical behavioral data associated with the target user account further comprises: at least one of a beauty set display amount, a beauty set click amount, a beauty set use amount, a click rate of a displayed beauty set, and a use rate of a clicked beauty set associated with the target user account.
4. The method for determining beauty special effect data according to claim 2 or 3, wherein the determining first beauty material data matched with the picture to be processed according to the picture feature data and user history behavior data associated with the target user account includes:
determining a first candidate beauty set matched with the to-be-processed picture according to-be-processed characteristic data, wherein the to-be-processed characteristic data comprise picture characteristic data and user historical behavior data, and the first candidate beauty set comprises a plurality of beauty materials;
and determining the first beauty material matched with the picture to be processed and the beauty intensity value of the first beauty material from a plurality of beauty material data of the first candidate beauty set according to the characteristic data to be processed.
5. The method for determining special effects data for beauty according to claim 4, wherein the determining a first candidate beauty set matching the to-be-processed picture according to the to-be-processed feature data comprises:
performing feature cross processing on the feature data to be processed to obtain a first feature data set, wherein the first feature data set comprises the feature data to be processed and combined data of the feature data to be processed;
extracting the first characteristic data set to obtain second characteristic data of a preset number of characteristic channels, determining first characteristic weights of each characteristic channel according to the second characteristic data of each characteristic channel, and obtaining first weighted characteristic data according to the second characteristic data and the first characteristic weights associated with each characteristic channel;
respectively inputting the first weighted feature data into a plurality of expert networks to obtain a plurality of third feature data, inputting the first weighted feature data into a gate network associated with a beauty set prediction task to obtain a first weight value of each third feature data in the beauty set prediction task, and obtaining second weighted feature data according to the plurality of third feature data and the first weight value of each third feature data;
And carrying out candidate beauty set prediction based on the first characteristic data set and the second weighted characteristic data to obtain a first candidate beauty set matched with the picture to be processed.
6. The method according to claim 4, wherein determining the first beauty material matching the to-be-processed picture from among a plurality of pieces of beauty material data of the first candidate beauty set based on the to-be-processed feature data includes:
performing cross processing on first target feature data to obtain a second feature data set, wherein the first target feature data comprises the first candidate beauty set and the feature data to be processed, and the second feature data set comprises the first target feature data and combined data of the first target feature data;
performing feature extraction on the second feature data set to obtain fourth feature data of a preset number of feature channels, determining second feature weights of each feature channel according to the fourth feature data of each feature channel, and obtaining third weighted feature data according to the fourth feature data and the second feature weights associated with each feature channel;
Respectively inputting the third weighted feature data into a plurality of expert networks to obtain a plurality of fifth feature data, inputting the third weighted feature data into a gate network respectively associated with a plurality of beauty material prediction tasks to obtain a second weight value of each fifth feature data in each beauty material prediction task, and obtaining fourth weighted feature data according to the plurality of fifth feature data and the second weight value of each fifth feature data in each beauty material prediction task;
and carrying out beauty material prediction based on the second characteristic data set and the fourth weighted characteristic data, and determining a first beauty material matched with the picture to be processed from a plurality of beauty material data of the first candidate beauty set.
7. The method according to claim 4, wherein determining the beautification intensity value of the first beautification material matched with the picture to be processed from a plurality of beautification material data of the first candidate beautification kit according to the feature data to be processed includes:
performing cross processing on first target feature data to obtain a second feature data set, wherein the first target feature data comprises the first candidate beauty set and the feature data to be processed, and the second feature data set comprises the first target feature data and combined data of the first target feature data;
Extracting features from the second feature data set to obtain sixth feature data of a preset number of feature channels, determining third feature weights of each feature channel according to the sixth feature data of each feature channel, and obtaining fifth weighted feature data according to the sixth feature data and the third feature weights associated with each feature channel;
respectively inputting the fifth weighted feature data into a plurality of expert networks to obtain a plurality of seventh feature data, inputting the fifth weighted feature data into a gate network respectively associated with the beautifying intensity value prediction tasks of a plurality of beautifying materials to obtain a third weight value of each seventh feature data in the beautifying intensity value prediction task of each beautifying material, and obtaining sixth weighted feature data according to the plurality of seventh feature data and the third weight value of each seventh feature data in the beautifying intensity value prediction task of each beautifying material;
inputting the second feature data set and the sixth weighted feature data into a masking layer to mask the weighted feature data which does not contain the beauty material in the first candidate beauty set in the sixth weighted feature data according to the first candidate beauty set in the second feature data set, so as to obtain updated sixth weighted feature data;
And predicting the beautification intensity value according to the second characteristic data set and the updated sixth weighted characteristic data to obtain the beautification intensity value of the first beautification material matched with the picture to be processed.
8. The method according to claim 1, wherein determining the first beauty material matching the picture to be processed from a plurality of pieces of beauty material data of the second candidate beauty set based on the picture feature data, comprises:
performing cross processing on second target feature data to obtain a fourth feature data set, wherein the second target feature data comprises the second candidate beauty set and the picture feature data, and the fourth feature data set comprises the second target feature data and combined data of the second target feature data;
carrying out feature extraction on the fourth feature data set to obtain tenth feature data of a preset number of feature channels, determining fifth feature weights of each feature channel according to the tenth feature data of each feature channel, and obtaining ninth weighted feature data according to the tenth feature data and the fifth feature weights associated with each feature channel;
Respectively inputting the ninth weighted feature data into a plurality of expert networks to obtain a plurality of eleventh feature data, inputting the ninth weighted feature data into a gate network respectively associated with a plurality of beauty material prediction tasks to obtain fifth weight values of each eleventh feature data in each beauty material prediction task, and obtaining tenth weighted feature data according to the plurality of eleventh feature data and the fifth weight values of each eleventh feature data in each beauty material prediction task;
and carrying out beauty material prediction based on the fourth characteristic data set and the tenth weighted characteristic data, and determining a first beauty material matched with the picture to be processed in a plurality of beauty material data of the second candidate beauty set.
9. The method according to claim 1, wherein determining the beautification intensity value of the first beautification material matched with the picture to be processed from a plurality of beautification material data of the second candidate beautification kit based on the picture feature data includes:
performing cross processing on second target feature data to obtain a fourth feature data set, wherein the second target feature data comprises the second candidate beauty set and the picture feature data, and the fourth feature data set comprises the second target feature data and combined data of the second target feature data;
Performing feature extraction on the fourth feature data set to obtain twelfth feature data of a preset number of feature channels, determining sixth feature weights of each feature channel according to the twelfth feature data of each feature channel, and obtaining eleventh weighted feature data according to the twelfth feature data and the sixth feature weights associated with each feature channel;
respectively inputting the eleventh weighted feature data into a plurality of expert networks to obtain a plurality of thirteenth feature data, inputting the eleventh weighted feature data into a gate network respectively associated with the beautifying intensity value prediction tasks of a plurality of beautifying materials to obtain a sixth weight value of each thirteenth feature data in the beautifying intensity value prediction task of each beautifying material, and obtaining twelfth weighted feature data according to the plurality of thirteenth feature data and the sixth weight value of each thirteenth feature data in the beautifying intensity value prediction task of each beautifying material;
inputting the fourth feature data set and the twelfth weighted feature data into a masking layer to mask feature data which does not contain beauty materials in a second candidate beauty set in the twelfth weighted feature data according to the second candidate beauty set in the fourth feature data set, so as to obtain updated twelfth weighted feature data;
And predicting the beautification intensity value according to the fourth characteristic data set and the updated twelfth weighted characteristic data to obtain the beautification intensity value of the first beautification material matched with the picture to be processed.
10. The method for determining the special effect data for beauty according to claim 1, further comprising:
in response to receiving a beauty material data replacement instruction, determining updated first beauty material data matched with the to-be-processed picture according to the picture characteristic data and user history behavior data associated with a target user account;
and generating an updated target beauty set according to the updated first beauty material data, and determining the updated target beauty set as updated beauty special effect data corresponding to the picture to be processed.
11. A training method of a beauty special effect data prediction model, characterized by comprising:
collecting sample picture feature data and sample labels of a sample picture, wherein the sample picture feature data comprises sample face feature data and sample scene feature data, the sample labels comprise a sample beauty set, sample beauty materials and sample beauty intensity values, the sample beauty set is the beauty set used when the sample picture is shot, the sample beauty materials are the beauty materials adjusted in the sample beauty set when the sample picture is shot, and the sample beauty intensity values are the beauty intensity values of the sample beauty materials when the sample picture is shot;
Carrying out iterative training on the to-be-trained beauty special effect data prediction model according to the sample picture characteristic data and the sample label until the to-be-trained beauty special effect data prediction model converges, and determining that the to-be-trained beauty special effect data prediction model is trained;
the iterative training of the beauty special effect data prediction model to be trained according to the sample picture characteristic data and the sample label comprises the following steps:
inputting the sample picture characteristic data into a beauty set prediction sub-model to be trained to obtain a prediction beauty set matched with the sample picture, and adjusting model parameters of the beauty set prediction sub-model to be trained based on the prediction beauty set and the sample beauty set;
determining a sample candidate beauty set according to the predicted beauty set;
inputting the sample picture characteristic data and the sample candidate beauty set into a beauty material predictor model to be trained to obtain a predicted beauty material matched with the sample picture, and adjusting model parameters of the beauty material predictor model to be trained based on the predicted beauty material and the sample beauty material;
Inputting the sample picture characteristic data and the sample candidate beauty set into a to-be-trained beautified intensity value predictor model to obtain a predicted beautified intensity value of a predicted beauty material matched with the sample picture, and adjusting model parameters of the to-be-trained beautified intensity value predictor model based on the predicted beautified intensity value and the sample beautified intensity value;
and repeating the above process until the beauty set predictive sub-model to be trained, the beauty material predictive sub-model to be trained and the beauty strength value predictive sub-model to be trained converge, and determining that the beauty special effect data predictive model to be trained converges.
12. The method of claim 11, wherein the determining a sample candidate beauty set from the predicted beauty set comprises:
and selecting, from the predicted beauty set and the sample beauty set, the sample candidate beauty set based on a scheduled sampling strategy.
13. The training method of the beauty special effect data prediction model according to claim 11, wherein inputting the sample picture feature data into a beauty set prediction sub-model to be trained to obtain a predicted beauty set matched with the sample picture comprises:
Performing feature cross processing on the sample picture feature data to obtain a first sample feature data set, wherein the first sample feature data set comprises the sample picture feature data and combined data of the sample picture feature data;
performing feature extraction on the first sample feature data set to obtain second sample feature data of a preset number of feature channels, determining first sample feature weights of each feature channel according to the second sample feature data of each feature channel, and obtaining first sample weighted feature data according to the second sample feature data and the first sample feature weights associated with each feature channel;
respectively inputting the first sample weighted feature data into a plurality of expert networks to obtain a plurality of third sample feature data, inputting the first sample weighted feature data into a gate network associated with a beauty set prediction task to obtain a first sample weight value of each third sample feature data in the beauty set prediction task, and obtaining second sample weighted feature data according to the plurality of third sample feature data and the first sample weight value of each third sample feature data;
And carrying out candidate beauty set prediction based on the first sample characteristic data set and the second sample weighted characteristic data to obtain a predicted beauty set matched with the sample picture.
14. The training method of the beauty special effect data prediction model according to claim 11, wherein the adjusting model parameters of the beauty set prediction sub-model to be trained based on the predicted beauty set and the sample beauty set comprises:
determining a first loss function value according to the predicted beauty set, the sample beauty set and a first loss function, wherein the first loss function is a loss function based on weight adjustment of positive and negative samples;
and if the first loss function value is determined to be larger than a first preset threshold value, adjusting model parameters of the beauty set predictor model to be trained.
15. The method for training a model for predicting specific data of beauty according to claim 11, wherein said inputting the sample picture feature data and the sample candidate beauty set into a beauty material prediction sub-model to be trained to obtain a predicted beauty material matched with the sample picture comprises:
Performing cross processing on sample target feature data to obtain a second sample feature data set, wherein the sample target feature data comprises the sample candidate beauty set and the sample picture feature data, and the second sample feature data set comprises the sample target feature data and combined data of the sample target feature data;
performing feature extraction on the second sample feature data set to obtain fourth sample feature data of a preset number of feature channels, determining second sample feature weights of each feature channel according to the fourth sample feature data of each feature channel, and obtaining third sample weighted feature data according to the fourth sample feature data and the second sample feature weights associated with each feature channel;
respectively inputting the third sample weighted feature data into a plurality of expert networks to obtain a plurality of fifth sample feature data, inputting the third sample weighted feature data into a gate network respectively associated with a plurality of beauty material prediction tasks to obtain a second sample weight value of each fifth sample feature data in each beauty material prediction task, and obtaining fourth sample weighted feature data according to the plurality of fifth sample feature data and the second sample weight value of each fifth sample feature data in each beauty material prediction task;
And carrying out beauty material prediction based on the second sample characteristic data set and the fourth sample weighted characteristic data, and determining predicted beauty materials matched with the sample picture in a plurality of pieces of beauty material data of the sample candidate beauty set.
16. The training method of the beauty special effect data prediction model according to claim 11, wherein the adjusting model parameters of the beauty material prediction sub-model to be trained based on the predicted beauty material and the sample beauty material comprises:
determining a second loss function value according to the predicted beauty material, the sample beauty material and a second loss function, wherein the second loss function is a loss function based on weight adjustment of positive and negative samples;
and if the second loss function value is determined to be larger than a second preset threshold value, adjusting the model parameters of the beauty material predictor model to be trained.
17. The method for training a model for predicting a specific data of beauty according to claim 11, wherein the step of inputting the sample picture feature data and the sample candidate beauty set into a model for predicting a beauty intensity value to be trained to obtain a predicted beauty intensity value of a predicted beauty material matched with the sample picture comprises:
Performing cross processing on sample target feature data to obtain a second sample feature data set, wherein the sample target feature data comprises the sample candidate beauty set and the sample picture feature data, and the second sample feature data set comprises the sample target feature data and combined data of the sample target feature data;
carrying out feature extraction on the second sample feature data set to obtain sixth sample feature data of a preset number of feature channels, determining third sample feature weights of each feature channel according to the sixth sample feature data of each feature channel, and obtaining fifth sample weighted feature data according to the sixth sample feature data and the third sample feature weights associated with each feature channel;
respectively inputting the fifth sample weighted feature data into a plurality of expert networks to obtain a plurality of seventh sample feature data, inputting the fifth sample weighted feature data into a gate network respectively associated with the beautifying intensity value prediction tasks of a plurality of beautifying materials to obtain a third sample weight value of each seventh sample feature data in the beautifying intensity value prediction task of each beautifying material, and obtaining sixth sample weighted feature data according to the plurality of seventh sample feature data and the third sample weight value of each seventh sample feature data in the beautifying intensity value prediction task of each beautifying material;
Inputting the second sample characteristic data set and the sixth sample weighted characteristic data into a masking layer to mask the characteristic data of the sample beautifying materials which are not contained in the sample candidate beautifying set in the sixth sample weighted characteristic data according to the sample candidate beautifying set in the second sample characteristic data set, so as to obtain updated sixth sample weighted characteristic data;
and predicting the beautification intensity value according to the second sample characteristic data set and the updated sixth sample weighted characteristic data to obtain a predicted beautification intensity value of the predicted beautification material matched with the sample picture.
18. The training method of the model for predicting the special effect data of beauty according to claim 11, wherein the performing iterative training on the model for predicting the special effect data of beauty to be trained according to the sample picture feature data and the sample label comprises:
and carrying out iterative training on the beauty special effect data prediction model to be trained according to the sample picture characteristic data, the sample label and sample user historical behavior data associated with a sample target user account, wherein the sample target user account is the user account of the sample picture.
19. A beauty special effect data determining apparatus, characterized by comprising:
an acquisition module configured to acquire picture feature data of a picture to be processed, wherein the picture feature data comprises face feature data and scene feature data;
a first determining module configured to determine first beauty material data matched with the to-be-processed picture according to the picture feature data, wherein the first beauty material data comprises first beauty materials and beauty intensity values of the first beauty materials;
the second determining module is configured to generate a target beauty set according to the first beauty material data and determine the target beauty set as beauty special effect data corresponding to the picture to be processed;
wherein the determining, according to the picture feature data, first beauty material data matched with the picture to be processed includes:
performing feature cross processing on the picture feature data to obtain a third feature data set, wherein the third feature data set comprises the picture feature data and combined data of the picture feature data;
performing feature extraction on the third feature data set to obtain eighth feature data of a preset number of feature channels, determining fourth feature weights of each feature channel according to the eighth feature data of each feature channel, and obtaining seventh weighted feature data according to the eighth feature data and the fourth feature weights associated with each feature channel;
Respectively inputting the seventh weighted feature data into a plurality of expert networks to obtain a plurality of ninth feature data, inputting the seventh weighted feature data into a gate network associated with a beauty set prediction task to obtain a fourth weight value of each ninth feature data in the beauty set prediction task, and obtaining eighth weighted feature data according to the plurality of ninth feature data and the fourth weight value of each ninth feature data;
performing candidate beauty set prediction based on the third characteristic data set and the eighth weighted characteristic data to obtain a second candidate beauty set matched with the picture to be processed, wherein the second candidate beauty set comprises a plurality of beauty materials;
and determining the first beauty material matched with the picture to be processed and the beauty intensity value of the first beauty material from a plurality of beauty material data of the second candidate beauty set according to the picture characteristic data.
20. A beauty special effect data prediction model training apparatus, characterized by comprising:
an acquisition module configured to collect sample picture feature data and sample labels of a sample picture, wherein the sample picture feature data comprises sample face feature data and sample scene feature data, the sample labels comprise a sample beauty set, sample beauty materials and sample beauty intensity values, the sample beauty set is the beauty set used when the sample picture is shot, the sample beauty materials are the beauty materials adjusted in the sample beauty set when the sample picture is shot, and the sample beauty intensity values are the beauty intensity values of the sample beauty materials when the sample picture is shot;
a training module configured to perform iterative training on the to-be-trained beauty special effect data prediction model according to the sample picture feature data and the sample label until the to-be-trained beauty special effect data prediction model converges, and determine that the to-be-trained beauty special effect data prediction model has been trained;
the iterative training of the beauty special effect data prediction model to be trained according to the sample picture characteristic data and the sample label comprises the following steps:
inputting the sample picture characteristic data into a beauty set prediction sub-model to be trained to obtain a prediction beauty set matched with the sample picture, and adjusting model parameters of the beauty set prediction sub-model to be trained based on the prediction beauty set and the sample beauty set;
determining a sample candidate beauty set according to the predicted beauty set;
inputting the sample picture characteristic data and the sample candidate beauty set into a beauty material predictor model to be trained to obtain a predicted beauty material matched with the sample picture, and adjusting model parameters of the beauty material predictor model to be trained based on the predicted beauty material and the sample beauty material;
Inputting the sample picture characteristic data and the sample candidate beauty set into a to-be-trained beautified intensity value predictor model to obtain a predicted beautified intensity value of a predicted beauty material matched with the sample picture, and adjusting model parameters of the to-be-trained beautified intensity value predictor model based on the predicted beautified intensity value and the sample beautified intensity value;
and repeating the above process until the beauty set predictive sub-model to be trained, the beauty material predictive sub-model to be trained and the beauty strength value predictive sub-model to be trained converge, and determining that the beauty special effect data predictive model to be trained converges.
21. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the beauty special effect data determining method according to any one of claims 1 to 10, or to execute the instructions to implement the beauty special effect data prediction model training method according to any one of claims 11 to 18.
22. A computer readable storage medium, wherein instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the beauty special effect data determining method according to any one of claims 1 to 10, or the beauty special effect data prediction model training method according to any one of claims 11 to 18.
CN202310133083.0A 2023-02-09 2023-02-09 Method, device, equipment and medium for determining and training beauty special effect data Active CN115841432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310133083.0A CN115841432B (en) 2023-02-09 2023-02-09 Method, device, equipment and medium for determining and training beauty special effect data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310133083.0A CN115841432B (en) 2023-02-09 2023-02-09 Method, device, equipment and medium for determining and training beauty special effect data

Publications (2)

Publication Number Publication Date
CN115841432A (en) 2023-03-24
CN115841432B (en) 2023-08-08

Family

ID=85579814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310133083.0A Active CN115841432B (en) 2023-02-09 2023-02-09 Method, device, equipment and medium for determining and training beauty special effect data

Country Status (1)

Country Link
CN (1) CN115841432B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106603928A (en) * 2017-01-20 2017-04-26 维沃移动通信有限公司 Shooting method and mobile terminal
CN107025629A (en) * 2017-04-27 2017-08-08 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107680128A (en) * 2017-10-31 2018-02-09 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN107835367A (en) * 2017-11-14 2018-03-23 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN107845072A (en) * 2017-10-13 2018-03-27 深圳市迅雷网络技术有限公司 Image generating method, device, storage medium and terminal device
CN108121957A (en) * 2017-12-19 2018-06-05 北京麒麟合盛网络技术有限公司 The method for pushing and device of U.S. face material
CN109410128A (en) * 2018-09-21 2019-03-01 联想(北京)有限公司 A kind of image processing method, device and electronic equipment
CN113453027A (en) * 2020-03-27 2021-09-28 阿里巴巴集团控股有限公司 Live video and virtual makeup image processing method and device and electronic equipment
CN114399424A (en) * 2021-12-23 2022-04-26 北京达佳互联信息技术有限公司 Model training method and related equipment
WO2023284401A1 (en) * 2021-07-14 2023-01-19 Oppo广东移动通信有限公司 Image beautification processing method and apparatus, storage medium, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts; Jiaqi Ma et al.; Research Track Paper; pp. 1930-1939 *

Also Published As

Publication number Publication date
CN115841432A (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN110364146B (en) Speech recognition method, speech recognition device, speech recognition apparatus, and storage medium
WO2020108396A1 (en) Video classification method, and server
CN111209970A (en) Video classification method and device, storage medium and server
CN110866563B (en) Similar video detection and recommendation method, electronic device and storage medium
CN113570689B (en) Portrait cartoon method, device, medium and computing equipment
CN111984821A (en) Method and device for determining dynamic cover of video, storage medium and electronic equipment
CN113179421B (en) Video cover selection method and device, computer equipment and storage medium
CN110956059B (en) Dynamic gesture recognition method and device and electronic equipment
CN110147833A (en) Facial image processing method, apparatus, system and readable storage medium storing program for executing
CN111199540A (en) Image quality evaluation method, image quality evaluation device, electronic device, and storage medium
CN112818995A (en) Image classification method and device, electronic equipment and storage medium
CN112148923A (en) Search result sorting method, sorting model generation method, device and equipment
Yang et al. Study of natural scene categories in measurement of perceived image quality
CN114528474A (en) Method and device for determining recommended object, electronic equipment and storage medium
CN113407772B (en) Video recommendation model generation method, video recommendation method and device
CN115841432B (en) Method, device, equipment and medium for determining and training beauty special effect data
CN115514995A (en) Method, device and equipment for displaying recommendation information of live broadcast room
CN115690544B (en) Multi-task learning method and device, electronic equipment and medium
CN111179155B (en) Image processing method and device, electronic equipment and storage medium
CN115205763A (en) Video processing method and device
CN109118469A (en) Prediction technique for saliency
CN114969544A (en) Hot data-based recommended content generation method, device, equipment and medium
CN114220175A (en) Motion pattern recognition method, motion pattern recognition device, motion pattern recognition apparatus, motion pattern recognition medium, and motion pattern recognition product
CN113569668A (en) Method, medium, apparatus and computing device for determining highlight segments in video
CN113947418A (en) Feedback information acquisition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant