CN118230377A - Image processing method, apparatus, device, computer readable storage medium, and product


Info

Publication number
CN118230377A
Authority: CN (China)
Prior art keywords: face, image, model, preset, trained
Legal status: Pending (assumption, not a legal conclusion)
Application number: CN202410175634.4A
Other languages: Chinese (zh)
Inventors: 王志强, 赵亚飞, 杜宗财, 陈毅, 范锡睿, 秦勤
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202410175634.4A
Publication of CN118230377A


Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides image processing methods, apparatuses, devices, computer readable storage media, and products, relating to the field of artificial intelligence, and in particular to the field of deep learning. The specific implementation scheme is as follows: a training data set is acquired, where the training data set comprises multiple groups of training data and each group comprises a preset face image and a reference face image; face parameters corresponding to the preset face image are identified; a face rendering operation is performed according to the face parameters to obtain a rendered image; the rendered image is input into a preset model to be trained to obtain the generated image output by the model; and the model to be trained is iteratively trained based on the generated image, the preset face image, and the reference face image until it meets a preset convergence condition, yielding the face generation model. More accurate and sharper face images can thus be generated with the face generation model, and face images with richer expressions can be generated by adjusting the face parameters.

Description

Image processing method, apparatus, device, computer readable storage medium, and product
Technical Field
The present disclosure relates to deep learning in artificial intelligence, and more particularly, to an image processing method, apparatus, device, computer readable storage medium, and product.
Background
With the continuous development of artificial intelligence, face generation technology is increasingly applied in related fields such as digital media and film and television production. Because a person's mouth shape, expression, and movements change to some extent while speaking, how to generate more accurate face images has become a problem to be solved urgently.
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, device, computer-readable storage medium, and product for accurately generating a face image.
According to a first aspect of the present disclosure, there is provided an image processing method including:
Acquiring a training data set, wherein the training data set comprises a plurality of groups of training data, the training data comprises a preset face image and a reference face image, and the reference face image and the preset face image comprise faces of the same person at different angles;
identifying face parameters corresponding to the preset face image;
performing face rendering operation according to the face parameters to obtain a rendered image;
inputting the rendering image into a preset model to be trained, and obtaining a generated image output by the model to be trained;
And carrying out iterative training on the model to be trained based on the generated image, a preset face image and a reference face image until the model to be trained meets a preset convergence condition, so as to obtain a face generation model.
According to a second aspect of the present disclosure, there is provided an image processing method including:
Acquiring a face image to be processed;
Determining face parameter information corresponding to the face image to be processed;
performing face rendering operation according to the face parameter information to obtain a rendering image to be processed;
Inputting the rendering image to be processed into a preset face generation model to obtain a target face image output by the face generation model, wherein the face generation model is obtained based on training of the method of the first aspect.
According to a third aspect of the present disclosure, there is provided an image processing apparatus including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a training data set, the training data set comprises a plurality of groups of training data, the training data comprises a preset face image and a reference face image, and the reference face image and the preset face image comprise faces of the same person at different angles;
The identification module is used for identifying face parameters corresponding to the preset face image;
the rendering module is used for performing face rendering operation according to the face parameters to obtain a rendered image;
the input module is used for inputting the rendering image into a preset model to be trained to obtain a generated image output by the model to be trained;
And the training module is used for carrying out iterative training on the model to be trained based on the generated image, the preset face image and the reference face image until the model to be trained meets the preset convergence condition, so as to obtain the face generation model.
According to a fourth aspect of the present disclosure, there is provided an image processing apparatus including:
the image acquisition module is used for acquiring a face image to be processed;
The determining module is used for determining face parameter information corresponding to the face image to be processed;
The processing module is used for performing face rendering operation according to the face parameter information to obtain a rendering image to be processed;
The generating module is used for inputting the rendering image to be processed into a preset face generating model to obtain a target face image output by the face generating model, wherein the face generating model is obtained based on the device training of the third aspect.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first or second aspect.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to the first or second aspect.
According to a seventh aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the method of the first or second aspect.
The technology according to the present disclosure can accurately generate a high-precision face image.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
Fig. 1 is a diagram of a system architecture upon which the present disclosure is based;
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 3 is a flowchart of an image processing method according to another embodiment of the present disclosure;
Fig. 4 is a flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 5 is a flowchart of an image processing method according to another embodiment of the present disclosure;
Fig. 6 is a schematic view of an image generation scenario provided by an embodiment of the present disclosure;
Fig. 7 is a flowchart of an image processing method according to another embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The present disclosure provides an image processing method, an image processing apparatus, an image processing device, a computer readable storage medium, and a computer program product, applied to deep learning in the field of artificial intelligence, so as to achieve the effect of generating more accurate face images.
Note that the face model in this embodiment is not a face model of any specific user and cannot reflect the personal information of any specific user. It should also be noted that the face images in this embodiment come from public data sets.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
Explanation of terms:
Soft rendering: a rendering technique that allows a developer to simulate the GPU rendering pipeline in software, without graphics hardware. Vertex coordinates are computed on the CPU, and rasterization determines which pixels need to be drawn, enabling a simple, coarse-grained rendering operation.
With the continuous development of artificial intelligence, face generation technology is increasingly applied in related fields such as digital media and film and television production. A person's mouth shape, expression, and movements change while speaking, which poses a great challenge for high-precision face generation.
In the related art, to realize face generation, a synchronization discriminator for audio and mouth shape may first be trained, the face generation model is then trained, and faces are generated based on the trained face generation model.
However, faces generated in this way are often overly smooth, with a low degree of restoration of the face texture; moreover, the generation of teeth, face edges, and other parts is poor, and obvious tone differences can occur.
To help the reader understand the principles of the implementations of the present disclosure more fully, the embodiments are now described in further detail in conjunction with figs. 1-10.
Fig. 1 is a diagram of the system architecture on which the present disclosure is based. As shown in fig. 1, this architecture includes at least a data server 11 and a server 12. The server 12 may be provided with an image processing apparatus, written in a language such as C/C++, Java, Shell, or Python; the data server 11 may be a cloud server or a server cluster in which a large amount of data is stored.
Based on the system architecture, the server 12 may acquire a training data set from the data server 11, and perform iterative training operation on a preset model to be trained based on the training data set.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the disclosure, as shown in fig. 2, where the method includes:
Step 201, acquiring a training data set, wherein the training data set comprises a plurality of groups of training data, the training data comprises a preset face image and a reference face image, and the reference face image and the preset face image comprise faces of the same person at different angles.
The execution subject of the present embodiment is an image processing apparatus, which may be deployed in a server. The server may be communicatively connected to a preset data server, so that the training data set can be obtained from the data server.
In this embodiment, in order to train the preset model to be trained, a training data set may be acquired in advance. The training data set may comprise a plurality of groups of training data, and each group comprises a preset face image and a reference face image that contain faces of the same person at different angles. For example, the preset face image may be a frontal face image of user A, and the reference face image a side face image of user A. Alternatively, the reference face image and the preset face image may contain faces of the same person under different lighting, which is not limited by the present disclosure.
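As an illustration only, a minimal sketch of how one group of such training data might be represented; the class and loader below are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from PIL import Image

@dataclass
class TrainingPair:
    preset: Image.Image     # preset face image, e.g., a frontal view
    reference: Image.Image  # reference face image: same person, another angle

def load_pair(preset_path: str, reference_path: str) -> TrainingPair:
    # Both images contain the face of the same person at different angles
    # (or, alternatively, under different lighting).
    return TrainingPair(Image.open(preset_path), Image.open(reference_path))
```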
Optionally, since the reference face image and the preset face image contain the same face at different angles, a consistency constraint can be imposed on the model to be trained based on the reference face image, so that the trained face generation model can generate more accurate face images.
Step 202, identifying face parameters corresponding to the preset face image.
In this embodiment, for each preset face image in the training dataset, a face parameter corresponding to the preset face image may be identified. Any algorithm capable of realizing face parameter recognition can be adopted to perform the recognition operation on the preset face image, and the method is not limited in this disclosure. For example, a preset face image may be modeled by a preset face parameterized model, so as to obtain face parameters such as a face shape, a face expression, a face texture, a face pose, and the like. The face parameterized model includes, but is not limited to, a 3DMM model.
And 203, performing face rendering operation according to the face parameters to obtain a rendered image.
In this embodiment, after obtaining the face parameters corresponding to the preset face image, face rendering operation may be performed based on the face parameters to obtain the rendered image.
Alternatively, to reduce the amount of computation in the model training process, the rendered image may be a coarse-grained face image. The face image with coarse granularity is rendered based on the face parameters, so that the model to be trained can learn more accurate face characteristics in the model training process, and the trained model can output more accurate face images.
Step 204, inputting the rendered image to a preset model to be trained, and obtaining a generated image output by the model to be trained.
In this embodiment, after the rendered image is determined based on the preset face image, it may be input into the preset model to be trained. Having received the rendered image, the model to be trained may output a generated image based on it.
And 205, performing iterative training on the model to be trained based on the generated image, a preset face image and a reference face image until the model to be trained meets a preset convergence condition, so as to obtain a face generation model.
In this embodiment, in order to improve the processing accuracy of the model, the model to be trained may be constrained based on the generated image and the preset face image in the training data, so that the generated image output by the model more closely matches the preset face image. A consistency constraint is also imposed based on the generated image and the reference face image, so that the generated image output by the model and the reference face image share the same identity.
Therefore, after the generated image output by the model to be trained is obtained, iterative training can be performed on the model to be trained based on the generated image, the preset face image and the reference face image until the model to be trained meets the preset convergence condition, and the face generated model is obtained.
The preset convergence condition may be that the loss value of the model to be trained is smaller than a preset loss threshold, that the difference between the loss values obtained in two consecutive training rounds is smaller than a preset difference threshold, that the training duration reaches a preset duration threshold, that the number of training iterations reaches a preset count threshold, and so on, which is not limited by the present disclosure.
According to the image processing method, parameter identification is performed on the preset face image, a rendered image is built from the face parameters, and the rendered image is input into the preset model to be trained to obtain a generated image. Iterative training based on the generated image, the preset face image, and the reference face image lets the model learn more accurate face characteristics during training, improving the processing precision of the face generation model. Further, because the model to be trained is jointly constrained by the generated image and the reference face image, the face generation model can generate face images that closely match the preset face image while sharing the identity of the reference face image.
Fig. 3 is a flow chart of an image processing method according to another embodiment of the disclosure, where, based on any of the foregoing embodiments, as shown in fig. 3, step 205 includes:
Step 301, calculating a first loss value corresponding to the model to be trained based on the generated image and the preset face image.
And 302, calculating a second loss value corresponding to the model to be trained based on the generated image and the reference face image.
Step 303, determining a joint loss value corresponding to the model to be trained according to the first loss value and the second loss value.
And step 304, performing iterative training on the model to be trained through the joint loss value until the model to be trained meets a preset convergence condition, and obtaining a face generation model.
In this embodiment, in order to improve the processing precision of the model, the model to be trained may be constrained based on the generated image, the preset face image and the reference face image, so that the generated image output by the model to be trained is closer to the preset face image and is a face image with the same identity as the reference face image.
Therefore, after the generated image output by the model to be trained is obtained, the first loss value corresponding to the model to be trained can be calculated based on the generated image and the preset face image. The first loss value may be a pixel loss value, and the calculation of the first loss value may be performed by a preset loss algorithm.
Further, a second loss value corresponding to the model to be trained is calculated based on the generated image and the reference face image. The calculation of the second loss value may be implemented based on a face difference between the generated image and the reference face image and a preset loss algorithm.
After the first loss value and the second loss value are obtained respectively, the model to be trained can be constrained by the combination of the first loss value and the second loss value. And determining a joint loss value corresponding to the model to be trained according to the first loss value and the second loss value. The first loss value and the second loss value may have different weights, and the user may also adjust the weights according to actual requirements, which is not limited in the disclosure. And carrying out iterative training on the model to be trained through the joint loss value until the model to be trained meets a preset convergence condition, and obtaining the face generation model. For example, parameters of the model to be trained may be updated by a deep learning back propagation method based on the joint loss values.
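A PyTorch-style sketch of the joint loss described above; the L1 pixel loss, the cosine identity term, and the weights w1/w2 are illustrative assumptions — the disclosure only states that the two loss values are combined with adjustable weights.

```python
import torch.nn.functional as F

def joint_loss(generated, preset, gen_emb, ref_emb, w1=1.0, w2=0.5):
    # First loss value: pixel-level difference between the generated image
    # and the preset face image.
    pixel_loss = F.l1_loss(generated, preset)
    # Second loss value: identity difference between the generated image and
    # the reference face image, here from precomputed face embeddings
    # (see the identity-loss sketch below).
    identity_loss = 1.0 - F.cosine_similarity(gen_emb, ref_emb, dim=-1).mean()
    # Joint loss value: weighted combination; the weights are adjustable.
    return w1 * pixel_loss + w2 * identity_loss
```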
According to the image processing method, the first loss value corresponding to the model to be trained is calculated based on the generated image and the preset face image, and the second loss value corresponding to the model to be trained is calculated based on the generated image and the reference face image, so that the image which is more attached to the preset face image can be generated by restricting the model to be trained based on the first loss value, and the face image which is identical to the reference face image can be output by restricting the model to be trained based on the second loss value.
Further, based on any of the above embodiments, step 302 includes:
And respectively inputting the generated image and the reference face image into a preset face recognition model to obtain a first recognition result and a second recognition result.
And determining characteristic difference information between the generated image and the reference face image according to the first identification result and the second identification result.
And determining the second loss value based on the characteristic difference information and a preset loss algorithm.
In this embodiment, the calculation of the second loss value may be implemented based on the face difference in the generated image and the reference face image and a preset loss algorithm.
Alternatively, the generated image and the reference face image may be input into a preset face recognition model, respectively, to obtain a first recognition result corresponding to the generated image and a second recognition result corresponding to the reference face image. Any face recognition model may be used to recognize the generated image and the reference face image; for example, FaceNet may be employed. Characteristic difference information between the generated image and the reference face image is then determined according to the first recognition result and the second recognition result, and the second loss value is determined based on the characteristic difference information and a preset loss algorithm. The preset loss algorithm includes, but is not limited to, ArcFace loss and cosine similarity loss, which is not limited by the present disclosure.
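A sketch of this identity constraint, assuming a pretrained face recognition network (e.g., a FaceNet-style embedder) that returns one embedding per image; the `embedder` interface is hypothetical, and cosine similarity is just one of the loss options the text names.

```python
import torch.nn.functional as F

def identity_loss(embedder, generated, reference):
    gen_emb = embedder(generated)   # first recognition result: (B, D) embedding
    ref_emb = embedder(reference)   # second recognition result
    # Characteristic difference between the two results, turned into a loss value.
    return 1.0 - F.cosine_similarity(gen_emb, ref_emb, dim=-1).mean()
```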
According to the image processing method, the second loss value is calculated based on the face difference between the generated image and the reference face image and the preset loss algorithm, so that after the parameters of the model to be trained are updated based on the second loss value, the model to be trained can be constrained to output the face image with the same identity as the reference face image, and the model processing precision is further improved.
Further, based on any of the above embodiments, step 304 includes:
And if the model to be trained does not meet the preset convergence condition based on the joint loss value, adjusting the parameters of the model to be trained according to the joint loss value, taking the adjusted model to be trained as a current model to be trained, and returning to execute the step of inputting the rendering image into the preset model to be trained until the model to be trained meets the preset convergence condition, so as to obtain the face generation model.
And if the model to be trained meets the preset convergence condition based on the joint loss value, determining the current model to be trained as the face generation model.
In this embodiment, after the joint loss value is obtained, it may be determined, based on the joint loss value, whether the model to be trained meets the preset convergence condition. If so, the current model is judged to be trained, and the current model to be trained is determined to be the face generation model. Otherwise, the current model has not finished training, and iterative training needs to continue. Therefore, the parameters of the model to be trained can be adjusted according to the joint loss value, and the adjusted model is taken as the current model to be trained; the parameter update may be performed by the back-propagation method of deep learning, or the parameters may be adjusted by other training methods, which is not limited here.
Further, after completing parameter adjustment of the model to be trained, the step of inputting the rendering image to the preset model to be trained may be performed in a returning manner, and the training process is repeated until the model to be trained meets the preset convergence condition, so as to obtain the face generating model.
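A skeleton of this iterative procedure, assuming a mean-loss threshold as the preset convergence condition and a generic optimizer; `compute_joint_loss` is assumed to wrap the loss computation sketched earlier.

```python
def train(model, optimizer, data_loader, compute_joint_loss,
          loss_threshold=1e-3, max_epochs=100):
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for rendered, preset, reference in data_loader:
            generated = model(rendered)       # generated image from the model
            loss = compute_joint_loss(generated, preset, reference)
            optimizer.zero_grad()
            loss.backward()                   # back-propagation parameter update
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(data_loader) < loss_threshold:
            break                             # preset convergence condition met
    return model                              # the face generation model
```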
According to the image processing method, iterative training is performed on the model to be trained based on the joint loss value determined from the first loss value and the second loss value until the model meets the preset convergence condition. The first loss value constrains the model to generate images that more closely match the preset face image, and the second loss value constrains it to output face images with the same identity as the reference face image, improving the processing precision of the model.
Optionally, on the basis of any embodiment above, step 202 includes:
and cutting the preset face image to obtain a face area image.
Inputting the face region image into a preset face parameterization model to obtain face parameters corresponding to the preset face image, wherein the face parameters comprise one or more of face shape parameters, face expression parameters, face texture parameters and face posture parameters.
In this embodiment, after the training data set is acquired, in order to quickly determine the face parameters corresponding to the preset face image, the preset face image may be subjected to a preprocessing operation.
Optionally, a face region in the preset face image may be determined, and a clipping operation is performed on the preset face image based on the face region, so as to obtain a face region image. In addition, the preprocessing operations such as zooming and rotating can be performed on the preset face image according to actual requirements, and the method is not limited in this disclosure.
Further, after the face area image is obtained, the face area image may be input to a preset face parameterization model, so as to obtain face parameters corresponding to the preset face image, where the face parameters include one or more of face shape parameters, face expression parameters, face texture parameters, and face pose parameters.
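A minimal sketch of this cropping-and-parameterization step; `detector` and `param_model` stand in for a face detector and a 3DMM-style face parameterized model, and their interfaces are assumptions for illustration.

```python
def identify_face_params(image, detector, param_model) -> dict:
    # Crop the preset face image down to its face region.
    x0, y0, x1, y1 = detector(image)            # assumed: detector -> bounding box
    face_region = image.crop((x0, y0, x1, y1))  # PIL-style crop
    # Fit a parameterized face model (e.g., a 3DMM-style model) to the crop.
    params = param_model.fit(face_region)       # assumed fitting interface
    return {
        "shape": params.shape,                  # face shape parameters
        "expression": params.expression,        # facial expression parameters
        "texture": params.texture,              # face texture parameters
        "pose": params.pose,                    # face pose parameters
    }
```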
According to the image processing method, after the preset face image is obtained, the face parameters are identified after the preset face image is preprocessed, so that accuracy and efficiency of face parameter identification can be improved.
Optionally, on the basis of any one of the embodiments above, step 203 includes:
and carrying out face soft rendering operation according to the face parameters to obtain a rendering result.
And adjusting the resolution of the rendering result to be a preset resolution to obtain the rendering image.
In this embodiment, in order to reduce the amount of computation in the model training process, the rendered image may be a coarse-grained face image. Therefore, after the face parameters are obtained, face soft rendering operation can be performed according to the face parameters, and a rendering result is obtained. And adjusting the rendering result based on the preset resolution to obtain a coarse-grained rendering image. The preset resolution may be smaller than the resolution of the rendering result. Or the preset resolution may be preset, and the rendering image with the preset resolution is directly rendered based on the face parameter in the rendering process, which is not limited in the disclosure.
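A sketch of producing the coarse-grained rendered image, assuming the soft renderer returns a full-resolution tensor that is then resized to the preset resolution; the renderer interface is hypothetical.

```python
import torch.nn.functional as F

def render_coarse(face_params, soft_renderer, preset_resolution=(64, 64)):
    # Soft-render the face from its parameters (assumed to return (1, C, H, W)).
    result = soft_renderer(face_params)
    # Downscale to the preset resolution to cut computation during training.
    return F.interpolate(result, size=preset_resolution,
                         mode="bilinear", align_corners=False)
```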
According to the image processing method, the rendering result is adjusted based on the preset resolution to obtain the rendering image, so that the calculated amount in the model training process can be reduced, and the model training efficiency is improved.
Fig. 4 is a flowchart of an image processing method according to an embodiment of the disclosure, as shown in fig. 4, where the method includes:
step 401, acquiring a face image to be processed.
Step 402, determining face parameter information corresponding to the face image to be processed.
And 403, performing face rendering operation according to the face parameter information to obtain a rendering image to be processed.
Step 404, inputting the rendering image to be processed into a preset face generation model to obtain a target face image output by the face generation model, wherein the face generation model is obtained by training based on the method described in any embodiment.
The execution subject of the present embodiment is an image processing apparatus, which may be deployed in a server. The server may be communicatively connected to a terminal device, so that an image generation operation initiated by a user on the terminal device can be carried out. The server may further be provided with a face generation model trained using the image processing method of any of the above embodiments.
In this embodiment, the image processing apparatus may acquire a face image to be processed. The face image to be processed may be determined by the terminal device when the image generation operation is initiated; it may be collected from the user on the terminal device in real time or selected from a preset storage path, which is not limited by the present disclosure.
Further, after the face image to be processed is obtained, the face parameter information corresponding to it can be determined. Any algorithm capable of recognizing face parameter information may be used, which is not limited by the present disclosure. For example, the face image to be processed may be modeled with a preset face parameterized model to obtain face parameter information such as face shape, facial expression, face texture, and face pose; the face parameterized model includes, but is not limited to, a 3DMM model. A face rendering operation is then performed according to the face parameter information to obtain a rendering image to be processed. To reduce the computation of the face generation model and improve face generation efficiency, the rendering image to be processed may be rendered at a preset resolution smaller than its original resolution. The rendering image to be processed is input into the preset face generation model to obtain the target face image output by the model, where the face generation model is trained based on the method of any of the above embodiments.
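Putting steps 401-404 together, a minimal sketch of the inference pipeline; the helper functions reuse the hypothetical interfaces from the training-side sketches above and are not part of the disclosure.

```python
def generate_face(image, detector, param_model, soft_renderer, face_gen_model):
    # Steps 401-403: identify face parameters, then render a coarse image.
    params = identify_face_params(image, detector, param_model)
    rendered = render_coarse(params, soft_renderer)
    # Step 404: the trained face generation model outputs the target face image.
    return face_gen_model(rendered)
```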
According to the image processing method, after the face image to be processed is obtained, face parameter information corresponding to the face image to be processed is determined. And performing face rendering operation according to the face parameter information to obtain a rendering image to be processed. Therefore, the rendering image to be processed can be input into a preset face generation model, so that the face generation model can generate a high-precision target face image. The face generation model can be obtained by performing iterative training on a model to be trained based on a first loss value determined by the generated image and a preset face image, a second loss value determined by the preset face image and a reference face image, so that an image which is more fit with the preset face image can be generated by restricting the model to be trained based on the first loss value, the face image which is identical to the reference face image can be output by restricting the model to be trained based on the second loss value, and the processing precision of the model is improved.
Fig. 5 is a schematic flow chart of an image processing method according to another embodiment of the disclosure, where, on the basis of any of the foregoing embodiments, as shown in fig. 5, after step 402, the method further includes:
step 501, obtaining a parameter adjustment request, where the parameter adjustment request includes an update parameter, where the update parameter is used to adjust an expression of a face in the face image to be processed.
Step 502, performing an adjustment operation on face parameter information corresponding to the face image to be processed according to the parameter adjustment request.
In this embodiment, since the face generation model may generate the target face image based on the to-be-processed rendered image rendered by the face parameter information, the user may adjust the facial expression and the mouth shape of the target face image by adjusting the face parameter information according to the actual requirement.
Optionally, after the face parameter information corresponding to the face image to be processed is determined, the user may update the face parameter information. A parameter adjustment request is acquired, where the parameter adjustment request includes an update parameter used to adjust the expression of the face in the face image to be processed. The face parameter information corresponding to the face image to be processed is adjusted according to the parameter adjustment request, and a rendering image to be processed whose expression, mouth shape, and the like differ from those of the face image to be processed is then generated based on the adjusted face parameter information. After this rendering image to be processed is input into the face generation model, a target face image whose expression, mouth shape, and the like differ from those of the face image to be processed can be generated, as sketched below.
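A minimal sketch of applying such a parameter adjustment request before re-rendering; representing the request as a dictionary of updated expression coefficients is an assumption for illustration, as the disclosure does not fix a request format.

```python
def apply_adjustment(face_params: dict, adjustment_request: dict) -> dict:
    """Apply the update parameter from a parameter adjustment request.

    Overriding the expression coefficients changes the expression (and, if
    desired, the mouth shape) of the subsequently rendered image, and hence
    of the target face image produced by the face generation model.
    """
    adjusted = dict(face_params)                               # keep shape/texture/pose
    adjusted["expression"] = adjustment_request["expression"]  # assumed request key
    return adjusted
```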
Fig. 6 is a schematic view of an image generation scenario provided in an embodiment of the present disclosure. As shown in fig. 6, a face image 61 to be processed, determined by the user, may be obtained, and the face parameter information corresponding to it is identified. A parameter adjustment request is acquired, and the face parameter information is adjusted based on the update parameters in the request. For example, the expression corresponding to the face image 61 to be processed may be a smiling expression, while the expression corresponding to the adjusted face parameter information may be a pucker expression. A soft rendering operation is performed based on the adjusted face parameter information to obtain the rendering image 62 to be processed, which is input into the preset face generation model 63 to generate the target image 64, in which the face presents the pucker expression.
According to the image processing method, the face parameter information is adjusted based on the user's actual requirements, so that a rendering image to be processed whose expression, mouth shape, and the like differ from those of the face image to be processed can be generated from the adjusted face parameter information. After this rendering image is input into the face generation model, target face images with different expressions and mouth shapes can be generated quickly and accurately.
Fig. 7 is a flowchart of an image processing method according to another embodiment of the present disclosure, where, on the basis of any one of the above embodiments, the face image to be processed is a video frame in a preset video. Step 402 includes:
Step 701, acquiring at least one frame of associated image frame associated with the face image to be processed in a preset video.
Step 702, determining the face key point information of the face image to be processed and the at least one frame of associated image frame.
Step 703, determining the face parameter information based on the face key point information of the to-be-processed face image and the at least one associated image frame.
In this embodiment, the image to be processed may be a single image or may be a video frame in a preset video. When the face image to be processed is a video frame in the preset video, the face parameter information can be determined based on the associated video frame of the video frame in the preset video because the face expression and the mouth shape in each video frame are different and have continuity.
Optionally, at least one associated image frame associated with the face image to be processed in the preset video may be acquired. The associated image frames may be the N video frames preceding and the N video frames following the face image to be processed. Face key point information of the face image to be processed and of the at least one associated image frame is determined, and the face parameter information is determined based on this key point information. For example, the positions of the face key points in the face image to be processed may be determined based on their positions in the preceding N frames and the following N frames, so that the face parameter information of the face region can be determined from the positions of the face key points; the present disclosure is not limited in this regard.
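A sketch of exploiting the preceding and following N frames, under the assumption that keypoint positions are simply averaged over the window; the disclosure does not fix the aggregation rule.

```python
import numpy as np

def smooth_keypoints(frames_keypoints, center_idx, n=2):
    """Estimate keypoints for the current frame from the N frames before
    and after it, exploiting the continuity of facial motion in video."""
    lo = max(0, center_idx - n)
    hi = min(len(frames_keypoints), center_idx + n + 1)
    window = np.stack(frames_keypoints[lo:hi])  # (frames, num_points, 2)
    return window.mean(axis=0)                  # averaged keypoint positions
```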
According to the image processing method, the face parameter information is determined based on at least one frame of associated image frame associated with the face image to be processed, so that the face parameter information can be determined more accurately based on the continuous action of the face in the preset video.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure, as shown in fig. 8, the apparatus includes: an acquisition module 81, a recognition module 82, a rendering module 83, an input module 84, and a training module 85. The acquiring module 81 is configured to acquire a training data set, where the training data set includes multiple sets of training data, the training data includes a preset face image and a reference face image, and the reference face image and the preset face image include faces of the same person at different angles. And the identification module 82 is configured to identify a face parameter corresponding to the preset face image. And the rendering module 83 is configured to perform face rendering operation according to the face parameters, and obtain a rendered image. The input module 84 is configured to input the rendered image to a preset model to be trained, and obtain a generated image output by the model to be trained. And the training module 85 is configured to iteratively train the model to be trained based on the generated image, a preset face image, and a reference face image until the model to be trained meets a preset convergence condition, thereby obtaining a face generation model.
Further, on the basis of any one of the foregoing embodiments, performing iterative training on the model to be trained based on the generated image, a preset face image, and a reference face image until the model to be trained meets a preset convergence condition, to obtain a face generation model, including: and calculating a first loss value corresponding to the model to be trained based on the generated image and the preset face image. And calculating a second loss value corresponding to the model to be trained based on the generated image and the reference face image. And determining a joint loss value corresponding to the model to be trained according to the first loss value and the second loss value. And carrying out iterative training on the model to be trained through the joint loss value until the model to be trained meets a preset convergence condition, and obtaining a face generation model.
Further, on the basis of any one of the foregoing embodiments, the calculating, based on the generated image and the reference face image, a second loss value corresponding to the model to be trained includes: and respectively inputting the generated image and the reference face image into a preset face recognition model to obtain a first recognition result and a second recognition result. And determining characteristic difference information between the generated image and the reference face image according to the first identification result and the second identification result. And determining the second loss value based on the characteristic difference information and a preset loss algorithm.
Further, on the basis of any one of the foregoing embodiments, performing iterative training on the model to be trained by using the joint loss value until the model to be trained meets a preset convergence condition, to obtain a face generation model, including: and if the model to be trained does not meet the preset convergence condition based on the joint loss value, adjusting the parameters of the model to be trained according to the joint loss value, taking the adjusted model to be trained as a current model to be trained, and returning to execute the step of inputting the rendering image into the preset model to be trained until the model to be trained meets the preset convergence condition, so as to obtain the face generation model. And if the model to be trained meets the preset convergence condition based on the joint loss value, determining the current model to be trained as the face generation model.
Further, on the basis of any one of the foregoing embodiments, the identifying the face parameter corresponding to the preset face image includes: and cutting the preset face image to obtain a face area image. Inputting the face region image into a preset face parameterization model to obtain face parameters corresponding to the preset face image, wherein the face parameters comprise one or more of face shape parameters, face expression parameters, face texture parameters and face posture parameters.
Further, on the basis of any one of the above embodiments, performing face rendering operation according to the face parameter to obtain a rendered image, including: and carrying out face soft rendering operation according to the face parameters to obtain a rendering result. And adjusting the resolution of the rendering result to be a preset resolution to obtain the rendering image.
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure, as shown in fig. 9, the apparatus includes: an image acquisition module 91, a determination module 92, a processing module 93 and a generation module 94. The image acquiring module 91 is configured to acquire a face image to be processed. And the determining module 92 is configured to determine face parameter information corresponding to the face image to be processed. And the processing module 93 is used for performing face rendering operation according to the face parameter information to obtain a rendering image to be processed. The generating module 94 is configured to input the rendering image to be processed into a preset face generating model, and obtain a target face image output by the face generating model, where the face generating model is obtained by training the apparatus according to any one of the foregoing embodiments.
Further, on the basis of any one of the above embodiments, after determining the face parameter information corresponding to the face image to be processed, the method further includes: and acquiring a parameter adjustment request, wherein the parameter adjustment request comprises an update parameter, and the update parameter is used for adjusting the expression of the face in the face image to be processed. And carrying out adjustment operation on the face parameter information corresponding to the face image to be processed according to the parameter adjustment request.
Further, on the basis of any one of the above embodiments, the face image to be processed is a video frame in a preset video. The determining the face parameter information corresponding to the face image to be processed comprises the following steps: and acquiring at least one frame of associated image frame associated with the face image to be processed in a preset video. And determining the face key point information of the face image to be processed and the at least one frame of associated image frame. And determining the face parameter information based on the face key point information of the to-be-processed face image and the at least one frame of associated image frame.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device, including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments described above.
According to an embodiment of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method of any one of the above embodiments.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the apparatus 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the device 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
Various components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and communication unit 1009 such as a network card, modem, wireless communication transceiver, etc. Communication unit 1009 allows device 1000 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1001 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1001 performs the various methods and processes described above, such as the image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1000 via the ROM 1002 and/or the communication unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the image processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (21)

1. An image processing method, comprising:
acquiring a training data set, wherein the training data set comprises a plurality of groups of training data, the training data comprises a preset face image and a reference face image, and the reference face image and the preset face image comprise faces of the same person at different angles;
identifying face parameters corresponding to the preset face image;
performing face rendering operation according to the face parameters to obtain a rendered image;
inputting the rendered image into a preset model to be trained, and obtaining a generated image output by the model to be trained;
and carrying out iterative training on the model to be trained based on the generated image, the preset face image, and the reference face image until the model to be trained meets a preset convergence condition, so as to obtain a face generation model.
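For illustration only, the training flow recited in claim 1 can be sketched in PyTorch as follows. Every helper name here (identify_face_params, render_face, joint_loss_fn) is a hypothetical stand-in; the claims do not disclose a concrete implementation.

    import torch
    from torch.utils.data import DataLoader

    def train_face_generator(dataset, identify_face_params, render_face,
                             model, joint_loss_fn, epochs=10, lr=1e-4):
        # dataset yields (preset_face, reference_face) pairs showing the
        # same person at different angles, as recited in claim 1.
        loader = DataLoader(dataset, batch_size=8, shuffle=True)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for preset, reference in loader:
                params = identify_face_params(preset)  # face parameters
                rendered = render_face(params)         # face rendering operation
                generated = model(rendered)            # generated image
                loss = joint_loss_fn(generated, preset, reference)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return model

In practice the fixed epoch count would be replaced by the convergence test of claim 4; it is kept here only to keep the sketch short.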
2. The method of claim 1, wherein the iteratively training the model to be trained based on the generated image, the preset face image, and the reference face image until the model to be trained meets a preset convergence condition, to obtain a face generation model, includes:
calculating a first loss value corresponding to the model to be trained based on the generated image and the preset face image;
calculating a second loss value corresponding to the model to be trained based on the generated image and the reference face image;
determining a joint loss value corresponding to the model to be trained according to the first loss value and the second loss value;
and carrying out iterative training on the model to be trained through the joint loss value until the model to be trained meets the preset convergence condition, to obtain the face generation model.
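One plausible reading of claim 2 as code, assuming the first loss is a pixel-level reconstruction term and the second an identity term; the weighting coefficient lam and the weighted-sum combination rule are assumptions, since the claim fixes neither.

    import torch.nn.functional as F

    def joint_loss_fn(generated, preset, reference, identity_loss, lam=0.1):
        # First loss value: generated image vs. preset face image.
        first = F.l1_loss(generated, preset)
        # Second loss value: generated image vs. reference face image
        # (see the claim-3 sketch below for one possible identity_loss).
        second = identity_loss(generated, reference)
        # Joint loss value determined from the first and second loss values.
        return first + lam * second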
3. The method according to claim 2, wherein the calculating the second loss value corresponding to the model to be trained based on the generated image and the reference face image includes:
respectively inputting the generated image and the reference face image into a preset face recognition model to obtain a first recognition result and a second recognition result;
determining feature difference information between the generated image and the reference face image according to the first recognition result and the second recognition result;
and determining the second loss value based on the feature difference information and a preset loss algorithm.
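Claim 3's identity term could look like the sketch below: both images pass through a face recognition model treated as frozen, and the feature difference drives the loss. Cosine distance is an assumption; the claim only requires "a preset loss algorithm".

    import torch
    import torch.nn.functional as F

    def identity_loss(generated, reference, face_rec_model):
        # Second recognition result: features of the reference face image,
        # computed without gradients because the recognizer is frozen.
        with torch.no_grad():
            ref_feat = face_rec_model(reference)
        # First recognition result: features of the generated image;
        # gradients flow back to the generator through the recognizer.
        gen_feat = face_rec_model(generated)
        # Turn the feature difference information into a scalar loss.
        return 1.0 - F.cosine_similarity(gen_feat, ref_feat, dim=-1).mean()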
4. The method according to claim 2, wherein the iteratively training the model to be trained by the joint loss value until the model to be trained meets a preset convergence condition, to obtain a face generation model, includes:
if it is determined, based on the joint loss value, that the model to be trained does not meet the preset convergence condition, adjusting parameters of the model to be trained according to the joint loss value, taking the adjusted model to be trained as the current model to be trained, and returning to the step of inputting the rendered image into the preset model to be trained, until the model to be trained meets the preset convergence condition, so as to obtain the face generation model;
and if it is determined, based on the joint loss value, that the model to be trained meets the preset convergence condition, determining the current model to be trained as the face generation model.
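Claim 4 leaves the convergence criterion open. A minimal sketch, assuming a simple loss-plateau test; the threshold and window values are invented for illustration.

    def has_converged(loss_history, threshold=1e-3, window=5):
        # Treat the model as converged once the joint loss value has changed
        # by less than `threshold` over the last `window` iterations.
        if len(loss_history) <= window:
            return False
        return abs(loss_history[-1] - loss_history[-1 - window]) < threshold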
5. The method of claim 1, wherein the identifying the face parameters corresponding to the preset face image includes:
cutting the preset face image to obtain a face region image;
inputting the face region image into a preset face parameterization model to obtain the face parameters corresponding to the preset face image, wherein the face parameters comprise one or more of face shape parameters, face expression parameters, face texture parameters, and face posture parameters.
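Claim 5 maps naturally onto a 3DMM-style parameterization. In the sketch below, detect_face_box and param_model are hypothetical stand-ins for the cropping step and the preset face parameterization model.

    def identify_face_params(preset_image, detect_face_box, param_model):
        # Cut the face region out of the preset face image.
        x0, y0, x1, y1 = detect_face_box(preset_image)
        face_region = preset_image[..., y0:y1, x0:x1]
        # The parameterization model returns the four parameter groups
        # named in the claim.
        out = param_model(face_region)
        return {
            "shape": out["shape"],            # face shape parameters
            "expression": out["expression"],  # face expression parameters
            "texture": out["texture"],        # face texture parameters
            "pose": out["pose"],              # face posture parameters
        }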
6. The method according to any one of claims 1-5, wherein the performing a face rendering operation according to the face parameters to obtain a rendered image includes:
performing a face soft-rendering operation according to the face parameters to obtain a rendering result;
and adjusting the resolution of the rendering result to a preset resolution to obtain the rendered image.
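One way to read claim 6 is a differentiable ("soft") rasterization followed by a resize to the resolution the generator expects. Here soft_rasterize stands in for a renderer such as those offered by differentiable-rendering libraries, and bilinear interpolation is an assumption.

    import torch.nn.functional as F

    def render_face(params, soft_rasterize, out_size=(256, 256)):
        # Soft rendering keeps the pipeline trainable end to end.
        result = soft_rasterize(params)  # rendering result, (N, C, H, W)
        # Adjust the resolution of the rendering result to the preset size.
        return F.interpolate(result, size=out_size, mode="bilinear",
                             align_corners=False)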
7. An image processing method, comprising:
acquiring a face image to be processed;
determining face parameter information corresponding to the face image to be processed;
performing a face rendering operation according to the face parameter information to obtain a rendered image to be processed;
inputting the rendered image to be processed into a preset face generation model to obtain a target face image output by the face generation model, wherein the face generation model is trained based on the method of any one of claims 1-6.
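Claim 7's inference path is the training pipeline without the loss terms. A minimal sketch, reusing the hypothetical helpers from the earlier sketches:

    import torch

    @torch.no_grad()
    def generate_face(image, identify_face_params, render_face, face_model):
        params = identify_face_params(image)  # face parameter information
        rendered = render_face(params)        # rendered image to be processed
        return face_model(rendered)           # target face image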
8. The method of claim 7, further comprising, after determining the face parameter information corresponding to the face image to be processed:
acquiring a parameter adjustment request, wherein the parameter adjustment request comprises an update parameter, and the update parameter is used for adjusting the expression of the face in the face image to be processed;
and adjusting the face parameter information corresponding to the face image to be processed according to the parameter adjustment request.
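Claim 8's expression editing could amount to overwriting the expression coefficients before rendering. The request format below is invented for illustration.

    def apply_parameter_adjustment(face_params, adjustment_request):
        # adjustment_request carries the update parameters for the face's
        # expression, per the claim's stated purpose; the other parameter
        # groups (shape, texture, pose) are left untouched.
        updated = dict(face_params)
        updated["expression"] = adjustment_request["expression"]
        return updated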
9. The method of claim 7, wherein the face image to be processed is a video frame in a preset video, and the determining the face parameter information corresponding to the face image to be processed includes:
acquiring at least one associated image frame associated with the face image to be processed in the preset video;
determining face key point information of the face image to be processed and of the at least one associated image frame;
and determining the face parameter information based on the face key point information of the face image to be processed and of the at least one associated image frame.
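For video input (claim 9), key points from neighbouring frames can stabilise the per-frame parameter estimate. Averaging is one simple aggregation, not necessarily the patented one; detect_keypoints and params_from_keypoints are hypothetical.

    import torch

    def video_face_params(frame, associated_frames, detect_keypoints,
                          params_from_keypoints):
        # Detect face key points in the frame and its associated frames.
        kpts = [detect_keypoints(f) for f in [frame, *associated_frames]]
        # Average across frames for temporal stability, then map the
        # result to face parameter information.
        stacked = torch.stack(kpts)  # (T, K, 2)
        return params_from_keypoints(stacked.mean(dim=0))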
10. An image processing apparatus comprising:
an acquisition module, configured to acquire a training data set, wherein the training data set comprises a plurality of groups of training data, the training data comprises a preset face image and a reference face image, and the reference face image and the preset face image comprise faces of the same person at different angles;
an identification module, configured to identify face parameters corresponding to the preset face image;
a rendering module, configured to perform a face rendering operation according to the face parameters to obtain a rendered image;
an input module, configured to input the rendered image into a preset model to be trained to obtain a generated image output by the model to be trained;
and a training module, configured to iteratively train the model to be trained based on the generated image, the preset face image, and the reference face image until the model to be trained meets a preset convergence condition, to obtain a face generation model.
11. The apparatus of claim 10, wherein the iteratively training the model to be trained based on the generated image, the preset face image, and the reference face image until the model to be trained meets a preset convergence condition, to obtain a face generation model, includes:
calculating a first loss value corresponding to the model to be trained based on the generated image and the preset face image;
calculating a second loss value corresponding to the model to be trained based on the generated image and the reference face image;
determining a joint loss value corresponding to the model to be trained according to the first loss value and the second loss value;
and carrying out iterative training on the model to be trained through the joint loss value until the model to be trained meets the preset convergence condition, to obtain the face generation model.
12. The apparatus of claim 11, wherein the calculating the second loss value corresponding to the model to be trained based on the generated image and the reference face image includes:
respectively inputting the generated image and the reference face image into a preset face recognition model to obtain a first recognition result and a second recognition result;
determining feature difference information between the generated image and the reference face image according to the first recognition result and the second recognition result;
and determining the second loss value based on the feature difference information and a preset loss algorithm.
13. The apparatus of claim 11, wherein the iteratively training the model to be trained by the joint loss value until the model to be trained meets a preset convergence condition, to obtain a face generation model, includes:
if it is determined, based on the joint loss value, that the model to be trained does not meet the preset convergence condition, adjusting parameters of the model to be trained according to the joint loss value, taking the adjusted model to be trained as the current model to be trained, and returning to the step of inputting the rendered image into the preset model to be trained, until the model to be trained meets the preset convergence condition, so as to obtain the face generation model;
and if it is determined, based on the joint loss value, that the model to be trained meets the preset convergence condition, determining the current model to be trained as the face generation model.
14. The apparatus of claim 10, wherein the identifying the face parameters corresponding to the preset face image includes:
cutting the preset face image to obtain a face region image;
inputting the face region image into a preset face parameterization model to obtain the face parameters corresponding to the preset face image, wherein the face parameters comprise one or more of face shape parameters, face expression parameters, face texture parameters, and face posture parameters.
15. The apparatus according to any one of claims 10-14, wherein the performing a face rendering operation according to the face parameters to obtain a rendered image includes:
performing a face soft-rendering operation according to the face parameters to obtain a rendering result;
and adjusting the resolution of the rendering result to a preset resolution to obtain the rendered image.
16. An image processing apparatus comprising:
an image acquisition module, configured to acquire a face image to be processed;
a determining module, configured to determine face parameter information corresponding to the face image to be processed;
a processing module, configured to perform a face rendering operation according to the face parameter information to obtain a rendered image to be processed;
and a generating module, configured to input the rendered image to be processed into a preset face generation model to obtain a target face image output by the face generation model, wherein the face generation model is trained by the apparatus of any one of claims 10-15.
17. The apparatus of claim 16, further comprising, after determining the face parameter information corresponding to the face image to be processed:
acquiring a parameter adjustment request, wherein the parameter adjustment request comprises an update parameter, and the update parameter is used for adjusting the expression of the face in the face image to be processed;
and adjusting the face parameter information corresponding to the face image to be processed according to the parameter adjustment request.
18. The apparatus of claim 16, wherein the face image to be processed is a video frame in a preset video, and the determining the face parameter information corresponding to the face image to be processed includes:
acquiring at least one associated image frame associated with the face image to be processed in the preset video;
determining face key point information of the face image to be processed and of the at least one associated image frame;
and determining the face parameter information based on the face key point information of the face image to be processed and of the at least one associated image frame.
19. An electronic device, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6 or 7-9.
20. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6 or 7-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of claims 1-6 or 7-9.
Application CN202410175634.4A, filed 2024-02-07 with priority date 2024-02-07: Image processing method, apparatus, device, computer readable storage medium, and product (publication CN118230377A, status Pending)

Priority Applications (1)

Application Number: CN202410175634.4A
Priority Date: 2024-02-07
Filing Date: 2024-02-07
Title: Image processing method, apparatus, device, computer readable storage medium, and product

Publications (1)

Publication Number: CN118230377A
Publication Date: 2024-06-21

Family

ID: 91503966

Family Applications (1)

Application Number: CN202410175634.4A (publication CN118230377A), Status: Pending

Country Status (1)

Country: CN, Publication: CN118230377A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination