CN116452703B - User avatar generation method, device, computer equipment and storage medium - Google Patents


Info

Publication number: CN116452703B
Authority: CN (China)
Prior art keywords: head, model, target, contour, reconstruction
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310710670.1A
Other languages: Chinese (zh)
Other versions: CN116452703A (en)
Inventors: 黄婷婷, 何理达, 董少灵
Current Assignee: Shenzhen Rabbit Exhibition Intelligent Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shenzhen Rabbit Exhibition Intelligent Technology Co ltd
Application filed by Shenzhen Rabbit Exhibition Intelligent Technology Co ltd
Priority to CN202310710670.1A
Publication of CN116452703A (application)
Application granted
Publication of CN116452703B (grant)

Classifications

    • G06T 11/206: 2D [Two Dimensional] image generation; drawing of charts or graphs
    • G06T 11/001: 2D image generation; texturing; colouring; generation of texture or colour
    • G06T 7/10: Image analysis; segmentation; edge detection
    • G06V 10/774: Image or video recognition using machine learning; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Image or video recognition using neural networks
    • G06V 40/10: Recognition of human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/161: Human faces: detection; localisation; normalisation
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30201: Subject of image: face (human being; person)
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a user avatar generation method, a device, computer equipment and a storage medium. The method comprises the following steps: after receiving an avatar generation instruction from a target user, acquiring a real image of the target user's head and obtaining a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer; inputting the real head image into the target reconstruction model, performing head contour reconstruction through the contour reconstruction layer to obtain a head contour model, and performing texture reconstruction through the texture reconstruction layer to obtain a head texture map; determining the target user's desired avatar style; and finally rendering a user avatar for the target user based on the head contour model, the head texture map and the desired avatar style. The method preserves the realism of the user avatar and strengthens the user's sense of ownership of it, while also improving its design quality; the style of the avatar is brought closer to the user's preferences, expressing the user's individuality and enhancing the practical effect of the avatar.

Description

User avatar generation method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for generating a user avatar, a computer device, and a storage medium.
Background
With the development and popularity of the internet, users interact with a wide variety of systems and applications every day. Typically, when a user registers an account with a system or application, the user is asked to set information such as a nickname, a user avatar, gender, age and hobbies, so that a more personalized user profile or user portrait can be established; the user avatar serves as a window for the user's personalized self-presentation.
At present, there are two common ways of generating a user avatar: one is to provide a number of fixed avatar templates from which the user selects a suitable template as his or her own avatar; the other is to receive image material uploaded by the user (for example, from the album on the user terminal and/or an image captured in real time) and, after certain adjustments (such as resizing), use it to generate the user avatar. Both approaches offer very limited freedom in designing the avatar: the user can only choose from fixed templates or make minor adjustments under preset constraints. The resulting avatars are poor in realism and design quality, uniform in effect, and unable to meet users' needs for personalized display; the user feels no sense of ownership of the avatar, and the avatar is of little practical use.
Disclosure of Invention
The invention provides a user avatar generation method, a device, computer equipment and a storage medium, which are intended to solve the problems that conventional user avatars cannot meet users' needs for personalized display and give users no sense of ownership, so that the avatars are of little practical use.
In view of the above problems, the present invention provides a user avatar generation method, comprising:
after receiving an avatar generation instruction from a target user, collecting a real image of the target user's head;
determining a desired avatar style of the target user;
acquiring a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer, wherein the target reconstruction model is obtained by training a preset neural network on a plurality of head sample images;
inputting the real head image into the target reconstruction model, performing head contour reconstruction through the contour reconstruction layer to obtain a head contour model, and performing texture reconstruction through the texture reconstruction layer to obtain a head texture map;
and rendering a user avatar for the target user based on the head contour model, the head texture map and the desired avatar style.
Optionally, the target reconstruction model further includes an image segmentation layer, and inputting the real head image into the target reconstruction model, performing head contour reconstruction through the contour reconstruction layer to obtain the head contour model, and performing texture reconstruction through the texture reconstruction layer to obtain the head texture map includes:
inputting the real head image into the image segmentation layer for image segmentation to obtain a hair image and a face image of the real head image;
inputting the hair image and the face image into the contour reconstruction layer to obtain a head contour model comprising a hair contour and a face contour;
and inputting the hair image and the face image into the texture reconstruction layer to obtain a head texture map comprising hair texture and face texture.
Optionally, rendering a user avatar for the target user based on the head contour model, the head texture map and the desired avatar style includes:
determining whether the target user has enabled the virtual character function;
if the target user has enabled the virtual character function, presenting a plurality of differently designed virtual character models to the target user through the user terminal, and determining a target virtual character model according to the target user's feedback;
updating the head contour model based on the target virtual character model to obtain a target contour model, and updating the head texture map based on the target virtual character model to obtain a target texture map;
and rendering the target contour model based on the target texture map and the desired avatar style to obtain the user avatar.
Optionally, updating the head contour model based on the target virtual character model to obtain the target contour model, and updating the head texture map based on the target virtual character model to obtain the target texture map includes:
determining whether the target virtual character model includes both a hair model and a face model;
if the target virtual character model includes both a hair model and a face model, determining contour difference data and color difference data between the target virtual character model and the head contour model;
adjusting the shape contour of the head contour model based on the contour difference data to obtain the target contour model;
and adjusting the shape and color of the head texture map based on the color difference data to obtain the target texture map.
Optionally, after determining whether the target virtual character model includes both a hair model and a face model, the method further includes:
if the target virtual character model includes only a hair model or only a face model, adjusting the corresponding hair contour model or face contour model within the head contour model based on that hair model or face model to obtain the target contour model;
and adjusting the corresponding hair texture map or face texture map within the head texture map based on that hair model or face model to obtain the target texture map.
Optionally, determining the target virtual character model according to the target user's feedback includes:
after the plurality of differently designed virtual character models have been presented to the target user, determining whether a model selection instruction is received from the target user;
if a model selection instruction is received, taking the virtual character model selected by the target user as the target virtual character model;
and if no model selection instruction is received from the target user, determining the virtual character model that best matches the target user according to the target user's user portrait, and taking it as the target virtual character model.
Optionally, after determining whether the target user has enabled the virtual character function, the method further includes:
if the target user has not enabled the virtual character function, determining the holiday elements desired by the target user;
and rendering the head contour model based on the head texture map, the desired avatar style and the holiday elements to obtain the user avatar.
Optionally, the preset neural network includes a contour reconstruction network, a texture reconstruction network and a generation network, the generation network being connected to the contour reconstruction network and the texture reconstruction network respectively, and the target reconstruction model is obtained by training as follows:
acquiring a plurality of real head images and applying multi-style processing to each head image to obtain a plurality of head sample images of different styles, each head sample image corresponding to one piece of standard style information;
inputting a head sample image into the preset neural network, performing contour reconstruction on the head sample image through the contour reconstruction network to obtain a reconstructed contour model, and performing texture reconstruction on the head sample image through the texture reconstruction network to obtain a reconstructed texture map;
inputting the standard style information of the head sample image, the reconstructed contour model and the reconstructed texture map into the generation network for image generation to obtain a head reconstruction image;
determining an image loss value between the head sample image and the head reconstruction image;
and when the image loss value does not meet the convergence condition, iteratively updating the parameters of the preset neural network based on the plurality of head sample images, until the image loss value meets the convergence condition, at which point the converged contour reconstruction network and texture reconstruction network are output as the target reconstruction model.
A user avatar generation device is provided, including:
a collection module, configured to collect a real image of the target user's head after receiving an avatar generation instruction from the target user;
a determination module, configured to determine the desired avatar style of the target user;
an acquisition module, configured to acquire a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer, wherein the target reconstruction model is obtained by training a preset neural network on a plurality of head sample images;
a reconstruction module, configured to input the real head image into the target reconstruction model, perform head contour reconstruction through the contour reconstruction layer to obtain a head contour model, and perform texture reconstruction through the texture reconstruction layer to obtain a head texture map;
and a rendering module, configured to render a user avatar for the target user based on the head contour model, the head texture map and the desired avatar style.
A computer device is provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the user avatar generation method described above when executing the computer program.
A computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the user avatar generation method described above.
In one aspect of the user avatar generation method, device, computer equipment and storage medium provided here, after an avatar generation instruction is received from a target user, a real image of the target user's head is collected and the target user's desired avatar style is determined; a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer is then acquired, the real head image is input into the target reconstruction model, head contour reconstruction is performed through the contour reconstruction layer to obtain a head contour model, and texture reconstruction is performed through the texture reconstruction layer to obtain a head texture map; finally, a user avatar is rendered for the target user based on the head contour model, the head texture map and the desired avatar style. During avatar generation, a pre-trained target reconstruction model reconstructs the contour and texture of the real head image, and the avatar is then generated from the reconstructed head contour model and head texture map, which preserves the realism of the avatar and strengthens the user's sense of ownership of it. In addition, determining the user's desired avatar style allows the avatar to be rendered more faithfully to that style, improving its design quality, bringing it closer to the user's preferences, expressing the user's individuality, meeting the user's personalization needs, and enhancing the practical effect of the avatar.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is evident that the drawings described below show only some embodiments of the invention, and that a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a schematic view of the application environment of a user avatar generation method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a user avatar generation method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an implementation of step S40 in FIG. 2;
FIG. 4 is a flowchart illustrating an implementation of step S50 in FIG. 2;
FIG. 5 is a schematic flow chart of another implementation of step S50 in FIG. 2;
FIG. 6 is a schematic diagram of a user avatar generation device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is evident that the embodiments described are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
The user avatar generation method provided by the embodiments of the invention can be applied in an application environment as shown in FIG. 1, in which a user terminal communicates with a server over a network. When a user logs into or registers with a system (such as a preset editing platform) through a user terminal and needs to generate a user avatar for an account of that system, the user sends an avatar generation instruction to the system's server through the user terminal. After receiving the avatar generation instruction from the target user, the server acquires a real image of the target user's head through the user terminal, for example by receiving a real head image uploaded from the terminal's album or by capturing one through the terminal's camera; at the same time, the server determines the target user's desired avatar style. The server then acquires a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer, inputs the real head image into the target reconstruction model, performs head contour reconstruction through the contour reconstruction layer to obtain a head contour model, and performs texture reconstruction through the texture reconstruction layer to obtain a head texture map. Finally, a user avatar is rendered for the target user based on the head contour model, the head texture map and the desired avatar style. Because a pre-trained target reconstruction model reconstructs the contour and texture of the real head image and the avatar is generated from the reconstruction results, the realism of the avatar is preserved and the user's sense of ownership of it is strengthened; determining the user's desired avatar style allows the avatar to be rendered in that style, improving its design quality, bringing it closer to the user's preferences, expressing the user's individuality, meeting the user's need for personalized display, and enhancing the practical effect of the avatar. Moreover, using a pre-trained target reconstruction model for contour and texture reconstruction improves data-processing efficiency while preserving the accuracy of contour and texture extraction, thereby improving both the efficiency and the quality of avatar generation.
The user terminal is a device that interacts with the server and provides local services to the user; it may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a user avatar generation method is provided. Taking its application to the server in FIG. 1 as an example, the method includes the following steps:
S10: after receiving an avatar generation instruction from the target user, collect a real image of the target user's head.
When a user needs to generate a user avatar, the user sends an avatar generation instruction to the system's server through the user terminal, and after receiving the instruction the server acquires a real image of the target user's head through the user terminal.
In this embodiment, the real head image of the target user can be acquired in many ways, and this embodiment places no restriction on the method. For example, the server may acquire a historical head image of the target user from the local gallery (album) of the user terminal; alternatively, it may activate the shooting function of the user terminal and invoke the terminal's camera to capture a head image of the target user in real time as the real head image.
For example, when the target user logs into the application of an internet system and clicks an avatar generation button, the user terminal sends an avatar generation instruction to the server. After receiving the instruction, the server displays an avatar setting interface to the target user through the user terminal, offering two options: take a photo, or choose from the album. If the target user selects the photo option, the server invokes the terminal's camera, starts the shooting function and prompts the target user to capture a head image, thereby collecting the real head image. If the target user chooses the album option, the server directs the user terminal to open the local gallery, display its images and prompt the target user to select a head image; the selected image is then taken as the real head image.
S20: determine the desired avatar style of the target user.
After collecting the real head image of the target user, the server also needs to determine the target user's desired avatar style.
The desired avatar style can be determined in various ways. For example, it may be a style determined by intention recognition on style description information (text or voice) entered by the target user; in this embodiment the desired avatar style may then be a passage of descriptive text, such as "make the avatar cartoon-like, with a pink background and kawaii elements added".
Alternatively, a variety of avatar styles may be presented to the target user for selection, and the style chosen by the target user is taken as the desired avatar style. These styles include common ones such as artistic, fashionable, fresh, minimalist, cute, flashy, antique, hand-drawn and cartoon, and may also include personalized styles obtained by further subdivision, for example styles organized by zodiac animal, constellation or holiday. In this embodiment the desired avatar style may be a style label text, such as "cartoon", or a standard style image template.
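The patent leaves the data representation of the desired style open. Purely as an illustrative sketch (all names below are invented, not part of the disclosure), the two forms described above, a label or description text versus a standard style image template, could be carried in one small structure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DesiredAvatarStyle:
    """Hypothetical container: either style text or an image template."""
    text: Optional[str] = None           # e.g. "cartoon" or a full description
    template_path: Optional[str] = None  # path to a standard style image template

    @property
    def is_textual(self) -> bool:
        return self.text is not None

# A textual style and a template-based style.
style_a = DesiredAvatarStyle(text="cartoon, pink background, kawaii elements")
style_b = DesiredAvatarStyle(template_path="templates/cartoon_01.png")
```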
S30: acquire a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer.
After acquiring the real head image of the target user, the server also needs to acquire the pre-trained target reconstruction model so that it can be invoked to process the real head image. The target reconstruction model comprises a contour reconstruction layer, which reconstructs the target user's head contour from the real head image, and a texture reconstruction layer, which reconstructs the target user's head texture from the real head image.
In this embodiment, the target reconstruction model is obtained by training a preset neural network on a plurality of head sample images. The preset neural network includes a contour reconstruction network and a texture reconstruction network; during training, the parameters of the two networks are iterated over the head sample images until the loss between the head sample images and the images reconstructed by the model meets the requirement and the model converges, whereupon the converged contour reconstruction network is output as the contour reconstruction layer and the converged texture reconstruction network as the texture reconstruction layer. The head contour and texture of a target user can therefore be reconstructed by a target reconstruction model trained in advance, improving both the quality of the reconstructed data and the efficiency of data processing.
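The patent does not disclose concrete network architectures. The following PyTorch-style skeleton is only a minimal sketch of the structure described above (a shared encoder feeding a contour head and a texture head); every layer choice and dimension is an assumption:

```python
import torch
import torch.nn as nn

class TargetReconstructionModel(nn.Module):
    """Sketch: contour reconstruction layer and texture reconstruction layer."""
    def __init__(self, contour_dim: int = 256, tex_size: int = 128):
        super().__init__()
        # Shared image encoder (architecture assumed, not specified by the patent).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # Contour reconstruction layer: regresses head-contour parameters.
        self.contour_layer = nn.Linear(64 * 16, contour_dim)
        # Texture reconstruction layer: decodes a head texture map.
        self.texture_layer = nn.Sequential(
            nn.Linear(64 * 16, 3 * tex_size * tex_size),
            nn.Unflatten(1, (3, tex_size, tex_size)), nn.Sigmoid(),
        )

    def forward(self, head_image: torch.Tensor):
        feats = self.encoder(head_image)
        return self.contour_layer(feats), self.texture_layer(feats)

# Usage: reconstruct contour parameters and a texture map from one head image.
contour, texture = TargetReconstructionModel()(torch.rand(1, 3, 256, 256))
```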
S40: input the real head image into the target reconstruction model, perform head contour reconstruction through the contour reconstruction layer to obtain a head contour model, and perform texture reconstruction through the texture reconstruction layer to obtain a head texture map.
After obtaining the target reconstruction model comprising the contour reconstruction layer and the texture reconstruction layer, the server inputs the real head image into the target reconstruction model, which carries out the following: the contour reconstruction layer reconstructs the target user's head contour to obtain the head contour model, and the texture reconstruction layer reconstructs the texture of the target user's head to obtain the head texture map.
In this embodiment, the head contour model includes not only the facial contour of the target user's whole face but also the contours of the key facial parts, for example the eyebrows, eyes, cheekbones, nose, mouth and ears. When the contour reconstruction layer reconstructs the head contour from the real head image, it reconstructs the facial contour of the whole face and at the same time detects the key facial parts, locating their positions in the image (the regions of the eyebrows, eyes, cheekbones, nose, mouth, ears and so on) and reconstructing their contours, so that a head contour model comprising the facial contour and the contours of the key facial parts is obtained.
In this embodiment, the head texture map includes texture information of the key facial parts and of the remaining facial regions. The texture information characterizes the target user's facial features; it is finer, more accurate and more natural than a representation by scattered pixels, approaching the appearance of real skin.
S50: render a user avatar for the target user based on the head contour model, the head texture map and the desired avatar style.
After determining the target user's desired avatar style, the server renders a user avatar for the target user based on the head contour model, the head texture map and the desired avatar style.
In this embodiment, a pre-trained renderer may be used to render the user avatar. For example, the head contour model, the head texture map and the desired avatar style are input directly into the pre-trained renderer for image rendering, yielding a three-dimensional or two-dimensional user avatar. Rendered in this way, the contour and texture of the avatar match the target user's real facial features while the personalized style is also respected, which improves the realism and design quality of the avatar and raises both the quality and the efficiency of avatar generation.
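As a minimal illustration of this step (the patent specifies no renderer interface; the function signature and the conditioning-by-concatenation are assumptions), the rendering call could look like:

```python
import torch

def render_avatar(renderer: torch.nn.Module,
                  contour_model: torch.Tensor,
                  texture_map: torch.Tensor,
                  style_embedding: torch.Tensor) -> torch.Tensor:
    """Step S50 sketch: condition a pre-trained renderer on the reconstructed
    contour, the texture map and an embedding of the desired avatar style."""
    cond = torch.cat([contour_model,
                      texture_map.flatten(start_dim=1),
                      style_embedding], dim=1)
    return renderer(cond)  # the rendered user avatar image
```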
In an embodiment, the target reconstruction model further includes the renderer; that is, the preset neural network includes a contour reconstruction network, a texture reconstruction network and a rendering network. When training the target reconstruction model, the parameters of all three networks are iterated over the head sample images until the loss between the head sample images and the images produced by the rendering network meets the requirement and the model converges. The converged contour and texture reconstruction networks are output as the contour reconstruction layer and texture reconstruction layer respectively, and the converged rendering network as the pre-trained renderer. Training the contour reconstruction layer, the texture reconstruction layer and the renderer on the same batch of data reduces the training cost of each model, guarantees that the three components fit together when avatar generation tasks are performed later, and thus further improves the quality of the avatars generated subsequently.
Specifically, when the target reconstruction model includes a contour reconstruction layer, a texture reconstruction layer and a renderer, the real head image and the desired avatar style are input into the target reconstruction model after it has been acquired in step S30, and the following steps are carried out: head contour reconstruction is performed through the contour reconstruction layer to obtain the head contour model, and texture reconstruction through the texture reconstruction layer to obtain the head texture map; the renderer then renders a user avatar for the target user from the head contour model, the head texture map and the desired avatar style. In this embodiment the target reconstruction model directly reconstructs contour and texture from the target user's real head image and directly renders the avatar from the reconstruction results and the desired style, which is simple and convenient: no additional models or tools need to be trained and invoked, the mutual fit of contour reconstruction layer, texture reconstruction layer and renderer is guaranteed by the training process, and the quality of the generated avatars is improved.
In other embodiments, the renderer may also be a conventional renderer (for example a differentiable renderer) or a separately trained renderer whose training samples are head contour models and head texture maps of multiple users derived from head sample images.
Since the desired avatar style may be given either as text or as an image template, two renderers need to be trained in advance for the two presentation forms: a first renderer whose style input during training is text, and a second renderer whose style input during training is an image. The renderer matching the way the target user supplied the desired style is then selected for avatar rendering, further improving the quality of avatar generation.
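A trivial sketch of that dispatch (the two-renderer split is from the patent; the function and the `is_textual` flag follow the hypothetical `DesiredAvatarStyle` sketch above):

```python
def pick_renderer(style, text_renderer, image_renderer):
    """Route to the renderer trained on the modality the user supplied:
    text styles go to the first renderer, image templates to the second."""
    return text_renderer if style.is_textual else image_renderer
```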
In this embodiment, during avatar generation a pre-trained target reconstruction model reconstructs the contour and texture of the real head image, and the avatar is then generated from the reconstructed head contour model and head texture map. This preserves the realism of the avatar and strengthens the user's sense of ownership of it; determining the user's desired style allows the avatar to be rendered in that style, improving its design quality, bringing it closer to the user's preferences, expressing the user's individuality, meeting the user's personalization needs and enhancing the avatar's practical effect. In addition, using a pre-trained target reconstruction model for contour and texture reconstruction improves data-processing efficiency while preserving extraction accuracy, thereby improving both the efficiency and the quality of avatar generation.
In an embodiment, step S50 may further determine the holiday elements desired by the target user; after the head contour model and head texture map have been obtained, the server then renders the user avatar based on the head contour model, the head texture map, the desired avatar style and/or the holiday elements. That is, rendering a user avatar for the target user based on the head contour model, the head texture map and the desired avatar style includes: determining the holiday elements desired by the target user, and rendering the user avatar based on the head contour model, the head texture map, the desired avatar style and the holiday elements.
For example, after the desired holiday elements have been determined, the head contour model, the head texture map, the desired avatar style and the holiday elements are input directly into the pre-trained renderer for image rendering; that is, the head contour model is rendered directly on the basis of the head texture map, the desired style and the holiday elements to obtain the avatar, which is simple and convenient. In other embodiments, the head contour model and head texture map may first be adjusted according to the user's expectations, and the renderer then renders the adjusted contour model on the basis of the texture map, the desired style and the holiday elements; this brings the avatar even closer to the user's expectations and improves its design quality and individuality.
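Extending the earlier rendering sketch, the optional holiday conditioning could simply be one more embedding (again purely illustrative; all names are assumptions):

```python
import torch

def render_avatar_with_holiday(renderer, contour_model, texture_map,
                               style_embedding, holiday_embedding=None):
    """Step S50 variant: holiday elements enter as an optional extra condition."""
    inputs = [contour_model, texture_map.flatten(start_dim=1), style_embedding]
    if holiday_embedding is not None:
        inputs.append(holiday_embedding)  # e.g. an embedding of festival motifs
    return renderer(torch.cat(inputs, dim=1))
```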
Holiday elements are determined in a similar way to the desired avatar style, which is not repeated here. In this embodiment, holiday elements include elements corresponding to different festivals (traditional Chinese festivals, the twenty-four solar terms, modern festivals and foreign festivals) and rest days (such as Saturday and Sunday); for example, festival elements around Qingming may include dragon boats, zongzi, kites, spring outings, tree planting, willow sprigs, tug-of-war, swings, cockfighting, willows, cuju, silkworm-flower fairs and qingtuan (green rice balls).
In this embodiment, rendering the user avatar from the head contour model, the head texture map, the desired avatar style and/or the holiday elements keeps the avatar's contour and texture matched to the target user's real facial features while also respecting the avatar style and adding holiday elements. This further improves the avatar's design quality and individuality, fits the user's actual design needs, and allows an avatar matching the current festival to be generated in time for the target user to switch to during the festival, improving the display effect of the avatar and the user experience.
In an embodiment, before the target reconstruction model is used to reconstruct the contour and texture of the real head image, that is, before the target reconstruction model is acquired and the real head image is input into it, the server trains a preset neural network on a plurality of head sample images to obtain the target reconstruction model. The preset neural network includes a contour reconstruction network, a texture reconstruction network and a generation network. The target reconstruction model is obtained by training as follows:
S01: acquire a plurality of real head images and apply multi-style processing to each head image to obtain a plurality of head sample images of different styles, each head sample image corresponding to one piece of standard style information.
First, a number of head images of users are acquired, and multi-style processing is applied to each to obtain head sample images of different styles. That is, several preset standard styles (see the avatar styles above) are prepared in advance, and each head image is stylized with each standard style; the resulting image in that style serves as a head sample image. Traversing all head images and all standard styles, as sketched below, yields a set of head sample images of different styles that is used as the training data for the model.
In this embodiment, the standard style information may be a passage of style description text, a style label text, or a standard style image template, so that target reconstruction models accepting different kinds of input can be trained as required, improving the practicality of the model.
In other embodiments, to further improve the quality of the sample data, each head image is preprocessed to enhance image quality before the multi-style processing is applied, yielding head sample images of better quality. Preprocessing includes cleaning, denoising, image enhancement and the like.
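As an illustrative sketch of S01 (the stylization operator and all names are assumptions; the patent does not state an implementation), the sample set is built by traversing every (head image, standard style) pair:

```python
from typing import Callable, List, Tuple

def build_training_samples(
    head_images: List["Image"],
    standard_styles: List[str],
    stylize: Callable[["Image", str], "Image"],
) -> List[Tuple["Image", str]]:
    """S01 sketch: render every head image in every standard style and pair
    each head sample image with its standard style information."""
    samples = []
    for image in head_images:
        for style in standard_styles:
            samples.append((stylize(image, style), style))
    return samples
```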
S02: input a head sample image into the preset neural network, perform contour reconstruction on the head sample image through the contour reconstruction network to obtain a reconstructed contour model, and perform texture reconstruction on the head sample image through the texture reconstruction network to obtain a reconstructed texture map.
In this embodiment, the preset neural network includes a contour reconstruction network, a texture reconstruction network and a generation network, the generation network being connected to the contour reconstruction network and the texture reconstruction network respectively.
After the head sample images have been obtained, a head sample image is input into the preset neural network; the contour reconstruction network performs contour reconstruction on it to obtain a reconstructed contour model, and the texture reconstruction network performs texture reconstruction on it to obtain a reconstructed texture map.
S03: input the standard style information of the head sample image, the reconstructed contour model and the reconstructed texture map into the generation network for image generation, to obtain a head reconstruction image.
After the reconstructed contour model and reconstructed texture map of the head sample image have been obtained, they are input, together with the standard style information of the head sample image, into the generation network for image generation to obtain the head reconstruction image. In this embodiment, the generation network may be a rendering network.
S04: determine the image loss value between the head sample image and the head reconstruction image.
S05: when the image loss value does not meet the convergence condition, iteratively update the parameters of the preset neural network based on the plurality of head sample images; when the image loss value meets the convergence condition, output the converged contour reconstruction network and texture reconstruction network as the target reconstruction model.
After the head reconstruction image has been obtained, the loss (difference) between the head sample image and the head reconstruction image is determined as the image loss value, and it is checked whether this value meets the convergence condition (for example, whether it is smaller than a preset value). If it does not, the parameters of the preset neural network are updated iteratively over the head sample images; that is, steps S02-S04 are repeated until the image loss value meets the convergence condition. When it does, the model is judged to have converged, and the converged contour reconstruction network and texture reconstruction network are output as the target reconstruction model, the former becoming the contour reconstruction layer and the latter the texture reconstruction layer.
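A compact sketch of the S02-S05 loop (PyTorch-style; the attribute names on `net`, the Adam optimizer, the L1 reconstruction loss and the convergence threshold are all assumptions the patent leaves open):

```python
import torch
import torch.nn.functional as F

def train_target_reconstruction_model(net, samples, epochs=100, eps=1e-3):
    """`net` is assumed to bundle contour_net, texture_net and generation_net."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    for _ in range(epochs):
        for sample_image, style_info in samples:
            contour = net.contour_net(sample_image)                   # S02
            texture = net.texture_net(sample_image)                   # S02
            recon = net.generation_net(style_info, contour, texture)  # S03
            loss = F.l1_loss(recon, sample_image)                     # S04
            opt.zero_grad()
            loss.backward()
            opt.step()                                                # S05
        if loss.item() < eps:  # convergence condition met
            break
    # Output the converged contour and texture networks as the target model.
    return net.contour_net, net.texture_net
```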
In this embodiment, the preset neural network includes a contour reconstruction network, a texture reconstruction network and a generation network, the generation network being connected to the contour and texture reconstruction networks. A plurality of real head images are processed into multiple styles to obtain head sample images of different styles; each head sample image is input into the preset neural network, where the contour reconstruction network reconstructs its contour and the texture reconstruction network reconstructs its texture; the standard style information, reconstructed contour model and reconstructed texture map are then input into the generation network to produce a head reconstruction image, from which the image loss value between the head sample image and the head reconstruction image is determined. When the loss value does not meet the convergence condition, the network parameters are updated iteratively over the head sample images, and once it does, the converged contour and texture reconstruction networks are output as the target reconstruction model. This defines the training process of the target reconstruction model: training on multi-style head sample images takes the user's head contour, texture information and a variety of style information into account, and yields a contour reconstruction layer and texture reconstruction layer of high accuracy that can later be invoked directly for contour and texture reconstruction, so that personalized user avatars can be generated with markedly better efficiency and quality.
In other embodiments, the preset neural network further includes a hair segmentation network and an illumination regression network, the hair segmentation network being connected to the contour reconstruction network and the texture reconstruction network respectively, and the illumination regression network being connected to the generation network. Correspondingly, the target reconstruction model further includes an image segmentation layer and an illumination regression layer: when the image loss value meets the convergence condition, the converged hair segmentation network is output as the image segmentation layer and the converged illumination regression network as the illumination regression layer.
The hair segmentation network segments a head sample image into a hair-region image and a face-region image, which are input into the contour reconstruction network for contour modeling of the hair and face regions, yielding a reconstructed contour model comprising hair-region and face-region contours, and into the texture reconstruction network for texture modeling of the hair and face regions, yielding a reconstructed texture map comprising hair-region and face-region textures. When the real head image of a target user is processed later, the image segmentation layer correspondingly segments the real head image into a hair image and a face image, making the subsequent contour and texture reconstruction by the contour reconstruction layer and texture reconstruction layer easier. Distinguishing face and hair regions before contour and texture modeling allows the boundary region between hair and face to be resolved finely, further improving the accuracy of the head contour model and the head texture map.
The illumination regression network determines the illumination information of a head sample image, so that the generation network can generate the head reconstruction image from the standard style information, the illumination information, the reconstructed contour model and the reconstructed texture map; taking the influence of illumination on the imaging effect into account brings the head reconstruction image closer to the real head image. Correspondingly, once the head contour model and head texture map of a target user have been obtained, the user avatar is rendered from the desired avatar style, the illumination information, the head contour model and the head texture map (and any holiday elements); besides the desired style, the holiday elements and the target user's real appearance, the influence of illumination on the imaging effect is also considered, further improving the realism and visual effect of the avatar.
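Purely as a sketch of how the extended network wires together (the patent states only the connections; every module name below is assumed), the training-time forward pass could look like:

```python
import torch.nn as nn

class ExtendedPresetNetwork(nn.Module):
    """Sketch: preset network with hair segmentation and illumination regression."""
    def __init__(self, seg_net, contour_net, texture_net, illum_net, gen_net):
        super().__init__()
        self.seg_net, self.contour_net, self.texture_net = seg_net, contour_net, texture_net
        self.illum_net, self.gen_net = illum_net, gen_net

    def forward(self, head_image, style_info):
        hair, face = self.seg_net(head_image)      # hair / face region images
        contour = self.contour_net(hair, face)     # hair + face region contours
        texture = self.texture_net(hair, face)     # hair + face region textures
        illum = self.illum_net(head_image)         # illumination information
        return self.gen_net(style_info, illum, contour, texture)
```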
In an embodiment, the target reconstruction model further comprises an image segmentation layer. As shown in FIG. 3, in step S40 the real head image is input into the target reconstruction model, head contour reconstruction is performed through the contour reconstruction layer to obtain the head contour model, and texture reconstruction through the texture reconstruction layer to obtain the head texture map; specifically this includes the following steps:
S41: input the real head image into the image segmentation layer for image segmentation, to obtain a hair image and a face image of the real head image.
In this embodiment, the target reconstruction model includes an image segmentation layer, a contour reconstruction layer and a texture reconstruction layer, the image segmentation layer being connected to the contour reconstruction layer and the texture reconstruction layer respectively. After the real head image of the target user has been obtained, it is input into the image segmentation layer of the target reconstruction model for image segmentation, yielding a hair image (hair-region image) and a face image (face-region image, including the ears).
To ensure the quality of the subsequent data processing, the real head image is preprocessed (including cleaning, denoising and image enhancement) before being input into the image segmentation layer of the target reconstruction model, and the preprocessed real head image is then segmented.
S42: input the hair image and the face image into the contour reconstruction layer, to obtain a head contour model comprising a hair contour and a face contour.
After the hair image and face image of the real head image have been obtained, they are input into the contour reconstruction layer, which reconstructs the target user's hair-region contour and face-region contour, yielding a head contour model comprising the target user's hair contour and face contour. The head contour model includes the contours of the key facial parts (the eyebrows, eyes, cheekbones, nose, mouth, ears and so on).
Thus, when a target user later wishes to change hair (hairstyle and color) and/or facial regions (face shape and the contours of the key facial features), the avatar can be rendered from the updated head contour model, further improving the avatar's design quality.
S43: input the hair image and the face image into the texture reconstruction layer, to obtain a head texture map comprising hair texture and face texture.
After the hair image and face image of the real head image have been obtained, they are also input into the texture reconstruction layer, which reconstructs the target user's hair texture and face texture respectively, yielding a head texture map comprising both. In this embodiment, the head texture map includes the textures of the key facial parts.
In this embodiment, the real head image is input into the image segmentation layer for image segmentation to obtain a hair image and a face image, the two images are input into the contour reconstruction layer to obtain a head contour model comprising hair and face contours, and into the texture reconstruction layer to obtain a head texture map comprising hair and face textures; this specifies how the contour reconstruction layer produces the head contour model and the texture reconstruction layer produces the head texture map. When the head contour is reconstructed, the hair contour and face contour are distinguished and their boundary region is reconstructed finely, improving the accuracy of the head contour model; when the head texture is reconstructed, the texture information of the hair and face regions is distinguished and the texture of their boundary region is reconstructed finely, improving the accuracy of the head texture map.
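A short sketch of the S41-S43 inference path (module attribute names assumed, consistent with the sketches above):

```python
def reconstruct_head(model, real_head_image):
    """S41-S43 sketch: segment first, then reconstruct contour and texture."""
    hair_img, face_img = model.segmentation_layer(real_head_image)  # S41
    head_contour = model.contour_layer(hair_img, face_img)          # S42
    head_texture = model.texture_layer(hair_img, face_img)          # S43
    return head_contour, head_texture
```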
In an embodiment, the head contour model may include a hair contour model and a face contour model, and the head texture map a hair texture map and a face texture map, so that when a target user later wishes to change hairstyle and/or face shape, the head contour model and head texture map can be updated according to the desired hairstyle and/or face shape, and the avatar rendered from the updated texture map and contour model, further improving the avatar's design quality. The hairstyle and/or face shape desired by the target user (that is, the target virtual character model) can be determined by choosing from pre-provided virtual character models, which may be hair models, face models, or combinations of the two.
In one embodiment, as shown in fig. 4, in step S50, a user avatar of the target user is rendered based on the head contour model, the head texture map and the desired avatar style, which specifically includes the following steps:
S51: It is determined whether the target user enables the avatar function.
After obtaining the head contour model and the head texture map of the target user, the server needs to determine whether the target user enables the avatar function.
In this embodiment, there may be various ways of determining whether the avatar function is enabled. For example, after the user terminal obtains the head real image of the target user, it sends the target user a prompt (such as a popup prompt) asking whether to enable the avatar function. If the target user confirms enabling the avatar function, the user terminal sends corresponding feedback to the server, and the server determines that the target user enables the avatar function upon receiving this feedback. If the server receives feedback that the target user declines the avatar function, or receives no feedback within a preset time after the prompt, it determines that the target user does not enable the avatar function. This method is simple and intuitive.
In other embodiments, when the server displays the avatar setting interface to the target user through the user terminal, the interface may further provide an avatar function option. When the user clicks the avatar function option on the avatar setting interface, the user terminal sends an avatar function enabling instruction to the server, and the server determines that the target user enables the avatar function after receiving this instruction; conversely, if the server does not receive the avatar function enabling instruction from the user terminal (or has not received it by the time the head contour model and the head texture map are obtained), it determines that the target user does not enable the avatar function. Providing the avatar function option on the avatar setting interface allows the target user to enable the avatar function at any time, avoids the situation where a time-limited prompt is missed and the function cannot be enabled in time, and thus improves the user experience.
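A minimal server-side sketch of this decision logic, using only the Python standard library, is given below. The message strings, the queue-based transport and the concrete timeout value are assumptions for illustration; the patent does not prescribe how the feedback reaches the server.

```python
import queue
import time

PROMPT_TIMEOUT_SECONDS = 60  # assumed value for the "preset time"

def avatar_function_enabled(feedback: "queue.Queue[str]") -> bool:
    """Enable on explicit confirmation; treat refusal or silence as disabled."""
    deadline = time.monotonic() + PROMPT_TIMEOUT_SECONDS
    while time.monotonic() < deadline:
        try:
            message = feedback.get(timeout=deadline - time.monotonic())
        except queue.Empty:
            break  # no feedback arrived within the preset time
        if message == "enable":
            return True
        if message == "decline":
            return False
    return False  # refusal or timeout: avatar function not enabled
```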
S52: and if the target user does not enable the avatar function, determining a holiday element expected by the target user.
After determining whether the target user enables the avatar function, if the target user does not enable it, this means that the target user does not wish to change the appearance of the user head portrait (such as skin color, face shape or facial contours, hairstyle or hair color) and the head portrait should display the real image of the target user; the server then needs to determine the holiday element expected by the target user.
S53: and rendering the head outline model based on the head texture map, the expected head portrait style and the holiday elements to obtain a head portrait of the user.
After determining the holiday elements expected by the target user, a renderer is used to render the head contour model based on the head texture map, the expected head portrait style and the holiday elements, obtaining the user head portrait. For example, the head contour model, the head texture map, the expected head portrait style and the holiday elements (and, where available, illumination information) are directly input into the renderer for image rendering to obtain the user head portrait. This method is simple and intuitive: on the basis of ensuring that the contours and textures of the user head portrait match the real facial features of the target user, it takes the head portrait style into account and adds holiday elements, further improving the designability and personalization of the user head portrait. A user head portrait matched with the current holiday can thus be generated in time, so that the target user can replace the head portrait promptly, improving its display effect and the user experience.
In other embodiments, if it is determined that the target user does not enable the avatar function, the head contour model may be rendered to obtain the user head portrait based only on the head texture map and the expected head portrait style, without considering holiday elements; or, when the target user does not enable the avatar function and the current time does not fall within any holiday, the head contour model is rendered based only on the head texture map and the expected head portrait style. On the basis of ensuring that the contours and textures of the user head portrait match the real facial features of the target user, this retains a degree of style design while reducing the amount of data processing.
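The renderer inputs described in these two embodiments can be gathered into one structure before rendering. The field names and the `build_render_request` helper below are hypothetical; the sketch only shows how holiday elements are attached when the current time falls within a holiday and omitted otherwise.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class RenderRequest:
    """Bundle of renderer inputs; field names are illustrative only."""
    contour_model: Any                 # head contour model
    texture_map: Any                   # head texture map
    avatar_style: str                  # expected head portrait style
    holiday_elements: list = field(default_factory=list)
    lighting: Any = None               # optional illumination information

def build_render_request(contour_model: Any, texture_map: Any, style: str,
                         current_holiday: Optional[str] = None) -> RenderRequest:
    # Holiday elements are included only when the current time is in a holiday;
    # otherwise the head portrait is rendered from contour, texture and style.
    elements = [current_holiday] if current_holiday else []
    return RenderRequest(contour_model, texture_map, style, elements)
```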
In this embodiment, it is determined whether the target user enables the avatar function; if not, the holiday element expected by the target user is determined, and the head contour model is rendered based on the head texture map, the expected head portrait style and the holiday element to obtain the user head portrait. This clarifies the specific steps of rendering the user head portrait of the target user based on the head contour model, the head texture map and the expected head portrait style. When the target user has no need to adjust the avatar, the user head portrait is directly rendered based on the head contour model, the head texture map, the expected head portrait style, the holiday elements and other information. The method is simple and intuitive: it ensures that the user head portrait matches the real image of the target user while also taking the head portrait style, holiday elements and other information into account, further improving the designability and personalization of the user head portrait and enhancing its display effect.
In one embodiment, as shown in fig. 5, in step S50, a user avatar of the target user is rendered based on the head contour model, the head texture map and the desired avatar style, and the method specifically includes the following steps:
S51: It is determined whether the target user enables the avatar function.
After obtaining the head contour model and the head texture map of the target user, the server needs to determine whether the target user enables the avatar function.
S54: and if the target user starts the avatar function, displaying a plurality of avatar models with different designs to the target user through the user terminal, and determining the target avatar model according to feedback of the target user.
If the target user enables the avatar function, the target user wishes to change the user head portrait (such as skin color, face shape or facial contours, hairstyle or hair color) so as to improve its designability and display effect. A plurality of avatar models with different designs are therefore displayed to the target user through the user terminal, and the target avatar model is determined according to the feedback of the target user.
For example, after determining that the target user enables the avatar function, the server presents to the target user, through the user terminal, an avatar page showing a plurality of avatar models with different designs, and the user can select (e.g., click on) an appropriate avatar model as required; an avatar model may be a hair model, a face model, or a combination of the two. After the user selects an appropriate avatar model, the server determines the avatar model selected by the target user as the target avatar model.
S55: updating the head outline model based on the target virtual image model to obtain a target outline model, and updating the head texture map based on the target virtual image model to obtain a target texture map.
After determining the target avatar model according to the feedback of the target user, the server updates the head contour model based on the target avatar model to obtain a target contour model, and updates the head texture map based on the target avatar model to obtain a target texture map.
For example, after the target avatar model is determined according to the feedback of the target user, difference data between the target avatar model and the head contour model is determined; the head contour model is then adjusted according to the difference data to obtain a target contour model whose contour matches the target avatar model, and the head texture map is transformed according to the same difference data to obtain the target texture map. The difference data may be determined as follows: determine the correspondence between each unit area in the target avatar model and the head contour model, and from the determined correspondence obtain the transformation matrix between the target avatar model and the head contour model as the difference data.
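As one concrete reading of the transformation matrix described above, the difference for each unit area could be estimated as a least-squares affine fit between corresponding contour points. The sketch below makes that assumption; the patent does not fix the matrix form or the fitting method.

```python
import numpy as np

def region_difference(avatar_pts: np.ndarray, head_pts: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine matrix mapping head-contour points of one unit area
    onto the corresponding points of the target avatar model."""
    n = head_pts.shape[0]
    # Homogeneous coordinates [x, y, 1] for each corresponding point.
    A = np.hstack([head_pts, np.ones((n, 1))])
    # Least-squares solve of A @ M.T ~= avatar_pts for the affine matrix M.
    M, *_ = np.linalg.lstsq(A, avatar_pts, rcond=None)
    return M.T

# Sanity check with a known difference: a pure translation of (+2, -1)
# should be recovered in the last column of the matrix.
head = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
avatar = head + np.array([2.0, -1.0])
print(region_difference(avatar, head))  # [[1. 0. 2.], [0. 1. -1.]]
```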
S56: and rendering the target contour model based on the target texture map and the expected head portrait style to obtain a head portrait of the user.
After the target contour model and the target texture map are obtained, the server adopts a renderer to render the target contour model based on the target texture map and the expected head portrait style to obtain the head portrait of the user.
In this embodiment, it is determined whether the target user enables the avatar function; if so, a plurality of avatar models with different designs are displayed to the target user through the user terminal and the target avatar model is determined according to the feedback of the target user; the head contour model is then updated based on the target avatar model to obtain the target contour model, and the head texture map is updated based on the target avatar model to obtain the target texture map; finally, the target contour model is rendered based on the target texture map and the expected head portrait style to obtain the user head portrait. This clarifies the specific steps of rendering the user head portrait of the target user based on the head contour model, the head texture map and the expected head portrait style. When the target user enables the avatar function, that is, when the target user needs to change the hair, the face area and the like, the head contour model and the head texture map are updated according to the desired target avatar model before the user head portrait is generated, further improving the designability and personalization of the user head portrait and its display effect.
In addition, after the head contour model and the head texture map are updated, the target contour model can also be rendered based on the target texture map, the holiday elements and the expected head portrait style to obtain the user head portrait. Adding holiday elements further improves the designability and personalization of the user head portrait, and a user head portrait matched with the current holiday can be generated in time, so that the target user can replace the head portrait promptly and its display effect is improved.
In one embodiment, in step S55, the head contour model is updated based on the target avatar model to obtain a target contour model, and the head texture map is updated based on the target avatar model to obtain a target texture map, which further specifically includes the following steps:
S551: It is determined whether the target avatar model includes a hair model and a face model.
In this embodiment, the target avatar model may be a hair model, a face model, or a combination of the two. After the target avatar model is determined according to the feedback of the target user, it is necessary to determine whether the target avatar model includes both a hair model and a face model.
S552: if the target avatar model includes a hair model and a face model, contour difference data and color difference data between the target avatar model and the head contour model are determined.
If the target avatar model includes a hair model and a face model, this indicates that the target user desires to transform both the hair region and the face region (including the contour of the whole face and the contours of the key facial parts); in this case, contour difference data and color difference data between the target avatar model and the head contour model need to be determined.
S553: and adjusting the shape contour of the head contour model based on the contour difference data to obtain a target contour model.
After the contour difference data and the color difference data are obtained, the shape contour of the head contour model is adjusted based on the contour difference data to obtain the target contour model. Specifically, the contour difference data includes hair contour (hairstyle) difference data and face area contour difference data; the shape contour of the hair region in the head contour model is adjusted according to the hair contour difference data, and the shape contours of the whole face and the key facial parts are adjusted according to the face area contour difference data, thereby obtaining the target contour model. Subdividing the contour difference data into hair and face areas ensures its accuracy; the hair and face areas of the head contour model are then adjusted correspondingly, which guarantees the fineness and accuracy of the update, so that the shape contour of the target contour model matches the target avatar model.
In other embodiments, the contours of the hair region and the face region are not distinguished, and the head contour model is directly adjusted and filled according to the difference of each unit region in the contour difference data to obtain the target contour model, which is simple and intuitive.
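A minimal sketch of the region-wise contour adjustment follows. It assumes contours are stored as per-region point arrays and the contour difference data as per-point offsets; both representations are assumptions, since the patent leaves the encoding open.

```python
import numpy as np

def adjust_contour(head_contour: dict, contour_diff: dict) -> dict:
    """Apply hair/face contour difference data region by region (S553)."""
    target = {}
    for region, points in head_contour.items():  # e.g. "hair", "face", "mouth"
        offsets = contour_diff.get(region)
        # Regions without difference data keep their original contour.
        target[region] = points + offsets if offsets is not None else points.copy()
    return target
```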
S554: and adjusting the shape and the color of the head texture map based on the color difference data to obtain a target texture map.
After the contour difference data and the color difference data are obtained, the shape and the color of the head texture map also need to be adjusted based on the color difference data to obtain the target texture map.
Specifically, the color difference data includes hair color difference data and face area color difference data; the shape and color of the hair region in the head texture map are adjusted according to the hair color difference data, and the shape and color of the face region are adjusted according to the face area color difference data, thereby obtaining the target texture map. That is, the shape contour of the hair region in the head texture map is adjusted and filled according to the hair color difference data, and the shape contour of the face region is adjusted and filled according to the face area color difference data, yielding an updated head texture map; the color of the hair region is then adjusted according to the hair color difference data, and the color of each unit region (such as pixels) of the face region is adjusted according to the face area color difference data, yielding the target texture map. Subdividing the color difference data into hair and face areas ensures the accuracy of the color difference data for each area of the model; the hair and face areas of the head texture map are then adjusted correspondingly, which guarantees the fineness and accuracy of the update, so that the shape contour and color of the target texture map match the target avatar model. This ensures the fit between the target texture map and the target contour model, and improves the efficiency of subsequently combining them.
In other embodiments, the shape and color of the head texture map may be directly adjusted according to the difference of each unit region in the color difference data, without distinguishing the hair area and the face area, to obtain the target texture map, which is simple and intuitive.
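A corresponding sketch of the color adjustment is shown below, assuming the color difference data reduces to a per-channel RGB offset applied inside a region mask. This encoding is an assumption; a real system might instead blend toward a target palette or remap colors in a perceptual color space.

```python
import numpy as np

def adjust_texture(texture: np.ndarray, region_mask: np.ndarray,
                   color_delta: np.ndarray) -> np.ndarray:
    """Shift the color of one region (hair or face) of the head texture map."""
    out = texture.astype(np.float32)
    out[region_mask] += color_delta  # apply the offset only inside the region
    return np.clip(out, 0, 255).astype(np.uint8)
```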
In this embodiment, it is determined whether the target avatar model includes a hair model and a face model; if it includes both, contour difference data and color difference data between the target avatar model and the head contour model are determined, the shape contour of the head contour model is adjusted based on the contour difference data to obtain the target contour model, and the shape and color of the head texture map are adjusted based on the color difference data to obtain the target texture map. This clarifies the specific steps of updating the head contour model based on the target avatar model to obtain the target contour model, and updating the head texture map based on the target avatar model to obtain the target texture map. When the target avatar model includes both a hair model and a face model, that is, when it is determined that the target user needs to change both the hair and the face area, the head contour model and the head texture map are updated separately according to the shape contour and color differences between the target avatar model and the head contour model. This ensures the accuracy of the updates, yields a target contour model and a target texture map that match the target avatar model in shape contour and color, and provides an accurate data basis for the subsequent steps.
In an embodiment, after step S551, i.e., after determining whether the target avatar model includes a hair model and a face model, the method further specifically includes the following steps:
S555: If the target avatar model only comprises a hair model or a face model, correspondingly adjusting the hair contour model or the face contour model in the head contour model based on the hair model or the face model to obtain the target contour model.
In this embodiment, the head contour model includes a hair contour model and a face contour model, and the head texture map includes a hair texture map and a face texture map.
After step S551, i.e., after determining whether the target avatar model includes a hair model and a face model, if the target avatar model only includes a hair model or a face model, it means that the target user only desires to transform the hair (hairstyle, hair color) or only the face area (skin color, whole face shape, contours of the key facial parts). In this case, only the hair contour model or the face contour model in the head contour model needs to be correspondingly adjusted based on the hair model or the face model, obtaining the target contour model.
That is, if the target avatar model only includes a hair model, the contour of the hair contour model in the head contour model is adjusted according to the hair model to obtain the target contour model; if the target avatar model only includes a face model, the contours of each area of the face contour model in the head contour model (face shape and key facial parts) are adjusted according to the face model to obtain the target contour model.
S556: and correspondingly adjusting a hair texture map or a face texture map in the head texture map based on the hair model or the face model to obtain a target texture map.
Meanwhile, if the target avatar model only comprises a hair model or a face model, correspondingly adjusting a hair texture map or a face texture map in the head texture map based on the hair model or the face model to obtain a target texture map. That is, if the target avatar model includes only the hair model, adjusting the shape and color of the hair texture map in the head texture map according to the hair model to obtain the target texture map; and if the target avatar model only comprises a face model, adjusting the face shape, the facial contours and the skin colors of the face texture map in the head texture map according to the face model to obtain the target texture map.
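The branching of steps S551 to S556 can be summarized in a short dispatch sketch. `full_update` and `partial_update` are hypothetical helpers standing in for the difference-based updates described above, and the dictionary keys are illustrative.

```python
def update_models(head_contour, head_texture, avatar_model: dict):
    """Update only the parts that the target avatar model actually contains."""
    has_hair = "hair" in avatar_model
    has_face = "face" in avatar_model

    if has_hair and has_face:
        # Both models present: full update from contour and color differences.
        return full_update(head_contour, head_texture, avatar_model)
    if has_hair:
        # Hair-only replacement: the face sub-models stay untouched.
        return partial_update(head_contour, head_texture,
                              avatar_model["hair"], part="hair")
    if has_face:
        return partial_update(head_contour, head_texture,
                              avatar_model["face"], part="face")
    return head_contour, head_texture  # nothing to change
```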
In this embodiment, it is determined whether the target avatar model includes a hair model and a face model; if it includes only a hair model or only a face model, the hair contour model or the face contour model in the head contour model is correspondingly adjusted based on that model to obtain the target contour model, and the hair texture map or the face texture map in the head texture map is correspondingly adjusted to obtain the target texture map. This clarifies the specific implementation steps of updating the head contour model based on the target avatar model to obtain the target contour model, and updating the head texture map based on the target avatar model to obtain the target texture map. When it is determined that the target user only has a replacement requirement for part of the head, only the corresponding areas of the head contour model and the head texture map are updated according to the target avatar model of the desired replacement area. Compared with updating the whole head contour model and head texture map, the update range is smaller and the updated area more precise, on the basis of guaranteeing the accuracy of the target contour model and the target texture map; this reduces the data processing amount and thereby improves data processing efficiency.
In one embodiment, the determining the target avatar model according to the feedback of the target user in step S54 specifically includes the following steps:
S541: After a plurality of avatar models of different designs are presented to the target user, it is determined whether a model selection instruction of the target user is received.
After determining whether the target user enables the avatar function, if the target user enables it, a plurality of avatar models with different designs are presented to the target user through the user terminal so that the target user can select an appropriate avatar model as required. After the avatar models are displayed, it is determined whether a model selection instruction of the target user is received, so that the target avatar model can be determined according to the instruction.
S542: and if a model selection instruction is received, taking the avatar model selected by the target user as a target avatar model.
After determining whether a model selection instruction of the target user is received, if such an instruction is received, the avatar model selected by the target user is taken as the target avatar model, so that the target avatar model best matches the current requirement of the target user.
S543: if the model selection instruction of the target user is not received, determining an avatar model which is most matched with the target user according to the user portrait of the target user, and taking the avatar model as a target avatar model.
After determining whether a model selection instruction of the target user is received, if no such instruction is received, it means the target user is not satisfied with the displayed avatar models. In order to ensure that a user head portrait the user expects or likes can still be generated normally later, the user portrait of the target user needs to be acquired, and the avatar model that best matches the target user is then determined according to the user portrait and taken as the target avatar model.
Wherein the user profile includes gender, age, occupation, and preference information (possibly including historical avatar preference information) of the target user. If a model selection instruction of the target user is not received, determining an avatar model which is most matched with the target user according to gender, age, occupation, preference information and the like of the target user, and taking the avatar model as the target avatar model.
For example, suppose the user portrait of the target user is: gender female, age 17, occupation student, with preference information including a liking for pink, cartoons, long hair and big eyes, and a history of using cartoon head portraits. According to the gender, age, occupation and preference information of the target user, the avatar model that best matches the target user is determined to be avatar model A, which satisfies the requirements of the user portrait; avatar model A is then the target avatar model.
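A naive version of this matching could score each candidate avatar model by the overlap between its tags and the user portrait, as sketched below. The tag vocabulary and the scoring rule are assumptions; the patent only requires selecting the most matched model, not a particular matching algorithm.

```python
def match_score(user_profile: dict, avatar_tags: set) -> int:
    """Count how many of the user's portrait attributes an avatar model covers."""
    tags = set(user_profile.get("preferences", []))
    tags.add(user_profile.get("gender", ""))
    return len(tags & avatar_tags)

candidates = {
    "A": {"female", "pink", "cartoon", "long-hair", "big-eyes"},
    "B": {"male", "short-hair", "realistic"},
}
profile = {"gender": "female",
           "preferences": ["pink", "cartoon", "long-hair", "big-eyes"]}
best = max(candidates, key=lambda name: match_score(profile, candidates[name]))
print(best)  # -> "A", mirroring the example above
```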
In this embodiment, the user portrait of the target user and the best-matching avatar model are only exemplary; in other embodiments, the user portrait may be different and the best-matching avatar model may be another avatar model, which will not be described here again.
In this embodiment, after a plurality of avatar models with different designs are displayed to the target user, it is determined whether a model selection instruction of the target user is received; if the instruction is received, the avatar model selected by the target user is taken as the target avatar model, and if not, the avatar model that best matches the target user is determined according to the user portrait of the target user and taken as the target avatar model. This process clarifies the specific implementation of determining the target avatar model according to the feedback of the target user. When the target user does not select an avatar model, the best-matching avatar model is determined from the user portrait, which ensures that the user head portrait can still be generated normally and meets the requirements of realism and design sense. On this basis, the target avatar model matches the preferences of the target user as closely as possible, so that the user head portrait subsequently generated from it also matches those preferences, further strengthening the target user's sense of ownership of the user head portrait.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In an embodiment, a user head portrait generating device is provided, corresponding one-to-one to the user head portrait generating method in the above embodiments. As shown in fig. 6, the user head portrait generating device includes a collection module 601, a determination module 602, an acquisition module 603, a reconstruction module 604 and a rendering module 605. The functional modules are described in detail as follows:
the collection module 601 is configured to collect a head real image of a target user after receiving a head portrait generation instruction of the target user;
a determining module 602, configured to determine a desired avatar style of the target user;
the acquisition module 603 is configured to acquire a target reconstruction model including a contour reconstruction layer and a texture reconstruction layer, where the target reconstruction model is a model obtained by training a preset neural network based on a plurality of head sample images;
the reconstruction module 604 is configured to input the head real image into a target reconstruction model, reconstruct the head contour through the contour reconstruction layer to obtain a head contour model, and reconstruct the texture through the texture reconstruction layer to obtain a head texture map;
The rendering module 605 is configured to render a user avatar of the target user based on the head contour model, the head texture map and the desired avatar style.
Optionally, the target reconstruction model further includes an image segmentation layer, and the reconstruction module 604 is specifically configured to:
inputting the head real image into an image segmentation layer for image segmentation to obtain a hair image and a face image of the head real image;
inputting the hair image and the face image into a contour reconstruction layer to obtain a head contour model comprising hair contours and face contours;
and inputting the hair image and the face image into a texture reconstruction layer to obtain a head texture map comprising hair textures and face textures.
Optionally, the rendering module 605 is specifically configured to:
determining whether the target user enables the avatar function;
if the target user enables the avatar function, displaying a plurality of avatar models with different designs to the target user through the user terminal, and determining the target avatar model according to feedback of the target user;
updating the head contour model based on the target avatar model to obtain a target contour model, and updating the head texture map based on the target avatar model to obtain a target texture map;
And rendering the target contour model based on the target texture map and the expected head portrait style to obtain a head portrait of the user.
Optionally, the rendering module 605 is specifically further configured to:
determining whether the target avatar model includes a hair model and a face model;
if the target avatar model includes a hair model and a face model, determining contour difference data and color difference data between the target avatar model and the head contour model;
adjusting the shape contour of the head contour model based on the contour difference data to obtain a target contour model;
and adjusting the shape and the color of the head texture map based on the color difference data to obtain a target texture map.
Optionally, after determining whether the target avatar model includes a hair model and a face model, the rendering module 605 is specifically further configured to:
if the target avatar model only comprises a hair model or a face model, correspondingly adjusting the hair contour model or the face contour model in the head contour model based on the hair model or the face model to obtain a target contour model;
and correspondingly adjusting a hair texture map or a face texture map in the head texture map based on the hair model or the face model to obtain a target texture map.
Optionally, the rendering module 605 is specifically further configured to:
after a plurality of avatar models with different designs are displayed to the target user, determining whether a model selection instruction of the target user is received;
if a model selection instruction is received, taking the avatar model selected by the target user as the target avatar model;
if no model selection instruction of the target user is received, determining the avatar model that best matches the target user according to the user portrait of the target user, and taking it as the target avatar model.
Optionally, after determining whether the avatar function is enabled by the target user, the rendering module 605 is specifically further configured to:
if the target user does not enable the avatar function, determining holiday elements expected by the target user;
and rendering the head contour model based on the head texture map, the expected head portrait style and the holiday elements to obtain a user head portrait.
Optionally, the user head portrait generating device further includes a training module 606. The preset neural network includes a contour reconstruction network, a texture reconstruction network and a generating network, where the generating network is connected to the contour reconstruction network and the texture reconstruction network, and the training module 606 is specifically configured to train the target reconstruction model in the following way:
Acquiring a plurality of real head images, performing multi-stylization processing on each head image to obtain a plurality of head sample images with different styles, wherein each head sample image corresponds to one piece of standard style information;
inputting the head sample image into a preset neural network, carrying out contour reconstruction on the head sample image through a contour reconstruction network to obtain a reconstructed contour model, and carrying out texture reconstruction on the head sample image through a texture reconstruction network to obtain a reconstructed texture map;
inputting standard style information, a reconstruction contour model and a reconstruction texture map of the head sample image into a generation network for image generation to obtain a head reconstruction image;
determining an image loss value of the head sample image and the head reconstruction image;
and when the image loss value does not meet the convergence condition, iteratively updating the parameters of the preset neural network based on the plurality of head sample images until the image loss value meets the convergence condition, and then outputting the converged contour reconstruction network and texture reconstruction network as the target reconstruction model.
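A training-loop sketch of this procedure is given below. The L1 image loss, the Adam optimizer and the fixed loss threshold are assumptions made for illustration; the patent specifies only an image loss value and iterative parameter updates until the convergence condition is met.

```python
import torch

def train_target_reconstruction_model(contour_net, texture_net, gen_net,
                                      dataloader, max_epochs: int = 100,
                                      loss_threshold: float = 1e-3):
    """Train the preset neural network; return the converged sub-networks."""
    params = (list(contour_net.parameters()) + list(texture_net.parameters())
              + list(gen_net.parameters()))
    optimizer = torch.optim.Adam(params, lr=1e-4)
    criterion = torch.nn.L1Loss()  # assumed form of the image loss

    for _ in range(max_epochs):
        epoch_loss = 0.0
        for sample_img, style_info in dataloader:
            contour = contour_net(sample_img)              # reconstructed contour model
            texture = texture_net(sample_img)              # reconstructed texture map
            recon = gen_net(style_info, contour, texture)  # head reconstruction image
            loss = criterion(recon, sample_img)            # image loss value
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(dataloader) < loss_threshold:  # convergence condition
            break
    # Only the contour and texture networks form the target reconstruction model.
    return contour_net, texture_net
```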
For the specific definition of the user head portrait generating device, reference may be made to the definition of the user head portrait generating method above, which will not be repeated here. The modules in the above user head portrait generating device may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the computer device is used for storing data used and generated by the user head portrait generating method, such as the target reconstruction model, the head real image and the user head portrait. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a user head portrait generating method.
In one embodiment, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of when executing the computer program:
After receiving a head portrait generating instruction of a target user, collecting a head real image of the target user;
determining a desired avatar style of a target user;
acquiring a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer, wherein the target reconstruction model is a model obtained by training a preset neural network based on a plurality of head sample images;
inputting the head real image into a target reconstruction model, carrying out head contour reconstruction through a contour reconstruction layer to obtain a head contour model, and carrying out texture reconstruction through a texture reconstruction layer to obtain a head texture map;
and rendering to obtain a user head portrait of the target user based on the head contour model, the head texture map and the expected head portrait style.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
after receiving a head portrait generating instruction of a target user, collecting a head real image of the target user;
determining a desired avatar style of a target user;
acquiring a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer, wherein the target reconstruction model is a model obtained by training a preset neural network based on a plurality of head sample images;
Inputting the head real image into a target reconstruction model, carrying out head contour reconstruction through a contour reconstruction layer to obtain a head contour model, and carrying out texture reconstruction through a texture reconstruction layer to obtain a head texture map;
and rendering to obtain a user head portrait of the target user based on the head contour model, the head texture map and the expected head portrait style.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may include the flows of the method embodiments described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be distributed among different functional units and modules as needed, that is, the internal structure of the device may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A method for generating a user avatar, comprising:
after receiving an avatar generation instruction of a target user, acquiring a head real image of the target user;
Determining a desired avatar style of the target user;
obtaining a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer, wherein the target reconstruction model is a model obtained by training a preset neural network based on a plurality of head sample images; the target reconstruction model is obtained by training in the following mode: acquiring a plurality of real head images, performing multi-stylization processing on each head image to obtain a plurality of head sample images with different styles, wherein each head sample image corresponds to one piece of standard style information; inputting the head sample image into the preset neural network, carrying out contour reconstruction on the head sample image through the contour reconstruction network to obtain a reconstructed contour model, and carrying out texture reconstruction on the head sample image through the texture reconstruction network to obtain a reconstructed texture map; inputting the standard style information of the head sample image, the reconstructed contour model and the reconstructed texture map into the generation network for image generation to obtain a head reconstruction image; determining an image loss value for the head sample image and the head reconstruction image; when the image loss value does not meet the convergence condition, iteratively updating parameters of the preset neural network based on a plurality of head sample images until the image loss value meets the convergence condition, and outputting the converged contour reconstruction network and texture reconstruction network as the target reconstruction model;
Inputting the head real image into the target reconstruction model, reconstructing the head contour through the contour reconstruction layer to obtain a head contour model, and reconstructing the texture through the texture reconstruction layer to obtain a head texture map; when the head contour is reconstructed through the contour reconstruction layer, reconstructing the facial contour of the whole face based on the head real image, detecting a face key part based on the head real image, and reconstructing the contour of the face key part to obtain the head contour model comprising the facial contour and the face key part contour;
and rendering to obtain a user head portrait of the target user based on the head contour model, the head texture map and the expected head portrait style.
2. The method for generating a user avatar according to claim 1, wherein said rendering the user avatar of the target user based on the head contour model, the head texture map, and the desired avatar style comprises:
determining whether the target user enables an avatar function;
if the target user enables the avatar function, displaying a plurality of avatar models with different designs to the target user through a user terminal, and determining a target avatar model according to the feedback of the target user;
updating the head contour model based on the target avatar model to obtain a target contour model, and updating the head texture map based on the target avatar model to obtain a target texture map;
and rendering the target contour model based on the target texture map and the expected head portrait style to obtain the head portrait of the user.
3. The method of generating a user avatar according to claim 2, wherein updating the head contour model based on the target avatar model to obtain a target contour model, and updating the head texture map based on the target avatar model to obtain a target texture map, comprises:
determining whether the target avatar model includes a hair model and a face model;
determining contour difference data and color difference data between the target avatar model and the head contour model if the target avatar model includes the hair model and the face model;
adjusting the shape contour of the head contour model based on the contour difference data to obtain the target contour model;
and adjusting the shape and the color of the head texture map based on the color difference data to obtain the target texture map.
4. The user avatar generation method of claim 3, wherein after the determining whether the target avatar model includes a hair model and a face model, the method further comprises:
if the target avatar model only comprises the hair model or the face model, correspondingly adjusting the hair contour model or the face contour model in the head contour model based on the hair model or the face model to obtain the target contour model;
and correspondingly adjusting a hair texture map or a face texture map in the head texture map based on the hair model or the face model to obtain the target texture map.
5. The user avatar generation method of claim 2, wherein the determining a target avatar model from feedback of the target user comprises:
after a plurality of virtual image models with different designs are displayed to the target user, determining whether a model selection instruction of the target user is received;
if the model selection instruction is received, the virtual image model selected by the target user is used as the target virtual image model;
And if the model selection instruction of the target user is not received, determining the avatar model which is most matched with the target user according to the user portrait of the target user, and taking the avatar model as the target avatar model.
6. The user avatar generation method of claim 2, wherein after the determining whether the target user enables an avatar function, the method further comprises:
if the target user does not enable the avatar function, determining a holiday element expected by the target user;
and rendering the head contour model based on the head texture map, the expected head portrait style and the holiday element to obtain the user head portrait.
7. The method for generating a user head portrait according to any one of claims 1 to 6, wherein the target reconstruction model further includes an image segmentation layer, the inputting the head real image into the target reconstruction model, performing head contour reconstruction by the contour reconstruction layer to obtain a head contour model, and performing texture reconstruction by the texture reconstruction layer to obtain a head texture map includes:
inputting the head real image into the image segmentation layer for image segmentation to obtain a hair image and a face image of the head real image;
Inputting the hair image and the face image into the contour reconstruction layer to obtain a head contour model comprising hair contours and face contours;
and inputting the hair image and the face image into the texture reconstruction layer to obtain a head texture map comprising hair textures and face textures.
8. A user avatar generation apparatus, comprising:
the collection module is used for collecting a head real image of the target user after receiving a head portrait generation instruction of the target user;
a determining module, configured to determine a desired avatar style of the target user;
the acquisition module is used for acquiring a target reconstruction model comprising a contour reconstruction layer and a texture reconstruction layer, wherein the target reconstruction model is a model obtained by training a preset neural network based on a plurality of head sample images; the target reconstruction model is obtained by training in the following mode: acquiring a plurality of real head images, performing multi-stylization processing on each head image to obtain a plurality of head sample images with different styles, wherein each head sample image corresponds to one piece of standard style information; inputting the head sample image into the preset neural network, carrying out contour reconstruction on the head sample image through the contour reconstruction network to obtain a reconstructed contour model, and carrying out texture reconstruction on the head sample image through the texture reconstruction network to obtain a reconstructed texture map; inputting the standard style information of the head sample image, the reconstructed contour model and the reconstructed texture map into the generation network for image generation to obtain a head reconstruction image; determining an image loss value for the head sample image and the head reconstruction image; when the image loss value does not meet the convergence condition, iteratively updating parameters of the preset neural network based on a plurality of head sample images until the image loss value meets the convergence condition, and outputting the converged contour reconstruction network and texture reconstruction network as the target reconstruction model;
The reconstruction module is used for inputting the head real image into the target reconstruction model, reconstructing the head outline through the outline reconstruction layer to obtain a head outline model, and reconstructing the texture through the texture reconstruction layer to obtain a head texture map; when the head contour is reconstructed through the contour reconstruction layer, reconstructing the facial contour of the whole face based on the head real image, detecting a face key part based on the head real image, and reconstructing the contour of the face key part to obtain the head contour model comprising the facial contour and the face key part contour;
and the rendering module is used for rendering and obtaining the user head portrait of the target user based on the head outline model, the head texture map and the expected head portrait style.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the user avatar generation method of any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the user avatar generation method of any one of claims 1 to 7.
CN202310710670.1A 2023-06-15 2023-06-15 User head portrait generation method, device, computer equipment and storage medium Active CN116452703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310710670.1A CN116452703B (en) 2023-06-15 2023-06-15 User head portrait generation method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN116452703A (en) 2023-07-18
CN116452703B (en) 2023-10-27

Family

ID=87134088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310710670.1A Active CN116452703B (en) 2023-06-15 2023-06-15 User head portrait generation method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116452703B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017000217A1 (en) * 2015-06-30 2017-01-05 北京旷视科技有限公司 Living-body detection method and device and computer program product
WO2021261188A1 (en) * 2020-06-23 2021-12-30 パナソニックIpマネジメント株式会社 Avatar generation method, program, avatar generation system, and avatar display method
CN114549291A (en) * 2022-02-24 2022-05-27 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN116109798A (en) * 2023-04-04 2023-05-12 腾讯科技(深圳)有限公司 Image data processing method, device, equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NZ530738A (en) * 2004-01-21 2006-11-30 Stellure Ltd Methods and systems for compositing images
CN105184735B (en) * 2014-06-19 2019-08-06 腾讯科技(深圳)有限公司 A kind of portrait deformation method and device
US10326972B2 (en) * 2014-12-31 2019-06-18 Samsung Electronics Co., Ltd. Three-dimensional image generation method and apparatus
CN113112580B (en) * 2021-04-20 2022-03-25 北京字跳网络技术有限公司 Method, device, equipment and medium for generating virtual image


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant