CN115345980A - Generation method and device of personalized texture map

Generation method and device of personalized texture map

Info

Publication number: CN115345980A (also published as CN115345980B)
Application number: CN202211269727.0A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 刘豪杰
Original and current assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by: Beijing Baidu Netcom Science and Technology Co Ltd
Legal status: Granted; active
Prior art keywords: texture, user, model, personalized, target

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme involving 3D image data
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2024 - Style variation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a method and an apparatus for generating a personalized texture map, an electronic device, and a storage medium, relating to the field of artificial intelligence, in particular to augmented reality, virtual reality, computer vision, deep learning, and the like, and applicable to scenarios such as virtual digital humans and the metaverse. The specific implementation scheme is as follows: acquiring a user image of a target user; generating, based on the user image, a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user; and transforming and migrating the initial personalized texture map based on the correspondence between the three-dimensional user model and a target avatar model to obtain a personalized texture map corresponding to the target avatar model. The method and apparatus significantly improve the similarity between the personalized avatar and the target user, and at the same time improve the efficiency with which the target avatar model can access personalized texture maps of multiple styles.

Description

Generation method and device of personalized texture map
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, in particular to the fields of augmented reality, virtual reality, computer vision, deep learning, and the like, and more particularly to a method and an apparatus for generating a personalized texture map, an electronic device, a storage medium, and a computer program product, applicable to scenarios such as virtual digital humans and the metaverse.
Background
With the surge of interest in the metaverse, avatars have entered the public eye. In an avatar face-pinching (customization) system, the degrees of freedom for manually adjusting the avatar's face shape and texture are very limited, and the adjustable parameters are the same for every user. To strengthen a user's sense of ownership of the avatar, users often want a customized character that closely resembles themselves. The prior art generally focuses only on reconstructing the avatar's shape to improve its similarity to the real user, and pays no attention to the similarity between the avatar's texture and the real user.
Disclosure of Invention
The disclosure provides a method and an apparatus for generating a personalized texture map, an electronic device, a storage medium, and a computer program product.
According to a first aspect, there is provided a method for generating a personalized texture map, comprising: acquiring a user image of a target user; generating, based on the user image, a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user; and transforming and migrating the initial personalized texture map based on the correspondence between the three-dimensional user model and a target avatar model to obtain a personalized texture map corresponding to the target avatar model.
According to a second aspect, there is provided an apparatus for generating a personalized texture map, comprising: an acquisition unit configured to acquire a user image of a target user; a generating unit configured to generate, based on the user image, a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user; and an obtaining unit configured to transform and migrate the initial personalized texture map based on the correspondence between the three-dimensional user model and a target avatar model to obtain a personalized texture map corresponding to the target avatar model.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method as described in any one of the implementations of the first aspect.
According to a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described in any one of the implementations of the first aspect.
According to the technique disclosed herein, the method for generating a personalized texture map can generate, based on a user image of a user, a personalized texture map that matches the target user represented by the user image and is suitable for the target avatar model, thereby significantly improving the similarity between the personalized avatar and the target user while improving the efficiency with which the target avatar model can access personalized texture maps of multiple styles.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment according to the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of generating a personalized texture map in accordance with the present disclosure;
FIG. 3 is a schematic diagram of an application scenario of a method for generating a personalized texture map according to the present embodiment;
FIG. 4 is a flow diagram of yet another embodiment of a method of generating a personalized texture map in accordance with the present disclosure;
FIG. 5 is a schematic flow diagram of the generation and rendering process of a personalized texture map in accordance with the present disclosure;
FIG. 6 is a block diagram of one embodiment of a personalized texture map generation apparatus according to the present disclosure;
FIG. 7 is a schematic block diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
FIG. 1 illustrates an exemplary architecture 100 to which the personalized texture map generation methods and apparatus of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The communication connections between the terminal devices 101, 102, 103 form a topological network and the network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 may be hardware devices or software that support network connections for data interaction and data processing. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices supporting network connection, information acquisition, interaction, display, processing, and the like, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above. It may be implemented, for example, as multiple software or software modules for providing distributed services, or as a single software or software module. And is not particularly limited herein.
The server 105 may be a server providing various services, for example, a background processing server generating a personalized texture map adapted to a target user represented by the user image and suitable for the target avatar model based on the user image obtained by the terminal device 101, 102, 103. Optionally, the server may render and present an avatar of the texture characterized by the personalized texture map. As an example, the server 105 may be a cloud server.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules for providing distributed services) or as a single piece of software or software module. And is not particularly limited herein.
It should be further noted that the method for generating the personalized texture map provided by the embodiments of the present disclosure may be executed by the server, by the terminal device, or by the server and the terminal device in cooperation with each other. Accordingly, the parts (for example, the units) included in the apparatus for generating the personalized texture map may all be disposed in the server, may all be disposed in the terminal device, or may be distributed between the server and the terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for an implementation. When the electronic device on which the generation method of the personalized texture map is executed does not need to perform data transmission with other electronic devices, the system architecture may include only the electronic device (e.g., a server or a terminal device) on which the generation method of the personalized texture map is executed.
Referring to fig. 2, fig. 2 is a flowchart of a method for generating a personalized texture map according to an embodiment of the disclosure, where the flowchart 200 includes the following steps.
Step 201, acquiring a user image of a target user.
In this embodiment, an execution body of the method for generating the personalized texture map (for example, the terminal device or the server in FIG. 1) may acquire the user image of the target user from a remote location or locally through a wired or wireless network connection.
The target user is the user whose facial texture is to be matched to the target avatar. The user image may be an image including a facial object of the target user. Specifically, the user image may be one or more images corresponding to the target user.
Step 202, based on the user image, a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user are generated.
In this embodiment, the execution body may generate a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user based on the user image. The three-dimensional user model corresponding to the target user is a three-dimensional model representing the facial shape characteristics of the target user; the initial personalized texture map corresponding to the target user is a texture map representing the facial texture characteristics of the target user.
As an example, the user image includes multi-view user images of the target user. The execution body may perform three-dimensional reconstruction from the multi-view images to generate the three-dimensional user model corresponding to the target user, and stitch the textures in the multi-view images according to the pose information of the target user to generate the initial personalized texture map corresponding to the target user. Specifically, the three-dimensional user model and the initial personalized texture map corresponding to the target user may be generated from the multi-view images by a pre-trained neural network model.
As another example, the user image is a single image of the target user, and the execution body may generate the three-dimensional user model corresponding to the target user and the initial personalized texture map corresponding to the target user by using a 3D Morphable Model (3DMM, a three-dimensional deformable face model).
The three-dimensional deformable face model is a generic three-dimensional face model that represents a face with a fixed number of points. Its core idea is that faces can be put into one-to-one correspondence in a three-dimensional space, and that any face can be expressed as a weighted linear combination of an orthogonal basis built from many other faces. Just as each point (x, y, z) in three-dimensional space is a weighted sum of the basis vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1) along the three coordinate directions, with x, y, and z as the weights, the same idea carries over to the space of faces: each three-dimensional face can be represented in a basis-vector space composed of all the faces in a database, and solving for the model of any three-dimensional face amounts to solving for the coefficient of each basis vector.
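For illustration (this notation is added here for clarity and is not taken verbatim from the patent), the linear combination can be written as:

```latex
S = \bar{S} + \sum_{i=1}^{m} \alpha_i \, s_i , \qquad
T = \bar{T} + \sum_{j=1}^{n} \beta_j \, t_j
```

where \bar{S} and \bar{T} are the mean face shape and texture of the database, s_i and t_j are the principal-component basis vectors, and \alpha_i and \beta_j are the coefficients to be solved for a given face; these correspond to the internal parameters α and β that are initialized in the next paragraph.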
Specifically, a three-dimensional face model is first initialized. The parameters to be initialized include the internal parameters α and β (the coefficients over the eigenvectors of the shape covariance matrix and of the texture covariance matrix, respectively) and the external rendering parameters (including the camera position, the rotation angle of the image plane, the components of direct and ambient light, the image contrast, and the like). Given these parameters, the projection of the three-dimensional model onto a two-dimensional image is uniquely determined.
Then, under the control of the initialized parameters, a two-dimensional image is obtained from the three-dimensional model by projecting the model onto the image plane; the error (loss) between the projected two-dimensional image and the input image is calculated; and the relevant parameters are adjusted by back-propagating this error. The projection from the three-dimensional model to the two-dimensional image is then determined again from the adjusted parameters, a new two-dimensional image is obtained, and the process iterates in this way.
The iteration proceeds in a coarse-to-fine manner: a low-resolution version of the input image is used initially and only the coefficient of the first principal component is optimized, with more principal components added gradually. In later iteration steps, the external parameters can be fixed and each part of the face optimized separately. The iteration stops in response to reaching a preset end condition, yielding the fitted three-dimensional deformable face model. The preset end condition may be that the number of iterations exceeds a preset count threshold, that the iteration time exceeds a preset time threshold, or that the loss converges.
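A minimal sketch of the coarse-to-fine fitting loop described above, written in PyTorch-style Python. The 3DMM and the differentiable projection are stand-in callables, and all names, dimensions, and hyper-parameters are illustrative assumptions rather than the patent's actual implementation:

```python
import torch
import torch.nn.functional as F

def fit_3dmm(input_image, model_3dmm, project, num_stages=3, iters_per_stage=200):
    """Analysis-by-synthesis fitting of a 3D deformable face model.

    model_3dmm(alpha, beta) is assumed to return a textured 3D face;
    project(face, cam, light) is assumed to differentiably render it to a 2D image.
    input_image: (1, 3, H, W) tensor.
    """
    # Internal parameters (shape/texture coefficients) and external rendering
    # parameters (camera pose, lighting); all start from neutral initial values.
    alpha = torch.zeros(199, requires_grad=True)
    beta = torch.zeros(199, requires_grad=True)
    cam = torch.zeros(7, requires_grad=True)     # rotation, translation, focal length
    light = torch.zeros(9, requires_grad=True)   # ambient + directed light components

    n_active = 1                                 # start with the first principal component only
    for stage in range(num_stages):
        # Coarse-to-fine: begin with a low-resolution copy of the input image.
        scale = 2 ** (num_stages - 1 - stage)
        target = F.interpolate(input_image, scale_factor=1.0 / scale, mode="bilinear")
        optimizer = torch.optim.Adam([alpha, beta, cam, light], lr=1e-2)
        for _ in range(iters_per_stage):
            face = model_3dmm(alpha[:n_active], beta[:n_active])
            rendered = F.interpolate(project(face, cam, light), size=target.shape[-2:])
            loss = F.l1_loss(rendered, target)   # error between projection and input image
            optimizer.zero_grad()
            loss.backward()                      # adjust parameters by back-propagation
            optimizer.step()
        n_active = min(n_active * 4, alpha.numel())  # gradually add principal components
    return alpha, beta, cam, light
```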
Step 203, transforming and migrating the initial personalized texture map based on the correspondence between the three-dimensional user model and the target avatar model to obtain the personalized texture map corresponding to the target avatar model.
In this embodiment, the execution body may transform and migrate the initial personalized texture map based on the correspondence between the three-dimensional user model and the target avatar model to obtain the personalized texture map corresponding to the target avatar model. The target avatar may be any avatar, for example, a virtual cartoon character from an animation, a virtual digital human provided by a service provider, and the like.
As an example, the execution body may determine the correspondence between each unit region of the three-dimensional user model and each unit region of the target avatar model, obtain a transformation matrix between the three-dimensional user model and the target avatar model from the determined correspondences, and then transform and migrate the initial personalized texture map adapted to the three-dimensional user model according to the transformation matrix to obtain the personalized texture map adapted to the target avatar model. A unit region may be a region obtained by dividing the model according to a preset size.
After obtaining the personalized texture map corresponding to the target avatar model, the execution body may render and display, through a renderer, the target avatar with the texture represented by the personalized texture map.
With continued reference to fig. 3, fig. 3 is a schematic diagram 300 of an application scenario of the method for generating a personalized texture map according to the present embodiment. In the application scenario of fig. 3, first, the terminal device 301 captures a user image 303 of the target user 302; then, the user image 303 is transmitted to the server 304, and the server 304 generates the three-dimensional user model 305 corresponding to the target user 302 and the initial personalized texture map 306 corresponding to the target user 302 based on the user image 303; finally, the server 304 transforms and migrates the initial personalized texture map 306 based on the correspondence between the three-dimensional user model 305 and the target avatar model 307, resulting in a personalized texture map 308 corresponding to the target avatar model 307.
This embodiment provides a method for generating a personalized texture map: based on a user image of a user, a personalized texture map that matches the target user represented by the user image and is suitable for the target avatar model can be generated, which significantly improves the similarity between the personalized avatar and the target user and improves the efficiency with which the target avatar model can access personalized texture maps of multiple styles.
In some optional implementations of this embodiment, the execution body may perform step 202 as follows: generating, through a pre-trained generation network and based on the user image, the three-dimensional user model corresponding to the target user and the initial personalized texture map corresponding to the target user. The generation network characterizes the correspondence between the user image on the one hand and the three-dimensional user model and initial personalized texture map corresponding to the target user on the other.
In this implementation, the user image is a monocular image of the target user. The pre-trained generation network includes a shape reconstructor, a texture encoder, and a texture decoder. The shape reconstructor is used to generate the three-dimensional user model corresponding to the target user and can be implemented with a three-dimensional deformable face model; the texture encoder encodes the user image to obtain texture features of the user image; and the texture decoder decodes the texture features to obtain the initial personalized texture map.
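A minimal PyTorch-style sketch of such a generation network. The layer sizes, channel counts, and module names are illustrative assumptions and not the patent's actual architecture:

```python
import torch
import torch.nn as nn

class TextureGenerationNet(nn.Module):
    """Shape reconstructor + texture encoder + texture decoder, as described above."""

    def __init__(self, shape_dim=199, tex_feat_dim=256):
        super().__init__()
        # Shape reconstructor: regresses 3DMM-style shape coefficients from the image,
        # from which the three-dimensional user model can be built.
        self.shape_reconstructor = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, shape_dim),
        )
        # Texture encoder: encodes the user image into texture features.
        self.texture_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, tex_feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Texture decoder: decodes the texture features into a UV-space texture map.
        self.texture_decoder = nn.Sequential(
            nn.ConvTranspose2d(tex_feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, user_image):
        shape_coeffs = self.shape_reconstructor(user_image)  # -> 3D user model parameters
        texture_feat = self.texture_encoder(user_image)
        texture_map = self.texture_decoder(texture_feat)     # -> initial personalized texture map
        return shape_coeffs, texture_map


# Usage (monocular user image, batch of 1):
# net = TextureGenerationNet()
# shape_coeffs, init_texture = net(torch.rand(1, 3, 256, 256))
```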
In this implementation, the three-dimensional user model and the initial personalized texture map corresponding to the target user are generated from the user image by the pre-trained generation network, which improves the generation efficiency of the three-dimensional user model and the initial personalized texture map as well as their similarity and fit to the target user.
In some optional implementations of this embodiment, the generation network includes a shape reconstructor, a texture encoder, a texture decoder, and a differentiable renderer. In this implementation, the generation network is trained as follows.
First, a sample model of a sample user, characterized by a sample user image, is determined by a shape reconstructor based on the sample user image.
In this implementation, the shape reconstructor may employ a three-dimensional deformable face model. The execution body selects a sample user image that has not yet been used for training from the training sample set and inputs it to the shape reconstructor to obtain a sample model of the sample user represented by that image.
Second, the sample user image is unwrapped into a preset texture space to obtain a coarse texture map.
In this implementation, the preset texture space is a UV texture space.
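As a hedged illustration of unwrapping the sample image into UV space (per-vertex nearest sampling only; a real implementation would rasterize per texel, and all names and the data layout are assumptions):

```python
import numpy as np

def unwrap_to_uv(image, verts_2d, vert_uvs, uv_size=256):
    """Scatter image colors into the UV plane to obtain a coarse texture map.

    image: (H, W, 3) sample user image.
    verts_2d: (V, 2) projections of the fitted mesh vertices into the image plane.
    vert_uvs: (V, 2) per-vertex UV coordinates in [0, 1].
    """
    h, w = image.shape[:2]
    coarse_map = np.zeros((uv_size, uv_size, 3), dtype=image.dtype)
    for (x, y), (u, v) in zip(verts_2d, vert_uvs):
        # Sample the image color at the vertex projection ...
        color = image[int(np.clip(y, 0, h - 1)), int(np.clip(x, 0, w - 1))]
        # ... and write it at the vertex's UV location in the coarse texture map.
        coarse_map[int(v * (uv_size - 1)), int(u * (uv_size - 1))] = color
    return coarse_map
```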
Third, the coarse texture map is refined through the texture encoder and the texture decoder to obtain a fine texture map.
In this implementation, the execution body may encode the coarse texture map with the texture encoder to obtain texture features of the coarse texture map, and decode the texture features with the texture decoder to obtain the fine texture map.
Fourth, a target image is obtained by rendering with the differentiable renderer based on the sample model and the fine texture map.
The rendering process of a production engine (e.g., a typical game engine) is not differentiable; therefore, a differentiable renderer is needed so that the gradient of the rendered output can be back-propagated to the modules whose parameters need to be updated during training.
Fifth, the generation network is obtained by training based on the loss between the sample user image and the target image.
The loss between the sample user image and the target image characterizes the difference between the two; the larger the difference, the poorer the generation quality of the generation network. The above training steps are executed iteratively until a preset end condition is reached, yielding the trained generation network.
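A hedged sketch of a single training iteration combining the five steps above. The generator from the earlier sketch, a differentiable renderer, and an unwrap helper are assumed interfaces (the helper here is assumed to project the reconstructed mesh and sample the image into UV space, cf. the unwrap_to_uv sketch above plus a projection step); this is not the patent's actual training code:

```python
import torch
import torch.nn.functional as F

def training_step(generator, diff_renderer, unwrap_fn, sample_image, optimizer):
    """One self-supervised training iteration of the generation network."""
    # Step 1: shape reconstructor -> sample model of the user in the image.
    shape_coeffs, _ = generator(sample_image)
    # Step 2: unwrap the sample image into UV space -> coarse texture map
    # (unwrap_fn is assumed to return a (1, 3, uv_size, uv_size) tensor).
    coarse_uv = unwrap_fn(sample_image, shape_coeffs)
    # Step 3: texture encoder + decoder refine the coarse map into a fine texture map.
    fine_uv = generator.texture_decoder(generator.texture_encoder(coarse_uv))
    # Step 4: differentiable rendering of the sample model with the fine texture map.
    rendered = diff_renderer(shape_coeffs, fine_uv)
    # Step 5: loss between the rendered target image and the sample user image;
    # gradients flow back through the differentiable renderer to all modules.
    loss = F.l1_loss(rendered, sample_image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```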
This implementation provides a training scheme for the generation network and improves the accuracy of the generation network.
In some optional implementations of this embodiment, the execution body may perform the third step as follows: first, obtaining texture features of the coarse texture map through the texture encoder; and then refining the coarse texture map, through the texture decoder, based on the texture features and the image features of the user image to obtain the fine texture map.
In this implementation, the execution body may extract image features of the user image through an image encoder and then, combining the texture features and the image features, refine the coarse texture map through the texture decoder to obtain the fine texture map. This further improves the fineness of the fine texture map and helps improve the training efficiency of the generation network.
In some optional implementations of this embodiment, the generation network further includes an illumination regressor. In this implementation, the execution body may further perform the following operation: determining, through the illumination regressor, illumination information corresponding to the sample user image. Specifically, the image features of the sample user image may be extracted through the image encoder and then input to the illumination regressor to obtain the illumination information corresponding to the sample user image. The illumination information corresponding to the sample user image is the illumination information of the surrounding environment in which the user represented by the sample user image is located.
In this implementation, the execution body may perform the fourth step as follows: obtaining the target image by rendering with the differentiable renderer based on the sample model, the fine texture map, and the illumination information.
In this implementation, when rendering the target image, the influence of the illumination information is taken into account in addition to the sample model and the fine texture map, which improves the accuracy of the obtained target image and the training efficiency of the generation network.
In this implementation, the execution body determines the sample model of the sample user and the pose information of the sample user through the shape reconstructor. Furthermore, the execution body may obtain the target image by rendering with the differentiable renderer based on the sample model, the fine texture map, the illumination information, and the pose information.
In some optional implementations of this embodiment, the execution body may perform step 203 as follows.
First, mesh alignment and deformation are performed on the target avatar model with reference to the three-dimensional user model to obtain a deformed avatar model.
In this implementation, the execution body may first determine a transformation matrix between the three-dimensional user model and the target avatar model; then, based on the transformation matrix, align the mesh of the target avatar model with reference to the three-dimensional user model; and after the mesh alignment, deform the target avatar model while keeping its topology unchanged to obtain a deformed avatar model whose mesh shape is consistent with that of the three-dimensional user model.
Second, a first correspondence between the three-dimensional user model and the deformed avatar model with respect to texture patches is determined. A texture patch is a triangular patch of the model. As an example, the execution body may determine the first correspondence between the three-dimensional user model and the deformed avatar model with respect to texture patches through a KNN (K Nearest Neighbors) algorithm.
Third, a second correspondence between the texture patches of the three-dimensional user model and the pixels of the initial personalized texture map is determined.
UV mapping is a three-dimensional modeling process that projects a two-dimensional image onto the surface of a three-dimensional model for texture mapping; each model corresponds to its own set of texture UV coordinates. A correspondence therefore exists between the model and the texture UV coordinates, and the second correspondence between the texture patches of the three-dimensional user model and the pixels of the initial personalized texture map can be determined based on this correspondence.
Fourth, the initial personalized texture map is transformed and migrated according to the first correspondence and the second correspondence to obtain the personalized texture map corresponding to the target avatar model.
By combining the first correspondence and the second correspondence, the correspondence between the three-dimensional user model and the target avatar model is determined, and the initial personalized texture map is then transformed and migrated to obtain the personalized texture map corresponding to the target avatar model.
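A hedged sketch of the two correspondences and the resulting texture migration (scipy's KD-tree is used for the nearest-neighbour search; the per-face data layout and the centroid-level sampling are simplifying assumptions, since a real implementation would rasterize each patch with barycentric interpolation):

```python
import numpy as np
from scipy.spatial import cKDTree

def migrate_texture(user_verts, user_faces, user_face_uvs,
                    avatar_verts_deformed, avatar_faces, avatar_face_uvs,
                    init_texture, uv_size=512):
    """Transform and migrate the initial personalized texture map onto the avatar's UV layout.

    user_face_uvs / avatar_face_uvs: per-face UV coordinates, shape (F, 3, 2), in [0, 1].
    init_texture: (H, W, 3) initial personalized texture map of the user model.
    """
    # First correspondence: each texture patch (triangle) of the deformed avatar model
    # is matched to its nearest patch on the three-dimensional user model (KNN, K = 1).
    user_centroids = user_verts[user_faces].mean(axis=1)
    avatar_centroids = avatar_verts_deformed[avatar_faces].mean(axis=1)
    _, nearest_user_face = cKDTree(user_centroids).query(avatar_centroids)

    # Second correspondence: a user texture patch maps to pixels of the initial
    # personalized texture map through the user model's UV coordinates.
    h, w = init_texture.shape[:2]
    avatar_texture = np.zeros((uv_size, uv_size, 3), dtype=init_texture.dtype)
    for f, src_f in enumerate(nearest_user_face):
        src_uv = user_face_uvs[src_f].mean(axis=0)    # centre of the matched source patch
        color = init_texture[int(src_uv[1] * (h - 1)), int(src_uv[0] * (w - 1))]
        dst_uv = avatar_face_uvs[f].mean(axis=0)      # centre of the destination patch
        avatar_texture[int(dst_uv[1] * (uv_size - 1)), int(dst_uv[0] * (uv_size - 1))] = color
    return avatar_texture
```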
This implementation provides a concrete way of obtaining the personalized texture map corresponding to the target avatar model and improves the fit between the target avatar model and the personalized texture map.
In some optional implementations of this embodiment, the execution body may further perform the following operations.
First, the skin color information of the target user represented by the personalized texture map is migrated to the native texture map inherent to the target avatar model to obtain migrated skin color information. Second, the migrated skin color information is fused with the skin color information of the native texture map to obtain a fused texture map. Third, based on the fused texture map and the target avatar model, the avatar with the texture represented by the fused texture map is obtained by rendering.
The native texture map inherent to the target avatar model is the texture map created together with the target avatar model and fully adapted to it, and it generally has a high aesthetic quality. It will be appreciated that a target avatar model typically has only one, or a limited number of, inherent native texture maps.
In this implementation, the migrated skin color information of the personalized texture map is fused with the skin color information of the native texture map, so that the fused texture map retains the appeal of the native texture map and its fit to the target avatar model while carrying the personalized characteristics of the target user, giving each of many users a distinct, personalized face.
In some optional implementations of this embodiment, the execution body may perform the first step as follows: first, converting the personalized texture map and the native texture map into a first color space to obtain a converted personalized texture map and a converted native texture map; then, determining the mean and standard deviation of the pixel values of the converted personalized texture map and the mean and standard deviation of the pixel values of the converted native texture map; next, processing the converted personalized texture map according to these means and standard deviations to obtain a processed personalized texture map; and finally, converting the processed personalized texture map into a second color space to obtain the migrated skin color information.
In this implementation, the first color space is the Lab color space, a color-opponent space in which L represents lightness and a and b represent the two opponent color dimensions. The second color space is the RGB (Red, Green, Blue) color space, which is the color space used by the personalized texture map and the native texture map themselves.
Specifically, the following operations are performed for each pixel of the converted personalized texture map: first, the mean corresponding to the converted personalized texture map is subtracted from the pixel value to obtain a first pixel value; then, the first pixel value is multiplied by the ratio of the standard deviation of the converted native texture map to the standard deviation of the converted personalized texture map to obtain a second pixel value; finally, the mean corresponding to the converted native texture map is added to the second pixel value.
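A hedged sketch of the per-pixel operation just described, which follows the classic mean/standard-deviation (Reinhard-style) color transfer; OpenCV is assumed for the Lab/RGB conversions and 8-bit texture maps are assumed:

```python
import cv2
import numpy as np

def migrate_skin_color(personalized_map, native_map):
    """Transfer the user's skin tone statistics onto the avatar's native texture.

    Both inputs are (H, W, 3) uint8 images in the RGB (second) color space.
    """
    # Convert both maps into the Lab (first) color space.
    src = cv2.cvtColor(personalized_map, cv2.COLOR_RGB2LAB).astype(np.float32)
    ref = cv2.cvtColor(native_map, cv2.COLOR_RGB2LAB).astype(np.float32)

    # Per-channel mean and standard deviation of each converted map.
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))

    # Per pixel: subtract the personalized map's mean, scale by the ratio of the
    # native map's standard deviation to the personalized map's, add the native mean.
    migrated = (src - src_mean) * (ref_std / src_std) + ref_mean

    # Convert back to the RGB (second) color space -> migrated skin color information.
    migrated = np.clip(migrated, 0, 255).astype(np.uint8)
    return cv2.cvtColor(migrated, cv2.COLOR_LAB2RGB)
```

The migrated result could then be blended with the native texture map (for example, a per-pixel weighted average) to obtain the fused texture map described above.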
This implementation provides a concrete way of performing the color migration and improves the accuracy of the migrated skin color information.
With continued reference to FIG. 4, an illustrative flow 400 of yet another embodiment of a method for generating a personalized texture map in accordance with the present disclosure is shown comprising the following steps.
Step 401, acquiring a user image of a target user.
Step 402, generating a three-dimensional user model corresponding to a target user and an initial personalized texture map corresponding to the target user through a pre-trained generation network based on the user image.
Step 403, performing mesh alignment and deformation on the target avatar model with reference to the three-dimensional user model to obtain a deformed avatar model.
Step 404, determining a first correspondence between the three-dimensional user model and the deformed avatar model with respect to texture patches.
Step 405, determining a second correspondence between the texture patches of the three-dimensional user model and the pixels of the initial personalized texture map.
Step 406, transforming and migrating the initial personalized texture map according to the first correspondence and the second correspondence to obtain a personalized texture map corresponding to the target avatar model.
Step 407, migrating the skin color information of the target user represented by the personalized texture map to the native texture map inherent to the target avatar model to obtain migrated skin color information.
Step 408, fusing the migrated skin color information with the skin color information of the native texture map to obtain a fused texture map.
Step 409, rendering, based on the fused texture map and the target avatar model, the avatar with the texture represented by the fused texture map.
With continued reference to FIG. 5, an exemplary flow 500 of the generation and rendering process of a personalized texture map is shown. First, based on the generation network, an initial personalized texture map 502 and a three-dimensional user model are generated from a user image 501; then, the initial personalized texture map 502 is transformed and migrated according to the correspondence between the three-dimensional user model and the target avatar model to obtain a personalized texture map 503; next, based on the native texture map of the target avatar model, skin color migration is performed on the personalized texture map 503 and the skin color information of the native texture map is fused in, obtaining a fused texture map 504; finally, the fused texture map 504 and the target avatar model are rendered to obtain an avatar 505 with the texture represented by the fused texture map.
As can be seen from this embodiment, compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for generating a personalized texture map in this embodiment details the transformation and migration of the initial personalized texture map and the migration of the skin color information of the personalized texture map, which further improves the similarity between the personalized avatar and the target user and improves the efficiency with which the target avatar model can access personalized texture maps of multiple styles.
With continued reference to fig. 6, as an implementation of the method shown in the above-mentioned figures, the present disclosure provides an embodiment of an apparatus for generating a personalized texture map, where the apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for generating a personalized texture map includes: an acquisition unit 601 configured to acquire a user image of a target user; a generating unit 602 configured to generate a three-dimensional user model corresponding to a target user and an initial personalized texture map corresponding to the target user based on a user image; an obtaining unit 603 configured to transform and migrate the initial personalized texture map based on a correspondence between the three-dimensional user model and the target avatar model, obtaining a personalized texture map corresponding to the target avatar model.
In some optional implementations of this embodiment, the generating unit 602 is further configured to: and generating a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user through a pre-trained generation network based on the user image.
In some optional implementations of this embodiment, the generation network includes a shape reconstructor, a texture encoder, a texture decoder, and a differentiable renderer, and the apparatus further includes: a training unit (not shown in the figure) configured to train to obtain a generated network by: determining, by a shape reconstructor, a sample model of a sample user characterized by a sample user image based on the sample user image; expanding the sample user image to a preset texture space to obtain a coarse texture map; refining the coarse texture mapping by a texture encoder and a texture decoder to obtain a fine texture mapping; based on the sample model and the fine texture mapping, a target image is obtained through the rendering of a differentiable renderer; training results in a generated network based on the loss between the sample user image and the target image.
In some optional implementations of this embodiment, the training unit (not shown in the figure) is further configured to: obtaining texture features of the coarse texture map through a texture encoder; and refining the coarse texture mapping by a texture decoder based on the texture features and the image features of the user image to obtain a fine texture mapping.
In some optional implementations of this embodiment, generating the network further includes: an illumination regressor; and a training unit (not shown in the figures) further configured to: determining illumination information corresponding to the sample user image through an illumination regressor; and a training unit (not shown in the figures) further configured to: and based on the sample model, the fine texture map and the light information, rendering by a differentiable renderer to obtain a target image.
In some optional implementations of this embodiment, the obtaining unit 603 is further configured to: perform mesh alignment and deformation on the target avatar model with reference to the three-dimensional user model to obtain a deformed avatar model; determine a first correspondence between the three-dimensional user model and the deformed avatar model with respect to texture patches; determine a second correspondence between the texture patches of the three-dimensional user model and the pixels of the initial personalized texture map; and transform and migrate the initial personalized texture map according to the first correspondence and the second correspondence to obtain the personalized texture map corresponding to the target avatar model.
In some optional implementations of this embodiment, the apparatus further includes: a migration unit (not shown in the figure) configured to migrate the skin color information of the target user represented by the personalized texture map to a native texture map inherent to the target avatar model to obtain migrated skin color information; a fusion unit (not shown in the figure) configured to fuse the migrated skin color information and the skin color information of the native texture map to obtain a fused texture map; and a rendering unit (not shown in the figure) configured to render the avatar using the texture characterized by the fused texture map based on the fused texture map and the target avatar model.
In some optional implementations of this embodiment, the migration unit (not shown in the figure) is further configured to: convert the personalized texture map and the native texture map into a first color space to obtain a converted personalized texture map and a converted native texture map; determine the mean and standard deviation of the pixel values of the converted personalized texture map and the mean and standard deviation of the pixel values of the converted native texture map; process the converted personalized texture map according to the means and standard deviations to obtain a processed personalized texture map; and convert the processed personalized texture map into a second color space to obtain the migrated skin color information.
This embodiment provides an apparatus for generating a personalized texture map: based on a user image of a user, a personalized texture map that matches the target user represented by the user image and is suitable for the target avatar model can be generated, which significantly improves the similarity between the personalized avatar and the target user and improves the efficiency with which the target avatar model can access personalized texture maps of multiple styles.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to cause the at least one processor to perform the method for generating a personalized texture map as described in any of the above embodiments.
According to an embodiment of the present disclosure, there is also provided a readable storage medium storing computer instructions for enabling a computer to implement the method for generating a personalized texture map described in any of the above embodiments when executed.
The disclosed embodiments provide a computer program product, which when executed by a processor can implement the method for generating a personalized texture map described in any of the above embodiments.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701 which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
A number of components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 701 performs the respective methods and processes described above, such as the generation method of the personalized texture map. For example, in some embodiments, the method of generating the personalized texture map may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the method for generating a personalized texture map described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the personalized texture map generation method.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services; it may also be a server of a distributed system or a server combined with a blockchain.
According to the technical solution of the embodiments of the present disclosure, a method for generating a personalized texture map is provided: based on a user image of a user, a personalized texture map that matches the target user represented by the user image and is suitable for the target avatar model can be generated, which significantly improves the similarity between the personalized avatar and the target user while improving the efficiency with which the target avatar model can access personalized texture maps of multiple styles.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in this disclosure may be performed in parallel, sequentially or in different orders, as long as the desired results of the technical solutions provided by this disclosure can be achieved, which are not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (18)

1. A method for generating a personalized texture map, comprising:
acquiring a user image of a target user;
generating, based on the user image, a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user; and
transforming and migrating the initial personalized texture map based on a correspondence between the three-dimensional user model and a target avatar model to obtain a personalized texture map corresponding to the target avatar model.
2. The method of claim 1, wherein the generating, based on the user image, a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user comprises:
and generating a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user through a pre-trained generation network based on the user image.
3. The method of claim 2, wherein the generating network comprises a shape reconstructor, a texture encoder, a texture decoder, and a differentiable renderer, the generating network being trained by:
determining, by the shape reconstructor and based on a sample user image, a sample model of a sample user characterized by the sample user image;
unwrapping the sample user image into a preset texture space to obtain a coarse texture map;
refining the coarse texture map by the texture encoder and the texture decoder to obtain a fine texture map;
rendering, by the differentiable renderer, a target image based on the sample model and the fine texture map;
training the generation network based on a loss between the sample user image and the target image.
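One possible reading of this training procedure, sketched as a single self-supervised optimization step; PyTorch and an L1 photometric loss are assumed, and shape_reconstructor, texture_encoder, texture_decoder, renderer, and unwrap_to_uv are hypothetical callables standing in for the claimed components.

```python
import torch.nn.functional as F

def training_step(sample_image, shape_reconstructor, texture_encoder,
                  texture_decoder, renderer, unwrap_to_uv, optimizer):
    # Determine a sample model of the user pictured in the sample image.
    sample_model = shape_reconstructor(sample_image)
    # Unwrap the image into the preset texture (UV) space -> coarse texture map.
    coarse_texture = unwrap_to_uv(sample_image, sample_model)
    # Refine the coarse map with the texture encoder / decoder pair.
    fine_texture = texture_decoder(texture_encoder(coarse_texture))
    # Re-render the user from the reconstructed shape and the refined texture.
    target_image = renderer(sample_model, fine_texture)
    # Train on the loss between the sample user image and the rendered target image.
    loss = F.l1_loss(target_image, sample_image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```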
4. The method of claim 3, wherein the refining the coarse texture map by the texture encoder and the texture decoder to obtain the fine texture map comprises:
obtaining texture features of the coarse texture map through the texture encoder;
and refining, by the texture decoder, the coarse texture map based on the texture features and image features of the sample user image to obtain the fine texture map.
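Continuing the training sketch above, claim 4 additionally conditions the decoder on features of the input photo; image_encoder and the two-argument decoder signature are assumptions made only for illustration.

```python
texture_features = texture_encoder(coarse_texture)
image_features = image_encoder(sample_image)            # hypothetical feature extractor
fine_texture = texture_decoder(texture_features, image_features)
```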
5. The method of claim 3, wherein the generation network further comprises an illumination regressor, and the method further comprises:
determining illumination information corresponding to the sample user image through the illumination regressor; and
wherein the rendering, by the differentiable renderer, the target image based on the sample model and the fine texture map comprises:
and rendering the target image by the differentiable renderer based on the sample model, the fine texture map, and the illumination information.
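With the illumination regressor of claim 5, the rendering step in the training sketch also receives estimated lighting; a spherical-harmonics coefficient vector is assumed here purely as one plausible representation.

```python
illumination = illumination_regressor(sample_image)     # hypothetical regressor, e.g., SH coefficients
target_image = renderer(sample_model, fine_texture, illumination)
loss = F.l1_loss(target_image, sample_image)
```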
6. The method of claim 1, wherein the transforming and migrating the initial personalized texture map based on the correspondence between the three-dimensional user model and the target avatar model to obtain the personalized texture map corresponding to the target avatar model comprises:
performing mesh alignment and deformation on the target avatar model with reference to the three-dimensional user model to obtain a deformed avatar model;
determining a first correspondence, in terms of texture patches, between the three-dimensional user model and the deformed avatar model;
determining a second correspondence between texture patches of the three-dimensional user model and pixels of the initial personalized texture map;
and transforming and migrating the initial personalized texture map according to the first correspondence and the second correspondence to obtain the personalized texture map corresponding to the target avatar model.
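A NumPy sketch of this transform-and-migrate step, assuming the two correspondences have already been rasterized into per-texel lookup tables for the avatar's UV space; all array layouts below are assumptions made for illustration.

```python
import numpy as np

def migrate_texture(initial_tex, face_of_texel, bary_of_texel, user_face_uvs):
    # initial_tex:   (Ht, Wt, 3) initial personalized texture map
    # face_of_texel: (H, W)    index of the user-model texture patch behind each avatar texel
    # bary_of_texel: (H, W, 3) barycentric weights inside that patch (first correspondence)
    # user_face_uvs: (F, 3, 2) UVs of each patch corner in the initial map (second correspondence)
    th, tw = initial_tex.shape[:2]
    corner_uvs = user_face_uvs[face_of_texel]                  # (H, W, 3, 2)
    uv = (corner_uvs * bary_of_texel[..., None]).sum(axis=2)   # (H, W, 2) source UV per texel
    u = np.clip((uv[..., 0] * (tw - 1)).round().astype(int), 0, tw - 1)
    v = np.clip(((1.0 - uv[..., 1]) * (th - 1)).round().astype(int), 0, th - 1)
    return initial_tex[v, u]   # personalized texture map in the avatar's UV layout
```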
7. The method according to any one of claims 1-6, further comprising:
migrating skin color information of the target user represented by the personalized texture map to a native texture map inherent to the target avatar model to obtain migrated skin color information;
fusing the migrated skin color information with the skin color information of the native texture map to obtain a fused texture map;
and rendering, based on the fused texture map and the target avatar model, an avatar with the texture represented by the fused texture map.
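One plausible NumPy reading of the fusion step: blend the migrated skin color into the avatar's native texture before rendering. The skin mask and the blend weight are assumptions; the claim only requires that the two sources of skin color information be fused.

```python
import numpy as np

def fuse_textures(migrated_skin, native_tex, skin_mask, alpha=0.7):
    # migrated_skin: (H, W, 3) migrated skin color information
    # native_tex:    (H, W, 3) native texture map of the target avatar model
    # skin_mask:     (H, W) values in [0, 1] marking the skin region (assumption)
    w = skin_mask[..., None] * alpha
    fused = migrated_skin.astype(np.float32) * w + native_tex.astype(np.float32) * (1.0 - w)
    return np.clip(fused, 0, 255).astype(np.uint8)   # fused texture map, ready to render
```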
8. The method of claim 7, wherein the migrating the skin color information of the target user represented by the personalized texture map to a native texture map inherent to the target avatar model to obtain migrated skin color information comprises:
converting the personalized texture map and the native texture map to a first color space to obtain a converted personalized texture map and a converted native texture map;
determining a mean and a standard deviation of pixel values of the converted personalized texture map, and a mean and a standard deviation of pixel values of the converted native texture map;
processing the converted personalized texture map according to the means and the standard deviations to obtain a processed personalized texture map;
and converting the processed personalized texture map into a second color space to obtain the migrated skin color information.
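Claim 8 describes a statistics-matching color transfer. The sketch below assumes OpenCV, CIELAB as the "first color space" and BGR as the "second", and re-scales the personalized map's per-channel statistics toward those of the native map; this is one plausible reading, not the only one.

```python
import cv2
import numpy as np

def migrate_skin_color(personalized_bgr, native_bgr):
    # Convert both 8-bit texture maps to the first color space (LAB assumed).
    src = cv2.cvtColor(personalized_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(native_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    # Per-channel mean and standard deviation of both converted maps.
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1)) + 1e-6
    # Process the converted personalized map according to those statistics.
    processed = (src - src_mean) / src_std * ref_std + ref_mean
    processed = np.clip(processed, 0, 255).astype(np.uint8)
    # Convert back to the second color space (BGR assumed) -> migrated skin color information.
    return cv2.cvtColor(processed, cv2.COLOR_LAB2BGR)
```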
9. An apparatus for generating a personalized texture map, comprising:
an acquisition unit configured to acquire a user image of a target user;
a generating unit configured to generate a three-dimensional user model corresponding to the target user and an initial personalized texture map corresponding to the target user based on the user image;
an obtaining unit configured to transform and migrate the initial personalized texture map based on a correspondence between the three-dimensional user model and a target avatar model to obtain a personalized texture map corresponding to the target avatar model.
10. The apparatus of claim 9, wherein the generating unit is further configured to:
generate, based on the user image and through a pre-trained generation network, the three-dimensional user model corresponding to the target user and the initial personalized texture map corresponding to the target user.
11. The apparatus of claim 10, wherein the generation network comprises a shape reconstructor, a texture encoder, a texture decoder, and a differentiable renderer, the apparatus further comprising:
a training unit configured to train the generation network by:
determining, by the shape reconstructor and based on a sample user image, a sample model of a sample user characterized by the sample user image; unwrapping the sample user image into a preset texture space to obtain a coarse texture map; refining the coarse texture map through the texture encoder and the texture decoder to obtain a fine texture map; rendering, by the differentiable renderer, a target image based on the sample model and the fine texture map; and training the generation network based on a loss between the sample user image and the target image.
12. The apparatus of claim 11, wherein the training unit is further configured to:
obtain texture features of the coarse texture map through the texture encoder; and refine, by the texture decoder, the coarse texture map based on the texture features and image features of the sample user image to obtain the fine texture map.
13. The apparatus of claim 11, wherein the generation network further comprises an illumination regressor, and the training unit is further configured to:
determine, through the illumination regressor, illumination information corresponding to the sample user image; and
the training unit is further configured to:
render the target image through the differentiable renderer based on the sample model, the fine texture map, and the illumination information.
14. The apparatus of claim 9, wherein the obtaining unit is further configured to:
perform mesh alignment and deformation on the target avatar model with reference to the three-dimensional user model to obtain a deformed avatar model; determine a first correspondence, in terms of texture patches, between the three-dimensional user model and the deformed avatar model; determine a second correspondence between texture patches of the three-dimensional user model and pixels of the initial personalized texture map; and transform and migrate the initial personalized texture map according to the first correspondence and the second correspondence to obtain the personalized texture map corresponding to the target avatar model.
15. The apparatus of any of claims 9-14, further comprising:
a migration unit configured to migrate skin color information of the target user represented by the personalized texture map to a native texture map inherent to the target avatar model to obtain migrated skin color information;
a fusion unit configured to fuse the migrated skin color information with the skin color information of the native texture map to obtain a fused texture map;
and a rendering unit configured to render, based on the fused texture map and the target avatar model, an avatar with the texture represented by the fused texture map.
16. The apparatus of claim 15, wherein the migration unit is further configured to:
convert the personalized texture map and the native texture map to a first color space to obtain a converted personalized texture map and a converted native texture map; determine a mean and a standard deviation of pixel values of the converted personalized texture map, and a mean and a standard deviation of pixel values of the converted native texture map; process the converted personalized texture map according to the means and the standard deviations to obtain a processed personalized texture map; and convert the processed personalized texture map into a second color space to obtain the migrated skin color information.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202211269727.0A 2022-10-18 2022-10-18 Generation method and device of personalized texture map Active CN115345980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211269727.0A CN115345980B (en) 2022-10-18 2022-10-18 Generation method and device of personalized texture map

Publications (2)

Publication Number Publication Date
CN115345980A true CN115345980A (en) 2022-11-15
CN115345980B CN115345980B (en) 2023-03-24

Family

ID=83957593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211269727.0A Active CN115345980B (en) 2022-10-18 2022-10-18 Generation method and device of personalized texture map

Country Status (1)

Country Link
CN (1) CN115345980B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210407216A1 (en) * 2020-11-09 2021-12-30 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for generating three-dimensional virtual image, and storage medium
WO2022157718A1 (en) * 2021-01-22 2022-07-28 Sony Group Corporation 3d face modeling based on neural networks
CN114037802A (en) * 2021-11-24 2022-02-11 Oppo广东移动通信有限公司 Three-dimensional face model reconstruction method and device, storage medium and computer equipment
CN114792355A (en) * 2022-06-24 2022-07-26 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN114820905A (en) * 2022-06-24 2022-07-29 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and readable storage medium
CN114842123A (en) * 2022-06-28 2022-08-02 北京百度网讯科技有限公司 Three-dimensional face reconstruction model training and three-dimensional face image generation method and device

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115809696B (en) * 2022-12-01 2024-04-02 支付宝(杭州)信息技术有限公司 Virtual image model training method and device
CN115809696A (en) * 2022-12-01 2023-03-17 支付宝(杭州)信息技术有限公司 Virtual image model training method and device
CN116188640B (en) * 2022-12-09 2023-09-08 北京百度网讯科技有限公司 Three-dimensional virtual image generation method, device, equipment and medium
CN116188640A (en) * 2022-12-09 2023-05-30 北京百度网讯科技有限公司 Three-dimensional virtual image generation method, device, equipment and medium
CN116012666A (en) * 2022-12-20 2023-04-25 百度时代网络技术(北京)有限公司 Image generation, model training and information reconstruction methods and devices and electronic equipment
CN116012666B (en) * 2022-12-20 2023-10-27 百度时代网络技术(北京)有限公司 Image generation, model training and information reconstruction methods and devices and electronic equipment
CN116152403A (en) * 2023-01-09 2023-05-23 支付宝(杭州)信息技术有限公司 Image generation method and device, storage medium and electronic equipment
CN116152403B (en) * 2023-01-09 2024-06-07 支付宝(杭州)信息技术有限公司 Image generation method and device, storage medium and electronic equipment
CN116109798B (en) * 2023-04-04 2023-06-09 腾讯科技(深圳)有限公司 Image data processing method, device, equipment and medium
CN116109798A (en) * 2023-04-04 2023-05-12 腾讯科技(深圳)有限公司 Image data processing method, device, equipment and medium
CN116503508B (en) * 2023-06-26 2023-09-08 南昌航空大学 Personalized model construction method, system, computer and readable storage medium
CN116503508A (en) * 2023-06-26 2023-07-28 南昌航空大学 Personalized model construction method, system, computer and readable storage medium
CN116542846A (en) * 2023-07-05 2023-08-04 深圳兔展智能科技有限公司 User account icon generation method and device, computer equipment and storage medium
CN116542846B (en) * 2023-07-05 2024-04-26 深圳兔展智能科技有限公司 User account icon generation method and device, computer equipment and storage medium
CN117218266A (en) * 2023-10-26 2023-12-12 神力视界(深圳)文化科技有限公司 3D white-mode texture map generation method, device, equipment and medium

Also Published As

Publication number Publication date
CN115345980B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN115345980B (en) Generation method and device of personalized texture map
CN113327278B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN114820905B (en) Virtual image generation method and device, electronic equipment and readable storage medium
CN113658309B (en) Three-dimensional reconstruction method, device, equipment and storage medium
CN114842123B (en) Three-dimensional face reconstruction model training and three-dimensional face image generation method and device
CN115082639A (en) Image generation method and device, electronic equipment and storage medium
CN114842121B (en) Method, device, equipment and medium for generating mapping model training and mapping
CN115409933B (en) Multi-style texture mapping generation method and device
CN114549710A (en) Virtual image generation method and device, electronic equipment and storage medium
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
CN115147265A (en) Virtual image generation method and device, electronic equipment and storage medium
CN114972017A (en) Generation method and device of personalized face style graph and electronic equipment
CN113808249B (en) Image processing method, device, equipment and computer storage medium
CN114708374A (en) Virtual image generation method and device, electronic equipment and storage medium
CN114120413A (en) Model training method, image synthesis method, device, equipment and program product
CN113658035A (en) Face transformation method, device, equipment, storage medium and product
CN113052962A (en) Model training method, information output method, device, equipment and storage medium
CN115965735B (en) Texture map generation method and device
CN115393488B (en) Method and device for driving virtual character expression, electronic equipment and storage medium
CN116524162A (en) Three-dimensional virtual image migration method, model updating method and related equipment
CN115775300A (en) Reconstruction method of human body model, training method and device of human body reconstruction model
CN115311403A (en) Deep learning network training method, virtual image generation method and device
CN113421335B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN114648601A (en) Virtual image generation method, electronic device, program product and user terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant