CN114742940A - Method, device and equipment for constructing virtual image texture map and storage medium - Google Patents


Info

Publication number
CN114742940A
CN114742940A (application CN202210233787.0A)
Authority
CN
China
Prior art keywords
model
cartoon
topological structure
gridding
real person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210233787.0A
Other languages
Chinese (zh)
Inventor
奉万森
李志文
芦爱余
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202210233787.0A
Publication of CN114742940A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method, a device, equipment and a readable storage medium for constructing an avatar texture map. The method comprises the following steps: acquiring a three-dimensional target image; performing face reconstruction based on the three-dimensional target image to obtain a first real person gridding model, and obtaining a cartoon topological structure model based on the cartoon topological structure corresponding to the three-dimensional target image; topologically migrating the first real person gridding model to the cartoon topological structure model to obtain a second real person gridding model, so that the second real person gridding model and the cartoon topological structure model have the same number of vertices at the same positions; and mapping the texture coordinates of the first real person gridding model into the cartoon topological structure model by using the mapping relation between the texture coordinates of the second real person gridding model and those of the cartoon topological structure model, so as to construct a target texture map on the cartoon topological structure model. The method aims to solve the problem that a cartoon topological structure model differs greatly from the real-person image.

Description

Method, device and equipment for constructing virtual image texture map and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for constructing an avatar texture map.
Background
The concept of the metaverse is currently popular, and one of its important functions is user modeling of a character image. Character-image modeling comes in several product forms: fully high-precision modeling of the user's real-person image; simulation of the user's real-person image; and the cartoon image. High-precision modeling reproduces the texture features of the user's real image at high fidelity, extracting the user's real texture and implanting it into a model; this approach requires complex equipment for support. Simulation modeling extracts texture features from the user's real-person character and establishes a model simulating the user's real-person image according to the extracted features; because the texture features of the model are taken directly from the user without any stylization such as cartooning, this approach shares the problem of high-precision modeling: the created image is too lifelike, so when users are dissatisfied with certain features of their own appearance, both of these modeling approaches leave them dissatisfied with the result. The cartoon image is a cartoon topological structure model established through a cartoon topological structure. Although the cartoon topological structure model is established according to the real-person image input by the user, its texture map differs greatly from the real-person texture map because it passes through the cartoon topology, so the cartoon image is heavily distorted relative to the real person; the user must therefore manually adjust face parameters to shape the texture features on the model, and this process is cumbersome.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides a method, a device, equipment and a readable storage medium for constructing an avatar texture map.
According to a first aspect of embodiments of the present application, there is provided a method of constructing an avatar texture map, the method comprising:
acquiring a three-dimensional target image, performing face reconstruction based on the three-dimensional target image to obtain a first real person gridding model, and obtaining a cartoon topological structure model based on a cartoon topological structure corresponding to the three-dimensional target image; the first real person gridding model and the cartoon topological structure model differ in the number of vertices or in vertex positions;
topologically migrating the first real person gridding model to the cartoon topological structure model to obtain a second real person gridding model, so that the second real person gridding model and the cartoon topological structure model have the same number of vertices at the same positions;
and mapping the texture coordinates of the first real person gridding model into the cartoon topological structure model by using the mapping relation between the texture coordinates of the second real person gridding model and those of the cartoon topological structure model, so as to construct a target texture map on the cartoon topological structure model.
According to a second aspect of the embodiments of the present application, there is provided an apparatus for constructing an avatar texture map, including:
the modeling module is used for performing face reconstruction based on the three-dimensional target image to obtain a first real person gridding model, and obtaining a cartoon topological structure model based on a cartoon topological structure corresponding to the three-dimensional target image; the first real person gridding model and the cartoon topological structure model differ in the number of vertices or in vertex positions;
the topology migration module is used for topologically migrating the first real person gridding model to the cartoon topological structure model to obtain a second real person gridding model, so that the second real person gridding model and the cartoon topological structure model have the same number of vertices at the same positions;
and the texture migration module is used for mapping the texture coordinates of the first real person gridding model into the cartoon topological structure model based on the mapping relation between the texture coordinates of the second real person gridding model and those of the cartoon topological structure model, so as to construct a target texture map on the cartoon topological structure model.
According to a third aspect of embodiments herein, there is provided an apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the operations of any one of the methods of the first aspect.
According to a fourth aspect of embodiments herein, there is provided a readable storage medium having stored thereon instructions which, when executed by a processor, carry out the operations of any one of the methods of the first aspect.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the embodiments of the application, a cartoon topological structure model and a first real person gridding model are respectively established based on a three-dimensional target image: the cartoon topological structure model is obtained through a cartoon topological structure, and the first real person gridding model is obtained through face reconstruction of the three-dimensional target image. The first real person gridding model has the face features of the real person, while the cartoon topological structure model obtained through the cartoon topological structure differs greatly from those face features, and the two models differ in the number and positions of their vertices. The scheme therefore topologically migrates the first real person gridding model to the cartoon topological structure model, so that the real-person face feature parts are migrated onto the cartoon topological structure model and the cartoon topological structure model acquires face features partially similar to the real person. Then, using the mapping relation between the texture coordinates of the two models, the texture coordinates of the first real person gridding model are mapped onto the cartoon topological structure model and a texture map is constructed from them, so that the cartoon topological structure model acquires real-person texture features; at the same time, the process of transferring the real-person texture features onto the cartoon topological structure model has a beautifying effect on the real-person texture.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1A is a schematic representation of a simulated human image in an embodiment of the present application;
FIG. 1B is a schematic illustration of a cartoon image in an embodiment of the present application;
FIG. 2 is a flow chart of a method for constructing a texture map of an avatar in an embodiment of the present application;
FIG. 3 is a schematic diagram of a human mesh model and its texture map in an embodiment of the present application;
FIG. 4 is a flowchart of a process for reconstructing mesh and texture coordinate mapping in an embodiment of the present application;
FIG. 5 is a block diagram of an apparatus for constructing a texture map of an avatar in an embodiment of the present application;
FIG. 6 is a flow chart of the present application in a live network scenario;
FIG. 7 is a schematic diagram of an electronic device in an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to a determination", depending on the context.
In building an avatar, the prior art usually selects one of the following three modeling methods: 1. high-precision modeling; 2. simulated real-person modeling; 3. cartoon topology modeling. Although all three can establish a three-dimensional model of a target face, the inventor found that they have defects in practice. High-precision modeling, or a model simulating the user's real-person image established by simulated real-person modeling, largely retains the texture features of the real-person image, as shown in fig. 1A; therefore, when users are dissatisfied with certain features of their own appearance, character-image modeling in these two ways leaves them dissatisfied with the modeling result. A cartoon topological structure model, by contrast, completely changes the user's texture features, as shown in fig. 1B; if users want a cartoon topological structure model closely resembling their own image, they must adjust a great number of texture feature parameters of the model.
In view of the above problems, an embodiment of the present application provides a method for constructing an avatar texture map; referring to fig. 2, fig. 2 shows part of the steps of the embodiment.
In step S201, a three-dimensional target image is acquired;
in step S202, performing face reconstruction based on the three-dimensional target image to obtain a first real person gridding model, and obtaining a cartoon topological structure model based on a cartoon topological structure corresponding to the three-dimensional target image; the first real person gridding model and the cartoon topological structure model differ in the number of vertices, or contain vertices at different positions;
in step S203, topologically migrating the first real person gridding model to the cartoon topological structure model to obtain a second real person gridding model, so that the second real person gridding model and the cartoon topological structure model have the same number of vertices at the same positions;
in step S204, mapping the texture coordinates of the first real person gridding model into the cartoon topological structure model by using the mapping relation between the texture coordinates of the second real person gridding model and those of the cartoon topological structure model, so as to construct a target texture map on the cartoon topological structure model.
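As a concrete but non-authoritative illustration of steps S202-S204, the sketch below models a mesh as vertex and texture-coordinate lists, uses a nearest-vertex match as a stand-in for the alignment parameter, and transfers texture coordinates through that alignment. All function names and the nearest-vertex correspondence rule are assumptions for illustration only, not the patent's actual algorithm.

```python
import math

# A mesh is modeled as a dict: "verts" is a list of (x, y, z) vertices and
# "uv" is a list of (u, v) texture coordinates, one per vertex.

def nearest_index(p, verts):
    """Index of the vertex in verts closest to point p."""
    return min(range(len(verts)), key=lambda i: math.dist(p, verts[i]))

def topology_migrate(real_mesh, cartoon_mesh):
    """Step S203 (sketch): build a second real person gridding model whose
    vertex count and order follow the cartoon topology; a nearest-vertex
    match stands in for the first alignment parameter."""
    align = [nearest_index(v, real_mesh["verts"]) for v in cartoon_mesh["verts"]]
    second = {"verts": [real_mesh["verts"][i] for i in align],
              "uv":    [real_mesh["uv"][i] for i in align]}
    return second, align

def map_texture_coords(real_mesh, align):
    """Step S204 (sketch): carry the first model's texture coordinates onto
    the cartoon topology through the alignment."""
    return [real_mesh["uv"][i] for i in align]

real = {"verts": [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
        "uv":    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]}
cartoon = {"verts": [(0.1, 0.9, 0), (0.9, 0.1, 0)]}  # different vertex count/positions

second, align = topology_migrate(real, cartoon)
cartoon_uv = map_texture_coords(real, align)
print(align)       # [2, 1]
print(cartoon_uv)  # [(0.0, 1.0), (1.0, 0.0)]
```

After migration, the second model has the cartoon model's vertex count and ordering, so texture coordinates can be carried across by index, which is the property the mapping relation relies on.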
It should be noted that PCA (Principal Component Analysis) is a feature algorithm often used in fields such as face recognition to reduce the feature points of a face model. Since the cartoon topology has no PCA basis, it has no feature points that can be dimension-reduced, so the feature points of the cartoon topological structure model will differ from those of the first real person gridding model, which is established from a PCA basis. Consequently, the two models may have different numbers of vertices, and the positions of their vertices may not all be the same.
As an example, when the first real person gridding model is topologically migrated, an alignment parameter between the first real person gridding model and the cartoon topological structure model may be obtained first. This alignment parameter, the first alignment parameter, reflects the spatial relationship between the vertices of the first real person gridding model and the vertices of the cartoon topological structure model during topology migration. The number and positions of the vertices of the first real person gridding model are aligned with the vertices of the cartoon topological structure model according to the obtained first alignment parameter, so as to obtain a second real person gridding model. After the second real person gridding model is obtained, the mapping relation between the texture coordinates of the second real person gridding model and those of the cartoon topological structure model is determined based on the first alignment parameter. In one example, this may proceed as follows: determine, based on the first alignment parameter, the texture coordinates in the second real person gridding model that correspond to the texture coordinates of the cartoon topological structure model; then determine the mapping relation from the texture coordinates of the second real person gridding model and the texture coordinates of the cartoon topological structure model. That is, the correspondence between the vertices of the cartoon topological structure model and the vertices of the second real person gridding model is found from the first alignment parameter; since texture coordinates are formed by model vertices, a texture map of the cartoon topological structure model can then be constructed on the second real person gridding model according to this vertex correspondence, against the texture map of the second real person gridding model.
It should be noted that, the method for determining the mapping relationship of the texture coordinate through the first alignment parameter is not unique, and is not described in detail herein.
In another embodiment, when constructing the target texture map on the cartoon topological structure model, the texture map of the first real person gridding model may be obtained by texture sampling and then mapped onto the cartoon topological structure model through the mapping relation of texture coordinates. As shown in fig. 3, 301 and 302 are a first real-person texture feature and a first real person gridding model, and 303 is a texture map of the gridding model; the coordinate points contained in 302 are texture coordinates, and mapping these texture coordinates onto the cartoon topological structure model according to the mapping relation achieves the goal of giving the cartoon topological structure model real-person texture features. It should be noted that texture sampling may be performed on the pixel points where the model vertices are located, or on the pixel points inside the meshes of the gridding model; either sampling mode can achieve the final purpose of the present application.
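A minimal sketch of per-vertex texture sampling as described above, assuming uv coordinates in [0, 1] and a row-major grayscale image; nearest-pixel lookup stands in for whatever sampling the implementation actually uses, and the function name is illustrative.

```python
def sample_texture(image, uv_list):
    """Nearest-pixel sampling of a row-major grayscale image at each
    (u, v) texture coordinate, with u horizontal and v vertical."""
    h, w = len(image), len(image[0])
    samples = []
    for u, v in uv_list:
        x = min(int(u * (w - 1) + 0.5), w - 1)  # round to nearest column
        y = min(int(v * (h - 1) + 0.5), h - 1)  # round to nearest row
        samples.append(image[y][x])
    return samples

tex = [[0, 50], [100, 255]]            # 2x2 toy texture
uv = [(0.0, 0.0), (1.0, 1.0), (1.0, 0.0)]
print(sample_texture(tex, uv))         # [0, 255, 50]
```

Sampling inside mesh faces rather than at vertices would use the same lookup, just evaluated at interpolated coordinates.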
Optionally, in step S204, before constructing the target texture map, a mesh may be reconstructed based on the mapped texture coordinates, and the target texture map constructed from the pixel points in that mesh. In one embodiment, the reconstructed mesh may be a Delaunay triangulation, i.e., a set of connected but non-overlapping triangles whose circumcircles contain no other point of the region; in other words, no triangle in a mesh generated this way contains a vertex of another triangle. The target texture map is then constructed from the pixel points in the Delaunay triangle mesh. Other mesh patterns, such as regular triangles or quadrangles, may also be constructed; no limitation is made here. In this embodiment, part of the steps for constructing the target texture map based on Delaunay triangles are shown in fig. 4:
in step S401, texture coordinates of the first real person gridding model and the cartoon topological structure model are obtained;
in step S402, a Delaunay triangulation is constructed according to the texture coordinates of the cartoon topological structure model;
in step S403, since texture coordinates are formed by model vertices, the vertex coordinates of the cartoon topological structure model can be determined from its texture coordinates, so that the Delaunay triangles formed by the texture coordinates of the cartoon topological structure model are converted into Delaunay triangles formed by the vertices of the cartoon topological structure model;
in step S404, since the second real person gridding model is obtained by topology migration between the first real person gridding model and the cartoon topological structure model, the vertices of the cartoon topological structure model are mapped onto the second real person gridding model in combination with the first alignment parameter, so that the Delaunay triangles formed by the vertices of the cartoon topological structure model are converted into Delaunay triangles formed by the vertices of the second real person gridding model; and because texture coordinates are formed by model vertices, this is equivalent to obtaining the Delaunay triangles formed by the texture coordinates of the second real person gridding model;
in step S405, traversing all the Delaunay triangles to obtain a mapping relation between the Delaunay triangles of the second real person gridding model and the Delaunay triangles of the cartoon topological structure model, and thereby the mapping relation between their texture coordinates;
in step S406, warping the triangular patch regions of the Delaunay triangles constructed from the texture coordinates of the first real person gridding model onto the corresponding Delaunay triangles of the cartoon topological structure model according to the mapping relation. Warp processing here refers to affine transformation of an image. The principle is as follows: for a coordinate point (x, y) in a two-dimensional coordinate system, a 2x2 matrix can be used to adjust the values of x and y, and linear transformation (rotation and scaling) of a two-dimensional shape can be realized by adjusting x and y, so the whole transformation process is a process of adjusting (x, y). An affine transformation (Affine Transformation) is a linear transformation followed by a translation, mapping one vector space to another. The linear transformation multiplies (x, y) by a matrix, which may contain the scaling and rotation features of the image transformation; the translation adds a vector to (x, y), which may contain the information of the position shift. Under an affine transformation the target image preserves straightness and parallelism. Straightness means that a straight line remains a straight line after the transformation, and an arc remains an arc. Parallelism means that the relative position relations between straight lines are preserved: parallel lines remain parallel after the transformation, and the order of coordinate points on a straight line does not change, although the included angle between vectors may change. That is, an affine transformation is a mapping from two-dimensional coordinates (x, y) to two-dimensional coordinates (u, v), and its mathematical expression is as follows:
    u = a1*x + b1*y + c1
    v = a2*x + b2*y + c2

where a1 and a2 are the transform coefficients with respect to x, such as displacement coefficients on the x-axis; b1 and b2 are the transform coefficients with respect to y, such as displacement coefficients on the y-axis; and c1 and c2 are constants.
When the image needs to be scaled and rotated, the scaling and rotation are realized through the matrix; and in order not to change the image size after the transformation, the 2x3 matrix constructed from the above formula is converted into a homogeneous matrix, giving the homogeneous-matrix representation of the affine transformation of the image:
    [u]   [a1  b1  c1] [x]
    [v] = [a2  b2  c2] [y]
    [1]   [ 0   0   1] [1]
in step S407, after all the Delaunay triangles have been processed, the target texture map is constructed.
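The per-triangle warp of step S406 amounts to recovering the six affine coefficients u = a1*x + b1*y + c1, v = a2*x + b2*y + c2 from the three vertex correspondences of a triangle pair. The sketch below solves the resulting 3x3 system with Cramer's rule; the helper names are illustrative, not from the patent.

```python
def affine_from_triangles(src, dst):
    """Return ((a1, b1, c1), (a2, b2, c2)) mapping the src triangle onto dst.
    Each coefficient row is solved from three point correspondences."""
    (x1, y1), (x2, y2), (x3, y3) = src
    # Determinant of [[x1,y1,1],[x2,y2,1],[x3,y3,1]]; nonzero for a proper triangle.
    d = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    rows = []
    for t in (0, 1):  # t=0 solves the u-row, t=1 the v-row
        r1, r2, r3 = dst[0][t], dst[1][t], dst[2][t]
        a = (r1 * (y2 - y3) - y1 * (r2 - r3) + (r2 * y3 - r3 * y2)) / d
        b = (x1 * (r2 - r3) - r1 * (x2 - x3) + (x2 * r3 - x3 * r2)) / d
        c = (x1 * (y2 * r3 - y3 * r2) - y1 * (x2 * r3 - x3 * r2)
             + r1 * (x2 * y3 - x3 * y2)) / d
        rows.append((a, b, c))
    return tuple(rows)

def apply_affine(m, p):
    """Apply u = a1*x + b1*y + c1, v = a2*x + b2*y + c2 to point p."""
    (a1, b1, c1), (a2, b2, c2) = m
    x, y = p
    return (a1 * x + b1 * y + c1, a2 * x + b2 * y + c2)

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 2), (4, 2), (2, 5)]        # scale x by 2, y by 3, then shift by (2, 2)
m = affine_from_triangles(src, dst)
print(apply_affine(m, (0.5, 0.5)))    # (3.0, 3.5)
```

A full warp would evaluate this transform (or its inverse) for every pixel inside each triangular patch; production code would typically delegate that to a library routine.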
It should be noted that the real-person texture coordinate space indicated by A in fig. 4 includes the coordinate spaces of both the first real person gridding model and the second real person gridding model. The vertices corresponding to the texture coordinates of the first real person gridding model are the same as those corresponding to the texture coordinates of the second real person gridding model, but the vertex positions are not all the same; after the first real person gridding model is changed into the second real person gridding model through topology migration, the texture coordinates based on the first real person gridding model do not change as the topology migration proceeds.
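The empty-circumcircle property that defines the Delaunay triangulation used in steps S402-S407 can be checked with the standard in-circle determinant test. A sketch, assuming the triangle is given in counter-clockwise order (the sign convention of the determinant depends on orientation):

```python
def in_circumcircle(a, b, c, p):
    """True if p lies strictly inside the circumcircle of triangle (a, b, c),
    with a, b, c in counter-clockwise order; the classic in-circle determinant."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

tri = [(0, 0), (2, 0), (0, 2)]         # CCW right triangle; circumcenter (1, 1)
print(in_circumcircle(*tri, (1, 1)))   # True  -> (1, 1) would violate Delaunay
print(in_circumcircle(*tri, (3, 3)))   # False -> outside the circumcircle
```

A triangulation is Delaunay exactly when this test is False for every triangle against every other vertex, which is why the reconstructed mesh avoids containing foreign vertices.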
In another embodiment, there is provided an apparatus for constructing a texture map of an avatar, as shown in fig. 5:
the modeling module 501 is configured to perform face reconstruction based on the three-dimensional target image to obtain a first real person gridding model, and obtain a cartoon topological structure model based on a cartoon topological structure corresponding to the three-dimensional target image; the real person gridding model and the cartoon topological structure model have different numbers of vertexes or different positions; it should be noted that, because the first real person gridding model and the cartoon topological structure model are different in modeling mode, and the cartoon topological model does not have a PCA base, the number of model vertexes thereof is inconsistent, and the positions of vertexes are not all the same;
the topology migration module 502 is configured to topologically migrate the established first real-person gridding model to a cartoon topology model to obtain a second real-person gridding model, so that the number of vertices and the positions of the vertices of the second real-person gridding model after topology migration are completely the same as those of the cartoon topology model;
the texture migration module 503 is configured to map the texture coordinates of the first real person gridding model into the cartoon topology model based on a mapping relationship between the texture coordinates of the second real person gridding model and the cartoon topology model, so as to construct a target texture map in the cartoon topology model;
it should be noted that, when the topology migration module 502 performs topology migration on the first real person gridding model, the alignment parameter between the first real person gridding model and the cartoon topology model may be obtained first, and the number and the position of the vertices in the first real person gridding model are aligned with the vertices in the cartoon topology model according to the obtained first alignment parameter, so as to obtain a second real person gridding model; after the second real person gridding model is obtained, determining the mapping relation of texture coordinates of the second real person gridding model and the cartoon topological structure model based on the first alignment parameters, and specifically: determining texture coordinates of a corresponding second real person gridding model in a second real person gridding model based on the texture coordinates of the cartoon topological structure model and the first alignment parameters; and determining a mapping relation according to the texture coordinates of the second real person gridding model and the texture coordinates of the cartoon topological structure.
Before the texture migration module 503 performs texture migration, the texture map of the first real person gridding model may be obtained through texture sampling and then mapped onto the cartoon topological structure model through the mapping relation of texture coordinates. Optionally, a mesh may be re-established from the texture coordinates of the cartoon topological structure model before the texture map is constructed; in one embodiment, the re-established mesh is a Delaunay triangulation, and the specific flow is as described above.
Optionally, before the texture migration module 503 constructs the target texture map, a mesh may be reconstructed based on the mapped texture coordinates, and the target texture map may be constructed according to the pixel points in the mesh. In one embodiment, the reconstructed mesh is a Delaunay triangle, and the target texture map is constructed according to the pixel points in the Delaunay triangle mesh.
An application scene of this scheme for constructing an avatar texture map is a live-streaming scene. The network environment comprises an anchor client, a plurality of viewer clients, and a live broadcast server; the anchor client distributes a live video stream to the viewer clients through the live broadcast server. The function of the present scheme may be configured on the anchor client or on the live broadcast server. Fig. 6 illustrates an example in which the anchor client executes the solution of the present application.
In step S601, the anchor client acquires an anchor image through a camera;
in step S602, the anchor client performs face reconstruction on the acquired anchor image to obtain a first real person gridding model, and performs cartoon topological structure modeling according to the anchor image to obtain a cartoon topological structure model;
in step S603, the anchor client topologically migrates the first real person gridding model to the cartoon topological structure model to obtain a second real person gridding model; through the topology migration, the first real person gridding model, originally not aligned with the cartoon topological structure model in vertex number and vertex positions, is converted into a second real person gridding model aligned with the cartoon topological structure model in both;
in step S604, the anchor client may obtain the mapping relationship between the texture coordinates of the second real person gridding model and the cartoon topology model according to the first alignment parameter used during the topology migration, and map the texture coordinates of the first real person gridding model, which carry the facial texture features of the photo input by the user, onto the cartoon topology model according to the mapping relationship;
in step S605, the anchor client reconstructs a mesh on the cartoon topology model according to the mapped texture coordinates of the first real person gridding model, and constructs a texture map according to pixel points in the reconstructed mesh;
in step S606, the cartoon topology model with the constructed texture map is output; the output cartoon topology model is a model capable of creating a cartoon image with a certain similarity to the anchor image.
In step S607, the anchor client generates a cartoon image by using the cartoon topology model according to the requirements of the scene, renders a live broadcast picture, and sends the live broadcast picture to the live broadcast server, so that the live broadcast server distributes it to each viewer client.
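The data flow of steps S601 to S607 can be summarized in a minimal, runnable sketch. All classes and function bodies below are illustrative stand-in stubs; the patent does not specify implementations for face reconstruction, cartoon topology modeling, or topology migration. The sketch only shows how the models and texture coordinates pass between the steps.

```python
# Stub data flow for steps S601-S607; every function body is a
# placeholder, not the patent's actual algorithm.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list                            # 3D vertex positions
    uv: list = field(default_factory=list)    # texture coordinates

def face_reconstruct(image):          # S602: first real person gridding model
    return Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                uv=[(0, 0), (1, 0), (0, 1)])

def cartoon_topology(image):          # S602: cartoon topological structure model
    return Mesh(vertices=[(0, 0, 1), (1, 0, 1), (0, 1, 1)])

def topology_migrate(real, cartoon):  # S603: align vertex number and positions
    return Mesh(vertices=list(cartoon.vertices), uv=list(real.uv))

def build_texture_map(real, cartoon): # S604-S605: map UVs onto the cartoon model
    second = topology_migrate(real, cartoon)
    cartoon.uv = list(second.uv)      # mapping relation: vertices align 1:1
    return cartoon                    # S606: textured cartoon topology model

frame = "anchor_image"                # S601: captured frame (placeholder)
textured = build_texture_map(face_reconstruct(frame), cartoon_topology(frame))
```

In a real pipeline the stubbed steps would be replaced by the face reconstruction, cartoon modeling, and topology migration components described earlier, and step S607's rendering would consume `textured`.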
Accordingly, as shown in fig. 7, the present application further provides an apparatus 70 for constructing a texture map of an avatar, comprising a processor 71 and a memory 72 for storing executable instructions, wherein the memory 72 stores a computer program and the processor 71 is configured to:
acquire a three-dimensional target image, perform face reconstruction based on the three-dimensional target image to obtain a first real person gridding model, and obtain a cartoon topological structure model based on a cartoon topological structure corresponding to the three-dimensional target image, wherein the first real person gridding model and the cartoon topological structure model differ in the number of vertexes and/or in vertex positions;
topologically migrate the first real person gridding model to the cartoon topological structure model to obtain a second real person gridding model, wherein the second real person gridding model has the same vertex number and vertex positions as the cartoon topological structure model;
and map the texture coordinates of the first real person gridding model into the cartoon topological structure model by using the mapping relationship between the texture coordinates of the second real person gridding model and the cartoon topological structure model, so as to construct a target texture map in the cartoon topological structure model.
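A minimal sketch of this mapping step, under the assumption that topology migration yields a per-vertex correspondence between the models (simplified here to an index array `corr`, standing in for the first alignment parameter): each cartoon-model vertex inherits the texture coordinate of the first real-person model vertex it corresponds to. The function name and data shapes are illustrative, not taken from the patent.

```python
# Hypothetical UV transfer: given the correspondence produced by
# topology migration, copy each matching texture coordinate from the
# first real person gridding model onto the cartoon topology model.
def map_uv_to_cartoon(first_uv, corr):
    """first_uv -- list of (u, v) on the first real person gridding model
    corr     -- corr[i] = index into first_uv for cartoon vertex i
    Returns the texture coordinates transferred onto the cartoon model."""
    return [first_uv[j] for j in corr]

cartoon_uv = map_uv_to_cartoon(
    first_uv=[(0.1, 0.2), (0.8, 0.2), (0.5, 0.9)],
    corr=[2, 0, 1])
# cartoon_uv == [(0.5, 0.9), (0.1, 0.2), (0.8, 0.2)]
```

The target texture map can then be rasterized over these transferred coordinates, for example via the Delaunay remeshing described above.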
The processor 71 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 72 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The apparatus may also cooperate with a network storage device that performs the storage function of the memory through a network connection. The memory 72 may be an internal storage unit of the device 70, such as a hard disk or a memory of the device 70, or an external storage device of the device 70, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the device 70.
Further, memory 72 may also include both internal and external storage units of device 70. The memory 72 is used to store computer programs and other programs and data required by the device. The memory 72 may also be used to temporarily store data that has been output or is to be output.
The device 70 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The device may include, but is not limited to, the processor 71 and the memory 72. Those skilled in the art will appreciate that fig. 7 is merely an example of the device 70 and does not constitute a limitation of the device 70, which may include more or fewer components than shown, may combine some of the components, or may have different components; for example, the device may also include input-output devices, network access devices, buses, etc.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.

Claims (14)

1. A method of constructing an avatar texture map, the method comprising:
acquiring a three-dimensional target image, performing face reconstruction based on the three-dimensional target image to obtain a first real person gridding model, and obtaining a cartoon topological structure model based on a cartoon topological structure corresponding to the three-dimensional target image; wherein the first real person gridding model and the cartoon topological structure model differ in the number of vertexes contained or contain vertexes with different positions;
topologically migrating the first real person gridding model to the cartoon topological structure model to obtain a second real person gridding model, wherein the second real person gridding model has the same vertex number and vertex positions as the cartoon topological structure model;
and mapping the texture coordinates of the first real person gridding model into the cartoon topological structure model by using the mapping relationship between the texture coordinates of the second real person gridding model and the cartoon topological structure model, so as to construct a target texture map in the cartoon topological structure model.
2. The method of claim 1, wherein topologically migrating the first real person gridding model to the cartoon topological structure model comprises:
acquiring a first alignment parameter between the first real person gridding model and the cartoon topological structure model, and aligning the number and positions of the vertexes of the first real person gridding model with those of the cartoon topological structure model according to the first alignment parameter, so as to obtain the second real person gridding model.
3. The method of claim 2, wherein:
before using the mapping relationship between the texture coordinates of the second real person gridding model and the cartoon topological structure model, the method further comprises: determining the mapping relationship between the texture coordinates of the second real person gridding model and the cartoon topological structure model based on the first alignment parameter.
4. The method of claim 3, wherein:
the step of determining the mapping relationship comprises:
determining, based on the first alignment parameter, the texture coordinates in the second real person gridding model that correspond to the texture coordinates of the cartoon topological structure model; and determining the mapping relationship according to the texture coordinates of the second real person gridding model and the texture coordinates of the cartoon topological structure model.
5. The method of claim 1, wherein the step of constructing the target texture map in the cartoon topology model comprises:
reconstructing a mesh based on the mapped texture coordinates, and constructing the target texture map according to pixel points in the mesh.
6. The method of claim 5, wherein the step of reconstructing a mesh comprises:
constructing a Delaunay triangulation according to the mapped texture coordinates, and constructing the target texture map according to the pixel points in the Delaunay triangle mesh.
7. An apparatus for constructing an avatar texture map, the apparatus comprising:
a modeling module, used for performing face reconstruction based on a three-dimensional target image to obtain a first real person gridding model, and obtaining a cartoon topological structure model based on a cartoon topological structure corresponding to the three-dimensional target image; wherein the first real person gridding model and the cartoon topological structure model differ in the number of vertexes contained or contain vertexes with different positions;
a topology migration module, used for topologically migrating the first real person gridding model to the cartoon topological structure model to obtain a second real person gridding model, so that the second real person gridding model has the same vertex number and vertex positions as the cartoon topological structure model;
and a texture migration module, used for mapping the texture coordinates of the first real person gridding model into the cartoon topological structure model based on the mapping relationship between the texture coordinates of the second real person gridding model and the cartoon topological structure model, so as to construct a target texture map in the cartoon topological structure model.
8. The apparatus of claim 7, wherein the topology migration module being used for topologically migrating the first real person gridding model to the cartoon topological structure model comprises:
acquiring a first alignment parameter between the first real person gridding model and the cartoon topological structure model, and aligning the number and positions of the vertexes of the first real person gridding model with those of the cartoon topological structure model according to the first alignment parameter, so as to obtain the second real person gridding model.
9. The apparatus of claim 8, wherein:
the topology migration module is further configured to: determine the mapping relationship between the texture coordinates of the second real person gridding model and the cartoon topological structure model based on the first alignment parameter.
10. The apparatus of claim 9, wherein:
the topology migration module determining the mapping relationship comprises: determining, based on the texture coordinates of the cartoon topological structure model and the first alignment parameter, the corresponding texture coordinates in the second real person gridding model; and determining the mapping relationship according to the texture coordinates of the second real person gridding model and the texture coordinates of the cartoon topological structure model.
11. The apparatus of claim 7, wherein the texture migration module being used for constructing the target texture map in the cartoon topological structure model comprises:
reconstructing a mesh based on the mapped texture coordinates, and constructing the target texture map according to pixel points in the mesh.
12. The apparatus of claim 11, wherein the texture migration module reconstructing a mesh based on the mapped texture coordinates and constructing the target texture map according to pixel points in the mesh comprises:
constructing a Delaunay triangulation according to the mapped texture coordinates, and constructing the target texture map according to the pixel points in the Delaunay triangle mesh.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions; wherein the processor is configured to perform the operations of the method of any one of claims 1-6.
14. A computer-readable storage medium having stored thereon computer instructions, characterized in that:
the instructions, when executed by a processor, perform the operations of the method of any one of claims 1-6.
CN202210233787.0A 2022-03-10 2022-03-10 Method, device and equipment for constructing virtual image texture map and storage medium Pending CN114742940A (en)


Publications (1)

Publication Number Publication Date
CN114742940A (en) 2022-07-12


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359171A (en) * 2022-10-21 2022-11-18 Beijing Baidu Netcom Science Technology Co., Ltd. Virtual image processing method and device, electronic equipment and storage medium
CN115359171B (en) * 2022-10-21 2023-04-07 Beijing Baidu Netcom Science Technology Co., Ltd. Virtual image processing method and device, electronic equipment and storage medium
CN115409933A (en) * 2022-10-28 2022-11-29 Beijing Baidu Netcom Science Technology Co., Ltd. Multi-style texture mapping generation method and device
CN115409933B (en) * 2022-10-28 2023-02-03 Beijing Baidu Netcom Science Technology Co., Ltd. Multi-style texture mapping generation method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination