CN114581586A - Method and device for generating model substrate, electronic equipment and storage medium

Method and device for generating model substrate, electronic equipment and storage medium

Info

Publication number
CN114581586A
CN114581586A
Authority
CN
China
Prior art keywords
texture
coordinate
target
model
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210226613.1A
Other languages
Chinese (zh)
Inventor
刘豪杰 (Liu Haojie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210226613.1A priority Critical patent/CN114581586A/en
Publication of CN114581586A publication Critical patent/CN114581586A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 15/02 Non-photorealistic rendering
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The disclosure provides a method and a device for generating a model base, an electronic device, a readable storage medium and a computer program product, and relates to artificial intelligence fields such as augmented reality, computer vision and deep learning. The specific implementation scheme is as follows: a target texture map corresponding to a second model substrate is determined by using texture information corresponding to a first model substrate, the first model substrate being a substrate model preset for the second model substrate; and the second model substrate is texture-filled based on the target texture map to obtain a target model substrate. The approach can generate a target model base that is populated with texture information. Because the target texture map is determined by using the texture information corresponding to the first model substrate, related personnel do not need to set the target texture map manually to fill the texture of the second model substrate, which improves the generation efficiency and reduces the generation cost of a model base filled with texture information.

Description

Method and device for generating model substrate, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly to computer vision and image processing techniques, which can be used in fields such as augmented reality, computer vision, and deep learning.
Background
Image-to-three-dimensional-Avatar (PTA), a common avatar generation technique, is capable of generating a personalized three-dimensional avatar (Avatar) of a user based on the user's image.
In the process of generating a personalized three-dimensional avatar based on a user image, both the user image and a generated model base are required. Generating the model base is therefore an important link in the image-to-three-dimensional-avatar technique.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, a readable storage medium, and a computer program product for generating a model base filled with texture information.
According to an aspect of the present disclosure, there is provided a method of generating a model substrate, which may include the steps of:
determining a target texture map corresponding to a second model substrate by using texture information corresponding to the first model substrate, wherein the first model substrate is a substrate model preset aiming at the second model substrate;
and filling the texture of the second model substrate based on the target texture map to obtain a target model substrate.
According to a second aspect of the present disclosure, there is provided an apparatus for generating a model substrate, the apparatus may include:
the target texture map determining unit is used for determining a target texture map corresponding to a second model substrate by using texture information corresponding to the first model substrate, wherein the first model substrate is a substrate model preset aiming at the second model substrate;
and the target model substrate obtaining unit is used for carrying out texture filling on the second model substrate based on the target texture map to obtain the target model substrate.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method in any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the method in any of the embodiments of the present disclosure.
According to the technology of the present disclosure, the target model base is obtained by texture filling based on the target texture map, so the target model base is a model base filled with texture information. Because the target texture map is determined by using the texture information corresponding to the first model substrate, related personnel do not need to set the target texture map manually to fill the texture of the second model substrate, which improves the generation efficiency and reduces the generation cost of a model base filled with texture information.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a method of generating a model base provided in an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for determining a target texture map provided in an embodiment of the present disclosure;
FIG. 3 is a flowchart of a method for determining coordinates of a pixel point provided in an embodiment of the present disclosure;
FIG. 4 is a flowchart of an interpolation processing method provided in an embodiment of the present disclosure;
FIG. 5 is a schematic view of a mold base provided in an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an apparatus for generating a model substrate provided in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The present disclosure provides a method of generating a model substrate. Specifically, fig. 1 is a flowchart of a method of generating a model substrate provided in an embodiment of the present disclosure. The method shown in fig. 1 may comprise the following steps:
step S101: and determining a target texture map corresponding to a second model substrate by using the texture information corresponding to the first model substrate, wherein the first model substrate is a substrate model preset aiming at the second model substrate.
Step S102: and filling the texture of the second model substrate based on the target texture map to obtain a target model substrate.
In the method for generating the model base provided in the embodiment of the present disclosure, since the target model base is obtained by performing texture filling based on the target texture map, the target model base is the model base filled with the texture information. And the target texture map is determined by using the texture information corresponding to the first model substrate, so that related personnel do not need to set the target texture map in a manual mode to fill the texture of the second model substrate, the generation efficiency of the model substrate filled with the texture information is improved, and the generation cost of the model substrate filled with the texture information is reduced.
In embodiments of the present disclosure, the first model base is generally a standard model base filled with texture information that is pre-generated for the first standard model. The second model base is typically a standard model base that is pre-generated for the second standard model and not populated with texture information.
The standard model is a pre-generated standard avatar model, which is an avatar model without distinct features for displaying the normal image of the target object. Target objects include, but are not limited to, humans, animals, and even robots, among others. Accordingly, the avatar includes, but is not limited to, a human personalized three-dimensional avatar, an animal personalized three-dimensional avatar, and a robot personalized three-dimensional avatar.
In the embodiments of the present disclosure, the standard model base is a model base with normal expressiveness that is further generated based on the standard model. Specifically, the standard model base may be a target standard model base that has normal expressiveness and meets a target requirement, and is used for displaying the image of the target object under that target requirement.
The target requirement is a requirement set according to the facial expression or the facial features of the target object, and may include at least one of large eyes, thick eyebrows, a mouth-opening expression, a smiling expression or a blinking expression. Accordingly, the standard model base may specifically be a standard model base having large eyes, a standard model base having thick eyebrows, a standard model base having a mouth-opening action, a standard model base having a smiling expression, or a standard model base having a blinking expression.
In addition, the first model base may also be a style model base filled with texture information that is generated in advance for the first style model. The second model base may also be a stylistic model base pre-generated for the second stylistic model that is not populated with texture information.
The target style corresponds to a predetermined distinctive feature. Accordingly, the style model is an avatar model having that distinctive feature and is used for showing the image of the target object under the distinctive feature. The target style may refer to, for example, big eyes and a small mouth, or blinking and mouth distortion.
The style model is generally obtained by a designer performing model design for the target style using a design tool for avatar models; it may also be an avatar model with the target style generated in advance in another way. That is, the embodiments of the present disclosure do not specifically limit how the style model is generated.
The style model base is a model base generated on the basis of the style model and is used for showing the image of the target object, with its distinctive features, under the target requirement. For example: in the case where the style model has big eyes and a big mouth and the target requirement is an open mouth, the style model base may be a big-eye, big-mouth style base model having an open-mouth expression.
It should be noted that, when the target object is a person, permission and authorization of the relevant target object are obtained before the standard model, the target standard model base, the style model, the target style model base and the personalized three-dimensional avatar are generated. If the above process involves the acquisition, storage or application of personal information of a user, such acquisition, storage and application should comply with the relevant laws and regulations and not violate public order and good customs.
In an embodiment of the present disclosure, the texture map is an image representing texture information of the model base. Specifically, each pixel point of the texture map has a corresponding pixel point in the model substrate, and the texture information of each pixel point of the texture map is used for representing the texture information of the corresponding pixel point of the model substrate.
In an embodiment of the present disclosure, a specific implementation manner of determining a target texture map corresponding to a second model base by using texture information corresponding to a first model base is shown in fig. 2, and fig. 2 is a flowchart of a target texture map determining method provided in an embodiment of the present disclosure. The method shown in fig. 2 may comprise the steps of:
step S201: and obtaining a first texture map corresponding to the second model substrate by using the first texture coordinate, wherein the first texture coordinate is a texture coordinate corresponding to the pixel point of the second model substrate in a texture mapping coordinate system.
Step S202: and determining second pixel point coordinates corresponding to the texture information in the first pixel point coordinates, wherein the first pixel point coordinates are coordinates corresponding to the pixels of the first texture map in a pixel coordinate system.
Step S203: and correspondingly endowing the texture information to the pixel points of the first texture map based on the second pixel point coordinates to obtain a second texture map corresponding to the second model substrate.
Step S204: and obtaining the target texture map based on the second texture map.
Determining the second pixel point coordinates corresponding to the texture information before assigning the texture information ensures that the texture information is accurately assigned to the corresponding pixel points of the first texture map. Because the texture information is accurately assigned, a normal second texture map can be generated on the basis of the first texture map, and a normal target texture map can then be obtained on the basis of the second texture map.
The texture map is obtained by texture mapping the model base through texture coordinates. The process of texture mapping the model base will be described in detail by taking the first texture map as an example. The texture mapping process corresponding to the first texture map is as follows:
First, according to the texture distribution corresponding to the second model substrate, the texture coordinates of the pixel points of the second model substrate in the texture mapping coordinate system are calculated and determined as the first texture coordinates (u, v), where u and v each range from 0 to 1.
Next, the image size of the first texture map is preset. The image size includes a width (w) and a height (h) of the first texture map.
Then, the coordinates of the pixel points of the first texture map in the pixel coordinate system are calculated from the first texture coordinates and the image size, and are determined as the first pixel point coordinates.
Specifically, u of the first texture coordinate multiplied by the width of the first texture map gives x of the first pixel point coordinate, and v of the first texture coordinate multiplied by the height of the first texture map gives y. The first pixel point coordinate is therefore (x = u * w, y = v * h). For example, with w = h = 1024, the first texture coordinate (0.5, 0.25) maps to the first pixel point coordinate (512, 256).
Each of the first texture coordinates has a corresponding pixel point in the second model substrate, and the first pixel point coordinates are calculated from the first texture coordinates and the image size. Therefore, each pixel point of the first texture map has a corresponding pixel point in the second model substrate, so the texture information of each pixel point of the first texture map can represent the texture information of the corresponding pixel point of the second model substrate.
Finally, the texture information corresponding to the second model substrate is assigned at the first pixel point coordinates, obtaining the first texture map.
Because the second model substrate is not filled with texture information, initialized texture information can be configured in advance for the pixel points of the second model substrate. That is, the texture information of each pixel point of the first texture map initially represents this initialized texture information.
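As an illustrative, non-limiting sketch (not part of the original disclosure), the mapping from first texture coordinates to first pixel point coordinates and the construction of an initialized first texture map might look as follows; the function names, the NumPy dependency and the default gray initialization color are assumptions made for illustration:

```python
import numpy as np

def uv_to_pixel(uv_coords: np.ndarray, width: int, height: int) -> np.ndarray:
    """Map texture coordinates (u, v) in [0, 1] to pixel coordinates (x, y)
    using x = u * w and y = v * h, as described above."""
    xy = uv_coords * np.array([width, height], dtype=np.float64)
    # Guard against u == 1 or v == 1 landing one pixel outside the image.
    xy = np.minimum(xy, [width - 1, height - 1])
    return xy.astype(np.int64)

def make_first_texture_map(width: int, height: int,
                           init_color=(128, 128, 128)) -> np.ndarray:
    """Create a first texture map whose pixels all carry the initialized
    texture information configured for the second model substrate."""
    return np.full((height, width, 3), init_color, dtype=np.uint8)
```

For example, `uv_to_pixel(np.array([[0.5, 0.25]]), 1024, 1024)` returns `[[512, 256]]`, matching the (x = u * w, y = v * h) mapping above.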
In the embodiment of the present disclosure, a specific implementation process of determining a second pixel coordinate corresponding to texture information in a first pixel coordinate is shown in fig. 3, and fig. 3 is a flowchart of a pixel coordinate determination method provided in the embodiment of the present disclosure. The method shown in fig. 3 may include the steps of:
step S301: and determining a second texture coordinate based on the texture distribution corresponding to the second model substrate, wherein the second texture coordinate is the texture coordinate corresponding to the pixel point of the first model substrate in the texture mapping coordinate system.
Step S302: and aiming at the second texture coordinate, determining a second pixel point coordinate by utilizing a first coordinate corresponding relation and a second coordinate corresponding relation, wherein the first coordinate corresponding relation is a corresponding relation between the second texture coordinate and the first texture coordinate, and the second coordinate corresponding relation is a corresponding relation between the first texture coordinate and the first pixel point coordinate.
Determining the second texture coordinates based on the texture distribution corresponding to the second model base ensures consistency between the second texture coordinates and the first texture coordinates. Given that consistency, determining the second pixel point coordinates for the second texture coordinates through the first coordinate correspondence and the second coordinate correspondence is simple and direct. This improves the efficiency of determining the target texture map and, in turn, the efficiency of generating the target model substrate.
In the embodiment of the present disclosure, the specific implementation manner of step S302 may be as follows:
First, for the second texture coordinates, the texture coordinates shared by the first texture coordinates and the second texture coordinates are obtained by using the first coordinate correspondence, and these shared texture coordinates are determined as the target texture coordinates.
Then, for the target texture coordinates, the pixel point coordinates corresponding to the target texture coordinates among the first pixel point coordinates are obtained by using the second coordinate correspondence, and are determined as the second pixel point coordinates.
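A minimal sketch of this two-step lookup, assuming both correspondences are stored as Python dictionaries keyed by texture coordinates (an illustrative representation; the disclosure does not prescribe a data structure):

```python
def find_second_pixel_coords(second_tex_coords, first_coord_corr, second_coord_corr):
    """Chain the first coordinate correspondence (second texture coordinate ->
    first texture coordinate) and the second coordinate correspondence
    (first texture coordinate -> first pixel point coordinate) to obtain
    the second pixel point coordinates."""
    second_pixel_coords = []
    for uv in second_tex_coords:
        target_uv = first_coord_corr.get(uv)    # target texture coordinate
        if target_uv is None:
            continue                            # no shared texture coordinate
        second_pixel_coords.append(second_coord_corr[target_uv])
    return second_pixel_coords
```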
In the embodiment of the present disclosure, based on the second texture map, the manner of obtaining the target texture map may be: and determining the second texture map as the target texture map.
However, in practical applications, the first model substrate often has fewer pixel points than the second model substrate. In that case, when the second texture map is obtained by assigning the texture information to the pixel points of the first texture map, the texture information of the pixel points of the second texture map is often sparse. If the second texture map were directly determined as the target texture map, the texture information of the pixel points of the target texture map would be sparse, and so would the texture information of the pixel points of the target model base.
Therefore, in order to improve the consistency of the texture information of the pixel point of the target model base, in the embodiment of the disclosure, the texture information interpolation processing may be performed on the second texture map based on the second pixel point coordinate, so as to obtain the target texture map.
Specifically, when the texture information includes color information, the texture information interpolation based on the second pixel point coordinates consists of performing color information interpolation on the second texture map. The color information includes the color values of the pixel points in the base model, specifically the color values in the red (R), green (G) and blue (B) color channels.
In addition, the texture information may also include other information than color information, such as: shape information, pattern information, and the like.
In the embodiment of the present disclosure, a specific implementation manner of performing color information interpolation processing on the second texture map based on the second pixel coordinates is shown in fig. 4, and fig. 4 is a flowchart of an interpolation processing method provided in the embodiment of the present disclosure. The method shown in fig. 4 may include the steps of:
step S401: and determining pixel points to be interpolated in the second texture map.
Step S402: and determining the gravity center coordinate corresponding to the pixel point to be interpolated by utilizing the second pixel point coordinate.
Step S403: and determining the color information to be interpolated corresponding to the pixel point to be interpolated by using the barycentric coordinates and the color information.
Step S404: and giving the color information to be interpolated to the pixel point to be interpolated so as to perform color information interpolation on the second texture map.
On the basis of determining the pixel points to be interpolated, the color information to be interpolated is determined from the barycentric coordinates and the color information corresponding to those pixel points, and is then assigned to them. This makes the color information of the pixel points of the target texture map denser and the transitions of the color information between pixel points smoother.
In the embodiment of the present disclosure, the pixel points to be interpolated are determined as follows:
First, any three adjacent pixel points corresponding to the second pixel point coordinates are determined among the pixel points of the second texture map.
Then, within the triangular area formed by these three adjacent pixel points, one pixel point is selected as a pixel point to be interpolated.
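As an illustrative sketch of this selection (the disclosure only requires some pixel point inside the triangular area; choosing the centroid is an assumption made here):

```python
def pick_pixel_to_interpolate(p1, p2, p3):
    """Select one pixel point inside the triangle formed by three adjacent
    pixel points; here the rounded centroid, which always lies inside."""
    cx = round((p1[0] + p2[0] + p3[0]) / 3)
    cy = round((p1[1] + p2[1] + p3[1]) / 3)
    return (cx, cy)
```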
In embodiments of the present disclosure, the barycentric coordinates may be determined from the relation:

P = w1*P1 + w2*P2 + w3*P3, with w1 + w2 + w3 = 1

where (w1, w2, w3) are the barycentric coordinates of the pixel point P to be interpolated, and P1, P2 and P3 are the pixel point coordinates corresponding to the three adjacent pixel points.
In the embodiment of the present disclosure, when the color information is color values of three RGB color channels, the color information to be interpolated may be determined by using the following formula:
tex = w1*tex1 + w2*tex2 + w3*tex3
where tex represents the color values of the three RGB color channels at P, and tex1, tex2 and tex3 are respectively the color values of the three RGB color channels at P1, P2 and P3.
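An illustrative sketch of both formulas (the NumPy-based implementation is an assumption; the disclosure specifies only the mathematics):

```python
import numpy as np

def barycentric_weights(p, p1, p2, p3):
    """Solve P = w1*P1 + w2*P2 + w3*P3 with w1 + w2 + w3 = 1 for (w1, w2, w3).
    Substituting w3 = 1 - w1 - w2 reduces this to a 2x2 linear system."""
    p, p1, p2, p3 = (np.asarray(v, dtype=np.float64) for v in (p, p1, p2, p3))
    a = np.column_stack((p1 - p3, p2 - p3))   # columns: P1 - P3 and P2 - P3
    w1, w2 = np.linalg.solve(a, p - p3)
    return w1, w2, 1.0 - w1 - w2

def interpolate_color(weights, tex1, tex2, tex3):
    """tex = w1*tex1 + w2*tex2 + w3*tex3 over the three RGB color channels."""
    w1, w2, w3 = weights
    tex = w1 * np.asarray(tex1) + w2 * np.asarray(tex2) + w3 * np.asarray(tex3)
    return np.clip(tex, 0, 255).astype(np.uint8)
```

For example, for a pixel point to be interpolated at P = (2, 1) inside the triangle P1 = (0, 0), P2 = (4, 0), P3 = (0, 4), the weights are (0.25, 0.5, 0.25), and the color to be interpolated is the correspondingly weighted mix of the three vertex colors.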
In the embodiment of the disclosure, after the target texture map is obtained, the second model substrate is texture-filled based on the target texture map to obtain the target model substrate. The specific implementation is as follows:
first, the correspondence between the target texture map and the second model base is determined. And then, according to the target texture map, searching texture information to be filled corresponding to the second model base in the corresponding relation. And finally, filling the texture of the second model substrate according to the texture information to be filled to obtain a target model substrate.
Texture-filling the second model substrate to obtain the target model substrate gives the target model substrate a better visual effect. Specifically, fig. 5 is a schematic diagram of a model substrate provided in an embodiment of the present disclosure. The model substrate on the left side of fig. 5 is the second model substrate, and the model substrate on the right side is the target model substrate; it is obvious from the figure that the target model substrate has a better visual effect than the second model substrate.
In the embodiment of the present disclosure, the correspondence between the pixel points may be a correspondence determined by the first texture coordinate. Specifically, the generation method of the pixel point correspondence relationship may include the following steps:
First, according to the correspondence between each pixel point of the target texture map and the first texture coordinates, the first texture coordinate corresponding to each pixel point of the target texture map is determined.
And then, aiming at the first texture coordinate corresponding to each pixel point of the target texture map, determining the corresponding pixel point of each pixel point of the target texture map in the first model substrate according to the corresponding relation between the first texture coordinate and the pixel point of the first model substrate.
And finally, generating a pixel point corresponding relation according to the corresponding pixel point of each pixel point of the target texture map in the first model substrate.
In the embodiment of the present disclosure, according to the correspondence between the pixel points, a specific implementation manner for searching the texture information to be filled corresponding to the pixel point of the second model base in the target texture map is as follows:
First, the texture information corresponding to each pixel point of the target texture map is determined.
And then, according to the corresponding relation of the pixel points, determining the corresponding pixel points of the second model substrate in the target texture map.
And finally, acquiring texture information to be filled corresponding to each pixel point of the second model substrate according to the corresponding pixel point of the second model substrate in the target texture map and the texture information corresponding to each pixel point of the target texture map.
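A condensed sketch of this lookup-and-fill step, assuming the pixel point correspondence is represented as a dictionary from pixel points of the second model base to (x, y) coordinates in the target texture map (an illustrative choice, not specified by the disclosure):

```python
def lookup_texture_to_fill(base_pixel_points, pixel_corr, target_texture_map):
    """For each pixel point of the second model base, look up its corresponding
    pixel in the target texture map and collect the texture information to be
    filled, as described in the three steps above."""
    to_fill = {}
    for point in base_pixel_points:
        x, y = pixel_corr[point]                   # corresponding pixel in the map
        to_fill[point] = target_texture_map[y, x]  # texture information to fill
    return to_fill
```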
In the embodiment of the present disclosure, the above process can be applied to a plurality of target standard model bases meeting different target requirements, obtaining a plurality of corresponding target model bases. An avatar can then be generated from the standard model base and the plurality of target model bases.
As shown in fig. 6, an embodiment of the present disclosure provides an apparatus for generating a model substrate, including:
a target texture map determining unit 601, configured to determine a target texture map corresponding to a second model base by using texture information corresponding to a first model base, where the first model base is a base model preset for the second model base;
and an object model base obtaining unit 602, configured to perform texture filling on the second model base based on the object texture map, so as to obtain an object model base.
In one embodiment, the target texture map determining unit 601 may include:
the first texture map obtaining subunit is used for obtaining a first texture map corresponding to the second model substrate by using a first texture coordinate, wherein the first texture coordinate is a texture coordinate corresponding to a pixel point of the second model substrate in a texture mapping coordinate system;
the first coordinate determining subunit is used for determining a second pixel point coordinate corresponding to the texture information in the first pixel point coordinate, and the first pixel point coordinate is a coordinate corresponding to a pixel point of the first texture map in a pixel coordinate system;
the second texture map obtaining subunit is used for correspondingly endowing the texture information to the pixel points of the first texture map based on the second pixel point coordinates to obtain a second texture map corresponding to the second model substrate;
and the target texture map obtaining subunit is used for obtaining the target texture map based on the second texture map.
In one embodiment, the first coordinate determination subunit may include:
a second texture coordinate determining subunit, configured to determine a second texture coordinate based on the texture distribution corresponding to the second model base, where the second texture coordinate is a texture coordinate corresponding to a pixel point of the first model base in the texture mapping coordinate system;
and the second coordinate determination subunit is used for determining a second pixel point coordinate by using the first coordinate corresponding relation and the second coordinate corresponding relation according to the second texture coordinate, wherein the first coordinate corresponding relation is the corresponding relation between the second texture coordinate and the first texture coordinate, and the second coordinate corresponding relation is the corresponding relation between the first texture coordinate and the first pixel point coordinate.
In one embodiment, in the case where the texture information includes color information, the target texture map obtaining subunit may include:
and the interpolation processing subunit is used for carrying out color information interpolation processing on the second texture map based on the second pixel point coordinates so as to obtain the target texture map.
In one embodiment, the interpolation processing subunit may include:
a pixel point determining subunit, configured to determine a pixel point to be interpolated in the second texture map;
the gravity center coordinate determination subunit is used for determining the gravity center coordinate corresponding to the pixel point to be interpolated by utilizing the second pixel point coordinate;
the color information to be interpolated determining subunit is used for determining the color information to be interpolated corresponding to the pixel point to be interpolated by using the barycentric coordinate and the color information;
and the color interpolation subunit is used for endowing the color information to be interpolated to the pixel point to be interpolated so as to perform color information interpolation on the second texture map.
In one embodiment, the target model substrate obtaining unit 602 may include:
the corresponding relation determining subunit is used for determining the corresponding relation of pixel points, and the corresponding relation of the pixel points is the corresponding relation between the target texture map and the second model substrate;
the texture information to be filled determining subunit is used for searching the texture information to be filled corresponding to the pixel points of the second model substrate in the target texture map according to the corresponding relation of the pixel points;
and the target model substrate obtaining subunit is used for carrying out texture filling on the second model substrate according to the texture information to be filled to obtain the target model substrate.
In the technical scheme of the present disclosure, the acquisition, storage and application of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good customs.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read-Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be any of various general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 performs the respective methods and processes described above, such as the method of generating the model base. For example, in some embodiments, the method of generating the model base may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded onto and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the method of generating a model base described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g. by means of firmware) to perform the method of generating the model base.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable model-based generation apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A method of generating a model substrate, comprising:
determining a target texture map corresponding to a second model substrate by using texture information corresponding to a first model substrate, wherein the first model substrate is a substrate model preset aiming at the second model substrate;
and filling the texture of the second model substrate based on the target texture map to obtain a target model substrate.
2. The method according to claim 1, wherein the determining the target texture map corresponding to the second model base by using the texture information corresponding to the first model base comprises:
obtaining a first texture map corresponding to the second model substrate by using a first texture coordinate, wherein the first texture coordinate is a texture coordinate corresponding to a pixel point of the second model substrate in a texture mapping coordinate system;
determining a second pixel point coordinate corresponding to the texture information in a first pixel point coordinate, wherein the first pixel point coordinate is a coordinate corresponding to a pixel point of the first texture map in a pixel coordinate system;
based on the second pixel point coordinate, correspondingly endowing the texture information to the pixel point of the first texture map, and obtaining a second texture map corresponding to the second model substrate;
and obtaining the target texture map based on the second texture map.
3. The method of claim 2, wherein the determining of the coordinates of the second pixel point comprises:
determining a second texture coordinate based on the texture distribution corresponding to the second model substrate, wherein the second texture coordinate is a texture coordinate corresponding to the pixel point of the first model substrate in the texture mapping coordinate system;
and determining the second pixel point coordinate by using a first coordinate corresponding relation and a second coordinate corresponding relation aiming at the second texture coordinate, wherein the first coordinate corresponding relation is the corresponding relation between the second texture coordinate and the first texture coordinate, and the second coordinate corresponding relation is the corresponding relation between the first texture coordinate and the first pixel point coordinate.
4. The method according to claim 2 or 3, wherein in case the texture information comprises color information, said obtaining the target texture map based on the second texture map comprises:
and performing color information interpolation processing on the second texture map based on the second pixel point coordinates to obtain the target texture map.
5. The method of claim 4, wherein the interpolating color information of the second texture map based on the second pixel coordinates comprises:
determining pixel points to be interpolated in the second texture map;
determining a gravity center coordinate corresponding to the pixel point to be interpolated by using the second pixel point coordinate;
determining color information to be interpolated corresponding to the pixel points to be interpolated by using the barycentric coordinates and the color information;
and giving the color information to be interpolated to the pixel point to be interpolated so as to perform color information interpolation on the second texture map.
6. The method according to any one of claims 1-3, wherein the texture filling the second model base based on the target texture map to obtain a target model base comprises:
determining a pixel point corresponding relation, wherein the pixel point corresponding relation is the corresponding relation of the pixel points between the target texture map and the second model substrate;
searching texture information to be filled corresponding to the pixel points of the second model substrate in the target texture map according to the pixel point corresponding relation;
and filling the texture of the second model substrate according to the texture information to be filled to obtain the target model substrate.
7. An apparatus for generating a model substrate, comprising:
the target texture map determining unit is used for determining a target texture map corresponding to a second model substrate by using texture information corresponding to a first model substrate, wherein the first model substrate is a substrate model preset aiming at the second model substrate;
and the target model substrate obtaining unit is used for carrying out texture filling on the second model substrate based on the target texture map to obtain a target model substrate.
8. The apparatus of claim 7, wherein the target texture map determining unit comprises:
a first texture map obtaining subunit, configured to obtain, by using a first texture coordinate, a first texture map corresponding to the second model base, where the first texture coordinate is a texture coordinate corresponding to a pixel point of the second model base in a texture map coordinate system;
a first coordinate determining subunit, configured to determine, in a first pixel coordinate, a second pixel coordinate corresponding to the texture information, where the first pixel coordinate is a coordinate corresponding to a pixel of the first texture map in a pixel coordinate system;
a second texture map obtaining subunit, configured to correspondingly assign the texture information to the pixel point of the first texture map based on the second pixel point coordinate, so as to obtain a second texture map corresponding to the second model base;
and the target texture map obtaining subunit is used for obtaining the target texture map based on the second texture map.
9. The apparatus of claim 8, wherein the first coordinate determination subunit comprises:
a second texture coordinate determining subunit, configured to determine a second texture coordinate based on the texture distribution corresponding to the second model base, where the second texture coordinate is a texture coordinate corresponding to a pixel point of the first model base in the texture map coordinate system;
and the second coordinate determination subunit is configured to determine, for the second texture coordinate, the second pixel coordinate by using a first coordinate correspondence relationship and a second coordinate correspondence relationship, where the first coordinate correspondence relationship is a correspondence relationship between the second texture coordinate and the first texture coordinate, and the second coordinate correspondence relationship is a correspondence relationship between the first texture coordinate and the first pixel coordinate.
10. The apparatus according to claim 8 or 9, wherein in case the texture information includes color information, the target texture map obtaining subunit includes:
and the interpolation processing subunit is used for carrying out color information interpolation processing on the second texture map based on the second pixel point coordinates so as to obtain the target texture map.
11. The apparatus of claim 10, wherein the interpolation processing subunit comprises:
a pixel point determining subunit, configured to determine a pixel point to be interpolated in the second texture map;
the gravity center coordinate determining subunit is used for determining the gravity center coordinate corresponding to the pixel point to be interpolated by using the second pixel point coordinate;
a color information to be interpolated determining subunit, configured to determine, by using the barycentric coordinate and the color information, color information to be interpolated corresponding to the pixel point to be interpolated;
and the color interpolation subunit is used for endowing the color information to be interpolated to the pixel point to be interpolated so as to perform color information interpolation on the second texture map.
12. The apparatus according to any one of claims 7-9, wherein the object model substrate obtaining unit comprises:
a correspondence determining subunit, configured to determine a correspondence between pixel points, where the correspondence between pixel points is a correspondence between the target texture map and the second model base;
a texture information to be filled determining subunit, configured to search, according to the pixel point correspondence, texture information to be filled corresponding to a pixel point of the second model base in the target texture map;
and the target model substrate obtaining subunit is used for performing texture filling on the second model substrate according to the texture information to be filled to obtain the target model substrate.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 6.
15. A computer program product comprising computer programs/instructions, wherein the computer programs/instructions, when executed by a processor, implement the method of any one of claims 1 to 6.
CN202210226613.1A 2022-03-09 2022-03-09 Method and device for generating model substrate, electronic equipment and storage medium Pending CN114581586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210226613.1A CN114581586A (en) 2022-03-09 2022-03-09 Method and device for generating model substrate, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210226613.1A CN114581586A (en) 2022-03-09 2022-03-09 Method and device for generating model substrate, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114581586A true CN114581586A (en) 2022-06-03

Family

ID=81773083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210226613.1A Pending CN114581586A (en) 2022-03-09 2022-03-09 Method and device for generating model substrate, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114581586A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005034527A1 (en) * 2003-09-30 2005-04-14 Eric Belk Lange Stereoscopic imaging
JP2005332028A (en) * 2004-05-18 2005-12-02 Nippon Telegr & Teleph Corp <Ntt> Method and apparatus for generating three-dimensional graphic data, generating texture image, and coding and decoding multi-dimensional data, and program therefor
WO2018184140A1 (en) * 2017-04-04 2018-10-11 Intel Corporation Facial image replacement using 3-dimensional modelling techniques
US20210134056A1 (en) * 2018-05-31 2021-05-06 Beijing Jingdong Shangke Information Technology Co., Ltd. Image processing method and device
CN111127631A (en) * 2019-12-17 2020-05-08 深圳先进技术研究院 Single image-based three-dimensional shape and texture reconstruction method, system and storage medium
CN112785674A (en) * 2021-01-22 2021-05-11 北京百度网讯科技有限公司 Texture map generation method, rendering method, device, equipment and storage medium
CN113177879A (en) * 2021-04-30 2021-07-27 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN113144613A (en) * 2021-05-08 2021-07-23 成都乘天游互娱网络科技有限公司 Model-based volume cloud generation method
CN113643412A (en) * 2021-07-14 2021-11-12 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN113963110A (en) * 2021-10-11 2022-01-21 北京百度网讯科技有限公司 Texture map generation method and device, electronic equipment and storage medium
CN114092673A (en) * 2021-11-23 2022-02-25 北京百度网讯科技有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Hui, Xu Guang, Xie Feng: "Object Modeling and Analysis Based on Deformable Models" (基于形变模型的物体建模与分析), Chinese Journal of Computers (计算机学报), no. 06, 12 June 2001 (2001-06-12) *
Zhong Anyuan: "Computer Graphics and Image Rendering" (《计算机图形图像渲染》), Chongqing University Electronic Audio-Visual Press (重庆大学电子音像出版社), 30 October 2021, page 95 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147265A (en) * 2022-06-30 2022-10-04 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113643412B (en) Virtual image generation method and device, electronic equipment and storage medium
CN112785674B (en) Texture map generation method, rendering device, equipment and storage medium
CN112862933B (en) Method, apparatus, device and storage medium for optimizing model
CN112652057B (en) Method, device, equipment and storage medium for generating human body three-dimensional model
CN114549710A (en) Virtual image generation method and device, electronic equipment and storage medium
CN115409933B (en) Multi-style texture mapping generation method and device
CN115147265B (en) Avatar generation method, apparatus, electronic device, and storage medium
CN115222879B (en) Model face reduction processing method and device, electronic equipment and storage medium
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN114549728A (en) Training method of image processing model, image processing method, device and medium
CN115797565A (en) Three-dimensional reconstruction model training method, three-dimensional reconstruction device and electronic equipment
CN114092673B (en) Image processing method and device, electronic equipment and storage medium
CN112562043B (en) Image processing method and device and electronic equipment
CN114581586A (en) Method and device for generating model substrate, electronic equipment and storage medium
CN116524165B (en) Migration method, migration device, migration equipment and migration storage medium for three-dimensional expression model
CN113344213A (en) Knowledge distillation method, knowledge distillation device, electronic equipment and computer readable storage medium
CN116524162A (en) Three-dimensional virtual image migration method, model updating method and related equipment
CN113593046B (en) Panorama switching method and device, electronic equipment and storage medium
CN115147306A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114549785A (en) Method and device for generating model substrate, electronic equipment and storage medium
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN115953553B (en) Avatar generation method, apparatus, electronic device, and storage medium
CN113610992B (en) Bone driving coefficient determining method and device, electronic equipment and readable storage medium
CN114037814B (en) Data processing method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination