CN115661322A - Method and device for generating face texture image - Google Patents


Info

Publication number
CN115661322A
Authority
CN
China
Prior art keywords
texture image
target
image
face
generating
Prior art date
Legal status
Granted
Application number
CN202211180085.7A
Other languages
Chinese (zh)
Other versions
CN115661322B (en)
Inventor
王迪
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211180085.7A priority Critical patent/CN115661322B/en
Publication of CN115661322A publication Critical patent/CN115661322A/en
Application granted granted Critical
Publication of CN115661322B publication Critical patent/CN115661322B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Generation (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method and a device for generating a face texture image, an electronic device, and a readable storage medium, relating to the field of artificial intelligence, specifically to the fields of augmented reality, virtual reality, computer vision, deep learning, and the like, and applicable to scenarios such as the metaverse and virtual digital humans. The method for generating a face texture image comprises the following steps: generating an initial texture image according to a face image; in a case where the background color in the initial texture image meets a preset requirement, determining a target area corresponding to non-face parts in the initial texture image; and filling the target area according to a target color to generate a target texture image. The generation quality of the target texture image can thereby be improved.

Description

Method and device for generating face texture image
Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly to the fields of augmented reality, virtual reality, computer vision, deep learning, and the like, and can be applied to scenarios such as the metaverse and virtual digital humans. A method and a device for generating a face texture image, an electronic device, and a readable storage medium are provided.
Background
Reconstructing a three-dimensional face image from a two-dimensional face image is currently a common approach to three-dimensional face reconstruction. The reconstruction requires both a three-dimensional face model and a face texture image, so the quality of the face texture image affects the quality of the reconstructed three-dimensional face image.
Disclosure of Invention
According to a first aspect of the present disclosure, a method for generating a face texture image is provided, including: generating an initial texture image according to the face image; under the condition that the background color in the initial texture image meets the preset requirement, determining a target area corresponding to a non-human face part in the initial texture image; and filling the target area according to the target color to generate a target texture image.
According to a second aspect of the present disclosure, there is provided a device for generating a face texture image, comprising: the first generating unit is used for generating an initial texture image according to the face image; the processing unit is used for determining a target area corresponding to a non-human face part in the initial texture image under the condition that the background color in the initial texture image meets a preset requirement; and the second generating unit is used for filling the target area according to the target color to generate a target texture image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method as described above.
According to a fifth aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method as described above.
According to the technical solution of the present disclosure, the target texture image is generated based on an initial texture image that contains no background color, and the target texture image is prevented from containing occlusions over non-face parts, so that the generation quality of the target texture image is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
fig. 4 is a block diagram of an electronic device for implementing a method for generating a face texture image according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. As shown in fig. 1, the method for generating a face texture image of this embodiment specifically includes the following steps:
s101, generating an initial texture image according to the face image;
s102, under the condition that the background color in the initial texture image meets the preset requirement, determining a target area corresponding to a non-human face part in the initial texture image;
s103, filling the target area according to the target color to generate a target texture image.
According to the method for generating the face texture image, the initial texture image is generated according to the face image, then the target area corresponding to the non-face part in the initial texture image is determined under the condition that the background color in the initial texture image meets the preset requirement, and finally the target area is filled according to the target color to generate the target texture image.
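Steps S101 to S103 can be sketched as a minimal pipeline. The sketch below is illustrative only and assumes NumPy image arrays; the stub functions (`generate_initial_texture`, `background_meets_requirement`, `non_face_region`, `fill_region`) are hypothetical placeholders standing in for a real texture generator, the preset-requirement check, and a face-segmentation model, not names from the disclosure:

```python
from typing import Optional

import numpy as np


def generate_face_texture(face_image: np.ndarray) -> Optional[np.ndarray]:
    """S101 -> S102 -> S103: generate, check the background color, fill."""
    initial = generate_initial_texture(face_image)     # S101
    if not background_meets_requirement(initial):      # S102 pre-check
        return None                                    # discard non-frontal inputs
    region = non_face_region(initial)                  # S102: non-face target area
    return fill_region(initial, region)                # S103: fill with target color


# Hypothetical stubs so the skeleton runs end to end.
def generate_initial_texture(img: np.ndarray) -> np.ndarray:
    return img.copy()                                  # real: texture basis / unwrapping


def background_meets_requirement(tex: np.ndarray) -> bool:
    black = np.sum(np.all(tex == 0, axis=-1))          # count pure-black pixels
    return black < tex.shape[0] * tex.shape[1] // 10   # assumed threshold


def non_face_region(tex: np.ndarray) -> np.ndarray:
    return np.zeros(tex.shape[:2], dtype=bool)         # real: face segmentation


def fill_region(tex: np.ndarray, region: np.ndarray,
                color=(180, 140, 120)) -> np.ndarray:
    out = tex.copy()
    out[region] = color                                # cover occlusions
    return out
```

A texture passing the background check is filled and returned; a texture dominated by black background (a non-frontal input) is discarded rather than processed further.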
In the present embodiment, the texture image is an image reflecting texture features of the face of the target object (for example, facial features, skin color, wrinkles, and the like).
In this embodiment, when S101 is executed to generate an initial texture image from a face image, the face image (a two-dimensional face image) is first acquired, and then an initial texture image corresponding to the face image is generated by an existing texture image generation method, for example, a method based on a texture basis or a method based on unwrapping the original image.
In this embodiment, when S101 is executed to acquire a face image, an image captured in real time at the input terminal may be used as the face image, or an image already stored at the input terminal may be used.
After S101 is executed to generate an initial texture image according to a human face image, S102 is executed to determine a target area corresponding to a non-human face part in the initial texture image under the condition that the background color in the initial texture image meets a preset requirement; the background color in this embodiment may be black.
In this embodiment, when S102 is executed to determine whether the background color in the initial texture image meets the preset requirement, the pixel values of the pixels in the initial texture image may first be obtained; if the number of pixels whose pixel value is 0 is smaller than a preset number threshold (that is, the initial texture image contains little or no background color), the background color in the initial texture image is determined to meet the preset requirement; otherwise, it is determined not to meet the preset requirement.
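This check can be written in a few lines. A hedged sketch follows; the 2% black-pixel fraction is an assumed value, since the disclosure only requires the count to be below a preset threshold:

```python
import numpy as np


def is_standard_frontal(initial_texture: np.ndarray,
                        max_black_fraction: float = 0.02) -> bool:
    """Treat the texture as coming from a standard frontal face when the
    fraction of pure-black (pixel value 0) background pixels is below a
    preset threshold.  The 2% figure is an illustrative assumption."""
    h, w = initial_texture.shape[:2]
    black = np.sum(np.all(initial_texture == 0, axis=-1))  # black-pixel count
    return black / (h * w) < max_black_fraction
```

A texture with large black regions (as produced by a tilted or non-frontal face) fails this test and is discarded.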
If S102 determines that the background color in the initial texture image does not meet the preset requirement, this indicates that the face of the target object in the face image is not a frontal face, or is a non-standard frontal face (for example, a face with the head tilted up or down); in that case, affected by the face pose, the quality of the target texture image generated from the face image would be low, which in turn affects three-dimensional face reconstruction or the training of a texture image model.
Therefore, by determining from the background color in the initial texture image whether the face image is a standard frontal-face image, this embodiment avoids mistakenly generating a target texture image from a non-frontal or non-standard frontal-face image under the influence of the face pose, thereby improving the generation accuracy of the target texture image.
In addition, in this embodiment, when S102 is executed to determine whether the background color in the initial texture image meets the preset requirement, the initial texture image may be input into a pre-trained discrimination model, and whether the background color in the initial texture image meets the preset requirement is determined according to a discrimination result output by the discrimination model (for example, when the discrimination result is 1, it is determined that the background color in the initial texture image meets the preset requirement, and when the discrimination result is 0, it is determined that the background color in the initial texture image does not meet the preset requirement).
In this embodiment, the target area corresponding to non-face parts determined in S102 is specifically the region in the initial texture image occupied by occlusions of non-face parts, such as hair and glasses.
In this embodiment, when S102 is executed to determine the target area corresponding to non-face parts in the initial texture image, an existing face segmentation algorithm may be used to take the region where non-face parts such as hair and glasses are located in the initial texture image as the target area.
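A production implementation would use a trained face-segmentation (face-parsing) model here. As a crude, purely illustrative stand-in, the sketch below flags pixels whose color is far from an estimated skin color as the non-face target area; the `tolerance` value is an assumption, and the function only illustrates producing a boolean mask as the "target area":

```python
import numpy as np


def non_face_region(texture: np.ndarray,
                    skin_color: np.ndarray,
                    tolerance: float = 60.0) -> np.ndarray:
    """Stand-in for a face-segmentation algorithm: mark pixels whose color is
    far from the estimated skin color as non-face (hair, glasses, and other
    occlusions).  Returns a boolean mask over the texture."""
    dist = np.linalg.norm(texture.astype(float) - skin_color.astype(float),
                          axis=-1)                     # per-pixel color distance
    return dist > tolerance
```

The resulting mask plays the role of the target area passed to the filling step S103.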
In addition, if the embodiment executes S102 to determine that the background color in the initial texture image does not satisfy the preset requirement (i.e., the initial texture image includes a large amount of background colors), the initial texture image may be discarded, and the subsequent processing is not performed.
After the step S102 is executed to determine the target region corresponding to the non-human face part in the initial texture image, the step S103 is executed to fill the target region according to the target color, and generate the target texture image.
In this embodiment, when S103 is executed to fill the target area according to the target color, an optional implementation is as follows: determining an extraction area corresponding to the face part in the initial texture image, where the extraction area in this embodiment is the region where the facial features and/or skin are located; determining the target color according to the extraction area, where the target color obtained in this embodiment may be the skin color of the target object in the face image; and filling the target color into a layer above the target area.
In this embodiment, when S103 is executed to determine the target color according to the extraction area, an optional implementation is: acquiring the pixel values (for example, RGB values) of the pixels contained in the extraction area, and taking the color corresponding to the average of those pixel values as the target color.
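This averaging step can be written directly (an illustrative sketch assuming a NumPy RGB image and a boolean mask for the extraction area):

```python
import numpy as np


def target_color_from_extraction(texture: np.ndarray,
                                 extraction_mask: np.ndarray) -> np.ndarray:
    """Average the RGB values of the pixels inside the extraction area (the
    facial-feature/skin region) and take the mean color as the target color."""
    pixels = texture[extraction_mask]          # (N, 3) RGB values in the area
    return np.round(pixels.mean(axis=0)).astype(np.uint8)
```

The returned color approximates the skin color of the target object and is then used to fill the target area.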
That is to say, in this embodiment the target color is obtained from the face part of the initial texture image and then filled into the layer above the target area, so that the non-face parts are covered by the target color and the generated target texture image no longer contains them, which improves the generation quality of the target texture image; moreover, because the target color is determined from the face part of the initial texture image, the consistency between the color above the target area and the color of the face part is improved.
In this embodiment, when S103 is executed to fill the target region according to the target color, a preset color may be further obtained as the target color, and then the target region is filled according to the preset color.
That is to say, filling the target area according to the target color covers the non-face parts with the color of the face part or a preset color, so that the target area is transformed into that color, thereby removing the non-face parts from the initial texture image.
In order to further improve the generation quality of the target texture image, when S103 is executed in this embodiment to fill the target area according to the target color and generate the target texture image, an optional implementation is: filling the target area according to the target color to obtain a texture image to be optimized; and processing the texture image to be optimized by a preset optimization method to generate the target texture image. The preset optimization method in this embodiment may be, for example, processing the filling boundary or beautifying the texture image to be optimized.
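As one assumed example of such an optimization, the sketch below fills the target area and then softens the filling boundary with a small box blur applied only in a band around the region edge; the band width and blur kernel are illustrative choices, not values from the disclosure:

```python
import numpy as np


def _box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Naive k x k box blur with replicated borders (float image)."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)


def _dilate(mask: np.ndarray, it: int = 1) -> np.ndarray:
    """4-neighborhood binary dilation, repeated `it` times."""
    for _ in range(it):
        p = np.pad(mask, 1)
        mask = (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2]
                | p[1:-1, 2:] | mask)
    return mask


def fill_and_smooth(texture: np.ndarray, region: np.ndarray,
                    color: np.ndarray, band_width: int = 2) -> np.ndarray:
    """Fill the target region with the target color, then replace pixels in a
    narrow band around the region edge with blurred values to soften the
    filling boundary (the 'texture image to be optimized' step)."""
    out = texture.astype(float).copy()
    out[region] = color
    blurred = _box_blur(out, k=2 * band_width + 1)
    eroded = ~_dilate(~region, band_width)     # region shrunk by band_width
    band = _dilate(region, band_width) & ~eroded
    out[band] = blurred[band]                  # feather only the boundary band
    return np.round(out).astype(np.uint8)
```

Interior fill pixels and far-away face pixels are left untouched; only the seam between them is blended.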
In this embodiment, after the target texture image is generated in S103, a three-dimensional face may be reconstructed from the target texture image and a three-dimensional face model; specifically, the target texture image is applied to the three-dimensional face model as a texture map, thereby generating a three-dimensional face image.
Therefore, after executing S103 to generate the target texture image, this embodiment may further include the following: determining the type of occlusion of the non-face part, where the occlusion type may be glasses, hair, and the like; obtaining a modeling result corresponding to the non-face part by using a modeling method corresponding to the occlusion type, such as a modeling result for glasses or a modeling result for hair; and generating a three-dimensional face image from the obtained modeling result, the target texture image, and the three-dimensional face model.
When the three-dimensional face image is generated according to the obtained modeling result, the target texture image and the three-dimensional face model, the target texture image may be attached to the three-dimensional face model, and then the modeling result is added to the corresponding position on the three-dimensional face model covered with the target texture image, so as to generate the final three-dimensional face image.
That is to say, the present embodiment can separately model the non-face region and the face region, so that the non-face region in the initial texture image does not affect the generation of the target texture image, thereby improving the quality of the generated three-dimensional face image.
In addition, after executing S103 to generate the target texture image, this embodiment may further include the following: inputting the face image into a neural network model to obtain a predicted texture image output by the neural network model, where the neural network model in this embodiment may be a pix2pix model; and adjusting the parameters of the neural network model according to a loss value computed from the predicted texture image and the target texture image, to obtain a texture image generation model.
The texture image generation model obtained by training in this embodiment can generate a corresponding texture image from an input face image; because the high-quality target texture image is used as the ground truth to train the neural network model, the training effect of the model is improved, and the trained texture image generation model can generate high-quality texture images.
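The predict, compute-loss, update-parameters loop can be illustrated schematically. The sketch below substitutes a toy one-parameter linear "generator" and an L1 loss for a real pix2pix model (which is a conditional GAN trained with an adversarial plus reconstruction objective); it shows only the mechanics of adjusting parameters from the loss between the predicted texture and the target (ground-truth) texture:

```python
import numpy as np


def l1_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean absolute error between predicted and target textures."""
    return float(np.abs(pred - target).mean())


def train_step(w: float, face_image: np.ndarray,
               target_texture: np.ndarray, lr: float = 0.01):
    """One parameter update: predict a texture, compute the L1 loss against
    the ground-truth target texture, and step the parameter down the
    gradient.  `w` stands in for the generator's weights."""
    pred = w * face_image                               # toy "generator"
    loss = l1_loss(pred, target_texture)
    grad = float((np.sign(pred - target_texture) * face_image).mean())  # dL/dw
    return w - lr * grad, loss
```

Repeating this step drives the predicted texture toward the high-quality target texture, which is the role the generated target texture image plays as training ground truth.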
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure. As shown in fig. 2, the apparatus 200 for generating a face texture image according to the present embodiment includes:
a first generating unit 201, configured to generate an initial texture image according to the face image;
the processing unit 202 is configured to determine a target region corresponding to a non-human face part in the initial texture image when it is determined that a background color in the initial texture image meets a preset requirement;
the second generating unit 203 is configured to fill the target area according to the target color, and generate a target texture image.
When generating an initial texture image from a face image, the first generation unit 201 first acquires the face image (the face image is a two-dimensional face image), and then generates an initial texture image corresponding to the face image by using an existing texture image generation method, for example, a generation method based on a texture base or a generation method based on original image expansion.
When the first generating unit 201 acquires a face image, an image captured in real time at the input terminal may be used as the face image, or an image stored at the input terminal may be used as the face image.
After the initial texture image is generated by the first generating unit 201 according to the face image, the processing unit 202 determines the target region corresponding to the non-face part in the initial texture image when determining that the background color in the initial texture image meets the preset requirement.
When determining that the background color in the initial texture image meets the preset requirement, the processing unit 202 may first obtain a pixel value of a pixel included in the initial texture image, and then determine that the background color in the initial texture image meets the preset requirement under the condition that the number of pixels of which the pixel value is 0 is smaller than a preset number threshold, otherwise determine that the background color in the initial texture image does not meet the preset requirement.
If the processing unit 202 determines that the background color in the initial texture image does not meet the preset requirement, this indicates that the face of the target object in the face image is not a frontal face, or is a non-standard frontal face; in that case, affected by the face pose, the quality of the target texture image generated from the face image would be low, which in turn affects three-dimensional face reconstruction or the training of a texture image model.
Therefore, the processing unit 202 determines whether the face image is a standard front-face image according to the background color in the initial texture image, so that the problem that the target texture image is generated by mistake according to the non-front-face image or the non-standard front-face image due to the influence of the face pose can be avoided, and the generation accuracy of the target texture image is improved.
In addition, when determining whether the background color in the initial texture image meets the preset requirement, the processing unit 202 may further input the initial texture image into a pre-trained discrimination model, and determine whether the background color in the initial texture image meets the preset requirement according to a discrimination result output by the discrimination model (for example, when the discrimination result is 1, it is determined that the background color in the initial texture image meets the preset requirement; when the discrimination result is 0, it is determined that the background color in the initial texture image does not meet the preset requirement).
The target area corresponding to non-face parts determined by the processing unit 202 is specifically the region in the initial texture image occupied by occlusions of non-face parts, such as hair and glasses.
When determining the target region corresponding to the non-human face part in the initial texture image, the processing unit 202 may use an existing human face segmentation algorithm to take the region where the non-human face part, such as hair and glasses, is located in the initial texture image as the target region.
In addition, if the processing unit 202 determines that the background color in the initial texture image does not meet the preset requirement, the initial texture image may be discarded and no further processing is performed.
After the processing unit 202 determines the target region corresponding to the non-human face part in the initial texture image, the second generating unit 203 fills the target region according to the target color to generate the target texture image.
When the second generating unit 203 fills the target area according to the target color, the optional implementation manners that can be adopted are as follows: determining an extraction area corresponding to a human face part in the initial texture image; determining a target color according to the extraction area; the target color is filled into the upper layer of the target area.
When the second generating unit 203 determines the target color according to the extraction area, the following optional implementation manners may be adopted: acquiring pixel values of pixels contained in the extraction area; the color corresponding to the average value of the pixel values is set as the target color.
That is to say, the second generating unit 203 obtains the target color from the face part of the initial texture image and fills it into the layer above the target area, so that the non-face parts are covered by the target color and the generated target texture image no longer contains them, which improves the generation quality of the target texture image; determining the target color from the face part also improves the consistency between the color above the target area and the color of the face part.
The second generating unit 203 may also acquire a preset color as the target color and fill the target area with that preset color.
That is, when the second generating unit 203 fills the target area according to the target color, the non-face parts are covered with the color of the face part or a preset color, so that the target area is transformed into that color, thereby removing the non-face parts from the initial texture image.
When the second generating unit 203 fills the target region according to the target color and generates the target texture image, the optional implementation manners that can be adopted are as follows: filling a target area according to the target color to obtain a texture image to be optimized; and processing the obtained texture image to be optimized by a preset optimization method to generate a target texture image.
The apparatus 200 for generating a face texture image in this embodiment may further include a reconstruction unit 204, configured to perform three-dimensional face reconstruction according to the target texture image and a three-dimensional face model after the second generating unit 203 generates the target texture image; specifically, the target texture image is applied to the three-dimensional face model as a texture map, thereby generating a three-dimensional face image.
After the second generating unit 203 generates the target texture image, the reconstruction unit 204 may further perform the following: determining the type of occlusion of the non-face part; obtaining a modeling result corresponding to the non-face part by using a modeling method corresponding to the occlusion type; and generating a three-dimensional face image from the obtained modeling result, the target texture image, and the three-dimensional face model.
When generating the three-dimensional face image from the obtained modeling result, the target texture image, and the three-dimensional face model, the reconstruction unit 204 may attach the target texture image to the three-dimensional face model and then add the modeling result to the corresponding position on the model covered with the target texture image, so as to generate the final three-dimensional face image.
That is to say, the reconstruction unit 204 can model the non-face region and the face region separately, so that the non-face region in the initial texture image does not affect the generation of the target texture image, thereby improving the quality of the generated three-dimensional face image.
In addition, the apparatus 200 for generating a face texture image according to the present embodiment may further include a training unit 205, configured to execute the following steps after the second generating unit 203 generates the target texture image: inputting the face image into a neural network model to obtain a predicted texture image output by the neural network model; and adjusting parameters of the neural network model according to the loss function values obtained by the predicted texture image and the target texture image to obtain a texture image generation model.
The texture image generation model obtained through training by the training unit 205 can generate a corresponding texture image from an input face image; because the high-quality target texture image is used as the ground truth to train the neural network model, the training effect of the model is improved, and the trained texture image generation model can generate high-quality texture images.
Fig. 3 is a schematic diagram according to a third embodiment of the present disclosure, illustrating an initial texture image generated from a face image. As can be seen from fig. 3, there is a large amount of background color at the upper left and upper right of the initial texture image; therefore, the background color in the initial texture image of fig. 3 does not meet the preset requirement, indicating that the face image from which it was generated is not a standard frontal-face image: specifically, the head in the face image is tilted up.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 4 is a block diagram of an electronic device for the method of generating a face texture image according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the device 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. Various programs and data required for the operation of the device 400 can also be stored in the RAM 403. The computing unit 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
A number of components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 401 performs the respective methods and processes described above, such as the method for generating a face texture image. For example, in some embodiments, the method for generating a face texture image may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 408.
In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the above-described method for generating a face texture image may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the method for generating a face texture image by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable face texture image generation apparatus, such that the program codes, when executed by the processor or controller, cause the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in the cloud computing service system and overcomes the defects of difficult management and weak service scalability in traditional physical host and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, which is not limited herein, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A method for generating a face texture image, comprising:
generating an initial texture image according to a face image;
in a case where a background color in the initial texture image meets a preset requirement, determining a target area corresponding to a non-face part in the initial texture image; and
filling the target area according to a target color to generate a target texture image.
2. The method of claim 1, wherein the determining that the background color in the initial texture image meets the preset requirement comprises:
acquiring pixel values of pixels included in the initial texture image; and
determining, in a case where the number of pixels whose pixel values are 0 is less than a preset number threshold, that the background color in the initial texture image meets the preset requirement.
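The check in claim 2 can be sketched as follows. This is an illustrative NumPy implementation, not the patented one; the default threshold value and the assumption that background pixels are pure black (every channel 0) are mine, since the patent does not disclose them.

```python
import numpy as np

def background_meets_requirement(initial_texture: np.ndarray,
                                 count_threshold: int = 1000) -> bool:
    """Claim 2's condition: the number of pixels whose pixel value is 0
    must be below a preset threshold (threshold value assumed here)."""
    if initial_texture.ndim == 3:
        # A pixel counts as a 0-valued pixel only when every channel is 0.
        zero_pixels = np.all(initial_texture == 0, axis=-1)
    else:
        zero_pixels = initial_texture == 0
    return int(zero_pixels.sum()) < count_threshold
```

Intuitively, too many zero-valued pixels suggest the unwrapping produced large unfilled regions, in which case a constant-color fill would be unreliable.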
3. The method of any of claims 1-2, wherein the filling the target area according to the target color comprises:
determining an extraction area corresponding to a face part in the initial texture image;
determining the target color according to the extraction area; and
filling the target color into an upper layer of the target area.
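A minimal sketch of claim 3's fill step, assuming the extraction area is given as a boolean face mask and the target color is simply the mean color of that area; the patent does not specify how the color is derived from the extraction area, so the heuristic is an assumption:

```python
import numpy as np

def fill_non_face_region(initial_texture: np.ndarray,
                         face_mask: np.ndarray) -> np.ndarray:
    """Derive a target color from the face (extraction) area and fill it
    over the non-face (target) area. Mean-color heuristic is assumed."""
    # Target color determined from the extraction area (claim 3, step 2).
    target_color = initial_texture[face_mask].mean(axis=0)
    filled = initial_texture.astype(float)  # astype makes a copy
    # Fill the target color over the non-face area (claim 3, step 3).
    filled[~face_mask] = target_color
    return filled.astype(initial_texture.dtype)
```

Using the face's own mean color makes the filled background blend with the skin tone rather than standing out as a flat gray or black.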
4. The method according to any one of claims 1-3, wherein the filling the target area according to the target color to generate the target texture image comprises:
filling the target area according to the target color to obtain a texture image to be optimized; and
processing the texture image to be optimized with a preset optimization method to generate the target texture image.
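Claim 4 leaves the "preset optimization method" unspecified. One plausible stand-in is a small blur that softens the seam between the constant-color fill and the face region; the box blur below is purely illustrative and not the patented method:

```python
import numpy as np

def smooth_seams(texture: np.ndarray, iterations: int = 2) -> np.ndarray:
    """Average each pixel with its four neighbors a few times, softening
    hard edges left by the fill. np.roll wraps at the border, which is
    acceptable for a UV texture that tiles."""
    out = texture.astype(float)
    for _ in range(iterations):
        out = (np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0)
               + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)
               + out) / 5.0
    return out.astype(texture.dtype)
```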
5. The method of any of claims 1-4, further comprising:
and after the target texture image is generated, generating a three-dimensional face image according to the target texture image and the three-dimensional face model.
6. The method of claim 5, wherein the generating a three-dimensional face image from the target texture image and a three-dimensional face model comprises:
determining a type of an occluder at the non-face part;
obtaining a modeling result corresponding to the non-face part by using a modeling method corresponding to the type of the occluder;
and generating the three-dimensional face image according to the modeling result, the target texture image and the three-dimensional face model.
7. The method of any of claims 1-4, further comprising:
after the target texture image is generated, inputting the face image into a neural network model to obtain a predicted texture image output by the neural network model; and
adjusting parameters of the neural network model according to a loss function value obtained from the predicted texture image and the target texture image, to obtain a texture image generation model,
wherein the texture image generation model is used for generating a texture image corresponding to an input face image.
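Claim 7 distills the pipeline into a network: the generated target texture supervises a model that maps a face image directly to a texture. As a schematic, the step below replaces the neural network with a linear model and uses a mean-squared-error loss in NumPy; every name, shape, and the learning rate are illustrative, not disclosed specifics.

```python
import numpy as np

def distillation_step(weights, face_feats, target_texture, lr=0.05):
    """One parameter update: predict a texture from face features, compare
    with the target texture produced by the fill pipeline, and take a
    gradient step. Returns the updated weights and the current loss."""
    predicted = face_feats @ weights               # "predicted texture image"
    residual = predicted - target_texture
    loss = float((residual ** 2).mean())           # loss function value (MSE)
    grad = 2.0 * face_feats.T @ residual / residual.size
    return weights - lr * grad, loss
```

Iterating this step drives the model's output toward the target textures, yielding the "texture image generation model" of the claim.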
8. An apparatus for generating a face texture image, comprising:
the first generation unit is used for generating an initial texture image according to the face image;
the processing unit is used for determining a target area corresponding to a non-face part in the initial texture image in a case where the background color in the initial texture image meets a preset requirement;
and the second generating unit is used for filling the target area according to the target color to generate a target texture image.
9. The apparatus according to claim 8, wherein the processing unit, when determining that the background color in the initial texture image satisfies a preset requirement, specifically performs:
acquiring pixel values of pixels included in the initial texture image;
and determining, in a case where the number of pixels whose pixel values are 0 is less than a preset number threshold, that the background color in the initial texture image meets the preset requirement.
10. The apparatus according to any one of claims 8-9, wherein the second generating unit, when filling the target area according to a target color, specifically performs:
determining an extraction area corresponding to a face part in the initial texture image;
determining a target color according to the extraction area;
and filling the target color into an upper layer of the target area.
11. The apparatus according to any one of claims 8 to 10, wherein the second generating unit, when filling the target area according to the target color to generate the target texture image, specifically performs:
filling the target area according to the target color to obtain a texture image to be optimized;
and processing the texture image to be optimized by a preset optimization method to generate the target texture image.
12. The apparatus according to any of claims 8-11, further comprising a reconstruction unit for performing:
and after the second generation unit generates the target texture image, generating a three-dimensional face image according to the target texture image and the three-dimensional face model.
13. The apparatus according to claim 12, wherein the reconstruction unit, when generating a three-dimensional face image according to the target texture image and a three-dimensional face model, specifically performs:
determining a type of an occluder at the non-face part;
obtaining a modeling result corresponding to the non-face part by using a modeling method corresponding to the type of the occluder;
and generating the three-dimensional face image according to the modeling result, the target texture image and the three-dimensional face model.
14. The apparatus according to any of claims 8-11, further comprising a training unit for performing:
after the second generation unit generates the target texture image, inputting the face image into a neural network model to obtain a predicted texture image output by the neural network model; and
adjusting parameters of the neural network model according to a loss function value obtained from the predicted texture image and the target texture image, to obtain a texture image generation model,
wherein the texture image generation model is used for generating a texture image corresponding to an input face image.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202211180085.7A 2022-09-26 2022-09-26 Face texture image generation method and device Active CN115661322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211180085.7A CN115661322B (en) 2022-09-26 2022-09-26 Face texture image generation method and device

Publications (2)

Publication Number Publication Date
CN115661322A true CN115661322A (en) 2023-01-31
CN115661322B CN115661322B (en) 2023-09-22

Family

ID=84985363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211180085.7A Active CN115661322B (en) 2022-09-26 2022-09-26 Face texture image generation method and device

Country Status (1)

Country Link
CN (1) CN115661322B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255831A (en) * 2018-09-21 2019-01-22 南京大学 The method that single-view face three-dimensional reconstruction and texture based on multi-task learning generate
CN111127631A (en) * 2019-12-17 2020-05-08 深圳先进技术研究院 Single image-based three-dimensional shape and texture reconstruction method, system and storage medium
CN112669447A (en) * 2020-12-30 2021-04-16 网易(杭州)网络有限公司 Model head portrait creating method and device, electronic equipment and storage medium
CN113095149A (en) * 2021-03-18 2021-07-09 西北工业大学 Full-head texture network structure based on single face image and generation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563432A (en) * 2023-05-15 2023-08-08 摩尔线程智能科技(北京)有限责任公司 Three-dimensional digital person generating method and device, electronic equipment and storage medium
CN116563432B (en) * 2023-05-15 2024-02-06 摩尔线程智能科技(北京)有限责任公司 Three-dimensional digital person generating method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115661322B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN113643412B (en) Virtual image generation method and device, electronic equipment and storage medium
CN113327278B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN114187633B (en) Image processing method and device, and training method and device for image generation model
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN114187624B (en) Image generation method, device, electronic equipment and storage medium
EP3876197A2 (en) Portrait extracting method and apparatus, electronic device and storage medium
CN115330940B (en) Three-dimensional reconstruction method, device, equipment and medium
CN113553961B (en) Training method and device of face recognition model, electronic equipment and storage medium
US20230047748A1 (en) Method of fusing image, and method of training image fusion model
CN114067051A (en) Three-dimensional reconstruction processing method, device, electronic device and storage medium
CN113379877A (en) Face video generation method and device, electronic equipment and storage medium
CN115661322B (en) Face texture image generation method and device
CN114049290A (en) Image processing method, device, equipment and storage medium
CN112884889A (en) Model training method, model training device, human head reconstruction method, human head reconstruction device, human head reconstruction equipment and storage medium
CN114120413A (en) Model training method, image synthesis method, device, equipment and program product
CN113962845A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN113177466A (en) Identity recognition method and device based on face image, electronic equipment and medium
CN113269719A (en) Model training method, image processing method, device, equipment and storage medium
EP4123605A2 (en) Method of transferring image, and method and apparatus of training image transfer model
US20230177756A1 (en) Method of generating 3d video, method of training model, electronic device, and storage medium
US20220351455A1 (en) Method of processing image, electronic device, and storage medium
CN114092616B (en) Rendering method, rendering device, electronic equipment and storage medium
CN115311403A (en) Deep learning network training method, virtual image generation method and device
CN115018734A (en) Video restoration method and training method and device of video restoration model
CN114648601A (en) Virtual image generation method, electronic device, program product and user terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant