CN111445410A - Texture enhancement method, device and equipment based on texture image and storage medium


Info

Publication number
CN111445410A
Authority
CN
China
Prior art keywords
texture image
texture
image
face
fitting
Prior art date
Legal status
Granted
Application number
CN202010224186.4A
Other languages
Chinese (zh)
Other versions
CN111445410B (en)
Inventor
陈雅静
沈雅欣
者雪飞
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010224186.4A
Publication of CN111445410A
Application granted
Publication of CN111445410B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T5/70
                • G06T15/00 3D [Three Dimensional] image rendering
                    • G06T15/04 Texture mapping
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10004 Still image; Photographic image
                        • G06T2207/10012 Stereo images
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30196 Human being; Person
                            • G06T2207/30201 Face

Abstract

The application relates to a texture enhancement method and apparatus based on texture images, a computer device and a storage medium. The method comprises the following steps: acquiring a source texture image to be processed; fitting the source texture image based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding fitted texture image, the principal components being obtained by performing principal component analysis on the sample texture images; smoothing the fitted texture image to obtain a corresponding smoothed texture image, the smoothing serving to suppress noise in the fitted texture image; and performing texture enhancement on the smoothed texture image through the generator of a generative adversarial network, and outputting a target texture image corresponding to the source texture image. The method can greatly improve the effect of texture enhancement.

Description

Texture enhancement method, device and equipment based on texture image and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a texture enhancement method and apparatus based on a texture image, a computer device, and a storage medium.
Background
With the development of computer technology, great changes have come to people's work and life. For example, animation and game production used to be based on flat, two-dimensional characters; with advances in technology, more and more scenes now support three-dimensional characters, which correspondingly requires a large number of high-definition texture images, such as high-definition face texture images.
A traditional method for generating a high-definition facial texture image is adaptive brightness recovery: the brightness of the facial image is equalized, and anisotropic histogram stretching is applied to minimize the intra-class distance between face and background while maximizing the spread of the facial texture distribution, thereby enhancing facial texture features. This traditional approach only suits cases where image quality is degraded by uneven illumination; it cannot repair conditions such as image noise or image blur, and it struggles to reconstruct fine texture details such as pores and hair, so its texture enhancement effect is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a texture enhancement method, apparatus, computer device and storage medium based on a texture image, which can generate a high-quality texture image.
A method of texture enhancement based on a texture image, the method comprising:
acquiring a source texture image to be processed;
fitting the source texture image based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding fitted texture image, the principal components being obtained by performing principal component analysis on the sample texture images;
smoothing the fitted texture image to obtain a corresponding smoothed texture image, the smoothing serving to suppress noise in the fitted texture image;
and performing texture enhancement on the smoothed texture image through the generator of a generative adversarial network, and outputting a target texture image corresponding to the source texture image.
A texture enhancement device based on a texture image, the device comprising:
the acquisition module is used for acquiring a source texture image to be processed;
the fitting module is used for fitting the source texture image based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding fitted texture image, the principal components being obtained by performing principal component analysis on the sample texture images;
the smoothing module is used for smoothing the fitted texture image to obtain a corresponding smoothed texture image, the smoothing serving to suppress noise in the fitted texture image;
and the texture enhancement module is used for performing texture enhancement on the smoothed texture image through the generator of a generative adversarial network and outputting a target texture image corresponding to the source texture image.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of:
acquiring a source texture image to be processed;
fitting the source texture image based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding fitted texture image, the principal components being obtained by performing principal component analysis on the sample texture images;
smoothing the fitted texture image to obtain a corresponding smoothed texture image, the smoothing serving to suppress noise in the fitted texture image;
and performing texture enhancement on the smoothed texture image through the generator of a generative adversarial network, and outputting a target texture image corresponding to the source texture image.
A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of:
acquiring a source texture image to be processed;
fitting the source texture image based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding fitted texture image, the principal components being obtained by performing principal component analysis on the sample texture images;
smoothing the fitted texture image to obtain a corresponding smoothed texture image, the smoothing serving to suppress noise in the fitted texture image;
and performing texture enhancement on the smoothed texture image through the generator of a generative adversarial network, and outputting a target texture image corresponding to the source texture image.
According to the texture enhancement method and apparatus based on texture images, the computer device and the storage medium, the source texture image is fitted based on the principal components of the sample texture images corresponding to it, yielding a corresponding fitted texture image. The principal components are obtained by performing principal component analysis on a series of sample texture images. In this way, the fitted texture image discards part of the image noise, such as shadow and illumination information, while retaining most of the valid image information. The fitted texture image is then smoothed to obtain a smoothed texture image in which high-frequency noise such as spots, pigmentation or moles is suppressed. Texture enhancement is then performed on the smoothed texture image by the generator of a trained generative adversarial network, which reconstructs realistic pore and hair textures while preserving the original low-definition texture, producing the target texture image. A low-quality source texture image can thus be converted, at unchanged resolution, into a target texture image with high definition and rich detail; the generated target texture image retains the original texture information while more texture details are enhanced, greatly improving the texture enhancement effect.
Drawings
FIG. 1 is a diagram of an application environment of a texture enhancement method based on a texture image according to an embodiment;
FIG. 2 is a flow diagram illustrating a method for texture enhancement based on texture images in one embodiment;
FIG. 3 is a schematic flow chart illustrating the steps of fitting a source texture image to obtain a corresponding fitted texture image based on principal components of a sample texture image corresponding to the source texture image in one embodiment;
FIG. 4 is a flowchart illustrating the steps of fitting the face region to obtain a corresponding face-fitted texture image based on the principal components of the sample texture image corresponding to the source texture image in one embodiment;
FIG. 5 is a schematic flowchart illustrating a process of fusing a face fit texture image with a background texture image corresponding to a background region to obtain a fit texture image corresponding to a source texture image in one embodiment;
FIG. 6 is a schematic flow chart diagram illustrating the training steps for generating the countermeasure network in one embodiment;
FIG. 7 is a network architecture diagram illustrating generation of a countermeasure network in one embodiment;
FIG. 8 is a schematic diagram illustrating a texture enhancement method based on texture images in an exemplary embodiment;
FIG. 9 is a block diagram of a texture enhancing device based on texture images in one embodiment;
FIG. 10 is a block diagram showing a texture enhancing apparatus based on a texture image according to another embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The texture enhancement method based on texture images provided by this application can be applied in the application environment shown in FIG. 1, in which the terminal 102 communicates with the computer device 104 over a network. The terminal 102 may capture face images at different angles through an image acquisition device, so as to generate a corresponding source texture image from the multi-angle face images. The computer device 104 may then obtain the source texture image to be processed from the terminal 102. The computer device 104 fits the source texture image based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding fitted texture image, the principal components being obtained by principal component analysis of the sample texture images; smooths the fitted texture image to obtain a corresponding smoothed texture image, the smoothing suppressing noise in the fitted texture image; and performs texture enhancement on the smoothed texture image through the generator of a generative adversarial network, outputting a target texture image corresponding to the source texture image. The terminal 102 may be, but is not limited to, a camera, a monitoring device, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device. The computer device 104 may specifically be a terminal or a server, and the server may be implemented as an independent server or a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a texture enhancement method based on texture images is provided. The method is illustrated by its application to the computer device in FIG. 1 and includes the following steps:
step S202, a source texture image to be processed is obtained.
The source texture image serves as the base texture image in the texture enhancement process, and may also be called the initial, real texture image. It may specifically be a texture image corresponding to a target object, such as a face texture image corresponding to the target object's face, or a hand or foot texture image corresponding to the target object's hands or feet. The target object may specifically be a human or another animal.
A texture image, also called a UV image, is an unwrapped surface image of a three-dimensional object. UV is short for UV texture-mapping coordinates, which define the position of every point on the image; U and V are the horizontal and vertical image coordinates respectively, with values generally in the range 0 to 1. Each point in the UV image is associated with the three-dimensional model and determines the position of the surface texture map; that is, each point in the UV image corresponds exactly to a point on the surface of the model object, so that a stereoscopic object can be constructed. For example, a face texture image may be used to generate a three-dimensional face.
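To make the UV correspondence concrete, here is a minimal Python sketch that looks up texture colors for a batch of UV coordinates; the helper name sample_uv, the nearest-neighbour lookup and the bottom-origin V convention are illustrative assumptions, not part of the patent.

```python
import numpy as np

def sample_uv(texture: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Nearest-neighbour lookup of texture colors for UV coords in [0, 1].

    texture: (H, W, 3) UV image; uv: (N, 2) array of (u, v) pairs.
    Returns an (N, 3) array of colors, one per model surface point.
    """
    h, w = texture.shape[:2]
    u = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    # V is often measured from the bottom of the image (assumption here).
    v = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return texture[v, u]
```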
The face texture image may be a human face texture image or the face texture image of another animal. A human face is the representation of an actual person's face; in general, different people have different faces. The face includes more than one part, such as the forehead, eyebrows, eyes, nose, lips, cheeks and chin. It should be understood that a second face appearing hereinafter is a face different from the first face, specifically the face of a different person; a face different from the first face may differ in the forehead, eyebrows, eyes, nose, lips, cheeks, chin, and so on.
Specifically, the computer device may obtain the source texture image locally or from other computer devices over a network. In one embodiment, the source texture image may specifically be a face texture image; the computer device may collect texture images of various faces in advance to build a texture image library, and select a texture image from that library as the source texture image when needed.
In one embodiment, the source texture image may specifically be a face texture image. The computer device may scan a human face from multiple angles through an image acquisition device such as a camera to obtain the face image corresponding to each angle, these face images being flat two-dimensional images. The computer device may identify the facial feature points in each face image and align them across the multi-angle face images, and then construct the corresponding face texture image from a pre-built three-dimensional face model and the positional information among the facial feature points in the face images at different angles.
In one embodiment, the computer device may instead process the multi-angle face images through a machine learning model and output the corresponding face texture image from that model.
Step S204, fitting the source texture image based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding fitted texture image; the principal components of the sample texture images are obtained by performing principal component analysis on the sample texture images.
Specifically, the computer device may obtain sample texture images corresponding to the source texture image in advance and perform principal component analysis on this series of sample texture images to determine the corresponding principal components. The computer device may then fit the source texture image based on the determined principal components to obtain the corresponding fitted texture image. In multivariate statistics, Principal Component Analysis (PCA) is a method for statistically analyzing and simplifying a data set. It uses an orthogonal transformation to linearly transform the observations of a series of possibly correlated variables, projecting them onto the values of a series of linearly uncorrelated variables called principal components.
In one embodiment, the computer device may obtain sample texture images corresponding to the source texture image in advance and perform principal component analysis on the faces in the sample texture images to determine the principal components corresponding to the face. The computer device may then perform PCA-basis fitting on the face region of the source texture image based on the principal components corresponding to the face to obtain a face-fitted texture image corresponding to the face region. Further, the computer device may select a rectangular frame containing the face-fitted texture image corresponding to the face region and generate the corresponding fitted texture image. The computer device may also determine the background region of the source texture image outside the face region, determine a background texture image from that background region, and then fuse the face-fitted texture image with the background texture image to obtain the fitted texture image.
In one embodiment, the computer device may fit the face directly, yielding the face-fitted texture image. Alternatively, the computer device may fit each part of the face separately to obtain local fitted texture images (fit_uv_part) and then fuse the local fitted texture images of the parts into the face-fitted texture image (fit_uv) corresponding to the whole face. Details of how the face is fitted directly, and of how each part is fitted to obtain a local fitted texture image, are given in the detailed embodiments below.
Step S206, smoothing the fitted texture image to obtain a corresponding smoothed texture image; the smoothing is used to suppress noise in the fitted texture image.
Smoothing the fitted texture image specifically means removing its high-frequency information while retaining its low-frequency information. Low-pass filtering can therefore be applied to the fitted texture image to achieve the smoothing effect: it removes noise from the fitted texture image and blurs it (noise corresponds to the regions of large variation in the fitted texture image, that is, the high-frequency information).
Specifically, the computer device may filter the fitted texture image to obtain the noise-suppressed smoothed texture image. The filtering may specifically be mean filtering, Gaussian filtering, median filtering, bilateral filtering or the like; this embodiment places no limitation on it.
In one embodiment, the computer device may apply Gaussian smoothing to the fitted texture image to obtain the noise-suppressed smoothed texture image. Gaussian smoothing may specifically mean low-pass filtering the fitted texture image with a Gaussian filter.
In one embodiment, step S206, smoothing the fitted texture image to obtain a corresponding smoothed texture image, specifically includes: cropping an intermediate texture image that includes the face region out of the fitted texture image according to a preset format; and applying a smoothing operation and normalization to the intermediate texture image to obtain the corresponding smoothed texture image, the smoothing operation being used to suppress noise in the intermediate texture image.
The preset format matches the input-image format required by the generator in the generative adversarial network; it may specifically be the image size or image storage size of the input image, for example a required input image size of 1184 × 1020 × 3. In particular, so that the generator can better emphasize the texture details of the face region while keeping the pixels of the other regions unchanged, the computer device may crop the intermediate texture image containing the face region out of the fitted texture image in the preset format.
Further, the computer device may apply Gaussian low-pass filtering to the intermediate texture image, for example two 3x3 Gaussian smoothing passes, to weaken the marks of spots, moles or acne in the face region. The smoothed RGB (Red Green Blue, denoting the color space) image, whose pixel values lie in the range 0 to 255, is then normalized to the range 0 to 1 to obtain the smoothed texture image. The smaller data range improves processing efficiency when the generator subsequently processes the smoothed texture image.
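The preprocessing just described can be sketched as follows with OpenCV and NumPy. The function name, the crop-box convention and the use of cv2.GaussianBlur are assumptions; only the two 3x3 smoothing passes and the 0-255 to 0-1 normalization come from the text above.

```python
import cv2
import numpy as np

def preprocess(fit_uv: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the face region, weaken blemish marks, scale pixels to [0, 1].

    box = (y0, y1, x0, x1) describes the preset-format crop, e.g. a
    1184 x 1020 window as in the example above.
    """
    y0, y1, x0, x1 = box
    mid_uv = fit_uv[y0:y1, x0:x1]              # intermediate texture image
    for _ in range(2):                         # two 3x3 Gaussian passes
        mid_uv = cv2.GaussianBlur(mid_uv, (3, 3), 0)
    return mid_uv.astype(np.float32) / 255.0   # normalize 0-255 to 0-1
```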
In the above embodiment, the intermediate texture image containing the face region is cropped from the fitted texture image in the preset format, and smoothing and normalization are then applied to the face region specifically, yielding the noise-suppressed smoothed texture image. During texture enhancement, only the face region is thus enhanced while the information of the other background regions is preserved, which greatly improves the effectiveness of the enhancement and makes the processed texture image as a whole more realistic.
Step S208, performing texture enhancement on the smoothed texture image through the generator of the trained generative adversarial network, and outputting the target texture image corresponding to the source texture image.
A generative adversarial network (GAN) is a network model based on unsupervised machine learning that learns by letting two neural networks play a game against each other. A GAN consists of a generator and a discriminator. The generator takes random samples from a latent space as input, and its output must imitate the real samples in the training set as closely as possible. The input to the discriminator is either a real sample or the generator's output, and its goal is to distinguish the generator's output from the real samples as well as possible, while the generator tries to fool the discriminator as much as possible. The two networks oppose each other and continually adjust their parameters; the ultimate aim is that the discriminator cannot tell whether the generator's output is real.
In this embodiment of the application, the input to the generator is the low-definition smoothed texture image, and the generator must produce an output as close as possible to a high-definition source texture image. The input to the discriminator is either a real high-definition source texture image or the generator's output, and its purpose is to distinguish the two as far as possible. In one embodiment, the adversarial network mentioned here may specifically be a conditional generative adversarial network; the difference is that the discriminator receives two inputs, a low-definition sample smoothed texture image and the texture image to be judged real or fake, that is, either the predicted target texture image output by the generator (fake) or a real high-definition sample texture image (real data).
Specifically, the computer device may train the generative adversarial network on training samples in advance to obtain the trained network. The smoothed texture image is input into the generator of the trained network for texture enhancement, and the generator outputs a target texture image at the same resolution. It can be understood that the smoothed texture image obtained after fitting and smoothing contains less noise, but some texture details are lost and it is a low-definition texture image. After the generator's processing, realistic pore and hair textures are reconstructed while the original low-definition texture is preserved, producing a high-definition target texture image. The training process of the generative adversarial network is described in detail in the embodiments below.
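The inference step can be sketched as below, assuming a PyTorch generator; the patent does not fix the framework, the tensor layout or the output range, so those choices are illustrative.

```python
import torch

@torch.no_grad()
def enhance(generator: torch.nn.Module, smooth_uv) -> torch.Tensor:
    """Run the trained generator on one smoothed texture image.

    smooth_uv: (H, W, 3) array already normalized to [0, 1].
    Returns the texture-enhanced image at the same resolution.
    """
    generator.eval()
    x = torch.as_tensor(smooth_uv).float().permute(2, 0, 1).unsqueeze(0)
    y = generator(x)                        # same H x W, enhanced details
    return y.squeeze(0).permute(1, 2, 0).clamp(0.0, 1.0)
```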
In one embodiment, the generator may specifically be built on a convolutional neural network structure, which may be implemented with network architectures such as U-Net (convolutional networks for biomedical image segmentation), VGG (Visual Geometry Group network, an image processing network) or ResNet (residual network); this embodiment of the application places no limitation on it. The discriminator may be implemented as a multi-layer convolutional neural network.
In one embodiment, the image resolution of the target texture image is the same as that of the source texture image; that is, the texture enhancement method based on texture images described in the embodiments of this application enhances texture details without changing the image resolution of the source texture image. This differs markedly from some existing deep super-resolution texture image models, which improve sharpness at the expense of increased image resolution and memory size.
According to the above texture enhancement method based on texture images, the source texture image is fitted based on the principal components of the sample texture images corresponding to it to obtain a corresponding fitted texture image. The principal components are obtained by performing principal component analysis on a series of sample texture images. In this way, part of the image noise, such as shadow and illumination information, is removed from the fitted texture image while most of the valid image information is retained. The fitted texture image is then smoothed to obtain a smoothed texture image in which high-frequency noise such as spots, pigmentation or moles is suppressed. Texture enhancement is performed on the smoothed texture image by the generator of a trained generative adversarial network, which reconstructs realistic pore and hair textures while preserving the original low-definition texture, yielding the target texture image. A low-quality source texture image can thus be converted, at unchanged resolution, into a high-definition target texture image rich in detail; the generated target texture image retains the original texture information while more texture details are enhanced, greatly improving the texture enhancement effect.
In one embodiment, step S202, acquiring the source texture image to be processed, includes: acquiring more than one face image obtained by scanning the face of the target object at different angles; acquiring texture setting information corresponding to a target scene; and fusing the face images according to the texture setting information to obtain the source texture image corresponding to the target object.
The target object may be a human or another animal. The target scene is the scene in which the target texture image will be used, such as a specific game or animation scene. The texture setting information is the information required to generate the source texture image, such as the roughness or form of the texture; different target scenes may correspond to different texture setting information.
Specifically, the computer device may scan the face of the target object from multiple angles with an image capture device such as a camera to obtain the face image for each angle, these face images being flat two-dimensional images. The computer device may then fuse the face images according to the texture setting information to obtain the source texture image corresponding to the target object.
In one embodiment, fusing more than one face image according to the texture setting information to obtain the source texture image corresponding to the target object includes: determining face shape information and illumination information corresponding to the target object by analyzing the face images; determining, from the face shape information, the pixel points to which the position points in the three-dimensional texture model map in each face image; and constructing the source texture image corresponding to the target object from the color information of the determined pixel points and the corresponding illumination information, according to the texture setting information and in combination with the three-dimensional texture model.
Specifically, the computer device may determine the face shape information and illumination information corresponding to the target object by analyzing the face images, for example by aligning the parts of the face. The face shape information specifically includes the facial shape and pose. Illumination information (lighting) refers to whether the pixels of the face region are lit or in shadow when a light source illuminates the face. The computer device may then subtract the illumination contribution from, or add the shadow-induced pixel difference back to, the color information (specifically the RGB values) of each pixel to obtain RGB values from which the influence of illumination has been removed, thereby reducing the effect of lighting.
Further, according to the estimated face shape and in combination with the three-dimensional texture model, the computer device may compute the exact position at which each point of the three-dimensional face model falls on the two-dimensional face image, and read the RGB value at that position (with the illumination influence removed) as the RGB value of the corresponding model point. The source texture image of the target object is then obtained from the vertex-to-UV correspondence, that is, the one-to-one correspondence between the points of the three-dimensional face model and the pixels of the UV image. In this way a corresponding three-dimensional texture image can be generated accurately from several two-dimensional face images.
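A heavily simplified sketch of this projection-and-sampling step follows. It assumes a weak-perspective camera (rotation R, 2-D translation t, scale s) and a precomputed per-pixel illumination estimate to subtract; all names and the camera model are assumptions made for illustration, not the patent's notation.

```python
import numpy as np

def bake_vertex_colors(verts, R, t, s, image, illum):
    """Project mesh vertices into one face photo and read off their colors.

    verts: (N, 3) three-dimensional face model vertices; image: (H, W, 3)
    face photo; illum: per-pixel illumination estimate (same shape as image)
    subtracted to remove the lighting influence, as described above.
    """
    p = s * (verts @ R.T)[:, :2] + t               # weak-perspective projection
    h, w = image.shape[:2]
    x = np.clip(p[:, 0].round().astype(int), 0, w - 1)
    y = np.clip(p[:, 1].round().astype(int), 0, h - 1)
    shading_free = image.astype(np.float32) - illum
    return shading_free[y, x]                      # (N, 3) RGB per vertex
```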
In the above embodiment, the source texture image corresponding to the target object can be generated accurately and conveniently by fusing face images of the target object taken from different angles.
Referring to FIG. 3, in one embodiment step S204, fitting the source texture image based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding fitted texture image, specifically includes the following steps:
S302, determining the face region and the background region included in the source texture image.
The face region is the region where the face is located and can be regarded as the key region of the source texture image. When the source texture image is a texture image corresponding to a human face, the face region may specifically be the region occupied by that face; it specifically includes the forehead, eyebrows, eyes, nose, lips, cheeks and chin. In the source texture image, the regions other than the face region may be called background regions and can be regarded as non-key regions; the background region may specifically include the hair, the neck or the surrounding environment.
Specifically, the computer device may separate the face region and the background region from the source texture image using a face mask corresponding to the face region. In one embodiment, the computer device may instead identify facial feature points, such as the eyes, eyebrows, nose, lips or ears, from the source texture image by facial feature point detection and determine the face region from those feature points. Of course, the computer device may also determine the face region from the source texture image with other face detection methods; this embodiment of the application places no limitation on it.
S304, fitting the face region based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding face-fitted texture image.
In one embodiment, the computer device may obtain sample texture images corresponding to the source texture image in advance and perform principal component analysis on the faces in the sample texture images to determine the principal components corresponding to the face. The computer device may then perform PCA-basis fitting on the face region of the source texture image based on those principal components to obtain the face-fitted image corresponding to the face region.
In one embodiment, the computer device may build a sample matrix from a series of sample texture images containing a face region, where one dimension of the matrix represents the number of samples and the other represents the sample features. The computer device may then apply Singular Value Decomposition (SVD) to the sample matrix to obtain more than one set of singular values and corresponding PCA bases, sort the singular values in descending order, and keep the singular values ranked before a preset cutoff together with their PCA bases. These retained singular values and PCA bases together form the principal components of the sample matrix, that is, the principal components of the sample texture images.
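This SVD-based selection can be written in a few lines of NumPy, as a sketch; the function name and the centering on the sample mean are assumptions consistent with the fitting formula fit_uv_part = z · X + mu given later.

```python
import numpy as np

def principal_components(samples: np.ndarray, k: int):
    """SVD-based PCA over a sample matrix (rows = samples, cols = features).

    Returns the sample mean, the k largest singular values and the k
    corresponding PCA bases, mirroring the cutoff selection described above.
    """
    mu = samples.mean(axis=0)
    # np.linalg.svd returns singular values already sorted descending.
    _, sigma, basis = np.linalg.svd(samples - mu, full_matrices=False)
    return mu, sigma[:k], basis[:k]          # rows of basis are PCA bases
```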
In one embodiment, the computer device may fit the whole face directly. It then builds the corresponding face sample matrix from the entire sample texture images, applies singular value decomposition to obtain more than one set of singular values and corresponding PCA bases, and selects a preset number of the largest singular values together with their PCA bases to form the face principal components of the face sample matrix. The computer device then performs PCA-basis fitting on the source texture image using the face principal components to obtain the corresponding face-fitted texture image.
In another embodiment, considering that the number of samples is limited and fitting the whole face directly is too computationally expensive, the computer device may fit each part of the face separately to obtain local fitted texture images (fit_uv_part) and then fuse the local fitted texture images of the parts into the face-fitted texture image (fit_uv) corresponding to the whole face. Specifically, the computer device may separate a part sample texture image from each sample texture image using the local mask corresponding to that part, for example an eye sample texture image for the eyes or a nose sample texture image for the nose, and build a corresponding part sample matrix for each type of part sample texture image. For the part sample matrix of each part, the computer device may apply singular value decomposition to obtain more than one set of singular values and corresponding PCA bases, and select a preset number of the largest singular values together with their PCA bases to form the part principal components of that part sample matrix. The computer device then performs PCA-basis fitting on the corresponding part region of the source texture image using the part principal components to obtain the corresponding local fitted texture image, for example an eye-fitted texture image for the eyes and a nose-fitted texture image for the nose. Finally, the computer device may superimpose the local fitted texture images of the different part regions through masks to obtain the face-fitted texture image fit_uv covered by all the PCA bases.
S306, fusing the face-fitted texture image with the background texture image corresponding to the background region to obtain the fitted texture image corresponding to the source texture image.
Specifically, the computer device may segment out the background region of the source texture image to generate the corresponding background texture image, then fuse the face-fitted texture image with the background texture image to obtain and output the fitted texture image. The specific fusion of the face-fitted texture image and the background texture image may be pixel-level, feature-level or decision-level image fusion; this embodiment of the application places no limitation on it.
In one embodiment, the computer device may directly fuse the pixels of the face-fitted texture image and the background texture image to obtain the fitted texture image. In another embodiment, the computer device extracts features from the face-fitted texture image and the background texture image and then synthesizes information such as edges, shapes, contours and local features to obtain the fitted texture image.
In one embodiment, the computer device may determine the face mask corresponding to the face-fitted texture image and fuse the face-fitted texture image with the background texture image by Laplacian image fusion to obtain the fitted texture image. The specific Laplacian image fusion procedure is described in detail in the embodiments below.
In the above embodiment, only the face region is fitted to obtain the corresponding face-fitted texture image, so noise in the face image is removed well while the pixel information of the background region is preserved. When texture enhancement is later performed with the generative adversarial network, the skin-containing face region can be enhanced in a targeted way, improving the overall quality of the processed texture image.
Referring to FIG. 4, in one embodiment the sample texture images include more than one group of part sample texture images. Step S304, fitting the face region based on the principal components of the sample texture images corresponding to the source texture image to obtain a corresponding face-fitted texture image, specifically includes the following steps:
S402, determining more than one part region included in the face region.
Specifically, when the computer device generates the source texture images, the parts of the face in the source texture images are aligned; that is, the eye region, nose region, forehead region and so on are consistent across all source texture images. The computer device may locate each part in the face region and thereby determine the part region corresponding to each part. The parts may specifically be the forehead, eyebrows, eyes, nose, lips, cheeks, chin, and so on, and the corresponding part regions may specifically be the regions of the forehead, eyebrows, eyes, nose, lips, cheeks or chin.
S404, determining the local fitted texture image corresponding to each part region based on the part principal components of the part sample texture images corresponding to that part region.
In one embodiment, the computer device may determine the local mask corresponding to each part region and superimpose the local mask on the sample texture image to obtain the corresponding part sample texture image. For each part, the computer device may thus derive a series of part sample texture images from the series of sample texture images, for example a series of eye sample texture images for the eyes and a series of nose sample texture images for the nose.
The local mask is the image mask corresponding to the part region. An image mask, also simply called a mask, occludes all or part of the image to be processed so as to control the region that is processed. In optical image processing the mask may be a film or a filter; in digital image processing it may be a two-dimensional matrix array or a multi-valued image. In this embodiment, the local masks may be a set of binary images in which, for example, the part region has the value 1 and all other regions have the value 0.
Further, the computer device may build a part sample matrix for each type of part sample texture image. For the part sample matrix of each part, the computer device may apply singular value decomposition to obtain more than one set of singular values and corresponding PCA bases, and select a preset number of the largest singular values together with their PCA bases to form the part principal components of that part sample matrix. The computer device then performs PCA-basis fitting on the corresponding part region of the source texture image using the part principal components to obtain the corresponding local fitted texture image, for example an eye-fitted texture image for the eyes and a nose-fitted texture image for the nose.
S406, fusing the corresponding local fitted texture images according to the local masks of the parts to obtain the face-fitted texture image corresponding to the face region.
Specifically, the computer device may superimpose the local fitted texture images of the different part regions through the local masks to obtain the face-fitted texture image covered by all the PCA bases.
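A minimal sketch of this mask-weighted superposition, assuming the binary local masks described above; the names are illustrative. Pixels covered by no part mask stay zero here and are supplied by the background texture during the later fusion step.

```python
import numpy as np

def merge_parts(fit_uv_parts, masks):
    """Superimpose per-part fitted textures through their 0/1 local masks.

    fit_uv_parts: list of (H, W, 3) local fitted textures;
    masks: list of matching (H, W) binary masks.
    """
    fit_uv = np.zeros_like(fit_uv_parts[0], dtype=np.float32)
    for part, mask in zip(fit_uv_parts, masks):
        fit_uv += part * mask[..., None]     # write each part region
    return fit_uv
```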
In the above embodiment, each part of the face is fitted to obtain a local fitted texture image, and the local fitted texture images of the parts are fused into the face-fitted texture image corresponding to the whole face. A good fit can thus be achieved without a large number of samples, and fitting each part separately greatly reduces the amount of computation required at once, further improving fitting efficiency and quality.
In one embodiment, step S404, determining the local fitted texture image corresponding to each part region based on the part principal components of the corresponding part sample texture images, includes: obtaining the part principal components produced by principal component analysis of the part sample texture images of each part region, together with the part sample mean of each group of part sample texture images; constructing, for each part region, a corresponding local fitted texture image function from its part principal components, the local fitting parameters to be adjusted and its part sample mean; constructing a first target loss function from the local fitted texture image functions and the source texture image; continually adjusting the values of the local fitting parameters to minimize the first target loss function, stopping when a stop condition is met, to obtain the local fitting target values for each part region; and substituting the local fitting target values of each part region into the corresponding local fitted texture image function to obtain the local fitted texture image of that part region.
In one embodiment, for each part the computer device may build a part sample matrix from the series of part sample texture images of that part, where one dimension of the matrix represents the number of samples and the other represents the sample features. The computer device may then apply singular value decomposition to the part sample matrix to obtain more than one set of singular values and corresponding PCA bases, sort the singular values in descending order, and keep the singular values ranked before a preset cutoff together with their PCA bases; these jointly form the principal components of the part sample matrix, that is, the part principal components of the part sample texture images.
Further, for each part the computer device may compute the part sample mean from the series of part sample texture images. The part sample mean is a vector whose dimension is the number of sample features. For example, if the part sample matrix has n rows and m columns, where n is the number of samples and m the number of sample features, the part sample mean is an m-dimensional vector whose value in each dimension is the mean of the corresponding column of the part sample matrix.
Further, the computer device constructs the local fitted texture image function of each part region from its part principal components, the local fitting parameters to be adjusted and its part sample mean. In one embodiment, the computer device may form the part principal components from the product of the selected singular values and their corresponding PCA bases, take the transpose of the resulting matrix, multiply it by the local fitting parameters and add the part sample mean to obtain the local fitted texture image. That is, the local fitted texture image function may be constructed as: fit_uv_part = z · X + mu, where X = (basis · sigma)^T; here X denotes the part principal components, basis the PCA bases, sigma the singular values corresponding to the PCA bases, z the local fitting parameters, and mu the part sample mean.
It can be understood that, in solving for the local fitting parameters, the goal is to optimize each local fitting parameter z so that the texture image obtained by superimposing the local fitted texture images is close to the real source texture image gt_uv. The measurement criterion may specifically be the Euclidean distance between the face-fitted texture image and the source texture image, so the computer device may construct an image loss function from the local fitted texture image functions and the source texture image: loss_L2 = E_z[ ||Σ fit_uv_part(z) - gt_uv|| ], where fit_uv_part(z) is the value of a local fitted texture image function, gt_uv is the source texture image, and E denotes the expectation. The computer device may take this image loss function directly as the first target loss function when adjusting the values of the local fitting parameters.
The computer device may initialize z to all zeros, train with the Adam optimizer, and set a suitable learning rate, such as 0.01. It keeps adjusting the value of the local fitting parameter z to minimize the first target loss function and stops when a stop condition is met, obtaining the local fitting target values for each part region. The stop condition may specifically be reaching a preset number of iterations, for example 500 iterations, which yields the optimal value of the local fitting parameter z. The computer device can then substitute the local fitting target values of each part region into the corresponding local fitted texture image functions to obtain the local fitted texture image of each part region.
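The optimization described here can be sketched with PyTorch's Adam optimizer. Tensor shapes and the squared-L2 form of the loss are assumptions (the patent only names the Euclidean distance); the sketch fits one part region per call.

```python
import torch

def fit_parameters(X, mu, gt_uv, iters=500, lr=0.01):
    """Optimize local fitting parameters z so that z @ X + mu ~ gt_uv.

    X: (k, m) part principal components; mu: (m,) part sample mean;
    gt_uv: (m,) flattened source texture pixels of this part region.
    """
    z = torch.zeros(X.shape[0], requires_grad=True)  # initialized to all zeros
    opt = torch.optim.Adam([z], lr=lr)               # learning rate, e.g. 0.01
    for _ in range(iters):                           # e.g. 500 iterations
        opt.zero_grad()
        fit = z @ X + mu                             # fit_uv_part(z)
        loss = torch.sum((fit - gt_uv) ** 2)         # squared-L2 image loss
        loss.backward()
        opt.step()
    return z.detach()                                # local fitting target value
```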
In one embodiment, considering that noise may appear at the joints between parts when the local fitted texture images are superimposed and stitched into the face-fitted texture image, a smoothing loss may be added when constructing the first target loss function so that the output is smoother.
In one embodiment, constructing the first target loss function from the local fitted texture image functions and the source texture image includes: constructing an image loss function from the difference between the sum of the local fitted texture image functions and the source texture image; determining a fitted texture image function from the local fitted texture image functions, and constructing a corresponding smoothing loss function based on the differences between pixels in the current fitted texture image determined by that function; and taking a weighted sum of the image loss function and the smoothing loss function as the first target loss function.
The image loss function may be expressed as loss_L2 = E_z[ ||Σ fit_uv_part(z) - gt_uv|| ]. The computer device may determine a fitted texture image function from the local fitted texture image functions and construct the corresponding smoothing loss function based on the differences between neighboring pixels in the current fitted texture image determined by that function.
In one embodiment, the computer device may 2x2-downsample the current fitted texture image, selecting one of the 4 pixels of each 2x2 block at a time to obtain 4 downsampled images; the smoothing loss loss_smoothing is then the L2 loss between every pair of these images.
For example, the computer device may determine the first target loss function as loss_total = loss_L2 + loss_smoothing × weight, where weight denotes a weighting coefficient.
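The smoothing term can be sketched as below, assuming an even-sized texture tensor and mean-squared pairwise differences; the exact normalization is not fixed by the patent.

```python
import torch

def smoothing_loss(fit_uv: torch.Tensor) -> torch.Tensor:
    """2x2-downsampling smoothness loss for an (H, W, 3) fitted texture.

    The four phase-shifted subsampled images should look alike when the
    texture is smooth; H and W are assumed even so their shapes match.
    """
    subs = [fit_uv[i::2, j::2] for i in (0, 1) for j in (0, 1)]
    loss, pairs = fit_uv.new_zeros(()), 0
    for a in range(4):
        for b in range(a + 1, 4):            # L2 loss between every pair
            loss = loss + torch.mean((subs[a] - subs[b]) ** 2)
            pairs += 1
    return loss / pairs

# loss_total = loss_L2 + weight * smoothing_loss(fit_uv)
```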
In this way the smoothing loss is added to the first target loss, making the determined local fitting parameters better. Local fitted texture images produced with parameters computed under both the image loss and the smoothing loss show no obvious seams when superimposed into the face-fitted texture image, carry less noise, and fit better.
In the above embodiment, optimized local fitting parameters are obtained by reducing the difference between the local fitted texture image and the source texture image; the local fitted texture image produced with these optimized parameters is closer to the real source texture image and the fitting effect is good.
In one embodiment, the computer device may perform a PCA-based fit directly on the full face, resulting in a corresponding face-fitted texture image. Specifically, step S304, that is, the step of performing fitting processing on the face region based on the principal component of the sample texture image corresponding to the source texture image to obtain a corresponding face-fitted texture image, specifically includes: acquiring a face principal component obtained after principal component analysis is carried out on a sample texture image corresponding to the source texture image and a face sample mean value corresponding to the sample texture image; constructing a corresponding face fitting texture image function through the face main component, the face fitting parameters to be adjusted and the face sample mean value; constructing a second target loss function according to the face fitting texture image function and the source texture image; continuously adjusting the value of the face fitting parameter to minimize the second target loss function, and stopping when a stopping condition is met to obtain a face fitting target value; and substituting the face fitting target value into a face fitting texture image function to obtain a face fitting texture image corresponding to the source texture image.
In particular, the computer device may construct a corresponding face sample matrix based on the entire sample texture image. Further, the computer device may perform singular value decomposition on the face sample matrix to obtain more than one set of singular values and corresponding PCA bases. The computer device may select a preset number of larger singular values and PCA bases corresponding to the respective singular values to form a face principal component corresponding to the face sample matrix.
Further, the computer device may calculate the corresponding face sample mean from the series of sample texture images. The computer device constructs a corresponding face fitting texture image function through the face principal component, the face fitting parameters to be adjusted and the face sample mean. In solving for the face fitting parameters, the goal is to optimize them so that the face fitting texture image is close to the real source texture image. To this end, the computer device can construct a second target loss function based on the difference between the face fitting texture image function and the source texture image. The computer device may initialize the face fitting parameters to all zeros, train them through an Adam optimizer, and set a matching learning rate. The value of the face fitting parameter is continuously adjusted to minimize the second target loss function, stopping when a stopping condition is met to obtain the face fitting target value. The stopping condition may specifically be that a preset number of iterations is reached. The computer device can then substitute the face fitting target value into the face fitting texture image function to obtain the corresponding face fitting texture image.
In this embodiment, optimized face fitting parameters are obtained by reducing the difference between the face fitting texture image and the source texture image, so the face fitting texture image obtained by fitting with the optimized face fitting parameters is closer to the real source texture image and achieves a good fitting effect.
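As an illustration of the optimization loop described above, here is a hedged sketch of the PCA-based face fit with an Adam optimizer; pca_basis and mean_face are assumed to come from the principal component analysis of the sample texture images, and the step count and learning rate are placeholders:

```python
import torch

def fit_face(source_uv, pca_basis, mean_face, steps=500, lr=0.01):
    # source_uv: flattened source texture, shape (d,)
    # pca_basis: (k, d) face principal components; mean_face: (d,) sample mean.
    params = torch.zeros(pca_basis.shape[0], requires_grad=True)  # all-zero init
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):                   # stop after a preset iteration count
        opt.zero_grad()
        fitted = mean_face + params @ pca_basis       # face fitting texture function
        loss = torch.mean((fitted - source_uv) ** 2)  # second target loss
        loss.backward()
        opt.step()
    with torch.no_grad():
        return mean_face + params @ pca_basis         # face fitting texture image
```

The stopping condition here is a fixed number of iterations, matching the example given above.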
Referring to fig. 5, in an embodiment, the step S306, namely, the step of fusing the face fitting texture image with the background texture image corresponding to the background region to obtain a fitting texture image corresponding to the source texture image specifically includes:
and S502, determining a face mask corresponding to the face fitting texture image, and constructing a corresponding Gaussian pyramid based on the face mask.
Here, the face mask is an image mask corresponding to the face region. The Gaussian pyramid is an image pyramid: a set of images derived from the same original image and arranged with gradually decreasing resolution. Stacked layer by layer, the images resemble a pyramid; the higher the level, the smaller the image and the lower the resolution. In the embodiment of the present application, the Gaussian pyramid corresponding to the face mask is the image pyramid obtained by downsampling the face mask step by step. Downsampling is the process of extracting part of the pixels of an original image to form a new, reduced image; it is also called subsampling. The reverse operation is upsampling, which enlarges an image and may also be referred to as image interpolation (interpolating); its main purpose is to enlarge the original image so that it can be displayed on a higher-resolution display device.
Specifically, the computer device performs gaussian low-pass filtering and interlaced down-sampling processing on the face mask to obtain a processed image, and then continues to perform gaussian low-pass filtering and interlaced down-sampling processing on the processed image, so that the processed image is processed layer by layer to a preset number of layers to obtain a gaussian image pyramid corresponding to the face mask.
In one embodiment, the Gaussian low-pass filtering of the image may specifically be performed through a filter. The filter may be a matrix: the image to be filtered is convolved with the matrix and the result is normalized to obtain the filtered image. In an exemplary embodiment, the filter may specifically be Filter = 1/256 × [[1, 4, 6, 4, 1], [4, 16, 24, 16, 4], [6, 24, 36, 24, 6], [4, 16, 24, 16, 4], [1, 4, 6, 4, 1]]. Of course, another filter may be used instead, which is not limited in the embodiments of the present application.
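As a small sketch of this filtering step (OpenCV and NumPy assumed), the 5x5 kernel above can be applied as follows:

```python
import cv2
import numpy as np

kernel = np.array([[1,  4,  6,  4, 1],
                   [4, 16, 24, 16, 4],
                   [6, 24, 36, 24, 6],
                   [4, 16, 24, 16, 4],
                   [1,  4,  6,  4, 1]], dtype=np.float32) / 256.0

def gaussian_lowpass(image):
    # Convolve with the normalized 5x5 kernel (output keeps the input depth).
    return cv2.filter2D(image, -1, kernel)
```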
S504, constructing a corresponding first Laplacian pyramid based on the face fitting texture image, and constructing a corresponding second Laplacian pyramid based on the background texture image corresponding to the background area.
The Laplacian pyramid is an image pyramid. Also called the Laplacian residual pyramid, it is built from the residual between each layer of a Gaussian pyramid and the prediction obtained by upsampling the next, lower-resolution layer; it is essentially the prediction residual in digital image processing and, used together with the Gaussian pyramid, it allows the image to be restored to the maximum extent.
In one embodiment, the computer device performs Gaussian low-pass filtering and interlaced downsampling on the face fitting texture image to obtain a processed image, then continues to perform Gaussian low-pass filtering and interlaced downsampling on the processed image, processing layer by layer up to a preset number of layers to obtain the Gaussian image pyramid corresponding to the face fitting texture image. Each level of this Gaussian pyramid is then enlarged by interpolation, filtered with the same filter used when constructing the Gaussian pyramid, and subtracted from the Gaussian image of the next lower (higher-resolution) level to obtain that level's residual image; repeating this process constructs the first Laplacian pyramid. It will be appreciated that the computer device may process the background texture image in the same manner to construct the second Laplacian pyramid corresponding to the background texture image.
The specific process of constructing the Laplacian pyramid is illustrated below with an image L0 as an example; L0 may specifically be the face fitting texture image or the background texture image. Step (1): downL0 is obtained by Gaussian downsampling of L0; this may specifically be achieved by the pyrDown() function. Step (2): upL0 is obtained by Gaussian upsampling of downL0; this may specifically be achieved by the pyrUp() function. The residual between the original image L0 and upL0 is then calculated to obtain the residual map lapL0, which is the image at the lowest level of the Laplacian residual pyramid. Step (3): steps (1) and (2) are repeated on downL0 and its successors, yielding a series of residual maps lapL1, lapL2, lapL3, and so on.
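The following is a compact sketch of steps (1) to (3) using OpenCV's pyrDown() and pyrUp(); float32 images are assumed so that the residual maps can hold negative values:

```python
import cv2

def laplacian_pyramid(img, levels=4):
    # img: float32 image so residual maps can hold negative values.
    pyramid, cur = [], img
    for _ in range(levels):
        down = cv2.pyrDown(cur)                                     # step (1)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))  # step (2)
        pyramid.append(cur - up)              # residual map lapL0, lapL1, ...
        cur = down                            # step (3): repeat on downL0
    pyramid.append(cur)                       # smallest Gaussian level on top
    return pyramid
```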
And S506, fusing the images of the corresponding layers in the first Laplacian pyramid and the second Laplacian pyramid into corresponding fusion images through the face mask image of each layer of the Gaussian pyramid.
Specifically, the computer device may fuse the images of the corresponding layers in the first Laplacian pyramid and the second Laplacian pyramid into one image, that is, a fused image, through the mask image of each layer of the Gaussian pyramid corresponding to the face mask. Fusion is performed in this way for every layer, yielding fused images for more than one layer from the bottom layer to the top layer.
Specifically, when fusing each layer of images of each pyramid, the computer device may perform the fusion processing in a weighted fusion manner. For each layer, the computer device multiplies the image of the corresponding layer in the first laplacian pyramid by the mask image of the corresponding layer in the corresponding gaussian pyramid to obtain a first image. For each layer, the computer device may determine an inverse mask image that is opposite to the mask image of the corresponding layer in the gaussian pyramid, and multiply the image of the corresponding layer in the second laplacian pyramid with the inverse mask image to obtain a second image. Furthermore, for each layer, the computer device may perform weighted summation processing on the first image and the second image corresponding to the corresponding layer to obtain a fused image corresponding to the layer. The weighting coefficient of the weighted fusion can be determined and adjusted according to the actual situation.
In one embodiment, for each layer, the computer device may calculate the fused image of the corresponding layer according to the following formula: y = α · mask · image_a + β · (1 - mask) · image_b, where α and β are the weighting coefficients, image_a is the face fitting texture image, image_b is the background texture image, and mask is the face mask at the corresponding pyramid level. Thus, by weighted fusion, the images of the corresponding layers in the first Laplacian pyramid and the second Laplacian pyramid can be fused into a corresponding fused image, preparing for the subsequent generation of a high-quality fitted texture image.
And S508, starting from the top-layer fused image, upsampling layer by layer and superimposing the fused image of the next layer, until the bottom-layer fused image has been superimposed, and outputting the fitted texture image obtained after the superposition.
Specifically, starting from the top-layer fused image, the computer device upsamples it to the size of the next layer's fused image and superimposes that fused image, repeating this upsample-and-superimpose process until the bottom-layer fused image has been superimposed, and outputs the fitted texture image obtained after the superposition. In this way, the face fitting texture image and the background texture image can be fused into one texture image through the face mask by layer-by-layer fusion and superposition.
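Putting S502 to S508 together, a hedged sketch of the mask-guided pyramid blending and the final collapse might look as follows (OpenCV assumed; the mask is assumed to be float32 and already expanded to the same channel count as the images, and alpha/beta are the weighting coefficients from the fusion formula above):

```python
import cv2

def gaussian_pyramid(img, levels):
    # levels downsamplings give levels + 1 layers, matching laplacian_pyramid.
    pyr = [img]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def blend_and_collapse(lap_face, lap_bg, mask_pyr, alpha=1.0, beta=1.0):
    # Per-layer weighted fusion: y = alpha*mask*a + beta*(1 - mask)*b
    fused = [alpha * m * a + beta * (1.0 - m) * b
             for a, b, m in zip(lap_face, lap_bg, mask_pyr)]
    out = fused[-1]                       # start from the top-layer fused image
    for layer in reversed(fused[:-1]):    # upsample and superimpose downwards
        out = cv2.pyrUp(out, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return out                            # the fitted texture image
```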
In the above embodiment, the face fitting texture image and the background texture image are fused by means of laplacian image fusion according to the face mask, so that a complete fitting texture image can be obtained.
Referring to FIG. 6, in one embodiment, generating a countermeasure network includes a generator and an arbiter; the training step of generating the countermeasure network includes the steps of:
S602, obtaining a sample texture image and a sample smooth texture image corresponding to the sample texture image.
In particular, the computer device may obtain the sample texture image locally or from other computer devices over a network. In one embodiment, the sample texture image may specifically be a face sample texture image: the computer device may acquire texture images of various faces in advance to construct a texture image library, and may then select a number of texture images with better image quality from the texture image library as sample texture images.
In one embodiment, the computer device can randomly crop an image of a preset size from the face texture image to serve as a sample texture image, thereby achieving data augmentation. For example, a face texture image of size 1184x1020x3 may be randomly cropped to 1000x950x3 to obtain a sample texture image.
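A minimal sketch of this random-crop augmentation (NumPy assumed, HxWxC layout; the crop-size parameters mirror the 1000x950 example above):

```python
import numpy as np

def random_crop(texture, out_h=950, out_w=1000):
    # texture: HxWxC array; returns a random out_h x out_w crop.
    h, w = texture.shape[:2]
    top = np.random.randint(0, h - out_h + 1)
    left = np.random.randint(0, w - out_w + 1)
    return texture[top:top + out_h, left:left + out_w]
```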
Further, the computer device can fit each sample texture image according to the principal components corresponding to the series of sample texture images to obtain the corresponding sample fit texture image. And smoothing the sample fitting texture image to obtain a corresponding sample smooth texture image. For details of the specific technique of obtaining the sample smooth texture image according to the sample texture image, that is, details of how to perform the fitting process and the smoothing process, reference may be made to the detailed implementation in step S204 and step S206, which is not described herein again in this embodiment of the present application. It is understood that the sample texture image is a true high-definition texture image, while the sample smooth texture image is a low-definition texture image.
S604, inputting the sample smooth texture image into a generator to be trained for generating a countermeasure network, performing texture enhancement processing through the generator, and outputting a corresponding prediction target texture image.
In particular, when training the generation countermeasure network, the computer device may input the sample smooth texture image into the generator G of the generation countermeasure network to be trained. The generator G performs texture enhancement processing on the sample smooth texture image and outputs a prediction target texture image of the same resolution.
And S606, forming a sample input pair by the sample smooth texture image and the sample texture image or the prediction target texture image, and inputting the sample input pair into a discriminator to be trained for generating the countermeasure network to obtain output probability.
Specifically, the computer device forms a sample input pair from the sample smooth texture image and either the sample texture image or the prediction target texture image, and inputs the sample input pair into the discriminator to be trained of the generation countermeasure network. That is, the discriminator D receives two inputs: one is the low-definition sample smooth texture image, which may be denoted x, and the other is a texture image whose authenticity needs to be discriminated, i.e., either the prediction target texture image y output by the generator G (fake data) or the real high-definition sample texture image y_gt (true data). For each group of inputs, the discriminator D compares and discriminates the two inputs to obtain an output probability. It will be appreciated that the output probability may specifically be a probability value representing how likely the texture image to be discriminated is true or false. For example, a probability of 0 indicates true and a probability of 1 indicates false; or a probability of 1 indicates true and a probability of 0 indicates false. The output probability may also be a probability vector, such as (a, b), where the sum of a and b is 1, a represents the likelihood of being true and b represents the likelihood of being false; or a represents the likelihood of being false and b the likelihood of being true, and so on, which is not limited in the embodiments of the present application.
Referring to fig. 7, fig. 7 is a schematic diagram of a network architecture of the generation countermeasure network in one embodiment. As shown in fig. 7, the generation countermeasure network includes a generator G and a discriminator D. The computer device inputs the low-definition texture image (i.e., the sample smooth texture image in the embodiment of the present application) into the generator G, which outputs the texture-enhanced high-definition texture image, i.e., the prediction target texture image in the embodiment of the present application. Further, the computer device may form the high-definition texture image and the low-definition texture image into a sample input pair and feed it to the discriminator D, whose goal is to judge the pair and output an output probability corresponding to "false". The computer device also forms the ground-truth texture image (i.e., the real sample texture image) and the low-definition texture image into a sample input pair and feeds it to the discriminator D, whose goal is to judge the pair and output an output probability corresponding to "true".
In one embodiment, the discriminator D may comprise 5 convolution layers: 4×4 conv, stride 2, 64 channels; 4×4 conv, stride 2, 128 channels; 4×4 conv, stride 2, 256 channels; 4×4 conv, stride 2, 512 channels; and 4×4 conv, stride 2, 1 channel. Of course, in other embodiments, the discriminator may include more or fewer convolution layers, which is not limited in the embodiments of the present application.
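For concreteness, a PyTorch sketch of such a 5-layer discriminator follows; concatenating the sample pair along the channel axis and using LeakyReLU between layers are assumptions, not specified by the original:

```python
import torch.nn as nn

def build_discriminator(in_channels=6):     # 3-channel x concatenated with y
    layers, chans = [], [in_channels, 64, 128, 256, 512]
    for c_in, c_out in zip(chans[:-1], chans[1:]):
        layers += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
    layers.append(nn.Conv2d(512, 1, kernel_size=4, stride=2, padding=1))
    return nn.Sequential(*layers)
```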
And S608, constructing a resistance loss function according to the output probability corresponding to each sample input pair.
In one embodiment, the computer device may construct the countermeasure loss function from the expectation of the output probability corresponding to each sample input pair. The countermeasure loss function may specifically be expressed as: L_cGAN(G, D) = E_{x,y_gt}[log D(x, y_gt)] + E_{x,y}[log(1 - D(x, y))], where x represents the sample smooth texture image; y_gt represents the sample texture image; D(x, y_gt) represents the output of the discriminator after performing discrimination processing on the sample pair composed of the sample smooth texture image and the sample texture image; y represents the prediction target texture image; D(x, y) represents the output of the discriminator after performing discrimination processing on the sample pair composed of the sample smooth texture image and the prediction target texture image; log represents the logarithm operation; and E denotes the expectation operation.
S610, constructing a reconstruction loss function according to the difference between the sample texture image and the prediction target texture image.
In particular, the computer device may construct a reconstruction loss function from differences of the sample texture image and the predicted target texture image. The difference between the sample texture image and the prediction target texture image can be specifically represented by the euclidean distance between the sample texture image and the prediction target texture image, but may also be measured in other manners, for example, the similarity or the norm of the two images.
In one embodiment, the computer device may calculate the reconstruction loss function by: L_L1(G) = E_{x,y_gt}[ ||y_gt - y||_1 ], where y_gt represents the sample texture image; y represents the prediction target texture image; ||y_gt - y||_1 represents the L1 norm of the difference between the sample texture image and the prediction target texture image; and E denotes the expectation operation.
And S612, adjusting the network parameters of the generation countermeasure network to be trained based on the countermeasure loss function and the reconstruction loss function, and continuing training until the training stop condition is met, so as to obtain the trained generation countermeasure network.
The training stopping condition is a condition for stopping network training, and the training stopping condition may be that a preset number of iterations is reached, or that a network performance index of the generated countermeasure network after network parameters are adjusted reaches a preset index.
Specifically, the computer device may construct an objective function from the countermeasure loss function and the reconstruction loss function, adjust the network parameters through the objective function, and return to step S602 to obtain different sample texture images and their corresponding sample smooth texture images for further training, until the training stop condition is met and training ends, yielding the trained generation countermeasure network.
In one embodiment, the computer device may construct the objective function by the following formula: G* = arg min_G max_D L_cGAN(G, D) + λ·L_L1(G), where λ is a weighting coefficient. Specifically, the computer device may adjust the network parameters so that the countermeasure loss function L_cGAN(G, D) is maximized during the current training round, obtaining the network parameters of the current discriminator. The computer device can then freeze the network parameters of the discriminator and adjust the network parameters of the generator so that the objective function G* is minimized, obtaining the network parameters of the generator for the current training round. In this way, the process keeps returning to step S602 to input samples and adjust the network parameters according to the countermeasure loss function, the reconstruction loss function and the objective function, terminating training when the training stop condition is met to obtain the trained generation countermeasure network. In one embodiment, based on practical experience, the computer device may set λ = 100 and the learning rate to 0.0002 during training of the generation countermeasure network.
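The alternating scheme above can be sketched as follows (PyTorch assumed; D is assumed to output logits, and the binary cross-entropy formulation of the countermeasure loss is an implementation choice, not quoted from the original):

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, x, y_gt, lam=100.0):
    # --- discriminator step: maximize L_cGAN (minimize its negative) ---
    y = G(x).detach()
    d_real = D(torch.cat([x, y_gt], dim=1))
    d_fake = D(torch.cat([x, y], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator step (D frozen): minimize the L_cGAN term + lam * L_L1 ---
    y = G(x)
    d_fake = D(torch.cat([x, y], dim=1))
    loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + lam * F.l1_loss(y, y_gt))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

The optimizers would be created once outside the loop, e.g. opt_g = torch.optim.Adam(G.parameters(), lr=0.0002) and likewise for D, matching the learning rate mentioned above (the choice of Adam here is an assumption).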
In the above embodiment, the generation countermeasure network is trained by the sample smooth texture image and the real sample texture image, so that the generator in the generation countermeasure network learns high-definition texture information, and thus a high-definition target texture image can be generated based on a low-definition sample smooth texture image.
In one embodiment, the texture enhancement method based on the texture image further includes a step of constructing an avatar image, which specifically includes: acquiring a virtual role model; and attaching the target texture image to the face of the virtual character model to obtain a corresponding virtual character image.
The virtual character is a virtual object which is realized by data and can be stored in a computer device, and specifically can be a virtual character, such as a game character or an animation character. The virtual character model is used for realizing the presentation of virtual characters, and when the computer equipment runs the virtual character model, the computer equipment can display the virtual characters in the display. The virtual character model may be a three-dimensional virtual character model, and is used to present a three-dimensional virtual character image.
Specifically, after the source texture image is converted into a low-noise target texture image with rich texture details by the texture image-based texture enhancement method mentioned in the embodiment of the present application, the target texture image may be attached to the face of the virtual character model by UV mapping, so as to obtain a corresponding virtual character image.
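Purely as an illustration of this UV attaching step (the trimesh library, the file paths and the presence of UV coordinates on the model are all assumptions, not part of the original):

```python
import trimesh
from PIL import Image

def apply_face_texture(model_path, texture_path):
    mesh = trimesh.load(model_path, process=False)   # virtual character model
    texture = Image.open(texture_path)               # target texture image
    # Reuse the model's existing UV coordinates; swap in the enhanced texture.
    mesh.visual = trimesh.visual.TextureVisuals(uv=mesh.visual.uv, image=texture)
    return mesh
```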
In one embodiment, in some applications, such as game face customization ("face pinching") applications, obtaining a high-definition realistic texture image depends on artists' design and retouching, and manually retouching texture images is time-consuming and labor-intensive. The texture enhancement method based on the texture image provided in the embodiments of the present application can automatically convert a low-quality texture image into a high-definition target texture image, reducing this manual work.
In the above embodiment, a virtual character image with a specific high-definition, realistic face texture can be obtained by attaching the target texture image to the face of the virtual character model.
The application also provides an application scene, and the application scene applies the texture enhancement method based on the texture image. Specifically, the application of the texture enhancement method based on the texture image in the application scene is as follows:
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a texture enhancement method based on a texture image according to an embodiment. The computer device may collect two-dimensional face images corresponding to different faces and generate corresponding face texture images from the two-dimensional face images. The computer device may then perform PCA-based texture synthesis on a face texture image. In the fitting process, the texture image may be locally fitted through six local PCA bases: cheek, eyes, mouth, eyebrows, forehead and chin. The local fitting texture maps of the different component areas are overlaid through masks to obtain a face fitting texture image covered by all the PCA bases (also called a low-definition face fitting texture image). Then, for the hair, the environment background and other non-critical areas, a basic face texture image is selected as the general background of the synthesized image, and the background texture image and the face region of the face fitting texture image are fused by Laplacian image fusion according to the background mask corresponding to the background area, obtaining a low-definition fitted texture image. Gaussian smoothing is further performed on the low-definition fitted texture image to obtain the corresponding smooth texture image. The smooth texture image is input to the generator of the trained generation countermeasure network, which outputs a texture-enhanced high-definition target texture image.
It should be understood that although the various steps in the flowcharts of fig. 2-6 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a texture enhancing apparatus 900 based on texture image, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, and specifically includes: an obtaining module 901, a fitting module 902, a smoothing processing module 903 and a texture enhancing module 904, wherein:
an obtaining module 901, configured to obtain a source texture image to be processed.
A fitting module 902, configured to perform fitting processing on the source texture image based on a principal component of the sample texture image corresponding to the source texture image to obtain a corresponding fitted texture image; and the main component of the sample texture image is obtained by performing main component analysis on the sample texture image.
A smoothing module 903, configured to smooth the fitted texture image to obtain a corresponding smooth texture image; the smoothing processing is used for carrying out noise suppression on the fitted texture image.
And a texture enhancement module 904, configured to perform texture enhancement processing on the smooth texture image through a generator in the generation countermeasure network, and output a target texture image corresponding to the source texture image.
In one embodiment, the obtaining module 901 is specifically configured to obtain more than one face image obtained by scanning the face of the target object at different angles; acquiring texture setting information corresponding to a target scene; and according to the texture setting information, performing fusion processing on more than one face image to obtain a source texture image corresponding to the target object.
In one embodiment, the obtaining module 901 is specifically configured to determine facial morphology information and illumination information corresponding to a target object by analyzing more than one facial image; determining pixel points corresponding to the position points in the three-dimensional texture model when the position points are respectively mapped to the facial image according to the facial form information; and setting information according to the texture, and constructing a source texture image corresponding to the target object by combining a three-dimensional texture model based on the determined color information and the corresponding illumination information of the pixel points.
In one embodiment, the fitting module 902 is specifically configured to determine a face region and a background region included in the source texture image; fitting the face region based on the principal component of the sample texture image corresponding to the source texture image to obtain a corresponding face fitting texture image; and fusing the face fitting texture image and the background texture image corresponding to the background area to obtain a fitting texture image corresponding to the source texture image.
In one embodiment, the sample texture image includes more than one set of component sample texture images. The fitting module 902 is specifically configured to determine more than one component region included in the face region; determining local fitting texture images corresponding to the corresponding component areas based on component principal components of the component sample texture images respectively corresponding to the component areas; and fusing the corresponding local fitting texture images according to the local masks corresponding to the parts of the regions to obtain the face fitting texture images corresponding to the face regions.
In an embodiment, the fitting module 902 is specifically configured to obtain a component principal component obtained by performing principal component analysis on a component sample texture image corresponding to each component region, and a component sample mean value corresponding to each group of component sample texture images; constructing a corresponding local fitting texture image function through the main component of each part, the local fitting parameter to be adjusted and the sample mean value of each part corresponding to each part area; constructing a first target loss function according to each local fitting texture image function and the source texture image; continuously adjusting the value of the local fitting parameter to minimize the first target loss function, and stopping when a stopping condition is met to obtain local fitting target values respectively corresponding to each part area; and respectively substituting the local fitting target values respectively corresponding to each part area into the corresponding local fitting texture image function to obtain the local fitting texture image corresponding to each part area.
In one embodiment, the fitting module 902 is specifically configured to construct an image loss function according to a difference between a sum of the local fitted texture image functions and the source texture image; determining a fitting texture image function according to each local fitting texture image function, and constructing a corresponding smooth loss function based on the difference of each pixel in the current fitting texture image determined by the fitting texture image function; a weighted sum function of the image loss function and the smoothing loss function is used as a first target loss function.
In an embodiment, the fitting module 902 is specifically configured to obtain a face principal component obtained after performing principal component analysis on a sample texture image corresponding to a source texture image, and a face sample mean value corresponding to the sample texture image; constructing a corresponding face fitting texture image function through the face main component, the face fitting parameters to be adjusted and the face sample mean value; constructing a second target loss function according to the face fitting texture image function and the source texture image; continuously adjusting the value of the face fitting parameter to minimize the second target loss function, and stopping when a stopping condition is met to obtain a face fitting target value; and substituting the face fitting target value into a face fitting texture image function to obtain a face fitting texture image corresponding to the source texture image.
In one embodiment, the fitting module 902 is specifically configured to determine a face mask corresponding to the face-fitted texture image, and construct a corresponding gaussian pyramid based on the face mask; constructing a corresponding first Laplacian pyramid based on the face fitting texture image, and constructing a corresponding second Laplacian pyramid based on the background texture image corresponding to the background area; fusing images of corresponding layers in the first Laplacian pyramid and the second Laplacian pyramid into corresponding fused images through the face mask image of each layer of the Gaussian pyramid; and from the fused top-layer fused image, sampling layer by layer, superposing the next layer of fused image until the next layer of fused image is superposed to the bottom-layer fused image, and outputting the superposed fitted texture image.
In one embodiment, the smoothing module 903 is specifically configured to cut out an intermediate texture image including a face region from the fitted texture image according to a preset format; carrying out smoothing operation and normalization processing on the intermediate texture image to obtain a corresponding smooth texture image; wherein the smoothing operation is used for noise suppression of the intermediate texture image.
In one embodiment, the texture image-based texture enhancement apparatus 900 further includes a training module 905 configured to obtain a sample texture image and a sample smooth texture image corresponding to the sample texture image; inputting the sample smooth texture image into a generator to be trained for generating a countermeasure network, performing texture enhancement processing through the generator, and outputting a corresponding prediction target texture image; forming a sample input pair by the sample smooth texture image and the sample texture image or the prediction target texture image, and inputting the sample input pair into a discriminator which is to be trained and generates a confrontation network to obtain output probability; constructing a countermeasure loss function according to the output probability corresponding to each sample input pair; constructing a reconstruction loss function according to the difference between the sample texture image and the predicted target texture image; and adjusting the network parameters of the generated confrontation network to be trained based on the confrontation loss function and the reconstruction loss function, and continuously training until the conditions for stopping training are met to obtain the trained generated confrontation network.
Referring to FIG. 10, in one embodiment, the texture image-based texture enhancement apparatus 900 further comprises an attaching module 906 for obtaining a virtual character model; and attaching the target texture image to the face of the virtual character model to obtain a corresponding virtual character image.
The texture enhancement device based on the texture image fits the source texture image to obtain a corresponding fitted texture image based on the principal component of the sample texture image corresponding to the source texture image. Wherein the principal component of the sample texture image is obtained by performing principal component analysis on a series of sample texture images. In this way, partial image noise, such as shadow and illumination information, can be eliminated from the fitted texture image, while most of the valid image information is retained. Further, the fitted texture image is smoothed to obtain a smoothed texture image in which high-frequency noise such as speckle, pigment, or mole is suppressed. And performing texture enhancement processing on the smooth texture image through a generator in a trained generation countermeasure network, reconstructing and generating real pore and hair textures while keeping the original low-definition texture, and obtaining a target texture image. Therefore, the low-quality source texture image can be converted into the target texture image with high definition and rich detail information under the condition of unchanged resolution, the generated target texture image is ensured to keep the original texture information, more texture details are enhanced, and the texture enhancement effect is greatly improved.
For specific limitations of the texture enhancing device based on the texture image, reference may be made to the above limitations of the texture enhancing method based on the texture image, and details thereof are not repeated here. The respective modules in the texture enhancing device based on texture image may be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, and the computer device may be specifically a terminal or a server, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing texture image data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a texture enhancement method based on a texture image.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (15)

1. A method of texture enhancement based on a texture image, the method comprising:
acquiring a source texture image to be processed;
fitting the source texture image based on the main component of the sample texture image corresponding to the source texture image to obtain a corresponding fitted texture image; the main component of the sample texture image is obtained by performing main component analysis on the sample texture image;
smoothing the fitted texture image to obtain a corresponding smooth texture image; the smoothing processing is used for carrying out noise suppression on the fitted texture image;
and performing texture enhancement processing on the smooth texture image through a generator in a generation countermeasure network, and outputting a target texture image corresponding to the source texture image.
2. The method of claim 1, wherein the obtaining the source texture image to be processed comprises:
acquiring more than one face image obtained by scanning the face of a target object at different angles;
acquiring texture setting information corresponding to a target scene;
and according to the texture setting information, carrying out fusion processing on the more than one face images to obtain a source texture image corresponding to the target object.
3. The method according to claim 2, wherein the performing the fusion processing on the more than one face images according to the texture setting information to obtain a source texture image corresponding to the target object comprises:
determining facial morphology information and illumination information corresponding to the target object by analyzing the more than one facial image;
determining pixel points corresponding to the position points in the three-dimensional texture model when the position points are respectively mapped to the face image according to the face shape information;
and according to the texture setting information, and based on the determined color information and the corresponding illumination information of the pixel points, combining the three-dimensional texture model to construct a source texture image corresponding to the target object.
4. The method according to claim 1, wherein the fitting the source texture image based on the principal component of the sample texture image corresponding to the source texture image to obtain a corresponding fitted texture image comprises:
determining a face region and a background region included in the source texture image;
fitting the face region based on the principal component of the sample texture image corresponding to the source texture image to obtain a corresponding face fitting texture image;
and fusing the face fitting texture image with the background texture image corresponding to the background area to obtain a fitting texture image corresponding to the source texture image.
5. The method of claim 4, wherein the sample texture image comprises more than one set of component sample texture images; the fitting the face region based on the principal component of the sample texture image corresponding to the source texture image to obtain a corresponding face fitting texture image includes:
determining more than one component region included in the face region;
determining local fitting texture images corresponding to the corresponding component areas based on component principal components of the component sample texture images respectively corresponding to the component areas;
and fusing the corresponding local fitting texture images according to the local masks corresponding to the parts of the regions to obtain the face fitting texture images corresponding to the face regions.
6. The method according to claim 5, wherein determining a local fitting texture image corresponding to each component region based on the component principal component of the component sample texture image corresponding to each component region comprises:
acquiring a component principal component obtained by performing principal component analysis on a component sample texture image corresponding to each component area and a component sample mean value corresponding to each group of component sample texture images;
constructing a corresponding local fitting texture image function through the main component of each part corresponding to each part area, the local fitting parameter to be adjusted and the part sample mean value;
constructing a first target loss function according to each local fitting texture image function and the source texture image;
continuously adjusting the value of the local fitting parameter to minimize the first target loss function, and stopping when a stopping condition is met to obtain local fitting target values respectively corresponding to each part area;
and respectively substituting the local fitting target values respectively corresponding to each part area into the corresponding local fitting texture image function to obtain the local fitting texture image corresponding to each part area.
7. The method of claim 6, wherein constructing a first target loss function from each locally fitted texture image function and the source texture image comprises:
constructing an image loss function according to the difference between the sum of the local fitting texture image functions and the source texture image;
determining a fitting texture image function according to each local fitting texture image function, and constructing a corresponding smooth loss function based on the difference of each pixel in the current fitting texture image determined by the fitting texture image function;
and taking the weighted sum function of the image loss function and the smooth loss function as a first target loss function.
8. The method according to claim 4, wherein the fitting the face region based on a principal component of a sample texture image corresponding to the source texture image to obtain a corresponding face-fitting texture image comprises:
acquiring a face principal component obtained after principal component analysis is carried out on a sample texture image corresponding to the source texture image and a face sample mean value corresponding to the sample texture image;
constructing a corresponding face fitting texture image function through the face main component, the face fitting parameters to be adjusted and the face sample mean value;
constructing a second target loss function according to the face fitting texture image function and the source texture image;
continuously adjusting the value of the face fitting parameter to minimize the second target loss function, and stopping when a stopping condition is met to obtain a face fitting target value;
and substituting the face fitting target value into the face fitting texture image function to obtain a face fitting texture image corresponding to the source texture image.
9. The method according to claim 4, wherein the fusing the face-fit texture image with the background texture image corresponding to the background region to obtain a fit texture image corresponding to the source texture image comprises:
determining a face mask corresponding to the face fitting texture image, and constructing a corresponding Gaussian pyramid based on the face mask;
constructing a corresponding first Laplacian pyramid based on the face fitting texture image, and constructing a corresponding second Laplacian pyramid based on the background texture image corresponding to the background area;
fusing images of corresponding layers in the first Laplacian pyramid and the second Laplacian pyramid into corresponding fused images through the face mask image of each layer of the Gaussian pyramid;
and from the fused top-layer fused image, sampling layer by layer, superposing the next layer of fused image until the next layer of fused image is superposed to the bottom-layer fused image, and outputting the superposed fitted texture image.
10. The method according to claim 1, wherein the smoothing the fitted texture image to obtain a corresponding smoothed texture image comprises:
cutting out a middle texture image comprising a face area from the fitting texture image according to a preset format;
carrying out smoothing operation and normalization processing on the intermediate texture image to obtain a corresponding smooth texture image; wherein the smoothing operation is used to noise suppress the intermediate texture image.
11. The method of claim 1, wherein generating the countermeasure network comprises a generator and an arbiter; the training step of generating the countermeasure network includes:
obtaining a sample texture image and a sample smooth texture image corresponding to the sample texture image;
inputting the sample smooth texture image into a generator to be trained for generating a countermeasure network, performing texture enhancement processing through the generator, and outputting a corresponding prediction target texture image;
forming a sample input pair by the sample smooth texture image and the sample texture image or the prediction target texture image, and inputting the sample input pair into the arbiter for generating the confrontation network to be trained to obtain an output probability;
constructing a countermeasure loss function according to the output probability corresponding to each sample input pair;
constructing a reconstruction loss function according to the difference between the sample texture image and the prediction target texture image;
and adjusting the network parameters of the generated countermeasure network to be trained based on the countermeasure loss function and the reconstruction loss function, and continuously training until the training stopping condition is met, so as to obtain the trained generated countermeasure network.
12. The method according to any one of claims 1 to 11, further comprising:
acquiring a virtual role model;
and attaching the target texture image to the face of the virtual character model to obtain a corresponding virtual character image.
13. An apparatus for texture enhancement based on a texture image, the apparatus comprising:
the acquisition module is used for acquiring a source texture image to be processed;
the fitting module is used for fitting the source texture image based on the main component of the sample texture image corresponding to the source texture image to obtain a corresponding fitting texture image; the main component of the sample texture image is obtained by performing main component analysis on the sample texture image;
the smoothing processing module is used for smoothing the fitted texture image to obtain a corresponding smooth texture image; the smoothing processing is used for carrying out noise suppression on the fitted texture image;
and the texture enhancement module is used for performing texture enhancement processing on the smooth texture image through a generator in the trained generation countermeasure network and outputting a target texture image corresponding to the source texture image.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 12.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN202010224186.4A 2020-03-26 2020-03-26 Texture enhancement method, device and equipment based on texture image and storage medium Active CN111445410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010224186.4A CN111445410B (en) 2020-03-26 2020-03-26 Texture enhancement method, device and equipment based on texture image and storage medium

Publications (2)

Publication Number Publication Date
CN111445410A true CN111445410A (en) 2020-07-24
CN111445410B CN111445410B (en) 2022-09-27

Family

ID=71650874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010224186.4A Active CN111445410B (en) 2020-03-26 2020-03-26 Texture enhancement method, device and equipment based on texture image and storage medium

Country Status (1)

Country Link
CN (1) CN111445410B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910247A (en) * 2017-03-20 2017-06-30 厦门幻世网络科技有限公司 Method and apparatus for generating three-dimensional head portrait model
CN109993698A (en) * 2019-03-29 2019-07-09 西安工程大学 A kind of single image super-resolution texture Enhancement Method based on generation confrontation network
CN110503625A (en) * 2019-07-02 2019-11-26 杭州电子科技大学 A kind of cmos image signal dependent noise method for parameter estimation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
圣空老宅: "3DMM and eos face reconstruction" (3DMM及eos人脸重建), HTTPS://BLOG.CSDN.NET/ZHAISHENGFU/ARTICLE/DETAILS/103504003 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738914A (en) * 2020-07-29 2020-10-02 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111738914B (en) * 2020-07-29 2023-09-12 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
WO2022062048A1 (en) * 2020-09-24 2022-03-31 苏州科瓴精密机械科技有限公司 Roughness compensation method and system, image processing device, and readable storage medium
CN112465935A (en) * 2020-11-19 2021-03-09 科大讯飞股份有限公司 Virtual image synthesis method and device, electronic equipment and storage medium
CN112488974A (en) * 2020-12-09 2021-03-12 广州品唯软件有限公司 Image synthesis method, image synthesis device, computer equipment and storage medium
CN112652004A (en) * 2020-12-31 2021-04-13 珠海格力电器股份有限公司 Image processing method, device, equipment and medium
CN112652004B (en) * 2020-12-31 2024-04-05 珠海格力电器股份有限公司 Image processing method, device, equipment and medium
CN112712481A (en) * 2021-01-11 2021-04-27 中国科学技术大学 Structure-texture sensing method aiming at low-light image enhancement
CN112712481B (en) * 2021-01-11 2022-09-02 中国科学技术大学 Structure-texture sensing method aiming at low-light image enhancement
WO2022205755A1 (en) * 2021-03-31 2022-10-06 深圳市慧鲤科技有限公司 Texture generation method and apparatus, device, and storage medium
CN112950739A (en) * 2021-03-31 2021-06-11 深圳市慧鲤科技有限公司 Texture generation method, device, equipment and storage medium
CN113034355A (en) * 2021-04-20 2021-06-25 浙江大学 Portrait image double-chin removing method based on deep learning
CN113177879A (en) * 2021-04-30 2021-07-27 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN114821030A (en) * 2022-04-11 2022-07-29 苏州振旺光电有限公司 Planet image processing method, system and device
CN114820908A (en) * 2022-06-24 2022-07-29 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN114820908B (en) * 2022-06-24 2022-11-01 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN115393487A (en) * 2022-10-27 2022-11-25 科大讯飞股份有限公司 Virtual character model processing method and device, electronic equipment and storage medium
CN115393487B (en) * 2022-10-27 2023-05-12 科大讯飞股份有限公司 Virtual character model processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111445410B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN111445410B (en) Texture enhancement method, device and equipment based on texture image and storage medium
CN109859098B (en) Face image fusion method and device, computer equipment and readable storage medium
CN111445564B (en) Face texture image generation method, device, computer equipment and storage medium
CN109952594B (en) Image processing method, device, terminal and storage medium
US9142054B2 (en) System and method for changing hair color in digital images
CN110363116B (en) Irregular human face correction method, system and medium based on GLD-GAN
CN111696028A (en) Method and device for cartoonizing real-scene images, computer equipment and storage medium
CN111192201B (en) Method and device for generating face image and training model thereof, and electronic equipment
Galteri et al. Deep 3D morphable model refinement via progressive growing of conditional generative adversarial networks
CN112581370A (en) Training and reconstruction method of super-resolution reconstruction model of face image
CN110853119A (en) Robust reference picture-based makeup migration method
Banerjee et al. Fast face image synthesis with minimal training
Sandić-Stanković et al. Quality assessment of DIBR-synthesized views based on sparsity of difference of closings and difference of Gaussians
CN111275804B (en) Image illumination removing method and device, storage medium and computer equipment
CN114862729A (en) Image processing method, image processing device, computer equipment and storage medium
Gupta et al. A robust and efficient image de-fencing approach using conditional generative adversarial networks
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
CN115393471A (en) Image processing method and device and electronic equipment
Hamdan et al. Example-based face-image restoration for block-noise reduction
Lumentut et al. Human motion deblurring using localized body prior
Zhai et al. Joint gaze correction and face beautification for conference video using dual sparsity prior
Yu et al. Facial video coding/decoding at ultra-low bit-rate: a 2D/3D model-based approach
Cherian et al. Image Augmentation Using Hybrid RANSAC Algorithm
Luo et al. Frontal face reconstruction based on detail identification, variable scale self-attention and flexible skip connection
CN110751078B (en) Method and equipment for determining non-skin color region of three-dimensional face

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40025850
Country of ref document: HK

GR01 Patent grant