CN115713585B - Texture image reconstruction method, apparatus, computer device and storage medium

Info

Publication number: CN115713585B
Authority: CN (China)
Prior art keywords: texture image, texture, image, target, enhanced
Legal status: Active (granted)
Application number: CN202310013253.1A
Other languages: Chinese (zh)
Other versions: CN115713585A
Inventor: 徐东
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd, with priority to CN202310013253.1A
Published as CN115713585A; granted and published as CN115713585B

Landscapes

  • Image Generation (AREA)

Abstract

The present application relates to a texture image reconstruction method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: respectively carrying out frequency decomposition on the target texture image and the texture image set corresponding to the target texture image to obtain a first target texture image and a second target texture image corresponding to the target texture image and a first texture image set and a second texture image set corresponding to the texture image set; the frequency corresponding to the first texture image set is smaller than the frequency corresponding to the second texture image set, and the frequency corresponding to the first target texture image is smaller than the frequency corresponding to the second target texture image; performing image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image; performing image enhancement on the second target texture image based on the second texture image set to obtain a second enhanced texture image; and fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image, thereby improving the quality of texture image reconstruction.

Description

Texture image reconstruction method, apparatus, computer device and storage medium
Technical Field
The present invention relates to the field of image processing technology, and in particular, to a texture image reconstruction method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of computer technology, great changes have taken place in people's work and life. For example, animation production, game production and the like were originally performed based on planar objects, but as technology develops, more and more scenes support three-dimensional objects, and accordingly the quality requirements on texture images become higher and higher.
In the conventional art, a large number of texture image samples need to be collected to train a machine learning model for optimizing texture images. However, the training effect of such a model is closely related to its training samples; because individual texture images differ greatly, it is difficult to train a model with excellent characterization ability or a universally applicable model, so the reconstruction quality of texture images is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a texture image reconstruction method, apparatus, computer device, computer readable storage medium, and computer program product that are capable of improving the reconstruction quality of texture images.
The application provides a texture image reconstruction method. The method comprises the following steps:
acquiring a target texture image and a texture image set corresponding to the target texture image; texture presented by the texture image in the texture image set and texture presented by the target texture image are matched with each other, and the texture image set is obtained based on texture images with different resolutions;
respectively carrying out frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image; the frequency corresponding to the first texture image set is smaller than the frequency corresponding to the second texture image set, and the frequency corresponding to the first target texture image is smaller than the frequency corresponding to the second target texture image;
performing image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image;
performing image enhancement on the second target texture image based on the second texture image set to obtain a second enhanced texture image corresponding to the second target texture image;
And fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image.
The application also provides a texture image reconstruction device. The device comprises:
the texture image acquisition module is used for acquiring a target texture image and a texture image set corresponding to the target texture image; texture presented by the texture image in the texture image set and texture presented by the target texture image are matched with each other, and the texture image set is obtained based on texture images with different resolutions;
the image decomposition module is used for respectively carrying out frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image; the frequency corresponding to the first texture image set is smaller than the frequency corresponding to the second texture image set, and the frequency corresponding to the first target texture image is smaller than the frequency corresponding to the second target texture image;
the first image enhancement module is used for carrying out image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image;
The second image enhancement module is used for carrying out image enhancement on the second target texture image based on the second texture image set to obtain a second enhanced texture image corresponding to the second target texture image;
and the image fusion module is used for fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image.
A computer device comprising a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the texture image reconstruction method described above when executing the computer program.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the texture image reconstruction method described above.
A computer program product comprising a computer program which, when executed by a processor, implements the steps of the texture image reconstruction method described above.
The texture image reconstruction method, the texture image reconstruction device, the computer equipment, the storage medium and the computer program product acquire a target texture image and a texture image set corresponding to the target texture image, wherein the textures presented by the texture images in the texture image set and the texture presented by the target texture image are matched with each other, the texture image set is obtained based on texture images with different resolutions, and the texture image set is used for reconstructing the texture of the target texture image. Frequency decomposition is performed on the texture image set and the target texture image respectively to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image; that is, through frequency decomposition, the texture image set can be decomposed into a first texture image set representing the low-frequency component of the image and a second texture image set representing the high-frequency component of the image, and the target texture image is decomposed into a first target texture image representing the low-frequency component of the image and a second target texture image representing the high-frequency component of the image. Image enhancement is performed on the first target texture image based on the first texture image set, so that a first enhanced texture image corresponding to the first target texture image can be obtained, which is equivalent to enhancing the low-frequency components in the target texture image, for example enhancing illumination. Image enhancement is performed on the second target texture image based on the second texture image set, so that a second enhanced texture image corresponding to the second target texture image can be obtained, which is equivalent to enhancing the high-frequency components in the target texture image, for example enhancing texture details. The first enhanced texture image and the second enhanced texture image are fused to obtain a reconstructed texture image corresponding to the target texture image. In this way, a low-quality target texture image can be converted into a reconstructed texture image with high definition and rich detail information; the generated reconstructed texture image retains the original texture information while gaining definition and more texture details, so the reconstruction quality of the texture image is greatly improved.
Drawings
FIG. 1 is a diagram of an application environment of a texture image reconstruction method in one embodiment;
FIG. 2 is a flow chart of a texture image reconstruction method according to an embodiment;
FIG. 3 is a flow diagram of image enhancement of a first target texture image based on a first texture image set in one embodiment;
FIG. 4 is a flow diagram of obtaining an intermediate texture feature map based on a first texture image set and a first target texture image in one embodiment;
FIG. 5 is a flow chart of a target texture feature map obtained by performing attention processing on an intermediate texture feature map in one embodiment;
FIG. 6 is a flow diagram of image enhancement of a second target texture image based on a second texture image set in one embodiment;
FIG. 7 is a flow chart of a texture image reconstruction method according to another embodiment;
FIG. 8 is a schematic diagram of a texture reconstruction model in one embodiment;
FIG. 9 is a block diagram of a texture image reconstruction device in one embodiment;
FIG. 10 is an internal block diagram of a computer device in one embodiment;
FIG. 11 is an internal structure diagram of a computer device in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The texture image reconstruction method provided by the embodiments of the present application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. A data storage system may store data that the server 104 needs to process; the data storage system may be integrated on the server 104, or may be located on the cloud or on other servers. The terminal 102 may be, but is not limited to, a desktop computer, notebook computer, smart phone, tablet computer, internet of things device, or portable wearable device, where the internet of things device may be a smart speaker, smart television, smart air conditioner, smart vehicle device, and the like, and the portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server, or as a server cluster or cloud server composed of a plurality of servers.
The terminal and the server can be used independently to execute the texture image reconstruction method provided in the embodiment of the application.
For example, the server acquires a target texture image and a texture image set corresponding to the target texture image, wherein the texture presented by the texture image in the texture image set and the texture presented by the target texture image are matched with each other, and the texture image set is obtained based on the texture images with different resolutions. The server respectively carries out frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image, wherein the frequency corresponding to the first texture image set is smaller than the frequency corresponding to the second texture image set, and the frequency corresponding to the first target texture image is smaller than the frequency corresponding to the second target texture image. The server performs image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image, performs image enhancement on the second target texture image based on the second texture image set to obtain a second enhanced texture image corresponding to the second target texture image, and fuses the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image.
The terminal and the server may also cooperate to perform the texture image reconstruction method provided in the embodiments of the present application.
For example, the server acquires a target texture image matched with the terminal, and acquires a texture image set corresponding to the target texture image. The server respectively carries out frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image. The server performs image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image, performs image enhancement on the second target texture image based on the second texture image set to obtain a second enhanced texture image corresponding to the second target texture image, and fuses the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image. The server sends the reconstructed texture image to the terminal. The terminal can render and display the reconstructed texture image.
In one embodiment, as shown in fig. 2, a texture image reconstruction method is provided. The method is described by taking a computer device as an example; the computer device may be a terminal or a server, and the method may be executed by the terminal or the server alone, or implemented through interaction between the terminal and the server. Referring to fig. 2, the texture image reconstruction method includes the following steps:
Step S202, obtaining a target texture image and a texture image set corresponding to the target texture image; the texture presented by the texture image and the texture presented by the target texture image in the texture image set are matched with each other, and the texture image set is obtained based on the texture images with different resolutions.
The texture image is an image for characterizing the surface of an object. Texture images may also be referred to as texture maps; when mapped onto the surface of an object in a particular manner, they make the object look more realistic. In a three-dimensional scene, the texture image, also called a UV image, is the unwrapped surface image of a three-dimensional model. UV is an abbreviation for UV texture map coordinates, which define the position of each point on the image: U and V are the coordinates of the image in the horizontal and vertical directions respectively, and their values generally lie in the range 0-1. Each point in the UV image is associated with the three-dimensional model, so that the position of the surface texture map can be determined; that is, each point in the UV image corresponds precisely to a point on the surface of the model object, which makes it possible to construct a three-dimensional object. For example, a face texture image may be used to generate a three-dimensional face, and a hair texture image may be used to generate three-dimensional hair.
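As a small, hypothetical illustration of how a UV coordinate addresses a point on a texture image (the helper below is not part of the patent, and the vertical-flip convention varies between rendering engines):

```python
import numpy as np

def sample_uv(texture: np.ndarray, u: float, v: float) -> np.ndarray:
    """Nearest-neighbor lookup of the texel addressed by UV coordinates.

    u and v lie in [0, 1]; v is flipped here because image row 0 is
    usually the top of the image while v = 0 often addresses the bottom.
    """
    h, w = texture.shape[:2]
    x = min(int(u * (w - 1) + 0.5), w - 1)          # horizontal (U) direction
    y = min(int((1.0 - v) * (h - 1) + 0.5), h - 1)  # vertical (V) direction
    return texture[y, x]

# e.g. the texel mapped to a mesh vertex with UV coordinates (0.25, 0.75)
texture = np.zeros((512, 512, 3), dtype=np.uint8)
print(sample_uv(texture, 0.25, 0.75))
```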
The target texture image refers to the texture image to be enhanced and reconstructed, and may be any texture image. The texture presented by each texture image in the texture image set and the texture presented by the target texture image are matched with each other; that is, the texture images in the texture image set have textures corresponding to and matching that of the target texture image. Textures presented by different texture images matching each other means that the image similarity between the different texture images is greater than a preset similarity. In one embodiment, mutually matching textures refer to the same texture; that is, the texture image set corresponding to the target texture image includes texture images having the same texture as the target texture image. The texture image set corresponding to the target texture image is obtained based on texture images with different resolutions. For matching textures (e.g., the same texture), texture images of different resolutions are adapted to different devices and are used for rendering presentations on those devices. In one embodiment, in order to facilitate subsequent data processing, a plurality of initial texture images having textures matching the target texture image may first be acquired, different initial texture images corresponding to different resolutions; the resolutions of the initial texture images are then raised to the same resolution to obtain updated texture images, and the updated texture images form the texture image set.
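A minimal sketch of this set-building step, assuming OpenCV is available; the sizes and the cubic interpolation are illustrative choices, not prescribed by the patent:

```python
import cv2

def build_texture_image_set(initial_images, target_size):
    """Upscale initial texture images of different resolutions to one
    common resolution so they can be processed together as a set."""
    updated_images = []
    for image in initial_images:
        # INTER_CUBIC is one reasonable choice when enlarging images
        resized = cv2.resize(image, target_size, interpolation=cv2.INTER_CUBIC)
        updated_images.append(resized)
    return updated_images

# e.g. 128x128, 256x256 and 512x512 versions of the same texture,
# all raised to a common 1024x1024 resolution (sizes are illustrative):
# texture_image_set = build_texture_image_set(initial_images, (1024, 1024))
```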
It will be appreciated that a texture image set may be acquired first, and then any one texture image may be selected from the texture image set as the target texture image. Or the target texture image can be acquired first, and then the texture image set corresponding to the target texture image can be acquired.
Specifically, the computer device may obtain the target texture image and a texture image set corresponding to the target texture image locally or from other devices through a network, and further reconstruct the texture of the target texture image based on the texture image set corresponding to the target texture image, so as to convert the target texture image into a clearer reconstructed texture image with more details.
In one embodiment, the target texture image may be a texture image corresponding to the virtual object. The virtual object is an object which can be stored in the computer device through data, and the virtual object can specifically comprise at least one of a virtual character, a virtual animal, a virtual plant, a virtual object, and the like. The computer device may pre-build an object texture image library that includes various texture images required for the virtual object. The computer equipment can acquire at least one texture image from an object texture image library corresponding to the virtual object as a target texture image, and reconstruct textures of the target texture image based on a texture image set corresponding to the target texture image to obtain a reconstructed texture image corresponding to the target texture image. When a certain virtual object is displayed, the display effect of the virtual object can be effectively enhanced and the display quality can be improved by loading at least one reconstructed texture image corresponding to the virtual object.
In one embodiment, the target texture image may be a texture image corresponding to a virtual object in the game. The computer device may pre-build a library of game texture images including various texture images required for virtual objects in the game. The computer equipment can acquire at least one texture image from the game texture image library as a target texture image, and reconstruct textures of the target texture image based on a texture image set corresponding to the target texture image to obtain a reconstructed texture image corresponding to the target texture image. When any game is started, the display effect of the game picture can be effectively enhanced and the display quality can be improved by loading at least one reconstructed texture image required by the game.
In one embodiment, the target texture image may be a texture image of abnormal quality. The texture image may be evaluated for quality, and the image quality of the texture image may be determined. Texture reconstruction may not be required for texture images that meet quality requirements, e.g., high quality texture images. For texture images which do not meet the quality requirement, for example, texture images with insufficient illumination, the texture reconstruction mode can be adopted to improve the image quality.
Step S204, respectively carrying out frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image; the frequency corresponding to the first texture image set is smaller than the frequency corresponding to the second texture image set, and the frequency corresponding to the first target texture image is smaller than the frequency corresponding to the second target texture image.
Wherein the frequency decomposition is used to decompose the image into a high frequency part and a low frequency part. The high frequency part of an image may also be referred to as the high frequency component of the image, which refers to a place where the intensity (brightness or gray scale) of the image varies drastically, for example, an edge, a contour, etc. of the image. The high frequency component of the image is used to characterize the detail information and local information of the image. The low frequency portion of an image may also be referred to as the low frequency component of the image, which refers to where the intensity (brightness or gray scale) of the image changes smoothly, e.g., a low light dark area, background, etc. in the image. The low frequency component of the image is used to characterize the overall information, global information, of the image.
Frequency decomposition is performed on the texture image set to obtain a first texture image set and a second texture image set, wherein the frequency corresponding to the first texture image set is smaller than the frequency corresponding to the second texture image set; that is, the first texture image set corresponds to the low-frequency part of the texture image set, and the second texture image set corresponds to the high-frequency part of the texture image set. The texture image set comprises a plurality of texture images; the first texture image set comprises the first texture images respectively corresponding to the texture images in the texture image set, and the second texture image set comprises the second texture images respectively corresponding to the texture images in the texture image set.
Frequency decomposition is performed on the target texture image to obtain a first target texture image and a second target texture image, wherein the frequency corresponding to the first target texture image is smaller than the frequency corresponding to the second target texture image; that is, the first target texture image corresponds to the low-frequency part of the target texture image, and the second target texture image corresponds to the high-frequency part of the target texture image.
Specifically, the computer equipment carries out frequency decomposition on the texture image set to obtain a first texture image set and a second texture image set corresponding to the texture image set, wherein the frequency corresponding to the first texture image set is smaller than that corresponding to the second texture image set. The computer equipment carries out frequency decomposition on the target texture image to obtain a first target texture image and a second target texture image corresponding to the target texture image, wherein the frequency corresponding to the first target texture image is smaller than that corresponding to the second target texture image.
It will be appreciated that there are many ways to separate the high frequency and low frequency components from the image, for example, using filters.
Step S206, performing image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image.
Wherein, the image enhancement of the first target texture image refers to enhancing the low-frequency component of the target texture image so as to achieve the effect of enhancing global information of the target texture image, for example, enhancing illumination of the image. The first texture image set comprises low-frequency components of each texture image corresponding to the matched texture of the target texture image, the original resolutions of the texture images are different, and therefore the contained low-frequency information is different, and the low-frequency components contain more information than the low-frequency components of the target texture image and can supplement the low-frequency components of the target texture image. And carrying out image enhancement on the first target texture image based on the first texture image set, and supplementing the low-frequency component of the target texture image by utilizing the low-frequency components of the texture images with various resolutions, so as to obtain the first enhanced texture image. The first enhanced texture image refers to a first target texture image after image enhancement.
Specifically, in order to secure texture reconstruction quality, the low frequency component and the high frequency component may be enhanced separately in consideration of differences in image information reflected by the low frequency component and the high frequency component of the image. For the low-frequency component, the computer equipment performs image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image.
In one embodiment, the computer device performs convolution processing on the first texture image set and the first target texture image respectively to obtain initial texture feature images corresponding to the first texture image set and the first target texture image respectively, and obtains the first enhanced texture image corresponding to the first target texture image based on these initial texture feature images.
Step S208, performing image enhancement on the second target texture image based on the second texture image set to obtain a second enhanced texture image corresponding to the second target texture image.
Wherein, the image enhancement of the second target texture image refers to enhancing the high-frequency component of the target texture image to achieve the effect of enhancing the local information of the target texture image, for example, enhancing the details of the image. The second texture image set comprises high-frequency components of each texture image corresponding to the matched texture of the target texture image, the original resolutions of the texture images are different, and therefore the contained high-frequency information is different, the high-frequency components contain more information than the high-frequency components of the target texture image, and the high-frequency components of the target texture image can be supplemented. And carrying out image enhancement on the second target texture image based on the second texture image set, and supplementing the high-frequency components of the target texture image by utilizing the high-frequency components of the texture images with various resolutions, so as to obtain a second enhanced texture image. The second enhanced texture image refers to a second target texture image after image enhancement.
Specifically, for the high-frequency component, the computer device performs image enhancement on the second target texture image based on the second texture image set to obtain a second enhanced texture image corresponding to the second target texture image.
In one embodiment, the high frequency component may reflect some noise information of the image in addition to the contour information of the image. In order to better enhance the contour information, the high frequency component may be further enhanced with the aid of a low frequency component, which helps to determine the contour information of the image. Accordingly, the computer device performs image enhancement on the second target texture image based on the second texture image set and a reference texture image, resulting in a second enhanced texture image corresponding to the second target texture image, wherein the reference texture image comprises at least one of the first target texture image or the first enhanced texture image. In one embodiment, an average texture image is first obtained by performing a mean calculation on the second texture image set and the second target texture image. Then, a mask texture image is obtained based on the average texture image and the reference texture image; for example, the reference texture image and the average texture image are compared pixel by pixel, the pixel value of each pixel point in the average texture image that is smaller than the corresponding pixel value in the reference texture image is set to 1, and the pixel value of each pixel point that is larger or the same is set to 0, so that a binarized mask texture image is obtained. Finally, the mask texture image and the average texture image are fused to obtain the second enhanced texture image. In one embodiment, deriving the mask texture image based on the average texture image and the reference texture image comprises: performing stitching processing on the average texture image and the reference texture image to obtain a stitched texture image, and performing residual processing on the stitched texture image to obtain the mask texture image.
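A simplified NumPy sketch of the masking variant described above (the binarization rule follows the comparison just described; the element-wise fusion of mask and average is an assumption, and the residual-based mask variant is omitted):

```python
import numpy as np

def enhance_high_frequency(second_set, second_target, reference):
    """Mask-based enhancement of the high-frequency component.

    second_set:    list of high-frequency images from the texture image set
    second_target: high-frequency image of the target texture image
    reference:     low-frequency reference (first target texture image or
                   first enhanced texture image)
    """
    # Average texture image: mean over the set plus the target
    stack = np.stack(list(second_set) + [second_target], axis=0).astype(np.float32)
    average = stack.mean(axis=0)

    # Binarized mask: 1 where the average pixel is smaller than the
    # reference pixel, 0 where it is larger or the same
    mask = (average < reference).astype(np.float32)

    # Fuse mask and average (element-wise product is an assumed choice)
    return mask * average
```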
Step S210, fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image.
Specifically, the computer equipment fuses a first enhanced texture image and a second enhanced texture image obtained through image enhancement to obtain a reconstructed texture image corresponding to the target texture image. For example, the first enhanced texture image and the second enhanced texture image are added to obtain a reconstructed texture image. Compared with the target texture image, the reconstructed texture image has richer high-frequency information and low-frequency information on the basis of the same texture corresponding to the target texture image. The reconstructed texture image may be subsequently used for presentation, for example, from the original presentation target texture image to the presentation reconstructed texture image, to improve the presentation effect.
In the texture image reconstruction method, a target texture image and a texture image set corresponding to the target texture image are obtained, the textures presented by the texture images in the texture image set and the texture presented by the target texture image are matched with each other, the texture image set is obtained based on texture images with different resolutions, and the texture image set is used for reconstructing the texture of the target texture image. Frequency decomposition is performed on the texture image set and the target texture image respectively to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image; that is, through frequency decomposition, the texture image set can be decomposed into a first texture image set representing the low-frequency component of the image and a second texture image set representing the high-frequency component of the image, and the target texture image is decomposed into a first target texture image representing the low-frequency component of the image and a second target texture image representing the high-frequency component of the image. Image enhancement is performed on the first target texture image based on the first texture image set, so that a first enhanced texture image corresponding to the first target texture image can be obtained, which is equivalent to enhancing the low-frequency components in the target texture image, for example enhancing illumination. Image enhancement is performed on the second target texture image based on the second texture image set, so that a second enhanced texture image corresponding to the second target texture image can be obtained, which is equivalent to enhancing the high-frequency components in the target texture image, for example enhancing texture details. The first enhanced texture image and the second enhanced texture image are fused to obtain a reconstructed texture image corresponding to the target texture image. In this way, a low-quality target texture image can be converted into a reconstructed texture image with high definition and rich detail information; the generated reconstructed texture image retains the original texture information while gaining definition and more texture details, so the reconstruction quality of the texture image is greatly improved.
In one embodiment, frequency decomposition is performed on a texture image set and a target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set, and a first target texture image and a second target texture image corresponding to the target texture image, respectively, including:
carrying out Gaussian decomposition on the texture image set to obtain a first texture image set and a second texture image set corresponding to the texture image set; and carrying out Gaussian decomposition on the target texture image to obtain a first target texture image and a second target texture image corresponding to the target texture image.
Gaussian decomposition refers to a frequency decomposition mode based on Gaussian filtering: Gaussian filtering is performed on a texture image to obtain a first texture image representing the low-frequency component of the texture image, and the difference between the texture image and the first texture image is taken as a second texture image representing the high-frequency component of the texture image. Gaussian filtering is a process of weighted averaging over the whole image: the value of each pixel point is obtained by a weighted average of that pixel and the other pixel values in its neighborhood. Specifically, each pixel in the image is scanned with a Gaussian kernel, and the value of the center pixel determined by the Gaussian kernel is replaced with the weighted average gray value of the pixels in the neighborhood determined by the Gaussian kernel. The Gaussian kernel is obtained by sampling a Gaussian function, which is the probability density function of a Gaussian distribution. Gaussian filtering is equivalent to a low-pass filter: high-frequency components that change drastically, such as edges, stripes and noise, are smoothed out by the weighted averaging (i.e., suppressed), while originally smooth areas remain smooth and change little after the weighted averaging (i.e., are passed).
In particular, the computer device performs a Gaussian decomposition of the texture image set, resulting in a first texture image set representing the low-frequency components of the texture image set and a second texture image set representing the high-frequency components of the texture image set. The computer device performs a Gaussian decomposition on the target texture image, resulting in a first target texture image representing the low-frequency component of the target texture image and a second target texture image representing the high-frequency component of the target texture image.
In the above embodiment, the high-frequency component and the low-frequency component in the image can be rapidly decomposed by gaussian decomposition. The low-frequency component and the high-frequency component of the image reflect different image information, and then the low-frequency component and the high-frequency component of the target texture image are respectively subjected to image enhancement, so that the quality of texture reconstruction can be improved. The low frequency component of the texture image set contains more information, the low frequency component of the target texture image is enhanced based on the low frequency component of the texture image set to improve definition, the high frequency component of the texture image set contains more information, the high frequency component of the target texture image is enhanced based on the high frequency component of the texture image set to increase texture details, and the enhanced low frequency component and the high frequency component are fused to obtain a reconstructed texture image with high definition and rich detail information.
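A minimal sketch of the Gaussian decomposition just described, assuming OpenCV; the kernel size and sigma are illustrative, not values fixed by the patent:

```python
import cv2
import numpy as np

def gaussian_decompose(image, ksize=21, sigma=8.0):
    """Split an image into its low- and high-frequency parts using a
    Gaussian low-pass filter."""
    img = image.astype(np.float32)
    low = cv2.GaussianBlur(img, (ksize, ksize), sigma)  # low-frequency part
    high = img - low                                    # residual high-frequency part
    return low, high  # low + high reconstructs the original image exactly
```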
In one embodiment, as shown in fig. 3, performing image enhancement on a first target texture image based on a first texture image set to obtain a first enhanced texture image corresponding to the first target texture image, including:
step S302, convolution processing is carried out on the first texture image set and the first target texture image respectively, and initial texture feature images corresponding to the first texture image set and the first target texture image respectively are obtained.
Step S304, the initial texture feature images corresponding to the first texture image set and the first target texture image are spliced to obtain an intermediate texture feature image.
And step S306, performing attention processing on the intermediate texture feature map to obtain a target texture feature map.
Step S308, obtaining a first enhanced texture image corresponding to the target texture image based on the target texture feature map and the first target texture image.
Wherein convolution processing is used to extract image features. The convolution processing of the image is to slide the convolution kernel on the image, and the convolution kernel is in turn convolved with the image block at the corresponding position on the image. The convolution of the convolution kernel with the image block at the corresponding position on the image means that the pixel values of the pixel points in the image block are weighted and summed, and the weighted and summed weight is determined by the convolution kernel, that is, the pixel values of the pixel points in the image block are multiplied by the numerical values at the corresponding position in the convolution kernel, and then all the multiplied values are added to be used as the convolution result of the image pixel point corresponding to the pixel point in the middle of the convolution kernel.
The stitching process is mainly used for stitching images. The stitching treatment can be to directly stitch different images, or stitch different images after some preprocessing, or stitch different images before some preprocessing.
Attention processing is used to further extract image features, filtering out irrelevant information so as to focus on and enhance the salient information, e.g., removing noise from the image or illuminating dark areas in the image. Attention processing may be implemented using various attention mechanisms applied to images, for example, a channel attention mechanism.
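The patent does not fix the internal structure of the attention processing; as one common realization of a channel attention mechanism, a squeeze-and-excitation style PyTorch sketch might look as follows (all layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: learns a weight in
    (0, 1) per channel and reweights the feature map accordingly."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # excitation: per-channel weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                   # emphasize informative channels
```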
Specifically, when performing image enhancement on the low-frequency component, the computer device first performs convolution processing on the first texture image set and the first target texture image respectively to obtain initial texture feature images corresponding to the first texture image set and the first target texture image respectively; spectral features of the images can be extracted through the convolution processing. Then, the computer device performs stitching processing on the initial texture feature images corresponding to the first texture image set and the first target texture image respectively to obtain an intermediate texture feature map; spectral features of different images can be integrated through the stitching processing. Furthermore, the computer device performs attention processing on the intermediate texture feature map to obtain a target texture feature map; deep spectral features of the image can be extracted through the attention processing. Finally, the computer device obtains a first enhanced texture image corresponding to the target texture image based on the target texture feature map and the first target texture image. The target texture feature map contains frequency information lacking in the first target texture image and can supplement the frequency information of the first target texture image.
In the above embodiment, convolution processing is performed on the first texture image set and the first target texture image respectively to obtain initial texture feature images corresponding to the first texture image set and the first target texture image respectively, stitching processing is performed on the initial texture feature images corresponding to the first texture image set and the first target texture image respectively to obtain an intermediate texture feature image, attention processing is performed on the intermediate texture feature image to obtain a target texture feature image, and a first enhanced texture image corresponding to the target texture image is obtained based on the target texture feature image and the first target texture image. The convolution process is used to extract spectral features of the image, the attention process is used to enhance low frequency information of the image, the first enhanced texture image obtained by the above process has more low frequency information than the first target texture image and better quality than the first target texture image, for example, illumination in the first target texture image is restored, and such first enhanced texture image helps to improve quality of texture image reconstruction.
In one embodiment, convolution processing is performed on the first texture image set and the first target texture image to obtain initial texture feature images corresponding to the first texture image set and the first target texture image respectively, including:
Performing convolution processing on the first texture image set based on at least two first convolution kernels to obtain at least two first convolution feature images, and splicing the at least two first convolution feature images to obtain an initial texture feature image corresponding to the first texture image set, the at least two first convolution kernels comprising first convolution kernels of at least two sizes; performing convolution processing on the first target texture image based on at least two second convolution kernels to obtain at least two second convolution feature images, and splicing the at least two second convolution feature images to obtain an initial texture feature image corresponding to the first target texture image, the at least two second convolution kernels comprising second convolution kernels of at least two sizes.
Wherein the data in the convolution kernel is used to determine weights for weighted summation. The size of the convolution kernel is used to determine the pixel area that requires weighted summation, i.e. to determine the image block size. The convolution kernels with different sizes are used for extracting spectral features with different scales, and the spectral features with different sizes correspond to image space information with different scales.
The first convolution kernel is used for carrying out convolution processing on the first texture image set, and the second convolution kernel is used for carrying out convolution processing on the first target texture image. In one embodiment, the first convolution kernel and the second convolution kernel may be the same convolution kernel.
Specifically, the procedure for performing convolution processing on the first texture image set is the same as that for the first target texture image. The computer equipment acquires first convolution kernels of at least two sizes, performs convolution processing on the first texture image set based on each first convolution kernel respectively to obtain different first convolution feature images, and finally splices the first convolution feature images to obtain the initial texture feature image corresponding to the first texture image set. The computer equipment acquires second convolution kernels of at least two sizes, performs convolution processing on the first target texture image based on each second convolution kernel respectively to obtain different second convolution feature images, and finally splices the second convolution feature images to obtain the initial texture feature image corresponding to the first target texture image.
In the above embodiment, convolution kernels of different sizes can extract multi-scale feature information, which helps to improve the accuracy of subsequent data processing. Multi-scale joint spatial-spectral features can be extracted by performing convolution processing on the first texture image set with first convolution kernels of different sizes, and multi-scale spatial and spectral features can be extracted from the first target texture image with second convolution kernels of different sizes.
In one embodiment, performing stitching processing on the initial texture feature images corresponding to the first texture image set and the first target texture image respectively to obtain an intermediate texture feature image includes:
splicing initial texture feature images respectively corresponding to the first texture image set and the first target texture image to obtain a first texture feature image, and rectifying the first texture feature image to obtain a second texture feature image; and carrying out convolution processing on the second texture feature map to obtain a third texture feature map, carrying out up-sampling processing on the third texture feature map to obtain a fourth texture feature map, and carrying out rectification processing on the fourth texture feature map to obtain an intermediate texture feature map.
The rectification processing is used for correcting pixel values and mapping them to a preset range. The rectification processing may be performed by an activation function, for example, a ReLU (Rectified Linear Unit) function.
The upsampling process is used to increase the resolution of the image, i.e. to convert the image from a smaller size to a larger size. Specifically, the size of the original image can be enlarged, so that a plurality of areas needing to be supplemented are vacated, and then the pixel value corresponding to the area to be supplemented is calculated through a certain interpolation algorithm, so that the image is enlarged. For example, calculating pixel values corresponding to the region to be supplemented by a bilinear interpolation algorithm; calculating a pixel value corresponding to the region to be supplemented by a nearest neighbor interpolation algorithm; etc.
Specifically, when the initial texture feature images corresponding to the first texture image set and the first target texture image are spliced, the computer equipment first splices the initial texture feature images corresponding to the first texture image set and the first target texture image to obtain a first texture feature image, and then carries out rectification processing on the first texture feature image to obtain a second texture feature image; the rectification processing can normalize the pixel values of the feature image and avoid excessive pixel-value differences between pixel points. Then, the computer equipment carries out convolution processing on the second texture feature image to obtain a third texture feature map, further extracting feature information of the image through this convolution processing. Furthermore, the computer equipment carries out up-sampling processing on the third texture feature map to convert it into a fourth texture feature map of higher resolution, which facilitates the subsequent attention processing. Finally, the computer equipment carries out rectification processing on the fourth texture feature map to obtain the intermediate texture feature map; since new pixel values are introduced by the up-sampling processing, the rectification processing normalizes the pixel values of the feature map again, avoiding excessive pixel-value differences between pixel points.
In the above embodiment, the initial texture feature images corresponding to the first texture image set and the first target texture image are spliced to obtain a first texture feature image, the first texture feature image is rectified to obtain a second texture feature image, the second texture feature image is convolved to obtain a third texture feature image, the third texture feature image is up-sampled to obtain a fourth texture feature image, and the fourth texture feature image is rectified to obtain an intermediate texture feature image. The rectification processing is used for standardizing the pixel value of the feature image, the convolution processing is used for further extracting the feature information, and the feature expression capability of the intermediate texture feature image obtained through the processing is stronger, so that the accuracy of subsequent data processing is improved.
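A PyTorch sketch of this stitch-rectify-convolve-upsample-rectify pipeline; the 2x bilinear upsampling factor and the externally supplied convolution layer are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_intermediate_feature_map(feat_set, feat_target, conv: nn.Conv2d):
    """Concatenate -> rectify -> convolve -> upsample -> rectify."""
    first = torch.cat([feat_set, feat_target], dim=1)  # first texture feature map
    second = F.relu(first)                             # second (rectified)
    third = conv(second)                               # third (further features)
    fourth = F.interpolate(third, scale_factor=2,      # fourth (upsampled via
                           mode="bilinear",            # bilinear interpolation)
                           align_corners=False)
    return F.relu(fourth)                              # intermediate texture feature map
```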
In one embodiment, the calculation of the intermediate texture feature map based on the first texture image set and the first target texture image is as follows:
$$F_{SL} = [H_{L3}(I_L),\ H_{L5}(I_L),\ H_{L7}(I_L)]$$
$$F_{SC} = [H_{C3}(C_L),\ H_{C5}(C_L),\ H_{C7}(C_L)]$$
$$F_S = \mathrm{ReLU}([F_{SL},\ F_{SC}])$$
$$F_0 = \mathrm{ReLU}(H_{conv}(F_S))$$
where $H_{L3}$, $H_{L5}$ and $H_{L7}$ denote 2D convolution kernels of sizes $3\times 3$, $5\times 5$ and $7\times 7$ respectively, $I_L$ denotes the first target texture image, and $F_{SL}$ denotes the initial texture feature map corresponding to the first target texture image. $H_{C3}$, $H_{C5}$ and $H_{C7}$ likewise denote 2D convolution kernels of sizes $3\times 3$, $5\times 5$ and $7\times 7$, $C_L$ denotes the first texture image set, and $F_{SC}$ denotes the initial texture feature map corresponding to the first texture image set.
$[\,\cdot\,]$ denotes concatenation; for example, $[F_{SL}, F_{SC}]$ denotes the concatenation of $F_{SL}$ and $F_{SC}$. $\mathrm{ReLU}$ denotes the ReLU function. $H_{conv}$ denotes a $3\times 3$ convolution with a padding of one pixel so that the spatial resolution of the feature map is maintained. $F_0$ denotes the intermediate texture feature map obtained by the final processing.
$I_0$ denotes the target texture image and $C_0$ denotes the texture image set corresponding to the target texture image. Referring to FIG. 4, the low-frequency component $I_L$ of $I_0$ is input into a first sub-module, which consists of 2D convolutions with three convolution kernels of different sizes for extracting multi-scale spatial information; the three convolutions are executed in parallel, and the extracted feature maps are concatenated to form the feature map $F_{SL}$. The low-frequency component $C_L$ of $C_0$ is input into a second sub-module, which likewise consists of 2D convolutions with three convolution kernels of different sizes for extracting multi-scale joint spatial information; the three convolutions are executed in parallel, and the extracted feature maps are concatenated to form the feature map $F_{SC}$. $F_{SL}$ and $F_{SC}$ have the same size. The outputs of the two sub-modules are concatenated and rectified to obtain the feature map $F_S$. The feature map $F_S$ is input into a convolution layer to further extract channel features, and then rectified to obtain the feature map $F_0$.
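A PyTorch sketch of these formulas; channel counts are illustrative assumptions, and the texture image set $C_L$ is assumed to be stacked along the channel dimension:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleShallowExtractor(nn.Module):
    """Parallel 3x3 / 5x5 / 7x7 convolutions on I_L and C_L, concatenation,
    ReLU, and a final 3x3 convolution with padding 1 (H_conv)."""
    def __init__(self, in_ch_target: int = 3, in_ch_set: int = 9, feat_ch: int = 16):
        super().__init__()
        self.h_l = nn.ModuleList([  # H_L3, H_L5, H_L7
            nn.Conv2d(in_ch_target, feat_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.h_c = nn.ModuleList([  # H_C3, H_C5, H_C7
            nn.Conv2d(in_ch_set, feat_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.h_conv = nn.Conv2d(6 * feat_ch, feat_ch, 3, padding=1)  # H_conv

    def forward(self, i_l: torch.Tensor, c_l: torch.Tensor) -> torch.Tensor:
        f_sl = torch.cat([conv(i_l) for conv in self.h_l], dim=1)  # F_SL
        f_sc = torch.cat([conv(c_l) for conv in self.h_c], dim=1)  # F_SC
        f_s = F.relu(torch.cat([f_sl, f_sc], dim=1))               # F_S
        return F.relu(self.h_conv(f_s))                            # F_0
```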
In one embodiment, performing attention processing on the intermediate texture feature map to obtain a target texture feature map, including:
sequentially carrying out at least two times of ordered attention treatment on the intermediate texture feature images to obtain at least two ordered attention texture feature images; splicing at least two ordered attention texture feature images to obtain a first spliced texture feature image, and performing convolution processing on the first spliced texture feature image to obtain a convolution texture feature image; acquiring an ending attention texture feature map from at least two ordered attention texture feature maps, and fusing the ending attention texture feature map and the convolution texture feature map to obtain a fused texture feature map; and splicing at least two ordered attention texture feature graphs and the fusion texture feature graph to obtain a second spliced texture feature graph, and performing convolution processing on the second spliced texture feature graph to obtain a target texture feature graph.
The intermediate texture feature map is sequentially subjected to attention processing; for example, the intermediate texture feature map is subjected to first attention processing to obtain a first attention texture feature map, the first attention texture feature map is subjected to second attention processing to obtain a second attention texture feature map, and the second attention texture feature map is subjected to third attention processing to obtain a third attention texture feature map. The number of times of attention processing may be set according to actual needs.
The ending attention texture feature map refers to the attention texture feature map obtained by the last attention processing among the at least two ordered attention texture feature maps. For example, if the number of times of attention processing is four, the attention texture feature map obtained by the fourth attention processing is taken as the ending attention texture feature map.
Specifically, the computer device sequentially performs at least two times of ordered attention processing on the intermediate texture feature map to obtain at least two ordered attention texture feature maps; the attention processing at the beginning can extract shallow feature information to obtain a primary attention texture feature map, and the subsequent attention processing can extract deep feature information to obtain an advanced attention texture feature map. Furthermore, the computer device splices the at least two ordered attention texture feature maps to obtain a first spliced texture feature map, and performs convolution processing on the first spliced texture feature map to obtain a convolution texture feature map; the convolution processing can reduce the dimension of the first spliced texture feature map. The computer device acquires an ending attention texture feature map from the at least two ordered attention texture feature maps, and fuses the ending attention texture feature map and the convolution texture feature map to obtain a fused texture feature map, which has stronger feature expression capability. Finally, the computer device splices the at least two ordered attention texture feature maps and the fused texture feature map to obtain a second spliced texture feature map, and convolves the second spliced texture feature map to obtain the target texture feature map. The convolution processing can integrate the information contained in the attention texture feature maps of all levels.
In the above embodiment, the intermediate texture feature map is sequentially subjected to at least two sequential attentive processes to obtain at least two sequential attentive texture feature maps, the at least two sequential attentive texture feature maps are spliced to obtain a first spliced texture feature map, the first spliced texture feature map is subjected to convolution processing to obtain a convolution texture feature map, the end attentive texture feature map is obtained from the at least two sequential attentive texture feature maps, the end attentive texture feature map and the convolution texture feature map are fused to obtain a fused texture feature map, the at least two sequential attentive texture feature maps and the fused texture feature map are spliced to obtain a second spliced texture feature map, and the second spliced texture feature map is subjected to convolution processing to obtain the target texture feature map. The continuous attention processing can gradually extract deep characteristic information, enhance low-frequency information and remove noise in images, the convolution processing can further extract the characteristic information, the characteristic expression capability of the target texture characteristic map obtained through the data processing is stronger, and the accuracy of subsequent data processing is improved.
In one embodiment, the calculation formulas for performing attention processing on the intermediate texture feature map to obtain the target texture feature map are as follows:
F_i = H_{TAB,i}(F_{i−1}) = H_{TAB,i}(H_{TAB,i−1}(⋯(H_{TAB,1}(F_0))⋯))

F_{n,F} = H_t([F_{n−1}, F_{n−2}, ⋯, F_{n−C}, ⋯, F_1])

F_n = F_{n,F} + F_{n−1}

F_D = H_{DF}([F_0, F_1, ⋯, F_n])

wherein F_i denotes the attention texture feature map output by the i-th channel attention processing. H_{TAB,i} denotes the i-th channel attention processing, which consists of several standard operations, namely 2D convolution, global average pooling and activation functions. [F_{n−1}, F_{n−2}, ⋯, F_{n−C}, ⋯, F_1] denotes the concatenation of the feature maps F_{n−1}, F_{n−2}, ⋯, F_{n−C}, ⋯, F_1. H_t denotes a convolution operation for reducing the dimension of the feature map. H_{DF} denotes a convolution operation for integrating features of different levels. F_D denotes the final target texture feature map.

Referring to FIG. 5, the intermediate texture feature map is subjected to n−1 standard channel attention processes, and each channel attention process outputs a corresponding attention texture feature map, i.e., F_{n−1}, F_{n−2}, ⋯, F_{n−C}, ⋯, F_1.

The n-th, special channel attention process is performed based on the outputs of the 1st to (n−1)-th channel attention processes and outputs F_n. Specifically, after depth feature extraction with successive channel attention processes, a feature fusion function is used to integrate the outputs of the 1st to (n−1)-th channel attention processes, i.e., [F_{n−1}, F_{n−2}, ⋯, F_{n−C}, ⋯, F_1], and the dimension of the concatenated feature map is then reduced by a transition layer consisting of a single convolution layer, i.e., H_t([F_{n−1}, F_{n−2}, ⋯, F_{n−C}, ⋯, F_1]). This transition layer reduces the computational burden of the overall network, making the network easy to use. To further enhance the representation capability of the model, an intra-block residual learning strategy is applied to fuse F_{n,F} and F_{n−1} to obtain F_n.

Finally, F_0 and the outputs of the 1st to n-th channel attention processes are concatenated, and feature information of different levels is integrated through a convolution operation, thereby obtaining the target texture feature map, i.e., F_D = H_{DF}([F_0, F_1, ⋯, F_n]).
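For illustration, the cascaded channel attention of the above formulas could be sketched as follows, in the same assumed PyTorch setting. The squeeze-and-excitation-style realization of each H_{TAB,i} (convolution, global average pooling and activation functions, as listed in the text) is an assumption, as are the 1×1 convolutions used for H_t and H_{DF} and the reduction ratio.

```python
import torch
import torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    # One H_TAB,i: 2D convolution, global average pooling and activation
    # functions; the reduction ratio r is an assumption.
    def __init__(self, ch, r=4):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.conv(x)
        return y * self.fc(self.gap(y))  # channel-wise re-weighting

class AttentionCascade(nn.Module):
    # n-1 ordinary blocks produce F_1..F_{n-1}; the special n-th step
    # concatenates them, reduces the dimension with the transition conv H_t,
    # and fuses with F_{n-1} (intra-block residual) to obtain F_n; finally
    # H_DF integrates [F_0, F_1, ..., F_n] into F_D.
    def __init__(self, ch, n=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            ChannelAttentionBlock(ch) for _ in range(n - 1)
        )
        self.h_t = nn.Conv2d((n - 1) * ch, ch, 1)   # transition layer
        self.h_df = nn.Conv2d((n + 1) * ch, ch, 1)  # feature integration

    def forward(self, f0):
        feats, f = [f0], f0
        for blk in self.blocks:                     # F_1 .. F_{n-1}
            f = blk(f)
            feats.append(f)
        f_nf = self.h_t(torch.cat(feats[1:], dim=1))
        f_n = f_nf + feats[-1]                      # F_n = F_{n,F} + F_{n-1}
        feats.append(f_n)
        return self.h_df(torch.cat(feats, dim=1))   # F_D
```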
In one embodiment, obtaining a first enhanced texture image corresponding to the target texture image based on the target texture feature map and the first target texture image includes:
carrying out convolution processing on the target texture feature map to obtain a supplementary texture feature map; and fusing the supplementary texture feature image and the first target texture image to obtain a first enhanced texture image corresponding to the target texture image.
Specifically, the computer device performs convolution processing on the target texture feature map to obtain a complementary texture feature map, and a low-frequency component can be reconstructed through convolution processing to supplement frequency information for the first target texture image. And further, fusing the supplementary texture feature image and the first target texture image to obtain a first enhanced texture image corresponding to the target texture image.
In the above embodiment, the convolution processing is performed on the target texture feature map to obtain the supplemental texture feature map, and the supplemental texture feature map and the first target texture image are fused to obtain the first enhanced texture image corresponding to the target texture image. The frequency information of the deletion of the first target texture image can be recovered through convolution processing, and the supplementary texture feature image and the first target texture image are fused, so that a first enhanced texture image containing rich information can be obtained.
In one embodiment, the calculation formulas for obtaining the first enhanced texture image based on the target texture feature map and the first target texture image are as follows:

I_R = H_R(F_D)

Î_L = I_R + I_L

wherein F_D denotes the target texture feature map, H_R denotes a convolution operation, and I_R denotes the restored residual, i.e., the supplemental texture feature map. I_L denotes the first target texture image, and Î_L denotes the first enhanced texture image, i.e., the first target texture image enhanced with low-frequency information.
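A corresponding sketch of this reconstruction step; realizing H_R as a single 3×3 convolution is an assumption.

```python
import torch.nn as nn

class LowFreqReconstruction(nn.Module):
    # Implements I_R = H_R(F_D) followed by the residual fusion I_R + I_L.
    def __init__(self, ch, out_ch=1):
        super().__init__()
        self.h_r = nn.Conv2d(ch, out_ch, 3, padding=1)  # assumed H_R

    def forward(self, f_d, i_l):
        i_r = self.h_r(f_d)  # supplemental texture feature map I_R
        return i_r + i_l     # first enhanced texture image
```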
In one embodiment, referring to fig. 6, performing image enhancement on a second target texture image based on a second texture image set to obtain a second enhanced texture image corresponding to the second target texture image, including:
step S602, performing an average process on the second texture image set and the second target texture image to obtain an average texture image.
Step S604, performing stitching processing on the average texture image, the first target texture image and the first enhanced texture image to obtain a stitched texture image.
Step S606, residual processing is carried out on the spliced texture image, and a mask texture image is obtained.
Step S608, the mask texture image and the average texture image are fused, so as to obtain a second enhanced texture image corresponding to the target texture image.
The average processing refers to mean calculation, i.e., computing the average of the pixel values at the same position across different images. The stitching processing is mainly used to stitch images together. The stitching may stitch different images directly, stitch them after some preprocessing, or apply some preprocessing after stitching.
Residual processing is data processing implemented through a residual network. The mask texture image can be obtained by residual processing. The mask texture image is a binarized image used to determine contour information in the image.
Specifically, the computer device may perform mean calculation on the second texture image set and the second target texture image to obtain an average texture image, and perform stitching processing on the average texture image, the first target texture image, and the first enhanced texture image to obtain a stitched texture image. And the computer equipment performs residual processing on the spliced texture image to obtain a mask texture image, wherein the residual processing is used for performing frequency supplementation on an average texture image, a first target texture image and a first enhanced texture image in the spliced texture image, and then performing pixel value comparison on the first target texture image, the first enhanced texture image and the average texture image after frequency supplementation, thereby obtaining the mask texture image. Finally, the computer equipment fuses the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image. The mask texture image is a binarized image, the average texture image and the mask texture image are fused, and the pixel value of the pixel point reflecting the contour information in the average texture image can be reserved, and the pixel value of the pixel point reflecting other information is set to zero.
In the above embodiment, the second texture image set and the second target texture image are subjected to average processing to obtain an average texture image, the first target texture image and the first enhanced texture image are subjected to stitching processing to obtain a stitched texture image, residual processing is performed on the stitched texture image to obtain a mask texture image, and the mask texture image and the average texture image are fused to obtain a second enhanced texture image corresponding to the target texture image. The average processing is helpful for refining the second target texture image, increasing the frequency information, the residual processing is helpful for recovering the frequency information of the second target texture image missing, and the second enhanced texture image obtained through the processing contains more accurate contour information, thereby being helpful for improving the quality of texture image reconstruction.
In one embodiment, the stitching processing is performed on the average texture image, the first target texture image, and the first enhanced texture image to obtain a stitched texture image, including:
respectively carrying out up-sampling processing on the first target texture image and the first enhanced texture image to obtain a first up-sampling texture image corresponding to the first target texture image and a second up-sampling texture image corresponding to the first enhanced texture image; the resolution of the first up-sampled texture image, the second up-sampled texture image and the average texture image are consistent; and splicing the average texture image, the first up-sampling texture image and the second up-sampling texture image to obtain a spliced texture image.
Specifically, the first target texture image and the first enhanced texture image represent low-frequency components of the images, and the resolution of the low-frequency components obtained by frequency decomposition is reduced, so that in order to facilitate stitching, up-sampling processing can be performed first, the resolutions of the images to be stitched are unified, and then stitching is performed.
When the average texture image, the first target texture image and the first enhanced texture image are spliced, the computer equipment firstly carries out up-sampling processing on the first target texture image and the first enhanced texture image respectively, and the resolutions of the first target texture image and the first enhanced texture image are converted into the same resolution as the average texture image, so that a first up-sampling texture image corresponding to the first target texture image and a second up-sampling texture image corresponding to the first enhanced texture image are obtained. Then, the computer equipment splices the average texture image, the first up-sampling texture image and the second up-sampling texture image to obtain a spliced texture image.
In the above embodiment, the up-sampling processing is performed on the first target texture image and the first enhanced texture image, so as to obtain a first up-sampled texture image corresponding to the first target texture image and a second up-sampled texture image corresponding to the first enhanced texture image, where the resolutions of the first up-sampled texture image, the second up-sampled texture image and the average texture image are identical, and the average texture image, the first up-sampled texture image and the second up-sampled texture image are spliced, so as to obtain a spliced texture image. The resolution ratio of the images is unified and then spliced, so that the accuracy and the efficiency of splicing are improved.
In one embodiment, fusing the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image includes:
and carrying out pixel-by-pixel fusion on the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image.
The pixel-by-pixel fusion refers to fusing pixel values of pixel points at the same position of different images.
Specifically, the mask texture image is a binarized image, the binarized image is an image composed of pixel values 0 and 1, and the computer equipment fuses the mask texture image and the average texture image pixel by pixel, so that a second enhanced texture image corresponding to the target texture image can be obtained.
In the above embodiment, the mask texture image and the average texture image are fused pixel by pixel to obtain the second enhanced texture image corresponding to the target texture image, so that the quality of the second enhanced texture image can be ensured.
In one embodiment, I_H denotes the second target texture image and C_H denotes the second texture image set. To achieve reliable high-frequency component reconstruction, I_H and C_H are averaged to obtain the refined high-frequency component I_{Mean}. I_{Mean} is further processed to obtain Î_H, where Î_H denotes the second enhanced texture image. To match the resolution of I_{Mean} (I_{Mean} ∈ R^{H×W×1}), I_L (I_L ∈ R^{H/2×W/2×1}) and Î_L (Î_L ∈ R^{H/2×W/2×1}) are upsampled. I_{Mean} and the upsampled I_L and Î_L are then concatenated, the concatenated result is input into a lightweight network consisting of residual blocks, and the network outputs I_{Mask}, where I_{Mask} ∈ R^{H×W×1}.

Further, the calculation formula for fusing the mask texture image and the average texture image to obtain the second enhanced texture image is as follows:

Î_H = I_{Mean} ⊗ I_{Mask}

where ⊗ denotes pixel-by-pixel multiplication.
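The high-frequency refinement branch might be sketched as follows. The three residual blocks follow the description elsewhere in this document; the bilinear interpolation used for upsampling and the final Sigmoid are assumptions (the text describes I_Mask as a binarized image, so a thresholding step to harden the soft mask is omitted here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class HighFreqRefinement(nn.Module):
    # Averages I_H with the set C_H into I_Mean, concatenates I_Mean with
    # the upsampled I_L and first enhanced image, predicts I_Mask with a
    # lightweight network of three residual blocks, and returns the
    # pixel-by-pixel product I_Mean * I_Mask.
    def __init__(self, ch=16):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.Sequential(ResBlock(ch), ResBlock(ch), ResBlock(ch))
        self.tail = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, i_h, c_h, i_l, i_l_enh):
        i_mean = torch.cat([i_h, c_h], dim=1).mean(dim=1, keepdim=True)
        size = i_mean.shape[-2:]
        up = lambda t: F.interpolate(t, size=size, mode="bilinear",
                                     align_corners=False)
        x = self.head(torch.cat([i_mean, up(i_l), up(i_l_enh)], dim=1))
        i_mask = self.tail(self.body(x))  # soft mask in [0, 1]
        return i_mean * i_mask            # second enhanced texture image
```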
In one embodiment, fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image includes:
performing up-sampling processing on the first enhanced texture image, and fusing the first enhanced texture image and the second enhanced texture image after the up-sampling processing to obtain a fused texture image; and carrying out convolution processing on the fusion texture image to obtain a reconstructed texture image corresponding to the target texture image.
Specifically, the first enhanced texture image represents a low frequency component of the image, and the resolution of the low frequency component obtained by frequency decomposition is generally reduced, so that for convenience of fusion, up-sampling processing may be performed first, the resolutions of the images to be fused are unified, and then the images to be fused are fused. The computer equipment carries out up-sampling processing on the first enhanced texture image, fuses the first enhanced texture image and the second enhanced texture image after the up-sampling processing to obtain a fused texture image, and further carries out convolution processing on the fused texture image for thinning the image and smoothing the image to finally obtain a reconstructed texture image corresponding to the target texture image.
In the above embodiment, the up-sampling process is performed on the first enhanced texture image, the up-sampled first enhanced texture image and the up-sampled second enhanced texture image are fused to obtain a fused texture image, and the convolution process is performed on the fused texture image to obtain a reconstructed texture image corresponding to the target texture image. The resolution of the images is unified and then fused, so that the fusion accuracy and efficiency can be ensured. The frequency band information of the reconstructed texture can be further refined through convolution processing, and the quality of the reconstructed texture image is further improved.
In one embodiment, the calculation formula for fusing the first enhanced texture image and the second enhanced texture image to obtain the reconstructed texture image is as follows:

I_E = H_R(Î_H + Upscale(Î_L))

wherein Upscale denotes an upsampling operation in which the downsampled Î_L is first enlarged, the pixel values of the positions to be supplemented are filled with zeros, and a Gaussian-kernel convolution is then applied to adjust the image so that its size matches that of Î_H. H_R is a simple convolution operation for further refining the reconstructed texture band information. I_E denotes the resulting reconstructed texture image.
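The Upscale operation described here (zero insertion followed by a Gaussian-kernel convolution) might be sketched as follows; the 5×5 binomial kernel, the ×4 compensation factor for the inserted zeros and the single-channel input are assumptions borrowed from the standard Gaussian pyramid construction.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel5(device=None, dtype=torch.float32):
    # Standard 5x5 binomial approximation of a Gaussian kernel.
    k = torch.tensor([1., 4., 6., 4., 1.], device=device, dtype=dtype)
    k2 = torch.outer(k, k)
    return (k2 / k2.sum()).view(1, 1, 5, 5)

def upscale(x):
    # Enlarge x, fill the positions to be supplemented with zeros, then
    # convolve with a Gaussian kernel to adjust the image; the factor 4
    # compensates for the inserted zeros. Single-channel input assumed.
    n, c, h, w = x.shape
    out = torch.zeros(n, c, 2 * h, 2 * w, device=x.device, dtype=x.dtype)
    out[:, :, ::2, ::2] = x
    return 4 * F.conv2d(out, gaussian_kernel5(x.device, x.dtype), padding=2)

def reconstruct(i_l_enh, i_h_enh, h_r):
    # I_E = H_R(enhanced_high + Upscale(enhanced_low)); h_r is a convolution
    # module such as nn.Conv2d(1, 1, 3, padding=1).
    return h_r(i_h_enh + upscale(i_l_enh))
```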
In one embodiment, as shown in fig. 7, the texture image reconstruction method further includes:
Step S702, inputting a texture image set and a target texture image into a texture reconstruction model; the texture reconstruction model includes an image decomposition network, a first image enhancement network, a second image enhancement network, and an image reconstruction network.
Step S704, inputting the texture image set and the target texture image into an image decomposition network for frequency decomposition, and obtaining a first texture image set and a second texture image set corresponding to the texture image set, and a first target texture image and a second target texture image corresponding to the target texture image.
Step S706, the first texture image set and the first target texture image are input into the first image enhancement network, so as to obtain a first enhanced texture image corresponding to the target texture image.
Step S708, inputting the second texture image set and the second target texture image into a second image enhancement network to obtain a second enhanced texture image corresponding to the target texture image.
Step S710, inputting the first enhanced texture image and the second enhanced texture image into an image reconstruction network to obtain a reconstructed texture image corresponding to the target texture image.
The texture reconstruction model is a neural network model for performing texture reconstruction. The input data of the texture reconstruction model is a target texture image and a texture image set corresponding to the target texture image, and the output data is a reconstructed texture image corresponding to the target texture image.
The texture reconstruction model includes an image decomposition network, a first image enhancement network, a second image enhancement network, and an image reconstruction network. The image decomposition network is used for frequency decomposition, the first image enhancement network is used for image enhancement of low-frequency components, the second image enhancement network is used for image enhancement of high-frequency components, and the image reconstruction network is used for fusing the enhanced low-frequency components and high-frequency components.
Specifically, the computer device inputs a texture image set and a target texture image into a texture reconstruction model, the texture image set and the target texture image are input into an image decomposition network in the texture reconstruction model to perform frequency decomposition, and the image decomposition network outputs a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image. The first texture image set and the first target texture image are input into a first image enhancement network in the texture reconstruction model to carry out image enhancement, and the first image enhancement network outputs a first enhanced texture image corresponding to the target texture image. The second texture image set and the second target texture image are input into a second image enhancement network in the texture reconstruction model for image enhancement, and the second image enhancement network outputs a second enhanced texture image corresponding to the target texture image. It will be appreciated that the first image enhancement network and the second image enhancement network may perform data processing in parallel. The first enhanced texture image and the second enhanced texture image are input to an image reconstruction network in the texture reconstruction model, and the image reconstruction network outputs a reconstructed texture image corresponding to the target texture image. Finally, the texture reconstruction model outputs the reconstructed texture image.
In the above embodiment, the texture image set and the target texture image are input into the texture reconstruction model, and accurate texture reconstruction can be rapidly realized through the image decomposition network, the first image enhancement network, the second image enhancement network and the image reconstruction network in the texture reconstruction model, and the reconstructed texture image corresponding to the target texture image is output.
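Wiring the pieces together, the flow of steps S702 to S710 might be sketched as below, reusing the illustrative modules from the earlier sketches; only the data flow is taken from the text, and the internals of each sub-network remain assumptions.

```python
import torch.nn as nn

class TextureReconstructionModel(nn.Module):
    # Mirrors steps S702-S710: decompose both inputs, enhance the low and
    # high frequency bands, then reconstruct.
    def __init__(self, decompose, enhance_low, enhance_high, reconstruct):
        super().__init__()
        self.decompose = decompose        # image decomposition network
        self.enhance_low = enhance_low    # first image enhancement network
        self.enhance_high = enhance_high  # second image enhancement network
        self.reconstruct = reconstruct    # image reconstruction network

    def forward(self, c0, i0):
        c_l, c_h = self.decompose(c0)  # first/second texture image sets
        i_l, i_h = self.decompose(i0)  # first/second target texture images
        i_l_enh = self.enhance_low(i_l, c_l)
        i_h_enh = self.enhance_high(i_h, c_h, i_l, i_l_enh)
        return self.reconstruct(i_l_enh, i_h_enh)
```

Note that in this sketch the high-frequency branch consumes i_l_enh, so the parallelism mentioned in the text would apply to the portions of the two branches that do not depend on it.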
In one embodiment, acquiring a target texture image includes:
acquiring a game texture image library; and acquiring a game texture image with abnormal illumination from the game texture image library as a target texture image.
Wherein the game texture image library comprises various texture images required by the game. For example, the game texture image library may include various texture images required for a game character, various texture images required for a game environment, and the like.
Specifically, in the field of games, texture reconstruction is performed on a game texture image rendered abnormally, and the abnormal game texture image is replaced by a reconstructed texture image obtained by texture reconstruction, so that the display quality of a game picture is improved. The computer equipment can obtain a game texture image library locally or from other equipment, obtain an abnormal game texture image from the game texture image library as a target texture image, and reconstruct textures of the target texture image based on a texture image set corresponding to the target texture image to obtain a high-quality reconstructed texture image.
In a game, many objects suffer from insufficient illumination due to multi-pass rendering and rendering-order problems. To improve the illumination rendering effect of an image, the computer device acquires a game texture image with abnormal illumination from the game texture image library as the target texture image; for example, it acquires a game texture image with insufficient illumination, or a game texture image with low illumination, as the target texture image. It then performs texture reconstruction on the target texture image based on the texture image set corresponding to the target texture image to obtain a high-quality reconstructed texture image.
The game texture image with the illumination anomaly can be a game texture image with an illumination anomaly tag, and the illumination anomaly tag can be manually marked in advance or can be obtained by evaluation through an algorithm for evaluating illumination effects.
In one embodiment, texture reconstruction is performed during the game testing phase. In the game test stage, game texture images obtained through rendering are obtained, game texture images with abnormal illumination are obtained from the game texture images to serve as target texture images, and texture reconstruction is carried out on the target texture images to obtain reconstructed texture images. In the online application stage of the game, a user starts a game, loads reconstructed texture images instead of loading game texture images with abnormal illumination, and displays the reconstructed texture images to the user so as to improve the display effect of game pictures.
In the above embodiment, the game texture image with abnormal illumination is obtained from the game texture image library as the target texture image, and the texture reconstruction is performed on the target texture image based on the texture image set corresponding to the target texture image, so that the illumination of the target texture image can be recovered, the illumination of the target texture image can be enhanced, and the reconstructed texture image with better illumination effect corresponding to the target texture image can be obtained.
In a specific embodiment, the texture image reconstruction method can be applied to a game scene to optimize the performance of texture resources in the game. In games, many objects often suffer from insufficient illumination due to multi-pass rendering and rendering-order problems; the method of the present application provides a model for enhancing the color space of low-light game textures (which may be called a texture reconstruction model) that can improve the rendering effect of game texture images, so that the reconstructed game texture images contain richer and more detailed textures.
Referring to fig. 8, the model includes a gaussian pyramid network, an illumination enhancement branch, a high frequency refinement branch, and a gaussian reconstruction network.
The server acquires a game texture image set C_0 and selects a low-light game texture image from the game texture image set as the target texture image I_0. The game texture image set C_0 is obtained based on game texture images of different resolutions corresponding to the same texture; game texture images of different resolutions corresponding to the same texture are adapted to different terminals. C_0 ∈ R^{H×W×k} indicates that C_0 comprises k texture images, and I_0 ∈ R^{H×W×1}.

The server inputs C_0 and I_0 into the Gaussian pyramid network, in which a Gaussian decomposition module performs Gaussian decomposition; the Gaussian decomposition module implements a standard Gaussian decomposition process. C_0 is decomposed into C_H and C_L, where C_H denotes the high-frequency component, C_H ∈ R^{H×W×1}, and C_L denotes the low-frequency component, C_L ∈ R^{H/2×W/2×1}. I_0 is decomposed into I_H and I_L, where I_H denotes the high-frequency component, I_H ∈ R^{H×W×1}, and I_L denotes the low-frequency component, I_L ∈ R^{H/2×W/2×1}.
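One level of this Gaussian decomposition might be sketched as follows, assuming the standard construction in which the high-frequency component is the difference between the image and its blurred, downsampled and re-upsampled version; it reuses the gaussian_kernel5 and upscale helpers sketched earlier.

```python
import torch.nn.functional as F

def gaussian_decompose(x):
    # One pyramid level: low = downsample(blur(x)); high = x - upscale(low).
    # Matches the stated shapes: x in R^{HxWx1} gives low in R^{H/2xW/2x1}
    # and high in R^{HxWx1}. Single-channel input assumed.
    k = gaussian_kernel5(x.device, x.dtype)
    low = F.conv2d(x, k, padding=2)[:, :, ::2, ::2]
    high = x - upscale(low)
    return low, high
```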
The illumination enhancement branch aims to light up the low-frequency component of the low-light game texture color space and restore illumination. The illumination enhancement branch includes a low-light feature extraction module, an illumination enhancement module (which may also be referred to as a maximum expectation module), and a reconstruction module. I_L and C_L are input into the low-light feature extraction module in the illumination enhancement branch, which outputs F_0; the low-light feature extraction module is responsible for extracting multi-scale spatial and spectral features of the game texture color space. F_0 is input into the illumination enhancement module, which outputs F_D; the illumination enhancement module is used to light up dark regions and remove various kinds of noise in the low-light game texture color space. F_D is input into the reconstruction module, which outputs Î_L; the reconstruction module is used to reconstruct the low-frequency component of the low-light game texture color space. In the reconstruction module, I_R is generated based on F_D, and I_R and I_L are added to obtain Î_L, where Î_L denotes the illumination-enhanced low-frequency component of the game texture color space, Î_L ∈ R^{H/2×W/2×1}.
The high-frequency refinement branch aims to restore texture details and reduce artifacts in the reconstruction. I_H and C_H are input into the high-frequency refinement branch. To take advantage of the high-frequency properties of the good game texture color spaces, in the high-frequency refinement branch the mean I_{Mean} of I_H and C_H is computed and used in place of I_H to refine the high-frequency component, because I_{Mean} recovers part of the texture information missing from I_H. To match the resolution of I_{Mean}, an upsampling operation is applied to I_L and Î_L; I_{Mean} and the upsampled I_L and Î_L are concatenated and input into a lightweight network consisting of three residual blocks, which outputs I_{Mask}. I_{Mean} and I_{Mask} are multiplied pixel by pixel to obtain Î_H, Î_H ∈ R^{H×W×1}.
Due to the reversible nature of the Gaussian pyramid, the image can be reconstructed by mirroring the decomposition operations in sequence. Î_L and Î_H are input into the Gaussian reconstruction network. To match the resolution of Î_H, an upsampling operation is performed on Î_L; Î_H and the upsampled Î_L are added and a convolution operation is then applied, thereby further refining the reconstructed texture band information to obtain I_E and completing the reconstruction of the texture.
The method is a brand-new low-light enhancement technique that solves the problem of overlap between actual low light and illusory low light in a game engine; it can light up dark low-light regions in a texture image while suppressing various kinds of noise and maintaining spectral fidelity. Applied to the optimization of texture resource performance in MOBA (Multiplayer Online Battle Arena) games, the method can greatly improve the display effect of the final game picture and improve game picture quality.
Moreover, testing shows that, compared with conventional methods, the method takes less loading time and consumes less GPU. Specifically, texture resources are first extracted using a debugging tool and then sent to the model of the method of the present application; after optimization, the original textures are replaced and the texture loading process is restarted.
It can be appreciated that the method can also be applied to scenes such as film and television special effects, visual design, VR (Virtual Reality), industrial simulation, digital cultural creation and the like. Digital cultural creation may include rendered buildings or tourist attractions, etc. The processing of texture images may be involved in scenes such as film and television special effects, visual design, VR and digital cultural creation, and the reconstruction of the texture image in each scene can be realized by the texture image reconstruction method described above. The texture image reconstruction method realizes the reconstruction of the texture image, can enhance the definition of the texture image and bring out more texture details, greatly improves the quality of the texture image, and further improves the picture display effect in scenes such as film and television special effects, visual design, VR, industrial simulation and digital cultural creation.
For example, industrial simulation refers to the simulated demonstration of industrial processes and industrial products. An industrial simulation scene may involve simulated demonstration of an industrial production environment; for example, three-dimensional digital modeling is performed on factory buildings, equipment and facilities. A low-quality texture image can be selected from the texture images corresponding to the industrial simulation model as a target texture image, texture reconstruction is performed on the target texture image based on the texture image set corresponding to the target texture image by the texture image reconstruction method described above to obtain a reconstructed texture image corresponding to the target texture image, and the reconstructed texture image is attached to the industrial simulation model in place of the low-quality texture image, so that the simulation effect of the industrial simulation model is improved and an industrial production simulation environment that is more accurate and of greater reference value is obtained.
For example, digital cultural creation refers to creation, production, propagation and service performed by means of digital technology with cultural creative content at its core. A digital cultural creation scene may involve three-dimensional digital modeling of a building with cultural significance; for example, three-dimensional digital modeling is performed on a museum or a historical building. A low-quality texture image can be selected from the texture images corresponding to the building model as a target texture image, texture reconstruction is performed on the target texture image based on the texture image set corresponding to the target texture image by the texture image reconstruction method described above to obtain a reconstructed texture image corresponding to the target texture image, and the reconstructed texture image is attached to the building model in place of the low-quality texture image, so that the modeling effect of modeled objects such as buildings is improved and a more realistic digital cultural creation building is obtained.

It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or stages, which are not necessarily performed at the same time but may be performed at different times; these steps or stages are not necessarily executed sequentially, and may be performed in turn or alternately with at least a part of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a texture image reconstruction device for realizing the texture image reconstruction method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in the embodiment of one or more texture image reconstruction apparatuses provided below may be referred to the limitation of the texture image reconstruction method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 9, there is provided a texture image reconstruction apparatus including: a texture image acquisition module 902, an image decomposition module 904, a first image enhancement module 906, a second image enhancement module 908, and an image fusion module 910, wherein:
a texture image obtaining module 902, configured to obtain a target texture image and a texture image set corresponding to the target texture image; the texture presented by the texture image and the texture presented by the target texture image in the texture image set are matched with each other, and the texture image set is obtained based on the texture images with different resolutions.
The image decomposition module 904 is configured to perform frequency decomposition on the texture image set and the target texture image respectively, so as to obtain a first texture image set and a second texture image set corresponding to the texture image set, and a first target texture image and a second target texture image corresponding to the target texture image; the first texture image set corresponds to a frequency less than the second texture image set, and the first target texture image corresponds to a frequency less than the second target texture image.
The first image enhancement module 906 is configured to perform image enhancement on the first target texture image based on the first texture image set, so as to obtain a first enhanced texture image corresponding to the first target texture image.
The second image enhancement module 908 is configured to perform image enhancement on the second target texture image based on the second texture image set, so as to obtain a second enhanced texture image corresponding to the second target texture image.
The image fusion module 910 is configured to fuse the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image.
In one embodiment, the image decomposition module 904 is further configured to:
carrying out Gaussian decomposition on the texture image set to obtain a first texture image set and a second texture image set corresponding to the texture image set; and carrying out Gaussian decomposition on the target texture image to obtain a first target texture image and a second target texture image corresponding to the target texture image.
In one embodiment, the first image enhancement module 906 is further configured to:
respectively carrying out convolution processing on the first texture image set and the first target texture image to obtain initial texture feature images respectively corresponding to the first texture image set and the first target texture image; splicing the initial texture feature images respectively corresponding to the first texture image set and the first target texture image set to obtain an intermediate texture feature image; performing attention processing on the middle texture feature map to obtain a target texture feature map; and obtaining a first enhanced texture image corresponding to the target texture image based on the target texture feature image and the first target texture image.
In one embodiment, the first image enhancement module 906 is further configured to:
performing convolution processing on the first texture image set based on at least two first convolution kernels to obtain at least two first convolution feature images, and splicing the at least two first convolution feature images to obtain an initial texture feature image corresponding to the first texture image set; the at least two first convolution kernels comprise at least two size first convolution kernels; based on at least two second convolution kernels, carrying out convolution processing on the first target texture image to obtain at least two second convolution feature images, and splicing the at least two second convolution feature images to obtain an initial texture feature image corresponding to the first target texture image; the at least two second convolution kernels comprise at least two sizes of second convolution kernels.
In one embodiment, the first image enhancement module 906 is further configured to:
splicing initial texture feature images respectively corresponding to the first texture image set and the first target texture image to obtain a first texture feature image, and rectifying the first texture feature image to obtain a second texture feature image; and carrying out convolution processing on the second texture feature map to obtain a third texture feature map, carrying out up-sampling processing on the third texture feature map to obtain a fourth texture feature map, and carrying out rectification processing on the fourth texture feature map to obtain an intermediate texture feature map.
In one embodiment, the first image enhancement module 906 is further configured to:
sequentially carrying out at least two times of ordered attention treatment on the intermediate texture feature images to obtain at least two ordered attention texture feature images; splicing at least two ordered attention texture feature images to obtain a first spliced texture feature image, and performing convolution processing on the first spliced texture feature image to obtain a convolution texture feature image; acquiring an ending attention texture feature map from at least two ordered attention texture feature maps, and fusing the ending attention texture feature map and the convolution texture feature map to obtain a fused texture feature map; and splicing at least two ordered attention texture feature graphs and the fusion texture feature graph to obtain a second spliced texture feature graph, and performing convolution processing on the second spliced texture feature graph to obtain a target texture feature graph.
In one embodiment, the first image enhancement module 906 is further configured to:
carrying out convolution processing on the target texture feature map to obtain a supplementary texture feature map; and fusing the supplementary texture feature image and the first target texture image to obtain a first enhanced texture image corresponding to the target texture image.
In one embodiment, the second image enhancement module 908 is further configured to:
Carrying out average processing on the second texture image set and the second target texture image to obtain an average texture image; splicing the average texture image, the first target texture image and the first enhanced texture image to obtain a spliced texture image; residual processing is carried out on the spliced texture image, and a mask texture image is obtained; and fusing the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image.
In one embodiment, the second image enhancement module 908 is further configured to:
respectively carrying out up-sampling processing on the first target texture image and the first enhanced texture image to obtain a first up-sampling texture image corresponding to the first target texture image and a second up-sampling texture image corresponding to the first enhanced texture image; the resolution of the first up-sampled texture image, the second up-sampled texture image and the average texture image are consistent; and splicing the average texture image, the first up-sampling texture image and the second up-sampling texture image to obtain a spliced texture image.
In one embodiment, the second image enhancement module 908 is further configured to:
and carrying out pixel-by-pixel fusion on the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image.
In one embodiment, the image fusion module 910 is further configured to:
performing up-sampling processing on the first enhanced texture image, and fusing the first enhanced texture image and the second enhanced texture image after the up-sampling processing to obtain a fused texture image; and carrying out convolution processing on the fusion texture image to obtain a reconstructed texture image corresponding to the target texture image.
In one embodiment, the texture image reconstruction means is further for:
inputting the texture image set and the target texture image into a texture reconstruction model; the texture reconstruction model comprises an image decomposition network, a first image enhancement network, a second image enhancement network and an image reconstruction network; inputting the texture image set and the target texture image into an image decomposition network for frequency decomposition to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image; inputting the first texture image set and the first target texture image into a first image enhancement network to obtain a first enhanced texture image corresponding to the target texture image; inputting the second texture image set and the second target texture image into a second image enhancement network to obtain a second enhanced texture image corresponding to the target texture image; inputting the first enhanced texture image and the second enhanced texture image into an image reconstruction network to obtain a reconstructed texture image corresponding to the target texture image.
In one embodiment, texture image acquisition module 902 is further configured to:
acquiring a game texture image library; and acquiring a game texture image with abnormal illumination from the game texture image library as a target texture image.
The texture image reconstruction device acquires the target texture image and a texture image set corresponding to the target texture image, wherein the textures presented by the texture image in the texture image set and the textures presented by the target texture image are mutually matched, the texture image set is obtained based on texture images with different resolutions, and the texture image set is used for reconstructing the texture of the target texture image. And respectively carrying out frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image, wherein the texture image set can be decomposed into a first texture image set representing a low-frequency component of the image and a second texture image set representing a high-frequency component of the image through frequency decomposition, and the first target texture image is decomposed into a first target texture image representing the low-frequency component of the image and a second target texture image representing the high-frequency component of the image. And carrying out image enhancement on the first target texture image based on the first texture image set, so that a first enhanced texture image corresponding to the first target texture image can be obtained, which is equivalent to enhancing low-frequency components in the target texture image, such as enhancing illumination. And carrying out image enhancement on the second target texture image based on the second texture image set, so that a second enhanced texture image corresponding to the second target texture image can be obtained, which is equivalent to enhancing high-frequency components in the target texture image, such as enhancing texture details. And fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image. Therefore, the low-quality target texture image can be converted into the reconstructed texture image with high definition and rich detail information, the definition is enhanced while the original texture information of the generated reconstructed texture image is maintained, more texture details are enhanced, and the reconstruction quality of the texture image is greatly improved.
The respective modules in the texture image reconstruction apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 10. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data such as texture images, models and the like. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a texture image reconstruction method.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 11. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a texture image reconstruction method. The display unit of the computer equipment is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device, wherein the display screen can be a liquid crystal display screen or an electronic ink display screen, the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on a shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structures shown in fig. 10 and 11 are merely block diagrams of portions of structures related to the aspects of the present application and are not intended to limit the computer device on which the aspects of the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or may have different arrangements of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only a few implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (28)

1. A texture image reconstruction method, the method comprising:
acquiring a target texture image and a texture image set corresponding to the target texture image; texture presented by the texture image in the texture image set and texture presented by the target texture image are matched with each other, and the texture image set is obtained based on texture images with different resolutions;
respectively carrying out frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image; the frequency corresponding to the first texture image set is smaller than the frequency corresponding to the second texture image set, and the frequency corresponding to the first target texture image is smaller than the frequency corresponding to the second target texture image;
performing image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image;
performing image enhancement on the second target texture image based on the second texture image set and a reference texture image to obtain a second enhanced texture image corresponding to the second target texture image; the reference texture image comprises at least one of the first target texture image and the first enhanced texture image;
and fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image.
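By way of illustration only, and not as part of the claimed subject matter, the data flow of claim 1 may be sketched as follows; decompose, enhance_low, enhance_high and fuse are hypothetical stand-ins for the frequency decomposition, the two image enhancement steps and the fusion step recited above:

    def reconstruct(target, texture_set, decompose, enhance_low, enhance_high, fuse):
        # Frequency decomposition of the texture image set and the target image.
        low_set, high_set = decompose(texture_set)   # first / second texture image sets
        low_tgt, high_tgt = decompose(target)        # first / second target texture images
        # Low-frequency enhancement of the first target texture image.
        first_enhanced = enhance_low(low_set, low_tgt)
        # High-frequency enhancement, using the low-frequency results as reference.
        second_enhanced = enhance_high(high_set, high_tgt, low_tgt, first_enhanced)
        # Fusion into the reconstructed texture image.
        return fuse(first_enhanced, second_enhanced)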
2. The method according to claim 1, wherein the respectively performing frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set, and a first target texture image and a second target texture image corresponding to the target texture image, includes:
performing Gaussian decomposition on the texture image set to obtain a first texture image set and a second texture image set corresponding to the texture image set;
and carrying out Gaussian decomposition on the target texture image to obtain a first target texture image and a second target texture image corresponding to the target texture image.
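A common reading of the Gaussian decomposition recited here is that a Gaussian blur supplies the low-frequency component and the blur residual supplies the high-frequency component. The following PyTorch sketch assumes that reading, with an arbitrary fixed kernel size and sigma:

    import torch
    import torch.nn.functional as F

    def gaussian_decompose(img, kernel_size=5, sigma=1.0):
        # img: float tensor of shape (N, C, H, W).
        ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
        k2d = torch.outer(g, g)
        k2d = (k2d / k2d.sum()).expand(img.shape[1], 1,
                                       kernel_size, kernel_size).contiguous()
        low = F.conv2d(img, k2d, padding=kernel_size // 2, groups=img.shape[1])
        high = img - low  # the residual keeps edges and fine texture detail
        return low, high

By construction, low + high reproduces the input, which is what allows the two frequency bands to be enhanced independently and fused afterwards.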
3. The method according to claim 1, wherein the performing image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image includes:
respectively carrying out convolution processing on the first texture image set and the first target texture image to obtain initial texture feature maps respectively corresponding to the first texture image set and the first target texture image;
splicing the initial texture feature maps respectively corresponding to the first texture image set and the first target texture image to obtain an intermediate texture feature map;
performing attention processing on the intermediate texture feature map to obtain a target texture feature map;
and obtaining a first enhanced texture image corresponding to the target texture image based on the target texture feature map and the first target texture image.
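One minimal realization of this flow is sketched below; the channel counts and the channel-attention gate (a 1x1 convolution followed by a sigmoid) are assumptions, since the claim leaves the attention mechanism unspecified:

    import torch
    import torch.nn as nn

    class LowFreqEnhance(nn.Module):
        def __init__(self, c_set, c_tgt, c_feat=64):
            super().__init__()
            self.conv_set = nn.Conv2d(c_set, c_feat, 3, padding=1)
            self.conv_tgt = nn.Conv2d(c_tgt, c_feat, 3, padding=1)
            self.attn = nn.Sequential(nn.Conv2d(2 * c_feat, 2 * c_feat, 1), nn.Sigmoid())
            self.out = nn.Conv2d(2 * c_feat, c_tgt, 3, padding=1)

        def forward(self, low_set, low_target):
            f_set = self.conv_set(low_set)          # initial texture feature map (set)
            f_tgt = self.conv_tgt(low_target)       # initial texture feature map (target)
            mid = torch.cat([f_set, f_tgt], dim=1)  # intermediate texture feature map
            attended = mid * self.attn(mid)         # attention processing
            return low_target + self.out(attended)  # first enhanced texture image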
4. The method according to claim 3, wherein the respectively performing convolution processing on the first texture image set and the first target texture image to obtain initial texture feature maps respectively corresponding to the first texture image set and the first target texture image includes:
performing convolution processing on the first texture image set based on at least two first convolution kernels to obtain at least two first convolution feature maps, and splicing the at least two first convolution feature maps to obtain an initial texture feature map corresponding to the first texture image set; the at least two first convolution kernels comprise first convolution kernels of at least two sizes;
performing convolution processing on the first target texture image based on at least two second convolution kernels to obtain at least two second convolution feature maps, and splicing the at least two second convolution feature maps to obtain an initial texture feature map corresponding to the first target texture image; the at least two second convolution kernels comprise second convolution kernels of at least two sizes.
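As an illustration of convolution kernels "of at least two sizes", the sketch below uses 3x3 and 5x5 kernels; the claim covers any such combination:

    import torch
    import torch.nn as nn

    class MultiKernelConv(nn.Module):
        def __init__(self, c_in, c_out):
            super().__init__()
            self.k3 = nn.Conv2d(c_in, c_out, 3, padding=1)
            self.k5 = nn.Conv2d(c_in, c_out, 5, padding=2)

        def forward(self, x):
            # Padding keeps the spatial size equal, so the two convolution
            # feature maps can be spliced along the channel dimension.
            return torch.cat([self.k3(x), self.k5(x)], dim=1)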
5. The method according to claim 3, wherein the stitching the initial texture feature maps respectively corresponding to the first texture image set and the first target texture image to obtain an intermediate texture feature map includes:
splicing the initial texture feature maps corresponding to the first texture image set and the first target texture image respectively to obtain a first texture feature map, and rectifying the first texture feature map to obtain a second texture feature map;
and carrying out convolution processing on the second texture feature map to obtain a third texture feature map, carrying out up-sampling processing on the third texture feature map to obtain a fourth texture feature map, and carrying out rectification processing on the fourth texture feature map to obtain an intermediate texture feature map.
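Reading "rectification" as a ReLU non-linearity, which is one common interpretation, the chain of claim 5 might look like the sketch below; conv stands for an assumed convolution layer and the up-sampling factor of 2 is likewise an assumption:

    import torch
    import torch.nn.functional as F

    def intermediate_feature_map(f_set, f_tgt, conv):
        first = torch.cat([f_set, f_tgt], dim=1)  # first texture feature map
        second = F.relu(first)                    # rectification
        third = conv(second)                      # third texture feature map
        fourth = F.interpolate(third, scale_factor=2,
                               mode='bilinear', align_corners=False)  # up-sampling
        return F.relu(fourth)                     # intermediate texture feature map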
6. The method according to claim 3, wherein the performing attention processing on the intermediate texture feature map to obtain a target texture feature map comprises:
sequentially performing at least two ordered attention processing operations on the intermediate texture feature map to obtain at least two ordered attention texture feature maps;
splicing the at least two ordered attention texture feature maps to obtain a first spliced texture feature map, and performing convolution processing on the first spliced texture feature map to obtain a convolution texture feature map;
acquiring an ending attention texture feature map from the at least two ordered attention texture feature maps, and fusing the ending attention texture feature map and the convolution texture feature map to obtain a fused texture feature map;
and splicing the at least two ordered attention texture feature maps and the fused texture feature map to obtain a second spliced texture feature map, and performing convolution processing on the second spliced texture feature map to obtain a target texture feature map.
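The dense attention cascade of claim 6 may be sketched as follows; the inner attention block (a 1x1 convolution followed by a sigmoid gate) and the block count of two are assumptions, as the claim leaves both open:

    import torch
    import torch.nn as nn

    class AttentionCascade(nn.Module):
        def __init__(self, c, n_blocks=2):
            super().__init__()
            self.blocks = nn.ModuleList(
                nn.Sequential(nn.Conv2d(c, c, 1), nn.Sigmoid()) for _ in range(n_blocks))
            self.conv1 = nn.Conv2d(n_blocks * c, c, 1)
            self.conv2 = nn.Conv2d((n_blocks + 1) * c, c, 1)

        def forward(self, mid):
            maps, x = [], mid
            for blk in self.blocks:            # ordered attention processing
                x = x * blk(x)
                maps.append(x)
            stitched = torch.cat(maps, dim=1)  # first spliced texture feature map
            conv_map = self.conv1(stitched)    # convolution texture feature map
            fused = maps[-1] + conv_map        # fuse with the ending attention map
            return self.conv2(torch.cat(maps + [fused], dim=1))  # target feature map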
7. The method according to claim 3, wherein the obtaining a first enhanced texture image corresponding to the target texture image based on the target texture feature map and the first target texture image comprises:
carrying out convolution processing on the target texture feature map to obtain a supplementary texture feature map;
and fusing the supplementary texture feature map and the first target texture image to obtain a first enhanced texture image corresponding to the target texture image.
8. The method according to claim 1, wherein the performing image enhancement on the second target texture image based on the second texture image set and the reference texture image to obtain a second enhanced texture image corresponding to the second target texture image includes:
carrying out average processing on the second texture image set and the second target texture image to obtain an average texture image;
performing stitching processing on the average texture image, the first target texture image and the first enhanced texture image to obtain a stitched texture image;
performing residual processing on the stitched texture image to obtain a mask texture image;
and fusing the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image.
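A sketch of this high-frequency branch (claims 8 to 10 taken together) is given below; treating "residual processing" as a small convolutional block ending in a sigmoid that emits a mask, and "pixel-by-pixel fusion" as element-wise multiplication, are both assumptions, as are the channel counts:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HighFreqEnhance(nn.Module):
        def __init__(self, c_img, c_hidden=64):
            super().__init__()
            self.residual = nn.Sequential(
                nn.Conv2d(3 * c_img, c_hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(c_hidden, c_img, 3, padding=1), nn.Sigmoid())

        def forward(self, high_set, high_target, low_target, first_enhanced):
            # Average the high-frequency set together with the high-frequency target.
            avg = torch.stack(list(high_set) + [high_target]).mean(dim=0)
            # Up-sample the low-frequency references to match the average's
            # resolution (cf. claim 9).
            hw = avg.shape[-2:]
            up1 = F.interpolate(low_target, size=hw, mode='bilinear', align_corners=False)
            up2 = F.interpolate(first_enhanced, size=hw, mode='bilinear', align_corners=False)
            stitched = torch.cat([avg, up1, up2], dim=1)  # stitched texture image
            mask = self.residual(stitched)                # mask texture image
            return mask * avg                             # pixel-by-pixel fusion (cf. claim 10)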
9. The method of claim 8, wherein stitching the average texture image, the first target texture image, and the first enhanced texture image to obtain a stitched texture image comprises:
respectively carrying out up-sampling processing on the first target texture image and the first enhanced texture image to obtain a first up-sampling texture image corresponding to the first target texture image and a second up-sampling texture image corresponding to the first enhanced texture image; the resolutions of the first up-sampling texture image, the second up-sampling texture image and the average texture image are consistent;
and splicing the average texture image, the first up-sampling texture image and the second up-sampling texture image to obtain the stitched texture image.
10. The method of claim 8, wherein the fusing the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image comprises:
and carrying out pixel-by-pixel fusion on the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image.
11. The method according to claim 1, wherein the fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image comprises:
performing up-sampling processing on the first enhanced texture image, and fusing the up-sampled first enhanced texture image and the second enhanced texture image to obtain a fused texture image;
and carrying out convolution processing on the fused texture image to obtain a reconstructed texture image corresponding to the target texture image.
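A minimal sketch of this fusion step, assuming bilinear up-sampling and element-wise addition as the fusion operator (the claim fixes neither choice), with conv standing for an assumed output convolution layer:

    import torch.nn.functional as F

    def fuse_bands(first_enhanced, second_enhanced, conv):
        # Bring the low-frequency result up to the high-frequency resolution.
        up = F.interpolate(first_enhanced, size=second_enhanced.shape[-2:],
                           mode='bilinear', align_corners=False)
        fused = up + second_enhanced  # fused texture image
        return conv(fused)            # reconstructed texture image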
12. The method according to claim 1, wherein the method further comprises:
inputting the texture image set and the target texture image into a texture reconstruction model; the texture reconstruction model comprises an image decomposition network, a first image enhancement network, a second image enhancement network and an image reconstruction network;
inputting the texture image set and the target texture image into the image decomposition network for frequency decomposition to obtain a first texture image set and a second texture image set corresponding to the texture image set, and a first target texture image and a second target texture image corresponding to the target texture image;
inputting the first texture image set and the first target texture image into the first image enhancement network to obtain a first enhanced texture image corresponding to the target texture image;
inputting the second texture image set and the second target texture image into the second image enhancement network to obtain a second enhanced texture image corresponding to the target texture image;
inputting the first enhanced texture image and the second enhanced texture image into the image reconstruction network to obtain a reconstructed texture image corresponding to the target texture image.
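Purely as one plausible reading of claim 12, the four networks can be wired into a single trainable module; the sub-network architectures themselves are placeholders here:

    import torch.nn as nn

    class TextureReconstructionModel(nn.Module):
        def __init__(self, decomposition_net, enhance_net_1, enhance_net_2, recon_net):
            super().__init__()
            self.decompose = decomposition_net  # image decomposition network
            self.enhance1 = enhance_net_1       # first image enhancement network
            self.enhance2 = enhance_net_2       # second image enhancement network
            self.recon = recon_net              # image reconstruction network

        def forward(self, texture_set, target):
            low_set, high_set = self.decompose(texture_set)
            low_tgt, high_tgt = self.decompose(target)
            e1 = self.enhance1(low_set, low_tgt)
            e2 = self.enhance2(high_set, high_tgt, low_tgt, e1)
            return self.recon(e1, e2)

Because the whole pipeline is one nn.Module, it can be trained end to end with a standard reconstruction loss.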
13. The method according to any one of claims 1 to 12, wherein the acquiring the target texture image comprises:
acquiring a game texture image library;
and acquiring a game texture image with abnormal illumination from the game texture image library as a target texture image.
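The claim does not define how "abnormal illumination" is detected. Purely as a placeholder heuristic, not part of the claimed method, a selection pass over the library might flag images whose mean luminance is extreme; the thresholds below are arbitrary:

    import torch

    def is_abnormal_illumination(img, lo=0.15, hi=0.85):
        # img: float tensor (N, 3, H, W) with values in [0, 1].
        luma = 0.299 * img[:, 0] + 0.587 * img[:, 1] + 0.114 * img[:, 2]
        m = luma.mean().item()
        return m < lo or m > hi

    def select_targets(library):
        return [img for img in library if is_abnormal_illumination(img)]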
14. A texture image reconstruction apparatus, the apparatus comprising:
the texture image acquisition module is used for acquiring a target texture image and a texture image set corresponding to the target texture image; texture presented by the texture image in the texture image set and texture presented by the target texture image are matched with each other, and the texture image set is obtained based on texture images with different resolutions;
the image decomposition module is used for respectively carrying out frequency decomposition on the texture image set and the target texture image to obtain a first texture image set and a second texture image set corresponding to the texture image set and a first target texture image and a second target texture image corresponding to the target texture image; the frequency corresponding to the first texture image set is smaller than the frequency corresponding to the second texture image set, and the frequency corresponding to the first target texture image is smaller than the frequency corresponding to the second target texture image;
the first image enhancement module is used for carrying out image enhancement on the first target texture image based on the first texture image set to obtain a first enhanced texture image corresponding to the first target texture image;
the second image enhancement module is used for carrying out image enhancement on the second target texture image based on the second texture image set and a reference texture image to obtain a second enhanced texture image corresponding to the second target texture image; the reference texture image comprises at least one of the first target texture image and the first enhanced texture image;
and the image fusion module is used for fusing the first enhanced texture image and the second enhanced texture image to obtain a reconstructed texture image corresponding to the target texture image.
15. The apparatus of claim 14, wherein the image decomposition module is further configured to:
performing Gaussian decomposition on the texture image set to obtain a first texture image set and a second texture image set corresponding to the texture image set;
and carrying out Gaussian decomposition on the target texture image to obtain a first target texture image and a second target texture image corresponding to the target texture image.
16. The apparatus of claim 14, wherein the first image enhancement module is further configured to:
respectively carrying out convolution processing on the first texture image set and the first target texture image to obtain initial texture feature maps respectively corresponding to the first texture image set and the first target texture image;
splicing the initial texture feature maps respectively corresponding to the first texture image set and the first target texture image to obtain an intermediate texture feature map;
performing attention processing on the intermediate texture feature map to obtain a target texture feature map;
and obtaining a first enhanced texture image corresponding to the target texture image based on the target texture feature map and the first target texture image.
17. The apparatus of claim 16, wherein the first image enhancement module is further configured to:
performing convolution processing on the first texture image set based on at least two first convolution kernels to obtain at least two first convolution feature maps, and splicing the at least two first convolution feature maps to obtain an initial texture feature map corresponding to the first texture image set; the at least two first convolution kernels comprise first convolution kernels of at least two sizes;
performing convolution processing on the first target texture image based on at least two second convolution kernels to obtain at least two second convolution feature maps, and splicing the at least two second convolution feature maps to obtain an initial texture feature map corresponding to the first target texture image; the at least two second convolution kernels comprise second convolution kernels of at least two sizes.
18. The apparatus of claim 16, wherein the first image enhancement module is further configured to:
splicing the initial texture feature maps corresponding to the first texture image set and the first target texture image respectively to obtain a first texture feature map, and rectifying the first texture feature map to obtain a second texture feature map;
and carrying out convolution processing on the second texture feature map to obtain a third texture feature map, carrying out up-sampling processing on the third texture feature map to obtain a fourth texture feature map, and carrying out rectification processing on the fourth texture feature map to obtain an intermediate texture feature map.
19. The apparatus of claim 16, wherein the first image enhancement module is further configured to:
sequentially performing at least two ordered attention processing operations on the intermediate texture feature map to obtain at least two ordered attention texture feature maps;
splicing the at least two ordered attention texture feature maps to obtain a first spliced texture feature map, and performing convolution processing on the first spliced texture feature map to obtain a convolution texture feature map;
acquiring an ending attention texture feature map from the at least two ordered attention texture feature maps, and fusing the ending attention texture feature map and the convolution texture feature map to obtain a fused texture feature map;
and splicing the at least two ordered attention texture feature maps and the fused texture feature map to obtain a second spliced texture feature map, and performing convolution processing on the second spliced texture feature map to obtain a target texture feature map.
20. The apparatus of claim 16, wherein the first image enhancement module is further configured to:
carrying out convolution processing on the target texture feature map to obtain a supplementary texture feature map;
and fusing the supplementary texture feature map and the first target texture image to obtain a first enhanced texture image corresponding to the target texture image.
21. The apparatus of claim 14, wherein the second image enhancement module is further configured to:
carrying out average processing on the second texture image set and the second target texture image to obtain an average texture image;
performing stitching processing on the average texture image, the first target texture image and the first enhanced texture image to obtain a stitched texture image;
performing residual processing on the stitched texture image to obtain a mask texture image;
and fusing the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image.
22. The apparatus of claim 21, wherein the second image enhancement module is further configured to:
respectively carrying out up-sampling processing on the first target texture image and the first enhanced texture image to obtain a first up-sampling texture image corresponding to the first target texture image and a second up-sampling texture image corresponding to the first enhanced texture image; the resolutions of the first up-sampling texture image, the second up-sampling texture image and the average texture image are consistent;
and splicing the average texture image, the first up-sampling texture image and the second up-sampling texture image to obtain the stitched texture image.
23. The apparatus of claim 21, wherein the second image enhancement module is further configured to:
and carrying out pixel-by-pixel fusion on the mask texture image and the average texture image to obtain a second enhanced texture image corresponding to the target texture image.
24. The apparatus of claim 14, wherein the image fusion module is further configured to:
performing up-sampling processing on the first enhanced texture image, and fusing the up-sampled first enhanced texture image and the second enhanced texture image to obtain a fused texture image;
and carrying out convolution processing on the fused texture image to obtain a reconstructed texture image corresponding to the target texture image.
25. The apparatus of claim 14, wherein the apparatus is further configured to:
inputting the texture image set and the target texture image into a texture reconstruction model; the texture reconstruction model comprises an image decomposition network, a first image enhancement network, a second image enhancement network and an image reconstruction network;
inputting the texture image set and the target texture image into the image decomposition network for frequency decomposition to obtain a first texture image set and a second texture image set corresponding to the texture image set, and a first target texture image and a second target texture image corresponding to the target texture image;
inputting the first texture image set and the first target texture image into the first image enhancement network to obtain a first enhanced texture image corresponding to the target texture image;
inputting the second texture image set and the second target texture image into the second image enhancement network to obtain a second enhanced texture image corresponding to the target texture image;
inputting the first enhanced texture image and the second enhanced texture image into the image reconstruction network to obtain a reconstructed texture image corresponding to the target texture image.
26. The apparatus of any one of claims 14 to 25, wherein the texture image acquisition module is further configured to:
acquiring a game texture image library;
and acquiring a game texture image with abnormal illumination from the game texture image library as a target texture image.
27. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 13 when executing the computer program.
28. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 13.
CN202310013253.1A 2023-01-05 2023-01-05 Texture image reconstruction method, apparatus, computer device and storage medium Active CN115713585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310013253.1A CN115713585B (en) 2023-01-05 2023-01-05 Texture image reconstruction method, apparatus, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN115713585A CN115713585A (en) 2023-02-24
CN115713585B (en) 2023-05-02

Family

ID=85236180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310013253.1A Active CN115713585B (en) 2023-01-05 2023-01-05 Texture image reconstruction method, apparatus, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN115713585B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953330B (en) * 2023-03-13 2023-05-26 腾讯科技(深圳)有限公司 Texture optimization method, device, equipment and storage medium for virtual scene image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022241995A1 (en) * 2021-05-18 2022-11-24 广东奥普特科技股份有限公司 Visual image enhancement generation method and system, device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101910894B1 (en) * 2012-05-23 2018-10-24 삼성전자주식회사 Apparatus and method for reconstructing image
CN115131256A (en) * 2021-03-24 2022-09-30 华为技术有限公司 Image processing model, and training method and device of image processing model
CN113284051B (en) * 2021-07-23 2021-12-07 之江实验室 Face super-resolution method based on frequency decomposition multi-attention machine system
CN113674165A (en) * 2021-07-27 2021-11-19 浙江大华技术股份有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022241995A1 (en) * 2021-05-18 2022-11-24 广东奥普特科技股份有限公司 Visual image enhancement generation method and system, device, and storage medium


Similar Documents

Publication Publication Date Title
Jam et al. A comprehensive review of past and present image inpainting methods
CN113838176B (en) Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN112950775A (en) Three-dimensional face model reconstruction method and system based on self-supervision learning
CN114820905B (en) Virtual image generation method and device, electronic equipment and readable storage medium
CN110378947B (en) 3D model reconstruction method and device and electronic equipment
US20220377300A1 (en) Method for displaying objects, electronic device, and storage medium
CN111127624A (en) Illumination rendering method and device based on AR scene
KR102311796B1 (en) Method and Apparatus for Deblurring of Human Motion using Localized Body Prior
Akimoto et al. 360-degree image completion by two-stage conditional gans
Liu et al. Painting completion with generative translation models
CN115713585B (en) Texture image reconstruction method, apparatus, computer device and storage medium
CN116228943B (en) Virtual object face reconstruction method, face reconstruction network training method and device
US20230051749A1 (en) Generating synthesized digital images utilizing class-specific machine-learning models
CN116385619B (en) Object model rendering method, device, computer equipment and storage medium
CN115953330B (en) Texture optimization method, device, equipment and storage medium for virtual scene image
CN116797768A (en) Method and device for reducing reality of panoramic image
CN116664422A (en) Image highlight processing method and device, electronic equipment and readable storage medium
CN114782460B (en) Image segmentation model generation method, image segmentation method and computer equipment
CN114119923B (en) Three-dimensional face reconstruction method and device and electronic equipment
Xu et al. An edge guided coarse-to-fine generative network for image outpainting
CN115578497A (en) Image scene relighting network structure and method based on GAN network
Qin et al. Multi-level augmented inpainting network using spatial similarity
AKIMOTO et al. Image completion of 360-degree images by cGAN with residual multi-scale dilated convolution
CN114782256B (en) Image reconstruction method and device, computer equipment and storage medium
CN116071478B (en) Training method of image reconstruction model and virtual scene rendering method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code: ref country code: HK; ref legal event code: DE; ref document number: 40080507; country of ref document: HK