CN116310046B - Image processing method, device, computer and storage medium - Google Patents

Image processing method, device, computer and storage medium

Info

Publication number
CN116310046B
Authority
CN
China
Prior art keywords
image
texture
rendering
sample
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310548176.XA
Other languages
Chinese (zh)
Other versions
CN116310046A (en)
Inventor
徐东 (Xu Dong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310548176.XA priority Critical patent/CN116310046B/en
Publication of CN116310046A publication Critical patent/CN116310046A/en
Application granted granted Critical
Publication of CN116310046B publication Critical patent/CN116310046B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device, a computer and a storage medium, wherein the method comprises the following steps: acquiring a first texture grid of a first viewpoint image, and performing texture analysis on the first viewpoint image to obtain first texture coloring data; rendering the first texture grid and the first texture coloring data of the first viewpoint image, and determining a depth image and a rendering image corresponding to the first viewpoint image; and performing texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image. By adopting the method and the device, the stability and the efficiency of image rendering can be improved.

Description

Image processing method, device, computer and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method, an image processing device, a computer, and a storage medium.
Background
In mixed reality and other games on terminal devices, rendering and generating multiple three-dimensional (3D) objects often fails or produces results that are not realistic enough. To solve this problem, a 3D rendering model for refining the camera angle and the like is usually trained; such training requires a large number of samples and still cannot handle problems such as camera drift, grid distortion, and texture blurring, so the rendering efficiency of an image is low, the rendering effect is poor, and the stability is low.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a computer and a storage medium, which can improve the stability and efficiency of image rendering.
In one aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first texture grid of a first viewpoint image, and performing texture analysis on the first viewpoint image to obtain first texture coloring data;
rendering the first texture grid and the first texture coloring data of the first viewpoint image, and determining a depth image and a rendering image corresponding to the first viewpoint image;
and performing texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image.
In one aspect, an embodiment of the present application provides an image processing method, including:
obtaining a viewpoint image sample, obtaining a first sample texture grid of the viewpoint image sample, and performing texture analysis on the viewpoint image sample to obtain first sample texture coloring data;
inputting a first sample texture grid and first sample texture coloring data of a viewpoint image sample into an initial restoration rendering model for rendering, and determining a sample depth image and a sample rendering image corresponding to the viewpoint image sample;
inputting the sample depth image and the sample rendering image into an initial image diffusion model for texture optimization processing to obtain a sample corrected image;
and carrying out texture analysis on the sample corrected image to obtain second sample texture coloring data, and carrying out parameter adjustment on the initial restoration rendering model and the initial image diffusion model based on the first sample texture coloring data and the second sample texture coloring data until the model convergence condition is reached to obtain a restoration rendering model corresponding to the initial restoration rendering model and an image diffusion model corresponding to the initial image diffusion model.
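To make the training flow above concrete, the following is a minimal sketch of one training step, assuming PyTorch. The module classes RepairRenderModel and ImageDiffusionModel, the helper reparse_texture_shading, and the unweighted sum of the two loss terms are illustrative placeholders, not the architecture or loss weighting disclosed here.

```python
# Hypothetical training-step sketch in PyTorch; the networks are toy stand-ins.
import torch
import torch.nn as nn

class RepairRenderModel(nn.Module):           # placeholder for the initial repair rendering model
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 4, 3, padding=1)
    def forward(self, mesh_feat, shading):    # mesh_feat unused in this toy stand-in
        out = self.net(shading)
        return out[:, :1], out[:, 1:]         # (sample depth image, sample rendered image)

class ImageDiffusionModel(nn.Module):         # placeholder for the initial image diffusion model
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(4, 3, 3, padding=1)
    def forward(self, depth, rendered):
        return self.net(torch.cat([depth, rendered], dim=1))   # sample corrected image

def reparse_texture_shading(image):
    # placeholder for texture analysis of the sample corrected image
    return image

render_model, diffusion_model = RepairRenderModel(), ImageDiffusionModel()
optimizer = torch.optim.Adam(
    list(render_model.parameters()) + list(diffusion_model.parameters()), lr=1e-4)

def train_step(mesh_feat, first_shading, sample_label):
    depth, rendered = render_model(mesh_feat, first_shading)
    corrected = diffusion_model(depth, rendered)
    second_shading = reparse_texture_shading(corrected)
    # parameter adjustment based on the first and second sample texture coloring data,
    # plus a supervised term against the sample label (weights are assumptions)
    loss = nn.functional.mse_loss(second_shading, first_shading) \
         + nn.functional.mse_loss(corrected, sample_label)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# usage with dummy tensors
loss_value = train_step(torch.zeros(1, 8), torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```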
An aspect of an embodiment of the present application provides an image processing apparatus, including:
the texture acquisition module is used for acquiring a first texture grid of the first viewpoint image;
the coloring acquisition module is used for carrying out texture analysis on the first viewpoint image to obtain first texture coloring data;
the image rendering module is used for rendering the first texture grid and the first texture coloring data of the first viewpoint image and determining a depth image and a rendering image corresponding to the first viewpoint image;
and the texture optimization module is used for performing texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image.
Wherein, this coloring acquisition module includes:
the object unfolding unit is used for determining an unfolding boundary of an object model of the object to be drawn based on display information corresponding to the object to be drawn in the first viewpoint image, and unfolding the object model of the object to be drawn according to the unfolding boundary to obtain initial texture data;
the coordinate association unit is used for carrying out coordinate association on the initial texture data and the first viewpoint image to obtain a first texture coordinate system;
and the texture combining unit is used for carrying out optimization processing on the initial texture data based on the first texture grid, generating first texture data, and combining the first texture coordinate system with the first texture data to obtain first texture coloring data.
Wherein, this image rendering module includes:
the grid optimization unit is used for carrying out grid optimization processing on the first texture grid of the first viewpoint image to obtain an optimized texture grid;
the combined rendering unit is used for integrally rendering the first texture coloring data and the optimized texture grid to generate a rendering image corresponding to the first viewpoint image;
the depth analysis unit is used for carrying out depth analysis on the first viewpoint image by adopting the first texture grid to obtain image depth information of the first viewpoint image;
and the depth coloring unit is used for performing coloring depth rendering on the image depth information and the first texture coloring data to obtain a depth image corresponding to the first viewpoint image.
Wherein, this texture optimization module includes:
the pixel analysis unit is used for carrying out pixel change analysis on the rendered image and determining N rendering areas corresponding to the rendered image; n is a positive integer; the deformation degree of the pixels corresponding to different rendering areas is different when the pixels move;
the noise adding processing unit is used for adding noise data into the rendering image based on the N rendering areas and the depth image corresponding to the first viewpoint image, and generating a noise image corresponding to the rendering image;
and the image denoising unit is used for denoising the noise image and generating a first corrected image.
Wherein, this pixel analysis unit includes:
a vector obtaining subunit, configured to obtain a first view corresponding to the first view image, and obtain, in a view coordinate system of the first view, a grid normal vector of a unit grid that forms the first texture grid;
a relationship determination subunit configured to determine a relative relationship between the rendered image and the image display plane at the unit mesh based on the mesh normal vector of the unit mesh, and divide the rendered image into N initial rendering areas based on the relative relationship;
the region adjustment subunit is used for acquiring an application cache image, and performing region adjustment on the N initial rendering regions by adopting the application cache image to obtain N rendering regions corresponding to the rendered image.
Wherein, this pixel analysis unit includes:
the region dividing subunit is used for inputting the rendering image into the region dividing model for pixel change analysis and determining N rendering regions corresponding to the rendering image;
the apparatus further comprises:
the sample acquisition module is used for acquiring continuous M-frame area division image samples;
the sample dividing module is used for inputting a first divided image sample in the M frame area divided image samples into the initial area divided model for pixel change analysis and determining N sample rendering areas corresponding to the first divided image sample; m is a positive integer; the first divided image sample is any one of the M frame region divided image samples;
the region analysis module is used for acquiring M frame region division image samples and respectively corresponding sample region change information in N sample rendering regions; each sample region change information is used for representing the change condition of the M frame region division image samples in the corresponding sample rendering region;
the model generation module is used for carrying out parameter adjustment on the initial region division model based on the standard region change information corresponding to the N sample rendering regions and the sample region change information corresponding to the N sample rendering regions respectively until the parameters are converged to obtain the region division model.
The noise adding processing unit is specifically configured to:
in the ith noise adding iteration, adding noise data into the image to be noise-added corresponding to the rendering image according to the noise adding iteration round, N rendering areas and the depth image, and generating a noise image i corresponding to the rendering image; i is an integer; the iteration round of adding noise is i;
the image denoising unit is specifically used for:
denoising the noise image i to generate a corrected image i; and the corrected image i when the ith noise adding iteration meets the iteration completion condition is the first corrected image.
Wherein, this add noise processing unit includes:
an initial obtaining subunit, configured to obtain initial noise data in an ith noise adding iteration if i is an iteration first initial value;
the mask determining subunit is used for determining mask data of the rendered image according to the noise adding iteration round and the N rendering areas;
a first generation subunit, configured to add initial noise data to the rendered image by using mask data, to generate a noise image i of the rendered image; the image to be added with noise corresponding to the rendering image is the rendering image;
the second generation subunit is configured to determine, in an ith noise adding iteration, a noise image j as an image to be noise added of the rendered image, obtain an iteration interval to which a noise adding iteration round belongs, determine, based on the iteration interval, noise data i from a depth image and initial noise data of the rendered image, and add the noise data i to the image to be noise added of the rendered image to generate a noise image i of the rendered image; the noise image j is the noise image generated in the previous noise adding iteration of the ith noise adding iteration.
The N rendering areas comprise a holding area, a changing area and a middle area; the mask determining subunit is specifically configured to:
acquiring a first rendering area to which a kth pixel point forming a rendering image belongs, and if the first rendering area is a holding area, determining a first mask value corresponding to the holding area as a pixel mask corresponding to the kth pixel point; k is a positive integer;
if the first rendering area is a change area, determining a second mask value corresponding to the change area as a pixel mask corresponding to the kth pixel point;
if the first rendering area is the middle area, determining a pixel mask corresponding to a kth pixel point based on the noise adding iteration round;
When the pixel masks respectively corresponding to the pixel points forming the rendering image are obtained, the pixel masks respectively corresponding to the pixel points form mask data of the rendering image.
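As a rough illustration of how the mask data might be assembled from the three region types, the sketch below assigns fixed mask values to the holding area and change area and an iteration-dependent value to the middle area. The concrete values (0.0 and 1.0) and the linear ramp over noise adding iteration rounds are assumptions made for illustration, not values given in the text.

```python
import numpy as np

def build_mask_data(region_map: np.ndarray, noise_iter: int, total_iters: int) -> np.ndarray:
    """Assemble per-pixel mask data from a region map of the rendered image.

    region_map: H x W array with 0 = holding area, 1 = middle area, 2 = change area.
    """
    mask = np.empty(region_map.shape, dtype=np.float32)
    mask[region_map == 0] = 0.0                      # holding area: keep existing texture
    mask[region_map == 2] = 1.0                      # change area: allow new texture
    # middle area: pixel mask depends on the noise adding iteration round (assumed linear ramp)
    mask[region_map == 1] = noise_iter / max(total_iters - 1, 1)
    return mask

# usage sketch on a random region map
regions = np.random.default_rng(0).integers(0, 3, size=(8, 8))
mask_data = build_mask_data(regions, noise_iter=3, total_iters=10)
```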
The second generation subunit is specifically configured to:
inputting an image to be noisy of the rendered image and noise data i into an image diffusion model, and performing noise adding processing on the image to be noisy of the rendered image by adopting the noise data i in the image diffusion model to generate a noise image i of the rendered image;
the image denoising unit is specifically used for:
in the image diffusion model, a noise image i is subjected to denoising processing, and a corrected image i is generated.
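The noising and denoising steps inside the image diffusion model can be sketched as follows, assuming a simple linear noise schedule, a depth image used only as an extra conditioning channel, and a trivial stand-in denoiser; none of these specifics are prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise_step(image_to_noise, init_noise, mask_data, depth_image, i, num_steps=50):
    """One noise adding iteration i (illustrative scheduler, not the exact one used here)."""
    alpha = max(0.0, 1.0 - i / num_steps)             # assumed linear schedule
    noised = np.sqrt(alpha) * image_to_noise + np.sqrt(1.0 - alpha) * init_noise
    # only masked pixels (change / middle areas) receive noise; held pixels keep their value
    noise_image_i = mask_data[..., None] * noised + (1.0 - mask_data[..., None]) * image_to_noise
    condition = np.concatenate([noise_image_i, depth_image[..., None]], axis=-1)
    return noise_image_i, condition

def denoise_step(noise_image_i, condition):
    # placeholder for the diffusion denoiser; a real model would predict and remove
    # the injected noise conditioned on the depth channel
    return np.clip(noise_image_i, 0.0, 1.0)           # corrected image i

# usage sketch on dummy data
H, W = 16, 16
rendered, depth = rng.random((H, W, 3)), rng.random((H, W))
mask, init_noise = rng.random((H, W)), rng.normal(size=(H, W, 3))
noise_img, cond = add_noise_step(rendered, init_noise, mask, depth, i=10)
corrected = denoise_step(noise_img, cond)
```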
Wherein, this texture optimization module includes:
the iterative optimization unit is used for carrying out texture optimization processing on the rendering image corresponding to the kth texture optimization based on the depth image corresponding to the kth texture optimization of the first viewpoint image to obtain a first corrected image k; k is a positive integer; when k is a second initial value, the first viewpoint image is an original viewpoint image; when k is not the second initial value, the first viewpoint image is the first corrected image (k-1); the first corrected image (k-1) refers to the first corrected image obtained in the (k-1) th texture optimization;
the apparatus further comprises:
the grid acquisition module is used for acquiring a second texture grid k of the first corrected image k;
the texture projection module is used for carrying out texture analysis on the first corrected image k to obtain second texture coloring data k;
the texture redrawing module is used for carrying out texture redrawing processing based on the second texture coloring data k and the second texture grid k to generate a second corrected image k;
and the image determining module is used for determining the second corrected image k as a target corrected image corresponding to the original viewpoint image if the performance optimization degree of the image performance of the second corrected image k relative to the image performance of the original viewpoint image is larger than or equal to an optimization threshold value.
Wherein, this texture projection module includes:
the primary projection unit is used for acquiring a first viewpoint corresponding to the original viewpoint image, and carrying out texture analysis on the first corrected image k to obtain intermediate texture coloring data;
the buffer generating unit is used for rendering the second texture grid k, the intermediate texture coloring data and the first view point by adopting a differential renderer to generate an intermediate buffer image;
the parameter acquisition unit is used for acquiring texture processing parameters;
the parameter generating unit is used for carrying out error analysis on the intermediate cache image and the first corrected image k based on the texture processing parameters to obtain gradient adjustment parameters;
A texture determining unit, configured to determine the intermediate texture shading data as second texture shading data k if the gradient adjustment parameter is greater than or equal to the texture projection threshold;
and the texture adjusting unit is used for carrying out data adjustment on the intermediate texture coloring data based on the gradient adjusting parameters if the gradient adjusting parameters are smaller than the texture projection threshold, determining the adjusted intermediate texture coloring data as intermediate texture coloring data, and returning to carry out the step of adopting the differential renderer to render the second texture grid k, the intermediate texture coloring data and the first view point to generate an intermediate cache image.
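The projection loop formed by these units can be sketched as below; render_fn stands in for the differential renderer, and aggregating the weighted error into a single gradient adjustment parameter by a mean of absolute values is an assumption about how the error analysis is summarized. The stop condition follows the text as written: the intermediate data is accepted once the gradient adjustment parameter reaches the texture projection threshold.

```python
import numpy as np

def project_texture(corrected_image_k, mesh_k, viewpoint, texture_params,
                    init_shading, render_fn, threshold=1e-4, lr=0.1, max_iters=200):
    """Hedged sketch of the texture projection loop (illustrative, not the disclosed renderer)."""
    shading = init_shading.copy()                             # intermediate texture coloring data
    for _ in range(max_iters):
        cache_image = render_fn(mesh_k, shading, viewpoint)   # intermediate cache image
        error = texture_params * (cache_image - corrected_image_k)
        grad_adjust = float(np.mean(np.abs(error)))           # assumed aggregation of the error
        if grad_adjust >= threshold:
            return shading                                    # second texture coloring data k
        shading = shading - lr * error                        # data adjustment, then re-render
    return shading

# usage with a trivial identity "renderer" as placeholder; with these toy inputs
# the loop accepts the initial shading immediately
render_fn = lambda mesh, shading, view: shading
second_shading_k = project_texture(np.ones((16, 16, 3)), mesh_k=None, viewpoint=None,
                                   texture_params=np.ones((16, 16, 3)),
                                   init_shading=np.zeros((16, 16, 3)), render_fn=render_fn)
```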
Wherein the parameter acquisition unit includes:
the constraint determination subunit is used for acquiring N corrected rendering areas corresponding to the first corrected image k, and determining area constraint parameters corresponding to the first corrected image k based on the N corrected rendering areas;
and the parameter constraint subunit is used for acquiring the Gaussian blur kernel, and carrying out parameter constraint processing on the Gaussian blur kernel by adopting the regional constraint parameters to obtain texture processing parameters.
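One plausible reading of this constraint step is sketched below: a standard Gaussian blur kernel is built and applied to per-pixel area constraint parameters derived from the N corrected rendering areas, yielding smooth per-pixel texture processing parameters. The exact way the constraint is applied is an assumption.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """Standard 2D Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def texture_processing_params(region_constraints: np.ndarray, size=5, sigma=1.5) -> np.ndarray:
    """Blur the H x W region constraint map (e.g. 0 for held areas, 1 for changed areas)."""
    kernel = gaussian_kernel(size, sigma)
    H, W = region_constraints.shape
    pad = size // 2
    padded = np.pad(region_constraints, pad, mode="edge")
    out = np.zeros_like(region_constraints, dtype=np.float64)
    for dy in range(size):                        # naive convolution, fine for a sketch
        for dx in range(size):
            out += kernel[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

params = texture_processing_params(np.random.default_rng(1).random((32, 32)))
```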
An aspect of an embodiment of the present application provides an image processing apparatus, including:
the viewpoint acquisition module is used for acquiring viewpoint image samples;
the sample analysis module is used for acquiring a first sample texture grid of the viewpoint image sample, and performing texture analysis on the viewpoint image sample to obtain first sample texture coloring data;
the sample rendering module is used for inputting a first sample texture grid and first sample texture coloring data of the viewpoint image sample into the initial restoration rendering model for rendering, and determining a sample depth image and a sample rendering image corresponding to the viewpoint image sample;
the sample optimization module is used for inputting the sample depth image and the sample rendering image into the initial image diffusion model for texture optimization processing to obtain a sample correction image;
the sample projection module is used for carrying out texture analysis on the sample correction image to obtain second sample texture coloring data;
the parameter adjustment module is used for carrying out parameter adjustment on the initial restoration rendering model and the initial image diffusion model based on the first sample texture coloring data and the second sample texture coloring data;
the model determining module is used for obtaining the repair rendering model corresponding to the initial repair rendering model and the image diffusion model corresponding to the initial image diffusion model until the model convergence condition is reached.
Wherein, this viewpoint acquisition module includes:
the grid deformation unit is used for acquiring the sample object and an initial object texture grid corresponding to the sample object, performing geometric deformation processing on the initial object texture grid, and performing grid Laplacian regularization processing to generate an optimized texture grid;
the object adjusting unit is used for adjusting the sample object by adopting the optimized texture grid to obtain a rendering object;
and the sample generation unit is used for adding an image background to the rendering object and generating a viewpoint image sample.
Wherein, this parameter adjustment module includes:
the loss generation unit is used for acquiring the sample label and generating a first loss function based on the sample label and the sample correction image;
the loss generation unit is further configured to generate a second loss function based on the first sample texture shading data and the second sample texture shading data;
and the parameter adjustment unit is used for performing parameter adjustment on the initial restoration rendering model and the initial image diffusion model by adopting the first loss function and the second loss function.
In one aspect, the embodiment of the application provides a computer device, which comprises a processor, a memory and an input/output interface;
the processor is respectively connected with the memory and the input/output interface, wherein the input/output interface is used for receiving data and outputting data, the memory is used for storing a computer program, and the processor is used for calling the computer program so as to enable the computer equipment containing the processor to execute the image processing method in one aspect of the embodiment of the application.
An aspect of an embodiment of the present application provides a computer-readable storage medium storing a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the image processing method in the aspect of an embodiment of the present application.
In one aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device and, when executed by the processor, cause the computer device to perform the methods provided in the various alternatives in an aspect of the embodiments of the application. In other words, the computer instructions, when executed by a processor, implement the methods provided in the various alternatives in one aspect of the embodiments of the present application.
The implementation of the embodiment of the application has the following beneficial effects:
In the embodiment of the application, a first texture grid of a first viewpoint image can be obtained, and texture analysis is performed on the first viewpoint image to obtain first texture coloring data; the first texture grid and the first texture coloring data of the first viewpoint image are rendered, and a depth image and a rendering image corresponding to the first viewpoint image are determined; and texture optimization processing is performed on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image. Through this process, texture resources with defects, namely the first texture grid, the first texture coloring data, and the like, can be extracted and optimized. Because the image depth information of the image (namely the depth image) is adopted to assist and guide the texture optimization process, the optimization performance of texture optimization can be improved, and the optimization of geometry (namely the texture grid), texture, and the like is integrated into a unified framework, so that image rendering is more stable, image rendering efficiency is improved, problems such as ghosting and blurring of textures in image rendering are effectively solved, and the image rendering effect and integrity are improved.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the application, and other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1a is a diagram of a network interaction architecture for image processing according to an embodiment of the present application;
FIG. 1b is a diagram of a data interaction architecture for image processing according to an embodiment of the present application;
FIG. 2 is a schematic view of an image processing scenario provided in an embodiment of the present application;
FIG. 3 is a flow chart of a method of image processing according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an image rendering scene provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a training scenario of a region division model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a texture optimization scenario provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an iterative optimization flow of images according to an embodiment of the present application;
FIG. 8 is a schematic view of an image post-processing scene according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a model training process in image processing according to an embodiment of the present application;
FIG. 10 is a schematic diagram of one possible model iterative training process provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic view of another image processing apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
If data of an object (such as a user) needs to be collected in the application, a prompt interface or a popup window is displayed before and during the collection. The prompt interface or popup window is used to inform the user that XXXX data is currently being collected, and the relevant data acquisition step is started only after the user's confirmation operation on the prompt interface or popup window is obtained; otherwise, the process ends. The acquired user data is used only in reasonable and legal scenarios, applications, and the like. Optionally, in some scenarios where user data is required but the user has not granted authorization, authorization may be requested from the user, and the user data is used again once authorization is granted. Moreover, the use of user data complies with the relevant provisions of laws and regulations. The user data is, for example, a first viewpoint image.
In the embodiment of the present application, please refer to fig. 1a. Fig. 1a is a network interaction architecture diagram for image processing provided in an embodiment of the present application. As shown in fig. 1a, a computer device 101 may obtain an image rendering request from any service device (such as the service device 102a, the service device 102b, or the service device 102c shown in fig. 1a), perform texture extraction and redrawing processing on the viewpoint image carried by the image rendering request, and obtain a corrected image after texture optimization, thereby achieving rendering optimization and rendering stability for the viewpoint image. Alternatively, the computer device 101 may acquire a viewpoint image uploaded to the computer device 101, perform texture extraction and redrawing processing on the viewpoint image, and obtain a corrected image after texture optimization. That is, the present application can be applied to any 3D image rendering enhancement scenario, for example, any application program capable of performing 3D image rendering, such as a game application, and any other scene capable of performing 3D image rendering. For example, in a game application, the computer device 101 generates a game scene frame for each service device participating in the game application and sends the generated game scene frame to each service device, where the game scene frames received by different service devices may differ. Any game scene frame may be obtained by performing texture optimization using the scheme of the present application: for example, an initial game frame corresponding to the service device 102a may be obtained, where the initial game frame may be regarded as a first viewpoint image; the initial game frame is subjected to texture extraction and texture redrawing processing to generate a game scene frame (i.e., a first corrected image), which is sent to the service device 102a. Alternatively, the computer device 101 receives an initial game frame, performs texture extraction and texture redrawing processing on the initial game frame by adopting the scheme of the present application, generates a game scene frame, renders the game scene frame, and so on. The above are just a few examples of scenarios to which the present application can be applied, and do not limit its use in other scenarios.
Alternatively, referring to fig. 1b, fig. 1b is a data interaction architecture diagram for image processing according to an embodiment of the present application. As shown in fig. 1b, the computer device 103 may obtain viewpoint images corresponding to the h viewpoints of the object to be drawn, perform texture extraction and texture redrawing processing on the viewpoint image corresponding to each viewpoint, generate a corrected image of the viewpoint image corresponding to the viewpoint, and perform image fusion on the corrected images corresponding to the h viewpoints, so as to obtain an image to be rendered for rendering the object to be drawn. Where h is a positive integer, the computer device 103 may obtain viewpoint images from the h viewpoint collecting devices, respectively, each viewpoint collecting device corresponding to one viewpoint, where the viewpoint collecting devices may be a camera or a game camera, for example, the viewpoint collecting device 104a, the viewpoint collecting device 104b, or the viewpoint collecting device 104c shown in fig. 1b, etc.
Specifically, referring to fig. 2, fig. 2 is a schematic view of an image processing scene according to an embodiment of the present application. As shown in fig. 2, the computer device may obtain a first texture grid 201 of the first view image, perform texture analysis on the first view image to obtain first texture coloring data 202, so that texture resources of the first view image may be obtained, and related data such as geometry and texture of the first view image may be obtained, which is used for optimizing image rendering, and improving stability and rendering effect of image rendering. Further, the first texture grid 201 and the first texture rendering data 202 of the first viewpoint image may be rendered, and a depth image 2031 and a rendered image 2032 corresponding to the first viewpoint image may be determined, where the depth image 2031 is used for representing image depth information of the first viewpoint image, the image depth information is used for storing a number of bits used for each pixel point of the first viewpoint image, measuring a color resolution of the first viewpoint image, determining information such as a number of gray levels that may be present for each pixel point of the first viewpoint image, and may further include information related to a distance of a surface of an object to be drawn of the first viewpoint corresponding to the first viewpoint image; the rendered image 2032 is an image obtained by performing mesh optimization on the first viewpoint image on the basis of the first texture mesh 201 and the first texture shading data 202. Further, texture optimization processing may be performed on the rendered image 2032 based on the depth image 2031 corresponding to the first viewpoint image, to obtain the first corrected image 204. Through the process, the geometry, texture and the like of the image are integrated into a unified frame, and the multifunctional integration of image optimization and the improvement of image optimization are realized, so that the stability and the efficiency of image rendering are improved, the problems of texture ghost, blurring and the like in the image rendering are effectively solved, and the effect and the integrity of the image rendering are improved.
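A minimal structural sketch of the fig. 2 flow is given below. The helper functions are hypothetical placeholders stubbed with dummy data; they only mirror the three steps (extracting the texture grid and texture coloring data, rendering into a depth image and a rendered image, and depth-guided texture optimization), not the actual implementation.

```python
import numpy as np

def extract_texture_mesh(image):
    # placeholder: a real implementation builds the first texture grid from image pixels
    return {"vertices": np.zeros((0, 3)), "faces": np.zeros((0, 3), dtype=int)}

def parse_texture_shading(image):
    # placeholder: a real implementation unwraps the object model and samples texture data
    return {"uv": np.zeros((0, 2)), "texels": np.zeros((0, 3))}

def render(mesh, shading, height=256, width=256):
    # placeholder: a real implementation rasterizes the grid with its coloring data
    rendered = np.zeros((height, width, 3), dtype=np.float32)
    depth = np.zeros((height, width), dtype=np.float32)
    return depth, rendered

def optimize_texture(rendered, depth):
    # placeholder for depth-guided noising and denoising (see step S303 below)
    return rendered

def process_first_viewpoint_image(first_view_image):
    mesh = extract_texture_mesh(first_view_image)        # first texture grid
    shading = parse_texture_shading(first_view_image)    # first texture coloring data
    depth, rendered = render(mesh, shading)              # depth image + rendered image
    return optimize_texture(rendered, depth)             # first corrected image

corrected = process_first_viewpoint_image(np.zeros((256, 256, 3), dtype=np.float32))
```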
It is understood that the service device mentioned in the embodiment of the present application may also be considered as a computer device, and the computer device in the embodiment of the present application includes, but is not limited to, a terminal device or a server. In other words, the computer device may be a server or a terminal device, or may be a system formed by the server and the terminal device. The above-mentioned terminal device may be an electronic device, including but not limited to a mobile phone, a tablet computer, a desktop computer, a notebook computer, a palm computer, a vehicle-mounted device, an Augmented Reality/Virtual Reality (AR/VR) device, a head-mounted display, a smart television, a wearable device, a smart speaker, a digital camera, a camera, and other mobile internet devices (mobile internet device, MID) with network access capability, or a terminal device in a scene such as a train, a ship, or a flight. As shown in fig. 1a, the terminal device may be a notebook (as shown by the service device 102b), a mobile phone (as shown by the service device 102c), or an in-vehicle device (as shown by the service device 102a); fig. 1a only illustrates some devices. Alternatively, the service device 102a refers to a device located in a vehicle 103, and the service device 102a may install an application program for performing 3D image rendering, such as a game application 1021. The servers mentioned above may be independent physical servers, or may be server clusters or distributed systems formed by a plurality of physical servers, or may be cloud servers that provide cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, vehicle-road collaboration, content distribution networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Optionally, the data related to the embodiment of the present application may be stored in a computer device, or may be stored based on a cloud storage technology or a blockchain network, and the like, which is not limited herein.
Further, referring to fig. 3, fig. 3 is a flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 3, one viewpoint image is taken as an example; in other words, in the method embodiment described in fig. 3, the image processing procedure includes the following steps:
step S301, a first texture grid of the first view image is acquired, and texture analysis is performed on the first view image to obtain first texture coloring data.
In the embodiment of the present application, the computer device may acquire viewpoint images corresponding to the h viewpoints, respectively, and execute steps S301 to S303 for the h viewpoint images, respectively, to acquire corrected images corresponding to the h viewpoint images, respectively. In the description, one view image is taken as an example, that is, the first view image may be any one view image of the h view images. Optionally, the viewpoint images corresponding to the h viewpoints respectively may be recorded as the original viewpoint images corresponding to the h viewpoints respectively, and any one of the original viewpoint images is determined as the first viewpoint image; alternatively, the object to be drawn may be detected for any one of the original viewpoint images, and the first viewpoint image may be generated based on the object to be drawn. Wherein the object to be drawn refers to any one object having a 3D structure, and the first viewpoint image refers to an image including the object to be drawn.
Further, the computer device may acquire a first texture grid of the first view image, the first texture grid being obtained from pixels constituting the first view image, for example, the computer device may detect pixels constituting the first view image, perform grid construction on the pixels constituting the first view image, and generate the first texture grid of the first view image; or detecting pixel points forming the first viewpoint image, sampling the pixel points forming the first viewpoint image to obtain sampling pixel points, constructing grids of the sampling pixel points, and generating a first texture grid of the first viewpoint image; or, the first viewpoint image can be input into a texture grid extraction model for grid construction to obtain a first texture grid of the first viewpoint image, wherein the texture grid extraction model is a trained model for extracting texture grids; alternatively, a mesh renderer may be employed to acquire a first texture mesh of the first viewpoint image; alternatively, a game debug (debug) engine may be employed to obtain the first texture grid; alternatively, an object surface of the object to be drawn may be acquired, mesh slicing processing may be performed on the object surface, a first texture mesh of the first viewpoint image may be generated, and the like. The above is several optional texture grid generation modes, and is not limited to use of other texture grid extraction modes.
Further, the first viewpoint image may be subjected to texture analysis to obtain first texture rendering data, and the texture analysis process may be regarded as a process of mapping the texture pixels of the first viewpoint image in the texture space to the pixels in the screen space, that is, a process of attaching an image to the surface of the three-dimensional object, for example, information of each pixel point in the first viewpoint image is mapped to the surface of the object to be drawn. The first texture coloring data is used for representing texture data of each pixel point in the first view image or representing texture data of each texture grid in the first texture grid corresponding to the first view image. The texture data generally refers to patterns, lines and the like on the surface of the object, and is used for representing information such as color, transparency and the like of corresponding pixel points or texture grids.
Specifically, in a texture analysis manner, an object model of an object to be drawn may be unfolded based on a first viewpoint image to obtain initial texture data, for example, the object model of the object to be drawn may be directly unfolded, and the first viewpoint image is mapped to the unfolded object model to obtain initial texture data; or, based on the display information corresponding to the object to be drawn in the first viewpoint image, determining the unfolding boundary of the object model of the object to be drawn, and unfolding the object model of the object to be drawn by using the unfolding boundary to obtain the initial texture data. Specifically, an object model of an object to be drawn can be obtained, and the object model is expanded to obtain texture plane coordinate information. Specifically, the object model can be directly unfolded to obtain texture plane coordinate information; or, acquiring display information corresponding to the object to be drawn in the first viewpoint image, where the display information is used to represent an object angle, an object area and the like displayed by the object to be drawn in the first viewpoint image, and an area except the object area in the object model may be a region to be expanded; and determining an unfolding boundary in the region to be unfolded based on the object angle, and unfolding the object model by using the unfolding boundary to obtain texture plane coordinate information. Alternatively, the texture plane coordinate information may be considered to include the entire surface of the object to be drawn. Further, initial texture data may be acquired from the first viewpoint image based on the texture plane coordinate information.
Coordinate association is performed on the initial texture data and the first viewpoint image to obtain a first texture coordinate system, where the first texture coordinate system is used to represent the position of the initial texture data in the first viewpoint image. For example, the texture image coordinates of the initial texture data in the first viewpoint image can be obtained, and normalization processing is performed on the texture image coordinates to determine the first texture coordinate system; the first texture coordinate system obtained in this way can be considered to represent the proportional position of the initial texture data in the first viewpoint image. For example, if the texture image coordinates of one piece of sub-data in the initial texture data are (5, 6), then after normalization the sub-data corresponds to the texture data at position (0.1, 0.2) in the first texture coordinate system, that is, the texture data located at 1/10 of the width and 1/5 of the height of the first viewpoint image. This processing enables the texture data to still be mapped into the image even if the size of the first viewpoint image is changed in subsequent processing, so that the fault tolerance of texture optimization is improved, and the stability and integrity of image rendering are improved. Alternatively, a pixel association relationship between the initial texture data and the first viewpoint image may be directly established, and the first texture coordinate system may be constructed based on the pixel association relationship; for example, each coordinate in the first texture coordinate system corresponds to one piece of sub-data in the initial texture data, and each coordinate is the position of the sub-data corresponding to that coordinate in the first viewpoint image, and so on. The initial texture data may be determined to be the first texture coloring data; alternatively, the initial texture data may be optimized based on the first texture grid to generate the first texture data, that is, the initial texture data may be mapped to the first texture grid so that it corresponds to the first texture grid. The first texture coordinate system is then combined with the first texture data to obtain the first texture coloring data. The first texture coordinate system may be considered as data composed of coordinates carrying texture data, where each coordinate consists of a transverse coordinate (U coordinate) and a longitudinal coordinate (V coordinate).
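The normalization of texture image coordinates can be illustrated with a small sketch; the 50 x 30 image size is an assumption chosen so that the numbers mirror the (5, 6) to (0.1, 0.2) example above.

```python
import numpy as np

def normalize_texture_coords(pixel_coords: np.ndarray, width: int, height: int) -> np.ndarray:
    """Map texture image coordinates (x, y) to normalized UV coordinates in [0, 1].

    Because the UV coordinates store proportions rather than absolute pixels,
    the texture data can still be mapped back onto the image after resizing.
    """
    uv = np.empty_like(pixel_coords, dtype=np.float64)
    uv[..., 0] = pixel_coords[..., 0] / width    # U: fraction of the image width
    uv[..., 1] = pixel_coords[..., 1] / height   # V: fraction of the image height
    return uv

# mirrors the (5, 6) -> (0.1, 0.2) example, assuming a 50 x 30 first viewpoint image
print(normalize_texture_coords(np.array([[5.0, 6.0]]), width=50, height=30))  # [[0.1 0.2]]
```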
Or, in one texture analysis mode, the computer device may acquire object coordinate information of the object to be drawn, acquire a coordinate correspondence between the object coordinate information of the object to be drawn and image space coordinate information of the first viewpoint image, and construct texture space coordinates based on the coordinate correspondence, where the coordinate correspondence is used to represent a transformation function between the object coordinate information and the image space coordinate information, and the texture space coordinates are used to represent a position of coordinates in the object coordinate information in the image space coordinate information, or to represent a position of coordinates in the image space coordinate information in the object coordinate information, so that the extracted texture data may subsequently be mapped to a three-dimensional object (i.e., the object to be drawn), or may be mapped to a two-dimensional plane (i.e., each image), so as to implement texture rendering of the image. Further, texture sampling may be performed from the first viewpoint image based on the texture space coordinates to obtain second texture data, and the second texture data is determined to be the first texture coloring data; or, the second texture data may be subjected to data adjustment based on the coordinate correspondence, so as to obtain the first texture coloring data.
Alternatively, in one texture parsing scheme, a game debug engine may be employed to obtain first texture shading data for a first viewpoint image.
Through the above procedure, first texture shading data may be obtained, which may represent texture data of all or part of the texture meshes (i.e., the first texture meshes) in the target texture meshes constituting the entire surface of the object to be drawn, which may be used for subsequent texture optimization.
Step S302, the first texture grid and the first texture coloring data of the first viewpoint image are rendered, and the depth image and the rendering image corresponding to the first viewpoint image are determined.
In the embodiment of the application, the first texture grid and the first texture coloring data of the first viewpoint image can be subjected to depth rendering to obtain the depth image corresponding to the first viewpoint image; and performing grid optimization rendering on the first texture grid and the first texture coloring data of the first viewpoint image to obtain a rendered image corresponding to the first viewpoint image.
Specifically, grid optimization processing can be performed on the first texture grid of the first viewpoint image to obtain an optimized texture grid, and the first texture coloring data and the optimized texture grid are integrated and rendered to generate a rendering image corresponding to the first viewpoint image. Depth analysis is performed on the first viewpoint image by adopting the first texture grid to obtain image depth information of the first viewpoint image, and the image depth information and the first texture coloring data are rendered to obtain a depth image corresponding to the first viewpoint image. Or, an image renderer may be used to perform grid rendering on the first texture grid and the first texture coloring data, so as to generate a rendered image corresponding to the first viewpoint image; and depth analysis is carried out on the first texture grid and the first texture coloring data by adopting the image renderer to generate a depth image corresponding to the first viewpoint image. In particular, the relevant explanation of the depth image and the rendered image may be seen in fig. 2. Alternatively, the first texture coloring data may be mapped into the first texture grid, generating a rendered image corresponding to the first viewpoint image; and image depth (RGB-D) mapping is performed on the rendered image to generate a depth image corresponding to the first viewpoint image, or image depth mapping is performed on the first viewpoint image to generate a depth image corresponding to the first viewpoint image. Optionally, the depth image is used for representing a distance between each pixel point in the first viewpoint image and the first viewpoint acquisition device corresponding to the first viewpoint image.
Alternatively, the first viewpoint may be denoted as v_t, where the first viewpoint is the viewpoint corresponding to the first viewpoint acquisition device; the depth image obtained at the first viewpoint v_t is denoted as D_t, and the rendered image is denoted as Q_t, where the rendered image Q_t refers to the coloring data of the texture grid seen from the first viewpoint v_t. Alternatively, the first viewpoint may be denoted as v_t = (r_t, φ_t, θ_t), where r_t is used for representing the radius of the first viewpoint acquisition device, φ_t is used for representing the angular orientation of the first viewpoint acquisition device, and θ_t is used for representing the elevation angle of the first viewpoint acquisition device, and so on.
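The viewpoint parameterization v_t = (r_t, φ_t, θ_t) can be converted into a camera position with the usual spherical-to-Cartesian relations; the axis convention below (y as the up axis, θ_t measured as elevation from the horizontal plane) is an assumption, since the text does not fix a coordinate convention.

```python
import numpy as np

def viewpoint_to_camera_position(r_t: float, phi_t: float, theta_t: float) -> np.ndarray:
    """Convert v_t = (r_t, phi_t, theta_t) into a 3D camera position.

    r_t: radius (distance of the viewpoint acquisition device from the object),
    phi_t: angular orientation (azimuth), theta_t: elevation angle, both in radians.
    """
    x = r_t * np.cos(theta_t) * np.cos(phi_t)
    y = r_t * np.sin(theta_t)                 # height from the elevation angle
    z = r_t * np.cos(theta_t) * np.sin(phi_t)
    return np.array([x, y, z])

# e.g. a viewpoint at radius 2, azimuth 45 degrees, elevation 30 degrees
cam = viewpoint_to_camera_position(2.0, np.deg2rad(45.0), np.deg2rad(30.0))
```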
Optionally, step S303 may be performed on a depth image and a rendering image obtained by any one of the generation manners of the depth image and the rendering image. Or in any of the above generation modes of the depth image and the rendering image, an application loading buffer for the object to be drawn may be obtained, and the application loading buffer is adopted to assist in rendering the first texture grid and the first texture coloring data of the first viewpoint image, so as to obtain the depth image and the rendering image corresponding to the first viewpoint image. For example, after any of the above-mentioned generation modes of the depth image and the rendering image is implemented, the implementation result of the generation mode may be optimized and adjusted by using the application loading buffer, so as to obtain a final depth image and a rendering image, and step S303 is executed for the depth image and the rendering image.
For example, referring to fig. 4, fig. 4 is a schematic view of an image rendering scene according to an embodiment of the present application. As shown in fig. 4, the computer device may obtain a first texture mesh 4011 of a first viewpoint image, obtain first texture shading data 4012 obtained by texture parsing of the first viewpoint image, and obtain an application load buffer 402 for an object to be drawn. The first texture grid 4011 and the first texture shading data 4012 are rendered, and the rendering process is optimized and adjusted by adopting the application loading buffer 402, so that a depth image 4031 and a rendering image 4032 corresponding to the first viewpoint image are obtained.
Optionally, the first texture grid and the first texture coloring data of the first viewpoint image may be input into a repair rendering model, and in the repair rendering model, the first texture grid and the first texture coloring data are rendered, and the grid is optimally rendered, so as to generate a depth image and a rendering image corresponding to the first viewpoint image. The repair rendering model is a pre-trained repair diffusion model, and specific training procedures can be seen in the relevant description shown in fig. 9.
Step S303, performing texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image.
In an embodiment of the present application, the computer device may perform texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image, to obtain a first corrected image, such as the first corrected image 406 shown in fig. 4. Specifically, the computer device may perform pixel change analysis on the rendered image, and determine N rendering areas corresponding to the rendered image; N is a positive integer; the deformation degree of the pixels corresponding to different rendering areas is different when the pixels are moved, so that the local consistency and the global consistency of the image can be promoted. Noise data is added into the rendered image based on the N rendering areas and the depth image corresponding to the first viewpoint image to generate a noise image corresponding to the rendered image; and the noise image is denoised to generate the first corrected image. The N rendering regions may include a holding region, a change region, and a middle region, where the holding region refers to a region that does not change when texture rendering is performed in the present application; the change region refers to a region which changes when texture rendering is performed in the present application; and the middle region lies between the holding region and the change region, and changes or is held depending on the processing procedure, that is, it is the region whose processing mode changes during the processing procedure of the present application. The N rendering regions are used to represent differences in region types, so each rendering region may correspond to one or more image regions in the rendered image. When the 3D object (e.g. the object to be drawn) is rotated by a small margin so that an inclination angle exists, coloring the texture grid of the 3D object may cause high distortion, because the cross section between the texture grid and the image display plane is small and the low resolution of the first viewpoint image is mapped into the first texture coloring data. The optimization processing of the texture can be realized through step S303, so that the accuracy of image rendering is improved, the distortion problem of image rendering is solved, and the image rendering effect is improved.
Specifically, when the pixel change analysis is performed on the rendered image and N rendering areas corresponding to the rendered image are determined, in one image division manner, the computer device may obtain a first viewpoint v_t corresponding to the first viewpoint image, and, in the viewpoint coordinate system of the first viewpoint v_t, obtain a grid normal vector of a unit grid constituting the first texture grid, denoted as n_z. For example, the unit grid of the first texture grid is a triangular patch, and the grid normal vector is used for representing the normal of the surface of the unit grid of the first texture grid at the first viewpoint v_t. A relative relationship between the rendered image and the image display plane at the unit grid is determined based on the grid normal vector of the unit grid, and the rendered image is divided into N initial rendering regions based on the relative relationship. Wherein the image display plane is used for representing a screen for displaying an image (such as the first viewpoint image or the rendered image), and the relative relationship is used for representing the distance, offset angle, and the like between the unit grid and the image display plane. Further, the N initial rendering regions may be determined as the N rendering regions corresponding to the rendered image. Alternatively, an application cache image may be acquired, which may be denoted as N_t, and region adjustment is performed on the N initial rendering regions by adopting the application cache image to obtain the N rendering regions corresponding to the rendered image. Wherein the application cache image N_t refers to the image area that can be displayed when the loaded image cannot be completely loaded in the image rendering process; therefore, in general, the area displayed by the application cache image N_t is a less variable area, that is, an area with a smaller degree of deformation between successive frames. Therefore, the application cache image N_t may be used to perform region adjustment on the N initial rendering regions to obtain the N rendering regions, which can improve the accuracy of dividing the N rendering regions and thus the accuracy of subsequent image rendering based on the N rendering regions. Ideally, if the current viewpoint provides a better rendering angle for certain previously rendered regions, it is desirable to "change" their existing texture; otherwise, the original texture should be "preserved", avoiding modification, to maintain consistency with the previous view. Assisting the texture optimization process with the application cache image improves the accuracy of image rendering and display.
For example, referring to fig. 4, the computer device may acquire, at the first viewpoint v_t, the grid normal vector 4041 of a unit grid constituting the first texture grid 4011. Based on the grid normal vector 4041, the relative relationship between the rendered image and the image display plane at the unit grid is determined, and the rendered image 4032 is divided into N initial rendering areas based on the relative relationship. An application cache image 4042 is acquired, and the application cache image 4042 is used to perform area adjustment on the N initial rendering areas to obtain the N rendering areas 405 corresponding to the rendered image 4032. The N rendering areas 405 include a holding area 4051, an intermediate area 4052 and a change area 4053.
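A minimal NumPy sketch of such a division, using the cosine between the grid normal vector and the viewing direction as the "relative relationship" and the application cache image as a stabilising mask, could look as follows. The thresholds and the per-pixel normal map representation are assumptions made for illustration only.

```python
import numpy as np

def divide_render_regions(normal_map, view_dir, cache_mask,
                          keep_thresh=0.8, change_thresh=0.3):
    """Divide a rendered image into holding / intermediate / change areas.

    normal_map : (H, W, 3) per-pixel unit normal of the unit grid (triangular
                 patch) visible at that pixel, expressed in the viewpoint
                 coordinate system of the first viewpoint.
    view_dir   : (3,) unit vector pointing from the surface towards the viewpoint.
    cache_mask : (H, W) bool, True where the application cache image indicates a
                 low-deformation area between successive frames.
    """
    cos_angle = np.clip(normal_map @ view_dir, -1.0, 1.0)   # (H, W) relative relationship
    regions = np.full(cos_angle.shape, 1, dtype=np.uint8)   # 1 = intermediate area
    regions[cos_angle >= keep_thresh] = 0                   # 0 = holding area
    regions[cos_angle <= change_thresh] = 2                 # 2 = change area
    # Area adjustment with the application cache image: cached (stable) pixels
    # are pulled back towards the holding area.
    regions[cache_mask & (regions == 1)] = 0
    return regions
```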
Optionally, in another image division manner, the rendered image may be input into a region division model for pixel change analysis to determine the N rendering areas corresponding to the rendered image. To train this model, M consecutive frames of region-division image samples may be acquired; since the M frames can be regarded as M continuously changing images, the degree of deformation of different pixel points during this change reflects the rendering area to which each pixel point belongs, so the frames can serve as model training samples. Further, a first divided image sample among the M frames of region-division image samples may be input into an initial region division model for pixel change analysis, and N sample rendering areas corresponding to the first divided image sample are determined; M is a positive integer, and the first divided image sample is any one of the M frames of region-division image samples, for example the first frame, the last frame or any intermediate frame. Sample region change information corresponding to each of the N sample rendering areas is then obtained from the M frames of region-division image samples; the sample region change information of one sample rendering area represents how the content of the M frames changes within that sample rendering area. Parameter adjustment is performed on the initial region division model based on the standard region change information and the sample region change information respectively corresponding to the N sample rendering areas, until the parameters converge, to obtain the region division model.
For example, referring to fig. 5, fig. 5 is a schematic diagram of a training scenario of a region division model according to an embodiment of the present application. As shown in fig. 5, the computer device may obtain consecutive M-frame area-divided image samples 501, and obtain a first divided image sample 502 from the consecutive M-frame area-divided image samples 501. The first divided image sample 502 is input into the initial area division model 503 to perform pixel change analysis, and N sample rendering areas 504 corresponding to the first divided image sample are determined, where N refers to the number of types of sample rendering areas, that is, each sample rendering area may correspond to one or more sample image areas in the first divided image sample. As shown in fig. 5, the N sample rendering regions 504 include a sample holding region 5041, a sample intermediate region 5042, and a sample change region 5043, and it is assumed that the sample holding region 5041 corresponds to the sample image region (1) and the sample image region (3) in the first divided image sample, the sample intermediate region 5042 corresponds to the sample image region (2), the sample image region (4), and the sample image region (7) in the first divided image sample, and the sample change region 5043 corresponds to the sample image region (5) and the sample image region (6) in the first divided image sample. For the sample holding area 5041, sample area change information 1 of the M-frame area division image sample 501 at the sample image area (1) and the sample image area (3) respectively is acquired, and similarly, sample area change information 2 corresponding to the sample intermediate area 5042, sample area change information 3 corresponding to the sample change area 5043, and the like can be acquired. Based on the sample region change information 1, the sample region change information 2, the sample region change information 3, and the standard region change information corresponding to the N sample rendering regions 504, parameter adjustment is performed on the initial region division model 503 until the parameters converge, so as to obtain a region division model.
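Under the assumption that the sample region change information is measured as a temporal statistic of the M frames (here the mean temporal standard deviation inside each region) and that the mismatch with the standard region change information is penalised with a squared error, the training described above could be sketched as follows. Both choices, and the use of PyTorch, are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_region_change(frames, probs):
    """Soft 'sample region change information': the temporal standard deviation of
    the M frames, averaged per region type using the model's per-pixel region
    probabilities so that the quantity stays differentiable."""
    temporal_std = frames.std(dim=0).mean(dim=0)              # (H, W)
    weighted = (probs * temporal_std).flatten(1).sum(dim=1)   # (3,)
    norm = probs.flatten(1).sum(dim=1).clamp_min(1e-6)
    return weighted / norm

def train_region_division(model, frames, standard_change, steps=1000, lr=1e-4):
    """frames: (M, C, H, W) consecutive region-division image samples;
    standard_change: (3,) standard region change information for the holding,
    intermediate and change region types. The first frame is used as the divided
    image sample fed to the model; any of the M frames could be used instead."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    first_sample = frames[0:1]
    for _ in range(steps):
        logits = model(first_sample)                          # (1, 3, H, W) region scores
        probs = F.softmax(logits[0], dim=0)                   # per-pixel region probabilities
        sample_change = soft_region_change(frames, probs)
        loss = F.mse_loss(sample_change, standard_change)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```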
When noise data is added to the rendered image based on the N rendering areas and the depth image corresponding to the first viewpoint image to generate the noise image corresponding to the rendered image, in the i-th noise-adding iteration, noise data may be added to the image to be noised corresponding to the rendered image according to the noise-adding iteration round, the N rendering areas and the depth image, to generate a noise image i corresponding to the rendered image; i is an integer, and the noise-adding iteration round is i. When the noise image is denoised to generate the first corrected image, the noise image i is denoised to generate a corrected image i; the corrected image i obtained when the i-th noise-adding iteration satisfies the iteration completion condition is the first corrected image.
Specifically, when the noise image i corresponding to the rendered image is generated, if i is the first initial value of the iteration, initial noise data is obtained in the i-th noise-adding iteration, mask data of the rendered image is determined according to the noise-adding iteration round and the N rendering areas, and the initial noise data is added to the rendered image using the mask data to generate the noise image i of the rendered image; in this case the image to be noised corresponding to the rendered image is the rendered image itself. The generation process of the noise image i can be expressed by formula (1):
In formula (1), z_i denotes the noise image i, m_blended denotes the mask data, z_0 denotes the rendered image, and the remaining term denotes the initial noise data; the per-pixel weights applied to the two input terms sum to 1, so the noise image is a mask-weighted blend of the initial noise data and the rendered image. The initial noise data may be regarded as randomly generated Gaussian noise.
Specifically, the N rendering regions include a holding region, a changing region, and an intermediate region. When mask data of a rendered image is determined according to a noise adding iteration round and N rendering areas, a first rendering area which a kth pixel point forming the rendered image belongs to can be obtained, and if the first rendering area is a holding area, a first mask value corresponding to the holding area, such as 1, is determined to be a pixel mask corresponding to the kth pixel point; k is a positive integer; if the first rendering area is a change area, determining a second mask value corresponding to the change area, such as 0, as a pixel mask corresponding to the kth pixel point; if the first rendering area is the middle area, determining a pixel mask corresponding to the kth pixel point based on the noise adding iteration round. When the pixel masks respectively corresponding to the pixel points forming the rendering image are obtained, the pixel masks respectively corresponding to the pixel points form mask data of the rendering image. Specifically, the mask data may be generated as shown in formula (2):
m_blended(k) = 1, if the k-th pixel point belongs to the holding area; m_blended(k) = 0, if the k-th pixel point belongs to the change area; m_blended(k) = 1 or 0 depending on the noise-adding iteration round relative to α, if the k-th pixel point belongs to the intermediate area  (2)
As shown in formula (2), the value 1 indicates that the corresponding image region should be preserved as drawn, and α denotes the noise-adding iteration round at which the processing mode of the intermediate area is switched, which may for example be 25. In other words, the intermediate area is preserved during part of the noise-adding iterations and changed during the remaining iterations. Alternatively, for a pixel point in the intermediate area, the mask value of whichever of the holding area and the change area is closer may be used as its pixel mask: if the region distance between the pixel point and the holding area is smaller than the region distance between the pixel point and the change area, the first mask value corresponding to the holding area is determined as the pixel mask of the pixel point; if the region distance between the pixel point and the holding area is greater than the region distance between the pixel point and the change area, the second mask value corresponding to the change area is determined as the pixel mask of the pixel point. Through this process, the added noise can be aligned with the rendering areas, improving the efficiency of image rendering.
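The mask construction of formula (2), together with the blend described for formula (1), might look as follows in a minimal NumPy sketch. The region encoding, the choice of holding the intermediate area before the switching round α, and the blend direction in blend_noise are illustrative assumptions rather than details fixed by the embodiment.

```python
import numpy as np

def build_mask(regions, i, alpha=25, hold_value=1.0, change_value=0.0):
    """Build m_blended for the i-th noise-adding iteration (formula (2) style).
    regions: (H, W) with 0 = holding area, 1 = intermediate area, 2 = change area.
    Holding pixels get the first mask value, change pixels the second, and
    intermediate pixels switch processing mode once the iteration round passes
    alpha (whether they are held before or after the switch is a design choice)."""
    mask = np.empty(regions.shape, dtype=np.float32)
    mask[regions == 0] = hold_value
    mask[regions == 2] = change_value
    mask[regions == 1] = hold_value if i <= alpha else change_value
    return mask

def blend_noise(rendered, noise, mask):
    """Add the initial noise data into the rendered image under the mask,
    a plausible reading of formula (1) as a per-pixel convex combination."""
    m = mask[..., None]                       # broadcast over colour channels
    return m * rendered + (1.0 - m) * noise
```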
Further, if i is not the first initial value of the iteration, then in the i-th noise-adding iteration the noise image j is determined as the image to be noised of the rendered image, the iteration interval to which the noise-adding iteration round belongs is obtained, noise data i is determined from the depth image of the rendered image and the initial noise data based on the iteration interval, and the noise data i is added to the image to be noised of the rendered image to generate the noise image i of the rendered image; the noise image j is the noise image generated in the noise-adding iteration preceding the i-th noise-adding iteration. For example, if the noise-adding iterations increase successively, the first initial value may be 1 and j = i - 1; if the noise-adding iterations decrease successively, the first initial value may be an iteration number threshold, such as 50 or 40, and j = i + 1.
Specifically, when the noise data i is added to the image to be noised of the rendered image to generate the noise image i of the rendered image, the image to be noised of the rendered image and the noise data i may be input into an image diffusion model, and in the image diffusion model, the noise data i is used to perform noise-adding processing on the image to be noised of the rendered image to generate the noise image i of the rendered image. When the noise image i is denoised to generate the corrected image i, the noise image i may be denoised in the image diffusion model to generate the corrected image i. For example, assuming that the iteration number threshold is 50, the generation process of the corrected image i can be expressed by formula (3):
In formula (3), z_j denotes the noise image j, D_t denotes the depth image of the rendered image, the remaining input denotes the noise (or an image obtained by adding noise to the rendered image), and M_depth denotes the image diffusion model.
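Putting the rounds together, the noise-adding and denoising iterations around the image diffusion model could be sketched as follows. The mask is the m_blended mask of the first round (see the mask sketch above); noise_from_depth and denoise stand in for the parts of M_depth that determine noise data i from the depth image and the initial noise data, and that denoise a noise image into a corrected image. Their signatures, and the blend direction in the first round, are assumptions; the embodiment does not fix a concrete interface.

```python
import numpy as np

def iterative_refine(rendered, depth, mask, noise_from_depth, denoise,
                     num_iters=50, seed=0):
    """Sketch of the noise-adding / denoising iterations.
    rendered: (H, W, 3) rendered image z_0; depth: depth image D_t;
    mask: (H, W) m_blended for the first round."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(rendered.shape).astype(np.float32)  # initial Gaussian noise,
                                                                  # shared across the h viewpoints
    to_noise = rendered                      # in round 1 the image to be noised is z_0 itself
    corrected = rendered
    for i in range(1, num_iters + 1):
        if i == 1:                           # first initial value of the iteration
            m = mask[..., None]              # broadcast the pixel mask over colour channels
            noisy = m * to_noise + (1.0 - m) * eps
        else:
            noise_i = noise_from_depth(depth, eps, i)  # noise data i from depth + initial noise
            noisy = to_noise + noise_i                 # add into the image to be noised (noise image j)
        corrected = denoise(noisy, depth, i)           # corrected image i
        to_noise = noisy                               # becomes noise image j for the next round
    return corrected                                   # first corrected image
```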
Through the above process, Gaussian noise is added to the image automatically, which prevents the image generation process from being unable to withstand other noise and thus improves the robustness of image rendering. Moreover, a pre-trained repair rendering model and an image diffusion model can be combined to achieve both texture repair and image enhancement (that is, noise addition), improving the accuracy, robustness and integrity of image rendering. The image-independent noise generated in this process, such as the initial noise data that is independent of the image content, is applied to all h viewpoints, that is, the h viewpoints share the same image-independent noise, which improves the rendering consistency of the object to be drawn. On this basis, using the repair rendering model and the image diffusion model reduces the sensitivity of image rendering to noise caused by viewpoint changes without shifting the image depth, thereby improving the accuracy of image rendering.
Alternatively, image iterative optimization may be performed for the first viewpoint. Specifically, based on a depth image corresponding to the kth texture optimization of the first viewpoint image, performing texture optimization processing on a rendering image corresponding to the kth texture optimization to obtain a first corrected image k; k is a positive integer; when k is a second initial value, the first viewpoint image is an original viewpoint image; when k is not the second initial value, the first viewpoint image is the first corrected image (k-1); the first corrected image (k-1) refers to the first corrected image obtained in the (k-1) th texture optimization.
Alternatively, the computer device may acquire an initial viewpoint image of the object to be drawn at an initial viewpoint, for example v_0 = (r_0 = 1.25, φ_0 = 0, θ_0 = 60). Texture drawing is performed on the initial viewpoint image using the repair rendering model to generate an initial rendered image I_0 and an initial depth image D_0 of the initial viewpoint image, and texture analysis is performed on the initial rendered image I_0 to obtain initial texture coloring data T_0. On this basis, step S301 above is executed; optionally, when the first viewpoint image is subjected to texture analysis to obtain the first texture coloring data, preliminary analysis data under the first viewpoint may be obtained first, and the preliminary analysis data may be updated and adjusted based on the initial texture coloring data to generate the first texture coloring data, where the generation process of the preliminary analysis data may refer to the generation process of the first texture coloring data in step S301. In this way, incremental coloring can be implemented, so that the execution of steps S301 to S303 takes the previous coloring steps into account; the data used for texture rendering is therefore complete, which improves the accuracy of image rendering and the image rendering effect.
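The incremental coloring can be pictured as carrying the texture shading data forward from viewpoint to viewpoint. In the sketch below, analyze_texture and merge are hypothetical placeholders for the texture analysis of step S301 and for the update and adjustment of the preliminary analysis data.

```python
def incremental_texture_shading(viewpoints, images, analyze_texture, merge):
    """Carry texture shading data forward across viewpoints (incremental coloring).
    The shading obtained at earlier viewpoints is kept, and the preliminary
    analysis data of the current viewpoint only updates or extends it."""
    shading = analyze_texture(images[0], viewpoints[0])       # initial texture coloring data T_0
    for v, img in zip(viewpoints[1:], images[1:]):
        preliminary = analyze_texture(img, v)                 # preliminary analysis data at v
        shading = merge(shading, preliminary)                 # update instead of overwrite
    return shading
```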
Alternatively, referring to the steps S301 to S303, the corrected images corresponding to the h viewpoints may be obtained, and the objects to be drawn may be rendered by using the corrected images corresponding to the h viewpoints. Or, a game view angle corresponding to the service device may be obtained, an associated view point associated with the game view angle in the h view points is obtained, image fusion processing is performed on the corrected image corresponding to the associated view point based on the game view angle, an image to be rendered is generated, and the image to be rendered is sent to the service device. Or when the h viewpoints are respectively processed, respectively carrying out texture analysis on the corrected images corresponding to the h viewpoints, respectively projecting analysis results corresponding to the h viewpoints into a texture data set, so that the texture data set can comprise all texture data of the whole surface of the object to be drawn, and subsequently, when the object to be drawn is required to be rendered, directly acquiring the angle to be drawn, acquiring the texture to be drawn from the texture data set based on the angle to be drawn, and carrying out image rendering based on the texture to be drawn.
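For the last variant, in which the analysis results of the h viewpoints are projected into one texture data set and later fetched by the angle to be drawn, a simple nearest-viewpoint lookup might look as follows. Storing per-viewpoint entries and selecting the closest one is only one possible realisation; a real system might blend several nearby viewpoints instead.

```python
import numpy as np

class TextureDataset:
    """Illustrative container for projecting per-viewpoint analysis results into
    one texture data set and fetching texture by the angle to be drawn."""
    def __init__(self):
        self.entries = []                                     # (viewpoint angle, texture data)

    def project(self, viewpoint_angle, texture_data):
        self.entries.append((np.asarray(viewpoint_angle, dtype=np.float32), texture_data))

    def fetch(self, draw_angle):
        """Return the texture whose stored viewpoint is closest to draw_angle."""
        if not self.entries:
            raise ValueError("no textures have been projected yet")
        draw_angle = np.asarray(draw_angle, dtype=np.float32)
        dists = [np.linalg.norm(a - draw_angle) for a, _ in self.entries]
        return self.entries[int(np.argmin(dists))][1]
```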
In the embodiment of the application, a first texture grid of a first viewpoint image can be obtained, and the first viewpoint image is subjected to texture analysis to obtain first texture coloring data; the first texture grid and the first texture coloring data of the first viewpoint image are rendered, and a depth image and a rendered image corresponding to the first viewpoint image are determined; texture optimization processing is performed on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image. Through this process, texture resources with defects, namely the first texture grid, the first texture coloring data and the like, can be extracted and optimized. Because the image depth information of the image (namely the depth image) is used to assist and guide the texture optimization, the optimization performance of the texture optimization can be improved, and the optimization of geometry (namely the texture grid), texture and the like is integrated into a unified framework, so that image rendering is more stable and more efficient, problems such as ghosting and blurring of textures in image rendering are effectively alleviated, and the image rendering effect and integrity are improved.
Specifically, in performing iterative optimization of an image, reference may be made to fig. 6, where fig. 6 is a schematic view of a texture optimization scene provided by an embodiment of the present application. As shown in fig. 6, an original viewpoint image at a first viewpoint may be obtained, texture derivation is performed on the original viewpoint image, texture optimization and texture redrawing are performed on data derived from the texture, and a redrawn image is obtained. And detecting performance optimization conditions based on the image performance of the original viewpoint image and the image performance of the redrawn image.
Specifically, referring to fig. 7, fig. 7 is a schematic diagram of an image iterative optimization flow provided in an embodiment of the present application. As shown in fig. 7, the process may include the steps of:
step S701, the original viewpoint image is determined as a first viewpoint image.
In the embodiment of the application, the computer equipment can acquire the original viewpoint image of the object to be drawn under the first viewpoint, initialize k, and k is a positive integer. Alternatively, the original viewpoint image acquired by the first viewpoint acquisition device corresponding to the first viewpoint may be acquired; alternatively, the current frame data may be determined from the historical frame data, an original viewpoint image may be generated from the current frame data, and the like, without limitation. That is, the original viewpoint image may be an image resulting from any one of 3D renderings.
In step S702, in the kth texture optimization, a first texture grid k of the first view image is obtained, and the first view image is subjected to texture analysis to obtain first texture coloring data k.
In the embodiment of the present application, in the kth texture optimization, a first texture grid k of the first view image is obtained, and the first view image is subjected to texture analysis to obtain first texture coloring data k, which can be specifically described with reference to the first texture grid and the related process description of the first texture coloring data in step S301 in fig. 3, and will not be described herein.
In step S703, the first texture grid k and the first texture rendering data k of the first viewpoint image are rendered, and the depth image and the rendered image corresponding to the first viewpoint image are determined.
In the embodiment of the present application, the process may refer to the related generation process of the depth image and the rendering image in step S302 of fig. 3, and will not be described herein.
Step S704, performing texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image, to obtain a first corrected image k.
In the embodiment of the present application, the process may refer to the process of generating the first corrected image in step S303 in fig. 3, which is not described herein. Referring to fig. 8, fig. 8 is a schematic view of an image post-processing scene provided in an embodiment of the present application; as shown in fig. 8, a first corrected image k 801 is obtained.
Step S705, a second texture grid k of the first corrected image k is obtained, and the first corrected image k is subjected to texture analysis to obtain second texture coloring data k.
In the embodiment of the present application, as shown in fig. 8, a second texture grid k 8021 of the first corrected image k 801 is obtained, and texture analysis is performed on the first corrected image k 801 to obtain second texture shading data k 8022. The process of acquiring the second texture grid k may refer to the process of acquiring the first texture grid in step S301 of fig. 3; the process of acquiring the second texture shading data k may refer to the process of acquiring the first texture shading data in step S301 of fig. 3.
Optionally, a first viewpoint corresponding to the original viewpoint image may be acquired, and texture analysis may be performed on the first corrected image k to obtain intermediate texture coloring data. And rendering the second texture grid k, the intermediate texture coloring data and the first view point by adopting a differential renderer to generate an intermediate cache image. And obtaining texture processing parameters, and carrying out error analysis on the intermediate buffer image and the first corrected image k based on the texture processing parameters to obtain gradient adjustment parameters. Wherein, the gradient adjustment parameter can be shown in the formula (4):
In formula (4), R denotes the differential renderer, mesh denotes the second texture grid k, T_t denotes the intermediate texture shading data, v_t denotes the coordinate information of the first viewpoint, I_t denotes the first corrected image k, and m_s denotes the texture processing parameters.
Further, if the gradient adjustment parameter is greater than or equal to the texture projection threshold, determining the intermediate texture shading data as second texture shading data k; if the gradient adjustment parameter is smaller than the texture projection threshold, performing data adjustment on the intermediate texture shading data based on the gradient adjustment parameter, determining the adjusted intermediate texture shading data as intermediate texture shading data, and returning to perform the step of rendering the second texture grid k, the intermediate texture shading data and the first viewpoint by adopting the differential renderer to generate an intermediate cache image.
When the texture processing parameters are obtained, N corrected rendering areas corresponding to the first corrected image k may be obtained, and the area constraint parameter m_h corresponding to the first corrected image k is determined based on the N corrected rendering areas; the division of the N corrected rendering areas may refer to the division process of the N rendering areas described above. A Gaussian blur kernel G is obtained, which can be regarded as a two-dimensional constant kernel. Parameter constraint processing is performed on the Gaussian blur kernel G using the area constraint parameter m_h to obtain the texture processing parameters m_s, as shown in formula (5).
Through the above process, smoother projected texture seams can be obtained from different angles based on the corrected rendering areas, the boundaries of the corrected rendering areas can be divided, and the accuracy of image rendering can be improved.
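Combining formulas (4) and (5), the projection of the first corrected image k back into texture space could be sketched as the following gradient-based loop. The differentiable renderer is a placeholder, the texture is kept in image space for brevity, and taking the mean absolute gradient as the gradient adjustment parameter is an assumption made here to illustrate the acceptance test against the texture projection threshold.

```python
import torch
import torch.nn.functional as F

def project_texture(renderer, mesh, corrected_img, view, region_mask,
                    blur_size=9, sigma=3.0, threshold=1e-4, lr=1e-2, max_steps=200):
    """Sketch of obtaining the second texture shading data through a differentiable
    renderer. corrected_img: (C, H, W) float tensor; region_mask: (H, W) float
    tensor derived from the N corrected rendering areas (the area constraint m_h);
    renderer(mesh, texture, view) stands in for the differential renderer R."""
    # Texture processing parameters m_s: region constraint softened by a Gaussian blur kernel G.
    coords = torch.arange(blur_size, dtype=torch.float32) - blur_size // 2
    g1d = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = (g1d[:, None] * g1d[None, :])
    kernel = (kernel / kernel.sum()).view(1, 1, blur_size, blur_size)
    m_s = F.conv2d(region_mask[None, None], kernel, padding=blur_size // 2)[0, 0]

    # Intermediate texture shading data, optimized by gradient descent.
    texture = corrected_img.new_zeros(corrected_img.shape).requires_grad_(True)
    opt = torch.optim.Adam([texture], lr=lr)
    for _ in range(max_steps):
        cached = renderer(mesh, texture, view)                 # intermediate cache image
        loss = ((m_s * (cached - corrected_img)) ** 2).mean()  # error analysis under m_s
        opt.zero_grad()
        loss.backward()
        grad_norm = texture.grad.abs().mean()                  # gradient adjustment parameter (assumed)
        if grad_norm < threshold:                              # below the texture projection threshold:
            opt.step()                                         # keep adjusting the intermediate data
        else:
            break                                              # at or above the threshold: accept it
    return texture.detach()                                    # second texture shading data k
```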
In step S706, a texture redrawing process is performed based on the second texture shading data k and the second texture mesh k, and a second corrected image k is generated.
In the embodiment of the application, the second modified image k is generated by performing texture redrawing processing based on the second texture shading data k and the second texture grid k, and the process can adopt the existing texture redrawing processing method or perform texture redrawing processing by repairing the rendering model.
Step S707 performs image performance comparison on the second corrected image k and the original viewpoint image.
In the embodiment of the application, the image performance of the second corrected image k and the image performance of the original viewpoint image are obtained, and the image performance of the second corrected image k is compared with the image performance of the original viewpoint image. If the performance optimization degree of the image performance of the second modified image k with respect to the image performance of the original viewpoint image is greater than or equal to the optimization threshold, that is, the texture optimization is completed, step S709 is executed to determine the second modified image k as the target modified image corresponding to the original viewpoint image. If the degree of performance optimization of the image performance of the second corrected image k with respect to the image performance of the original viewpoint image is smaller than the optimization threshold, that is, the optimization is not completed, step S708 is performed. Optionally, the resource consumption of the second corrected image k corresponding to the original viewpoint image may be obtained, where the performance optimization degree of the image performance of the second corrected image k with respect to the image performance of the original viewpoint image is greater than or equal to the optimization threshold, and the increase degree of the resource consumption of the second corrected image k with respect to the resource consumption of the original viewpoint image is less than or equal to the overload threshold, and step S709 is performed.
Step S708, the first corrected image k is determined as the first viewpoint image, and k is incremented by 1.
In the embodiment of the present application, the first corrected image k is determined as the first viewpoint image, k is incremented by 1, and the process returns to step S703, that is, the next round of texture optimization is performed.
Step S709, the second corrected image k is determined as the target corrected image corresponding to the original viewpoint image.
In the embodiment of the application, the second correction image k is determined to be a target correction image corresponding to the original viewpoint image, and the object to be drawn is rendered based on the target correction image. The rendering process of the object to be drawn may be referred to in step S303 of fig. 3, which is described above, based on the first correction image.
Through the process, the rendering effect of the original viewpoint image can be optimized continuously through iteration, the accuracy of image rendering can be improved, and the display effect of the image is improved.
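The iteration of steps S701 to S709 can be condensed into the following sketch, where optimize_once stands for one pass of steps S702 to S706 and performance for the image performance measurement; all names and the optimization threshold are illustrative.

```python
def iterative_texture_optimization(original_img, optimize_once, performance,
                                   opt_threshold=0.05, max_rounds=10):
    """Outer texture optimization loop of fig. 7.
    optimize_once(first_view_img, k) returns (first corrected image k,
    second corrected image k); performance(img) returns an image performance score."""
    first_view_img = original_img
    base_perf = performance(original_img)
    second_corrected = original_img
    for k in range(1, max_rounds + 1):
        first_corrected, second_corrected = optimize_once(first_view_img, k)
        if performance(second_corrected) - base_perf >= opt_threshold:
            return second_corrected            # target corrected image (step S709)
        first_view_img = first_corrected       # step S708: next round starts from corrected image k
    return second_corrected
```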
Further, referring to fig. 9, fig. 9 is a schematic diagram of a model training procedure in image processing according to an embodiment of the present application. As shown in fig. 9, the process may include the steps of:
step S901, a viewpoint image sample is acquired, a first sample texture grid of the viewpoint image sample is acquired, and texture analysis is performed on the viewpoint image sample to obtain first sample texture coloring data.
In the embodiment of the application, a sample object and an initial object texture grid corresponding to the sample object are obtained, geometric deformation processing is performed on the initial object texture grid, and grid Laplacian regularization processing is performed to generate an optimized texture grid. The sample object is adjusted using the optimized texture grid to obtain a rendering object, an image background is added to the rendering object, and a viewpoint image sample is generated. This process implements spectrum enhancement of the sample and produces smooth deformation of the image, so the integrity of the input shape of the sample is maintained and the accuracy of model training is improved. Through this process, a large number of images with depth maps corresponding to the input shapes can be generated; images of the rendering object can be obtained from multiple viewpoints (such as left, right, top, bottom, front and back), and the rendering object is pasted onto an image background to obtain a viewpoint image sample. Further, the process of obtaining the first sample texture grid of the viewpoint image sample and performing texture analysis on the viewpoint image sample to obtain the first sample texture coloring data may refer to the process in step S301 of fig. 3 of obtaining the first texture grid of the first viewpoint image and performing texture analysis on the first viewpoint image to obtain the first texture coloring data, which is not repeated here.
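As one concrete and simplified reading of the grid Laplacian regularization, uniform-weight Laplacian smoothing pulls each vertex towards the centroid of its neighbours; in the embodiment the regularization may equally be realised as a loss term during the geometric deformation, so the sketch below is illustrative only.

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Uniform-weight mesh Laplacian smoothing.
    vertices: (V, 3) float array; faces: (F, 3) integer vertex indices."""
    V = vertices.shape[0]
    neighbors = [set() for _ in range(V)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    verts = vertices.astype(np.float64).copy()
    for _ in range(iterations):
        new_verts = verts.copy()
        for v in range(V):
            if neighbors[v]:
                centroid = verts[list(neighbors[v])].mean(axis=0)
                new_verts[v] = verts[v] + lam * (centroid - verts[v])  # pull towards neighbour centroid
        verts = new_verts
    return verts
```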
Step S902, inputting a first sample texture grid and first sample texture coloring data of a viewpoint image sample into an initial repair rendering model for rendering, and determining a sample depth image and a sample rendering image corresponding to the viewpoint image sample.
In the embodiment of the present application, the process may refer to the process of determining the depth image of the first viewpoint image and rendering the image in step S302 of fig. 3.
Step S903, inputting the sample depth image and the sample rendering image into an initial image diffusion model for texture optimization processing, and obtaining a sample correction image.
In the embodiment of the present application, the process may refer to the process of generating the first corrected image in step S303 of fig. 3.
Step S904, performing texture analysis on the sample corrected image to obtain second sample texture coloring data, and performing parameter adjustment on the initial restoration rendering model and the initial image diffusion model based on the first sample texture coloring data and the second sample texture coloring data until a model convergence condition is reached to obtain a restoration rendering model corresponding to the initial restoration rendering model and an image diffusion model corresponding to the initial image diffusion model.
In the embodiment of the application, a sample label can be acquired; the sample label may include the viewpoint image sample, a first sample label corresponding to the initial restoration rendering model, a second sample label corresponding to the initial image diffusion model, and the like. A first loss function may be generated based on the sample label and the sample corrected image; alternatively, a first sub-function may be generated based on the first sample label, the sample depth image and the sample rendered image, a second sub-function may be generated based on the second sample label and the sample corrected image, and the first loss function may be generated by combining the first sub-function with the second sub-function. A second loss function is generated based on the first sample texture coloring data and the second sample texture coloring data. Parameter adjustment is performed on the initial restoration rendering model and the initial image diffusion model using the first loss function and the second loss function until the model convergence condition is reached, to obtain the restoration rendering model corresponding to the initial restoration rendering model and the image diffusion model corresponding to the initial image diffusion model. The model convergence condition may be parameter convergence. Alternatively, the model convergence condition may be determined to be reached when the performance increase of the second sample coloring data relative to the first sample coloring data is greater than or equal to the optimization threshold and the increase in resources consumed by the second sample coloring data relative to the first sample coloring data is less than or equal to the overload threshold.
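A minimal sketch of combining the two loss functions follows. Splitting the first loss into a depth sub-term and a corrected-image sub-term follows the description above, while the particular distances (L1/L2) and the weights are assumptions.

```python
import torch.nn.functional as F

def joint_loss(sample_corrected, second_label,
               sample_depth, sample_render, label_depth, label_render,
               first_tex, second_tex, w1=1.0, w2=1.0):
    """Combine the first loss function (sample labels vs. model outputs) with the
    second loss function (first vs. second sample texture coloring data).
    All arguments are tensors of matching shapes."""
    first_sub = F.l1_loss(sample_depth, label_depth) + F.l1_loss(sample_render, label_render)
    second_sub = F.l1_loss(sample_corrected, second_label)
    loss1 = first_sub + second_sub                      # first loss function
    loss2 = F.mse_loss(second_tex, first_tex)           # second loss function
    return w1 * loss1 + w2 * loss2
```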
Alternatively, referring to fig. 10, fig. 10 is a schematic diagram of a possible model iterative training procedure according to an embodiment of the present application. As shown in fig. 10, the process may include the steps of:
in step S1001, a viewpoint image sample is acquired.
In the embodiment of the present application, the process may refer to step S901 of fig. 9. Optionally, the process may further perform standard proportional size adjustment, clipping enhancement, and other treatments on the optimized texture grid to obtain a size-adjusted texture grid, and adjust the sample object with the size-adjusted texture grid to obtain a rendered object, and add an image background to the rendered object to generate a viewpoint image sample. In this way, the model obtained by pre-training can be used for learning concepts from the image and applying the concepts to the 3D shape without any explicit reconstruction stage, so that the image rendering efficiency is improved.
Step S1002, a first sample texture grid p of the viewpoint image sample is obtained, and texture analysis is performed on the viewpoint image sample to obtain first sample texture coloring data p.
In the embodiment of the present application, in the p-th parameter adjustment, the process may refer to step S901 of fig. 9.
In step S1003, the first sample texture grid p and the first sample texture rendering data p of the viewpoint image sample are input into the initial repair rendering model (p-1) to be rendered, and the sample depth image p and the sample rendering image p corresponding to the viewpoint image sample are determined.
In the embodiment of the present application, the process may refer to step S902 of fig. 9.
In step S1004, the sample depth image p and the sample rendering image p are input into the initial image diffusion model (p-1) for performing texture optimization processing, so as to obtain a sample correction image p.
In the embodiment of the present application, the process may refer to step S903 of fig. 9.
Step S1005, performing texture analysis on the sample correction image p to obtain second sample texture coloring data p, and performing parameter adjustment on the initial repair rendering model (p-1) and the initial image diffusion model (p-1) based on the first sample texture coloring data p and the second sample texture coloring data p to obtain an initial repair rendering model p and an initial image diffusion model p.
In step S1006, it is detected whether the model convergence condition is reached.
In the embodiment of the present application, if the model convergence condition is not reached, p++, returning to execute step S1002, that is, performing parameter adjustment of the next round; or if the model convergence condition is not reached, the sample correction image p is determined as a viewpoint image sample, the second sample texture shading data p is determined as the first sample texture shading data p, p++, and the process returns to step S1002. If the model convergence condition is reached, step S1007 is performed.
Step S1007, the initial repair rendering model p is determined as a repair rendering model, and the initial image diffusion model p is determined as an image diffusion model.
In the embodiment of the application, the pre-training of the restoration rendering model and the image diffusion model can be realized through the above process. In this process, the embedded texture data and the depth sample labels <D_v> can be optimized, where <D_v> denotes the depth sample labels respectively corresponding to the h viewpoints; the depth sample labels can be shared among images of the same viewpoint, and concepts representing different textures are potentially learned. Because this process refers to the depth characteristics of the images, textures are produced for more suitable 3D shapes and the image rendering effect is better. The first sample label includes a depth sample label and a rendering sample label. Texture optimization of the 3D rendered image can thus be achieved through the pre-trained repair rendering model and image diffusion model.
Further, referring to fig. 11, fig. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the application. The image processing apparatus may be a computer program (including program code, etc.) running in a computer device, for example the image processing apparatus may be an application software; the device can be used for executing corresponding steps in the method provided by the embodiment of the application. As shown in fig. 11, the image processing apparatus 1100 may be used in the computer device in the embodiment corresponding to fig. 3, and specifically, the apparatus may include: texture acquisition module 11, shading acquisition module 12, image rendering module 13, and texture optimization module 14.
A texture acquisition module 11 for acquiring a first texture grid of the first viewpoint image;
a shading obtaining module 12, configured to perform texture analysis on the first viewpoint image to obtain first texture shading data;
the image rendering module 13 is configured to render the first texture grid and the first texture coloring data of the first viewpoint image, and determine a depth image and a rendered image corresponding to the first viewpoint image;
the texture optimization module 14 is configured to perform texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image, so as to obtain a first corrected image.
Wherein the shading obtaining module 12 comprises:
an object unfolding unit 121, configured to determine an unfolding boundary of an object model of the object to be drawn based on display information corresponding to the object to be drawn in the first viewpoint image, and unfold the object model of the object to be drawn with the unfolding boundary to obtain initial texture data;
a coordinate associating unit 122, configured to coordinate-associate the initial texture data with the first viewpoint image to obtain a first texture coordinate system;
the texture combining unit 123 is configured to perform optimization processing on the initial texture data based on the first texture grid, generate first texture data, and combine the first texture coordinate system with the first texture data to obtain first texture shading data.
Wherein the image rendering module 13 comprises:
a grid optimization unit 131, configured to perform grid optimization processing on a first texture grid of the first viewpoint image, so as to obtain an optimized texture grid;
a combined rendering unit 132, configured to perform integrated rendering on the first texture shading data and the optimized texture grid, and generate a rendered image corresponding to the first viewpoint image;
a depth analysis unit 133, configured to perform depth analysis on the first viewpoint image by using the first texture grid, so as to obtain image depth information of the first viewpoint image;
the depth rendering unit 134 is configured to render the image depth information and the first texture rendering data to a depth image corresponding to the first viewpoint image.
Wherein the texture optimization module 14 comprises:
the pixel analysis unit 141 is configured to perform pixel change analysis on the rendered image, and determine N rendering areas corresponding to the rendered image; n is a positive integer; the deformation degree of the pixels corresponding to different rendering areas is different when the pixels move;
a noise adding unit 142, configured to add noise data to the rendered image based on the N rendered areas and the depth image corresponding to the first viewpoint image, and generate a noise image corresponding to the rendered image;
An image denoising unit 143 for denoising the noise image to generate a first corrected image.
The pixel analysis unit 141 includes:
a vector acquisition subunit 1411, configured to acquire a first viewpoint corresponding to the first viewpoint image, and acquire, in a viewpoint coordinate system of the first viewpoint, a grid normal vector of a unit grid that forms the first texture grid;
a relationship determination subunit 1412 configured to determine a relative relationship between the rendered image and the image display plane at the unit mesh based on the mesh normal vector of the unit mesh, and divide the rendered image into N initial rendering areas based on the relative relationship;
the region adjustment subunit 1413 is configured to obtain an application cache image, and perform region adjustment on the N initial rendering regions by using the application cache image, so as to obtain N rendering regions corresponding to the rendering image.
The pixel analysis unit 141 includes:
the region division subunit 1414 is configured to input the rendered image into a region division model to perform pixel change analysis, and determine N rendered regions corresponding to the rendered image;
the apparatus 1100 further comprises:
a sample acquiring module 15, configured to acquire consecutive M-frame region division image samples;
The sample dividing module 16 is configured to input a first divided image sample of the M frame area divided image samples into the initial area divided model for performing pixel change analysis, and determine N sample rendering areas corresponding to the first divided image sample; m is a positive integer; the first divided image sample is any one of the M frame region divided image samples;
the region analysis module 17 is configured to obtain M frame region division image samples, and respectively correspond to sample region change information in N sample rendering regions; each sample region change information is used for representing the change condition of the M frame region division image samples in the corresponding sample rendering region;
the model generating module 18 is configured to perform parameter adjustment on the initial region division model based on the standard region change information corresponding to the N sample rendering regions and the sample region change information corresponding to the N sample rendering regions, until the parameters converge, so as to obtain the region division model.
The noise adding unit 142 is specifically configured to:
in the ith noise adding iteration, adding noise data into the image to be noise-added corresponding to the rendering image according to the noise adding iteration round, N rendering areas and the depth image, and generating a noise image i corresponding to the rendering image; i is an integer; the iteration round of adding noise is i;
The image denoising unit 143 is specifically configured to:
denoising the noise image i to generate a corrected image i; and the corrected image i when the ith noise adding iteration meets the iteration completion condition is the first corrected image.
Wherein the noise adding unit 142 includes:
an initial obtaining subunit 1421, configured to obtain initial noise data in the ith noise adding iteration if i is the first initial value of the iteration;
a mask determining subunit 1422, configured to determine mask data of the rendered image according to the noisy iteration round and the N rendering areas;
a first generating subunit 1423, configured to add initial noise data to the rendered image by using the mask data, and generate a noise image i of the rendered image; the image to be added with noise corresponding to the rendering image is the rendering image;
a second generating subunit 1424, configured to determine, if i is not the first initial value of iteration, a noise image j as an image to be noisy of the rendered image in the ith noisy iteration, obtain an iteration interval to which the noisy iteration round belongs, determine, based on the iteration interval, noise data i from the depth image and the initial noise data of the rendered image, add the noise data i to the image to be noisy of the rendered image, and generate a noise image i of the rendered image; the noise image j is the noise image generated in the previous noise adding iteration of the ith noise adding iteration.
The N rendering areas comprise a holding area, a changing area and a middle area; the mask determination subunit 1422 is specifically configured to:
acquiring a first rendering area to which a kth pixel point forming a rendering image belongs, and if the first rendering area is a holding area, determining a first mask value corresponding to the holding area as a pixel mask corresponding to the kth pixel point; k is a positive integer;
if the first rendering area is a change area, determining a second mask value corresponding to the change area as a pixel mask corresponding to the kth pixel point;
if the first rendering area is the middle area, determining a pixel mask corresponding to a kth pixel point based on the noise adding iteration round;
when the pixel masks respectively corresponding to the pixel points forming the rendering image are obtained, the pixel masks respectively corresponding to the pixel points form mask data of the rendering image.
Wherein the noise data i is added to the image to be noise-added of the rendered image, and the noise image i of the rendered image is generated, and the second generating subunit 1424 is specifically configured to:
inputting an image to be noisy of the rendered image and noise data i into an image diffusion model, and performing noise adding processing on the image to be noisy of the rendered image by adopting the noise data i in the image diffusion model to generate a noise image i of the rendered image;
The image denoising unit 143 is specifically configured to:
in the image diffusion model, a noise image i is subjected to denoising processing, and a corrected image i is generated.
Wherein the texture optimization module 14 comprises:
the iterative optimization unit 144 is configured to perform texture optimization processing on a rendering image corresponding to the kth texture optimization based on a depth image corresponding to the kth texture optimization of the first viewpoint image, so as to obtain a first corrected image k; k is a positive integer; when k is a second initial value, the first viewpoint image is an original viewpoint image; when k is not the second initial value, the first viewpoint image is the first corrected image (k-1); the first corrected image (k-1) refers to the first corrected image obtained in the (k-1) th texture optimization;
the apparatus 1100 further comprises:
a grid acquisition module 19, configured to acquire a second texture grid k of the first corrected image k;
the texture projection module 21 is configured to perform texture analysis on the first corrected image k to obtain second texture coloring data k;
a texture redrawing module 22, configured to perform texture redrawing processing based on the second texture shading data k and the second texture grid k, and generate a second corrected image k;
the image determining module 23 is configured to determine the second modified image k as the target modified image corresponding to the original viewpoint image if the performance optimization degree of the image performance of the second modified image k with respect to the image performance of the original viewpoint image is greater than or equal to the optimization threshold.
Wherein the texture projection module 21 comprises:
a preliminary projection unit 211, configured to obtain a first viewpoint corresponding to the original viewpoint image, and perform texture analysis on the first corrected image k to obtain intermediate texture coloring data;
a buffer generating unit 212, configured to render the second texture grid k, the intermediate texture shading data, and the first viewpoint by using a differential renderer, and generate an intermediate buffer image;
a parameter acquisition unit 213 for acquiring texture processing parameters;
a parameter generating unit 214, configured to perform error analysis on the intermediate cached image and the first corrected image k based on the texture processing parameter, so as to obtain a gradient adjustment parameter;
a texture determining unit 215 for determining the intermediate texture shading data as the second texture shading data k if the gradient adjustment parameter is greater than or equal to the texture projection threshold;
the texture adjustment unit 216 is configured to perform data adjustment on the intermediate texture shading data based on the gradient adjustment parameter if the gradient adjustment parameter is smaller than the texture projection threshold, determine the adjusted intermediate texture shading data as intermediate texture shading data, and return to performing the step of rendering the second texture grid k, the intermediate texture shading data and the first viewpoint by using the differential renderer to generate an intermediate buffer image.
Wherein the parameter obtaining unit 213 includes:
a constraint determining subunit 2131, configured to obtain N modified rendering areas corresponding to the first modified image k, and determine an area constraint parameter corresponding to the first modified image k based on the N modified rendering areas;
and the parameter constraint subunit 2132 is used for obtaining the Gaussian blur kernel, and performing parameter constraint processing on the Gaussian blur kernel by adopting the regional constraint parameters to obtain texture processing parameters.
The embodiment of the application provides an image processing device, which can acquire a first texture grid of a first viewpoint image and perform texture analysis on the first viewpoint image to obtain first texture coloring data; render the first texture grid and the first texture coloring data of the first viewpoint image and determine a depth image and a rendered image corresponding to the first viewpoint image; and perform texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image. Through this process, texture resources with defects, namely the first texture grid, the first texture coloring data and the like, can be extracted and optimized. Because the image depth information of the image (namely the depth image) is used to assist and guide the texture optimization, the optimization performance of the texture optimization can be improved, and the optimization of geometry (namely the texture grid), texture and the like is integrated into a unified framework, so that image rendering is more stable and more efficient, problems such as ghosting and blurring of textures in image rendering are effectively alleviated, and the image rendering effect and integrity are improved.
Further, referring to fig. 12, fig. 12 is a schematic diagram of another image processing apparatus according to an embodiment of the present application. The image processing apparatus may be a computer program (including program code, etc.) running in a computer device, for example the image processing apparatus may be an application software; the device can be used for executing corresponding steps in the method provided by the embodiment of the application. As shown in fig. 12, the image processing apparatus 1200 may be used in the computer device in the embodiment corresponding to fig. 9, and specifically, the apparatus may include: the system comprises a viewpoint acquisition module 31, a sample analysis module 32, a sample rendering module 33, a sample optimization module 34, a sample projection module 35, a parameter adjustment module 36 and a model determination module 37.
A viewpoint acquisition module 31 for acquiring a viewpoint image sample;
the sample analysis module 32 is configured to obtain a first sample texture grid of the viewpoint image sample, and perform texture analysis on the viewpoint image sample to obtain first sample texture coloring data;
the sample rendering module 33 is configured to input a first sample texture grid of the viewpoint image sample and first sample texture coloring data into an initial repair rendering model for rendering, and determine a sample depth image and a sample rendering image corresponding to the viewpoint image sample;
The sample optimization module 34 is configured to input the sample depth image and the sample rendering image into an initial image diffusion model for performing texture optimization processing, so as to obtain a sample correction image;
the sample projection module 35 is configured to perform texture analysis on the sample correction image to obtain second sample texture coloring data;
a parameter adjustment module 36 for performing parameter adjustment on the initial repair rendering model and the initial image diffusion model based on the first sample texture shading data and the second sample texture shading data;
the model determining module 37 is configured to obtain a repair rendering model corresponding to the initial repair rendering model and an image diffusion model corresponding to the initial image diffusion model until a model convergence condition is reached.
Wherein, this viewpoint acquisition module 31 includes:
the grid deforming unit 311 is configured to obtain a sample object and an initial object texture grid corresponding to the sample object, perform geometric deformation processing on the initial object texture grid, and perform grid laplace regularization processing to generate an optimized texture grid;
an object adjustment unit 312, configured to adjust the sample object by using the optimized texture grid to obtain a rendered object;
a sample generation unit 313 for adding an image background to the rendering object to generate a viewpoint image sample.
Wherein the parameter adjustment module 36 comprises:
a loss generation unit 361, configured to obtain a sample tag, and generate a first loss function based on the sample tag and a sample correction image;
the penalty generation unit 361 is further configured to generate a second penalty function based on the first sample texture shading data and the second sample texture shading data;
the parameter adjustment unit 362 is configured to perform parameter adjustment on the initial restoration rendering model and the initial image diffusion model by using the first loss function and the second loss function.
Referring to fig. 13, fig. 13 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 13, the computer device in the embodiment of the present application may include: one or more processors 1301, memory 1302, and input-output interfaces 1303. The processor 1301, the memory 1302, and the input-output interface 1303 are connected via a bus 1304. The memory 1302 is used for storing a computer program, the computer program includes program instructions, and the input/output interface 1303 is used for receiving data and outputting data, for example, for data interaction between a computer device and a service device; processor 1301 is operative to execute program instructions stored in memory 1302.
The processor 1301 may perform the following operations, among others:
acquiring a first texture grid of a first viewpoint image, and performing texture analysis on the first viewpoint image to obtain first texture coloring data;
rendering the first texture grid and the first texture coloring data of the first viewpoint image, and determining a depth image and a rendering image corresponding to the first viewpoint image;
and performing texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image.
In some possible implementations, the processor 1301 may be a central processing unit (central processing unit, CPU), which may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), off-the-shelf programmable gate arrays (field-programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 1302 may include read only memory and random access memory, and provides instructions and data to the processor 1301 and the input-output interface 1303. A portion of the memory 1302 may also include non-volatile random access memory. For example, the memory 1302 may also store device type information.
In a specific implementation, the computer device may execute, through the functional modules built into it, the implementations provided by the steps in fig. 3 or fig. 7; for details, refer to the implementations provided by those steps, which are not repeated here.
An embodiment of the present application provides a computer device, including: a processor, an input/output interface and a memory. The processor obtains the computer program in the memory and executes the steps of the method shown in fig. 3 to perform the image processing operations. The embodiment of the application acquires the first texture grid of the first viewpoint image and performs texture analysis on the first viewpoint image to obtain first texture coloring data; renders the first texture grid and the first texture coloring data of the first viewpoint image, and determines a depth image and a rendering image corresponding to the first viewpoint image; and performs texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image. Through this process, the defective texture resources, namely the first texture grid, the first texture coloring data and the like, can be extracted and optimized. Because the image depth information (namely the depth image) assists and guides the texture optimization, the optimization performance is improved, and the optimization of geometry (namely the texture grid), texture and the like is integrated into a unified framework. Image rendering is therefore more stable, image rendering efficiency is improved, problems such as texture ghosting and blurring in image rendering are effectively alleviated, and the rendering effect and integrity of the image are improved.
The embodiment of the present application further provides a computer readable storage medium storing a computer program. The computer program is adapted to be loaded by the processor to execute the image processing method provided by the steps in fig. 3 or fig. 7; for details, refer to the implementations provided by those steps, which are not repeated here. The description of the beneficial effects of the same method is likewise omitted. For technical details not disclosed in the computer readable storage medium embodiments of the present application, refer to the description of the method embodiments of the present application. As an example, the computer program may be deployed to be executed on one computer device, or on multiple computer devices located at one site, or on multiple computer devices distributed across multiple sites and interconnected by a communication network.
The computer readable storage medium may be the image processing apparatus provided in any of the foregoing embodiments, or an internal storage unit of the computer device, for example, a hard disk or memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the computer device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of the computer device. The computer readable storage medium is used to store the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium and executes them, so that the computer device executes the methods provided in the various optional modes of fig. 3 or fig. 7. In this way, the defective texture resources, namely the first texture grid, the first texture coloring data and the like, are extracted and optimized. Because the image depth information (namely the depth image) assists and guides the texture optimization, the optimization performance is improved, and the adjustment and optimization of the texture resources and the optimization of geometry (namely the texture grid), texture and the like are integrated into a unified framework. Image rendering is therefore more stable, image rendering efficiency is improved, problems such as texture ghosting and blurring in image rendering are effectively alleviated, and the rendering effect and integrity of the image are improved.
The terms "first", "second" and the like in the description, claims and drawings of the embodiments of the application are used to distinguish between different objects, not to describe a particular sequential order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or modules is not limited to the listed steps or modules, but may alternatively include other steps or modules not listed or inherent to such process, method, apparatus, article, or device.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the elements and steps of the examples have been described above generally in terms of their function. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The method and related apparatus provided in the embodiments of the present application are described with reference to the flowchart and/or schematic structural diagrams of the method provided in the embodiments of the present application, and each flow and/or block of the flowchart and/or schematic structural diagrams of the method may be implemented by computer program instructions, and combinations of flows and/or blocks in the flowchart and/or block diagrams. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable image processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable image processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable image processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or structures.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device of the embodiment of the application can be combined, divided and deleted according to actual needs.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.

Claims (19)

1. An image processing method, the method comprising:
acquiring a first texture grid of a first viewpoint image, and performing texture analysis on the first viewpoint image to obtain first texture coloring data;
rendering the first texture grid of the first viewpoint image and the first texture coloring data, and determining a depth image and a rendering image corresponding to the first viewpoint image;
performing pixel change analysis on the rendered image to determine N rendering areas corresponding to the rendered image; n is a positive integer; the deformation degree of the pixels corresponding to different rendering areas is different when the pixels move;
adding noise data into the rendering image based on the N rendering areas and the depth image corresponding to the first viewpoint image, and generating a noise image corresponding to the rendering image;
and denoising the noise image to generate a first corrected image.
2. The method of claim 1, wherein performing texture parsing on the first view image to obtain first texture shading data comprises:
determining an unfolding boundary of an object model of an object to be drawn based on display information corresponding to the object to be drawn in the first viewpoint image, unfolding the object model of the object to be drawn by the unfolding boundary to obtain initial texture data, and carrying out coordinate association on the initial texture data and the first viewpoint image to obtain a first texture coordinate system;
and optimizing the initial texture data based on the first texture grid to generate first texture data, and combining the first texture coordinate system with the first texture data to obtain first texture coloring data.
3. The method of claim 1, wherein rendering the first texture grid of the first view image and the first texture shading data to determine a depth image and a rendered image corresponding to the first view image comprises:
grid optimization processing is carried out on a first texture grid of the first viewpoint image to obtain an optimized texture grid, and the first texture coloring data and the optimized texture grid are integrated and rendered to generate a rendering image corresponding to the first viewpoint image;
performing depth analysis on the first viewpoint image by adopting the first texture grid to obtain image depth information of the first viewpoint image;
and rendering the image depth information and the first texture rendering data to obtain a depth image corresponding to the first viewpoint image.
4. The method of claim 1, wherein the performing pixel change analysis on the rendered image to determine N rendered regions corresponding to the rendered image comprises:
acquiring a first view corresponding to the first view image, and acquiring a grid normal vector of a unit grid forming the first texture grid in a view coordinate system of the first view;
determining a relative relation between the rendered image and an image display plane at the unit grid based on a grid normal vector of the unit grid, and dividing the rendered image into N initial rendering areas based on the relative relation;
and acquiring application cache images, and carrying out region adjustment on the N initial rendering regions by adopting the application cache images to obtain N rendering regions corresponding to the rendering images.
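As a non-limiting illustration of the region division described in claim 4, the following sketch classifies the unit grids by the angle between their grid normal vectors (expressed in the view coordinate system of the first view) and the image display plane, and maps the per-face labels onto the rendered image to obtain the initial rendering regions. The two thresholds and the three-way split are assumptions introduced only to make the idea concrete; the subsequent region adjustment with the application cache image is not shown.

import numpy as np

def divide_regions(mesh_normals_view, pixel_to_face, n_grazing=0.2, n_facing=0.7):
    """mesh_normals_view: (F, 3) unit normals of the texture-grid faces in the
    view coordinate system of the first view.
    pixel_to_face: (H, W) index of the face visible at each rendered pixel."""
    # Cosine between each face normal and the display-plane normal (0, 0, 1)
    # measures the relative relation between the grid and the display plane.
    facing = mesh_normals_view[:, 2]
    region_of_face = np.full(len(mesh_normals_view), 1, dtype=np.int32)  # intermediate
    region_of_face[facing >= n_facing] = 0   # roughly parallel to the display plane
    region_of_face[facing <= n_grazing] = 2  # grazing / strongly foreshortened
    # Map the per-face labels onto the rendered image to obtain initial regions.
    return region_of_face[pixel_to_face]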
5. The method of claim 1, wherein the performing pixel change analysis on the rendered image to determine N rendered regions corresponding to the rendered image comprises:
inputting the rendering image into a region division model for pixel change analysis, and determining N rendering regions corresponding to the rendering image;
the method further comprises the steps of:
acquiring continuous M-frame regional division image samples, inputting a first division image sample in the M-frame regional division image samples into an initial regional division model for pixel change analysis, and determining N sample rendering areas corresponding to the first division image sample; m is a positive integer; the first divided image sample is any one of the M frame region divided image samples;
acquiring the M frame region division image samples, and respectively corresponding sample region change information in the N sample rendering regions; each sample region change information is used for representing the change condition of the M frame region division image sample in the corresponding sample rendering region;
and carrying out parameter adjustment on the initial region division model based on the standard region change information respectively corresponding to the N sample rendering regions and the sample region change information respectively corresponding to the N sample rendering regions until the parameters are converged to obtain the region division model.
6. The method of claim 1, wherein adding noise data to the rendered image based on the N rendered regions and the depth image corresponding to the first viewpoint image, generating the noise image corresponding to the rendered image, comprises:
in the ith noise adding iteration, adding noise data into the image to be noise-added corresponding to the rendering image according to the noise adding iteration round, the N rendering areas and the depth image, and generating a noise image i corresponding to the rendering image; i is an integer; the iteration round of the noise adding is i;
the denoising processing is performed on the noise image to generate a first corrected image, including:
denoising the noise image i to generate a corrected image i; and the corrected image i when the ith noise adding iteration meets the iteration completion condition is a first corrected image.
7. The method of claim 6, wherein in the ith noise adding iteration, adding noise data to the image to be noise-added corresponding to the rendered image according to the noise adding iteration round, the N rendering regions and the depth image, and generating the noise image i corresponding to the rendered image comprises:
if i is the first initial value of the iteration, in the ith noise adding iteration, obtaining initial noise data, determining mask data of the rendered image according to the noise adding iteration round and the N rendering areas, and adding the initial noise data to the rendered image by using the mask data to generate a noise image i of the rendered image; the image to be noisy corresponding to the rendered image is the rendered image;
if i is not the first initial value of the iteration, determining a noise image j as an image to be noisy of the rendering image in the ith noisy iteration, acquiring an iteration interval to which the noisy iteration round belongs, determining noise data i from a depth image of the rendering image and the initial noise data based on the iteration interval, adding the noise data i into the image to be noisy of the rendering image, and generating a noise image i of the rendering image; the noise image j is a noise image generated in a previous noise adding iteration of the ith noise adding iteration.
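A rough sketch of the iteration-dependent noise selection in claims 6 and 7 is given below. How the schedule is split into intervals, and the depth-modulated form of the noise in the early interval, are assumptions for illustration; the claim only requires that the noise data i be determined from the depth image and the initial noise data according to the iteration interval.

import numpy as np

def depth_guided_noise(depth, init_noise):
    # Modulate the noise amplitude by normalized depth (an assumed choice).
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    return d * init_noise

def noising_iteration(i, first_initial, to_be_noised, rendered, depth, init_noise,
                      mask_fn, total_iters, early_ratio=0.5):
    if i == first_initial:
        # First iteration: mask-weighted injection of the initial noise
        # into the rendered image itself.
        mask = mask_fn(i)                     # per-pixel mask data (see claim 8)
        return rendered + mask * init_noise
    # Later iterations: determine the noise data i from the depth image or the
    # initial noise data, depending on the interval the round i falls into.
    if i < early_ratio * total_iters:
        noise_i = depth_guided_noise(depth, init_noise)
    else:
        noise_i = init_noise
    return to_be_noised + noise_i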
8. The method of claim 7, wherein the N rendering regions include a hold region, a change region, and an intermediate region; the determining mask data of the rendered image according to the noisy iteration round and the N rendering areas includes:
acquiring a first rendering area to which a kth pixel point forming the rendering image belongs, and if the first rendering area is a holding area, determining a first mask value corresponding to the holding area as a pixel mask corresponding to the kth pixel point; k is a positive integer;
if the first rendering area is a change area, determining a second mask value corresponding to the change area as a pixel mask corresponding to the kth pixel point;
if the first rendering area is the middle area, determining a pixel mask corresponding to the kth pixel point based on the noise adding iteration round;
when the pixel masks respectively corresponding to the pixel points forming the rendering image are obtained, the pixel masks respectively corresponding to the pixel points are formed into mask data of the rendering image.
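The per-pixel mask of claim 8 may be pictured as follows. The claim fixes the mask values in the hold region and the change region and lets the mask in the intermediate region depend on the noise adding iteration round; the linear ramp used here for the intermediate region is only an assumed example of such a dependence.

import numpy as np

HOLD, INTERMEDIATE, CHANGE = 0, 1, 2

def build_mask(region_map, i, total_iters, hold_value=0.0, change_value=1.0):
    """region_map: (H, W) region label for each pixel of the rendered image."""
    mask = np.empty(region_map.shape, dtype=np.float32)
    mask[region_map == HOLD] = hold_value      # first mask value: keep these pixels
    mask[region_map == CHANGE] = change_value  # second mask value: repaint these pixels
    # Intermediate region: mask value derived from the noise adding round i,
    # here a simple linear ramp over the schedule (illustrative assumption).
    mask[region_map == INTERMEDIATE] = i / max(total_iters - 1, 1)
    return mask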
9. The method of claim 7, wherein adding the noise data i to the image to be noisy of the rendered image, generating the noise image i of the rendered image, comprises:
inputting an image to be noisy of the rendered image and the noise data i into an image diffusion model, and performing noise adding processing on the image to be noisy of the rendered image by adopting the noise data i in the image diffusion model to generate a noise image i of the rendered image;
the denoising processing is performed on the noise image i to generate a corrected image i, including:
in the image diffusion model, denoising is carried out on the noise image i, and a corrected image i is generated.
10. The method of claim 1, wherein the pixel change analysis is performed on the rendered image to determine N rendered regions corresponding to the rendered image; adding noise data into the rendering image based on the N rendering areas and the depth image corresponding to the first viewpoint image, and generating a noise image corresponding to the rendering image; denoising the noise image to generate a first corrected image, including:
acquiring N rendering areas of a rendering image corresponding to the kth texture optimization;
based on N rendering areas of the rendering image corresponding to the kth texture optimization and the depth image corresponding to the kth texture optimization of the first viewpoint image, adding noise data into the rendering image corresponding to the kth texture optimization, and generating a noise image of the rendering image corresponding to the kth texture optimization;
denoising the noise image of the rendering image corresponding to the kth texture optimization to obtain a first corrected image k; k is a positive integer; when k is a second initial value, the first viewpoint image is an original viewpoint image; when k is not the second initial value, the first viewpoint image is a first corrected image (k-1); the first corrected image (k-1) refers to the first corrected image obtained in the (k-1) th texture optimization;
the method further comprises the steps of:
obtaining a second texture grid k of the first corrected image k, and performing texture analysis on the first corrected image k to obtain second texture coloring data k;
performing texture redrawing processing based on the second texture shading data k and the second texture grid k to generate a second corrected image k;
and if the performance optimization degree of the image performance of the second corrected image k relative to the image performance of the original viewpoint image is larger than or equal to an optimization threshold value, determining the second corrected image k as a target corrected image corresponding to the original viewpoint image.
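The outer loop of claim 10 (repeated texture optimization indexed by k, stopping once the performance optimization degree reaches the threshold) can be summarized by the following sketch. All step functions reached through the steps object are hypothetical stand-ins for the operations of claims 1 to 9, and the threshold and round limit are assumed values.

def optimize_viewpoint_image(original, steps, second_initial=1,
                             gain_threshold=0.1, max_rounds=8):
    """steps: an object exposing callables that stand in for the operations of
    claims 1-9 (render_regions, add_noise, denoise, parse_texture,
    redraw_texture, performance_gain); all names are hypothetical."""
    first_view = original                      # k equals the second initial value
    second_corrected_k = original
    for k in range(second_initial, second_initial + max_rounds):
        regions = steps.render_regions(first_view, k)        # N rendering areas
        noise_img = steps.add_noise(first_view, regions, k)  # uses the k-th depth image
        corrected_k = steps.denoise(noise_img)               # first corrected image k
        grid_k, shading_k = steps.parse_texture(corrected_k) # second texture grid / shading
        second_corrected_k = steps.redraw_texture(grid_k, shading_k)
        if steps.performance_gain(second_corrected_k, original) >= gain_threshold:
            return second_corrected_k          # target corrected image
        first_view = corrected_k               # next round starts from corrected image k
    return second_corrected_k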
11. The method of claim 10, wherein performing texture analysis on the first modified image k to obtain second texture shading data k comprises:
acquiring a first viewpoint corresponding to the original viewpoint image, and performing texture analysis on the first corrected image k to obtain intermediate texture coloring data;
rendering the second texture grid k, the intermediate texture coloring data and the first view point by adopting a differential renderer to generate an intermediate cache image;
obtaining texture processing parameters, and carrying out error analysis on the intermediate cache image and the first corrected image k based on the texture processing parameters to obtain gradient adjustment parameters;
If the gradient adjustment parameter is greater than or equal to a texture projection threshold, determining the intermediate texture shading data as second texture shading data k;
and if the gradient adjustment parameter is smaller than the texture projection threshold, performing data adjustment on the intermediate texture shading data based on the gradient adjustment parameter, determining the adjusted intermediate texture shading data as intermediate texture shading data, and returning to the step of performing rendering on the second texture grid k, the intermediate texture shading data and the first viewpoint by using a differential renderer to generate an intermediate cache image.
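The projection loop of claim 11 can be read as an iterative adjustment of the intermediate texture shading data against a differentiable renderer. The sketch below assumes PyTorch autograd; the concrete form of the gradient adjustment parameter (here a convergence score that grows as the rendering error shrinks, so that the greater-or-equal comparison of the claim applies), the learning rate and the optimizer choice are assumptions.

import torch

def project_texture(render_fn, grid_k, viewpoint, corrected_k, texture_params,
                    init_shading, projection_threshold=0.99, lr=0.01, max_steps=200):
    """render_fn: a differentiable renderer taking (grid, shading, viewpoint).
    texture_params: the texture processing parameters used in the error analysis."""
    shading = init_shading.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([shading], lr=lr)
    for _ in range(max_steps):
        cache_img = render_fn(grid_k, shading, viewpoint)      # intermediate cache image
        # Error analysis between the cache image and the first corrected image k,
        # weighted by the texture processing parameters.
        error = (texture_params * (cache_img - corrected_k) ** 2).mean()
        gap = 1.0 / (1.0 + error.item())       # gradient adjustment parameter (assumed form)
        if gap >= projection_threshold:
            break                              # accept the current shading data
        optimizer.zero_grad()
        error.backward()
        optimizer.step()                       # adjust the data and re-render
    return shading.detach()                    # second texture shading data k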
12. The method of claim 11, wherein the obtaining texture processing parameters comprises:
acquiring N corrected rendering areas corresponding to the first corrected image k, and determining area constraint parameters corresponding to the first corrected image k based on the N corrected rendering areas;
and acquiring a Gaussian blur kernel, and carrying out parameter constraint processing on the Gaussian blur kernel by adopting the region constraint parameters to obtain texture processing parameters.
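Claim 12 constrains a Gaussian blur kernel by region constraint parameters derived from the corrected rendering areas. A minimal sketch, assuming three corrected rendering areas and assumed per-region weights, is:

import numpy as np
from scipy.ndimage import gaussian_filter

def texture_processing_parameters(region_map, sigma=2.0,
                                  region_weights=(1.0, 0.5, 0.0)):
    """region_map: (H, W) labels of the N corrected rendering areas (here N = 3)."""
    # Region constraint parameters: one weight per corrected rendering area.
    constraint = np.take(np.asarray(region_weights, dtype=np.float32), region_map)
    # Constrain the Gaussian blur kernel with the region weights: blurring the
    # constraint map makes the per-pixel parameter fall off smoothly at borders.
    return gaussian_filter(constraint, sigma=sigma)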
13. An image processing method, the method comprising:
obtaining a viewpoint image sample, obtaining a first sample texture grid of the viewpoint image sample, and performing texture analysis on the viewpoint image sample to obtain first sample texture coloring data;
inputting a first sample texture grid of the viewpoint image sample and the first sample texture coloring data into an initial restoration rendering model for rendering, and determining a sample depth image and a sample rendering image corresponding to the viewpoint image sample;
performing pixel change analysis on the sample rendering image, and determining N rendering areas corresponding to the sample rendering image, wherein N is a positive integer, and the deformation degrees of pixels corresponding to different rendering areas are different when the pixels move;
inputting the sample depth image, the sample rendering image and N rendering areas corresponding to the sample rendering image into an initial image diffusion model, adding sample noise data into the sample rendering image, and generating a sample noise image corresponding to the sample rendering image;
denoising the sample noise image to obtain a sample correction image;
and carrying out texture analysis on the sample corrected image to obtain second sample texture coloring data, and carrying out parameter adjustment on the initial restoration rendering model and the initial image diffusion model based on the first sample texture coloring data and the second sample texture coloring data until a model convergence condition is reached to obtain a restoration rendering model corresponding to the initial restoration rendering model and an image diffusion model corresponding to the initial image diffusion model.
14. The method of claim 13, wherein the obtaining the viewpoint image samples comprises:
obtaining a sample object and an initial object texture grid corresponding to the sample object, performing geometric deformation processing on the initial object texture grid, and performing grid Laplacian regularization processing to generate an optimized texture grid;
and adjusting the sample object by adopting the optimized texture grid to obtain a rendering object, adding an image background to the rendering object, and generating a viewpoint image sample.
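The grid Laplacian regularization of claim 14 can be sketched as a neighbor-averaging smoothing applied after the geometric deformation. The random perturbation used as the deformation and the smoothing weight lam are assumed values; the claim does not prescribe either.

import numpy as np

def laplacian_regularize(vertices, neighbors, steps=10, lam=0.5):
    """vertices: (V, 3) positions of the deformed object texture grid.
    neighbors: list of index lists, neighbors[v] = vertices adjacent to v."""
    v = vertices.copy()
    for _ in range(steps):
        smoothed = np.stack([v[nb].mean(axis=0) for nb in neighbors])
        # Laplacian term: pull each vertex toward the centroid of its neighbors.
        v = v + lam * (smoothed - v)
    return v

def make_optimized_grid(vertices, neighbors, deform_scale=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Geometric deformation processing (here an assumed random perturbation).
    deformed = vertices + deform_scale * rng.standard_normal(vertices.shape)
    return laplacian_regularize(deformed, neighbors)   # optimized texture grid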
15. The method of claim 13, wherein the parameter adjusting the initial repair rendering model and the initial image diffusion model based on the first sample texture shading data and the second sample texture shading data comprises:
acquiring a sample tag, and generating a first loss function based on the sample tag and the sample correction image;
generating a second penalty function based on the first sample texture shading data and the second sample texture shading data;
and adopting the first loss function and the second loss function to carry out parameter adjustment on the initial restoration rendering model and the initial image diffusion model.
16. An image processing apparatus, characterized in that the apparatus comprises:
the texture acquisition module is used for acquiring a first texture grid of the first viewpoint image;
the coloring acquisition module is used for carrying out texture analysis on the first viewpoint image to obtain first texture coloring data;
the image rendering module is used for rendering the first texture grid of the first viewpoint image and the first texture coloring data and determining a depth image and a rendering image corresponding to the first viewpoint image;
the texture optimization module is used for performing texture optimization processing on the rendered image based on the depth image corresponding to the first viewpoint image to obtain a first corrected image;
the texture optimization module comprises:
the pixel analysis unit is used for carrying out pixel change analysis on the rendered image and determining N rendering areas corresponding to the rendered image; n is a positive integer; the deformation degree of the pixels corresponding to different rendering areas is different when the pixels move;
the noise adding processing unit is used for adding noise data into the rendering image based on the N rendering areas and the depth image corresponding to the first viewpoint image, and generating a noise image corresponding to the rendering image;
and the image denoising unit is used for denoising the noise image and generating a first corrected image.
17. An image processing apparatus, characterized in that the apparatus comprises:
the viewpoint acquisition module is used for acquiring viewpoint image samples;
the sample analysis module is used for acquiring a first sample texture grid of the viewpoint image sample, and carrying out texture analysis on the viewpoint image sample to obtain first sample texture coloring data;
the sample rendering module is used for inputting a first sample texture grid of the viewpoint image sample and the first sample texture coloring data into an initial restoration rendering model for rendering, and determining a sample depth image and a sample rendering image corresponding to the viewpoint image sample;
the sample optimization module is used for carrying out pixel change analysis on the sample rendering image, determining N rendering areas corresponding to the sample rendering image, wherein N is a positive integer, and the deformation degrees of pixels corresponding to different rendering areas are different when the pixels move;
the sample optimizing module is further configured to input the sample depth image, the sample rendering image, and N rendering areas corresponding to the sample rendering image into an initial image diffusion model, and add sample noise data into the sample rendering image to generate a sample noise image corresponding to the sample rendering image;
the sample optimizing module is further used for denoising the sample noise image to obtain a sample correction image;
the sample projection module is used for carrying out texture analysis on the sample correction image to obtain second sample texture coloring data;
the parameter adjustment module is used for carrying out parameter adjustment on the initial restoration rendering model and the initial image diffusion model based on the first sample texture coloring data and the second sample texture coloring data;
and the model determining module is used for obtaining the repair rendering model corresponding to the initial repair rendering model and the image diffusion model corresponding to the initial image diffusion model until the model convergence condition is reached.
18. A computer device, comprising a processor, a memory, and an input-output interface;
the processor is connected to the memory and the input-output interface, respectively, wherein the input-output interface is used for receiving data and outputting data, the memory is used for storing a computer program, and the processor is used for calling the computer program to enable the computer device to execute the method of any one of claims 1-12 or execute the method of any one of claims 13-15.
19. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the method of any one of claims 1-12 or to perform the method of any one of claims 13-15.
CN202310548176.XA 2023-05-16 2023-05-16 Image processing method, device, computer and storage medium Active CN116310046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310548176.XA CN116310046B (en) 2023-05-16 2023-05-16 Image processing method, device, computer and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310548176.XA CN116310046B (en) 2023-05-16 2023-05-16 Image processing method, device, computer and storage medium

Publications (2)

Publication Number Publication Date
CN116310046A CN116310046A (en) 2023-06-23
CN116310046B true CN116310046B (en) 2023-08-22

Family

ID=86815242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310548176.XA Active CN116310046B (en) 2023-05-16 2023-05-16 Image processing method, device, computer and storage medium

Country Status (1)

Country Link
CN (1) CN116310046B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778065B (en) * 2023-08-21 2024-01-02 腾讯科技(深圳)有限公司 Image processing method, device, computer and storage medium
CN117197319B (en) * 2023-11-07 2024-03-22 腾讯科技(深圳)有限公司 Image generation method, device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111243071A (en) * 2020-01-08 2020-06-05 叠境数字科技(上海)有限公司 Texture rendering method, system, chip, device and medium for real-time three-dimensional human body reconstruction
CN111508052A (en) * 2020-04-23 2020-08-07 网易(杭州)网络有限公司 Rendering method and device of three-dimensional grid body
CN112270745A (en) * 2020-11-04 2021-01-26 北京百度网讯科技有限公司 Image generation method, device, equipment and storage medium
CN113888392A (en) * 2021-08-27 2022-01-04 清华大学 Image rendering method and device, electronic equipment and storage medium
WO2022120809A1 (en) * 2020-12-11 2022-06-16 Oppo广东移动通信有限公司 Virtual view drawing method and apparatus, rendering method and apparatus, and decoding method and apparatus, and devices and storage medium
CN114764840A (en) * 2020-12-31 2022-07-19 阿里巴巴集团控股有限公司 Image rendering method, device, equipment and storage medium
CN115760940A (en) * 2022-10-27 2023-03-07 网易(杭州)网络有限公司 Object texture processing method, device, equipment and storage medium
CN116109798A (en) * 2023-04-04 2023-05-12 腾讯科技(深圳)有限公司 Image data processing method, device, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2511040A1 (en) * 2004-09-23 2006-03-23 The Governors Of The University Of Alberta Method and system for real time image rendering
US11615587B2 (en) * 2020-06-13 2023-03-28 Qualcomm Incorporated Object reconstruction with texture parsing


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Real-time shadow anti-aliasing algorithm based on data correction; Zhao Nailiang; Chen Yanjun; Pan Zhigeng; Journal of Computer-Aided Design & Computer Graphics (08); pp. 48-53 *

Also Published As

Publication number Publication date
CN116310046A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN116310046B (en) Image processing method, device, computer and storage medium
WO2021174939A1 (en) Facial image acquisition method and system
US10748324B2 (en) Generating stylized-stroke images from source images utilizing style-transfer-neural networks with non-photorealistic-rendering
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
CN116109798B (en) Image data processing method, device, equipment and medium
US9437034B1 (en) Multiview texturing for three-dimensional models
CN112348921A (en) Mapping method and system based on visual semantic point cloud
CN110246209B (en) Image processing method and device
CN108961383A (en) three-dimensional rebuilding method and device
WO2024193609A1 (en) Image rendering method and apparatus, electronic device, storage medium and program product
CN111382618B (en) Illumination detection method, device, equipment and storage medium for face image
CN110378947A (en) 3D model reconstruction method, device and electronic equipment
CN114077891B (en) Training method of style conversion model and training method of virtual building detection model
CN115908753B (en) Method and related device for reconstructing whole-body human body grid surface
CN113362338A (en) Rail segmentation method, device, computer equipment and rail segmentation processing system
CN114170290A (en) Image processing method and related equipment
CN117197388A (en) Live-action three-dimensional virtual reality scene construction method and system based on generation of antagonistic neural network and oblique photography
CN116797768A (en) Method and device for reducing reality of panoramic image
CN112819937B (en) Self-adaptive multi-object light field three-dimensional reconstruction method, device and equipment
CN104463962A (en) Three-dimensional scene reconstruction method based on GPS information video
CN116385622B (en) Cloud image processing method, cloud image processing device, computer and readable storage medium
CN117522853A (en) Fault positioning method, system, equipment and storage medium of photovoltaic power station
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN112051921A (en) AR navigation map generation method and device, computer equipment and readable storage medium
CN116385577A (en) Virtual viewpoint image generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40088247

Country of ref document: HK