CN117173314A - Image processing method, device, equipment, medium and program product


Info

Publication number: CN117173314A (granted as CN117173314B)
Application number: CN202311448678.1A
Authority: CN (China)
Inventor: 徐东 (Xu Dong)
Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Original language: Chinese (zh)
Legal status: Granted; Active
Prior art keywords: image, rendering, dimension space, pixel height, rendered
Classification: Image Generation

Abstract

The embodiment of the application provides an image processing method, device, equipment, medium, and program product. The method comprises the following steps: acquiring an image to be rendered; acquiring a first object parameter of a rendering object in the image; reconstructing the rendering object in a second dimension space according to the mapping relation between a first dimension space and the second dimension space, based on the first object parameter of the rendering object, to obtain a second object parameter of the rendering object; and rendering and displaying the image according to the second object parameter of the rendering object. By adopting the embodiment of the application, image rendering is more lifelike and the image rendering effect is effectively improved.

Description

Image processing method, device, equipment, medium and program product
Technical Field
The present application relates to the field of computer technology, and in particular, to the field of image processing, and more particularly, to an image processing method, an image processing apparatus, a computer device, a computer readable storage medium, and a computer program product.
Background
Image rendering can be understood simply as a model imaging process that draws a three-dimensional scene into a two-dimensional image through a series of computations.
In the prior art, image rendering is mainly realized by RGB-D (RGB + Depth Map) technology, i.e., a three-dimensional visual sensing technology that combines RGB (Red Green Blue) color images with depth images. Practice has shown that when a rendering object in an image is reconstructed by RGB-D technology, errors in geometry and texture often occur, leading to problems such as camera drift and shadow confusion, so the image rendering effect is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, image processing equipment, an image processing medium, and a program product, which can make image rendering more lifelike and effectively improve the image rendering effect.
In one aspect, an embodiment of the present application provides an image processing method, including:
acquiring an image to be rendered, wherein the image refers to an image represented in a first dimension space; the image comprises a rendering object to be rendered;
acquiring a first object parameter of a rendering object in an image; the first object parameter is a parameter for rendering the rendered object in the first dimension space;
reconstructing the rendering object in a second dimension space according to the mapping relation between the first dimension space and the second dimension space, based on the first object parameter of the rendering object, to obtain a second object parameter of the rendering object; the second dimension space is a stereoscopic scene provided with a preset light source; the second object parameter is a parameter for reconstructing the rendering object in the second dimension space under the illumination of the preset light source;
and rendering and displaying the image according to the second object parameter of the rendering object.
In another aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition unit configured to acquire an image to be rendered, the image being an image represented in a first dimension space; the image comprises a rendering object to be rendered;
the processing unit is used for acquiring a first object parameter of the rendering object in the image; the first object parameter is a parameter for rendering the rendered object in the first dimension space;
the processing unit is also used for reconstructing the rendering object in the second dimension space according to the mapping relation between the first dimension space and the second dimension space, based on the first object parameter of the rendering object, to obtain the second object parameter of the rendering object; the second dimension space is a stereoscopic scene provided with a preset light source; the second object parameter is a parameter for reconstructing the rendering object in the second dimension space under the illumination of the preset light source;
and the processing unit is also used for rendering and displaying the image according to the second object parameter of the rendering object.
In one implementation, when acquiring the first object parameter of the rendering object in the image, the processing unit is specifically configured to:
acquire a foreground image corresponding to the image, the foreground image comprising the rendering object;
and perform pixel height estimation on the foreground image to obtain the first object parameter of the rendering object in the image, the first object parameter comprising at least: a texture map and a planar geometry of the rendering object in the image.
In one implementation, when performing pixel height estimation on the foreground image to obtain the first object parameter of the rendering object in the image, the processing unit is specifically configured to:
acquire relative coordinate information of the rendering object in the foreground image, the relative coordinate information indicating the relative height relationship, in the first dimension space, among the rendering object, the image light source in the foreground image, and the virtual ground in the foreground image;
perform pixel shading processing on the rendering object in the foreground image according to the relative coordinate information of the rendering object, to obtain a foreground pixel height map corresponding to the foreground image;
and perform feature extraction processing on the foreground pixel height map to obtain the first object parameter of the rendering object.
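For illustration only, the following Python sketch shows one possible reading of this pixel-height pipeline: recolor each rendering object by its height relative to the image light source and the virtual ground, then read off a texture map and a planar geometry per object. All function names, the row-based height convention, and the bounding-box geometry are illustrative assumptions, not fixed by the text.

```python
import numpy as np

def build_foreground_pixel_height_map(foreground, masks, light_y, ground_y):
    """Recolor each rendering object by its height relative to the image
    light source and the virtual ground (both given as image-space rows;
    assumes ground_y > light_y). Returns an HxW float32 map in [0, 1]."""
    h, w = foreground.shape[:2]
    height_map = np.zeros((h, w), dtype=np.float32)
    for mask in masks:                       # one boolean HxW mask per object
        ys, xs = np.nonzero(mask)
        # Relative height: 0 at the virtual ground row, 1 at the light row.
        rel = (ground_y - ys) / float(ground_y - light_y + 1e-6)
        height_map[ys, xs] = np.clip(rel, 0.0, 1.0)
    return height_map

def extract_first_object_params(foreground, height_map, mask):
    """First object parameters per the text: a texture map and a planar
    geometry (approximated here by the mask's bounding box)."""
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    texture_map = foreground[y0:y1 + 1, x0:x1 + 1].copy()
    plane_geometry = (x0, y0, x1 - x0 + 1, y1 - y0 + 1)   # x, y, w, h
    return {"texture": texture_map, "geometry": plane_geometry,
            "pixel_height": height_map[y0:y1 + 1, x0:x1 + 1]}
```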
In one implementation, when reconstructing the rendering object in the second dimension space according to the mapping relationship between the first dimension space and the second dimension space, based on the first object parameter of the rendering object, to obtain the second object parameter of the rendering object, the processing unit is specifically configured to:
set the preset light source and the foreground pixel height map corresponding to the image in the second dimension space according to the mapping relationship between the first dimension space and the second dimension space; wherein the mapping relationship between the first dimension space and the second dimension space indicates that: the foreground pixel height map is perpendicular, in the second dimension space, to the virtual ground of the second dimension space; the z-axis of the camera coordinate system established with the preset light source as the origin of the coordinate system is perpendicular to the foreground pixel height map; the x-axis of the camera coordinate system in the second dimension space is parallel to the x-axis of the image coordinate system in the first dimension space; and the y-axis of the camera coordinate system in the second dimension space is parallel to the y-axis of the image coordinate system in the first dimension space;
map the rendering object to the second dimension space according to the illumination cast by the preset light source on the foreground pixel height map in the second dimension space and the first object parameter of the rendering object, to obtain the second object parameter of the rendering object reconstructed in the second dimension space;
wherein the second object parameter comprises at least: parameters of the rendering object in the stereoscopic scene presented by the second dimension space, and parameters of the object projection produced by the rendering object in the second dimension space under the influence of the illumination of the preset light source.
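A minimal sketch of the stated scene setup, assuming the preset light source sits at the camera origin and the height-map plane lies at an arbitrary distance along the z-axis (the distance and all names are illustrative, not fixed by the text):

```python
import numpy as np

def build_scene(map_w, map_h, plane_dist):
    """Scene setup per the stated mapping relation: the preset light source
    is the origin of the camera coordinate system; the camera z-axis is
    perpendicular to the foreground pixel height map; the camera x/y axes
    are parallel to the image x/y axes; the height-map plane is
    perpendicular to the virtual ground. plane_dist is an assumed
    light-to-plane distance."""
    light_pos = np.zeros(3)                      # preset light source / origin

    def pixel_to_3d(u, v):
        """Map image pixel (u, v) to a 3D point on the height-map plane."""
        return np.array([u - map_w / 2.0,        # x parallel to image x
                         v - map_h / 2.0,        # y parallel to image y
                         plane_dist])            # z perpendicular to the plane

    ground_y = map_h / 2.0                       # virtual ground: plane y = ground_y
    return light_pos, pixel_to_3d, ground_y
```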
In one implementation, when mapping the rendering object to the second dimension space according to the illumination cast by the preset light source on the foreground pixel height map in the second dimension space and the first object parameter of the rendering object, to obtain the second object parameter of the rendering object reconstructed in the second dimension space, the processing unit is specifically configured to:
map the pixel points constituting the rendering object in the foreground pixel height map to the second dimension space according to the illumination cast by the preset light source on the foreground pixel height map in the second dimension space, to obtain the object projection formed in the second dimension space by the pixel points constituting the rendering object;
determine the parameters of the object projection formed by the rendering object in the second dimension space according to the parameters of the shadow receiver for the object projection in the second dimension space and the first object parameter of the rendering object; the shadow receiver in the second dimension space corresponds to the object in the image that receives the projection of the rendering object;
and calculate, according to the imaging principle, the parameters of the rendering object in the stereoscopic scene presented by the second dimension space, based on the geometric relation, in the second dimension space, among the object projection of the rendering object, the preset light source, and the foreground pixel height map; wherein the second object parameter comprises at least one of: spatial coordinate information, color information, texture information, and depth information.
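Each foreground pixel mapped into 3D can then be cast onto the virtual ground along the light ray to form the object projection. The following sketch illustrates that geometry under the assumptions of the scene setup above (the ground-plane convention is hypothetical):

```python
import numpy as np

def project_to_ground(light_pos, point_3d, ground_y):
    """Hard projection of one reconstructed pixel point onto the virtual
    ground (the plane y = ground_y), along the ray from the preset light
    source through the point; returns None when the ray never hits it."""
    direction = point_3d - light_pos
    if abs(direction[1]) < 1e-9:                 # ray parallel to the ground
        return None
    t = (ground_y - light_pos[1]) / direction[1]
    return light_pos + t * direction if t > 0 else None
```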
In one implementation, when rendering and displaying the image according to the second object parameter of the rendering object, the processing unit is specifically configured to:
perform normal reconstruction processing on the rendering object based on the second object parameter of the rendering object to obtain a normal modeling image, in which the rendering object and the soft shadow it casts under the preset light source have a bump-mapping (concave-convex) effect; and
perform depth extraction processing on the rendering object based on the second object parameter of the rendering object to obtain a target depth image, the target depth image being used to reflect the depth information between the rendering object and the preset light source;
and acquire a background pixel height map of the image, and render and display the image based on the background pixel height map, the normal modeling image, and the target depth image.
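The text does not spell out the normal reconstruction; a common way to realize it, sketched below under the assumption that the reconstructed geometry is available as a height field, is to normalize the cross product of the two surface tangents. The [0, 1]-packed output corresponds to the usual RGB encoding of normal maps.

```python
import numpy as np

def normals_from_height(z):
    """Normal modeling image from a height field z (HxW): the surface normal
    is the normalized cross product of the tangents (1, 0, dz/dx) and
    (0, 1, dz/dy), which gives (-dz/dx, -dz/dy, 1) before normalization."""
    dz_dy, dz_dx = np.gradient(z.astype(np.float32))
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(z, dtype=np.float32)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return 0.5 * (n + 1.0)     # pack [-1, 1] into [0, 1] for an RGB normal map
```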
In one implementation, when performing depth extraction processing on the rendering object based on the second object parameter of the rendering object to obtain the target depth image, the processing unit is specifically configured to:
perform depth extraction processing on the rendering object based on the second object parameter of the rendering object to obtain an initial depth image;
and adjust the depth precision of the initial depth image in the direction of improving rendering speed, to obtain the target depth image.
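One plausible reading of adjusting depth precision "in the direction of improving rendering speed" is quantizing the depth buffer to fewer bits, trading precision for a smaller, faster-to-sample buffer. The sketch below illustrates that assumption; the bit depth and normalization are illustrative only.

```python
import numpy as np

def quantize_depth(depth, bits=16):
    """Normalize the initial depth image and store it at a coarser integer
    precision; returns the quantized map plus the range needed to invert."""
    d_min, d_max = float(depth.min()), float(depth.max())
    scale = (2 ** bits) - 1
    q = np.round((depth - d_min) / max(d_max - d_min, 1e-6) * scale)
    return q.astype(np.uint16 if bits <= 16 else np.uint32), (d_min, d_max)
```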
In one implementation, when rendering and displaying the image based on the background pixel height map, the normal modeling image, and the target depth image, the processing unit is specifically configured to:
acquire a rendering model, the rendering model being used to indicate the joint influence, on soft shadow rendering, of the geometry of the preset light source, the information about the shadow occluder, the information about the shadow receiver, and the spatial relationship between the shadow occluder and the shadow receiver;
and render and display the image by processing the background pixel height map, the normal modeling image, and the target depth image with the rendering model.
In one implementation, the rendering model comprises N perception buffer channels, N being an integer greater than zero, and each perception buffer channel corresponds to one map; when rendering and displaying the image by processing the background pixel height map, the normal modeling image, and the target depth image with the rendering model, the processing unit is specifically configured to:
extract, from the background pixel height map, the normal modeling image, and the target depth image, the map corresponding to each of the N perception buffer channels;
perform feature extraction on the corresponding maps through the N perception buffer channels included in the rendering model, according to the correspondence between the perception buffer channels and the maps, to obtain the channel features corresponding to the N perception buffer channels;
concatenate the channel features corresponding to the N perception buffer channels to obtain a concatenated (spliced) feature;
and render and display the image based on the concatenated feature; the rendered image can present the soft-shadow effect of the object projection of the rendering object under the preset light source.
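The text does not fix the architecture of the rendering model; the PyTorch sketch below merely illustrates the stated structure of N perception buffer channels, each encoding one map, whose channel features are concatenated and decoded into the rendered image. Layer sizes and the sigmoid output are hypothetical choices.

```python
import torch
import torch.nn as nn

class PerceptionBufferModel(nn.Module):
    """Per-channel encoders whose features are concatenated and decoded."""
    def __init__(self, n_channels=3, feat=16):
        super().__init__()
        # One small encoder per perception buffer channel (one map each).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
            for _ in range(n_channels))
        self.decoder = nn.Conv2d(n_channels * feat, 3, 3, padding=1)

    def forward(self, maps):            # maps: list of N tensors, each Bx1xHxW
        feats = [enc(m) for enc, m in zip(self.encoders, maps)]
        spliced = torch.cat(feats, dim=1)             # concatenated features
        return torch.sigmoid(self.decoder(spliced))   # rendered image, Bx3xHxW
```

Usage would be, e.g., `model([pixel_height_buf, hard_shadow_buf, rel_distance_buf])` with the three buffer maps described below.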
In one implementation, the N perception buffer channels comprise: a pixel height buffer channel, a hard shadow buffer channel, and a relative distance buffer channel; wherein:
the pixel height buffer channel is used to perform feature extraction on the gradients of the foreground pixel height map and the background pixel height map of the image, so as to extract the geometry of the rendering object in the foreground and background pixel height maps;
the hard shadow buffer channel is used to perform feature extraction on the hard shadow image of the image, so as to extract the outer boundary of the soft shadow in the image;
the relative distance buffer channel is used to perform feature extraction on the relative distance map, so as to extract the softness of the illumination shadow of the rendering object; the relative distance map records the relative distance between the shadow occluder and the shadow receiver in the second dimension space.
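As a hedged illustration of how these three buffer inputs might be assembled (the gradient combination and the distance convention are assumptions, not given by the text):

```python
import numpy as np

def build_buffer_maps(fg_height, bg_height, shadow_mask, occluder_d, receiver_d):
    """Assemble the three buffer inputs named above (all arrays HxW):
    - pixel height buffer: gradient magnitude of the fg/bg height maps
    - hard shadow buffer: binary hard-shadow mask (outer soft-shadow bound)
    - relative distance buffer: occluder-to-receiver distance per pixel."""
    gy_f, gx_f = np.gradient(fg_height.astype(np.float32))
    gy_b, gx_b = np.gradient(bg_height.astype(np.float32))
    pixel_height_buf = np.hypot(gx_f + gx_b, gy_f + gy_b)
    hard_shadow_buf = shadow_mask.astype(np.float32)
    rel_distance_buf = np.abs(occluder_d - receiver_d)
    return pixel_height_buf, hard_shadow_buf, rel_distance_buf
```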
In another aspect, an embodiment of the present application provides a computer device, comprising:
a processor adapted to execute a computer program; and
a computer-readable storage medium in which a computer program is stored, the computer program, when executed by the processor, implementing the image processing method described above.
In another aspect, embodiments of the present application provide a computer readable storage medium storing a computer program adapted to be loaded by a processor and to perform an image processing method as described above.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the image processing method described above.
In the embodiment of the application, when an image to be rendered needs to be rendered and displayed, a first object parameter of the rendering object to be rendered in the image can be acquired; the first object parameter may be simply understood as the parameters required to render the rendering object in the first dimension space in which the image is represented. Then, the second object parameter of the rendering object can be obtained based on the first object parameter of the rendering object and according to the mapping relationship between the first dimension space and the second dimension space. In this way, a more realistic image may be rendered and displayed according to the second object parameter of the rendering object. It can be seen that the embodiments of the present application propose a new image rendering scheme that supports mapping the rendering object comprised by an image represented in the first dimension space from the first dimension space to the second dimension space, i.e., reconstructing the rendering object of the image in the second dimension space. Compared with the first dimension space, the second dimension space is a stereoscopic scene with a preset light source; therefore, by reconstructing the rendering object in the stereoscopic second dimension space, the scheme can simulate how the rendering object would appear in the real world (e.g., in an illuminated real-world scene the rendering object casts a shadow, and the display effect of that shadow is affected by the texture of the rendering object and by the texture of the object receiving the shadow), thereby obtaining a more real and accurate second object parameter for the rendering object. Therefore, when the image is rendered based on the accurate second object parameter of the rendering object, the quality of the image can be significantly improved (for example, the surface of the rendering object shows a more realistic texture effect under illumination, and the rendering object casts softer, more realistic shadows), which improves the fidelity of image rendering and effectively improves the image rendering effect.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow diagram of an image processing scheme provided by an exemplary embodiment of the present application;
FIG. 2a is a schematic diagram of an architecture of an image processing system according to an exemplary embodiment of the present application;
FIG. 2b is a schematic diagram of an architecture of another image processing system provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of an image processing framework provided in accordance with an exemplary embodiment of the present application;
FIG. 4 is a flow chart of an image processing method according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of an image coordinate system provided by an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a mapping between a first dimension space and a second dimension space provided by an exemplary embodiment of the present application;
FIG. 7 is a flow chart of another image processing method according to an exemplary embodiment of the present application;
FIG. 8 is a schematic illustration of a surface normal provided by an exemplary embodiment of the present application;
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application;
FIG. 10 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the embodiment of the present application, an image processing scheme is provided, and technical terms and related concepts related to the image processing scheme are briefly introduced below, where:
(1) Artificial intelligence (Artificial Intelligence, AI).
Artificial intelligence is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making. Artificial intelligence technology is a comprehensive discipline involving a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include, for example, sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, pre-training model technology, operation/interaction systems, and mechatronics. A pre-training model, also called a large model or foundation model, can be fine-tuned and then widely applied to downstream tasks in all major directions of artificial intelligence. Artificial intelligence software technology mainly includes directions such as computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
The embodiment of the application mainly relates to computer vision technology in the field of artificial intelligence. Computer Vision (CV) is the science of studying how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to recognize, detect, and measure targets, and further performs graphic processing, so that the computer produces an image more suitable for human eyes to observe or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies, attempting to build artificial intelligence systems that can acquire information from images or multidimensional data. Large-model technology has brought important changes to the development of computer vision: pre-trained models in the vision field can be quickly and widely applied to specific downstream tasks through fine-tuning (finetune). Computer vision technology typically includes image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, as well as common biometric technologies such as face recognition and fingerprint recognition.
Further, the embodiments of the present application mainly relate to 3D technology and three-dimensional object reconstruction in computer vision technology. Wherein: (1) 3D (Dimensions) technology is the related technology that uses 3D space. 3D and three-dimensional have the same meaning: a direction vector is added to a planar two-dimensional system to form a spatial system with stereoscopic properties. That is, the coordinate system in 3D space comprises three coordinate axes, through which spatial positions are expressed; the three axes may be denoted as the x-axis, y-axis, and z-axis, where, taking the origin of the coordinate system as reference, x represents the left-right extent at the origin, y represents the front-back extent at the origin, and z represents the up-down extent at the origin. There is also 2D technology corresponding to 3D technology; 2D is a planar dimension consisting of the horizontal and vertical directions; that is, the coordinate system in 2D space comprises two coordinate axes, the x-axis and the y-axis, and, compared with 3D, lacks the z-axis that represents depth information. Between 2D and 3D there is 2.5D technology; 2.5D attempts to simulate the effect of a 3D model using 2D image resources (sprites), so 2.5D is also referred to as pseudo-3D. 2.5D and 3D can be distinguished by whether the model supports multi-view viewing: a 2.5D model, while having a stereoscopic appearance, can only be viewed from one viewpoint in stereoscopic space, whereas a 3D model supports viewing from multiple viewpoints in 3D space; therefore, compared with 3D technology, 2.5D technology still has deficiencies such as being insufficiently realistic.
(2) Three-dimensional object reconstruction is an important technique in 3D technology; three-dimensional object reconstruction may be referred to simply as three-dimensional reconstruction, which is the creation of a mathematical model for a three-dimensional object or three-dimensional scene that is suitable for computer representation and processing. Three-dimensional object reconstruction is the basis for processing, operating and analyzing the properties of a three-dimensional object or a three-dimensional scene in a computer environment, and is also a key technology for establishing virtual reality expressing an objective world in a computer. That is, three-dimensional object reconstruction refers to the restoration and reconstruction of certain three-dimensional objects or three-dimensional scenes; the reconstructed mathematical model is convenient for computer representation and processing.
(2) Computer graphics (Computer Graphics, CG).
Computer graphics is the science of using mathematical algorithms to convert two-dimensional or three-dimensional graphics into the raster form of a computer display; it specifically studies how images are represented in a computer, and the principles and algorithms for computing, processing, and displaying images with a computer. One important tool in computer graphics is rendering technology. Rendering refers to the process of converting an abstract model in a computer into an intuitively visible image; as the last stage of the computer graphics pipeline, rendering converts a model into an image and finally presents it on the computer screen. The model mentioned above may refer to a three-dimensional object or virtual scene strictly defined by a language or data structure, and the parameters of the model may include, but are not limited to: geometry (e.g., the shape of an object), viewpoint (i.e., the optical center of the camera of the image), texture (e.g., a 2D picture used to add details to an object, such as grooves or patterns on the object surface), and illumination (e.g., light rays used to achieve shadow effects in the image).
In practical applications, the image to be rendered is mostly an image represented in a first dimension space, such as a 2D space or a 2.5D space, i.e., the image to be rendered is a 2D image or a 2.5D image. When a 3D object is rendered on a 2D or 2.5D image using a rendering technology of computer graphics (such as RGB-D technology), the inaccuracy of the image resources of the 2D or 2.5D image often causes errors in the texture (e.g., texture ghosting and blurring) and geometry (e.g., inaccurate geometric shape) of the rendered 3D object, leading to problems such as a poor rendering effect and insufficient realism.
In order to ensure the realism of the rendered image and improve the image rendering effect, the embodiments of the present application support integrating the optimization of the engine camera, geometry, and texture into a unified framework; that is, through the unified framework, a default engine camera can be introduced into 3D space to simultaneously optimize the geometry, texture, and other information of the rendering object to be rendered in the image. Specifically, based on this unified framework, the image processing scheme provided by the embodiments of the present application is a controllable system based on pixel height (here, "controllable" refers to the controllability of the preset light source; for example, the light of the preset light source is made softer through a software algorithm, thereby improving the shadow rendering effect). The system can map the image to be rendered (specifically, the rendering object to be rendered in the image) from a first dimension space (such as a 2.5D or 2D space) to a second dimension space (such as a 3D space) based on the preset light source, and, based on this spatial mapping, provide a lighting effect for the surface material of the rendering object in the image, so that the texture and geometry (or geometric figures, such as triangles and quadrilaterals) of the rendering object are reconstructed in the second dimension space based on the pixel height map of the image, achieving better image rendering.
The texture rendering enhancement flow related to the image processing scheme provided by the embodiment of the present application may refer to fig. 1, and specifically may include:
First, after the image to be rendered is acquired, a first object parameter of the rendering object to be rendered in the image is acquired; the first object parameter may be simply understood as the parameters required to render the rendering object in the first dimension space in which the image is represented. As shown in FIG. 1, specifically, feature extraction is performed on the foreground pixel height map corresponding to the image, so as to extract the first object parameter of the rendering object in the image; the foreground pixel height map is obtained by pixel height estimation on the foreground image of the image, and the foreground image refers to an image extracted from the image that contains only the rendering objects in the image.
Then, based on the first object parameter of the rendering object in the image, the rendering object is reconstructed in the second dimension space according to the mapping relationship between the first dimension space and the second dimension space, so as to obtain the second object parameter of the rendering object. As shown in FIG. 1, the second dimension space is a 3D space and, compared with the first dimension space (e.g., a 2D or 2.5D space), is a stereoscopic scene with a preset light source (the camera center O' shown in FIG. 1); in this way, using the preset light source to illuminate the three-dimensional space, the rendering object can be reconstructed into the second dimension space, yielding the second object parameter required for reconstructing the rendering object in the second dimension space.
Finally, the image is rendered and displayed according to the second object parameter of the rendering object, which is more realistic in the second dimension space. As shown in FIG. 1, this specifically includes: on the one hand, based on the 3D geometry of the rendering object reconstructed through the spatial mapping, the surface normal of the rendering object can be computed to obtain a normal modeling image, which adds a bump (concave-convex) feel to the texture of the rendering object and thereby improves its realism. On the other hand, in the direction of improving rendering speed, a depth image whose precision meets the requirement can be obtained based on the distance between the rendering object and the preset light source in the second dimension space. In this way, the rendering and display of the image can be achieved based on the normal modeling image, the depth image, and the background pixel height map of the image. Considering that the embodiment of the present application uses an engine camera with preset extrinsics and intrinsics (namely, the preset light source mentioned above) in the second dimension space, the rendering may be a classical rendering method based on the reconstructed rendering object, so as to accurately render lighting effects in the three-dimensional scene, including the reflection produced when light strikes the surface of the rendering object, the soft shadow produced when light strikes the rendering object, or the refraction produced when light passes through a transparent object; of course, it is also possible to render images directly in pixel height space using efficient data structures derived from the pixel height representation.
Therefore, the embodiment of the application adopts the technical means of mapping the rendering object included in the image represented by the first dimension space from the first dimension space to the second dimension space, and can simulate the picture of the rendering object in the real world by reconstructing the rendering object in the stereoscopic second dimension space, thereby obtaining more real and accurate second object parameters related to the rendering object. Therefore, when the image is rendered based on the accurate second object parameters of the rendering object, the quality of the image can be remarkably improved, so that the fidelity of image rendering is improved, and the image rendering effect is effectively improved.
In practical application, the image processing scheme provided by the embodiments of the present application can be applied to any application scenario that requires image rendering, including but not limited to: game scenes, autonomous driving scenes, audio-visual scenes, and extended reality scenes (such as Virtual Reality (VR) and Augmented Reality (AR) virtual pictures, which can be enhanced with the present solution), and the like. Wherein:
A game scene may include scenes in which the game screens provided by various games (e.g., Role-Playing Games (RPG), shooting games, Action Games (ACT), etc.) are rendered and enhanced; in this scenario, the computer device for executing the image processing scheme provided by the embodiments of the present application is the terminal device or server that needs to render the game screen. Autonomous driving technology means that the vehicle drives itself without driver operation, and generally includes technologies such as high-precision maps, environment perception, computer vision, behavior decision-making, path planning, and motion control; in this scenario, a vehicle environment image (such as an image of the environment in front of or around the vehicle) may be displayed on the display of the autonomous vehicle or on an operation screen for controlling the autonomous vehicle, and the computer device for executing the image processing scheme provided by the embodiments of the present application is the autonomous vehicle or the server corresponding to the autonomous vehicle. Audio-visual scenes may include scenes in which video pictures such as television series, movies, and short videos are rendered and enhanced; in this scenario, the computer device for executing the image processing scheme provided by the embodiments of the present application is the terminal device that plays the audio-visual picture, or the server that provides technical services for the terminal device. An extended reality scene may refer to a scene in which a virtual picture, such as virtual reality, is rendered and enhanced; in this scenario, the computer device for executing the image processing scheme provided by the embodiments of the present application is the terminal device that displays the virtual picture, or the server that provides technical services for the terminal device.
The implementation process of the image processing scheme provided by the embodiment of the application when being applied to the game scene is simply introduced below by taking the application of the embodiment of the application to the game scene as an example. Wherein, according to the running mode of the game, the game scenes are different. For example, games may include local games (or referred to as client games) and cloud games, categorized by the manner in which the games are run. Wherein:
(1) A local game refers to a game whose installation package is downloaded to the terminal device and which is played locally on the terminal device. The game-screen rendering process of a local game is performed by the computer device; that is, the computer device is responsible not only for rendering the game screen of the local game but also for displaying it. In a game scenario in which the target game is a local game, the above-mentioned computer device executing the rendering processing scheme may include a terminal device that interacts with the game player. That is, the image processing scheme provided by the embodiments of the present application is deployed in the terminal device held by the game player, and is invoked to achieve texture enhancement of the game image when the terminal device re-renders the game image. The terminal device may include, but is not limited to: smart phones (such as smart phones running the Android system or the iOS system), tablet computers, portable personal computers, Mobile Internet Devices (MID), smart televisions, vehicle-mounted devices, head-mounted devices, and the like.
As shown in FIG. 2a, in a scenario where the game is a local game deployed in the terminal device 201, the terminal device 201 exchanges data with the server 202 corresponding to the game deployed in it, so as to run the game; when the terminal device 201 receives a game image sent by the server 202, the terminal device 201 invokes the image processing scheme provided by the embodiments of the present application to texture-enhance the received game image and then render and display it. The server in this game scenario may include, but is not limited to: a data processing server with powerful computing capability, a World Wide Web (Web) server, an application server, or the like; alternatively, the server may be an independent physical server, or a server cluster or distributed system composed of multiple physical servers.
It can be seen that, in a game scenario where the game is a local game, the image processing scheme provided by the embodiments of the present application is deployed in the terminal device held by the game player. In this case (e.g., a mode in which the client runs the target game independently), the computation migration process provided by the embodiments of the present application can adjust the composition of computing resources without reducing the amount of resource computation, splitting the compute shader out of the GPU rendering pipeline to compute on the CPU. This reduces the requirements on the hardware configuration of the computer device (i.e., the terminal device), so that a target game with higher rendering quality can be run on a lower-end graphics card, which can to some extent increase the speed at which the terminal device runs large games and makes the target game run better.
(2) A cloud game, also referred to as game on demand, is a game that runs on a computer device comprising a cloud server. As shown in FIG. 2b, in a game scenario in which the game is a cloud game, the terminal device sends the data produced by peripherals (such as input instructions or signals) to the cloud server; the cloud server is responsible for rendering the game screen according to that data, compresses the rendered game screen, and sends it over the network to the terminal device used by the operating object, and the terminal device only performs the operation of displaying the game screen. Therefore, in a cloud-game scenario, the computer device executing the image processing scheme provided by the embodiments of the present application is the cloud server; that is, when the cloud server receives the data of the game screen, it invokes the image processing scheme provided by the embodiments of the present application to perform texture rendering enhancement on the game screen, and directly delivers the enhanced game screen to the terminal device for display. This mode of running cloud games means that the terminal device held by the game player does not need strong graphics computing and data processing capabilities, but only basic streaming media playback capability (such as the ability to display the game screen), human-computer interaction capability (the ability to capture the input operations of the operating object), and data transmission capability (such as the ability to send instructions to the cloud server).
Based on the above description of the image processing scheme provided by the embodiments of the present application and the application scenarios to which the scheme is adapted, the following points need to be further explained:
(1) When the image processing scheme is applied to any application scenario, its overall structure or flow can be summarized as the flow shown in FIG. 3, which generally includes: identifying the image to be rendered and extracting a foreground image and a background image from it; then deriving the defective texture resources (those exhibiting blur or imperfection) from the foreground image using an identification or screenshot tool (e.g., a debugging tool); then applying the texture optimization algorithm provided by the embodiments of the present application (specifically, the image processing scheme shown in FIG. 1) to perform a series of optimizations on the geometry and texture resources of the rendering object in the foreground image, obtaining an optimized normal modeling image and depth image; and finally re-running the texture loading process (i.e., the rendering process) in combination with the background pixel height map corresponding to the original background image of the image, to obtain the rendered image.
(2) Whether the texture optimization algorithm provided by the embodiments of the present application makes the rendering enhancement reach the expected effect can be evaluated not only in the dimension of image rendering effect but also in the dimension of performance consumption. In other words, the texture optimization algorithm provided by the embodiments of the present application can not only enhance the texture rendering of the image, but also avoid adding excessive performance consumption and increasing the rendering cost. The performance consumed by the texture optimization algorithm can be represented by the difference between the performance values recorded before and after the algorithm processes the image. As shown in FIG. 3, a performance value of the computer device is recorded when the foreground image of the image is input to the texture optimization algorithm, and another is recorded when image rendering ends; the two values are then subtracted, and if the difference is smaller than a threshold, which means that no excessive performance consumption has been added, it is determined that the rendering enhancement of the texture optimization algorithm provided by the embodiments of the present application achieves the expected effect.
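A minimal sketch of this before/after performance check, assuming wall-clock time and peak memory are the recorded performance values and using an illustrative threshold (the text does not fix the metric or the threshold):

```python
import time
import tracemalloc

def render_with_budget_check(optimize_and_render, foreground, threshold_s=0.05):
    """Record a performance value when the foreground image enters the
    texture optimization algorithm and again when rendering ends; if the
    difference stays under the threshold, the enhancement is considered to
    meet expectations without excessive performance cost."""
    tracemalloc.start()
    t0 = time.perf_counter()
    image = optimize_and_render(foreground)    # texture optimization + render
    elapsed = time.perf_counter() - t0
    _, peak_mem = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return image, {"elapsed_s": elapsed,
                   "peak_bytes": peak_mem,
                   "within_budget": elapsed < threshold_s}
```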
(3) In the embodiments of the present application, the collection and processing of related data should strictly comply with the requirements of relevant laws and regulations when personal information is involved: the informed consent or separate consent of the subject of the personal information must be obtained (or another legal basis for collecting the information must exist), and subsequent data use and processing must stay within the scope authorized by laws, regulations, and the subject of the personal information. For example, when the embodiments of the present application are applied to specific products or technologies, such as acquiring the image to be rendered, the permission or consent of the owner of the image (such as a game application) must be obtained, and the collection, use, and processing of related data (such as the collection and display of bullet-screen comments posted by an object) must comply with the relevant laws, regulations, and standards of the relevant region.
Based on the above-described image processing scheme, the embodiment of the present application proposes a more detailed image processing method, and the image processing method proposed by the embodiment of the present application will be described in detail below with reference to the accompanying drawings. Referring to fig. 4, fig. 4 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present application; the image processing method may be performed by the aforementioned computer device, and the method may include steps S401 to S404:
S401: an image to be rendered is acquired.
The image to be rendered refers to the image represented in the first dimension space; the first dimension space here may be a 2D space or a 2.5D space. When the first dimension space is a 2D space, the image to be rendered is a planar image represented in the 2D space (such as a hand-drawn image); when the first dimension space is a 2.5D space, the image to be rendered is a pseudo-3D image represented in the 2.5D space (such as an image with a stereoscopic viewing-angle effect in visual terms). The embodiment of the present application does not limit the first dimension space to which the image to be rendered belongs; for convenience of explanation, the following description takes the first dimension space as a 2.5D space, i.e., the image to be rendered is an image represented in 2.5D space, as an example, which is hereby noted.
It should be noted that, the image to be rendered according to the embodiment of the present application is any image having a rendering requirement. According to different application scenes, the types of the images to be rendered are different, for example, the images to be rendered are game images, video images or vehicle environment images; the embodiment of the application does not limit the type of the image to be rendered. The image processing method provided by the embodiment of the application has a wider application range.
It should be noted that the image to be rendered includes one or more rendering objects; rendering an object may specifically refer to any object in an image to be rendered for display, including but not limited to: inanimate objects (e.g., mountains, water, stones, cars, seats, etc.) and animate objects (e.g., animals, humans, plants, etc.). The embodiment of the application does not limit the number and the variety of the rendering objects included in the image.
S402: a first object parameter of a rendered object in an image is obtained.
The first object parameter of the rendering object in the image refers to: the parameters used to render the rendering object in the first dimension space in which the image is represented. In other words, the first object parameter of the rendering object may be simply understood as the parameters of the rendering object in the first dimension space, which can to some extent reflect 2D or 2.5D information such as the position, color, and shape of the rendering object in the image. The first object parameter of the rendering object may include at least: a texture map of the rendering object in the image (e.g., a color map, a bump map, or a highlight map) and a planar geometry (e.g., the geometric shape of the surface of the rendering object, such as circular or triangular), and the like.
In a specific implementation, after the computer equipment acquires the image to be rendered, the image can be subjected to image segmentation and extraction so as to separate a foreground image and a background image corresponding to the image; the foreground image corresponding to the image comprises one or more rendering objects to be rendered contained in the image, and the background image corresponding to the image is an image formed by the rest elements except the rendering objects in the image, and is usually a solid-color image. Then, after the computer device obtains the foreground image corresponding to the image, the pixel height estimation can be performed on the foreground image to obtain the first object parameter of the rendering object in the image. Where pixel height may refer to a relative position between a rendered object in the foreground image, an engine camera in the foreground image (e.g., an image light source in the image, or referred to as an original light source), and a virtual ground in the foreground image, the relative position being embodied as a relative height. Further, the pixel height estimation may refer to a process of re-coloring the rendered object in the foreground image according to a relative position between the rendered object in the foreground image, the engine camera in the foreground image, and the virtual ground in the foreground image.
In detail, based on the above simple introduction of pixel height estimation, the specific process of performing pixel height estimation on the foreground image to obtain the first object parameter of the rendering object in the image may include:
First, after acquiring the foreground image corresponding to the image to be rendered, the computer device can acquire the relative coordinate information of the rendering object in the foreground image; the relative coordinate information is used to indicate the relative height relationship, in the first dimension space, among the rendering object, the image light source in the foreground image, and the virtual ground in the foreground image. As shown in FIG. 5, the image coordinate system of the planar image (i.e., the foreground image corresponding to the image to be rendered) is established with the upper left corner of the planar image as the origin; based on this two-dimensional coordinate system, the coordinates of the rendering object (specifically, of each pixel point constituting the rendering object), the coordinates of the image light source, and the coordinates of the virtual ground in the planar image can be calculated. Further, according to the coordinate relationships among the coordinates of the rendering object, the image light source, and the virtual ground, the relative positional relationship among the rendering object, the image light source, and the virtual ground can be determined; for example, the relative positional relationship may express that the rendering object lies between the image light source and the virtual ground in the planar image, and that the distance of the rendering object from the image light source is smaller than its distance from the virtual ground.
Then, pixel shading processing is performed on the rendering objects in the foreground image according to their relative coordinate information, to obtain the foreground pixel height map corresponding to the foreground image, in which different rendering objects in the image can be distinguished by differences in color. Specifically, when the foreground image contains multiple rendering objects, the different rendering objects are recolored into different colors according to the differences in their relative coordinate information, yielding a foreground pixel height map that distinguishes the positions and shapes of the different rendering objects in the first dimension space. With continued reference to FIG. 5, the surfaces of the rendering objects are recolored differently according to the height differences among the rendering objects, the engine camera, and the virtual ground (e.g., the seat 501 and the tree 502 are recolored differently), so that the different rendering objects to be rendered are effectively distinguished in the foreground image, and the positional (or height) relationship of the different rendering objects in the first dimension space is represented by the coloring.
And finally, carrying out feature extraction processing on the foreground pixel height map to obtain a first object parameter of the rendering object. As can be seen from the foregoing description, the positions and shapes of different rendering objects are effectively distinguished by different colors in the foreground pixel height map, so that the feature extraction processing of the foreground pixel height map can more accurately extract the first object parameters of the rendering objects in the first dimension space than the feature extraction processing of the foreground image; therefore, the follow-up operation can be performed based on the more accurate first object parameters, and the accuracy and the authenticity of image rendering are further improved.
It can be seen that, by performing pixel height estimation on the foreground image, the embodiment of the present application can use pixel height to capture the relationship among the rendering object, the image light source, and the virtual ground, better preserve the uprightness of the rendering object and the contact points between the rendering object and its shadow, and is thus better suited to shadow rendering; moreover, real and accurate first object parameters of the rendering object in the first dimension space are extracted, which improves the rendering realism and compositional realism of the rendering object.
Further, the foreground pixel height map corresponding to the foreground image in the above description may be obtained by neural network prediction; that is, the embodiment of the application supports using a neural network to automatically perform pixel height estimation on the foreground image, so as to obtain the foreground pixel height map. A neural network belongs to the field of artificial intelligence and is a network or circuit composed of artificial neurons (whose design is inspired by biological neurons); the embodiment of the application does not limit the type or structure of the neural network used for pixel height estimation. Alternatively, the foreground pixel height map corresponding to the foreground image may be obtained through manual annotation by a user; specifically, the user can directly annotate or correct the shape of each rendering object in the foreground image, so as to obtain the foreground pixel height map corresponding to the foreground image.
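Since the embodiment does not limit the network type or structure, the following PyTorch sketch is only one assumed possibility: a deliberately small encoder-decoder that maps an RGB foreground image to a one-channel pixel height map.

```python
import torch
import torch.nn as nn

class PixelHeightNet(nn.Module):
    """Toy encoder-decoder mapping an RGB foreground image to a
    one-channel pixel height map; an assumed architecture, since the
    embodiment does not restrict the network used for height estimation."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.ReLU(),  # pixel heights are non-negative
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

# Usage: predict a foreground pixel height map for one 256x256 image.
net = PixelHeightNet()
foreground = torch.rand(1, 3, 256, 256)
height_map = net(foreground)  # shape (1, 1, 256, 256)
```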
S403: reconstructing the rendering object in the second dimension space according to the mapping relation between the first dimension space and the second dimension space based on the first object parameter of the rendering object, and obtaining a second object parameter of the rendering object.
In order to render the image represented in the first dimension space into a more realistic 3D image and improve the sense of composition of the rendered image, one key idea of the embodiment of the application is: based on the first object parameters extracted via pixel height estimation, acquire the 3D information of the rendering object that is highly relevant to the 3D rendering effect (such as spatial 3D position, depth, normal, and the like), so as to improve the rendering authenticity of the rendering object through this 3D information and thereby improve the sense of reality of the composition. In detail, the embodiment of the application introduces a method for mapping the rendering object in the image from the first dimension space to the second dimension space; based on this spatial mapping, the rendering object can be reconstructed in the second dimension space, so as to simulate the display effect of the rendering object as affected by illumination in the real world, and further obtain the second object parameters (i.e., the 3D information) of the rendering object reconstructed in the second dimension space.
Wherein the second dimension space corresponds to the first dimension space and is a 3D space that is more stereoscopic than the first dimension space. The mapping relationship between the first dimension space and the second dimension space describes the mapping method by which a rendering object represented in the first dimension space is mapped to the second dimension space; that is, the process of reconstructing a rendering object from one space into another space can be realized using the mapping relationship between the first dimension space and the second dimension space.
In a specific implementation, the method for mapping a rendering object in an image from a first dimension space to a second dimension space provided by the embodiment of the present application may specifically include:
(1) Setting a preset light source and the foreground pixel height map in the second dimension space according to the mapping relationship between the first dimension space and the second dimension space, so as to reconstruct the scene expressed by the image. The mapping relationship between the first dimension space and the second dimension space indicates that: the foreground pixel height map is perpendicular to the virtual ground of the second dimension space; the z-axis of the camera coordinate system established with the preset light source as the origin is perpendicular to the foreground pixel height map; the x-axis of the camera coordinate system in the second dimension space is parallel to the x-axis of the image coordinate system (i.e., the two-dimensional coordinate system) in the first dimension space; and the y-axis of the camera coordinate system in the second dimension space is parallel to the y-axis of the image coordinate system in the first dimension space. The illumination intensity of the preset light source in the second dimension space can be controlled through software, i.e., the illumination of the preset light source in the second dimension space is controllable; this makes it possible to simulate real-world light-source illumination and to give the preset light source a softer illumination effect, so that the shadow of a rendering object under the illumination of the preset light source is softer.
Illustratively, according to the above-given mapping relationship between the first dimension space and the second dimension space, a schematic view of the reconstructed 3D scene may be referred to in fig. 6. As shown in fig. 6, the foreground pixel height map 601 in the second dimension space is perpendicular to the virtual ground 602. Moreover, a camera coordinate system is introduced in the second dimension space; the camera coordinate system takes the center/optical center of the camera as its origin, represented as O' in fig. 6, and the foot point O of the camera center O' lies on the virtual ground. The x-axis and the y-axis of the camera coordinate system are parallel to the x-axis and the y-axis of the image coordinate system of the foreground pixel height map, respectively, and the z-axis of the camera coordinate system is perpendicular to the foreground pixel height map (it can be understood that the z-axis of the camera coordinate system is the normal of the foreground pixel height map). Furthermore, for convenience, three vectors are defined for the camera: 1) the vector c from the camera center O' to the upper left corner of the foreground pixel height map; 2) the rightward vector a of the foreground pixel height map; 3) the downward vector b of the foreground pixel height map.
(2) And according to the illumination of the preset light source in the second dimension space for the foreground pixel height map in the second dimension space and the first object parameters of the rendering object, mapping the rendering object to the second dimension space to obtain the second object parameters of the rendering object reconstructed in the second dimension space. The second object parameter of the rendering object in the second dimension space can be understood as a parameter for reconstructing the rendering object in the second dimension space under the illumination projection of the preset light source; the second object parameters include at least: rendering parameters of the object in the stereoscopic scene presented by the second dimension space and rendering parameters of object projection generated by the object in the second dimension space under the influence of illumination of a preset light source; these parameters may include, but are not limited to, at least one of: spatial coordinate information, color information, texture information, depth information, and the like.
That is, the second object parameters when the rendering object is reconstructed in the second dimension space include not only parameters of the rendering object itself (such as surface texture, surface color or shape of the rendering object), but also parameters of object projection generated when the rendering object is illuminated by a preset light source in the second dimension space (such as soft shadow of the object projection, where the surface texture of the object projection is determined by the surface texture of the rendering object and a material of a shadow receiver of the object projection). Therefore, the embodiment of the application adopts the default preset light source to reconstruct the 3D scene in the second dimension space, can realize providing the illumination effect for the surface material of the rendering object (or called 3D object), not only realizes the enhancement of the surface texture of the rendering object, but also can render the soft shadow projected by the more real object.
Wherein, the specific process of mapping the rendering object to the second dimension space to obtain the second object parameters of the rendering object reconstructed in the second dimension space may include: the computer device maps the pixel points constituting the rendering object in the foreground pixel height map to the second dimension space according to the illumination of the preset light source on the foreground pixel height map in the second dimension space, and obtains the object projection formed in the second dimension space by the pixel points constituting the rendering object. Further, the computer device determines the parameters of the object projection formed by the rendering object in the second dimension space according to the parameters of the object serving as the shadow receiver of the object projection in the second dimension space and the first object parameters of the rendering object. The shadow receiver in the second dimension space corresponds to an object in the image that receives the projection of the rendering object; for example, in the foreground pixel height map, when a natural light source illuminates a rendering object from its upper right, an object at the lower left of the rendering object receives the projection of the rendering object, i.e., that object acts as the shadow receiver of the rendering object. Finally, according to the object projection of the rendering object and the geometric relationship, in the second dimension space, between the preset light source and the rendering object in the foreground pixel height map, the parameters of the rendering object in the stereoscopic scene presented in the second dimension space can be calculated according to the imaging principle.
For example, with continued reference to fig. 6, when the preset light source illuminates a rendering object in the foreground pixel height map, any point P of the rendering object reconstructed in the second dimension space, together with its foot point Q, can be captured by the camera; the pixel points obtained by projecting the points P and Q through the camera center O' onto the foreground pixel height map are the points P' and Q', respectively. Considering that the y-coordinate of the pixel point Q' is smaller than the y-coordinate of the camera center O', the pixel point Q' is projected onto the virtual ground as one of the projection points constituting the object projection of the rendering object; by mapping each pixel point constituting the rendering object in the foreground pixel height map in this way, the rendering object and its object projection can be reconstructed in the second dimension space. Further, according to the principle of similar triangles, the coordinates of the point P in the camera coordinate system of the second dimension space can be calculated; an exemplary computing process may include:
Expressed by the principle of similar triangles, the relationship between a point P and its projection point P' in the foreground pixel height map can be expressed as the following equation:

$$P = O' + w\,\begin{bmatrix} a & b & c \end{bmatrix}\begin{bmatrix} x_{P'} \\ y_{P'} \\ 1 \end{bmatrix}$$

wherein, $O'$ is the matrix constructed from the coordinates of the camera center O' in the camera coordinate system; $\begin{bmatrix} a & b & c \end{bmatrix}$ is the matrix constructed from the coordinates of the vectors a, b and c in the camera coordinate system; $\begin{bmatrix} x_{P'} & y_{P'} & 1 \end{bmatrix}^{\top}$ is the matrix constructed from the coordinates of the point P' in pixel space; w is the space mapping coefficient between the first dimension space and the second dimension space; $P$ is the matrix constructed from the coordinates of the point P in the camera coordinate system.

Similarly, by the principle of similar triangles, the relationship between the y-axis coordinate of the point Q and the y-axis coordinate of its projection point Q' in the foreground pixel height map can be expressed as the following equation:

$$y_Q = y_{O'} + w\,\begin{bmatrix} y_a & y_b & y_c \end{bmatrix}\begin{bmatrix} x_{Q'} \\ y_{Q'} \\ 1 \end{bmatrix}$$

wherein $y_{O'}$ is the y-axis coordinate of the camera center O' in the camera coordinate system; $\begin{bmatrix} y_a & y_b & y_c \end{bmatrix}$ is the matrix of the y-axis coordinates of the vectors a, b and c in the camera coordinate system; $\begin{bmatrix} x_{Q'} & y_{Q'} & 1 \end{bmatrix}^{\top}$ is the matrix constructed from the coordinates of the point Q' in pixel space.

Considering that, by the definition of the pixel height representation, the foot point Q' of the point P' in the first dimension space (i.e., image space) corresponds to a point Q lying on the virtual ground plane, it is determined that $y_Q = 0$; therefore, solving the above equation yields the value of the space mapping coefficient w:

$$w = \frac{-\,y_{O'}}{y_a\,x_{Q'} + y_b\,y_{Q'} + y_c}$$

Further, the pixel height representation assumes no pitch angle, so the point P can be evaluated using the same value of w; thus, by re-projecting the point P', the point P in the second dimension space resulting from the point P' can be calculated as:

$$P = O' + w\,\begin{bmatrix} a & b & c \end{bmatrix}\begin{bmatrix} x_{P'} \\ y_{P'} \\ 1 \end{bmatrix}, \qquad y_{Q'} = y_{P'} + h$$

where h is the pixel height of the point P', i.e., the image-space offset from P' to its foot point Q' (a fixed value for a given pixel).
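Put together, the derivation above amounts to only a few lines of code. The following Python sketch is a minimal illustration under the stated no-pitch assumption; the function name and the toy values are hypothetical, not part of the embodiment:

```python
import numpy as np

def reconstruct_point(p_img, pixel_height, O, a, b, c):
    """Reconstruct the 3D point P for a pixel P' of the foreground pixel
    height map, following the similar-triangle derivation above.

    p_img: (x, y) coordinates of P' in the pixel height map
    pixel_height: pixel height h of P' (image-space offset from P' to its
        foot point Q')
    O: 3D coordinates of the camera center O'
    a, b, c: the rightward, downward and top-left-corner vectors of the
        foreground pixel height map in the camera coordinate system
    """
    x_p, y_p = p_img
    # The foot point Q' lies `pixel_height` pixels below P' in image space.
    x_q, y_q = x_p, y_p + pixel_height
    # Solve y_Q = y_O' + w * (y_a*x_Q' + y_b*y_Q' + y_c) = 0 for w.
    w = -O[1] / (a[1] * x_q + b[1] * y_q + c[1])
    # Re-project P' with the same mapping coefficient w (no-pitch assumption).
    M = np.column_stack([a, b, c])          # 3x3 matrix [a b c]
    return O + w * (M @ np.array([x_p, y_p, 1.0]))

# Usage with toy values: camera center 1.5 units above the virtual ground.
O = np.array([0.0, 1.5, 0.0])
a = np.array([1.0, 0.0, 0.0])               # image x-axis
b = np.array([0.0, -1.0, 0.0])              # image y-axis points down
c = np.array([-0.5, 0.5, 1.0])              # camera center to top-left corner
print(reconstruct_point((0.2, 0.4), 0.4, O, a, b, c))  # -> [-1.5  2.  5.]
```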
In summary, by the above-described method of mapping from the first dimension space to the second dimension space, each pixel point in the foreground pixel height map can be reconstructed to the second dimension space, thereby realizing reconstruction of the rendering object and the object projection of the rendering object in the second dimension space, and obtaining the second object parameters about the rendering object in the second dimension space.
Furthermore, it should be appreciated that when the camera's view angle (which can be understood as the user's view angle) changes, the horizon in the image also changes; a change of horizon changes the camera pitch, which violates the no-pitch assumption of the pixel height representation, resulting in geometric tilt and thus likely in soft shadow distortion. Controllability of the horizon is therefore particularly important for image rendering. In order to solve the problem of the horizon changing with the viewing angle, the embodiment of the application supports the use of a tilt-shift camera model in the second dimension space; compared with an ordinary camera, the main feature of the tilt-shift camera model is that the lens (or the lens board carrying the lens) and the back carrying the film can shift or tilt relative to each other through a flexible connector (such as a bellows), changing the perspective effect and the plane of sharp focus of the image. In this way, when the user changes the view, the vector c from the camera center O' in the second dimension space to the upper left corner of the foreground pixel height map is moved vertically in the second dimension space to align the horizon, thereby ensuring that the foreground pixel height map remains perpendicular to the virtual ground and guaranteeing correct reconstruction of the rendering object in the second dimension space.
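As a hedged illustration of this horizon alignment (the offset convention is an assumption; the embodiment only states that c moves vertically), the vertical component of the vector c can simply be shifted by the displacement of the horizon line:

```python
def align_horizon(c, horizon_offset):
    """Shift the top-left-corner vector c vertically so the pixel height
    map stays perpendicular to the virtual ground when the horizon moves.

    c: vector from the camera center to the map's top-left corner
    horizon_offset: vertical displacement of the horizon in image units
    """
    c_aligned = c.copy()
    c_aligned[1] += horizon_offset  # move only the vertical component
    return c_aligned
```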
S404: and rendering the display image according to the second object parameters of the rendering object.
After the second object parameters of the rendering object reconstructed in the second dimension space are obtained based on step S403, more accurate rendering parameters of the rendering object are available, and the display image may then be rendered based on these second object parameters.
In summary, the embodiment of the present application introduces a method for mapping the pixel height map from the first dimension space to the second dimension space, so as to simulate the picture of the rendering object in the real world (for example, when the rendering object is in an illuminated real-world scene, the rendering object has shadows, and the display effect of the shadows is jointly affected by the texture of the rendering object and the texture of the object receiving the shadows), thereby obtaining more real and accurate second object parameters of the rendering object. Therefore, when the image is rendered based on the accurate second object parameters of the rendering object, the reality of image rendering can be remarkably improved, and the image rendering effect is effectively improved.
The embodiment shown in fig. 4 mainly describes a specific process of spatial mapping in the image processing scheme provided by the embodiment of the present application; the following describes a complete flow of the image processing scheme provided in the embodiment of the present application with reference to fig. 7. Referring to fig. 7, fig. 7 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present application; the image processing method may be performed by the aforementioned computer device, and the method may include steps S701 to S706:
S701: an image to be rendered is acquired.
S702: a first object parameter of a rendered object in an image is obtained.
S703: reconstructing the rendering object in the second dimension space according to the mapping relation between the first dimension space and the second dimension space based on the first object parameter of the rendering object, and obtaining a second object parameter of the rendering object.
It should be noted that, the specific implementation process shown in steps S701-S703 may be referred to the description of the specific implementation process shown in steps S401-S403 in the embodiment shown in fig. 4, which is not repeated herein.
S704: and carrying out normal reconstruction processing on the rendering object based on the second object parameter of the rendering object to obtain a normal modeling image.
In order to make the rendered image more realistic, after the rendering object is mapped from the first dimension space to the second dimension space according to the above steps, the surface normals of the rendering object in the second dimension space can be reconstructed using a ray detection (ray tracing) technique to obtain a normal modeling image; the rendering object in the normal modeling image, and the soft shadow projected by the rendering object under the preset light source, have a concave-convex mapping (i.e., bump mapping) effect. That is, the embodiment of the application supports rendering the 3D effect of the rendering object based on the surface material and the surface normals of the rendering object, such as the reflection and refraction effects produced when light strikes the surface of the rendering object, to obtain a more realistic image rendering effect.
Wherein: (1) ray detection techniques can be divided into two categories: forward detection algorithms and backward detection algorithms. The forward detection algorithm follows the natural propagation of light: light emitted by the light source is reflected and transmitted multiple times among the objects of the scene and finally enters the human eye. The backward detection algorithm is the opposite: starting from the observer's viewpoint, it detects only the surfaces visible to the observer.
(2) The surface normal of the rendering object may refer to a vector perpendicular to the model surface of the rendering object (i.e., the 3D model). Considering that a 3D model is composed of meshes, specifically of many triangular meshes, and that one triangular mesh is a small plane, a vector perpendicular to that plane is the normal of the plane; a 3D model therefore has many surface normals. By detecting, with rays, the normals of the different meshes on the model surface, the 3D light effects produced by illumination at different positions on the model surface can be accurately determined, which facilitates rendering more realistic and stereoscopic light effects. An exemplary surface normal schematic can be seen in fig. 8; as shown in fig. 8, the 3D model includes triangular mesh 801a and triangular mesh 802a; the surface normal 801b of triangular mesh 801a is perpendicular to the triangular plane of triangular mesh 801a; similarly, the surface normal 802b of triangular mesh 802a is perpendicular to the triangular plane of triangular mesh 802a.
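As a small self-contained illustration of a per-face surface normal (not code from the embodiment), the normal of one triangular mesh face can be computed with a cross product:

```python
import numpy as np

def triangle_normal(v0, v1, v2):
    """Unit normal of the triangle (v0, v1, v2): the cross product of two
    edge vectors is perpendicular to the triangle's plane."""
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)

# Usage: a triangle lying in the xz-plane has a normal along the y-axis.
print(triangle_normal(np.array([0.0, 0.0, 0.0]),
                      np.array([1.0, 0.0, 0.0]),
                      np.array([0.0, 0.0, 1.0])))  # -> [ 0. -1.  0.]
```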
Notably, the embodiment of the present application implements ray detection based on pixel height, so that the pixel height space (i.e., the initial pixel space matrix of the 3D model relative to the camera position) naturally provides an acceleration mechanism for ray detection. Specifically, during ray detection, the intersection check between a ray and the scene can be performed in the pixel height space to determine the intersection point of the ray with the scene and further determine the surface normal; the ray-scene intersection check is performed only along the single line between the starting pixel and the ending pixel, without checking the intersection of every pixel or every reconstructed triangle, so the complexity of ray-scene intersection checking in the pixel height space is O(H) or O(W), which effectively reduces the complexity of the check. Furthermore, although ray-scene intersection tests are designed to detect visibility, ray detection can be accelerated to some extent by modifying the intersection test to detect the nearest hit pixel given the pixel origin and the ray direction in the pixel height space.
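The following Python sketch illustrates, under assumed conventions, why the check is O(W): the ray is marched across a single row of the pixel height map and each pixel is visited at most once (a toy stand-in for the nearest-hit-pixel test described above):

```python
def first_hit_along_row(height_row, x0, ray_height0, slope):
    """March a ray across one row of a pixel height map and return the
    index of the nearest pixel whose height blocks the ray, or None.

    height_row: list of pixel heights along the marching direction
    x0: starting pixel index
    ray_height0: ray height at the starting pixel
    slope: change of the ray's height per pixel step
    The loop visits each pixel once, hence O(W) complexity.
    """
    ray_height = ray_height0
    for x in range(x0, len(height_row)):
        if height_row[x] >= ray_height:
            return x  # nearest hit pixel
        ray_height += slope
    return None

# Usage: the ray starts at height 2.0 and descends; pixel 3 blocks it.
print(first_hit_along_row([0.0, 0.5, 1.0, 2.5, 0.0], 0, 2.0, -0.1))  # -> 3
```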
S705: and carrying out depth extraction processing on the rendering object based on the second object parameter of the rendering object to obtain a target depth image.
The target depth image is used to reflect the depth information between the rendering object in the second dimension space and the preset light source. Specifically, a depth image takes, as its pixel values, the depths from the camera center to the points in the scene under the camera coordinate system; it directly reflects the geometry of the model surfaces of the 3D models in the scene and the distances between different 3D objects in the 3D space (for example, 3D objects farther from the camera center appear darker in the depth image). The depth in the above definition can be simply understood as the distance between a pixel point and the camera center in the camera coordinate system. It can be seen that the embodiment of the present application can extract the target depth image of the rendering object to determine the geometry of the object surface of the rendering object from the target depth image, so as to perform the subsequent rendering of the rendering object based on that geometry.
It is noted that a depth image has a depth precision attribute; the depth precision of the depth image may be the difference between the depth value corresponding to a pixel point and the average depth value obtained by the camera, where the average depth value is obtained by averaging the depth values of a plurality of pixel points captured by the camera. The depth precision of the depth image is related to the rendering speed; for example, the higher the depth precision, the slower the image may be rendered. Based on this, the embodiment of the application supports adjusting the depth precision of the depth image during depth extraction: by lowering the depth precision as far as possible while guaranteeing image rendering quality, the rendering speed is improved.
In a specific implementation, the computer device performs depth extraction processing on the rendering object based on the second object parameters of the rendering object, and after obtaining an initial depth image, adjusts the depth precision of the initial depth image in the direction that improves the rendering speed, so as to obtain the target depth image. The target depth image satisfies two conditions: on the one hand, the image rendered based on the target depth image has a good 3D effect (i.e., the composition has a stereoscopic impression); on the other hand, a faster rendering speed is ensured.
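As one hedged example of such a precision adjustment (quantization to fewer levels is only one possible mechanism; the embodiment does not prescribe how the precision is lowered):

```python
import numpy as np

def reduce_depth_precision(depth, levels=256):
    """Quantize a float depth image to a fixed number of levels, trading
    depth precision for faster downstream processing.

    depth: HxW float array of camera-space depths
    levels: number of quantization levels (fewer levels = lower precision)
    """
    d_min, d_max = depth.min(), depth.max()
    scale = (levels - 1) / max(d_max - d_min, 1e-6)
    quantized = np.round((depth - d_min) * scale)
    return quantized / scale + d_min  # back to depth units, coarser steps

# Usage: quantize a toy 2x2 depth image to 4 levels.
print(reduce_depth_precision(np.array([[1.0, 1.2], [2.7, 4.0]]), levels=4))
# -> [[1. 1.]
#     [3. 4.]]
```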
S706: and acquiring a background pixel height map of the rendered image, and rendering a display image based on the background pixel height map, the normal modeling image and the target depth image.
After the normal modeling image and the target depth image containing the rendering object are obtained based on the steps, the embodiment of the application supports rendering and displaying the complete image by combining the normal modeling image and the target depth image of the rendering object with the background pixel height map of the image to be rendered. The background pixel height map is obtained by performing pixel height estimation on a background image corresponding to the image by using the pixel height estimation described above, and the specific implementation process of the pixel height estimation may refer to the related description of the specific implementation process of the pixel height estimation on the foreground image, which is not described herein.
To enhance the shadow rendering effect in an image, one key idea on which the embodiment of the application relies is that the rendering of soft shadows on shadow receivers (i.e., objects in the 3D space that receive shadows) is related to the relative geometric distance between the occluder (e.g., a rendering object that blocks the illumination) and the shadow receiver (i.e., their distance in the 3D space). For this reason, the embodiment of the application proposes a perception neural network (or rendering model) to instruct a neural renderer, based on the mapping from the first dimension space to the second dimension space, to render vivid soft shadows on general shadow receivers, so as to improve the rendering authenticity of the soft shadows.
Wherein, the embodiment of the application mainly analyzes, by training the rendering model, how multiple factors jointly affect the soft shadow rendering result; these factors may include but are not limited to: the geometry of the light source (e.g., circular or rectangular), information about the occluder (e.g., geometry and surface texture), information about the shadow receiver (e.g., geometry and surface texture), the spatial relationship between the occluder and the shadow receiver, and the like. That is, the rendering model may be used to indicate the degree to which the geometry of the preset light source, the information about the occluder, the information about the shadow receiver, and the spatial relationship between the occluder and the shadow receiver jointly affect the rendering of the soft shadow; in this way, softer shadows can be obtained when rendering the rendering object with the trained rendering model.
In more detail, the rendering model provided by the embodiment of the application may include N perception buffer channels, where N is an integer greater than zero. The perception buffer channels correspond to several maps, including but not limited to: the foreground pixel height map of the image and the gradient of the background pixel height map of the image; hard shadows cast from the optical center of the preset light source; sparse hard shadow maps; a relative distance map between the occluder and the shadow receiver (e.g., a distance map represented in pixel height space); and the like. That is, each of the N perception buffer channels correspondingly processes one map; in the embodiment of the present application, the perception buffer channels corresponding to the above maps may be summarized as: a pixel height buffer channel, a hard shadow buffer channel, and a relative distance buffer channel. Wherein:
(1) The pixel height buffer channel is used for extracting features from the foreground pixel height map of the image and the gradient of the background pixel height map of the image, so as to extract the geometric figures (or geometric shapes) of the rendering objects in the foreground and background pixel height maps. That is, considering that the foreground pixel height map (also referred to as the cutout pixel height map) and the background pixel height map of an image describe the geometry of the foreground and the background, respectively, perception buffer channels may be employed to extract features from these two maps to determine the geometry of objects in the foreground and the background. It should be noted that, considering that the gradient map of the background pixel height map captures the surface direction in a manner similar to a normal map, the embodiment of the application supports using the gradient map of the background pixel height map as the background input and making it shift-invariant, so that the background pixels can be colored continuously according to the position of the rendering object, thereby making the rendering effect more natural and smooth.
(2) The hard shadow buffer channel is used for extracting features from the hard shadow images of the image so as to extract the outer boundary of the soft shadow in the image. By introducing the hard shadow buffer channel into the perception neural network, the network can be guided to learn the geometry of the shadow receiver from the sparse hard shadow images, which helps the rendering rely on the accurate geometry of the shadow receiver and improves the image rendering quality. It is worth noting that one important characteristic of the hard shadow buffer channel is that the sparse hard shadow images describe the outer boundary of the soft shadow, and the overlapping region of the sparse hard shadows is also a hint of the darker regions inside the soft shadow; therefore, the boundary between hard and soft shadow regions can be accurately extracted through the hard shadow buffer channel, achieving a more realistic shadow rendering effect (a sketch of constructing sparse hard shadow maps is given after this list). The above hard shadow and soft shadow are distinguished according to the illumination intensity within the shadow: a hard shadow is a shadow in which there is no light intensity variation within the shadow region and the shadow region has an absolutely sharp boundary; conversely, a soft shadow is a shadow in which the light intensity varies within the shadow region and the shadow region does not have an absolutely sharp boundary.
(3) The relative distance buffer channel is used for extracting features from the relative distance map so as to extract the softness degree of the illumination shadow of the rendering object; the relative distance map describes the relative distances between the occluders and the shadow receivers in the second dimension space. In other words, the relative distance map in pixel height space defines the relative spatial distance between the occluder and the shadow receiver; the longer the relative spatial distance between the occluder and the shadow receiver, the softer the shadow of the rendering object typically is. Providing a relative distance buffer channel in the perception neural network helps direct the network's attention to shadow areas with high contrast. The relative spatial distance between the occluder and the shadow receiver in the pixel height space can be defined as:
$$D(i, j) = \left\lVert\, p_i - p_j \,\right\rVert$$

where i and j can be understood as the occluder and the shadow receiver in the pixel height space; $p_i$ represents the coordinates of the occluder, and $p_j$ represents the coordinates of the shadow receiver; $D(i, j)$ represents the relative spatial distance between the occluder and the shadow receiver; $\lVert\cdot\rVert$ is a norm operator.
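As a hedged sketch of how sparse hard shadow maps might be assembled for the hard shadow buffer channel (the render_hard_shadow callable is assumed to be provided by the renderer and is not defined by the embodiment; all names are hypothetical):

```python
import numpy as np

def sparse_hard_shadow_stack(render_hard_shadow, light_samples):
    """Collect sparse hard shadow masks for a few positions sampled on an
    area light; their overlap hints at the darker core of the soft shadow.

    render_hard_shadow: callable mapping a light position to an HxW binary
        hard shadow mask (a placeholder; assumed to exist)
    light_samples: sequence of sampled light positions
    """
    masks = [render_hard_shadow(pos) for pos in light_samples]
    stack = np.stack(masks, axis=0)  # K x H x W sparse hard shadow maps
    overlap = stack.mean(axis=0)     # 1.0 where all hard shadows agree
    return stack, overlap

# Toy usage: a fake renderer that shifts a one-pixel-wide shadow per sample.
def fake_hard_shadow(pos):
    mask = np.zeros((4, 4))
    mask[:, 1 + pos] = 1.0
    return mask

stack, overlap = sparse_hard_shadow_stack(fake_hard_shadow, [0, 1])
```

The averaged overlap is exactly the hint mentioned above: regions where all sampled hard shadows agree mark the darker core of the soft shadow.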
Based on the above description of each perception buffer channel in the perception neural network provided by the embodiment of the present application, the perception neural network according to the embodiment of the present application may be expressed as:
f=MLP(Cat(Conv1(para),Conv2(para),...))
wherein Conv1 and Conv2 are different perception buffer channels, concretely implemented as channel convolutions; they are mainly used to extract 3D channel features, e.g., Conv1(para) extracts features from para (which may be embodied as the aforementioned maps). Here, para denotes the mass of input parameters, including but not limited to object parameters such as shadow, light, position, and color. Cat is a join operation used to splice (concatenate) the features extracted from each perception buffer channel. The MLP is the perception neural network provided by the embodiment of the application, which jointly models the rendering influence of the mass of parameters on the soft shadow.
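A hedged PyTorch sketch of this structure follows; the layer sizes, channel counts, and the use of 1x1 convolutions as the per-pixel MLP are assumptions for illustration, not the architecture prescribed by the embodiment:

```python
import torch
import torch.nn as nn

class PerceptionRenderer(nn.Module):
    """Toy version of f = MLP(Cat(Conv1(para), Conv2(para), ...)): each
    perception buffer channel is a small convolution over its map, the
    channel features are concatenated, and an MLP-style head fuses them
    into a soft shadow prediction."""

    def __init__(self, n_channels=3, feat=8):
        super().__init__()
        # One channel convolution per perception buffer channel
        # (e.g., pixel height, hard shadow, relative distance).
        self.buffers = nn.ModuleList(
            nn.Conv2d(1, feat, 3, padding=1) for _ in range(n_channels)
        )
        # 1x1 convolutions act as a per-pixel MLP over the fused features.
        self.mlp = nn.Sequential(
            nn.Conv2d(n_channels * feat, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),  # shadow intensity in [0, 1]
        )

    def forward(self, maps):
        # maps: list of (B, 1, H, W) tensors, one per buffer channel.
        features = [conv(m) for conv, m in zip(self.buffers, maps)]
        fused = torch.cat(features, dim=1)  # the Cat(...) step
        return self.mlp(fused)

# Usage: three 64x64 maps (pixel height, hard shadow, relative distance).
model = PerceptionRenderer()
maps = [torch.rand(1, 1, 64, 64) for _ in range(3)]
soft_shadow = model(maps)  # shape (1, 1, 64, 64)
```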
Based on the above description of the perception neural network (i.e., the rendering model), after the background pixel height map, the normal modeling image, and the target depth image corresponding to the image are obtained, the process of implementing image rendering with the perception neural network in the embodiment of the application may be summarized simply as: obtain the trained rendering model, and use it to render the background pixel height map, the normal modeling image, and the target depth image so as to render the display image. Further, this rendering process may be refined as follows. First, the map corresponding to each of the N perception buffer channels (e.g., the foreground and background pixel height maps corresponding to the pixel height buffer channel, the hard shadow image corresponding to the hard shadow buffer channel, the relative distance map corresponding to the relative distance buffer channel, and so on) is extracted from the background pixel height map, the normal modeling image, and the target depth image. Then, according to the correspondence between perception buffer channels and maps, feature extraction is performed on each corresponding map by the N perception buffer channels included in the rendering model, obtaining the channel features corresponding to the N perception buffer channels; that is, each perception buffer channel performs feature extraction on its input map to obtain the corresponding channel features. Next, the channel features corresponding to the N perception buffer channels are connected to obtain a splicing feature. Finally, the display image is rendered based on the splicing feature; the rendered image can present the soft shadow effect of the object projection of the rendering object under the preset light source.
In summary, the embodiment of the present application considers that, compared with the first dimension space, the second dimension space is a stereoscopic scene with a preset light source; therefore, the embodiment of the application supports mapping the rendering object in the image from the first dimension space to the second dimension space, so that the picture of the rendering object in the real world is simulated by reconstructing the rendering object in the stereoscopic second dimension space, thereby obtaining more real and accurate second object parameters of the rendering object. Hence, when the image is rendered based on the accurate second object parameters of the rendering object, the quality of the image can be remarkably improved (for example, the surface of the rendering object in the image shows a more realistic texture effect under illumination, and the rendering object has softer and more realistic shadows), which improves the fidelity of image rendering and effectively improves the image rendering effect. In addition, the embodiment of the application also supports recording performance before and after image texture enhancement, so as to ensure image rendering quality while avoiding additional performance consumption that would affect the overall performance of the computer.
The foregoing details the method of the present application; to facilitate better practice of the method of the present application, an apparatus of the present application is provided below.
Fig. 9 shows a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application, which may be a computer program (including program code) running in a computer device; the image processing device may be used to perform some or all of the steps in the method embodiments shown in fig. 4 and 7; the device comprises the following units:
an acquisition unit 901 for acquiring an image to be rendered, the image being an image represented in a first dimension space; the image comprises a rendering object to be rendered;
a processing unit 902, configured to obtain a first object parameter of a rendering object in an image; the first object parameter is a parameter for rendering the rendered object in the first dimension space;
the processing unit 902 is further configured to reconstruct the rendered object in the second dimension space according to the mapping relationship between the first dimension space and the second dimension space based on the first object parameter of the rendered object, to obtain a second object parameter of the rendered object; the second dimension space is a three-dimensional scene with a preset light source; the second object parameter is a parameter for reconstructing the rendering object in a second dimension space under illumination projection of a preset light source;
The processing unit 902 is further configured to render the display image according to the second object parameter of the rendering object.
In one implementation, the processing unit 902 is configured to, when acquiring a first object parameter of a rendering object in an image, specifically:
acquiring a foreground image corresponding to the image; the foreground image comprises a rendering object;
performing pixel height estimation on the foreground image to obtain the first object parameters of the rendering object in the image; the first object parameters include at least: the texture map and the planar geometric figure of the rendering object in the image.
In one implementation, the processing unit 902 is configured to perform pixel height estimation on a foreground image, and when obtaining a first object parameter of a rendering object in the image, the processing unit is specifically configured to:
acquiring relative coordinate information of a rendering object in a foreground image; the relative coordinate information is used to indicate a relative height relationship between the rendered object in the first dimension space, the image light source in the foreground image, and the virtual ground in the foreground image;
according to the relative coordinate information of the rendering object, performing pixel coloring treatment on the rendering object in the foreground image to obtain a foreground pixel height map corresponding to the foreground image;
And performing feature extraction processing on the foreground pixel height map to obtain a first object parameter of the rendering object.
In one implementation, the processing unit 902 is configured to reconstruct the rendered object in the second dimension space according to the mapping relationship between the first dimension space and the second dimension space based on the first object parameter of the rendered object, so as to obtain the second object parameter of the rendered object, where the processing unit is specifically configured to:
setting a foreground pixel height map corresponding to a preset light source and an image in a second dimension space according to the mapping relation between the first dimension space and the second dimension space; wherein, the mapping relation between the first dimension space and the second dimension space indicates: the foreground pixel height map is perpendicular to the virtual ground of the second dimension space in the second dimension space; and the z-axis of the camera coordinate system established by taking the preset light source as the origin of the coordinate system is perpendicular to the foreground pixel height map, the x-axis of the camera coordinate system in the second dimension space is parallel to the x-axis of the image coordinate system in the first dimension space, and the y-axis of the camera coordinate system in the second dimension space is parallel to the y-axis of the image coordinate system in the first dimension space;
according to illumination of a preset light source for a foreground pixel height map in a second dimension space and first object parameters of a rendering object, mapping the rendering object to the second dimension space to obtain second object parameters of the rendering object reconstructed in the second dimension space;
Wherein the second object parameters include at least: and rendering parameters of the object in the stereoscopic scene represented by the second dimension space and parameters of object projection generated by the influence of illumination of a preset light source on the object in the second dimension space.
In one implementation manner, the processing unit 902 is configured to map the rendering object to the second dimension space according to the illumination of the preset light source for the foreground pixel height map in the second dimension space and the first object parameter of the rendering object, so as to obtain the second object parameter of the rendering object reconstructed in the second dimension space, where the processing unit is specifically configured to:
according to illumination of a preset light source for a foreground pixel height map in a second dimension space, mapping pixel points forming a rendering object in the foreground pixel height map to the second dimension space to obtain object projection formed by the pixel points forming the rendering object in the second dimension space;
determining the parameters of the object projection formed by the rendering object in the second dimension space according to the parameters of the object serving as the shadow receiver of the object projection in the second dimension space and the first object parameters of the rendering object; the shadow receiver in the second dimension space corresponds to an object in the image that receives the projection of the rendering object;
according to the object projection of the rendering object and the geometric relationship, in the second dimension space, between the preset light source and the rendering object in the foreground pixel height map, calculating the parameters of the rendering object in the stereoscopic scene presented in the second dimension space according to the imaging principle; wherein the second object parameters include at least one of: spatial coordinate information, color information, texture information, and depth information.
In one implementation, the processing unit 902 is configured to, when rendering the display image according to the second object parameter of the rendering object, specifically:
performing normal reconstruction processing on the rendering object based on a second object parameter of the rendering object to obtain a normal modeling image; the rendering object in the normal modeling image and the soft shadow projected by the rendering object under the preset light source have a concave-convex mapping effect; the method comprises the steps of,
performing depth extraction processing on the rendering object based on a second object parameter of the rendering object to obtain a target depth image; the target depth image is used for reflecting depth information between the rendering object and the preset light source;
and acquiring a background pixel height map of the image, and rendering a display image based on the background pixel height map, the normal modeling image and the target depth image.
In one implementation manner, the processing unit 902 is configured to perform depth extraction processing on the rendering object based on the second object parameter of the rendering object, and when obtaining the target depth image, specifically is configured to:
performing depth extraction processing on the rendering object based on a second object parameter of the rendering object to obtain an initial depth image;
and adjusting the depth precision of the initial depth image according to the direction for improving the rendering speed to obtain the target depth image.
In one implementation, the processing unit 902 is configured to render a display image based on the background pixel height map, the normal modeling image, and the target depth image, and is specifically configured to:
acquiring a rendering model; the rendering model is used for indicating the degree to which the geometry of the preset light source, the relevant information of the occluder, the relevant information of the shadow receiver, and the spatial relationship between the occluder and the shadow receiver jointly affect the rendering of the soft shadow;
and rendering the background pixel height map, the normal modeling image and the target depth image by using a rendering model to render a display image.
In one implementation, the rendering model includes N perceptual buffer channels, N being an integer greater than zero; one perception buffer channel corresponds to one map; the processing unit 902 is configured to render the background pixel height map, the normal modeling image, and the target depth image by using the rendering model, so as to render the display image, and specifically configured to:
Extracting a mapping corresponding to each perception buffer channel in N perception buffer channels from a background pixel height map, a normal modeling image and a target depth image;
according to the corresponding relation between the perception buffer channels and the maps, respectively extracting the characteristics of the corresponding maps by utilizing N perception buffer channels included in the rendering model to obtain channel characteristics corresponding to the N perception buffer channels;
connecting the channel characteristics corresponding to the N perception buffer channels to obtain splicing characteristics;
and rendering a display image based on the splicing characteristic, wherein the rendered image can present a soft shadow effect of object projection of the rendering object under a preset light source.
In one implementation, the N sense buffer channels include: a pixel height buffer channel, a hard shadow buffer channel and a relative distance buffer channel; wherein,
the pixel height buffer channel is used for extracting characteristics of gradients of a foreground pixel height map of the image and a background pixel height map of the image so as to extract geometric figures of a rendering object in the foreground pixel height map and the background pixel height map;
the hard shadow buffer channel is used for extracting the characteristics of the hard shadow image of the image so as to extract the external boundary of the soft shadow in the image;
the relative distance buffer channel is used for extracting features from the relative distance map so as to extract the softness degree of the illumination shadow of the rendering object; the relative distance map refers to the relative distances between the occluders and the shadow receivers in the second dimension space.
According to an embodiment of the present application, the units in the image processing apparatus shown in fig. 9 may be respectively or wholly combined into one or several other units, or some unit(s) thereof may be further split into multiple functionally smaller units, which can achieve the same operation without affecting the technical effects of the embodiment of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be implemented by multiple units, or the functions of multiple units may be implemented by one unit. In other embodiments of the present application, the image processing apparatus may also include other units; in practical applications, these functions may also be implemented with the assistance of other units, and may be implemented through the cooperation of multiple units. According to another embodiment of the present application, the image processing apparatus as shown in fig. 9 may be constructed, and the image processing method of the embodiment of the present application may be implemented, by running a computer program (including program code) capable of executing the steps involved in the methods shown in fig. 4 and 7 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU), a random access memory (RAM), and a read-only memory (ROM), as well as storage elements. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and run in the above-described computing device through the computer-readable recording medium.
In the embodiment of the application, when the image to be rendered is required to be rendered and displayed, the first object parameter of the rendering object to be rendered in the image can be obtained; the first object parameter may be understood simply as representing the parameters required in rendering the rendered object in the first dimension space of the image. Then, the second object parameter of the rendering object can be obtained based on the first object parameter of the rendering object and according to the mapping relation between the first dimension space and the second dimension space. In this way, more realistic images may be rendered according to the second object parameters of the rendering object. It can be seen that the embodiments of the present application propose a new image rendering scheme that supports mapping of rendering objects comprised by an image represented by a first dimension space from the first dimension space to a second dimension space, i.e. reconstructing rendering objects in the image in the second dimension space. Considering that the second dimension space is a stereoscopic scene with a preset light source compared to the first dimension space; then, by reconstructing the rendering object in the stereoscopic second dimension space, it is possible to simulate a picture in which the rendering object is in the real world (e.g., when the rendering object is in the illuminated real world, the rendering object has shadows, and the display effect of the shadows is affected by the texture of the rendering object and the texture of the shadow receiving object of the shadows), thereby obtaining a more real and accurate second object parameter with respect to the rendering object. Therefore, when the image is rendered based on the accurate second object parameters of the rendering object, the quality of the image (such as that the surface of the rendering object in the image has a more realistic texture effect under illumination, the rendering object has softer and more realistic shadows, and the like) can be remarkably improved, so that the fidelity of image rendering is improved, and the image rendering effect is effectively improved.
Fig. 10 is a schematic diagram of a computer device according to an exemplary embodiment of the present application. Referring to fig. 10, the computer device includes a processor 1001, a communication interface 1002, and a computer-readable storage medium 1003, which may be connected by a bus or in other ways. The communication interface 1002 is used for receiving and transmitting data. The computer-readable storage medium 1003 may be stored in a memory of the computer device; the computer-readable storage medium 1003 stores a computer program comprising program instructions, and the processor 1001 is configured to execute the program instructions stored by the computer-readable storage medium 1003. The processor 1001, or CPU (Central Processing Unit), is the computing core and control core of the computer device; it is adapted to implement one or more instructions, and in particular to load and execute one or more instructions so as to implement the corresponding method flow or corresponding function.
The embodiment of the application also provides a computer-readable storage medium (memory), which is a memory device in the computer device used for storing programs and data. It is understood that the computer-readable storage medium here may include both a built-in storage medium of the computer device and an extended storage medium supported by the computer device. The computer-readable storage medium provides storage space that stores the processing system of the computer device. Also stored in this storage space are one or more instructions, which may be one or more computer programs (including program code), adapted to be loaded and executed by the processor 1001. Note that the computer-readable storage medium can be a high-speed RAM memory or a non-volatile memory, such as at least one magnetic disk memory; alternatively, it may be at least one computer-readable storage medium located remotely from the aforementioned processor.
In one embodiment, the computer-readable storage medium has one or more instructions stored therein; loading and executing by the processor 1001 one or more instructions stored in a computer-readable storage medium to implement the corresponding steps in the image processing method embodiments described above; in particular implementations, one or more instructions in a computer-readable storage medium are loaded by the processor 1001 and perform the steps of:
acquiring an image to be rendered, wherein the image refers to an image represented in a first dimension space; the image comprises a rendering object to be rendered;
acquiring a first object parameter of a rendering object in an image; the first object parameter is a parameter for rendering the rendered object in the first dimension space;
reconstructing the rendering object in the second dimension space according to the mapping relation between the first dimension space and the second dimension space based on the first object parameter of the rendering object to obtain a second object parameter of the rendering object; the second dimension space is a three-dimensional scene with a preset light source; the second object parameter is a parameter for reconstructing the rendering object in a second dimension space under illumination projection of a preset light source;
and rendering the display image according to the second object parameters of the rendering object.
In one implementation, one or more instructions in a computer-readable storage medium are loaded by the processor 1001 and, when executed, perform the steps of:
acquiring a foreground image corresponding to the image; the foreground image comprises a rendering object;
performing pixel height estimation on the foreground image to obtain the first object parameters of the rendering object in the image; the first object parameters include at least: the texture map and the planar geometric figure of the rendering object in the image.
In one implementation, one or more instructions in the computer-readable storage medium are loaded by the processor 1001 and when performing pixel height estimation on the foreground image, the following steps are specifically performed to obtain a first object parameter of the rendered object in the image:
acquiring relative coordinate information of the rendering object in the foreground image; the relative coordinate information is used to indicate the relative height relationship, in the first dimension space, between the rendering object, the image light source in the foreground image, and the virtual ground in the foreground image;
performing pixel coloring processing on the rendering object in the foreground image according to the relative coordinate information of the rendering object, to obtain a foreground pixel height map corresponding to the foreground image;
and performing feature extraction processing on the foreground pixel height map to obtain the first object parameter of the rendering object.
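As a concrete illustration of what a foreground pixel height map can encode, the sketch below assigns each pixel of the object mask its vertical pixel distance to the object's ground-contact row in the same column. This is an assumed, simplified reading of the pixel coloring processing; the actual estimation in this application is driven by the relative coordinate information.

import numpy as np

def foreground_pixel_height_map(mask: np.ndarray) -> np.ndarray:
    """For each foreground pixel, store its vertical distance (in pixels)
    to the lowest foreground pixel of the same column, taken here as the
    point where the object touches the virtual ground.

    mask: (H, W) boolean array marking pixels of the rendering object.
    """
    h, w = mask.shape
    height_map = np.zeros((h, w), dtype=np.float32)
    rows = np.arange(h)
    for col in range(w):
        ys = rows[mask[:, col]]
        if ys.size == 0:
            continue
        contact_row = ys.max()            # lowest object pixel = ground contact
        height_map[ys, col] = contact_row - ys
    return height_map

# Example: a vertical bar touching the ground at row 4
mask = np.zeros((5, 3), dtype=bool)
mask[1:5, 1] = True
print(foreground_pixel_height_map(mask)[:, 1])  # [0. 3. 2. 1. 0.]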
In one implementation, when reconstructing the rendering object in the second dimension space according to the mapping relationship between the first dimension space and the second dimension space based on the first object parameter of the rendering object, to obtain the second object parameter of the rendering object, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to specifically perform the following steps:
setting the preset light source and the foreground pixel height map corresponding to the image in the second dimension space according to the mapping relationship between the first dimension space and the second dimension space; wherein the mapping relationship between the first dimension space and the second dimension space indicates that: the foreground pixel height map is perpendicular to the virtual ground of the second dimension space; the z-axis of the camera coordinate system established with the preset light source as the origin is perpendicular to the foreground pixel height map; the x-axis of the camera coordinate system in the second dimension space is parallel to the x-axis of the image coordinate system in the first dimension space; and the y-axis of the camera coordinate system in the second dimension space is parallel to the y-axis of the image coordinate system in the first dimension space;
mapping the rendering object to the second dimension space according to the illumination of the preset light source on the foreground pixel height map in the second dimension space and the first object parameter of the rendering object, to obtain the second object parameter of the rendering object reconstructed in the second dimension space;
wherein the second object parameters include at least: parameters of the rendering object in the stereoscopic scene presented by the second dimension space, and parameters of the object projection generated by the rendering object in the second dimension space under the illumination of the preset light source.
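A minimal sketch of the stated geometric arrangement follows: the preset light source is the origin of the camera coordinate system, the z-axis is perpendicular to the height-map plane, and the x- and y-axes remain parallel to the image axes. The plane distance d and the pixel pitch s are assumed free parameters for illustration; this application does not fix their values.

import numpy as np

def image_to_camera(u: float, v: float, d: float = 1.0, s: float = 0.01) -> np.ndarray:
    """Map an image pixel (u, v) onto the height-map plane, expressed in the
    camera coordinate system whose origin is the preset light source.

    d: distance from the light source to the height-map plane along +z (assumed).
    s: size of one pixel in camera units (assumed).
    """
    # x and y stay parallel to the image axes; the whole plane lies at z = d,
    # so the height map is perpendicular to the camera z-axis, as stated above.
    return np.array([u * s, v * s, d], dtype=np.float32)

print(image_to_camera(120, 80))  # a pixel lifted into the 3D scene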
In one implementation, when mapping the rendering object to the second dimension space according to the illumination of the preset light source on the foreground pixel height map in the second dimension space and the first object parameter of the rendering object, to obtain the second object parameter of the rendering object reconstructed in the second dimension space, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to specifically perform the following steps:
mapping the pixel points that form the rendering object in the foreground pixel height map to the second dimension space according to the illumination of the preset light source on the foreground pixel height map, to obtain the object projection formed in the second dimension space by those pixel points;
determining parameters of the object projection formed by the rendering object in the second dimension space according to the parameters of the shadow receiver of the object projection in the second dimension space and the first object parameter of the rendering object; the shadow receiver in the second dimension space corresponds to the object in the image that receives the projection of the rendering object;
and calculating, according to the object projection of the rendering object, the geometric relationship in the second dimension space between the preset light source and the rendering object in the foreground pixel height map, and an imaging principle, the parameters of the rendering object in the stereoscopic scene presented in the second dimension space; wherein the second object parameters include at least one of: spatial coordinate information, color information, texture information, and depth information.
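This application does not spell out the projection formula. One standard construction consistent with an imaging principle, shown here purely as an assumed illustration, is the similar-triangles relation between a point light, an occluding pixel at height h above its ground point, and the ground plane:

import numpy as np

def hard_shadow_x(x_g: np.ndarray, h: np.ndarray, x_l: float, H: float) -> np.ndarray:
    """Horizontal ground position of the shadow cast by pixels whose ground
    x-coordinate is x_g and whose pixel height is h, for a point light at
    ground position x_l with height H (all in image-space units).
    The ray from the light through the occluding point hits the ground at
    parameter t = H / (H - h); requires h < H.
    """
    t = H / (H - h)
    return x_l + (x_g - x_l) * t

# A pixel at half the light's height, 10 units out, casts its shadow at 20
print(hard_shadow_x(np.array([10.0]), np.array([5.0]), x_l=0.0, H=10.0))  # [20.]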
In one implementation, when rendering and displaying the image according to the second object parameter of the rendering object, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to specifically perform the following steps:
performing normal reconstruction processing on the rendering object based on the second object parameter of the rendering object, to obtain a normal modeling image; in the normal modeling image, the rendering object and the soft shadow it projects under the preset light source exhibit a bump-mapped (concave-convex) effect; and,
performing depth extraction processing on the rendering object based on the second object parameter of the rendering object, to obtain a target depth image; the target depth image reflects the depth information between the rendering object and the preset light source;
and acquiring a background pixel height map of the image, and rendering and displaying the image based on the background pixel height map, the normal modeling image, and the target depth image.
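The normal reconstruction operator is likewise not fixed here. One common way to obtain a bump-mapped appearance from a height or depth map, offered only as an assumed stand-in, derives per-pixel normals from the local gradient:

import numpy as np

def normals_from_height(height: np.ndarray) -> np.ndarray:
    """Per-pixel unit normals from a (H, W) height map: the steeper the
    local gradient, the more the normal tilts, which is what produces a
    bump-mapped (concave-convex) appearance under illumination."""
    dy, dx = np.gradient(height.astype(np.float32))
    n = np.stack([-dx, -dy, np.ones_like(height, dtype=np.float32)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

normals = normals_from_height(np.random.rand(64, 64))
print(normals.shape)  # (64, 64, 3)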
In one implementation, when performing depth extraction processing on the rendering object based on the second object parameter of the rendering object to obtain the target depth image, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to specifically perform the following steps:
performing depth extraction processing on the rendering object based on a second object parameter of the rendering object to obtain an initial depth image;
and adjusting the depth precision of the initial depth image in the direction of improving the rendering speed, to obtain the target depth image.
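The phrase "in the direction of improving the rendering speed" leaves the precision adjustment abstract. One plausible realization, assumed for illustration, quantizes the depth values to a coarser grid so that downstream stages handle fewer distinct values:

import numpy as np

def quantize_depth(depth: np.ndarray, levels: int = 64) -> np.ndarray:
    """Reduce depth precision to `levels` discrete values; coarser depth
    is cheaper to process downstream, at the cost of some fidelity."""
    d_min, d_max = float(depth.min()), float(depth.max())
    if d_max == d_min:
        return depth.copy()
    q = np.round((depth - d_min) / (d_max - d_min) * (levels - 1))
    return q / (levels - 1) * (d_max - d_min) + d_min

coarse = quantize_depth(np.random.rand(4, 4), levels=8)
print(np.unique(coarse).size <= 8)  # True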
In one implementation, when rendering and displaying the image based on the background pixel height map, the normal modeling image, and the target depth image, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to specifically perform the following steps:
acquiring a rendering model; the rendering model indicates how the geometry of the preset light source, information about the occluder, information about the shadow receiver, and the spatial relationship between the occluder and the shadow receiver jointly influence the rendering of the soft shadow;
and rendering the background pixel height map, the normal modeling image, and the target depth image by using the rendering model, so as to render and display the image.
In one implementation, the rendering model includes N perception buffer channels, N being an integer greater than zero; one perception buffer channel corresponds to one map. When rendering the background pixel height map, the normal modeling image, and the target depth image by using the rendering model so as to render and display the image, the one or more instructions in the computer-readable storage medium are loaded by the processor 1001 to perform the following steps:
extracting, from the background pixel height map, the normal modeling image, and the target depth image, the map corresponding to each of the N perception buffer channels;
performing feature extraction on the corresponding maps, respectively, by using the N perception buffer channels included in the rendering model and according to the correspondence between perception buffer channels and maps, to obtain the channel features corresponding to the N perception buffer channels;
concatenating the channel features corresponding to the N perception buffer channels to obtain a spliced feature;
and rendering and displaying the image based on the spliced feature, where the rendered image can present the soft shadow effect of the object projection of the rendering object under the preset light source.
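As an assumed sketch of such a rendering model (layer sizes and the use of convolutions are illustrative choices, not taken from this application), each perception buffer channel can be given its own feature extractor whose outputs are concatenated before a rendering head:

import torch
import torch.nn as nn

class PerceptionBufferRenderer(nn.Module):
    """Assumed sketch: one small feature extractor per perception buffer
    channel, channel features concatenated (spliced), then a head that
    renders the final image."""
    def __init__(self, n_buffers: int = 3, feat: int = 16):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
            for _ in range(n_buffers)
        )
        self.head = nn.Conv2d(n_buffers * feat, 3, 3, padding=1)

    def forward(self, buffers):  # buffers: list of (B, 1, H, W) maps
        feats = [branch(b) for branch, b in zip(self.branches, buffers)]
        return self.head(torch.cat(feats, dim=1))  # concatenate channel features

model = PerceptionBufferRenderer()
maps = [torch.rand(1, 1, 32, 32) for _ in range(3)]
print(model(maps).shape)  # torch.Size([1, 3, 32, 32])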
In one implementation, the N perception buffer channels include: a pixel height buffer channel, a hard shadow buffer channel, and a relative distance buffer channel; wherein,
the pixel height buffer channel is used to extract features from the gradients of the foreground pixel height map of the image and the background pixel height map of the image, so as to extract the geometry of the rendering object in the two height maps;
the hard shadow buffer channel is used to extract features from the hard shadow image of the image, so as to extract the outer boundary of the soft shadow in the image;
the relative distance buffer channel is used to extract features from the relative distance map, so as to extract the softness of the illumination shadow of the rendering object; the relative distance map indicates the relative distance between the occluder and the shadow receiver in the second dimension space.
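Tying the three channels to concrete inputs, the sketch below assembles the three buffer maps; the hard-shadow map is assumed to come from a projection step such as the one sketched earlier, and the relative-distance normalization is an assumed convention:

import numpy as np

def build_buffers(fg_height: np.ndarray, bg_height: np.ndarray,
                  hard_shadow: np.ndarray, occ_receiver_dist: np.ndarray):
    """Assemble the three perception buffers described above.
    All inputs are (H, W) float arrays."""
    # Pixel height buffer: gradients of both height maps expose the
    # geometry encoded in them.
    g_fg = np.stack(np.gradient(fg_height), axis=-1)
    g_bg = np.stack(np.gradient(bg_height), axis=-1)
    height_buffer = np.concatenate([g_fg, g_bg], axis=-1)
    # Hard shadow buffer: the binary hard-shadow image bounds the soft shadow.
    shadow_buffer = (hard_shadow > 0).astype(np.float32)
    # Relative distance buffer: occluder-receiver distance, normalized so
    # that larger values mean softer shadow edges (assumed convention).
    dist_buffer = occ_receiver_dist / (occ_receiver_dist.max() + 1e-8)
    return height_buffer, shadow_buffer, dist_buffer

h, w = 32, 32
bufs = build_buffers(np.random.rand(h, w), np.random.rand(h, w),
                     np.random.rand(h, w), np.random.rand(h, w))
print([b.shape for b in bufs])  # [(32, 32, 4), (32, 32), (32, 32)]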
Based on the same inventive concept, the principles and beneficial effects of the computer device provided in this embodiment of the application are similar to those of the image processing method described above; for details, reference may be made to the description of the method, which is not repeated here for brevity.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the above-described image processing method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the above embodiments, the methods may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The foregoing description is merely an embodiment of the present application and does not limit its scope; any variation or substitution readily conceivable by a person skilled in the art within the scope disclosed herein falls within the protection scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (14)

1. An image processing method, comprising:
acquiring an image to be rendered, wherein the image refers to an image represented in a first dimension space; the image comprises a rendering object to be rendered;
acquiring a first object parameter of the rendering object in the image; the first object parameter is a parameter for rendering the rendering object in the first dimension space;
reconstructing the rendering object in a second dimension space according to a mapping relation between the first dimension space and the second dimension space based on a first object parameter of the rendering object to obtain a second object parameter of the rendering object; the second dimension space is a three-dimensional scene with a preset light source; the second object parameter is a parameter for reconstructing the rendering object in the second dimension space under illumination projection of the preset light source;
and rendering and displaying the image according to the second object parameter of the rendering object.
2. The method of claim 1, wherein the obtaining a first object parameter of the rendered object in the image comprises:
acquiring a foreground image corresponding to the image; the foreground image comprises the rendering object;
performing pixel height estimation on the foreground image to obtain the first object parameter of the rendering object in the image; the first object parameters include at least: the texture map and the planar geometry of the rendering object in the image.
3. The method of claim 2, wherein the pixel height estimation of the foreground image results in a first object parameter of the rendered object in the image, comprising:
acquiring relative coordinate information of the rendering object in the foreground image; the relative coordinate information is used for indicating a relative height relation among the rendering object, an image light source in the foreground image and a virtual ground in the foreground image in the first dimension space;
according to the relative coordinate information of the rendering object, performing pixel coloring processing on the rendering object in the foreground image to obtain a foreground pixel height map corresponding to the foreground image;
and performing feature extraction processing on the foreground pixel height map to obtain the first object parameter of the rendering object.
4. The method of claim 1, wherein reconstructing the rendered object in the second dimension space based on the first object parameters of the rendered object according to a mapping relationship between the first dimension space and the second dimension space, to obtain the second object parameters of the rendered object, comprises:
setting a preset light source and a foreground pixel height map corresponding to the image in a second dimension space according to the mapping relation between the first dimension space and the second dimension space; wherein, the mapping relation between the first dimension space and the second dimension space indicates: the foreground pixel height map is perpendicular to the virtual ground of the second dimension space in the second dimension space; the z axis of a camera coordinate system established by taking the preset light source as an origin of the coordinate system is perpendicular to the foreground pixel height map, the x axis of the camera coordinate system in the second dimension space is parallel to the x axis of the image coordinate system in the first dimension space, and the y axis of the camera coordinate system in the second dimension space is parallel to the y axis of the image coordinate system in the first dimension space;
according to the illumination of the preset light source on the foreground pixel height map in the second dimension space and the first object parameter of the rendering object, mapping the rendering object to the second dimension space to obtain the second object parameter of the rendering object reconstructed in the second dimension space;
wherein the second object parameters include at least: parameters of the rendering object in the stereoscopic scene presented by the second dimension space and parameters of object projection generated by the rendering object in the second dimension space under the influence of illumination of the preset light source.
5. The method of claim 4, wherein the mapping the rendered object to the second dimension space based on the illumination of the preset light source for the foreground pixel height map in the second dimension space and the first object parameters of the rendered object, resulting in second object parameters of the rendered object reconstructed in the second dimension space, comprises:
according to the illumination of the preset light source on the foreground pixel height map in the second dimension space, mapping the pixel points forming the rendering object in the foreground pixel height map to the second dimension space to obtain object projection formed by the pixel points forming the rendering object in the second dimension space;
determining parameters of the object projection formed by the rendering object in the second dimension space according to parameters of the shadow receiver of the object projection in the second dimension space and the first object parameter of the rendering object; the shadow receiver in the second dimension space corresponds to the object in the image that receives the projection of the rendering object;
according to the object projection of the rendering object, the geometric relationship between the preset light source and the rendering object in the foreground pixel height map in the second dimension space, and according to an imaging principle, calculating parameters of the rendering object in a three-dimensional scene presented in the second dimension space; wherein the second object parameters include at least one of: spatial coordinate information, color information, texture information, and depth information.
6. The method of claim 1, wherein rendering the image according to the second object parameters of the rendered object comprises:
performing normal reconstruction processing on the rendering object based on a second object parameter of the rendering object to obtain a normal modeling image; the rendering object in the normal modeling image and the soft shadow projected by the rendering object under the preset light source have a bump-mapped (concave-convex) effect; and,
performing depth extraction processing on the rendering object based on a second object parameter of the rendering object to obtain a target depth image; the target depth image is used for reflecting depth information between the rendering object and the preset light source;
and acquiring a background pixel height map of the image, and rendering and displaying the image based on the background pixel height map, the normal modeling image and the target depth image.
7. The method of claim 6, wherein the performing depth extraction processing on the rendering object based on the second object parameter of the rendering object to obtain the target depth image comprises:
performing depth extraction processing on the rendering object based on a second object parameter of the rendering object to obtain an initial depth image;
and adjusting the depth precision of the initial depth image in the direction of improving the rendering speed to obtain the target depth image.
8. The method of claim 6, wherein the rendering the image based on the background pixel height map, the normal modeling image, and the target depth image comprises:
acquiring a rendering model; the rendering model is used for indicating how the geometry of the preset light source, information about the occluder, information about the shadow receiver, and the spatial relationship between the occluder and the shadow receiver jointly influence the rendering of the soft shadow;
and rendering the background pixel height map, the normal modeling image and the target depth image by using the rendering model so as to render and display the image.
9. The method of claim 8, wherein the rendering model includes N perception buffer channels, N being an integer greater than zero; one perception buffer channel corresponds to one map; the rendering the background pixel height map, the normal modeling image, and the target depth image using the rendering model to render and display the image includes:
extracting the map corresponding to each perception buffer channel of the N perception buffer channels from the background pixel height map, the normal modeling image and the target depth image;
performing feature extraction on the corresponding maps, respectively, by using the N perception buffer channels included in the rendering model and according to the correspondence between perception buffer channels and maps, to obtain channel features corresponding to the N perception buffer channels;
concatenating the channel features corresponding to the N perception buffer channels to obtain a spliced feature;
and rendering and displaying the image based on the spliced feature, wherein the rendered image can present a soft shadow effect of the object projection of the rendering object under the preset light source.
10. The method of claim 9, wherein the N perception buffer channels comprise: a pixel height buffer channel, a hard shadow buffer channel, and a relative distance buffer channel; wherein,
the pixel height buffer channel is used for extracting characteristics of gradients of a foreground pixel height map of the image and a background pixel height map of the image so as to extract geometric figures of rendering objects in the foreground pixel height map and the background pixel height map;
the hard shadow buffer channel is used for extracting characteristics of a hard shadow image of the image so as to extract the external boundary of a soft shadow in the image;
the relative distance buffer channel is used for extracting features of the relative distance map so as to extract the softness of the illumination shadow of the rendering object; the relative distance map indicates the relative distance between the occluder and the shadow receiver in the second dimension space.
11. An image processing apparatus, comprising:
an acquisition unit configured to acquire an image to be rendered, the image being an image represented in a first dimension space; the image comprises a rendering object to be rendered;
a processing unit, configured to acquire a first object parameter of the rendering object in the image; the first object parameter is a parameter for rendering the rendering object in the first dimension space;
the processing unit is further configured to reconstruct the rendering object in a second dimension space according to a mapping relationship between the first dimension space and the second dimension space based on a first object parameter of the rendering object, so as to obtain a second object parameter of the rendering object; the second dimension space is a three-dimensional scene with a preset light source; the second object parameter is a parameter for reconstructing the rendering object in the second dimension space under illumination projection of the preset light source;
the processing unit is further configured to render and display the image according to the second object parameter of the rendering object.
12. A computer device, characterized by comprising:
a processor adapted to execute a computer program;
a computer readable storage medium having stored therein a computer program which, when executed by the processor, implements the image processing method according to any one of claims 1-10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program adapted to be loaded by a processor and to perform the image processing method according to any of claims 1-10.
14. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the image processing method according to any of claims 1-10.
CN202311448678.1A 2023-11-02 2023-11-02 Image processing method, device, equipment, medium and program product Active CN117173314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311448678.1A CN117173314B (en) 2023-11-02 2023-11-02 Image processing method, device, equipment, medium and program product


Publications (2)

Publication Number Publication Date
CN117173314A 2023-12-05
CN117173314B CN117173314B (en) 2024-02-23

Family

ID=88945378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311448678.1A Active CN117173314B (en) 2023-11-02 2023-11-02 Image processing method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN117173314B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110096145A1 (en) * 2006-05-23 2011-04-28 See Real Technologies S.A. Method and Device for Rendering and Generating Computer-Generated Video Holograms
CN105825544A (en) * 2015-11-25 2016-08-03 维沃移动通信有限公司 Image processing method and mobile terminal
CN108525298A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
US20190199995A1 (en) * 2016-09-09 2019-06-27 Samsung Electronics Co., Ltd. Method and device for processing three-dimensional image
US20200184714A1 (en) * 2017-08-18 2020-06-11 Tencent Technology (Shenzhen) Company Limited Method for renfering of simulating illumination and terminal
CN114418992A (en) * 2022-01-19 2022-04-29 安徽大学 Interactive 2D and 3D medical image registration parameter automatic generation method
US20220148250A1 (en) * 2020-11-11 2022-05-12 Sony Interactive Entertainment Inc. Image rendering method and apparatus
CN114998504A (en) * 2022-07-29 2022-09-02 杭州摩西科技发展有限公司 Two-dimensional image illumination rendering method, device and system and electronic device
CN115239861A (en) * 2021-04-23 2022-10-25 广州视源电子科技股份有限公司 Face data enhancement method and device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN117173314B (en) 2024-02-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant