CN118552677A - Image processing method and related device

Info

Publication number
CN118552677A
Authority
CN
China
Prior art keywords: shadow, initial, map, target, rendering
Prior art date: 2023-02-25
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310166364.6A
Other languages
Chinese (zh)
Inventor
孙翌峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Opper Software Technology Co ltd
Original Assignee
Nanjing Opper Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2023-02-25
Filing date: 2023-02-25
Publication date: 2024-08-27
Application filed by Nanjing Opper Software Technology Co ltd
Priority to CN202310166364.6A
Publication of CN118552677A
Legal status: Pending

Landscapes

  • Image Generation (AREA)

Abstract

The application provides an image processing method and a related device. First, a first rendering process is performed on an initial scene to obtain an initial scene graph, where the initial scene graph includes an initial shadow map corresponding to the shadow region formed by the initial scene under a virtual light source at a first viewpoint; then, target pixel points are determined according to the initial shadow map; finally, a second rendering process is performed on the target pixel points to obtain a target shadow map. The method can selectively determine the scene regions that need further rendering instead of performing the second rendering process on the whole scene, greatly improving the image display effect while reducing hardware power consumption.

Description

Image processing method and related device
Technical Field
The present application relates to the field of image technology, and in particular, to an image processing method and related apparatus.
Background
With the development of technology, the degree of refinement in image processing keeps increasing. Conventional rasterized shadow rendering often suffers from jagged (aliased) shadow edges. Ray-traced shadow rendering can eliminate the aliasing, but it requires much more computing power and places higher demands on hardware.
Disclosure of Invention
In view of this, the present application provides an image processing method and related apparatus, which can render shadows in a scene in a targeted manner, improving the image display effect while reducing hardware power consumption.
In a first aspect, an embodiment of the present application provides an image processing method, including:
Performing first rendering processing on an initial scene to obtain an initial scene graph, wherein the initial scene graph comprises an initial shadow map corresponding to a shadow area formed by the initial scene under a virtual light source at a first viewpoint;
Determining a target pixel point according to the initial shadow map;
And performing second rendering processing on the target pixel points to obtain a target shadow map.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
The first rendering unit is used for performing first rendering processing on the initial scene to obtain an initial scene graph, wherein the initial scene graph comprises an initial shadow map corresponding to a shadow area formed by the initial scene under a virtual light source at a first viewpoint;
The target determining unit is used for determining target pixel points according to the initial shadow mapping;
And the second rendering unit is used for performing second rendering processing on the target pixel points to obtain a target shadow map.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing steps in any of the methods of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the steps as described in any of the methods of the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in any of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
In the embodiments of the present application, a first rendering process is first performed on an initial scene to obtain an initial scene graph, where the initial scene graph includes an initial shadow map corresponding to the shadow region formed by the initial scene under a virtual light source at a first viewpoint; then, target pixel points are determined according to the initial shadow map; finally, a second rendering process is performed on the target pixel points to obtain a target shadow map. The method can selectively determine the scene regions that need further rendering instead of performing the second rendering process on the whole scene, greatly improving the image display effect while reducing hardware power consumption.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is an application scenario diagram of an image processing method according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the present application;
Fig. 4A is a schematic view of a scene under a virtual light source according to an embodiment of the present application;
Fig. 4B is a schematic illustration of part of an initial shadow map according to an embodiment of the present application;
Fig. 4C is a schematic illustration of part of a target shadow map according to an embodiment of the present application;
Fig. 5 is a schematic flow chart of another image processing method according to an embodiment of the present application;
Fig. 6 is a block diagram of the functional units of an image processing apparatus according to an embodiment of the present application;
Fig. 7 is a block diagram of the functional units of another image processing apparatus according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish different objects and are not necessarily intended to describe a particular sequence or chronological order. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus.
It should be understood that the term "and/or" merely describes an association relationship between associated objects and covers three possible relationships; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" herein indicates that the associated objects before and after it are in an "or" relationship. The term "plurality" in the embodiments of the present application means two or more.
"At least one" in the embodiments of the present application means one or more of the listed items, in any combination. For example, "at least one of a, b, or c" may represent the following seven cases: a; b; c; a and b; a and c; b and c; a, b and c. Each of a, b, and c may be an element or a set comprising one or more elements.
The "connection" in the embodiment of the present application refers to various connection manners such as direct connection or indirect connection, so as to implement communication between devices, which is not limited in the embodiment of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The relevant terms involved in the present application are first described below.
Rasterization rendering: a raster can be understood as the grid of individual light-emitting points on a display screen. A three-dimensional object or scene created on a computer needs to be displayed on that screen, and the process of displaying the three-dimensional object on the individual light-emitting points is called rasterization rendering. Rasterization is implemented mainly through projection, that is, mapping the color of a sampling point on the three-dimensional object onto an element of a two-dimensional array (the array may be called a frame buffer). The projection is performed through a projection matrix, which is determined by the position of the virtual camera. With the projection matrix, the color values on the three-dimensional object can be mapped into the frame buffer, which is then handed to the graphics processor for reading and display; this completes the rasterization rendering.
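As an illustration of the projection step just described, the following sketch maps one sampling point's color into a frame buffer, assuming NumPy arrays and a combined view-projection matrix in a column-vector convention; the helper name and conventions are hypothetical, not taken from the patent:

```python
import numpy as np

def project_to_frame_buffer(point_world, color, view_proj, frame_buffer):
    """Map the color of one 3D sampling point into a 2D frame buffer."""
    h, w, _ = frame_buffer.shape
    p = view_proj @ np.append(point_world, 1.0)    # homogeneous projection
    if p[3] <= 0.0:                                # behind the camera: skip
        return
    p = p / p[3]                                   # perspective divide -> NDC
    x = int((p[0] * 0.5 + 0.5) * (w - 1))          # NDC x in [-1, 1] -> column
    y = int((1.0 - (p[1] * 0.5 + 0.5)) * (h - 1))  # NDC y -> row (top-down)
    if 0 <= x < w and 0 <= y < h:
        frame_buffer[y, x] = color                 # write the sampled color
```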
Ray tracing rendering: after a virtual camera is defined, rays starting from the virtual camera and ending at a virtual light source in the three-dimensional scene are traced, and the colors of the rays that can reach the light source after reflection, refraction, scattering, and so on are calculated; this completes the ray tracing rendering.
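At the core of ray-traced shadows is a visibility query between a surface point and the light source. A minimal sketch of that query follows, assuming spherical occluders; the scene representation and helper names are illustrative assumptions, not details from the patent:

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius, max_t):
    """Standard quadratic ray-sphere test; direction must be normalized."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - np.sqrt(disc)) / 2.0   # nearest intersection along the ray
    return 1e-4 < t < max_t          # hit strictly between point and light

def point_lit(point, light_pos, occluders):
    """A surface point is lit only if no occluder (center, radius) blocks
    the segment from the point to the light source."""
    d = light_pos - point
    dist = np.linalg.norm(d)
    d = d / dist
    return not any(ray_hits_sphere(point, d, c, r, dist)
                   for c, r in occluders)
```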
At present, images are generally processed through rasterization rendering. However, because rasterization represents a continuous three-dimensional object with discrete two-dimensional pixels, the resulting shadow map suffers from problems such as edge aliasing. Ray tracing rendering, generally applied in fields such as large games and large-scale animation, can eliminate the edge aliasing in shadow maps, but because every ray needs to be traced, its hardware power consumption is high.
To solve the above problems, embodiments of the present application provide an image processing method and related apparatus, which perform targeted rendering on target pixel points after a first rendering process on the scene, thereby reducing hardware power consumption while improving the image display effect.
The embodiments of the present application can be applied to, but are not limited to, the following application scenarios: image rendering systems deployed on electronic devices, large-scale computer vision applications (e.g., game frame rendering, animation frame rendering), or other scenarios where full ray tracing rendering cannot be performed due to limited resources and hardware. The embodiments may be adapted and improved according to the specific application environment and are not specifically limited herein.
An application scenario of the image processing method is described below with reference to Fig. 1, which shows an application scenario diagram of an image processing method according to an embodiment of the present application. The electronic device 110 may perform rasterization rendering on a model 120 to be rendered to obtain a raster-rendered image 130. The raster-rendered image 130 can be understood as the two-dimensional image presented when the model 120 is viewed from the current virtual viewpoint, and it includes an initial shadow map 131, i.e., the shadow map corresponding to the shadow region of the model 120 visible from the current virtual viewpoint. At this point the initial shadow map 131 still suffers from edge aliasing, so the electronic device 110 may then perform targeted ray tracing rendering on the initial shadow map 131 to obtain an alias-free target shadow map 140.
The electronic device 110 in the embodiments of the present application may be a portable electronic device that also includes other functions such as personal digital assistant and/or music player functions, for example a mobile phone, a tablet computer, or a wearable electronic device with wireless communication functions (e.g., a smart watch). Exemplary embodiments of portable electronic devices include, but are not limited to, devices running iOS, Android, Microsoft, or other operating systems. The portable electronic device may also be another kind of portable electronic device, such as a laptop computer. It should also be appreciated that in other embodiments the electronic device may not be a portable electronic device but a desktop computer, a server, or the like.
In this way, targeted rendering can be performed on the shadow map, reducing hardware power consumption while improving the image display effect.
An electronic device according to an embodiment of the present application is described with reference to Fig. 2, which shows a schematic structural diagram of the device. As shown in Fig. 2, the electronic device includes one or more application processors 220, a memory 230, a communication module 240, and one or more programs 231; the application processor 220 is communicatively connected to the memory 230 and the communication module 240 through an internal communication bus.
Wherein the one or more programs 231 are stored in the memory 230 and configured to be executed by the application processor 220, the one or more programs 231 comprising instructions for performing any of the steps of the method embodiments described above.
The application processor 220 may be, for example, a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logic blocks, units, and circuits described in connection with this disclosure. The application processor 220 may also be a combination implementing computing functionality, such as a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit may be the communication module 240, a transceiver, a transceiver circuit, etc., and the storage unit may be the memory 230.
Memory 230 may be volatile or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (read-only memory, ROM), a programmable ROM (programmable ROM, PROM), an erasable programmable ROM (erasable PROM, EPROM), an electrically erasable programmable ROM (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synclink dynamic random access memory (synclink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
It will be appreciated that the electronic device 20 may include more or fewer structural elements than described above, for example a power module, physical keys, a Wi-Fi module, a speaker, a Bluetooth module, sensors, a display module, etc.; this is not limited here. It can be understood that the electronic device 20 may be a terminal device or an in-vehicle device in the embodiments of the present application.
An image processing method according to an embodiment of the present application is described below with reference to Fig. 3, which shows a schematic flow chart of the method. The method specifically includes the following steps:
Step 301, performing a first rendering process on the initial scene to obtain an initial scene graph.
The initial scene graph includes an initial shadow map corresponding to the shadow region formed by the initial scene under a virtual light source at a first viewpoint. It can be understood that the initial scene is the three-dimensional model to be rendered and the first viewpoint is a virtual viewpoint: the part of the initial scene visible from this viewpoint is the two-dimensional picture that currently needs to be displayed. The initial scene may include multiple objects, and the objects that can form shadows under the virtual light source may be determined as target objects.
A first rasterization process may be performed on the initial scene from the viewpoint of the virtual light source to obtain a first depth map, where the first depth map includes the depth information of each pixel point of the initial scene as seen from the virtual light source; that is, for the two-dimensional image of the initial scene seen from the light source's viewpoint, the distance from each pixel point to the virtual light source is determined first. Specifically, the depth value of a pixel point with no occluder between it and the virtual light source may be written into a depth buffer, and when two pixel points lie on the same straight line emitted from the virtual light source, the depth value of the pixel point closer to the light source is written into the depth buffer. For example, referring to Fig. 4A, there is nothing between the virtual light source and point A, so point A can be considered the pixel point closest to the light source along its line and its depth value is written into the depth buffer; when point B lies on the same straight line from the virtual light source as another point and is the closer of the two, the depth value of point B is written into the depth buffer.
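A minimal sketch of this first pass follows: the depth buffer keeps, per pixel of the light's view, the depth of the surface closest to the virtual light source. The inputs are assumed to be samples already projected into the light's view; names and layout are illustrative:

```python
import numpy as np

def light_view_depth_map(samples, height, width):
    """samples: iterable of (row, col, depth) in the light's view.
    Keeps the smallest depth per pixel, i.e. the surface nearest
    the virtual light source (point B wins over an occluded point)."""
    depth_buffer = np.full((height, width), np.inf)
    for row, col, depth in samples:
        if depth < depth_buffer[row, col]:   # nearer surface wins
            depth_buffer[row, col] = depth
    return depth_buffer
```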
Then, a second rasterization process is performed on the initial scene at the first viewpoint under the virtual light source to obtain a second depth map, where the second depth map includes the depth information of each pixel point of the initial scene at the first viewpoint under the virtual light source; that is, for the two-dimensional image of the initial scene seen from the first viewpoint, the distance from each pixel point to the light source is determined.
Shadow pixel points and non-shadow pixel points of the initial scene at the first viewpoint under the virtual light source are then determined according to the first depth map and the second depth map; that is, the depth information of each pixel point in the two-dimensional image seen from the first viewpoint is compared with the depth information of the same point stored in the depth buffer. Pixel points whose depth difference is smaller than a preset threshold undergo conventional shading calculation, while pixel points whose depth difference is greater than or equal to the preset threshold are calculated as shadow pixel points. For example, when rasterizing points A and B in Fig. 4A, conventional calculation may be performed directly, whereas when rasterizing point C, the distance from point C to the virtual light source is significantly larger than the depth value stored in the depth buffer, so point C is calculated as a shadow pixel point and a shadow parameter is applied to darken its color.
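The comparison can be sketched as follows; eps stands for the preset difference threshold mentioned above, and its default value here is an illustrative assumption:

```python
import numpy as np

def classify_shadow_pixels(dist_to_light, light_view_depth, eps=1e-3):
    """dist_to_light[i, j]: distance from the surface point behind
    first-viewpoint pixel (i, j) to the virtual light source.
    light_view_depth[i, j]: depth stored in the light-view buffer for
    that same surface point. A difference at or above eps marks the
    pixel as shadowed (point C in Fig. 4A)."""
    return (dist_to_light - light_view_depth) >= eps  # True = shadow pixel
```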
Finally, each shadow pixel point and each non-shadow pixel point are rendered to obtain the initial scene graph including the initial shadow map. Referring to Fig. 4B, which shows part of an initial shadow map according to an embodiment of the present application, the black portion is the initial shadow. At this stage no fine division of shadow color and shade according to factors such as distance is needed yet, and it is apparent that the edges of the hexagonal shadow are jagged and the display effect is poor.
It should be noted that the first rendering process can be understood as rendering the initial scene through a rasterization rendering pipeline. A rasterization rendering pipeline generally includes an application stage, a geometry processing stage, a rasterization stage, and a pixel processing stage: the application stage passes the geometry to be rendered (the rendering primitives) to the geometry processing stage; the geometry processing stage may include vertex shading, projection, clipping, screen mapping, and the like; and the rasterization stage may include triangle assembly, triangle traversal, and the like. Reference may be made to existing rasterization rendering methods, which are not described further here. It will be appreciated that the initial shadow map obtained by the first rendering process is only used to provide a data reference for the subsequent determination of target pixel points, so it may simply be colored black without considering shadow shading, which improves the efficiency of the first rendering process.
Step 302, determining a target pixel point according to the initial shadow map.
In one possible embodiment, every shadow pixel point in the initial shadow map may be determined as a target pixel point, so that ray tracing rendering is subsequently performed on the entire shadow but not on the entire scene, which reduces hardware power consumption while improving the display effect.
In another possible embodiment, the gray value of each shadow pixel point in the initial shadow map may be obtained, and the shadow pixel points whose gray values are lower than a preset gray threshold may be screened out as target pixel points. It can be understood that, because of edge aliasing, the non-aliased shadow pixel points in the initial shadow map all have a gray value of 1 (i.e., black under this convention), while the shadow pixel points in the aliased edge region have gray values between 0 and 1. Therefore, the shadow pixel points whose gray values are smaller than 1 and greater than 0 may be determined as target pixel points. In this way, the pixel points whose display most needs optimizing are identified, and the subsequent ray tracing rendering improves the display effect while reducing hardware power consumption.
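Under the stated convention (gray value 1 = fully shadowed, 0 = lit, strictly between = aliased edge), the screening step can be sketched as a one-line mask:

```python
import numpy as np

def screen_edge_pixels(shadow_gray):
    """Return a mask of target pixel points: shadow pixels whose gray
    value lies strictly between 0 and 1, i.e. the jagged edge region."""
    return (shadow_gray > 0.0) & (shadow_gray < 1.0)
```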
Step 303, performing a second rendering process on the target pixel points to obtain a target shadow map.
In one possible embodiment, if the target pixel points are all the shadow pixel points in the entire initial shadow map, a virtual ray may be set for each target pixel point and ray tracing rendering performed to obtain the target shadow map.
In another possible embodiment, if the target pixel points are the shadow pixel points in the aliased edge region of the initial shadow map, a virtual ray may be set for each target pixel point and ray tracing rendering performed to obtain an edge shadow map, which is then merged into the initial shadow map to obtain the target shadow map.
Specifically, a virtual ray may be configured for each target pixel point from the first viewpoint through a ray tracer. This virtual ray may be called a primary ray, and more secondary rays can be generated from it through reflection or refraction; the secondary rays can be used to determine whether a point in the initial scene is in shadow and to calculate the shadow effect. Conventional ray tracing rendering needs to configure a virtual ray for every pixel point, but in this embodiment only each target pixel point needs one virtual ray, which greatly reduces the amount of calculation. For the specific steps of ray tracing rendering, reference may be made to existing ray tracing rendering methods, which are not described further here.
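The selectivity can be sketched as follows: exactly one virtual ray is resolved per target pixel point, reusing the point_lit() helper sketched earlier; pixel_to_point (unprojecting a pixel to the surface point behind it) is an assumed helper, not an API from the patent:

```python
import numpy as np

def trace_target_pixels(target_mask, pixel_to_point, light_pos, occluders):
    """Resolve one shadow ray per target pixel; non-target pixels are
    never traced, which is where the power saving comes from."""
    refined = {}
    for row, col in zip(*np.nonzero(target_mask)):
        point = pixel_to_point(row, col)           # surface point behind pixel
        lit = point_lit(point, light_pos, occluders)
        refined[(row, col)] = 0.0 if lit else 1.0  # traced shadow gray value
    return refined
```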
If the target pixel points are all the shadow pixel points in the entire initial shadow map, no virtual rays need to be configured for non-shadow pixel points during ray tracing rendering, which greatly reduces the amount of calculation and reduces hardware power consumption while obtaining the target shadow map with the best display effect.
If the target pixel points are the shadow pixel points of the aliased edge region, virtual rays need only be configured for those edge pixel points during ray tracing rendering, which further reduces the amount of calculation: an alias-free target shadow map is still obtained, hardware power consumption is greatly reduced while a good display effect is ensured, and user experience is improved. As shown in Fig. 4C, which shows part of a target shadow map according to an embodiment of the present application, the edges of the hexagonal shadow are free of aliasing and the user experience is better.
In the embodiments of the present application, a first rendering process is first performed on an initial scene to obtain an initial scene graph, where the initial scene graph includes an initial shadow map corresponding to the shadow region formed by the initial scene under a virtual light source at a first viewpoint; then, target pixel points are determined according to the initial shadow map; finally, a second rendering process is performed on the target pixel points to obtain a target shadow map. The method can selectively determine the scene regions that need further rendering instead of performing the second rendering process on the whole scene, greatly improving the image display effect while reducing hardware power consumption.
Another image processing method according to an embodiment of the present application is described below with reference to Fig. 5, which shows a schematic flow chart of the method. The method specifically includes the following steps:
Step 501, performing a first rendering process on the initial scene to obtain an initial scene graph.
Step 502, current device performance data is obtained.
The device performance data may include at least one of device temperature, device dynamic memory occupancy, device processor occupancy, and device battery level.
Step 503, determining target pixel points according to the device performance data.
In one possible embodiment, if the device performance data meets a preset performance standard, every shadow pixel point in the initial shadow map may be determined as a target pixel point. The preset performance standard may include at least one of a temperature standard, a dynamic memory occupancy standard, a processor occupancy standard, a battery level standard, and the like. Specifically, the device temperature meets the temperature standard when it is lower than a preset temperature threshold and fails it when it is higher than or equal to that threshold; the dynamic memory occupancy meets its standard when it is smaller than a preset occupancy ratio and fails it otherwise; the processor occupancy meets its standard when it is smaller than a preset occupancy ratio and fails it otherwise; and the battery level meets its standard when it is higher than a preset threshold and fails it when it is lower than or equal to that threshold. These checks are not described further here.
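A minimal sketch of this check follows; every threshold value here is an illustrative assumption, not a figure from the patent:

```python
def meets_performance_standard(temp_c, mem_ratio, cpu_ratio, battery_ratio,
                               max_temp=45.0, max_mem=0.8, max_cpu=0.8,
                               min_battery=0.2):
    """True only if temperature, dynamic memory occupancy, processor
    occupancy, and battery level all satisfy their preset standards."""
    return (temp_c < max_temp
            and mem_ratio < max_mem
            and cpu_ratio < max_cpu
            and battery_ratio > min_battery)
```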
In another possible embodiment, if the device performance data does not meet the preset performance standard, the gray value of each shadow pixel point in the initial shadow map may be obtained, and the shadow pixel points whose gray values are lower than the preset gray threshold are screened out as target pixel points.
Determining the target pixel points according to the device performance data thus allows the largest feasible set of target pixel points to be selected for the current device capability, ensuring the display effect to the greatest extent while reducing hardware power consumption.
Step 504, performing a second rendering process on the target pixel points to obtain a target shadow map.
Step 505, replacing the initial shadow map with the target shadow map to obtain a target scene graph.
In the target scene graph, the shadow portion is the target shadow map obtained through ray tracing rendering, and the non-shadow portion is the image obtained through rasterization rendering. A shadow with a better display effect is thus obtained, overcoming the shortcoming of rasterization rendering, while hardware power consumption is reduced because ray tracing rendering is performed only on the target pixel points.
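A minimal sketch of step 505 follows: shadowed pixels take the ray-traced result, and all other pixels keep the rasterized color. The array shapes (H x W boolean mask, H x W x 3 color images) are illustrative assumptions:

```python
import numpy as np

def compose_target_scene(initial_scene, shadow_mask, target_shadow):
    """Replace the initial shadow map with the target shadow map."""
    target_scene = initial_scene.copy()
    target_scene[shadow_mask] = target_shadow[shadow_mask]
    return target_scene
```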
In the embodiments of the present application, a first rendering process is first performed on an initial scene to obtain an initial scene graph, where the initial scene graph includes an initial shadow map corresponding to the shadow region formed by the initial scene under a virtual light source at a first viewpoint; then, target pixel points are determined according to the initial shadow map; finally, a second rendering process is performed on the target pixel points to obtain a target shadow map. The method can selectively determine the scene regions that need further rendering instead of performing the second rendering process on the whole scene, greatly improving the image display effect while reducing hardware power consumption.
For the steps not described in detail above, reference may be made to the method described in Fig. 3; they are not repeated here.
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the electronic device may be divided into functional units according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in hardware or as a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is merely a logical function division; other division manners may be used in actual implementation.
In the case where each functional module is divided according to its corresponding function, Fig. 6 shows a block diagram of the functional units of an image processing apparatus according to an embodiment of the present application, where the image processing apparatus 600 includes:
A first rendering unit 610, configured to perform a first rendering process on an initial scene to obtain an initial scene graph, where the initial scene graph includes an initial shadow map corresponding to a shadow area formed by the initial scene under a virtual light source at a first viewpoint;
A target determining unit 620, configured to determine a target pixel point according to the initial shadow map;
And the second rendering unit 630 is configured to perform a second rendering process on the target pixel point to obtain a target shadow map.
Firstly, performing first rendering treatment on an initial scene to obtain an initial scene map, wherein the initial scene map comprises initial shadow maps corresponding to shadow areas formed by the initial scene under a virtual light source under a first viewpoint; then, determining a target pixel point according to the initial shadow map; and finally, performing second rendering processing on the target pixel points to obtain a target shadow map. The method can selectively determine the scene area needing to be rendered without performing second rendering processing on the whole scene, thereby greatly improving the image display effect and simultaneously reducing the hardware power consumption.
It should be noted that the specific implementation of each operation is described in the foregoing method embodiments, and the image processing apparatus 600 may be used to execute the foregoing method embodiments of the present application; details are not repeated here.
In the case of using integrated units, another image processing apparatus 700 according to an embodiment of the present application is described in detail below with reference to Fig. 7. The image processing apparatus 700 includes a processing unit 701 and a communication unit 702, where the processing unit 701 is configured to perform any step in the above method embodiments and, when performing data transmission such as sending, optionally invokes the communication unit 702 to perform the corresponding operation.
The image processing apparatus 700 may further include a storage unit 703 for storing program codes and data. The processing unit 701 may be a processor, the communication unit 702 may be a wireless communication module, the storage unit 703 may be a memory, and the processing unit 701 is specifically configured to:
Performing first rendering processing on an initial scene to obtain an initial scene graph, wherein the initial scene graph comprises an initial shadow map corresponding to a shadow area formed by the initial scene under a virtual light source at a first viewpoint;
Determining a target pixel point according to the initial shadow map;
And performing second rendering processing on the target pixel points to obtain a target shadow map.
In the embodiments of the present application, a first rendering process is first performed on an initial scene to obtain an initial scene graph, where the initial scene graph includes an initial shadow map corresponding to the shadow region formed by the initial scene under a virtual light source at a first viewpoint; then, target pixel points are determined according to the initial shadow map; finally, a second rendering process is performed on the target pixel points to obtain a target shadow map. The method can selectively determine the scene regions that need further rendering instead of performing the second rendering process on the whole scene, greatly improving the image display effect while reducing hardware power consumption.
It should be noted that the specific implementation of each operation is described in the foregoing method embodiments, and the image processing apparatus 700 may be used to execute the foregoing method embodiments of the present application; details are not repeated here.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program makes a computer execute part or all of the steps of any one of the method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
For simplicity, the foregoing method embodiments are all described as a series of action combinations. However, those skilled in the art should understand that the present application is not limited by the described order of actions, because some steps in the embodiments of the present application may be performed in other orders or simultaneously. In addition, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and the actions, steps, modules, or units involved are not necessarily required by the embodiments of the present application.
Each of the foregoing embodiments has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be implemented in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in a RAM, a flash memory, a ROM, an EPROM, an electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integrated into the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may be located in a terminal device or a management device. Alternatively, the processor and the storage medium may reside as discrete components in a terminal device or management device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state drive (SSD)).
The apparatuses and the modules/units included in the products described in the above embodiments may be software modules/units, hardware modules/units, or partly software and partly hardware modules/units. For a device or product applied to or integrated on a chip, each module/unit it contains may be implemented in hardware such as a circuit, or at least some modules/units may be implemented as a software program running on a processor integrated inside the chip, with the remaining (if any) modules/units implemented in hardware such as a circuit. The same applies to a device or product applied to or integrated in a chip module or a terminal device: each module/unit may be implemented in hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip or circuit module) or in different components, or at least some modules/units may be implemented as a software program running on a processor integrated inside the chip module or terminal device, with the remaining (if any) modules/units implemented in hardware such as a circuit.
The foregoing detailed description of the embodiments of the present application further illustrates the purposes, technical solutions and advantageous effects of the embodiments of the present application, and it should be understood that the foregoing description is only a specific implementation of the embodiments of the present application, and is not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. An image processing method, the method comprising:
Performing first rendering processing on an initial scene to obtain an initial scene graph, wherein the initial scene graph comprises an initial shadow map corresponding to a shadow area formed by the initial scene under a virtual light source at a first viewpoint;
Determining a target pixel point according to the initial shadow map;
And performing second rendering processing on the target pixel points to obtain a target shadow map.
2. The method of claim 1, wherein performing a first rendering process on the initial scene to obtain an initial scene graph comprises:
Performing first rasterization on the initial scene which is at a virtual light source viewpoint and under the virtual light source to obtain a first depth map, wherein the first depth map comprises depth information of each pixel point in the initial scene which is at the virtual light source viewpoint and under the virtual light source;
performing second rasterization on the initial scene which is at the first view point and is under the virtual light source to obtain a second depth map, wherein the second depth map comprises depth information of each pixel point in the initial scene which is at the first view point and is under the virtual light source;
Determining shadow pixel points and non-shadow pixel points of the initial scene which are at the first viewpoint and under the virtual light source according to the first depth map and the second depth map;
And rendering each shadow pixel point and each non-shadow pixel point to obtain the initial scene graph comprising the initial shadow map.
3. The method of claim 2, wherein the determining a target pixel point according to the initial shadow map comprises:
and determining each shadow pixel point in the initial shadow map as the target pixel point.
4. The method of claim 2, wherein the determining a target pixel point according to the initial shadow map comprises:
acquiring a gray value of each shadow pixel point in the initial shadow map;
and screening out shadow pixel points with gray values lower than a preset gray threshold value as the target pixel points.
5. The method of claim 3, wherein the performing a second rendering process on the target pixel points to obtain a target shadow map comprises:
and setting a virtual ray for each target pixel point, and performing ray tracing rendering to obtain the target shadow map.
6. The method of claim 4, wherein performing a second rendering process on the target pixel point to obtain a target shadow map comprises:
setting a virtual ray for each target pixel point, and performing ray tracing rendering to obtain an edge shadow map;
And merging the edge shadow map to the initial shadow map to obtain the target shadow map.
7. The method according to any one of claims 1-6, wherein after performing the second rendering process on the target pixel point to obtain a target shadow map, the method further comprises:
And replacing the initial shadow map with the target shadow map to obtain a target scene map.
8. An image processing apparatus, characterized in that the apparatus comprises:
The first rendering unit is used for performing first rendering processing on the initial scene to obtain an initial scene graph, wherein the initial scene graph comprises an initial shadow map corresponding to a shadow area formed by the initial scene under a virtual light source at a first viewpoint;
The target determining unit is used for determining target pixel points according to the initial shadow mapping;
And the second rendering unit is used for performing second rendering processing on the target pixel points to obtain a target shadow map.
9. An electronic device, comprising: a processor, a memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-7.
Priority Applications (1)

CN202310166364.6A (filed 2023-02-25): Image processing method and related device

Publications (1)

CN118552677A (published 2024-08-27)

Family

ID=92444674
