CN115423921A - Image rendering method and device, electronic equipment and storage medium - Google Patents

Image rendering method and device, electronic equipment and storage medium

Info

Publication number
CN115423921A
Authority
CN
China
Prior art keywords
depth
shadow map
shadow
scene
rendered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211142280.0A
Other languages
Chinese (zh)
Inventor
梅新岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202211142280.0A priority Critical patent/CN115423921A/en
Publication of CN115423921A publication Critical patent/CN115423921A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/60 Shadow generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Abstract

Embodiments of the present application disclose an image rendering method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a focus point in a scene to be rendered, and determining a depth-of-field focus area corresponding to the focus point in the scene to be rendered; dividing a view frustum of the electronic device according to the depth-of-field focus area to obtain first partial view frustums corresponding to the depth-of-field focus area and second partial view frustums corresponding to the non-depth-of-field focus area; determining a first shadow map corresponding to each first partial view frustum and a second shadow map corresponding to each second partial view frustum, wherein the resolution of the second shadow map is lower than the resolution of the first shadow map; and rendering the depth-of-field focus area and the non-depth-of-field focus area with the first shadow maps and the second shadow maps respectively, to obtain an image corresponding to the scene to be rendered. By implementing the embodiments of the present application, the shadow rendering quality around the focus point can be improved while the computing resources consumed by shadow rendering are reduced.

Description

Image rendering method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image rendering method and apparatus, an electronic device, and a storage medium.
Background
In conventional shadow rendering technology, improving the rendering quality of shadows requires increasing the resolution of the shadow map. In practice, however, it has been found that although a higher shadow map resolution improves shadow rendering quality, it consumes excessive computing resources and degrades memory performance.
Disclosure of Invention
The embodiments of the present application disclose an image rendering method and apparatus, an electronic device, and a storage medium, which can reduce the computing resources consumed by shadow rendering and optimize rendering performance while improving the shadow rendering quality around the focus point.
An embodiment of the present application discloses an image rendering method, which includes: acquiring a focus point in a scene to be rendered; determining a depth-of-field focus area corresponding to the focus point in the scene to be rendered; dividing a view frustum of the electronic device according to the depth-of-field focus area to obtain at least one first partial view frustum corresponding to the depth-of-field focus area and at least one second partial view frustum corresponding to the non-depth-of-field focus area, where the non-depth-of-field focus area is the area of the scene to be rendered other than the depth-of-field focus area; determining a first shadow map corresponding to each first partial view frustum to obtain a first shadow map set, and determining a second shadow map corresponding to each second partial view frustum to obtain a second shadow map set, where the resolution of the second shadow map is lower than the resolution of the first shadow map; and rendering the depth-of-field focus area with the first shadow map set and rendering the non-depth-of-field focus area with the second shadow map set, to obtain an image corresponding to the scene to be rendered.
In one embodiment, the depth-of-field focal region corresponds to a first partial view frustum; the first shadow map set comprises: a first shadow map corresponding to said first partial view frustum; the determining a depth-of-field focusing area corresponding to a focusing point in the scene to be rendered includes: identifying other pixel points which belong to the same object as the focus point in the scene to be rendered; determining a depth of focus region including the focus point and the other pixel points.
In one embodiment, the depth-of-field focus area corresponds to at least two first partial view frustums; the at least two first partial view frustums include: a first focused view frustum and at least one first unfocused view frustum, the first focused view frustum being centered on the focus point; the first shadow map set includes: a first shadow map corresponding to the first focused view frustum and a first shadow map corresponding to each first unfocused view frustum; the resolution of the first shadow maps corresponding to the first unfocused view frustums is lower than that of the first shadow map corresponding to the first focused view frustum.
In one embodiment, the at least two first partial cones comprise: at least two first non-focusing cones; the resolution of the first shadow map corresponding to each first unfocused view cone is in a negative correlation with the distance from the first unfocused view cone to the focus point.
In one embodiment, the at least two first partial view cones are demarcated from the view cone after the electronic device determines that the depth-of-field focal region has a length in the depth direction greater than a threshold.
In one embodiment, the non-depth-of-field focus area corresponds to at least two second partial view frustums; the second shadow map set includes: a second shadow map corresponding to each of the at least two second partial view frustums; the resolution of the second shadow map corresponding to each second partial view frustum is negatively correlated with the distance from that second partial view frustum to the viewpoint; or the resolution of the second shadow map corresponding to each second partial view frustum is negatively correlated with the distance from that second partial view frustum to the focus point; or the resolutions of the second shadow maps corresponding to the second partial view frustums are the same.
In one embodiment, the method further includes: blurring the non-depth-of-field focus area.
In one embodiment, the obtaining a focus point in a scene to be rendered includes: detecting an input user focusing operation; identifying a focus point indicated by the user focus operation in the scene to be rendered.
The embodiment of the application discloses image rendering device, the device includes: the acquisition module is used for acquiring a focus point in a scene to be rendered; the first determining module is used for determining a depth-of-field focusing area corresponding to the focusing point in the scene to be rendered; the dividing module is used for dividing the view cones of the electronic equipment according to the depth-of-field focusing area to obtain at least one first part view cone corresponding to the depth-of-field focusing area and at least one second part view cone corresponding to the non-depth-of-field focusing area; the non-depth focus area is other areas except the depth focus area in the scene to be rendered; a second determining module, configured to determine a first shadow map corresponding to each first partial view cone to obtain a first shadow map set, and determine a second shadow map corresponding to each second partial view cone to obtain a second shadow map set; wherein a resolution of the second shadow map is lower than a resolution of the first shadow map; and the rendering module is used for rendering the depth-of-field focusing area by using the first shadow mapping set and rendering the non-depth-of-field focusing area by using the second shadow mapping set to obtain an image corresponding to the scene to be rendered.
The embodiment of the application discloses an electronic device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is enabled to realize any image rendering method disclosed by the embodiment of the application.
An embodiment of the present application discloses a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements any of the image rendering methods disclosed in the embodiments of the present application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
in the embodiments of the present application, the view frustum can be divided according to the depth-of-field focus area in which the focus point of the scene to be rendered is located, so as to obtain a first partial view frustum corresponding to the depth-of-field focus area and a second partial view frustum corresponding to the non-depth-of-field focus area. A first shadow map with a higher resolution is generated for the first partial view frustum corresponding to the depth-of-field focus area, and a second shadow map with a lower resolution is generated for the second partial view frustum corresponding to the non-depth-of-field focus area. The depth-of-field focus area and the non-depth-of-field focus area are then rendered with the first shadow map and the second shadow map respectively. On the one hand, the depth-of-field focus area in which the focus point is located achieves a higher-quality shadow rendering effect; on the other hand, reducing the shadow map resolution of the non-depth-of-field focus area reduces the amount of computation and the computing resources consumed by shadow rendering, and optimizes rendering performance.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the embodiments will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1A is a diagram of an example of view frustum division of CSM in the related art;
FIG. 1B is a CSM shadow mapping and rendering effect example in the related art;
FIG. 2A is a diagram of an example of view frustum division in CSM in the related art;
FIG. 2B is a diagram of another cone division example of CSM in the related art;
FIG. 2C is an exemplary diagram of projection of a divided view frustum in the related art;
FIG. 3 is an exemplary diagram of a seam problem seen in the related art;
FIG. 4 is a diagram illustrating an application scenario of an image rendering method according to an embodiment;
FIG. 5 is a flowchart illustrating a method of an image rendering method according to an embodiment;
FIG. 6A is an exemplary illustration of a depth of field focal area according to one embodiment disclosed;
FIG. 6B is an exemplary illustration of another depth of focus area disclosed in one embodiment;
FIG. 6C is an exemplary illustration of another depth of focus area disclosed by an embodiment;
FIG. 7A is an example diagram of a partitioning of cones, according to one embodiment;
FIG. 7B is an example diagram of another cone of view partitioning disclosed by one embodiment;
FIG. 7C is an exemplary diagram of one embodiment of a projection of a partitioned cone;
FIG. 8 is a flowchart of another embodiment of a disclosed method for rendering an image;
FIG. 9 is a diagram illustrating an example of the effects of a shadow rendering according to one embodiment;
FIG. 10 is a schematic diagram illustrating an exemplary embodiment of an image rendering apparatus;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
In the related art, an image rendering technique based on Cascaded Shadow Mapping (CSM) is provided. Before introducing CSM, the following terms are explained.
The viewpoint is where the observer is located; it may be, for example, the position of the camera in an AR scene or the position of the VR glasses in a VR scene.
A view frustum is a pyramid-shaped volume whose apex is the viewpoint. The view frustum represents the visible range of the scene to be rendered as seen from the viewpoint.
The depth direction is the direction perpendicular to the cross section of the view frustum. For example, if the cross section of the view frustum is represented as the xy plane, the depth direction may be represented as the z-axis direction.
The scene to be rendered, which may be a three-dimensional model to be rendered, may include one or more independent objects, such as virtual buildings, virtual plants, virtual furniture, and the like.
CSM divides the camera's view frustum into sub-blocks and generates a separate Shadow Map (Shadow Map) for each sub-block of the division. The resolution of different shadow maps is different, and the resolution of the shadow maps can represent the number of pixel points occupied by shadows of objects with the same size in the shadow maps. The higher the resolution of the shadow map is, the more the number of pixel points occupied by the shadow of the object in the shadow map is, and the better the shadow rendering effect is.
Referring to fig. 1A, fig. 1A is a diagram illustrating a cone division of a CSM in the related art. As shown in fig. 1A, the viewing pyramid is divided into 3 sub-blocks, namely, a Near sub-block 110, a Middle sub-block 120, and a Far sub-block 130.
Referring to fig. 1B, fig. 1B is a diagram illustrating a shadow map and a rendering effect of a CSM in the related art. The shadow map shown in FIG. 1B is generated based on the result of the cone division as shown in FIG. 1A. As shown in FIG. 1B, the rendering effect may be shown as image 140. The shadow map 1101 corresponds to the near sub-block 110, the shadow map 1201 corresponds to the middle sub-block 120, and the shadow map 1301 corresponds to the far sub-block 130. The resolutions of the shadow maps 1101, 1201, and 1301 are successively reduced.
By dividing the view frustum and generating a higher-resolution shadow map for the near region and a lower-resolution shadow map for the far region, CSM reduces the amount of computation of shadow rendering to a certain extent and optimizes rendering performance.
CSM-based shadow rendering involves three key steps: 1. dividing the view frustum; 2. generating a corresponding shadow map for each divided sub-block; 3. performing shadow rendering using the shadow maps. These are described separately below.
First, referring to fig. 2A, fig. 2A is a diagram illustrating cone division in CSM in the related art. As shown in fig. 2A, a scene 210 to be rendered may include 4 trees, a lighting direction 220 is perpendicular to a depth direction, and a viewing pyramid 230 constructed based on the depth direction is uniformly divided along a z-axis direction.
The uniform division of the view frustum 230 can be expressed with reference to the following formula:
Z_i = Z_n + (Z_f - Z_n) · (i / N)
wherein Z_i is the position of the i-th sub-block, Z_n indicates the position of the sub-block closest to the viewpoint, Z_f indicates the position of the sub-block farthest from the viewpoint, N indicates the number of sub-blocks into which the view frustum 230 is divided, and i indicates the i-th sub-block.
Referring to fig. 2B, fig. 2B is a diagram of another cone division example of the CSM in the related art. As shown in fig. 2B, the viewing frustum 230 is logarithmically divided along the z-axis direction.
The logarithmic division of the view frustum 230 can be expressed with reference to the following formula:
Z_i = Z_n · (Z_f / Z_n)^(i / N)
in addition, the foregoing uniform division and logarithmic division may be combined to divide the cone of view with reference to the following formula:
Z_i = a · Z_n · (Z_f / Z_n)^(i / N) + (1 - a) · (Z_n + (Z_f - Z_n) · (i / N))
where a is a parameter for controlling the weight.
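For illustration, the following C++ sketch (not part of the patent; the function and variable names are assumptions) computes the split positions of the uniform, logarithmic, and blended schemes given above, with zNear and zFar corresponding to Z_n and Z_f, n to N, and a to the blending weight:

```cpp
#include <cmath>
#include <vector>

// Sketch: cascade split depths for the uniform, logarithmic and blended
// schemes described above. zNear/zFar are the near/far depths of the view
// frustum, n the number of sub-blocks, and a the blending weight.
std::vector<float> CascadeSplits(float zNear, float zFar, int n, float a) {
    std::vector<float> splits(n + 1);
    for (int i = 0; i <= n; ++i) {
        float t = static_cast<float>(i) / n;
        float uniform = zNear + (zFar - zNear) * t;              // uniform split
        float logarithmic = zNear * std::pow(zFar / zNear, t);   // logarithmic split
        splits[i] = a * logarithmic + (1.0f - a) * uniform;      // blended split
    }
    return splits;
}
```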
After the view frustum is divided, a corresponding shadow map is generated for each divided sub-block. Referring to fig. 2C, fig. 2C is a diagram illustrating the projection of a divided view frustum in the related art. As shown in fig. 2C, the view frustum 230 is divided into 2 sub-blocks: sub-block 240 and sub-block 250. The rightmost end of the view frustum 230 does not include any object of the scene 210 to be rendered and can be ignored. The sub-blocks 240 and 250 contain the scene 210 to be rendered; projecting the sub-blocks 240 and 250 along the illumination direction 220 produces corresponding illumination viewports (Frustum). Sub-block 240 corresponds to illumination viewport 2401, and sub-block 250 corresponds to illumination viewport 2501.
The illumination viewport 2401 is used to indicate the pixel range corresponding to sub-block 240 in the scene 210 to be rendered; based on the illumination viewport 2401, a shadow map corresponding to sub-block 240 can be generated. Similarly, based on the illumination viewport 2501, a shadow map corresponding to sub-block 250 can be generated. The resolution of the shadow map corresponding to sub-block 240 is higher than the resolution of the shadow map corresponding to sub-block 250.
When shadow rendering is performed on the scene 210 to be rendered, the shadow map used for a pixel to be rendered is selected according to the position of that pixel in the view space observed from the viewpoint, and the depth information stored in the selected shadow map is then used to determine whether the pixel is rendered in shadow.
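As an illustrative (non-patent) sketch of this selection-and-comparison step, the following C++ code picks a cascade by the fragment's view-space depth and compares the fragment's light-space depth against the depth stored in that cascade's shadow map; the Cascade structure, the bias value, and the assumption that the light-space coordinates have already been computed are all illustrative:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-cascade data: the far boundary of the sub-block in view
// space and its shadow map stored as a flat row-major depth buffer in [0,1].
struct Cascade {
    float maxViewDepth;
    int width, height;
    std::vector<float> depthMap;
};

// Sketch: choose the cascade by the fragment's view-space depth, then compare
// the fragment's light-space depth against the stored occluder depth.
bool InShadow(const std::vector<Cascade>& cascades,
              float viewDepth, float lightU, float lightV, float lightDepth,
              float bias = 0.002f) {
    std::size_t c = 0;
    while (c + 1 < cascades.size() && viewDepth > cascades[c].maxViewDepth) ++c;
    const Cascade& sm = cascades[c];
    int x = static_cast<int>(lightU * (sm.width - 1));
    int y = static_cast<int>(lightV * (sm.height - 1));
    float occluderDepth = sm.depthMap[static_cast<std::size_t>(y) * sm.width + x];
    return lightDepth - bias > occluderDepth;   // true: the pixel is rendered as shadow
}
```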
However, CSM techniques still have certain drawbacks:
1. in some scenes, a rendered picture needs to present a focus point, and the periphery of the focus point requires higher picture quality. However, the focus point may appear at any position, and may be located in a near plane closer to the viewpoint or may be located in a far plane farther from the viewpoint. When shadow rendering is performed based on CSM, if a focus point appears in a far plane, the CSM generates a shadow map with a low resolution in the far plane, which tends to result in poor quality of shadow rendering of the focus point and the periphery.
2. When the same object is divided into two different cascading layers in the view frustum, different parts of the same object use shadow maps with different resolutions to perform shadow rendering, and the problem of visible seams is easy to occur.
For example, referring to fig. 3, fig. 3 is an exemplary diagram illustrating a seam problem in the related art. The shading 310 and shading 320 circled in fig. 3 present a visible seam problem, the shading effect being incoherent.
The embodiment of the application discloses an image rendering method, an image rendering device, electronic equipment and a storage medium, which can reduce operation resources consumed by shadow rendering and optimize rendering performance while improving shadow rendering quality of a focus point. The following are detailed below.
First, possible application scenarios of the image rendering method disclosed in the embodiment of the present application are introduced.
The image rendering method disclosed in the embodiments of the present application can be applied to Augmented Reality (AR) shadow rendering under multi-user viewing angles. In an AR shadow rendering scenario, multiple terminal devices, such as smartphones and smart tablets, may be provided. A terminal device with an AR function can capture a real-world scene and display a virtual scene superimposed on the captured real scene. When the virtual scene is displayed in this superimposed manner, shadow rendering needs to be performed on the virtual scene to simulate the shadow effect of the physical scene, which alleviates the sense of separation between the virtual scene and the physical scene and increases the realism of the virtual scene.
Moreover, different users can shoot the same physical scene according to different user visual angles, and even if the same virtual scene is displayed on terminal equipment of different users, different user visual angles can have different focus points when facing the same virtual scene.
Referring to fig. 4, fig. 4 is a schematic view illustrating an application scenario of an image rendering method according to an embodiment, and fig. 4 illustrates a Virtual Reality (VR) shadow rendering application. The virtual scene shown in fig. 4 may be output in a display screen of VR glasses, which may detect a line-of-sight direction of the wearer and determine a focus point of the wearer in the virtual object according to the line-of-sight direction.
As shown in fig. 4, the line-of-sight direction of the VR glasses wearer may be positioned at a Focus Point 410 in the virtual object.
The image rendering method disclosed in the embodiment of the present application may be executed by any electronic device, such as the foregoing smart phone, smart tablet, VR glasses, or a service device that provides background service for VR glasses, and is not limited specifically.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method of image rendering according to an embodiment of the disclosure. As shown in fig. 5, the method may include the steps of:
510. and acquiring a focus point in the scene to be rendered.
The focus point is a point in the scene to be rendered on which the user's line of sight may be focused; it can be input by the user through a focusing operation, or it may be actively generated by the electronic device.
As an optional implementation, the focus point may be input through a user focusing operation, and the user may input the focusing operation through any interaction such as touch, gaze, or voice. The electronic device can detect the input user operation and identify the focus point indicated by the operation in the scene to be rendered.
For example, in the AR scenario as described above, the user may input the user focusing operation by clicking the touch screen of the smart phone, and the point where the user focusing operation is placed on the touch screen is the focusing point. Alternatively, in a VR scene as described above, the user may input a user focusing operation by line-of-sight rotation, where the falling point of the line-of-sight in the VR glasses screen is the focusing point.
As another alternative, the electronic device may actively generate a focus point in the scene to be rendered. The actively generated focus point of the electronic device may be used to guide the user in shifting the gaze. For example, in a VR game, to guide a user to pick up a virtual item, the electronic device may actively perform screen rendering with the virtual item as a focus point.
520. And determining a depth focus area corresponding to the focus point in the scene to be rendered.
The Depth of Field (DOF) focus region corresponding to the focus point may be a region composed of pixels capable of obtaining a clear image when the focus point is set as a focus.
The non-depth focus area may be other areas in the scene to be rendered except the depth focus area.
As an alternative implementation, the electronic device may determine a depth focus area corresponding to the focus point in the scene to be rendered according to the camera parameters. For example, in an AR scene, the camera parameters may be parameters corresponding to a physical camera disposed on the electronic device. In a VR scene, the camera parameters may be parameters corresponding to a simulated camera used in rendering a picture.
The camera parameters may include the aperture f-number, the focus distance, and the like. Illustratively, with the focus point as the focus, the smaller the f-number of the aperture, the shallower the depth of field; the longer the focus distance, the shallower the depth of field.
Referring to fig. 6A-6C, fig. 6A is a diagram illustrating an exemplary depth of field focus area according to an embodiment of the disclosure. As shown in fig. 6A, the depth-of-field focusing area corresponding to the focusing point is a narrow area, which can be determined according to a smaller f-number or a longer focusing distance.
Illustratively, fig. 6B is an exemplary illustration of another depth-of-field focal area disclosed in one embodiment. As shown in fig. 6B, the depth-of-field focusing area corresponding to the focusing point is a wider area, which can be determined according to the larger f-number of the aperture.
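As a reference for how such a depth-of-field focus area could be derived from camera parameters, the following sketch uses a standard thin-lens approximation (the patent itself does not give a formula); f is the focal length, N the aperture f-number, c the circle of confusion, and s the focus distance, all in consistent units:

```cpp
#include <cmath>
#include <limits>

// Sketch, assuming a thin-lens model: computes the near/far depth-of-field
// boundaries from focal length f, aperture f-number N, circle of confusion c,
// and focus distance s.
struct DofRange { float nearBound, farBound; };

DofRange DepthOfFieldBounds(float f, float N, float c, float s) {
    float H = f * f / (N * c) + f;                 // hyperfocal distance
    DofRange r;
    r.nearBound = H * s / (H + (s - f));
    r.farBound  = (s < H) ? H * s / (H - (s - f))
                          : std::numeric_limits<float>::infinity();
    return r;
}
```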
As another optional implementation manner, the electronic device may further identify other pixel points belonging to the same object as the focus point in the scene to be rendered, and determine a depth focus area including the focus point and the other pixel points.
After the electronic device detects the focus point, other pixel points belonging to the same object as the focus point in the scene to be rendered can be identified based on object segmentation methods such as depth estimation and target recognition. For example, the focus object in which the focus point is located may be delimited by a minimum bounding box, and the area enclosed by the minimum bounding box may be determined as the depth-of-field focus area. That is, the depth-of-field focus area corresponding to the focus point may completely include the focus object in which the focus point is located.
Illustratively, fig. 6C is an exemplary diagram of another depth-of-field focal area disclosed in one embodiment. As shown in fig. 6C, the depth-of-field focusing area corresponding to the focusing point is a relatively moderate area, which may include a complete focusing object, and may be determined based on the aforementioned object classification method.
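A minimal sketch of the bounding-box approach, assuming a segmentation step (not specified here) has already produced the depths of the pixels belonging to the same object as the focus point:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch: the depth-of-field focus area spans the focus object's minimum
// bounding range in depth. Assumes a non-empty set of object pixel depths.
struct ObjectDepthRange { float nearBound, farBound; };

ObjectDepthRange DofRegionFromObjectDepths(const std::vector<float>& objectPixelDepths) {
    assert(!objectPixelDepths.empty());
    auto mm = std::minmax_element(objectPixelDepths.begin(), objectPixelDepths.end());
    return {*mm.first, *mm.second};   // near/far boundaries of the focus object
}
```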
530. According to the depth-of-field focusing area, the view cones of the electronic equipment are divided to obtain at least one first partial view cone corresponding to the depth-of-field focusing area and at least one second partial view cone corresponding to the non-depth-of-field focusing area.
In the embodiment of the present application, the depth-of-field focus area has two boundaries in the depth direction: a near-end boundary relatively close to the viewpoint and a far-end boundary relatively far from the viewpoint. The electronic device can divide the view frustum using the near-end boundary and the far-end boundary as cross sections; the part between the two cross sections is the depth-of-field part corresponding to the depth-of-field focus area, and the parts outside the two cross sections are the non-depth-of-field parts corresponding to the non-depth-of-field focus area.
In the embodiment of the present application, after dividing the view frustum at the boundaries of the depth-of-field focus area, the electronic device may further divide the depth-of-field part or the non-depth-of-field part, or may leave them undivided; this is not specifically limited.
For example, please refer to fig. 7A, fig. 7A is an exemplary diagram of dividing a cone of view according to an embodiment. The scene to be rendered may include 3 trees with the focus point falling on the middle tree. The view frustum is divided into a first partial view frustum 720 and two second partial view frustums, namely a second partial view frustum 710 and a second partial view frustum 730.
Referring to fig. 7B, fig. 7B is a diagram illustrating another example of dividing a cone of view according to an embodiment. The scene to be rendered may include 3 trees with the focus point falling on the tree closest to the viewpoint. The view frustum is divided into a first partial view frustum 740 and a second partial view frustum 750.
As shown in the foregoing example, if the focus point is located in the middle section of the scene to be rendered, after the view frustum is divided by using the boundary of the depth-of-field focusing area, at least two second partial view frustums may be divided; if the focus points are located at two ends of the scene to be rendered, such as the nearest end or the farthest end, only one second partial view frustum may be divided after the view frustum is divided by using the boundary of the depth-of-field focus area.
Also, the foregoing examples do not further divide the depth portion, and therefore the depth portions each include one first partial view frustum.
In some embodiments, the electronic device may further divide the depth portion, such as based on CSM's logarithmic division or uniform division rules, then the depth portion may include two or more first partial cones of view. Similarly, in other embodiments, the electronic device may further divide the non-depth-of-field portion, for example, the second partial view frustum 750 shown in fig. 7B, so that the non-depth-of-field portion may include more second partial view cones, which is not limited in detail.
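The following C++ sketch illustrates the basic division by the near-end and far-end boundaries, without the optional further subdivision; the type and function names are illustrative, and only the depth range of the view frustum is considered:

```cpp
#include <algorithm>
#include <vector>

// Sketch: splits the view frustum's depth range [zNear, zFar] at the near/far
// boundaries of the depth-of-field focus area, yielding "first" partial
// frusta inside the focus area and "second" partial frusta outside it.
struct DepthSlice { float begin, end; bool insideDof; };

std::vector<DepthSlice> SplitByDofRegion(float zNear, float zFar,
                                         float dofNear, float dofFar) {
    dofNear = std::clamp(dofNear, zNear, zFar);
    dofFar  = std::clamp(dofFar,  zNear, zFar);
    std::vector<DepthSlice> slices;
    if (dofNear > zNear) slices.push_back({zNear, dofNear, false}); // second partial frustum
    slices.push_back({dofNear, dofFar, true});                      // first partial frustum
    if (dofFar < zFar)  slices.push_back({dofFar, zFar, false});    // second partial frustum
    return slices;
}
```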
540. And determining a first shadow map corresponding to each first partial view cone to obtain a first shadow map set, and determining a second shadow map corresponding to each second partial view cone to obtain a second shadow map set.
In an embodiment of the present application, each first partial view cone may generate a first shadow map. Thus, the first set of shadow maps may comprise the same number of first shadow maps as the first partial view frustum, and may comprise at least one first shadow map.
Each second partial view cone may generate a second shadow map. Thus, the second set of shadow maps may comprise the same number of second shadow maps as the second partial view frustum, and may comprise at least one second shadow map.
And, the resolution of any one of the second shadow maps is lower than the resolution of any one of the first shadow maps.
In the embodiment of the present application, for an implementation that the corresponding shadow map is generated for each first partial view frustum and each second partial view frustum, reference may be made to corresponding descriptions in the CSM, and details are not repeated below.
For example, referring to fig. 7C, fig. 7C is an exemplary diagram of projecting a divided view frustum according to an embodiment, and the projection shown in fig. 7C is performed on the view frustum shown in fig. 7A.
As shown in fig. 7C, the first partial view frustum 720 of the depth-of-field part is projected along the illumination direction 760 to obtain a corresponding illumination viewport 7201, and the two second partial view frustums are projected along the illumination direction to obtain corresponding illumination viewports 7101 and 7301. Based on this, the first shadow map set may include one first shadow map generated based on the illumination viewport 7201, and the second shadow map set may include two second shadow maps generated based on the illumination viewport 7101 and the illumination viewport 7301, respectively.
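As a sketch of this projection step (assuming a directional light and a helper that supplies the eight world-space corners of a partial view frustum, neither of which is specified by the patent), the orthographic bounds of an illumination viewport can be obtained by projecting the corners onto the light's axes:

```cpp
#include <algorithm>
#include <array>
#include <cfloat>

// Minimal vector type and dot product for the sketch.
struct Vec3 { float x, y, z; };
static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct OrthoBounds { float minX, maxX, minY, maxY, minZ, maxZ; };

// Sketch: project the 8 corners of a partial view frustum onto the light's
// right/up/direction axes to get the orthographic bounds of the illumination
// viewport from which that sub-block's shadow map is rendered.
OrthoBounds LightViewportFor(const std::array<Vec3, 8>& frustumCorners,
                             const Vec3& lightRight, const Vec3& lightUp,
                             const Vec3& lightDir) {
    OrthoBounds b{FLT_MAX, -FLT_MAX, FLT_MAX, -FLT_MAX, FLT_MAX, -FLT_MAX};
    for (const Vec3& p : frustumCorners) {
        b.minX = std::min(b.minX, Dot(p, lightRight)); b.maxX = std::max(b.maxX, Dot(p, lightRight));
        b.minY = std::min(b.minY, Dot(p, lightUp));    b.maxY = std::max(b.maxY, Dot(p, lightUp));
        b.minZ = std::min(b.minZ, Dot(p, lightDir));   b.maxZ = std::max(b.maxZ, Dot(p, lightDir));
    }
    return b;
}
```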
It should be noted that, in the embodiment of the present application, when the number of the first shadow maps is two or more, the resolution between different first shadow maps is not limited; similarly, when the number of the second shadow maps is two or more, the resolution between the different second shadow maps is not limited.
550. And rendering the depth-of-field focusing area by using the first shadow mapping set, and rendering the non-depth-of-field focusing area by using the second shadow mapping set to obtain an image corresponding to the scene to be rendered.
In the embodiment of the present application, for any pixel point in the focus region of the depth of field, a first shadow map corresponding to the pixel point may be searched from the first shadow map set, and the pixel point is rendered by using the corresponding first shadow map. And aiming at the pixel points in the non-depth focus area, searching corresponding second shadow maps from the second shadow map set, and rendering the pixel points by utilizing the corresponding second shadow maps.
In the foregoing embodiment, the view frustum may be divided according to the depth-of-field focusing area in the scene to be rendered, so as to obtain at least one first partial view frustum corresponding to the depth-of-field focusing area and at least one second partial view frustum corresponding to the non-depth-of-field focusing area. And generating a shadow map with higher resolution for the first part of the view cones obtained after division, and generating a shadow map with lower resolution for the second part of the view cones. On one hand, the focus area of the depth of field where the focus point is located can be guaranteed to achieve a shadow rendering effect with higher quality; on the other hand, by reducing the shadow map resolution of the non-depth-of-field focusing area, the calculation amount of shadow rendering can be reduced, the calculation resources consumed by the shadow rendering are reduced, and the rendering performance is optimized.
In some embodiments, the depth of field focus area may be identified based on the aforementioned object segmentation method, and may include the complete focus object accurately (as shown in fig. 6B). After the electronic device divides the depth of field portion from the view frustum by the region boundary of the depth of field focusing region, the electronic device may stop dividing the depth of field portion in the step 530. That is, the depth-of-field focal region may correspond to a first partial viewing frustum.
Accordingly, in performing step 540, the electronic device determines that the first shadow map set includes a first shadow map corresponding to the first partial view frustum. When the foregoing step 550 is executed to render the depth focus area, the entire focus object included in the depth focus area may share the same first shadow map for rendering, and there is no problem that one part of the focus object is rendered by using a shadow map with one resolution, and another part is rendered by using a shadow map with another resolution, so that the seam problem in the shadow rendering may be improved.
Referring to fig. 8, fig. 8 is a flowchart illustrating another image rendering method according to an embodiment of the disclosure. As shown in fig. 8, the method may include the steps of:
810. the method includes detecting an input user focus operation and identifying a focus point indicated by the user focus operation in a scene to be rendered.
820. And determining a depth of field focusing area corresponding to the focusing point in the scene to be rendered.
830. According to the depth-of-field focusing area, the view cones of the electronic equipment are divided to obtain at least two first part view cones corresponding to the depth-of-field focusing area and at least two second part view cones corresponding to the non-depth-of-field focusing area.
840. And determining a first shadow map corresponding to each first partial view cone to obtain a first shadow map set, and determining a second shadow map corresponding to each second partial view cone to obtain a second shadow map set.
In the embodiment of the present application, when performing step 830, the electronic device divides the view frustum at the boundaries of the depth-of-field focus area to obtain a depth-of-field part, and then further divides the depth-of-field part to obtain at least two first partial view frustums corresponding to the depth-of-field focus area. The at least two first partial view frustums may include: a first focused view frustum and at least one first unfocused view frustum. A first unfocused view frustum is any first partial view frustum of the depth-of-field part other than the first focused view frustum; the number of first unfocused view frustums is not limited and may be one or more.
For example, the electronic device may take the focus point as the center and extend by a preset length in the depth direction on both sides of the focus point to obtain the first focused view frustum. Similar to the description of fig. 7A and 7B, if the focus point is in the middle of the depth-of-field part, dividing out the first focused view frustum produces two first unfocused view frustums; if the focus point is at either end of the depth-of-field part, dividing out the first focused view frustum produces one first unfocused view frustum. If the remaining part of the depth-of-field part excluding the first focused view frustum is divided further, several first unfocused view frustums may be produced.
Optionally, when the depth-of-field focus area is too large, the electronic device may divide the depth-of-field part of the view frustum into two or more first partial view frustums. Here, an overly large depth-of-field focus area may mean that the length of the depth-of-field focus area in the depth direction is greater than a threshold, which may be set according to actual service requirements and is not specifically limited.
Accordingly, in performing the aforementioned step 840, the first shadow map set may include two or more first shadow maps corresponding to the first focused view frustum and the respective first unfocused view cones. The resolution of the first shadow map corresponding to the first unfocused view cone is lower than that of the first shadow map corresponding to the first focused view cone. It should be noted that, if the depth-of-field focused region corresponds to two or more first unfocused view cones, the resolutions of the first shadow maps corresponding to different first unfocused view cones may not be limited.
That is, within the depth-of-field focus area, attention is most concentrated at the focus point, so the first shadow map corresponding to the first focused view frustum can be given the highest resolution, while the resolution of the shadow maps for other regions of the depth-of-field focus area farther from the focus point can be reduced appropriately, further reducing the amount of computation.
Optionally, the resolution of the first shadow map corresponding to the first unfocused view cone may be inversely related to the distance from the first unfocused view cone to the focused point. That is, the farther the first unfocused view cone is from the focused point, the smaller the resolution of the first shadow map generated based on the first unfocused view cone, further reducing the amount of computation of shadow rendering. The distance from the first unfocused view cone to the focus point may refer to a distance from a space divided by the first unfocused view cone in the scene to be rendered to the focus point, and may be represented by a distance from a center point of the first unfocused view cone to the focus point in a depth direction, for example. For example, assume that the scene to be rendered includes a virtual building and a virtual street view around the building, and the virtual building occupies a relatively large area in the entire scene to be rendered. The electronic device recognizes that the focus point falls into the virtual building, and recognizes the entire virtual building as the depth-of-field focus area, so that the depth-of-field focus area has a large length in the depth direction. If the whole virtual building is rendered by using the shadow map with higher resolution, the calculated amount of shadow rendering is too large, and the rendering performance is influenced. Thus, the entire virtual building can be divided in cones into a first focused cone centered on the focused point and several first unfocused cones.
The first focused view frustum generates the first shadow map with the highest resolution, and the resolution of the first shadow maps corresponding to the other first unfocused view frustums is reduced. The resolutions of the first shadow maps corresponding to the first unfocused view frustums may be the same, or the first unfocused view frustum farthest from the focus point may be given the lowest resolution. It should be noted that the lowest resolution among the first shadow maps is still higher than the highest resolution among the second shadow maps.
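An illustrative sketch of this resolution rule follows; the concrete resolutions and the falloff curve are assumptions, not taken from the patent:

```cpp
#include <algorithm>
#include <cmath>

// Sketch: the first focused frustum gets the highest resolution; each first
// unfocused frustum's resolution falls off with its depth distance to the
// focus point, while staying above the highest second-shadow-map resolution.
int FirstShadowMapResolution(float frustumCenterDepth, float focusDepth,
                             bool isFocusedFrustum) {
    const int kFocusedRes = 4096;   // highest resolution, at the focus point
    const int kMinFirstRes = 1024;  // still above any second shadow map
    if (isFocusedFrustum) return kFocusedRes;
    float dist = std::fabs(frustumCenterDepth - focusDepth);
    int res = static_cast<int>(kFocusedRes / (1.0f + dist)); // decreases with distance
    return std::max(res, kMinFirstRes);
}
```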
Furthermore, in this embodiment, the electronic device may divide two or more second partial view cones from the view cone when performing step 830. Correspondingly, the second shadow map set includes two or more second shadow maps, and the resolution corresponding to each second shadow map may be the same or different, and is not limited specifically.
Optionally, if the resolutions of the different second shadow maps are different, the resolution corresponding to the second shadow map may be determined according to the following rule:
1. The resolution of the second shadow map corresponding to a second partial view frustum is negatively correlated with the distance from that second partial view frustum to the viewpoint. That is, similar to CSM, the farther away the second partial view frustum is, the lower the resolution of the corresponding second shadow map. An object farther from the viewpoint occupies fewer pixels, so the resolution of its shadow map can be reduced appropriately.
2. The resolution of the second shadow map corresponding to the second partial view cone is inversely related to the distance of the second partial view cone from the focal point. That is, the farther the second partial view frustum is from the focal point, the lower the resolution of the corresponding second shadow map. The farther from the focus point, the lower the attention, and the effect of shadow rendering can be reduced appropriately.
The distance from the second partial view cone to the focus point may refer to a distance from a space divided by the second partial view cone in the scene to be rendered to the focus point, and may be represented by a distance from a center point of the second partial view cone to the focus point in a depth direction, for example.
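A sketch covering the two distance-based rules above, plus the uniform option mentioned earlier; the base resolution, minimum resolution, and falloff are illustrative assumptions:

```cpp
#include <algorithm>

// Sketch: distToViewpoint / distToFocus are the depth distances from the
// second partial frustum's centre to the viewpoint or to the focus point.
enum class SecondMapRule { ByViewpointDistance, ByFocusDistance, Uniform };

int SecondShadowMapResolution(SecondMapRule rule,
                              float distToViewpoint, float distToFocus) {
    const int kBaseRes = 512;   // lower than every first shadow map
    const int kMinRes = 128;
    float dist = 0.0f;
    switch (rule) {
        case SecondMapRule::ByViewpointDistance: dist = distToViewpoint; break;
        case SecondMapRule::ByFocusDistance:     dist = distToFocus;     break;
        case SecondMapRule::Uniform:             return kBaseRes;        // same for all
    }
    return std::max(static_cast<int>(kBaseRes / (1.0f + dist)), kMinRes);
}
```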
850. A depth of focus area is rendered using the first shadow map set.
860. And rendering the non-depth-of-field focus area by using the second shadow map set, and blurring the non-depth-of-field focus area, to obtain an image corresponding to the scene to be rendered.
In the embodiment of the present application, in order to further highlight the depth-of-field focus area, blurring may be performed on the non-depth-of-field focus area. The blurring may include, but is not limited to, Gaussian blur, median filtering, Gaussian filtering, and the like, which is not specifically limited.
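As a minimal sketch of such blurring restricted to the non-depth-of-field focus area (a single horizontal Gaussian pass over a grayscale row; a full implementation would also run a vertical pass and operate on color):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch: applies a horizontal Gaussian pass only where the mask marks a
// non-depth-of-field pixel, leaving the focus area sharp.
void BlurNonDofRow(std::vector<float>& row, const std::vector<bool>& nonDofMask,
                   float sigma = 2.0f, int radius = 4) {
    std::vector<float> kernel(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        kernel[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        sum += kernel[i + radius];
    }
    for (float& k : kernel) k /= sum;

    std::vector<float> out = row;
    const int n = static_cast<int>(row.size());
    for (int x = 0; x < n; ++x) {
        if (!nonDofMask[x]) continue;               // keep the focus area untouched
        float acc = 0.0f;
        for (int i = -radius; i <= radius; ++i) {
            int xi = std::min(std::max(x + i, 0), n - 1);
            acc += kernel[i + radius] * row[xi];
        }
        out[x] = acc;
    }
    row = out;
}
```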
Illustratively, referring to fig. 9, fig. 9 is an example of the effect of shadow rendering according to one embodiment, generated based on the image rendering method disclosed in the embodiments of the present application. As shown in fig. 9, the depth-of-field focus area is the deer in fig. 9 together with its surrounding area; the deer and its surroundings are imaged sharply, with higher shadow rendering quality. The areas in front of and behind the deer are non-depth-of-field focus areas; since a blurring operation is performed on them, the lower quality of their shadow rendering is difficult to notice with the naked eye.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an image rendering apparatus according to an embodiment. The image rendering apparatus shown in fig. 10 may be applied to the electronic device described above, and as shown in fig. 10, the image rendering apparatus 1000 may include: an obtaining module 1010, a first determining module 1020, a dividing module 1030, a second determining module 1040, and a rendering module 1050;
an obtaining module 1010, configured to obtain a focus point in a scene to be rendered;
a first determining module 1020, configured to determine a depth-of-field focusing area corresponding to a focusing point in a scene to be rendered;
the dividing module 1030 is configured to divide a view frustum of the electronic device according to the depth-of-field focusing area to obtain at least one first partial view frustum corresponding to the depth-of-field focusing area and at least one second partial view frustum corresponding to the non-depth-of-field focusing area; the non-depth-of-field focusing area is other areas except the depth-of-field focusing area in the scene to be rendered;
a second determining module 1040, configured to determine a first shadow map corresponding to each first partial view cone to obtain a first shadow map set, and determine a second shadow map corresponding to each second partial view cone to obtain a second shadow map set; wherein the resolution of the second shadow map is lower than the resolution of the first shadow map;
the rendering module 1050 is configured to render the depth-of-field focused region by using the first shadow map set, and render the non-depth-of-field focused region by using the second shadow map set, so as to obtain an image corresponding to the scene to be rendered.
In one embodiment, the first determining module 1020 may be further configured to identify other pixel points in the scene to be rendered that belong to the same object as the focus point; a depth of focus region is determined that includes the focus point as well as other pixel points.
The dividing module 1030 is further configured to divide a first partial view frustum corresponding to the depth-of-field focusing region from the view frustum.
In one embodiment, the dividing module 1030 is further configured to divide the view frustum into a first focused view frustum and at least one first unfocused view frustum, the first focused view frustum being centered on the focus point;
the second determining module 1040 is further configured to generate first shadow maps corresponding to the first focused viewing cones and first shadow maps corresponding to each of the first unfocused viewing cones, resulting in a first shadow map set.
Optionally, the resolution of the first shadow map corresponding to the first unfocused cone may be inversely related to the distance from the first unfocused cone to the focused point.
Optionally, the dividing module 1030 is further configured to divide a first focused viewing cone and at least one first unfocused viewing cone corresponding to the depth-of-field focused region from the viewing cones when the length of the depth-of-field focused region in the depth direction is greater than the threshold.
In one embodiment, the dividing module 1030 is further configured to divide at least two second partial view frustums corresponding to the non-depth-of-field focus area from the view frustum;
the second determining module 1040 is further configured to determine a second shadow map corresponding to each second partial view frustum, so as to obtain a second shadow map set. The resolution of the second shadow map corresponding to each second partial view frustum is negatively correlated with the distance from that second partial view frustum to the viewpoint; or the resolution of the second shadow map corresponding to each second partial view frustum is negatively correlated with the distance from that second partial view frustum to the focus point; or the resolutions of the second shadow maps corresponding to the second partial view frustums are the same.
In one embodiment, the image rendering apparatus 1000 may further include: and a blurring module.
And the blurring module can be used for blurring the non-depth-of-field focusing area.
In one embodiment, the obtaining module 1010 is configured to detect an input user focusing operation and identify a focusing point indicated in a scene to be rendered by the user focusing operation.
Therefore, by implementing the image rendering device disclosed in the foregoing embodiment, the cone of view may be divided according to the focus area of the depth of field in the scene to be rendered, and a shadow map with a higher resolution may be generated for the depth of field portion obtained after division, and a shadow map with a lower resolution may be generated for the non-depth of field portion. On one hand, the depth-of-field focusing area where the focusing point is positioned can achieve a shadow rendering effect with higher quality; on the other hand, by reducing the shadow map resolution of the non-depth-of-field focusing area, the calculation amount of shadow rendering can be reduced, the calculation resources consumed by the shadow rendering are reduced, and the rendering performance is optimized.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an electronic device according to an embodiment.
As shown in fig. 11, the electronic device 1100 may include:
a memory 1110 in which executable program code is stored;
a processor 1120 coupled with the memory 1110;
the processor 1120 calls the executable program code stored in the memory 1110 to execute any one of the image rendering methods disclosed in the embodiments of the present application.
It should be noted that the electronic device shown in fig. 11 may further include components, which are not shown, such as a power supply, an input key, a camera, a speaker, a screen, an RF circuit, a Wi-Fi module, a bluetooth module, and a sensor, which are not described in detail in this embodiment.
The embodiment of the application discloses a computer readable storage medium which stores a computer program, wherein the computer program is used for realizing any image rendering method disclosed by the embodiment of the application when being executed by a processor.
Embodiments of the present application disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform any of the image rendering methods disclosed in embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the size of the serial number of each process described above does not mean that the execution sequence is necessarily sequential, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer accessible memory. Based on such understanding, the technical solutions of the present application, which essentially or partly contribute to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, or a network device, etc., and may specifically be a processor in the computer device) to execute some or all of the steps of the above methods of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by associated hardware instructed by a program, and the program may be stored in a computer-readable storage medium, including a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other memory capable of storing data, a magnetic tape, or any other computer-readable medium capable of storing data.
The image rendering method, image rendering apparatus, electronic device, and storage medium disclosed in the embodiments of the present application have been described in detail above, and specific examples have been used herein to explain the principles and implementations of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.
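As a minimal illustrative sketch of the principle described above, and not the implementation disclosed in the embodiments, the following C++ fragment partitions a view frustum along the depth axis around a depth-of-field focusing area and assigns each resulting partial view frustum a shadow map resolution that is highest for the focusing area and decreases with the distance from the focus point. The type names, helper functions, and numeric policies (the three-way split, the halving rule, the 256-texel floor) are hypothetical examples.

```cpp
// Illustrative sketch only: frustum partitioning around a depth-of-field
// focusing area and per-slice shadow map resolution assignment.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct PartialFrustum {
    float nearZ;        // near depth of this slice, in view space
    float farZ;         // far depth of this slice, in view space
    bool  inFocusArea;  // true if the slice covers the depth-of-field focusing area
};

struct ShadowMapDesc {
    PartialFrustum slice;
    int resolution;     // side length of the shadow map, in texels
};

// Split the depth range [nearZ, farZ] into up to three slices: before the
// focusing area, the focusing area itself, and behind the focusing area.
static std::vector<PartialFrustum> partitionFrustum(float nearZ, float farZ,
                                                    float focusNear, float focusFar) {
    std::vector<PartialFrustum> slices;
    if (focusNear > nearZ) slices.push_back({nearZ, focusNear, false});
    slices.push_back({std::max(nearZ, focusNear), std::min(farZ, focusFar), true});
    if (focusFar < farZ)   slices.push_back({focusFar, farZ, false});
    return slices;
}

// Assign a shadow map resolution per slice: the focusing-area slice keeps the
// full base resolution, and the other slices get a resolution that decreases
// as their distance from the focus depth grows (a simple negative correlation).
static std::vector<ShadowMapDesc> assignResolutions(const std::vector<PartialFrustum>& slices,
                                                    float focusDepth, int baseResolution) {
    std::vector<ShadowMapDesc> maps;
    for (const PartialFrustum& s : slices) {
        int res = baseResolution;
        if (!s.inFocusArea) {
            float dist = std::fabs(0.5f * (s.nearZ + s.farZ) - focusDepth);
            // Example policy: halve the resolution for every 10 depth units of
            // distance from the focus depth, but never drop below 256x256.
            int halvings = static_cast<int>(dist / 10.0f) + 1;
            res = std::max(256, baseResolution >> halvings);
        }
        maps.push_back({s, res});
    }
    return maps;
}

int main() {
    // Hypothetical numbers: the camera frustum spans depths 0.1..100 and the
    // depth-of-field focusing area spans depths 20..30 around the focus point.
    std::vector<PartialFrustum> slices = partitionFrustum(0.1f, 100.0f, 20.0f, 30.0f);
    std::vector<ShadowMapDesc> maps = assignResolutions(slices, 25.0f, 2048);
    for (const ShadowMapDesc& m : maps) {
        std::printf("slice [%.1f, %.1f] focus=%d -> %dx%d shadow map\n",
                    m.slice.nearZ, m.slice.farZ, (int)m.slice.inFocusArea,
                    m.resolution, m.resolution);
    }
    return 0;
}
```

In this sketch, only the slices outside the focusing area ever lose resolution, which mirrors the idea that shadow rendering quality is preserved where the user is focusing while computing resources are saved elsewhere.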

Claims (11)

1. An image rendering method applied to an electronic device, the method comprising:
acquiring a focus point in a scene to be rendered;
determining a depth-of-field focusing area corresponding to the focus point in the scene to be rendered;
dividing a view frustum of the electronic device according to the depth-of-field focusing area, to obtain at least one first partial view frustum corresponding to the depth-of-field focusing area and at least one second partial view frustum corresponding to a non-depth-of-field focusing area, wherein the non-depth-of-field focusing area is an area in the scene to be rendered other than the depth-of-field focusing area;
determining a first shadow map corresponding to each first partial view frustum to obtain a first shadow map set, and determining a second shadow map corresponding to each second partial view frustum to obtain a second shadow map set, wherein a resolution of the second shadow map is lower than a resolution of the first shadow map; and
rendering the depth-of-field focusing area by using the first shadow map set and rendering the non-depth-of-field focusing area by using the second shadow map set, to obtain an image corresponding to the scene to be rendered.
2. The method of claim 1, wherein the depth-of-field focusing area corresponds to one first partial view frustum, and the first shadow map set comprises a first shadow map corresponding to the first partial view frustum; and the determining a depth-of-field focusing area corresponding to the focus point in the scene to be rendered comprises:
identifying other pixel points that belong to the same object as the focus point in the scene to be rendered; and
determining the depth-of-field focusing area comprising the focus point and the other pixel points.
3. The method of claim 1, wherein the depth-of-field focusing area corresponds to at least two first partial view frustums; the at least two first partial view frustums comprise a first focused view frustum and at least one first unfocused view frustum, the first focused view frustum being centered on the focus point; and the first shadow map set comprises a first shadow map corresponding to the first focused view frustum and a first shadow map corresponding to each first unfocused view frustum;
wherein a resolution of the first shadow map corresponding to each first unfocused view frustum is lower than a resolution of the first shadow map corresponding to the first focused view frustum.
4. The method of claim 3, wherein the at least two first partial view frustums comprise at least two first unfocused view frustums, and a resolution of the first shadow map corresponding to each first unfocused view frustum is negatively correlated with a distance from the first unfocused view frustum to the focus point.
5. The method of claim 3, wherein the at least two first partial view frustums are obtained by dividing the view frustum after the electronic device determines that a length of the depth-of-field focusing area in a depth direction is greater than a threshold.
6. The method according to claim 1, wherein the non-depth-of-field focusing area corresponds to at least two second partial view frustums, and the second shadow map set comprises a second shadow map corresponding to each of the at least two second partial view frustums;
wherein a resolution of the second shadow map corresponding to each second partial view frustum is negatively correlated with a distance from the second partial view frustum to a viewpoint; or
a resolution of the second shadow map corresponding to each second partial view frustum is negatively correlated with a distance from the second partial view frustum to the focus point; or
the resolutions of the second shadow maps corresponding to the respective second partial view frustums are the same.
7. The method of claim 1, further comprising:
performing blurring processing on the non-depth-of-field focusing area.
8. The method according to any one of claims 1-7, wherein the acquiring a focus point in a scene to be rendered comprises:
detecting an input user focus operation; and
identifying, in the scene to be rendered, a focus point indicated by the user focus operation.
9. An image rendering apparatus applied to an electronic device, the apparatus comprising:
an acquisition module, configured to acquire a focus point in a scene to be rendered;
a first determining module, configured to determine a depth-of-field focusing area corresponding to the focus point in the scene to be rendered;
a dividing module, configured to divide a view frustum of the electronic device according to the depth-of-field focusing area, to obtain at least one first partial view frustum corresponding to the depth-of-field focusing area and at least one second partial view frustum corresponding to a non-depth-of-field focusing area, wherein the non-depth-of-field focusing area is an area in the scene to be rendered other than the depth-of-field focusing area;
a second determining module, configured to determine a first shadow map corresponding to each first partial view frustum to obtain a first shadow map set, and determine a second shadow map corresponding to each second partial view frustum to obtain a second shadow map set, wherein a resolution of the second shadow map is lower than a resolution of the first shadow map; and
a rendering module, configured to render the depth-of-field focusing area by using the first shadow map set and render the non-depth-of-field focusing area by using the second shadow map set, to obtain an image corresponding to the scene to be rendered.
10. An electronic device, comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
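For illustration only, and again not the claimed implementation itself, the following C++ sketch shows how the two shadow map sets recited in claim 1 and the blurring of claim 7 might fit together per pixel: points whose depth falls inside the depth-of-field focusing area are tested against the higher-resolution shadow map, other points against the lower-resolution one, and only the non-focus pixels are then blurred. The ShadowMap and Pixel types, the constant depth bias, the 4x resolution ratio, and the box blur are all hypothetical simplifications.

```cpp
// Illustrative sketch only: per-pixel shadow lookup against two shadow map
// sets of different resolution, followed by blurring of non-focus pixels.
#include <algorithm>
#include <cstdio>
#include <vector>

struct ShadowMap {
    int resolution;            // side length in texels (finer for the focus set)
    std::vector<float> depth;  // light-space occluder depth stored per texel
};

struct Pixel {
    float viewDepth;   // depth of the shaded point in camera space
    float u, v;        // projected light-space coordinates, in [0, 1]
    float lightDepth;  // depth of the shaded point in light space
    float color;       // shaded gray value, for illustration only
};

// Classic shadow-map test: the point is lit if it is not farther from the
// light than the stored occluder depth (plus a small bias against acne).
static bool isLit(const ShadowMap& sm, float u, float v, float lightDepth) {
    int x = std::clamp(static_cast<int>(u * sm.resolution), 0, sm.resolution - 1);
    int y = std::clamp(static_cast<int>(v * sm.resolution), 0, sm.resolution - 1);
    return lightDepth <= sm.depth[y * sm.resolution + x] + 0.001f;
}

int main() {
    // Hypothetical setup: the focusing area spans view depths 20..30; the
    // first (focus) shadow map is 4x finer than the second (non-focus) one.
    const float focusNear = 20.0f, focusFar = 30.0f;
    ShadowMap firstMap{1024, std::vector<float>(1024 * 1024, 0.5f)};
    ShadowMap secondMap{256, std::vector<float>(256 * 256, 0.5f)};

    std::vector<Pixel> image = {
        {25.0f, 0.3f, 0.3f, 0.4f, 1.0f},  // in the focusing area, lit
        {25.0f, 0.6f, 0.6f, 0.7f, 1.0f},  // in the focusing area, shadowed
        {60.0f, 0.2f, 0.2f, 0.4f, 1.0f},  // behind the focusing area, lit
        {60.0f, 0.8f, 0.8f, 0.7f, 1.0f},  // behind the focusing area, shadowed
    };

    // Shade each pixel with the shadow map set that matches its depth range.
    for (Pixel& p : image) {
        bool inFocus = p.viewDepth >= focusNear && p.viewDepth <= focusFar;
        const ShadowMap& sm = inFocus ? firstMap : secondMap;
        if (!isLit(sm, p.u, p.v, p.lightDepth)) p.color *= 0.3f;  // darken shadowed pixels
    }

    // Blur only the pixels outside the focusing area; a crude 1D box blur
    // stands in for a real depth-of-field filter here.
    std::vector<float> output(image.size());
    for (int i = 0; i < static_cast<int>(image.size()); ++i) {
        bool inFocus = image[i].viewDepth >= focusNear && image[i].viewDepth <= focusFar;
        if (inFocus) { output[i] = image[i].color; continue; }
        float sum = 0.0f;
        int count = 0;
        for (int j = i - 1; j <= i + 1; ++j) {
            if (j < 0 || j >= static_cast<int>(image.size())) continue;
            sum += image[j].color;
            ++count;
        }
        output[i] = sum / count;
    }

    for (size_t i = 0; i < image.size(); ++i) {
        std::printf("pixel %zu: view depth %.0f -> %.2f\n", i, image[i].viewDepth, output[i]);
    }
    return 0;
}
```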
CN202211142280.0A 2022-09-19 2022-09-19 Image rendering method and device, electronic equipment and storage medium Pending CN115423921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211142280.0A CN115423921A (en) 2022-09-19 2022-09-19 Image rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211142280.0A CN115423921A (en) 2022-09-19 2022-09-19 Image rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115423921A true CN115423921A (en) 2022-12-02

Family

ID=84205098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211142280.0A Pending CN115423921A (en) 2022-09-19 2022-09-19 Image rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115423921A (en)

Similar Documents

Publication Publication Date Title
US11756223B2 (en) Depth-aware photo editing
CN110136082B (en) Occlusion rejection method and device and computer equipment
CN109242961B (en) Face modeling method and device, electronic equipment and computer readable medium
US20160373723A1 (en) Device and method for augmented reality applications
CN109889724B (en) Image blurring method and device, electronic equipment and readable storage medium
CN110324532B (en) Image blurring method and device, storage medium and electronic equipment
US9392248B2 (en) Dynamic POV composite 3D video system
CN110677621B (en) Camera calling method and device, storage medium and electronic equipment
CN112672139A (en) Projection display method, device and computer readable storage medium
CN116324878A (en) Segmentation for image effects
CN109640070A (en) A kind of stereo display method, device, equipment and storage medium
CN115330640B (en) Illumination mapping noise reduction method, device, equipment and medium
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
CN110022430A (en) Image weakening method, device, mobile terminal and computer readable storage medium
CN111951192A (en) Shot image processing method and shooting equipment
CN113793257A (en) Image processing method and device, electronic equipment and computer readable storage medium
Liu et al. Stereo-based bokeh effects for photography
CN109842791B (en) Image processing method and device
CN110177216B (en) Image processing method, image processing device, mobile terminal and storage medium
CN115423921A (en) Image rendering method and device, electronic equipment and storage medium
CN108280887B (en) Shadow map determination method and device
CN115359172A (en) Rendering method and related device
CN115120970A (en) Baking method, baking device, baking equipment and storage medium of virtual scene
US11436794B2 (en) Image processing method, apparatus and device
CN114288647B (en) Artificial intelligence game engine based on AI Designer, game rendering method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination