CN113079409B - Picture rendering method and picture rendering device - Google Patents
- Publication number: CN113079409B (application CN202110328129.5A)
- Authority: CN (China)
- Prior art keywords: light source, illumination intensity, random, pixel, rendering
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- H—ELECTRICITY → H04—ELECTRIC COMMUNICATION TECHNIQUE → H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION → H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD] → H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB] → H04N21/43—Processing of content or additional data; Client middleware → H04N21/44—Processing of video elementary streams, e.g. rendering scenes according to encoded video stream scene graphs
  - H04N21/44012—involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
  - H04N21/4402—involving reformatting operations of video signals for household redistribution, storage or real-time display
Abstract
The invention provides a picture rendering method comprising the following steps: acquiring scene object information and light source information in a video picture; dividing the video picture into a plurality of pixel regions, each comprising a plurality of pixel points; selecting, in each pixel region, a random pixel point whose illumination intensity is unknown, and determining its illumination intensity according to the position of the random pixel point, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points; repeating the selection and rendering steps until a light source modification instruction or a light source determination instruction is received; if the light source modification instruction is received, returning to the acquisition step; if the light source determination instruction is received, determining the illumination intensity of all pixel points in the video picture according to their positions, the scene object information and the light source information, and rendering the video picture based on the illumination intensity of all pixel points. The invention also provides a picture rendering device.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a picture rendering method and a picture rendering apparatus.
Background
With the development of video image processing technology (such as for game video images), users place ever higher demands on video images, for example on their color saturation, resolution, and scene illumination.
However, as video resolutions grow, moving a light source in a game scene means the scene illumination must be re-baked, which takes a long time. Game artists therefore spend large amounts of time waiting for the adjusted rendering result, making it costly to adjust the scene illumination of game images.
Therefore, it is necessary to provide a screen rendering method and a screen rendering apparatus to solve the problems of the prior art.
Disclosure of Invention
The embodiments of the invention provide a picture rendering method and a picture rendering device that can adjust the scene illumination of a game image quickly and at low cost, solving the technical problem that conventional picture rendering methods and devices make such adjustment expensive.
The embodiment of the invention also provides a picture rendering method, which comprises the following steps:
s21, obtaining scene object information and light source information in the video picture;
s22, dividing the video picture into a plurality of pixel areas, wherein each pixel area comprises a plurality of pixel points;
s23, selecting a random pixel point with unknown illumination intensity in each pixel area, and determining the illumination intensity of the random pixel point according to the position of the random pixel point, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points;
s24, obtaining the difference value of the illumination intensity of the random pixel points of adjacent pixel areas, and setting two pixel areas whose difference value is larger than a third set value as difference pixel areas;
s25, selecting i random pixel points with unknown illumination intensity in each difference pixel area and j random pixel points with unknown illumination intensity in other pixel areas, and determining the illumination intensity of the random pixel points according to the positions of the random pixel points, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points; wherein i is greater than j;
s26, repeating the steps S24-S25 until a light source modification instruction or a light source determination instruction is received;
and S27, if the light source modification instruction is received, returning to the step S21, if a light source determination instruction is received, determining the illumination intensity of all the pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information and the light source information, and rendering the video picture based on the illumination intensity of all the pixel points.
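The adaptive refinement loop of steps S24-S25 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `preview_pass`, `intensity_of`, and the 1-D neighbourhood comparison are hypothetical simplifications (the patent compares all adjacent pixel areas in two dimensions).

```python
import random

def preview_pass(regions, intensity_of, diff_threshold, i=4, j=1):
    """One refinement pass of the adaptive preview (steps S24-S25).

    regions: dict mapping region id -> list of pixel coordinates whose
    intensity is still unknown; intensity_of(pixel) stands in for the
    ray-traced intensity computation described in the patent."""
    # S23/S25: sample one random unknown pixel per region
    sampled = {}
    for rid, unknown in regions.items():
        if unknown:
            p = random.choice(unknown)
            sampled[rid] = intensity_of(p)
    # S24: mark neighbouring regions whose sampled intensities differ
    # by more than the third set value (1-D neighbourhood for brevity)
    diff_regions = set()
    rids = sorted(sampled)
    for a, b in zip(rids, rids[1:]):
        if abs(sampled[a] - sampled[b]) > diff_threshold:
            diff_regions.update((a, b))
    # S25: allocate i samples to difference regions, j elsewhere (i > j)
    budget = {rid: (i if rid in diff_regions else j) for rid in regions}
    return diff_regions, budget
```

The effect is that regions straddling a shadow boundary or highlight (where neighbouring samples disagree) are refined faster than flat regions, so the preview sharpens first where the eye needs it.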
The embodiment of the invention also provides a picture rendering method, which comprises the following steps:
s31, obtaining scene object information and light source information in the video picture;
s32, dividing the video picture into a plurality of pixel areas, wherein each pixel area comprises a plurality of pixel points;
s33, selecting a random pixel point with unknown illumination intensity in each pixel area, and determining the illumination intensity of the random pixel point according to the position of the random pixel point, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points;
s34, repeating the step S33 until a light source modification instruction, a light source determination instruction or a user frame selection instruction is received;
s35, if the user frame selection instruction is received, going to step S36; if the light source modification instruction is received, returning to step S31; if a light source determination instruction is received, determining the illumination intensity of all the pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information and the light source information, and rendering the video picture based on the illumination intensity of all the pixel points;
s36, determining a specific pixel area according to the user frame selection instruction; selecting i random pixel points with unknown illumination intensity in each specific pixel region and j random pixel points with unknown illumination intensity in the other pixel regions, and determining the illumination intensity of the random pixel points according to the positions of the random pixel points, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points, wherein i is greater than j; repeating step S36 until a light source modification instruction, a light source determination instruction or a user frame selection instruction is received; then returning to step S35.
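Mapping the user's frame selection (step S36) to the pixel regions it covers can be sketched as a rectangle-overlap test. `regions_in_box` and the region-bounds representation are illustrative assumptions, not names from the patent.

```python
def regions_in_box(region_grid, box):
    """Resolve a user frame-selection into the 'specific' pixel regions.

    region_grid: dict of region id -> (x0, y0, x1, y1) region bounds;
    box: (x0, y0, x1, y1) rectangle dragged by the user."""
    bx0, by0, bx1, by1 = box
    selected = set()
    for rid, (x0, y0, x1, y1) in region_grid.items():
        # a region counts as specific if it overlaps the selection
        if x0 < bx1 and bx0 < x1 and y0 < by1 and by0 < y1:
            selected.add(rid)
    return selected
```

The selected regions then receive i samples per pass while all others receive j, exactly as the difference regions do in the second embodiment.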
An embodiment of the present invention further provides a device for rendering a picture, including:
the image information acquisition module is used for acquiring scene object information and light source information in a video image;
the pixel region dividing module is used for dividing the video picture into a plurality of pixel regions, and each pixel region comprises a plurality of pixel points;
the random rendering module is used for selecting a random pixel point with unknown illumination intensity in each pixel area and determining the illumination intensity of the random pixel point according to the position of the random pixel point, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points; obtaining the difference value of the illumination intensity of the random pixel points of adjacent pixel areas, and setting two pixel areas whose difference value is larger than a third set value as difference pixel areas; selecting i random pixel points with unknown illumination intensity in each difference pixel region and j random pixel points with unknown illumination intensity in the other pixel regions, and determining the illumination intensity of the random pixel points according to the positions of the random pixel points, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points; wherein i is greater than j;
the instruction receiving module is used for receiving a light source modification instruction or a light source determination instruction; and
and the global rendering module is used for determining the illumination intensity of all the pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information and the light source information, and rendering the video picture based on the illumination intensity of all the pixel points.
Embodiments of the present invention also provide a computer-readable storage medium, in which processor-executable instructions are stored, and the instructions are loaded by one or more processors to perform any of the above-mentioned image rendering methods.
Compared with prior-art picture rendering methods and devices, the picture rendering method and device of the invention divide the video picture into a plurality of pixel areas and render random pixel points within those areas, achieving a rapid pre-rendering of the video picture; by observing the rendering result of the random pixel points, the user can adjust the rendering effect at any time; the scene illumination of the game image can therefore be adjusted quickly and at low cost, effectively alleviating the technical problem that conventional picture rendering methods and devices make scene-illumination adjustment expensive.
Drawings
FIG. 1 is a flowchart illustrating a first embodiment of a method for rendering a screen according to the present invention;
FIG. 2 is a flowchart illustrating a step S103 of a screen rendering method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of the relative position relationship between a scene object and a light source in a corresponding video frame according to the present invention;
FIGS. 4a and 4b are schematic diagrams illustrating the reflection, scattering and attenuation of illumination energy of objects in a scene according to the present invention;
FIG. 5 is a schematic diagram illustrating a bounding box of the image rendering method according to the present invention;
FIG. 6 is a flowchart illustrating a second embodiment of the screen rendering method according to the present invention;
FIG. 7 is a flowchart illustrating a screen rendering method according to a third embodiment of the present invention;
FIG. 8 is a flowchart illustrating a step S706 of a screen rendering method according to a third embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a screen rendering apparatus according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating a screen rendering method and a screen rendering apparatus according to an embodiment of the present invention;
fig. 11a to 11e are schematic diagrams illustrating a screen rendering method and a screen rendering apparatus according to embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The picture rendering method and the picture rendering apparatus of the present invention are used for electronic devices that perform fast rendering processing and rendering effect adjustment on game pictures, including but not limited to wearable devices, head-mounted devices, medical health platforms, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, etc.), multiprocessor systems, consumer electronic devices, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The electronic equipment is preferably a game design terminal capable of rendering game pictures, so that the game design terminal can quickly adjust the scene illumination of the game images.
Referring to fig. 1, fig. 1 is a flowchart illustrating a screen rendering method according to a first embodiment of the present invention. The image rendering method of the present embodiment may be implemented by using the electronic device, and the image rendering method of the present embodiment includes:
step S101, scene object information and light source information in a video picture are acquired.
Step S102, dividing a video picture into a plurality of pixel regions, wherein each pixel region comprises a plurality of pixel points.
Step S103, selecting a random pixel point with unknown illumination intensity in each pixel area, and determining the illumination intensity of the random pixel point according to the position of the random pixel point, scene object information and light source information; and rendering the video picture based on the illumination intensity of the random pixel points.
And step S104, repeating the step S103 until a light source modification instruction or a light source determination instruction is received.
Step S105, if a light source modification instruction is received, returning to the step S101; and if a light source determining instruction is received, determining the illumination intensity of all the pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information and the light source information, and rendering the video picture based on the illumination intensity of all the pixel points.
The following describes in detail the screen rendering process of the screen rendering method according to the present embodiment.
In step S101, a screen rendering device (such as a game design terminal or the like) acquires scene object information and light source information in a video screen based on a light source setting instruction of a user.
The light source setting instruction is a user instruction for setting a corresponding light source in a video picture by a user so as to achieve scene illumination required by a game image. The scene object information is position information and size information of an object which has a function of completely or partially shielding light in the video picture. The light source information is position information and light intensity information of the light source set by a user. Subsequently, the process goes to step S102.
In step S102, the screen rendering device divides the video screen acquired in step S101 into a plurality of pixel regions, wherein each pixel region includes a plurality of pixel points.
The user can choose the number of pixel areas according to how recognizable the rendering preview needs to be: to obtain a preview as quickly as possible, the number of pixel areas can be set small; to obtain a preview as accurate as possible, the number of pixel areas can be set large. Subsequently, the process goes to step S103.
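The partitioning of step S102 can be sketched as follows. `split_into_regions` and the square-region layout are illustrative assumptions; the patent does not fix a region shape. With 3×3 regions, one random pixel per region per pass costs one-ninth of a full render, matching the cost figure discussed in the text.

```python
def split_into_regions(width, height, region_size):
    """Partition a width x height frame into square pixel regions
    (step S102). Each region lists its pixel coordinates; edge regions
    are clipped to the frame. Fewer, larger regions -> faster preview;
    more, smaller regions -> more accurate preview."""
    regions = []
    for y0 in range(0, height, region_size):
        for x0 in range(0, width, region_size):
            region = [(x, y)
                      for y in range(y0, min(y0 + region_size, height))
                      for x in range(x0, min(x0 + region_size, width))]
            regions.append(region)
    return regions
```

For a 6×6 frame with `region_size=3`, this yields 4 regions of 9 pixels each, so each preview pass touches 4 pixels instead of 36.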
In step S103, the image rendering device selects a random pixel point with unknown illumination intensity in each pixel region, and determines the illumination intensity of the random pixel point according to the position of the random pixel point, scene object information in the video image, and light source information; and then rendering the video picture based on the illumination intensity of the random pixel points.
Since only one random pixel point is acquired per pixel region, if, for example, each pixel region includes 9 pixel points, each rendering pass costs only one-ninth of a full rendering pass; with more pixel points per region, even more pre-rendering time is saved.
Referring to fig. 2, fig. 2 is a flowchart illustrating the step S103 of the first embodiment of the image rendering method according to the present invention. The step S103 includes:
in step S201, the image rendering device obtains a first-level tracking ray according to the position of the random pixel and the position of the scene object. Specifically, as shown in fig. 3, the video image includes random pixel points a and random pixel points B belonging to different pixel regions, the scene object of the video image includes a scene object C, a scene object D, and a scene object E, and the light source of the video image includes a light source F.
The ray cast from the viewpoint (the human eye) connecting random pixel point A with scene object C, and the ray connecting random pixel point B with scene object D, are first-level tracing rays. Subsequently, the process goes to step S202.
In step S202, the image rendering apparatus obtains an nth-level tracing ray corresponding to the first-level tracing ray based on the relative positions of the different scene objects and the incident angle of the first-level tracing ray, where n equals 2. Since light may be reflected or scattered between different scene objects, this step obtains the second-level tracing rays corresponding to the first-level tracing rays; in fig. 3 these are the ray between scene object C and scene object E and the ray between scene object D and light source F. Subsequently, the process goes to step S203.
In step S203, the image rendering device obtains an (n+1)th-level tracing ray corresponding to the nth-level tracing ray based on the relative positions of the different scene objects and the incident angle of the nth-level tracing ray.
The third-level tracing ray corresponding to the second-level tracing ray may be a ray between the scene object E and the light source F in fig. 3.
Then, with n = n + 1, step S203 is repeated until n is greater than the first set value, at which point all tracing rays of the different levels have been obtained. Since the light intensity of the tracing rays attenuates from level to level, the user may, to increase calculation speed, discard tracing rays that have been reflected or scattered many times. Subsequently, the process goes to step S204.
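The level-by-level collection of tracing rays (steps S201-S203) can be sketched as follows. `collect_trace_rays` and `next_rays` are hypothetical names; `next_rays(ray)` stands in for the geometric reflection/scattering lookup against the scene objects.

```python
def collect_trace_rays(first_rays, next_rays, max_level):
    """Gather tracing rays level by level: start from the first-level
    rays (eye -> scene object) and repeatedly derive the next level
    from each ray's hit point, stopping once the level count exceeds
    max_level (the 'first set value') or every path has reached the
    light source."""
    levels = [list(first_rays)]
    while len(levels) < max_level:
        nxt = [r2 for r in levels[-1] for r2 in next_rays(r)]
        if not nxt:               # all paths terminated at the light source
            break
        levels.append(nxt)
    return levels
```

In the fig. 3 example, the first level holds rays A-C and B-D, the second level C-E and D-F, and the third level E-F, after which every path has reached light source F.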
Step S204, the picture rendering device obtains the illumination energy provided by the light source to the tracking light according to the positions of the light source and the scene object.
Since the illumination intensity of a random pixel point ultimately originates from the illumination energy generated by the light source, all levels of tracing rays obtained in steps S202 and S203 eventually connect to the light source, so the illumination energy of each tracing ray can be obtained from its level. Subsequently, the process goes to step S205.
In step S205, the image rendering device performs attenuation calculation on the illumination energy corresponding to each tracking ray based on the shape and material of the scene object.
Scene objects made of different materials reflect or scatter incident light differently. If the scene object is made of a mirror material such as glass, the incident light is almost entirely reflected at a set angle. If the scene object has a smooth surface of metal, plastic, etc., the incident light is reflected with a certain amount of scattering. If the scene object has a rough surface of stone or similar materials, the incident light is directly scattered at the surface. The (n+1)th-level tracing ray is therefore reflected or scattered at the surface of the scene object to generate the nth-level tracing ray, and the degree of attenuation from the (n+1)th-level to the nth-level tracing ray is determined by the degree of reflection or scattering.
For example, if the scene object is a mirror material and the nth-level tracing ray lies in the reflected-light region of the (n+1)th-level tracing ray, the nth-level tracing ray may carry 80% of the illumination energy of the (n+1)th-level tracing ray, as shown in fig. 4a.
If the scene object is a rough material and the nth-level tracing ray lies in the scattered-light region of the (n+1)th-level tracing ray, the nth-level tracing ray may carry 20% of the illumination energy of the (n+1)th-level tracing ray, and so on, as shown in fig. 4b.
Therefore, when calculating the attenuation of the illumination energy, the image rendering device first calculates, based on the material of the scene object, the surface absorption attenuation of the illumination energy for each tracing ray, i.e. the light absorption of the scene object. The device then calculates, based on the shape of the scene object and the incident angle of the tracing ray, the surface scattering attenuation of the illumination energy for each tracing ray, such as the energy attenuation due to reflection or scattering shown in figs. 4a and 4b. Finally, the device performs the attenuation calculation on the illumination energy of each tracing ray based on the surface absorption attenuation and the surface scattering attenuation.
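The two-part attenuation of step S205 can be sketched as follows. The absorption and lobe factors are illustrative stand-ins for the material-dependent values in the text (a mirror passing ~80% of the energy along the reflected direction, a rough surface ~20% into any one scattered direction); only the 80%/20% figures come from the patent, the rest are assumptions.

```python
def attenuate(energy, material, in_reflection_lobe):
    """Attenuate one tracing ray's illumination energy at a surface:
    first surface absorption (material-dependent), then surface
    scattering (shape- and angle-dependent lobe factor)."""
    absorption = {"mirror": 0.05, "smooth": 0.15, "rough": 0.30}[material]
    energy *= 1.0 - absorption            # surface absorption attenuation
    if material == "mirror":
        lobe = 0.8 if in_reflection_lobe else 0.0   # per fig. 4a
    elif material == "smooth":
        lobe = 0.5                        # reflection with some scatter
    else:
        lobe = 0.2                        # diffuse scattering, per fig. 4b
    return energy * lobe                  # surface scattering attenuation
```

The final pixel intensity of step S206 is then the sum of the attenuated energies of all tracing-ray paths that reach the random pixel point.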
Step S206, the image rendering device obtains all attenuated tracking rays that finally pass through the random pixel point, and superimposes the illumination energy of the attenuated tracking rays, thereby obtaining the illumination intensity of the random pixel point.
Further, since scene objects have varied shapes, computing the directions of the tracing rays directly can be very expensive. Therefore, before step S103 is executed, the image rendering apparatus may convert each scene object into a corresponding triangular object that encloses the original scene object.
And then the picture rendering device sets bounding boxes corresponding to the scene objects according to the distance between the adjacent triangular objects, wherein each bounding box comprises at least one triangular object. This can be seen in particular in fig. 5. The triangular objects may include circular scene objects, rectangular scene objects, and hexagonal scene objects.
Here, the screen rendering apparatus may set two first-level bounding boxes corresponding to the scene objects based on the maximum distance between adjacent triangle objects, please refer to bounding box a and bounding box b in fig. 5.
Then the screen rendering apparatus sets two second-level bounding boxes of the first-level bounding boxes based on the maximum distance between adjacent triangle objects in the first-level bounding boxes, see bounding box b1 and bounding box b2 in fig. 5.
Then, with m = m + 1, the picture rendering device sequentially obtains bounding boxes of all levels until the maximum distance between adjacent triangular objects in the largest-level bounding box is smaller than a second set value.
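The recursive bounding-box construction of fig. 5 can be sketched as follows. For brevity the triangular objects are 1-D intervals `(lo, hi)`; the idea carries over to axis-aligned boxes in 2-D or 3-D. `build_bvh` and `min_gap` (the "second set value") are hypothetical names.

```python
def build_bvh(boxes, min_gap):
    """Recursively split a sorted set of triangle-object bounds at the
    widest gap between neighbours, producing two child bounding boxes
    per level, until the widest remaining gap is below min_gap."""
    boxes = sorted(boxes)
    if len(boxes) < 2:
        return boxes
    # find the largest gap between adjacent triangle objects
    gaps = [(boxes[k + 1][0] - boxes[k][1], k) for k in range(len(boxes) - 1)]
    widest, k = max(gaps)
    if widest < min_gap:
        return boxes              # one leaf bounding box around all of them
    left = build_bvh(boxes[:k + 1], min_gap)
    right = build_bvh(boxes[k + 1:], min_gap)
    return [left, right]          # two child bounding boxes per level
```

During tracing, any subtree whose bounding box a ray misses can be skipped wholesale, which is the calculation saving the text describes.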
In step S202, the image rendering apparatus obtains an nth-level tracing ray corresponding to the first-level tracing ray based on the triangle object in the bounding box through which the first-level tracing ray passes and the incident angle of the first-level tracing ray.
In step S203, the image rendering apparatus obtains an n +1 th-level tracing ray corresponding to the nth-level tracing ray based on the triangular object in the bounding box through which the nth-level tracing ray passes and the incident angle of the nth-level tracing ray.
In step S205, the image rendering device performs an attenuation operation on the illumination energy corresponding to each tracking ray based on the shape and material of the triangle object corresponding to the scene object.
In the embodiment, the scene object is converted into the triangular object, so that the calculation amount of the tracking ray in the later-stage calculation is reduced; and the setting of the bounding box can directly simplify and delete scene objects outside the bounding box, which are involved in the calculation and are not passed by the tracking ray, so that the calculation amount of the tracking ray in the later calculation is further simplified.
In step S104, the image rendering apparatus repeatedly executes step S103, so that the number of rendered random pixel points in each pixel region increases and the rendering of the video image becomes progressively more accurate, allowing the user to assess the rendering effect in advance.
If the user decides that the current rendering effect is the final rendering effect, a light source determination instruction can be issued; if, based on the existing rendering effect, the user decides that the light source position needs to be modified to improve the result, a light source modification instruction can be issued. Subsequently, the process goes to step S105.
In step S105, if the screen rendering apparatus receives a light source modification instruction of the user, the light source position is modified according to the light source modification instruction, and then the process returns to step S101, and the screen rendering is performed again after the screen rendering effect is cleared.
If the picture rendering device receives a light source determination instruction of a user, the picture rendering device determines the illumination intensity of all pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information and the light source information, and renders the video picture based on the illumination intensity of all the pixel points, namely, performs global rendering on the video picture.
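The final global rendering on light-source confirmation reduces to evaluating every pixel point rather than random samples. A minimal sketch, assuming the same per-pixel evaluation used during the preview passes (`intensity_of` is a hypothetical stand-in for the full ray-traced computation):

```python
def global_render(width, height, intensity_of):
    """Global rendering triggered by a light source determination
    instruction: evaluate intensity_of(x, y) for every pixel point of
    the video picture, instead of one random point per region."""
    return [[intensity_of(x, y) for x in range(width)]
            for y in range(height)]
```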
Thus, the screen rendering process of the screen rendering method of the present embodiment is completed.
According to the picture rendering method, the video picture is divided into a plurality of pixel areas, and random pixel point rendering is carried out on the pixel areas, so that rapid pre-rendering of the video picture is realized; the user can adjust the rendering effect at any time by observing the rendering result of the random pixel points; therefore, the scene illumination of the game image can be adjusted quickly and at low cost.
Referring to fig. 6, fig. 6 is a flowchart illustrating a screen rendering method according to a second embodiment of the present invention. The image rendering method of the present embodiment may be implemented by using the electronic device, and the image rendering method of the present embodiment includes:
step S601, scene object information and light source information in a video picture are acquired;
step S602, dividing a video picture into a plurality of pixel regions, wherein each pixel region comprises a plurality of pixel points;
step S603, selecting a random pixel point with unknown illumination intensity in each pixel area, determining the illumination intensity of the random pixel point according to the position of the random pixel point, scene object information and light source information, and rendering a video picture based on the illumination intensity of the random pixel point;
step S604, obtaining a difference value of the illumination intensity of the random pixel points of the adjacent pixel areas, and setting two pixel areas with the difference value larger than a third set value as difference pixel areas;
step S605, selecting i random pixel points with unknown illumination intensity in each difference pixel area and j random pixel points with unknown illumination intensity in the other pixel areas, and determining the illumination intensity of the random pixel points according to the positions of the random pixel points, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points, wherein i is larger than j;
step S606, repeating steps S604 and S605 until a light source modification instruction or a light source determination instruction is received;
step S607, if a light source modification instruction is received, returning to step S601, if a light source determination instruction is received, determining the illumination intensities of all the pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information, and the light source information, and rendering the video picture based on the illumination intensities of all the pixel points.
The following describes in detail the screen rendering process of the screen rendering method according to the present embodiment.
The contents of steps S601 to S603 are the same as or similar to the contents of steps S101 to S103 of the first embodiment of the screen rendering method, and refer to the related descriptions of steps S101 to S103 of the first embodiment of the screen rendering method.
In step S604, users pay different degrees of attention to different pixel areas of the video picture. A pixel area whose illumination intensity changes little receives little attention, that is, it has little influence on the user's judgment of the video picture rendering result. A pixel area with a large change in illumination intensity, such as a shadow area displaced by a change in light source position, has a large influence on that judgment.
In order to increase the speed of the pre-rendering, the image rendering device can perform accelerated rendering on the pixel area with larger judgment influence on the video image rendering result. Specifically, the image rendering device may obtain the difference pixel region based on a difference value between illumination intensities of random pixels adjacent to the pixel region. The difference pixel area is an area with a large change of illumination intensity in the video picture, and when the light source in the video picture moves, the illumination intensity in the difference pixel area changes first, and the change amplitude is the largest. Subsequently, the process goes to step S605.
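The neighbour-difference test of step S604 can be sketched as below, assuming for illustration that the pixel areas form a one-dimensional sequence of sampled intensities and that `threshold` plays the role of the third set value; the function name and layout are assumptions.

```python
def difference_regions(region_intensity, threshold):
    """Flag each adjacent pair of regions whose sampled-intensity difference
    exceeds the set value (the 'third set value' of step S604).
    Returns the set of region indices belonging to a difference pixel area."""
    flagged = set()
    for idx in range(len(region_intensity) - 1):
        if abs(region_intensity[idx] - region_intensity[idx + 1]) > threshold:
            flagged.update((idx, idx + 1))
    return flagged
```

Because the sampled intensities change on every pre-rendering pass, the flagged set is recomputed each time step S604 is repeated, so the accelerated regions track the moving shadows.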
In step S605, the image rendering device performs accelerated rendering on the difference pixel region, specifically, the image rendering device selects i random pixel points with unknown illumination intensity in the difference pixel region and j random pixel points with unknown illumination intensity in other pixel regions; then, the picture rendering device determines the illumination intensity of the random pixel points according to the positions of the random pixel points, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points; where i is greater than j.
In this way, the number of rendered random pixel points grows quickly in the pixel areas with greater influence on the judgment of the video picture rendering effect, while it grows only slightly in the pixel areas with less influence, so computing resources are not greatly affected; the user can therefore preferentially and accurately judge the overall rendering effect of the video picture based on the rendering effect of the difference pixel areas. Then, the process goes to step S606.
In step S606, the picture rendering apparatus repeatedly executes steps S604 to S605, so that the number of rendered random pixel points in each pixel area of the video picture increases, the rendering effect of the video picture becomes more and more accurate, and the user can pre-judge the rendering effect of the video picture.
If the user determines that the rendering effect is the final rendering effect, a light source determination instruction can be sent out; if the user determines, based on the existing rendering effect, that the light source position needs to be modified to improve the rendering effect, a light source modification instruction may be issued. Subsequently, it goes to step S607.
In step S607, if the image rendering device receives a light source modification instruction from the user, the light source position is modified according to the light source modification instruction, and then the process returns to step S601, and the image rendering is performed again after the image rendering effect is cleared.
If the picture rendering device receives a light source determination instruction of a user, the picture rendering device determines the illumination intensity of all pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information and the light source information, and renders the video picture based on the illumination intensity of all the pixel points, namely, performs global rendering on the video picture.
Thus, the screen rendering process of the screen rendering method of the present embodiment is completed.
In order to further increase the rendering speed of the pixel region with a large influence, the image rendering apparatus may adjust the pre-rendering process of the different pixel region based on the illumination intensity difference value of the random pixel point of the adjacent pixel region in the process of repeatedly performing step S604 to step S605. Specifically, if the difference value of the illumination intensities of the random pixel points in the difference pixel region and the adjacent pixel region is larger, the number of the random pixel points with unknown illumination intensities selected in the difference pixel region is larger.
For example, suppose a video picture includes pixel areas a1, a2, A3, a4, and a5 that are adjacent in sequence, with illumination intensities of 100, 80, 60, 20, and 10, respectively (the numerical values are illustrative and unitless, used only for comparison). The difference between the illumination intensities of pixel area a1 and pixel area a2 is 20, that between a2 and A3 is 20, that between A3 and a4 is 40, and that between a4 and a5 is 10. The illumination intensity therefore changes most between pixel area A3 and pixel area a4, and the numbers of random pixel points selected in pixel areas a1, a2, A3, a4, and a5 may be in the ratio 2:2:4:4:1.
As the pre-rendering operation is performed, the illumination intensity of the pixel region a1 becomes 1000, the illumination intensity of the pixel region a2 is 600, the illumination intensity of the pixel region A3 is 500, the illumination intensity of the pixel region a4 is 400, and the illumination intensity of the pixel region a5 is 200.
The difference value between the illumination intensities of the pixel region a1 and the pixel region a2 is 400, the difference value between the illumination intensities of the pixel region a2 and the pixel region A3 is 100, the difference value between the illumination intensities of the pixel region A3 and the pixel region a4 is 100, and the difference value between the illumination intensities of the pixel region a4 and the pixel region a5 is 200. At this time, the illumination intensity between the pixel area a1 and the pixel area a2 is greatly changed, so the number of random pixels in the selected pixel area a1, the selected pixel area a2, the selected pixel area A3, the selected pixel area a4, and the selected pixel area a5 may be 4:4:1:2: 2.
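One illustrative way to turn the worked example above into per-region sample budgets is to give the two regions straddling each above-threshold difference i samples per pass and every other region j samples. This allocation rule is an assumption: the patent's exact ratios (for instance the 1 assigned to a5 in the first pass) suggest it may additionally reduce the budget of the least-changing regions.

```python
def sample_counts(intensities, threshold, i=4, j=2):
    """Per-pass sample budgets: i for each region in a difference pair
    (neighbour difference > threshold), j for the rest. Assumes i > j."""
    counts = [j] * len(intensities)
    for idx in range(len(intensities) - 1):
        if abs(intensities[idx] - intensities[idx + 1]) > threshold:
            counts[idx] = counts[idx + 1] = i
    return counts
```

With the first-pass intensities 100, 80, 60, 20, 10 and a threshold of 30 this yields budgets 2:2:4:4:2, concentrating samples around the A3/a4 boundary as in the example; with the later intensities 1000, 600, 500, 400, 200 and a threshold of 300 the emphasis shifts to a1/a2, matching the 4:4 lead of the second ratio.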
Therefore, the rendering speed of different pixel areas is adjusted based on the change in the illumination intensity of each pixel area during picture pre-rendering, so that the shadow position change areas are rapidly pre-rendered in order of their degree of illumination intensity change (that is, the degree of user attention), realizing effective adjustment of the pre-rendering speed of different pixel areas during the picture pre-rendering process.
On the basis of the first embodiment, the image rendering method of this embodiment performs differentiated random pixel point rendering on different pixel areas based on the difference value of the illumination intensity of the random pixel points in the pixel areas, so as to further improve the pre-rendering efficiency of the video image and achieve quick and accurate pre-rendering of the video image.
Referring to fig. 7, fig. 7 is a flowchart illustrating a screen rendering method according to a third embodiment of the present invention. The image rendering method of the present embodiment may be implemented by using the electronic device, and the image rendering method of the present embodiment includes:
step S701, scene object information and light source information in a video picture are obtained;
step S702, dividing a video picture into a plurality of pixel regions, wherein each pixel region comprises a plurality of pixel points;
step S703, selecting a random pixel point with unknown illumination intensity in each pixel area, and determining the illumination intensity of the random pixel point according to the position of the random pixel point, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points;
step S704, repeating step S703 until a light source modification instruction, a light source determination instruction or a user frame selection instruction is received;
step S705, if a user frame selection instruction is received, turning to step S706; if a light source modification instruction is received, returning to the step S701; if a light source determining instruction is received, determining the illumination intensity of all pixel points in the video picture according to the positions of all pixel points in the video picture, scene object information and light source information, and rendering the video picture based on the illumination intensity of all pixel points;
step S706, determining a specific pixel area according to a user frame selection instruction; selecting i random pixel points with unknown illumination intensity in each specific pixel region and j random pixel points with unknown illumination intensity in other pixel regions, and determining the illumination intensity of the random pixel points according to the positions of the random pixel points, scene object information and light source information; rendering the video picture based on the illumination intensity of the random pixel points; wherein i is greater than j; repeating the step S706 until a light source modification instruction, a light source determination instruction or a user frame selection instruction is received; return to step S705.
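The prioritized sampling of step S706 (i new samples per pass in the user-framed specific pixel areas, j elsewhere, with i greater than j) could look like the sketch below; `shade` and all other names are assumptions standing in for the illumination calculation from pixel position, scene object information, and light source information.

```python
import random

def pass_with_focus(regions, focus_ids, lit, shade, i=3, j=1):
    """Step S706 sketch: user-framed (focus) regions get i new samples
    per pass, all other regions get j samples (i > j)."""
    for rid, region in enumerate(regions):
        budget = i if rid in focus_ids else j
        unknown = [p for p in region if p not in lit]
        for p in random.sample(unknown, min(budget, len(unknown))):
            lit[p] = shade(p)
    return lit
```

Running this pass repeatedly converges the framed detail first while the rest of the picture still fills in slowly, which is the behaviour the embodiment describes.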
The following describes in detail the screen rendering process of the screen rendering method according to the present embodiment.
The contents of steps S701 to S703 are the same as or similar to those of steps S101 to S103 of the first embodiment of the screen rendering method, and refer to the related descriptions of steps S101 to S103 of the first embodiment of the screen rendering method.
In step S704, the image rendering apparatus repeatedly executes step S703, so that the number of rendered random pixels in each pixel region in the video image increases, the image rendering effect of the video image becomes more and more accurate, and the user can pre-determine the rendering effect in the video image.
If the user determines that the rendering effect is the final rendering effect, a light source determination instruction can be sent out; if the user determines that the position of the light source needs to be modified based on the existing rendering effect so as to improve the rendering effect, a light source determining instruction can be sent; and if the user needs to confirm the rendering effect of the local detail in the video picture again, sending a user frame selection instruction.
In step S705, if a user frame selection instruction is received, the process goes to step S706 to perform accelerated adjustment of the prerender effect on the local details of the video frame.
If the image rendering device receives a light source modification instruction of a user, the light source position is modified according to the light source modification instruction, and then the step S701 is returned to, and the image rendering is performed again after the image rendering effect is cleared.
If the picture rendering device receives a light source determination instruction of a user, the picture rendering device determines the illumination intensity of all pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information and the light source information, and renders the video picture based on the illumination intensity of all the pixel points, namely, performs global rendering on the video picture.
In step S706, a user frame selection instruction specifies a pixel area for which the user requires an accelerated rendering effect. If such an instruction is received, the picture rendering apparatus determines the specific pixel area according to the user frame selection instruction.
Then, the picture rendering device selects i random pixel points with unknown illumination intensity in a specific pixel area and j random pixel points with unknown illumination intensity in other pixel areas, determines the illumination intensity of the random pixel points according to the positions of the random pixel points, scene object information and light source information, and renders a video picture based on the illumination intensity of the random pixel points; where i is greater than j. And repeating the step S706 until the light source modification instruction, the light source determination instruction or the user frame selection instruction is received again, and returning to the step S705.
Therefore, more and more random pixel points are rendered in the specific pixel area, while the number of random pixel points in the non-specific pixel areas that the user is not focused on increases only slightly, so computing resources are not greatly affected, and the user can preferentially and accurately judge the overall rendering effect of the video picture based on the rendering effect of the specific pixel area.
Further, in step S706, in this embodiment, the video frame may be pre-rendered based on the user frame area and the difference pixel area at the same time. Referring to fig. 8, fig. 8 is a flowchart illustrating the step S706 of the screen rendering method according to the third embodiment of the present invention. The step S706 includes:
step S801, the picture rendering device determines a specific pixel area according to a user frame selection instruction; selecting i random pixel points with unknown illumination intensity in each specific pixel region and j random pixel points with unknown illumination intensity in other pixel regions, and determining the illumination intensity of the random pixel points according to the positions of the random pixel points, the scene object information and the light source information; and rendering the video picture based on the illumination intensity of the random pixel points.
In step S802, the image rendering device obtains a difference value between the illumination intensities of the random pixels in the adjacent specific pixel regions, and sets two specific pixel regions having a difference value greater than a fourth set value as the difference pixel regions.
Step S803, the picture rendering device selects i random pixel points with unknown illumination intensity in each difference pixel region, j random pixel points with unknown illumination intensity in other specific pixel regions and k random pixel points with unknown illumination intensity in other pixel regions, and determines the illumination intensity of the random pixel points according to the positions of the random pixel points, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points; wherein i is greater than j, and j is greater than k; repeating the step S803 until a light source modification instruction, a light source determination instruction or a user frame selection instruction is received; return to step S705.
Of course, if the user considers the framed area to have higher priority than the difference pixel area, j may be set to be greater than or equal to i.
Therefore, in the pre-rendering process of the video picture, the specific pixel area selected by the user and the difference pixel area detected by the picture rendering device can be considered at the same time, and the pre-rendering processing is carried out on the random pixel points of the specific pixel area and the difference pixel area in priority.
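The three-tier budget of step S803 (i for the difference pixel areas, j for the other user-framed specific areas, k for everything else, with i greater than j and j greater than k) reduces to a small lookup, sketched here with assumed names and default values.

```python
def tier_counts(region_ids, difference_ids, focus_ids, i=4, j=2, k=1):
    """Step S803 sketch: per-pass sample budget for each region.
    Difference pixel areas get i, other user-framed areas get j,
    the remaining areas get k (i > j > k)."""
    return {rid: (i if rid in difference_ids
                  else j if rid in focus_ids
                  else k)
            for rid in region_ids}
```

Swapping the i and j defaults reproduces the variant in which the user's framed area outranks the difference pixel area.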
Thus, the screen rendering process of the screen rendering method of the present embodiment is completed.
On the basis of the first embodiment, the picture rendering method of this embodiment performs differentiated random pixel point rendering on different pixel areas based on the difference value of the illumination intensity of the random pixel points of the specific pixel area and/or the pixel area framed by the user, further improves the pre-rendering efficiency of the video picture, and realizes quick and accurate pre-rendering of the video picture.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of the image rendering apparatus of the present invention, where the image rendering apparatus of the present embodiment can be implemented by using the first embodiment of the image rendering method, and the image rendering apparatus 90 of the present embodiment includes an image information obtaining module 91, a pixel area dividing module 92, a random rendering module 93, an instruction receiving module 94, and a global rendering module 95.
The image information acquiring module 91 is configured to acquire scene object information and light source information in a video image; the pixel region dividing module 92 is configured to divide a video frame into a plurality of pixel regions, where each pixel region includes a plurality of pixel points; the random rendering module 93 is configured to select a random pixel point with unknown illumination intensity in each pixel region, and determine the illumination intensity of the random pixel point according to the position of the random pixel point, scene object information, and light source information; rendering the video picture based on the illumination intensity of the random pixel points; the instruction receiving module 94 is configured to receive a light source modification instruction or a light source determination instruction; the global rendering module 95 is configured to determine the illumination intensities of all the pixel points in the video image according to the positions of all the pixel points in the video image, the scene object information, and the light source information, and render the video image based on the illumination intensities of all the pixel points.
When the image rendering apparatus 90 of the present embodiment is used, first, the image information acquiring module 91 acquires scene object information and light source information in a video image based on a light source setting instruction of a user.
The light source setting instruction is a user instruction for setting a corresponding light source in a video picture by a user so as to achieve scene illumination required by a game image. The scene object information is position information and size information of an object which has a function of completely or partially shielding light in the video picture. The light source information is position information and light intensity information of the light source set by a user.
The pixel region division module 92 then divides the acquired video picture into a plurality of pixel regions, wherein each pixel region includes a plurality of pixel points.
The user can determine the number of the pixel areas according to the identification requirement of the user on the picture rendering, and if the user wants to realize the picture rendering preview as fast as possible, the number of the pixel areas can be set to be smaller; if the user wants to implement the screen rendering preview as accurately as possible, the number of pixel regions can be set to be large.
Then, the random rendering module 93 selects a random pixel point with unknown illumination intensity in each pixel region, and determines the illumination intensity of the random pixel point according to the position of the random pixel point, scene object information in the video picture and light source information; and then rendering the video picture based on the illumination intensity of the random pixel points.
The random rendering module 93 repeatedly performs the rendering operation of the random pixel points in the pixel regions, so that the number of rendered random pixel points in each pixel region in the video image is increased, the image rendering effect of the video image is more and more accurate, and a user can pre-judge the rendering effect in the video image.
Specifically, the random rendering module may further obtain the difference value of the illumination intensities of random pixel points in adjacent pixel areas, and set two pixel areas whose difference value is greater than a third set value as difference pixel areas; it then selects i random pixel points with unknown illumination intensity in each difference pixel area and j random pixel points with unknown illumination intensity in the other pixel areas, determines the illumination intensity of the random pixel points according to the positions of the random pixel points, the scene object information and the light source information, and renders the video picture based on the illumination intensity of the random pixel points, wherein i is greater than j.
In this way, the number of rendered random pixel points grows quickly in the pixel areas with greater influence on the judgment of the video picture rendering effect, while it grows only slightly in the pixel areas with less influence, so computing resources are not greatly affected; the user can therefore preferentially and accurately judge the overall rendering effect of the video picture based on the rendering effect of the difference pixel areas.
If the user determines that the rendering effect is the final rendering effect, a light source determination instruction may be sent to the instruction receiving module 94; if the user determines, based on the existing rendering effect, that the light source position needs to be modified to improve the rendering effect, a light source modification instruction may be issued to the instruction receiving module 94.
Finally, if the instruction receiving module 94 receives a light source modification instruction of the user, the light source position is modified according to the light source modification instruction, and then the image information obtaining module 91 returns to perform image rendering again after the image rendering effect is cleared.
If the instruction receiving module 94 receives a light source determination instruction of a user, the global rendering module 95 determines the illumination intensities of all the pixel points in the video image according to the positions of all the pixel points in the video image, the scene object information, and the light source information, and renders the video image based on the illumination intensities of all the pixel points, that is, performs global rendering on the video image.
This completes the screen rendering process of the screen rendering apparatus 90 of the present embodiment.
The specific picture rendering process of the picture rendering apparatus of this embodiment is the same as or similar to the processes in all embodiments of the picture rendering method, and please refer to the similar description in the embodiments of the picture rendering method.
The picture rendering device of the embodiment performs random pixel point rendering on the pixel area by dividing the video picture into a plurality of pixel areas, thereby realizing the rapid pre-rendering of the video picture; the user can adjust the rendering effect at any time by observing the rendering result of the random pixel points; therefore, the scene illumination of the game image can be adjusted quickly and at low cost.
The following describes a specific working principle of the screen rendering method and the screen rendering apparatus according to the present invention by using a specific embodiment. Referring to fig. 10 and 11a to 11e, fig. 10 is a flowchart illustrating a screen rendering method and a screen rendering apparatus according to an embodiment of the present invention, and fig. 11a to 11e are schematic diagrams illustrating a screen rendering method and a screen rendering apparatus according to an embodiment of the present invention.
The picture rendering method and the picture rendering device can be arranged on a game design terminal and used for designing and rendering the corresponding game video pictures. The picture rendering process comprises the following steps:
in step S1001, scene object information and light source information in a video frame are obtained, as shown in fig. 11a, where the scene object and the light source in the video frame have been converted into corresponding bounding boxes.
Step S1002, divide the video frame into a plurality of pixel regions, where each pixel region includes a plurality of pixels, and may be specifically shown in fig. 11 b.
Step S1003, selecting a random pixel point with unknown illumination intensity in each pixel area, and determining the illumination intensity of the random pixel point according to the position of the random pixel point, scene object information and light source information; and rendering the video picture based on the illumination intensity of the random pixel points.
In step S1004, step S1003 is repeated, and the pre-rendering effect of the video picture is gradually revealed, as shown in fig. 11c to 11 e.
Step S1005, if the user is not satisfied with the displayed prerendering effect, a light source modification instruction can be sent at any time, and after the scene object information and/or the light source information is modified, the step S1001 is returned; if the user is satisfied with the displayed pre-rendering effect, a light source determining instruction can be sent out, the game design terminal determines the illumination intensity of all the pixel points in the video picture according to the positions of all the pixel points in the video picture, the scene object information and the light source information, and renders the video picture based on the illumination intensity of all the pixel points.
Thus, the screen rendering method and the screen rendering process of the screen rendering device of the present embodiment are completed.
According to the picture rendering method and the picture rendering apparatus of the present invention, the video picture is divided into pixel areas and random pixel points in those areas are rendered progressively; the user can increase the number of samples or stop the iterative calculation at any time, and once the iteration reaches an acceptable visual quality the pre-rendering calculation can be ended, realizing rapid pre-rendering of the video picture. The user can adjust the rendering effect at any time by observing the rendering result of the random pixel points, so the scene illumination of the game image can be adjusted quickly and at low cost; this effectively solves the technical problem of the high cost of adjusting the scene illumination of the game image in conventional picture rendering methods and apparatuses.
Various operations of embodiments are provided herein. In one embodiment, the one or more operations may constitute computer readable instructions stored on one or more computer readable media, which, when executed by an electronic device, will cause the device to perform the operations. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Those skilled in the art will appreciate alternative orderings having the benefit of this description. Moreover, it should be understood that not all operations are necessarily present in each embodiment provided herein.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Each apparatus or system described above may perform the method of the corresponding method embodiment.
In summary, although the present invention has been disclosed through the foregoing embodiments, the serial numbers preceding the embodiments are used only for convenience of description and do not limit their order. Furthermore, the above embodiments are not intended to limit the present invention; those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention, and the scope of the present invention shall therefore be defined by the appended claims.
Claims (8)
1. A picture rendering method, comprising:
S21, obtaining scene object information and light source information of a video picture;
S22, dividing the video picture into a plurality of pixel regions, wherein each pixel region comprises a plurality of pixel points;
S23, selecting one random pixel point of unknown illumination intensity in each pixel region, and determining the illumination intensity of each random pixel point according to the position of the random pixel point, the scene object information and the light source information; and rendering the video picture based on the illumination intensity of the random pixel points;
S24, obtaining the difference between the illumination intensities of the random pixel points of adjacent pixel regions, and marking two pixel regions whose difference exceeds a third set value as difference pixel regions;
S25, selecting i random pixel points of unknown illumination intensity in each difference pixel region and j random pixel points of unknown illumination intensity in each other pixel region, and determining the illumination intensity of each random pixel point according to the position of the random pixel point, the scene object information and the light source information; and rendering the video picture based on the illumination intensity of the random pixel points, wherein i is greater than j;
S26, repeating steps S24-S25 until a light source modification instruction or a light source determination instruction is received;
S27, if the light source modification instruction is received, returning to step S21; if the light source determination instruction is received, determining the illumination intensity of all pixel points in the video picture according to the positions of all pixel points in the video picture, the scene object information and the light source information, and rendering the video picture based on the illumination intensity of all pixel points;
wherein the step of determining the illumination intensity of a random pixel point according to the position of the random pixel point, the scene object information and the light source information comprises:
S231, acquiring a first-level tracing ray according to the position of the random pixel point and the positions of the scene objects;
S232, acquiring a second-level tracing ray corresponding to the first-level tracing ray based on the relative positions of the different scene objects and the incident angle of the first-level tracing ray, and setting n = 2;
S233, acquiring an (n+1)th-level tracing ray corresponding to the nth-level tracing ray based on the relative positions of the different scene objects and the incident angle of the nth-level tracing ray, setting n = n + 1, and repeating step S233 until n exceeds a first set value;
S234, acquiring the illumination energy provided by the light source to the tracing rays according to the positions of the light source and the scene objects;
S235, performing an attenuation calculation on the illumination energy corresponding to each tracing ray based on the shapes and materials of the scene objects; and
S236, superposing the attenuated illumination energy of each tracing ray to obtain the illumination intensity of the corresponding random pixel point.
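The adaptive loop of steps S23-S26 — one sample per region, then extra samples where neighbouring regions disagree — can be sketched as follows. This is a minimal illustration, not the patented implementation: `trace_intensity`, the region layout, the round count, and the sample budgets `i_samples`/`j_samples` are all invented for the example.

```python
import random

def trace_intensity(pixel):
    # Stand-in for the ray-tracing pipeline of S231-S236 (hypothetical):
    # intensity simply falls off with the pixel's x coordinate here.
    return 1.0 / (1 + pixel[0])

def progressive_render(regions, threshold, rounds=3, i_samples=4, j_samples=1):
    intensity = {}  # pixel point -> known illumination intensity

    def shade(region, count):
        # S23/S25: shade up to `count` pixel points of still-unknown intensity
        unknown = [p for p in region if p not in intensity]
        for p in random.sample(unknown, min(count, len(unknown))):
            intensity[p] = trace_intensity(p)

    def region_mean(region):
        known = [intensity[p] for p in region if p in intensity]
        return sum(known) / len(known) if known else 0.0

    for region in regions:                 # S23: one sample per region
        shade(region, 1)
    for _ in range(rounds):                # S26: repeat until an instruction arrives
        flagged = set()                    # S24: find difference pixel regions
        for a, b in zip(regions, regions[1:]):
            if abs(region_mean(a) - region_mean(b)) > threshold:
                flagged.update((id(a), id(b)))
        for region in regions:             # S25: i samples in flagged, j elsewhere
            shade(region, i_samples if id(region) in flagged else j_samples)
    return intensity

# Two 8-pixel regions whose intensities differ sharply across the border.
regions = [[(x, 0) for x in range(8)], [(x, 0) for x in range(20, 28)]]
shaded = progressive_render(regions, threshold=0.05)
print(len(shaded))  # number of pixel points shaded so far (at most 16)
```

The point of the design is that expensive ray tracing is spent first where the picture changes fastest, while the rendered preview stays responsive to light-source edits.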
2. The picture rendering method according to claim 1, wherein the step of determining the illumination intensity of a random pixel point according to the position of the random pixel point, the scene object information and the light source information further comprises:
converting the scene objects into corresponding triangular objects; and
setting bounding boxes corresponding to the scene objects according to the distances between adjacent triangular objects, wherein each bounding box comprises at least one triangular object;
wherein step S232 specifically comprises: acquiring the nth-level tracing ray corresponding to the first-level tracing ray based on the triangular objects in the bounding boxes through which the first-level tracing ray passes and the incident angle of the first-level tracing ray;
step S233 specifically comprises: acquiring the (n+1)th-level tracing ray corresponding to the nth-level tracing ray based on the triangular objects in the bounding boxes through which the nth-level tracing ray passes and the incident angle of the nth-level tracing ray; and
step S235 specifically comprises: performing the attenuation calculation on the illumination energy corresponding to each tracing ray based on the shapes and materials of the triangular objects corresponding to the scene objects.
3. The picture rendering method according to claim 2, wherein the step of setting the bounding boxes corresponding to the scene objects according to the distances between adjacent triangular objects comprises:
S241, setting two first-level bounding boxes corresponding to the scene objects based on the maximum distance between adjacent triangular objects, and setting m = 1;
S242, setting two (m+1)th-level bounding boxes within each mth-level bounding box based on the maximum distance between adjacent triangular objects in that mth-level bounding box; and
S243, setting m = m + 1 and returning to step S242 until the maximum distance between adjacent triangular objects in each mth-level bounding box is smaller than a second set value.
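Steps S241-S243 amount to recursively splitting the triangle set at the widest gap between neighbouring triangles, stopping once the widest gap inside a box falls below the second set value. A one-dimensional sketch, with triangle centroids reduced to numbers (the function name, the 1-D simplification, and the gap-based split point are illustrative assumptions):

```python
def build_boxes(centroids, min_gap):
    # Hypothetical 1-D sketch of S241-S243: sort triangle centroids,
    # split the set in two at the widest gap (two child bounding boxes),
    # and recurse until the widest gap is below `min_gap` (S243).
    tris = sorted(centroids)
    if len(tris) < 2:
        return tris
    gaps = [b - a for a, b in zip(tris, tris[1:])]
    widest = max(range(len(gaps)), key=gaps.__getitem__)
    if gaps[widest] < min_gap:       # stop condition of S243
        return tris                  # leaf box keeps its triangles together
    return [build_boxes(tris[:widest + 1], min_gap),
            build_boxes(tris[widest + 1:], min_gap)]

tree = build_boxes([0.0, 0.1, 0.2, 5.0, 5.1, 9.0], min_gap=1.0)
print(tree)  # [[0.0, 0.1, 0.2], [[5.0, 5.1], [9.0]]]
```

The resulting hierarchy lets steps S232-S233 test a tracing ray against a few nested boxes instead of every triangle in the scene.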
4. The picture rendering method according to claim 1, wherein step S235 specifically comprises:
S2351, calculating the surface absorption attenuation of the illumination energy corresponding to each tracing ray based on the materials of the scene objects;
S2352, calculating the surface scattering attenuation of the illumination energy corresponding to each tracing ray based on the shapes of the scene objects and the incident angle of the tracing ray; and
S2353, performing the attenuation calculation on the illumination energy corresponding to each tracing ray based on the surface absorption attenuation and the surface scattering attenuation.
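A minimal sketch of the two attenuation terms of S2351-S2353, assuming a scalar absorption coefficient for the material and a Lambert-style cosine factor for scattering. Both modelling choices are assumptions for illustration, not taken from the claim:

```python
import math

def attenuate(energy, absorption, incident_angle_deg):
    # S2351: material absorption removes a fixed fraction of the energy
    # (absorption coefficient assumed scalar in [0, 1]).
    absorbed = energy * (1.0 - absorption)
    # S2352: scattering attenuation modelled as the cosine of the incident
    # angle, clamped to zero at and beyond grazing incidence (assumption).
    scatter = max(0.0, math.cos(math.radians(incident_angle_deg)))
    # S2353: combine both attenuations for this tracing ray.
    return absorbed * scatter

print(attenuate(100.0, absorption=0.2, incident_angle_deg=60.0))  # ~40.0
```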
5. The picture rendering method according to claim 1, wherein in step S25, the larger the difference between the illumination intensities of the random pixel points of a difference pixel region and its adjacent pixel region, the more random pixel points of unknown illumination intensity are selected in that difference pixel region.
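Claim 5 only requires that the sample count grow with the intensity difference; one hypothetical monotone mapping (the `base`, `gain`, and `cap` constants are invented for illustration):

```python
def samples_for(difference, base=1, gain=8, cap=16):
    # More unknown pixel points are sampled in regions whose neighbouring
    # intensities differ more, up to a cap (all constants are assumptions).
    return min(cap, base + int(gain * difference))

print(samples_for(0.1), samples_for(0.5), samples_for(5.0))  # 1 5 16
```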
6. A picture rendering method, comprising:
S31, obtaining scene object information and light source information of a video picture;
S32, dividing the video picture into a plurality of pixel regions, wherein each pixel region comprises a plurality of pixel points;
S33, selecting one random pixel point of unknown illumination intensity in each pixel region, and determining the illumination intensity of each random pixel point according to the position of the random pixel point, the scene object information and the light source information; and rendering the video picture based on the illumination intensity of the random pixel points;
S34, repeating step S33 until a light source modification instruction, a light source determination instruction or a user frame selection instruction is received;
S35, if the user frame selection instruction is received, proceeding to step S36; if the light source modification instruction is received, returning to step S31; and if the light source determination instruction is received, determining the illumination intensity of all pixel points in the video picture according to the positions of all pixel points in the video picture, the scene object information and the light source information, and rendering the video picture based on the illumination intensity of all pixel points;
S36, determining specific pixel regions according to the user frame selection instruction; selecting i random pixel points of unknown illumination intensity in each specific pixel region and j random pixel points of unknown illumination intensity in each other pixel region, and determining the illumination intensity of each random pixel point according to the position of the random pixel point, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points, wherein i is greater than j; repeating step S36 until a light source modification instruction, a light source determination instruction or a user frame selection instruction is received; and returning to step S35;
wherein the step of determining the illumination intensity of a random pixel point according to the position of the random pixel point, the scene object information and the light source information comprises:
S231, acquiring a first-level tracing ray according to the position of the random pixel point and the positions of the scene objects;
S232, acquiring a second-level tracing ray corresponding to the first-level tracing ray based on the relative positions of the different scene objects and the incident angle of the first-level tracing ray, and setting n = 2;
S233, acquiring an (n+1)th-level tracing ray corresponding to the nth-level tracing ray based on the relative positions of the different scene objects and the incident angle of the nth-level tracing ray, setting n = n + 1, and repeating step S233 until n exceeds a first set value;
S234, acquiring the illumination energy provided by the light source to the tracing rays according to the positions of the light source and the scene objects;
S235, performing an attenuation calculation on the illumination energy corresponding to each tracing ray based on the shapes and materials of the scene objects; and
S236, superposing the attenuated illumination energy of each tracing ray to obtain the illumination intensity of the corresponding random pixel point.
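The tracing pipeline S231-S236 can be sketched as a bounded bounce loop: follow the ray chain up to the first set value of levels, then superpose each ray's attenuated share of the light-source energy. The callback names and the toy scene below are illustrative assumptions:

```python
def trace_pixel(first_ray, bounce_limit, get_next_ray, light_energy, attenuation):
    # S231: start from the first-level tracing ray for this pixel point.
    rays, ray = [first_ray], first_ray
    # S232-S233: derive each next-level ray until the level limit
    # (the "first set value") is hit or the ray leaves the scene.
    for _level in range(2, bounce_limit + 1):
        ray = get_next_ray(ray)
        if ray is None:
            break
        rays.append(ray)
    # S234-S236: superpose the attenuated energy received along each ray.
    return sum(attenuation(r) * light_energy(r) for r in rays)

total = trace_pixel(
    first_ray=0,
    bounce_limit=4,
    get_next_ray=lambda r: r + 1 if r < 2 else None,  # toy scene: 3 ray levels
    light_energy=lambda r: 10.0,                      # flat source energy
    attenuation=lambda r: 0.5 ** r,                   # halve per level
)
print(total)  # 17.5
```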
7. The picture rendering method according to claim 6, wherein step S36 comprises:
S361, determining specific pixel regions according to the user frame selection instruction; selecting i random pixel points of unknown illumination intensity in each specific pixel region and j random pixel points of unknown illumination intensity in each other pixel region, and determining the illumination intensity of each random pixel point according to the position of the random pixel point, the scene object information and the light source information; and rendering the video picture based on the illumination intensity of the random pixel points;
S362, obtaining the difference between the illumination intensities of the random pixel points of adjacent specific pixel regions, and marking two specific pixel regions whose difference exceeds a fourth set value as difference pixel regions; and
S363, selecting i random pixel points of unknown illumination intensity in each difference pixel region, j random pixel points of unknown illumination intensity in each other specific pixel region, and k random pixel points of unknown illumination intensity in each other pixel region, and determining the illumination intensity of each random pixel point according to the position of the random pixel point, the scene object information and the light source information; rendering the video picture based on the illumination intensity of the random pixel points, wherein i is greater than j and j is greater than k; repeating step S363 until a light source modification instruction, a light source determination instruction or a user frame selection instruction is received; and returning to step S35.
8. The picture rendering method according to claim 7, wherein in step S363, the larger the difference between the illumination intensities of the random pixel points of a difference pixel region and its adjacent pixel region, the more random pixel points of unknown illumination intensity are selected in that difference pixel region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110328129.5A CN113079409B (en) | 2021-03-26 | 2021-03-26 | Picture rendering method and picture rendering device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113079409A CN113079409A (en) | 2021-07-06 |
CN113079409B (en) | 2021-11-26
Family
ID=76610757
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110328129.5A Active CN113079409B (en) | 2021-03-26 | 2021-03-26 | Picture rendering method and picture rendering device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113079409B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115731331A (en) * | 2021-08-30 | 2023-03-03 | 华为云计算技术有限公司 | Method and related device for rendering application |
CN118628639A (en) * | 2023-03-08 | 2024-09-10 | 深圳市腾讯网域计算机网络有限公司 | Shadow rendering method, shadow rendering device, computer equipment and storage medium |
CN116617659B (en) * | 2023-07-26 | 2023-12-12 | 深圳市凉屋游戏科技有限公司 | Game picture rendering method and device based on data synchronization period |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112419147A (en) * | 2020-04-14 | 2021-02-26 | 上海哔哩哔哩科技有限公司 | Image rendering method and device |
CN112465939A (en) * | 2020-11-25 | 2021-03-09 | 上海哔哩哔哩科技有限公司 | Panoramic video rendering method and system |
CN112465940A (en) * | 2020-11-25 | 2021-03-09 | 北京字跳网络技术有限公司 | Image rendering method and device, electronic equipment and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8665341B2 (en) * | 2010-08-27 | 2014-03-04 | Adobe Systems Incorporated | Methods and apparatus for rendering output images with simulated artistic effects from focused plenoptic camera data |
CN104200513A (en) * | 2014-08-08 | 2014-12-10 | 浙江传媒学院 | Matrix row-column sampling based multi-light-source rendering method |
CN106445430B (en) * | 2015-07-27 | 2019-06-21 | 常州市武进区半导体照明应用技术研究院 | For the instant photochromic rendering system and rendering method of interactive interface and its application |
CN106991717B (en) * | 2017-03-16 | 2020-12-18 | 珠海市魅族科技有限公司 | Image processing method and system applied to three-dimensional scene |
US10762695B1 (en) * | 2019-02-21 | 2020-09-01 | Electronic Arts Inc. | Systems and methods for ray-traced shadows of transparent objects |
CN111739142A (en) * | 2019-03-22 | 2020-10-02 | 厦门雅基软件有限公司 | Scene rendering method and device, electronic equipment and computer readable storage medium |
CN110211218B (en) * | 2019-05-17 | 2021-09-10 | 腾讯科技(深圳)有限公司 | Picture rendering method and device, storage medium and electronic device |
CN111080798B (en) * | 2019-12-02 | 2024-02-23 | 网易(杭州)网络有限公司 | Visibility data processing method of virtual scene and rendering method of virtual scene |
CN111311723B (en) * | 2020-01-22 | 2021-11-02 | 腾讯科技(深圳)有限公司 | Pixel point identification and illumination rendering method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113079409B (en) | Picture rendering method and picture rendering device | |
KR101893047B1 (en) | Image processing method and image processing device | |
US10403035B2 (en) | Rendering images using ray tracing with multiple light sources | |
CN107408294B (en) | Cross-level image blending | |
US10768799B2 (en) | Display control of an image on a display screen | |
WO2023142607A1 (en) | Image rendering method and apparatus, and device and medium | |
KR102317182B1 (en) | Apparatus for generating composite image using 3d object and 2d background | |
US10965864B2 (en) | Panoramic photograph with dynamic variable zoom | |
CN112957731B (en) | Picture rendering method, picture rendering device and storage medium | |
Javidnia et al. | Application of preconditioned alternating direction method of multipliers in depth from focal stack | |
CN112839172B (en) | Shooting subject identification method and system based on hand identification | |
WO2024148898A1 (en) | Image denoising method and apparatus, and computer device and storage medium | |
US8791947B1 (en) | Level of detail blurring and 3D model data selection | |
JP2015210672A (en) | Information processor, control method, program and recording medium | |
US10460427B2 (en) | Converting imagery and charts to polar projection | |
JP6619598B2 (en) | Program, recording medium, luminance calculation device, and luminance calculation method | |
US20210390665A1 (en) | Gpu-based lens blur rendering using depth maps | |
Wang et al. | An airlight estimation method for image dehazing based on gray projection | |
Mundhenk et al. | PanDAR: a wide-area, frame-rate, and full color lidar with foveated region using backfilling interpolation upsampling | |
Roth et al. | Guided high-quality rendering | |
US10922829B2 (en) | Zero order light removal in active sensing systems | |
CN116824082B (en) | Virtual terrain rendering method, device, equipment, storage medium and program product | |
US20240153177A1 (en) | Vector object blending | |
CN117911605A (en) | Three-dimensional scene construction method, apparatus, device, storage medium, and program product | |
Gui et al. | A portable low-cost 3D point cloud acquiring method based on structure light |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |