CN107016718B - Scene rendering method and device


Info

Publication number: CN107016718B
Application number: CN201710091613.4A
Authority: CN (China)
Prior art keywords: color, target video, video image, scene, baking
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN107016718A
Inventor: 王晔 (Wang Ye)
Current Assignee: Beijing QIYI Century Science and Technology Co Ltd
Original Assignee: Beijing QIYI Century Science and Technology Co Ltd
Application filed: 2017-02-20 by Beijing QIYI Century Science and Technology Co Ltd
Priority date: 2017-02-20
Publication of CN107016718A: 2017-08-04
Grant and publication of CN107016718B: 2020-06-19

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a scene rendering method and apparatus, the method being applied to a virtual three-dimensional scene and comprising the following steps: when a target video file is played, determining a target video image and the color of the target video image; looking up the baking map corresponding to the color of the target video image; and rendering the scene according to the baking map, where each baking map is obtained by bake-rendering the scene as illuminated by a surface light source of a given color. The light and shadow effect of different video images on the scene is thus simulated in real time, which greatly improves the user experience while also improving rendering efficiency.

Description

Scene rendering method and device
Technical Field
The invention relates to the technical field of computers, in particular to a scene rendering method and device.
Background
Virtual Reality (VR) technology creates interactive three-dimensional dynamic scenes that fuse multi-source information and simulate the user's physical behavior, producing a simulated environment in which the user has a sense of real presence. For example, a VR virtual cinema uses VR technology to simulate the viewing environment of a movie theater, so that a user wearing a VR headset can experience watching a film as if seated in a real cinema.
In a real cinema, however, the movie screen casts light onto its environment, so to better approximate the real viewing experience, the influence of the movie screen on the virtual environment should also be simulated in the VR virtual cinema. The prior art simulates the light and shadow effect of the movie screen on the scene in several ways. One simulates a point light source in the scene; because a real movie screen is effectively a surface light source, the resulting visual effect is poor. Another simulates a surface light source of a single fixed color; because it ignores the differing colors of successive video frames, it cannot reflect the influence of the video picture on the surroundings in real time. A third simulates the surface light source in real time according to the color of the video picture, but its computational cost is high.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a scene rendering method that overcomes the low efficiency and poor visual quality of prior-art techniques for rendering a scene with a video image as the light source.
Correspondingly, an embodiment of the invention further provides a scene rendering apparatus to ensure the implementation and application of the method.
To solve the above problems, the invention discloses a scene rendering method applied to a virtual three-dimensional scene, which specifically includes: when a target video file is played, determining a target video image and the color of the target video image; looking up the baking map corresponding to the color of the target video image; and rendering the scene according to the baking map, where each baking map is obtained by bake-rendering the scene as illuminated by a surface light source of a given color.
The invention further discloses a scene rendering apparatus, which specifically comprises: a target video determining module, configured to determine a target video image and the color of the target video image when a target video file is played; a target map determining module, configured to look up the baking map corresponding to the color of the target video image; and a scene rendering module, configured to render the scene according to the baking map, where each baking map is obtained by bake-rendering the scene as illuminated by a surface light source of a given color.
Compared with the prior art, the embodiment of the invention has the following advantages:
The scene rendering method is applied to a virtual three-dimensional scene. When a target video file is played in the three-dimensional scene, the baking map corresponding to the color of the target video image is determined, and the scene is rendered according to that baking map. The baking maps are obtained by bake-rendering the scene under illumination from colored surface light sources, i.e., each baking map captures the light and shadow effect of the scene illuminated by a surface light source of one color. Rendering the scene with the corresponding baking map while the target video image is played therefore simulates the light and shadow effect of a surface light source whose color matches the target video image. In this way the light and shadow effect of different video images on the scene is simulated in real time, greatly improving the user experience; moreover, the light and shadow effect of the surface light source corresponding to each video image does not need to be computed in real time, which improves rendering efficiency.
Drawings
FIG. 1 is a flow chart of steps of a scene rendering method embodiment of the present invention;
FIG. 2 is a flow chart of steps in another scene rendering method embodiment of the present invention;
FIG. 3 is a flow chart of steps in another scene rendering method embodiment of the present invention;
FIG. 4 is a block diagram of a scene rendering apparatus according to an embodiment of the present invention;
FIG. 5 is a block diagram of another embodiment of a scene rendering apparatus according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
To better explain the following embodiments, the notion of the scene is explained first. The scene is a virtual three-dimensional scene obtained through three-dimensional modeling, and it contains a video screen on which a video file can be played. When a video file is played in the scene, rendering the scene means rendering the three-dimensional models around the video screen with the light and shadow effect produced by taking the video screen as a light source. For example, in a virtual cinema, the corresponding scene includes a movie screen and its surroundings: the cinema walls, the roof, the seats, the audience watching the movie, and so on. When a movie is played in the virtual cinema, the light emitted by the movie screen affects this surrounding environment.
Example one
Referring to FIG. 1, a flowchart illustrating the steps of an embodiment of a scene rendering method according to the present invention is shown; the method may specifically include the following steps:
Step 101, determining a target video image and the color of the target video image when a target video file is played.
The scene rendering method provided by the embodiment of the invention is applied to a virtual three-dimensional scene, and the scene can be rendered in real time while a video file is played on the video screen in the scene. The virtual three-dimensional scene is simulated by a virtual reality device, and a user wearing the device experiences the corresponding scene; for example, a user wearing a VR headset can experience the effect of watching a movie in a cinema. The virtual reality device can store multiple video files, the video images of each file being stored in time order, and each frame of video image having a corresponding color. The video file played in the scene is determined as the target video file; when the virtual reality device plays it, its video images are played according to the frame rate and in time order. To render the scene to match a given frame while that frame is shown, the video image about to be played is determined as the target video image, so that its color can be calculated in real time and the scene rendered when it is played. For example, the average color of the target video image may be calculated on the GPU and taken as the color of the target video image.
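As an illustration of the average-color step, the following is a minimal sketch (one possible implementation assumed here, not the patented code) that computes the mean RGB color of a decoded frame on the CPU with NumPy; on a GPU the same result is commonly obtained by downsampling the frame texture to its 1x1 mip level.

```python
import numpy as np

def average_color(frame: np.ndarray) -> tuple[int, int, int]:
    """Return the mean RGB color of a decoded video frame.

    `frame` is assumed to be an (H, W, 3) uint8 array. The patent only
    says the average color "may be calculated using the GPU"; reading
    the 1x1 mip level of the frame texture is one common GPU equivalent.
    """
    r, g, b = frame.reshape(-1, 3).mean(axis=0)
    return int(round(r)), int(round(g)), int(round(b))
```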
Step 102, searching for the baking map corresponding to the color of the target video image.
Step 103, rendering the scene according to the baking map; the baking map is obtained by bake-rendering the scene as illuminated by a surface light source of a given color.
After the color of the target video image is determined, the baking map corresponding to that color can be looked up. A baking map is a texture map of the light and shadow effect of the illuminated scene, obtained by bake-rendering the scene as illuminated by a surface light source of a given color; it reflects the light and shadow effect on each three-dimensional object in the scene under that surface light source, and baking maps correspond one-to-one with colors. The surface light source has the same size as the video screen in the scene. All the baking maps can be stored on a server, or stored in advance on the virtual reality device. If they are stored on a server, the virtual reality device sends a baking map search request to the server after determining the color of the target video image, and the server finds the corresponding baking map according to the color information in the request and returns it to the device. If the baking maps are stored on the virtual reality device, the corresponding map can be looked up locally after the color of the target video image is determined. Once found, the baking map is applied back onto the scene to render it. Determining the color of the target video image and rendering the scene are both completed before the target video image finishes playing. For example, a virtual cinema is created with a virtual reality device such as a VR headset; after putting on the headset, the user experiences watching a movie in a cinema, seeing the movie screen, the walls, floor, roof, seats, fellow audience members, and all the other objects a real cinema would contain. Once the target video file begins playing, the user watches it on the movie screen, and while each target video image is played, the three-dimensional scene outside the movie screen in the virtual cinema is rendered with the baking map corresponding to that image. For example, when the color of the played target video image is red, the walls, seats, floor, and other three-dimensional objects around the movie screen present the light and shadow effect of being illuminated by red light. The user therefore experiences, more realistically, the three-dimensional objects in the virtual cinema being lit by the played video images.
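To make the lookup step concrete, here is a sketch of a client-side store mapping a color to a pre-baked map. The class and its fields are illustrative assumptions (the patent leaves the storage layout open), and the nearest-color fallback anticipates the preset-color-list matching described in Example two.

```python
class BakeMapStore:
    """Illustrative lookup table from a preset color to its baking map.

    This sketch assumes the maps are local files keyed by RGB tuples;
    the patent also allows them to live on a server behind a search
    request carrying the color information.
    """

    def __init__(self, maps: dict[tuple[int, int, int], str]):
        # maps: preset color -> file path of the baking map for that color
        self.maps = maps

    def lookup(self, color: tuple[int, int, int]) -> str:
        # A frame's computed color rarely matches a preset color exactly,
        # so fall back to the nearest preset color in RGB space.
        nearest = min(
            self.maps,
            key=lambda c: sum((a - b) ** 2 for a, b in zip(c, color)),
        )
        return self.maps[nearest]
```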
The scene rendering method is applied to a virtual three-dimensional scene. When a target video file is played in the three-dimensional scene, the baking map corresponding to the color of the target video image is determined, and the scene is rendered according to that baking map. The baking maps are obtained by bake-rendering the scene under illumination from colored surface light sources, i.e., each baking map captures the light and shadow effect of the scene illuminated by a surface light source of one color. Rendering the scene with the corresponding baking map while the target video image is played therefore simulates the light and shadow effect of a surface light source whose color matches the target video image. In this way the light and shadow effect of different video images on the scene is simulated in real time, greatly improving the user experience; moreover, the light and shadow effect of the surface light source corresponding to each video image does not need to be computed in real time, which improves rendering efficiency.
Example two
There are various ways of determining the color of the target video image; another embodiment of the present invention explains one of them. Specifically, referring to FIG. 2, FIG. 2 shows a flowchart of the steps of another embodiment of a scene rendering method according to the present invention, which specifically includes the following steps:
Step 201, placing a surface light source at the screen position of the scene, and setting the color of the surface light source to each color of a preset color list in turn.
Step 202, for the scene illuminated by the surface light source in each color, performing bake rendering with global illumination to generate the corresponding baking map.
To simulate the lighting effect of the scene illuminated by the light emitted from the video screen, a surface light source can be placed at the position of the video screen in the scene before the target video file is played; when the surface light source illuminates the whole scene, a corresponding lighting effect appears on every object in the scene. Global illumination represents the combined effect of the direct and indirect illumination of the surface light source; it can be implemented in various ways, such as radiosity, ray tracing, ambient occlusion, and photon mapping, without limitation here. The colors of different video images of a video file may be the same or different, so in order for the color of every video image to correspond to one baking map, baking maps for the different colors can be generated in advance. To this end, a preset color list is generated beforehand, and the color of the surface light source is set to each color in the list in turn; whenever the surface light source is set to a color, the baking map corresponding to that color is generated by the above method. The set of colors in the preset color list can be chosen as required: for example, the list can be generated from an RGB color table (e.g., up to R:255, G:255, B:255), or, if high rendering fidelity is not required, from grayscale values (e.g., 255 gray levels). The preset color list may record color names together with color parameters such as RGB values or grayscale values. The baking maps can be generated either by the virtual reality device or by a server.
Step 203, determining the target video image when the target video file is played.
When the target video file is played, in order to render the scene while the current frame of video image is shown, the video image about to be played can be determined as the target video image.
There are then various ways to determine the color of the target video image; one of them is described in steps 204-205.
Step 204, acquiring the color parameters of the pixel points of the target video image, and performing a weighted calculation on the color parameters of the pixel points.
Step 205, determining the color corresponding to the value of the weighted calculation as the color of the target video image.
In the embodiment of the present invention, the color parameter of each pixel point of the target video image can be determined from the stored data of the target video image. Specifically, if the preset color list was generated from an RGB color table, the color parameter acquired for each pixel point is an RGB value; if the preset color list was generated from grayscale values, the color parameter computed for each pixel point is a grayscale value. A weighted calculation is then performed over the acquired color parameters of the pixel points, and the resulting value is the color parameter of the target video image. For example, if the preset color list was generated from an RGB color table, the weighted value might be R:108, G:232, B:24; if it was generated from grayscale values, the weighted value might be 55. The color corresponding to the weighted value is then determined as the color of the target video image. The weighted value may or may not coincide with the RGB value or grayscale value of a color in the preset color list; therefore, after the weighted value is obtained, the closest RGB value or grayscale value in the preset color list can be looked up, and the color with that closest value determined as the color of the target video image. Alternatively, the weighted value can be corrected during the calculation so that the corresponding color is determined directly from it.
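A minimal sketch of steps 204-205 under stated assumptions: uniform per-pixel weights (the patent does not fix a weighting scheme) and an RGB preset list, with the weighted value snapped to the nearest preset color as described above.

```python
import numpy as np

Color = tuple[int, int, int]

def frame_color(frame: np.ndarray,
                preset: list[Color],
                weights: np.ndarray | None = None) -> Color:
    """Weighted per-pixel color of a frame, snapped to the preset list."""
    pixels = frame.reshape(-1, 3).astype(np.float64)
    if weights is None:
        # Uniform weighting is an assumption; per-region weights would
        # slot in here unchanged.
        weights = np.ones(len(pixels))
    weighted = (pixels * weights[:, None]).sum(axis=0) / weights.sum()
    # Snap to the nearest preset color (Euclidean distance in RGB).
    arr = np.asarray(preset, dtype=np.float64)
    nearest = arr[np.argmin(((arr - weighted) ** 2).sum(axis=1))]
    return tuple(int(v) for v in nearest)
```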
Step 206, searching for the baking map corresponding to the color of the target video image.
Step 207, rendering the scene according to the baking map.
After the color of the target video image is determined, the baking map corresponding to that color is looked up. The baking map is the light and shadow effect map of the scene as illuminated by the surface light source; information such as the brightness and color of objects at different positions in the illuminated scene is used to generate it. While the target video image is played, the baking map can be applied back onto the scene to render the corresponding objects in the scene.
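Putting Example two together, a per-frame loop might look like the sketch below, reusing `frame_color` and `BakeMapStore` from the earlier sketches; `apply_lightmap` is a hypothetical renderer hook, since the patent only says the found map is applied back onto the scene.

```python
def render_tick(frame, store, preset, apply_lightmap) -> None:
    """One tick of Example two, run before the frame finishes playing."""
    color = frame_color(frame, preset)   # steps 204-205: frame color
    lightmap = store.lookup(color)       # step 206: fetch the baked map
    apply_lightmap(lightmap)             # step 207: re-light the scene
```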
On the basis of the above embodiment, in the embodiment of the present invention the baking maps corresponding to the various colors are generated before the target video file is played, so when a target video image is played, only the baking map corresponding to its color needs to be fetched to render the scene. The light and shadow effect of a surface light source matching the color of the target video image does not have to be computed in real time, which greatly reduces the time needed to render the scene, improves rendering efficiency, and lowers the hardware cost of the virtual reality device.
Example three
In yet another embodiment of the present invention, another way of determining the color of the target video image is described in detail. Referring to FIG. 3, FIG. 3 is a flowchart illustrating the steps of another embodiment of a scene rendering method according to the present invention, which may specifically include the following steps:
Step 301, placing a surface light source at the screen position of the scene, and setting the color of the surface light source to each color of a preset color list in turn.
Step 302, for the scene illuminated by the surface light source in each color, performing bake rendering with global illumination to generate the corresponding baking map.
For steps 301 to 302, refer to steps 201 to 202.
Step 303, extracting video images from the target video file at a preset time interval.
Step 304, determining and storing the color of each extracted video image.
Step 305, setting a corresponding time identifier for the color of each extracted video image.
Another way of determining the color of the target video image in the embodiment of the present invention is to determine and store the colors of the video images of the target video file in advance, and then, when the target video file is played, look up the stored color by the time frame corresponding to the target video image. Specifically, each video file is stored as a sequence of video images, and when the file is played, the images are displayed at the corresponding frame rate and in time order, each time frame corresponding to one frame of video image. For example, when a video file lasts 2 minutes and plays at 30 fps, it contains 3600 frames, and the interval between time frames is 1/30 second. Several consecutive video images of the target video file may belong to the same video scene, so the colors of two adjacent time frames may be the same or different. One frame can therefore be extracted every several frames and its color determined, or every frame can be extracted and its color determined; the preset interval is chosen according to the target video and the user's requirements. To better simulate the light and shadow effect of the different video images on the scene, the color of every frame of the target video file can be determined, in which case the preset time interval is the reciprocal of the frame rate; if the target video file plays at 30 fps, the preset interval is 1/30 second. After a video image is extracted, its color can be calculated and stored; the calculation can follow steps 204 to 205 of Example two. When the color of a video image is saved, a corresponding time identifier is set for it, identifying which time frame of the target video file that color belongs to. For example, with a time interval of 1/30 second, the time identifier stored for the first color is 1/30 second, and that for the second color is 1/15 second. The colors and identifiers of the video images of the target video file can be stored in various ways; one of them is run-length encoding, of the form: header | color | period | … | color | period, where the header identifies which video file the saved colors belong to and may be the name of the target video file. Because runs of consecutive frames share the same color, the color of every frame need not be recorded; only the time period covered by frames of the same color is needed. For example, if the target video file plays at 30 fps, the first frame is red, and the second through twentieth frames are yellow, the record is: header | red | 1/30 | yellow | 1/15-2/3 | ….
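The run-length layout above can be sketched as follows; the record format (a header entry followed by (color, start, end) periods) is an illustrative assumption, since the patent fixes only the idea of storing one period per run of identically colored frames.

```python
Color = tuple[int, int, int]

def encode_color_track(name: str,
                       frame_colors: list[Color],
                       fps: int) -> list:
    """Run-length encode per-frame colors as (color, start, end) periods.

    Frame i (1-based) plays at time frame i / fps. Consecutive frames
    with the same color collapse into one period, as in the example
    header | red | 1/30 | yellow | 1/15-2/3 | ...
    """
    track: list = [name]  # header: which video file these colors belong to
    run_color, run_start = frame_colors[0], 1
    for i, color in enumerate(frame_colors[1:], start=2):
        if color != run_color:
            track.append((run_color, run_start / fps, (i - 1) / fps))
            run_color, run_start = color, i
    track.append((run_color, run_start / fps, len(frame_colors) / fps))
    return track
```

With fps = 30, one red frame followed by nineteen yellow frames yields [(red, 1/30, 1/30), (yellow, 1/15, 2/3)], matching the record in the example above.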
Step 306, determining the target video image when the target video file is played.
Specifically, refer to step 203, which is not described herein again.
Step 307, determining the time frame corresponding to the target video image, and searching for the time identifier matching that time frame.
Step 308, determining the color corresponding to the matched time identifier as the color of the target video image.
After the target video image is determined, its color does not need to be calculated; only the time frame corresponding to the target video image is determined, and the matching time identifier is looked up by that time frame. The color corresponding to the matched time identifier is then determined as the color of the target video image. For example, suppose the colors saved for the target video file are: header | red | 1/30 | yellow | 1/15-2/3 | …, and the time frame of the target video image is determined to be 13/30. Since 13/30 falls within the period 1/15-2/3, that period's time identifier matches the time frame of the target video image, and yellow is determined as the color of the target video image.
Step 309, searching for the baking map corresponding to the color of the target video image.
Step 310, rendering the scene according to the baking map.
For steps 309 to 310, refer to steps 206 to 207.
On the basis of the above embodiments, the color of the target video image does not have to be calculated when it is determined; the time frame corresponding to the target video image is used directly to look up the stored colors of the video images of the target video file. This reduces the workload of the virtual reality device and further improves rendering efficiency.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Example four
Referring to FIG. 4, a block diagram of a scene rendering apparatus according to an embodiment of the present invention is shown, which ensures the implementation of the above scene rendering method; the apparatus specifically includes: a target video determination module 401, a target map determination module 402, and a scene rendering module 403, wherein,
the target video determining module 401 is configured to determine a target video image and a color of the target video image when the target video file is played.
A target map determination module 402, configured to find the baking map corresponding to the color of the target video image.
A scene rendering module 403, configured to render the scene according to the baking map; the baking map is obtained by bake-rendering the scene as illuminated by a surface light source of a given color.
In another embodiment of the present invention, further modules of the apparatus and the sub-modules they include are described. Referring to FIG. 5, a block diagram of another embodiment of a scene rendering apparatus according to the present invention is shown. The apparatus includes: the target video determination module 501, the target map determination module 502, and the scene rendering module 503, which have already been discussed above and are not described here again. The apparatus further includes: a video image storage module 504 and a baking map generation module 505, wherein,
A video image storage module 504, configured to extract video images from the target video file at a preset time interval, determine and store the color of each extracted video image, and set a corresponding time identifier for the color of each extracted video image.
A baking map generation module 505, configured to place a surface light source at the screen position of the scene, sequentially set the color of the surface light source according to the colors of a preset color list, and, for the scene illuminated by the surface light source in each color, perform bake rendering with global illumination to generate the corresponding baking map.
The target video determining module 501 of the embodiment of the present invention includes: a first image color determination submodule 5011 and a second image color determination submodule 5012, wherein,
the first image color determining submodule 5011 is configured to obtain color parameters of each pixel point of the target video image, and perform weighted calculation on the color parameters of each pixel point; and determining the color corresponding to the weighted calculated value as the color of the target video image.
The second image color determining submodule 5012 is configured to determine a time frame corresponding to the target video image, and search for a time identifier matching with the time frame; and determining the color corresponding to the time identifier matched with the time frame as the color of the target video image.
The scene rendering method is applied to a virtual three-dimensional scene. When a target video file is played in the three-dimensional scene, the baking map corresponding to the color of the target video image is determined, and the scene is rendered according to that baking map. The baking maps are obtained by bake-rendering the scene under illumination from colored surface light sources, i.e., each baking map captures the light and shadow effect of the scene illuminated by a surface light source of one color. Rendering the scene with the corresponding baking map while the target video image is played therefore simulates the light and shadow effect of a surface light source whose color matches the target video image. In this way the light and shadow effect of different video images on the scene is simulated in real time, greatly improving the user experience; moreover, the light and shadow effect of the surface light source corresponding to each video image does not need to be computed in real time, which improves rendering efficiency.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The scene rendering method and the scene rendering device provided by the invention are described in detail above, and the principle and the implementation mode of the invention are explained in the text by applying specific examples, and the description of the above embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A scene rendering method is applied to a virtual three-dimensional scene, and comprises the following steps:
when a target video file is played, determining a target video image and the color of the target video image;
searching a baking map corresponding to the color of the target video image;
rendering the scene according to the baking map;
the baking chartlet is obtained by baking and rendering the scene irradiated by any colored surface light source;
before the step of playing the target video file, the method further comprises the following steps:
placing a surface light source at the screen position of the scene, and sequentially setting the color of the surface light source according to the color of a preset color list;
and, for the scene irradiated by the surface light source in each color, performing bake rendering with global illumination to generate a corresponding baking map.
2. The method of claim 1, wherein the step of determining the color of the target video image comprises:
acquiring color parameters of all pixel points of the target video image, and performing weighted calculation on the color parameters of all the pixel points;
and determining the color corresponding to the weighted calculation value as the color of the target video image.
3. The method of claim 1, further comprising, prior to the step of playing the target video file:
extracting video images from the target video file according to a preset time interval;
determining and storing the color of the extracted video image;
and setting corresponding time identification for the extracted color of the video image.
4. The method of claim 3, wherein the step of determining the color of the target video image comprises:
determining a time frame corresponding to the target video image, and searching a time identifier matched with the time frame;
and determining the color corresponding to the time identifier matched with the time frame as the color of the target video image.
5. A scene rendering apparatus, comprising:
the target video determining module is used for determining a target video image and the color of the target video image when a target video file is played;
the target map determining module is used for searching a baking map corresponding to the color of the target video image;
the scene rendering module is used for rendering the scene according to the baking map; the baking map is obtained by bake-rendering the scene as illuminated by a surface light source of a given color;
the device further comprises:
the baking map generation module is used for placing a surface light source at the screen position of the scene and sequentially setting the color of the surface light source according to the colors of a preset color list; and, for the scene irradiated by the surface light source in each color, performing bake rendering with global illumination to generate a corresponding baking map.
6. The apparatus of claim 5, wherein the target video determination module comprises:
the first image color determining submodule is used for acquiring color parameters of all pixel points of the target video image and carrying out weighted calculation on the color parameters of all the pixel points; and determining the color corresponding to the weighted calculation value as the color of the target video image.
7. The apparatus of claim 5, further comprising:
the video image storage module is used for extracting video images from the target video file according to a preset time interval; determining and storing the color of the extracted video image; and setting corresponding time identification for the extracted color of the video image.
8. The apparatus of claim 5, wherein the target video determination module comprises:
the second image color determining submodule is used for determining a time frame corresponding to the target video image and searching a time identifier matched with the time frame; and determining the color corresponding to the time identifier matched with the time frame as the color of the target video image.
CN201710091613.4A (filed 2017-02-20, priority 2017-02-20): Scene rendering method and device. Status: Active. Granted as CN107016718B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710091613.4A | 2017-02-20 | 2017-02-20 | Scene rendering method and device (granted as CN107016718B (en))

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201710091613.4A | 2017-02-20 | 2017-02-20 | Scene rendering method and device (granted as CN107016718B (en))

Publications (2)

Publication Number | Publication Date
CN107016718A (en) | 2017-08-04
CN107016718B (en) | 2020-06-19

Family

Family ID: 59440084

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN201710091613.4A | Scene rendering method and device | 2017-02-20 | 2017-02-20 | Active (granted as CN107016718B (en))

Country Status (1)

Country: CN | Publication: CN107016718B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909641A (en) * 2017-10-26 2018-04-13 广州市雷军游乐设备有限公司 Baking rendering method, apparatus, terminal device and storage medium
CN108235055B (en) * 2017-12-15 2021-07-06 苏宁易购集团股份有限公司 Method and device for realizing transparent video in AR scene
CN109410300A (en) * 2018-10-10 2019-03-01 苏州好玩友网络科技有限公司 Shadow processing method and apparatus in a game scene, and terminal device
CN110084154B (en) * 2019-04-12 2021-09-17 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium
CN111932641B (en) * 2020-09-27 2021-05-14 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN112546633B (en) * 2020-12-10 2024-06-21 网易(杭州)网络有限公司 Virtual scene processing method, device, equipment and storage medium
CN113256801A (en) * 2021-06-11 2021-08-13 山东能之源核电科技有限公司 Three-dimensional reverse model visualization system for radiation field

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310488A (en) * 2013-03-20 2013-09-18 常州依丽雅斯纺织品有限公司 Mental Ray rendering-based virtual reality rendering method
CN103971713A (en) * 2014-05-07 2014-08-06 厦门美图之家科技有限公司 Video file filter processing method
CN105657494A (en) * 2015-12-31 2016-06-08 北京小鸟看看科技有限公司 Virtual cinema and implementation method thereof


Also Published As

Publication Number | Publication Date
CN107016718A (en) | 2017-08-04

Similar Documents

Publication Publication Date Title
CN107016718B (en) Scene rendering method and device
RU2468401C2 (en) Ambient illumination
CN109816762B (en) Image rendering method and device, electronic equipment and storage medium
US11425283B1 (en) Blending real and virtual focus in a virtual display environment
CN113436343B (en) Picture generation method and device for virtual concert hall, medium and electronic equipment
US9542975B2 (en) Centralized database for 3-D and other information in videos
TW202123178A (en) Method for realizing lens splitting effect, device and related products thereof
US20130257851A1 (en) Pipeline web-based process for 3d animation
CN108765270B (en) Virtual three-dimensional space tag binding method and device
CN105611267B (en) Merging of real world and virtual world images based on depth and chrominance information
CN105430376A (en) Method and device for detecting consistency of panoramic camera
KR20200136930A (en) Methods, systems, articles of manufacture and apparatus for creating digital scenes
CN112446939A (en) Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium
AU2020277170B2 (en) Realistic illumination of a character for a scene
EP3850625A1 (en) 3d media elements in 2d video
CN113546410B (en) Terrain model rendering method, apparatus, electronic device and storage medium
Spielhofer et al. 3D point clouds for representing landscape change
CN112153472A (en) Method and device for generating special picture effect, storage medium and electronic equipment
CN110853487A (en) Digital sand table system for urban design
CN116801037A (en) Augmented reality live broadcast method for projecting image of live person to remote real environment
US20230171506A1 (en) Increasing dynamic range of a virtual production display
JP7387029B2 (en) Single-image 3D photography technology using soft layering and depth-aware inpainting
KR101373631B1 (en) System for composing images by real time and method thereof
US20220198720A1 (en) Method and system for generating an augmented reality image
CN101292516A (en) System and method for capturing visual data

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant