CN112138386A - Volume rendering method and device, storage medium and computer equipment - Google Patents

Volume rendering method and device, storage medium and computer equipment

Info

Publication number
CN112138386A
Authority
CN
China
Prior art keywords
sampling
volume
vertex
rendering
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011017777.0A
Other languages
Chinese (zh)
Inventor
盘琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202011017777.0A priority Critical patent/CN112138386A/en
Publication of CN112138386A publication Critical patent/CN112138386A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/525 Changing parameters of virtual cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/06 Ray-tracing
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 Details of the user interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

The embodiment of the application discloses a volume rendering method, a volume rendering device, a storage medium and computer equipment. The method comprises the following steps: voxelizing a model to be rendered into a plurality of voxel grids; dividing a pixel map area with a predetermined resolution into a predetermined number of squares; mapping the position of any vertex N_i of each square obtained by the division to the position P_j of a sampling camera in a three-dimensional coordinate system, the sampling cameras being used for volume sampling of the voxel grids; storing, at the position of each vertex N_i, the volume sampling data acquired by the sampling camera at the position P_j onto which that vertex N_i is mapped; and rendering the model to be rendered after matching the volume sampling data acquired by the sampling cameras according to the view angle of the rendering camera. The technical scheme of the application greatly reduces the amount of computation spent on the sampled data, markedly saves memory and video memory resources, and can be applied to lightweight devices such as mobile terminals.

Description

Volume rendering method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of image processing, in particular to the field of electronic games, and more particularly, to a volume rendering method, apparatus, storage medium, and computer device.
Background
In the field of computer image processing, ray tracing is a volume rendering technique that generates an image by tracing, pixel by pixel, the path of light through the pixel plane and simulating the effects of its encounters with virtual objects. Compared with scanline rendering, ray tracing can produce a high degree of visual realism and is widely used in computer-generated still images and in film and television visual effects.
With the rapid development of the mobile internet, many games can run on intelligent mobile terminals. Players increasingly pursue the visual effects of games and wish to experience film-like visual effects in games as well. To satisfy this pursuit, many game developers attempt to use ray tracing, a volume rendering technique, in games.
However, one salient characteristic of ray tracing is its large computational overhead. Rendering with ray tracing on a lightweight device such as an intelligent mobile terminal means high latency, which can give the player a poor experience, for example stuttering of the picture.
Disclosure of Invention
The embodiment of the application provides a volume rendering method, a volume rendering device, a storage medium and computer equipment, which can reduce the cost of calculated amount and the consumption of resources such as a memory, a video memory and the like.
The embodiment of the application provides a volume rendering method, which comprises the following steps:
voxelizing a model to be rendered into a plurality of voxel grids;
dividing a pixel mapping area with a preset resolution to divide the pixel mapping area into a preset number of squares;
mapping the position of any vertex N_i of each square to the position P_i of a sampling camera in a three-dimensional coordinate system, the sampling cameras being used for volume sampling of the voxel grids, wherein the volume sampling data acquired by volume sampling comprise the accumulated concentration d of the voxel grids acquired along the sampling direction of the sampling camera, the forward depth f_d of the voxel grids and the reverse depth b_d of the voxel grids;
storing, at the position of the vertex N_i, the volume sampling data acquired by the sampling camera at the position P_i;
and according to the visual angle of a rendering camera, rendering the model to be rendered after matching the volume sampling data acquired by the sampling camera.
Optionally, after dividing the pixel map area with the predetermined resolution into the predetermined number of squares, the method further includes: mapping the predetermined number of squares into which the pixel map area is divided to a square region centered at the origin of a two-dimensional coordinate system; and recording the coordinates (x_i, y_i) of any vertex N_i of each square in the square region.
Optionally, mapping the position of any vertex N_i of each square to the position P_i of a sampling camera in the three-dimensional coordinate system used for volume sampling of the voxel grids includes: mapping the position of the vertex N_i to a viewpoint on the spherical surface of a hemisphere, the viewpoint being the position P_i of the sampling camera and serving as the starting point of the sampling direction in which the sampling camera performs volume sampling on the voxel grids, wherein the hemisphere can at least enclose the model to be rendered.
Optionally, mapping the position of the vertex N_i to a viewpoint on the spherical surface of the hemisphere includes: calculating the mean E_1 of the sum of the coordinates x_i and y_i of the vertex N_i and the mean E_2 of their difference; and taking E_1 - 0.5, 1 - |E_1| - |E_2| and E_2 as the parameters of a normalization function, the value of the normalization function being the coordinates (p_x, p_y, p_z) of the viewpoint in the three-dimensional coordinate system.
Optionally, storing at the position of the vertex N_i the volume sampling data acquired by the sampling camera at the position P_i includes: storing, at the pixel corresponding to the position of the vertex N_i, the accumulated concentration d of the voxel grids, the forward depth f_d of the voxel grids and the reverse depth b_d of the voxel grids acquired by the sampling camera at the position P_i as the R channel value, the G channel value and the B channel value of the RGB channels of the pixel map, respectively.
Optionally, rendering the model to be rendered after matching the volume sampling data acquired by the sampling camera according to the view angle of the rendering camera includes: obtaining map coordinates (U, V) in the current vector state according to the current vector of the rendering camera and the inverse world matrix of the model to be rendered; matching the map coordinates (U, V) with the vertices of each square; if the map coordinates (U, V) can be matched with the coordinates (x_i, y_i) of a vertex of a square in the square region, selecting the pixel map at the vertex N_i corresponding to the coordinates (x_i, y_i) as the rendering resource; and calculating, according to the illumination data on the voxel grids and the volume sampling data of the rendering resource, the color information after fusion of the pixel map with the scene, the scene being the scene in which the model to be rendered is located.
Optionally, calculating, according to the illumination data on the voxel grids and the volume sampling data of the rendering resource, the color information after the fusion of the pixel map with the scene includes: calculating d' according to the depth of the pixel map at the vertex N_i, the depth of the scene and the radius of the hemisphere, and calculating, according to the Beer-Lambert formula and the Henyey-Greenstein formula, the scattered illumination energy L_b and the projected illumination energy L_hg of the voxel grids sampled at the position P_i, wherein the hemisphere is the hemisphere enclosing the model to be rendered onto whose spherical surface the position of the vertex N_i is mapped as the position P_i, and d' is the percentage of the accumulated concentration of the voxel grids sampled at the position P_i that remains after being cropped by the scene; and calculating, according to the formula C = S_1 * L_b * L_hg * d', the color information after the fusion of the pixel map with the scene, S_1 being the brightness of the illumination onto the model to be rendered.
An embodiment of the present application further provides a volume rendering apparatus, including:
the voxelization module is used for voxelizing the model to be rendered into a plurality of voxel grids;
the mapping square dividing module is used for dividing a pixel mapping area with a preset resolution so as to divide the pixel mapping area into a preset number of squares;
a mapping module for mapping any vertex N of the squareiIs mapped to the position P of each sampling camera in the three-dimensional coordinate system for volume sampling the voxel gridiThe volume sampling data acquired by volume sampling comprises the accumulated concentration d of the voxel grid acquired along the sampling direction of the sampling camera, and the forward depth f of the voxel griddAnd the inverse depth b of the voxel gridd
A data saving module for saving data at the vertex NiPosition of (2) storing said position PiVolume sampling data acquired by an up-sampling camera;
and the rendering module is used for rendering the model to be rendered after matching the volume sampling data acquired by the sampling camera according to the visual angle of the rendering camera.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, where the computer program is suitable for being loaded by a processor to execute the steps in the volume rendering method according to any one of the above embodiments.
An embodiment of the present application further provides a computer device, where the computer device includes a memory and a processor, where the memory stores a computer program, and the processor executes the steps in the volume rendering method according to any of the above embodiments by calling the computer program stored in the memory.
As can be seen from the technical solutions provided in the embodiments of the present application, on the one hand, when the position of any vertex N_i of the squares into which the pixel map area is divided is mapped to the position P_i and a sampling camera is used to perform volume sampling on the voxel grids to obtain the accumulated concentration d, the forward depth f_d and the reverse depth b_d of the voxel grids, each pixel needs to be sampled at most 6 times, whereas a ray-tracing rendering scheme needs dozens of samples or more during volume sampling, so the amount of computation spent on the sampled data is greatly reduced; on the other hand, compared with the massive number of pixels that must be recorded in memory when ray tracing with 3D maps is used (namely 3 times the pixels of each map), the technical solution of the present application only needs to process the volume sampling data corresponding to the map at the vertices of each square, and the amount of volume sampling data is several orders of magnitude smaller than with the ray-tracing technique, so memory and video memory resources are markedly saved and the method can be applied to lightweight devices such as mobile terminals.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a volume rendering apparatus according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a volume rendering method according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a circular ring body voxelized into a voxel grid according to an embodiment of the present application.
Fig. 4a is a schematic diagram of a doll model according to an embodiment of the present application.
Fig. 4b is a schematic diagram of the doll model illustrated in fig. 4a "surrounded" by a bounding box during voxelization provided by the embodiment of the present application.
Fig. 5 is a schematic diagram of dividing a pixel map area into 25 blocks according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram provided in this embodiment of the present application, in which after the 25 squares illustrated in fig. 5 are mapped to a square region having a center at the origin of a two-dimensional coordinate system and a side length of 2, the vertices of each square are numbered from 0.
Fig. 7 is a schematic diagram of the 36 vertices illustrated in fig. 6 mapped to 36 viewpoints on the spherical surface of a hemisphere, according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a sampling camera on a spherical surface of a hemisphere provided in this application to sample a model surrounded by the hemisphere.
Fig. 9 is a schematic diagram, according to an embodiment of the present application, of the depth p_d of the pixel map at a vertex N_i, the depth w_d of the scene in which the model to be rendered is located, and the forward depth f_d and the reverse depth b_d of the voxel grids sampled at a position P_j.
Fig. 10 is a schematic structural diagram of a volume rendering apparatus according to an embodiment of the present application.
Fig. 11 is a schematic structural diagram of a volume rendering apparatus according to another embodiment of the present application.
Fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a volume rendering method, a volume rendering device, a storage medium and computer equipment. Specifically, the volume rendering method according to the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal or a server, and the like. The terminal may be a terminal device such as a smart phone, a tablet Computer, a notebook Computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like, and may further include a client, which may be a game application client, a browser client carrying a game program, or an instant messaging client, and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, and a big data and artificial intelligence platform.
For example, when the volume rendering method is run on a terminal, the terminal device stores a game application and is used for presenting a virtual scene in a game screen. The terminal device is used for interacting with a user through a graphical user interface, for example, downloading and installing a game application program through the terminal device and running the game application program. The manner in which the terminal device provides the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including a game screen and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for executing the game, generating the graphical user interface, responding to the operation instructions, and controlling display of the graphical user interface on the touch display screen.
For example, when the volume rendering method is run on a server, it may be a cloud game. Cloud gaming refers to a gaming regime based on cloud computing. In the running mode of the cloud game, the running main body of the game application program and the game picture presenting main body are separated, and the storage and the running of the volume rendering method are finished on the cloud game server. The game screen presentation is performed at a cloud game client, which is mainly used for receiving and sending game data and presenting the game screen, for example, the cloud game client may be a display device with a data transmission function near a user side, such as a mobile terminal, a television, a computer, a palm computer, a personal digital assistant, and the like, but a terminal device for performing game data processing is a cloud game server at the cloud end. When a game is played, a user operates the cloud game client to send an operation instruction to the cloud game server, the cloud game server runs the game according to the operation instruction, data such as game pictures and the like are encoded and compressed, the data are returned to the cloud game client through a network, and finally the data are decoded through the cloud game client and the game pictures are output.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a volume rendering apparatus according to an embodiment of the present disclosure. The system may include at least one terminal 1000, at least one server 2000, at least one database 3000, and a network 4000. The terminal 1000 held by the user can be connected to servers of different games through the network 4000. Terminal 1000 can be any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, terminal 1000 can have one or more multi-touch sensitive screens for sensing and obtaining user input through touch or slide operations performed at multiple points on one or more touch sensitive display screens. In addition, when the system includes a plurality of terminals 1000, a plurality of servers 2000, and a plurality of networks 4000, different terminals 1000 may be connected to each other through different networks 4000 and through different servers 2000. The network 4000 may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, and so on. In addition, different terminals 1000 may be connected to other terminals or a server using their own bluetooth network or hotspot network. For example, a plurality of users may be online through different terminals 1000 to be connected and synchronized with each other through a suitable network to support multiplayer games. In addition, the system may include a plurality of databases 3000, the plurality of databases 3000 being coupled to different servers 2000, and information related to the game environment may be continuously stored in the databases 3000 when different users play the multiplayer game online.
The embodiment of the application provides a volume rendering method, which can be executed by a terminal or a server. The embodiment of the present application is described by taking a volume rendering method as an example, which is executed by a terminal. The terminal comprises a touch display screen and a processor, wherein the touch display screen is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface. When a user operates the graphical user interface through the touch display screen, the graphical user interface can control the local content of the terminal through responding to the received operation instruction, and can also control the content of the opposite-end server through responding to the received operation instruction. For example, the operation instruction generated by the user acting on the graphical user interface comprises an instruction for starting a game application, and the processor is configured to start the game application after receiving the instruction provided by the user for starting the game application. Further, the processor is configured to render and draw a graphical user interface associated with the game on the touch display screen. A touch display screen is a multi-touch sensitive screen capable of sensing a touch or slide operation performed at a plurality of points on the screen at the same time. The user uses a finger to perform touch operation on the graphical user interface, and when the graphical user interface detects the touch operation, different virtual objects in the graphical user interface of the game are controlled to perform actions corresponding to the touch operation. For example, the game may be any one of a leisure game, an action game, a role-playing game, a strategy game, a sports game, a game of chance, and the like. Wherein the game may include a virtual scene of the game drawn on a graphical user interface. Further, one or more virtual objects, such as virtual characters, controlled by the user (or player) may be included in the virtual scene of the game. Additionally, one or more obstacles, such as railings, ravines, walls, etc., may also be included in the virtual scene of the game to limit movement of the virtual objects, e.g., to limit movement of one or more objects to a particular area within the virtual scene. Optionally, the virtual scene of the game also includes one or more elements, such as skills, points, character health, energy, etc., to provide assistance to the player, provide virtual services, increase points related to player performance, etc. In addition, the graphical user interface may also present one or more indicators to provide instructional information to the player. For example, a game may include a player-controlled virtual object and one or more other virtual objects (such as enemy characters). In one embodiment, one or more other virtual objects are controlled by other players of the game. For example, one or more other virtual objects may be computer controlled, such as a robot using Artificial Intelligence (AI) algorithms, to implement a human-machine fight mode. For example, the virtual objects possess various skills or capabilities that the game player uses to achieve the goal. For example, the virtual object possesses one or more weapons, props, tools, etc. that may be used to eliminate other objects from the game. 
Such skills or capabilities may be activated by a player of the game using one of a plurality of preset touch operations with a touch display screen of the terminal. The processor may be configured to present a corresponding game screen in response to an operation instruction generated by a touch operation of a user.
Referring to fig. 2, a flow chart of the volume rendering method according to the embodiment of the present application mainly includes steps S201 to S205, which are described in detail as follows:
step S201, the model to be rendered is voxelized into a plurality of voxel grids.
A voxel is short for volume element and is essentially the 3D counterpart of a pixel; each voxel has a position in 3D space and attributes associated with it, and voxelization fills the interior of an object with such geometry. In other words, voxelization is the process of cutting an object into a grid of volume elements, i.e. a voxel grid; fig. 3 shows an example of a ring body voxelized into a voxel grid (the voxel grids in the figure are small cubes). Voxelization is generally regarded as preliminary or necessary work for volume sampling. In the embodiment of the present application, when the model to be rendered is voxelized into voxel grids, a bounding box of the model may be obtained first, and the length, width and height of the bounding box are then cut with a certain step length; usually only three parameters, namely the model to be voxelized, the spatial mode (i.e. world space or local space) and the density of the bounding box, need to be passed to the voxel generator. Fig. 4a shows a doll model to be rendered, and fig. 4b is a schematic diagram of the doll model illustrated in fig. 4a being "surrounded" by a bounding box during voxelization, the bounding box being a cube. It should be noted that the bounding box may be any geometric body of any shape that encloses the model, so fig. 4b should not be regarded as limiting the choice of bounding box in the present technical solution.
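By way of illustration only, the voxelization step can be sketched as follows in Python; this is a minimal sketch under assumptions of its own (an axis-aligned bounding box and a hypothetical inside() predicate supplied by the caller), not the voxel generator used by the application:

    import numpy as np

    def voxelize(bbox_min, bbox_max, density, inside):
        """Cut the bounding box into density^3 cells and keep those whose
        center lies inside the model (inside() is a hypothetical predicate)."""
        bbox_min = np.asarray(bbox_min, float)
        bbox_max = np.asarray(bbox_max, float)
        step = (bbox_max - bbox_min) / density        # cell size along x, y, z
        voxels = []
        for ix in range(density):
            for iy in range(density):
                for iz in range(density):
                    center = bbox_min + (np.array([ix, iy, iz]) + 0.5) * step
                    if inside(center):
                        voxels.append(center)
        return np.array(voxels), step

    # Example: voxelize a unit sphere sitting in a 2 x 2 x 2 bounding box.
    centers, cell = voxelize([-1, -1, -1], [1, 1, 1], 16,
                             lambda p: np.dot(p, p) <= 1.0)

A real voxel generator would in addition record per-voxel attributes and distinguish world space from local space, as noted above.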
In step S202, a pixel map area with a predetermined resolution is divided into a predetermined number of blocks.
A pixel map (volume texture) area is the two-dimensional display plane onto which an image is projected after a sampling camera volume-samples the three-dimensional model during volume rendering; this two-dimensional display plane has a predetermined resolution, for example 1024 pixels. In the embodiment of the present application, a pixel map area having a predetermined resolution is divided into a predetermined number of squares. As shown in fig. 5, the pixel map area is divided into 5 × 5 = 25 squares. Obviously, with the resolution of the pixel map area fixed, the greater the number of squares into which it is divided, the lower the resolution of the area contained in each square; for example, with the division illustrated in fig. 5 the resolution of each square is 1024/25, and if the pixel map area is instead divided into 8 × 8 = 64 squares while its resolution is still 1024 pixels, the resolution of each square is 1024/64.
For convenience of subsequent calculation, in the embodiment of the present application, after the pixel map area with the predetermined resolution has been divided into the predetermined number of squares, the predetermined number of squares may further be mapped to a square region centered at the origin of a two-dimensional coordinate system (for example, a square region with a side length of 2), and the coordinates (x_i, y_i) of any vertex N_i of each square in that square region may be recorded. That is, the predetermined number of squares are mapped to a square region centered at the origin of the two-dimensional coordinate system and bounded by the coordinates (1, 1), (-1, 1), (-1, -1) and (1, -1), and the coordinates (x_i, y_i) of any vertex N_i of each square in the square region can be determined from the number of squares and the position of the square in the square region. Further, the vertices of the squares may be numbered. Fig. 6 shows the 25 squares of the example of fig. 5 after they have been mapped to a square region centered at the origin of the two-dimensional coordinate system with a side length of 2; the vertices of the squares are numbered starting from 0, with 35 being the number of the last vertex (since the vertices of adjacent squares coincide, the 25 squares actually need only 36 numbered vertices rather than 100).
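As a rough, non-authoritative sketch of the division described above (assuming the 5 x 5 example of fig. 5 and fig. 6 and the [-1, 1] x [-1, 1] square region; the helper name is made up for illustration), the vertex coordinates and their sequential numbering could be produced as follows:

    def square_vertices(n=5):
        """Return the (n+1)^2 vertex coordinates of an n x n division of the
        square region [-1, 1] x [-1, 1], numbered row by row starting from 0."""
        side = 2.0 / n                        # side length of each square
        vertices = {}
        number = 0
        for row in range(n + 1):
            for col in range(n + 1):
                x = -1.0 + col * side
                y = -1.0 + row * side
                vertices[number] = (x, y)     # vertex N_i -> (x_i, y_i)
                number += 1
        return vertices

    # 5 x 5 squares share vertices, so only 36 vertices (numbered 0..35) are needed.
    verts = square_vertices(5)
    assert len(verts) == 36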
Step S203: map the position of any vertex N_i of each square obtained by the division in step S202 to the position P_j of a sampling camera in the three-dimensional coordinate system used for volume sampling of the voxel grids, wherein the volume sampling data acquired by volume sampling comprise the accumulated concentration d of the voxel grids acquired along the sampling direction of the sampling camera, the forward depth f_d of the voxel grids and the reverse depth b_d of the voxel grids.
Considering that the required volume sampling data can be acquired from fewer sampling positions and that subsequent calculation is made easier, in the embodiment of the present application, mapping the position of any vertex N_i of each square obtained by the division in step S202 to the position P_j of a sampling camera in the three-dimensional coordinate system used for volume sampling of the voxel grids may be done as follows: the position of the vertex N_i is mapped to a viewpoint on the spherical surface of a hemisphere, the viewpoint being the position P_j of the sampling camera and serving as the starting point of the sampling direction in which the sampling camera performs volume sampling on the voxel grids, and the hemisphere at least enclosing the model to be rendered. Fig. 7 shows the 36 vertices of the 25 squares of the example of fig. 6 mapped to 36 viewpoints on the spherical surface of the hemisphere, the viewpoints being the positions of the sampling cameras used to volume-sample the model to be rendered; if such a position is denoted P_j, the point P_j is also the starting point of the sampling direction in which the sampling camera performs volume sampling on the voxel grids. Fig. 8 is a schematic diagram of a sampling camera on the spherical surface of a hemisphere sampling a model (here a tree) enclosed by the hemisphere.
As an embodiment of the present application, mapping the position of the vertex N_i to a viewpoint on the spherical surface of the hemisphere may be done by computing the mean E_1 of the sum of the coordinates x_i and y_i of the vertex N_i and the mean E_2 of their difference, then taking E_1 - 0.5, 1 - |E_1| - |E_2| and E_2 as the parameters of a normalization function and taking the value of the normalization function as the coordinates (p_x, p_y, p_z) of the viewpoint in the three-dimensional coordinate system, i.e.:
E_1 = (x_i + y_i) / 2 …………… (equation 1)
E_2 = (x_i - y_i) / 2 …………… (equation 2)
The normalization function normalize(x, y, z) is called with E_1 - 0.5, 1 - |E_1| - |E_2| and E_2 passed to the parameters x, y and z respectively, and the return value is taken as the coordinates (p_x, p_y, p_z) of the viewpoint P_j, i.e. of the sampling camera, namely:
(p_x, p_y, p_z) = normalize(E_1 - 0.5, 1 - |E_1| - |E_2|, E_2) …………… (equation 3)
The prototype of the normalization function is normalize(x, y, z) = (x, y, z) / sqrt(x^2 + y^2 + z^2), that is, it returns the input vector scaled to unit length.
It should be noted that, if the model to be rendered is placed so that its center (the center of the model to be rendered is also the center of the bounding box) coincides with the center of the hemisphere (which may be the origin of the three-dimensional coordinate system), and the vertices of the squares illustrated in fig. 6 are mapped to viewpoints on the spherical surface of the hemisphere illustrated in fig. 7 according to equations 1, 2 and 3 above, then the sampling direction in which the sampling camera performs volume sampling on the voxel grids points from the viewpoint P_j, i.e. the position of the sampling camera, toward the center of the hemisphere.
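A minimal sketch of equations 1 to 3 follows; it assumes the third argument of the normalization function is E_2, as stated in the claims and the apparatus description, and the function name is illustrative only:

    import math

    def vertex_to_viewpoint(xi, yi):
        """Map a square vertex (x_i, y_i) in [-1, 1]^2 to a viewpoint
        (p_x, p_y, p_z) on the hemisphere, following equations 1-3."""
        e1 = (xi + yi) / 2.0                              # equation 1
        e2 = (xi - yi) / 2.0                              # equation 2
        x, y, z = e1 - 0.5, 1.0 - abs(e1) - abs(e2), e2   # assumption: z = E_2
        length = math.sqrt(x * x + y * y + z * z) or 1.0
        return (x / length, y / length, z / length)       # equation 3 (normalize)

    # The sampling direction points from the viewpoint toward the hemisphere center.
    px, py, pz = vertex_to_viewpoint(0.2, -0.6)
    sampling_dir = (-px, -py, -pz)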
Step S204: at the position of any vertex N_i of each square, store the volume sampling data acquired by the sampling camera at the position P_j onto which the vertex N_i is mapped.
As an embodiment of the present application, storing at the position of any vertex N_i of each square the volume sampling data acquired by the sampling camera at the position P_j onto which that vertex is mapped may be done as follows: at the pixel corresponding to the position of the vertex N_i, the accumulated concentration d of the voxel grids, the forward depth f_d of the voxel grids and the reverse depth b_d of the voxel grids acquired by the sampling camera at the position P_j are stored as the R channel value, the G channel value and the B channel value of the RGB channels of the pixel map, respectively. Since the position P_j is mapped from a vertex N_i and each vertex N_i is numbered as in fig. 6, the corresponding vertex N_i can be looked up from the position P_j; equivalently, the accumulated concentration d, the forward depth f_d and the reverse depth b_d of the voxel grids acquired by the sampling camera at the position P_j are assigned to the R channel, the G channel and the B channel of the RGB channels of the pixel map at the pixel corresponding to the position of the vertex N_i. Note that, because the position P_j is mapped from a vertex N_i, even though the same numbers appear in fig. 6 and fig. 7, a vertex and a viewpoint with the same number do not necessarily correspond to each other; the vertex numbered 6 in the example of fig. 6, for instance, need not be mapped to the viewpoint numbered 6 on the hemisphere in the example of fig. 7.
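For illustration, the packing of the three sampled quantities into the RGB channels can be sketched as follows; the texture layout and the assumption that d, f_d and b_d are already normalized to [0, 1] are choices of this sketch, not requirements stated by the application:

    import numpy as np

    def store_samples(pixel_map, vertex_px, d, fd, bd):
        """Write accumulated concentration d, forward depth f_d and reverse depth
        b_d into the R, G, B channels of the pixel at the vertex position."""
        u, v = vertex_px                   # pixel coordinates of vertex N_i
        pixel_map[v, u, 0] = d             # R channel <- accumulated concentration
        pixel_map[v, u, 1] = fd            # G channel <- forward depth
        pixel_map[v, u, 2] = bd            # B channel <- reverse depth
        return pixel_map

    tex = np.zeros((1024, 1024, 3), dtype=np.float32)
    tex = store_samples(tex, (204, 204), d=0.37, fd=0.62, bd=0.48)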
And S205, rendering the model to be rendered after matching the volume sampling data acquired by the sampling camera according to the visual angle of the rendering camera.
The view angle of the rendering camera is determined by a vector V_w, a three-dimensional vector whose direction points from the geometric center of the model to be rendered toward the rendering camera and whose magnitude is the distance from the geometric center of the model to be rendered to the rendering camera. When the view angle of the rendering camera changes, i.e. under different vectors V_w, different map coordinates (U, V) are obtained; if these can be matched with the coordinates (x_i, y_i) of a corresponding vertex N_i, this is equivalent to selecting the pixel map at that vertex. Specifically, step S205 can be implemented through the following steps S2051 to S2054:
step S2051: and solving mapping coordinates (U, V) in the current vector state according to the current vector of the rendering camera and the inverse world matrix of the model to be rendered.
Assume that the inverse world matrix of the model to be rendered is denoted InvWorld and that the current vector of the rendering camera is, as described above, denoted V_w. Multiplying V_w by InvWorld yields a three-dimensional intermediate vector Vo = {Vo.x, Vo.y, Vo.z}. Then compute P.x = Vo.x / [abs(Vo.x) + abs(Vo.y) + abs(Vo.z)] and P.y = Vo.z / [abs(Vo.x) + abs(Vo.y) + abs(Vo.z)], where abs(Vo.x), abs(Vo.y) and abs(Vo.z) denote the absolute values of Vo.x, Vo.y and Vo.z, respectively. The map coordinates (U, V) in the current vector state are determined by P.x and P.y as U = P.x + P.y and V = P.x - P.y.
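A non-authoritative sketch of the computation in step S2051; reducing the inverse world matrix to its 3 x 3 part is an assumption made here for brevity:

    import numpy as np

    def map_coordinates(view_vector, inv_world_3x3):
        """Compute the map coordinates (U, V) from the rendering-camera vector V_w
        and the (3 x 3 part of the) inverse world matrix of the model."""
        vo = inv_world_3x3 @ np.asarray(view_vector, float)    # Vo = InvWorld * V_w
        s = abs(vo[0]) + abs(vo[1]) + abs(vo[2])
        px = vo[0] / s                                          # P.x
        py = vo[2] / s                                          # P.y (uses Vo.z)
        return px + py, px - py            # U = P.x + P.y, V = P.x - P.y

    u, v = map_coordinates([0.3, 1.2, -0.5], np.eye(3))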
Step S2052: the map coordinates (U, V) are matched to the vertices of each square.
The map coordinates (U, V) are coordinates in the two-dimensional coordinate system, the squares are those obtained by the division in step S202, and any vertex N_i of a square is represented by the coordinates (x_i, y_i). Matching the map coordinates (U, V) calculated in step S2051 with the vertices of each square means comparing U = P.x + P.y with x_i and V = P.x - P.y with y_i.
Step S2053: if the map coordinates (U, V) can be matched with the coordinates (x_i, y_i) of a vertex of a square in the square region, select the pixel map at the vertex N_i corresponding to the coordinates (x_i, y_i) as the rendering resource.
In the present embodiment, saying that the map coordinates (U, V) can be matched with the coordinates (x_i, y_i) in the square region does not mean that U equals x_i and V equals y_i exactly; rather, two positive thresholds (denoted here ε_x and ε_y) can be preset, and if the absolute difference between U and x_i, i.e. |U - x_i|, lies in [0, ε_x] and/or the absolute difference between V and y_i, i.e. |V - y_i|, lies in [0, ε_y], the map coordinates (U, V) are determined to match the coordinates (x_i, y_i) in the square region.
The process of matching the map coordinates (U, V) with the coordinates (x_i, y_i) in the square region is the process of selecting the pixel map at a particular vertex. For example, for the squares and vertices illustrated in fig. 6, if, with the rendering camera in a certain vector state, i.e. at a certain view angle, the map coordinates (U, V) are found through steps S2052 and S2053 to match the coordinates (x_16, y_16) of the vertex numbered 16, the pixel map at the vertex numbered 16 is selected as the rendering resource. As the view angle of the rendering camera keeps changing, the pixel maps at the 36 vertices illustrated in fig. 6 can eventually be matched one by one.
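A sketch of the threshold matching in steps S2052 and S2053; the threshold names eps_x and eps_y and the dictionary layout of the vertex table are naming choices of this illustration:

    def match_vertex(u, v, vertices, eps_x=0.05, eps_y=0.05):
        """vertices: dict mapping vertex number -> (x_i, y_i). Return the number
        of the first vertex matching the map coordinates (U, V) within the
        preset thresholds, or None if no vertex matches."""
        for number, (xi, yi) in vertices.items():
            if abs(u - xi) <= eps_x and abs(v - yi) <= eps_y:
                return number   # pixel map at this vertex becomes the rendering resource
        return None

    # e.g. with one entry of the 36-vertex table of the 5 x 5 division:
    verts = {16: (0.2, -0.6)}   # abbreviated table for illustration
    print(match_vertex(0.21, -0.58, verts))   # -> 16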
Step S2054: calculating a vertex N from the illumination data on the voxel grid and the volume sample data of the rendering resourcesiAnd fusing the pixel map and the scene of the model to be rendered.
Rendering a model essentially means obtaining the color information that results from fusing the model with the scene it is placed in. In the embodiment of the present application, once the color information obtained by fusing the pixel map at each vertex with the scene in which the model to be rendered is located has been calculated, the color information of the model to be rendered fused with its scene is obtained. Specifically, calculating, according to the illumination data on the voxel grids and the volume sampling data of the rendering resource, the color information after fusing the pixel map at the vertex N_i with the scene in which the model to be rendered is located can be implemented through the following steps S1 and S2:
step S1: according to the vertex NiThe depth of the pixel map, the depth of the scene where the model to be rendered is located and the radius of the hemisphere are processed, d' is calculated according to a Beer-Lambert formula and a Henyey-Greestein formula, and the position P is obtainedjScattered illumination energy L of the sampled voxel gridbAnd projected illumination energy Lhg
Here, the hemisphere is the hemisphere enclosing the model to be rendered onto whose spherical surface the position of the vertex N_i is mapped as the position P_j, such as the hemisphere shown in fig. 7, and d' is the percentage of the accumulated concentration of the voxel grids sampled at the position P_j that remains after being cropped by the scene. Let the depth of the pixel map at the vertex N_i be p_d and the depth of the scene in which the model to be rendered is located be w_d; the depth p_d of the pixel map at a vertex N_i, the depth w_d of the scene, and the forward depth f_d and the reverse depth b_d of the voxel grids sampled at the position P_j are shown in fig. 9. According to the depth p_d of the pixel map at the vertex N_i, the depth w_d of the scene in which the model to be rendered is located and the radius R of the hemisphere, d' is calculated, and according to the Beer-Lambert formula and the Henyey-Greenstein formula, the scattered illumination energy L_b and the projected illumination energy L_hg of the voxel grids sampled at the position P_j are calculated, as follows:
L_d = p_d + (f_d - 0.5) × R - w_d …………… (equation 4)
T_d = [f_d - (1 - b_d)] × R …………… (equation 5)
where T_d in equation 5 is the total volume depth of the voxel grids sampled at the position P_j.
d' = d × L_d / T_d …………… (equation 6)
where d in equation 6 is the accumulated concentration of the voxel grids sampled at the position P_j.
L_b = e^(-d') …………… (equation 7)
L_hg = 2 × (1 - g^2) / pow(1 + g^2 - 2 × g × LoV, 1.5) …………… (equation 8)
In equation 8, the prototype of the function pow(x, y) is pow(x, y) = x^y, LoV is the dot product of the illumination direction and the viewing direction as defined in the Henyey-Greenstein formula, with a value range of [-1, 1], and g is the scattering factor, which is usually a constant between 0 and 1.
Step S2: according to the formula C ═ S1*Lb*LhgD' calculating the vertex NiThe color information of the fused pixel map and the scene of the model to be rendered is obtained, wherein S1Is the brightness of the illumination onto the model to be rendered.
Obtaining L through the equations 4 to 7bAnd LhgThen, according to the formula C ═ S1*Lb*LhgD', the vertex N can be calculatediAnd fusing the pixel map and the scene of the model to be rendered. With vertex NiAccording to the difference of the color information of the model to be rendered and the color information of the scene where the model to be rendered is fused, the color information of the model to be rendered and the scene where the model to be rendered is fused during rendering can be calculated.
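Putting equations 4 to 8 and the formula C = S_1 * L_b * L_hg * d' together, a minimal numeric sketch follows; the values of S_1, g and LoV are placeholder inputs for illustration only:

    import math

    def fused_color(d, fd, bd, pd, wd, R, s1=1.0, g=0.5, lov=0.8):
        """Compute C = S_1 * L_b * L_hg * d' from the sampled data of one vertex."""
        ld = pd + (fd - 0.5) * R - wd         # equation 4
        td = (fd - (1.0 - bd)) * R            # equation 5: total volume depth
        d_prime = d * ld / td                 # equation 6: concentration left after scene crop
        lb = math.exp(-d_prime)               # equation 7: Beer-Lambert scattered energy
        lhg = 2.0 * (1.0 - g * g) / pow(1.0 + g * g - 2.0 * g * lov, 1.5)  # equation 8
        return s1 * lb * lhg * d_prime        # C = S_1 * L_b * L_hg * d'

    print(fused_color(d=0.4, fd=0.7, bd=0.6, pd=5.0, wd=4.5, R=2.0))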
As can be seen from the volume rendering method illustrated in fig. 2, on the one hand, when the position of any vertex N_i of the squares into which the pixel map area is divided is mapped to the position P_i and a sampling camera is used to perform volume sampling on the voxel grids to obtain the accumulated concentration d, the forward depth f_d and the reverse depth b_d of the voxel grids, each pixel needs to be sampled at most 6 times, whereas a ray-tracing rendering scheme needs dozens of samples or more during volume sampling, so the amount of computation spent on the sampled data is greatly reduced; on the other hand, compared with the massive number of pixels that must be recorded in memory when ray tracing with 3D maps is used (namely 3 times the pixels of each map), the technical solution of the present application only needs to process the volume sampling data corresponding to the map at the vertices of each square, and the amount of volume sampling data is several orders of magnitude smaller than with the ray-tracing technique, so memory and video memory resources are markedly saved and the method can be applied to lightweight devices such as mobile terminals.
In order to better implement the volume rendering method according to the embodiment of the present application, an embodiment of the present application further provides a volume rendering device. Please refer to fig. 10, which is a schematic structural diagram of a volume rendering apparatus according to an embodiment of the present disclosure. The volume rendering apparatus may comprise a voxelization module 1001, a map tile division module 1002, a mapping module 1003, a data saving module 1004, and a rendering module 1005, wherein:
a voxelization module 1001 for voxelizing the model to be rendered into a plurality of voxel grids;
a tile block dividing module 1002, configured to divide a pixel map area with a predetermined resolution, so that the pixel map area is divided into a predetermined number of tiles;
a mapping module 1003 for dividing any vertex N of the block divided by the mapping block dividing module 1002iIs mapped to the position P of each sampling camera in the three-dimensional coordinate system for volume sampling of the voxel gridjWherein the volume sampling data acquired by volume sampling comprises the accumulated concentration d of the voxel grid and the forward depth f of the voxel grid acquired along the sampling direction of the sampling cameradAnd the inverse depth b of the voxel gridd
A data saving module 1004, configured to store, at the position of the vertex N_i, the volume sampling data acquired by the sampling camera at the position P_j;
and a rendering module 1005, configured to render the model to be rendered after matching the volume sampling data acquired by the sampling camera according to the view angle of the rendering camera.
Please refer to fig. 11, which is a schematic structural diagram of a volume rendering apparatus according to an embodiment of the present disclosure. Fig. 11 differs from fig. 10 in that: the volume rendering apparatus further comprises a two-dimensional mapping module 1101 and a recording module 1102, wherein:
a two-dimensional mapping module 1101, configured to map the pixel map area into a square area with a predetermined number of squares and a center at an origin of a two-dimensional coordinate system, for example, a square area with a center at the origin of the two-dimensional coordinate system and a side length of 2;
a recording module 1102 for recording any vertex N of each squareiCoordinates (x) in the square regioni,yi)。
Optionally, in the volume rendering apparatus illustrated in fig. 11, the mapping module 1003 is specifically configured to map the position of the vertex N_i to a viewpoint on the spherical surface of a hemisphere, wherein the viewpoint is the position P_j of the sampling camera and serves as the starting point of the sampling direction in which the sampling camera performs volume sampling on the voxel grids, and the hemisphere can at least enclose the model to be rendered.
Optionally, in the above embodiment of the present application, mapping the position of the vertex N_i to a viewpoint on the spherical surface of the hemisphere may be done as follows: calculate the mean E_1 of the sum of the coordinates x_i and y_i of the vertex N_i and the mean E_2 of their difference; take E_1 - 0.5, 1 - |E_1| - |E_2| and E_2 as the parameters of a normalization function and obtain the value of the normalization function, that value being the coordinates (p_x, p_y, p_z) of the viewpoint in the three-dimensional coordinate system, where |E_1| is the absolute value of E_1 and |E_2| is the absolute value of E_2.
Optionally, in the volume rendering apparatus illustrated in fig. 10, the data saving module 1004 is configured to store the accumulated concentration d of the voxel grids, the forward depth f_d of the voxel grids and the reverse depth b_d of the voxel grids acquired by the sampling camera at the position P_j as the R channel value, the G channel value and the B channel value of the RGB channels of the pixel map at the pixel corresponding to the position of the vertex N_i, respectively.
Optionally, in the volume rendering apparatus illustrated in fig. 11, the rendering module 1005 is specifically configured to obtain the map coordinates (U, V) in the current vector state according to the current vector of the rendering camera and the inverse world matrix of the model to be rendered, match the map coordinates (U, V) with the vertices of each square, select, if the map coordinates (U, V) can be matched with the coordinates (x_i, y_i) of a vertex of a square in the square region, the pixel map at the vertex N_i corresponding to the coordinates (x_i, y_i) as the rendering resource, and calculate the color information after the fusion of the pixel map with the scene in which the model to be rendered is located according to the illumination data on the voxel grids and the volume sampling data of the rendering resource.
Optionally, in the foregoing embodiment of the present application, calculating, according to the illumination data on the voxel grids and the volume sampling data of the rendering resource, the color information obtained after fusing the pixel map at the vertex N_i with the scene in which the model to be rendered is located may be done as follows: according to the depth of the pixel map at the vertex N_i, the depth of the scene in which the model to be rendered is located and the radius of the hemisphere, calculate d', and, according to the Beer-Lambert formula and the Henyey-Greenstein formula, calculate the scattered illumination energy L_b and the projected illumination energy L_hg of the voxel grids sampled at the position P_j; then calculate, according to the formula C = S_1 * L_b * L_hg * d', the color information after fusing the pixel map at the vertex N_i with the scene in which the model to be rendered is located, where the hemisphere is the hemisphere enclosing the model to be rendered onto whose spherical surface the position of the vertex N_i is mapped as the position P_j, d' is the percentage of the accumulated concentration of the voxel grids sampled at the position P_j that remains after being cropped by the scene, and S_1 is the brightness of the illumination onto the model to be rendered.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
In the volume rendering apparatus provided in the embodiment of the present application, on the one hand, when the position of any vertex N_i of the squares into which the pixel map area is divided is mapped to the position P_i and a sampling camera is used to perform volume sampling on the voxel grids to obtain the accumulated concentration d, the forward depth f_d and the reverse depth b_d of the voxel grids, each pixel needs to be sampled at most 6 times, whereas a ray-tracing rendering scheme needs dozens of samples or more during volume sampling, so the amount of computation spent on the sampled data is greatly reduced; on the other hand, compared with the massive number of pixels that must be recorded in memory when ray tracing with 3D maps is used (namely 3 times the pixels of each map), the technical solution of the present application only needs to process the volume sampling data corresponding to the map at the vertices of each square, and the amount of volume sampling data is several orders of magnitude smaller than with the ray-tracing technique, so memory and video memory resources are markedly saved and the apparatus can be applied to lightweight devices such as mobile terminals.
Correspondingly, the embodiment of the present application further provides a Computer device, where the Computer device may be a terminal or a server, and the terminal may be a terminal device such as a smart phone, a tablet Computer, a notebook Computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like. As shown in fig. 12, fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer apparatus 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the computer device configurations illustrated in the figures are not meant to be limiting of computer devices and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The processor 401 is a control center of the computer device 400, connects the respective parts of the entire computer device 400 using various interfaces and lines, performs various functions of the computer device 400 and processes data by running or loading software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the computer device 400 as a whole.
In the embodiment of the present application, the processor 401 in the computer device 400 loads instructions corresponding to processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions:
voxelizing a model to be rendered into a plurality of voxel grids; dividing a pixel map area with a predetermined resolution into a predetermined number of squares; mapping the position of any vertex N_i of each square obtained by the division to the position P_j of a sampling camera in the three-dimensional coordinate system used for volume sampling of the voxel grids, wherein the volume sampling data acquired by volume sampling comprise the accumulated concentration d of the voxel grids acquired along the sampling direction of the sampling camera, the forward depth f_d of the voxel grids and the reverse depth b_d of the voxel grids; storing, at the position of any vertex N_i of each square, the volume sampling data acquired by the sampling camera at the position P_j onto which the vertex N_i is mapped; and rendering the model to be rendered after matching the volume sampling data acquired by the sampling camera according to the view angle of the rendering camera.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 12, the computer device 400 further includes: touch-sensitive display screen 403, radio frequency circuit 404, audio circuit 405, input unit 406 and power 407. The processor 401 is electrically connected to the touch display screen 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power source 407. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 12 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch display screen 403 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user and various graphical user interfaces of the computer device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near the touch panel (for example, operations of the user on or near the touch panel using any suitable object or accessory such as a finger or a stylus) and generate corresponding operation instructions, which in turn execute corresponding programs. Alternatively, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 401, and can receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the touch panel transmits the touch operation to the processor 401 to determine the type of the touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize input and output functions. However, in some embodiments, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display screen 403 may also serve as a part of the input unit 406 to implement an input function.
In the embodiment of the present application, a game application is executed by the processor 401 to generate a graphical user interface on the touch display screen 403, where a virtual scene on the graphical user interface includes at least one skill control area, and the skill control area includes at least one skill control. The touch display screen 403 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals, so as to establish wireless communication with a network device or another computer device and to exchange signals with the network device or the other computer device.
The audio circuit 405 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On one hand, the audio circuit 405 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 405 and converted into audio data; the audio data is then processed by the processor 401 and sent, for example, to another computer device via the radio frequency circuit 404, or output to the memory 402 for further processing. The audio circuit 405 may also include an earbud jack to provide communication between a peripheral headset and the computer device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the computer device 400. Optionally, the power source 407 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, power consumption management, and the like through the power management system. The power supply 407 may also include one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, or any other component.
Although not shown in fig. 12, the computer device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, in the computer device provided in this embodiment, when the sampling camera at the position P_j mapped from any vertex N_i of the squares divided in the pixel map area performs volume sampling on the voxel grid to obtain the accumulated concentration d, the forward depth f_d and the reverse depth b_d of the voxel grid, each pixel needs to be sampled at most 6 times, whereas a ray-tracing rendering scheme needs dozens of samples or more during volume sampling, so the amount of computation spent on the sampled data in the present scheme is greatly reduced. On the other hand, compared with the massive number of pixels that must be recorded in memory when ray tracing uses a 3D map (namely, the cube of the number of map pixels per block), the technical solution of the present application only needs to process the volume sampling data corresponding to the vertex map of each square; the data volume is therefore several orders of magnitude smaller than with the ray-tracing technique, which markedly saves memory and video memory resources and allows the method to be applied to lightweight devices such as mobile terminals.
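For intuition only, the following back-of-the-envelope sketch compares the two storage footprints under assumed, illustrative numbers (a pixel map split into 32 x 32 squares with one three-channel texel per vertex, versus a 256^3 three-channel 3D texture); none of these figures come from the embodiments above.

```python
# Illustrative storage comparison (assumed numbers, one byte per channel).
squares = 32
vertices = (squares + 1) ** 2                 # one texel of (d, f_d, b_d) per vertex
per_vertex_bytes = vertices * 3               # 3 267 bytes, roughly 3 KB

res_3d = 256
ray_traced_bytes = res_3d ** 3 * 3            # roughly 48 MB for a 3D map of the scene

print(per_vertex_bytes, ray_traced_bytes, ray_traced_bytes // per_vertex_bytes)
```
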
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any one of the volume rendering methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
voxelizing a model to be rendered into a plurality of voxel grids; dividing a pixel map area having a predetermined resolution, so as to divide the pixel map area into a predetermined number of squares; mapping any vertex N_i of each square obtained by the division to a position P_j of each sampling camera in a three-dimensional coordinate system for volume sampling of the voxel grids, wherein the volume sampling data acquired by volume sampling comprises an accumulated concentration d of the voxel grid, a forward depth f_d of the voxel grid and a reverse depth b_d of the voxel grid, acquired along the sampling direction of the sampling camera; at the position of any vertex N_i of each square, storing and processing the volume sampling data acquired by the sampling camera at the position P_j to which the vertex N_i is mapped; and rendering the model to be rendered after matching the volume sampling data acquired by the sampling camera according to the view angle of the rendering camera.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any volume rendering method provided in the embodiments of the present application, beneficial effects that can be achieved by any volume rendering method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The volume rendering method, the volume rendering device, the storage medium and the computer apparatus provided in the embodiments of the present application are described in detail above, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method of volume rendering, comprising:
voxelizing a model to be rendered into a plurality of voxel grids;
dividing a pixel map area having a predetermined resolution, so as to divide the pixel map area into a predetermined number of squares;
any vertex N of the squareiIs mapped to the position P of each sampling camera in the three-dimensional coordinate system for volume sampling the voxel gridjThe volume sampling data acquired by volume sampling comprises the accumulated concentration d of the voxel grid acquired along the sampling direction of the sampling camera, and the forward depth f of the voxel griddAnd the inverse depth b of the voxel gridd
At the vertex NiPosition of (2) storing said position PjProcessing the volume sampling data acquired by the sampling camera;
and according to the visual angle of a rendering camera, rendering the model to be rendered after matching the volume sampling data acquired by the sampling camera.
2. The volume rendering method of claim 1, wherein, after the pixel map area having the predetermined resolution is divided into the predetermined number of squares, the method further comprises:
mapping the predetermined number of squares divided from the pixel map area to a square area centered at the origin of a two-dimensional coordinate system;
recording coordinates (x_i, y_i) of any vertex N_i of each square in the square area.
3. The volume rendering method of claim 2, wherein the mapping of any vertex N_i of each square to the position P_j of each sampling camera in the three-dimensional coordinate system for volume sampling of the voxel grids comprises:
mapping the vertex N_i to a viewpoint on the spherical surface of a hemisphere, the viewpoint being the position P_j of the sampling camera and serving as the starting point of the sampling direction in which the sampling camera performs volume sampling on the voxel grid, wherein the hemisphere at least encloses the model to be rendered.
4. The volume rendering method of claim 3, wherein the mapping of the vertex N_i to a viewpoint on the spherical surface of the hemisphere comprises:
finding a mean value E_1 of the sum of the coordinates x_i and y_i of the vertex N_i, and a mean value E_2 of the difference of the coordinates x_i and y_i;
taking E_1 - 0.5, 1 - |E_1| - |E_2| and E_2 as parameters of a normalization function, and finding the function value of the normalization function as the coordinates (p_x, p_y, p_z) of the viewpoint in the three-dimensional coordinate system, wherein |E_1| is the absolute value of E_1 and |E_2| is the absolute value of E_2.
5. The volume rendering method of claim 1, wherein the storing and processing, at the position of the vertex N_i, of the volume sampling data acquired by the sampling camera at the position P_j comprises:
storing, in the pixel corresponding to the position of the vertex N_i, the accumulated concentration d of the voxel grid, the forward depth f_d of the voxel grid and the reverse depth b_d of the voxel grid acquired by the sampling camera at the position P_j as the R channel value, the G channel value and the B channel value of the RGB channels of the pixel map, respectively.
6. The volume rendering method of claim 2, wherein the rendering the model to be rendered after matching the volume sample data acquired by the sampling camera according to the view angle of the rendering camera comprises:
finding map coordinates (U, V) in the current vector state according to the current vector of the rendering camera and the inverse world matrix of the model to be rendered;
matching the map coordinates (U, V) against the vertices of each square;
if the map coordinates (U, V) match the coordinates (x_i, y_i) of a vertex of a square in the square area, selecting the pixel map of the vertex N_i corresponding to the coordinates (x_i, y_i) as a rendering resource;
and calculating, according to the illumination data on the voxel grid and the volume sampling data of the rendering resource, the color information obtained after the pixel map is fused with the scene where the model to be rendered is located.
7. The volume rendering method of claim 6, wherein the calculating color information of the fused pixel map and the scene of the model to be rendered according to the illumination data on the voxel grid and the volume sampling data of the rendering resource comprises:
calculating d' according to the depth of the pixel map of the vertex N_i, the depth of the scene where the model to be rendered is located and the radius of the hemisphere, and calculating, according to a Beer-Lambert formula and a Henyey-Greenstein formula, the scattered illumination energy L_b and the projected illumination energy L_hg of the voxel grid sampled at the position P_j, wherein the hemisphere is the hemisphere that at least encloses the model to be rendered and onto whose spherical surface the vertex N_i is mapped as the position P_j, and d' is the percentage of the accumulated concentration of the voxel grid sampled at the position P_j that remains after being clipped by the scene;
calculating, according to the formula C = S_1 * L_b * L_hg * d', the color information obtained after the pixel map is fused with the scene where the model to be rendered is located, wherein S_1 is the brightness of the illumination on the model to be rendered.
8. A volume rendering apparatus, comprising:
the voxelization module is used for voxelizing the model to be rendered into a plurality of voxelization grids;
the mapping square dividing module is used for dividing a pixel mapping area with a preset resolution so as to divide the pixel mapping area into a preset number of squares;
a mapping module for mapping any vertex N of the squareiIs mapped to the position P of each sampling camera in the three-dimensional coordinate system for volume sampling the voxel gridjThe volume sampling data acquired by volume sampling comprises the accumulated concentration d of the voxel grid acquired along the sampling direction of the sampling camera, and the forward depth f of the voxel griddAnd the inverse depth b of the voxel gridd
A data saving module for saving data at the vertex NiPosition of (2) storing said position PjVolume sampling data acquired by an up-sampling camera;
and the rendering module is used for rendering the model to be rendered after matching the volume sampling data acquired by the sampling camera according to the visual angle of the rendering camera.
9. A computer-readable storage medium, characterized in that it stores a computer program adapted to be loaded by a processor for performing the steps of the volume rendering method according to any one of claims 1 to 7.
10. A computer device, characterized in that it comprises a memory in which a computer program is stored and a processor which performs the steps in the volume rendering method according to any one of claims 1 to 7 by calling the computer program stored in the memory.
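The vertex-to-viewpoint mapping of claims 3 and 4 can be illustrated with the following minimal sketch. It assumes that the vertex coordinates (x_i, y_i) lie in a unit square centered at the origin, that the "normalization function" scales the three parameters to a unit-length vector, and that the hemisphere has radius 1; the function name viewpoint is invented for this example.

```python
import math

def viewpoint(x_i, y_i):
    e1 = (x_i + y_i) / 2.0            # mean value E_1 of the sum of the coordinates
    e2 = (x_i - y_i) / 2.0            # mean value E_2 of the difference of the coordinates
    px, py, pz = e1 - 0.5, 1.0 - abs(e1) - abs(e2), e2
    n = math.sqrt(px * px + py * py + pz * pz)
    return px / n, py / n, pz / n     # (p_x, p_y, p_z) on the unit hemisphere

print(viewpoint(0.25, -0.25))
```
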
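The per-texel storage of claim 5 amounts to packing the three sampled quantities into the R, G and B channels of one pixel. A small sketch, assuming the quantities are already normalized to [0, 1] and the pixel map uses 8-bit channels:

```python
def pack_texel(d, f_d, b_d):
    """Pack (accumulated concentration, forward depth, reverse depth) into (R, G, B)."""
    to_byte = lambda v: max(0, min(255, int(round(v * 255))))
    return to_byte(d), to_byte(f_d), to_byte(b_d)

print(pack_texel(0.42, 0.10, 0.85))
```
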
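The color fusion of claims 6 and 7 combines the baked volume data with the scene using the Beer-Lambert and Henyey-Greenstein formulas and the relation C = S_1 * L_b * L_hg * d'. The sketch below is a hedged illustration: the way it derives d' from the forward depth, reverse depth and scene depth is one plausible reading, not the exact computation of the claims, and g, sigma and S_1 are illustrative parameters.

```python
import math

def henyey_greenstein(cos_theta, g=0.6):
    """Henyey-Greenstein phase function."""
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def beer_lambert(concentration, path_length, sigma=1.0):
    """Beer-Lambert transmittance along the sampled path."""
    return math.exp(-sigma * concentration * path_length)

def fuse_color(d, f_d, b_d, scene_depth, radius, cos_theta, s1=1.0):
    span = max(b_d - f_d, 1e-6)                     # thickness of the sampled volume
    visible = min(max(scene_depth - f_d, 0.0), span)
    d_prime = visible / span                        # share of the volume not clipped by the scene
    l_b = beer_lambert(d, span * radius)            # scattered illumination energy L_b
    l_hg = henyey_greenstein(cos_theta)             # projected illumination energy L_hg
    return s1 * l_b * l_hg * d_prime                # C = S_1 * L_b * L_hg * d'

print(fuse_color(d=0.8, f_d=0.2, b_d=0.7, scene_depth=0.9, radius=1.0, cos_theta=0.5))
```
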
CN202011017777.0A 2020-09-24 2020-09-24 Volume rendering method and device, storage medium and computer equipment Pending CN112138386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011017777.0A CN112138386A (en) 2020-09-24 2020-09-24 Volume rendering method and device, storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN112138386A true CN112138386A (en) 2020-12-29

Family

ID=73897965


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040222988A1 (en) * 2003-05-08 2004-11-11 Nintendo Co., Ltd. Video game play using panoramically-composited depth-mapped cube mapping
US20070216676A1 (en) * 2006-03-16 2007-09-20 Samsung Electronics Co., Ltd Point-based rendering apparatus, method and medium
US20140306955A1 (en) * 2013-04-16 2014-10-16 Autodesk, Inc. Voxelization techniques
JP6544472B1 (en) * 2018-09-06 2019-07-17 大日本印刷株式会社 Rendering device, rendering method, and program
CN110152291A (en) * 2018-12-13 2019-08-23 腾讯科技(深圳)有限公司 Rendering method, device, terminal and the storage medium of game picture
CN110717964A (en) * 2019-09-26 2020-01-21 深圳市名通科技股份有限公司 Scene modeling method, terminal and readable storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112957731A (en) * 2021-03-26 2021-06-15 深圳市凉屋游戏科技有限公司 Picture rendering method, picture rendering device and storage medium
CN112957731B (en) * 2021-03-26 2021-11-26 深圳市凉屋游戏科技有限公司 Picture rendering method, picture rendering device and storage medium
CN113658316A (en) * 2021-10-18 2021-11-16 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN113947657A (en) * 2021-10-18 2022-01-18 网易(杭州)网络有限公司 Target model rendering method, device, equipment and storage medium
CN113658316B (en) * 2021-10-18 2022-03-08 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN114332337A (en) * 2021-12-23 2022-04-12 武汉大学 Shadow analysis and three-dimensional visualization method considering cloud accumulation density
CN114332337B (en) * 2021-12-23 2024-04-02 武汉大学 Shadow analysis and three-dimensional visualization method considering cloud density
CN114748872A (en) * 2022-06-13 2022-07-15 深圳市乐易网络股份有限公司 Game rendering updating method based on information fusion
CN116109756A (en) * 2023-04-13 2023-05-12 腾讯科技(深圳)有限公司 Ray tracing method, device, equipment and storage medium
CN116109756B (en) * 2023-04-13 2023-06-30 腾讯科技(深圳)有限公司 Ray tracing method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination