CN112070874A - Image rendering method and device - Google Patents

Image rendering method and device

Info

Publication number
CN112070874A
Authority
CN
China
Prior art keywords
data
area
block
target area
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011121384.4A
Other languages
Chinese (zh)
Inventor
覃健青
程佳
齐锦楠
黄奕达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Online Game Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Online Game Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Online Game Technology Co Ltd filed Critical Zhuhai Kingsoft Online Game Technology Co Ltd
Priority to CN202011121384.4A priority Critical patent/CN112070874A/en
Publication of CN112070874A publication Critical patent/CN112070874A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The application provides an image rendering method and device. The method comprises: acquiring target area data; discretizing the target area data to obtain characterization data for each area block; determining position information of a virtual camera in a virtual scene, and determining, from the camera's position information and the characterization data of the area blocks, the characterization data of the area blocks within the camera's field of view; and sampling and rendering based on the characterization data of the area blocks within the camera's field of view to obtain a target area image in the virtual scene. Because the target area image is rendered from the characterization data rather than from the raw target area data, the workload of rendering the scene blocks within the camera's field of view is greatly reduced, rendering time is saved, the production efficiency of the target area image is improved, and development cost is reduced.

Description

Image rendering method and device
Technical Field
The present application relates to the field of virtual scene processing technologies, and in particular, to a method and an apparatus for image rendering, a computing device, and a computer-readable storage medium.
Background
In the process of drawing a virtual scene, the scene is built with texture maps: a map is a picture pasted onto one or more surfaces of a 3D model. The picture can be arbitrary, but is usually a generic pattern such as brick, plants, or wasteland, used to improve the realism of the virtual scene.
When a large scene is created, for example the water surface of a wide lake in a game, the prior art prepares the water-surface map in advance and then places and pastes it at the target position in the specific virtual scene.
Disclosure of Invention
In view of this, embodiments of the present application provide an image rendering method and apparatus, a computing device, and a computer-readable storage medium, so as to solve technical defects existing in the prior art.
The embodiment of the application provides an image rendering method, which comprises the following steps:
acquiring target area data;
discretizing the target area data to obtain characterization data of each area block;
determining position information of a virtual camera in a virtual scene, and determining the characterization data of the area blocks within the field of view of the virtual camera according to the position information of the virtual camera and the characterization data of the area blocks;
and sampling and rendering based on the characterization data of the area blocks within the field of view of the virtual camera to obtain a target area image in the virtual scene.
Optionally, discretizing the target area data to obtain characterization data of each area block comprises: performing contour processing on the target area data to generate a rectangular area corresponding to the target area; intersecting the rectangular area with an initial quadtree to establish a quadtree corresponding to the target area data; and discretizing the target area data based on the quadtree to obtain the characterization data of each area block, wherein the quadtree comprises a plurality of leaf nodes and each leaf node corresponds to the characterization data of one area block.
Optionally, discretizing the target area data based on the quadtree to obtain characterization data of each area block comprises:
discretizing the target area data based on the quadtree to obtain data of a plurality of area blocks;
if the data of an area block is entirely target area data, determining the mark of the area block to be 1, and taking the data of the area block as the characterization data of the area block;
if none of the data of an area block is target area data, determining the mark of the area block to be 0;
if an area block is an edge area block and part of its data is target area data, determining the mark of the area block to be 2, and taking the data of the area block together with mask data as the characterization data of the area block.
Optionally, the data of an area block comprises: common data and offset data;
discretizing the target area data based on the quadtree to obtain data of a plurality of area blocks comprises:
discretizing the target area data based on the quadtree to obtain the common data and offset data of each area block, wherein the common data of all area blocks is the same.
Optionally, sampling and rendering based on the characterization data of the area blocks within the field of view of the virtual camera to obtain a target area image in the virtual scene comprises:
if the mark of an area block within the field of view of the virtual camera is 2, writing the mask data of the area block into a buffer, and generating index information corresponding to the mask data of the area block;
reading the index information with a shader, and sampling the characterization data of the area block in the buffer;
and rendering after discarding the data other than the target area data to obtain the target area image.
Optionally, generating index information corresponding to the mask data of an area block comprises:
setting an attribute value associated with the mask data of each area block, wherein the size of each area block is 32 × 32;
and generating a plurality of rendering instances corresponding to the area blocks based on the attribute values, wherein each rendering instance comprises 4 index entries and each index entry corresponds to one area block.
Optionally, sampling and rendering based on the characterization data of the area blocks within the field of view of the virtual camera to obtain a target area image in the virtual scene comprises:
if the mark of an area block within the field of view of the virtual camera is 1, sampling and rendering according to the common data and offset data of the area block to obtain the target area image.
The embodiment of the application provides an image rendering device, comprising:
an acquisition module configured to acquire target area data;
a discretization module configured to discretize the target area data to obtain characterization data of each area block;
a determining module configured to determine position information of a virtual camera in a virtual scene, and to determine the characterization data of the area blocks within the field of view of the virtual camera according to the position information of the virtual camera and the characterization data of the area blocks;
and a processing module configured to sample and render based on the characterization data of the area blocks within the field of view of the virtual camera to obtain a target area image in the virtual scene.
Optionally, the discretization module is specifically configured to:
perform contour processing on the target area data to generate a rectangular area corresponding to the target area;
intersect the rectangular area with an initial quadtree to establish a quadtree corresponding to the target area data;
and discretize the target area data based on the quadtree to obtain the characterization data of each area block, wherein the quadtree comprises a plurality of leaf nodes and each leaf node corresponds to the characterization data of one area block.
Optionally, the discretization module is specifically configured to:
discretize the target area data based on the quadtree to obtain data of a plurality of area blocks;
if the data of an area block is entirely target area data, determine the mark of the area block to be 1 and take the data of the area block as the characterization data of the area block;
if none of the data of an area block is target area data, determine the mark of the area block to be 0;
and if an area block is an edge area block and part of its data is target area data, determine the mark of the area block to be 2 and take the data of the area block together with mask data as the characterization data of the area block.
Optionally, the data of an area block comprises: common data and offset data;
and the discretization module is specifically configured to: discretize the target area data based on the quadtree to obtain the common data and offset data of each area block, wherein the common data of all area blocks is the same.
Optionally, the processing module is specifically configured to: if the mark of an area block within the field of view of the virtual camera is 2, write the mask data of the area block into a buffer and generate index information corresponding to the mask data of the area block;
read the index information with a shader, and sample the characterization data of the area block in the buffer;
and render after discarding the data other than the target area data to obtain the target area image.
Optionally, the processing module is specifically configured to:
set an attribute value associated with the mask data of each area block, wherein the size of each area block is 32 × 32;
and generate a plurality of rendering instances corresponding to the area blocks based on the attribute values, wherein each rendering instance comprises 4 index entries and each index entry corresponds to one area block.
Optionally, the processing module is specifically configured to: if the mark of an area block within the field of view of the virtual camera is 1, sample and render according to the common data and offset data of the area block to obtain the target area image.
Embodiments of the present application provide a computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the image rendering method described above when executing the instructions.
Embodiments of the present application provide a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the image rendering method described above.
In the image rendering method and device of the present application, rendering is not performed directly from the acquired target area data. Instead, the target area data is discretized to obtain characterization data for each area block; the characterization data of the area blocks within the field of view of the virtual camera is then determined, sampled, and rendered to obtain the target area image in the virtual scene. Because the characterization data replaces the raw target area data during rendering, the workload of rendering the scene blocks within the camera's field of view is greatly reduced, rendering time is saved, the production efficiency of the target area image is improved, and development cost is reduced.
Secondly, the characterization data of the area blocks comprises common data and offset data. Since every area block can share one copy of the common data and be characterized by its own offset data, the data volume is reduced and the production efficiency of the target area image is further improved.
Drawings
FIG. 1 is a schematic block diagram of a computing device according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method of image rendering according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the segmentation produced by the image rendering method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of an image rendering method according to another embodiment of the present application;
FIG. 5 is a schematic diagram of the selection of area blocks within the field of view of the virtual camera according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present description, "first" may also be referred to as "second", and similarly "second" may be referred to as "first". The word "if", as used herein, may be interpreted as "when", "while", or "in response to determining", depending on the context.
First, the noun terms referred to in the present embodiment are schematically explained.
Virtual scene: a two-dimensional or three-dimensional scene of game play.
Target area data: data corresponding to the area that needs to be rendered, such as the water surface or the mountains in a game scene.
Area block: an area block obtained by discretizing the target area data; area blocks may have various sizes, for example 32 × 32 or 64 × 64.
Common data and offset data: for example, a lake surface requires 100,000 vertices to represent, and after it is cut into area blocks, each area block has 100 vertices. The data of each area block can then be represented by taking the data of 100 base vertices as common data plus one piece of offset data per block.
Mark: when the data of an area block is entirely target area data, the block is marked 1; when none of its data is target area data, it is marked 0; when only part of its data is target area data, it is marked 2.
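The three-valued mark above can be sketched in a few lines. The function name and the boolean-grid representation are illustrative assumptions, not part of the patent:

```python
def mark_block(block_cells):
    """Classify an area block by how much of it is target area data.

    block_cells is a 2D list of booleans: True where a cell belongs to
    the target area (e.g. water), False otherwise. Returns the mark
    described above: 1 (all target data), 0 (none), or 2 (edge block
    with partial target data). Illustrative sketch only.
    """
    flat = [cell for row in block_cells for cell in row]
    if all(flat):
        return 1   # entirely target area data
    if not any(flat):
        return 0   # no target area data
    return 2       # edge block: partially target area data
```

A fully water-covered block would thus be marked 1 and need no mask, while shoreline blocks get mark 2 and carry mask data.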
In the present application, a method and an apparatus for image rendering, a computing device and a computer readable storage medium are provided, which are described in detail in the following embodiments one by one.
Fig. 1 is a block diagram illustrating a configuration of a computing device 100 according to an embodiment of the present specification. The components of the computing device 100 include, but are not limited to, memory 110 and processor 120. The processor 120 is coupled to the memory 110 via a bus 130 and a database 150 is used to store data.
Computing device 100 also includes an access device 140 that enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or combinations of communication networks such as the Internet. The access device 140 may include one or more of any type of wired or wireless network interface (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, or a Near Field Communication (NFC) interface.
In one embodiment of the present description, the above-described components of computing device 100 and other components not shown in FIG. 1 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 1 is for purposes of example only and is not limiting as to the scope of the description. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the method shown in fig. 2. Fig. 2 is a schematic flow chart illustrating a method of image rendering according to a first embodiment of the present application, including steps 202 to 208.
202. Target area data is acquired.
A high-precision original map is produced and pasted onto a three-dimensional model to obtain the virtual scene. The original map may depict, for example, a wide lake surface, a wasteland, or mountains. The original map can present three-dimensional effects such as specular highlights, shadows, and depth, but a high-precision original map occupies a large amount of memory.
The original map may be at least one of a color map, a normal map, a specular map, or a bump map, and subtle three-dimensional detail can also be produced by combining the normal, specular, and bump maps.
Correspondingly, the target area may be of various kinds, such as a water surface area, a desert area, or a mountain area.
In this embodiment, the target area data is the original, unsegmented data. The method of this embodiment does not render directly from this target area data, since doing so would involve an excessive amount of data processing and affect the normal production of the virtual scene.
204. And carrying out discretization processing on the target area data to obtain the characterization data of each area block.
In this embodiment, there are various discretization methods, such as uniform discretization and non-uniform discretization. For example, with mesh discretization the mesh density can be increased dynamically: the density of mesh vertices around the main camera can be increased on the GPU based on the main camera's position, so that the discretization accuracy near the camera differs from that in the distance. This step is described below taking uniform discretization as an example.
Specifically, step 204 includes the following steps S242 to S246:
and S242, performing contour processing according to the target area data to generate a rectangular area corresponding to the target area.
In this step, a rectangle relatively close to the outline of the target area is generated according to information such as the rotational translation of the target area data, and the rectangular area surrounded by the rectangle is the rectangular area corresponding to the target area.
Referring to fig. 3, fig. 3 shows a rectangular area surrounding the target area.
S244, intersect the rectangular area with an initial quadtree to establish a quadtree corresponding to the target area data.
The quadtree in this embodiment is a spatial index structure for placing and locating records. The basic idea is to divide an image or grid map into four equal sub-regions and then recursively examine the grid values of each sub-region: if all cells of a sub-region contain the same value (gray level or attribute value), the sub-region is not subdivided further; otherwise it is again divided into four sub-regions. The subdivision recurses until every sub-region is uniform. Each sub-region corresponds to a node.
A quadtree comprises a root node, intermediate nodes, and leaf nodes. The root node is the node that has no parent, only children; a leaf node has no children, only a parent.
For example, if the initial quadtree has 32 × 32 leaf nodes, the rectangular area is intersected with the initial quadtree and the target area data is distributed into the 32 × 32 leaf nodes, giving the data of the area block corresponding to each leaf node.
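The recursive subdivision described above can be sketched as follows. The grid representation and the stopping rule are simplified assumptions (the patent intersects the rectangle with a fixed 32 × 32-leaf initial quadtree):

```python
def build_quadtree(grid, x, y, size, min_size=1):
    """Recursively subdivide a square region of a grid into quadtree
    leaves. grid[row][col] holds a cell value (e.g. 0 outside the
    target area, 1 inside). A node stops subdividing when all of its
    cells share one value or it reaches min_size. Each leaf is
    returned as (x, y, size, values). Illustrative sketch only."""
    values = {grid[row][col]
              for row in range(y, y + size)
              for col in range(x, x + size)}
    if len(values) == 1 or size <= min_size:
        return [(x, y, size, frozenset(values))]
    half = size // 2
    leaves = []
    for dy in (0, half):          # visit the four quadrants
        for dx in (0, half):
            leaves += build_quadtree(grid, x + dx, y + dy, half, min_size)
    return leaves

# A 4x4 grid whose top-left quadrant is target area data:
grid = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
leaves = build_quadtree(grid, 0, 0, 4)   # four uniform leaves of size 2
```

Each leaf here plays the role of one area block whose data is later classified and characterized.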
S246, discretize the target area data based on the quadtree to obtain the characterization data of each area block, wherein the quadtree comprises a plurality of leaf nodes and each leaf node corresponds to the characterization data of one area block.
Specifically, step S246 includes the following steps S2462 to S2468:
S2462, discretize the target area data based on the quadtree to obtain the data of each area block.
In this embodiment, the data of an area block comprises common data and offset data, and step S2462 comprises: discretizing the target area data based on the quadtree to obtain the common data and offset data of each area block, wherein the common data of all area blocks is the same.
For example, a lake surface needs 100,000 vertices to represent, and after it is cut into area blocks, each area block has 100 vertices. The data of 100 base vertices is used as the common data shared by the blocks.
The data of area block 1 comprises: common data a and offset data b1;
the data of area block 2 comprises: common data a and offset data b2;
the data of area block 3 comprises: common data a and offset data b3;
the data of area block 4 comprises: common data a and offset data b4.
Thus area blocks 1 to 4 share one piece of common data, and the offset data of each block is enough to characterize that block's data. Since offset data occupies few bits, the storage and processing volume of the data can be greatly reduced.
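The shared-plus-offset representation of blocks 1 to 4 can be sketched as follows; the vertex counts are shrunk for readability and all names (common, block_vertices) are illustrative assumptions:

```python
def block_vertices(common_vertices, block_offset):
    """Reconstruct a block's vertex data on demand: translate the one
    shared copy of the common vertex patch by the block's (x, y)
    offset. Only the small offset needs to be stored per block."""
    ox, oy = block_offset
    return [(vx + ox, vy + oy) for (vx, vy) in common_vertices]

# One shared 2x2 vertex patch stands in for the 100-vertex block data:
common = [(0, 0), (1, 0), (0, 1), (1, 1)]
block1 = block_vertices(common, (0, 0))    # offset b1
block2 = block_vertices(common, (2, 0))    # offset b2: one block to the right
```

Storing one shared patch plus a two-number offset per block is what reduces the data volume compared with storing full vertex data per block.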
S2464, if the data of an area block is entirely target area data, determine the mark of the area block to be 1 and take the data of the area block as its characterization data.
S2466, if none of the data of an area block is target area data, determine the mark of the area block to be 0.
S2468, if an area block is an edge area block and part of its data is target area data, determine the mark of the area block to be 2 and take the data of the area block together with mask data as its characterization data.
Referring to fig. 3, the area blocks a22, a23, a32 and a33 are marked 1; the area blocks a41 and a44 are marked 0; and the area blocks a11 to a14, a21, a24, a31, a34, a42 and a43 are marked 2.
In this embodiment, the characterization data of an area block comprises common data and offset data. Since every area block can share one copy of the common data and be characterized by its own offset data, the data volume is reduced and the production efficiency of the target area image is improved.
206. Determine the position information of the virtual camera in the virtual scene, and determine the characterization data of the area blocks within the field of view of the virtual camera according to the position information of the camera and the characterization data of the area blocks.
A virtual camera is a camera simulated in software on the device; it is the tool that represents the viewpoint in a three-dimensional virtual scene.
In this embodiment, the field of view of the virtual camera in the virtual scene, that is, the space of the virtual scene captured by the virtual camera, or in other words the part of the virtual scene covered by the camera, is determined from the coordinate information and pose data contained in the position information of the virtual camera.
The coordinate information of the virtual camera comprises its x, y and z values, and its pose information comprises its Rx, Ry and Rz values.
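A much-simplified 2D version of this visibility test can be sketched as follows. It treats the camera's footprint as an axis-aligned rectangle and ignores the Rx/Ry/Rz rotation entirely, so it illustrates only the overlap test, not the patented computation:

```python
def blocks_in_view(blocks, view_min, view_max):
    """Return the area blocks whose bounds intersect the camera's view
    rectangle. Each block is (x, y, size); view_min and view_max are
    the (x, y) corners of the view footprint. Simplified sketch that
    ignores camera rotation."""
    vx0, vy0 = view_min
    vx1, vy1 = view_max
    return [(x, y, size)
            for (x, y, size) in blocks
            if x < vx1 and x + size > vx0
            and y < vy1 and y + size > vy0]
```

Blocks outside the returned set (mark 0 or simply not visible) never reach the sampling and rendering step, which is where the workload saving comes from.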
208. Sample and render based on the characterization data of the area blocks within the field of view of the virtual camera to obtain the target area image in the virtual scene.
Specifically, when the mark of an area block is 0, no processing is required, which saves data processing.
Specifically, when the mark of an area block is 1, there is no mask data, and the target area image is obtained by sampling and rendering directly from the common data and offset data of the block.
Specifically, when the mark of an area block is 2, step 208 comprises the following steps S282 to S286:
S282, if the mark of an area block within the field of view of the virtual camera is 2, write the mask data of the area block into a buffer and generate index information corresponding to the mask data of the block.
In this embodiment, the mask data of the area block is written into a buffer (structbuffer) and then transmitted to the graphics card (GPU) for rendering.
In step S282, generating index information corresponding to the mask data of an area block comprises: setting an attribute value associated with the mask data of each area block, wherein the size of each area block is 32 × 32; and generating a plurality of rendering instances corresponding to the area blocks based on the attribute values, wherein each rendering instance comprises 4 index entries and each index entry corresponds to one area block.
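Grouping edge blocks into rendering instances with 4 index entries each can be sketched as simple chunking. How the indices are laid out for the GPU is engine-specific, so this is only an assumption-laden illustration:

```python
def pack_instances(edge_block_ids, slots_per_instance=4):
    """Group the ids of mark-2 (edge) blocks into rendering instances,
    each carrying up to 4 index entries, one per block. Illustrative
    sketch; the real buffer layout is GPU/engine specific."""
    return [edge_block_ids[i:i + slots_per_instance]
            for i in range(0, len(edge_block_ids), slots_per_instance)]
```

Each resulting instance would then be drawn once, with its index entries telling the shader where each block's mask data sits in the buffer.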
S284, read the index information with a shader, and sample the characterization data of the area block in the buffer.
A shader is an editable program that replaces the fixed-function rendering pipeline to implement image rendering. Shaders include vertex shaders (Vertex Shader) and pixel shaders (Pixel Shader); the vertex shader is mainly responsible for operations such as the geometric transformations of vertices, while the pixel shader is mainly responsible for computations such as fragment color.
S286, render after discarding the data other than the target area data to obtain the target area image.
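Step S286 — discarding everything except the target area data before rendering — amounts to filtering a block through its mask. A minimal sketch, with names assumed for illustration:

```python
def apply_mask(block_data, mask):
    """Keep only the target area entries of an edge (mark-2) block:
    entries whose mask bit is 1 survive; everything else is discarded
    before the block is rendered. Illustrative sketch only."""
    return [value for value, bit in zip(block_data, mask) if bit == 1]
```

In practice this filtering would happen on the GPU against the mask data sampled from the buffer, but the effect is the same: non-target data never contributes to the target area image.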
In the image rendering method provided by this embodiment, rendering is not performed directly from the acquired target area data. Instead, the target area data is discretized to obtain characterization data for each area block; the characterization data of the area blocks within the field of view of the virtual camera is then determined, sampled, and rendered to obtain the target area image in the virtual scene. Because the characterization data replaces the raw target area data during rendering, the workload of rendering the scene blocks within the camera's field of view is greatly reduced, rendering time is saved, the production efficiency of the target area image is improved, and development cost is reduced.
Secondly, the characterization data of the area blocks comprises common data and offset data. Since every area block can share one copy of the common data and be characterized by its own offset data, the data volume is reduced and the production efficiency of the target area image is further improved.
The embodiment of the application also discloses an image rendering method, illustrated schematically by taking the rendering of the water area shown in fig. 3 and fig. 5 as an example. Referring to FIG. 4, the method comprises the following steps 402 to 412:
402. Acquire the target water area data.
404. Perform contour processing on the target area data to generate a rectangular area corresponding to the target area.
406. Intersect the rectangular area with an initial quadtree to establish the quadtree corresponding to the target area data.
408. Discretize the target area data based on the quadtree to obtain the characterization data of each area block.
In this embodiment, for example, the quadtree includes 32 × 32 leaf nodes, and each leaf node corresponds to the characterization data of one region block.
If the data of the area blocks are all target area data, determining the mark of the area block as 1, and taking the data of the area block as the representation data of the area block;
if the data of the area blocks are not the target area data, determining that the marks of the area blocks are 0;
if the area block is an edge area block and part of the data of the area block is target area data, determining the mark of the area block to be 2, and taking the data of the area block and the mask data as the characterization data of the area block.
Referring to FIG. 3, the area blocks A22, A23, A32 and A33 are marked 1; the area blocks A41 and A44 are marked 0; and the area blocks A11 to A14, A21, A24, A31, A34, A42 and A43 are marked 2.
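The 0/1/2 marking rule above can be sketched as follows. This is an illustrative sketch with assumed names: a block's data is modeled as a list of booleans that are True where a cell holds target (water) data.

```python
def mark_block(cells):
    """Apply the 0/1/2 marking rule to one area block.

    cells: list of booleans, True where the cell holds target area data.
    Returns (mark, characterization data)."""
    if all(cells):
        # Fully inside the target area: mark 1, data itself is the
        # characterization data.
        return 1, {"data": cells}
    if not any(cells):
        # Fully outside the target area: mark 0, nothing to keep.
        return 0, None
    # Edge block, partially covered: mark 2, keep data plus a mask.
    mask = [1 if c else 0 for c in cells]
    return 2, {"data": cells, "mask": mask}
```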
410. Determining position information of a virtual camera in a virtual scene, and determining the characterization data of the area blocks within the visual field range of the virtual camera according to the position information of the virtual camera and the characterization data of the area blocks.
In this embodiment, there are 16 area blocks, A11 to A44. In one usage scenario, the area blocks within the virtual camera's visual field in the virtual scene are A22 to A24, A32 to A34 and A42 to A44. Correspondingly, the subsequent steps are performed on the area blocks A22 to A24, A32 to A34 and A42 to A44, as shown in FIG. 5.
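Step 410 amounts to a visibility filter over the block grid. A minimal sketch follows; the axis-aligned view rectangle and the radius parameter are assumptions (an actual engine would test against the camera frustum).

```python
def blocks_in_view(blocks, camera_pos, view_radius, block_size=32.0):
    """Keep only the blocks whose centers fall inside an axis-aligned
    square of half-width view_radius around the camera position.

    blocks: dict mapping (row, col) -> characterization data."""
    cx, cy = camera_pos
    visible = {}
    for (r, c), data in blocks.items():
        # Block center in world coordinates.
        bx = (c + 0.5) * block_size
        by = (r + 0.5) * block_size
        if abs(bx - cx) <= view_radius and abs(by - cy) <= view_radius:
            visible[(r, c)] = data
    return visible
```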
412. Sampling and rendering based on the characterization data of the area blocks within the visual field range of the virtual camera to obtain a target area image in the virtual scene.
For a detailed explanation of step 412, refer to the description of step 208 in the foregoing embodiment; details are not repeated here.
According to the image rendering method provided by this embodiment, after the target water area data is obtained, the obtained target water area data is discretized to obtain the characterization data of each area block, the characterization data of the area blocks located within the visual field range of the virtual camera is determined, and sampling and rendering are performed to obtain the target area image in the virtual scene. The characterization data replaces the target water area data to render the water area image, so that the workload of rendering the scene blocks within the visual field of the virtual camera is greatly reduced, the rendering time is saved, the production efficiency of the target area image is improved, and the development cost is reduced.
Secondly, the characterization data of the area blocks includes common data and offset data; the area blocks can share one piece of common data and be distinguished by their respective offset data, which reduces the data amount and further improves the production efficiency of the target area image.
Fig. 6 is a diagram illustrating an apparatus for image rendering according to a first embodiment of the present application, including:
an acquisition module 602 configured to acquire target area data;
a discretization module 604 configured to perform discretization processing on the target area data to obtain characterization data of each area block;
a determining module 606 configured to determine position information of a virtual camera in a virtual scene, and determine characterization data of the area block within a visual field range of the virtual camera according to the position information of the virtual camera and the characterization data of the area block;
and a processing module 608 configured to sample and render based on the characterization data of the area blocks within the visual field range of the virtual camera to obtain a target area image in the virtual scene.
Optionally, the discretization module 604 is specifically configured to:
performing contour processing according to the target area data to generate a rectangular area corresponding to the target area;
intersecting the rectangular area with an initial quadtree, and establishing a quadtree corresponding to the target area data;
discretizing the target area data based on the quadtree to obtain the characterization data of each area block, wherein the quadtree comprises a plurality of leaf nodes, and each leaf node corresponds to the characterization data of one area block.
Optionally, the discretization module 604 is specifically configured to:
discretizing the target area data based on the quadtree to respectively obtain data of a plurality of area blocks;
if all the data of an area block is target area data, determine the mark of the area block to be 1, and take the data of the area block as the characterization data of the area block;
if none of the data of an area block is target area data, determine the mark of the area block to be 0;
if an area block is an edge area block and only part of its data is target area data, determine the mark of the area block to be 2, and take both the data of the area block and its mask data as the characterization data of the area block.
Optionally, the data of the region block includes: common data and offset data;
the discretization module 604 is specifically configured to: discretizing the target area data based on the quadtree to respectively obtain common data and offset data of the area blocks, wherein the common data of the area blocks are the same.
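The common-data/offset-data split can be pictured as storing one shared base value plus a small per-block delta, with the original values recoverable by addition. The sketch below is an assumption about one possible layout (here using per-block height values), not the disclosed storage format.

```python
def split_common_offset(block_values):
    """Split per-block values into one shared common value plus
    per-block offsets. block_values: dict (row, col) -> float."""
    common = min(block_values.values())        # shared common data
    offsets = {k: v - common for k, v in block_values.items()}
    return common, offsets

def reconstruct(common, offsets):
    """Recover the original per-block values from common + offset."""
    return {k: common + d for k, d in offsets.items()}
```

Only one copy of the common data is stored for all blocks, so the total data amount drops roughly to one shared value plus the (typically small) offsets.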
Optionally, the processing module 608 is specifically configured to:
if the mark of an area block within the visual field range of the virtual camera is 2, write the mask data of the area block into a cache region, and generate index information corresponding to the mask data of the area block;
read the index information with a shader, and sample the characterization data of the area block in the cache region;
delete the data other than the target area data, and then render to obtain the target area image.
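The edge-block (mark 2) path above can be sketched as a cache buffer plus an index table: masks are appended to the cache region, and the index information records each block's slot so the shader-side sampler can find it. The class and the flat-list layout are assumptions for illustration.

```python
class MaskCache:
    """Toy model of the cache region and its index information."""

    def __init__(self):
        self.buffer = []   # cache region holding the mask arrays
        self.index = {}    # index information: block id -> buffer slot

    def write(self, block_id, mask):
        """Write a mark-2 block's mask into the cache region and record
        its index information. Returns the slot assigned."""
        self.index[block_id] = len(self.buffer)
        self.buffer.append(mask)
        return self.index[block_id]

    def sample(self, block_id):
        """Shader-side lookup: resolve the index, then read the mask."""
        return self.buffer[self.index[block_id]]
```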
Optionally, the processing module 608 is specifically configured to:
setting attribute values associated with mask data of each region block, wherein the size of each region block is 32 x 32;
and generating a plurality of rendering instances corresponding to the region blocks based on the attribute values, wherein each rendering instance comprises 4 index cases, and each index case corresponds to one region block.
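One way to read "each rendering instance comprises 4 index cases" is that a single per-instance attribute value carries the indices of four region blocks. The bit-packing scheme below (8 bits per index) is purely an assumption used to illustrate the idea, not the disclosed encoding.

```python
def pack_indices(i0, i1, i2, i3):
    """Pack four 8-bit block indices into one 32-bit attribute value,
    one value per rendering instance."""
    for i in (i0, i1, i2, i3):
        assert 0 <= i < 256, "each index must fit in 8 bits"
    return i0 | (i1 << 8) | (i2 << 16) | (i3 << 24)

def unpack_indices(attr):
    """Recover the four block indices from the packed attribute value."""
    return [(attr >> shift) & 0xFF for shift in (0, 8, 16, 24)]
```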
Optionally, the processing module 608 is specifically configured to: and if the mark of the area block in the visual field range of the virtual camera is 1, sampling and rendering according to the common data and the offset data of the area block to obtain the target area image.
The image rendering apparatus disclosed in this embodiment obtains the target area data, discretizes it to obtain the characterization data of each area block, determines the characterization data of the area blocks located within the visual field range of the virtual camera, and performs sampling and rendering to obtain the target area image in the virtual scene. The characterization data replaces the target area data to render the target area image, which greatly reduces the workload of rendering the scene blocks within the visual field of the virtual camera, saves rendering time, improves the production efficiency of the target area image, and reduces development cost.
An embodiment of the present application also provides a computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor executes the instructions to implement the steps of the method for image rendering as described above.
An embodiment of the present application also provides a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of image rendering as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the image rendering method belong to the same concept, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the image rendering method.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (16)

1. A method of image rendering, comprising:
acquiring target area data;
discretizing the target area data to obtain characterization data of each area block;
determining position information of a virtual camera in a virtual scene, and determining characterization data of an area block in the visual field range of the virtual camera according to the position information of the virtual camera and the characterization data of the area block;
and sampling and rendering based on the characterization data of the area block in the visual field range of the virtual camera to obtain a target area image in the virtual scene.
2. The method of claim 1, wherein discretizing the target region data to obtain characterization data for each region block comprises:
performing contour processing according to the target area data to generate a rectangular area corresponding to the target area;
intersecting the rectangular area with an initial quadtree, and establishing a quadtree corresponding to the target area data;
discretizing the target area data based on the quadtree to obtain the characterization data of each area block, wherein the quadtree comprises a plurality of leaf nodes, and each leaf node corresponds to the characterization data of one area block.
3. The method of claim 2, wherein discretizing the target region data based on the quadtree to obtain characterization data for each region block comprises:
discretizing the target area data based on the quadtree to respectively obtain data of a plurality of area blocks;
if all the data of an area block is target area data, determining the mark of the area block to be 1, and taking the data of the area block as the characterization data of the area block;
if none of the data of an area block is target area data, determining the mark of the area block to be 0;
if an area block is an edge area block and only part of its data is target area data, determining the mark of the area block to be 2, and taking both the data of the area block and its mask data as the characterization data of the area block.
4. The method of claim 3, wherein the data of the region block comprises: common data and offset data;
discretizing the target area data based on the quadtree to respectively obtain data of a plurality of area blocks, wherein the discretizing comprises the following steps:
discretizing the target area data based on the quadtree to respectively obtain common data and offset data of the area blocks, wherein the common data of the area blocks are the same.
5. The method of claim 3, wherein sampling and rendering based on characterization data of a region block within a field of view of the virtual camera to obtain a target region image in a virtual scene comprises:
if the mark of the area block in the visual field range of the virtual camera is 2, writing the mask data of the area block into a cache region, and generating index information corresponding to the mask data of the area block;
reading the index information by using a shader, and sampling the characterization data of the area block in the cache region;
and deleting other data except the target area data, and then rendering to obtain the target area image.
6. The method of claim 5, wherein generating index information corresponding to the mask data of the region block comprises:
setting attribute values associated with mask data of each region block, wherein the size of each region block is 32 x 32;
and generating a plurality of rendering instances corresponding to the region blocks based on the attribute values, wherein each rendering instance comprises 4 index cases, and each index case corresponds to one region block.
7. The method of claim 4, wherein sampling and rendering based on characterization data of a region block within a field of view of the virtual camera to obtain a target region image in a virtual scene comprises:
and if the mark of the area block in the visual field range of the virtual camera is 1, sampling and rendering according to the common data and the offset data of the area block to obtain the target area image.
8. An apparatus for image rendering, comprising:
an acquisition module configured to acquire target area data;
a discretization module configured to perform discretization processing on the target area data to obtain characterization data of each area block;
a determining module configured to determine position information of a virtual camera in a virtual scene, and determine the characterization data of the area block within the visual field range of the virtual camera according to the position information of the virtual camera and the characterization data of the area block;
and the processing module is configured to sample and render based on the characterization data of the area blocks in the visual field range of the virtual camera to obtain a target area image in the virtual scene.
9. The apparatus of claim 8, wherein the discretization module is specifically configured to:
performing contour processing according to the target area data to generate a rectangular area corresponding to the target area;
intersecting the rectangular area with an initial quadtree, and establishing a quadtree corresponding to the target area data;
discretizing the target area data based on the quadtree to obtain the characterization data of each area block, wherein the quadtree comprises a plurality of leaf nodes, and each leaf node corresponds to the characterization data of one area block.
10. The apparatus of claim 9, wherein the discretization module is specifically configured to:
discretizing the target area data based on the quadtree to respectively obtain data of a plurality of area blocks;
if all the data of an area block is target area data, determining the mark of the area block to be 1, and taking the data of the area block as the characterization data of the area block;
if none of the data of an area block is target area data, determining the mark of the area block to be 0;
if an area block is an edge area block and only part of its data is target area data, determining the mark of the area block to be 2, and taking both the data of the area block and its mask data as the characterization data of the area block.
11. The apparatus of claim 10, wherein the data of the region block comprises: common data and offset data;
the discretization module is specifically configured to: discretizing the target area data based on the quadtree to respectively obtain common data and offset data of the area blocks, wherein the common data of the area blocks are the same.
12. The apparatus of claim 10, wherein the processing module is specifically configured to: if the mark of the area block in the visual field range of the virtual camera is 2, writing the mask data of the area block into a cache region, and generating index information corresponding to the mask data of the area block;
reading the index information by using a shader, and sampling the characterization data of the area block in the cache region;
and deleting other data except the target area data, and then rendering to obtain the target area image.
13. The apparatus of claim 12, wherein the processing module is specifically configured to:
setting attribute values associated with mask data of each region block, wherein the size of each region block is 32 x 32;
and generating a plurality of rendering instances corresponding to the region blocks based on the attribute values, wherein each rendering instance comprises 4 index cases, and each index case corresponds to one region block.
14. The apparatus of claim 11, wherein the processing module is specifically configured to: and if the mark of the area block in the visual field range of the virtual camera is 1, sampling and rendering according to the common data and the offset data of the area block to obtain the target area image.
15. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-7 when executing the instructions.
16. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 7.
CN202011121384.4A 2020-10-19 2020-10-19 Image rendering method and device Pending CN112070874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011121384.4A CN112070874A (en) 2020-10-19 2020-10-19 Image rendering method and device


Publications (1)

Publication Number Publication Date
CN112070874A true CN112070874A (en) 2020-12-11

Family

ID=73655334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011121384.4A Pending CN112070874A (en) 2020-10-19 2020-10-19 Image rendering method and device

Country Status (1)

Country Link
CN (1) CN112070874A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831644A (en) * 2012-07-09 2012-12-19 哈尔滨工程大学 Marine environment information three-dimensional visualization method
CN103093497A (en) * 2013-01-09 2013-05-08 吉林大学 LIDAR data city fast reconstruction method based on layered outline
CN104952101A (en) * 2015-05-21 2015-09-30 中国人民解放军理工大学 Height-field based dynamic vector rendering method
CN106997612A (en) * 2016-01-13 2017-08-01 索尼互动娱乐股份有限公司 The apparatus and method of image rendering
US20180089894A1 (en) * 2016-09-27 2018-03-29 Adobe Systems Incorporated Rendering digital virtual environments utilizing full path space learning
CN111340926A (en) * 2020-03-25 2020-06-26 北京畅游创想软件技术有限公司 Rendering method and device
CN111656790A (en) * 2018-01-26 2020-09-11 夏普株式会社 System and method for signaling location information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG Zhenwu et al.: "A survey of terrain LOD technology based on quadtree segmentation", Computer Science, no. 04, 15 April 2018 (2018-04-15), pages 40 - 51 *
DENG Zhenghong et al.: "Research and implementation of large-scale terrain rendering technology", Journal of Northwestern Polytechnical University, no. 06, 15 December 2010 (2010-12-15), pages 137 - 141 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113209632A (en) * 2021-06-08 2021-08-06 腾讯科技(深圳)有限公司 Cloud game processing method, device, equipment and storage medium
CN113209632B (en) * 2021-06-08 2022-08-12 腾讯科技(深圳)有限公司 Cloud game processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105678683B (en) A kind of two-dimensional storage method of threedimensional model
CN110706341B (en) High-performance rendering method and device of city information model and storage medium
GB2559446A (en) Generating a three-dimensional model from a scanned object
CN110570507A (en) Image rendering method and device
CN110544291B (en) Image rendering method and device
CN110570506B (en) Map resource management method, device, computing equipment and storage medium
CN110516015B (en) Method for manufacturing geographical PDF map based on map graphic data and DLG
CN112569602B (en) Method and device for constructing terrain in virtual scene
CN107092354B (en) Sketchup model virtual reality transformation technology method
CN113112581A (en) Texture map generation method, device and equipment for three-dimensional model and storage medium
CN109816770B (en) Oil painting stroke simulation using neural networks
CN110866965A (en) Mapping drawing method and device for three-dimensional model
CN112070874A (en) Image rendering method and device
CN114820972A (en) Contour line and/or contour surface generation method, system, device and storage medium
CN111617480A (en) Point cloud rendering method and device
CN110363733B (en) Mixed image generation method and device
CN109285160B (en) Image matting method and system
CN110825250A (en) Optimization method and device for brush track
CN114820374A (en) Fuzzy processing method and device
CN110990104B (en) Texture rendering method and device based on Unity3D
CN110136235B (en) Three-dimensional BIM model shell extraction method and device and computer equipment
CN113786616A (en) Indirect illumination implementation method and device, storage medium and computing equipment
CN111617484B (en) Map processing method and device
CN116883575B (en) Building group rendering method, device, computer equipment and storage medium
WO2023221683A1 (en) Image rendering method and apparatus, device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.

Address before: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.