CN111667393B - Method and terminal for simulating raining in virtual scene - Google Patents

Method and terminal for simulating raining in virtual scene

Info

Publication number
CN111667393B
CN111667393B (application CN201910787925.8A)
Authority
CN
China
Prior art keywords
coordinates
camera
space
transformation matrix
raindrops
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910787925.8A
Other languages
Chinese (zh)
Other versions
CN111667393A (en)
Inventor
林进浔
黄明炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Shuboxun Information Technology Co ltd
Original Assignee
Fujian Shuboxun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Shuboxun Information Technology Co ltd filed Critical Fujian Shuboxun Information Technology Co ltd
Priority to CN201910787925.8A priority Critical patent/CN111667393B/en
Publication of CN111667393A publication Critical patent/CN111667393A/en
Application granted granted Critical
Publication of CN111667393B publication Critical patent/CN111667393B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/603D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/802D [Two Dimensional] animation, e.g. using sprites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

According to the method and terminal for simulating raining in a virtual scene provided by the invention, the depth information of all raindrop shields in the virtual scene is rendered to a preset target to obtain a depth information map; the coordinates of preset points in the current rendering picture are determined, and a first transformation matrix of the current scene camera from screen space to world space is acquired; the landing position of the raindrops in world space is then calculated from the depth information map, the coordinates of the preset points and the first transformation matrix, and the raining effect is rendered according to that position. No large amount of per-frame collision detection is required, which reduces both the amount of computation and the performance cost. Moreover, traditional collision detection must account for the influence of raindrop speed on the positions in the previous and next frames, which introduces a large error; calculating the landing position from depth information is independent of raindrop speed, improving the accuracy of the rendered raining effect.

Description

Method and terminal for simulating raining in virtual scene
Technical Field
The invention relates to the field of computer graphic images, in particular to a method and a terminal for simulating raining in a virtual scene.
Background
Computer graphics (CG) is the science of using mathematical algorithms to transform two- or three-dimensional graphics into a raster form suitable for computer display. Briefly, computer graphics studies how graphics are represented in a computer, together with the principles and algorithms for computing, processing and displaying them.
When simulating rain in a virtual scene, the landing position of each raindrop is usually computed by collision detection, which requires a large number of particles, involves heavy computation and yields low precision. Under the traditional collision-detection scheme, each particle must carry a collision box, and in every rendered frame thousands of particles must be intersection-tested against occluding objects; the computation is complex, the performance cost is high, and the approach cannot run on mobile platforms. As a result, existing rain-simulation methods render the raindrop collision effect with low accuracy.
Disclosure of Invention
First, the technical problem to be solved
In order to solve the problems in the prior art, the invention provides a method and a terminal for simulating raining in a virtual scene, which reduce the amount of computation and the performance cost and improve the accuracy of the rendered raining effect.
(II) technical scheme
In order to achieve the above purpose, the technical scheme adopted by the invention comprises the following steps:
a method of simulating rain in a virtual scene, comprising the steps of:
s1, rendering depth information of all rain drop shielding pieces in a virtual scene to a preset target to obtain a depth information map;
s2, determining coordinates of preset points in a current rendering picture, and acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
s3, calculating to obtain the falling position of the raindrops in the world space according to the depth information map, the coordinates of preset points and the first transformation matrix;
And S4, rendering a raining effect according to the falling position of the raindrops in the world space.
In order to achieve the above object, another technical solution adopted by the present invention is as follows:
a terminal for simulating rain in a virtual scene, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
s1, rendering depth information of all rain drop shielding pieces in a virtual scene to a preset target to obtain a depth information map;
s2, determining coordinates of preset points in a current rendering picture, and acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
s3, calculating to obtain the falling position of the raindrops in the world space according to the depth information map, the coordinates of preset points and the first transformation matrix;
and S4, rendering a raining effect according to the falling position of the raindrops in the world space.
(III) beneficial effects
The invention has the beneficial effects that: the depth information of all raindrop shields in the virtual scene is rendered to a preset target to obtain a depth information map; the coordinates of preset points in the current rendering picture are determined, and a first transformation matrix of the current scene camera from screen space to world space is acquired; the landing position of the raindrops in world space is calculated from the depth information map, the coordinates of the preset points and the first transformation matrix, and the raining effect is rendered according to that position. No large amount of per-frame collision detection is required, which reduces both the amount of computation and the performance cost. Moreover, traditional collision detection must account for the influence of raindrop speed on the positions in the previous and next frames, which introduces a large error; calculating the landing position from depth information is independent of raindrop speed, improving the accuracy of the rendered raining effect.
Drawings
FIG. 1 is a flow chart of a method for simulating rain in a virtual scene according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of a terminal simulating rain in a virtual scene according to an embodiment of the present invention.
[ reference numerals description ]
1: a terminal simulating raining in a virtual scene;
2: a memory;
3: a processor.
Detailed Description
The invention is explained in more detail by the following description of the embodiments with reference to the drawings.
Example 1
Referring to fig. 1, a method for simulating rain in a virtual scene includes the steps of:
s1, rendering depth information of all rain drop shielding pieces in a virtual scene to a preset target to obtain a depth information map;
the step S1 specifically comprises the following steps:
s11, adding a first camera into a virtual scene, and calculating a second transformation matrix of the first camera from world space to projection space according to parameters of the first camera and an orthogonal projection formula;
s12, starting a depth test, calculating coordinates of the vertexes of all the raindrop shields in the projection space of the first camera through the second transformation matrix and the coordinates of the vertexes of all the raindrop shields in the world space, and transmitting the coordinates to a fragment shader;
and S13, the fragment shader stores coordinates of the vertexes of the raindrop shields in the projection space of the first camera as depth information through colors, and a depth information map is obtained.
S2, determining coordinates of preset points in a current rendering picture, and acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
the step S2 specifically comprises the following steps:
s21, determining the size of a current rendering picture, and establishing a coordinate system to obtain coordinates of preset points;
s22, acquiring a first transformation matrix of the current scene camera from the screen space to the world space.
S3, calculating to obtain the falling position of the raindrops in the world space according to the depth information map, the coordinates of preset points and the first transformation matrix;
the step S3 specifically comprises the following steps:
s31, calculating to obtain the coordinates of the preset points in a screen space under a first camera according to the coordinates of the preset points and the first transformation matrix;
s32, randomly selecting a plurality of coordinate values from the coordinates of the preset points in the screen space under the first camera to serve as screen coordinates of the raindrops, and calculating the screen coordinates corresponding to the raindrops with the minimum depth values from the screen coordinates of all the raindrops according to the depth information map;
s33, calculating to obtain an inverse matrix of the second transformation matrix according to the second transformation matrix;
s34, obtaining world coordinates according to screen coordinates corresponding to the raindrops with the minimum depth value and the inverse matrix.
And S4, rendering a raining effect according to the falling position of the raindrops in the world space.
The step S4 specifically comprises the following steps:
and comparing the height component of the falling position of the raindrops in the world space with the height component of the world space position of the fragment in the fragment shader, and rendering the raining effect according to the comparison result.
Example two
The difference between the present embodiment and the first embodiment is that the present embodiment will further explain, with reference to a specific application scenario, how the method for simulating rain in the virtual scenario of the present invention is implemented:
s1, rendering depth information of all rain drop shielding pieces in a virtual scene to a preset target to obtain a depth information map;
specifically, the preset target is a single rendering target;
the step S1 specifically comprises the following steps:
s11, adding a first camera into a virtual scene, and calculating a second transformation matrix of the first camera from world space to projection space according to parameters of the first camera and an orthogonal projection formula;
specifically, the parameters of the first camera include the position, the direction, the near-platform distance, the far-plane distance and the view port size of the first camera in world space, the position of the first camera is set as the position of a raining starting point, the rotation is set as the raining direction, the near-plane is set to 0.1, the far-plane is set as the maximum height of raining, and the view port size is set as the size of a rectangular area requiring raining in the virtual scene;
s12, starting a depth test, calculating coordinates of the vertexes of all the raindrop shields in the projection space of the first camera through the second transformation matrix and the coordinates of the vertexes of all the raindrop shields in the world space, and transmitting the coordinates to a fragment shader;
specifically, a depth test is started in a rendering state to ensure that an object with the smallest depth value can be drawn to the forefront, and the depth values of other objects are replaced in a preset target;
and S13, the fragment shader stores coordinates of the vertexes of the raindrop shields in the projection space of the first camera as depth information through colors, and a depth information map is obtained.
Specifically, in the vertex and fragment shaders, the position of every vertex of each raindrop shield model in the virtual scene must be calculated in the projection space of the first camera;
the resulting position POS0 is a four-dimensional vector with components (x, y, z, w). A depth value, denoted DEPTH, is obtained by dividing the z component of POS0 by the w component. During rendering, DEPTH ranges from -1 to 1; since the depth information must be stored as a color, it is remapped to the range 0 to 1 as DEPTH * 0.5 + 0.5. This value is stored into the depth information map as the final depth value, denoted DEPTH0;
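The depth encoding just described (perspective divide, then remap from [−1, 1] to the [0, 1] color range) can be sketched as a small function; the name `encode_depth` is illustrative, while POS0, DEPTH and DEPTH0 follow the text.

```python
def encode_depth(pos0):
    """Given the projection-space position POS0 = (x, y, z, w),
    compute DEPTH = z / w (in [-1, 1]) and remap it to DEPTH0 in [0, 1]
    so it can be stored as a color channel of the depth information map."""
    x, y, z, w = pos0
    depth = z / w            # NDC depth, -1 at the near plane, 1 at the far plane
    return depth * 0.5 + 0.5 # DEPTH0, the stored value
```

For example, a vertex on the near plane (z/w = −1) is stored as 0.0 and one on the far plane (z/w = 1) as 1.0.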
s2, determining coordinates of preset points in a current rendering picture, and acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
the step S2 specifically comprises the following steps:
s21, determining the size of a current rendering picture, and establishing a coordinate system to obtain coordinates of preset points;
s22, acquiring a first transformation matrix of the current scene camera from the screen space to the world space.
Specifically, the size of the current rendered picture is first determined and recorded as SCREEN_SIZE, so that the upper-left corner of the screen has coordinates (0, 0) and the upper-right corner has coordinates (SCREEN_SIZE.x, 0); the first transformation matrix from the screen space of the current scene camera to world space is then obtained;
and S3, calculating according to the depth information map, coordinates of preset points and the first transformation matrix to obtain the falling position of the raindrops in the world space.
The step S3 specifically comprises the following steps:
s31, calculating to obtain the coordinates of the preset points in a screen space under a first camera according to the coordinates of the preset points and the first transformation matrix;
specifically, multiplying the upper left corner coordinate and the upper right corner coordinate by a first transformation matrix to obtain the position of the upper left corner coordinate in world space as LPOS and the position of the upper right corner coordinate in world space as RPOS, and then multiplying the LPOS and the RPOS by the first transformation matrix to obtain the coordinates of LSCREEN_POS and RSCREEN_POS in screen space under a first camera;
s32, randomly selecting a plurality of coordinate values from the coordinates of the preset points in the screen space under the first camera to serve as screen coordinates of the raindrops, and calculating the screen coordinates corresponding to the raindrops with the minimum depth values from the screen coordinates of all the raindrops according to the depth information map;
specifically, the values of the x components are randomly sampled in the LSCREEN_POS and the RSCREEN_POS, the number of sampling points is the number of raindrop places, the number of the raindrop places can be determined according to the size of the raindrops, the position of each sampling can be randomly carried out to achieve the effect of RANDOM drop point positions, the RANDOM points are recorded as RANDOM_POS, a DEPTH information graph is read from a GPU (graphics processing unit) to a CPU (Central processing unit), the x components of the RANDOM points are taken as indexes, and the minimum values in different ordinate coordinates are obtained from data in the DEPTH information graph and are taken as DEPTH_MIN;
s33, calculating to obtain an inverse matrix of the second transformation matrix according to the second transformation matrix;
s34, obtaining world coordinates according to screen coordinates corresponding to the raindrops with the minimum depth value and the inverse matrix.
Specifically, the world-space position of the minimum depth value DEPTH_MIN is back-calculated from its corresponding screen coordinate and the inverse of the second transformation matrix, denoted MAT1. The screen coordinate corresponding to the minimum depth value is a four-dimensional coordinate whose x component is the sampled x coordinate, whose y component is the ordinate MIN_V at which the minimum occurs, whose z component is DEPTH_MIN and whose w component is 1. After remapping from the 0-to-1 storage range for the back-calculation from projection space to world space, the coordinate is multiplied by MAT1; the resulting world-space position of the minimum depth, POS1, is the raindrop landing position.
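The back-calculation to world space can be sketched as follows, assuming the stored depth was encoded as DEPTH * 0.5 + 0.5 and that MAT1 is the inverse of the second (world-to-projection) transformation matrix. The function name and the NDC convention are assumptions for illustration.

```python
import numpy as np

def unproject_to_world(x_ndc, y_ndc, depth01, mat1):
    """Back-calculate a world-space position (POS1 in the text) from the
    first camera's coordinate of the minimum depth value DEPTH_MIN.
    mat1 is the inverse of the second (world -> projection) matrix."""
    z_ndc = depth01 * 2.0 - 1.0            # invert the DEPTH * 0.5 + 0.5 encoding
    clip = np.array([x_ndc, y_ndc, z_ndc, 1.0])
    world = mat1 @ clip
    return world[:3] / world[3]            # homogeneous divide -> landing position

# Toy example: a diagonal world->projection matrix and its inverse.
M = np.diag([0.5, 0.5, 0.5, 1.0])
landing = unproject_to_world(1.0, -2.0, 0.75, np.linalg.inv(M))
```

The round trip recovers the world point exactly for this toy matrix; with the orthographic matrix of S11 the same two lines apply unchanged.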
And S4, rendering a raining effect according to the falling position of the raindrops in the world space.
The step S4 specifically comprises the following steps:
and comparing the height component of the falling position of the raindrops in the world space with the height component of the world space position of the fragment in the fragment shader, and rendering the raining effect according to the comparison result.
Specifically, in the fragment shader, the y component (height) of the fragment's world-space position is compared with the y component of POS1: if the fragment's world-space height is greater than the y component of POS1, the fragment is rendered; otherwise it is discarded, so that occluding objects block the rain.
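The height comparison of step S4 reduces to a single predicate per fragment; the sketch below (function name and sample heights are illustrative) shows how a rain streak is clipped at the landing height POS1.y.

```python
def render_rain_fragment(fragment_world_y, landing_y):
    """Per the text: draw the rain fragment only if its world-space height
    (y component) is above the raindrop landing height POS1.y, so the rain
    stops where an occluder would have caught it."""
    return fragment_world_y > landing_y

# Fragments of one falling streak from y=10 down to the ground, with an
# occluder whose top (the landing position) is at y = 4: only the part
# above the occluder is drawn.
landing_y = 4.0
drawn = [y for y in (10.0, 8.0, 6.0, 4.0, 2.0, 0.0)
         if render_rain_fragment(y, landing_y)]
```

In a shader this would be a per-fragment discard rather than a Python filter, but the comparison is identical.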
In this method, the scene depth map rendered by the first camera serves as the reference, and matrix calculations between the camera rendering the game picture and the first camera, based on the invariance of world-space positions, accurately compute the raindrop landing position. Compared with a collision-detection scheme, this removes a large amount of per-frame collision computation. Moreover, in collision detection the positions in consecutive frames are affected by the rain speed, so errors arise and the collision point cannot reach the accuracy achieved by the present invention.
Example III
Referring to fig. 2, a terminal 1 simulating rain in a virtual scene includes a memory 2, a processor 3, and a computer program stored in the memory 2 and executable on the processor 3, wherein the processor 3 implements the steps in the first embodiment when executing the program.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent changes made by the specification and drawings of the present invention, or direct or indirect application in the relevant art, are included in the scope of the present invention.

Claims (2)

1. A method for simulating rain in a virtual scene, comprising the steps of:
s1, rendering depth information of all rain drop shielding pieces in a virtual scene to a preset target to obtain a depth information map;
s2, determining coordinates of preset points in a current rendering picture, and acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
s3, calculating to obtain the falling position of the raindrops in the world space according to the depth information map, the coordinates of preset points and the first transformation matrix;
s4, rendering a raining effect according to the falling position of the raindrops in the world space;
the step S1 specifically comprises the following steps:
s11, adding a first camera into a virtual scene, and calculating a second transformation matrix of the first camera from world space to projection space according to parameters of the first camera and an orthogonal projection formula;
s12, starting a depth test, calculating coordinates of the vertexes of all the raindrop shields in the projection space of the first camera through the second transformation matrix and the coordinates of the vertexes of all the raindrop shields in the world space, and transmitting the coordinates to a fragment shader;
s13, the fragment coloring device stores coordinates of the vertexes of the raindrop shields in the projection space of the first camera as depth information through colors to obtain a depth information image;
the step S2 specifically comprises the following steps:
s21, determining the size of a current rendering picture, and establishing a coordinate system to obtain coordinates of preset points;
s22, acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
the step S3 specifically comprises the following steps:
s31, calculating to obtain the coordinates of the preset points in a screen space under a first camera according to the coordinates of the preset points and the first transformation matrix;
s32, randomly selecting a plurality of coordinate values from the coordinates of the preset points in the screen space under the first camera to serve as screen coordinates of the raindrops, and calculating the screen coordinates corresponding to the raindrops with the minimum depth values from the screen coordinates of all the raindrops according to the depth information map;
s33, calculating to obtain an inverse matrix of the second transformation matrix according to the second transformation matrix;
s34, obtaining world coordinates according to screen coordinates corresponding to the raindrops with the minimum depth values and the inverse matrix;
the step S4 specifically comprises the following steps:
and comparing the height component of the falling position of the raindrops in the world space with the height component of the world space position of the fragment in the fragment shader, and rendering the raining effect according to the comparison result.
2. A terminal for simulating rain in a virtual scene, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the following steps when executing the program:
s1, rendering depth information of all rain drop shielding pieces in a virtual scene to a preset target to obtain a depth information map;
s2, determining coordinates of preset points in a current rendering picture, and acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
s3, calculating to obtain the falling position of the raindrops in the world space according to the depth information map, the coordinates of preset points and the first transformation matrix;
s4, rendering a raining effect according to the falling position of the raindrops in the world space;
the step S1 specifically comprises the following steps:
s11, adding a first camera into a virtual scene, and calculating a second transformation matrix of the first camera from world space to projection space according to parameters of the first camera and an orthogonal projection formula;
s12, starting a depth test, calculating coordinates of the vertexes of all the raindrop shields in the projection space of the first camera through the second transformation matrix and the coordinates of the vertexes of all the raindrop shields in the world space, and transmitting the coordinates to a fragment shader;
s13, the fragment coloring device stores coordinates of the vertexes of the raindrop shields in the projection space of the first camera as depth information through colors to obtain a depth information image;
the step S2 specifically comprises the following steps:
s21, determining the size of a current rendering picture, and establishing a coordinate system to obtain coordinates of preset points;
s22, acquiring a first transformation matrix of a current scene camera from a screen space to a world space;
the step S3 specifically comprises the following steps:
s31, calculating to obtain the coordinates of the preset points in a screen space under a first camera according to the coordinates of the preset points and the first transformation matrix;
s32, randomly selecting a plurality of coordinate values from the coordinates of the preset points in the screen space under the first camera to serve as screen coordinates of the raindrops, and calculating the screen coordinates corresponding to the raindrops with the minimum depth values from the screen coordinates of all the raindrops according to the depth information map;
s33, calculating to obtain an inverse matrix of the second transformation matrix according to the second transformation matrix;
s34, obtaining world coordinates according to screen coordinates corresponding to the raindrops with the minimum depth values and the inverse matrix;
the step S4 specifically comprises the following steps:
and comparing the height component of the falling position of the raindrops in the world space with the height component of the world space position of the fragment in the fragment shader, and rendering the raining effect according to the comparison result.
CN201910787925.8A 2019-08-26 2019-08-26 Method and terminal for simulating raining in virtual scene Active CN111667393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910787925.8A CN111667393B (en) 2019-08-26 2019-08-26 Method and terminal for simulating raining in virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910787925.8A CN111667393B (en) 2019-08-26 2019-08-26 Method and terminal for simulating raining in virtual scene

Publications (2)

Publication Number Publication Date
CN111667393A CN111667393A (en) 2020-09-15
CN111667393B true CN111667393B (en) 2023-07-07

Family

ID=72381690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910787925.8A Active CN111667393B (en) 2019-08-26 2019-08-26 Method and terminal for simulating raining in virtual scene

Country Status (1)

Country Link
CN (1) CN111667393B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423913A (en) * 2021-05-31 2022-12-02 北京字跳网络技术有限公司 Particle rendering method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2009070215A (en) * 2007-09-14 2009-04-02 Making:Kk Simulation device
CN106874565A (en) * 2017-01-17 2017-06-20 上海电力学院 A kind of computational methods of rainy day transmission line of electricity lower section three-dimensional electric field
CN107886574A (en) * 2017-09-19 2018-04-06 浙江科澜信息技术有限公司 A kind of global rain effect emulation mode based on particIe system
CN108022283A (en) * 2017-12-06 2018-05-11 北京像素软件科技股份有限公司 Rainwater analogy method, device and readable storage medium storing program for executing
CN108510592A (en) * 2017-02-27 2018-09-07 亮风台(上海)信息科技有限公司 The augmented reality methods of exhibiting of actual physical model

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR101802164B1 (en) * 2016-09-20 2017-11-28 대한민국 The automatic time-series analysis method and system for the simulated dispersal information of cloud seeding material

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
JP2009070215A (en) * 2007-09-14 2009-04-02 Making:Kk Simulation device
CN106874565A (en) * 2017-01-17 2017-06-20 上海电力学院 A kind of computational methods of rainy day transmission line of electricity lower section three-dimensional electric field
CN108510592A (en) * 2017-02-27 2018-09-07 亮风台(上海)信息科技有限公司 The augmented reality methods of exhibiting of actual physical model
CN107886574A (en) * 2017-09-19 2018-04-06 浙江科澜信息技术有限公司 A kind of global rain effect emulation mode based on particIe system
CN108022283A (en) * 2017-12-06 2018-05-11 北京像素软件科技股份有限公司 Rainwater analogy method, device and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN111667393A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
JP6374982B2 (en) Improved graphics processing by tracking object and / or primitive identifiers
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
JP6342513B2 (en) Effective construction method of high resolution display buffer
CN111815755A (en) Method and device for determining shielded area of virtual object and terminal equipment
JP2020531980A (en) Rendering methods and terminals that simulate lighting
US9589386B2 (en) System and method for display of a repeating texture stored in a texture atlas
US9536333B2 (en) Method and apparatus for improved processing of graphics primitives
WO2023185262A1 (en) Illumination rendering method and apparatus, computer device, and storage medium
US20230230311A1 (en) Rendering Method and Apparatus, and Device
CN115512025A (en) Method and device for detecting model rendering performance, electronic device and storage medium
CN111667393B (en) Method and terminal for simulating raining in virtual scene
KR20170036419A (en) Graphics processing apparatus and method for determining LOD (level of detail) for texturing of graphics pipeline thereof
CN113129420B (en) Ray tracing rendering method based on depth buffer acceleration
CN110689606B (en) Method and terminal for calculating raindrop falling position in virtual scene
CN114494570A (en) Rendering method and device of three-dimensional model, storage medium and computer equipment
CN109427084B (en) Map display method, device, terminal and storage medium
CN114832375A (en) Ambient light shielding processing method, device and equipment
KR101208826B1 (en) Real time polygonal ambient occlusion method using contours of depth texture
CN117058301B (en) Knitted fabric real-time rendering method based on delayed coloring
CN117557711B (en) Method, device, computer equipment and storage medium for determining visual field
CN112734896B (en) Environment shielding rendering method and device, storage medium and electronic equipment
Romanyuk et al. Ways to improve performance of anisotropic texture filtering
US11087523B2 (en) Production ray tracing of feature lines
CN109191556B (en) Method for extracting rasterized digital elevation model from LOD paging surface texture model
Hoppe et al. Adaptive meshing and detail-reduction of 3D-point clouds from laser scans

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant