CN110111408B - Large scene rapid intersection method based on graphics - Google Patents

Large scene rapid intersection method based on graphics

Info

Publication number
CN110111408B
CN110111408B (application CN201910410023.2A)
Authority
CN
China
Prior art keywords
scene
depth
value
frame buffer
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910410023.2A
Other languages
Chinese (zh)
Other versions
CN110111408A (en)
Inventor
丁伟
阮怀照
刘从丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongzhi Software Co.,Ltd.
Original Assignee
Luoyang Zhongzhi Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Zhongzhi Software Technology Co ltd
Priority to CN201910410023.2A
Publication of CN110111408A
Application granted
Publication of CN110111408B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to a graphics-based fast intersection method for large scenes, which pre-renders the scene to obtain the scene depth; outputs the scene depth to a fragment shader to obtain a first depth value; packs the first depth value into a pixel value and outputs it to a frame buffer; reads the color value of the frame buffer through WebGL; unpacks the color value to obtain a second depth value; and calculates the scene three-dimensional coordinate corresponding to the screen coordinate from the second depth value and the screen space vector. Because the depth calculation is realized as part of WEB scene rendering, the intersection speed is comparable to the rendering speed: the original rendering frame rate is about 80 fps, and the intersection frame rate of this method can reach 70 fps. The frame buffer used is the original frame buffer, so almost no additional memory is consumed. The method is independent of scene complexity, renders quickly and precisely with a good display effect, and improves efficiency by at least 10 times.

Description

Large scene rapid intersection method based on graphics
Technical Field
The invention belongs to the technical field of three-dimensional intersection, and particularly relates to a large scene rapid intersection method based on graphics.
Background
Most existing intersection techniques are based on space-partition trees and consist of three main steps: computing bounding boxes, intersecting the ray with the bounding boxes, and intersecting the ray with the models inside the nodes whose bounding boxes it hits.
First, a bounding box is computed for each node at every level of the scene; the bounding box is a box just large enough to enclose all the models in the node. The bounding box of each node is then intersected with the ray. Because the bounding box fully encloses every model in the node, a ray that misses the bounding box cannot intersect any part of the models inside it; this step mainly serves to speed up the intersection. These two steps yield all the leaf nodes intersected by the ray. Leaf nodes have no children, so if any model in the scene intersects the ray, it must lie in one of these leaf nodes. All the triangular faces of the models in each node whose bounding box was hit are then intersected in turn, giving a list of all the triangles intersected by the ray. The intersection points of the ray with these faces, computed by analytic geometry, are the intersection points of the ray with the scene.
Finally, if no intersection point is found, the ray does not intersect the scene. If exactly one intersection point is obtained, the ray intersects the scene at that point. If a group of intersection points is obtained, the point closest to the camera along its viewing direction is found by comparing the position of each intersection point relative to the camera; that point is the intersection of the ray with the scene.
As described above, most existing scene intersection is based on space-partition trees and requires geometric data with a complete structure, which occupies a large amount of memory, especially given that the browser itself limits memory use. The efficiency of such algorithms is closely tied to the amount of scene data, so as the scene data keeps growing, the rendering speed and the scene loading efficiency drop sharply.
Disclosure of Invention
In view of the above, the present invention provides a graphics-based method for fast intersection of large scenes, so as to solve the problems of prior-art intersection methods: heavy memory usage, slow rendering and low scene loading efficiency.
In order to achieve this purpose, the invention adopts the following technical scheme. A graphics-based large scene fast intersection method comprises the following steps:
pre-rendering a scene to acquire the depth of the scene;
outputting the scene depth to a fragment shader to obtain a first depth value;
losslessly packing the first depth value into a pixel value and outputting the pixel value to a frame buffer area;
reading the color value of the frame buffer area through WebGL;
unpacking the color value to obtain a second depth value;
and calculating the scene three-dimensional coordinate corresponding to the screen coordinate according to the second depth value and the screen space vector.
Further, the pre-rendering the scene to obtain the depth of the scene includes:
and calculating the visual coordinate and the projection coordinate of the vertex shader to obtain the visual coordinate z value.
Further, the packing the first depth value into a pixel value and outputting the pixel value to a frame buffer includes:
packing, by the fragment shader, the z values into a color vector;
calculating RGB components of the color vector;
and storing the RGB components into a frame buffer.
Further, the reading the color value of the frame buffer by WebGL includes:
and acquiring the color value of the frame buffer by a readPixels method of an HTML5Canvas WebGLrenderingContext object.
Further, unpacking the color values to obtain second depth values includes:
unpacking the color value to obtain a second depth value from the RGB components.
further, the calculating the scene three-dimensional coordinate corresponding to the screen coordinate according to the second depth value and the screen space vector includes:
acquiring a viewport matrix, a projection matrix, a view matrix and screen coordinates;
calculating a transformation matrix from the screen coordinates to the space vectors according to the viewport matrix, the projection matrix, the view matrix and the screen coordinates;
calculating a direction vector in a scene corresponding to the screen coordinate according to the transformation matrix from the screen coordinate to the space vector;
and calculating scene three-dimensional coordinates corresponding to the screen coordinates according to the second depth value and the direction vector.
Further, the pre-rendering the scene to obtain the depth of the scene further includes:
and cleaning a frame buffer area.
Further, the method also comprises the following steps:
performing the view matrix transformation on the vertex in the vertex shader to obtain the view coordinate z value;
and negating the view coordinate z value after the view matrix transformation, so that the z coordinate is a positive number.
Further, the cleaning the frame buffer includes:
judging whether pixels are written in the frame buffer area or not;
if a pixel is written into the frame buffer area, the corresponding depth texture value is larger than 0.0;
if the depth texture value is 0.0, it is determined that no pixels are written to the frame buffer.
Furthermore, the precision range of the packed first depth value is 0.01-65536.0.
By adopting the technical scheme, the invention can achieve the following beneficial effects:
(1) The depth calculation is realized as part of WEB scene rendering, so the intersection speed is comparable to the rendering speed. In an algorithm test scene, the original rendering frame rate is about 80 fps, the frame rate when intersecting with a space-partition tree is about 4-5 fps, and the real-time intersection frame rate of this algorithm can reach 70 fps;
(2) The method is realized through scene pre-rendering and uses the original frame buffer, so almost no extra memory is consumed;
(3) The method is independent of scene complexity, renders quickly, and improves efficiency by at least 10 times;
(4) The invention has high rendering precision and a good display effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram illustrating the steps of a large scene fast intersection method based on graphics according to the present invention;
FIG. 2 is a schematic diagram of another step of the large scene fast intersection method based on graphics.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
A specific graphics-based large-scene fast intersection method provided in the embodiment of the present application is described below with reference to the accompanying drawings.
The invention provides a large scene fast intersection method based on graphics, which comprises the following steps:
s101, pre-rendering a scene to obtain the depth of the scene;
s102, outputting the scene depth to a fragment shader to obtain a first depth value;
s103, packing the first depth value into a pixel value and outputting the pixel value to a frame buffer area;
s104, reading the color value of the frame buffer area through WebGL;
s105, unpacking the color values to obtain second depth values;
and S106, calculating scene three-dimensional coordinates corresponding to the screen coordinates according to the second depth value and the screen space vector.
The working principle of the graphics-based large scene fast intersection method is as follows: the whole scene is pre-rendered to obtain the scene depth, and the scene depth is output to a fragment shader to obtain a first depth value; the first depth value is packed into a pixel value with high precision and output to a frame buffer; the color value of the frame buffer is read through WebGL and unpacked to obtain a second depth value; and the scene three-dimensional coordinate corresponding to the screen coordinate is calculated from the second depth value and the screen space vector, this scene three-dimensional coordinate being the intersection point.
Preferably, the pre-rendering the scene to obtain the depth of the scene includes:
and calculating the visual coordinate and the projection coordinate of the vertex shader to obtain the visual coordinate z value.
Specifically, in the classic vertex shader of WebGL, the view coordinates of the vertex are calculated first, then the projection coordinates of the vertex are calculated and output, and the calculated view coordinate z value is stored and output to the fragment shader for subsequent steps.
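By way of illustration only, such a pre-render vertex shader could be written as a TypeScript string constant as sketched below; the attribute, uniform and varying names (aPosition, uModelViewMatrix, uProjectionMatrix, fDepth) are assumptions made for this sketch and are not part of the original disclosure.

const depthVertexShaderSource: string = `
  attribute vec3 aPosition;          // vertex position in model space
  uniform mat4 uModelViewMatrix;     // model-view transformation
  uniform mat4 uProjectionMatrix;    // projection transformation
  varying float fDepth;              // view coordinate z value handed to the fragment shader

  void main() {
    // view coordinate of the vertex
    vec4 viewPos = uModelViewMatrix * vec4(aPosition, 1.0);
    // store the view coordinate z value for the subsequent packing step
    fDepth = viewPos.z;
    // projection coordinate output as usual
    gl_Position = uProjectionMatrix * viewPos;
  }
`;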
Preferably, the lossless packing the first depth value into a pixel value and outputting the pixel value to a frame buffer includes:
packing, by the fragment shader, the z values into a color vector;
calculating RGB components of the color vector;
and storing the RGB components to a frame buffer.
Specifically, after the vertex view coordinate z value transmitted by the vertex shader is obtained, the z value is losslessly packed into a color vector in the fragment shader with high precision, the packed color vector is output, the RGB components of the color vector are calculated, and the RGB components are stored in the frame buffer.
Denote the finally output color vector as vDepth and the view-space depth value transmitted by the vertex shader as fDepth, and assume that the RGB components of vDepth store, respectively, the quotient of the depth value divided by 256.0, the remainder of the depth value divided by 256.0, and the decimal part of the depth value. This gives:
vDepth.b = fract(-fDepth) (1)
vDepth.g = floor(mod(-fDepth, 256.0)) / 256.0 (2)
vDepth.r = floor(-fDepth / 256.0) / 256.0 (3)
The RGB components are obtained from the above formulas.
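A fragment shader performing this quotient/remainder/fraction packing might be sketched as follows; the exact normalization of the quotient and remainder into the 0.0-1.0 color range is an assumption made for the sketch, not a literal reproduction of the published formulas.

const depthFragmentShaderSource: string = `
  precision highp float;
  varying float fDepth;   // view coordinate z from the vertex shader (negative in front of the camera)

  void main() {
    float d = -fDepth;                        // take the inverse number so the depth is positive
    float r = floor(d / 256.0) / 256.0;       // quotient of the depth value divided by 256.0
    float g = floor(mod(d, 256.0)) / 256.0;   // remainder of the depth value divided by 256.0
    float b = fract(d);                       // decimal part of the depth value
    gl_FragColor = vec4(r, g, b, 1.0);        // packed depth written to the frame buffer
  }
`;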
It should be noted that, since WebGL only supports reading back the frame buffer, the depth output as a color must be written into the frame buffer rather than into a custom render target, so the pixel values are written directly to the screen. Because the depth texture should not appear on screen, this rendering pass is placed in a pre-rendering stage; after pre-rendering is finished, the render target is cleared and the normal scene is rendered, so the depth image is never output on the final screen.
The precision range of the packed first depth value in the application is 0.01-65536.0.
The traditional floating-point packing method stores and compresses the value directly bit by bit, which depends heavily on hardware, mainly because the format in which the graphics card stores floating-point data must be exactly the same as the format in which the CPU stores it. Although few problems occur on current devices, unexpected errors arise once a device uses a different floating-point format. The method of this application narrows the range of floating-point storage and can only store depth values with a precision of about 0.01 up to 65536.0, but the program is highly portable; and because the depth value is measured from the camera and the near and far clipping planes generally do not exceed this range, the stored floating-point range is sufficient for most three-dimensional rendering programs.
It should be noted that this rendering pass only renders the main scene; the sky box and special objects such as labels and rubber bands should not participate in this pass, because their depth values are unpredictable.
Preferably, the reading the color value of the frame buffer by WebGL includes:
and acquiring the color value of the frame buffer by a readPixels method of an HTML5Canvas WebGLrenderingContext object. Wherein, the color value is the depth value packed in the fragment shader.
Preferably, unpacking the color values to obtain second depth values includes:
unpacking the color value calculates a second depth value from the RGB components.
Specifically, let the pixel value read from the frame buffer be vPixel and the second depth value be fDepth, where vPixel = [r, g, b, a]. The second depth value is then calculated as
fDepth = vPixel.r × 256.0 + vPixel.g + vPixel.b / 256.0 (4)
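In TypeScript the unpacking can be sketched as below; the byte-to-depth scaling is an assumption consistent with the quotient/remainder/fraction packing described above (it depends on how readPixels quantizes the packed components) and is shown for illustration only.

function unpackDepth(pixel: Uint8Array): number {
  const [r, g, b] = pixel;            // components returned by readPixels, each 0..255
  return r * 256.0 + g + b / 256.0;   // second depth value fDepth
}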
In some embodiments, the calculating the scene three-dimensional coordinates corresponding to the screen coordinates according to the second depth value and the screen space vector includes:
s201, obtaining a viewport matrix, a projection matrix, a view matrix and screen coordinates;
s202, calculating a transformation matrix from the screen coordinate to a space vector according to the viewport matrix, the projection matrix, the view matrix and the screen coordinate;
s203, calculating a direction vector in a scene corresponding to the screen coordinate according to the transformation matrix from the screen coordinate to the space vector;
and S204, calculating scene three-dimensional coordinates corresponding to the screen coordinates according to the second depth value and the direction vector.
Specifically, let mViewPort be the viewport matrix, mProj the projection matrix, mView the view matrix, [x, y] the screen coordinates, mat the transformation matrix from screen coordinates to space vectors, and Vec the transformed vector; then:
mat=mat4.invert(mViewPort×mProj×mView) (5)
calculating the vector in the scene corresponding to the screen coordinate
Vec=[x,y,1.0]×mat-[x,y,0.0]×mat (6)
With the depth value and the direction vector, the scene three-dimensional coordinate corresponding to the screen coordinate is easily calculated as
vPos=vEye+fDepth×Vec (7)
Where vEye is the position of the camera in the scene.
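A possible transcription of formulas (5) to (7) into TypeScript is sketched below. The use of the gl-matrix library is only an assumption suggested by the mat4.invert notation in formula (5), and the function name screenToScene and its parameter layout are likewise illustrative.

import { mat4, vec3 } from 'gl-matrix';

function screenToScene(
  x: number, y: number,                    // screen coordinates
  fDepth: number,                          // second depth value recovered from the frame buffer
  mViewPort: mat4, mProj: mat4, mView: mat4,
  vEye: vec3                               // camera position in the scene
): vec3 {
  // mat = invert(mViewPort x mProj x mView)          -- formula (5)
  const combined = mat4.create();
  mat4.multiply(combined, mViewPort, mProj);
  mat4.multiply(combined, combined, mView);
  const mat = mat4.create();
  mat4.invert(mat, combined);

  // Vec = [x, y, 1.0] x mat - [x, y, 0.0] x mat      -- formula (6)
  const far = vec3.transformMat4(vec3.create(), vec3.fromValues(x, y, 1.0), mat);
  const near = vec3.transformMat4(vec3.create(), vec3.fromValues(x, y, 0.0), mat);
  const dir = vec3.subtract(vec3.create(), far, near);

  // vPos = vEye + fDepth * Vec                        -- formula (7)
  return vec3.scaleAndAdd(vec3.create(), vEye, dir, fDepth);
}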
Preferably, the pre-rendering the scene to obtain the depth of the scene further includes:
and cleaning a frame buffer area.
The cleaning of the frame buffer area comprises the following steps:
judging whether pixels are written in the frame buffer area or not;
if a pixel is written into the frame buffer area, the corresponding depth texture value is larger than 0.0;
if the depth texture value is 0.0, it is determined that no pixel is written to the frame buffer.
Specifically, the frame buffer must be cleared before the rendering pass, and the cleared value is 0x00000000. Since the camera's near clipping distance is normally greater than 0.0, any pixel written into the frame buffer has a corresponding depth texture value greater than 0.0. Thus, if the depth texture value read afterwards is 0.0, it can be concluded that this pixel does not correspond to any point in the scene.
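For illustration, clearing the render target to 0x00000000 before the pre-render pass might be done as in the following sketch; the function name is not part of the original disclosure.

function clearDepthRenderTarget(gl: WebGLRenderingContext): void {
  gl.clearColor(0.0, 0.0, 0.0, 0.0);                    // clear value 0x00000000
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);  // clean the color and depth buffers
}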
Preferably, the graphics-based large scene fast intersection method provided by the present application further includes:
performing the view matrix transformation on the vertex in the vertex shader to obtain the view coordinate;
and negating the view coordinate z value after the view matrix transformation, so that the z coordinate is a positive number. This is because in WebGL a vertex in front of the camera has a negative z coordinate after the view matrix transformation.
The scene coordinates obtained by the graphics-based large scene fast intersection method can be applied in three-dimensional simulation fields such as WebGL picking, elevation calculation and collision detection.
For example, the present application may be applied to a rubber band tool;
the rubber band tool is the most direct application of the technology; in software of the three-dimensional city planning industry, auxiliary tools such as distance measurement and area calculation, namely a rubber band tool, are generally needed. The user establishes the rubber band in a real-time interaction process, the user moves on a screen through a mouse or touch after the creation operation is started, the rubber band moves in real time at the moment, the length of the rubber band and the like are calculated in real time, and the calculation result is fed back to a user interface.
With the technical scheme provided by this application, the scene coordinate under the user's mouse or touch position can be calculated quickly, and the interactively output rubber band parameters are simple to process, accurate and of high precision.
The present application may be applied to elevation calculation tools, for example algorithm adjustment and algorithm optimization of the elevation calculation tool;
In the field of three-dimensional simulation, when roaming and browsing a whole city scene in first person, the camera needs to walk on the ground and rise and fall with the terrain, so the scene height along the vertical line through the current camera position must be calculated in real time.
The technical scheme is applied to algorithm adjustment of the elevation calculation tool;
In the scene fast intersection method, the position and orientation of the camera used for pre-rendering are exactly the same as those of the scene's main camera. For the elevation calculation tool, the pre-rendering camera keeps the same position but points vertically downward and uses an orthographic projection, producing a depth texture of the terrain below the camera; converting this depth texture gives the elevation near the camera. The concrete application is as follows:
the technical scheme is applied to algorithm optimization of an elevation calculation tool;
when an elevation texture is rendered, the camera is in the midpoint position of the texture, and many frames later may not go out of the rendered elevation range, so that repeated rendering is unnecessary. Once the camera is found to go out of the elevation range rendered last time, only one depth texture needs to be rendered again, so that the rendering efficiency of the main scene is hardly influenced by the elevation calculation, and higher interaction speed is obtained.
The application may also be applied to collision detection tools, for example algorithm adjustment and algorithm optimization of the collision detection tool.
At present, with the development of new-generation technologies such as 3S, three-dimensional simulation for city planning lets a user roam the whole city, the inside of a building, an underground parking lot, the urban underground pipe network, a pipe gallery and so on, and users have ever higher demands on the realism of the experience. The walking track therefore needs to be calculated in real time during roaming and browsing, and collision detection must be performed constantly, to avoid the loss of realism caused by artifacts such as leaving the ground or passing through walls. The invention is well suited to a collision detection tool, applied as follows:
the technical scheme is applied to algorithm adjustment of the collision detection tool;
By analogy with the elevation calculation tool, the pre-rendering camera renders outward from the main camera toward its surroundings to obtain the depth texture around the camera, namely the distance in the horizontal direction. Once the distance from the camera in a certain direction is less than a margin parameter, it is concluded that the camera cannot continue moving in that direction, that is, a collision between the camera and the scene is detected.
The technical scheme is applied to algorithm optimization of the collision detection tool;
The collision tool does not need to render in real time; the depth result of one rendering is stored. If the camera moves into a blind area of the depth texture, the depth texture is rendered again, and the four directions do not all need to be rendered within one frame. In fact the camera only has one direction of movement in a frame, so rendering is performed first along the direction of movement and the other directions are completed in the following frames; the rendering efficiency of the main scene is therefore hardly reduced.
In summary, the invention provides a large scene fast intersection method based on graphics, and the method has the following beneficial effects:
(1) The depth calculation is realized as part of WEB scene rendering, so the intersection speed is comparable to the rendering speed. In an algorithm test scene, the original rendering frame rate is about 80 fps, the frame rate when intersecting with a space-partition tree is about 4-5 fps, and the real-time intersection frame rate of this algorithm can reach 70 fps;
(2) The method is realized through scene pre-rendering and uses the original frame buffer, so almost no extra memory is consumed;
(3) The method is independent of scene complexity, renders quickly, and improves efficiency by at least 10 times;
(4) The invention has high rendering precision and a good display effect.
It can be understood that the method embodiments provided above correspond to the large scene fast intersection method embodiments based on graphics, and corresponding specific contents may be referred to each other, which is not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks for a graphics-based large-scene fast intersection method.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (7)

1. A large scene fast intersection method based on graphics is characterized by comprising the following steps:
pre-rendering a scene to obtain the depth of the scene;
the pre-rendering the scene to obtain the depth of the scene includes:
calculating the view coordinate and the projection coordinate in the vertex shader to obtain the z value of the view coordinate;
outputting the scene depth to a fragment shader to obtain a first depth value;
packing the first depth value into a pixel value and outputting the pixel value to a frame buffer area;
the lossless packing of the first depth value into a pixel value and outputting the pixel value to a frame buffer comprises:
packing, by the fragment shader, the z values into a color vector;
calculating RGB components of the color vector;
storing the RGB components to a frame buffer;
reading the color value of the frame buffer area through WebGL;
the reading the color value of the frame buffer area through the WebGL comprises the following steps:
acquiring the color value of the frame buffer area through the readPixels method of the HTML5 Canvas WebGLRenderingContext object;
unpacking the color values to obtain second depth values;
calculating a scene three-dimensional coordinate corresponding to the screen coordinate according to the second depth value and the screen space vector;
the calculating the scene three-dimensional coordinate corresponding to the screen coordinate according to the second depth value and the screen space vector includes:
acquiring a viewport matrix, a projection matrix, a view matrix and screen coordinates;
calculating a transformation matrix from the screen coordinates to the space vectors according to the viewport matrix, the projection matrix, the view matrix and the screen coordinates;
calculating a direction vector in a scene corresponding to the screen coordinate according to the transformation matrix from the screen coordinate to the space vector;
and calculating scene three-dimensional coordinates corresponding to the screen coordinates according to the second depth value and the direction vector.
2. The graphics-based large scene fast intersection method according to claim 1, wherein the pre-rendering the scene to obtain the depth of the scene comprises:
and calculating the view coordinate and the projection coordinate in the vertex shader to obtain the view coordinate z value.
3. The graphics-based large scene fast intersection method of claim 1, wherein the unpacking the color values and obtaining second depth values comprises:
unpacking the color values to obtain second depth values through the RGB components.
4. The graphics-based large scene fast intersection method of claim 1, wherein before pre-rendering the scene to obtain the scene depth, the method further comprises:
and cleaning a frame buffer area.
5. The graphics-based large scene fast intersection method according to claim 2, further comprising:
performing the view matrix transformation on the vertex in the vertex shader to obtain the view coordinate;
and negating the view coordinate z value after the view matrix transformation, so that the z coordinate is a positive number.
6. The graphics-based large-scene fast intersection method of claim 4, wherein the clearing of the frame buffer comprises:
judging whether pixels are written in the frame buffer area or not;
if a pixel is written into the frame buffer area, the corresponding depth texture value is larger than 0.0;
if the depth texture value is 0.0, it is determined that no pixels are written to the frame buffer.
7. The graphics-based large scene fast intersection method of claim 1,
the precision range of the packed first depth value is 0.01-65536.0.
CN201910410023.2A 2019-05-16 2019-05-16 Large scene rapid intersection method based on graphics Active CN110111408B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910410023.2A CN110111408B (en) 2019-05-16 2019-05-16 Large scene rapid intersection method based on graphics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910410023.2A CN110111408B (en) 2019-05-16 2019-05-16 Large scene rapid intersection method based on graphics

Publications (2)

Publication Number Publication Date
CN110111408A CN110111408A (en) 2019-08-09
CN110111408B true CN110111408B (en) 2023-03-14

Family

ID=67490660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910410023.2A Active CN110111408B (en) 2019-05-16 2019-05-16 Large scene rapid intersection method based on graphics

Country Status (1)

Country Link
CN (1) CN110111408B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458922B (en) * 2019-08-14 2022-12-27 深圳市商汤科技有限公司 Graphics rendering method and related product
CN111508052B (en) * 2020-04-23 2023-11-21 网易(杭州)网络有限公司 Rendering method and device of three-dimensional grid body
CN111612878B (en) * 2020-05-21 2023-04-07 广州光锥元信息科技有限公司 Method and device for making static photo into three-dimensional effect video
CN111932689B (en) * 2020-07-03 2023-11-14 北京庚图科技有限公司 Three-dimensional object quick selection method adopting ID pixel graph
CN112184922B (en) * 2020-10-15 2024-01-26 洛阳众智软件科技股份有限公司 Fusion method, device, equipment and storage medium of two-dimensional video and three-dimensional scene
CN113379814B (en) * 2021-06-09 2024-04-09 北京超图软件股份有限公司 Three-dimensional space relation judging method and device
CN117058301B (en) * 2023-06-29 2024-03-19 武汉纺织大学 Knitted fabric real-time rendering method based on delayed coloring
CN118470173B (en) * 2024-07-15 2024-09-20 安创启元(杭州)科技有限公司 Underground pipe network display method and device based on cloud rendering and computer equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007085482A1 (en) * 2006-01-30 2007-08-02 Newsight Gmbh Method for producing and displaying images which can be perceived in three dimensions
CN102722861A (en) * 2011-05-06 2012-10-10 新奥特(北京)视频技术有限公司 CPU-based graphic rendering engine and realization method

Also Published As

Publication number Publication date
CN110111408A (en) 2019-08-09

Similar Documents

Publication Publication Date Title
CN110111408B (en) Large scene rapid intersection method based on graphics
CN108648269B (en) Method and system for singulating three-dimensional building models
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
US9965892B2 (en) Rendering tessellated geometry with motion and defocus blur
Amanatides et al. A fast voxel traversal algorithm for ray tracing.
CN109448137B (en) Interaction method, interaction device, electronic equipment and storage medium
CN102289845B (en) Three-dimensional model drawing method and device
CN110309458B (en) BIM model display and rendering method based on WebGL
KR101281157B1 (en) Ray tracing core and processing mehtod for ray tracing
WO2009145155A1 (en) Cutting process simulation display device, method for displaying cutting process simulation, and cutting process simulation display program
CN104318605B (en) Parallel lamination rendering method of vector solid line and three-dimensional terrain
JPH07120434B2 (en) Method and apparatus for volume rendering
CN109979002A (en) Scenario building system and method based on WebGL three-dimensional visualization
CN110706326B (en) Data display method and device
CN103473814A (en) Three-dimensional geometric primitive picking method based on GPU
CN105894551A (en) Image drawing method and device
KR20150117662A (en) Method and device for enriching the content of a depth map
KR20150124112A (en) Method for Adaptive LOD Rendering in 3-D Terrain Visualization System
US8587586B2 (en) Electronic device and method for meshing curved surface
Chen et al. An improved texture-related vertex clustering algorithm for model simplification
Wiemann et al. Automatic Map Creation For Environment Modelling In Robotic Simulators.
Min et al. Octomap-rt: Fast probabilistic volumetric mapping using ray-tracing gpus
CN118799496A (en) A method and device for extracting and vectorizing indoor building structures based on laser radar data
Schäfer et al. Real-Time Deformation of Subdivision Surfaces from Object Collisions.
CN115588076A (en) Method for solving threading between three-dimensional human body model and clothes model and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Fast Intersection Method for Large Scenes Based on Graphics

Granted publication date: 20230314

Pledgee: Industrial and Commercial Bank of China Limited Luoyang Jili Branch

Pledgor: Luoyang Zhongzhi Software Technology Co.,Ltd.

Registration number: Y2024980003551

CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: Floor 13, 14 and 15, building 3, lianfei building, No.1, Fenghua Road, high tech Development Zone, Luoyang City, Henan Province, 471000

Patentee after: Zhongzhi Software Co.,Ltd.

Country or region after: China

Address before: Floor 13, 14 and 15, building 3, lianfei building, No.1, Fenghua Road, Luoyang hi tech Development Zone, Luoyang City, Henan Province, 471000

Patentee before: Luoyang Zhongzhi Software Technology Co.,Ltd.

Country or region before: China