Disclosure of Invention
In view of the above, the present invention provides a graphics-based method for fast intersection of large scenes, so as to solve the problems that intersection methods in the prior art occupy a large amount of memory, have a low rendering speed, and load scenes inefficiently.
To achieve this purpose, the invention adopts the following technical scheme: a graphics-based large scene fast intersection method comprising the following steps:
pre-rendering a scene to acquire the depth of the scene;
outputting the scene depth to a fragment shader to obtain a first depth value;
losslessly packing the first depth value into a pixel value and outputting the pixel value to a frame buffer;
reading the color value of the frame buffer area through WebGL;
unpacking the color value to obtain a second depth value;
and calculating the scene three-dimensional coordinate corresponding to the screen coordinate according to the second depth value and the screen space vector.
Further, the pre-rendering the scene to obtain the depth of the scene includes:
calculating the view coordinates and the projection coordinates in the vertex shader to obtain the view-coordinate z value.
Further, the packing the first depth value into a pixel value and outputting the pixel value to a frame buffer includes:
packing, by the fragment shader, the z value into a color vector;
calculating RGB components of the color vector;
and storing the RGB components into a frame buffer.
Further, the reading the color value of the frame buffer by WebGL includes:
acquiring the color value of the frame buffer through the readPixels method of the HTML5 Canvas WebGLRenderingContext object.
Further, the unpacking the color value to obtain a second depth value includes:
calculating the second depth value from the RGB components of the color value.
Further, the calculating the scene three-dimensional coordinate corresponding to the screen coordinate according to the second depth value and the screen space vector includes:
acquiring a viewport matrix, a projection matrix, a view matrix and screen coordinates;
calculating a transformation matrix from the screen coordinates to the space vectors according to the viewport matrix, the projection matrix, the view matrix and the screen coordinates;
calculating a direction vector in a scene corresponding to the screen coordinate according to the transformation matrix from the screen coordinate to the space vector;
and calculating scene three-dimensional coordinates corresponding to the screen coordinates according to the second depth value and the direction vector.
Further, the pre-rendering the scene to obtain the depth of the scene further includes:
clearing the frame buffer.
Further, the method also comprises the following steps:
applying the view matrix transformation in the vertex shader to obtain the view-coordinate z value;
and negating the view-coordinate z value after the view matrix transformation, so that the z coordinate is positive.
Further, the clearing of the frame buffer includes:
judging whether a pixel has been written into the frame buffer;
if a pixel has been written into the frame buffer, the corresponding depth texture value is greater than 0.0;
if the depth texture value is 0.0, it is determined that no pixel has been written into the frame buffer.
Furthermore, the precision range of the packed first depth value is 0.01-65536.0.
By adopting the technical scheme, the invention can achieve the following beneficial effects:
(1) The depth calculation is realized based on WEB scene rendering, so that the intersection speed is comparable to the rendering speed. In an algorithm test scene, the original rendering frame rate is about 80 fps, the frame rate after intersection using a space-partitioning tree drops to about 4-5 fps, while the real-time intersection frame rate using the present algorithm reaches 70 fps;
(2) The method is realized based on scene prerendering, and an original frame buffer area is utilized, so that extra memory is hardly consumed;
(3) The method is independent of scene complexity, the rendering speed is high, and the efficiency is improved by at least 10 times;
(4) The invention has high rendering precision and good display effect.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
A specific graphics-based large-scene fast intersection method provided in the embodiment of the present application is described below with reference to the accompanying drawings.
The invention provides a large scene fast intersection method based on graphics, which comprises the following steps:
S101, pre-rendering a scene to obtain the depth of the scene;
S102, outputting the scene depth to a fragment shader to obtain a first depth value;
S103, losslessly packing the first depth value into a pixel value and outputting the pixel value to a frame buffer;
S104, reading the color value of the frame buffer through WebGL;
S105, unpacking the color value to obtain a second depth value;
S106, calculating the scene three-dimensional coordinate corresponding to the screen coordinate according to the second depth value and the screen space vector.
The working principle of the graphics-based large scene fast intersection method is as follows: the whole scene is pre-rendered to obtain the scene depth; the scene depth is output to a fragment shader to obtain a first depth value; the first depth value is losslessly packed into a pixel value with high precision and output to the frame buffer; the color value of the frame buffer is read through WebGL and unpacked to obtain a second depth value; and the scene three-dimensional coordinate corresponding to the screen coordinate, which is the intersection point, is calculated from the second depth value and the screen space vector.
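As a non-limiting illustration of how these steps fit together, the following TypeScript sketch outlines the pipeline; the helper functions declared at the top are hypothetical placeholders whose contents are sketched in the sections below, and the names are assumptions rather than part of the original disclosure.

```typescript
// Hypothetical outline of the six steps (S101-S106); the declared helpers are
// placeholders for the operations sketched in the later sections.
declare function prerenderDepthPass(gl: WebGLRenderingContext): void;                          // S101-S103
declare function readPackedPixel(gl: WebGLRenderingContext, x: number, y: number): Uint8Array; // S104
declare function unpackDepth(pixel: Uint8Array): number;                                       // S105
declare function screenToWorld(x: number, y: number, depth: number): number[];                 // S106

function intersectAtScreenPoint(
  gl: WebGLRenderingContext,
  screenX: number,
  screenY: number
): number[] | null {
  prerenderDepthPass(gl);                              // pack view-space depth into the frame buffer colors
  const pixel = readPackedPixel(gl, screenX, screenY); // read the packed color back via readPixels
  const depth = unpackDepth(pixel);                    // recover the second depth value
  if (depth === 0.0) return null;                      // nothing was written at this pixel
  return screenToWorld(screenX, screenY, depth);       // combine the depth with the screen-space ray
}
```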
Preferably, the pre-rendering the scene to obtain the depth of the scene includes:
calculating the view coordinates and the projection coordinates in the vertex shader to obtain the view-coordinate z value.
Specifically, in a classic WebGL vertex shader, the view coordinates of the vertex are calculated first, then the projection coordinates of the vertex are calculated and output; the calculated view-coordinate z value is stored and passed to the fragment shader for the subsequent steps.
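As a non-limiting illustration, such a vertex shader may look like the following GLSL source, embedded here as a TypeScript string; the attribute, uniform and varying names are assumptions rather than part of the original disclosure.

```typescript
// Illustrative GLSL vertex shader: computes the view coordinates, outputs the
// projection coordinates, and passes the view-coordinate z to the fragment shader.
export const depthVertexShader = `
attribute vec3 aPosition;
uniform mat4 uModel;
uniform mat4 uView;
uniform mat4 uProj;
varying float vFDepth; // view-coordinate z value handed to the fragment shader

void main() {
  vec4 viewPos = uView * uModel * vec4(aPosition, 1.0); // view coordinates
  gl_Position = uProj * viewPos;                        // projection coordinates
  // In WebGL the view-space z of points in front of the camera is negative;
  // the raw value is passed here and negated during packing (equation (1)),
  // though it could equally be negated in this shader as described further below.
  vFDepth = viewPos.z;
}
`;
```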
Preferably, the losslessly packing the first depth value into a pixel value and outputting the pixel value to a frame buffer includes:
packing, by the fragment shader, the z value into a color vector;
calculating RGB components of the color vector;
and storing the RGB components to a frame buffer.
Specifically, after the vertex view-coordinate z value transmitted by the vertex shader is obtained, the z value is packed into a color vector in the fragment shader with high precision, the packed color vector is output, the RGB components of the color vector are calculated, and the RGB components are stored into the frame buffer.
Denote the finally output color vector as vDepth and the view-space depth value transmitted by the vertex shader as fDepth, and let the R, G and B components of vDepth respectively store the quotient of dividing the depth value by 256.0, the remainder of dividing the depth value by 256.0, and the decimal part of the depth value. For the decimal part this gives:
vDepth.b = fract(-fDepth) (1)
and the quotient and remainder components are obtained analogously.
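As a non-limiting illustration, a fragment shader performing this packing might look as follows. Equation (1) fixes the B component; the exact normalization of the quotient and remainder components is not spelled out above, so the R and G expressions here are one plausible reading in which each component is kept inside [0, 1] by dividing by 256.0.

```typescript
// Illustrative GLSL fragment shader packing the depth into RGB. Only
// vDepth.b = fract(-fDepth) is given explicitly in the text; the R and G
// expressions are assumptions consistent with the quotient/remainder description.
export const depthFragmentShader = `
precision highp float;
varying float vFDepth; // view-coordinate z from the vertex shader (negative in front of the camera)

void main() {
  float d = -vFDepth;                      // positive depth value
  vec3 vDepth;
  vDepth.r = floor(d / 256.0) / 256.0;     // quotient of dividing by 256.0 (assumed normalization)
  vDepth.g = floor(mod(d, 256.0)) / 256.0; // remainder of dividing by 256.0 (assumed normalization)
  vDepth.b = fract(d);                     // decimal part, per equation (1)
  gl_FragColor = vec4(vDepth, 1.0);
}
`;
```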
It should be noted that, since WebGL only supports reading back from the frame buffer, the depth output as a color must be written into the frame buffer rather than into a custom render target, so the pixel values are written directly to the screen. Because the depth texture should not be visible on screen, this rendering pass is placed in a pre-rendering stage, and the render target is cleared after pre-rendering so that the normal scene can then be rendered and no depth image is output on the final screen.
The precision range of the packed first depth value in the application is 0.01-65536.0.
The traditional floating-point packing method stores and compresses the value directly bit by bit, which depends heavily on the hardware: the format in which the graphics card stores floating-point data must be exactly the same as the format used by the CPU. Although few problems occur on current devices, unexpected errors arise as soon as a device uses a different floating-point format. In the present method, although the range of storable floating-point values is reduced and only depth values with a precision of about 0.01 in a range up to 65536.0 can be stored, the program is highly portable; since the depth value is relative to the camera and the near and far clipping planes generally do not exceed this range, the stored floating-point range is sufficient for most three-dimensional rendering programs.
It should be noted that this rendering pass only renders the main scene; the sky box and special objects such as labels and rubber bands should not participate in it, because their depth values are unpredictable.
Preferably, the reading the color value of the frame buffer by WebGL includes:
The color value of the frame buffer is acquired through the readPixels method of the HTML5 Canvas WebGLRenderingContext object, where the color value is the depth value packed in the fragment shader.
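As a non-limiting illustration, the read-back could be wrapped as follows; the function name is an assumption, and the flip of the y coordinate reflects the bottom-left origin used by readPixels.

```typescript
// Read the packed RGBA bytes at a screen position from the frame buffer.
function readPackedPixel(
  gl: WebGLRenderingContext,
  screenX: number,
  screenY: number
): Uint8Array {
  const pixel = new Uint8Array(4); // RGBA, one byte per component
  // readPixels uses a bottom-left origin, so the screen y coordinate is flipped.
  gl.readPixels(
    screenX,
    gl.drawingBufferHeight - screenY - 1,
    1,
    1,
    gl.RGBA,
    gl.UNSIGNED_BYTE,
    pixel
  );
  return pixel;
}
```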
Preferably, the unpacking the color value to obtain a second depth value includes:
calculating the second depth value from the RGB components of the color value.
Specifically, let the pixel value read back be vPixel = [r, g, b, a] and the second depth value be fDepth. The second depth value is then calculated from the RGB components by reversing the packing: the quotient component is multiplied by 256.0, and the remainder component and the decimal component are added to it.
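As a non-limiting illustration, and assuming the packing sketched above, the unpacking could be implemented as follows; the exact byte scaling depends on how the shader normalized the components and is therefore an assumption.

```typescript
// Recover the second depth value from the RGBA bytes returned by readPixels.
function unpackDepth(pixel: Uint8Array): number {
  const [r, g, b] = pixel; // bytes in 0..255
  // The shader stored quotient/256 and remainder/256 as color components, so the
  // read-back bytes approximate value * 255 / 256; rescale before recombining.
  const quotient = Math.round((r * 256) / 255);
  const remainder = Math.round((g * 256) / 255);
  const fraction = b / 255; // decimal part
  return quotient * 256.0 + remainder + fraction; // second depth value fDepth
}
```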
In some embodiments, the calculating the scene three-dimensional coordinates corresponding to the screen coordinates according to the second depth value and the screen space vector includes:
S201, obtaining a viewport matrix, a projection matrix, a view matrix and screen coordinates;
S202, calculating a transformation matrix from the screen coordinates to a space vector according to the viewport matrix, the projection matrix, the view matrix and the screen coordinates;
S203, calculating a direction vector in the scene corresponding to the screen coordinates according to the transformation matrix from the screen coordinates to the space vector;
S204, calculating the scene three-dimensional coordinates corresponding to the screen coordinates according to the second depth value and the direction vector.
Specifically, let mViewPort be the viewport matrix, mProj the projection matrix, mView the view matrix, [x, y] the screen coordinates, mat the transformation matrix from screen coordinates to space vectors, and Vec the transformed vector; then:
mat=mat4.invert(mViewPort×mProj×mView) (5)
The direction vector in the scene corresponding to the screen coordinates is calculated as:
Vec=[x,y,1.0]×mat-[x,y,0.0]×mat (6)
With the depth value and the direction vector, the scene three-dimensional coordinate corresponding to the screen coordinates is readily calculated as:
vPos=vEye+fDepth×Vec (7)
Where vEye is the position of the camera in the scene.
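As a non-limiting illustration, equations (5) to (7) can be written with the gl-matrix library roughly as follows; the use of gl-matrix, the function name and the parameter names are assumptions, and the viewport, projection and view matrices are taken to be supplied by the rendering engine.

```typescript
import { mat4, vec3 } from "gl-matrix";

// Recover the scene coordinate for a screen point per equations (5)-(7).
function screenToScene(
  mViewPort: mat4,
  mProj: mat4,
  mView: mat4,
  x: number,
  y: number,
  fDepth: number, // the unpacked second depth value
  vEye: vec3      // camera position in the scene
): vec3 | null {
  // (5) mat = invert(mViewPort * mProj * mView)
  const combined = mat4.create();
  mat4.multiply(combined, mViewPort, mProj);
  mat4.multiply(combined, combined, mView);
  const mat = mat4.create();
  if (!mat4.invert(mat, combined)) return null; // singular matrix

  // (6) Vec = [x, y, 1.0] * mat - [x, y, 0.0] * mat
  // vec3.transformMat4 applies the 4x4 matrix with w = 1 and performs the perspective divide.
  const far = vec3.transformMat4(vec3.create(), vec3.fromValues(x, y, 1.0), mat);
  const near = vec3.transformMat4(vec3.create(), vec3.fromValues(x, y, 0.0), mat);
  const dir = vec3.subtract(vec3.create(), far, near);

  // (7) vPos = vEye + fDepth * Vec
  return vec3.scaleAndAdd(vec3.create(), vEye, dir, fDepth);
}
```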
Preferably, the pre-rendering the scene to obtain the depth of the scene further includes:
clearing the frame buffer.
The clearing of the frame buffer comprises the following steps:
judging whether a pixel has been written into the frame buffer;
if a pixel has been written into the frame buffer, the corresponding depth texture value is greater than 0.0;
if the depth texture value is 0.0, it is determined that no pixel has been written into the frame buffer.
Specifically, the frame buffer must be cleared before the rendering pass, and the cleared value is 0x00000000. Since the camera's near clipping plane is normally greater than 0.0, any pixel written into the frame buffer has a corresponding depth texture value greater than 0.0. Consequently, if the depth texture value read back afterwards is 0.0, it can be determined that this pixel does not correspond to any point in the scene.
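As a non-limiting illustration, the clearing step and the corresponding test might look as follows; the function names are assumptions.

```typescript
// Clear the frame buffer to 0x00000000 before the depth pre-render.
function clearForDepthPass(gl: WebGLRenderingContext): void {
  gl.clearColor(0.0, 0.0, 0.0, 0.0); // cleared value 0x00000000
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
}

// The near clipping plane is normally > 0.0, so every written pixel unpacks to a
// positive depth; exactly 0.0 means no scene point covers this pixel.
function pixelWasWritten(unpackedDepth: number): boolean {
  return unpackedDepth > 0.0;
}
```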
Preferably, the graphics-based large scene fast intersection method provided by the present application further includes:
applying the view matrix transformation in the vertex shader to obtain the view coordinates;
and negating the view-coordinate z value after the view matrix transformation, so that the z coordinate is positive. This is because, in WebGL, a vertex in front of the camera has a negative z coordinate after the view matrix transformation.
The scene coordinates obtained by the graphics-based large scene fast intersection method can be applied in three-dimensional simulation fields such as WebGL picking, elevation calculation and collision detection.
For example, the present application may be applied to a rubber band tool;
the rubber band tool is the most direct application of the technology; in software of the three-dimensional city planning industry, auxiliary tools such as distance measurement and area calculation, namely a rubber band tool, are generally needed. The user establishes the rubber band in a real-time interaction process, the user moves on a screen through a mouse or touch after the creation operation is started, the rubber band moves in real time at the moment, the length of the rubber band and the like are calculated in real time, and the calculation result is fed back to a user interface.
With the technical scheme provided by the application, the mouse or touch position of the user can be quickly converted into scene coordinates, so the interactively output rubber band parameters are simple to process, and the results are accurate and of high precision.
The present application may be applied to elevation calculation tools, for example: algorithm adjustment and algorithm optimization of an elevation calculation tool;
In the field of three-dimensional simulation, when a first-person camera roams and browses a whole city scene, the camera needs to walk on the ground and rise and fall with the terrain, so the scene height along the vertical line through the current camera position must be calculated in real time.
The technical scheme is applied to algorithm adjustment of the elevation calculation tool;
In the scene fast intersection method, the position and orientation of the camera used for pre-rendering are exactly the same as those of the main camera of the scene. For the elevation calculation tool, the position of the pre-rendering camera is kept unchanged, its direction is set vertically downward, and an orthographic projection is used; this yields a depth texture of the terrain, which is then converted into the elevation near the camera.
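As a non-limiting illustration, such a downward-looking orthographic pre-render camera could be set up with gl-matrix as follows; the Y-up convention, the extent parameter and the names are assumptions.

```typescript
import { mat4, vec3 } from "gl-matrix";

// Elevation pre-render camera: same position as the main camera, looking
// vertically downward, with an orthographic projection.
function elevationCamera(
  cameraPos: vec3,
  halfExtent: number, // half-size of the ground area covered by the depth texture
  near: number,
  far: number
): { view: mat4; proj: mat4 } {
  const target = vec3.fromValues(cameraPos[0], cameraPos[1] - 1.0, cameraPos[2]); // straight down (Y up assumed)
  const up = vec3.fromValues(0, 0, -1); // any horizontal up vector works when looking straight down
  const view = mat4.lookAt(mat4.create(), cameraPos, target, up);
  const proj = mat4.ortho(mat4.create(), -halfExtent, halfExtent, -halfExtent, halfExtent, near, far);
  return { view, proj };
}
```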
the technical scheme is applied to algorithm optimization of an elevation calculation tool;
When an elevation texture is rendered, the camera is at the center of the texture and may not leave the rendered elevation range for many frames, so repeated rendering is unnecessary. Only when the camera is found to have left the previously rendered elevation range does a new depth texture need to be rendered; the elevation calculation therefore has almost no impact on the rendering efficiency of the main scene, and a higher interaction speed is obtained.
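As a non-limiting illustration, the re-render decision could be expressed as follows; the cache structure and the range test are assumptions.

```typescript
// Re-render the elevation depth texture only when the camera leaves the area
// covered by the previously rendered texture.
interface ElevationCache {
  centerX: number;
  centerZ: number;
  halfExtent: number; // half-size of the area covered by the cached texture
}

function needsElevationRerender(
  cache: ElevationCache | null,
  camX: number,
  camZ: number
): boolean {
  if (cache === null) return true; // nothing rendered yet
  return (
    Math.abs(camX - cache.centerX) > cache.halfExtent ||
    Math.abs(camZ - cache.centerZ) > cache.halfExtent
  );
}
```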
The application may also be applied to collision detection tools, for example: algorithm adjustment and algorithm optimization of the collision detection tool.
At present, with the development of new-generation technologies such as 3S, three-dimensional simulation for city planning allows users to roam the whole city, the interior of a building, an underground parking lot, an urban underground pipe network, a pipe gallery and the like, and users place ever higher demands on the realism of the experience. The walking track therefore needs to be calculated in real time during roaming and browsing, and collision detection must be performed constantly to avoid the unreality caused by artifacts such as leaving the ground or passing through walls. The invention serves well as a collision detection tool and is specifically applied as follows:
the technical scheme is applied to algorithm adjustment of the collision detection tool;
Analogously to the elevation calculation tool, the pre-rendering camera renders outward from the main camera position to obtain the depth texture around the camera, i.e. the distances in the horizontal directions. Once the distance from the camera in a certain direction is less than a margin parameter, it is concluded that the camera cannot continue moving in that direction, i.e. a collision of the camera with the scene is detected.
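As a non-limiting illustration, the margin test could be expressed as follows; the function and parameter names are assumptions.

```typescript
// The depth read in a horizontal direction is the distance to the nearest obstacle;
// a value of 0.0 means nothing was written in that direction (no obstacle in range).
function collidesInDirection(horizontalDistance: number, margin: number): boolean {
  return horizontalDistance > 0.0 && horizontalDistance < margin;
}
```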
The technical scheme is applied to algorithm optimization of the collision detection tool;
The collision tool does not need to render in real time; the depth result of one rendering is stored. If the camera moves into a blind area of the depth texture, the depth texture is rendered again. The four directions do not need to be rendered within one frame: in fact, the camera has only one movement direction per frame, so rendering is performed first along the movement direction, and the other directions are rendered in the following frames. The rendering efficiency of the main scene is therefore hardly reduced.
In summary, the invention provides a large scene fast intersection method based on graphics, and the method has the following beneficial effects:
(1) The depth calculation is realized based on WEB scene rendering, so that the intersection speed is comparable to the rendering speed. In an algorithm test scene, the original rendering frame rate is about 80 fps, the frame rate after intersection using a space-partitioning tree drops to about 4-5 fps, while the real-time intersection frame rate using the present algorithm reaches 70 fps;
(2) The method is realized based on scene prerendering, and an original frame buffer area is utilized, so that extra memory is hardly consumed;
(3) The method is independent of scene complexity, the rendering speed is high, and the efficiency is improved by at least 10 times;
(4) The invention has high rendering precision and good display effect.
It can be understood that the embodiments provided above correspond to the graphics-based large scene fast intersection method embodiments, and corresponding specific contents may be referred to mutually, which is not repeated here.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks for the graphics-based large scene fast intersection method.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.