CN117523052A - Method for realizing ground-attached special effect of three-dimensional scene and computer-readable storage medium


Info

Publication number
CN117523052A
CN117523052A (application CN202210898237.0A)
Authority
CN
China
Prior art keywords
scene
camera
ground
rendering
bounding box
Prior art date
Legal status
Pending
Application number
CN202210898237.0A
Other languages
Chinese (zh)
Inventor
刘德建
林琛
郑福淦
Current Assignee
Fujian TQ Digital Co Ltd
Original Assignee
Fujian TQ Digital Co Ltd
Priority date
Filing date: 2022-07-28
Publication date: 2024-02-06
Application filed by Fujian TQ Digital Co Ltd filed Critical Fujian TQ Digital Co Ltd
Priority to CN202210898237.0A
Publication of CN117523052A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for realizing a ground-attached special effect of a three-dimensional scene and a computer-readable storage medium. The method comprises the following steps: importing the 3D particles to be displayed attached to the ground into a newly created layer in the 3D scene; adding a camera to the 3D scene and setting it to orthographic projection, where the length and width of the camera's field of view are the same as those of the 3D scene and the display content of the camera is the content of the layer; creating a render texture and setting the render target of the camera to the render texture; creating a bounding box and a shader, and passing the render texture, the depth map of the main camera and the projection matrix of the bounding box into the shader, where the length and width of the bounding box are the same as those of the 3D scene; and the shader rendering the render texture according to the projection matrix of the bounding box and the depth map of the main camera. According to the invention, all the 3D particle special effects that need to be displayed attached to the ground can be projected and displayed in the 3D scene, while the original dynamic behaviour of the ground-attached particle special effects is maintained.

Description

Method for realizing ground-attached special effect of three-dimensional scene and computer-readable storage medium
Technical Field
The invention relates to the technical field of three-dimensional special effect processing, in particular to a method for realizing a ground-attached special effect of a three-dimensional scene and a computer-readable storage medium.
Background
At present, when a 3D game or 3D software is developed, a 3D scene often has to be constructed, and the ground of the 3D scene is usually neither regular nor smooth. If the scene ground is uneven, a ground-attached special effect will often poke through the ground surface in some places and be occluded by it in others. How to keep the ground-attached special effect from being blocked by the ground surface while preserving a good dynamic appearance, and at the same time taking into account overall performance and the complexity of the workflow for producing ground-attached special effects, so as to improve the presentation of the 3D game or software, is therefore a problem to be solved.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a method and a computer-readable storage medium for realizing the ground-attached special effect of a three-dimensional scene, so that the ground-attached particle special effects in a 3D scene are automatically rendered onto the ground surface and onto objects on the ground surface according to the undulation of the scene, while the original dynamic behaviour of the ground-attached particle special effects is maintained.
In order to solve the above technical problem, the invention adopts the following technical scheme: a method for realizing a ground-attached special effect of a three-dimensional scene, comprising the following steps:
creating a new layer, and importing the 3D particles that need to be displayed attached to the ground in the 3D scene into the layer;
adding a camera to the 3D scene and setting the camera to orthographic projection, wherein the length and width of the camera's field of view are the same as those of the 3D scene, and the display content of the camera is the content of the layer;
creating a render texture, and setting the render target of the camera to the render texture;
creating a bounding box and a shader, and passing the render texture, a depth map of the main camera and the projection matrix of the bounding box into the shader, wherein the length and width of the bounding box are the same as those of the 3D scene;
the shader rendering the render texture according to the projection matrix of the bounding box and the depth map of the main camera.
The invention also proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the method described above.
The invention has the beneficial effects that: by setting the display content of the newly added camera to the layer containing all 3D particles that need to be displayed attached to the ground in the 3D scene, and by setting the field of view of the newly added camera accordingly, all special effects that need to be displayed attached to the ground in the 3D scene appear in the render texture associated with the newly added camera. When this render texture is passed as a texture into the shader used by the material of the bounding box, all the 3D particle special effects that need to be displayed attached to the ground in the 3D scene are projected and displayed, and the dynamic behaviour of the 3D particle special effects or of other ground-attached effects is maintained. Meanwhile, artists can keep their original workflow when developing 3D ground-attached particle special effects or other ground-attached special effects, and only need to import the 3D particles of a ground-attached effect into the specific layer when ground attachment is required, so no additional programming work is needed and the efficiency of producing ground-attached special effects is improved.
Drawings
FIG. 1 is a flow chart of a method for implementing a ground-attached special effect of a three-dimensional scene;
FIG. 2 is a flow chart of a method according to a first embodiment of the invention;
FIG. 3 is a view of an added camera according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram of rendering textures according to a first embodiment of the present invention;
FIG. 5 is a schematic diagram of a bounding box according to a first embodiment of the present invention;
FIG. 6 is a schematic diagram showing the effect of projecting all 3D particles to be projected in a scene as a whole according to the first embodiment of the present invention;
FIG. 7 is a schematic diagram showing the gradual fade of the portion of a particle effect projected on an object above the depth map of the 3D scene according to the first embodiment of the present invention;
FIG. 8 is a schematic diagram showing the effect of particle effects projected on an object on a road surface according to the first embodiment of the present invention;
FIG. 9 is a schematic diagram showing the effect of particle effects projected on a rugged scene ground in accordance with an embodiment of the present invention.
Detailed Description
In order to describe the technical content, objects and effects of the present invention in detail, the invention is described below by way of embodiments with reference to the accompanying drawings.
Referring to FIG. 1, a method for realizing a ground-attached special effect of a three-dimensional scene includes:
creating a new layer, and importing the 3D particles that need to be displayed attached to the ground in the 3D scene into the layer;
adding a camera to the 3D scene and setting the camera to orthographic projection, wherein the length and width of the camera's field of view are the same as those of the 3D scene, and the display content of the camera is the content of the layer;
creating a render texture, and setting the render target of the camera to the render texture;
creating a bounding box and a shader, and passing the render texture, a depth map of the main camera and the projection matrix of the bounding box into the shader, wherein the length and width of the bounding box are the same as those of the 3D scene;
the shader rendering the render texture according to the projection matrix of the bounding box and the depth map of the main camera.
From the above description, the beneficial effects of the invention are as follows: the ground-attached particle special effects in the 3D scene are automatically rendered onto the ground surface and onto objects on the ground surface according to the undulation of the scene, the original dynamic behaviour of the ground-attached particle special effects is maintained, and artists can keep their original workflow when producing a new 3D ground-attached particle special effect, so no additional programming work is needed.
Further, the shader rendering the render texture according to the projection matrix of the bounding box and the depth map of the main camera is specifically:
the shader obtaining a depth value from the depth map of the main camera, and calculating homogeneous clip-space coordinates from the screen coordinates of the main camera and the depth value;
multiplying the projection matrix of the bounding box by the homogeneous clip-space coordinates to obtain world coordinates of a rendering position;
and projecting the render texture according to the world coordinates.
Further, the information passed into the shader further includes a depth map of the 3D scene;
after the render texture is projected according to the world coordinates, the method further includes:
determining the non-walkable objects in the 3D scene according to the depth map of the 3D scene, and performing gradual fade processing on the render texture projected on the non-walkable objects.
As can be seen from the above description, the depth map of the main camera is used together with the projection matrix of the bounding box to calculate the world coordinates of the rendering position, while the depth map of the scene is used to identify the non-walkable objects standing on the walkable ground surface, so that the render texture projected onto the non-walkable objects can be faded gradually, realizing a transparent transition effect.
Further, performing gradual fade processing on the render texture projected on the non-walkable objects is specifically:
reducing the opacity (alpha) of the render texture projected on a non-walkable object according to the height of the non-walkable object, so that the effect becomes more transparent the higher it is projected.
As can be seen from the above description, compared with the projector built into Unity, which can only project static textures or UV-scrolling textures, the present scheme achieves a better dynamic presentation and better performance.
The invention also proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the method described above.
Example 1
Referring to FIGS. 2-9, a first embodiment of the present invention is as follows: a method for realizing the ground-attached special effect of a three-dimensional scene, which can be applied to the presentation of ground-attached special effects in 3D games or 3D software, for example a 3D game or piece of software developed in the Unity3D engine that requires a 3D ground-attached effect.
As shown in fig. 2, the method comprises the following steps:
s1: and creating a layer in the 3D scene, and guiding the 3D particles needing to be displayed in a stuck way into the layer.
Specifically, a Layer is newly created, which is named, for example, under the name of sticktogroundparticle, and all 3D dynamic particles that need to be displayed in place are set as the Layer.
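For illustration only, a minimal Unity C# sketch of this step might look as follows, assuming a layer named StickToGroundParticle has already been defined in the project's Tags and Layers settings; the component name and the particleRoots field are illustrative assumptions, not part of the patent:

```csharp
using UnityEngine;

// Assigns every ground-attached particle object (and its children) to the
// StickToGroundParticle layer so that only the dedicated camera will see them.
public class StickToGroundLayerSetup : MonoBehaviour
{
    // Root objects of the 3D particle effects that must be displayed attached to the ground.
    public GameObject[] particleRoots;

    void Awake()
    {
        int layer = LayerMask.NameToLayer("StickToGroundParticle");
        foreach (GameObject root in particleRoots)
        {
            foreach (Transform t in root.GetComponentsInChildren<Transform>(true))
            {
                t.gameObject.layer = layer;   // move the whole hierarchy onto the layer
            }
        }
    }
}
```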
S2: adding a camera to the 3D scene, setting the camera to orthographic projection, with the display content of the camera being the content of the layer.
Specifically, a camera is added to the 3D scene and set to orthographic projection (an orthographic camera), and the length and width of the camera's field of view are made the same as those of the 3D scene, so that the camera exactly covers the whole 3D scene. For example, as shown in FIG. 3, the middle part of FIG. 3 is the current 3D scene, and the white rectangular frame around it is the field of view of the newly added camera; the two exactly coincide.
The Culling Mask of this camera is set to StickToGroundParticle only, while the Culling Mask of the main camera in the 3D scene excludes StickToGroundParticle; that is, the new camera displays only the content of the 3D dynamic particle layer of step S1, and the main camera does not display that content.
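A hedged Unity C# sketch of how such a camera could be configured is given below; the sceneWidth/sceneLength fields, the camera height of 50 units and the top-down placement are assumptions for illustration, since the patent only requires that the orthographic field of view match the scene's length and width:

```csharp
using UnityEngine;

// Creates a top-down orthographic camera whose view exactly covers the scene
// and which renders only the StickToGroundParticle layer.
public class ParticleCaptureCameraSetup : MonoBehaviour
{
    public float sceneWidth = 100f;   // X extent of the 3D scene (assumed)
    public float sceneLength = 100f;  // Z extent of the 3D scene (assumed)
    public Camera mainCamera;

    public Camera CreateCaptureCamera(Vector3 sceneCenter)
    {
        var go = new GameObject("StickToGroundCaptureCamera");
        var cam = go.AddComponent<Camera>();

        cam.orthographic = true;
        // Orthographic size is half of the vertical extent; the aspect covers the horizontal extent.
        cam.orthographicSize = sceneLength * 0.5f;
        cam.aspect = sceneWidth / sceneLength;

        // Look straight down at the scene from above (assumed placement).
        go.transform.position = sceneCenter + Vector3.up * 50f;
        go.transform.rotation = Quaternion.LookRotation(Vector3.down, Vector3.forward);

        // This camera sees only the ground-attached particle layer...
        cam.cullingMask = LayerMask.GetMask("StickToGroundParticle");
        // ...while the main camera no longer renders that layer.
        mainCamera.cullingMask &= ~LayerMask.GetMask("StickToGroundParticle");
        return cam;
    }
}
```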
S3: creating a render texture, and setting the render target of the camera to the render texture.
Specifically, a render texture is created and assigned to the newly added camera, so that the display content of the newly added camera is automatically written into the render texture; in other words, whatever lies in the new camera's field of view, namely the 3D particle special effects that need to be displayed attached to the ground in the scene, is captured into the render texture.
As shown in FIG. 4, which is a schematic diagram of the render texture, the newly added camera displays all ground-attached special effects in the scene whose layer is set to StickToGroundParticle.
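A minimal sketch of this step in Unity C#; the texture size of 1024 x 1024 is an arbitrary assumption:

```csharp
using UnityEngine;

// Creates a render texture and makes it the render target of the capture camera,
// so everything the camera sees (the ground-attached particles) is written into it.
public static class ParticleRenderTextureSetup
{
    public static RenderTexture AttachRenderTexture(Camera captureCamera)
    {
        var rt = new RenderTexture(1024, 1024, 16, RenderTextureFormat.ARGB32);
        rt.Create();
        captureCamera.targetTexture = rt;  // the camera now renders into rt instead of the screen
        return rt;
    }
}
```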
S4: creating a bounding box whose length and width are the same as those of the 3D scene, that is, generating, according to the position and size of the 3D scene, a bounding box that just encloses the whole 3D scene.
As shown in FIG. 5, the middle part of FIG. 5 is the current 3D scene, and the rectangular frame on the outside is the bounding box.
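One possible way to generate such a bounding box in Unity C#, sketched under the assumption that the scene extents can be gathered from the renderers in the scene; the vertical size of 50 units is an illustrative choice:

```csharp
using UnityEngine;

// Creates a cube that just encloses the whole 3D scene in the X/Z plane.
public static class SceneBoundingBox
{
    public static GameObject Create()
    {
        // Accumulate the bounds of every renderer to estimate the scene extents.
        var bounds = new Bounds(Vector3.zero, Vector3.zero);
        foreach (Renderer r in Object.FindObjectsOfType<Renderer>())
        {
            bounds.Encapsulate(r.bounds);
        }

        GameObject box = GameObject.CreatePrimitive(PrimitiveType.Cube);
        box.name = "GroundEffectBoundingBox";
        box.transform.position = bounds.center;
        // Length and width match the scene; the vertical size is an assumed value.
        box.transform.localScale = new Vector3(bounds.size.x, 50f, bounds.size.z);
        return box;
    }
}
```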
S5: creating a shader, and passing the render texture, the depth map of the 3D scene, the depth map of the main camera, and the projection matrix of the bounding box into the shader.
That is, a shader to be used by the material of the bounding box is added, and then the render texture is passed into the shader as a texture, the depth map of the walkable ground surface of the 3D scene is passed into the shader, the projection matrix of the bounding box is passed into the shader, and the depth map of the main camera is passed into the shader.
For the depth map of the main camera, the interface m_Camera.SetTargetBuffers is used to set a depth buffer; data is then read from the depth buffer and written into a render texture, so that the required depth map is obtained, and this map is passed into the shader.
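The following Unity C# sketch illustrates one possible way to feed these parameters to the bounding-box material. The shader property names (_ParticleRT, _MainCamDepthTex, _SceneDepthTex, _ProjectorVP) are illustrative assumptions, and interpreting the "projection matrix of the bounding box" as the view-projection matrix of the orthographic capture camera that covers the bounding box is also an assumption; the depth capture via SetTargetBuffers follows the description above but is only one possible arrangement:

```csharp
using UnityEngine;

// Feeds the render texture, the depth maps and the bounding-box projection matrix
// into the material (shader) used by the bounding box.
public class GroundEffectProjector : MonoBehaviour
{
    public Camera mainCamera;
    public Camera captureCamera;      // the orthographic camera of step S2
    public Material boundingBoxMat;   // material of the bounding box, using the custom shader
    public Texture sceneDepthMap;     // depth map of the walkable ground, prepared separately

    RenderTexture colorRT;
    RenderTexture depthRT;

    void Start()
    {
        // Redirect the main camera so that its depth ends up in a readable render texture.
        // Note: with SetTargetBuffers the main camera no longer renders to the screen directly;
        // in a real project colorRT would still have to be blitted to the display.
        colorRT = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGB32);
        depthRT = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.Depth);
        mainCamera.SetTargetBuffers(colorRT.colorBuffer, depthRT.depthBuffer);

        boundingBoxMat.SetTexture("_ParticleRT", captureCamera.targetTexture);
        boundingBoxMat.SetTexture("_MainCamDepthTex", depthRT);
        boundingBoxMat.SetTexture("_SceneDepthTex", sceneDepthMap);
    }

    void LateUpdate()
    {
        // View-projection of the orthographic capture camera looking down on the bounding box,
        // updated every frame and used by the shader to place the projected effect.
        Matrix4x4 vp = captureCamera.projectionMatrix * captureCamera.worldToCameraMatrix;
        boundingBoxMat.SetMatrix("_ProjectorVP", vp);
    }
}
```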
S6: the shader renders the render texture according to the projection matrix of the bounding box, the depth map of the 3D scene, and the depth map of the main camera.
Specifically, the pixel shader in the shader samples the R channel of the incoming depth map of the main camera at the screen position of the main camera to obtain a depth value, and calculates the homogeneous clip-space coordinates from the screen coordinates of the main camera and the depth value; the homogeneous clip-space coordinates are then multiplied by the matrix of the bounding box to obtain the projected world coordinates, that is, the world coordinates of the rendering position, so that the ground-attached effect in the render texture can be rendered.
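To make the arithmetic concrete, here is a CPU-side C# sketch of the per-pixel computation the fragment shader is described as performing. It follows the common depth-reconstruction pattern (screen UV plus depth to normalized device coordinates, to a world position via an inverse view-projection matrix, then to a projector UV); the matrix names, the OpenGL-style depth range and the use of the main camera's inverse view-projection together with the projector's view-projection are assumptions about how the patent's "projection matrix of the bounding box" would be realised:

```csharp
using UnityEngine;

public static class GroundProjectionMath
{
    // screenUV: the pixel position on the main camera, in [0,1] x [0,1].
    // depth:    the value sampled from the R channel of the main camera's depth map.
    public static Vector2 ComputeProjectorUV(
        Vector2 screenUV, float depth,
        Matrix4x4 mainCameraInverseViewProjection,
        Matrix4x4 projectorViewProjection)
    {
        // 1. Rebuild the homogeneous clip-space coordinate from screen position and depth.
        Vector4 clip = new Vector4(
            screenUV.x * 2f - 1f,
            screenUV.y * 2f - 1f,
            depth * 2f - 1f,       // assuming an OpenGL-style [-1,1] depth range
            1f);

        // 2. Transform back to world space and divide by w (perspective divide).
        Vector4 world = mainCameraInverseViewProjection * clip;
        world /= world.w;

        // 3. Project the world position with the ground projector (the orthographic
        //    camera covering the bounding box) and remap to texture coordinates.
        Vector4 proj = projectorViewProjection * world;
        return new Vector2(proj.x * 0.5f + 0.5f, proj.y * 0.5f + 0.5f);
    }
}
```

The render texture is then sampled at this projector UV, which is what makes the particle effect appear draped over the geometry actually seen by the main camera.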
Since all 3D ground-attached particle effects in the 3D scene have already been captured by the newly added camera into the newly added render texture, all ground-attached particle effects can be projected onto the entire 3D scene at once. As shown in FIG. 6, the three ground-attached particle special effects marked in FIG. 6 are projected into the 3D scene as a whole.
Further, the depth map of the 3D scene passed into the shader is the depth map of the walkable ground surface of the 3D scene. Some non-walkable objects (such as large stones) also stand on the walkable surface, so when a 3D ground-attached special effect is projected onto a non-walkable object, the higher the projected position, the more the effect is faded out; that is, the render texture projected onto the parts of objects above the scene depth map is faded gradually according to the scene depth map, realizing the effect that the higher the position, the more transparent that part of the effect becomes.
Specifically, the shader can be assigned to the material of the bounding box, and the material can then adjust the fade of the portion of the 3D particle effect that is projected onto objects above the 3D scene depth.
As shown in FIG. 7, when a large stone is non-walkable and a render texture (such as the white light ring in the middle of FIG. 7) is projected onto it, it is desirable that the higher the projected position, the more transparent the render texture becomes; that is, the render texture projected on the walkable part (such as rugged but walkable ground) is not faded, while the render texture projected on the parts of objects above the scene depth map is faded gradually.
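A sketch of the fade itself, expressed as plain C# math; the fadeHeight parameter and the way the walkable-ground height is taken from the scene depth map are illustrative assumptions:

```csharp
using UnityEngine;

public static class GroundEffectFade
{
    // worldY:     height of the point the effect is projected onto (from the main camera depth).
    // groundY:    height of the walkable ground at the same X/Z (from the scene depth map).
    // fadeHeight: height above the ground at which the effect becomes fully transparent.
    public static float FadeAlpha(float worldY, float groundY, float fadeHeight)
    {
        float heightAboveGround = Mathf.Max(0f, worldY - groundY);
        // On the walkable ground the effect stays fully opaque; on non-walkable objects
        // it fades out linearly, the higher the projected point the more transparent it is.
        return Mathf.Clamp01(1f - heightAboveGround / fadeHeight);
    }
}
```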
Since the 3D particle effects that need to be displayed attached to the ground in the 3D scene have their layer set to StickToGroundParticle, the field of view of the newly added camera exactly covers the entire 3D scene, and that camera displays only objects on the StickToGroundParticle layer, all ground-attached 3D effects in the scene are displayed on the render texture associated with the newly added camera. When this render texture is passed as a texture into the shader of the bounding box material, all 3D particle special effects that need to be displayed attached to the ground in the 3D scene are projected and displayed, and the dynamic behaviour of the 3D particle ground-attached effects or other ground-attached effects is maintained.
Referring to FIGS. 8-9, FIG. 8 shows that when an object lies on the road surface, the dynamic particle effect of the halo under the character's feet is automatically projected onto the object and keeps its dynamic behaviour; FIG. 9 shows that when the ground of the 3D scene is rugged, the dynamic particle effect of the halo under the character's feet is automatically projected onto the scene ground and the original dynamic behaviour of the 3D particle effect is maintained.
Example two
This embodiment is a computer-readable storage medium corresponding to the above embodiment, on which a computer program is stored. When the program is executed by a processor, it realizes each process of the above embodiment of the method for realizing the ground-attached special effect of a three-dimensional scene and can achieve the same technical effects; to avoid repetition, they are not described in detail here.
In summary, according to the method for realizing the ground-attached special effect of a three-dimensional scene and the computer-readable storage medium provided by the invention, all 3D particle special effects or other special effects that need to be displayed attached to the ground in the 3D scene are displayed on a render texture by a dedicated camera, and are then projected into the scene through a bounding box that has the same size as the 3D scene and just encloses it. The depth map of the main camera is obtained through a depth buffer and passed into the shader, where it is used together with the projection matrix of the bounding box by the pixel shader to calculate the world coordinates of the rendering position; the depth map of the walkable ground of the scene is mainly used to enhance the visual result, so that the render texture projected onto the parts of objects above the scene depth map is faded gradually, the higher the projected position, the more transparent it becomes. Compared with Unity's built-in projector, which can only project static textures or UV-scrolling textures, the presentation is better; and because all 3D particles to be projected in the scene, whether ground-attached particle special effects or other ground-attached effects, are projected into the scene as a whole, there is no need to manage the positions of individual projectors, and the overall projection is also beneficial for performance. Meanwhile, artists can keep their original workflow when developing 3D ground-attached particle special effects or other ground-attached special effects, and only need to set the Layer of a ground-attached effect to StickToGroundParticle when ground attachment is required, so they only need to care about the aesthetics of the effect and do not need to cooperate with programmers to implement any part of the ground attachment, which improves the efficiency of producing ground-attached special effects.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent changes made by the specification and drawings of the present invention, or direct or indirect application in the relevant art, are included in the scope of the present invention.

Claims (5)

1. A method for realizing a ground-attached special effect of a three-dimensional scene, characterized by comprising the following steps:
creating a new layer, and importing the 3D particles that need to be displayed attached to the ground in the 3D scene into the layer;
adding a camera to the 3D scene and setting the camera to orthographic projection, wherein the length and width of the camera's field of view are the same as those of the 3D scene, and the display content of the camera is the content of the layer;
creating a render texture, and setting the render target of the camera to the render texture;
creating a bounding box and a shader, and passing the render texture, a depth map of the main camera and the projection matrix of the bounding box into the shader, wherein the length and width of the bounding box are the same as those of the 3D scene;
the shader rendering the render texture according to the projection matrix of the bounding box and the depth map of the main camera.
2. The method for realizing a ground-attached special effect of a three-dimensional scene according to claim 1, wherein the shader rendering the render texture according to the projection matrix of the bounding box and the depth map of the main camera is specifically:
the shader obtaining a depth value from the depth map of the main camera, and calculating homogeneous clip-space coordinates from the screen coordinates of the main camera and the depth value;
multiplying the projection matrix of the bounding box by the homogeneous clip-space coordinates to obtain world coordinates of a rendering position;
and projecting the render texture according to the world coordinates.
3. The method for realizing a ground-attached special effect of a three-dimensional scene according to claim 2, wherein the information passed into the shader further comprises a depth map of the 3D scene;
after the render texture is projected according to the world coordinates, the method further comprises:
determining non-walkable objects in the 3D scene according to the depth map of the 3D scene, and performing gradual fade processing on the render texture projected on the non-walkable objects.
4. The method for realizing a ground-attached special effect of a three-dimensional scene according to claim 3, wherein performing gradual fade processing on the render texture projected on the non-walkable objects is specifically:
reducing the opacity of the render texture projected on a non-walkable object according to the height of the non-walkable object.
5. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-4.
CN202210898237.0A, filed 2022-07-28 (priority date 2022-07-28): Method for realizing ground-attached special effect of three-dimensional scene and computer-readable storage medium. Status: Pending. Published as CN117523052A.

Priority Applications (1)

Application Number: CN202210898237.0A; Priority Date: 2022-07-28; Filing Date: 2022-07-28; Title: Method for realizing ground-attached special effect of three-dimensional scene and computer-readable storage medium


Publications (1)

Publication Number: CN117523052A; Publication Date: 2024-02-06

Family

ID=89751787

Family Applications (1)

Application Number: CN202210898237.0A; Status: Pending; Publication: CN117523052A; Title: Method for realizing ground-attached special effect of three-dimensional scene and computer-readable storage medium

Country Status (1)

Country: CN; Publication: CN117523052A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination