CN116263980A - CesiumJS-based video projection mapping method in three-dimensional scene - Google Patents

CesiumJS-based video projection mapping method in three-dimensional scene

Info

Publication number
CN116263980A
CN116263980A (application CN202111530530.3A)
Authority
CN
China
Prior art keywords
video
projection
matrix
camera
cesiumjs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111530530.3A
Other languages
Chinese (zh)
Inventor
胡欣立
王建东
黄志远
刘振宇
李明霖
夏翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Original Assignee
Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Institute Of Computing Technology Xi'an University Of Electronic Science And Technology
Priority to CN202111530530.3A
Publication of CN116263980A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

With the rapid development of cities and the arrival of the internet age, the smart park has become a hot topic in urban construction research. Video surveillance, as an important integrated function of the smart park, has been studied extensively. Fusing video with the park scene endows the video with three-dimensional spatial information and enhances the virtual three-dimensional scene. CesiumJS is a WebGL-based three-dimensional GIS framework that can represent the rich three-dimensional scenes of a smart park, incorporates GIS elements, and has good applicability. A CesiumJS-based video projection technology is provided: using WebGL and CesiumJS together with a texture mapping algorithm and a shadow map algorithm, it realizes real-time video projection and solves the video projection penetration problem, fusing surveillance video with the park scene by projection to produce a video surveillance system that is visually intuitive, highly usable, and richer in information.

Description

CesiumJS-based video projection mapping method in three-dimensional scene
Technical Field
The invention relates to the fields of GIS, computer graphics and computer vision, in particular to a video projection technology based on CesiumJS.
Background
A geographic information system (Geographic Information System or Geo-Information System, GIS), sometimes called a "geoscience information system", is a particularly important kind of spatial information system. Supported by computer hardware and software, it is a technical system for collecting, storing, managing, operating on, analyzing, displaying, and describing geographically distributed data over all or part of the Earth's surface (including the atmosphere).
Computer graphics (CG) is the science of using mathematical algorithms to convert two- and three-dimensional graphics into a raster form suitable for computer display. In short, computer graphics studies how graphics are represented in a computer, together with the principles and algorithms for computing, processing, and displaying them.
Shadow mapping is an image-based shadow generation method. Shadows matter for rendering realistic scenes: they reflect the occlusion relationships between objects in space as well as geometric information about the occluder and the receiving surface, and drawing them in real time greatly increases the realism of scene rendering. The traditional shadow map algorithm first draws the entire scene with the light source as the viewpoint. At this stage only the minimum depth from the light source to the scene is stored in the Z-buffer, producing a depth map whose cells are called shadow texels. The scene is then rendered again from the camera viewpoint through the conventional rendering pipeline, this time performing a shadow test while drawing: a pixel not in shadow is drawn normally, otherwise it is drawn in shadow.
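To make the two-pass idea concrete, the snippet below sketches the classic shadow test as GLSL fragment-shader source held in a JavaScript string, the way WebGL shaders are usually carried. It is a minimal sketch independent of CesiumJS; the uniform and function names are illustrative assumptions.

// Classic shadow-map depth test (GLSL source in a JS string); names illustrative.
const shadowTestGLSL = `
  uniform sampler2D u_shadowMap;  // depth map rendered from the light source
  uniform mat4 u_lightViewProj;   // the light source's view-projection matrix

  // Returns 1.0 if the world-space point is lit, 0.0 if it is in shadow.
  float shadowTest(vec4 worldPos) {
      vec4 lightClip = u_lightViewProj * worldPos;
      vec3 ndc = lightClip.xyz / lightClip.w;            // perspective divide
      vec3 uvz = ndc * 0.5 + 0.5;                        // [-1,1] -> [0,1]
      float nearest = texture2D(u_shadowMap, uvz.xy).r;  // stored minimum depth
      float bias = 0.005;                                // guard against shadow acne
      return (uvz.z - bias > nearest) ? 0.0 : 1.0;
  }
`;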
Most video projection products currently on the market simply paste the video as a texture onto a rectangular object. This fails to exploit the video's spatial information, cannot enhance the three-dimensional virtual scene, lacks innovation, and is neither intuitive nor very usable.
Disclosure of Invention
The invention provides a CesiumJS-based method for projection mapping of video in a three-dimensional scene. It aims to solve the problem of fusing video with a virtual three-dimensional scene: by projecting the video onto the three-dimensional model, the video and the scene blend together more naturally.
In order to achieve the above purpose, the invention adopts the following specific technical scheme:
a video projection mapping method based on CesiumJS in a three-dimensional scene comprises the following steps:
The first step: preprocessing for the projection source camera.
Calculate the model matrix M, the viewpoint (view) matrix V, and the projection matrix P of the projection source camera from the projection parameters.
The second step: preprocessing the shadow map using the CesiumJS API.
(1) Create the shadow map and pass in parameters such as the position of the projection source camera.
(2) Obtain the attribute values of the shadow map and pass them as parameters to the fragment shader used in post-processing.
The third step: post-processing rendering of the scene, i.e. writing the content of the fragment shader.
(1) Obtain the depth z pixel by pixel and assign each pixel of the post-processing frame buffer the NDC coordinate v1 = (x, y, z, 1).
(2) Coordinate conversion: convert the pixel from the NDC coordinate system of the viewpoint camera to the NDC coordinate system of the projection source camera. The conversion is as follows:
First compute the view-projection transformation matrix of the viewpoint camera, whose projection matrix is P1 and view matrix is V1: T1 = P1 × V1;
obtain the world coordinate v2 = T1⁻¹ · v1;
compute the view-projection transformation matrix of the projection source camera: T2 = P × V;
obtain the projection source camera's NDC coordinate v3 = T2 · v2.
(3) Determine whether the pixel lies within the valid range, and clip the parts that do not.
(4) Determine whether the pixel is occluded from the projection source camera. This step is completed mainly through Cesium's built-in shadow map API; the penetration problem is solved by comparing depth information.
(5) Normalize the NDC coordinate to obtain the uv coordinate of the video texture.
(6) Assign each unoccluded pixel the texture color of the video via uv mapping.
Preferably, since the matrices of the projection source camera do not change, the view-projection transformation matrix T can be computed once and kept in memory, and T is passed into the fragment shader as a uniform constant.
Compared with the prior art, the invention has the following beneficial effects:
The invention provides a new CesiumJS-based projection texture mapping method. By adding a shadow map, it computes whether each pixel is occluded from the projection source, which fixes the occlusion penetration problem caused by the missing depth values of traditional projection texture mapping and markedly improves the rendering. The logic is clear and concise, the number of draw calls is reduced since only a single post-processing pass is needed, and the efficiency is therefore higher. The algorithm is clear, the results are robust, the runtime efficiency is high, and the method is suitable for most current CesiumJS-based video surveillance systems.
Drawings
The following describes the embodiments of the present invention in further detail with reference to the accompanying drawings.
Fig. 1 is the post-processing flow chart of the CesiumJS-based video projection mapping method in a three-dimensional scene;
Fig. 2 is the post-processing formula diagram of the CesiumJS-based video projection mapping method in a three-dimensional scene;
Fig. 3 is a projection effect diagram of the CesiumJS-based video projection mapping method in a three-dimensional scene.
Detailed Description
A preferred embodiment of the building visual monitoring method according to the present invention is described in detail below with reference to the examples and the accompanying drawings, as shown in Fig. 1; the invention is not, however, limited to this embodiment.
The flow of the CesiumJS-based texture projection mapping algorithm in this embodiment is shown in Fig. 1. The method comprises three steps: preprocessing the projection source; creating and setting up the shadow map; and drawing the three-dimensional scene from the current viewpoint followed by post-processing rendering.
(1) Preprocessing
For each projection source in the scene, first obtain its parameters in the three-dimensional scene, such as position and orientation. From these projection parameters, compute the viewpoint (view) matrix V and the projection matrix P of the projection source camera; multiplying them yields the view-projection matrix T = P × V, which transforms world coordinates into the NDC coordinate system of the projection source camera. In code, this matrix is passed to the fragment shader as a uniform constant, as in the sketch below.
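A minimal sketch of this preprocessing, assuming a Cesium viewer exists and modeling the projection source with Cesium.Camera; the position, orientation, and frustum values are placeholders, not values from the patent.

// Preprocessing sketch: build the projection source camera and cache T = P * V.
const viewer = new Cesium.Viewer('cesiumContainer'); // assumes a matching <div>

const projCamera = new Cesium.Camera(viewer.scene);
projCamera.setView({
  destination: Cesium.Cartesian3.fromDegrees(120.38, 36.06, 50.0), // placeholder position
  orientation: { heading: 0.0, pitch: -0.3, roll: 0.0 }            // placeholder pose
});
projCamera.frustum = new Cesium.PerspectiveFrustum({
  fov: Cesium.Math.toRadians(60.0), // field of view of the physical video camera
  aspectRatio: 16.0 / 9.0,
  near: 1.0,
  far: 500.0                        // depth of the view frustum
});

// T = P * V transforms world coordinates into the projection source's clip space.
// The projection source does not move, so T is computed once, cached, and later
// handed to the fragment shader as a uniform constant.
const T = Cesium.Matrix4.multiply(
  projCamera.frustum.projectionMatrix, // P
  projCamera.viewMatrix,               // V
  new Cesium.Matrix4()
);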
(2) Creation and setting of a shadow map
Creating the shadow map mainly amounts to setting the position of the projection source camera. After rendering, depth information from the projection source camera can be read out of the shadow map; comparing depths then reveals whether a pixel is occluded. Unoccluded coordinate points are assigned the video texture, while occluded points are assigned the shadow texture. A setup sketch follows.
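Continuing the sketch above. Note that Cesium.ShadowMap is an internal, undocumented constructor, so its shape here is an assumption; the option values follow the parameter list in claim 2 below.

// Shadow-map setup sketch (Cesium.ShadowMap is internal; treat as an assumption).
const shadowMap = new Cesium.ShadowMap({
  context: viewer.scene.context, // rendering context (assumed required)
  lightCamera: projCamera,       // the projection source acts as the light
  enabled: false,                // the shadow map itself is not displayed
  isPointLight: false,           // not a point light source
  isSpotLight: true,             // a spotlight source
  pointLightRadius: 500.0,       // set to the depth of the view frustum
  darkness: 0.3,                 // shadow darkness
  cascadesEnabled: false         // a single map, no cascades over the frustum
});
// Let the scene render depth from the projection source into this shadow map.
viewer.scene.shadowMap = shadowMap;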
(3) Drawing the three-dimensional scene from the current viewpoint and performing post-processing rendering
The post-processing flow is concentrated in the fragment shader, which processes the frame buffer pixel by pixel. Each pixel is first assigned v1 = (x, y, z, 1), its NDC coordinate as rendered by the viewpoint camera. Multiplying this coordinate by the inverse of the viewpoint camera's PV matrix yields the world coordinate v2. Multiplying the world coordinate by the view-projection transformation matrix of the projection source camera (the matrix T obtained during preprocessing) yields the projection source camera's NDC coordinate v3. Testing whether each component of v3 lies within (-1, 1) clips away pixels outside the projection camera's view frustum. A depth comparison against the shadow map then determines whether the pixel is occluded; if it is not occluded, it receives the video texture. Finally, normalization yields the uv sampling coordinates, at which the color of the video texture is fetched and assigned to the pixel. The logic of this process is shown in Fig. 1 and the formulas in Fig. 2; a condensed sketch of the pass follows.
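A condensed sketch of this pass, continuing the snippets above and using Cesium's public PostProcessStage API. The depth unpacking and the occlusion test are simplified (a full implementation would also sample the shadow map's depth texture), the u_-prefixed uniform names are illustrative, and videoElement is an HTML video element such as the flv.js one sketched later.

// Post-processing sketch: per pixel, viewpoint NDC -> world -> projection-source
// NDC, then clip, (shadow-test,) and sample the video texture.
const fragmentShader = `
  uniform sampler2D colorTexture;   // provided by PostProcessStage
  uniform sampler2D depthTexture;   // provided by PostProcessStage
  uniform sampler2D u_videoTexture; // current video frame
  uniform mat4 u_invViewProj;       // T1^(-1): viewpoint NDC -> world
  uniform mat4 u_projViewProj;      // T2 = P * V of the projection source
  varying vec2 v_textureCoordinates;

  void main() {
      // Rebuild the viewpoint-camera NDC coordinate v1 = (x, y, z, 1).
      // (Depth unpacking simplified; real code may need czm_unpackDepth.)
      float z = texture2D(depthTexture, v_textureCoordinates).r * 2.0 - 1.0;
      vec2 xy = v_textureCoordinates * 2.0 - 1.0;
      vec4 v1 = vec4(xy, z, 1.0);

      vec4 world = u_invViewProj * v1;    // world coordinate v2
      world /= world.w;
      vec4 clip = u_projViewProj * world;
      vec3 v3 = clip.xyz / clip.w;        // projection-source NDC v3

      vec4 base = texture2D(colorTexture, v_textureCoordinates);
      if (any(greaterThan(abs(v3), vec3(1.0)))) { // outside the frustum: clip
          gl_FragColor = base;
          return;
      }
      // Occlusion test against the shadow map omitted in this sketch.
      vec2 uv = v3.xy * 0.5 + 0.5;                  // normalize to texture UV
      gl_FragColor = texture2D(u_videoTexture, uv); // assign the video color
  }
`;

viewer.scene.postProcessStages.add(new Cesium.PostProcessStage({
  fragmentShader: fragmentShader,
  uniforms: {
    // An HTML <video> element is assumed usable as a texture source here;
    // otherwise copy frames onto a canvas each tick.
    u_videoTexture: videoElement,
    u_invViewProj: function () {
      const vp = Cesium.Matrix4.multiply(
        viewer.camera.frustum.projectionMatrix,
        viewer.camera.viewMatrix,
        new Cesium.Matrix4());
      return Cesium.Matrix4.inverse(vp, vp);
    },
    u_projViewProj: T // cached during preprocessing
  }
}));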
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalents, and alternatives falling within the spirit and principles of the invention are intended to be included within its scope.

Claims (3)

1. A video projection mapping method based on CesiumJS in a three-dimensional scene is characterized by comprising the following steps:
step 1: preprocessing for a projection source camera;
step 2: preprocessing the shadow map by using an API of the CesiumJS;
step 3: post-processing rendering of the scene, i.e. writing the content of the fragment shader.
2. The CesiumJS-based video projection mapping method in a three-dimensional scene according to claim 1, characterized in that the preprocessing comprises the following steps:
S1, create the shadow map and pass in the pose of the projection source camera: the external parameters comprise the spatial coordinate position and rotation angles, and the internal parameters comprise the top, left, bottom, and right of the near clipping plane;
S2, obtain the attribute values of the shadow map and pass them as parameters to the fragment shader used in post-processing; the parameters include the video texture, the shadow map matrix, the light source position, the camera's clipping matrix, the camera's view matrix, and the observation distance;
S3, obtain the video texture through flv.js or similar tools (see the sketch after this list); the shadow map texture serves as the depth map and participates, together with the shadow map matrix, in the matrix conversion; the light source position is the position and pose of the projection camera, corresponding to the camera's external parameters; the camera's clipping matrix and view matrix are used for the matrix conversion in the shader; and the observation distance is the distance between the near and far clipping planes of the view frustum, representing the frustum's depth;
S4, set the additional parameters of the depth map as follows:
enabled: false (the shadow map itself is not displayed);
isPointLight: false (not a point light source);
isSpotLight: true (a spotlight source);
pointLightRadius: the radius of the point light source, set to the depth of the view frustum;
darkness: 0.3 (the shadow darkness);
cascadesEnabled: false (do not use multiple cascaded shadow maps to cover the view frustum; lower precision).
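As an illustration of S3 (not part of the claim), a minimal flv.js setup for producing the video element whose frames serve as the video texture might look like the sketch below; the stream URL is a placeholder and the flvjs global is assumed to be loaded on the page.

// Minimal flv.js sketch for obtaining the video texture mentioned in S3.
const video = document.createElement('video');
video.muted = true;     // allow autoplay without user interaction
video.autoplay = true;

if (flvjs.isSupported()) {
  const player = flvjs.createPlayer({
    type: 'flv',
    url: 'http://example.com/live/monitor.flv', // placeholder monitoring stream
    isLive: true
  });
  player.attachMediaElement(video);
  player.load();
  player.play();
}
// The video element then supplies frames for the video texture sampled in the shader.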
3. The CesiumJS-based video projection mapping method in a three-dimensional scene according to claim 1, characterized in that writing the content of the fragment shader comprises:
S1, obtain the depth z pixel by pixel and assign each pixel of the post-processing frame buffer the NDC coordinate vnode1 = (x, y, z, 1);
S2, coordinate conversion: convert the pixel from the NDC coordinate system of the viewpoint camera to the NDC coordinate system of the projection source camera; the conversion is as follows:
first compute the view-projection transformation matrix T1 of the viewpoint camera, whose projection matrix is P_Matrix1 and view matrix is V_Matrix1: T1 = P_Matrix1 × V_Matrix1;
obtain the world coordinate vnode2 = T1⁻¹ · vnode1;
compute the view-projection transformation matrix of the projection source camera: T2 = P_Matrix2 × V_Matrix2;
obtain the projection source camera's NDC coordinate vnode3 = T2 · vnode2;
S3, determine whether the pixel lies within the valid range, and clip the parts that do not;
S4, determine whether the pixel is occluded from the projection source camera; this step is completed mainly through Cesium's built-in shadow map API, and the penetration problem is solved by comparing depth information;
S5, normalize the NDC coordinate to obtain the uv coordinate of the video texture;
S6, assign each unoccluded pixel the texture color of the video via uv mapping.
CN202111530530.3A, filed 2021-12-14 (priority date 2021-12-14): CesiumJS-based video projection mapping method in three-dimensional scene. Status: Pending. Publication: CN116263980A (en).

Priority Applications (1)

Application Number: CN202111530530.3A; Priority Date: 2021-12-14; Filing Date: 2021-12-14; Title: CesiumJS-based video projection mapping method in three-dimensional scene


Publications (1)

Publication Number: CN116263980A (en); Publication Date: 2023-06-16

Family

ID=86722222

Family Applications (1)

Application Number: CN202111530530.3A; Status: Pending; Publication: CN116263980A (en); Title: CesiumJS-based video projection mapping method in three-dimensional scene

Country Status (1)

CN: CN116263980A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination