CN107835436A - A kind of real-time virtual reality fusion live broadcast system and method based on WebGL - Google Patents
A kind of real-time virtual reality fusion live broadcast system and method based on WebGL
- Publication number
- CN107835436A CN107835436A CN201710872854.2A CN201710872854A CN107835436A CN 107835436 A CN107835436 A CN 107835436A CN 201710872854 A CN201710872854 A CN 201710872854A CN 107835436 A CN107835436 A CN 107835436A
- Authority
- CN
- China
- Prior art keywords
- video
- virtual reality
- model
- scene
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/643—Communication protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8543—Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
Abstract
The present invention proposes a real-time virtual reality fusion live broadcast system and method based on WebGL, which realize an effective approach to real-time virtual reality fusion on the Web using the WebGL interface. The system consists of five modules: a video model generation module, a video live broadcast module, a GIS service module, a virtual reality fusion module, and a scene executor module. The system achieves real-time virtual reality fusion display of multiple video streams on the Web, improves the matching accuracy of the fused display, and has the advantages of low resource demand, good cross-platform compatibility, and strong scalability.
Description
Technical field
The invention belongs to the technical field of virtual reality in computer vision, and relates generally to a real-time virtual reality fusion live broadcast system and method based on WebGL.
Background technology
With the development of computer graphics and Internet technology, traditional two-dimensional images gradually fail to meet modern needs for displaying and understanding scenes, and the emergence of virtual reality technology compensates for this shortcoming. In recent years, virtual reality technology and its related fields have therefore developed rapidly and are widely applied in simulated training, video surveillance, city roaming, military demonstration, scenic-spot display, and other fields.
A virtual three-dimensional scene gives the user a more stereoscopic and realistic experience. The more similar the virtual scene's environmental information is to the real environment, the stronger the sense of reality the user obtains; the virtual scene lets the user understand the corresponding real scene better and more freely, and increases the immersion and sense of experience. However, because a model in a three-dimensional virtual scene can only represent a static state at some moment, the user cannot learn the dynamic changes of the scene by observing such a static three-dimensional scene model. Conversely, although a two-dimensional video image cannot convey the three-dimensional perception that a 3D model provides, it records the actual changes of a scene over a period of time more truthfully, so people can perceive the dynamic changes of the scene through video.
If a three-dimensional virtual environment is combined with two-dimensional video images to build an augmented virtual environment, the information content of the three-dimensional model can be enriched, making it reflect real-world change and appear more realistic, while at the same time expressing a wider spatial range than the video images alone. Users can exploit the model appearance, spatial distribution, and picture dynamics of the virtual scene to further understand the content of the video images and the spatio-temporal relationships among them, reducing the user's cognitive load. As early as 1996, Moezzi et al. (see Moezzi S, Katkere A, Kuramura D Y, et al. Reality modeling and visualization from multiple video sequences [J]. Computer Graphics and Applications, IEEE, 1996, 16(6): 58-63) proposed the concept of fusing video into a three-dimensional scene: they captured moving objects with cameras at multiple viewpoints, reconstructed them, and dynamically fused the reconstructed models into the virtual scene. In 2003, Neumann et al. of the University of Southern California (see Neumann U, et al. Augmented Virtual Environments (AVE): for Visualization of Dynamic Imagery [C]. IEEE Virtual Reality 2003. 2003: 61-67) first proposed the concept of the augmented virtual environment (AVE), further developing video-augmented virtual scene technology and obtaining a virtual reality fusion display effect. Neumann et al. realized augmented virtual environments of several campus scenes, projecting the collected video data onto the corresponding building models and terrain to form dynamic three-dimensional models that change with the imagery. In 2012, Chen et al. of Taiwan University (see Chen S C, Lee C Y, Lin C W, et al. 2D and 3D visualization with dual-resolution for surveillance. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, 2012: 23-30) established a GIS-assisted visualization framework that fuses the views of multiple cameras of different resolutions with a three-dimensional model, realizing a multi-resolution monitoring strategy.
At the same time, Web-based Internet technology plays an increasingly important role in all walks of life, and the fusion of Web technology with virtual reality technology is a current research trend: realizing virtual three-dimensional scenes through Internet technology allows users to break through the limits of geographic space and access virtual scenes through a browser, and can more conveniently and quickly bring users an immersive virtual scene experience without leaving home. In 2011, the multimedia standards organization Khronos formally released the WebGL specification, which enables any browser to render interactive three-dimensional Web scenes through HTML scripts alone, without plug-ins. Through a unified, standard, cross-platform OpenGL-style interface that uses the underlying graphics hardware to accelerate rendering, WebGL makes it possible to draw augmented virtual environments in desktop browsers and even on mobile phones; its simplicity and efficiency greatly facilitate the spread and practical use of augmented virtual environments on the Web, which is an important future development trend of virtual reality technology.
In general, however, the current mainstream multi-video-stream fusion methods suffer from picture distortion, poor fusion quality, excessive virtual-real alignment cost, and similar problems. Because the resources available on the Web are limited, achieving a good fusion effect there is even more difficult, so most existing virtual reality fusion techniques are implemented as desktop clients and fail to realize a lightweight Web-based fusion system. Current virtual reality fusion systems generally demand too much of the host system, and video rendering consumes excessive resources and is hard to maintain, which limits the wide application of virtual reality fusion systems in browsers, let alone on mobile devices.
Summary of the invention
The technical problem solved by the present invention: overcoming the deficiencies of the prior art by providing a real-time virtual reality fusion live broadcast system and method based on WebGL, which overcome the excessive cost of current multi-video-stream fusion and improve the cross-platform compatibility of virtual reality fusion methods.
The technical solution of the present invention: a real-time virtual reality fusion live broadcast system based on WebGL, composed of an offline end, a server end, and a client. The video model generation module is deployed at the offline end; the video live broadcast module and the GIS service module are deployed at the server end; the virtual reality fusion module and the scene executor module are deployed at the client:
Video model generation module: reads the real-time monitor video image collected by a monitoring camera, or a local video image, and uses single-image modeling technology to generate a description file in binary format; the binary file contains the video model's vertex coordinate data and camera parameter information. The camera parameters are used to compute the texture projection matrix of the video model and the optimal camera viewpoint pose for watching it; the video model information is then combined with the video model's WebGL rendering parameters and converted into a video model file in the JSON format recognizable by the client browser. The video model file consists of the video model vertex coordinate data, the video model camera pose information, the initial texture picture information, and WebGL prompt information; the WebGL prompt information includes the video model vertex data format, the projection matrix, the video dynamic texture information, and the shader programs. The video model generation module runs asynchronously with respect to the server end and generates video model files offline; the generated video models are finally supplied to the GIS service module for invocation;
Video live broadcast module: receives the real-time monitor video image collected by monitoring cameras, or local video images, and processes and stores them. When the client virtual reality fusion module draws a video model and requests the corresponding video image as its dynamic texture, the video live broadcast module forwards the corresponding video image to the client virtual reality fusion module for use. If the fusion module requests a local video, the module forwards the local video directly; if it requests a real-time monitoring video stream, the module receives the real-time push stream of the network monitoring camera over the RTMP protocol, transcodes and segments the live video stream, and finally generates an m3u8 video configuration file and ts video slice files, which are pushed to the client browser over HTTP for the virtual reality fusion module to use;
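As a rough illustration of the segmenting step described above — a sketch, not the patent's actual implementation — the following builds an m3u8 media playlist referencing ts slices; the segment names and durations are hypothetical:

```javascript
// Sketch: build an HLS (m3u8) media playlist for transcoded ts slices.
// Segment file names and durations are illustrative, not from the patent.
function buildM3u8(segments, targetDuration) {
  const lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    `#EXT-X-TARGETDURATION:${targetDuration}`,
    "#EXT-X-MEDIA-SEQUENCE:0",
  ];
  for (const s of segments) {
    lines.push(`#EXTINF:${s.duration.toFixed(1)},`); // per-slice duration
    lines.push(s.uri);                               // ts slice served over HTTP
  }
  lines.push("#EXT-X-ENDLIST"); // omitted for a live (sliding-window) playlist
  return lines.join("\n");
}

const playlist = buildM3u8(
  [
    { uri: "camera1_000.ts", duration: 4.0 },
    { uri: "camera1_001.ts", duration: 4.0 },
  ],
  4
);
console.log(playlist);
```

The client's HTML5 video tag fetches this configuration file and then each listed slice over plain HTTP, which is what allows the browser to play the stream without plug-ins.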
GIS service module: provides and manages the whole virtual reality fusion scene data, which include the video models generated by the video model generation module, the three-dimensional building models in the fusion scene (generated by modeling real buildings), and the environment of the whole scene. When the client browser sends an HTTP access request, the GIS service module is responsible for transmitting the video models and three-dimensional building models required by the fusion system. It also provides the client browser with a geographic information system (GIS) as the carrier and environment of the fusion scene; video models and building models are positioned on the digital earth by their real-world latitude and longitude coordinates, realizing accurate relative positions of the whole scene and of each model. The geographic information system is a three-dimensional digital earth with terrain information and a satellite base map, realizing a faithful reproduction of the whole scene environment;
Virtual reality fusion module: reads the video model file, calls the WebGL interface to render the video model, and requests the video stream data corresponding to the video model from the video live broadcast module through the HTML5 video tag; the data are transmitted over HTTP, finally yielding the segmented real-time video stream. Using the real-time video stream as a texture, the module renders and draws by texture projection, obtaining the virtual reality fusion effect of a video model with a dynamic video texture;
Scene executor module: provides the user with a series of interactive operations on the client Web interface so that the user can navigate freely in the three-dimensional augmented virtual environment, including four classes of functions — virtual scene roaming, scene information display, video texture control, and VR mode — giving the user better experience and immersion in the augmented virtual scene. The virtual scene roaming function lets the user visit preset important scene nodes or take an automatic scene tour along a planned route. The scene information display function lets the user click a building model or video model of interest to obtain an introduction to its details. The video texture control function lets the user operate and control the video models in the scene, so that the user can act on a video of interest; the operations include pause, play, playback, fast forward, and synchronization. The VR mode function gives the user a VR display effect when browsing the client Web with VR equipment.
The real-time virtual reality fusion live broadcast method based on WebGL of the present invention is realized in the following steps:
(1) Video model generation step: upon receiving a real-time monitor video image or local video image as input, generate the video model vertex data from the monitor picture using single-image modeling technology, record the camera parameter information, and store the above data together as a binary file. Use the camera parameters to compute the texture projection matrix of the video model and the best viewpoint pose for watching it; summarize the obtained data and convert them into a glTF file described in JSON format, including the original binary file content and the WebGL rendering parameters, and supply the result to the GIS service step for invocation;
(2) Video live broadcast step: receive the request issued when the client virtual reality fusion step draws a video model and applies for the corresponding video image as a dynamic texture. If the fusion step requests a local video, forward the local video directly; if it requests a real-time monitoring video stream, receive the real-time push stream of the network monitoring camera over RTMP, transcode and segment the live video stream, finally generate the m3u8 video configuration file and ts video slice files, and push them to the client browser over HTTP for the fusion step to use;
(3) GIS service step: receive the video model file provided by the video model generation step and forward it to the virtual reality fusion step; store the building models generated by modeling, and forward them for the client fusion step to call when a client request is received; receive the HTTP access request sent by the client browser and, according to the request, provide the browser with a geographic information system (GIS) as the carrier of the whole fusion scene. The GIS provided by the GIS server is a three-dimensional digital earth containing terrain information and a satellite base map; the GIS service step sends the three-dimensional digital earth together with the latitude and longitude coordinates of the video models and building models to the fusion step, to realize accurate relative positions of the whole scene and of each model;
(4) Virtual reality fusion step: after reading the video model file, call the WebGL interface to render the video model, and request the video stream data corresponding to the video model from the video live broadcast step through the HTML5 video tag; the data are transmitted over HTTP, finally yielding the segmented real-time video stream. Use the real-time video stream as a texture and render by texture projection, obtaining the fusion effect of a video model with a dynamic video texture. The fusion step also reads the three-dimensional building models and calls the WebGL interface to render them onto the three-dimensional digital earth provided by the GIS service step, realizing the complete rendering of the whole scene;
(5) Scene executor step: receive and parse the user's interactive operations on the client Web interface, and satisfy the user's need to navigate freely in the fusion scene by changing the camera pose, displaying corresponding information, and changing the rendering mode, including the four classes of functions of virtual scene roaming, scene information display, video texture control, and VR mode, thereby providing the user with an interactive experience of the virtual reality fusion scene.
Compared with the prior art, the advantages of the present invention are:
(1) The present invention reconstructs the video model directly from the original monitor picture using single-image modeling technology, and uses texture projection to realize the video stream as the dynamic texture of the video model, solving the problems of prohibitive virtual-real alignment cost and low accuracy in most current fusion methods and achieving a good virtual reality fusion effect.
(2) By calling the WebGL and HTML5 interfaces to realize virtual reality fusion, the present invention improves the efficiency of the fusion method, reduces its demand on system resources, and finally realizes the fusion system on the Web, improving the cross-platform compatibility of the fusion system and helping the method spread more widely.
(3) The present invention realizes the video live broadcast module by building a streaming media server, so that the system supports and realizes the storage, forwarding, and projective texture mapping of multiple live video streams, guaranteeing the real-time performance of the whole fusion system, improving compatibility with monitoring cameras of different models, and ensuring the scalability of the whole system.
Brief description of the drawings
Fig. 1 is the system structure diagram of the present invention;
Fig. 2 is the glTF file structure diagram of the present invention;
Fig. 3 shows video model rendering effects, where (a) is the untextured video model, (b) is the textured video model, and (c) is the video model at the best viewpoint;
Fig. 4 is the live broadcast diagram of the live video stream on the streaming media server of the present invention;
Fig. 5 is the m3u8 file playing principle diagram of the present invention;
Fig. 6 is the texture projection process diagram of the present invention.
Specific embodiments
To better understand the technical scheme of the present invention, it is described in further detail below in combination with the accompanying drawings and an implementation example.
The present invention proposes a real-time virtual reality fusion live broadcast system and method based on WebGL. As shown in Fig. 1, the virtual reality fusion live broadcast system of the present invention is composed of an offline end, a server end, and a client; the video model generation module is deployed at the offline end, the video live broadcast module and the GIS service module at the server end, and the virtual reality fusion module and the scene executor module at the client.
The whole implementation process is as follows:
(1) When the video model generation module obtains a real-time monitor video image or local video image as input, it generates the video model vertex data from the monitor picture using single-image modeling technology, records the camera parameter information, and stores the above data together as a binary file. The camera parameters are used to compute the texture projection matrix of the video model and the best viewpoint pose for watching it. The obtained data are summarized and converted into a glTF file described in JSON format, including the original binary file content and the WebGL rendering parameters, and the result is supplied to the GIS service module for invocation.
(2) The video live broadcast module receives the request issued when the client virtual reality fusion module draws a video model and applies for the corresponding video image as a dynamic texture. If the fusion module requests a local video, the module forwards the local video directly; if it requests a real-time monitoring video stream, the module receives the real-time push stream of the network monitoring camera over RTMP, transcodes and segments the live video stream, finally generates the m3u8 video configuration file and ts video slice files, and pushes them to the client browser over HTTP for the fusion module to use.
(3) The GIS service module receives the video model files provided by the video model generation module and forwards them to the virtual reality fusion module; it stores the building models generated by modeling and forwards them for the client fusion module to call when a client request is received; it receives the HTTP access request from the client browser and, according to the request, provides the browser with a geographic information system (GIS) as the carrier of the whole fusion scene. The GIS provided by the GIS server is mainly a three-dimensional digital earth containing terrain information and a satellite base map. The GIS service module sends the three-dimensional digital earth together with the latitude and longitude coordinates of the video models and building models to the fusion module, to realize accurate relative positions of the whole scene and of each model.
(4) After reading the video model file, the virtual reality fusion module calls the WebGL interface to render the video model, and requests the video stream data corresponding to the video model from the video live broadcast module through the HTML5 video tag; the data are transmitted over HTTP, finally yielding the segmented real-time video stream. Using the real-time video stream as a texture, the module renders by texture projection and obtains the fusion effect of a video model with a dynamic video texture. The fusion module also reads the three-dimensional building models and calls the WebGL interface to render them onto the three-dimensional digital earth provided by the GIS service module, realizing the complete rendering of the whole scene.
(5) The scene executor module receives and parses the user's interactive operations on the client Web interface, and satisfies the user's need to navigate freely in the fusion scene by such methods as changing the camera pose, displaying corresponding information, and changing the rendering mode, including the four major classes of functions of virtual scene roaming, scene information display, video texture control, and VR mode, thereby providing the user with an interactive experience of the system's virtual reality fusion scene.
The specific principles and methods of the above implementation process are as follows:
1. The principle and method of the offline-end video model generation module are as follows:
The initial three-dimensional video model file format used by the present invention is SIBM, a three-dimensional model file format designed and defined by the State Key Laboratory of Virtual Reality Technology and Systems of Beihang University (Beijing University of Aeronautics and Astronautics); files of this type are generated by single-image modeling technology. SIBM is a binary format storing the binary data of the model, which the user can read and write quickly. The information contained in a SIBM file is, first, the version of the SIBM file — different SIBM versions correspond to different model vertex coordinate systems, suiting the coordinate-system requirements of different rendering engines; then the vertex count as a 4-byte integer; then the three-dimensional coordinates of these vertices, each coordinate being a three-dimensional floating-point vector; then the camera parameter information of the three-dimensional model, including parameters such as camera position, view angle, and focal length; and finally the initial texture picture information of the three-dimensional model, i.e. the binary stream data of the picture.
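A reader for the layout described above can be sketched as follows. The little-endian byte order, the 4-byte version field, and the treatment of the trailing camera/texture bytes are assumptions for illustration — only the 4-byte vertex count and the overall field order come from the description:

```javascript
// Sketch of a reader for the SIBM layout described above.
// Assumed layout (illustrative): int32 version, int32 vertexCount,
// vertexCount * 3 float32 coordinates, then camera parameters and texture bytes.
function parseSibm(buffer) {
  const view = new DataView(buffer);
  let off = 0;
  const version = view.getInt32(off, true); off += 4;     // file version
  const vertexCount = view.getInt32(off, true); off += 4; // 4-byte vertex count
  const vertices = new Float32Array(buffer, off, vertexCount * 3);
  off += vertexCount * 3 * 4;
  // Remaining bytes: camera parameters (position, view angle, focal length)
  // and the initial texture picture stream; their widths depend on the version.
  return { version, vertexCount, vertices, rest: buffer.byteLength - off };
}

// Build a tiny two-vertex file in memory and parse it back.
const buf = new ArrayBuffer(8 + 2 * 3 * 4);
const w = new DataView(buf);
w.setInt32(0, 1, true); // version 1
w.setInt32(4, 2, true); // two vertices
new Float32Array(buf, 8).set([0, 0, 0, 1, 2, 3]);
const model = parseSibm(buf);
```

The parsed vertex array and camera block are exactly the inputs the module needs for the glTF conversion described next.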
After reading in the original SIBM file data, the video model generation module outputs a glTF file that the WebGL engine can recognize and use. glTF (GL Transmission Format) is a three-dimensional model file format jointly released by Microsoft and the 3D standards body Khronos. It describes the model's data with a JSON-based framework; the data structure is efficient, offering prompt transmission and efficient parsing, and thus well suits the Web's requirements for speed and conciseness.
As shown in Fig. 2 glTF basic framework can substantially be divided into four parts, the glTF modules of the wherein the superiors are one
Individual JSON frameworks, describe the interrelated logic structures such as the node level of model, material, camera, animation;Bin modules describe
The specific vertex data information of object described by glTF modules;Glsl modules describe the tinter of rending model;Png, jpg mould
Block describes the texture maps of model.
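A minimal skeleton of the top-level JSON part of such an asset might look as follows; the node and file names are invented for illustration, and the field set is a small subset of the full glTF schema:

```javascript
// Illustrative skeleton of the top-level JSON structure of a glTF asset:
// scene/node hierarchy, a mesh pointing into a .bin buffer, and texture refs.
const gltf = {
  asset: { version: "2.0" },
  scenes: [{ nodes: [0] }],
  nodes: [{ mesh: 0, name: "videoModel" }], // node hierarchy (glTF module)
  meshes: [{ primitives: [{ attributes: { POSITION: 0 } }] }],
  buffers: [{ uri: "videoModel.bin", byteLength: 1024 }], // bin module
  images: [{ uri: "initialTexture.png" }],                // png/jpg module
  // shader programs (glsl module) and cameras would be referenced similarly
};

const json = JSON.stringify(gltf);
```

Serializing the whole description as one JSON document is what gives glTF its prompt transmission and parsing on the Web.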
The camera video stream image data are fused with the three-dimensional video model by drawing with projective texture mapping, whose purpose is to map a texture onto an object in the manner of a projector. This method does not require specifying vertex texture coordinates in the application; instead, the texture coordinates are computed in the vertex shader from the view matrix and the projection matrix. Therefore, the vertex coordinates of the three-dimensional model read from the SIBM file must undergo a matrix transformation before they can be displayed correctly on a two-dimensional display.
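The per-vertex computation the shader performs can be mimicked on the CPU: multiply the vertex by the combined projection-view matrix, divide by w, and bias the result from NDC [-1, 1] into [0, 1] texture space. The sketch below uses an identity matrix purely to show the arithmetic, not a real camera:

```javascript
// Sketch: compute a projective texture coordinate for one vertex,
// as a vertex shader would: uv = bias(project(P * V * position)).
function mulMat4Vec4(m, v) { // m is a column-major 4x4, like WebGL uniforms
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

function projectiveTexCoord(projView, position) {
  const clip = mulMat4Vec4(projView, position);
  // Perspective divide, then bias NDC [-1, 1] into texture space [0, 1].
  return [0.5 * (clip[0] / clip[3]) + 0.5, 0.5 * (clip[1] / clip[3]) + 0.5];
}

// Identity matrix: a vertex at NDC (0, 0) maps to the texture centre.
const I = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];
const uv = projectiveTexCoord(I, [0, 0, 0, 1]);
```

In the real shader, projView would be the product of the projection matrix and the view transform matrix computed from the SIBM camera parameters below.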
In the glTF file, projective texturing is realized by defining the glsl shader programs and defining the view transform matrix and the projection matrix, where both matrices are computed from the camera parameters in the SIBM file. The calculation steps are as follows:
[Input] location, forward, up, focus, width, height, near clipping plane (near), far clipping plane (far)
[Output] the 4x4 view transform matrix and projection matrix
Step 1: obtain the forward, side, and up basis vectors and the eye of the viewing coordinate system. location is the eye of the viewing coordinate system and needs no extra computation. The side and up basis vectors are computed as follows:
side = cross(forward, up)
up = cross(side, forward)
Step 2:Three obtained base vectors are standardized.
Step 3:Matrix R is formed using side, up, forward base vector, it is as follows using eye composition matrixes T:
Step 4:View transform matrixes are obtained using matrix R and matrix T, operation method is as follows:
Step 5:Obtain left, right, bottom, top parameter.
Left (l)=(- ratio) * near/Focus
Right (r)=- left
Bottom (b)=(- near)/Focus
Top (t)=bottom
step 6:Obtain projection matrix.
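Steps 1 through 6 can be sketched as follows. This is a minimal illustration, not the patent's code: the matrices R and T appear only in their combined form, and the frustum extents assume that focus and the half-image sizes are in the same (pixel) units with the principal point at the image centre, which the patent does not state explicitly.

```javascript
// Vector helpers.
function cross(a, b) {
  return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
}
function normalize(v) { const l = Math.hypot(...v); return v.map(x => x / l); }

// Steps 1-4: view transform matrix from camera pose (column-major 4x4).
function makeViewMatrix(eye, forward, up) {
  const f = normalize(forward);
  const s = normalize(cross(f, up));   // side = cross(forward, up)
  const u = normalize(cross(s, f));    // recomputed up = cross(side, forward)
  // Combined R * T: rotate world axes onto camera axes, translate by -eye.
  return [
    s[0], u[0], -f[0], 0,
    s[1], u[1], -f[1], 0,
    s[2], u[2], -f[2], 0,
    -(s[0]*eye[0] + s[1]*eye[1] + s[2]*eye[2]),
    -(u[0]*eye[0] + u[1]*eye[1] + u[2]*eye[2]),
     (f[0]*eye[0] + f[1]*eye[1] + f[2]*eye[2]), 1,
  ];
}

// Steps 5-6: symmetric frustum from focal length and image size (assumed units).
function makeProjectionMatrix(focus, width, height, near, far) {
  const r = (width / 2) * near / focus;
  const t = (height / 2) * near / focus;
  const l = -r, b = -t;
  return [
    2*near/(r - l), 0, 0, 0,
    0, 2*near/(t - b), 0, 0,
    (r + l)/(r - l), (t + b)/(t - b), -(far + near)/(far - near), -1,
    0, 0, -2*far*near/(far - near), 0,
  ];
}
```

With the camera at the origin looking down -z, the view matrix reduces to the identity, which is a convenient sanity check.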
This yields a video model file that WebGL can recognize. Its rendering effect is shown in Fig. 3: (a) is the untextured video model; (b) is the textured video model, where the video dynamic texture exhibits some distortion because the observation viewpoint does not coincide with the original surveillance camera position; (c) is the video model at the optimal viewpoint, which gives the best visual effect. As can be seen, the video model lets the video serve as a dynamic texture of the scene, and a rather good virtual-real fusion effect is obtained at the optimal viewpoint.
The video model generation module runs asynchronously with the server side: the video model files are generated offline, and the generated video models are then supplied to the live video module. The video model file consists of the video model vertex coordinate data, the video model camera pose information, the initial texture image and the WebGL rendering information; the WebGL rendering information comprises the video model vertex data format, the projection matrix, the video dynamic texture information and the shader programs.
2. The principle and method of the server-side live video module are as follows:
Beyond rendering local video as a three-dimensional video model, the present invention also uses the live video streams captured by network surveillance cameras as textures, thereby realizing real-time monitoring and live broadcast on the three-dimensional video model of the scene.
The live broadcast method is illustrated in Fig. 4. The streaming media server maintains a message queue to receive data requests from browsers and sends the requested video stream data back. If a user's browser requests a live video stream, the streaming media server pulls the stream from the network camera through a dedicated module, applies a unified encoding, and sends it to the requesting browser. In this way the streaming media server lets the virtual-real fusion system support cameras of different types and encodings without concern for specific camera models and drivers, which makes adding new cameras and data to the system more convenient and ensures its scalability. If the user requests a locally cached video, the streaming media server simply locates the corresponding cache file and sends it to the user's browser.
The present invention uses an Nginx server as the streaming media server to receive client requests. According to the user's request, the Nginx streaming media server calls FFmpeg to obtain the RTMP-encoded real-time video stream from the network surveillance camera. RTMP (Real-Time Messaging Protocol) was developed by Adobe Systems for audio/video transmission between the Flash player and streaming media servers; it is a network protocol for real-time data communication. Its compatibility is limited but its real-time performance is good, so uploading the video stream over this protocol, i.e. pushing the stream to the Nginx server, minimizes the delay in the video capture and transmission pipeline.
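A server configuration of this kind might be sketched as follows. This is an assumed fragment, not the patent's configuration: it presumes the nginx-rtmp-module, and the camera URL, application name and paths are all illustrative.

```nginx
rtmp {
    server {
        listen 1935;                 # RTMP ingest port
        application live {
            live on;
            hls on;                  # repackage incoming pushes as HLS
            hls_path /tmp/hls;       # where m3u8/ts files are written
            hls_fragment 3s;
        }
    }
}
# An FFmpeg push such as the following (camera URL illustrative) feeds it:
#   ffmpeg -i rtsp://camera.example/stream -c:v libx264 -f flv \
#          rtmp://localhost/live/cam1
```

The HLS directives connect this ingest step to the segment-based delivery described next.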
When transferring from the Nginx streaming media server to the real-time virtual-real fusion client, the present invention uses the HLS protocol to transmit the video stream data, in order to remain compatible with HTML5 real-time streaming on mobile terminals.
In brief, the HLS protocol splits the whole real-time stream into small, HTTP-based files that are downloaded piece by piece. The m3u8 file defined by HLS stores the video stream metadata: each m3u8 file references several ts files, and it is these ts files that hold the actual video data, while the m3u8 file itself only carries the configuration information and paths of those ts segments. As shown in Fig. 5, the .m3u8 file changes dynamically during playback; the video tag parses this file and fetches the corresponding ts files to play. This gives the video tag support for real-time streams; the model rendering module is then invoked to render the stream, so that on both desktop and mobile terminals the real-time stream serves as the dynamic texture of the video model, realizing live broadcast of the live video stream.
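A live m3u8 playlist of the kind described might look as follows. The segment names and durations are illustrative; in a live stream the media sequence number advances and old segments roll off, which is why the file "changes dynamically" and carries no end marker.

```text
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:27
#EXTINF:3.000,
cam1-27.ts
#EXTINF:3.000,
cam1-28.ts
#EXTINF:3.000,
cam1-29.ts
```

Each `#EXTINF` line gives a segment duration, and the following line is the path of the ts file holding the actual video data.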
3. The principle and method of the server-side GIS service module are as follows:
In the present invention, the exact positions of the cameras and models are described in a geographic coordinate system, i.e. longitude/latitude coordinates, so the GIS service module helps manage this large scene more effectively. A geographic system grounded in real-world coordinates allows the scene to be mapped well onto the real world.
The GIS service module maintains a message queue that receives the HTTP access requests sent by client browsers, parses each request, and provides the browser with a Geographic Information System (GIS) as the carrier of the whole virtual-real fusion scene. The GIS chiefly maintains a three-dimensional digital earth carrying terrain information and a satellite base map. The satellite map offers 13 precision levels in total, the highest of which can display a street-level base map. The video models and building models in the virtual-real fusion scene are positioned on the digital earth through the geographic coordinate system, achieving accurate relative positions, so that a user touring the virtual-real fusion scene feels immersed in a real earth environment, improving the user's sense of immersion.
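Positioning a model on such a digital earth ultimately requires converting its geodetic (longitude, latitude, height) coordinates into Earth-centred Cartesian coordinates. The patent does not give this conversion; the following is a standard WGS84 geodetic-to-ECEF sketch, with function and constant names of my own choosing.

```javascript
const WGS84_A = 6378137.0;           // semi-major axis (metres)
const WGS84_E2 = 6.69437999014e-3;   // first eccentricity squared

// Convert geodetic coordinates (degrees, metres) to ECEF metres.
function geodeticToEcef(lonDeg, latDeg, height) {
  const lon = lonDeg * Math.PI / 180;
  const lat = latDeg * Math.PI / 180;
  // Prime-vertical radius of curvature at this latitude.
  const N = WGS84_A / Math.sqrt(1 - WGS84_E2 * Math.sin(lat) ** 2);
  return [
    (N + height) * Math.cos(lat) * Math.cos(lon),
    (N + height) * Math.cos(lat) * Math.sin(lon),
    (N * (1 - WGS84_E2) + height) * Math.sin(lat),
  ];
}
```

A globe engine performs this conversion for every model anchored by longitude/latitude, which is what gives the scene its accurate relative positions.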
4. The principle and method of the client virtual-real fusion module are as follows:
The rendering of the models, in particular the three-dimensional video models, uses the new HTML5 multimedia features together with the corresponding WebGL engine interfaces.
HTML5 is the fifth major revision of HTML (HyperText Markup Language), the core language of the World Wide Web and an application of the Standard Generalized Markup Language; the specification was finalized in October 2014, and one of its original design goals was precisely to support multimedia on mobile devices.
In the virtual-real fusion module of the present invention, the HTML5 <video> tag reads the local or real-time video stream data, which the user's browser decodes and stores, resolving the video stream frame by frame into images held in an array. The WebGL engine then processes the resulting data and generates the corresponding texture according to the projective-texture mode defined in the glsl file; finally, a continuously updating rendering engine re-renders the video model's texture, yielding a continually changing projective texture on the three-dimensional video model. The three-dimensional video model is thus fused with the video stream texture and drawn and displayed within the three-dimensional scene.
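The frame-by-frame texture update can be sketched as follows. This is a minimal illustration with assumed names, not the patent's code; it relies on the fact that WebGL accepts a `<video>` element directly as a `texImage2D` source, so each decoded frame can be re-uploaded as the texture the projective shader samples.

```javascript
// Upload the video element's current frame into a WebGL texture.
// Returns true when a frame was uploaded, false when none is ready yet.
function updateVideoTexture(gl, texture, video) {
  if (video.readyState < video.HAVE_CURRENT_DATA) return false;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // The <video> element itself is a valid pixel source in WebGL.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return true;
}

// In the browser, a render loop would drive the update (sketch):
// function render() {
//   updateVideoTexture(gl, videoTexture, videoElement);
//   drawVideoModel();               // samples the texture projectively
//   requestAnimationFrame(render);
// }
```

Because the upload happens every animation frame, the texture on the three-dimensional video model tracks the live stream, which is the "continually changing projective texture" described above.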
As shown in Fig. 6, after obtaining the corresponding data, the virtual-real fusion module renders and draws the fusion in four main stages:
(1) first read the camera's pose in the real environment and convert it into a pose in three-dimensional space;
(2) then use the obtained camera pose to compute the camera's model-view matrix Mmv and projection matrix Mp;
(3) perform the necessary processing on the models, such as culling models invisible to the camera, to reduce the amount of computation and accelerate the fusion process;
(4) finally, render the video model through the WebGL interfaces with the relevant rendering parameters; fragment texturing and shading operations are performed on the graphics card, and after rasterization the fragments are ultimately converted into the pixels seen on screen.
Because HTML5 serves as the basis of the implementation, the present invention not only supports access and rendering in all kinds of desktop browsers, but also achieves efficient transmission and display on mobile terminals.
5. The principle and method of the scene manager module are as follows:
After the three-dimensional scene models are fused and drawn with the live video stream data, the present system provides, through the scene manager module, a series of interactive operations on the web interface so that users can navigate freely in the three-dimensional augmented virtual environment. These cover four main classes of functions: virtual scene roaming, scene information display, video texture control and VR mode, giving users a better sense of experience and immersion in the system's augmented virtual scene.
(a) Virtual scene roaming: this function lets the user experience the three-dimensional augmented virtual environment more freely. First, the user can visit several preset important scene nodes. In the present system, model positions are expressed in the geographic coordinate system, i.e. by longitude and latitude. The user therefore selects a preset important scene node from a drop-down menu and obtains the corresponding node camera's longitude, latitude and elevation, with the camera attitude expressed in Euler angles. The camera control module then flies the camera according to the obtained pose, finally bringing the user to that scene node. Automatic scene roaming works on a similar principle: the scene controller stores a series of camera node pose parameters in advance, and when the user clicks the corresponding button the camera roams automatically along the planned route.
(b) Scene information display: while touring the virtual scene, the user can also click a three-dimensional building model or three-dimensional video model to obtain an introduction to its details, gaining a deeper understanding of the scene and the models. The system stores the model information in a MySQL database; PHP script pages deployed on the web server access the database, query the corresponding form data, and return it to the user interface, realizing queries of model information within the three-dimensional scene. The three-dimensional model information includes the model number, model name, construction time, a brief description of the building's function, and the geographic position; the three-dimensional video model information includes the model number, model name, video stream source, camera parameters, and the geographic position.
Besides querying model information, the system also reports the actual geographic information of the current mouse position, including longitude, latitude and elevation, giving the user a more perceptible sense of where the current scene lies on the three-dimensional earth.
(c) Video texture control: this function lets the user operate and control the video models in the scene, applying operations such as pause, play, playback and fast-forward to any video of interest. The user can thus replay past video, or synchronize with the actual clock at any time to obtain the current live video stream. Combined with the real-time live module, this function helps the user monitor the scene better.
(d) VR mode: clicking the VR mode button enters VR mode, in which the system renders the scene into two pictures, one for each eye; the two pictures are essentially similar but differ slightly in angle, artificially simulating the parallax between the two eyes to achieve a VR display. This function gives the system a good experience on VR devices, letting users better enjoy the augmented virtual scene with a more lifelike effect.
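The two-picture rendering can be sketched as follows. This is a minimal illustration with assumed names and a default inter-pupillary distance, not the patent's code: the camera is shifted by half the IPD along its side vector for each eye before the scene is rendered twice.

```javascript
// Compute left/right eye positions for stereo rendering.
// eye and side are 3-vectors; ipd is the inter-pupillary distance in metres.
function stereoEyes(eye, side, ipd = 0.064) {
  const half = ipd / 2;
  return {
    left:  eye.map((c, i) => c - side[i] * half),
    right: eye.map((c, i) => c + side[i] * half),
  };
}

// Sketch of use: render the scene once per eye into half of the screen.
// const { left, right } = stereoEyes(cameraEye, cameraSide);
// renderScene(left,  { viewport: 'left half'  });
// renderScene(right, { viewport: 'right half' });
```

The slight angular difference between the two renders is what simulates the parallax of the two eyes described above.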
Parts of the present invention not elaborated here belong to techniques well known to those skilled in the art.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may also make improvements and modifications without departing from the principles of the invention, and such improvements and modifications shall likewise be regarded as falling within the protection scope of the present invention.
Claims (2)
- 1. A real-time virtual-real fusion live broadcast system based on WebGL, characterized in that the virtual-real fusion live broadcast system consists of an offline end, a server end and a client; a video model generation module is deployed at the offline end; a live video module and a GIS service module are deployed at the server end; a virtual-real fusion module and a scene manager module are deployed at the client:
Video model generation module: reads the real-time surveillance video images captured by a surveillance camera, or local video images, and uses single-image modeling technology to generate a file described in a binary format, the binary-format file containing the video model vertex coordinate data and the camera parameter information; uses the camera parameter information to compute the texture projection transformation matrix of the video model and the optimal camera viewpoint pose for viewing the video model, then converts the obtained video model information together with the video model's WebGL rendering parameters, storing them as a video model file in a JSON file format recognizable by client browsers; the video model file consists of the video model vertex coordinate data, the video model camera pose information, the initial texture image and the WebGL rendering information; the WebGL rendering information comprises the video model vertex data format, the projection matrix, the video dynamic texture information and the shader programs; the video model generation module is asynchronous with the server end, generating the video model files offline and finally supplying the generated video models to the GIS service module;
Live video module: receives the real-time surveillance video images captured by the surveillance camera, or local video images, and processes and stores them; when the client virtual-real fusion module draws a video model and requests the corresponding video images from the live video module as a dynamic texture, the live video module forwards the corresponding video images to the client virtual-real fusion module for its use; if the virtual-real fusion module requests local video, the live video module forwards the local video directly; if the virtual-real fusion module requests a real-time surveillance video stream, the live video module receives the network surveillance camera's real-time push stream over the RTMP protocol, transcodes and slices the live video stream, ultimately generating an m3u8-format video configuration file and ts-format video slice files, which are finally pushed over the HTTP protocol to the client browser for the virtual-real fusion module;
GIS service module: provides and manages the whole virtual-real fusion scene data, which includes the video models generated by the video model generation module, the three-dimensional building models in the virtual-real fusion scene, and the environment of the whole scene, the three-dimensional building models being generated by modeling real buildings; when a client browser sends an access HTTP request, the GIS service module is responsible for transmitting the required video models and three-dimensional building models to the virtual-real fusion system; it also provides the client browser with a Geographic Information System (GIS) as the carrier and environment of the virtual-real fusion scene; the video models and building models are positioned on the digital earth through real-world geographic coordinates, i.e. longitude/latitude coordinates, realizing accurate relative positions of the whole scene and each model; the GIS is a three-dimensional digital earth with terrain information and a satellite base map, realizing a true reproduction of the whole scene environment;
Virtual-real fusion module: reads the video model file, calls the WebGL interfaces to render the video model, and uses HTML5 tags to request from the live video module the video stream data corresponding to the video model; the data are transmitted over the HTTP protocol, finally yielding the sliced real-time video stream data; taking the real-time video stream data as a texture, it renders and draws by texture projection, obtaining the virtual-real fusion effect of a video model with a video dynamic texture;
Scene manager module: provides the user with a series of interactive operations on the client web interface so that the user can navigate freely in the three-dimensional augmented virtual environment, covering four classes of functions: virtual scene roaming, scene information display, video texture control and VR mode, giving the user a better sense of experience and immersion in the augmented virtual scene; the virtual scene roaming function lets the user choose to visit preset important scene nodes or roam automatically along a planned route; the scene information display function lets the user click a chosen building model or video model to obtain an introduction to its details; the video texture control function lets the user operate and control the video models in the scene, the operations including pause, play, playback, fast-forward and synchronization; the VR mode function gives the user a VR display effect when browsing the client web with a VR device.
- 2. A real-time virtual-real fusion live broadcast method based on WebGL, characterized in that it is realized by the following steps:
(1) Video model generation step: on receiving a real-time surveillance video image or local video image input, generate the video model vertex data from the surveillance picture using single-image modeling technology while recording the camera parameter information, then collect the above data into a binary-format file for storage; use the camera parameter information to compute the texture projection transformation matrix of the video model and the optimal viewpoint pose for viewing the video model, summarize the obtained data and convert it into a glTF file described in JSON format, including the original binary file content and the WebGL rendering parameters, and supply the obtained result to the GIS service step;
(2) Live video step: receive the request sent when the client virtual-real fusion step draws a video model and applies for the corresponding video images as a dynamic texture; if the virtual-real fusion step requests local video, forward the local video directly; if the virtual-real fusion step requests a real-time surveillance video stream, receive the network surveillance camera's real-time push stream over the RTMP protocol, transcode and slice the live video stream, ultimately generating an m3u8-format video configuration file and ts-format video slice files, and finally push them over the HTTP protocol to the client browser for the virtual-real fusion step;
(3) GIS service step: receive the video model files provided by the video model generation step and forward them to the virtual-real fusion step; store the building models generated by modeling, and forward the building models for the client virtual-real fusion step to call upon receiving a client request; receive the access HTTP requests sent by client browsers and, according to the request, provide the client browser with a Geographic Information System (GIS) as the carrier of the whole virtual-real fusion scene; the GIS provided by the GIS server is a three-dimensional digital earth comprising terrain information and a satellite base map; the GIS service step sends the three-dimensional digital earth and the longitude/latitude coordinates of the video models and building models to the virtual-real fusion step, realizing accurate relative positions of the whole scene and each model;
(4) Virtual-real fusion step: after reading the video model file, call the WebGL interfaces to render the video model, and use HTML5 tags to request from the live video step the video stream data corresponding to the video model; the data are transmitted over the HTTP protocol, finally yielding the sliced real-time video stream data; taking the real-time video stream data as a texture, render and draw by texture projection to obtain the virtual-real fusion effect of a video model with a video dynamic texture; meanwhile, the virtual-real fusion step reads the three-dimensional building models and calls the WebGL interfaces to render the building models together in the three-dimensional digital earth provided by the GIS service step, realizing the complete rendering of the whole scene;
(5) Scene manager step: receive and parse the user's interactive operations on the client web interface, and satisfy the user's need to navigate freely in the virtual-real fusion scene by changing the camera pose, displaying the corresponding information and changing the rendering mode, covering four classes of functions: virtual scene roaming, scene information display, video texture control and VR mode, thereby providing the user with an interactive experience of the virtual-real fusion scene.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710872854.2A CN107835436B (en) | 2017-09-25 | 2017-09-25 | A kind of real-time virtual reality fusion live broadcast system and method based on WebGL |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107835436A true CN107835436A (en) | 2018-03-23 |
CN107835436B CN107835436B (en) | 2019-07-26 |
Family
ID=61644048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710872854.2A Active CN107835436B (en) | 2017-09-25 | 2017-09-25 | A kind of real-time virtual reality fusion live broadcast system and method based on WebGL |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107835436B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109045694A (en) * | 2018-08-17 | 2018-12-21 | 腾讯科技(深圳)有限公司 | Virtual scene display method, apparatus, terminal and storage medium |
CN109165270A (en) * | 2018-07-02 | 2019-01-08 | 武汉珞珈德毅科技股份有限公司 | A kind of three-dimensional GIS platform architecture system |
CN109842811A (en) * | 2019-04-03 | 2019-06-04 | 腾讯科技(深圳)有限公司 | A kind of method, apparatus and electronic equipment being implanted into pushed information in video |
CN110349254A (en) * | 2019-07-11 | 2019-10-18 | 东北大学 | A kind of adaptive medical image three-dimensional rebuilding method towards C/S framework |
CN110418127A (en) * | 2019-07-29 | 2019-11-05 | 南京师范大学 | Virtual reality fusion device and method based on template pixel under a kind of Web environment |
CN110738721A (en) * | 2019-10-12 | 2020-01-31 | 四川航天神坤科技有限公司 | Three-dimensional scene rendering acceleration method and system based on video geometric analysis |
CN111225191A (en) * | 2020-01-17 | 2020-06-02 | 华雁智能科技(集团)股份有限公司 | Three-dimensional video fusion method and device and electronic equipment |
CN111464818A (en) * | 2020-03-20 | 2020-07-28 | 新之航传媒集团有限公司 | Online live broadcast exhibition hall system |
CN112437276A (en) * | 2020-11-20 | 2021-03-02 | 埃洛克航空科技(北京)有限公司 | WebGL-based three-dimensional video fusion method and system |
CN112584254A (en) * | 2020-11-30 | 2021-03-30 | 北京邮电大学 | RTSP video stream loading method and device based on Cesium |
CN112584120A (en) * | 2020-12-15 | 2021-03-30 | 北京京航计算通讯研究所 | Video fusion method |
CN112584060A (en) * | 2020-12-15 | 2021-03-30 | 北京京航计算通讯研究所 | Video fusion system |
CN112687012A (en) * | 2021-01-08 | 2021-04-20 | 中国南方电网有限责任公司超高压输电公司南宁监控中心 | Island information fusion method based on three-dimensional visual management and control platform |
CN113099204A (en) * | 2021-04-13 | 2021-07-09 | 北京航空航天大学青岛研究院 | Remote live-action augmented reality method based on VR head-mounted display equipment |
CN114047821A (en) * | 2021-11-18 | 2022-02-15 | 中国人民解放军陆军装甲兵学院士官学校 | Virtual teaching method |
CN114494563A (en) * | 2022-02-14 | 2022-05-13 | 北京清晨动力科技有限公司 | Method and device for fusion display of aerial video on digital earth |
CN114885147A (en) * | 2022-07-12 | 2022-08-09 | 中央广播电视总台 | Fusion production and broadcast system and method |
CN115686182A (en) * | 2021-07-22 | 2023-02-03 | 荣耀终端有限公司 | Processing method of augmented reality video and electronic equipment |
CN115695841A (en) * | 2023-01-05 | 2023-02-03 | 威图瑞(北京)科技有限公司 | Method and device for embedding online live broadcast in external virtual scene |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104376596A (en) * | 2014-11-28 | 2015-02-25 | 北京航空航天大学 | Method for modeling and registering three-dimensional scene structures on basis of single image |
CN105872496A (en) * | 2016-07-01 | 2016-08-17 | 黄岩 | Ultrahigh-definition video fusion method |
CN106373148A (en) * | 2016-08-31 | 2017-02-01 | 中国科学院遥感与数字地球研究所 | Equipment and method for realizing registration and fusion of multipath video images to three-dimensional digital earth system |
Also Published As
Publication number | Publication date |
---|---|
CN107835436B (en) | 2019-07-26 |
Trapp et al. | Communication of digital cultural heritage in public spaces by the example of roman cologne | |
CN110662099A (en) | Method and device for displaying bullet screen |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||