CN114422813A - VR live video splicing and displaying method, device, equipment and storage medium - Google Patents

VR live video splicing and displaying method, device, equipment and storage medium

Info

Publication number
CN114422813A
Authority
CN
China
Prior art keywords
video
panoramic
videos
sorted
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111665654.2A
Other languages
Chinese (zh)
Inventor
陈芃
李武璇
张晗冰
李鹏
朱雄增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202111665654.2A priority Critical patent/CN114422813A/en
Publication of CN114422813A publication Critical patent/CN114422813A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21805Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests

Abstract

The embodiments of the application relate to the technical field of VR live broadcasting, and in particular to a VR live video splicing and display method, device, equipment and storage medium, aiming to splice a high-quality 8K video outside the VR live camera and to display rich panoramic video data. The method comprises the following steps: acquiring multiple original 4K videos in real time through the multiple cameras of a VR video acquisition device, and transmitting the multiple original 4K videos to a cloud server in real time; sorting and collating the multiple original 4K videos to obtain multiple sorted and collated 4K videos; splicing the sorted and collated 4K videos to obtain an 8K panoramic VR video; pushing the 8K panoramic VR video to a streaming media server; and, when the playing end pulls the 8K panoramic VR video, transmitting the 8K panoramic VR video to the playing end through the streaming media server.

Description

VR live video splicing and displaying method, device, equipment and storage medium
Technical Field
The embodiments of the application relate to the technical field of VR live broadcasting, and in particular to a VR live video splicing and display method, device, equipment and storage medium.
Background
With the development of VR (virtual reality) technology, VR live broadcasting has begun to rise. VR live broadcasting is the combination of virtual reality and live streaming: a panoramic video is shot by a 360-degree panoramic shooting device and broadcast to users over the network, and viewers can adjust the viewing angle at will and watch the panoramic picture as if they were on the scene. The panoramic shooting device shoots video clips at different angles through multiple cameras, and the panoramic video is obtained after the clips are spliced; the picture quality output by video splicing is a key factor affecting the VR live broadcasting experience.
In the prior art, the quality of video spliced outside the camera is uneven because different splicing methods are used, splicing of 8K video is not supported, and relevant information about the objects in the video cannot be displayed in the spliced panoramic video.
Disclosure of Invention
The embodiments of the application provide a VR live video splicing and display method, device, equipment and storage medium, aiming to splice a high-quality 8K video outside the VR live camera and to display rich panoramic video data.
A first aspect of an embodiment of the present application provides a VR live video splicing and display method, where the method includes:
acquiring multiple 4K original videos in real time through multiple cameras of VR video acquisition equipment, and transmitting the multiple 4K original videos to a cloud server in real time;
sorting and collating the multiple 4K original videos to obtain multiple sorted and collated 4K videos;
splicing the sorted and collated 4K videos to obtain an 8K panoramic VR video;
pushing the 8K panoramic VR video to a streaming media server;
when the playing end pulls the 8K panoramic VR video, the 8K panoramic VR video is transmitted to the playing end through the streaming media server.
Optionally, the method further comprises:
performing data analysis on the 8K panoramic VR video to obtain related data of an object in the 8K panoramic VR video;
and transmitting the related data to the streaming media server.
Optionally, the method further comprises:
the playing end reads the related data from the streaming media server;
and displaying the relevant data at the corresponding position in the 8K panoramic VR video.
Optionally, performing data analysis on the panoramic VR video to obtain related data of each object in the panoramic VR video, including:
decoding and decomposing the 8K panoramic VR video to obtain a plurality of decoded and decomposed videos;
and transmitting the decoded and decomposed videos to a corresponding analysis platform, and performing data analysis through an intelligent analysis algorithm to obtain the related data of the object of the 8K panoramic VR video.
Optionally, sorting and collating the multiple original 4K videos includes:
time aligning the multiple original 4K videos;
and carrying out length alignment on the plurality of original 4K videos.
Optionally, splicing the sorted and collated 4K videos to obtain the 8K panoramic VR video includes:
calculating and fusing the sorted and collated 4K videos through a GPU optical flow acceleration algorithm to obtain the 8K panoramic VR video;
and performing real-time quick coding calculation on the 8K panoramic VR video through a real-time quick coding algorithm to obtain the coded 8K panoramic VR video.
Optionally, the method further comprises:
collecting related data of objects in the 8K panoramic VR video through Internet of Things equipment;
and transmitting the related data to the streaming media server.
A second aspect of the embodiments of this application provides a VR live video splicing and display device, the device includes:
the video acquisition module is used for acquiring multiple 4K original videos in real time through multiple cameras of VR video acquisition equipment and transmitting the multiple 4K original videos to the cloud server in real time;
the video sorting module is used for sorting and collating the multiple 4K original videos to obtain multiple sorted and collated 4K videos;
the video splicing module is used for splicing the sorted and collated 4K videos to obtain an 8K panoramic VR video;
the video pushing module is used for pushing the 8K panoramic VR video to a streaming media server;
and the video transmission module is used for transmitting the 8K panoramic VR video to the playing end through the streaming media server when the playing end pulls the 8K panoramic VR video.
Optionally, the apparatus further comprises:
the data analysis module is used for carrying out data analysis on the 8K panoramic VR video to obtain related data of an object in the 8K panoramic VR video;
and the first data transmission module is used for transmitting the related data to the streaming media server.
Optionally, the apparatus further comprises:
the data reading module is used for reading the related data from the streaming media server through the playing end;
and the data display module is used for displaying the related data at the corresponding position in the 8K panoramic VR video.
Optionally, the data analysis module comprises:
the video decomposition submodule is used for decoding and decomposing the 8K panoramic VR video to obtain a plurality of decoded and decomposed videos;
and the video analysis submodule is used for transmitting the decoded and decomposed videos to a corresponding analysis platform and carrying out data analysis through an intelligent analysis algorithm to obtain the related data of the object of the 8K panoramic VR video.
Optionally, the video sorting module comprises:
the time alignment submodule is used for carrying out time alignment on the multiple original 4K videos;
and the length alignment submodule is used for carrying out length alignment on the plurality of original 4K videos.
Optionally, the video splicing module comprises:
the video splicing sub-module is used for calculating and fusing the sorted and collated 4K videos through a GPU optical flow acceleration algorithm to obtain the 8K panoramic VR video;
and the video coding submodule is used for carrying out real-time quick coding calculation on the 8K panoramic VR video through a real-time quick coding algorithm to obtain the coded 8K panoramic VR video.
Optionally, the apparatus further comprises:
the data acquisition module is used for acquiring the related data of objects in the 8K panoramic VR video through Internet of Things equipment;
and the second data transmission module is used for transmitting the related data to the streaming media server.
A third aspect of embodiments of the present application provides a readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps in the method according to the first aspect of the present application.
A fourth aspect of the embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the method according to the first aspect of the present application.
By adopting the VR live video splicing and display method provided by the application, multiple original 4K videos are acquired in real time through the multiple cameras of a VR video acquisition device and transmitted to a cloud server in real time; the multiple original 4K videos are sorted and collated to obtain multiple sorted and collated 4K videos; the sorted and collated 4K videos are spliced to obtain an 8K panoramic VR video; the 8K panoramic VR video is pushed to a streaming media server; and, when a VR video watching request transmitted by a playing end is received, the 8K panoramic VR video is transmitted to the playing end through the streaming media server. In this application, the multiple 4K videos shot by the panoramic camera are spliced into an 8K panoramic VR video on the cloud server, so that a high-quality 8K panoramic VR video is pushed to viewers during the VR live broadcast, which effectively improves the immersive experience of users watching the live broadcast.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a diagram of a VR live logical architecture according to an embodiment of the present application;
fig. 2 is a flowchart of a VR live video splicing and displaying method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a VR live video splicing and display device 300 according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
In this embodiment, as shown in fig. 1, fig. 1 is a VR live broadcast logical architecture diagram provided in an embodiment of the present application. In the diagram, content is acquired by a live acquisition end; the acquired videos are encoded, encapsulated and injected into the VR live broadcast platform; the cloud server on the VR live platform receives the injected content and splices the video segments to obtain an 8K panoramic VR video; the video is then encapsulated, transcoded and scheduled for delivery through the broadcast director, and distributed and scheduled through a CDN. The video is transmitted to the user terminal over the network, and the user terminal de-encapsulates, decodes and plays the received video, thereby realizing 8K panoramic VR live broadcasting.
Referring to fig. 2, fig. 2 is a flowchart of a VR live video splicing and displaying method according to an embodiment of the present application. As shown in fig. 2, the method comprises the steps of:
s11: the method comprises the steps of collecting multiple 4K original videos through multiple cameras of VR video collecting equipment in real time, and transmitting the multiple 4K original videos to a cloud server in real time.
In this embodiment, the VR video acquisition device is a camera for acquiring VR video that is capable of shooting at 4K definition. Multiple cameras are arranged on it, each shooting a different angle, so as to realize 360-degree panoramic shooting. 4K refers to the video resolution, i.e. the number of pixels per horizontal line reaches or is close to 4096, so that the details of every part of the picture are shown clearly. An original 4K video is a video acquired by a camera without any processing. The cloud server is a server deployed in the cloud that uniformly manages, processes, stores and distributes the VR videos for the whole VR live broadcast.
In this embodiment, during the VR live broadcast each camera on the VR camera shoots the scene within a certain range, so the multiple cameras produce multiple original videos at 4K definition, and the shot original videos are transmitted to the cloud server through real-time streaming, that is, continuous shooting and continuous transmission.
Illustratively, the number of cameras of the VR camera is 4, and each camera is responsible for acquiring a video picture in a range of 90 degrees around, and shooting to obtain 4 paths of 4K original videos.
S12: and sequencing and sorting the multiple 4K original videos to obtain multiple 4K videos after sequencing and sorting.
In this embodiment, a plurality of 4K original videos need to be spliced after being sorted and sorted, the sorting and sorting includes aligning the videos in terms of time and length, and the specific sorting and sorting steps are as follows:
s12-1: time-aligning the plurality of original 4K videos.
In this embodiment, the time alignment means that the videos shot by each camera at the same time are aligned in time uniformly, so as to ensure that the start time of each video is the same.
In this embodiment, the multiple original 4K videos are streamed to the cloud in real time over the network, so the original videos are continuously shot, continuously transmitted and continuously spliced. Before splicing, the time of each original video stream must be aligned to prevent temporal confusion in the spliced video. After the cloud platform receives the multiple transmitted original 4K videos, it ensures that the start time of each video segment to be spliced in this round is the same.
Illustratively, when 4 paths of 4K original videos are received, the cloud server time-aligns the 4 paths of original videos, and aligns the start times of the 4 paths of original videos to 0:00, that is, the time when live broadcasting starts.
S12-2: and carrying out length alignment on the plurality of original 4K videos.
In this embodiment, the length alignment means that the videos shot by each camera at the same time are aligned uniformly in length, so as to ensure that the lengths of the videos are the same.
In this embodiment, because the video is streamed in real time, each round of splicing of the multiple original videos must use segments of the same length; otherwise the spliced video pictures would be erroneous. When the cloud platform receives the multiple transmitted original 4K videos, it aligns the lengths of the segments to be spliced this time, thereby determining the end time of the segments being spliced.
Illustratively, when 4 original 4K videos are received, the cloud server length-aligns the 4 original videos and sets the length of each segment to 5 minutes.
In the embodiment, the videos collected by the collecting end are aligned in time and length, so that the time and the length of each path of video correspond to each other when the videos are spliced, and the situation of splicing errors is prevented.
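As a minimal sketch of how the time and length alignment of steps S12-1 and S12-2 might be implemented on the cloud server, the following Python snippet aligns one splicing batch of per-camera segments; the Segment structure, its field names and the fixed frame rate are illustrative assumptions and are not specified by the application.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    camera_id: int
    start_ts: float      # capture start time, seconds since the live broadcast began
    frames: list         # decoded frames of this segment (placeholders here)
    fps: float = 30.0

def align_segments(segments):
    """Time-align and length-align one splicing batch of per-camera segments."""
    # Time alignment: every stream starts from the latest common start time.
    common_start = max(s.start_ts for s in segments)
    for s in segments:
        skip = int(round((common_start - s.start_ts) * s.fps))
        s.frames = s.frames[skip:]
        s.start_ts = common_start
    # Length alignment: trim every stream to the shortest remaining length.
    common_len = min(len(s.frames) for s in segments)
    for s in segments:
        s.frames = s.frames[:common_len]
    # Return the streams in a fixed camera order ready for stitching.
    return sorted(segments, key=lambda s: s.camera_id)
```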
S13: and splicing the sorted and sorted multiple 4K videos to obtain an 8K panoramic VR video.
In this embodiment, the 8K panoramic VR video is a panoramic VR video spliced by multiple 4K videos.
In this embodiment, the sorted and collated 4K videos are spliced using a real-time fast coding algorithm and a GPU optical flow acceleration algorithm through an 8K LIVE privatization protocol, where the 8K LIVE privatization protocol is a private network protocol specially customized for 8K live broadcasting and contains the network provisions related to 8K live video. The specific steps of splicing the received 4K videos using the real-time fast coding algorithm and the GPU optical flow acceleration algorithm through the 8K LIVE privatization protocol are as follows:
S13-1: calculating and fusing the sorted and collated 4K videos through the GPU optical flow acceleration algorithm to obtain the 8K panoramic VR video.
In this embodiment, the GPU optical flow acceleration algorithm processes the dynamic video by computing the optical flow within it and accelerates this computation on the GPU.
In this embodiment, the GPU optical flow acceleration algorithm allows the multiple 4K videos to be quickly and seamlessly spliced and fused into one integrated video stream containing multi-dimensional parameters; essentially this is the stitching of the image sequences of the multiple videos.
Exemplarily, four 4K videos are received, GPU optical flow acceleration calculation is performed on the sorted and collated four videos, and the partial pictures of the four videos are seamlessly fused into one complete picture, so as to obtain the 8K panoramic VR video.
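Purely as an illustration of optical-flow-guided fusion, and not the application's proprietary GPU algorithm, the sketch below uses OpenCV on the CPU to blend the overlap region of two adjacent camera frames; the overlap width, the Farneback parameters and the linear feather blend are all assumptions.

```python
import cv2
import numpy as np

def stitch_frame_pair(left, right, overlap=256):
    """Blend the overlapping strip of two adjacent camera frames, guided by dense optical flow."""
    l_strip = cv2.cvtColor(left[:, -overlap:], cv2.COLOR_BGR2GRAY)
    r_strip = cv2.cvtColor(right[:, :overlap], cv2.COLOR_BGR2GRAY)
    # Dense optical flow between the two views of the overlap region.
    flow = cv2.calcOpticalFlowFarneback(l_strip, r_strip, None,
                                        0.5, 3, 31, 3, 5, 1.2, 0)
    # Warp the right-hand strip toward the left-hand strip using the estimated flow.
    h, w = l_strip.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(right[:, :overlap], map_x, map_y, cv2.INTER_LINEAR)
    # Linear feather blend across the overlap, then attach the non-overlapping parts.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    blended = (left[:, -overlap:] * alpha + warped * (1 - alpha)).astype(np.uint8)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])
```

In a full pipeline this would presumably be run per frame for every pair of adjacent cameras on the GPU and the result projected into an equirectangular panorama; the sketch only shows the core flow-guided blending step.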
S13-2: and performing real-time quick coding calculation on the 8K panoramic VR video through a real-time quick coding algorithm to obtain the coded 8K panoramic VR video.
In this embodiment, the real-time fast encoding algorithm performs real-time fast encoding of the video; it ensures that video uploaded in real time can be encoded quickly and that the encoded video can be played by the player.
In this embodiment, the video needs to be encoded to be played, and when the 8K panoramic VR video is received, the video is encoded in real time and rapidly, so as to obtain the encoded 8K panoramic VR video.
In this embodiment, the GPU optical flow acceleration algorithm and the real-time fast encoding algorithm are used to rapidly splice and encode the multiple 4K videos into an 8K panoramic VR video, which guarantees both fast transmission of the live video and a high-definition 8K picture for the VR live broadcast, greatly improving the fluency and definition of the live broadcast and the user's watching experience.
S14: and pushing the 8K panoramic VR video to a streaming media server.
In this embodiment, the streaming media server is used to collect, cache, schedule, transmit and play streaming media content.
In this embodiment, the encoded and encapsulated 8K panoramic VR video is pushed to the streaming media server. When the live broadcast is carried on several platforms simultaneously, the encoded and encapsulated 8K panoramic VR video is adaptively transcoded for each platform before being pushed to it. At the same time, the CDN is used to distribute and schedule the video: the central VR live platform distributes and schedules the content to each edge server.
In this embodiment, the streaming media server caches the 8K panoramic VR video and the live video is distributed and scheduled through the CDN, so that users in different places can watch the live broadcast smoothly, which significantly reduces the load on the central server of the live platform and improves the users' watching experience.
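The push in step S14 can be illustrated with a generic FFmpeg-based sketch: libx264 with low-latency settings stands in for the application's real-time fast coding algorithm, and a plain RTMP ingest URL stands in for the 8K LIVE privatization protocol; the command, flags and URLs below are assumptions.

```python
import subprocess

def push_to_streaming_server(input_url: str, rtmp_url: str) -> subprocess.Popen:
    """Encode the stitched panoramic stream and push it to a streaming media server over RTMP."""
    cmd = [
        "ffmpeg",
        "-re", "-i", input_url,          # read the stitched panoramic stream at its native rate
        "-c:v", "libx264",               # stand-in encoder; the application uses its own algorithm
        "-preset", "ultrafast",
        "-tune", "zerolatency",
        "-g", "60",                      # regular keyframes so viewers can join mid-stream
        "-c:a", "aac", "-b:a", "128k",
        "-f", "flv",                     # RTMP ingest expects an FLV container
        rtmp_url,
    ]
    return subprocess.Popen(cmd)

# Hypothetical usage:
# push_to_streaming_server("stitched_panorama.mp4", "rtmp://streaming.example.com/live/vr8k")
```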
S15: when the playing end pulls the 8K panoramic VR video, the 8K panoramic VR video is transmitted to the playing end through the streaming media server.
In this embodiment, the playing end is the playback device used by the user. A live broadcast APP runs on the playing end; by opening the live APP, the user can access the live platform and watch the live broadcast.
In this embodiment, the user opens the live APP on the playing end. A live broadcast list is provided in the APP; when the user clicks a live broadcast of interest in the list, the live page is entered, and the playing end pulls the corresponding 8K panoramic VR video from the streaming media server and plays it. While the user pulls the stream, the live platform records the user's account information, the pull time and the pulled video content.
For example, the playing terminal may be a mobile phone, VR glasses, a WAP website, and the like. On the mobile phone and the VR glasses, the live APP is used to access the live platform, while the WAP website plays through a live H5 webpage. Any device that can connect to the live platform, such as a television or a large screen, can also be used as a playing terminal, and the user is free to select a suitable channel and click a live video stream to watch.
In this embodiment, the user can watch the VR live broadcast on any terminal with the live APP installed and view a panoramic VR live video with 8K definition anytime and anywhere, which greatly improves the user's live watching experience.
In another embodiment of the present application, the method further comprises:
s21: and carrying out data analysis on the 8K panoramic VR video to obtain related data of the object in the 8K panoramic VR video.
In this embodiment, the data analysis refers to performing feature recognition on various scenes and objects in the video through various corresponding algorithms, and further recognizing various types of related information of each object in the video.
In this embodiment, after the 8K panoramic VR video is obtained by splicing, it is transmitted to a data analysis platform. Various feature recognition algorithms, such as a face information recognition algorithm and a vehicle recognition algorithm, are loaded on the data analysis platform, and the video is analyzed with these algorithms to obtain the related data of the objects in the 8K panoramic VR video. The specific identification steps are as follows:
s21-1: and decoding and decomposing the 8K panoramic VR video to obtain a plurality of decoded and decomposed videos.
In this embodiment, the decoding decomposition is to decompose the 8K panorama VR video into multiple paths of 4K videos.
In this embodiment, because each corresponding analysis platform does not support directly receiving the 8K panoramic VR video, and can not directly perform feature recognition on the panoramic VR video, the 8K panoramic VR video needs to be decomposed to obtain multiple decoded and decomposed videos.
Illustratively, an 8K panoramic VR video is decoded and decomposed into 4 paths of 4K videos.
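A minimal sketch of such a decomposition at the frame level is shown below; the 2x2 tiling into four 4K-sized tiles is an assumption, since the application only states that the 8K panoramic VR video is decoded and decomposed into multiple videos.

```python
import numpy as np

def split_panorama(frame_8k: np.ndarray):
    """Decompose one 8K panoramic frame (e.g. 7680x4320) into four 4K-sized tiles
    so that analysis platforms that only accept 4K input can process them."""
    h, w = frame_8k.shape[:2]
    half_h, half_w = h // 2, w // 2
    return [
        frame_8k[:half_h, :half_w],   # top-left tile
        frame_8k[:half_h, half_w:],   # top-right tile
        frame_8k[half_h:, :half_w],   # bottom-left tile
        frame_8k[half_h:, half_w:],   # bottom-right tile
    ]
```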
S21-2: and transmitting the decoded and decomposed videos to a corresponding analysis platform, and performing data analysis through an intelligent analysis algorithm to obtain the related data of the object of the 8K panoramic VR video.
In this embodiment, the analysis platform is a platform dedicated to feature recognition of video, and the intelligent analysis algorithm is an artificial intelligence algorithm for feature recognition that is preloaded on the platform server.
In this embodiment, the 8K panoramic VR video in the live broadcast platform can be transmitted to the streaming media server on the one hand, and can be decomposed on the other hand and transmitted to the corresponding analysis platform for data analysis. After the analysis platform receives the video, the corresponding feature recognition algorithm is used for carrying out feature recognition on the video to obtain the related data of the object.
Illustratively, when the analysis platform receives a video to be analyzed, scene recognition is performed on the video by using a scene recognition algorithm, so as to obtain scene data in the video, such as a school, a park, an amusement park, and the like. Vehicles in the video may also be identified using vehicle identification algorithms, such as buses, cars, non-motor vehicles, and the like. Various feature recognition algorithms can also be used to recognize each object in the video to obtain the related data of the object, which is not limited herein.
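As a hedged stand-in for the intelligent analysis algorithms described above, the following sketch runs an off-the-shelf OpenCV Haar-cascade face detector on one decomposed tile and maps the detections back to panorama coordinates; the choice of detector, the record fields and the coordinate mapping are assumptions, not the application's algorithms.

```python
import cv2

# A Haar-cascade face detector stands in for the application's intelligent
# analysis algorithms (face, vehicle and scene recognition are all mentioned).
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def analyse_tile(tile, tile_offset_x, tile_offset_y, timestamp_s):
    """Run feature recognition on one decomposed tile and return related-data records
    carrying the time and panorama coordinates at which each object appears."""
    gray = cv2.cvtColor(tile, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [
        {
            "label": "face",                 # recognized object category
            "time_s": timestamp_s,           # when the object appears in the video
            "x": int(x) + tile_offset_x,     # position mapped back to the 8K panorama
            "y": int(y) + tile_offset_y,
            "w": int(w), "h": int(h),
        }
        for (x, y, w, h) in faces
    ]
```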
S22: and transmitting the related data to the streaming media server.
In this embodiment, after the related data of the object is obtained, the related data is transmitted to the streaming server.
In this embodiment, the related data must also correspond to the 8K panoramic VR video so that it can be displayed at the correct position in the video. Before the related data is transmitted to the streaming media server, the position information of the related data in the video is superimposed on the related data so that it can be displayed accurately in the VR video. The position information consists of the time and the coordinates at which the related data appears in the video.
Illustratively, if Tiananmen appears in the picture between seconds 40 and 56 of the 8K panoramic VR video, the analysis platform identifies the building and obtains 'Tiananmen' as the related data, with the position information (time: 40s-56s; coordinates: 100, 250, 200). The position information is superimposed on the related data, which is then transmitted to the streaming media server.
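A possible serialization of such a record, with the position information already superimposed, might look as follows; the field names and structure are assumptions, and only the time range and coordinates come from the example above.

```python
# Hypothetical structure of one related-data record after the position
# information has been superimposed (all field names are assumptions).
related_data_record = {
    "label": "Tiananmen",           # recognized object
    "time_start_s": 40,             # first second at which the object appears
    "time_end_s": 56,               # last second at which the object appears
    "coordinates": [100, 250, 200]  # display position in the panoramic picture
}
```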
In this embodiment, intelligent analysis of the 8K panoramic VR video by the data analysis platform effectively identifies the related data of each object in the video. The related data is transmitted to the streaming media server and displayed while the user watches the live broadcast, which guarantees the accuracy of information identification in the video and helps improve the user's live watching experience.
In another embodiment of the present application, the method further comprises:
s31: and reading the related data from the streaming media server through the playing end.
In this embodiment, while the playing end pulls the video from the streaming media server, it can simultaneously read the related data from the streaming media server. The playing end APP provides an HTTP interface and periodically reads the related data of the objects in the video from the streaming media server through this interface.
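A minimal sketch of such periodic reading over HTTP is given below; the endpoint path, query parameter, polling interval and JSON response format are assumptions, since the application only states that the player reads the related data through an HTTP interface at a fixed interval.

```python
import time
import requests

def poll_related_data(base_url: str, stream_id: str, interval_s: float = 1.0):
    """Periodically read related-data records for a live stream over HTTP and yield them."""
    while True:
        # Hypothetical endpoint; the real interface is not specified in the application.
        resp = requests.get(f"{base_url}/related_data",
                            params={"stream": stream_id}, timeout=5)
        resp.raise_for_status()
        for record in resp.json():
            yield record              # hand each record to the display layer
        time.sleep(interval_s)
```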
S32: and displaying the relevant data at the corresponding position in the 8K panoramic VR video.
In this embodiment, when the user watches the 8K panoramic VR video at the playing end, the received related data of the objects in the video is displayed at the corresponding positions in the video, so that while watching the live broadcast the user can obtain the related information of the objects simply by reading the related data.
In this embodiment, when the playing end plays the video, an information display layer is overlaid on the VR video and used to show the related information of the objects in the video. After the playing end receives the related data, it renders the related data in the information display layer and, according to the position information superimposed on the related data, displays the related data at the corresponding position of the layer.
For example, when a user watches the 8K panoramic VR live broadcast and a Tiananmen scene appears, the information display layer of the playing end displays the name of Tiananmen, the year of its construction and other information at the position of Tiananmen in the video.
In this embodiment, the layer display technology is used to show the related information of the objects in the live video on a separate layer, so the user can learn the related data of the objects in the video while watching the live broadcast, which effectively improves the live watching experience.
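Building on the hypothetical record format sketched earlier, a minimal 2D rendering of the information display layer could look like the following; a real VR player would render the layer in the panoramic projection rather than in flat pixel coordinates, so this is only an illustration.

```python
import cv2

def render_info_layer(frame, records, current_time_s):
    """Draw the related data on an information layer over the current video frame."""
    overlay = frame.copy()
    for rec in records:
        if rec["time_start_s"] <= current_time_s <= rec["time_end_s"]:
            x, y = rec["coordinates"][0], rec["coordinates"][1]
            # Dark label background, then the object's name at its position.
            cv2.rectangle(overlay, (x, y - 28), (x + 240, y + 8), (0, 0, 0), thickness=-1)
            cv2.putText(overlay, rec["label"], (x + 6, y),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    # Blend the information layer onto the frame so the video stays visible beneath it.
    return cv2.addWeighted(overlay, 0.7, frame, 0.3, 0)
```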
In another embodiment of the present application, the method further comprises:
s41: and collecting related data of the object in the 8K panoramic VR video through the Internet of things equipment.
S42: and transmitting the related data to the streaming media server.
In this embodiment, the internet of things device refers to an electronic device that can be connected to a network and can collect related information of an article.
In this embodiment, during the live broadcast the VR platform can collect the public information gathered by Internet of Things devices near the live scene, call these devices to obtain the collected related data, and transmit the data directly to the streaming media server. When the user watches the live broadcast, the playing end obtains the related data from the streaming media server and displays it on the layer of the VR video.
Exemplarily, the Internet of Things device is an air quality monitor. If, during the VR live broadcast, there is an air quality monitor near the live venue, the VR live platform receives the air data detected by the monitor and transmits it to the streaming media server. When the user at the playing end watches the VR live broadcast, the air data is displayed on the visible layer of the VR video, for example: air quality: good, PM2.5 value: 35.
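One way such an air quality reading might be packaged as a related-data record, in the same assumed format as the recognition records above, is sketched below; every field name and the fixed corner position are assumptions.

```python
# Hypothetical related-data record built from an IoT air quality reading,
# in the same assumed format as the recognition records above.
iot_air_record = {
    "label": "Air quality: good, PM2.5: 35",  # text shown on the information layer
    "time_start_s": 0,                        # shown for the whole live session
    "time_end_s": 10**9,
    "coordinates": [40, 40, 0],               # assumed fixed corner position of the layer
}
```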
In this embodiment, displaying the data collected by the Internet of Things devices on the layer of the live video further enriches the video data of the 8K panoramic VR video and effectively improves the user's panoramic immersive experience.
In the above embodiments, the VR video is spliced outside the camera by a live video stitching technique: multiple 4K pictures are fused into one 8K panoramic VR video using the real-time fast coding algorithm and the GPU optical flow acceleration algorithm through the 8K LIVE privatization protocol, and the video is distributed and scheduled by the streaming media server. At the same time, the video is analyzed with artificial intelligence algorithms to obtain its related data, which is transmitted to the streaming media server and finally displayed in the video at the playing end. The user can watch in real time on a PC, a mobile phone or VR glasses; the spliced 8K panoramic VR video improves the picture quality of the VR video, and the data displayed on the layer enriches the panoramic video data, effectively improving the panoramic immersive experience of users watching the VR live broadcast.
In another embodiment of the present application, the present application is further described by an actual live scene.
VR tourism live scenario: the scenes of each scenic spot are shot through a VR camera assembled with 8K high-definition cameras, and the multiple shot video clips are transmitted to the VR live platform. After splicing and analysis by the platform, the result is transmitted to the streaming media server. Viewers can watch the 8K panoramic VR video through VR glasses, a mobile phone or a PC, and the name and related information of each scenic spot are displayed on the video at the same time, so the charm of each scenic spot can be experienced immersively.
VR exhibition: a VR camera is installed at the exhibition site and the live picture of the exhibition is uploaded in real time. The user can watch the 8K panoramic VR video of the exhibition site at the playing end, and the information of each exhibit is also displayed on the video, which improves the user's immersive experience.
The VR live broadcast method can further be applied to VR distance education, where teachers and students enjoy an immersive teaching experience, and to VR sales and exhibition of agricultural products, where users can experience rural scenery immersively and purchase agricultural products with confidence.
Based on the same inventive concept, an embodiment of the application provides a VR live video splicing and displaying device. Referring to fig. 3, fig. 3 is a schematic diagram of a VR live video splicing and displaying device 300 according to an embodiment of the present application. As shown in fig. 3, the apparatus includes:
the video acquisition module 301 is configured to acquire multiple 4K original videos in real time through multiple cameras of a VR video acquisition device, and transmit the multiple 4K original videos to a cloud server in real time;
the video sorting module 302 is configured to sort and collate the multiple original 4K videos to obtain multiple sorted and collated 4K videos;
the video splicing module 303 is configured to splice the sorted and collated 4K videos to obtain an 8K panoramic VR video;
a video pushing module 304, configured to push the 8K panoramic VR video to a streaming media server;
the video transmission module 305 is configured to transmit the 8K panoramic VR video to the playing end through the streaming media server when the playing end pulls the 8K panoramic VR video.
Optionally, the apparatus further comprises:
the data analysis module is used for carrying out data analysis on the 8K panoramic VR video to obtain related data of an object in the 8K panoramic VR video;
and the first data transmission module is used for transmitting the related data to the streaming media server.
Optionally, the apparatus further comprises:
the data reading module is used for reading the related data from the streaming media server through the playing end;
and the data display module is used for displaying the related data at the corresponding position in the 8K panoramic VR video.
Optionally, the data analysis module comprises:
the video decomposition submodule is used for decoding and decomposing the 8K panoramic VR video to obtain a plurality of decoded and decomposed videos;
and the video analysis submodule is used for transmitting the decoded and decomposed videos to a corresponding analysis platform and carrying out data analysis through an intelligent analysis algorithm to obtain the related data of the object of the 8K panoramic VR video.
Optionally, the video sorting module comprises:
the time alignment submodule is used for carrying out time alignment on the multiple original 4K videos;
and the length alignment submodule is used for carrying out length alignment on the plurality of original 4K videos.
Optionally, the video splicing module comprises:
the video splicing sub-module is used for calculating and fusing the sorted and collated 4K videos through a GPU optical flow acceleration algorithm to obtain the 8K panoramic VR video;
and the video coding submodule is used for carrying out real-time quick coding calculation on the 8K panoramic VR video through a real-time quick coding algorithm to obtain the coded 8K panoramic VR video.
Optionally, the apparatus further comprises:
the data acquisition module is used for acquiring the related data of objects in the 8K panoramic VR video through Internet of Things equipment;
and the second data transmission module is used for transmitting the related data to the streaming media server.
Based on the same inventive concept, another embodiment of the present application provides a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the VR live video splicing and displaying method according to any of the above embodiments of the present application.
Based on the same inventive concept, another embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and running on the processor, and when the processor executes the computer program, the steps in the VR live video splicing and displaying method described in any of the above embodiments of the present application are implemented.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The VR live video splicing and displaying method, device, equipment and storage medium provided by the present application are introduced in detail, and a specific example is applied in the present application to explain the principle and implementation manner of the present application, and the description of the above embodiment is only used to help understand the method and core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A VR live video splicing and displaying method is characterized by comprising the following steps:
acquiring multiple 4K original videos in real time through multiple cameras of VR video acquisition equipment, and transmitting the multiple 4K original videos to a cloud server in real time;
sorting and collating the multiple 4K original videos to obtain multiple sorted and collated 4K videos;
splicing the sorted and collated 4K videos to obtain an 8K panoramic VR video;
pushing the 8K panoramic VR video to a streaming media server;
when the playing end pulls the 8K panoramic VR video, the 8K panoramic VR video is transmitted to the playing end through the streaming media server.
2. The method of claim 1, further comprising:
performing data analysis on the 8K panoramic VR video to obtain related data of an object in the 8K panoramic VR video;
and transmitting the related data to the streaming media server.
3. The method of claim 2, further comprising:
reading the related data from the streaming media server through the playing end;
and displaying the relevant data at the corresponding position in the 8K panoramic VR video.
4. The method of claim 2, wherein performing data analysis on the panoramic VR video to obtain data related to each object in the panoramic VR video comprises:
decoding and decomposing the 8K panoramic VR video to obtain a plurality of decoded and decomposed videos;
and transmitting the decoded and decomposed videos to a corresponding analysis platform, and performing data analysis through an intelligent analysis algorithm to obtain the related data of the object of the 8K panoramic VR video.
5. The method of claim 1, wherein sorting and collating the multiple original 4K videos comprises:
time aligning the multiple original 4K videos;
and carrying out length alignment on the plurality of original 4K videos.
6. The method of claim 1, wherein splicing the sorted and collated 4K videos to obtain an 8K panoramic VR video comprises:
calculating and fusing the sorted and collated 4K videos through a GPU optical flow acceleration algorithm to obtain the 8K panoramic VR video;
and performing real-time quick coding calculation on the 8K panoramic VR video through a real-time quick coding algorithm to obtain the coded 8K panoramic VR video.
7. The method of claim 1, further comprising:
collecting related data of objects in the 8K panoramic VR video through Internet of Things equipment;
and transmitting the related data to the streaming media server.
8. A VR live video splicing and display device, characterized in that the device comprises:
the video acquisition module is used for acquiring multiple 4K original videos in real time through multiple cameras of VR video acquisition equipment and transmitting the multiple 4K original videos to the cloud server in real time;
the video sorting module is used for sorting and collating the multiple 4K original videos to obtain multiple sorted and collated 4K videos;
the video splicing module is used for splicing the sorted and collated 4K videos to obtain an 8K panoramic VR video;
the video pushing module is used for pushing the 8K panoramic VR video to a streaming media server;
and the video transmission module is used for transmitting the 8K panoramic VR video to the playing end through the streaming media server when the playing end pulls the 8K panoramic VR video.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
CN202111665654.2A 2021-12-30 2021-12-30 VR live video splicing and displaying method, device, equipment and storage medium Pending CN114422813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111665654.2A CN114422813A (en) 2021-12-30 2021-12-30 VR live video splicing and displaying method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111665654.2A CN114422813A (en) 2021-12-30 2021-12-30 VR live video splicing and displaying method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114422813A true CN114422813A (en) 2022-04-29

Family

ID=81270533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111665654.2A Pending CN114422813A (en) 2021-12-30 2021-12-30 VR live video splicing and displaying method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114422813A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108353122A (en) * 2015-11-17 2018-07-31 索尼公司 Multicamera system, the method and camera for controlling multicamera system
US20190045222A1 (en) * 2016-02-12 2019-02-07 Samsung Electronics Co., Ltd. Method for supporting vr content display in communication system
CN110324648A (en) * 2019-07-17 2019-10-11 咪咕文化科技有限公司 Live streaming shows method and system
CN111193937A (en) * 2020-01-15 2020-05-22 北京拙河科技有限公司 Processing method, device, equipment and medium for live video data
CN111311490A (en) * 2020-01-20 2020-06-19 陕西师范大学 Video super-resolution reconstruction method based on multi-frame fusion optical flow
CN111416989A (en) * 2020-04-28 2020-07-14 北京金山云网络技术有限公司 Video live broadcast method and system and electronic equipment
CN112911221A (en) * 2021-01-15 2021-06-04 欧冶云商股份有限公司 Remote live-action storage supervision system based on 5G and VR videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination