CN114697684A - Method for realizing multi-VR machine position switching - Google Patents

Method for realizing multi-VR machine position switching

Info

Publication number
CN114697684A
CN114697684A (application CN202210378003.3A)
Authority
CN
China
Prior art keywords
video
splicing
picture
data
machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210378003.3A
Other languages
Chinese (zh)
Inventor
高成亮
陈鹤
罗群英
叶建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Arcvideo Technology Co ltd
Original Assignee
Hangzhou Arcvideo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Arcvideo Technology Co ltd filed Critical Hangzhou Arcvideo Technology Co ltd
Priority to CN202210378003.3A
Publication of CN114697684A
Legal status: Pending

Classifications

    All classifications fall under H (ELECTRICITY) → H04 (ELECTRIC COMMUNICATION TECHNIQUE) → H04N (PICTORIAL COMMUNICATION, e.g. TELEVISION) → H04N 21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):
    • H04N 21/21805: Source of audio or video content, e.g. local disk arrays, enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N 21/2187: Live feed
    • H04N 21/23424: Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/23608: Remultiplexing multiplex streams, e.g. involving modifying time stamps or remapping the packet identifiers
    • H04N 21/44016: Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The invention discloses a method for realizing multi-VR machine position switching, comprising the following steps: a VR information source step, in which VR recording equipment is set up according to the recording range; a VR video acquisition step, in which the source signals of all machine positions are collected and aggregated; a VR data processing step, in which the data acquired from the VR video is processed and synchronized to a VR video synchronization module, to support VR picture splicing and position marking in the subsequent VR video processing stage; a VR video processing step, in which a VR director switches the output VR signal between a main source and a standby source, the aggregated VR video information and machine position information are processed, the VR video is spliced at frame level and compressed, and the position of each machine position is marked in the picture content, so that a user watching the VR content can click a marker to switch machine positions; and a VR video watching step. The beneficial effects of the invention are: video pictures can be watched from multiple different angles and center points, with switching to any marked position, bringing a better and more intuitive viewing experience.

Description

Method for realizing multi-VR machine position switching
Technical Field
The invention relates to the technical field of VR playback, and in particular to a method for realizing switching among multiple VR machine positions.
Background
VR technology is now increasingly mature, and VR-related techniques have gained wide popularity and application in industries such as catering and hotels, entertainment and leisure, film and television production, and vacation tourism. In the video live-streaming industry, however, VR live broadcasts are mostly small-area VR picture presentations produced by a single platform or a single anchor with a single device, with content dominated by VR game streams and show-floor broadcasts; the viewer can only follow a single path, the content is monotonous, the viewable range is small, and switching among multiple positions is impossible. With the maturation of VR and 5G technology, the existing VR live-broadcast mode has become a bottleneck that cannot satisfy people's demand for watching VR video.
Meanwhile, large amounts of collected VR video resources are not being used effectively. Under the existing VR live-broadcast mode, the VR video available to users is limited to the picture of a single region; users cannot watch in a multi-dimensional, multi-center way, which wastes VR video resources.
Disclosure of Invention
To overcome the defects in the prior art, the invention provides a method for realizing switching among multiple VR machine positions, which allows switching to any machine position.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for realizing multi-VR machine position switching specifically comprises the following steps:
(1) VR information source: the VR recording equipment is erected according to a recording range, wherein the VR recording equipment is designed according to the characteristics of a machine position;
(2) VR video acquisition: collecting and summarizing information source signals of each machine position, and accessing signal data to a VR (virtual reality) collection system to complete a step of collecting and analyzing VR data;
(3) VR data processing: processing data acquired by the VR video, including a first frame of picture of the VR video, VR video content, setting parameters and VR machine position information, and synchronizing the data to a VR video synchronization module to help VR picture splicing and position positioning in a subsequent VR video processing stage;
(4) and (3) VR video processing: finishing links of VR video synchronization, splicing, position marking and outputting, wherein the output VR signal is switched from a master source to a standby source through a VR guide to process the collected VR video information and machine position information, the VR video is subjected to frame-level splicing and compression processing, and the positions of all machine positions in the picture content are marked, so that a user can conveniently click and switch according to the VR machine positions when watching the VR content, and the user does not need to drag the page back and forth to reach the appointed position;
(5) VR video watching: a cast terminal for viewing by a user is provided.
The invention discloses a method for realizing switching among multiple VR machine positions, comprising VR machine position setup, VR video acquisition, VR video synchronization, VR video splicing, VR data processing, VR directing, VR video output and a VR playback terminal.
Preferably, in step (4), in the VR video synchronization link, because the multiple VR machine position devices differ in buffer state and exposure time, the data from the multiple VR devices must be synchronized; if exposure and buffers are not synchronized, obvious frame skipping and splicing misalignment appear in the spliced VR picture. The VR machine position information in the synchronized data is used to mark each VR machine position in the video after subsequent splicing, and the user can switch quickly by clicking the marks.
Preferably, in step (4), VR video synchronization covers two aspects: first, exposure time synchronization of the multiple VR shooting devices; second, time synchronization of the data at the head of each device's buffer. For exposure time synchronization, the VR video synchronization module sends the same square-wave signal to all VR shooting devices; the square wave controls the camera exposure, so the cameras can be forced to expose simultaneously. For buffer synchronization, the VR video synchronization module aligns the data collected by the VR shooting devices according to the timestamp of each frame.
Preferably, in step (4), VR video splicing joins the multiple VR pictures to finally form a panoramic picture covering all shot pictures. Specifically: after the collected VR video content enters the VR video splicing module, the module automatically identifies the type, angle of view and focal length of each shooting lens, identifies the picture size, and automatically adjusts anything improper; after the adjustment step, the module preliminarily splices all video pictures into a panoramic picture; the module then checks the quality of the spliced picture and automatically repairs any flaws, after which the picture can be output.
Preferably, in step (4), the VR position marking is specifically: the VR picture is composed of the pictures shot by multiple VR shooting devices; in the spliced panoramic picture, the position of each VR shooting device is marked, and the marked positions are delivered to the playback terminal together with the spliced picture, so that while watching the VR video the user can select a marked position and thereby watch from a different angle.
Preferably, in step (4), the VR output specifically undertakes the transcoding and output of the spliced VR video: according to the requirements of the front-end player, the video file is transcoded and output as a video stream of the relevant protocol for the front-end player to play.
Preferably, in step (4), after the data is synchronized, the VR video splicing module performs frame-level splicing of the VR videos of the multiple VR machine positions; after splicing, the video is compressed and output as a real-time live stream, which can be output directly to a broadcasting server for playout, or output through the VR director, the VR director handling main/standby stream switching and filler (pad) content during the VR live broadcast.
Preferably, in step (4), the frame-level splicing of the VR video is specifically: the VR picture files uploaded to the VR video splicing module are extracted frame by frame; feature points are extracted from and registered between the extracted frame-level pictures, the overlapping areas and overlapping positions between the images to be spliced are determined, and the splicing of the VR panoramic image is completed through feature point matching. The operational algorithm flow is: (a) detect the feature points in each image, (b) compute matches between the feature points, (c) compute an initial value of the inter-image transformation matrix, (d) iteratively refine the transformation matrix H, (e) perform guided matching, and (f) repeat the iteration.
Preferably, in step (5), after the VR picture signal and the VR machine positions are sent to the broadcasting server, the user can watch the VR video content through the playback terminal. After the user opens the playing page, the page is divided into the currently playing VR video and the selectable VR machine positions; the user can either freely choose a path while watching the VR live broadcast, or quickly switch among the multiple VR machine positions to watch in real time from different center points.
The invention has the beneficial effects that: the viewer can watch the video picture from multiple different angles and center points; the user becomes a "free person" who can "walk freely" in the video, and can also switch to any position by clicking a preset VR machine position point, watching the video picture from different angles, positions and dimensions. This brings a better and more intuitive viewing experience, no longer confined to a fixed viewing position.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
fig. 2 is a layout diagram of the player.
Detailed Description
The invention is further described with reference to the following figures and detailed description.
In the embodiment shown in fig. 1, a method for realizing multi-VR machine position switching specifically comprises the following steps:
(1) VR information source: VR recording equipment is set up according to the recording range, the equipment being selected according to the characteristics of each machine position; machine positions include fixed positions, jib (rocker-arm) positions, straight-rail positions and the like;
(2) VR video acquisition: the source signals of all machine positions are collected and aggregated, and the signal data is fed into a VR (virtual reality) acquisition system, completing the VR data acquisition and analysis step; the VR acquisition system handles encoder encoding output and aggregated access to a streaming media server;
(3) VR data processing: the data acquired from the VR video, including the first frame of the VR video, the VR video content, configuration parameters and VR machine position information, is processed and synchronized to a VR video synchronization module, to support VR picture splicing and position marking in the subsequent VR video processing stage;
(4) VR video processing: the VR video synchronization, splicing, position marking, output and other links are completed; a VR director switches the output VR signal between a main source and a standby source, the aggregated VR video information, machine position information and other data are processed, the VR video is spliced at frame level and compressed, and the position of each machine position is marked in the picture content, so that a user watching VR content can click a marker to switch machine positions instead of dragging the page back and forth to reach a desired position;
In the VR video synchronization link, because the multiple VR machine position devices differ in buffer state, exposure time and so on, the data from the multiple VR devices must be synchronized; if exposure and buffers are not synchronized, obvious frame skipping and splicing misalignment appear in the spliced VR picture. The VR machine position information in the synchronized data is used to mark each VR machine position in the video after subsequent splicing, and the user can switch quickly by clicking the marks;
VR video synchronization: the synchronization step covers two aspects: first, exposure time synchronization of the multiple VR shooting devices; second, time synchronization of the data at the head of each device's buffer. The VR video synchronization module realizes VR video synchronization mainly through these two points.
1. Exposure time synchronization: the VR video synchronization module sends the same square-wave signal to all VR shooting devices; the square wave controls the camera exposure, so the cameras can be forced to expose simultaneously.
2. Buffer synchronization: the VR video synchronization module aligns the data collected by the VR shooting devices according to the timestamp of each frame.
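The buffer-synchronization step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, the `(timestamp, frame)` buffer layout, and the tolerance value are all assumptions. Each device buffer holds timestamped frames, and for each reference timestamp the nearest frame from every other device is selected if it falls within the tolerance.

```python
# Minimal sketch of timestamp-based buffer synchronization across
# several VR capture devices (illustrative names, not from the patent).

def sync_buffers(buffers, tolerance_ms=20):
    """buffers: list of lists of (timestamp_ms, frame) pairs, one list
    per device, each sorted by timestamp. Returns (timestamp, frames)
    tuples keyed on the first device's timestamps, keeping only instants
    where every device has a frame within tolerance_ms."""
    groups = []
    for ts, frame in buffers[0]:
        picked = [frame]
        for other in buffers[1:]:
            # Nearest frame by timestamp in the other device's buffer.
            best = min(other, key=lambda item: abs(item[0] - ts))
            if abs(best[0] - ts) > tolerance_ms:
                picked = None  # no matching frame: drop this instant
                break
            picked.append(best[1])
        if picked is not None:
            groups.append((ts, tuple(picked)))
    return groups

# Example: two devices at a 40 ms frame interval, the second offset by 5 ms.
cam_a = [(0, "A0"), (40, "A1"), (80, "A2")]
cam_b = [(5, "B0"), (45, "B1"), (120, "B3")]
aligned = sync_buffers([cam_a, cam_b], tolerance_ms=20)
# The 0/5 ms and 40/45 ms pairs align; 80 ms vs 120 ms exceeds the tolerance.
```

In a real system the tolerance would be tied to the frame interval, and dropped instants would be concealed by repeating the previous frame rather than discarded.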
VR video splicing: this module joins the multiple VR pictures to finally form a panoramic picture covering all lens pictures. After the collected VR video content enters the module, the VR splicing module automatically identifies the type, angle of view, focal length and other parameters of each shooting lens, identifies the picture size, and automatically adjusts anything improper. After the adjustment step, the VR video splicing module preliminarily splices all video pictures into a panoramic picture. It then checks the quality of the spliced picture and automatically repairs any flaws, such as removing halation, correcting geometric distortion and repairing stains; after this step the picture can be output.
VR position marking: the VR picture is composed of the pictures shot by multiple VR shooting devices. In the spliced panoramic picture, the position of each VR shooting device is marked, and the marked positions are delivered to the VR video output system together with the spliced picture, so that while watching the VR video the user can select a marked position and thereby watch from a different angle.
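The patent does not specify how a machine position's marker coordinates in the panorama are computed. One plausible sketch, assuming an equirectangular panorama and a known bearing (yaw) and elevation (pitch) for each machine position from the synchronized metadata, is the standard equirectangular projection; all names here are illustrative.

```python
# Sketch: placing machine-position markers in an equirectangular panorama.
# Assumes each machine position's yaw (degrees, 0..360, measured from the
# panorama seam) and pitch (degrees, +90 up to -90 down) are known.

def marker_pixel(yaw_deg, pitch_deg, width, height):
    """Map a (yaw, pitch) viewing direction to (x, y) pixel coordinates
    in an equirectangular panorama of size width x height."""
    x = int((yaw_deg % 360.0) / 360.0 * width)
    # pitch +90 (straight up) -> y = 0; pitch -90 (straight down) -> y = height
    y = int((90.0 - pitch_deg) / 180.0 * height)
    return x, y

# Example: a machine position directly behind the seam, at the horizon,
# in an 8192x4096 panorama lands at the panorama's center.
pos = marker_pixel(yaw_deg=180.0, pitch_deg=0.0, width=8192, height=4096)
```

The resulting pixel coordinates would be carried alongside the spliced picture to the playback terminal and rendered as clickable overlays.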
VR output: this undertakes the transcoding and output of the spliced VR video; according to the requirements of the front-end player, the video file is transcoded and output as a video stream encoded with H.264, H.265 or a similar standard, for the front-end player to play.
After data synchronization, the VR video splicing module performs frame-level splicing of the VR videos of the multiple VR machine positions; after splicing, the video is compressed and output as a real-time live stream, which can be output directly to a broadcasting server for playout, or output through the VR director, the VR director handling main/standby stream switching, filler (pad) content and the like during the VR live broadcast;
In this method, the pictures shot by multiple VR shooting devices must be spliced to finally form a panoramic picture covering all lens pictures. The VR video is spliced at frame level: the VR picture files uploaded to the VR video splicing module are extracted frame by frame, feature points are extracted from and registered between the extracted frame-level pictures, the overlapping areas and overlapping positions between the images to be spliced are determined, and the splicing of the VR panoramic image is completed through feature point matching.
The algorithm flow is as follows:
1. Detect the feature points in each image.
2. Compute matches between the feature points.
3. Compute an initial value of the inter-image transformation matrix.
4. Iteratively refine the transformation matrix H.
5. Perform guided matching.
6. Repeat the iteration until convergence.
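Step 3 of the flow above can be sketched with the classical direct linear transform (DLT): given matched feature points, the homography H is the null vector of a linear system. This is a minimal NumPy illustration under assumed planar overlap, not the patent's actual implementation; feature detection, guided matching, and the RANSAC-style iterative refinement of steps 1, 2, 4 and 5 are omitted.

```python
# Sketch of step 3: estimating the inter-image transformation (homography)
# matrix H from matched feature points via the direct linear transform.
import numpy as np

def estimate_homography(src, dst):
    """src, dst: (N, 2) arrays of matched points, N >= 4.
    Returns the 3x3 homography H mapping src to dst in homogeneous
    coordinates, normalized so that H[2, 2] == 1."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h is the right singular vector for the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: four points related by a pure translation (+10, +5),
# whose homography is the identity with a translation column.
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
dst = src + np.array([10.0, 5.0])
H = estimate_homography(src, dst)
```

In practice the DLT estimate seeds step 4, where H is refined iteratively against all inlier matches (e.g. by minimizing reprojection error), which is what the "iteratively refine" and "repeat the iteration" steps describe.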
After the VR video pictures are spliced, the VR video file must be compressed and encoded. Since China currently has no dedicated encoding and compression standard for this purpose, in this method the file is compressed according to the commonly used standards such as H.264 and H.265; the encoded and compressed file can then be used for playback by a player.
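As an illustration of this compression-and-output step, the following sketch builds (but does not run) an ffmpeg command line that would encode a stitched panorama stream as H.264 and push it as an RTMP live stream. The patent names the H.264/H.265 standards but no specific tool, so ffmpeg, the stream URLs, and the chosen parameters are all assumptions.

```python
# Sketch: constructing an ffmpeg invocation that compresses a stitched
# panoramic stream into an H.264 live stream. The URLs are hypothetical
# placeholders; the command is only built here, not executed.

def build_transcode_cmd(input_url, output_url, codec="libx264",
                        bitrate="8M", gop=50):
    return [
        "ffmpeg",
        "-i", input_url,          # stitched panorama source
        "-c:v", codec,            # libx264 (H.264) or libx265 (H.265)
        "-b:v", bitrate,          # target video bitrate
        "-g", str(gop),           # keyframe interval; short GOPs make
                                  # machine-position switching feel snappier
        "-c:a", "aac",            # re-encode audio as AAC
        "-f", "flv", output_url,  # FLV container, as used for RTMP delivery
    ]

cmd = build_transcode_cmd("rtmp://stitcher.local/pano",
                          "rtmp://play.example.com/live/vr")
```

The same shape of command, with `-c:v libx265` and an HLS or DASH muxer, would cover the H.265 and non-RTMP delivery cases the text mentions.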
(5) VR video watching: a playback terminal is provided for the user to watch;
After the VR picture signal and the VR machine positions are sent to the broadcasting server, the user can watch the VR video content through the playback terminal. After the user opens the playing page, the page is divided into the currently playing VR video and the selectable VR machine positions; the user can freely choose a path while watching the VR live broadcast, or quickly switch among the multiple VR machine positions, as shown in fig. 2, to watch in real time from different center points.
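On the terminal side, the click-to-switch interaction described above can be sketched as a nearest-marker lookup: the player receives the marker pixel positions with the panorama, and a user click selects the machine position whose marker is closest. The function and structure names are illustrative, not from the patent.

```python
# Sketch of the player-side switch: given the marker pixel positions
# delivered with the panorama and a user click, choose the machine
# position whose marker is nearest to the click.

def pick_machine_position(markers, click_xy, max_dist=50):
    """markers: dict position_id -> (x, y) marker pixel. Returns the id
    of the machine position whose marker is nearest the click, or None
    if even the nearest marker is farther than max_dist pixels."""
    def dist2(p):
        dx, dy = p[0] - click_xy[0], p[1] - click_xy[1]
        return dx * dx + dy * dy
    best_id = min(markers, key=lambda k: dist2(markers[k]))
    if dist2(markers[best_id]) > max_dist * max_dist:
        return None  # click did not land on any marker
    return best_id

markers = {"cam1": (400, 300), "cam2": (1200, 310), "cam3": (2000, 295)}
choice = pick_machine_position(markers, click_xy=(1185, 320))
# click lands within the hit radius of cam2's marker
```

On a hit, the player would then request that machine position's stream (or re-center the panorama view on that position), which is the fast switching the text describes.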
Here, a VR machine position is the placement position of a VR camera at the live-broadcast site. VR video acquisition, VR video synchronization, VR video splicing, VR video output and VR directing are all performed on a master node machine (i.e. the machine running the main program). The data processing belongs to the real-time data transmission step during the VR live broadcast; synchronizing information such as VR video acquisition parameters, configuration parameters and VR machine position information improves the accuracy of VR video splicing. The VR watching terminal is the viewing user's mobile phone, Pad or PC player.
The invention discloses a method for realizing switching among multiple VR machine positions, comprising VR machine position setup, VR video acquisition, VR video synchronization, VR video splicing, VR data processing, VR directing, VR video output and a VR playback terminal. By using multiple VR machine positions, multiple VR center points are formed: a viewer can watch the video picture from multiple different angles and centers, the user can move freely in the video as a "free person", and can also switch to any position by clicking a preset VR machine position point, watching the video picture from different angles, positions and dimensions. This brings a better and more intuitive viewing experience: the user is no longer limited to a fixed viewing position, can watch the VR video more freely in multiple angles and dimensions, realizes a free-roaming viewing experience of the VR video world, and maximizes the utilization of VR shooting resources.

Claims (9)

1. A method for realizing multi-VR machine position switching, characterized by comprising the following steps:
(1) VR information source: VR recording equipment is set up according to the recording range, the equipment being selected according to the characteristics of each machine position;
(2) VR video acquisition: the source signals of all machine positions are collected and aggregated, and the signal data is fed into a VR (virtual reality) acquisition system, completing the VR data acquisition and analysis link;
(3) VR data processing: the data acquired from the VR video, including the first frame of the VR video, the VR video content, configuration parameters and VR machine position information, is processed and synchronized to a VR video synchronization module, to support VR picture splicing and position marking in the subsequent VR video processing stage;
(4) VR video processing: the VR video synchronization, splicing, position marking and output links are completed; a VR director switches the output VR signal between a main source and a standby source, the aggregated VR video information and machine position information are processed, the VR video is spliced at frame level and compressed, and the position of each machine position is marked in the picture content, so that a user watching VR content can click a marker to switch machine positions instead of dragging the page back and forth to reach a desired position;
(5) VR video watching: a playback terminal is provided for the user to watch.
2. The method for realizing multi-VR machine position switching according to claim 1, characterized in that in step (4), in the VR video synchronization link, because the multiple VR machine position devices differ in buffer state and exposure time, the data from the multiple VR devices must be synchronized; if exposure and buffers are not synchronized, obvious frame skipping and splicing misalignment appear in the spliced VR picture; the VR machine position information in the synchronized data is used to mark each VR machine position in the video after subsequent splicing, and the user can switch quickly by clicking the marks.
3. The method for realizing multi-VR machine position switching according to claim 2, characterized in that in step (4), VR video synchronization covers two aspects: first, exposure time synchronization of the multiple VR shooting devices; second, time synchronization of the data at the head of each device's buffer; for exposure time synchronization, the VR video synchronization module sends the same square-wave signal to all VR shooting devices, the square wave controlling the camera exposure so that the cameras can be forced to expose simultaneously; for buffer synchronization, the VR video synchronization module aligns the data collected by the VR shooting devices according to the timestamp of each frame.
4. The method for realizing multi-VR machine position switching according to claim 1, characterized in that in step (4), VR video splicing joins the multiple VR pictures to form a panoramic picture covering all shot pictures; specifically: after the collected VR video content enters the VR video splicing module, the module automatically identifies the type, angle of view and focal length of each shooting lens, identifies the picture size, and automatically adjusts anything improper; after the adjustment step, the module preliminarily splices all video pictures into a panoramic picture; the module then checks the quality of the spliced picture and automatically repairs any flaws, after which the picture can be output.
5. The method for realizing multi-VR machine position switching according to claim 1, characterized in that in step (4), the VR position marking is specifically: the VR picture is composed of the pictures shot by multiple VR shooting devices; in the spliced panoramic picture, the position of each VR shooting device is marked, and the marked positions are delivered to the playback terminal together with the spliced picture, so that while watching the VR video the user can select a marked position and thereby watch from a different angle.
6. The method for realizing multi-VR machine position switching according to claim 1, characterized in that in step (4), the VR output specifically undertakes the transcoding and output of the spliced VR video: according to the requirements of the front-end player, the video file is transcoded and output as a video stream of the relevant protocol for the front-end player to play.
7. The method for realizing multi-VR machine position switching according to claim 2, 3, 4, 5 or 6, characterized in that in step (4), after the data is synchronized, the VR video splicing module performs frame-level splicing of the VR videos of the multiple VR machine positions; after splicing, the video is compressed and output as a real-time live stream, which can be output directly to a broadcasting server for playout, or output through the VR director, the VR director handling main/standby stream switching and filler (pad) content during the VR live broadcast.
8. The method for realizing multi-VR machine position switching according to claim 7, characterized in that in step (4), the frame-level splicing of the VR video is specifically: the VR picture files uploaded to the VR video splicing module are extracted frame by frame; feature points are extracted from and registered between the extracted frame-level pictures, the overlapping areas and overlapping positions between the images to be spliced are determined, and the splicing of the VR panoramic image is completed through feature point matching; the operational algorithm flow is: (a) detect the feature points in each image, (b) compute matches between the feature points, (c) compute an initial value of the inter-image transformation matrix, (d) iteratively refine the transformation matrix H, (e) perform guided matching, and (f) repeat the iteration.
9. The method for realizing multi-VR machine position switching according to claim 1, characterized in that in step (5), after the VR picture signal and the VR machine positions are sent to the broadcasting server, the user can watch the VR video content through the playback terminal; after the user opens the playing page, the page is divided into the currently playing VR video and the selectable VR machine positions, and the user can either freely choose a path while watching the VR live broadcast or quickly switch among the multiple VR machine positions to watch in real time from different center points.
CN202210378003.3A 2022-04-12 2022-04-12 Method for realizing multi-VR machine position switching Pending CN114697684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210378003.3A CN114697684A (en) 2022-04-12 2022-04-12 Method for realizing multi-VR machine position switching

Publications (1)

Publication Number Publication Date
CN114697684A true CN114697684A (en) 2022-07-01

Family

ID=82142495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210378003.3A Pending CN114697684A (en) 2022-04-12 2022-04-12 Method for realizing multi-VR machine position switching

Country Status (1)

Country Link
CN (1) CN114697684A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104539931A (en) * 2014-12-05 2015-04-22 北京格灵深瞳信息技术有限公司 Multi-ocular camera system, device and synchronization method
CN106937128A (en) * 2015-12-31 2017-07-07 幸福在线(北京)网络技术有限公司 A kind of net cast method, server and system and associated uses
CN108961162A (en) * 2018-03-12 2018-12-07 北京林业大学 A kind of unmanned plane forest zone Aerial Images joining method and system
CN109104613A (en) * 2017-06-21 2018-12-28 苏宁云商集团股份有限公司 A kind of VR live broadcasting method and system for realizing the switching of multimachine position

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116896658A (en) * 2023-09-11 2023-10-17 厦门视诚科技有限公司 Camera picture switching method in live broadcast
CN116896658B (en) * 2023-09-11 2023-12-12 厦门视诚科技有限公司 Camera picture switching method in live broadcast

Similar Documents

Publication Publication Date Title
US10123070B2 (en) Method and system for central utilization of remotely generated large media data streams despite network bandwidth limitations
JP6432029B2 (en) Method and system for producing television programs at low cost
CN109587401B (en) Electronic cloud deck multi-scene shooting implementation method and system
CN103959802B (en) Image provides method, dispensing device and reception device
CN112601097B (en) Double-coding cloud broadcasting method and system
CN101217623B (en) A quick manual focusing method
CN105635675B (en) A kind of panorama playing method and device
KR102025157B1 (en) System and method for transmitting a plurality of video image
CN101742096A (en) Multi-viewing-angle interactive TV system and method
CN213094282U (en) On-cloud director
CN101945216A (en) Camera head and dynamic image reproducting method
WO2021218573A1 (en) Video playing method, apparatus and system, and computer storage medium
CN113382177A (en) Multi-view-angle surrounding shooting method and system
CN114697684A (en) Method for realizing multi-VR machine position switching
WO2022021519A1 (en) Video decoding method, system and device and computer-readable storage medium
CN113542896B (en) Video live broadcast method, equipment and medium of free view angle
CN112019921A (en) Body motion data processing method applied to virtual studio
CN112543340B (en) Drama watching method and device based on augmented reality
CN108184078A (en) A kind of processing system for video and its method
CN114727126A (en) Implementation method for applying image stitching to multi-machine-position VR (virtual reality) broadcasting-directing station
CN114286121A (en) Method and system for realizing picture guide live broadcast based on panoramic camera
CN112261422A (en) Simulation remote live broadcast stream data processing method suitable for broadcasting and television field
JP4270703B2 (en) Automatic person index generator
TW201125358A (en) Multi-viewpoints interactive television system and method.
CN108600580A (en) 4K programs supervise method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220701