CN114727126A - Implementation method for applying image stitching to a multi-camera-position VR (virtual reality) directing station - Google Patents

Info

Publication number
CN114727126A
CN114727126A
Authority
CN
China
Prior art keywords
video
director
picture
stream
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210377996.2A
Other languages
Chinese (zh)
Other versions
CN114727126B (en)
Inventor
陈鹤
晏仁强
郎建彬
段长安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Arcvideo Technology Co ltd
Original Assignee
Hangzhou Arcvideo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Arcvideo Technology Co ltd filed Critical Hangzhou Arcvideo Technology Co ltd
Priority to CN202210377996.2A priority Critical patent/CN114727126B/en
Publication of CN114727126A publication Critical patent/CN114727126A/en
Application granted granted Critical
Publication of CN114727126B publication Critical patent/CN114727126B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21805Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268Signal distribution or switching

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses an implementation method for applying image stitching to a multi-camera-position VR directing station. The method comprises the following specific operation steps: (1) in a live scene environment with multiple VR camera positions, the director switches among the signals of multiple VR cameras; (2) a VR cloud directing station lets the director switch and control the input and output video content through a visual interface and adjust the VR picture angle on that interface; (3) a VR video stitching server performs focus adjustment on the picture chosen by the director and outputs the result; (4) a VR playback terminal receives the VR video stream for final viewing. The beneficial effects of the invention are: changing the VR viewing focus by stitching video pictures effectively improves the user's viewing experience and maximizes the utilization of VR shooting resources; manual intervention by the director keeps the initial-frame viewing angle consistent across the multiple VR cameras, giving the audience a better viewing experience.

Description

Implementation method for applying image stitching to a multi-camera-position VR (virtual reality) directing station
Technical Field
The invention relates to the technical field of image processing, and in particular to an implementation method for applying image stitching to a multi-camera-position VR directing station.
Background
With the rapid development of 5G technology and the support of national policy, commercial 5G applications have begun to land, and VR live broadcast, as a preferred 5G application scenario, is now widely used for sports events, breaking news, concerts, press conferences and similar occasions. More and more live productions use multiple VR panoramic cameras to cover a single event.
Just as an ordinary video presents its content around a central focus, a VR video needs an initial-frame focus picture so that viewers can grasp the core content of a scene when watching a VR live broadcast. However, most VR cameras capture with multiple lenses, and the installation and deployment crew often does not know which lens is the central one; when switching among multiple VR pictures, the viewing angle of related pictures may well be flipped, so the audience cannot watch the VR content smoothly and the VR experience is degraded.
Disclosure of Invention
The invention provides an implementation method for applying image stitching to a multi-camera-position VR directing station, which overcomes the defects of the prior art and maximizes the utilization of VR shooting resources.
To this end, the invention adopts the following technical scheme:
An implementation method for applying image stitching to a multi-camera-position VR directing station comprises the following specific operation steps:
(1) in a live scene environment with multiple VR camera positions, the director switches among the signals of multiple VR cameras;
(2) a VR cloud directing station lets the director switch and control the input and output video content through a visual interface and adjust the VR picture angle on that interface;
(3) a VR video stitching server performs focus adjustment on the picture chosen by the director and outputs the result;
(4) a VR playback terminal receives the VR video stream for final viewing.
The implementation method disclosed by the invention, which applies image stitching technology to a multi-camera-position VR directing station, involves multiple VR cameras, a VR cloud directing station, a VR video stitching server and a VR playback terminal. Changing the VR viewing focus by stitching video pictures effectively improves the user's viewing experience; the multiple VR camera positions enlarge the live broadcast coverage until the whole scene is covered without blind spots, maximizing the utilization of VR shooting resources. Switching among multiple VR camera positions also lets the user watch the VR video freely, from many angles and dimensions, instead of being confined to a small range, giving a roaming viewing experience of the VR video world. Finally, for VR videos that must remain associated with one another, manual intervention by the director keeps the initial-frame viewing angle consistent across the multiple VR cameras, bringing the audience a better viewing experience.
Preferably, step (1) specifically comprises: the multiple VR cameras shoot the VR content and convert it into VR video streams; the output video is a standard 2:1 VR picture, and the encoding parameters (video width and height, video frame rate and video bit rate) are defined according to the actual network conditions.
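As a concrete illustration of this clause, the sketch below checks a set of encoding parameters against the 2:1 constraint. The class name, frame-rate value and bit-rate value are illustrative assumptions; only the 2:1 width-to-height rule and the 3840x1920 / 8 Mbps source figures come from the patent text.

```python
from dataclasses import dataclass

@dataclass
class VRStreamConfig:
    """Hypothetical encoding parameters for one VR camera's output stream."""
    width: int          # pixels
    height: int         # pixels
    fps: int            # frames per second (assumed value below)
    bitrate_kbps: int   # video bit rate

    def is_standard_vr(self) -> bool:
        # A standard equirectangular VR picture has a 2:1 width:height ratio.
        return self.width == 2 * self.height

# Example values matching the embodiment's source stream (3840x1920 @ 8 Mbps).
src = VRStreamConfig(width=3840, height=1920, fps=30, bitrate_kbps=8000)
print(src.is_standard_vr())  # True
```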
Preferably, step (2) specifically comprises: the VR cloud directing station receives the VR video streams shot by the VR cameras and monitors all of them through a web page. So that the audience's viewing angle remains consistent when the director switches between VR cameras, the focus position of every VR camera must be visible and adjustable at any time in the directing station; the station therefore delivers the VR video streams over the webRTC protocol in a low-latency mode for quick preview by the director, and the focus position is adjusted through the visual interface.
Preferably, in step (2), the specific operation method is as follows:
(21) the resolution and bit rate of the IP signals from the multiple VR cameras received by the VR cloud directing station must be kept consistent; after receiving them, the station re-encodes each original signal into a same-proportion, low-bit-rate, low-resolution, low-latency webRTC preview proxy stream for monitoring and operation;
(22) through the visual web interface, the user previews and selects the standby signal to be switched, loads it into the PVW preview window, and marks the focus position there by visual operation, i.e. the X-axis and Y-axis coordinates of the mark point within the proxy-stream resolution;
(23) the focus position at the original resolution is then obtained from the ratio of the input IP-stream resolution to the webRTC proxy-stream resolution.
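The ratio rule of step (23) can be sketched as follows. The function name is hypothetical; the example resolutions are the ones given in the embodiment (source 3840x1920, proxy 1440x720), and the mapping works because the proxy stream keeps the same aspect ratio as the source, so one scale factor per axis suffices.

```python
def proxy_to_source(x_proxy, y_proxy, proxy_res, source_res):
    """Map a focus mark from proxy-stream coordinates to the original resolution."""
    sx = source_res[0] / proxy_res[0]  # horizontal scale factor
    sy = source_res[1] / proxy_res[1]  # vertical scale factor
    return round(x_proxy * sx), round(y_proxy * sy)

# A mark placed at the centre of the 1440x720 proxy maps to the centre of the
# 3840x1920 source picture.
print(proxy_to_source(720, 360, (1440, 720), (3840, 1920)))  # (1920, 960)
```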
Preferably, step (3) specifically comprises: the VR cloud directing station sends the VR video stream to be adjusted, together with the position information, to the VR video stitching server; the server decodes the source stream, masks and crops the pictures according to the focus position data, stitches the two original pictures together according to the crop position parameters, outputs the stitched result at the same resolution as the input signal, encodes it into a completely new VR video stream according to the output requirements, and sends it to the VR playback terminal.
Preferably, in step (3), the specific operation method is as follows:
(31) to ensure that every region of the cropped and stitched picture stays frame-synchronized, so that the audience cannot perceive that the picture has been cropped and stitched, the VR video stitching server duplicates the IP stream signal and decodes the two copies simultaneously;
(32) when the mark point lies to the left of the Y axis, it must move right: the left side of the canvas is padded with a picture whose size runs from the mark point's X coordinate to the 1/2 position of the video width; similarly, when the mark point lies to the right of the Y axis, it must move left, and by the same cropping-and-stitching rule the right-hand video is masked from the X-axis origin up to the video width minus the mark-point value;
(33) after the two frames retained by the masks are obtained, a new session with the same resolution and bit rate as the input IP signal is created as output, and the retained image data are stitched into a complete, brand-new VR video whose focus has been manually adjusted.
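One way to read steps (31)-(33) is that the two masked regions are the two pieces of a horizontal re-centring of the frame, so their widths must always sum to the video width. A minimal sketch of that arithmetic follows; the function name and this interpretation of the mask widths are assumptions drawn from that reading, not an authoritative rendering of the patent's procedure.

```python
def stitch_widths(mark_x: int, width: int):
    """Return the widths of the two regions kept from the duplicated decodes
    when the stitched output re-centres the mark point at width/2 of the canvas.
    """
    shift = width // 2 - mark_x  # >0: picture moves right; <0: picture moves left
    wrap = shift % width         # width of the piece that wraps in from the far edge
    return wrap, width - wrap    # (left region, right region) of the new canvas

# Mark point left of centre in a 3840-wide frame: the picture shifts right by 920 px,
# so a 920-px piece fills the left of the canvas and the remaining 2920 px follow.
print(stitch_widths(1000, 3840))  # (920, 2920)
```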
Preferably, in step (3), after the completely new VR video stream is obtained, it can be previewed at the VR directing station.
Preferably, step (4) specifically comprises: the VR playback terminal plays the encoded VR video stream provided by the VR video stitching server, so the user experiences the enjoyment of multi-camera-position VR live broadcast.
The invention has the following beneficial effects: changing the VR viewing focus by stitching video pictures effectively improves the user's viewing experience and maximizes the utilization of VR shooting resources; manual intervention by the director keeps the initial-frame viewing angle consistent across the multiple VR cameras, giving the audience a better viewing experience.
Drawings
FIG. 1 is a flow chart of the method of the invention;
FIG. 2 is a schematic diagram of the distribution of VR cameras;
FIG. 3 is a schematic diagram of VR video stitching.
Detailed Description
The invention is further described below with reference to the figures and a detailed embodiment.
In the embodiment shown in FIG. 1, an implementation method for applying image stitching to a multi-camera-position VR directing station comprises the following specific operation steps:
(1) a live scene environment with multiple VR camera positions, among whose camera signals the director switches, as shown in FIG. 2. Specifically: the multiple VR cameras shoot the VR content and convert it into VR video streams; the output video is a standard 2:1 VR picture, and the encoding parameters (video width and height, video frame rate and video bit rate) are defined according to the actual network conditions;
(2) a VR cloud directing station lets the director switch and control the input and output video content through a visual interface and adjust the VR picture angle on that interface. Specifically: the directing station receives the VR video streams shot by the VR cameras and monitors all of them through a web page; so that the audience's viewing angle remains consistent when the director switches between VR cameras, the focus position of every VR camera must be visible and adjustable at any time in the directing station, which delivers the VR video streams over the webRTC protocol in a low-latency mode for quick preview by the director, the focus position being adjusted through the visual interface;
(3) a VR video stitching server performs focus adjustment on the picture chosen by the director and outputs the result. Specifically: the VR cloud directing station sends the VR video stream to be adjusted, together with the position information, to the VR video stitching server; the server decodes the source stream, masks and crops the pictures according to the focus position data, stitches the two original pictures together according to the crop position parameters, outputs the stitched result at the same resolution as the input signal, encodes it into a completely new VR video stream according to the output requirements, and sends it to the VR playback terminal; after the completely new VR video stream is obtained, it can be previewed at the VR directing station;
(4) a VR playback terminal receives the VR video stream for final viewing. Specifically: the terminal plays the encoded VR video stream provided by the VR video stitching server, so the user experiences the enjoyment of multi-camera-position VR live broadcast; the VR playback terminal is typically the viewer's VR head-mounted display, mobile phone, iPad or the like.
The specific operations for steps (2) and (3) are as follows:
(a) The resolution and bit rate of the IP signals from the multiple VR cameras received by the VR cloud directing station must be kept consistent; after receiving them, the station re-encodes each original signal into a same-proportion, low-bit-rate, low-resolution, low-latency webRTC preview proxy stream for monitoring and operation. For example: source bit rate 8 Mbps and source resolution 3840x1920; proxy bit rate 2 Mbps and proxy resolution 1440x720.
(b) Through the visual web interface, the user previews and selects the standby signal to be switched (i.e. the webRTC proxy stream of one VR camera), loads it into the PVW preview window, and marks the focus position there by visual operation, i.e. the X-axis and Y-axis coordinates of the mark point within the proxy-stream resolution.
(c) The focus position at the original resolution is obtained from the ratio of the input IP-stream resolution to the webRTC proxy-stream resolution. To ensure that every region of the cropped and stitched picture stays frame-synchronized, so that the audience cannot perceive that the picture has been cropped and stitched, the VR video stitching server duplicates the IP stream signal and decodes the two copies simultaneously.
(d) The focus position of the video must then be adjusted, i.e. the mark point must be brought to the centre, so the two picture regions to be stitched on either side of the 1/2 position of the video width must be recalculated. When the mark point is to the left of the Y axis (i.e. mark position < 1/2 video width), it must move right, and the left side of the canvas is padded with a picture whose size runs from the mark point's X coordinate to the 1/2 position of the video width, as shown in FIG. 3. Because the VR video is finally stitched onto a sphere, the two decoded copies retain only the required pictures by masking, and the whole picture is assembled into a brand-new video: one copy retains the main picture starting from the X-axis origin, while the other retains the complementary rightmost region, of width equal to the required shift, which is placed at the left edge of the canvas. Similarly, when the mark point is to the right of the Y axis (i.e. mark position > 1/2 video width), it must move left, and by the same cropping-and-stitching rule the right-hand video is masked from the X-axis origin up to the video width minus the mark-point value.
(e) After the two frames retained by the masks are obtained, a new session with the same resolution and bit rate as the input IP signal is created as output, and the retained image data are stitched into a complete, brand-new VR video whose focus has been manually adjusted.
(f) Through on-line encoding equipment, the rendered buffer is transcoded into IP streams for distribution and viewing on smart terminals, notebooks, VR headsets and the like.
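Read together, steps (d) and (e) above amount to a horizontal circular shift of the equirectangular frame so that the mark point lands at the 1/2-width position; because the picture wraps around a sphere, the piece pushed off one edge reappears at the other. A minimal sketch under that interpretation, with NumPy (the function name and the toy test frame are illustrative):

```python
import numpy as np

def recenter_frame(frame: np.ndarray, mark_x: int) -> np.ndarray:
    """Re-centre an equirectangular frame so column `mark_x` moves to width/2.

    The two masked copies described in step (d) are together equivalent to one
    circular shift along the X axis, which np.roll performs directly.
    """
    height, width = frame.shape[:2]
    shift = width // 2 - mark_x  # >0 shifts right (mark left of centre), <0 shifts left
    return np.roll(frame, shift, axis=1)

# A 4x8 test frame whose columns carry their own index 0..7; re-centre column 1
# so it lands at column 4 (the 1/2-width position).
frame = np.tile(np.arange(8), (4, 1))
out = recenter_frame(frame, 1)
print(out[0].tolist())  # [5, 6, 7, 0, 1, 2, 3, 4]
```

In a real pipeline the same shift would be applied to every decoded frame of both copies before re-encoding at the source resolution and bit rate, as step (e) describes.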

Claims (8)

1. An implementation method for applying image stitching to a multi-camera-position VR directing station, characterized by comprising the following specific operation steps:
(1) in a live scene environment with multiple VR camera positions, the director switches among the signals of multiple VR cameras;
(2) a VR cloud directing station lets the director switch and control the input and output video content through a visual interface and adjust the VR picture angle on that interface;
(3) a VR video stitching server performs focus adjustment on the picture chosen by the director and outputs the result;
(4) a VR playback terminal receives the VR video stream for final viewing.
2. The implementation method for applying image stitching to a multi-camera-position VR directing station according to claim 1, characterized in that step (1) specifically comprises: the multiple VR cameras shoot the VR content and convert it into VR video streams; the output video is a standard 2:1 VR picture, and the encoding parameters (video width and height, video frame rate and video bit rate) are defined according to the actual network conditions.
3. The implementation method for applying image stitching to a multi-camera-position VR directing station according to claim 1, characterized in that step (2) specifically comprises: the VR cloud directing station receives the VR video streams shot by the VR cameras and monitors all of them through a web page; so that the audience's viewing angle remains consistent when the director switches between VR cameras, the focus position of every VR camera must be visible and adjustable at any time in the directing station, which delivers the VR video streams over the webRTC protocol in a low-latency mode for quick preview by the director, the focus position being adjusted through the visual interface.
4. The implementation method for applying image stitching to a multi-camera-position VR directing station according to claim 3, characterized in that, in step (2), the specific operation method is as follows:
(21) the resolution and bit rate of the IP signals from the multiple VR cameras received by the VR cloud directing station must be kept consistent; after receiving them, the station re-encodes each original signal into a same-proportion, low-bit-rate, low-resolution, low-latency webRTC preview proxy stream for monitoring and operation;
(22) through the visual web interface, the user previews and selects the standby signal to be switched, loads it into the PVW preview window, and marks the focus position there by visual operation, i.e. the X-axis and Y-axis coordinates of the mark point within the proxy-stream resolution;
(23) the focus position at the original resolution is then obtained from the ratio of the input IP-stream resolution to the webRTC proxy-stream resolution.
5. The implementation method for applying image stitching to a multi-camera-position VR directing station according to claim 1, characterized in that step (3) specifically comprises: the VR cloud directing station sends the VR video stream to be adjusted, together with the position information, to the VR video stitching server; the server decodes the source stream, masks and crops the pictures according to the focus position data, stitches the two original pictures together according to the crop position parameters, outputs the stitched result at the same resolution as the input signal, encodes it into a completely new VR video stream according to the output requirements, and sends it to the VR playback terminal.
6. The implementation method for applying image stitching to a multi-camera-position VR directing station according to claim 4, characterized in that, in step (3), the specific operation method is as follows:
(31) to ensure that every region of the cropped and stitched picture stays frame-synchronized, so that the audience cannot perceive that the picture has been cropped and stitched, the VR video stitching server duplicates the IP stream signal and decodes the two copies simultaneously;
(32) when the mark point lies to the left of the Y axis, it must move right: the left side of the canvas is padded with a picture whose size runs from the mark point's X coordinate to the 1/2 position of the video width; similarly, when the mark point lies to the right of the Y axis, it must move left, and by the same cropping-and-stitching rule the right-hand video is masked from the X-axis origin up to the video width minus the mark-point value;
(33) after the two frames retained by the masks are obtained, a new session with the same resolution and bit rate as the input IP signal is created as output, and the retained image data are stitched into a complete, brand-new VR video whose focus has been manually adjusted.
7. The implementation method for applying image stitching to a multi-camera-position VR directing station according to claim 5, characterized in that, in step (3), after the completely new VR video stream is obtained, it is previewed at the VR directing station.
8. The implementation method for applying image stitching to a multi-camera-position VR directing station according to claim 1, characterized in that step (4) specifically comprises: the VR playback terminal plays the encoded VR video stream provided by the VR video stitching server, so the user experiences the enjoyment of multi-camera-position VR live broadcast.
CN202210377996.2A 2022-04-12 2022-04-12 Implementation method for applying image stitching to a multi-camera-position VR (virtual reality) directing station Active CN114727126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210377996.2A CN114727126B (en) 2022-04-12 2022-04-12 Implementation method for applying image stitching to a multi-camera-position VR (virtual reality) directing station

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210377996.2A CN114727126B (en) 2022-04-12 2022-04-12 Implementation method for applying image stitching to a multi-camera-position VR (virtual reality) directing station

Publications (2)

Publication Number Publication Date
CN114727126A true CN114727126A (en) 2022-07-08
CN114727126B CN114727126B (en) 2023-09-19

Family

ID=82244614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210377996.2A Active CN114727126B (en) 2022-04-12 2022-04-12 Implementation method for applying image stitching to a multi-camera-position VR (virtual reality) directing station

Country Status (1)

Country Link
CN (1) CN114727126B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612168A (en) * 2023-04-20 2023-08-18 北京百度网讯科技有限公司 Image processing method, device, electronic equipment, image processing system and medium
CN116781958A (en) * 2023-08-18 2023-09-19 成都索贝数码科技股份有限公司 XR-based multi-machine-position presentation system and method
CN117939183A (en) * 2024-03-21 2024-04-26 中国传媒大学 Multi-machine-position free view angle guided broadcasting method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104519310A (en) * 2013-09-29 2015-04-15 深圳锐取信息技术股份有限公司 Remote program director control system
CN109104613A (en) * 2017-06-21 2018-12-28 苏宁云商集团股份有限公司 A kind of VR live broadcasting method and system for realizing the switching of multimachine position
CN111405188A (en) * 2020-04-17 2020-07-10 四川省卫生健康宣传教育中心 Multi-camera control method and system based on switcher
CN111726640A (en) * 2020-07-03 2020-09-29 中图云创智能科技(北京)有限公司 Live broadcast method with 0-360 degree dynamic viewing angle
CN213094282U (en) * 2021-03-02 2021-04-30 中国传媒大学 On-cloud director
CN213342434U (en) * 2020-07-22 2021-06-01 四川新视创伟超高清科技有限公司 Multi-machine-position video picture cutting system
CN113438495A (en) * 2021-06-23 2021-09-24 中国联合网络通信集团有限公司 VR live broadcast method, device, system, equipment and storage medium
CN113542897A (en) * 2021-05-19 2021-10-22 广州速启科技有限责任公司 Audio and video live broadcast method suitable for multi-view live broadcast
CN114035672A (en) * 2020-07-20 2022-02-11 华为技术有限公司 Video processing method and related equipment for virtual reality VR scene

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104519310A (en) * 2013-09-29 2015-04-15 深圳锐取信息技术股份有限公司 Remote program director control system
CN109104613A (en) * 2017-06-21 2018-12-28 苏宁云商集团股份有限公司 A kind of VR live broadcasting method and system for realizing the switching of multimachine position
CN111405188A (en) * 2020-04-17 2020-07-10 四川省卫生健康宣传教育中心 Multi-camera control method and system based on switcher
CN111726640A (en) * 2020-07-03 2020-09-29 中图云创智能科技(北京)有限公司 Live broadcast method with 0-360 degree dynamic viewing angle
CN114035672A (en) * 2020-07-20 2022-02-11 华为技术有限公司 Video processing method and related equipment for virtual reality VR scene
CN213342434U (en) * 2020-07-22 2021-06-01 四川新视创伟超高清科技有限公司 Multi-machine-position video picture cutting system
CN213094282U (en) * 2021-03-02 2021-04-30 中国传媒大学 On-cloud director
CN113542897A (en) * 2021-05-19 2021-10-22 广州速启科技有限责任公司 Audio and video live broadcast method suitable for multi-view live broadcast
CN113438495A (en) * 2021-06-23 2021-09-24 中国联合网络通信集团有限公司 VR live broadcast method, device, system, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨煜红 (YANG Yuhong): "Application of a Smart Media Practical Training Platform in Teaching", Intelligent Building, no. 01 *
陈泽翔 (CHEN Zexiang): "Construction of an Immersive Remote Interactive Classroom at Hainan University", Information & Computer (Theoretical Edition), no. 09 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612168A (en) * 2023-04-20 2023-08-18 北京百度网讯科技有限公司 Image processing method, device, electronic equipment, image processing system and medium
CN116781958A (en) * 2023-08-18 2023-09-19 成都索贝数码科技股份有限公司 XR-based multi-machine-position presentation system and method
CN116781958B (en) * 2023-08-18 2023-11-07 成都索贝数码科技股份有限公司 XR-based multi-machine-position presentation system and method
CN117939183A (en) * 2024-03-21 2024-04-26 中国传媒大学 Multi-machine-position free view angle guided broadcasting method and system

Also Published As

Publication number Publication date
CN114727126B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN106792246B (en) Method and system for interaction of fusion type virtual scene
CN106921866B (en) Multi-video-director method and equipment for assisting live broadcast
CN106789991B (en) Multi-person interactive network live broadcast method and system based on virtual scene
JP6432029B2 (en) Method and system for producing television programs at low cost
JP6878014B2 (en) Image processing device and its method, program, image processing system
CN106303289B (en) Method, device and system for fusion display of real object and virtual scene
CN109547724B (en) Video stream data processing method, electronic equipment and storage device
CN106713942B (en) Video processing method and device
CN109587401A (en) The more scene capture realization method and systems of electronic platform
CN104335243A (en) Processing panoramic pictures
CN105898395A (en) Network video playing method, device and system
CN115209172A (en) XR-based remote interactive performance method
CN101106717B (en) Video player circuit and video display method
CN114727126A (en) Implementation method for applying image stitching to multi-machine-position VR (virtual reality) broadcasting-directing station
CN109982096A (en) 360 ° of VR content broadcast control systems of one kind and method
CN102447722A (en) Service system for rapid manufacturing of virtual video content of video chatting
CN116781958B (en) XR-based multi-machine-position presentation system and method
Wang et al. A brief analysis of the 5G+4K/8K+AI strategic layout of central radio and television station: taking the 2019 national day campaign publicity report as an example
US10764655B2 (en) Main and immersive video coordination system and method
CN107197316A (en) Panorama live broadcast system and method
GB2354388A (en) System and method for capture, broadcast and display of moving images
CN113395527A (en) Remote live broadcast virtual background cloud synthesis system based on VR technology
KR20220023606A (en) Multi-View Live Casting System using smartphone
CN206807670U (en) The servomechanism live for panorama
CN117939183B (en) Multi-machine-position free view angle guided broadcasting method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant