CN114710682A - Virtual reality video processing method and device for event site and electronic equipment - Google Patents
Virtual reality video processing method and device for event site and electronic equipment
- Publication number
- CN114710682A (Application number CN202210350453.1A)
- Authority
- CN
- China
- Prior art keywords
- video
- virtual reality
- server
- video data
- event
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/21805—Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention provides a virtual reality video processing method and device for an event site, and an electronic device. The method is applied to a virtual reality video processing system for an event site and comprises the following steps: a plurality of video acquisition devices acquire video data of the event site and send the video data to a server; an edge computing node of the server generates a virtual reality video based on the plurality of video data and a pre-stored virtual background of the event site, and distributes the virtual reality video to user terminals. The method can provide virtual reality video for on-site spectators; through acquisition by the plurality of video acquisition devices and distribution by the server, it meets spectators' needs to watch the event from different viewing angles and improves the on-site viewing experience.
Description
Technical Field
The invention relates to the technical field of virtual reality, in particular to a virtual reality video processing method and device for an event site and electronic equipment.
Background
Traditional on-site spectating generally works as follows: spectators enter the venue and watch the live event from a fixed seat at a fixed distance, so their field of view (viewing angle) and their access to event content information are both limited. Apart from the large screens in the venue, on-site spectators have no richer means of obtaining event content.
To compensate for an unsatisfactory field of view (viewing angle), spectators may open their mobile phones and watch an online live stream at the same time. The current implementation that lets an on-site mobile phone user watch such a live stream is as follows: acquisition equipment is set up at the venue to shoot the competition; the pictures and sound are transmitted as baseband signals to encoding and decoding equipment, which pushes them to an outside broadcast van; the van monitors each signal path and produces a directed feed; the individual signals and the directed feed are then transmitted by satellite or dedicated line to a remote studio or remote production center for remote production, or broadcast directly through the master control of a broadcasting platform; finally, the on-site mobile phone user accesses the live broadcasting platform to watch the live content.
However, at a large-scale event site, constraints such as wireless network capacity and computing support mean that the real-time event content available to on-site spectators is very limited, and the viewing experience is monotonous. On-site spectators cannot switch viewing angles because of their seat positions, and they are highly sensitive to the latency of real-time live content, so the experience is poor.
Disclosure of Invention
In view of the above, the present invention provides a virtual reality video processing method and device for an event site, and an electronic device, so as to improve the experience of on-site spectators.
In a first aspect, an embodiment of the present invention provides a virtual reality video processing method for an event site, applied to a virtual reality video processing system for the event site, where the system includes video acquisition devices and a server that are communicatively connected in sequence, the server is communicatively connected with external user terminals, and an edge computing node is deployed in the server. The method includes the following steps: a plurality of video acquisition devices acquire video data of the event site and send the video data to the server; and the edge computing node of the server generates a virtual reality video based on the plurality of video data and a pre-stored virtual background of the event site, and distributes the virtual reality video to the user terminals.
In a preferred embodiment of the present application, each video acquisition device includes a wide-angle lens; the step of a plurality of video acquisition devices acquiring video data of the event site includes: the wide-angle lenses of the plurality of video acquisition devices acquire ultra-wide-angle foreground video of the event site as the video data.
In a preferred embodiment of the present application, the virtual reality video processing system further includes a signal base station, and the video acquisition devices, the signal base station and the server are connected in sequence. The step of a plurality of video acquisition devices acquiring video data of the event site and sending it to the server includes: the plurality of video acquisition devices acquire video data of the event site and encode the video data; the video acquisition devices send the encoded video data to the signal base station; and the signal base station sends the encoded video data to the server.
In a preferred embodiment of the present application, the virtual reality video processing system further includes a switch, and the video acquisition devices, the switch and the signal base station are connected in sequence. The step of the plurality of video acquisition devices sending the encoded video data to the signal base station includes: the plurality of video acquisition devices send the encoded video data to the switch; and the switch sends the encoded video data to the signal base station.
In a preferred embodiment of the present application, after the step of the edge computing node of the server generating a virtual reality video based on the plurality of video data and the pre-stored virtual background of the event site, the method further includes: the edge computing node of the server transcodes the virtual reality video based on the video format of the user terminal.
In a preferred embodiment of the present application, the virtual reality video processing system further includes a tuning device, and the tuning device is communicatively connected with the server. The method further includes: the tuning device acquires sound data and sends the sound data to the server; and the edge computing node of the server generates the virtual reality video based on the sound data, the plurality of video data, and the virtual background of the event site.
In a preferred embodiment of the present application, the method further includes: checking the network state of the virtual reality video processing system; and if the network state of the virtual reality video processing system is a normal state, starting an edge computing node of the server.
In a preferred embodiment of the present application, the server includes a distortion server, a control server, and a packaging server.
In a second aspect, an embodiment of the present invention further provides a virtual reality video processing apparatus for an event site, applied to a virtual reality video processing system for the event site, where the system includes video acquisition devices and a server that are communicatively connected in sequence, the server is communicatively connected with external user terminals, and an edge computing node is deployed in the server. The apparatus includes: a video data acquisition module, used for a plurality of video acquisition devices to acquire video data of the event site and send the video data to the server; and a virtual reality video generation module, used for the edge computing node of the server to generate a virtual reality video based on the plurality of video data and a pre-stored virtual background of the event site and to distribute the virtual reality video to the user terminals.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores computer-executable instructions that can be executed by the processor, and the processor executes the computer-executable instructions to implement the virtual reality video processing method for the event site.
The embodiment of the invention has the following beneficial effects:
According to the virtual reality video processing method and device for an event site and the electronic device, a plurality of video acquisition devices acquire video data of the event site, and the edge computing node of the server generates a virtual reality video based on the plurality of video data and a pre-stored virtual background of the event site and distributes the virtual reality video to the user terminals. The method can provide virtual reality video for on-site spectators; through acquisition by the plurality of video acquisition devices and distribution by the server, it meets spectators' needs to watch the event from different viewing angles and improves the on-site viewing experience.
Additional features and advantages of the disclosure will be set forth in the description which follows, or may in part be learned by practice of the disclosure.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a virtual reality video processing method for an event site according to an embodiment of the present invention;
fig. 2 is a flowchart of another virtual reality video processing method for an event site according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a virtual reality video processing system according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a transmission link of a virtual reality video processing system according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a virtual reality video processing method for an event site according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a virtual reality video processing apparatus for an event site according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A stadium is a venue for large-scale live activities, including sports events, commercial performances and commercial exhibitions, and is a concentrated site for the production and broadcast of event content. Remote viewers can enjoy many event content services, such as multi-angle viewing, real-time short-video replay and commentary, while watching the main broadcast signal. On-site spectators, by contrast, are limited by conventional network bandwidth, the computing power of their mobile phones and other factors, and currently cannot obtain most of the broadcast content of the event.
At present, traditional on-site spectating generally works as follows: spectators enter the venue and watch the live event from a fixed seat at a fixed distance, so their field of view (viewing angle) and their access to event content information are both limited. Apart from the large screens in the venue, on-site spectators have no richer means of obtaining event content.
To compensate for an unsatisfactory field of view (viewing angle), spectators may open their mobile phones and watch an online live stream at the same time. At present, the implementation that lets an on-site mobile phone user watch such a live stream is as follows: acquisition equipment is set up at the venue to shoot the competition; the pictures and sound are transmitted as baseband signals to encoding and decoding equipment, which pushes them to an outside broadcast van; the van monitors each signal path and produces a directed feed; the individual signals and the directed feed are then transmitted by satellite or dedicated line to a remote studio or remote production center for remote production, or broadcast directly through the master control of a broadcasting platform; finally, the on-site mobile phone user accesses the live broadcasting platform to watch the live content.
However, at a large-scale event site, constraints such as wireless network capacity and computing support mean that the real-time event content available to on-site spectators is very limited, and the viewing experience is monotonous. On-site spectators cannot switch viewing angles because of their seat positions, and they are highly sensitive to the latency of real-time live content, so the experience is poor.
To address the problem that domestic large-scale event sites currently lack a mobile-terminal-based solution for providing on-site spectators with real-time auxiliary viewing video content whose viewing angle they can switch on their own, the virtual reality video processing method and device for an event site and the electronic device provided by the embodiments of the present invention can provide low-latency Virtual Reality (VR) live content for on-site spectators.
The method provided by the embodiment of the invention involves three parts: on-site virtual reality content acquisition, deployment of public wireless network facilities, and an edge computing service platform system. It aims to provide auxiliary viewing content to on-site spectators through their mobile terminals, and achieves short-link, closed-loop content distribution to on-site mobile terminals by integrating existing content acquisition, wireless network and edge computing technologies together with software engineering planning and design. The method provided by the embodiment of the invention is an innovative scenario application.
In the embodiment of the invention, diversified event content can be provided on site through a high-bandwidth 5G wireless network and a customized edge computing multimedia content service, which enriches the content choices of on-site spectators, helps them grasp the details of the event more comprehensively, and improves the on-site experience.
To facilitate understanding of the embodiment, a detailed description will be given to a virtual reality video processing method for an event site disclosed in the embodiment of the present invention.
Embodiment one:
the embodiment of the invention provides a virtual reality video processing method for an event site, which is applied to a virtual reality video processing system for the event site.
Edge computing is a general networking technology, commonly referred to as Multi-access Edge Computing (MEC). MEC describes the concept of pushing services to the network edge, providing application developers and content providers with cloud computing capabilities and an IT service environment at the edge of the network. Such an environment is characterized by ultra-low latency, high bandwidth, and real-time access to wireless network information that applications can use.
Based on the above description, referring to the flowchart of a virtual reality video processing method for an event site shown in fig. 1, the virtual reality video processing method for the event site includes the following steps:
Step S102: a plurality of video acquisition devices acquire video data of the event site and send the video data to a server.
In this embodiment, a plurality of video acquisition devices can be arranged at the event site to acquire video data that can be used to produce the VR video. The video acquisition device of this embodiment can use a 180-degree wide-angle lens, shooting the foreground of the event, that is, the content on the field of play, without covering the auditorium.
After the video data of the event site is collected by the video collecting device, the video data can be sent to the server.
Step S104: the edge computing node of the server generates a virtual reality video based on the plurality of video data and the pre-stored virtual background of the event site, and distributes the virtual reality video to the user terminals.
In this embodiment, the servers used by the workflow required for VR video, such as distortion correction, stitching, content rendering, video encoding and decoding, transcoding and distribution, can be deployed at the edge of the venue to form an edge computing service system that presents virtual reality live content to mobile terminals at the event site.
The edge computing node of the server in this embodiment can stitch the video data with the pre-stored virtual background of the event site to generate the virtual reality video, which can then be further processed, for example transcoded, to improve its quality. The virtual background is a virtual viewing-box picture: a virtual spectator-seat scene can be designed with 3D modeling software, the generated scene is used as the background picture of the virtual reality video, and the video data is used as the foreground and stitched with the background picture to obtain the virtual reality video.
After the virtual reality video is generated, the server can distribute it to the user terminals, and on-site spectators can watch it through their terminals. The server can transcode the virtual reality video so that it conforms to the playback format of the user terminal, and then distribute the transcoded video to the user terminal so that it is presented there in the correct playback format.
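As one concrete illustration of this transcoding-and-distribution step, the sketch below pulls the stitched VR stream, transcodes it with ffmpeg into an H.264/AAC stream that typical 5G handsets can decode, and packages it as short-segment HLS. This is a minimal sketch under stated assumptions, not the patented implementation: the stream URLs, codecs, bitrates and segment lengths are illustrative choices that do not appear in the patent.

```python
import subprocess

def transcode_and_distribute(input_rtmp_url: str, output_hls_dir: str) -> subprocess.Popen:
    """Pull the stitched VR stream, transcode it for 5G mobile terminals,
    and package it as HLS for distribution. URLs and parameters are assumptions."""
    cmd = [
        "ffmpeg",
        "-i", input_rtmp_url,              # stitched VR video from the stitching stage
        "-c:v", "libx264",                 # transcode to a format mobile players support
        "-preset", "veryfast",             # favour low latency over compression efficiency
        "-b:v", "8M",                      # example bitrate for a high-resolution VR stream
        "-c:a", "aac", "-b:a", "128k",
        "-f", "hls",
        "-hls_time", "2",                  # short segments to keep end-to-end delay low
        "-hls_list_size", "5",
        "-hls_flags", "delete_segments",
        f"{output_hls_dir}/vr_live.m3u8",
    ]
    return subprocess.Popen(cmd)

# Example use on the edge node (addresses are hypothetical):
# proc = transcode_and_distribute("rtmp://127.0.0.1/live/vr_stitched", "/var/www/hls")
```

In practice the edge node would run one such job per output format required by the user terminals, which is consistent with the format-based transcoding described above.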
According to the virtual reality video processing method for an event site provided by the embodiment of the invention, a plurality of video acquisition devices acquire video data of the event site, and the edge computing node of the server generates a virtual reality video based on the plurality of video data and a pre-stored virtual background of the event site and distributes it to the user terminals. The method can provide virtual reality video for on-site spectators; through acquisition by the plurality of video acquisition devices and distribution by the server, it meets spectators' needs to watch the event from different viewing angles and improves the on-site viewing experience.
Embodiment two:
This embodiment provides another virtual reality video processing method for an event site, implemented on the basis of the foregoing embodiment. As shown in the flowchart of fig. 2, the virtual reality video processing method for an event site in this embodiment includes the following steps:
step S202, wide-angle lenses of a plurality of video collecting devices collect super wide-angle foreground videos of the event site as video data.
To produce virtual reality video, the video acquisition device of this embodiment includes a wide-angle lens; a monocular spherical fisheye VR camera (with an effective viewing angle of no less than 180 degrees) may be used.
Referring to the schematic diagram of a virtual reality video processing system shown in fig. 3 and the schematic diagram of its transmission link shown in fig. 4, the virtual reality video processing system further includes a signal base station, and the video acquisition devices, the signal base station and the server are connected in sequence. The plurality of video acquisition devices acquire video data of the event site and encode the video data; the video acquisition devices send the encoded video data to the signal base station; and the signal base station sends the encoded video data to the server.
As shown in fig. 3 and 4, the virtual reality video processing system further includes a switch, and the video acquisition devices, the switch and the signal base station are connected in sequence. The plurality of video acquisition devices send the encoded video data to the switch, and the switch sends the encoded video data to the signal base station.
After shooting, the video acquisition device can complete video encoding and RTMP stream pushing internally and is connected to the signal base station through the switch over a network cable. The signal base station may be a 5G small-cell unit (5G CPE); the 5G CPE provides an on-site 5G uplink network that carries the VR signal (i.e., the encoded video data) over the 5G network up to the server, where the edge computing node performs stitching, video transcoding, distribution and other operations.
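As an illustration of the encode-and-push step described above, the sketch below encodes a camera feed and pushes it as an RTMP stream toward the switch and 5G CPE uplink, which is what the acquisition device is described as doing internally. It is a minimal sketch assuming ffmpeg and a V4L2 camera device; the device path, ingest URL and encoder settings are assumptions, not taken from the patent.

```python
import subprocess

def push_fisheye_stream(camera_device: str, rtmp_ingest_url: str) -> subprocess.Popen:
    """Encode a fisheye camera feed and push it as an RTMP stream toward the
    edge node. Device path and URL are illustrative assumptions."""
    cmd = [
        "ffmpeg",
        "-f", "v4l2", "-i", camera_device,   # raw frames from the fisheye VR camera
        "-c:v", "libx264",                   # encode on the acquisition side
        "-preset", "ultrafast", "-tune", "zerolatency",
        "-g", "30",                          # short GOP so the edge node can join quickly
        "-f", "flv",                         # RTMP uses the FLV container
        rtmp_ingest_url,
    ]
    return subprocess.Popen(cmd)

# Example with hypothetical addresses:
# push_fisheye_stream("/dev/video0", "rtmp://edge-node.venue.local/live/cam01")
```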
Step S204: the edge computing node of the server generates a virtual reality video based on the plurality of video data and the pre-stored virtual background of the event site, and distributes the virtual reality video to the user terminals.
The server of this embodiment may be computing server equipment deployed at the near edge of the venue to handle the links required for VR content production, such as decoding, stream receiving, distortion correction, stitching, video rendering, video transcoding, stream pushing, and distribution. As shown in fig. 3, the server of this embodiment includes a distortion server, a control server, and a packaging server.
The captured video data can be ultra-wide-angle foreground video (i.e., 180-degree VR monocular spherical fisheye video). After the video data reaches the edge computing node of the server, the edge computing node completes virtual background fusion (the 180-degree VR monocular spherical fisheye video is fused into a preset virtual viewing background to form a complete 360-degree VR video), followed by media processing steps such as transcoding.
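The sketch below shows one way such a fusion step could look on the edge node: the fisheye foreground is reprojected into the front hemisphere of an equirectangular frame and composited over a pre-rendered 360-degree virtual background. This is only a sketch under simplified assumptions (an ideal equidistant fisheye model centered in the input image, OpenCV available on the node); the patent does not specify the projection or blending actually used.

```python
import cv2
import numpy as np

def fuse_fisheye_into_background(fisheye_bgr: np.ndarray,
                                 background_equirect: np.ndarray) -> np.ndarray:
    """Reproject a 180-degree fisheye frame into the front half of an
    equirectangular panorama and composite it over the virtual background."""
    out_h, out_w = background_equirect.shape[:2]
    # Longitude/latitude grid of the output panorama.
    lon = (np.linspace(0, out_w - 1, out_w) / out_w - 0.5) * 2 * np.pi   # -pi .. pi
    lat = (0.5 - np.linspace(0, out_h - 1, out_h) / out_h) * np.pi        # pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)

    # Direction vectors; the fisheye camera looks down +z (the front hemisphere).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    theta = np.arccos(np.clip(z, -1.0, 1.0))             # angle from the optical axis
    phi = np.arctan2(y, x)
    fh, fw = fisheye_bgr.shape[:2]
    r = (theta / (np.pi / 2)) * (min(fh, fw) / 2)         # equidistant model, 180-deg FOV
    map_x = (fw / 2 + r * np.cos(phi)).astype(np.float32)
    map_y = (fh / 2 - r * np.sin(phi)).astype(np.float32)

    foreground = cv2.remap(fisheye_bgr, map_x, map_y,
                           interpolation=cv2.INTER_LINEAR,
                           borderMode=cv2.BORDER_CONSTANT)
    # Only the front hemisphere comes from the camera; keep the background elsewhere.
    front_mask = (theta <= np.pi / 2)[..., None]
    return np.where(front_mask, foreground, background_equirect)
```

Running this per frame over the decoded RTMP stream would yield the complete 360-degree VR frames that are then handed to the transcoding stage.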
Specifically, the edge computing node of the server can transcode the virtual reality video based on the video format of the user terminal. After this processing, the VR video is transcoded on the edge computing node for the 5G mobile terminal (i.e., the user terminal), and the live streaming-media distribution is completed.
A user terminal (for example, a 5G mobile phone) in the venue area with the specified APP (application) installed can then watch the VR video in real time in the VR module of the APP.
In addition, as shown in fig. 3, the virtual reality video processing system further includes a tuning device, which is communicatively connected with the server. The tuning device acquires sound data and sends it to the server; the edge computing node of the server then generates the virtual reality video based on the sound data, the plurality of video data, and the virtual background of the event site.
That is, the sound data in the virtual reality video can come from the tuning device, and the edge computing node of the server can generate the virtual reality video from the sound data, the plurality of video data, and the virtual background of the event site.
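As a small illustration of how the sound feed from the tuning device could be combined with the composited video on the edge node, the sketch below muxes an audio stream into the VR video stream with ffmpeg. The ingest and output URLs are assumptions; the patent only states that the edge node generates the VR video from the sound data, the video data and the virtual background.

```python
import subprocess

def mux_audio_with_vr_video(vr_video_url: str, audio_url: str,
                            output_url: str) -> subprocess.Popen:
    """Combine the tuning device's audio feed with the composited VR video.
    All URLs are illustrative assumptions."""
    cmd = [
        "ffmpeg",
        "-i", vr_video_url,      # composited 360-degree VR video
        "-i", audio_url,         # sound feed pushed by the tuning device
        "-map", "0:v:0", "-map", "1:a:0",
        "-c:v", "copy",          # video is already encoded; only remux it
        "-c:a", "aac", "-b:a", "192k",
        "-f", "flv", output_url,
    ]
    return subprocess.Popen(cmd)
```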
In addition, the method further comprises: checking the network state of the virtual reality video processing system; and if the network state of the virtual reality video processing system is a normal state, starting an edge computing node of the server.
Referring to fig. 5, which shows a schematic diagram of the virtual reality video processing method for an event site, device deployment and system installation are debugged as follows: first, network parameters are configured for the acquisition and encoding devices; the network state of the devices is then checked, and if it is abnormal, the network configuration is re-examined and the network parameters are configured again.
If the network state is normal, the streaming-media service of the server's edge computing node is started. The encoding and decoding devices are configured with a stream-pushing task, and the working state of the encoding devices and of the corresponding streams of the streaming-media service is checked; if it is abnormal, the stream-pushing configuration is rechecked, and if it is normal, multi-camera live-stream information is configured in the application backend. The user can then open the live interface in the mobile phone client and watch the multi-camera live stream.
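The sketch below captures this start-up check as a small piece of control logic: it verifies that each acquisition and encoding device responds on the network before starting the edge node's streaming-media service, and retries otherwise. The hostnames, ports and service command are assumptions for illustration only.

```python
import socket
import subprocess
import time

def device_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Basic reachability check for an acquisition or encoding device."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def start_edge_streaming_if_network_ok(devices: list[tuple[str, int]],
                                       start_cmd: list[str],
                                       retries: int = 3) -> bool:
    """Check the network state of all devices; start the streaming-media
    service on the edge node only when every device is reachable."""
    for _ in range(retries):
        if all(device_reachable(host, port) for host, port in devices):
            subprocess.Popen(start_cmd)      # e.g. the edge node's media server
            return True
        time.sleep(5)                        # allow the configuration to be fixed, then re-check
    return False

# Example with hypothetical addresses and command:
# ok = start_edge_streaming_if_network_ok(
#     [("camera01.venue.local", 1935), ("encoder01.venue.local", 554)],
#     ["systemctl", "start", "edge-streaming.service"],
# )
```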
In working links that are time-sensitive and can be produced automatically, such as transcoding and distribution of video formats, the method provided by the embodiment of the invention can deploy the production capability on the edge server side. By making unified use of the edge computing service, multi-camera encoding and transcoding links, as well as the configuration and association management between a specific venue and the terminals, can be set up quickly, providing end-to-end content presentation over a very short link to 5G mobile terminals on site. The server can generate broadcast content quickly, improving content timeliness, compressing the production process, reducing production cost, and improving production quality and content richness.
Embodiment three:
Corresponding to the method embodiment, an embodiment of the invention provides a virtual reality video processing apparatus for an event site, applied to a virtual reality video processing system for the event site. Referring to fig. 6, which shows its structure, the virtual reality video processing apparatus for an event site includes:
the video data acquisition module 61 is used for acquiring video data of an event site by a plurality of video acquisition devices and sending the video data to the server;
and a virtual reality video generation module 62, configured to generate a virtual reality video based on the plurality of video data and a pre-stored virtual background of the event site by the edge computing node of the server, and distribute the virtual reality video to the user terminal.
According to the virtual reality video processing apparatus for an event site provided by the embodiment of the invention, a plurality of video acquisition devices acquire video data of the event site, and the edge computing node of the server generates a virtual reality video based on the plurality of video data and a pre-stored virtual background of the event site and distributes it to the user terminals. The apparatus can provide virtual reality video for on-site spectators; through acquisition by the plurality of video acquisition devices and distribution by the server, it meets spectators' needs to watch the event from different viewing angles and improves the on-site viewing experience.
The video acquisition equipment comprises a wide-angle lens; the video data acquisition module is used for acquiring super wide-angle foreground videos of the event site by the wide-angle lenses of the video acquisition devices as video data.
The virtual reality video processing system also comprises a signal base station, and the video acquisition equipment, the signal base station and the server are sequentially connected; the video data acquisition module is used for acquiring video data of an event site by a plurality of video acquisition devices and carrying out coding operation on the video data; the video acquisition equipment sends the coded video data to a signal base station; and the signal base station transmits the coded video data to a server.
The virtual reality video processing system further includes a switch, and the video acquisition devices, the switch and the signal base station are connected in sequence; the video data acquisition module is used for the plurality of video acquisition devices to send the encoded video data to the switch, and for the switch to send the encoded video data to the signal base station.
The device further comprises a virtual reality video transcoding module, wherein the virtual reality video transcoding module is used for transcoding the virtual reality video by the edge computing node of the server based on the video format of the user terminal.
The virtual reality video processing system further includes a tuning device, which is communicatively connected with the server; the apparatus further includes a sound data processing module, used for the tuning device to acquire sound data and send the sound data to the server, and for the edge computing node of the server to generate the virtual reality video based on the sound data, the plurality of video data, and the virtual background of the event site.
The device also comprises a network state checking module used for checking the network state of the virtual reality video processing system; and if the network state of the virtual reality video processing system is a normal state, starting an edge computing node of the server.
The server comprises a distortion server, a control server and a packaging server.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the virtual reality video processing apparatus of the event site described above may refer to the corresponding process in the embodiment of the virtual reality video processing method of the event site, and will not be described herein again.
Embodiment four:
An embodiment of the invention further provides an electronic device for running the virtual reality video processing method for an event site. Referring to fig. 7, the electronic device includes a memory 100 and a processor 101, where the memory 100 is used to store one or more computer instructions, and the one or more computer instructions are executed by the processor 101 to implement the virtual reality video processing method for an event site.
Further, the electronic device shown in fig. 7 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a random access memory (RAM) and a non-volatile memory, for example at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network and the like can be used. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 7, but this does not mean that there is only one bus or one type of bus.
The processor 101 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be carried out by integrated logic circuits of hardware or by instructions in the form of software in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and, in combination with its hardware, completes the steps of the method of the foregoing embodiments.
The embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to implement the virtual reality video processing method for the event site, where specific implementation may refer to method embodiments, and details are not described herein.
The method, the apparatus, and the computer program product for processing a virtual reality video of an event site provided in the embodiments of the present invention include a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method in the foregoing method embodiments, and specific implementations may refer to the method embodiments, which are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and/or the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; the connection may be mechanical or electrical; it may be direct or indirect through an intermediate medium, or it may be an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art on a case-by-case basis.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A virtual reality video processing method for an event site, characterized by being applied to a virtual reality video processing system of the event site, wherein the virtual reality video processing system comprises a video acquisition device and a server which are sequentially in communication connection, the server is in communication connection with an external user terminal, and an edge computing node is deployed in the server; the method comprises the following steps:
the plurality of video acquisition devices acquire video data of the event site and send the video data to the server;
and the edge computing node of the server generates a virtual reality video based on the plurality of video data and the pre-stored virtual background of the event scene, and distributes the virtual reality video to the user terminal.
2. The method of claim 1, wherein the video capture device comprises a wide-angle lens; the step of collecting the video data of the event scene by a plurality of the video collecting devices comprises the following steps:
and a plurality of wide-angle lenses of the video acquisition equipment acquire the ultra-wide-angle foreground video of the event scene as video data.
3. The method according to claim 1, wherein the virtual reality video processing system further comprises a signal base station, and the video acquisition device, the signal base station and the server are connected in sequence; the step of collecting the video data of the event site by a plurality of video collecting devices and sending the video data to the server comprises the following steps:
the video acquisition devices acquire video data of the event sites and perform encoding operation on the video data;
the video acquisition devices send the encoded video data to the signal base station;
and the signal base station sends the coded video data to the server.
4. The method according to claim 3, wherein the virtual reality video processing system further comprises a switch, and the video acquisition device, the switch and the signal base station are connected in sequence; the step of sending the encoded video data to the signal base station by the plurality of video capture devices includes:
the video acquisition devices send the encoded video data to the switch;
and the switch sends the encoded video data to the signal base station.
5. The method of claim 1, wherein after the step of the edge computing node of the server generating a virtual reality video based on the plurality of video data and a pre-stored virtual background of the event venue, the method further comprises:
and transcoding the virtual reality video based on the video format of the user terminal by the edge computing node of the server.
6. The method of claim 1, wherein the virtual reality video processing system further comprises a tuning device communicatively connected to the server; the method further comprises the following steps:
the tuning equipment acquires sound data and sends the sound data to the server;
an edge computing node of the server generates the virtual reality video based on the sound data, the plurality of video data, and a virtual background of the event venue.
7. The method according to any one of claims 1-6, further comprising:
checking a network status of the virtual reality video processing system;
and if the network state of the virtual reality video processing system is a normal state, starting the edge computing node of the server.
8. The method of any one of claims 1-6, wherein the server includes a distortion server, a control server, and a packaging server.
9. A virtual reality video processing apparatus for an event site, characterized by being applied to a virtual reality video processing system of the event site, wherein the virtual reality video processing system comprises a video acquisition device and a server which are sequentially in communication connection, the server is in communication connection with an external user terminal, and an edge computing node is deployed in the server; the apparatus comprises:
the video data acquisition module, used for the plurality of video acquisition devices to acquire video data of the event site and send the video data to the server;
and the virtual reality video generation module is used for generating a virtual reality video by the edge computing node of the server based on the video data and the pre-stored virtual background of the event scene and distributing the virtual reality video to the user terminal.
10. An electronic device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the method of virtual reality video processing at a scene of an event according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210350453.1A CN114710682A (en) | 2022-04-02 | 2022-04-02 | Virtual reality video processing method and device for event site and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210350453.1A CN114710682A (en) | 2022-04-02 | 2022-04-02 | Virtual reality video processing method and device for event site and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114710682A true CN114710682A (en) | 2022-07-05 |
Family
ID=82172228
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210350453.1A Pending CN114710682A (en) | 2022-04-02 | 2022-04-02 | Virtual reality video processing method and device for event site and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114710682A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116931737A (en) * | 2023-08-03 | 2023-10-24 | 重庆康建光电科技有限公司 | System and method for realizing virtual reality interaction between person and scene |
CN117809001A (en) * | 2024-02-28 | 2024-04-02 | 深圳市广通软件有限公司 | VR-based stadium management event viewing method, device and equipment |
CN117908684A (en) * | 2024-03-20 | 2024-04-19 | 南昌大学 | Virtual reality implementation method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170155695A1 (en) * | 2015-11-26 | 2017-06-01 | Le Holdings (Beijing) Co., Ltd. | Method, device and system for uploading live video |
CN110267058A (en) * | 2019-07-18 | 2019-09-20 | 世纪龙信息网络有限责任公司 | Live broadcasting method, gateway, device clusters, system and device |
CN110266664A (en) * | 2019-06-05 | 2019-09-20 | 中国联合网络通信有限公司广州市分公司 | A kind of Cloud VR video living transmission system based on 5G and MEC |
CN111416989A (en) * | 2020-04-28 | 2020-07-14 | 北京金山云网络技术有限公司 | Video live broadcast method and system and electronic equipment |
CN214959899U (en) * | 2021-07-13 | 2021-11-30 | 西安星舟志屹智能科技有限公司 | Vehicle-mounted panoramic splicing system suitable for large vehicle |
CN113727144A (en) * | 2021-09-02 | 2021-11-30 | 中国联合网络通信集团有限公司 | High-definition live broadcast system and streaming media method based on mixed cloud |
- 2022-04-02 CN CN202210350453.1A patent/CN114710682A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170155695A1 (en) * | 2015-11-26 | 2017-06-01 | Le Holdings (Beijing) Co., Ltd. | Method, device and system for uploading live video |
CN110266664A (en) * | 2019-06-05 | 2019-09-20 | 中国联合网络通信有限公司广州市分公司 | A kind of Cloud VR video living transmission system based on 5G and MEC |
CN110267058A (en) * | 2019-07-18 | 2019-09-20 | 世纪龙信息网络有限责任公司 | Live broadcasting method, gateway, device clusters, system and device |
CN111416989A (en) * | 2020-04-28 | 2020-07-14 | 北京金山云网络技术有限公司 | Video live broadcast method and system and electronic equipment |
CN214959899U (en) * | 2021-07-13 | 2021-11-30 | 西安星舟志屹智能科技有限公司 | Vehicle-mounted panoramic splicing system suitable for large vehicle |
CN113727144A (en) * | 2021-09-02 | 2021-11-30 | 中国联合网络通信集团有限公司 | High-definition live broadcast system and streaming media method based on mixed cloud |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116931737A (en) * | 2023-08-03 | 2023-10-24 | 重庆康建光电科技有限公司 | System and method for realizing virtual reality interaction between person and scene |
CN117809001A (en) * | 2024-02-28 | 2024-04-02 | 深圳市广通软件有限公司 | VR-based stadium management event viewing method, device and equipment |
CN117908684A (en) * | 2024-03-20 | 2024-04-19 | 南昌大学 | Virtual reality implementation method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114710682A (en) | Virtual reality video processing method and device for event site and electronic equipment | |
US9635252B2 (en) | Live panoramic image capture and distribution | |
US6839080B2 (en) | Remote server switching of video streams | |
CN106803974A (en) | The real-time retransmission method of live video stream | |
CN112601097B (en) | Double-coding cloud broadcasting method and system | |
CN101682727B (en) | Media channel switching | |
CN112601033B (en) | Cloud rebroadcasting system and method | |
CN102342066A (en) | REAL-TIME MULTI-MEDIA STREAMING processing BANDWIDTH MANAGEMENT | |
CN112019927A (en) | Video live broadcast method, microphone connecting equipment, RTC media server and main broadcast equipment | |
CN105978926A (en) | Data transmission method and device | |
CN113452935B (en) | Horizontal screen and vertical screen live video generation system and method | |
WO2017193830A1 (en) | Video switching method, device and system, and storage medium | |
CN111447503A (en) | Viewpoint switching method, server and system for multi-viewpoint video | |
CN106209824A (en) | The cloud edit methods of data, system and the client of cloud editor | |
WO2019048733A1 (en) | Transmission of video content based on feedback | |
KR20170081517A (en) | Server and method for providing interactive broadcast | |
CN109788366A (en) | A kind of 3D interaction live broadcast system | |
CN114666610A (en) | Video processing method and device for event site and electronic equipment | |
CN104093089B (en) | Cinema program live telecasting system and method | |
WO2014012384A1 (en) | Communication data transmitting method, system and receiving device | |
KR101957807B1 (en) | Method and system of audio retransmition for social network service live broadcasting of multi-people points | |
CN112738609A (en) | Multi-channel video stream transmission method and device and multi-channel video stream playing system | |
CN115706812A (en) | Program processing system, method, device, equipment and storage medium | |
Prins et al. | A hybrid architecture for delivery of panoramic video | |
CN114071193B (en) | Video data processing method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||