CN114827647B - Live broadcast data generation method, device, equipment, medium and program product - Google Patents

Live broadcast data generation method, device, equipment, medium and program product

Info

Publication number
CN114827647B
Authority
CN
China
Prior art keywords
video data
data
audio
acquisition equipment
live
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210399727.6A
Other languages
Chinese (zh)
Other versions
CN114827647A
Inventor
石启铮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210399727.6A
Publication of CN114827647A
Application granted
Publication of CN114827647B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233 Processing of audio elementary streams
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418 Analysing video streams, e.g. detecting features or characteristics
    • H04N21/23424 Splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H04N21/4394 Analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016 Splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The disclosure provides a live broadcast data generation method, apparatus, device, medium and program product, relating to the field of computer technology and in particular to Internet live streaming. The specific implementation scheme is as follows: in response to receiving shooting view angle information, obtain venue audio and video data matching the shooting view angle information; obtain user audio and video data associated with a live user; and generate live broadcast data based on the venue audio and video data and the user audio and video data. This technical scheme enables real-time remote live broadcasting and avoids voice interference between streamers.

Description

Live broadcast data generation method, device, equipment, medium and program product
Technical Field
The disclosure relates to the field of computer technology, in particular to Internet live streaming, and specifically to a live broadcast data generation method, apparatus, device, medium and program product.
Background
With the continuous development of Internet technology, more and more Internet applications appear in people's lives. For example, a growing number of users watch online audio and video programs offered by streamers through live broadcast applications.
To increase viewership, many streamers use mobile phones or other mobile devices to broadcast from commercial districts, tourist areas and similar locations, where voice interference between streamers readily occurs. How to avoid such voice interference is therefore important for improving the quality of live broadcast data.
Disclosure of Invention
The present disclosure provides a live data generation method, apparatus, device, medium, and program product.
According to an aspect of the present disclosure, there is provided a live broadcast data generating method, including:
in response to receiving shooting view angle information, obtaining venue audio and video data matching the shooting view angle information;
acquiring user audio and video data associated with a live user;
and generating live broadcast data based on the venue audio and video data and the user audio and video data.
According to another aspect of the present disclosure, there is provided a live data generation apparatus, including:
a venue data acquisition module, configured to obtain, in response to receiving shooting view angle information, venue audio and video data matching the shooting view angle information;
a user data acquisition module, configured to acquire user audio and video data associated with the live user;
and a live broadcast data generation module, configured to generate live broadcast data based on the venue audio and video data and the user audio and video data.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the live data generation method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the live data generation method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the live data generation method of any of the embodiments of the present disclosure.
Embodiments of the disclosure enable real-time remote live broadcasting and avoid voice interference between streamers.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic diagram of a live data generation method according to an embodiment of the present disclosure;
fig. 2a is a schematic diagram of a live data generation method according to an embodiment of the present disclosure;
FIG. 2b is a schematic diagram of a remote acquisition device provided in accordance with an embodiment of the present disclosure;
FIG. 2c is a schematic diagram of an azimuthal relationship of a remote acquisition device provided in accordance with an embodiment of the present disclosure;
fig. 3a is a schematic diagram of a live data generation method according to an embodiment of the present disclosure;
FIG. 3b is a schematic diagram of an azimuthal relationship of a remote acquisition device provided in accordance with an embodiment of the present disclosure;
fig. 4a is a flowchart of a live data generation method provided in accordance with an embodiment of the present disclosure;
fig. 4b is an architecture diagram of a live data generation system provided in accordance with an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a live data generation apparatus provided according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device used to implement a live data generation method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding; these should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
Fig. 1 is a flowchart of a live broadcast data generation method according to an embodiment of the present disclosure. The embodiment is applicable to performing remote live broadcasting with remote acquisition devices, and the method may be applied to a cloud mobile phone device.
The method of this embodiment may be executed by a live broadcast data generation apparatus, which may be implemented in software and/or hardware and configured in an electronic device with some data processing capability. The electronic device may be a client device or a server device; the client device may be, for example, a mobile phone, a tablet computer, a vehicle-mounted terminal or a desktop computer. Referring to Fig. 1, the method specifically includes the following steps:
s110, in response to receiving the shooting visual angle information, obtaining field audio and video data matched with the shooting visual angle information.
In order to realize remote live broadcast, one or more remote acquisition devices are required to be deployed in a live broadcast site, and audio and video data of the live broadcast site are acquired through the remote acquisition devices. The method comprises the steps of setting equipment arrangement schemes according to performances of remote acquisition equipment, and arranging the plurality of audio and video acquisition equipment in the live broadcast field according to the equipment arrangement schemes.
When the remote live broadcast is required, according to shooting visual angle information conforming to the live broadcast requirement, selectively acquiring the audio and video data acquired by one or more remote acquisition devices, and further combining the audio and video data acquired by the remote acquisition devices with the audio and video data of a live broadcast user to acquire live broadcast data, wherein the live broadcast user can leave the home and acquire live broadcast data about a live broadcast site.
The shooting visual angle information is used for limiting the shooting visual angle of the live broadcast picture, and can comprise the identification of the target acquisition equipment and the target acquisition angle associated with the target acquisition equipment. Illustratively, 3 remote collection devices, device 1, device 2 and device 3, are deployed in the live venue, and the deployment locations of the 3 remote collection devices are different. The shooting angle of view information may contain the device 2 and a 30 degree shooting angle of view, i.e. video data that the currently live user needs to acquire the device 2 to shoot at the 30 degree shooting angle of view.
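As a minimal sketch, the shooting view angle information in this example can be modeled as a small record naming the target acquisition device and its associated angle; the type and field names below are illustrative, not defined by the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShootingViewAngle:
    """Shooting view angle information: the identifier of the target
    acquisition device and the target acquisition angle associated
    with it (degrees clockwise from the device's due-north 0 degrees)."""
    device_id: str
    angle_deg: float

# Three remote acquisition devices deployed at different positions
# in the live venue, as in the example above.
deployed = {"device-1", "device-2", "device-3"}

# The live user asks for the video that device 2 shoots at a
# 30-degree shooting view angle.
info = ShootingViewAngle(device_id="device-2", angle_deg=30.0)

# Locate the target acquisition device named by the information.
target = info.device_id if info.device_id in deployed else None
```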
The venue audio and video data include venue audio data and venue video data. For example, if the live venue is scenic spot A, the venue audio and video data are the audio data in scenic spot A and the video data at the set shooting view angle in scenic spot A, both collected by the remote acquisition devices.
In the embodiment of the disclosure, after the shooting view angle information sent by the live user is received, the venue audio and video data matching it can be obtained. Specifically, the shooting view angle information may be sent to a data forwarding device responsible for forwarding audio and video data, instructing it to provide the matching audio and video data. The data forwarding device establishes connections with the plurality of remote acquisition devices in advance and obtains the audio and video data collected by each of them; an example of such a device is a data forwarding server.
Alternatively, after the shooting view angle information is obtained, the target acquisition device required by the current broadcast is located among the plurality of remote acquisition devices according to that information, a connection is established with it, and the audio and video data it collects are obtained directly. The collected audio and video data may be used as the venue audio and video data as-is, or processed first, with the processed data used as the venue audio and video data. For example, noise reduction may be applied to the voice data in the audio and video data, or video at a specific angle may be extracted from the video data.
In a specific example, a plurality of remote acquisition devices are deployed at the live venue in advance and, when switched on, send the collected audio and video data to the data forwarding device in real time. The live user operates a cloud mobile phone client through a local terminal to control the cloud mobile phone device to establish a connection with the data forwarding device. When the live user needs to broadcast, shooting view angle information is sent to the cloud mobile phone device through the cloud mobile phone client, and the cloud mobile phone device obtains the matching venue audio and video data from the data forwarding device through a virtual camera and microphone driver. The shooting view angle information may include the identifier of the target acquisition device and a target acquisition angle associated with it.
In another specific example, a plurality of remote acquisition devices deployed at the live venue in advance collect its audio and video data in real time. The live user operates the cloud mobile phone client through the local terminal to control the cloud mobile phone device. When the live user needs to broadcast, shooting view angle information can be sent to the cloud mobile phone device through the cloud mobile phone client. The cloud mobile phone device parses the shooting view angle information to determine the matching target acquisition device, then connects to that device, obtains the audio and video data it collects, and uses the data directly as the venue audio and video data.
S120, acquiring user audio and video data associated with the live user.
The user audio and video data include user audio data and user video data, and may be data containing the live user's voice and body image collected through the camera and microphone of the user terminal.
To enable remote live broadcasting, the audio and video data of the live user must be acquired in addition to the venue audio and video data, so as to achieve the effect of the live user personally shooting at the live venue. Specifically, the live user can set up a user terminal anywhere, such as at home or in a studio, and collect the user audio data and user video data through it. The user audio data and user video data are then compressed and the compressed data sent to the cloud mobile phone device, so that it generates the live broadcast data based on the venue audio and video data and the user audio and video data.
In addition, to facilitate superposing the venue video data and the user video data, a backdrop, e.g. a green curtain, may be hung behind the live user. After the user terminal collects the original video data, the body image of the live user is extracted from it to obtain the user video data.
In a specific example, the live user collects user audio data with a local microphone and captures original video data with a local camera, then extracts the body image of the live user from the original video data to obtain the user video data. The user audio data and user video data are encoded with H.264 or H.265, and the encoded user audio and video data are sent to the cloud mobile phone device so that it generates the live broadcast data based on the venue audio and video data and the user audio and video data.
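The green-curtain body extraction step can be sketched as a naive per-pixel chroma key; real systems would use a proper matting model, and the `margin` threshold below is an illustrative assumption:

```python
import numpy as np

def extract_foreground(frame: np.ndarray, margin: int = 40) -> np.ndarray:
    """Return an alpha mask (1 = keep pixel) for an RGB uint8 frame
    shot in front of a green curtain: a pixel is treated as backdrop
    when its green channel dominates red and blue by `margin`."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    background = (g - np.maximum(r, b)) > margin
    return (~background).astype(np.uint8)

# Synthetic 2x2 frame: two green-screen pixels, two "body" pixels.
frame = np.array([[[0, 255, 0], [200, 180, 160]],
                  [[90, 60, 50], [10, 240, 20]]], dtype=np.uint8)
mask = extract_foreground(frame)
```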
S130, generate live broadcast data based on the venue audio and video data and the user audio and video data.
After the venue audio and video data and the user audio and video data are acquired, they are superposed to synthesize the live broadcast data, achieving the effect of the live user personally broadcasting from the live venue. Specifically, the venue audio data in the venue audio and video data may be superposed with the user audio data in the user audio and video data, according to the times at which the two were acquired, to obtain live audio data; the venue video data are superposed with the user video data to obtain live video data; and the live audio data and live video data finally form the live broadcast data.
Alternatively, the venue audio data may be superposed with the user audio data, and the venue video data with the user video data, according to the venue data acquisition timestamps contained in the venue audio and video data and the user data acquisition timestamps contained in the user audio and video data, to obtain the live broadcast data.
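The timestamp-based superposition can be sketched with a toy model in which each venue sample is paired with the user sample whose acquisition timestamp is nearest; actual audio mixing and video compositing are omitted:

```python
def superpose(venue, user):
    """Pair each (timestamp, payload) venue sample with the user
    sample whose acquisition timestamp is nearest, yielding the
    aligned pairs from which live data would be composited."""
    paired = []
    for ts, v in venue:
        nearest = min(user, key=lambda s: abs(s[0] - ts))
        paired.append((ts, v, nearest[1]))
    return paired

# Venue frames at 25 fps and user frames offset by 10 ms.
venue_stream = [(0.00, "venue-f0"), (0.04, "venue-f1"), (0.08, "venue-f2")]
user_stream = [(0.01, "user-f0"), (0.05, "user-f1"), (0.09, "user-f2")]
live = superpose(venue_stream, user_stream)
```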
In the technical scheme of this embodiment, venue audio and video data matching the received shooting view angle information are obtained, user audio and video data associated with the live user are then obtained, and live broadcast data are finally generated from the two. The shooting view angle can thus be changed flexibly during remote live broadcasting, improving the flexibility of remote live shooting.
Fig. 2a is a schematic diagram of a live broadcast data generation method according to an embodiment of the present disclosure, which refines the foregoing embodiment and provides specific steps for acquiring the venue audio and video data matching the shooting view angle information. The technical solution in this embodiment may be combined with the alternatives in one or more of the embodiments described above. The live broadcast data generation method provided by this embodiment is described below with reference to Fig. 2a and includes the following steps:
S210, in response to receiving the shooting view angle information, determine the target acquisition device among at least one remote acquisition device according to the identifier of the target acquisition device in the shooting view angle information.
In the embodiment of the disclosure, after the shooting view angle information is received, the target acquisition device is determined among the plurality of remote acquisition devices deployed at the live venue according to the identifier of the target acquisition device contained in the information. The target acquisition device can be selected by the live user according to the deployment position of each remote acquisition device in the live venue and the view angle required by the current broadcast.
In a specific example, the live venue is a food street, street lamps are installed at 5-meter intervals on both sides of the street, and a remote acquisition device is deployed on each lamp post. The live user can determine the shooting view angle information according to the position of each remote acquisition device and the view angle currently required, the information including the identifier of the target acquisition device matching that view angle. After receiving the shooting view angle information, the cloud mobile phone device determines the target acquisition device among the remote acquisition devices deployed in the food street according to the identifier it contains.
S220, acquire the venue audio data and panoramic video data collected by the target acquisition device.
As shown in Fig. 2b, the remote acquisition device may include a panoramic camera module, a microphone module, a data processing module and a network module. The panoramic camera module collects panoramic video data of the venue where the device is located; the microphone module collects the venue audio data; the data processing module compression-encodes the data collected by the panoramic camera module and the microphone module; and the network module sends the encoded data to the cloud mobile phone device or the data forwarding device.
In the embodiment of the disclosure, after the target acquisition device is determined among the at least one remote acquisition device, the venue audio data and panoramic video data it collects are acquired. Specifically, the identifier of the target acquisition device may be sent to the data forwarding device, so that the venue audio data and panoramic video data collected by the target acquisition device are obtained from the data forwarding device. Alternatively, a connection may be established directly with the target acquisition device to obtain the venue audio data and panoramic video data it collects.
In a specific example, the live venue is again the food street with a remote acquisition device deployed on each lamp post. After the target acquisition device is determined to be the remote acquisition device deployed on the 2nd street lamp of the food street, its identifier can be sent to the data forwarding device, which searches the data sent by the plurality of remote acquisition devices for the data sent by the target acquisition device and forwards the venue audio data and panoramic video data associated with it to the cloud mobile phone device in real time. The panoramic video data are collected by the panoramic camera module of the target acquisition device.
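The forwarding step can be sketched as a lookup keyed by the device identifier; this toy in-memory model stands in for the real networked data forwarding device:

```python
class DataForwarder:
    """Toy data forwarding device: remote acquisition devices push
    their latest (audio, panorama) payloads; the cloud mobile phone
    device pulls them by device identifier."""
    def __init__(self):
        self._latest = {}

    def push(self, device_id: str, audio, panorama):
        # Called by each remote acquisition device in real time.
        self._latest[device_id] = (audio, panorama)

    def pull(self, device_id: str):
        # Called by the cloud mobile phone device for the target device.
        return self._latest[device_id]

fwd = DataForwarder()
fwd.push("lamp-1", "audio-1", "pano-1")
fwd.push("lamp-2", "audio-2", "pano-2")
audio, pano = fwd.pull("lamp-2")  # target device on the 2nd street lamp
```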
S230, extract the venue video data from the panoramic video data according to the target shooting view angle in the shooting view angle information, and form the venue audio and video data from the venue audio data and the venue video data.
In the embodiment of the disclosure, besides identifying the target acquisition device, the shooting view angle information includes a target shooting view angle associated with that device. After the venue audio data and panoramic video data collected by the target acquisition device are acquired, the venue video data matching the target shooting view angle are extracted from the panoramic video data. Finally, the venue audio data and venue video data form the venue audio and video data. The live user can freely set the target acquisition device and target shooting view angle according to the broadcast requirement, achieving the effect of shooting on site while avoiding the voice interference caused by live users gathering in one place.
In a specific example, the shooting view angle information includes the identifier of the target acquisition device and an associated target shooting angle, here a horizontal view angle of 20 degrees. Local video data at the 20-degree horizontal view angle are extracted from the panoramic video data and determined as the venue video data. The 20-degree horizontal view angle may be the view angle rotated 20 degrees clockwise from the due-north direction of the target acquisition device, which is taken as 0 degrees.
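Extracting the venue video for a given horizontal view angle from panoramic video can be sketched per frame as below, assuming an equirectangular layout whose columns span 0–360 degrees clockwise from due north; the 90-degree field of view is an illustrative assumption:

```python
import numpy as np

def crop_view(pano: np.ndarray, center_deg: float, fov_deg: float = 90.0) -> np.ndarray:
    """Cut the columns of an equirectangular panorama frame covering
    [center - fov/2, center + fov/2] degrees, wrapping at 360."""
    h, w = pano.shape[:2]
    deg_per_col = 360.0 / w
    start = int(round((center_deg - fov_deg / 2) / deg_per_col)) % w
    ncols = int(round(fov_deg / deg_per_col))
    cols = [(start + i) % w for i in range(ncols)]
    return pano[:, cols]

# Toy panorama: 4 rows x 360 columns, column index == azimuth degree.
pano = np.tile(np.arange(360), (4, 1))
view = crop_view(pano, center_deg=20.0, fov_deg=90.0)
```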
Optionally, this embodiment further includes:
determining at least one neighboring acquisition device whose distance to the target acquisition device is within a set range;
determining the view angle difference between the target acquisition device and each neighboring acquisition device according to the target shooting view angle and the azimuth relation between the neighboring acquisition device and the target acquisition device;
and determining the acquisition device(s) to be switched to among the at least one neighboring acquisition device according to the view angle difference, and sending the identifier of each such device to the user terminal to instruct it to provide a switch-acquisition-device option associated with that device.
In this optional embodiment, when acquiring the audio and video data of the venue, in order to facilitate the live user to flexibly switch the viewing angle according to the live requirement, the acquisition device to be switched can be determined according to the target acquisition device, and then the acquisition device to be switched is sent to the user terminal, so as to instruct the user terminal to provide a selection item of the acquisition device to be switched associated with the acquisition device to be switched. The live broadcast user can switch the current target acquisition equipment by clicking the switching acquisition equipment selection item, so that the live broadcast user can walk around to shoot in a live broadcast field, the quality of live broadcast data is improved, and the viewing experience of audiences is improved.
The acquisition device to be switched is determined as follows. First, at least one neighboring acquisition device whose distance to the target acquisition device is within a set range is determined. Then the azimuth relation between the target acquisition device and each neighboring acquisition device is obtained, the view angle difference between them is calculated from that azimuth relation and the target shooting view angle associated with the current target acquisition device, and any neighboring acquisition device whose view angle difference is smaller than a set threshold is determined as an acquisition device to be switched. Finally, the identifiers of the devices to be switched can be sent to the user terminal, which provides selection items for the user to choose from. Once the user selects one of them, the stream immediately switches to the audio and video data acquired by that device. Because the devices to be switched are selected by view angle difference, abrupt jumps in the video are avoided and the viewing angle can be switched smoothly.
The azimuth relation between the target acquisition device and its neighboring acquisition devices is shown in fig. 2C. Taking the due-north direction of the target acquisition device as 0 degrees, with angles increasing clockwise, the azimuth relation can be expressed as "the neighboring acquisition device is at X degrees of the target acquisition device". For example, if the target acquisition device is device D, its neighboring acquisition devices include devices A, B, C and F: device A is at the 315-degree azimuth of device D, device B at 0 degrees, device C at 45 degrees and device F at 225 degrees. The distance between device E and device D in the figure exceeds the set range, so device E is not a neighboring acquisition device of device D. The view angle difference may then be obtained as the difference between the target shooting view angle and the azimuth. For example, if the target shooting view angle is 90 degrees and a neighboring acquisition device is in the 30-degree direction of the target acquisition device, the view angle difference is 60 degrees.
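The view-angle-difference selection described above can be sketched as follows. The wraparound handling (taking the smaller arc on the 0-360 circle) and the 90-degree threshold are assumptions, since the disclosure only specifies comparison against "a set threshold"; the function names are illustrative.

```python
def view_angle_diff(target_angle, azimuth):
    """Smallest angular difference between the current shooting direction
    and a neighbouring device's azimuth, on the 0-360 degree circle."""
    d = abs(target_angle - azimuth) % 360.0
    return min(d, 360.0 - d)

def devices_to_switch(target_angle, neighbour_azimuths, threshold=90.0):
    """Return identifiers of neighbouring devices whose azimuth lies within
    `threshold` degrees of the current shooting direction.

    neighbour_azimuths: mapping of device id -> azimuth relative to the
    target acquisition device (degrees clockwise from its north).
    """
    return [dev for dev, az in neighbour_azimuths.items()
            if view_angle_diff(target_angle, az) < threshold]
```

With the document's example (target shooting angle 90 degrees, device D's neighbours A, B, C, F at 315, 0, 45, 225 degrees), only device C falls under a 90-degree threshold, so it would be offered as the switch option.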
In addition, to further improve the flexibility of the remote live broadcast viewing angle, the user terminal can provide an "arbitrary switch" button for the live user. After it is clicked, options for all remote acquisition devices are displayed, and the live user can switch from the target acquisition device to any remote acquisition device according to the current live broadcast progress and needs, making control of the live viewing angle more flexible.
S240, acquiring user audio and video data associated with the live user.
S250, generating live broadcast data based on the field audio and video data and the user audio and video data.
According to the technical scheme, in response to receiving shooting view angle information, the target acquisition device is determined among at least one remote acquisition device according to the identifier of the target acquisition device in the shooting view angle information, and the field audio data and panoramic video data acquired by the target acquisition device are obtained. Further, field video data is extracted from the panoramic video data according to the target shooting view angle in the shooting view angle information, and the field audio data and the field video data form field audio and video data. User audio and video data associated with the live user is then acquired, and finally live data is generated based on the field audio and video data and the user audio and video data. On the one hand, remote live broadcasting is realized, avoiding the voice interference caused when multiple anchors gather on-site; on the other hand, the remote acquisition device and the shooting view angle can be freely selected, improving the shooting flexibility of remote live broadcasting.
Fig. 3a is a schematic diagram of a live broadcast data generating method according to an embodiment of the present disclosure, which is further refined on the basis of the foregoing embodiment, and provides specific steps for generating live broadcast data based on field audio and video data and user audio and video data. The technical solution in this embodiment may be combined with each of the alternatives in one or more embodiments described above. The following describes a live broadcast data generating method provided by an embodiment of the present disclosure with reference to fig. 3a, including the following steps:
S310, acquiring position information of a remote acquisition device, and sending the position information to a user terminal, wherein the position information is used for instructing the user terminal to provide a remote acquisition device selection item based on the position information;
the position information comprises the azimuth relation between the remote acquisition equipment and the adjacent acquisition equipment; the distance between the adjacent acquisition equipment and the remote acquisition equipment is within a set range.
The position information is used for representing the position relation among a plurality of remote acquisition devices in the live broadcast field, and comprises the azimuth relation between the remote acquisition devices and the adjacent acquisition devices, wherein the distance between the adjacent acquisition devices and the remote acquisition devices is in a set range.
Illustratively, the live venue is provided with remote acquisition devices A, B, C and D, arranged as shown in fig. 3b. Accordingly, the position information is shown in table 1, which lists, for each of the remote acquisition devices A, B, C and D, at least one associated neighboring acquisition device and the azimuth relation with each of them. For example, the neighboring acquisition devices of device D in table 1 include devices A, B and C, where device A is in the 315-degree direction of device D, device B in the 0-degree direction and device C in the 45-degree direction; the information for the other remote acquisition devices is analogous and is not repeated here.
TABLE 1

Remote acquisition device   Neighboring acquisition device   Azimuth relation (degrees)
D                           A                                315
D                           B                                0
D                           C                                45
A                           B                                90
A                           D                                135
B                           A                                270
B                           C                                90
B                           D                                180
C                           B                                270
C                           D                                225
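Table 1 can be represented as a simple mapping, a hypothetical sketch of how a server might hold the position information and produce selection items for the terminal; the names `POSITION_INFO` and `neighbour_options` are illustrative, not from the disclosure.

```python
# Table 1 as a mapping: remote device -> {neighbouring device: azimuth in degrees}
POSITION_INFO = {
    "D": {"A": 315, "B": 0, "C": 45},
    "A": {"B": 90, "D": 135},
    "B": {"A": 270, "C": 90, "D": 180},
    "C": {"B": 270, "D": 225},
}

def neighbour_options(device_id):
    """Selection items a terminal could render for one remote device,
    sorted clockwise from that device's 0-degree (north) direction."""
    return sorted(POSITION_INFO[device_id].items(), key=lambda kv: kv[1])
```

Sending such a structure to the user terminal would let it draw each neighbouring device at its relative bearing, as described below.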
In the embodiment of the disclosure, when the connection to the data forwarding server is first established, the position information of the remote acquisition devices is acquired from the data forwarding server and sent to the user terminal. The user terminal can then provide remote acquisition device selection items according to the position information. For example, the user terminal displays the remote acquisition devices according to their azimuth relations, so that the live user can intuitively see the bearing of each remote acquisition device and select a target acquisition device among them according to the viewing angle currently required by the live broadcast. By obtaining the position information of the remote acquisition devices, the live user can flexibly select which remote acquisition device performs audio and video acquisition, improving the flexibility of remote live broadcasting.
Alternatively, after the remote acquisition devices have been deployed at the live venue, their position information can be uploaded directly to a server; a live user can then operate the cloud mobile phone client on a local phone to make the cloud mobile phone device download the position information from the server.
S320, in response to receiving the shooting visual angle information, obtaining field audio and video data matched with the shooting visual angle information; the shooting view angle information comprises an identification of the target acquisition equipment and a target shooting view angle.
S330, acquiring user audio and video data associated with the live user.
S340, overlapping the site audio data in the site audio-video data and the user audio data in the user audio-video data to obtain live broadcast audio data.
In the embodiment of the disclosure, after the field audio and video data and the user audio and video data are acquired, in order to achieve the effect of the live user broadcasting from the live venue, the field audio data in the field audio and video data and the user audio data in the user audio and video data can be superimposed according to the timestamps of the received data to obtain the live audio data.
The venue audio data is the ambient sound of the live venue, and the user audio data is the live user's commentary on the venue or interaction with viewers. By superimposing the venue audio data and the user audio data, the live user's voice is embedded into the ambient sound of the live venue, making the remote live data more natural and smooth and giving viewers a good program-watching experience.
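The timestamp-aligned superposition of venue and user audio might look like the following sketch, assuming both streams are mono 16-bit PCM at a common sample rate; `mix_audio` and its parameters are illustrative, not the patented implementation.

```python
import numpy as np

def mix_audio(venue, user, venue_ts, user_ts, rate=48000):
    """Superimpose user commentary onto venue ambience, aligned by the
    timestamp (in seconds) of each stream's first sample."""
    offset = int(round((user_ts - venue_ts) * rate))  # sample offset of user vs venue
    out = venue.astype(np.float32).copy()
    start = max(offset, 0)      # where user audio lands in the venue buffer
    src = max(-offset, 0)       # user samples preceding the venue buffer are dropped
    n = min(len(user) - src, len(out) - start)
    if n > 0:
        out[start:start + n] += user[src:src + n]
    # keep the mix within the 16-bit PCM range
    return np.clip(out, -32768, 32767).astype(np.int16)
```

A production pipeline would additionally resample mismatched rates and apply gain control rather than hard clipping.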
S350, superimposing the field video data in the field audio and video data and the user video data in the user audio and video data to obtain live video data, and forming live broadcast data from the live audio data and the live video data.
In the embodiment of the disclosure, after the site audio and video data and the user audio and video data are acquired, the site video data in the site audio and video data and the user video data in the user audio and video data can be superimposed according to the time stamp of the received data to obtain live video data. And finally, constructing live broadcast data by the live broadcast audio data and the live broadcast video data.
The field video data is the environment image of the live venue, and the user video data is an image of the live user's body. By superimposing the field video data in the field audio and video data with the user video data in the user audio and video data, the live user's image is added into the field video. Compared with adding only the live user's commentary audio, adding both the commentary audio and the user's image makes the remote live broadcast more vivid: the effect of the live user broadcasting on-site is achieved while the voice interference caused by anchors gathering on-site is avoided.
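The video superposition can be sketched as a simple picture-in-picture composite. This assumes the user frame has already been matted or cropped to the desired size and ignores alpha blending, which a real implementation would likely add; the function name and placement parameters are illustrative.

```python
import numpy as np

def overlay_user(venue_frame, user_frame, top=0, left=0):
    """Paste the live user's image onto the venue frame at (top, left),
    producing one composited live video frame."""
    out = venue_frame.copy()  # leave the source frame untouched
    h, w = user_frame.shape[:2]
    out[top:top + h, left:left + w] = user_frame
    return out
```

Running this per timestamp-matched frame pair yields the live video data that, together with the mixed audio, forms the live broadcast data.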
And S360, transmitting the live broadcast data to the user terminal, and indicating the user terminal to display the live broadcast data.
In the embodiment of the disclosure, after the live broadcast data is generated by superposing the field audio and video data and the user audio and video data, the live broadcast data can be encoded and then sent to the user terminal, and the user terminal can play the live broadcast data after decoding the live broadcast data. The live broadcast user can watch the current synthesized live broadcast data at the user terminal, and shooting visual angle information is adjusted according to the playing effect, so that the optimization of the live broadcast data is realized.
According to the technical scheme, the position information of the remote acquisition devices is provided to the user terminal so that the live user can select a suitable remote acquisition device according to live broadcast requirements. Field audio and video data matching the shooting view angle information sent by the live user is then acquired, together with the user audio and video data, and finally the two are combined into live broadcast data. Remote live broadcasting is thus realized, voice interference between anchors is avoided, viewing angles can be switched flexibly, and the quality of the live data is improved.
Fig. 4a is a flowchart of a live data generation method according to an embodiment of the present disclosure, including data interaction between devices involved in a live data generation system.
The architecture of the live data generation system is shown in fig. 4b and includes at least one remote acquisition device deployed at the live venue, a data forwarding server, at least one cloud mobile phone device and at least one user terminal. A cloud mobile phone client runs on the user terminal and controls the cloud mobile phone device; one user terminal corresponds to one cloud mobile phone device. The at least one remote acquisition device sends the acquired audio and video data to the data forwarding server over the network. The cloud mobile phone device communicates with the data forwarding server through a virtual camera/microphone driver and obtains the audio and video data forwarded by the data forwarding server.
The live broadcast data generation method specifically comprises the following steps:
s401, the remote acquisition equipment sends the acquired audio and video data of the live broadcast field to the data forwarding server in real time.
S402, the user terminal connects to the virtual camera/microphone driver of the cloud mobile phone device through the cloud mobile phone client.
S403, the cloud mobile phone device connects to the data forwarding server through the virtual camera/microphone driver.
S404, the data forwarding server responds to the establishment of connection with the cloud mobile phone equipment and feeds back the position information of the remote acquisition equipment to the cloud mobile phone equipment.
S405, the cloud mobile phone equipment feeds back the position information of the remote acquisition equipment to the user terminal.
S406, the user terminal responds to shooting visual angle information selected by the live user according to the position information, and the shooting visual angle information is sent to the cloud mobile phone equipment.
S407, the cloud mobile phone equipment acquires the field audio and video data matched with the shooting visual angle information from the data forwarding server according to the shooting visual angle information.
S408, the user terminal sends the locally acquired user audio and video data to the cloud mobile phone device.
S409, the cloud mobile phone equipment superimposes the field audio and video data and the user audio and video data to obtain live broadcast data.
S410, the cloud mobile phone equipment sends the encoded live broadcast data to the user terminal.
In addition, the cloud mobile phone equipment also transmits the live broadcast data to a live broadcast application background through a live broadcast application running in the cloud mobile phone equipment, so that the live broadcast application background distributes the live broadcast data to a live broadcast audience terminal, and remote live broadcast is realized.
S411, the live broadcast user terminal decodes the live broadcast data and plays the live broadcast data.
According to the technical scheme, in response to receiving shooting view angle information, field audio and video data matched with the shooting view angle information are obtained, further user audio and video data associated with a live broadcast user are obtained, finally live broadcast data are generated based on the field audio and video data and the user audio and video data, flexible conversion of the shooting view angle can be achieved in a remote live broadcast process, and the flexibility of remote live broadcast shooting is improved.
Fig. 5 is a block diagram of a live data generating apparatus according to an embodiment of the present disclosure, where the embodiment of the present disclosure is applicable to a case of performing remote live broadcasting using a remote acquisition device. The device is realized by software and/or hardware, and is specifically configured in the electronic equipment with certain data operation capability.
A live data generation apparatus 500 as shown in fig. 5, comprising: a venue data acquisition module 510, a user data acquisition module 520, and a live data generation module 530; wherein,
a field data obtaining module 510, configured to obtain field audio/video data matched with shooting view angle information in response to receiving the shooting view angle information;
a user data acquisition module 520, configured to acquire user audio/video data associated with a live user;
and the live broadcast data generating module 530 is configured to generate live broadcast data based on the venue audio and video data and the user audio and video data.
According to the technical scheme, in response to receiving shooting view angle information, field audio and video data matched with the shooting view angle information are obtained, further user audio and video data associated with a live broadcast user are obtained, finally live broadcast data are generated based on the field audio and video data and the user audio and video data, flexible conversion of the shooting view angle can be achieved in a remote live broadcast process, and the flexibility of remote live broadcast shooting is improved.
Further, the site data acquisition module 510 is specifically configured to:
determining target acquisition equipment in at least one remote acquisition equipment according to the identification of the target acquisition equipment in the shooting visual angle information;
acquiring field audio data and panoramic video data acquired by the target acquisition equipment;
and extracting field video data from the panoramic video data according to a target shooting view angle in the shooting view angle information, and forming field audio and video data by the field audio data and the field video data.
Further, the live broadcast data generating apparatus 500 further includes:
the proximity acquisition equipment determining module is used for determining at least one proximity acquisition equipment with the distance from the target acquisition equipment within a set range;
the visual angle difference determining module is used for determining the visual angle difference between the target acquisition equipment and the adjacent acquisition equipment according to the target shooting visual angle and the azimuth relation between the adjacent acquisition equipment and the target acquisition equipment;
the acquisition equipment to be switched determines the acquisition equipment to be switched in the at least one adjacent acquisition equipment according to the visual angle difference, and sends the identification of the acquisition equipment to be switched to a user terminal, and the identification is used for indicating the user terminal to provide a selection item of the acquisition equipment to be switched, wherein the selection item is associated with the acquisition equipment to be switched.
Further, the live data generation module 530 is specifically configured to:
superposing the site audio data in the site audio-video data and the user audio data in the user audio-video data to obtain live broadcast audio data;
and superimposing the field video data in the field audio and video data and the user video data in the user audio and video data to obtain live video data, and forming live broadcast data from the live audio data and the live video data.
Further, the live broadcast data generating apparatus 500 further includes:
the remote acquisition equipment option providing module is used for acquiring the position information of the remote acquisition equipment before acquiring the field audio and video data matched with the shooting visual angle information in response to receiving the shooting visual angle information, and sending the position information to the user terminal, and is used for indicating the user terminal to provide remote acquisition equipment options based on the position information;
the position information comprises the azimuth relation between the remote acquisition equipment and the adjacent acquisition equipment; the distance between the adjacent acquisition equipment and the remote acquisition equipment is within a set range.
Further, the live broadcast data generating apparatus 500 further includes:
And the live broadcast data sending module is used for sending the live broadcast data to a user terminal after generating the live broadcast data and indicating the user terminal to display the live broadcast data.
The live broadcast data generating device provided by the embodiment of the disclosure can execute the live broadcast data generating method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the executing method.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision and disclosure of users' personal information comply with relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, for example, a live data generation method. For example, in some embodiments, the live data generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the live data generation method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the live data generation method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above can be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (12)

1. A live data generation method, comprising:
in response to receiving shooting viewing-angle information, acquiring venue audio-video data matching the shooting viewing-angle information;
acquiring user audio-video data associated with a live-streaming user; and
generating live data based on the venue audio-video data and the user audio-video data;
wherein acquiring the venue audio-video data matching the shooting viewing-angle information comprises:
determining at least one neighboring acquisition device whose distance from a target acquisition device is within a set range, wherein the target acquisition device is selected by the live-streaming user according to the deployment position of each remote acquisition device at the live venue and the current live viewing-angle requirement;
determining a viewing-angle difference between the target acquisition device and each neighboring acquisition device according to a target shooting viewing angle and the orientation relationship between that neighboring acquisition device and the target acquisition device; and
determining a to-be-switched acquisition device among the at least one neighboring acquisition device according to the viewing-angle difference, and sending an identifier of the to-be-switched acquisition device to a user terminal, so as to instruct the user terminal to provide a device-switching option associated with the to-be-switched acquisition device.
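Outside the claim language, the neighbor-selection and viewing-angle-difference step of claim 1 can be sketched as follows. This is purely illustrative: the 2-D coordinate plane, the `devices` dict, the function name, and the convention that the shooting angle and bearings are in degrees counterclockwise from the +x axis are assumptions, not part of the patent.

```python
import math

def pick_switch_device(target_pos, target_view_deg, devices, max_dist):
    """Among devices within max_dist of the target device, pick the one whose
    bearing (as seen from the target) best matches the target shooting angle.
    Hypothetical helper; all conventions here are illustrative assumptions."""
    best_id, best_diff = None, None
    tx, ty = target_pos
    for dev_id, (x, y) in devices.items():
        dist = math.hypot(x - tx, y - ty)
        if dist == 0 or dist > max_dist:
            continue  # outside the set range: not a neighboring device
        # Bearing of the neighbor relative to the target device, in [0, 360).
        bearing = math.degrees(math.atan2(y - ty, x - tx)) % 360
        # Angular difference to the desired shooting angle, wrapped to [0, 180].
        diff = abs((bearing - target_view_deg + 180) % 360 - 180)
        if best_diff is None or diff < best_diff:
            best_id, best_diff = dev_id, diff
    return best_id, best_diff
```

The wrap-to-[0, 180] step ensures that, for example, bearings of 10° and 350° are treated as 20° apart rather than 340°.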
2. The method of claim 1, wherein acquiring the venue audio-video data matching the shooting viewing-angle information further comprises:
determining the target acquisition device among at least one remote acquisition device according to an identifier of the target acquisition device in the shooting viewing-angle information;
acquiring venue audio data and panoramic video data captured by the target acquisition device; and
extracting venue video data from the panoramic video data according to the target shooting viewing angle in the shooting viewing-angle information, and composing the venue audio-video data from the venue audio data and the venue video data.
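The panorama-extraction step of claim 2 admits a minimal sketch, assuming the panoramic frame is an equirectangular NumPy array in which 360° of yaw spans the full image width. The plain column slice (no spherical reprojection) and the function name `extract_view` are illustrative assumptions, not the patented method.

```python
import numpy as np

def extract_view(pano_frame, yaw_deg, fov_deg=90):
    """Cut a horizontal field-of-view window centered on the target shooting
    angle out of an equirectangular panorama of shape (H, W, 3)."""
    h, w = pano_frame.shape[:2]
    center = int((yaw_deg % 360) / 360.0 * w)   # column at the shooting angle
    half = int(fov_deg / 360.0 * w) // 2        # half the window width in columns
    cols = [(center + dx) % w for dx in range(-half, half)]  # wrap at the seam
    return pano_frame[:, cols]
```

The modulo on the column indices lets the window wrap cleanly across the 0°/360° seam of the panorama.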
3. The method of claim 1, wherein generating the live data based on the venue audio-video data and the user audio-video data comprises:
superposing the venue audio data in the venue audio-video data and the user audio data in the user audio-video data to obtain live audio data; and
superposing the venue video data in the venue audio-video data and the user video data in the user audio-video data to obtain live video data, and composing the live data from the live audio data and the live video data.
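The two superposition steps of claim 3 can be illustrated with a minimal sketch: sample-wise addition with clipping for the audio, and a picture-in-picture paste for the video. The float waveform range [-1, 1], the frame layouts, and both function names are assumptions for illustration only; a production mixer would also resample, align timestamps, and normalize levels.

```python
import numpy as np

def mix_audio(venue_audio, user_audio, user_gain=1.0):
    """Superpose two float waveforms in [-1, 1] by sample-wise addition,
    clipping the sum back into range (a minimal sketch)."""
    n = min(len(venue_audio), len(user_audio))
    mixed = venue_audio[:n] + user_gain * user_audio[:n]
    return np.clip(mixed, -1.0, 1.0)

def overlay_video(venue_frame, user_frame, x, y):
    """Paste the user's picture onto the venue picture at (x, y),
    picture-in-picture style, without alpha blending."""
    out = venue_frame.copy()
    h, w = user_frame.shape[:2]
    out[y:y + h, x:x + w] = user_frame
    return out
```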
4. The method of claim 1, further comprising, before acquiring the venue audio-video data matching the shooting viewing-angle information in response to receiving the shooting viewing-angle information:
acquiring position information of each remote acquisition device and sending the position information to a user terminal, so as to instruct the user terminal to provide remote acquisition device options based on the position information;
wherein the position information comprises the orientation relationship between each remote acquisition device and its neighboring acquisition devices, a neighboring acquisition device being one whose distance from the remote acquisition device is within a set range.
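The orientation relationship mentioned in claim 4 could, for instance, be a bearing-plus-distance pair computed from the deployed device coordinates. This tiny sketch assumes 2-D coordinates and angles in degrees counterclockwise from the +x axis; both are illustrative choices, not part of the claim.

```python
import math

def orientation_relation(device_pos, neighbor_pos):
    """Return (bearing_deg, distance) from one device to a neighbor:
    a hypothetical encoding of the claimed orientation relationship."""
    dx = neighbor_pos[0] - device_pos[0]
    dy = neighbor_pos[1] - device_pos[1]
    return math.degrees(math.atan2(dy, dx)) % 360, math.hypot(dx, dy)
```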
5. The method of claim 1, further comprising, after generating the live data:
sending the live data to a user terminal and instructing the user terminal to display the live data.
6. A live data generation apparatus, comprising:
a venue data acquisition module configured to acquire, in response to receiving shooting viewing-angle information, venue audio-video data matching the shooting viewing-angle information;
a user data acquisition module configured to acquire user audio-video data associated with a live-streaming user;
a live data generation module configured to generate live data based on the venue audio-video data and the user audio-video data;
a neighboring acquisition device determination module configured to determine at least one neighboring acquisition device whose distance from a target acquisition device is within a set range, wherein the target acquisition device is selected by the live-streaming user according to the deployment position of each remote acquisition device at the live venue and the current live viewing-angle requirement;
a viewing-angle difference determination module configured to determine a viewing-angle difference between the target acquisition device and each neighboring acquisition device according to a target shooting viewing angle and the orientation relationship between that neighboring acquisition device and the target acquisition device; and
a to-be-switched acquisition device determination module configured to determine a to-be-switched acquisition device among the at least one neighboring acquisition device according to the viewing-angle difference, and to send an identifier of the to-be-switched acquisition device to a user terminal, so as to instruct the user terminal to provide a device-switching option associated with the to-be-switched acquisition device.
7. The apparatus of claim 6, wherein the venue data acquisition module is specifically configured to:
determine the target acquisition device among at least one remote acquisition device according to an identifier of the target acquisition device in the shooting viewing-angle information;
acquire venue audio data and panoramic video data captured by the target acquisition device; and
extract venue video data from the panoramic video data according to the target shooting viewing angle in the shooting viewing-angle information, and compose the venue audio-video data from the venue audio data and the venue video data.
8. The apparatus of claim 6, wherein the live data generation module is specifically configured to:
superpose the venue audio data in the venue audio-video data and the user audio data in the user audio-video data to obtain live audio data; and
superpose the venue video data in the venue audio-video data and the user video data in the user audio-video data to obtain live video data, and compose the live data from the live audio data and the live video data.
9. The apparatus of claim 6, further comprising:
a remote acquisition device option providing module configured to acquire position information of each remote acquisition device before the venue audio-video data matching the shooting viewing-angle information is acquired in response to receiving the shooting viewing-angle information, and to send the position information to the user terminal, so as to instruct the user terminal to provide remote acquisition device options based on the position information;
wherein the position information comprises the orientation relationship between each remote acquisition device and its neighboring acquisition devices, a neighboring acquisition device being one whose distance from the remote acquisition device is within a set range.
10. The apparatus of claim 6, further comprising:
a live data sending module configured to send the live data to a user terminal after the live data is generated, and to instruct the user terminal to display the live data.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the live data generation method of any one of claims 1-5.
12. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the live data generation method according to any one of claims 1-5.
CN202210399727.6A 2022-04-15 2022-04-15 Live broadcast data generation method, device, equipment, medium and program product Active CN114827647B (en)

Priority Applications (1)

Application Number: CN202210399727.6A (published as CN114827647B)
Priority Date: 2022-04-15
Filing Date: 2022-04-15
Title: Live broadcast data generation method, device, equipment, medium and program product


Publications (2)

Publication Number Publication Date
CN114827647A CN114827647A (en) 2022-07-29
CN114827647B (en) 2024-03-19

Family

ID=82537613

Family Applications (1)

Application Number: CN202210399727.6A (granted as CN114827647B, status: Active)
Priority Date: 2022-04-15
Filing Date: 2022-04-15
Title: Live broadcast data generation method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN114827647B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116614650A (en) * 2023-06-16 2023-08-18 上海随幻智能科技有限公司 Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN106791906A * 2016-12-31 2017-05-31 北京星辰美豆文化传播有限公司 Multi-person live network broadcasting method and apparatus, and electronic device thereof
CN107018448A (en) * 2017-03-23 2017-08-04 广州华多网络科技有限公司 Data processing method and device
WO2018059352A1 (en) * 2016-09-29 2018-04-05 广州华多网络科技有限公司 Remote control method and apparatus for live video stream
CN111200747A (en) * 2018-10-31 2020-05-26 百度在线网络技术(北京)有限公司 Live broadcasting method and device based on virtual image
CN112738009A (en) * 2019-10-28 2021-04-30 阿里巴巴集团控股有限公司 Data synchronization method, device, synchronization system, medium and server
CN113542896A (en) * 2021-05-19 2021-10-22 广州速启科技有限责任公司 Free-view video live broadcast method, device and medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9936238B2 (en) * 2016-07-29 2018-04-03 Infiniscene, Inc. Systems and methods for production and delivery of live video
US11179635B2 (en) * 2017-10-11 2021-11-23 Sony Interactive Entertainment LLC Sound localization in an augmented reality view of a live event held in a real-world venue


Non-Patent Citations (1)

Title
5G+4K/8K ultra-high-definition video production and broadcasting platform; Zhang Xianfeng (张现丰); 《中国高新科技》 (China High-Tech); full text *

Also Published As

Publication number Publication date
CN114827647A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN111476911B (en) Virtual image realization method, device, storage medium and terminal equipment
CN110798697B (en) Video display method, device and system and electronic equipment
CN110012209B (en) Panoramic image generation method and device, storage medium and electronic equipment
CN106412681B (en) Live bullet screen video broadcasting method and device
US20220319136A1 (en) Augmented reality processing method, storage medium, and electronic device
CN111445583B (en) Augmented reality processing method and device, storage medium and electronic equipment
CN110602554A (en) Cover image determining method, device and equipment
CN106791906B (en) Multi-user network live broadcast method and device and electronic equipment thereof
CN106657733A (en) Panoramic live broadcasting method based on unmanned aerial vehicle and terminal
WO2019109828A1 (en) Ar service processing method, device, server, mobile terminal, and storage medium
US20150172634A1 (en) Dynamic POV Composite 3D Video System
CN108513088B (en) Method and device for group video session
CN110149517B (en) Video processing method and device, electronic equipment and computer storage medium
CN113411621B (en) Audio data processing method and device, storage medium and electronic equipment
CN113676592B (en) Recording method, recording device, electronic equipment and computer readable medium
CN105392058A (en) Method and device for generating interactive information by interactive television system
CN106791915A Method and apparatus for displaying a video image
CN114827647B (en) Live broadcast data generation method, device, equipment, medium and program product
CN110928509B (en) Display control method, display control device, storage medium, and communication terminal
CN109413152B (en) Image processing method, image processing device, storage medium and electronic equipment
US9887791B2 (en) System and method for participants to perceivably modify a performance
CN113315927B (en) Video processing method and device, electronic equipment and storage medium
CN108320331B (en) Method and equipment for generating augmented reality video information of user scene
CN109218612B (en) Tracking shooting system and shooting method
CN114594892B (en) Remote interaction method, remote interaction device, and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant