CN116614650A - Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium - Google Patents

Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium

Info

Publication number
CN116614650A
Authority
CN
China
Prior art keywords
audio
data
picture
cloud
live broadcast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310721121.4A
Other languages
Chinese (zh)
Inventor
洪煦
周阳
陈国家
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Suihuan Intelligent Technology Co ltd
Original Assignee
Shanghai Suihuan Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Suihuan Intelligent Technology Co ltd filed Critical Shanghai Suihuan Intelligent Technology Co ltd
Priority to CN202310721121.4A priority Critical patent/CN116614650A/en
Publication of CN116614650A publication Critical patent/CN116614650A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43072Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The method comprises the steps of collecting local sound data and uploading it to the cloud; synthesizing the sound data with the cloud-rendered picture into audio-video data and transmitting the result back to the local machine; and splitting the audio-video data, outputting the audio data to a virtual microphone and the picture data to a virtual camera, which a private-domain live broadcast platform then selects as its input sources. According to the application, because the audio data is transmitted to the cloud, synthesized with the picture to be rendered, and only then transmitted back to the local machine, the audio data stays synchronized with the rendering result.

Description

Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium
Technical Field
The application relates to the technical field of computer vision, and in particular to an audio-picture synchronized private-domain live broadcast method, system, equipment, chip and medium.
Background
Cloud rendering is a technique that migrates rendering tasks from a local computer to the cloud, thereby improving rendering performance and efficiency. In cloud rendering, a user sends data such as models, textures and lighting from the local computer to the cloud, which uses its strong computing capacity and resources to render and process them quickly and returns the rendering results to the user.
However, cloud rendering requires data to be sent to the cloud for processing, so the returned picture arrives with a delay. In cloud live broadcasting, clients can only broadcast to platforms that expose a public live address (such as an RTMP address). In some private-domain environments no live address is provided, so the picture cannot be pushed directly to the third-party application; other vendors require their own live broadcast tools, so cloud-side stream pushing cannot achieve the desired effect. If a private-domain live tool broadcasts the returned picture directly, the audio and video of the live picture fall out of sync, because the audio is not subject to the same round-trip delay.
Disclosure of Invention
The application aims to solve the above problems by providing an audio-picture synchronized private-domain live broadcast method, system, equipment, chip and medium.
To this end, the technical scheme adopted by the application provides a private-domain live broadcast method for synchronizing audio and picture, comprising the following steps:
s1, collecting local sound data and uploading the local sound data to a cloud;
s2, synthesizing the sound data and the cloud picture into audio and video data; and then transmitted back to the local;
s3, distributing audio and video data, outputting the audio data to a virtual microphone, outputting picture data to a virtual camera, and selecting the picture data to a private live broadcast platform; and the situation that live broadcast data is asynchronous in sound and picture caused by using desktop audio is avoided. In S1 in some embodiments, the sound data is pre-processed, including noise cancellation or digitization. In S2 in some embodiments, the sound data is converted to a digital signal using an audio codec library.
In S2 in some embodiments, the sound data is framed, encrypted, or compressed to increase the transmission speed at the time of rendering.
In S2 of some embodiments, the cloud captures the Unreal Engine picture, renders it to a texture, places the captured texture into a buffer queue, pushes the stream, and synthesizes it with the sound data.
In some embodiments, after the cloud receives the sound data, it captures the mixed audio data and synthesizes it with the picture into audio-video data.
In some embodiments, the cloud rendering renders the received audio-video data.
In S3 in some embodiments, the selected picture input source is the virtual camera and the selected audio input source is the virtual microphone.
In S3 in some embodiments, receiving local audio is turned off.
In some embodiments, the method further comprises S4, in which the live broadcast tool outputs the received audio data and video data respectively to the designated private-domain platform according to the user's requirements.
The application also provides an audio-picture synchronized private-domain live broadcast system, comprising an acquisition module, a synthesis module, a rendering module and a separation module, wherein: the acquisition module is used for collecting local sound data; the synthesis module is located at the cloud, receives the sound data, synthesizes it with the cloud picture into audio-video data, and transmits the audio-video data back to the local machine; the rendering module is also located at the cloud and is used for rendering the synthesized audio-video data; and the separation module is used for outputting the audio data to the virtual microphone and the picture data to the virtual camera.
The application also provides audio-picture synchronized private-domain live broadcast equipment, comprising a local audio device, a local server and a cloud server, the audio device being in communication connection with the cloud server and the local server respectively. The audio device transmits the collected sound data to the cloud server; the cloud server synthesizes the sound data and the cloud picture into audio-video data, renders it, and transmits it back to the local server; the local server separates the audio-video data, outputs the audio data to the virtual microphone, outputs the picture data to the virtual camera, and selects these as the inputs of the private-domain live broadcast platform.
The application also provides a chip comprising one or more processors, used for calling and running a computer program from a memory, so that a device fitted with the chip performs any of the private-domain live broadcast methods described.
The application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used for performing any of the private-domain live broadcast methods described.
Compared with the prior art, the audio data is transmitted to the cloud to be synthesized and rendered with the picture, and then transmitted back from the cloud to the local machine, where it is output respectively to the virtual microphone and the virtual camera, ensuring that the audio data stays synchronized with the rendering result. This effectively solves the problems that a third-party live application cannot obtain the cloud-rendered picture, that screen-capturing the live window yields an unclear picture, and that outputting only the picture through a virtual camera leaves sound and picture out of sync.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of the present application.
Detailed Description
The application will now be further described with reference to the accompanying drawings.
Referring to fig. 1, which illustrates a flowchart of an embodiment of the present application, the equipment part of the embodiment mainly comprises a local audio device, a local server and a cloud server; the audio device is a microphone, and the local server and the cloud server are computers. The audio device is in communication connection with the cloud server and the local server respectively.
The computer comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes a private live broadcast method when executing the computer program.
In this embodiment, the virtual camera and the virtual microphone are created locally before the method starts. The virtual camera is created using DirectShow (dshow): the four COM export functions DllRegisterServer, DllUnregisterServer, DllGetClassObject and DllCanUnloadNow are implemented in the DLL dynamic library. Virtual camera registration is implemented in the DllRegisterServer function, after which the registry entries can be written using the regsvr32 command. A shared memory region is also created for receiving the picture data.
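For reference, a minimal sketch of those four COM exports as they typically appear in a DirectShow filter DLL; CLSID_VirtualCam, CreateClassFactory and g_objectCount are hypothetical names not taken from the application, and the registry plumbing is elided:

```cpp
// Minimal sketch of the four COM exports of a DirectShow virtual-camera DLL.
// CLSID_VirtualCam, CreateClassFactory and g_objectCount are hypothetical.
#include <windows.h>

extern const CLSID CLSID_VirtualCam;                 // hypothetical filter CLSID
extern HRESULT CreateClassFactory(REFIID, void**);   // hypothetical factory helper
extern LONG g_objectCount;                           // count of live COM objects

STDAPI DllGetClassObject(REFCLSID rclsid, REFIID riid, void** ppv)
{
    if (!IsEqualCLSID(rclsid, CLSID_VirtualCam))
        return CLASS_E_CLASSNOTAVAILABLE;
    return CreateClassFactory(riid, ppv);            // hand out the filter factory
}

STDAPI DllCanUnloadNow(void)
{
    // The DLL may unload only when no COM objects are still alive.
    return (g_objectCount == 0) ? S_OK : S_FALSE;
}

STDAPI DllRegisterServer(void)
{
    // Write the CLSID and InprocServer32 registry keys so that
    // "regsvr32 virtualcam.dll" makes the camera visible to capture
    // applications. Registry plumbing elided.
    return S_OK;
}

STDAPI DllUnregisterServer(void)
{
    // Remove the keys written by DllRegisterServer. Elided.
    return S_OK;
}
```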
The present embodiment also creates an audio driver based on the Windows Driver Model (WDM). A virtual microphone array and a virtual speaker are simulated in the driver. A ring buffer establishes data communication between the microphone and the speaker by inheriting IMiniportWaveRTInputStream and IMiniportWaveRTOutputStream, so that the data received by the virtual speaker is output to the virtual microphone. Shared memory between user mode and kernel mode is implemented through an MDL (memory descriptor list).
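A user-mode sketch of the speaker-to-microphone loopback idea follows. The application implements this inside a kernel-mode WDM driver with MDL-backed shared memory; this simplified version only illustrates the ring-buffer data path, and all names are illustrative:

```cpp
// User-mode sketch of the loopback ring buffer: bytes written on the
// virtual-speaker render path are read back on the virtual-microphone
// capture path. Kernel-mode details (IRQL, MDL mapping) are omitted.
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

class LoopbackRing {
public:
    explicit LoopbackRing(size_t bytes) : buf_(bytes) {}

    // Render path: the virtual speaker "plays" by writing here.
    void Write(const uint8_t* data, size_t len) {
        size_t w = writePos_.load(std::memory_order_relaxed);
        for (size_t i = 0; i < len; ++i)
            buf_[(w + i) % buf_.size()] = data[i];
        writePos_.store(w + len, std::memory_order_release);
    }

    // Capture path: the virtual microphone "records" by reading here.
    size_t Read(uint8_t* out, size_t len) {
        size_t w = writePos_.load(std::memory_order_acquire);
        size_t r = readPos_.load(std::memory_order_relaxed);
        size_t avail = w - r;
        if (len > avail) len = avail;
        for (size_t i = 0; i < len; ++i)
            out[i] = buf_[(r + i) % buf_.size()];
        readPos_.store(r + len, std::memory_order_release);
        return len;
    }

private:
    std::vector<uint8_t> buf_;
    std::atomic<size_t> writePos_{0};
    std::atomic<size_t> readPos_{0};
};
```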
After the creation is completed, the private-domain live broadcast method is executed:
s1, collecting local sound data and uploading the local sound data to a cloud.
The microphone collects sound data on the local computer; preferably, the collected sound data is preprocessed, for example by noise cancellation and digitization, to ease handling at the cloud. The sound data is then transmitted to the cloud server over the Internet or another network.
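As a rough illustration of S1 (not code from the application; all types here are hypothetical stand-ins), the capture-preprocess-upload loop might look like:

```cpp
// Hypothetical S1 skeleton: capture PCM frames, denoise, upload to the cloud.
// CaptureDevice, Denoiser and CloudUplink are illustrative stand-ins only.
#include <cstdint>
#include <vector>

struct CaptureDevice { std::vector<int16_t> ReadFrame(); };    // e.g. 10 ms of PCM
struct Denoiser      { void Process(std::vector<int16_t>&); }; // noise cancellation
struct CloudUplink   { void Send(const std::vector<int16_t>&); };

void RunCaptureLoop(CaptureDevice& mic, Denoiser& dn, CloudUplink& up,
                    volatile bool& running)
{
    while (running) {
        std::vector<int16_t> frame = mic.ReadFrame(); // blocking microphone read
        dn.Process(frame);                            // preprocessing (S1)
        up.Send(frame);                               // upload to the cloud (S1)
    }
}
```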
S2, synthesizing the sound data and the cloud picture into audio and video data; and then transmitted back to the local.
In the cloud server, the sound data is preferably converted into a digital signal using a suitable audio codec library. The sound data may also be further processed, for example framed, encrypted or compressed, to speed up transmission during rendering.
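For example, a simple length-prefixed framing of the PCM data before transmission might look like the following sketch; the 8-byte header layout is an assumption for illustration, not something specified by the application:

```cpp
// Illustrative length-prefixed framing of PCM audio before upload.
// Encryption or compression would be applied to the payload before framing.
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> FramePacket(uint32_t seq, const int16_t* pcm, size_t samples)
{
    const uint32_t payloadBytes = static_cast<uint32_t>(samples * sizeof(int16_t));
    std::vector<uint8_t> pkt(8 + payloadBytes);
    std::memcpy(pkt.data(),     &seq,          4);   // sequence number (loss detection)
    std::memcpy(pkt.data() + 4, &payloadBytes, 4);   // payload length (reassembly)
    std::memcpy(pkt.data() + 8, pcm, payloadBytes);  // raw PCM payload
    return pkt;
}
```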
The processed sound data is then synthesized with the picture rendered by the cloud server. In this embodiment, WebRTC (Web Real-Time Communications) is used for real-time audio-video communication, and the Unreal Engine (UE) renders and processes the picture. After the cloud rendering application is opened, audio-video data channels are established among the cloud rendering application, the cloud application and the local output application. The cloud rendering application detects the local audio-video devices and performs audio-video collection and transmission. Video data collected by the cloud application is passed to the sample pool of a MediaPlayer and then played from the blueprint via a URL (Uniform Resource Locator). Video streaming is implemented by MediaCapture.
First, the method EnableBackBufferReadyToPresent is called so that the UE picture is captured and rendered to the texture RT; the texture RT is then captured through MediaCapture, the picture data is placed into a push buffer queue, and the frame manager AgoraFrameManager in the AgoraBlueprintable module pushes the stream.
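By way of illustration only, a minimal UE C++ sketch of subscribing to the Slate renderer's back-buffer-ready delegate and handing each presented frame to a push queue. The delegate (OnBackBufferReadyToPresent) exists in UE; QueueFrameForPush is a hypothetical helper, and whether the application uses exactly this hook is an assumption based on the name it gives:

```cpp
// UE C++ sketch: capture each presented frame from the Slate back buffer
// and queue it for stream pushing. QueueFrameForPush is hypothetical.
#include "Framework/Application/SlateApplication.h"
#include "RHIResources.h"

void QueueFrameForPush(const FTexture2DRHIRef& Texture); // hypothetical

void HookBackBufferCapture()
{
    FSlateApplication::Get().GetRenderer()->OnBackBufferReadyToPresent()
        .AddLambda([](SWindow& /*Window*/, const FTexture2DRHIRef& BackBuffer)
        {
            // Runs on the render thread each time a frame is presented:
            // reference the back-buffer texture and queue it for push.
            QueueFrameForPush(BackBuffer);
        });
}
```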
After receiving the audio PCM data, the cloud application captures the output submix audio data of UE through UE's ISubmixBufferListener, and then sends the synthesized data to the server. The cloud rendering application receives the synthesized audio-video data, renders it, and transmits it back to the local computer over the Internet or another network.
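A sketch of capturing the engine's mixed audio via ISubmixBufferListener, the interface named above, follows. SendMixedAudioToServer is a hypothetical helper; registration through FAudioDevice::RegisterSubmixBufferListener is the usual path, though the application does not detail it:

```cpp
// UE C++ sketch of tapping the submix mix via ISubmixBufferListener.
#include "AudioDevice.h"

class FMixCapture : public ISubmixBufferListener
{
public:
    virtual void OnNewSubmixBuffer(const USoundSubmix* OwningSubmix,
                                   float* AudioData, int32 NumSamples,
                                   int32 NumChannels, const int32 SampleRate,
                                   double AudioClock) override
    {
        // Called on the audio render thread with the submix's mixed output;
        // forward it for synthesis with the rendered picture.
        SendMixedAudioToServer(AudioData, NumSamples, NumChannels, SampleRate);
    }

private:
    void SendMixedAudioToServer(const float*, int32, int32, int32) { /* elided */ }
};
```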
S3, splitting the audio-video data: on the local computer, the local output application separates the audio-video data, outputs the audio data to the virtual microphone, and outputs the returned picture to the virtual camera. In the prior art, the audio is generally distributed directly through the desktop while the picture is distributed after cloud-side processing such as matting; because both network transmission and the cloud service take time, the audio arrives earlier than the picture, which finally produces the audio-picture desynchronization problem. To avoid live data falling out of sync because of desktop audio, this embodiment synthesizes first and distributes afterwards in S3.
The local output application receives the synthesized video picture data, converts the picture data into RGBA format, and outputs it to the shared memory designated by the virtual camera. After receiving the audio data, it searches all speakers of the host for the virtual speaker simulated by the sound card driver and, if found, sets it as the output device for the audio data. Alternatively, the received audio data is converted into PCM format and written to the MDL shared memory. The sound card driver then renders the audio to the virtual microphone from the data in that memory or in the virtual speaker's ring buffer.
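As an illustration of the picture path, a Win32 sketch of publishing one RGBA frame into named shared memory for the virtual camera to read. The mapping name and fixed geometry are hypothetical; a real implementation would keep the mapping open across frames and synchronize reader and writer:

```cpp
// Win32 sketch: publish one RGBA frame into the shared memory region the
// virtual-camera DLL reads from. Mapping name is a hypothetical placeholder.
#include <windows.h>
#include <cstdint>
#include <cstring>

bool WriteFrameToSharedMemory(const uint8_t* rgba, uint32_t width, uint32_t height)
{
    const DWORD bytes = width * height * 4; // RGBA, 8 bits per channel
    HANDLE map = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
                                    0, bytes, L"Local\\VirtualCamFrame"); // hypothetical
    if (!map) return false;

    void* view = MapViewOfFile(map, FILE_MAP_WRITE, 0, 0, bytes);
    if (!view) { CloseHandle(map); return false; }

    std::memcpy(view, rgba, bytes); // publish the frame for the camera DLL
    UnmapViewOfFile(view);
    CloseHandle(map);
    return true;
}
```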
When the live broadcast tool is used, it selects the virtual camera as the picture input source and the virtual microphone as the audio input source, and turns off desktop audio; if desktop audio is not turned off, echo will occur. The live broadcast tool outputs the received audio data and video data to the designated platform according to the user's requirements.
Completing this embodiment solves the problem that sound and picture are not synchronized when a private-domain live broadcast tool uses cloud rendering. For platforms that do not provide a live RTMP address and can only broadcast through a third-party tool, combining cloud rendering with a virtual camera and a virtual sound card lets a computer with weak performance broadcast pictures rendered on a high-performance host.
The embodiment of the application also provides a private live broadcast system, which comprises an acquisition module, a synthesis module, a rendering module and a separation module, wherein: the acquisition module is used for acquiring local sound data;
the synthesis module is positioned at the cloud end, receives the sound data, synthesizes the sound data with the picture of the cloud end into audio and video data, and transmits the audio and video data back to the local;
the rendering module is also positioned at the cloud end and used for rendering the synthesized audio and video data;
the separation module is used for outputting the audio data to the virtual microphone and outputting the picture data to the virtual camera. The embodiment of the application also provides a chip, which comprises one or more processors and is used for calling and running a computer program from a memory, so that a device provided with the chip executes the private live broadcast method in the embodiment.
Embodiments of the present application also provide a storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the private live method as described in any of the above embodiments.
The memory in this embodiment may be a volatile memory or a nonvolatile memory, or may include both. The nonvolatile memory may be ROM (read-only memory), PROM (programmable ROM), EPROM (erasable programmable ROM), EEPROM (electrically erasable PROM) or flash memory. The volatile memory may be RAM (random access memory), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as SRAM (static RAM), DRAM (dynamic RAM), SDRAM (synchronous DRAM), DDR SDRAM (double data rate synchronous DRAM), ESDRAM (enhanced SDRAM), SLDRAM (SyncLink DRAM) and DRRAM (direct Rambus RAM). The memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory. In some embodiments, the memory stores the following elements, an upgrade package, an executable unit or a data structure, or a subset or extended set of these: an operating system and application programs.
The operating system includes various system programs, such as a framework layer, a core library layer, a driving layer, and the like, and is used for realizing various basic services and processing hardware-based tasks. And the application programs comprise various application programs and are used for realizing various application services. The program for implementing the method of the embodiment of the application can be contained in an application program.
In an embodiment of the present application, the processor is configured to execute the method steps provided in the first aspect by calling a program or an instruction stored in the memory, in particular, a program or an instruction stored in the application program.
In the embodiment of the application, the audio data is transmitted to the cloud end to be synthesized and rendered with the picture, and then is transmitted back to the local from the cloud end to be respectively output to the virtual microphone and the virtual camera, so that the synchronization of the audio data and the rendering result is ensured; the method effectively solves the problems that cloud rendering pictures cannot be obtained in third party application live broadcast, the screen capturing pictures of a live broadcast window are unclear, and sound and picture are asynchronous caused by using a virtual camera to output pictures.
Those of skill in the art will appreciate that the elements and steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as a combination of software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

Claims (14)

1. A private domain live broadcast method for synchronizing sound and pictures is characterized by comprising the following steps:
s1, collecting local sound data and uploading the local sound data to a cloud;
s2, synthesizing the sound data and the cloud picture into audio and video data; and then transmitted back to the local;
and S3, distributing audio and video data, outputting the audio data to a virtual microphone, outputting the picture data to a virtual camera, and selecting the picture data to a private live broadcast platform.
2. The voice and picture synchronized private domain live broadcast method of claim 1, wherein: in S1, the sound data is preprocessed, including noise cancellation or digitization.
3. The voice and picture synchronized private domain live broadcast method of claim 1, wherein: in S2, the audio codec library is used to convert the sound data into a digital signal.
4. A method for private live broadcast of sound and picture synchronization according to claim 3, wherein: in S2, the sound data is framed, encrypted or compressed to increase the transmission speed at the time of rendering.
5. The voice and picture synchronized private domain live broadcast method according to claim 1, 3 or 4, wherein: in S2, the cloud captures the Unreal Engine picture, renders it to a texture, places the captured texture into a buffer queue, pushes the stream, and synthesizes it with the sound data.
6. The voice and picture synchronized private domain live broadcast method of claim 5, wherein: after the cloud receives the sound data, the cloud captures the audio mixing data and synthesizes the audio and video data with the picture.
7. The voice and picture synchronized private domain live broadcast method of claim 6, wherein: and rendering the received audio and video data by cloud rendering.
8. The voice and picture synchronized private domain live broadcast method of claim 1, wherein: in S3, the virtual camera is selected as the picture input source, and the virtual microphone is selected as the audio input source.
9. The voice and picture synchronized private domain live broadcast method of claim 8, wherein: in S3, the reception of local audio is turned off.
10. The voice and picture synchronized private domain live broadcast method of claim 1, wherein: the method further comprises S4, in which the live broadcast tool outputs the received audio data and video data respectively to the designated private-domain platform according to the user's requirements.
11. The private domain live broadcast system with synchronous audio and video is characterized in that: the system comprises an acquisition module, a synthesis module, a rendering module and a separation module, wherein: the acquisition module is used for acquiring local sound data;
the synthesis module is positioned at the cloud end, receives the sound data, synthesizes the sound data with the picture of the cloud end into audio and video data, and transmits the audio and video data back to the local;
the rendering module is also positioned at the cloud end and used for rendering the synthesized audio and video data;
the separation module is used for outputting the audio data to the virtual microphone and outputting the picture data to the virtual camera.
12. Private-domain live broadcast equipment with synchronized sound and picture, characterized in that: it comprises a local audio device, a local server and a cloud server, wherein the audio device is in communication connection with the cloud server, and the cloud server is in communication connection with the local server;
the audio device transmits the collected sound data to the cloud server; the cloud server synthesizes the sound data and the cloud picture into audio-video data, renders it, and transmits it back to the local server; the local server separates the audio-video data, outputs the audio data to the virtual microphone, outputs the picture data to the virtual camera, and selects these as inputs of the private-domain live broadcast platform.
13. A chip, characterized in that: comprising one or more processors for invoking and running a computer program from memory to cause a device on which the chip is installed to perform the private live method of any of claims 1-10.
14. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the private live method of any of claims 1-10.
CN202310721121.4A 2023-06-16 2023-06-16 Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium Pending CN116614650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310721121.4A CN116614650A (en) 2023-06-16 2023-06-16 Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310721121.4A CN116614650A (en) 2023-06-16 2023-06-16 Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium

Publications (1)

Publication Number Publication Date
CN116614650A (en) 2023-08-18

Family

ID=87674695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310721121.4A Pending CN116614650A (en) 2023-06-16 2023-06-16 Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium

Country Status (1)

Country Link
CN (1) CN116614650A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107770564A (en) * 2016-08-18 2018-03-06 腾讯科技(深圳)有限公司 The method and device of remote collection audio, video data
CN110213601A (en) * 2019-04-30 2019-09-06 大鱼互联科技(深圳)有限公司 A kind of live broadcast system and live broadcasting method based on cloud game, living broadcast interactive method
CN111314724A (en) * 2020-02-18 2020-06-19 华为技术有限公司 Cloud game live broadcasting method and device
CN112383794A (en) * 2020-12-01 2021-02-19 咪咕互动娱乐有限公司 Live broadcast method, live broadcast system, server and computer storage medium
WO2021179783A1 (en) * 2020-03-11 2021-09-16 叠境数字科技(上海)有限公司 Free viewpoint-based video live broadcast processing method, device, system, chip and medium
CN114827647A (en) * 2022-04-15 2022-07-29 北京百度网讯科技有限公司 Live broadcast data generation method, device, equipment, medium and program product
CN115514989A (en) * 2022-08-16 2022-12-23 如你所视(北京)科技有限公司 Data transmission method, system and storage medium
CN115767206A (en) * 2022-10-24 2023-03-07 阿里巴巴(中国)有限公司 Data processing method and system based on augmented reality
CN116071471A (en) * 2022-12-31 2023-05-05 杭州趣看科技有限公司 Multi-machine-position rendering method and device based on illusion engine

Similar Documents

Publication Publication Date Title
CN105991962B (en) Connection method, information display method, device and system
EP2288104B1 (en) Flexible decomposition and recomposition of multimedia conferencing streams using real-time control information
JP2007534279A (en) Systems and methods for using graphics hardware for real time 2D and 3D, single and high definition video effects
WO2020124725A1 (en) Audio and video pushing method and audio and video stream pushing client based on webrtc protocol
JP2000197043A (en) Data communication controller, its control method, image processing unit, its method and data communication system
JP2000023132A (en) Data communication controller, control method therefor and data communication system
CN108683874B (en) Method for focusing attention of video conference and storage device
CN113893524B (en) Cloud application processing system, method, device and equipment
US20080124041A1 (en) Adding video effects for video enabled applications
EP2924985A1 (en) Low-bit-rate video conference system and method, sending end device, and receiving end device
CN111818383B (en) Video data generation method, system, device, electronic equipment and storage medium
WO2014121477A1 (en) Video redirection method, device and system, and computer readable medium
KR20220109373A (en) Method for providing speech video
CN110113298B (en) Data transmission method, device, signaling server and computer readable medium
US20220092143A1 (en) Device Augmentation Of Real Time Communications
CN116614650A (en) Voice and picture synchronous private domain live broadcast method, system, equipment, chip and medium
CN114124911B (en) Live echo cancellation method, computer readable storage medium and electronic device
CN116627577A (en) Third party application interface display method
CN115514989A (en) Data transmission method, system and storage medium
US20240046951A1 (en) Speech image providing method and computing device for performing the same
CN113938457B (en) Method, system and equipment for cloud mobile phone to apply remote camera
CN113923396B (en) Remote desktop control method, device and medium based on video conference scene
JP6436762B2 (en) Information processing apparatus and service providing method
CN114793295B (en) Video processing method and device, electronic equipment and computer readable storage medium
US11830120B2 (en) Speech image providing method and computing device for performing the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination