CN107027046B - Audio and video processing method and device for assisting live broadcast - Google Patents


Info

Publication number
CN107027046B
Authority
CN
China
Prior art keywords
audio
video
stream
processing
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710240764.1A
Other languages
Chinese (zh)
Other versions
CN107027046A (en)
Inventor
库宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jinhong Network Media Co ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201710240764.1A priority Critical patent/CN107027046B/en
Publication of CN107027046A publication Critical patent/CN107027046A/en
Application granted granted Critical
Publication of CN107027046B publication Critical patent/CN107027046B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4305 Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H04N 21/439 Processing of audio elementary streams
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/47 End-user applications
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses an audio and video processing method and device for assisting live broadcast. The method comprises the following steps: reading an original video stream and an original audio stream through an auxiliary live audio and video application; processing the original video stream to obtain a video stream; processing the original audio stream to obtain an audio stream; aligning the timestamps of the video stream and the audio stream, and generating an audio and video stream data packet containing the aligned video stream and audio stream; establishing a cross-process communication channel between the auxiliary live audio and video application and the live application; and transmitting the audio and video stream data packet to the live application through the cross-process communication channel. The invention reduces the consumption of device resources, largely avoids stuttering and crashes during the live broadcast, makes the live broadcast effect more vivid, and improves the user experience.

Description

Audio and video processing method and device for assisting live broadcast
[ technical field ]
The invention relates to the technical field of live broadcasting, in particular to an audio and video processing method and device for assisting live broadcasting.
[ background of the invention ]
Nowadays, the live broadcast industry is increasingly popular. During a live broadcast, an anchor often needs to process his or her own voice, or add sound effects at the right moments, to liven the atmosphere of the broadcast; the anchor also wants what viewers see to enhance his or her appearance, maintain his or her on-screen image, and beautify the live environment. Fig. 3 is a working diagram of the cooperation of multiple applications in an existing live broadcast process, in which the anchor uses four or more live-broadcast-related applications covering atmosphere sound addition, background sound addition, sound effect adjustment and video adjustment. Live broadcasting itself places high configuration requirements on the anchor's machine and consumes considerable computer resources; if several pieces of software must additionally run in cooperation at the same time, the computer is often overloaded, causing stuttering during the broadcast, and a crash mid-broadcast is an unrecoverable loss, so the user experience is very poor. Moreover, the anchor must spend a great deal of energy learning the basic operation of the various programs and must switch back and forth between them during the broadcast, which leaves the anchor flustered and unable to concentrate on the broadcast, degrading its quality.
[ summary of the invention ]
In order to overcome the above technical problems or at least partially solve the above technical problems, the following technical solutions are proposed:
the invention provides an audio and video processing method for assisting live broadcast, which comprises the following steps:
reading an original video stream and an original audio stream through an auxiliary live audio and video application; processing an original video stream to obtain a video stream; processing an original audio stream to obtain an audio stream;
aligning the time stamps of the video stream and the audio stream, and generating an audio and video stream data packet containing the aligned video stream and audio stream;
establishing a cross-process communication channel between the auxiliary live audio and video application and the live application;
and transmitting the audio and video stream data packet to a live broadcast application through the cross-process communication channel.
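The four claimed steps can be sketched end to end as a minimal pipeline. This is an illustrative sketch only: all function names, the tuple layout `(kind, timestamp, data)`, and the string transformations are assumptions, not identifiers or formats from the patent.

```python
def process_video(original_video):
    # Stand-in for the claimed video processing (whitening, special effects).
    return [("video", ts, frame.upper()) for ts, frame in original_video]

def process_audio(original_audio):
    # Stand-in for the claimed audio processing (noise reduction, overlays).
    return [("audio", ts, sample.upper()) for ts, sample in original_audio]

def align_and_pack(video_stream, audio_stream):
    # Interleave both processed streams into one packet ordered by timestamp,
    # modelling the "aligned audio and video stream data packet".
    return sorted(video_stream + audio_stream, key=lambda item: item[1])

def assist_live(original_video, original_audio, send):
    # Read, process, align, then push the packet over the channel to the
    # live application; `send` stands in for the cross-process channel.
    video_stream = process_video(original_video)
    audio_stream = process_audio(original_audio)
    packet = align_and_pack(video_stream, audio_stream)
    send(packet)
    return packet
```

In a real implementation `send` would be an IPC primitive (pipe, shared memory, socket); a list `append` is used here only so the flow is testable.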
Specifically, the reading of the original video stream and the original audio stream by the auxiliary live audio/video application includes:
and reading an original video stream obtained by using a camera and an original audio stream obtained by using a sound card through an auxiliary live audio and video application.
Specifically, the processing the original video stream to obtain the video stream includes:
and receiving a video processing instruction of a user through the auxiliary live audio and video application, converting the original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
Specifically, the processing of the frame data by the video processing instruction includes:
acquiring a video algorithm from a video algorithm set according to the video processing instruction, and processing frame data by using the video algorithm; the video algorithm set comprises a video whitening algorithm and a video special effect algorithm.
Specifically, the processing the original audio stream to obtain the audio stream includes:
and receiving an audio processing instruction of a user through the auxiliary live audio and video application, and processing an original audio stream according to the audio processing instruction to obtain an audio stream.
Specifically, the processing the original audio stream according to the audio processing instruction includes:
acquiring an audio algorithm from the audio algorithm set according to the audio processing instruction, and processing the original audio stream by using the audio algorithm; the audio algorithm set comprises an audio noise reduction algorithm and a superposition special effect algorithm.
Specifically, receiving a video processing instruction of a user through the auxiliary live audio/video application, converting an original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into a video stream, including:
receiving a video processing instruction of a user through an application program layer of the auxiliary live audio and video application; transmitting a video processing instruction to a multimedia service layer of the auxiliary live audio and video application through a multimedia interface layer of the auxiliary live audio and video application; and converting the original video stream into frame data through a multimedia service layer, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
Specifically, receiving an audio processing instruction of a user through the auxiliary live audio/video application, and processing an original audio stream according to the audio processing instruction to obtain an audio stream, including:
receiving an audio processing instruction of a user through an application program layer of the auxiliary live audio and video application, and transmitting the audio processing instruction to a multimedia service layer of the auxiliary live audio and video application through a multimedia interface layer of the auxiliary live audio and video application; and processing the original audio stream through the multimedia service layer according to the audio processing instruction to obtain the audio stream.
Optionally, operation keys corresponding to the video processing instruction and the audio processing instruction are displayed on a display interface of the auxiliary live audio and video application.
Optionally, after the audio and video stream data packet is transmitted to the live broadcast application through the cross-process communication channel, the method further includes:
and uploading the audio and video streaming data packet to a live broadcast server through the live broadcast application according to a preset rule.
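The "preset rule" of packaging anchor-preset live data together with the audio and video packet for a single upload can be sketched as below. The JSON envelope, the function names, and the list-based stand-in for the live server are all assumptions for illustration.

```python
import json

def package_for_upload(av_packet, preset_data):
    # Per the preset rule: bundle the anchor's preset live data with the
    # audio/video stream packet so both are uploaded together.
    return json.dumps({"av": av_packet, "preset": preset_data})

def upload(av_packet, preset_data, server):
    # `server` stands in for the live-server upload endpoint.
    message = package_for_upload(av_packet, preset_data)
    server.append(message)
    return message
```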
Correspondingly, the invention also provides an audio and video processing device for assisting live broadcast, which comprises a reading processing module, a time stamping module, a communication module and a transmission module:
a reading processing module: used for reading an original video stream and an original audio stream through an auxiliary live audio and video application, processing the original video stream to obtain a video stream, and processing the original audio stream to obtain an audio stream;
a time stamping module: used for aligning the time stamps of the video stream and the audio stream, and generating an audio and video stream data packet containing the aligned video stream and audio stream;
a communication module: used for establishing a cross-process communication channel between the auxiliary live audio and video application and the live application;
a transmission module: used for transmitting the audio and video stream data packet to the live application through the cross-process communication channel.
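The four modules can be sketched as methods of one device class. Everything here is illustrative: the class name, the `"*"` marker standing in for processing, and the list standing in for the cross-process channel are assumptions, not the patent's implementation.

```python
class AssistLiveDevice:
    """Sketch of the claimed device: one object, four module roles."""

    def read_and_process(self, raw_video, raw_audio):
        # Reading processing module: read and process both raw streams.
        self.video = [(ts, f + "*") for ts, f in raw_video]
        self.audio = [(ts, s + "*") for ts, s in raw_audio]

    def align(self):
        # Time stamping module: merge both streams ordered by timestamp.
        self.packet = sorted(
            [("v", ts, f) for ts, f in self.video]
            + [("a", ts, s) for ts, s in self.audio],
            key=lambda item: item[1],
        )

    def open_channel(self, channel):
        # Communication module: establish the channel to the live app.
        self.channel = channel

    def transmit(self):
        # Transmission module: push the packet over the channel.
        self.channel.append(self.packet)
```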
Compared with the prior art, the invention has the following advantages:
in summary, the invention reads an original video stream and an original audio stream by using an auxiliary live broadcast audio/video application, obtains the video stream and the audio stream after performing relevant processing on the original video stream and the original audio stream, further aligns timestamps of the video stream and the audio stream to generate an audio/video stream data packet, and transmits the audio/video stream data packet to the live broadcast application through a cross-process communication channel to realize live broadcast, thereby realizing the processing operation of live broadcast audio/video on the same application program, saving the tedious operation of switching a plurality of live broadcast auxiliary applications back and forth, reducing the loss of equipment resources, avoiding the phenomena of blockage and downtime in the live broadcast process to a greater extent, and improving the user experience.
In the prior art, a plurality of live broadcast auxiliary applications are adopted, so that the operation is complicated, a plurality of processes are required to be started, and the resource loss is increased; in addition, data of a plurality of live auxiliary applications are processed respectively, the processing progress of the applications is full, and the time stamps of the video stream and the audio stream are difficult to align, so that the video stream and the audio stream can not be aligned in time point when applied; for example, audio streams are processed faster for applications, while video streams are processed slower for applications, with video lagging behind audio; the method only has one auxiliary live broadcast audio and video application, generates an audio and video stream data packet after aligning the time stamps of the video stream and the audio stream, and sends the audio and video stream data packet to the live broadcast application through the cross-process communication channel, thereby well solving the problems in the prior art. On the other hand, the auxiliary live broadcast audio and video application can be used as the auxiliary of different live broadcast applications, the two applications can be independently developed to reduce the development difficulty and shorten the development period, the two applications can respectively belong to different developers, the use flexibility of each other is improved, and a user can select to use or not use the auxiliary live broadcast audio and video application; and the auxiliary live audio and video application processes data, and transmits the audio and video stream data packet with the aligned timestamp to the live application through a cross-process communication channel, so that the connection of the two applications is realized.
In addition, the invention carries out relevant processing on the acquired original video stream and the original audio stream, specifically, obtains a relevant video algorithm by receiving a video processing instruction of a user, and processes frame data of the original video stream according to the video algorithm so as to achieve the purposes of adding the special effect of video beautification, video portrait whitening and figure slimming effects. Correspondingly, the related audio algorithm is obtained by receiving the audio processing instruction sent by the user, and the original audio stream is processed according to the audio algorithm, so that the purposes of audio noise reduction and sound effect superposition are achieved. The event played the greatest degree jump in the abundance of live effect, realized simultaneously video, audio frequency both "beautify", and then richened your live content more at live broadcasting in-process, lively anchor and spectator's interactive atmosphere, ask the atmosphere by fire for live effect is more lively.
In addition, the invention utilizes an auxiliary live broadcast audio and video application to process the original video stream and the original audio stream, and then transmits the processed original video stream and the original audio stream to the live broadcast application through a cross-program communication channel which is communicated with the live broadcast application through a request, and the live broadcast application uploads the audio and video stream data packet to a live broadcast server according to a preset rule, wherein the preset rule comprises that live broadcast data preset by a main broadcast are packaged into the audio and video data packet and uploaded together, so that the synchronous live broadcast function is realized, the memory occupation is reduced, the data transmission rate is improved, and the smoothness of the live broadcast is ensured.
In conclusion, the processing operation of live broadcast audio and video on the same application program is realized, the complex operation of switching a plurality of live broadcast auxiliary applications back and forth is omitted, the loss of equipment resources is saved, the phenomena of blockage and downtime of live broadcast equipment are avoided to the greatest extent, and the use experience of a user is improved; in addition, the live broadcast content is enriched in the live broadcast process, the interaction atmosphere of the anchor and audiences is activated, and the atmosphere is set off; the live broadcast effect is more vivid; in addition, live broadcast data preset by the anchor is packaged into the audio and video data packet and uploaded together, so that the synchronous live broadcast function is realized, the memory occupation is reduced, the data transmission rate is improved, and the live broadcast smoothness is ensured.
[ description of the drawings ]
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an embodiment of an audio and video processing method for assisting live broadcast in the present invention;
fig. 2 is a block diagram of an embodiment of an audio/video processing apparatus for assisting live broadcasting in the present invention;
FIG. 3 is a working diagram of collaborative coordination of multiple applications in a prior live broadcast process;
FIG. 4 is a working diagram of an auxiliary live audio video application of the present invention;
fig. 5 is an architecture diagram of an auxiliary live audio video application of the present invention.
[ detailed description ]
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In order to enable those skilled in the art to better understand the scheme of the embodiments of the invention, the invention is further described in detail with reference to the attached drawings and the embodiments. The following examples are illustrative only and are not to be construed as limiting the invention.
Referring to the flowchart of the embodiment shown in fig. 1, the audio and video processing method for assisting live broadcast provided by the invention includes the following steps:
s101, reading an original video stream and an original audio stream through an auxiliary live audio and video application; processing an original video stream to obtain a video stream; processing an original audio stream to obtain an audio stream;
in the embodiment of the invention, the original video stream obtained by using the camera and the original audio stream obtained by using the sound card are read by the auxiliary live audio and video application.
In this embodiment of the present invention, the processing an original video stream to obtain a video stream includes:
and receiving a video processing instruction of a user through the auxiliary live audio and video application, converting the original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
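The decompose-process-reassemble cycle described above can be sketched as follows. The function names, the use of strings as "frames", and the `+ "+white"` stand-in for a whitening pass are all illustrative assumptions.

```python
def video_stream_to_frames(stream):
    # Decompose the original video stream into per-frame data.
    return list(stream)

def apply_instruction(frames, instruction):
    # Apply the user's video processing instruction to every frame.
    return [instruction(frame) for frame in frames]

def frames_to_stream(frames):
    # Reassemble the processed frames back into a video stream.
    return tuple(frames)

def whiten(frame):
    # Illustrative stand-in for a whitening algorithm applied to one frame.
    return frame + "+white"

stream = frames_to_stream(
    apply_instruction(video_stream_to_frames(("f0", "f1")), whiten)
)
```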
In this embodiment of the present invention, the processing of frame data by the video processing instruction includes:
acquiring a video algorithm from a video algorithm set according to the video processing instruction, and processing frame data by using the video algorithm; the video algorithm set comprises a video whitening algorithm and a video special effect algorithm.
In this embodiment of the present invention, the processing an original audio stream to obtain an audio stream includes:
and receiving an audio processing instruction of a user through the auxiliary live audio and video application, and processing an original audio stream according to the audio processing instruction to obtain an audio stream.
In this embodiment of the present invention, the processing the original audio stream according to the audio processing instruction includes:
acquiring an audio algorithm from the audio algorithm set according to the audio processing instruction, and processing the original audio stream by using the audio algorithm; the audio algorithm set comprises an audio noise reduction algorithm and a superposition special effect algorithm.
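Selecting an algorithm from the algorithm set according to the received instruction is naturally a dispatch table. The instruction names and the toy algorithms below (a threshold "denoise" and an additive "overlay") are illustrative assumptions, not the patent's actual algorithms.

```python
AUDIO_ALGORITHMS = {
    # Algorithm set: instruction name -> algorithm.
    "denoise": lambda samples: [s for s in samples if abs(s) > 1],
    "overlay_effect": lambda samples: [s + 100 for s in samples],
}

def process_audio_stream(samples, instruction):
    # Look the algorithm up by the received instruction, then apply it
    # to the original audio stream.
    algorithm = AUDIO_ALGORITHMS[instruction]
    return algorithm(samples)
```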
Specifically, in the embodiment of the present invention, receiving a video processing instruction of a user through the auxiliary live audio/video application, converting an original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into a video stream includes:
receiving a video processing instruction of a user through an application program layer of the auxiliary live audio and video application; transmitting a video processing instruction to a multimedia service layer of the auxiliary live audio and video application through a multimedia interface layer of the auxiliary live audio and video application; and converting the original video stream into frame data through a multimedia service layer, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
In addition, receiving an audio processing instruction of a user through the auxiliary live audio/video application, and processing an original audio stream according to the audio processing instruction to obtain an audio stream, including:
receiving an audio processing instruction of a user through an application program layer of the auxiliary live audio and video application, and transmitting the audio processing instruction to a multimedia service layer of the auxiliary live audio and video application through a multimedia interface layer of the auxiliary live audio and video application; and processing the original audio stream through the multimedia service layer according to the audio processing instruction to obtain the audio stream.
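The instruction flow just described, from application program layer through multimedia interface layer down to multimedia service layer, can be sketched with three classes. The class and method names are illustrative; only the layer names come from the patent.

```python
class MultimediaServiceLayer:
    def handle(self, instruction, stream):
        # Bottom layer: actually runs the algorithm on the stream
        # (tagging items stands in for real processing).
        return [instruction + ":" + item for item in stream]

class MultimediaInterfaceLayer:
    # Middle layer: the only communication path between top and bottom.
    def __init__(self, service):
        self.service = service

    def forward(self, instruction, stream):
        return self.service.handle(instruction, stream)

class ApplicationLayer:
    # Top layer: receives the user's instruction and delegates downward.
    def __init__(self, interface):
        self.interface = interface

    def on_user_instruction(self, instruction, stream):
        return self.interface.forward(instruction, stream)

app = ApplicationLayer(MultimediaInterfaceLayer(MultimediaServiceLayer()))
```

Because the top layer never touches the service layer directly, either side can be replaced without changing the other, which is the isolation the three-layer design is after.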
In the embodiment of the present invention, the auxiliary live audio and video application is divided into three layers; as shown in the architecture diagram of fig. 5, these are an application program layer, a multimedia interface layer, and a multimedia service layer.
In the embodiment of the present invention, the application layer is responsible for processing the main logic of the service, including the main function interface of the application displayed to the user and the interface logic display when the user uses a specific function, and mainly includes:
a MixVideoManager provides a functional interface for related operations in the process of an application generating a mixed video stream;
a VideoContainer for containing relevant data input or output by the application;
the MixAudioManager provides a functional interface for related operations in the process of generating mixed audio streams by an application.
Each part of the application program layer presents its own interface layout, improving the effectiveness of the interface display and providing a more convenient platform for audio and video processing.
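The three application-layer components named above can be sketched as follows. The class names come from the patent; their bodies (a list-backed container and string joins standing in for mixing) are purely illustrative assumptions.

```python
class VideoContainer:
    # Holds data input to or output from the application.
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

class MixVideoManager:
    # Functional interface for producing a mixed video stream.
    def mix(self, container):
        return "|".join(container.items)

class MixAudioManager:
    # Functional interface for producing a mixed audio stream.
    def mix(self, container):
        return "+".join(container.items)
```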
In the embodiment of the invention, the multimedia service layer mainly encapsulates and extracts the related algorithm classes, including a whitening algorithm class, a video special effect algorithm class, a sound noise reduction algorithm class and the like. Each concrete algorithm implementation class is concerned only with its inputs and outputs, so that the application program layer and the multimedia service layer are completely isolated in logic. The concrete algorithm implementation classes mainly comprise: CCaptureVideo (video capture), CAccompanyMusic (accompaniment music), CMediaVideo (media video), CMoodMusic (atmosphere music), CDesktopVideo (desktop video) and CDecorationLayer (decoration layer). Among them, CAccompanyMusic, CMediaVideo and CMoodMusic run their related algorithm classes together to realize unification, while CDecorationLayer mainly extracts the related algorithms.
In the embodiment of the present invention, the application program layer and the multimedia service layer communicate bidirectionally through the multimedia interface layer, which is mainly responsible for providing the logical communication channel between the upper and lower layers and mainly comprises: IDecorationLayerNotify, IPlayAccompanyMusic, ICaptureVideoNotify, IPlayMediaVideo, IPaintPhoto, IDesktopVideoCB, ICaptureVideo, IAccompanyMusic, IMediaVideo, IMoodMusic, IDecorationLayer and IVCamMan. This division into unit modules simplifies the multimedia interface and reduces the occupation of interface resources.
As will be appreciated by those skilled in the art, the multimedia core module code (vcambiz) had become chaotic and redundant; with the application's three-layer architecture, subsequent maintenance of the program is more convenient, and an expandable, plug-in multimedia data stream service framework is realized.
In the embodiment of the invention, operation keys corresponding to the video processing instruction and the audio processing instruction are displayed on the display interface of the auxiliary live audio and video application.
In the embodiment of the invention, the auxiliary live audio and video application obtains the original video stream captured by the camera through a relevant reading function and decomposes it into frame data; each frame is equivalent to an image, so processing the frame data is equivalent to processing images. This involves image processing and image recognition techniques, where image processing generally refers to digital image processing: methods such as denoising, enhancement, restoration, segmentation and feature extraction applied to the image.
In addition, in the embodiment of the invention, the video algorithm set mainly comprises a video whitening algorithm and a video special effect algorithm. The whitening algorithm mainly processes the person-feature data within the captured frame data. The video special effect algorithm mainly overlays image data added by the anchor user onto the frame data of the captured original video stream to add video animation, or changes the image parameters of the frame data to add a video special effect; the main image parameters changed are image resolution, contrast, brightness, saturation, sharpness, color temperature, and the like.
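Two of the image-parameter changes mentioned, brightness and contrast, can be sketched over plain 0-255 pixel grids. These minimal formulas (shift-and-clamp for brightness, scale-around-a-pivot for contrast) are common textbook definitions assumed for illustration, not the patent's specific algorithms.

```python
def adjust_brightness(frame, delta):
    # frame: rows of 0-255 pixel values; shift each pixel, then clamp.
    return [[max(0, min(255, p + delta)) for p in row] for row in frame]

def adjust_contrast(frame, factor, pivot=128):
    # Scale each pixel's distance from the pivot to raise or lower contrast.
    return [
        [max(0, min(255, int(pivot + (p - pivot) * factor))) for p in row]
        for row in frame
    ]
```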
It should be noted that the method for processing the original audio data and/or the original video data provided by the present invention is not limited to the method provided in the embodiment of the present invention, and other methods may also be used, which is not limited by the present invention.
And S102, aligning the time stamps of the video stream and the audio stream, and generating an audio and video stream data packet containing the aligned video stream and audio stream.
In this embodiment of the present invention, the aligning the timestamps of the video stream and the audio stream specifically includes:
Timestamps corresponding to the processed video stream and audio stream are acquired, and alignment is performed according to the acquired timestamps. A timestamp is a character sequence that uniquely identifies a moment in time. For example, suppose the anchor starts broadcasting at 08:30:00, so the first frame of the original video stream carries the timestamp 08:30:00, and the original audio stream likewise starts with the timestamp 08:30:00. Without changing the duration of the frame data of the original video stream or the duration of the original audio stream data, suppose the processed video stream and audio stream are generated at 08:30:10 and 08:30:05 respectively, while the start positions of both processed streams remain marked with the timestamp 08:30:00. Then at 08:30:05 the audio stream data is temporarily stored in storage A; when the video stream data is finished at 08:30:10 it is temporarily stored in storage B; according to a triggered timestamp alignment instruction, the audio stream data and the video stream data are extracted from storages A and B into storage C and, based on the marked timestamp 08:30:00, synthesized into a complete audio and video stream data packet. Another possibility is that the audio stream data is temporarily stored in storage A at 08:30:05, the video stream data is also stored in storage A once finished at 08:30:10, and the audio and video stream data packet is synthesized within storage A.
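The alignment step can be sketched as follows: each processed stream carries its marked start timestamp, the streams are buffered until both are ready, and the muxer pairs them according to those timestamps. The dictionary layout and field names are illustrative assumptions for this example, not the patent's data format.

```python
from datetime import datetime

def align_streams(video, audio):
    """Align a processed video stream and audio stream on their marked
    start timestamps before muxing, as in the 08:30:00 example above.
    Each stream is a dict {'start': 'HH:MM:SS', 'data': [...]}."""
    fmt = "%H:%M:%S"
    v = datetime.strptime(video['start'], fmt)
    a = datetime.strptime(audio['start'], fmt)
    # offset (seconds) of the video start relative to the audio start;
    # 0.0 means the two marked timestamps already coincide
    offset = (v - a).total_seconds()
    return {'start': min(video['start'], audio['start']),
            'offset': offset,
            'packets': list(zip(video['data'], audio['data']))}
```

With both streams marked 08:30:00, the offset is zero and the frames and samples are paired directly into one audio/video stream data packet.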
In the embodiment of the present invention, in the working diagram of the auxiliary live audio/video application in fig. 4, the "live effect" column has three items: common sound effects, live animation and animation broadcasting. The common sound effects column gathers preset sound effect data, and update data sent by a cloud server is obtained in real time and stored in a local database. Suppose the control area of "forward playing" is touched: the auxiliary live audio/video application issues an acquisition instruction for the audio data corresponding to "forward playing", and that audio data is superimposed onto the original audio stream data collected during the live broadcast. Specifically, the time point at which the acquisition instruction is issued is aligned with the current time of the original audio stream data, and the audio data is superimposed at the aligned position in the original audio stream, realizing the superposition of sound effects. The live animation works in the same way: the animation data is obtained, the time point at which its extraction instruction is issued is recorded, and the animation data is superimposed onto the original video stream data at the position where that time point aligns with the current time, realizing the addition of the live animation. The invention therefore provides atmosphere sound addition, background sound addition, sound effect adjustment and video adjustment, offering the anchor a more convenient live broadcast experience.
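The superposition at an aligned position can be sketched as below: the offset into the original stream is derived from the trigger time of the acquisition instruction, and the effect clip is mixed in starting at that offset. Samples are plain integers and times are in seconds for clarity; a real implementation would mix PCM data at the stream's sample rate, and all names here are illustrative.

```python
def overlay_effect(original, effect, trigger_time, stream_start, rate=1):
    """Superimpose a sound-effect clip onto the original audio samples
    at the offset given by (trigger time - stream start), realizing the
    sound-effect superposition described above."""
    pos = int((trigger_time - stream_start) * rate)
    mixed = list(original)
    for i, s in enumerate(effect):
        if pos + i < len(mixed):
            mixed[pos + i] += s  # naive additive mix at aligned position
    return mixed
```

For example, triggering a two-sample effect at second 2 of a stream that started at second 0 mixes it into positions 2 and 3 of the original samples.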
It should be noted that, the method for superimposing audio data and/or video data and the method for aligning timestamps provided by the present invention are not limited to the method provided in the embodiment of the present invention, and other methods are also possible, which are not limited by the present invention.
And S103, communicating the cross-process communication channel of the auxiliary live broadcast audio and video application and the live broadcast application.
And S104, transmitting the audio and video stream data packet to a live broadcast application through the cross-process communication channel.
In the embodiment of the present invention, the cross-process communication refers to data transmission between processes, that is, data exchange between processes. Wherein the cross-process communication mode comprises the following steps: broadcast, interface access, object access, shared access.
Taking the communication between the auxiliary live audio/video application (denoted program A) and the live application (denoted program B) as an example: program A is started, the audio and video stream data packet it transmits is defined as event C, and a broadcast is sent to program B; program B, at runtime, creates a class that inherits the trigger of event C, receives the broadcast from program A, and thereby establishes a cross-process communication channel between A and B.
The interface access is implemented as follows: program A triggers event C; program B, with the relevant access permission, accesses an externally exposed interface of program A, establishing a cross-process communication channel between A and B and obtaining the data corresponding to program A's event C.
The object access is implemented as follows: program B is created with a new activity named activity D; program A is then created, and a new event corresponding to event C is created as D in program B; triggering the instruction corresponding to activity D accesses program A to receive the data of event C, establishing a cross-process communication channel between A and B.
The shared access is implemented as follows: the data corresponding to event C triggered by program A is stored in a preset memory for which access is established; program B runs, establishes access to the same preset memory, acquires the audio/video stream data packet corresponding to event C from it, and thereby establishes a cross-process communication channel between A and B.
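The common shape of all four modes — program A emits the event-C packet on one end of a channel, program B receives it in another OS process — can be sketched with two genuine processes connected by a pipe. The pipe stands in for the patent's broadcast/interface/shared-memory channel, and the event and field names are illustrative assumptions.

```python
import json
import subprocess
import sys

# Program A: triggers event C by emitting the audio/video stream data
# packet on its end of the channel (stdout here stands in for the
# broadcast / preset shared memory of the embodiment).
PROGRAM_A = r"""
import json, sys
packet = {'event': 'C', 'payload': 'audio+video packet'}
sys.stdout.write(json.dumps(packet))
"""

def exchange_packet():
    """Run program A as a separate process and receive its packet in
    the current process, which plays the role of program B; the pipe
    between the two OS processes is the cross-process channel."""
    result = subprocess.run([sys.executable, '-c', PROGRAM_A],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)
```

On an actual mobile platform this hand-off would use the platform's IPC primitives rather than a spawned interpreter, but the event/channel structure is the same.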
In this embodiment of the present invention, step S104 further includes:
and uploading the audio and video streaming data packet to a live broadcast server through the live broadcast application according to a preset rule.
In the embodiment of the present invention, the preset rule is the behavior specification for uploading the audio/video stream data packet to a live broadcast server. The specific process is to generate a detection instruction that checks the data integrity of the packet: the packet contains not only the video stream and the audio stream but also broadcast data and barrage data sent by the anchor during the live broadcast, and when broadcast data and/or barrage data are present they are merged into the packet. When the upload instruction is executed on the packet, a conversion instruction also needs to be triggered to convert the packet into an electrical signal suitable for transmission.
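The preset rule above — integrity check, optional merge of broadcast/barrage data, then conversion for transmission — can be sketched as a single packaging step. The field names and the boolean standing in for signal conversion are illustrative assumptions.

```python
def build_upload_packet(av_packet, broadcast=None, barrage=None):
    """Apply the preset rule before upload: verify the audio/video
    stream packet is complete, merge any broadcast and barrage data,
    and mark the packet as converted for sending."""
    if 'video' not in av_packet or 'audio' not in av_packet:
        raise ValueError('incomplete audio/video stream packet')
    packet = dict(av_packet)
    if broadcast is not None:
        packet['broadcast'] = broadcast
    if barrage is not None:
        packet['barrage'] = barrage
    packet['encoded_for_transport'] = True  # stand-in for signal conversion
    return packet
```

An incomplete packet (missing either stream) fails the detection step and is rejected before any upload is attempted.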
It should be understood that, in the embodiment of the present invention, the methods in steps S101 to S104 are all executed by the auxiliary live audio/video application, which transmits the audio/video stream data packet generated in steps S101 and S102 to the live application (step S104) through the cross-process communication channel opened in step S103.
It should be noted that the method for cross-process communication provided by the present invention is not limited to the method provided in the embodiment of the present invention, and other methods may also be provided, and the present invention is not limited to this.
Referring to fig. 2, a block diagram of an embodiment of an auxiliary live audio/video processing device is shown, where the auxiliary live audio/video processing device according to the present invention includes:
the reading processing module 11: the method comprises the steps of reading an original video stream and an original audio stream through an auxiliary live audio and video application; processing an original video stream to obtain a video stream; and processing the original audio stream to obtain the audio stream.
In the embodiment of the invention, the original video stream obtained by using the camera and the original audio stream obtained by using the sound card are read by the auxiliary live audio and video application.
In this embodiment of the present invention, the processing an original video stream to obtain a video stream includes:
a video processing unit: the system is used for receiving a video processing instruction of a user through the auxiliary live audio and video application, converting an original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into a video stream.
In an embodiment of the present invention, the video processing unit includes:
a video algorithm acquisition subunit: the video processing device is used for acquiring a video algorithm from a video algorithm set according to the video processing instruction and processing frame data by using the video algorithm; the video algorithm set comprises a video whitening algorithm and a video special effect algorithm.
In this embodiment of the present invention, the processing an original audio stream to obtain an audio stream includes:
an audio processing unit: and the audio processing device is used for receiving an audio processing instruction of a user through the auxiliary live audio and video application and processing an original audio stream according to the audio processing instruction to obtain an audio stream.
In an embodiment of the present invention, the audio processing unit includes:
audio algorithm subunit: the audio processing device is used for acquiring an audio algorithm from the audio algorithm set according to the audio processing instruction and processing an original audio stream by using the audio algorithm; the audio algorithm set comprises an audio noise reduction algorithm and a superposition special effect algorithm.
Specifically, in this embodiment of the present invention, the video processing unit further includes:
a first application layer subunit: the video processing instruction of a user is received through the application program layer of the auxiliary live audio and video application;
a first interface layer subunit: the multimedia service layer is used for transmitting the video processing instruction to the auxiliary live audio and video application through the multimedia interface layer of the auxiliary live audio and video application;
the first service layer sub-unit: and the video processing device is used for converting the original video stream into frame data through the multimedia service layer, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
Further, the audio processing unit further comprises:
a second application layer subunit: the audio processing instruction of a user is received through an application program layer of the auxiliary live audio and video application;
a second interface layer subunit: the multimedia service layer is used for transmitting the audio processing instruction to the auxiliary live audio and video application through the multimedia interface layer of the auxiliary live audio and video application;
the second service layer subunit: and the audio processing module is used for processing the original audio stream through the multimedia service layer according to the audio processing instruction to obtain the audio stream.
In the embodiment of the invention, the operation keys corresponding to the video processing instruction and the audio processing instruction are displayed on the display interface of the auxiliary live audio/video application.
In the embodiment of the invention, the auxiliary live audio/video application obtains the original video stream acquired by the camera through a relevant reading function and decomposes it into frame data, where each frame is equivalent to an image, so that processing the frame data is equivalent to processing an image. This involves both image processing and image recognition techniques; image processing here generally refers to digital image processing, namely methods such as noise removal, enhancement, restoration, segmentation and feature extraction performed on the image.
In addition, in the embodiment of the invention, the video algorithm set mainly comprises a video whitening algorithm and a video special effect algorithm. The whitening algorithm mainly performs algorithmic processing on the person-feature data in the acquired frame data. The video special effect algorithm superimposes image data added by the anchor user onto the frame data of the acquired original video stream to add a video animation, or adds a video special effect by changing image parameters of the frame data; the image parameters mainly changed are image resolution, contrast, brightness, saturation, sharpness, color temperature, and the like.
In the embodiment of the present invention, the application layer subunit is also responsible for the main service logic, including the application's main function interface displayed to the user and the interface logic shown when the user uses a specific function. The service layer subunit mainly encapsulates and extracts the related algorithm classes, including the whitening algorithm class, the video special effect algorithm class, the sound noise reduction algorithm class and the like; each concrete algorithm implementation class is concerned only with its inputs and outputs, which logically isolates the application layer from the multimedia service layer completely. The application layer subunit and the service layer subunit communicate bidirectionally through the interface layer subunit, which is mainly responsible for providing the logical communication channel between the upper and lower layers.
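The three-layer isolation described above can be sketched with one class per layer: the application layer receives the user's instruction, the interface layer forwards it, and the service layer applies the algorithm while exposing only inputs and outputs. The class names mirror the patent's layers, but the toy whitening rule (brightening each pixel) is an illustrative assumption.

```python
class MultimediaServiceLayer:
    """Service layer: encapsulates the algorithm classes and is
    concerned only with inputs and outputs."""
    def handle(self, instruction, frame):
        if instruction == 'whiten':
            # toy whitening: brighten each pixel, clamped to 255
            return [min(255, p + 20) for p in frame]
        return frame

class MultimediaInterfaceLayer:
    """Interface layer: the logical communication channel between the
    application layer and the service layer."""
    def __init__(self, service):
        self.service = service
    def forward(self, instruction, frame):
        return self.service.handle(instruction, frame)

class ApplicationLayer:
    """Application layer: receives the user's processing instruction
    from the display interface and passes it downward."""
    def __init__(self, interface):
        self.interface = interface
    def on_user_instruction(self, instruction, frame):
        return self.interface.forward(instruction, frame)
```

Because the application layer only ever talks to the interface layer, the service layer's algorithm classes can be swapped without touching the interface logic shown to the user.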
It should be noted that the method for processing the original audio data and/or the original video data provided by the present invention is not limited to the method provided in the embodiment of the present invention, and other methods may also be used, which is not limited by the present invention.
Time stamping module 12: and the time stamps of the video stream and the audio stream are aligned, and an audio and video stream data packet containing the aligned video stream and audio stream is generated.
In this embodiment of the present invention, the aligning the timestamps of the video stream and the audio stream specifically includes:
Timestamps corresponding to the processed video stream and audio stream are acquired, and alignment is performed according to the acquired timestamps. A timestamp is a character sequence that uniquely identifies a moment in time. For example, suppose the anchor starts broadcasting at 08:30:00, so the first frame of the original video stream carries the timestamp 08:30:00, and the original audio stream likewise starts with the timestamp 08:30:00. Without changing the duration of the frame data of the original video stream or the duration of the original audio stream data, suppose the processed video stream and audio stream are generated at 08:30:10 and 08:30:05 respectively, while the start positions of both processed streams remain marked with the timestamp 08:30:00. Then at 08:30:05 the audio stream data is temporarily stored in storage A; when the video stream data is finished at 08:30:10 it is temporarily stored in storage B; according to a triggered timestamp alignment instruction, the audio stream data and the video stream data are extracted from storages A and B into storage C and, based on the marked timestamp 08:30:00, synthesized into a complete audio and video stream data packet. Another possibility is that the audio stream data is temporarily stored in storage A at 08:30:05, the video stream data is also stored in storage A once finished at 08:30:10, and the audio and video stream data packet is synthesized within storage A.
In the embodiment of the present invention, in the working diagram of the auxiliary live audio/video application in fig. 4, the "live effect" column has three items: common sound effects, live animation and animation broadcasting. The common sound effects column gathers preset sound effect data, and update data sent by a cloud server is obtained in real time and stored in a local database. Suppose the control area of "forward playing" is touched: the auxiliary live audio/video application issues an acquisition instruction for the audio data corresponding to "forward playing", and that audio data is superimposed onto the original audio stream data collected during the live broadcast. Specifically, the time point at which the acquisition instruction is issued is aligned with the current time of the original audio stream data, and the audio data is superimposed at the aligned position in the original audio stream, realizing the superposition of sound effects. The live animation works in the same way: the animation data is obtained, the time point at which its extraction instruction is issued is recorded, and the animation data is superimposed onto the original video stream data at the position where that time point aligns with the current time, realizing the addition of the live animation.
It should be noted that, the method for superimposing audio data and/or video data and the method for aligning timestamps provided by the present invention are not limited to the method provided in the embodiment of the present invention, and other methods are also possible, which are not limited by the present invention.
The communication module 13: and the cross-process communication channel is used for communicating the auxiliary live audio and video application with the live application.
The transmission module 14: and the cross-process communication channel is used for transmitting the audio and video stream data packet to a live broadcast application.
In the embodiment of the present invention, the cross-process communication refers to data transmission between processes, that is, data exchange between processes. Wherein the cross-process communication mode comprises the following steps: broadcast, interface access, object access, shared access.
Taking the communication between the auxiliary live audio/video application (denoted program A) and the live application (denoted program B) as an example: program A is started, the audio and video stream data packet it transmits is defined as event C, and a broadcast is sent to program B; program B, at runtime, creates a class that inherits the trigger of event C, receives the broadcast from program A, and thereby establishes a cross-process communication channel between A and B.
The interface access is implemented as follows: program A triggers event C; program B, with the relevant access permission, accesses an externally exposed interface of program A, establishing a cross-process communication channel between A and B and obtaining the data corresponding to program A's event C.
The object access is implemented as follows: program B is created with a new activity named activity D; program A is then created, and a new event corresponding to event C is created as D in program B; triggering the instruction corresponding to activity D accesses program A to receive the data of event C, establishing a cross-process communication channel between A and B.
The shared access is implemented as follows: the data corresponding to event C triggered by program A is stored in a preset memory for which access is established; program B runs, establishes access to the same preset memory, acquires the audio/video stream data packet corresponding to event C from it, and thereby establishes a cross-process communication channel between A and B.
In the embodiment of the present invention, the transmission module 14 further includes:
an uploading unit: and uploading the audio and video streaming data packet to a live broadcast server through the live broadcast application according to a preset rule.
In the embodiment of the present invention, the preset rule is the behavior specification for uploading the audio/video stream data packet to a live broadcast server. The specific process is to generate a detection instruction that checks the data integrity of the packet: the packet contains not only the video stream and the audio stream but also broadcast data and barrage data sent by the anchor during the live broadcast, and when broadcast data and/or barrage data are present they are merged into the packet. When the upload instruction is executed on the packet, a conversion instruction also needs to be triggered to convert the packet into an electrical signal suitable for transmission.
It should be understood that, in the embodiment of the present invention, modules 11 to 14 all belong to the auxiliary live audio/video application, which uses the transmission module 14 to transmit the audio/video stream data packet — generated by the reading processing module 11 together with the timestamp module 12 — to the live application through the cross-process communication channel of the communication module 13.
It should be noted that the method for cross-process communication provided by the present invention is not limited to the method provided in the embodiment of the present invention, and other methods may also be provided, and the present invention is not limited to this.
In summary, the invention reads an original video stream and an original audio stream with an auxiliary live audio/video application, processes them to obtain the video stream and the audio stream, aligns the timestamps of the two streams to generate an audio/video stream data packet, and transmits the packet to the live application through a cross-process communication channel to realize the live broadcast. Live audio/video processing is thus carried out within a single application, eliminating the tedious back-and-forth switching between multiple live auxiliary applications, reducing the consumption of device resources, largely avoiding stuttering and crashes during the live broadcast, and improving the user experience.
In addition, the invention processes the acquired original video stream and original audio stream. Specifically, a video algorithm is obtained according to a received user video processing instruction and applied to the frame data of the original video stream, achieving video beautification effects such as portrait whitening and figure slimming. Correspondingly, an audio algorithm is obtained according to a received user audio processing instruction and applied to the original audio stream, achieving audio noise reduction and sound effect superposition. The live broadcast effect is thereby greatly enriched: video and audio are beautified, the live content becomes richer, the interactive atmosphere between the anchor and the audience is activated, and the live broadcast becomes more vivid.
In addition, the invention uses the auxiliary live audio/video application to process the original video stream and the original audio stream, transmits the result to the live application through the cross-process communication channel opened on request, and has the live application upload the audio/video stream data packet to a live broadcast server according to a preset rule. The preset rule includes packaging live data preset by the anchor into the audio/video data packet and uploading them together, realizing synchronous live broadcast, reducing memory occupation, improving the data transmission rate and ensuring the smoothness of the live broadcast.
In conclusion, live audio/video processing is realized within a single application, eliminating the complex back-and-forth switching between multiple live auxiliary applications, saving device resources, avoiding stuttering and downtime of the live equipment to the greatest extent, and improving the user experience. Moreover, the live content is enriched during the broadcast, the interaction between the anchor and the audience is activated, and the live broadcast effect is more vivid; and the live data preset by the anchor is packaged into the audio/video data packet and uploaded together, realizing synchronous live broadcast, reducing memory occupation, improving the data transmission rate and ensuring the smoothness of the live broadcast.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
Although a few exemplary embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these exemplary embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims (9)

1. A live audio and video processing method is characterized by comprising the following steps:
reading an original video stream and an original audio stream through an auxiliary live audio and video application, wherein the auxiliary live audio and video application comprises an application program layer, a multimedia interface layer and a multimedia service layer;
processing an original video stream to obtain a video stream; processing an original audio stream to obtain an audio stream; the method comprises the following steps: receiving a video processing instruction of a user through the application program layer, transmitting the video processing instruction to the multimedia service layer through the multimedia interface layer, converting the original video stream into frame data through the multimedia service layer, processing the frame data according to the video processing instruction, and reassembling the processed frame data into a video stream; receiving an audio processing instruction of a user through the application program layer, and transmitting the audio processing instruction to the multimedia service layer through the multimedia interface layer; processing the original audio stream according to the audio processing instruction to obtain an audio stream;
aligning the timestamps of the video stream and the audio stream through the auxiliary live broadcast audio/video application to generate an audio/video stream data packet containing the aligned video stream and audio stream;
a cross-process communication channel for communicating the auxiliary live audio and video application with the live application, wherein the cross-process communication channel represents a channel for data exchange between processes;
and transmitting the audio and video stream data packet to a live broadcast application through the cross-process communication channel.
2. The method of claim 1, wherein reading the original video stream and the original audio stream by the secondary live audio and video application comprises:
and reading an original video stream obtained by using a camera and an original audio stream obtained by using a sound card through an auxiliary live audio and video application.
3. The method of claim 1, wherein processing the original video stream to obtain the video stream comprises:
and receiving a video processing instruction of a user through the auxiliary live audio and video application, converting the original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
4. The method of claim 3, wherein the video processing instructions process frame data, comprising:
acquiring a video algorithm from a video algorithm set according to the video processing instruction, and processing frame data by using the video algorithm; the video algorithm set comprises a video whitening algorithm and a video special effect algorithm.
5. The method of claim 3, wherein processing the original audio stream to obtain an audio stream comprises:
and receiving an audio processing instruction of a user through the auxiliary live audio and video application, and processing an original audio stream according to the audio processing instruction to obtain an audio stream.
6. The method of claim 5, wherein processing the original audio stream according to the audio processing instruction comprises:
acquiring an audio algorithm from an audio algorithm set according to the audio processing instruction, and processing the original audio stream using the audio algorithm, wherein the audio algorithm set comprises an audio noise reduction algorithm and a special-effect superposition algorithm.
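Likewise for claim 6, the audio algorithm set can be sketched as two stand-in functions over 16-bit PCM samples: a crude amplitude gate for noise reduction, and additive mixing for the superposition effect. Both are illustrative, not the patent's actual algorithms:

```python
def noise_gate(samples, threshold=500):
    """Zero out samples whose amplitude falls below the threshold
    (a crude stand-in for an audio noise reduction algorithm)."""
    return [s if abs(s) >= threshold else 0 for s in samples]

def overlay_effect(samples, effect):
    """Mix an effect track into the stream sample-by-sample,
    clamping the sum to the signed 16-bit range."""
    mixed = []
    for i, s in enumerate(samples):
        e = effect[i] if i < len(effect) else 0
        mixed.append(max(-32768, min(32767, s + e)))
    return mixed
```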
7. The method according to claim 5, wherein operation keys corresponding to the video processing instruction and the audio processing instruction are displayed on a display interface of the auxiliary live audio/video application.
8. The method of claim 1, further comprising, after transmitting the audio/video stream data packet to the live application through the cross-process communication channel:
uploading, through the live application, the audio/video stream data packet to a live broadcast server according to a preset rule.
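One plausible reading of claim 8's "preset rule" is batched uploading. A minimal sketch under that assumption; `send` stands in for the live application's actual upload call, and the batch size is arbitrary:

```python
class PacketUploader:
    """Buffers packets and flushes them to a sender in fixed-size
    batches -- a simple stand-in for the claim's 'preset rule'."""

    def __init__(self, send, batch_size=4):
        self.send = send          # callable receiving a list of packets
        self.batch_size = batch_size
        self.buffer = []

    def push(self, packet):
        # Accumulate until the rule's threshold is reached, then flush.
        self.buffer.append(packet)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # Send whatever is buffered (also called at stream end).
        if self.buffer:
            self.send(list(self.buffer))
            self.buffer.clear()
```

Other preset rules (time-interval flushing, bitrate-adaptive pacing) would slot into `push` the same way.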
9. An audio/video processing device for assisting live broadcast, characterized by comprising:
a reading processing module, configured to read an original video stream and an original audio stream through an auxiliary live audio/video application comprising an application layer, a multimedia interface layer, and a multimedia service layer; to process the original video stream to obtain a video stream; and to process the original audio stream to obtain an audio stream; specifically: receiving a video processing instruction from a user through the application layer, transmitting the video processing instruction to the multimedia service layer through the multimedia interface layer, converting the original video stream into frame data through the multimedia service layer, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream; and receiving an audio processing instruction from a user through the application layer, transmitting the audio processing instruction to the multimedia service layer through the multimedia interface layer, and processing the original audio stream according to the audio processing instruction to obtain the audio stream;
a timestamping module, configured to align the timestamps of the video stream and the audio stream through the auxiliary live audio/video application to generate an audio/video stream data packet containing the aligned video stream and audio stream;
a communication module, configured to establish a cross-process communication channel between the auxiliary live audio/video application and the live application, wherein the cross-process communication channel is a channel for exchanging data between processes;
a transmission module, configured to transmit the audio/video stream data packet to the live application through the cross-process communication channel.
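The cross-process communication channel used by the communication and transmission modules can be sketched with an OS pipe carrying newline-delimited JSON packets. In the real system the two ends would be held by the auxiliary application's process and the live application's process; the framing scheme and all names here are illustrative assumptions:

```python
import json
import os

def make_channel():
    # An OS pipe: (read_fd, write_fd). Each end would be inherited by a
    # different process in the real system.
    return os.pipe()

def send_packet(write_fd, packet):
    # Frame the packet as a single newline-terminated JSON line.
    os.write(write_fd, json.dumps(packet).encode("utf-8") + b"\n")

def recv_packet(read_fd):
    # Read until one full line has arrived, then decode it.
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = os.read(read_fd, 4096)
        if not chunk:
            break
        buf += chunk
    return json.loads(buf)
```

Newline framing keeps the example simple; a production channel would more likely use length-prefixed binary frames, since encoded media payloads are not JSON-friendly.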
CN201710240764.1A 2017-04-13 2017-04-13 Audio and video processing method and device for assisting live broadcast Active CN107027046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710240764.1A CN107027046B (en) 2017-04-13 2017-04-13 Audio and video processing method and device for assisting live broadcast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710240764.1A CN107027046B (en) 2017-04-13 2017-04-13 Audio and video processing method and device for assisting live broadcast

Publications (2)

Publication Number Publication Date
CN107027046A CN107027046A (en) 2017-08-08
CN107027046B true CN107027046B (en) 2020-03-10

Family

ID=59526696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710240764.1A Active CN107027046B (en) 2017-04-13 2017-04-13 Audio and video processing method and device for assisting live broadcast

Country Status (1)

Country Link
CN (1) CN107027046B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259989B (en) * 2018-01-19 2021-09-17 广州方硅信息技术有限公司 Video live broadcast method, computer readable storage medium and terminal equipment
CN108540732B (en) * 2018-05-07 2020-09-04 广州酷狗计算机科技有限公司 Method and device for synthesizing video
CN109640141B (en) * 2018-12-19 2021-07-20 深圳银澎云计算有限公司 Audio timestamp correction method and device and audio and video terminal
CN110213640B (en) * 2019-06-28 2021-05-14 香港乐蜜有限公司 Virtual article generation method, device and equipment
CN110393921B (en) * 2019-08-08 2022-08-26 腾讯科技(深圳)有限公司 Cloud game processing method and device, terminal, server and storage medium
CN110971930B (en) * 2019-12-19 2023-03-10 广州酷狗计算机科技有限公司 Live virtual image broadcasting method, device, terminal and storage medium
CN112004100B (en) * 2020-08-31 2022-02-11 上海竞达科技有限公司 Driving method for integrating multiple audio and video sources into single audio and video source
CN112203106B (en) * 2020-10-10 2023-03-31 深圳市捷视飞通科技股份有限公司 Live broadcast teaching method and device, computer equipment and storage medium
CN112258912B (en) * 2020-10-10 2022-08-16 深圳市捷视飞通科技股份有限公司 Network interactive teaching method, device, computer equipment and storage medium
CN112637614B (en) * 2020-11-27 2023-04-21 深圳市创成微电子有限公司 Network direct broadcast video processing method, processor, device and readable storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102447893A (en) * 2010-09-30 2012-05-09 北京沃安科技有限公司 Method and system for real-time acquisition and release of videos of mobile phone
CN103686450A (en) * 2013-12-31 2014-03-26 广州华多网络科技有限公司 Video processing method and system
CN104053014A (en) * 2013-03-13 2014-09-17 腾讯科技(北京)有限公司 Live broadcast system and method based on mobile terminal, and mobile terminal
CN104717552A (en) * 2015-03-31 2015-06-17 北京奇艺世纪科技有限公司 Method and device for issuing audio/video for live broadcast
CN105407361A (en) * 2015-11-09 2016-03-16 广州华多网络科技有限公司 Audio and video live broadcast data processing method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
AU2002237748A1 (en) * 2000-10-19 2002-05-21 Loudeye Technologies, Inc. System and method for selective insertion of content into streaming media


Also Published As

Publication number Publication date
CN107027046A (en) 2017-08-08

Similar Documents

Publication Publication Date Title
CN107027046B (en) Audio and video processing method and device for assisting live broadcast
CN107027050B (en) Audio and video processing method and device for assisting live broadcast
WO2019205872A1 (en) Video stream processing method and apparatus, computer device and storage medium
CN108566558A (en) Video stream processing method, device, computer equipment and storage medium
CN110475150A (en) The rendering method and device of virtual present special efficacy, live broadcast system
CN110536151A (en) The synthetic method and device of virtual present special efficacy, live broadcast system
CN104202677B (en) Support the method and apparatus of the multihead display and control of multiwindow application
WO2020200302A1 (en) Live broadcast method and apparatus, and computer device and storage medium
CN106448297A (en) Cloud audio-video remote interactive class system
US20060242676A1 (en) Live streaming broadcast method, live streaming broadcast device, live streaming broadcast system, program, recording medium, broadcast method, and broadcast device
CN103947221A (en) User interface display method and device using same
CN103973732A (en) PPT playing method and device
EP3024223B1 (en) Videoconference terminal, secondary-stream data accessing method, and computer storage medium
CN110149518A (en) Processing method, system, device, equipment and the storage medium of media data
CN104837051A (en) Video playing method and client side
JP6089828B2 (en) Information processing system and information processing method
CN113965813A (en) Video playing method and system in live broadcast room and computer equipment
CN108876866B (en) Media data processing method, device and storage medium
US20180192064A1 (en) Transcoder for real-time compositing
CN113473194B (en) Intelligent device and response method
CN113489938A (en) Virtual conference control method, intelligent device and terminal device
CN116627577A (en) Third party application interface display method
US20220210520A1 (en) Online video data output method, system, and cloud platform
US20210176534A1 (en) Information processing apparatus, information processing method, and information processing program
CN106303643A (en) Remote control thereof and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210114

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511442 29 floor, block B-1, Wanda Plaza, Huambo business district, Panyu District, Guangzhou, Guangdong.

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20220111

Address after: 511442 block B1, Wanda Plaza, Wanbo Second Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou Jinhong network media Co.,Ltd.

Address before: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.
