[ summary of the invention ]
In order to overcome the above technical problems or at least partially solve the above technical problems, the following technical solutions are proposed:
The invention provides an audio and video processing method for assisting live broadcast, which comprises the following steps:
reading an original video stream and an original audio stream through an auxiliary live audio and video application; processing an original video stream to obtain a video stream; processing an original audio stream to obtain an audio stream;
aligning the time stamps of the video stream and the audio stream, and generating an audio and video stream data packet containing the aligned video stream and audio stream;
establishing a cross-process communication channel connecting the auxiliary live audio and video application with the live application;
and transmitting the audio and video stream data packet to a live broadcast application through the cross-process communication channel.
Specifically, the reading of the original video stream and the original audio stream by the auxiliary live audio/video application includes:
reading, through the auxiliary live audio and video application, an original video stream obtained by using a camera and an original audio stream obtained by using a sound card.
Specifically, the processing the original video stream to obtain the video stream includes:
receiving a video processing instruction of a user through the auxiliary live audio and video application, converting the original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
Specifically, the processing of the frame data by the video processing instruction includes:
acquiring a video algorithm from a video algorithm set according to the video processing instruction, and processing frame data by using the video algorithm; the video algorithm set comprises a video whitening algorithm and a video special effect algorithm.
Specifically, the processing the original audio stream to obtain the audio stream includes:
receiving an audio processing instruction of a user through the auxiliary live audio and video application, and processing the original audio stream according to the audio processing instruction to obtain the audio stream.
Specifically, the processing the original audio stream according to the audio processing instruction includes:
acquiring an audio algorithm from the audio algorithm set according to the audio processing instruction, and processing the original audio stream by using the audio algorithm; the audio algorithm set comprises an audio noise reduction algorithm and a superposition special effect algorithm.
Specifically, receiving a video processing instruction of a user through the auxiliary live audio/video application, converting an original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into a video stream, including:
receiving a video processing instruction of a user through an application program layer of the auxiliary live audio and video application; transmitting a video processing instruction to a multimedia service layer of the auxiliary live audio and video application through a multimedia interface layer of the auxiliary live audio and video application; and converting the original video stream into frame data through a multimedia service layer, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
Specifically, receiving an audio processing instruction of a user through the auxiliary live audio/video application, and processing an original audio stream according to the audio processing instruction to obtain an audio stream, including:
receiving an audio processing instruction of a user through an application program layer of the auxiliary live audio and video application, and transmitting the audio processing instruction to a multimedia service layer of the auxiliary live audio and video application through a multimedia interface layer of the auxiliary live audio and video application; and processing the original audio stream through the multimedia service layer according to the audio processing instruction to obtain the audio stream.
Optionally, operation keys corresponding to the video processing instruction and the audio processing instruction are displayed on a display interface of the auxiliary live audio/video application.
Optionally, after the audio/video stream data packet is transmitted to the live broadcast application through the cross-process communication channel, the method further includes:
uploading the audio and video stream data packet to a live broadcast server through the live broadcast application according to a preset rule.
Correspondingly, the invention also provides an audio and video processing device for assisting live broadcast, which comprises a reading processing module, a time stamping module, a communication module, and a transmission module;
the reading processing module is configured to read an original video stream and an original audio stream through an auxiliary live audio and video application, process the original video stream to obtain a video stream, and process the original audio stream to obtain an audio stream;
the time stamping module is configured to align the time stamps of the video stream and the audio stream, and generate an audio and video stream data packet containing the aligned video stream and audio stream;
the communication module is configured to establish a cross-process communication channel connecting the auxiliary live audio and video application with the live application;
the transmission module is configured to transmit the audio and video stream data packet to the live broadcast application through the cross-process communication channel.
Compared with the prior art, the invention has the following advantages:
In summary, the invention reads an original video stream and an original audio stream using an auxiliary live broadcast audio/video application, processes them to obtain a video stream and an audio stream, aligns the timestamps of the two streams to generate an audio/video stream data packet, and transmits the packet to the live broadcast application through a cross-process communication channel to realize live broadcast. The processing of live broadcast audio/video is thereby performed within a single application, which avoids the tedious switching back and forth among multiple live broadcast auxiliary applications, reduces the consumption of device resources, largely avoids stutter and downtime during the live broadcast, and improves the user experience.
[ detailed description of the embodiments ]
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In order to enable those skilled in the art to better understand the scheme of the embodiments of the invention, the invention is further described in detail below with reference to the accompanying drawings and embodiments. The following examples are illustrative only and are not to be construed as limiting the invention.
Referring to the flowchart of the embodiment shown in fig. 1, the method for processing auxiliary live audio/video provided by the present invention includes the following steps:
s101, reading an original video stream and an original audio stream through an auxiliary live audio and video application; processing an original video stream to obtain a video stream; processing an original audio stream to obtain an audio stream;
in the embodiment of the invention, the original video stream obtained by using the camera and the original audio stream obtained by using the sound card are read by the auxiliary live audio and video application.
In this embodiment of the present invention, the processing an original video stream to obtain a video stream includes:
receiving a video processing instruction of a user through the auxiliary live audio and video application, converting the original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
In this embodiment of the present invention, the processing of frame data by the video processing instruction includes:
acquiring a video algorithm from a video algorithm set according to the video processing instruction, and processing frame data by using the video algorithm; the video algorithm set comprises a video whitening algorithm and a video special effect algorithm.
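The selection of a video algorithm from the algorithm set according to the user's instruction can be sketched as a simple dispatch table. This is a minimal illustrative Python sketch, not the invention's implementation; the function names, the pixel-list representation of a frame, and the specific "whitening" and "effect" operations are assumptions made for demonstration only:

```python
def whiten(frame):
    # Illustrative "whitening": brighten pixel values toward white, clamped to 255.
    return [min(255, int(p * 1.2)) for p in frame]

def special_effect(frame):
    # Illustrative special effect: invert pixel values.
    return [255 - p for p in frame]

# The "video algorithm set" as a lookup table keyed by instruction name.
VIDEO_ALGORITHM_SET = {"whiten": whiten, "effect": special_effect}

def process_frame(frame, instruction):
    """Acquire the algorithm named by the instruction and apply it to frame data."""
    algorithm = VIDEO_ALGORITHM_SET[instruction]
    return algorithm(frame)
```

In this sketch a frame is modeled as a flat list of 8-bit pixel values; a real implementation would operate on decoded image buffers.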
In this embodiment of the present invention, the processing an original audio stream to obtain an audio stream includes:
receiving an audio processing instruction of a user through the auxiliary live audio and video application, and processing the original audio stream according to the audio processing instruction to obtain the audio stream.
In this embodiment of the present invention, the processing the original audio stream according to the audio processing instruction includes:
acquiring an audio algorithm from the audio algorithm set according to the audio processing instruction, and processing the original audio stream by using the audio algorithm; the audio algorithm set comprises an audio noise reduction algorithm and a superposition special effect algorithm.
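The two members of the audio algorithm set named above, noise reduction and superposition of a special effect, can be sketched as follows. This is an illustrative Python sketch under stated assumptions: samples are modeled as signed integers, the noise gate threshold is arbitrary, and neither function is claimed to be the invention's actual algorithm:

```python
def noise_reduce(samples, threshold=10):
    # Simple noise gate: zero out samples whose magnitude falls below threshold.
    return [s if abs(s) >= threshold else 0 for s in samples]

def superpose(samples, effect):
    # Superposition special effect: mix an effect track onto the original samples.
    return [a + b for a, b in zip(samples, effect)]
```

Real audio noise reduction would typically use spectral methods rather than a per-sample gate; the sketch only shows the dispatch-and-transform shape of the step.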
Specifically, in the embodiment of the present invention, receiving a video processing instruction of a user through the auxiliary live audio/video application, converting an original video stream into frame data, processing the frame data according to the video processing instruction, and reassembling the processed frame data into a video stream includes:
receiving a video processing instruction of a user through an application program layer of the auxiliary live audio and video application; transmitting a video processing instruction to a multimedia service layer of the auxiliary live audio and video application through a multimedia interface layer of the auxiliary live audio and video application; and converting the original video stream into frame data through a multimedia service layer, processing the frame data according to the video processing instruction, and reassembling the processed frame data into the video stream.
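The instruction flow through the three layers described above can be sketched as three cooperating classes: the application program layer receives the user's instruction, the multimedia interface layer forwards it, and the multimedia service layer decodes, processes, and reassembles the stream. The class and method names below are illustrative assumptions, not the invention's actual interfaces:

```python
class MultimediaServiceLayer:
    def handle(self, raw_stream, instruction):
        frames = list(raw_stream)                   # convert stream into frame data
        processed = [instruction(f) for f in frames]  # process per the instruction
        return processed                            # reassemble into the video stream

class MultimediaInterfaceLayer:
    """Forwards instructions from the application layer down to the service layer."""
    def __init__(self, service):
        self.service = service

    def forward(self, raw_stream, instruction):
        return self.service.handle(raw_stream, instruction)

class ApplicationLayer:
    """Receives the user's processing instruction and delegates downward."""
    def __init__(self, interface):
        self.interface = interface

    def on_user_instruction(self, raw_stream, instruction):
        return self.interface.forward(raw_stream, instruction)
```

The point of the sketch is the one-way dependency: the application layer never touches frame data directly, so the service layer can be replaced without changing the upper layers.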
In addition, receiving an audio processing instruction of a user through the auxiliary live audio/video application, and processing an original audio stream according to the audio processing instruction to obtain an audio stream, including:
receiving an audio processing instruction of a user through an application program layer of the auxiliary live audio and video application, and transmitting the audio processing instruction to a multimedia service layer of the auxiliary live audio and video application through a multimedia interface layer of the auxiliary live audio and video application; and processing the original audio stream through the multimedia service layer according to the audio processing instruction to obtain the audio stream.
In the embodiment of the present invention, the auxiliary live audio/video application is divided into three levels, as shown in the architecture diagram of fig. 5: an application program layer, a multimedia interface layer, and a multimedia service layer.
In the embodiment of the present invention, the application layer is responsible for processing the main logic of the service, including the main function interface of the application displayed to the user and the interface logic display when the user uses a specific function, and mainly includes:
a MixVideoManager, which provides a functional interface for related operations in the process of the application generating a mixed video stream;
a VideoContainer, which contains relevant data input or output by the application;
a MixAudioManager, which provides a functional interface for related operations in the process of the application generating a mixed audio stream.
Each part of the application program layer presents its own interface layout, which improves the effectiveness of interface display and provides a more convenient platform for audio and video processing.
In the embodiment of the invention, the multimedia service layer mainly encapsulates and extracts the related algorithm classes, including a whitening algorithm class, a video special effect algorithm class, a sound noise reduction algorithm class, and the like. Each concrete algorithm implementation class concerns only its inputs and outputs, so that the application layer and the multimedia service layer are completely isolated in logic. The concrete algorithm implementation classes mainly include: CCaptureVideo (video capture), CAccompanyMusic (accompaniment music), CMediaVideo (media video), CMoodMusic (atmosphere music), CDesktopVideo (desktop video), and CDecorationLayer (decoration layer). Among them, CAccompanyMusic, CMediaVideo, and CMoodMusic share the related algorithm classes to achieve unification, while CDecorationLayer mainly extracts the related algorithms.
In the embodiment of the present invention, the application layer and the multimedia service layer implement bidirectional communication mainly through the multimedia interface layer, which is responsible for providing a logical communication channel between the upper and lower layers and mainly includes: IDecorationLayerNotify, IPlayAccompanyMusic, ICaptureVideoNotify, IPlayMediaVideo, IPaintPhoto, IDesktopVideoCB, ICaptureVideo, IAccompanyMusic, IMediaVideo, IMoodMusic, IDecorationLayer, and IVCamMan. This distribution of unit modules simplifies the multimedia interface and reduces the occupation of interface resources.
As will be appreciated by those skilled in the art, a monolithic multimedia core module (vcambiz) tends to become chaotic and redundant as it grows; with the three-level architecture described above, subsequent maintenance of the application is more convenient, and an extensible plug-in multimedia data stream service framework is realized.
In the embodiment of the invention, operation keys corresponding to the video processing instruction and the audio processing instruction are displayed on the display interface of the auxiliary live audio and video application.
In the embodiment of the invention, the auxiliary live audio and video application obtains the original video stream acquired by the camera through a relevant reading function and decomposes it into frame data. Each piece of frame data is equivalent to an image, so processing the frame data is equivalent to processing an image. This involves image processing and image recognition techniques, where image processing generally refers to digital image processing, mainly methods such as denoising, enhancement, restoration, segmentation, and feature extraction.
In addition, in the embodiment of the invention, the video algorithm set mainly comprises a video whitening algorithm and a video special effect algorithm. The whitening algorithm mainly performs algorithmic processing on the person feature data in the acquired frame data. The video special effect algorithm mainly superposes image data added by the anchor user onto the frame data of the acquired original video stream to realize the addition of video animation, or adds a video special effect by changing image parameters of the frame data; the image parameters mainly changed are: image resolution, image contrast, image brightness, image saturation, image sharpness, image color temperature, and the like.
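Two of the image parameter changes mentioned above, brightness and contrast, can be sketched per-pixel as follows. This is an illustrative Python sketch; the clamping behavior and the pivot value of 128 are conventional assumptions, not details stated in the invention:

```python
def adjust_brightness(frame, delta):
    # Shift every 8-bit pixel value by delta, clamped to the valid [0, 255] range.
    return [max(0, min(255, p + delta)) for p in frame]

def adjust_contrast(frame, factor, pivot=128):
    # Scale each pixel's distance from a mid-gray pivot; factor > 1 raises contrast.
    return [max(0, min(255, int(pivot + (p - pivot) * factor))) for p in frame]
```

Resolution, saturation, sharpness, and color temperature adjustments follow the same pattern of a per-frame transform applied before the frames are reassembled into the video stream.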
It should be noted that the method for processing the original audio data and/or the original video data provided by the present invention is not limited to the method provided in the embodiment of the present invention, and other methods may also be used, which is not limited by the present invention.
S102, aligning the time stamps of the video stream and the audio stream, and generating an audio and video stream data packet containing the aligned video stream and audio stream.
In this embodiment of the present invention, the aligning the timestamps of the video stream and the audio stream specifically includes:
Timestamps corresponding to the processed video stream and audio stream are acquired, and alignment is performed according to the acquired timestamps. A timestamp is a character sequence that uniquely identifies a moment in time. For example, suppose the anchor starts broadcasting at 08:30:00, so the first frame of image data of the original video stream takes 08:30:00 as its timestamp, and the original audio stream likewise takes 08:30:00 as its timestamp. Without changing the duration of the frame data of the original video stream or the duration of the original audio stream data, suppose the processed video stream and audio stream are generated at 08:30:10 and 08:30:05 respectively, and that after processing the start positions of both streams are still marked with the timestamp 08:30:00. Then at 08:30:05 the audio stream data is temporarily stored in storage A; at 08:30:10, once the video stream data has been synthesized, it is temporarily stored in storage B; according to a triggered timestamp alignment instruction, the audio stream data and the video stream data are extracted from storages A and B into storage C, where a complete audio and video stream data packet is synthesized according to the marked timestamp 08:30:00. Alternatively, the audio stream data generated at 08:30:05 is temporarily stored in storage A, the video stream data synthesized at 08:30:10 is also temporarily stored in storage A, and the audio and video stream data packet is synthesized directly in storage A.
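The buffering-and-pairing behavior in the example above can be sketched as a small aligner that holds each stream until its counterpart with the same marked start timestamp arrives. This is a minimal illustrative Python sketch; the class name, the dictionary-based "storages", and the packet format are assumptions for demonstration:

```python
class TimestampAligner:
    def __init__(self):
        self.audio_buf = {}  # "storage A": audio data keyed by marked start timestamp
        self.video_buf = {}  # "storage B": video data keyed by marked start timestamp

    def push_audio(self, ts, data):
        self.audio_buf[ts] = data
        return self._try_pack(ts)

    def push_video(self, ts, data):
        self.video_buf[ts] = data
        return self._try_pack(ts)

    def _try_pack(self, ts):
        # "Storage C": once both streams with the same timestamp are present,
        # synthesize the complete audio and video stream data packet.
        if ts in self.audio_buf and ts in self.video_buf:
            return {"timestamp": ts,
                    "audio": self.audio_buf.pop(ts),
                    "video": self.video_buf.pop(ts)}
        return None
```

In the worked example, the audio arriving at 08:30:05 waits in the buffer until the video finishes at 08:30:10, at which point the packet marked 08:30:00 is produced.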
In the embodiment of the present invention, as shown in the working diagram of the auxiliary live broadcast audio/video application in fig. 4, the "live effect" column has three items: common sound effects, live animation, and animation broadcasting. The common sound effects column collects preset sound effect data, obtains update data sent by a cloud server in real time, and stores it in a local database. Suppose the control area of "forward playing" is touched: the auxiliary live audio and video application issues an acquisition instruction for the audio data corresponding to "forward playing", and according to that instruction the audio data is superposed onto the original audio stream data collected during the live broadcast. Specifically, the time point at which the instruction was issued is aligned with the current time data of the original audio stream, and the audio data corresponding to "forward playing" is superposed at the aligned position, realizing the superposition of sound effects. Adding a live animation works similarly: the animation data is obtained, the time point at which the extraction instruction for the animation data was issued is obtained, and the animation data is superposed into the original video stream data at the position where that time point aligns with the current time data. The invention thus provides atmosphere sound addition, background sound addition, sound effect adjustment, and video adjustment, giving the anchor a more convenient live broadcast experience.
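The superposition of a sound effect at the aligned position can be sketched as mixing the effect's samples into the stream starting at the offset derived from the instruction's time point. This is an illustrative Python sketch; the sample-list model and the offset computation are assumptions, and a real implementation would mix decoded PCM buffers:

```python
def overlay_effect(stream, effect, offset):
    """Mix effect samples into the stream starting at sample index `offset`,
    which corresponds to the aligned time point of the acquisition instruction."""
    out = list(stream)
    for i, sample in enumerate(effect):
        if offset + i < len(out):
            out[offset + i] += sample  # additive mix at the aligned position
    return out
```

The same shape applies to live animation: the animation frames are superposed into the original video stream at the frame index aligned with the extraction instruction's time point.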
It should be noted that, the method for superimposing audio data and/or video data and the method for aligning timestamps provided by the present invention are not limited to the method provided in the embodiment of the present invention, and other methods are also possible, which are not limited by the present invention.
S103, establishing a cross-process communication channel connecting the auxiliary live broadcast audio and video application with the live broadcast application.
S104, transmitting the audio and video stream data packet to the live broadcast application through the cross-process communication channel.
In the embodiment of the present invention, cross-process communication refers to data transmission between processes, that is, the exchange of data between processes. The cross-process communication modes include: broadcast, interface access, object access, and shared access.
Taking the communication between the auxiliary live audio and video application (denoted program A) and the live application (denoted program B) as an example: program A starts, defines the transmission of the audio and video stream data packet as event C, and sends a broadcast to program B; program B creates, at runtime, a class that inherits the trigger of event C, receives the broadcast from program A, and thereby establishes the cross-process communication channel between A and B.
The interface access is specifically implemented as follows: program A triggers event C; program B, with the relevant access permissions, accesses an externally exposed interface of program A, establishes the cross-process communication channel between A and B, and obtains the data corresponding to event C of program A.
The object access is specifically implemented as follows: program B is created with a new activity D; program A is created with its event C corresponding to activity D in program B; triggering the relevant instruction corresponding to activity D accesses program A to receive the data of event C, establishing the cross-process communication channel between A and B.
The shared access is specifically implemented as follows: the data corresponding to event C triggered by program A is stored in a preset memory, and the relevant access is established; program B runs, establishes access to the preset memory, and acquires the audio/video stream data packet corresponding to event C from the preset memory, whereby the cross-process communication channel between A and B is established.
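The shared access mode can be sketched with Python's standard shared memory facility: program A writes the packet into a named memory region, and program B attaches to the same region by name and reads it. This is a single-process demonstration of the mechanism only; the packet contents and region naming are illustrative, not the invention's actual implementation:

```python
from multiprocessing import shared_memory

# Program A side: place the packet for event C into a preset shared memory region.
packet = b"av-stream-packet-for-event-C"
shm_a = shared_memory.SharedMemory(create=True, size=len(packet))
shm_a.buf[:len(packet)] = packet

# Program B side: attach to the same region by its name and read the packet.
shm_b = shared_memory.SharedMemory(name=shm_a.name)
received = bytes(shm_b.buf[:len(packet)])

# Clean up: B detaches, A detaches and releases the region.
shm_b.close()
shm_a.close()
shm_a.unlink()
```

In a real deployment the region name would be communicated to program B out of band (for example via the broadcast mode described above), and access permissions would guard the preset memory.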
In this embodiment of the present invention, step S104 further includes:
uploading the audio and video stream data packet to a live broadcast server through the live broadcast application according to a preset rule.
In the embodiment of the present invention, the preset rule refers to the behavior specification for uploading the audio/video stream data packet to the live broadcast server. Specifically, a detection instruction is generated to detect the data integrity of the packet. Besides the video stream and the audio stream, the packet may also contain broadcast data and barrage (bullet-screen comment) data sent by the anchor during the live broadcast; when broadcast data and/or barrage data exist, they are merged into the packet. When the upload instruction is executed on the packet, a conversion instruction is triggered to convert the packet into an electrical signal suitable for sending.
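The preset rule described above, integrity check, optional merge of broadcast and barrage data, then conversion for sending, can be sketched as follows. The function name, the dictionary packet model, and the use of serialization as a stand-in for signal conversion are all illustrative assumptions:

```python
def prepare_upload(packet, broadcast=None, barrage=None):
    """Apply the preset rule: verify packet integrity, merge any broadcast and
    barrage data, and convert the packet for sending (serialized bytes here)."""
    # Integrity detection: the packet must contain both streams.
    assert "audio" in packet and "video" in packet, "incomplete packet"
    merged = dict(packet)
    if broadcast is not None:
        merged["broadcast"] = broadcast
    if barrage is not None:
        merged["barrage"] = barrage
    # Stand-in for the conversion instruction that prepares the signal to send.
    return repr(merged).encode()
```

A production implementation would use a real container format and streaming protocol for the conversion step rather than byte serialization.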
It should be understood that, in the embodiment of the present invention, the methods in steps S101 to S104 are all executed by the auxiliary live audio/video application: the audio/video stream data packet generated in steps S101 and S102 is transmitted, as described in S104, to the live application through the cross-process communication channel established in S103.
It should be noted that the method for cross-process communication provided by the present invention is not limited to the method provided in the embodiment of the present invention, and other methods may also be provided, and the present invention is not limited to this.
Referring to fig. 2, a block diagram of an embodiment of an auxiliary live audio/video processing device is shown, where the auxiliary live audio/video processing device according to the present invention includes:
the reading processing module 11, configured to read an original video stream and an original audio stream through the auxiliary live audio and video application, process the original video stream to obtain a video stream, and process the original audio stream to obtain an audio stream.
In the embodiment of the invention, the original video stream obtained by using the camera and the original audio stream obtained by using the sound card are read by the auxiliary live audio and video application.
In this embodiment of the present invention, the processing an original video stream to obtain a video stream includes:
a video processing unit, configured to receive a video processing instruction of a user through the auxiliary live audio and video application, convert the original video stream into frame data, process the frame data according to the video processing instruction, and reassemble the processed frame data into the video stream.
In an embodiment of the present invention, the video processing unit includes:
a video algorithm acquisition subunit, configured to acquire a video algorithm from a video algorithm set according to the video processing instruction and process the frame data by using the video algorithm; the video algorithm set comprises a video whitening algorithm and a video special effect algorithm.
In this embodiment of the present invention, the processing an original audio stream to obtain an audio stream includes:
an audio processing unit, configured to receive an audio processing instruction of a user through the auxiliary live audio and video application and process the original audio stream according to the audio processing instruction to obtain the audio stream.
In an embodiment of the present invention, the audio processing unit includes:
an audio algorithm subunit, configured to acquire an audio algorithm from the audio algorithm set according to the audio processing instruction and process the original audio stream by using the audio algorithm; the audio algorithm set comprises an audio noise reduction algorithm and a superposition special effect algorithm.
Specifically, in this embodiment of the present invention, the video processing unit further includes:
a first application layer subunit, configured to receive the video processing instruction of the user through the application program layer of the auxiliary live audio and video application;
a first interface layer subunit, configured to transmit the video processing instruction to the multimedia service layer through the multimedia interface layer of the auxiliary live audio and video application;
a first service layer subunit, configured to convert the original video stream into frame data through the multimedia service layer, process the frame data according to the video processing instruction, and reassemble the processed frame data into the video stream.
Further, the audio processing unit further comprises:
a second application layer subunit, configured to receive the audio processing instruction of the user through the application program layer of the auxiliary live audio and video application;
a second interface layer subunit, configured to transmit the audio processing instruction to the multimedia service layer through the multimedia interface layer of the auxiliary live audio and video application;
a second service layer subunit, configured to process the original audio stream through the multimedia service layer according to the audio processing instruction to obtain the audio stream.
In the embodiment of the invention, operation keys corresponding to the video processing instruction and the audio processing instruction are displayed on the display interface of the auxiliary live audio and video application.
In the embodiment of the invention, the auxiliary live audio and video application obtains the original video stream acquired by the camera through a relevant reading function and decomposes it into frame data. Each piece of frame data is equivalent to an image, so processing the frame data is equivalent to processing an image. This involves image processing and image recognition techniques, where image processing generally refers to digital image processing, mainly methods such as denoising, enhancement, restoration, segmentation, and feature extraction.
In addition, in the embodiment of the invention, the video algorithm set mainly comprises a video whitening algorithm and a video special effect algorithm. The whitening algorithm mainly performs algorithmic processing on the person feature data in the acquired frame data. The video special effect algorithm mainly superposes image data added by the anchor user onto the frame data of the acquired original video stream to realize the addition of video animation, or adds a video special effect by changing image parameters of the frame data; the image parameters mainly changed are: image resolution, image contrast, image brightness, image saturation, image sharpness, image color temperature, and the like.
In the embodiment of the present invention, the application layer subunit is responsible for the main service logic, including the application's main function interface displayed to the user and the interface logic displayed when the user uses a specific function. The service layer subunit mainly encapsulates and extracts the related algorithm classes, including a whitening algorithm class, a video special effect algorithm class, a sound noise reduction algorithm class, and the like; each concrete algorithm implementation class concerns only its inputs and outputs, so that the application layer and the multimedia service layer are completely isolated in logic. The application layer subunit and the service layer subunit communicate bidirectionally mainly through the interface layer subunit, which is responsible for providing a logical communication channel between the upper and lower layers.
It should be noted that the method for processing the original audio data and/or the original video data provided by the present invention is not limited to the method provided in the embodiment of the present invention, and other methods may also be used, which is not limited by the present invention.
Time stamping module 12: aligns the time stamps of the video stream and the audio stream, and generates an audio and video stream data packet containing the aligned video stream and audio stream.
In this embodiment of the present invention, the aligning the timestamps of the video stream and the audio stream specifically includes:
and acquiring the timestamps corresponding to the processed video stream and audio stream, and aligning them according to the acquired timestamps. A timestamp is a character sequence that uniquely identifies a moment in time. For example, suppose the anchor starts broadcasting at 08:30:00, so the first frame of image data in the original video stream carries the timestamp 08:30:00, and the original audio stream is likewise assumed to carry the timestamp 08:30:00. Without changing the duration of the frame data corresponding to the original video stream or the duration of the original audio stream data, suppose the processed video stream and audio stream are generated at 08:30:10 and 08:30:05 respectively, while the start bits of both processed streams remain marked with the timestamp 08:30:00. Then at 08:30:05 the audio stream data is temporarily stored in storage A; when the synthesis of the video stream data is completed at 08:30:10, the video stream data is temporarily stored in storage B; according to a triggered timestamp alignment instruction, the audio stream data and the video stream data are extracted from storages A and B into storage C, where a complete audio and video stream data packet is synthesized according to the marked timestamp 08:30:00. Another possibility is also included: at 08:30:05 the audio stream data is temporarily stored in storage A, and after the video stream data is synthesized at 08:30:10 it is also temporarily stored in storage A, and the audio and video stream data packet is synthesized within storage A.
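The alignment described above can be sketched as follows; the seconds-since-midnight timestamp encoding and the dict-based packet layout are illustrative assumptions:

```python
def align_streams(audio, video):
    """Synthesize one packet from two streams that share a marked
    start timestamp. 'start' is seconds since midnight, so
    08:30:00 = 8*3600 + 30*60 = 30600. The processing-completion
    times do not matter: alignment uses only the start timestamps
    marked on the streams.
    """
    if audio["start"] != video["start"]:
        raise ValueError("streams do not share a start timestamp")
    return {"start": audio["start"],
            "audio": audio["data"], "video": video["data"]}

# Audio finishes first (08:30:05) and waits in a buffer; once the
# video stream is ready (08:30:10), both are combined by the
# shared 08:30:00 mark.
packet = align_streams({"start": 30600, "data": b"a"},
                       {"start": 30600, "data": b"v"})
```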
In the embodiment of the present invention, in the working diagram of the auxiliary live broadcast audio/video application in fig. 4, the "live effect" column has three items: common sound effects, live animation, and animation broadcasting. The common sound effects column collects preset sound effect data, and update data sent by a cloud server is obtained in real time and stored in a local database. Suppose that when the control area for "forward playing" is touched, the auxiliary live audio and video application issues an acquisition instruction for the audio data corresponding to "forward playing"; according to this instruction, that audio data is superimposed onto the original audio stream data collected during the live broadcast. Specifically, the time point at which the acquisition instruction is issued is aligned with the current time data of the original audio stream, and the audio data corresponding to "forward playing" is superimposed at the aligned position in the original audio stream data, thereby realizing the superposition of sound effects. Live animation is added similarly: the animation data is obtained, the time point at which the extraction instruction for the animation data is issued is obtained, and the animation data is superimposed onto the original video stream data at the position where that time point aligns with the stream's current time data, thereby realizing the addition of the live animation.
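The sound-effect superposition can be sketched as sample-level mixing at the aligned offset; the 16-bit PCM sample representation and simple additive mixing with clipping are assumptions made for illustration:

```python
def overlay_effect(stream, effect, offset):
    """Superimpose effect samples onto the original audio stream
    starting at the sample index aligned with the moment the user
    touched the effect control. Mixing is plain addition clamped
    to the signed 16-bit range; real mixers are more elaborate.
    """
    out = list(stream)
    for i, s in enumerate(effect):
        j = offset + i
        if j < len(out):
            out[j] = max(-32768, min(32767, out[j] + s))
    return out

mixed = overlay_effect([0, 0, 1000, 1000], [500, 500], offset=2)
# -> [0, 0, 1500, 1500]
```

Superimposing animation data onto the video stream follows the same pattern, with frames in place of samples.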
It should be noted that, the method for superimposing audio data and/or video data and the method for aligning timestamps provided by the present invention are not limited to the method provided in the embodiment of the present invention, and other methods are also possible, which are not limited by the present invention.
The communication module 13: provides the cross-process communication channel through which the auxiliary live audio and video application communicates with the live application.
The transmission module 14: transmits the audio and video stream data packet to the live broadcast application through the cross-process communication channel.
In the embodiment of the present invention, cross-process communication refers to data transmission between processes, that is, data exchange between processes. The cross-process communication modes include: broadcast, interface access, object access, and shared access.
Taking the communication between the auxiliary live audio and video application (denoted program A) and the live application (denoted program B) as an example, the broadcast mode works as follows: program A is started, the audio and video stream data packet transmitted by program A is defined as event C, and a broadcast is sent to program B; program B, at runtime, creates a class that inherits the trigger of event C and receives the broadcast from program A, thereby establishing a cross-process communication channel between A and B.
The specific implementation of interface access is that program A triggers event C, and program B, with the relevant access permission, accesses an externally exposed interface of program A, establishing a cross-process communication channel between A and B and obtaining the data corresponding to program A's event C.
The specific implementation of object access is to create program B and establish a new activity named activity D, then create program A and establish in it a new event C corresponding to activity D of program B; triggering the relevant instruction corresponding to activity D accesses program A to receive the related data of event C, thereby establishing a cross-process communication channel between A and B.
The specific implementation of shared access is to store the data corresponding to event C triggered by program A in a preset memory and establish the related access; program B is then run and establishes its own access to the preset memory, acquires the audio/video stream data packet corresponding to event C from the preset memory on that basis, and a cross-process communication channel between A and B is thereby established.
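Whichever of the four modes is chosen, the effect is to move the packet across a process boundary. A minimal cross-process sketch, using a child Python interpreter and a stdin/stdout pipe as a stand-in channel; the JSON packet layout and the acknowledgement value are illustrative assumptions, not the embodiment's actual protocol:

```python
import json
import subprocess
import sys

# "Program B" (the live application side): reads the packet for
# event C from the channel and prints an acknowledgement.
RECEIVER = r"""
import sys, json
packet = json.loads(sys.stdin.read())
print(len(packet["video"]) + len(packet["audio"]))
"""

def transmit_packet(packet):
    """'Program A' side: spawn program B, pipe the packet across
    the process boundary, and read back B's acknowledgement."""
    proc = subprocess.run([sys.executable, "-c", RECEIVER],
                          input=json.dumps(packet),
                          capture_output=True, text=True)
    return int(proc.stdout)

ack = transmit_packet({"video": "vv", "audio": "aaa"})  # ack == 5
```

On Android the channel would instead be a broadcast receiver, an exposed (AIDL-style) interface, an activity intent, or shared storage, matching the four modes above.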
In the embodiment of the present invention, the transmission module 14 further includes:
An uploading unit: uploads the audio and video stream data packet to a live broadcast server through the live broadcast application according to a preset rule.
In the embodiment of the present invention, the preset rule refers to the behavior specification for uploading the audio/video stream packet to a live broadcast server. The specific process is to generate a detection instruction for checking the data integrity of the audio/video stream packet. The packet contains not only the video stream and the audio stream but also broadcast data and barrage (bullet-screen) data sent by the anchor during the live broadcast; when broadcast data and/or barrage data are present, they are merged into the audio/video stream packet. When the upload instruction is executed on the packet, a conversion instruction must also be triggered to convert the audio/video stream packet into an electrical signal suitable for transmission.
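The preset rule described above (integrity detection, merging broadcast/barrage data, then conversion for transmission) can be sketched as follows; all field names are illustrative assumptions rather than the embodiment's actual packet format:

```python
def prepare_upload(packet, broadcast=None, barrage=None):
    """Apply the preset upload rule: verify the packet carries
    both streams, merge any broadcast/barrage data the anchor
    produced, and mark the packet as converted for transmission.
    """
    # Detection instruction: check data integrity first.
    if not packet.get("video") or not packet.get("audio"):
        raise ValueError("incomplete audio/video stream packet")
    if broadcast is not None:
        packet["broadcast"] = broadcast
    if barrage is not None:
        packet["barrage"] = barrage
    # Stand-in for triggering the signal-conversion instruction.
    packet["converted"] = True
    return packet

ready = prepare_upload({"video": b"v", "audio": b"a"}, barrage=["hi"])
```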
It should be understood that, in the embodiment of the present invention, modules 11 to 14 all belong to the auxiliary live audio/video application: the application combines the reading and processing module 11 with the timestamp module 12 to generate the audio/video stream data packet, and the transmission module 14 transmits that packet to the live application through the cross-process communication channel of the communication module 13.
It should be noted that the method for cross-process communication provided by the present invention is not limited to the method provided in the embodiment of the present invention, and other methods may also be provided, and the present invention is not limited to this.
In summary, the invention reads an original video stream and an original audio stream with an auxiliary live broadcast audio/video application, obtains the video stream and the audio stream after the relevant processing, aligns their timestamps to generate an audio/video stream data packet, and transmits the packet to the live broadcast application through a cross-process communication channel to realize the live broadcast. All live audio/video processing is thus performed within a single application program, sparing the user the tedious operation of switching back and forth among multiple live-broadcast auxiliary applications, reducing the consumption of device resources, largely avoiding stuttering and crashes during the live broadcast, and improving the user experience.
In addition, the invention performs the relevant processing on the acquired original video stream and original audio stream: specifically, a relevant video algorithm is obtained by receiving a video processing instruction from the user, and the frame data of the original video stream is processed according to that algorithm to add video-beautification special effects, portrait whitening, and figure-slimming effects. Correspondingly, a relevant audio algorithm is obtained by receiving the audio processing instruction sent by the user, and the original audio stream is processed according to that algorithm to achieve audio noise reduction and sound-effect superposition. This greatly enriches the live broadcast effect, beautifies both the video and the audio, further enriches the live content during the broadcast, activates the interactive atmosphere between the anchor and the audience, and makes the live broadcast more vivid.
In addition, the invention processes the original video stream and the original audio stream with the auxiliary live broadcast audio/video application, then transmits the result to the live application through the cross-process communication channel established with it on request, and the live application uploads the audio/video stream data packet to a live broadcast server according to a preset rule. The preset rule includes packaging live data preset by the anchor into the audio/video data packet and uploading it together, thereby realizing the synchronous live broadcast function, reducing memory occupation, improving the data transmission rate, and ensuring the smoothness of the live broadcast.
In conclusion, the processing of live audio and video is realized on a single application program, the tedious operation of switching back and forth among multiple live-broadcast auxiliary applications is eliminated, device resources are conserved, stuttering and crashes of the live broadcast equipment are avoided to the greatest extent, and the user experience is improved. In addition, the live content is enriched during the broadcast, the interactive atmosphere between the anchor and the audience is activated, and the live broadcast effect is more vivid. Furthermore, live data preset by the anchor is packaged into the audio/video data packet and uploaded together, realizing the synchronous live broadcast function, reducing memory occupation, improving the data transmission rate, and ensuring the smoothness of the live broadcast.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure an understanding of this description.
Although a few exemplary embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these exemplary embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.