CN115103216A - Live broadcast data processing method and device, computer equipment and storage medium

Live broadcast data processing method and device, computer equipment and storage medium

Info

Publication number: CN115103216A
Authority: CN (China)
Prior art keywords: audio, video, live broadcast, data, network environment
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202210854982.5A
Other languages: Chinese (zh)
Inventor: 廖加旭
Current assignee: Kangjian Information Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Kangjian Information Technology Shenzhen Co Ltd
Priority date: 2022-07-19 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2022-07-19
Publication date: 2022-09-23
Application filed by Kangjian Information Technology Shenzhen Co Ltd
Priority to CN202210854982.5A
Publication of CN115103216A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662 Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Abstract

The invention relates to the technical field of data processing and discloses a live broadcast data processing method comprising the following steps: responding to a network live broadcast start instruction, acquiring configuration data related to frame loss in the live broadcast process, wherein the configuration data comprises a maximum time span threshold of a sending queue; when a network environment conversion event is triggered, collecting audio and video data by using the configuration data related to frame loss, and encoding the collected audio and video data into an audio and video sequence to be sent; judging whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue; and if so, performing frame loss processing on the audio and video data of the live broadcast associated port. The method enables continuous playback of live data in a weak network environment, mitigates the stutter and high latency that live data otherwise suffers in such an environment, and improves the network live broadcast effect.

Description

Live broadcast data processing method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for processing live data, a computer device, and a storage medium.
Background
With the development of Internet technology, new forms of online interaction continue to emerge. Among them, the live broadcast platform has become a common channel through which people transmit information and interact with one another.
Because live broadcasting requires pushing a stream over the network, it places high demands on the connection: if the network is unstable, live broadcast quality degrades and viewers see stuttering. In a weak network environment, the broadcaster's bitrate can exceed what the connection supports; to keep the broadcast running, frame loss processing is applied to the live data, and moderate frame dropping can even produce a useful video-acceleration effect. However, live data that has had frames dropped is usually placed in a buffer queue so that viewers can catch up with the broadcast after a delay; if too much data accumulates in the buffer queue, the broadcast stutters and its latency grows, degrading the live broadcast effect.
Disclosure of Invention
In view of this, the present invention provides a live broadcast data processing method, a live broadcast data processing apparatus, a computer device, and a storage medium, with the main aim of solving the prior-art problem that live video stutters and suffers high latency in a weak network environment, degrading the network live broadcast effect.
According to an aspect of the present invention, there is provided a method for processing live data, the method including:
responding to a network live broadcast starting instruction, and acquiring configuration data related to frame loss in a live broadcast process, wherein the configuration data comprises a maximum time span threshold of a sending queue;
when a network environment conversion event is triggered, acquiring audio and video data by using the configuration data related to frame loss, and encoding the acquired audio and video data into an audio and video sequence to be transmitted;
judging whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue;
and if so, performing frame loss processing on the audio and video data of the live broadcast associated port.
According to another aspect of the present invention, there is provided a device for processing live data, the device comprising:
the apparatus comprises an acquisition module used for responding to a network live broadcast starting instruction and acquiring configuration data related to frame loss in the live broadcast process, wherein the configuration data comprises a maximum time span threshold of a sending queue;
the transmitting module is used for acquiring the audio and video data by using the configuration data related to the frame loss when a network environment conversion event is triggered, and encoding the acquired audio and video data into an audio and video sequence to be transmitted;
the judging module is used for judging whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue;
and the frame dropping module is used for performing frame loss processing on the audio and video data of the live broadcast associated port when the judgment result is affirmative.
According to yet another aspect of the present invention, there is provided a computer device comprising a memory storing a computer program and a processor that implements the steps of the above live data processing method when executing the computer program.
According to yet another aspect of the present invention, there is provided a computer storage medium on which a computer program is stored, the computer program, when executed by a processor, carrying out the steps of the above live data processing method.
By means of the above technical solution, the invention provides a live broadcast data processing method and apparatus, a computer device, and a storage medium. The method responds to a network live broadcast start instruction by acquiring configuration data related to frame loss in the live broadcast process, the configuration data comprising a maximum time span threshold of a sending queue. When a network environment conversion event is triggered, audio and video data are collected using the configuration data related to frame loss and encoded into an audio and video sequence to be sent. The method then judges whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue, and if so, performs frame loss processing on the audio and video data of the live broadcast associated port. Unlike the prior art, which drops frames only after stuttering has occurred, the present application does not directly drop frames from live data when playback stalls. Instead, when a network environment conversion event is triggered, it collects audio and video data using the configuration data related to frame loss, readjusts the audio and video data for encoding, and then drops frames only from audio and video data whose encoded sequence spans more than the maximum time span of the sending queue between its head and tail timestamps. This enables continuous playback of live data in a weak network environment, reduces the stutter and high latency that live data produces there, and improves the network live broadcast effect.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic view of an application environment of a live data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a live data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a live data processing method according to another embodiment of the present invention;
FIG. 4 is a flowchart illustrating one embodiment of step S20 in FIG. 2;
FIG. 5 is a flowchart illustrating another embodiment of step S21 in FIG. 4;
FIG. 6 is a flowchart illustrating one embodiment of step S40 in FIG. 2;
FIG. 7 is a schematic flow chart of a live data processing method according to yet another embodiment of the present invention;
FIG. 8 is a flowchart illustrating one embodiment of step S60 in FIG. 7;
FIG. 9 is another flow diagram of a live data processing method according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a live data processing apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of another embodiment of a computer device.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The live data processing method provided by the embodiment of the invention can be applied to an application environment as shown in fig. 1, in which a client communicates with a server through a network. The server responds to a network live broadcast start instruction by acquiring configuration data related to frame loss in the live broadcast process, the configuration data comprising a maximum time span threshold of a sending queue. When a network environment conversion event is triggered, the server collects audio and video data using the configuration data related to frame loss and encodes the collected audio and video data into an audio and video sequence to be sent. It then judges whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue; if so, frame loss processing is performed on the audio and video data of the live broadcast associated port at the client. In the invention, when a network environment conversion event is triggered, audio and video data are collected using the configuration data related to frame loss and readjusted for encoding, and frame loss processing is applied to audio and video data whose encoded sequence spans more than the maximum time span of the sending queue between its head and tail timestamps. This enables continuous playback of live data in a weak network environment, reduces the stutter and high latency that live data produces there, and improves the network live broadcast effect. The client may be, but is not limited to, a personal computer, laptop, smartphone, tablet computer, or portable wearable device. The server can be implemented as an independent server or as a server cluster composed of multiple servers. The present invention is described in detail below with reference to specific embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of a live data processing method according to an embodiment of the present invention, including the following steps:
s10, responding to the start instruction of the network live broadcast, and acquiring the configuration data related to frame loss in the live broadcast process.
The live broadcast data processing method provided by the embodiment of the invention can be applied to network live broadcast platforms for various scenarios, such as game live broadcast, store live broadcast, and variety live broadcast. The network live broadcast platform can be implemented by a server connected to both a user side and a live broadcast equipment side: it initiates live broadcast invitations to the user side in real time, collects live broadcast data from the live broadcast equipment side in real time, and sends that data to the user sides that have joined the broadcast. Because live data passes through several different device ends during the broadcast, the network environment plays an important role in its delivery; once the network environment changes, the transmission of live data is affected.
Considering the influence of the network environment on the live broadcast effect, frame loss processing is normally applied to live data when the network environment is poor, so as to reduce the bitrate and improve broadcast smoothness. The configuration data related to frame loss in the live broadcast process can be configured in advance on the service server and requested from the service port when the live broadcast application starts. The configuration data related to frame loss may include a maximum time span threshold of a sending queue, that is, the allowed time span of the audio and video data waiting to be sent; if the time span of the audio and video data to be sent is greater than this threshold, some degree of stutter or high latency exists in the data waiting to be sent.
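Purely for illustration, such configuration data might be modeled as in the following minimal Python sketch; every field name and default value here is an assumption rather than something the patent specifies:

```python
# Minimal sketch of the frame-loss-related configuration fetched from the
# service port at startup. Field names and defaults are assumptions; the
# patent does not prescribe a schema.
from dataclasses import dataclass, field

@dataclass
class FrameLossConfig:
    # Maximum allowed span (seconds) between the head and tail timestamps
    # of the sending queue; exceeding it signals stutter or high latency.
    max_queue_time_span: float = 3.0
    # Video frame timeout threshold (seconds), used by the audio and video
    # synchronization check described later in this description.
    video_frame_timeout: float = 0.5
    # Mapping from a coarse network state to (video_fps, audio_hz),
    # mirroring the example values given below.
    sampling_map: dict = field(default_factory=lambda: {
        "good": (30, 44100),
        "poor": (15, 8000),
    })
```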
And S20, when a network environment conversion event is triggered, acquiring audio and video data by using the configuration data related to frame loss, and encoding the acquired audio and video data into an audio and video sequence to be transmitted.
In the embodiment of the invention, the quality of the network environment directly affects the live broadcast effect; if the network environment is poor, frame loss processing can be applied to the audio and video data to be sent in order to preserve the live broadcast effect. A network environment conversion event is triggered whenever the network environment changes, and such changes can be driven by the strength of the wireless signal, the strength of the mobile signal, or a switch between signal types.
Further, in order to better monitor the network environment and discover changes in the network environment, specifically, as shown in fig. 3, before step S20, that is, when a network environment conversion event is triggered, the following steps are further included before the audio/video data is collected by using the configuration data related to frame loss and the collected audio/video data is encoded into an audio/video sequence to be sent:
and S50, detecting the network environment in the live broadcast process by using a network detection tool, and triggering a network environment conversion event if the network live broadcast environment is detected to meet the conversion condition.
The conversion condition is that at least one of the following occurs: the variation amplitude of the wireless connection signal strength reaches a first threshold; the connection switches between a wireless connection signal and a mobile network signal; or the variation amplitude of the mobile network signal strength reaches a second threshold. Here, the variation amplitude is the difference in signal strength produced within a preset time window.
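For illustration, the three conversion conditions could be evaluated as in the following sketch; the readings, threshold defaults, and function name are all hypothetical, since the patent does not specify the network detection tool:

```python
# Sketch: evaluating the three conversion conditions listed above. The
# signal readings are hypothetical stand-ins for whatever network
# detection tool an implementation uses.
def should_trigger_conversion(wifi_delta_db: float,
                              mobile_delta_db: float,
                              radio_type_changed: bool,
                              first_threshold: float = 10.0,
                              second_threshold: float = 10.0) -> bool:
    """Return True if any conversion condition holds.

    wifi_delta_db / mobile_delta_db: change in signal strength measured
    over the preset time window (e.g. an RSSI difference in dB).
    radio_type_changed: True when the device switched between a wireless
    connection and a mobile network.
    """
    return (abs(wifi_delta_db) >= first_threshold
            or radio_type_changed
            or abs(mobile_delta_db) >= second_threshold)
```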
It should be understood that the configuration data related to frame loss also includes a mapping relationship between the network environment and the audio and video sampling frequencies. For example, a network state of "good" maps to a video sampling rate of 30 fps and an audio sampling rate of 44100 Hz, while a network state of "poor" maps to a video sampling rate of 15 fps and an audio sampling rate of 8000 Hz. Specifically, as shown in fig. 4, step S20, that is, collecting audio and video data using the configuration data related to frame loss when a network environment conversion event is triggered and encoding the collected audio and video data into an audio and video sequence to be sent, includes the following steps:
and S21, when the network environment conversion event is triggered, resetting the audio and video sampling frequency by using the mapping relation between the network environment and the audio and video sampling frequency.
And S22, acquiring the audio and video data called back from the hardware acquisition equipment interface through the proxy mode, acquiring the audio and video data by using the reset audio and video sampling frequency, and encoding the acquired audio and video data into an audio and video sequence to be transmitted.
Because audio data and video data have different characteristics, they are initially captured at different sampling frequencies. When the network environment changes, whether from bad to good or from good to bad, the audio sampling frequency and the video sampling frequency appropriate for the current network environment are determined from the mapping relationship between the network environment and the audio and video sampling frequencies; audio data is then sampled at the re-determined audio sampling frequency and video data at the re-determined video sampling frequency, as in the sketch below.
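A minimal sketch of this re-determination follows, reusing the hypothetical FrameLossConfig above; applying the returned frequencies to the camera and microphone is left to the platform's capture APIs:

```python
# Sketch: re-derive the audio and video sampling frequencies from the
# network-state mapping when a conversion event fires. `config` is the
# hypothetical FrameLossConfig sketched earlier.
def reset_sampling(config: "FrameLossConfig", network_state: str) -> tuple:
    video_fps, audio_hz = config.sampling_map[network_state]
    # The re-determined frequencies would be handed to the capture layer
    # (camera frame rate, microphone sample rate), not modeled here.
    return video_fps, audio_hz
```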
The audio and video sampling equipment can include video capture devices such as cameras and audio capture devices such as microphones; a smart terminal is usually equipped with both. Since capturing audio and video within an application requires authorization, the camera and microphone permissions must be checked before capture begins. The proxy mode queries the user for access to the hardware capture devices, and once access is granted, the corresponding audio and video data is called back from the hardware capture devices.
It should be understood that, since the switching of the network environment corresponds to different fluctuation states, specifically, as shown in fig. 5, in step S21, that is, when a network environment conversion event is triggered, the resetting of the audio/video sampling frequency by using the mapping relationship between the network environment and the audio/video sampling frequency includes the following steps:
s211, when a network environment conversion event is triggered, acquiring state information corresponding to the current network environment, and inquiring an audio and video sampling frequency range applicable to the current network environment by using the mapping relation between the network environment and audio and video sampling frequency according to the state information corresponding to the current network environment.
And S212, resetting the audio and video sampling frequency according to the audio and video sampling frequency range applicable to the current network environment.
The state information corresponding to the network environment can be fluctuation information about the network the live broadcast equipment is connected to, such as the fluctuation amplitude and the fluctuation duration. When the fluctuation is large or long-lasting, the audio and video sampling frequencies in use clearly cannot meet the live broadcast requirement, so they must be reset and capture must continue at the reset frequencies. When the fluctuation is small and brief, the sampling frequencies can instead be adjusted according to the user's requirements; if the user can accept a short network delay, the audio and video sampling frequencies need not be reset.
It can be understood that, considering the flexibility of selecting the audio/video sampling frequency, the mapping relationship between the network environment and the audio/video sampling frequency can also be set to be different audio/video sampling frequency ranges suitable for the network environment.
And S30, judging whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue.
If the span between the head and tail timestamps of the audio and video sequence to be sent is greater than the maximum time span of the sending queue, transmission is being slowed by the network: the live data currently being sent lags in time and will stutter or arrive with high latency. Continuing to send it would harm the live broadcast effect, so it should not be sent to the user side for display; otherwise, the live data can be sent to the user side and displayed normally. The sketch below illustrates this check.
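For illustration, the check might look like the following sketch, which models the sending queue as a deque of (timestamp, payload) pairs; all names are hypothetical:

```python
# Sketch: decide whether the encoded sequence waiting in the sending queue
# has drifted beyond the configured maximum time span.
from collections import deque

def queue_exceeds_span(send_queue: deque, max_span: float) -> bool:
    """True when the head-to-tail timestamp difference of the queued
    audio/video sequence is greater than the allowed time span."""
    if len(send_queue) < 2:
        return False
    head_ts, _ = send_queue[0]
    tail_ts, _ = send_queue[-1]
    return (tail_ts - head_ts) > max_span
```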
And S40, if yes, performing frame loss processing on the audio and video data of the live broadcast associated port.
It can be understood that dropping frames from the audio and video data can relieve the stutter and high latency of the live broadcast to a certain extent, but frame dropping is not applied to all live broadcast associated ports. Specifically, as shown in fig. 6, step S40, that is, performing frame loss processing on the audio and video data of the live broadcast associated port, includes the following steps:
and S41, respectively acquiring the audio and video data acquired from the live broadcast associated acquisition port and the audio and video data being encoded in the live broadcast associated encoding interface.
And S42, performing frame loss processing on the acquired audio and video data and the audio and video data being coded.
That is, frame loss processing covers both the audio and video data collected at the live broadcast associated capture port and the audio and video data currently being encoded in the live broadcast associated encoding interface, as in the sketch below.
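A sketch of the two frame-drop targets follows; the specific drop policy shown (keep key frames, thin out the rest) is an assumption, as the patent does not prescribe which frames to discard:

```python
# Sketch: the two frame-drop targets named above -- frames still sitting in
# the capture buffer and frames queued at the encoder. Frames are modeled
# as dicts with a boolean "key" field marking key frames.
def drop_frames(capture_buffer: list, encoder_queue: list) -> None:
    # Keep key frames; thin out every other non-key frame in both stages.
    capture_buffer[:] = [f for i, f in enumerate(capture_buffer)
                         if f.get("key") or i % 2 == 0]
    encoder_queue[:] = [f for i, f in enumerate(encoder_queue)
                        if f.get("key") or i % 2 == 0]
```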
Furthermore, the video data and the audio data have different sampling frequencies in the acquisition process, so that the subsequent audio and video synchronization process is influenced to a certain extent. Specifically, as shown in fig. 7, after step S40, that is, after performing frame dropping processing on the audio/video data of the live broadcast associated port, the method further includes the following steps:
and S60, encapsulating the audio and video sequence to be sent by using the configuration data related to frame loss, and then executing an audio and video synchronization mechanism so as to send the live data after audio and video synchronization to a server corresponding to a content distribution network.
The audio and video synchronization mechanism keeps the audio and video of the live data aligned during the broadcast; if they fall out of sync, the picture and sound seen at the user side no longer match, harming the live broadcast effect. Because the audio sampling frequency is far higher than the video frame rate, there are normally more audio frames than video frames, and the synchronization mechanism can operate on the proportional relationship between video frames and audio frames.
The content distribution network is an intelligent virtual network built on top of the existing network: relying on edge servers deployed in various locations, together with the load balancing, content distribution, and scheduling modules of a central platform, it lets users obtain the live content they need from a nearby node.
Further, the configuration data related to frame loss further includes a time threshold for video frame overtime, specifically, as shown in fig. 8, step S60 is to execute an audio/video synchronization mechanism after an audio/video sequence to be sent is encapsulated by using the configuration data related to frame loss, so as to send live data after audio/video synchronization to a server corresponding to the content distribution network, and includes the following steps:
and S61, extracting the time stamp of the video frame to be sent at the current moment from the audio and video sequence to be sent, and calculating the time stamp difference value formed by the time stamp corresponding to the sent audio frame at the previous moment and the time stamp of the video frame to be sent at the current moment.
And S62, judging whether the timestamp difference is larger than the overtime time threshold of the video frame.
And S63, if yes, judging that the audio and video sequence to be sent does not meet the audio and video synchronization condition, and sending the live broadcast data after audio and video synchronization to a server corresponding to the content distribution network after frame dropping processing is carried out on the video frames lagging in the live broadcast associated coding interface.
If the timestamp of the audio frame sent at the previous moment exceeds the timestamp of the video frame to be sent and the difference is greater than the video frame timeout threshold, the video frame already lags; the lagging video frames in the encoding interface, including I frames, P frames, and B frames, must be dropped up to the next I frame to restore audio and video synchronization. If the difference does not exceed the video frame timeout threshold, the video frames remain in sync with the audio frames, and the synchronized live data can be sent directly to the server corresponding to the content distribution network. The sketch below illustrates this check.
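For illustration, the check could be sketched as follows, modeling pending video frames as dictionaries with a timestamp and a frame type; the names and structure are assumptions:

```python
# Sketch: the synchronization check described above. If the last-sent audio
# timestamp leads the pending video frame by more than the video-frame
# timeout, lagging frames (I/P/B) are discarded up to the next I frame.
def sync_and_prune(pending_video: list, last_audio_ts: float,
                   timeout: float) -> list:
    """pending_video: dicts like {"ts": 1.23, "type": "I" | "P" | "B"}."""
    if not pending_video:
        return pending_video
    diff = last_audio_ts - pending_video[0]["ts"]
    if diff <= timeout:
        return pending_video          # in sync: send as-is
    # Out of sync: drop frames until the next I frame resumes a clean GOP.
    for i, frame in enumerate(pending_video[1:], start=1):
        if frame["type"] == "I":
            return pending_video[i:]
    return []                         # no later I frame: drop everything
```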
To further explain the processing of live broadcast data, as shown in fig. 9, an embodiment of the present invention provides another flow of the live data processing method, covering data configuration, data acquisition, data caching, data encoding, and data transmission. The specific implementation process is as follows. When the live application starts, the configuration module obtains the configuration data related to frame loss, including the maximum time span of the sending queue, the video frame timeout threshold, and the mapping relationship between the network environment and the audio and video sampling frequencies; network environment monitoring is started at the same time. Throughout the live broadcast, audio data is collected at a preset audio sampling frequency and video data at a preset video sampling frequency. When a network environment conversion event is detected, the sampling frequencies in the acquisition module are reset using the mapping relationship between the network environment and the audio and video sampling frequencies, audio and video data are re-collected at the reset frequencies, and the collected data is encoded into an audio and video sequence to be sent. If the difference between the head and tail timestamps of the sequence to be sent is greater than the maximum time span threshold of the sending queue, a frame-drop instruction is issued in the data cache to drop frames from the collected audio and video data, and a frame-drop instruction is issued in the data encoding stage to drop frames from the sequence currently being encoded. The frame-dropped audio and video data is then synchronized and encapsulated, and the synchronized, encapsulated data is sent to the content distribution network.
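As a closing usage illustration, one pass of this flow could combine the sketches above as follows; every name remains hypothetical, and real capture, encoding, and stream pushing are elided:

```python
# Hypothetical driver for one pass of the fig. 9 flow, reusing the earlier
# sketches (FrameLossConfig, reset_sampling, queue_exceeds_span,
# drop_frames, sync_and_prune). Capture, encoding, and pushing are stubbed.
def on_tick(config, state_changed, network_state, send_queue,
            capture_buffer, encoder_queue, pending_video, last_audio_ts):
    if state_changed:
        # Network environment conversion event: re-derive sampling rates
        # (they would be applied to the capture layer, not modeled here).
        video_fps, audio_hz = reset_sampling(config, network_state)
    if queue_exceeds_span(send_queue, config.max_queue_time_span):
        # Span exceeded: drop frames at both the capture and encoding stages.
        drop_frames(capture_buffer, encoder_queue)
    # Sync check before encapsulating and pushing to the CDN.
    return sync_and_prune(pending_video, last_audio_ts,
                          config.video_frame_timeout)
```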
In an embodiment, a live data processing apparatus is provided, corresponding one-to-one to the live data processing method in the foregoing embodiments. As shown in fig. 10, the live data processing apparatus includes an obtaining module 101, a sending module 102, a judging module 103, and a frame dropping module 104.
The functional modules are explained in detail as follows:
an obtaining module 101, configured to respond to a network live broadcast start instruction, and obtain configuration data related to frame loss in a live broadcast process, where the configuration data includes a maximum time span threshold of a sending queue;
the transmitting module 102 is configured to, when a network environment conversion event is triggered, acquire audio/video data by using the configuration data related to frame loss, and encode the acquired audio/video data into an audio/video sequence to be transmitted;
the judging module 103 is configured to judge whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue;
and the frame dropping module 104 is configured to perform frame loss processing on the audio and video data of the live broadcast associated port when the judgment result is affirmative.
In one embodiment, the apparatus further comprises:
a detection module, configured to detect the network environment during the live broadcast with a network detection tool before the audio and video data is collected using the configuration data related to frame loss and encoded into the audio and video sequence to be sent when the network environment conversion event is triggered, and to trigger a network environment conversion event if the live broadcast network environment is detected to meet the conversion condition;
wherein the conversion condition is that at least one of the following occurs: the variation amplitude of the wireless connection signal strength reaches a first threshold; the connection switches between a wireless connection signal and a mobile network signal; or the variation amplitude of the mobile network signal strength reaches a second threshold, the variation amplitude being the difference in signal strength produced within a preset time window.
In an embodiment, the configuration data related to frame loss further includes a mapping relationship between a network environment and an audio-video sampling frequency, and the sending module is specifically configured to:
when a network environment conversion event is triggered, resetting the audio and video sampling frequency by utilizing the mapping relation between the network environment and the audio and video sampling frequency;
the method comprises the steps of obtaining audio and video data called back from a hardware acquisition equipment interface through an agent mode, acquiring the audio and video data by using a reset audio and video sampling frequency, and encoding the acquired audio and video data into an audio and video sequence to be transmitted.
In an embodiment, the sending module is further configured to:
when a network environment conversion event is triggered, acquiring state information corresponding to the current network environment, and inquiring an audio and video sampling frequency range applicable to the current network environment by using a mapping relation between the network environment and audio and video sampling frequency according to the state information corresponding to the current network environment;
and resetting the audio and video sampling frequency according to the audio and video sampling frequency range applicable to the current network environment.
In an embodiment, the frame loss module is specifically configured to:
respectively acquiring audio and video data acquired from a live broadcast associated acquisition port and audio and video data being coded in a live broadcast associated coding interface;
and performing frame loss processing on the acquired audio and video data and the audio and video data being coded.
In one embodiment, the apparatus further comprises:
an execution module, configured to perform an audio and video synchronization mechanism after the audio and video data of the live broadcast associated port is subjected to frame dropping processing and the audio and video sequence to be sent is encapsulated by the configuration data related to frame dropping, so as to send the live broadcast data after audio and video synchronization to a server corresponding to a content distribution network
In an embodiment, the configuration data related to frame loss further includes a time threshold for video frame timeout, and the execution module is specifically configured to:
extracting the timestamp of the video frame to be sent at the current moment from the audio and video sequence to be sent, and calculating the timestamp difference between the timestamp of the audio frame sent at the previous moment and the timestamp of the video frame to be sent at the current moment;
judging whether the timestamp difference value is larger than a time threshold value of overtime of the video frame;
and if so, judging that the audio and video sequence to be sent does not meet the audio and video synchronization condition, and sending the live broadcast data after audio and video synchronization to a server corresponding to a content distribution network after frame dropping processing is carried out on the lagging video frame in the live broadcast associated coding interface.
This embodiment provides a live data processing apparatus that, when a network environment conversion event is triggered, collects audio and video data using the configuration data related to frame loss, readjusts the audio and video data for encoding, and performs frame loss processing on audio and video data whose encoded sequence spans more than the maximum time span of the sending queue between its head and tail timestamps. The apparatus thus enables continuous playback of live data in a weak network environment, reduces the stutter and high latency that live data produces there, and improves the network live broadcast effect.
For specific limitations of the processing device of the live data, reference may be made to the above limitations on the processing method of the live data, and details are not described here. The modules in the processing device for live data may be implemented in whole or in part by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes non-volatile and/or volatile storage media, internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The network interface of the computer device is used for communicating with an external client through a network connection. The computer program is executed by a processor to implement the functions or steps of a service side of a method of processing live data.
In one embodiment, a computer device is provided, which may be a client, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external server through a network connection. The computer program is executed by a processor to implement the functions or steps of the client side of the live data processing method.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
responding to a network live broadcast starting instruction, and acquiring configuration data related to frame loss in a live broadcast process, wherein the configuration data comprises a maximum time span threshold of a sending queue;
when a network environment conversion event is triggered, acquiring audio and video data by using the configuration data related to frame loss, and encoding the acquired audio and video data into an audio and video sequence to be transmitted;
judging whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue;
and if so, performing frame loss processing on the audio and video data of the live broadcast associated port.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, performs the steps of:
responding to a network live broadcast starting instruction, and acquiring configuration data related to frame loss in a live broadcast process, wherein the configuration data comprises a maximum time span threshold of a sending queue;
when a network environment conversion event is triggered, acquiring audio and video data by using the configuration data related to frame loss, and encoding the acquired audio and video data into an audio and video sequence to be transmitted;
judging whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue;
and if so, performing frame loss processing on the audio and video data of the live broadcast associated port.
It should be noted that, the functions or steps that can be implemented by the computer-readable storage medium or the computer device can be referred to the related descriptions of the server side and the client side in the foregoing method embodiments, and are not described here one by one to avoid repetition.
It will be understood by those skilled in the art that all or part of the processes of the methods described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for processing live data, the method comprising:
responding to a network live broadcast starting instruction, and acquiring configuration data related to frame loss in a live broadcast process, wherein the configuration data comprises a maximum time span threshold of a sending queue;
when a network environment conversion event is triggered, acquiring audio and video data by using the configuration data related to frame loss, and encoding the acquired audio and video data into an audio and video sequence to be transmitted;
judging whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue;
and if so, performing frame loss processing on the audio and video data of the live broadcast associated port.
2. The method of claim 1, wherein, before the audio and video data is collected using the configuration data related to frame loss and encoded into the audio and video sequence to be sent when the network environment conversion event is triggered, the method further comprises:
detecting a network environment in the live broadcast process by using a network detection tool, and triggering a network environment conversion event if the network live broadcast environment is detected to meet the conversion condition;
wherein the conversion condition is that at least one of the following occurs: the variation amplitude of the wireless connection signal strength reaches a first threshold; the connection switches between a wireless connection signal and a mobile network signal; or the variation amplitude of the mobile network signal strength reaches a second threshold, the variation amplitude being the difference in signal strength produced within a preset time window.
3. The method according to claim 1, wherein the configuration data related to frame loss further includes a mapping relationship between a network environment and an audio-video sampling frequency, and when a network environment conversion event is triggered, the configuration data related to frame loss is used to collect audio-video data and encode the collected audio-video data into an audio-video sequence to be transmitted, specifically including:
when a network environment conversion event is triggered, resetting the audio and video sampling frequency by utilizing the mapping relation between the network environment and the audio and video sampling frequency;
and acquiring the audio and video data called back from the hardware capture device interface through the proxy mode, collecting the audio and video data at the reset audio and video sampling frequency, and encoding the collected audio and video data into an audio and video sequence to be sent.
4. The method according to claim 3, wherein when a network environment conversion event is triggered, resetting the audio/video sampling frequency by using the mapping relationship between the network environment and the audio/video sampling frequency specifically comprises:
when a network environment conversion event is triggered, acquiring state information corresponding to the current network environment, and inquiring an audio and video sampling frequency range applicable to the current network environment by using a mapping relation between the network environment and audio and video sampling frequency according to the state information corresponding to the current network environment;
and resetting the audio and video sampling frequency according to the audio and video sampling frequency range applicable to the current network environment.
5. The method according to any one of claims 1 to 4, wherein the performing frame loss processing on the audio and video data of the live broadcast associated port specifically includes:
respectively acquiring audio and video data acquired from a live broadcast associated acquisition port and audio and video data being coded in a live broadcast associated coding interface;
and performing frame loss processing on the acquired audio and video data and the audio and video data being coded.
6. The method according to any one of claims 1-4, wherein after performing frame dropping processing on the audio-video data of the live associated port, the method further comprises:
and packaging the audio and video sequence to be sent by using the configuration data related to frame loss and then executing an audio and video synchronization mechanism so as to send the live data after audio and video synchronization to a server corresponding to a content distribution network.
7. The method according to claim 6, wherein the configuration data related to frame loss further includes a time threshold for video frame overtime, and the performing an audio/video synchronization mechanism after packaging the audio/video sequence to be transmitted by using the configuration data related to frame loss to transmit live data after audio/video synchronization to a server corresponding to a content distribution network specifically includes:
extracting the timestamp of the video frame to be sent at the current moment from the audio and video sequence to be sent, and calculating the timestamp difference between the timestamp of the audio frame sent at the previous moment and the timestamp of the video frame to be sent at the current moment;
judging whether the timestamp difference value is larger than a time threshold value of overtime of the video frame;
and if so, judging that the audio and video sequence to be sent does not meet the audio and video synchronization condition, and sending the live broadcast data after audio and video synchronization to a server corresponding to a content distribution network after frame dropping processing is carried out on the lagging video frame in the live broadcast associated coding interface.
8. An apparatus for processing live data, the apparatus comprising:
the apparatus comprises an acquisition module used for responding to a network live broadcast starting instruction and acquiring configuration data related to frame loss in the live broadcast process, wherein the configuration data comprises a maximum time span threshold of a sending queue;
the transmitting module is used for acquiring the audio and video data by using the configuration data related to the frame loss when a network environment conversion event is triggered, and encoding the acquired audio and video data into an audio and video sequence to be transmitted;
the judging module is used for judging whether the time span between the head and tail timestamps of the audio and video sequence to be sent exceeds the maximum time span threshold of the sending queue;
and the frame dropping module is used for performing frame loss processing on the audio and video data of the live broadcast associated port when the judgment result is affirmative.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210854982.5A, filed 2022-07-19 (priority date 2022-07-19): Live broadcast data processing method and device, computer equipment and storage medium. Status: pending. Published as CN115103216A.

Priority Applications (1)

CN202210854982.5A (priority date and filing date 2022-07-19): Live broadcast data processing method and device, computer equipment and storage medium, published as CN115103216A (en).


Publications (1)

CN115103216A, published 2022-09-23.

Family ID: 83298510

Family Applications (1)

CN202210854982.5A (pending): Live broadcast data processing method and device, computer equipment and storage medium.

Country Status (1)

CN: CN115103216A (en).

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016015670A1 (en) * 2014-08-01 2016-02-04 广州金山网络科技有限公司 Audio stream decoding method and device
CN108495142A (en) * 2018-04-11 2018-09-04 腾讯科技(深圳)有限公司 Method for video coding and device
CN112822505A (en) * 2020-12-31 2021-05-18 杭州星犀科技有限公司 Audio and video frame loss method, device, system, storage medium and computer equipment
CN113037697A (en) * 2019-12-25 2021-06-25 深信服科技股份有限公司 Video frame processing method and device, electronic equipment and readable storage medium
CN114640886A (en) * 2022-02-28 2022-06-17 深圳市宏电技术股份有限公司 Bandwidth-adaptive audio and video transmission method and device, computer equipment and medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination