CN113676777A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN113676777A
CN113676777A
Authority
CN
China
Prior art keywords: target, video, data, audio, playing state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110949838.5A
Other languages
Chinese (zh)
Other versions
CN113676777B (en)
Inventor
汤然
欧阳俊
郑龙
尹壮
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202110949838.5A
Publication of CN113676777A
Application granted
Publication of CN113676777B
Active legal status
Anticipated expiration legal status

Classifications

    • H04N 21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04N 21/23106: Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion, involving caching operations
    • H04N 21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/8455: Structuring of content, e.g. decomposing content into time segments, involving pointers to the content, e.g. pointers to the I-frames of the video stream

Abstract

The application provides a data processing method and apparatus. The data processing method is applied to a client and includes the following steps: monitoring the playing state of a target video; when the playing state changes from an audio playing state to a video playing state, sending a video acquisition instruction to a target link node, the video acquisition instruction carrying a target video identifier; receiving target video data and target frame data information corresponding to the target video identifier returned by the target link node, the target frame data information being cached at the target link node; and playing the target video data according to the target frame data information. In this method, when the client resumes live video playback it plays the target video data according to the frame data information cached at the target link node, so the screen corruption ("screen splash") caused by forcibly resuming the live picture is avoided and the user experience is improved.

Description

Data processing method and device
Technical Field
The application relates to the technical field of internet, in particular to a data processing method. The present application also relates to a data processing apparatus, a data processing system, a computing device, and a computer-readable storage medium.
Background
With the development of internet technology, more and more people watch live video on the internet, and some open several live streams to watch at the same time. Because network bandwidth is limited, users may experience stuttering while watching a live stream. A user may also move a live stream to background playback; when that stream is restored to foreground playback, screen corruption and stuttering can likewise occur.
Disclosure of Invention
In view of this, the embodiments of the present application provide a data processing method. The application further relates to a data processing apparatus, a data processing system, a computing device, and a computer-readable storage medium, which address the prior-art problems of screen corruption and stuttering of the live picture when live video playback is resumed, and of stuttering when multiple live streams are watched simultaneously.
According to a first aspect of the embodiments of the present application, there is provided a data processing method applied to a client, including:
monitoring the playing state of the target video;
under the condition that the playing state is changed from the audio playing state to the video playing state, sending a video acquisition instruction to a target link node, wherein the video acquisition instruction carries a target video identifier;
receiving target video data and target frame data information which are returned by the target link node and correspond to the target video identifier, wherein the target frame data information is cached in the target link node;
and playing the target video data according to the target frame data information.
According to a second aspect of the embodiments of the present application, there is provided a data processing method applied to a target link node, including:
receiving video data sent by a server, and caching frame data information of the video data;
receiving a video acquisition instruction sent by a client, wherein the video acquisition instruction carries a target video identifier;
determining target video data and target frame data information according to the target video identifier;
and sending the target video data and the target frame data information to the client.
According to a third aspect of the embodiments of the present application, there is provided a data processing system, including a target link node and a client, where:
the target link node is configured to receive video data sent by a server and cache frame data information of the video data;
the client is configured to monitor a playing state of a target video, and send a video acquisition instruction to the target link node when the playing state is changed from an audio playing state to a video playing state, wherein the video acquisition instruction carries a target video identifier;
the target link node is further configured to receive the video acquisition instruction, determine target video data and target frame data information according to the target video identifier, and send the target video data and the target frame data information to the client;
the client is further configured to receive the target video data and the target frame data information, and play the target video data according to the target frame data information.
According to a fourth aspect of the embodiments of the present application, there is provided a data processing apparatus, applied to a client, including:
the monitoring module is configured to monitor the playing state of the target video;
the sending module is configured to send a video acquisition instruction to a target link node under the condition that the playing state is changed from the audio playing state to the video playing state, wherein the video acquisition instruction carries a target video identifier;
a receiving module, configured to receive target video data and target frame data information corresponding to the target video identifier, which are returned by the target link node, wherein the target frame data information is cached in the target link node;
and the playing module is configured to play the target video data according to the target frame data information.
According to a fifth aspect of the embodiments of the present application, there is provided a data processing apparatus, applied to a target link node, including:
the cache module is configured to receive video data sent by a server and cache frame data information of the video data;
the instruction receiving module is configured to receive a video acquisition instruction sent by a client, wherein the video acquisition instruction carries a target video identifier;
a determining module configured to determine target video data and target frame data information according to the target video identifier;
a sending module configured to send the target video data and the target frame data information to the client.
According to a sixth aspect of the embodiments of the present application, there is provided a computing device, including a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the processor implements the steps of the data processing method when executing the computer instructions.
According to a seventh aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the data processing method.
The data processing method provided by the application is applied to a client and comprises the following steps: monitoring the playing state of the target video; under the condition that the playing state is changed from the audio playing state to the video playing state, sending a video acquisition instruction to a target link node, wherein the video acquisition instruction carries a target video identifier; receiving target video data and target frame data information which are returned by the target link node and correspond to the target video identifier, wherein the target frame data information is cached in the target link node; and playing the target video data according to the target frame data information. According to the embodiment of the application, when the client side resumes the video live broadcast, the target video data is played according to the target frame data information cached by the target link node, so that the phenomenon that the live broadcast picture is forcibly resumed to cause picture screen splash is avoided, and the use experience of a user is improved.
Drawings
Fig. 1 is a flowchart of a data processing method applied to a client according to an embodiment of the present application;
fig. 2 is a flowchart of a data processing method applied to a target link node according to an embodiment of the present application;
FIG. 3 is a block diagram of a data processing system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a data processing apparatus applied to a client according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data processing apparatus applied to a target link node according to an embodiment of the present application;
fig. 6 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, such information should not be limited by these terms, which are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present application, a first aspect may be termed a second aspect and, similarly, a second aspect may be termed a first aspect. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
First, the noun terms to which one or more embodiments of the present application relate are explained.
Live broadcast: in this application, refers to live network video. Live audio and video can be pushed to a server in the form of a media stream; when a viewer watches the live broadcast, the server receives the user's request and delivers the video stream, through a content delivery network (CDN), to the website, app, or player on the client, realizing video playback.
CDN: content Delivery Network, i.e. a Content distribution Network. The CDN is an intelligent virtual network constructed on the basis of the existing network, and by means of edge servers deployed in various places and functional modules of load balancing, content distribution, scheduling and the like of a central platform, a user can obtain required content nearby, network congestion is reduced, and the access response speed and hit rate of the user are improved. The key technology of the CDN is mainly content storage and distribution technology.
RTMP: real Time Messaging Protocol, Real Time Messaging Protocol. The protocol is based on TCP and is a protocol family, comprising various variants of RTMP basic protocol and RTMPT/RTMPS/RTMPE. RTMP is a network protocol designed for real-time data communication, and is mainly used for audio-video and data communication between a Flash/AIR platform and a streaming media/interaction server supporting the RTMP protocol.
Stream pushing: pushing live content to the server.
Stream pulling: the process in which a client pulls live content that already exists on a server: using a specified address, the client establishes a connection with the server according to the protocol type (e.g. RTMP, RTP, RTSP, HTTP) and receives the data.
Web player: a video playback window on a web page.
Key frame: also called an I-frame. In video compression each frame represents a still image; in actual compression, various algorithms are used to reduce the data volume, of which IPB is the most common. An I-frame is a key frame, which can be understood as a complete copy of the picture: it can be decoded from its own data alone.
Difference frame: a P-frame. A P-frame records the difference between the current frame and the previous key frame (or previous difference frame); to generate the final picture, the difference defined by the frame must be superimposed on the previously buffered picture.
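The relationship between key frames and difference frames can be illustrated with a toy model (an assumption for illustration only, not the compression scheme of any real codec), in which a picture is a list of pixel values and a difference frame is a per-pixel delta:

```python
# Toy model: I-frame = complete picture, P-frame = per-pixel delta that must
# be superimposed on the previously decoded picture. Names are hypothetical.

def decode_i_frame(i_frame):
    """An I-frame is self-contained: decoding needs only its own data."""
    return list(i_frame)

def decode_p_frame(previous_picture, p_frame_delta):
    """A P-frame stores only differences; it is meaningless without the
    previously decoded picture to superimpose them on."""
    return [prev + delta for prev, delta in zip(previous_picture, p_frame_delta)]

# A 4-"pixel" picture compressed as one I-frame followed by two P-frames.
i_frame = [10, 20, 30, 40]
p1 = [1, 0, -2, 0]   # changes relative to the I-frame
p2 = [0, 5, 0, -1]   # changes relative to the picture after p1

picture = decode_i_frame(i_frame)
picture = decode_p_frame(picture, p1)   # picture after p1: [11, 20, 28, 40]
picture = decode_p_frame(picture, p2)   # picture after p2: [11, 25, 28, 39]
```

This is why a playback position that lands on a P-frame cannot be decoded on its own: the chain of pictures back to the last I-frame is needed.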
With the spread of the internet, more and more people watch live video online, with RTMP as the protocol used for live streaming. Some users open several live pages in a computer browser at the same time and minimize some of them; when video playback is restored, problems such as screen corruption and stuttering of the video picture can occur.
In view of this, in the present application, a data processing method is provided, and the present application relates to a data processing apparatus, a data processing system, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
Fig. 1 is a flowchart illustrating a data processing method according to a first embodiment of the present application, where the data processing method is applied to a client, and specifically includes the following steps:
step 102: and monitoring the playing state of the target video.
In the first embodiment of the present application, the data processing method is applied to a client with which a user watches live video. During live video playback there are usually two playing states: an audio playing state, in which only the audio is played and there is no video picture, and a video playing state, in which both the picture and the audio are played.
In practical application, a user may watch live videos through a client, and further may watch a plurality of live videos at the same time, where a video that needs to be processed is a target video. In the present application, the playing state of the target video needs to be monitored in real time.
In a specific embodiment provided by the present application, suppose a user watches live video in a browser on a laptop, with live video 1, live video 2, and live video 3 open at the same time; the playing state of each live video must be monitored. At this moment, the playing states of live video 1 and live video 2 are the audio playing state, and the playing state of live video 3 is the video playing state.
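The monitoring step can be sketched as a small in-memory state table; the class name, state names, and transition log below are hypothetical, not part of the application:

```python
# Minimal sketch of play-state monitoring for several live videos at once.
AUDIO_PLAYING = "audio"
VIDEO_PLAYING = "video"

class PlayStateMonitor:
    """Tracks the playing state of each live video and records transitions."""
    def __init__(self):
        self._states = {}
        self.transitions = []   # (video_id, old_state, new_state)

    def set_state(self, video_id, new_state):
        old_state = self._states.get(video_id)
        self._states[video_id] = new_state
        if old_state is not None and old_state != new_state:
            self.transitions.append((video_id, old_state, new_state))

    def state_of(self, video_id):
        return self._states[video_id]

# The situation in the embodiment above: three live videos watched at once.
monitor = PlayStateMonitor()
monitor.set_state("live video 1", AUDIO_PLAYING)
monitor.set_state("live video 2", AUDIO_PLAYING)
monitor.set_state("live video 3", VIDEO_PLAYING)
```

A later change, e.g. `monitor.set_state("live video 1", VIDEO_PLAYING)`, would be recorded as an audio-to-video transition for live video 1.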
Step 104: and sending a video acquisition instruction to a target link node under the condition that the playing state is changed from the audio playing state to the video playing state, wherein the video acquisition instruction carries a target video identifier.
When the playing state of the target video is changed, further detailed analysis needs to be performed according to the changed situation, and when the playing state of the target video is changed from the audio playing state to the video playing state, a video acquisition instruction is sent to the target link node.
In practice, the playing state of the target video may change from the audio playing state to the video playing state in several cases, including: it is monitored that the target video switches from background playback to foreground playback; or, while in the audio playing state, a video playing instruction sent by the user is received.
In the process of live video, if a target video is played in the background, the target video can be generally regarded as an audio playing state, and when the target video is monitored to be played from the background to the foreground, the playing state of the target video can be determined to be changed from the audio playing state to the video playing state.
In addition, the live video player provides an audio-only button. When the audio-only button is selected, the playing state of the target video is the audio playing state and only the audio is played; when it is not selected, the playing state is the video playing state, i.e., both audio and picture are played. If, while in the audio playing state, the user deselects the audio-only button, the user has in effect sent a video playing instruction to the client, and it can be determined that the playing state of the target video has changed from the audio playing state to the video playing state.
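The two triggers described above can be sketched as follows, under the assumption that the player exposes an "in foreground" flag and an "audio-only button selected" flag (both names are illustrative, not the application's API):

```python
AUDIO_PLAYING = "audio"
VIDEO_PLAYING = "video"

def playing_state(in_foreground, audio_only_selected):
    """The picture is shown only when the video is in the foreground and the
    audio-only button is not selected; otherwise only audio is played."""
    if in_foreground and not audio_only_selected:
        return VIDEO_PLAYING
    return AUDIO_PLAYING

def is_audio_to_video_transition(old_state, new_state):
    return old_state == AUDIO_PLAYING and new_state == VIDEO_PLAYING

# Trigger 1: the target video switches from background to foreground playback.
trigger1 = is_audio_to_video_transition(
    playing_state(in_foreground=False, audio_only_selected=False),
    playing_state(in_foreground=True, audio_only_selected=False))

# Trigger 2: while in the audio playing state, the user deselects the
# audio-only button (i.e. sends a video playing instruction).
trigger2 = is_audio_to_video_transition(
    playing_state(in_foreground=True, audio_only_selected=True),
    playing_state(in_foreground=True, audio_only_selected=False))
```

Either trigger yields the same transition, so both lead to the same next step: sending the video acquisition instruction.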
The target link node is specifically a CDN node used for scheduling video data, live video streams are pushed to the target link node from a server, and when a user needs to pull the live video streams, the live video streams are sent to the client from the target link node.
When the playing state is changed from the audio playing state to the video playing state, the video data of the target video needs to be acquired, and therefore a video acquisition instruction is sent to the target link node and used for acquiring the video data of the target video from the target link node, and the video acquisition instruction carries the target video identifier.
In a specific embodiment provided by the present application, continuing the above example with live video 1: live video 1 is playing in the background, so its playing state is the audio playing state. The user then switches live video 1 from background playback to foreground playback, and its playing state changes from the audio playing state to the video playing state. At this point a video acquisition instruction carrying the video identifier "live video 1" is sent to the target link node.
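As a sketch, the video acquisition instruction can be modelled as a small message whose essential content is the target video identifier, and the target link node's response as a lookup in its cache; the field names and dictionary-based cache are assumptions for illustration, not a wire format defined by the application:

```python
def make_video_acquisition_instruction(target_video_id):
    """The instruction's essential content is the target video identifier."""
    return {"type": "video_acquisition", "video_id": target_video_id}

def handle_instruction(cache, instruction):
    """What the target link node does on receipt (simplified): look up the
    cached video data and frame data information for the identifier."""
    entry = cache[instruction["video_id"]]
    return entry["video_data"], entry["frame_data_info"]

# Hypothetical cache state at the target link node.
cache = {"live video 1": {
    "video_data": b"...stream...",
    "frame_data_info": {"key_frame": "I", "diff_frames": ["P1", "P2"]},
}}

video_data, frame_info = handle_instruction(
    cache, make_video_acquisition_instruction("live video 1"))
```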
Step 106: and receiving target video data and target frame data information which are returned by the target link node and correspond to the target video identifier, wherein the target frame data information is cached in the target link node.
And the target link node responds to the video acquisition instruction and returns target video data and target frame data information corresponding to the target video identifier, wherein the target frame data information is cached in the target link node in advance.
The target video data specifically refers to video stream data of a target video, the target frame data information refers to video frame data used for restoring a video picture, and the target frame data information can restore a target video frame at a current time point.
Specifically, receiving target frame data information corresponding to the target video identifier, which is returned by the target link node, includes:
and receiving a target key frame and a difference frame set which are returned by the target link node and correspond to the target video frame identification.
In practice, when video playback is resumed, the frame obtained at that moment is, with high probability, a difference frame. A difference frame records only the difference between the current frame and the key frame or the previous difference frame, so the video picture cannot be restored from the difference frame alone. For this reason the target link node caches in advance a key frame and the set of difference frames between that key frame and the current frame; the pre-cached key frame is the target key frame. After the video acquisition instruction is sent to the target link node, the node returns the target key frame and the difference frame set corresponding to the target video frame, from which the current target video frame can be restored.
Further, to reduce the amount of computation, the target link node usually caches only the last key frame and the set of difference frames from that key frame to the current frame, where the last key frame is the most recent key frame the node has received. For example, with one key frame every 100 video frames, the 1st, 101st, 201st, … frames are key frames: when the target link node receives the 88th video frame, the 1st frame is the last key frame, and when it receives the 177th video frame, the 101st frame is the last key frame. After the video acquisition instruction is sent to the target link node, the node returns the last key frame and the difference frame set corresponding to the target video frame, from which the current target video frame can be restored.
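The caching rule just described can be sketched as follows (a minimal model, assuming one key frame every 100 frames as in the example; the class name is hypothetical): the node keeps only the most recent key frame and the difference frames received after it, discarding older data whenever a new key frame arrives.

```python
class LinkNodeFrameCache:
    """Caches the last key frame plus the difference frames since it."""
    def __init__(self):
        self.key_frame = None     # (frame_number, frame) of the last key frame
        self.diff_frames = []     # [(frame_number, frame), ...] since then

    def on_frame(self, frame_number, is_key_frame, frame):
        if is_key_frame:
            # A new key frame makes everything older unnecessary for restore.
            self.key_frame = (frame_number, frame)
            self.diff_frames = []
        else:
            self.diff_frames.append((frame_number, frame))

# Key frame every 100 frames: frames 1, 101, 201, ... are key frames.
cache = LinkNodeFrameCache()
for n in range(1, 178):                     # the node receives frames 1..177
    cache.on_frame(n, is_key_frame=(n % 100 == 1), frame=f"frame-{n}")

# After frame 177 the last key frame is frame 101, with diffs 102..177 cached.
```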
In a specific embodiment provided by the present application, continuing the above example, the live video stream L, the target key frame I, and the difference frame set {P1, P2, …, Pn} corresponding to the video identifier "live video 1" are received from the target link node.
Step 108: and playing the target video data according to the target frame data information.
The target frame data information specifically describes the video picture that the video stream shows at the current moment; once the target frame data information is determined, the target video data can be played on its basis.
In practical application, the target frame data information includes a target key frame and a difference frame set, so that playing the target video data according to the target frame data information specifically includes:
generating a target video frame according to the target key frame and the difference frame set;
and playing the target video data based on the target video frame.
In practical application, after the target key frame and the difference frame set are determined, a complete video frame can be determined according to the target key frame and the difference frame set, the complete video frame is used as the target video frame at the current moment, the target video data is played from the target video frame, and the process that the target video is changed from the audio playing state to the video playing state is completed.
In a specific embodiment provided by the present application, continuing the above example, the target video frame T is generated by restoring from the target key frame I and the difference frame set {P1, P2, …, Pn}, and the live video stream L is then played starting from target video frame T.
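Under a toy model in which a difference frame is a per-pixel delta (an illustrative assumption, not a real codec), the restore-and-play step can be sketched as:

```python
def generate_target_video_frame(key_frame, diff_frames):
    """Superimpose each cached difference frame, in order, on the key frame
    to restore the complete picture at the current moment."""
    picture = list(key_frame)
    for diff in diff_frames:
        picture = [p + d for p, d in zip(picture, diff)]
    return picture

def play_from(video_stream, target_frame):
    """Hypothetical stand-in for the player: starts the stream at the
    restored frame instead of forcing an undecodable difference frame."""
    return {"stream": video_stream, "first_frame": target_frame}

# Target key frame I and difference frame set {P1, P2} (toy values).
target_key_frame = [10, 20, 30, 40]
diff_set = [[1, 0, -2, 0], [0, 5, 0, -1]]

frame_t = generate_target_video_frame(target_key_frame, diff_set)
session = play_from("live video stream L", frame_t)
```

Starting playback from the restored frame is what avoids the screen corruption that forcibly resuming on a bare difference frame would cause.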
In practice, a user may watch several live videos at once. Video playback occupies considerable bandwidth, and when every live video is in the video playing state, the client's power and performance may be strained and playback may stutter. To address this, the data processing method provided by the present application further includes S1102-S1106:
and S1102, sending an audio acquisition instruction to the target link node under the condition that the playing state is changed from the video playing state to the audio playing state, wherein the audio acquisition instruction carries the target video identifier.
While monitoring the playing state of the target video, if the playing state changes from the video playing state to the audio playing state, an audio acquisition instruction is sent to the target link node. It instructs the node to deliver only the audio data, without transmitting the video information, reducing the stuttering caused by insufficient network bandwidth at the client. The audio acquisition instruction likewise carries the target video identifier.
In practical applications, the changing of the playing state from the video playing state to the audio playing state includes:
monitoring that the target video is changed from foreground playing to background playing; or receiving an audio playing instruction sent by a user under the condition of a video playing state.
When the user switches the target video to the background, the video portion of the target video no longer needs to be played; only the audio portion does, and it can be determined that the playing state of the target video has changed from the video playing state to the audio playing state. Alternatively, the user selects the audio-only button, i.e., sends an audio playing instruction to the client; in that case it can likewise be determined that the playing state has changed from the video playing state to the audio playing state.
In a specific embodiment provided by the present application, continuing the above example: after the playing state of live video 1 changes to the video playing state, live video 1 and live video 3 are both in the video playing state while live video 2 is in the audio playing state. Playing live video 1 and live video 3 together causes stuttering for the user. Since the user wants to watch the content of live video 1, live video 3 is switched to background playback; it is then monitored that live video 3 has changed from the video playing state to the audio playing state, and an audio acquisition instruction carrying the video identifier "live video 3" is sent to the target link node.
S1104, receiving target audio data corresponding to the target video identifier returned by the target link node, wherein the target audio data is separated from the target video data by the target link node.
The target audio data obtained by the target link node according to the audio acquisition instruction and corresponding to the target video identifier is received. The target link node separated the target audio data from the target video data, filtering out the video portion and keeping only the audio portion. This effectively reduces the bandwidth occupied, avoids stuttering of the live stream, and improves the user experience.
In a specific embodiment provided by the present application, continuing the above example, target audio data V corresponding to the video identifier "live video 3" is received from the target link node; the target audio data V was separated by the target link node from the video data corresponding to the video identifier "live video 3".
S1106: playing the target audio data.
After the target audio data is received, it can be played, satisfying the use scenario in which the user only listens to the live broadcast and thereby improving the user experience.
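The client-side flow of steps S1102 to S1106 can be sketched as follows. This is a minimal illustration only; the `Client`, `StubLinkNode`, and `PlayState` names are invented for the sketch and do not come from the patent.

```python
from enum import Enum

class PlayState(Enum):
    VIDEO = "video"   # foreground: render video and audio
    AUDIO = "audio"   # background: play audio only

class Client:
    """Monitor the play state; on a video -> audio transition, send an
    audio acquisition instruction carrying the video identifier (S1102),
    then receive and play the separated audio data (S1104/S1106)."""

    def __init__(self, link_node, video_id):
        self.link_node = link_node   # stub standing in for the target link node
        self.video_id = video_id
        self.state = PlayState.VIDEO
        self.played = []             # records what was played

    def on_state_change(self, new_state):
        old_state, self.state = self.state, new_state
        if old_state is PlayState.VIDEO and new_state is PlayState.AUDIO:
            # S1102: request audio-only data for this video identifier
            audio = self.link_node.get_audio(self.video_id)
            # S1104/S1106: receive and play the target audio data
            self.played.append(("audio", audio))

class StubLinkNode:
    def get_audio(self, video_id):
        return f"audio-of-{video_id}"

client = Client(StubLinkNode(), "live-3")
client.on_state_change(PlayState.AUDIO)   # user sends the video to background
print(client.played)
```

In a real client the state change would be driven by foreground/background callbacks or an audio play button, as described above.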
The data processing method applied to the client provided by the embodiment of the present application includes: monitoring the playing state of a target video; sending a video acquisition instruction to a target link node when the playing state is changed from the audio playing state to the video playing state, wherein the video acquisition instruction carries a target video identifier; receiving target video data and target frame data information corresponding to the target video identifier returned by the target link node, wherein the target frame data information is cached in the target link node; and playing the target video data according to the target frame data information. The data processing method provided by the present application is applied to scenarios that use the RTMP protocol for network stream transmission. When the client resumes live video playback, the target video data is played according to the target frame data information cached by the target link node, which avoids the picture corruption caused by forcibly resuming the live picture and improves the user experience.
Secondly, when the target video is changed from the video playing state to the audio playing state, only the audio data sent by the CDN node is received; all the live video data does not need to be received. This effectively reduces network bandwidth occupation, avoids the stuttering caused by processing many video streams simultaneously, and further improves the user experience.
Fig. 2 is a flowchart illustrating a data processing method according to an embodiment of the present application, where the data processing method is applied to a target link node, and specifically includes the following steps:
step 202: and receiving video data sent by a server, and caching frame data information of the video data.
During live-stream processing, the anchor end uploads video data to the server; the server processes the anchor's video data and distributes it to each CDN node. After receiving the video data, a CDN node needs to cache the frame data information of the video data, specifically the most recent key frame and each subsequent difference frame.
In a specific embodiment provided by the present application, taking a CDN node that receives three video streams as an example: the CDN node receives video data a, video data b, and video data c distributed by the live broadcast server, and caches the frame data information of each, that is, frame data information Ia corresponding to video data a, Ib corresponding to video data b, and Ic corresponding to video data c.
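The caching behavior described above (keep only the most recent key frame and the difference frames that follow it) can be sketched as follows. The `FrameCache` class and its method names are illustrative assumptions, not the patent's implementation.

```python
class FrameCache:
    """Per-stream cache: for each video identifier, keep the last key frame
    plus the difference frames after it (one group of pictures), which is
    enough to reconstruct the current picture on demand."""

    def __init__(self):
        self._cache = {}   # video_id -> {"key": frame, "diffs": [frames]}

    def on_frame(self, video_id, frame, is_key):
        if is_key:
            # a new key frame starts a new group: drop the old diff frames
            self._cache[video_id] = {"key": frame, "diffs": []}
        elif video_id in self._cache:
            self._cache[video_id]["diffs"].append(frame)

    def frame_info(self, video_id):
        return self._cache.get(video_id)

cache = FrameCache()
cache.on_frame("a", "K1", is_key=True)
cache.on_frame("a", "D1", is_key=False)
cache.on_frame("a", "D2", is_key=False)
cache.on_frame("a", "K2", is_key=True)   # a new key frame replaces the old group
print(cache.frame_info("a"))
```

Resetting the cache on every key frame bounds its size to a single group of pictures per stream.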
Step 204: receiving a video acquisition instruction sent by a client, wherein the video acquisition instruction carries a target video identifier.
When a client needs to watch a live video, video data is sent to the client for playback. During playback, a video acquisition instruction sent by the client may be received; the video acquisition instruction carries a target video identifier.
In a specific embodiment provided by the present application, along the above example, a video acquisition instruction sent by a client is received, and if the client wishes to pull video data of a video b, the video acquisition instruction carries a video identifier "video b".
Step 206: and determining target video data and target frame data information according to the target video identifier.
According to the target video identifier, the target video data and target frame data information corresponding to it can be located among the numerous video data. The target video data is specifically the live broadcast data, and the target frame data information is used to restore the target video frame at the current moment.
In a specific embodiment provided by the present application, following the above example, the target video data may be determined as video data b according to the video identifier "video b", and meanwhile, the target frame data information is determined as frame data information Ib.
Step 208: and sending the target video data and the target frame data information to the client.
After the target video data and the target frame data information are determined, the target video data and the target frame data information can be returned to the client in response to the video acquisition instruction, so that the client can play the target video conveniently.
In a specific embodiment provided by the present application, following the above example, the video data b and the frame data information Ib are sent to the client, so that the client plays the video data b.
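Steps 204 to 208 amount to a lookup keyed by the target video identifier followed by a response. A minimal sketch, in which the `LinkNode` class and its storage layout are illustrative assumptions:

```python
class LinkNode:
    """Map a target video identifier to its live video data and cached
    frame data information, and return both in response to a video
    acquisition instruction."""

    def __init__(self):
        self.video_data = {}   # video_id -> live video data
        self.frame_info = {}   # video_id -> cached frame data information

    def add_stream(self, video_id, data, info):
        self.video_data[video_id] = data
        self.frame_info[video_id] = info

    def handle_video_request(self, video_id):
        # step 206: determine target video data and target frame data info
        if video_id not in self.video_data:
            raise KeyError(f"unknown video identifier: {video_id}")
        # step 208: return both to the requesting client
        return self.video_data[video_id], self.frame_info[video_id]

node = LinkNode()
node.add_stream("video b", "data-b", "Ib")
data, info = node.handle_video_request("video b")
print(data, info)
```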
Optionally, the method further includes:
receiving an audio acquisition instruction sent by the client, wherein the audio acquisition instruction carries the target video identifier;
determining the target video data according to the target video identification;
separating target audio data from the target video data;
and sending the target audio data to the client.
When an audio acquisition instruction sent by the client is received, the target video data is determined according to the target video identifier carried in the instruction, target audio data is separated from the target video data, and only the target audio data is sent to the client, which reduces bandwidth occupation and improves transmission efficiency.
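The separation step can be sketched as a filter over an interleaved stream. The `(kind, payload)` packet model below is an illustrative stand-in for the tagged audio and video packets of a real RTMP/FLV stream, not the patent's actual format.

```python
def separate_audio(target_video_data):
    """Keep only the audio packets of a muxed stream; the video packets
    are filtered out, as described for the target link node."""
    return [payload for kind, payload in target_video_data if kind == "audio"]

# an interleaved live stream: video and audio packets alternate
muxed = [("video", "v0"), ("audio", "a0"), ("video", "v1"), ("audio", "a1")]
print(separate_audio(muxed))
```

Because only the audio payloads are forwarded, the response is a small fraction of the full stream, which is the bandwidth saving the text describes.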
The data processing method above is applied to a target link node and is suitable for scenarios that use the RTMP protocol for network stream transmission. Through an agreement between the CDN node and the client, complete live video data is provided when the client sends a video acquisition instruction, and only the audio data within the live video data is provided when the client sends an audio acquisition instruction. This reduces the bandwidth consumed when a user watches multiple live broadcasts simultaneously, reduces the stuttering caused by limited network bandwidth, and improves the user experience.
Referring to fig. 3, fig. 3 is a schematic diagram of a data processing system provided in an embodiment of the present application, the data processing system including a target link node 302 and a client 304, where:
the target link node 302 is configured to receive video data sent by a server and cache frame data information of the video data;
the client 304 is configured to monitor a playing state of a target video, and send a video acquisition instruction to the target link node 302 when the playing state is changed from an audio playing state to a video playing state, where the video acquisition instruction carries a target video identifier;
the target link node 302 is further configured to receive the video acquisition instruction, determine target video data and target frame data information according to the target video identifier, and send the target video data and the target frame data information to the client 304;
the client 304 is further configured to receive the target video data and the target frame data information, and play the target video data according to the target frame data information.
Optionally, the data processing system further includes:
the client 304 is further configured to send an audio obtaining instruction to the target link node 302 when the playing state is changed from the video playing state to the audio playing state, where the audio obtaining instruction carries the target video identifier;
the target link node 302 is further configured to receive the audio acquisition instruction, determine the target video data according to the target video identifier, separate target audio data from the target video data, and send the target audio data to the client 304;
the client 304 is further configured to receive and play the target audio data.
The data processing system provided by the embodiment of the present application is suitable for scenarios that use the RTMP protocol for network stream transmission. When the client resumes live video playback, the target video data is played according to the target frame data information cached by the target link node, which avoids the picture corruption caused by forcibly resuming the live picture and improves the user experience.
Secondly, when the target video is changed from the video playing state to the audio playing state, only the audio data sent by the target link node is received; all the live video data does not need to be received. This effectively reduces network bandwidth occupation, avoids the stuttering caused by processing many video streams simultaneously, and further improves the user experience.
Corresponding to the above data processing method embodiment applied to the client, the present application further provides a data processing apparatus embodiment applied to the client, and fig. 4 shows a schematic structural diagram of a data processing apparatus provided in an embodiment of the present application. As shown in fig. 4, the apparatus includes:
a monitoring module 402 configured to monitor a play status of a target video;
a sending module 404, configured to send a video obtaining instruction to a target link node when the playing state is changed from the audio playing state to the video playing state, where the video obtaining instruction carries a target video identifier;
a receiving module 406, configured to receive target video data and target frame data information corresponding to the target video identifier, which are returned by the target link node, where the target frame data information is cached in the target link node;
a playing module 408 configured to play the target video data according to the target frame data information.
Optionally, the sending module 404 is further configured to:
sending an audio acquisition instruction to the target link node under the condition that the playing state is changed from a video playing state to an audio playing state, wherein the audio acquisition instruction carries the target video identifier;
the receiving module 406 is further configured to:
receiving target audio data corresponding to the target video identification returned by the target link node, wherein the target audio data is separated from the target video data by the target link node;
the play module 408 is further configured to:
and playing the target audio data.
Optionally, the receiving module 406 is further configured to:
and receiving a target key frame and a difference frame set which are returned by the target link node and correspond to the target video frame identification.
Optionally, the playing module 408 is further configured to:
generating a target video frame according to the target key frame and the difference frame set;
and playing the target video data based on the target video frame.
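Generating the target video frame from the key frame and the difference frame set can be sketched as follows, under an illustrative model in which a frame is a dict of pixel blocks and each difference frame carries only the changed blocks. This dict model is an assumption made for the sketch, not the patent's codec.

```python
def generate_target_frame(key_frame, diff_set):
    """Rebuild the current picture: start from the cached target key frame
    and apply each cached difference frame in decode order."""
    frame = dict(key_frame)        # copy so the cached key frame is untouched
    for diff in diff_set:
        frame.update(diff)         # overlay only the blocks that changed
    return frame

key = {"b0": 10, "b1": 20, "b2": 30}
diffs = [{"b1": 21}, {"b2": 35, "b0": 11}]
print(generate_target_frame(key, diffs))
```

Starting playback from this reconstructed frame is what lets the client resume without the picture corruption that decoding from a mid-stream difference frame would cause.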
Optionally, the changing the playing state from the audio playing state to the video playing state includes:
monitoring that the target video is changed from background playing to foreground playing; or
receiving, in the audio playing state, a video playing instruction sent by the user.
Optionally, the changing the playing state from the video playing state to the audio playing state includes:
monitoring that the target video is changed from foreground playing to background playing; or
receiving, in the video playing state, an audio playing instruction sent by the user.
The data processing apparatus applied to the client monitors the playing state of a target video through the monitoring module; sends, through the sending module, a video acquisition instruction to a target link node when the playing state is changed from the audio playing state to the video playing state, the video acquisition instruction carrying a target video identifier; receives, through the receiving module, target video data and target frame data information corresponding to the target video identifier returned by the target link node, the target frame data information being cached in the target link node; and plays, through the playing module, the target video data according to the target frame data information. Through the data processing apparatus provided by the present application, when the client resumes live video playback, the target video data is played according to the target frame data information cached by the target link node, which avoids the picture corruption caused by forcibly resuming the live picture and improves the user experience.
Secondly, when the target video is changed from the video playing state to the audio playing state, only the audio data sent by the CDN node is received; all the live video data does not need to be received. This effectively reduces network bandwidth occupation, avoids the stuttering caused by processing many video streams simultaneously, and further improves the user experience.
The foregoing is an illustrative scheme of a data processing apparatus applied to a client according to the embodiment. It should be noted that the technical solution applied to the data processing apparatus of the client belongs to the same concept as the technical solution applied to the data processing method of the client, and details that are not described in detail in the technical solution applied to the data processing apparatus of the client can be referred to the description of the technical solution applied to the data processing method of the client.
Corresponding to the above data processing method applied to the target link node, the present application also provides an embodiment of a data processing apparatus applied to the target link node, and fig. 5 shows a schematic structural diagram of a data processing apparatus provided in an embodiment of the present application. As shown in fig. 5, the apparatus includes:
a cache module 502 configured to receive video data sent by a server and cache frame data information of the video data;
an instruction receiving module 504, configured to receive a video acquisition instruction sent by a client, where the video acquisition instruction carries a target video identifier;
a determining module 506 configured to determine target video data and target frame data information according to the target video identifier;
a sending module 508 configured to send the target video data and the target frame data information to the client.
Optionally, the instruction receiving module 504 is further configured to:
receiving an audio acquisition instruction sent by the client, wherein the audio acquisition instruction carries the target video identifier;
the determining module 506, further configured to determine the target video data according to the target video identification;
a separation module configured to separate target audio data from the target video data;
the sending module 508, configured to send the target audio data to the client.
The data processing apparatus is applied to a target link node. Through an agreement between the CDN node and the client, complete live video data is provided when the client sends a video acquisition instruction, and only the audio data within the live video data is provided when the client sends an audio acquisition instruction, thereby reducing the bandwidth consumed when a user watches multiple live broadcasts simultaneously, reducing the stuttering caused by limited network bandwidth, and improving the user experience.
The above is an illustrative scheme of the data processing apparatus applied to the target link node in this embodiment. It should be noted that the technical solution of the data processing apparatus belongs to the same concept as the technical solution of the data processing method applied to the target link node, and details of the technical solution of the data processing apparatus applied to the target link node, which are not described in detail, can be referred to the description of the technical solution of the data processing method applied to the target link node.
Fig. 6 illustrates a block diagram of a computing device 600 provided according to an embodiment of the present application. The components of the computing device 600 include, but are not limited to, a memory 610 and a processor 620. The processor 620 is coupled to the memory 610 via a bus 630 and a database 650 is used to store data.
Computing device 600 also includes access device 640, which enables computing device 600 to communicate via one or more networks 660. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. Access device 640 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so on.
In one embodiment of the present application, the above-described components of computing device 600, as well as other components not shown in FIG. 6, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 6 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 600 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 600 may also be a mobile or stationary server.
Wherein the processor 620, when executing the computer instructions, performs the steps of the data processing method.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the data processing method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the data processing method.
An embodiment of the present application further provides a computer readable storage medium, which stores computer instructions, and the computer instructions, when executed by a processor, implement the steps of the data processing method as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the data processing method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the data processing method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously according to the present application. Furthermore, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (14)

1. A data processing method is applied to a client and comprises the following steps:
monitoring the playing state of the target video;
under the condition that the playing state is changed from the audio playing state to the video playing state, sending a video acquisition instruction to a target link node, wherein the video acquisition instruction carries a target video identifier;
receiving target video data and target frame data information which are returned by the target link node and correspond to the target video identifier, wherein the target frame data information is cached in the target link node;
and playing the target video data according to the target frame data information.
2. The data processing method of claim 1, wherein the method further comprises:
sending an audio acquisition instruction to the target link node under the condition that the playing state is changed from a video playing state to an audio playing state, wherein the audio acquisition instruction carries the target video identifier;
receiving target audio data corresponding to the target video identification returned by the target link node, wherein the target audio data is separated from the target video data by the target link node;
and playing the target audio data.
3. The data processing method of claim 1, wherein receiving target frame data information corresponding to the target video identifier returned by the target link node comprises:
and receiving a target key frame and a difference frame set which are returned by the target link node and correspond to the target video frame identification.
4. The data processing method of claim 3, wherein playing the target video data according to the target frame data information comprises:
generating a target video frame according to the target key frame and the difference frame set;
and playing the target video data based on the target video frame.
5. The data processing method according to any one of claims 1 to 4, wherein the changing of the playing state from the audio playing state to the video playing state comprises:
monitoring that the target video is changed from background playing to foreground playing; or
receiving, in the audio playing state, a video playing instruction sent by a user.
6. The data processing method of claim 2, wherein changing the play state from a video play state to an audio play state comprises:
monitoring that the target video is changed from foreground playing to background playing; or
receiving, in the video playing state, an audio playing instruction sent by a user.
7. A data processing method, applied to a target link node, comprising:
receiving video data sent by a server, and caching frame data information of the video data;
receiving a video acquisition instruction sent by a client, wherein the video acquisition instruction carries a target video identifier;
determining target video data and target frame data information according to the target video identifier;
and sending the target video data and the target frame data information to the client.
8. The data processing method of claim 7, wherein the method further comprises:
receiving an audio acquisition instruction sent by the client, wherein the audio acquisition instruction carries the target video identifier;
determining the target video data according to the target video identification;
separating target audio data from the target video data;
and sending the target audio data to the client.
9. A data processing system, comprising a target link node, a client, wherein:
the target link node is configured to receive video data sent by a server and cache frame data information of the video data;
the client is configured to monitor a playing state of a target video, and send a video acquisition instruction to the target link node when the playing state is changed from an audio playing state to a video playing state, wherein the video acquisition instruction carries a target video identifier;
the target link node is further configured to receive the video acquisition instruction, determine target video data and target frame data information according to the target video identifier, and send the target video data and the target frame data information to the client;
the client is further configured to receive the target video data and the target frame data information, and play the target video data according to the target frame data information.
10. The data processing system of claim 9, wherein the system further comprises:
the client is further configured to send an audio acquisition instruction to the target link node when the playing state is changed from a video playing state to an audio playing state, where the audio acquisition instruction carries the target video identifier;
the target link node is further configured to receive the audio acquisition instruction, determine the target video data according to the target video identifier, separate target audio data from the target video data, and send the target audio data to the client;
the client is further configured to receive and play the target audio data.
11. A data processing device, applied to a client, includes:
the monitoring module is configured to monitor the playing state of the target video;
the sending module is configured to send a video acquisition instruction to a target link node under the condition that the playing state is changed from the audio playing state to the video playing state, wherein the video acquisition instruction carries a target video identifier;
a receiving module, configured to receive target video data and target frame data information corresponding to the target video identifier, which are returned by the target link node, wherein the target frame data information is cached in the target link node;
and the playing module is configured to play the target video data according to the target frame data information.
12. A data processing apparatus, applied to a target link node, comprising:
the cache module is configured to receive video data sent by a server and cache frame data information of the video data;
the instruction receiving module is configured to receive a video acquisition instruction sent by a client, wherein the video acquisition instruction carries a target video identifier;
a determining module configured to determine target video data and target frame data information according to the target video identifier;
a sending module configured to send the target video data and the target frame data information to the client.
13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-6 or 7-8 when executing the computer instructions.
14. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 6 or 7 to 8.
CN202110949838.5A 2021-08-18 2021-08-18 Data processing method and device Active CN113676777B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110949838.5A CN113676777B (en) 2021-08-18 2021-08-18 Data processing method and device

Publications (2)

Publication Number Publication Date
CN113676777A true CN113676777A (en) 2021-11-19
CN113676777B CN113676777B (en) 2024-03-08

Family

ID=78543843


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8285886B1 (en) * 2010-08-30 2012-10-09 Adobe Systems Incorporated Live media playback adaptive buffer control
US20170155928A1 (en) * 2015-11-26 2017-06-01 Le Holdings (Beijing) Co., Ltd. Method, Device and System for Playing Live Video
CN108566561A (en) * 2018-04-18 2018-09-21 腾讯科技(深圳)有限公司 Video broadcasting method, device and storage medium
CN110493635A (en) * 2019-08-23 2019-11-22 腾讯科技(深圳)有限公司 Video broadcasting method, device and terminal
CN110784740A (en) * 2019-11-25 2020-02-11 北京三体云时代科技有限公司 Video processing method, device, server and readable storage medium
CN111510756A (en) * 2019-01-30 2020-08-07 上海哔哩哔哩科技有限公司 Audio and video switching method and device, computer equipment and readable storage medium
CN112261418A (en) * 2020-09-18 2021-01-22 网宿科技股份有限公司 Method for transmitting live video data and live broadcast acceleration system

Also Published As

Publication number Publication date
CN113676777B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
US20170311006A1 (en) Method, system and server for live streaming audio-video file
WO2011146898A2 (en) Internet system for ultra high video quality
CN106998485B (en) Video live broadcasting method and device
TW201427391A (en) Media streaming method, device therewith and device for providing the media streaming
CN113141522B (en) Resource transmission method, device, computer equipment and storage medium
CN104837043B (en) Multimedia information processing method and electronic equipment
CN112019905A (en) Live broadcast playback method, computer equipment and readable storage medium
EP1879353B1 (en) Contents distribution system, contents distribution server, contents reproduction terminal, and contents distribution method
JP7151004B2 (en) Interruptible video transcoding
CN113055692A (en) Data processing method and device
CN112752113B (en) Method and device for determining abnormal factors of live broadcast server
CN113891175B (en) Live broadcast push flow method, device and system
CN113923470A (en) Live stream processing method and device
CN107690093B (en) Video playing method and device
CN111818383B (en) Video data generation method, system, device, electronic equipment and storage medium
US10091265B2 (en) Catching up to the live playhead in live streaming
EP3466081A1 (en) Catching up to the live playhead in live streaming
CN114363703B (en) Video processing method, device and system
CN113923502B (en) Live video playing method and device
CN114679598B (en) Live broadcast pushing method and device
CN113676777B (en) Data processing method and device
CN114339296A (en) Method and device for transmitting media stream and media system
CN114501052B (en) Live broadcast data processing method, cloud platform, computer equipment and storage medium
CN115348409A (en) Video data processing method and device, terminal equipment and storage medium
US10904590B2 (en) Method and system for real time switching of multimedia content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant