CN117596444A - Video stutter detection method, system, device, and storage medium - Google Patents

Video stutter detection method, system, device, and storage medium Download PDF

Info

Publication number
CN117596444A
Authority
CN
China
Prior art keywords
video frame
frame
video
time difference
current video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311295365.7A
Other languages
Chinese (zh)
Inventor
周留刚
刘顶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Huacheng Software Technology Co Ltd
Original Assignee
Hangzhou Huacheng Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Huacheng Software Technology Co Ltd filed Critical Hangzhou Huacheng Software Technology Co Ltd
Priority to CN202311295365.7A priority Critical patent/CN117596444A/en
Publication of CN117596444A publication Critical patent/CN117596444A/en
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44209: Monitoring of downstream path of the transmission network originating from a server, e.g. bandwidth variations of a wireless network

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a video stutter detection method, system, device, and storage medium. The video stutter detection method includes: in response to the acquired current video frame being a non-first video frame, acquiring frame information of the current video frame and frame information of a historical video frame in a video frame sampling queue, where the frame information includes an encoding timestamp, a playing timestamp, a frame sequence number, and identification information; determining whether the current video frame and the historical video frame are consecutive video frames according to the frame-sequence-number interval between their frame sequence numbers; if so, calculating the encoding time difference between the encoding timestamps of the current and historical video frames, and the playing time difference between their playing timestamps; and if the numerical comparison result between the playing time difference and the encoding time difference is greater than a preset time-difference threshold, marking the identification information of the current video frame as a stuttered video frame. With this scheme, video stutter can be detected.

Description

Video stutter detection method, system, device, and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video stutter detection method, system, device, and storage medium.
Background
In a video acquisition system, video data is generally captured by a video acquisition end and sent to a video playing end over a network; the video playing end decodes the video data and displays it to the user.
When the video playing end displays the video, stutter can occur due to various influences such as network fluctuation, so that playback is not smooth. In the prior art, whether the video stutters is often judged by detecting the video frame rate or the video buffer data, but these methods have low efficiency and high performance overhead.
Therefore, an accurate, efficient, and low-overhead video stutter detection method is needed to detect video stutter.
Disclosure of Invention
The application provides at least a video stutter detection method, system, apparatus, electronic device, and computer-readable storage medium.
A first aspect of the present application provides a video stutter detection method, including: in response to the acquired current video frame being a non-first video frame, acquiring frame information of the current video frame and frame information of a historical video frame in a video frame sampling queue, where the historical video frame precedes the current video frame in time sequence, and the frame information includes an encoding timestamp, a playing timestamp, a frame sequence number, and identification information; determining whether the current video frame and the historical video frame are consecutive video frames according to the frame-sequence-number interval between their frame sequence numbers; in response to the current video frame and the historical video frame being consecutive video frames, calculating the encoding time difference between the encoding timestamp of the current video frame and that of the historical video frame, and the playing time difference between the playing timestamp of the current video frame and that of the historical video frame; and if the numerical comparison result between the playing time difference and the encoding time difference is greater than a preset time-difference threshold, marking the identification information of the current video frame as a stuttered video frame.
In an embodiment, after the step of determining whether the current video frame and the historical video frame are consecutive video frames according to the frame-sequence-number interval between their frame sequence numbers, the method further includes: in response to the current video frame and the historical video frame being non-consecutive video frames, adjusting the encoding time difference between the current video frame and the historical video frame to obtain an adjusted encoding time difference, where the numerical comparison result between the playing time difference and the adjusted encoding time difference is greater than the time-difference threshold; and marking the identification information of the current video frame as a stuttered video frame.
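The embodiment above treats a sequence-number gap (frame loss) as stutter by shrinking the encoding time difference until the comparison necessarily exceeds the threshold. The patent only requires that the comparison result exceed the threshold after adjustment; the concrete adjustment rule below (capping the denominator just under play_diff / threshold) is an illustrative assumption, not the patented formula.

```python
def handle_discontinuous(play_diff, encode_diff, threshold=1.0):
    """Adjust the encoding time difference for non-consecutive frames so that
    play_diff / adjusted > threshold, forcing a "stutter" mark.

    The 0.99 shrink factor is a hypothetical choice; any adjustment that
    pushes the comparison past the threshold would satisfy the description.
    """
    adjusted = min(encode_diff, play_diff / threshold * 0.99)
    # After adjustment the ratio comparison always exceeds the threshold,
    # so the frame is marked as stuttered.
    assert play_diff / adjusted > threshold
    return adjusted, "stutter"
```

A frame-loss gap thus never slips through the later ratio test, regardless of how the raw timestamps compare.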
In an embodiment, after the step of calculating the encoding time difference between the encoding timestamps of the current and historical video frames and the playing time difference between their playing timestamps, the method further includes: comparing the numerical comparison result between the playing time difference and the encoding time difference with the time-difference threshold; if the numerical comparison result is less than or equal to the time-difference threshold, marking the identification information of the current video frame as a smooth video frame; and storing the smooth video frame into the video frame sampling queue.
In an embodiment, the method further includes: in response to the acquired current video frame being the first video frame, creating the video frame sampling queue; marking the identification information of the current video frame as a smooth video frame; and storing the smooth video frame into the video frame sampling queue.
In an embodiment, after the step of marking the identification information of the current video frame as a stuttered video frame to obtain the marked frame information of the current video frame, the method further includes:
storing the marked frame information of the current video frame into the video frame sampling queue, where the frame information in the video frame sampling queue covers stuttered video frames and/or smooth video frames; in response to the number of stuttered video frames stored in the video frame sampling queue being greater than a preset stuttered-frame-count threshold, determining a stutter interval corresponding to the stuttered video frames in the video frame sampling queue based on their frame information; obtaining the stutter degree of the stutter interval; and if the stutter degree of the stutter interval is greater than a preset stutter threshold, generating video stutter information, which is used to notify a target object of the video stutter.
In an embodiment, the step of obtaining the stutter degree of the stutter interval includes:
acquiring the number of stuttered video frames in the stutter interval and the total number of video frames in the stutter interval; and determining the stutter degree of the stutter interval based on a numerical comparison result between the number of stuttered video frames in the interval and the total number of video frames in the interval.
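Reading the "numerical comparison result" here as a ratio, the stutter degree reduces to the fraction of frames in the interval marked as stuttered. This is a minimal sketch under that assumption; the marker strings are illustrative.

```python
def stutter_degree(marks):
    """Stutter degree of an interval: ratio of frames marked "stutter" to the
    total number of frames in the interval (one plausible reading of the
    numerical comparison described above)."""
    if not marks:
        return 0.0
    stuttered = sum(1 for m in marks if m == "stutter")
    return stuttered / len(marks)
```

The resulting value can then be compared against the preset stutter threshold to decide whether to emit video stutter information.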
In an embodiment, the step of determining a stutter interval corresponding to the stuttered video frames in the video frame sampling queue based on their frame information includes:
taking the stuttered video frame corresponding to the current video frame in the video frame sampling queue as the right boundary of the stutter interval; searching the video frame sampling queue for the stuttered video frame separated from the right boundary by a preset number of stuttered video frames as the left boundary of the stutter interval; and determining the stutter interval from the left boundary and the right boundary.
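The boundary search above can be sketched by scanning the queue backwards from the newest frame. The exact counting convention (whether the right boundary itself counts toward the preset number) is not pinned down in the text, so the version below, which counts it, is an assumption.

```python
def stutter_interval(marks, preset_count):
    """Return (left, right) indices of the stutter interval in the queue.

    right: index of the stuttered frame for the current (newest) frame;
    left: index of the stuttered frame that is preset_count stuttered frames
    before the right boundary (counting convention is assumed).
    """
    right = len(marks) - 1
    seen = 0
    for i in range(right, -1, -1):
        if marks[i] == "stutter":
            seen += 1
            if seen == preset_count + 1:
                return i, right
    return 0, right  # too few stuttered frames: interval spans the whole queue
```

For example, with marks ["smooth", "stutter", "smooth", "stutter", "stutter"] and a preset count of 2, the interval is indices 1 through 4.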
A second aspect of the present application provides a video stutter detection system, the system including a streaming media server and a client, the client including:
a signaling processing module, configured to interact with the streaming media server, where the interaction includes sending a stream request to the streaming media server so that, after receiving the stream request, the streaming media server sends video stream data to the client; a data receiving module, configured to receive the video stream data sent by the streaming media server; a decoding module, configured to decode the received video stream data into video frames of a target format; and a stutter detection module, configured to: in response to the acquired current video frame being a non-first video frame, acquire frame information of the current video frame and frame information of a historical video frame in a video frame sampling queue, where the historical video frame precedes the current video frame in time sequence, and the frame information includes an encoding timestamp, a playing timestamp, a frame sequence number, and identification information; determine whether the current video frame and the historical video frame are consecutive video frames according to the frame-sequence-number interval between their frame sequence numbers; in response to the current video frame and the historical video frame being consecutive video frames, calculate the encoding time difference between the encoding timestamp of the current video frame and that of the historical video frame, and the playing time difference between the playing timestamp of the current video frame and that of the historical video frame; and if the numerical comparison result between the playing time difference and the encoding time difference is greater than a preset time-difference threshold, mark the identification information of the current video frame as a stuttered video frame.
A third aspect of the present application provides a video stutter detection apparatus, including: an acquisition module, configured to, in response to the acquired current video frame being a non-first video frame, acquire frame information of the current video frame and frame information of a historical video frame in a video frame sampling queue, where the historical video frame precedes the current video frame in time sequence, and the frame information includes an encoding timestamp, a playing timestamp, a frame sequence number, and identification information; a continuity judging module, configured to determine whether the current video frame and the historical video frame are consecutive video frames according to the frame-sequence-number interval between their frame sequence numbers; a calculating module, configured to calculate, in response to the current video frame and the historical video frame being consecutive video frames, the encoding time difference between the encoding timestamp of the current video frame and that of the historical video frame, and the playing time difference between the playing timestamp of the current video frame and that of the historical video frame; and a stutter marking module, configured to mark the identification information of the current video frame as a stuttered video frame if the numerical comparison result between the playing time difference and the encoding time difference is greater than a preset time-difference threshold.
A fourth aspect of the present application provides an electronic device, including a memory and a processor, where the processor is configured to execute program instructions stored in the memory to implement the video stutter detection method described above.
A fifth aspect of the present application provides a computer-readable storage medium having program instructions stored thereon, which, when executed by a processor, implement the video stutter detection method described above.
According to the above scheme, whether the current video frame and the historical video frame are consecutive video frames is determined by acquiring the frame sequence number of the current video frame and that of the historical video frame in the video sampling queue; for consecutive current and historical video frames, the encoding time difference and the playing time difference between them are calculated; and if the numerical comparison result between the playing time difference and the encoding time difference is greater than the preset time-difference threshold, the gap between the encoding time difference and the playing time difference is too large and stutter may exist, so the identification information of the current video frame is marked as a stuttered video frame. Video stutter detection can thus be achieved, and its efficiency improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
FIG. 1 is a flow chart of an exemplary embodiment of a video stutter detection method of the present application;
FIG. 2 is a schematic view of an exemplary application environment of the video stutter detection method of the present application;
FIG. 3 is a block diagram of a video stutter detection system shown in an exemplary embodiment of the present application;
FIG. 4 is a block diagram of a video stutter detection apparatus according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of an embodiment of an electronic device of the present application;
FIG. 6 is a schematic diagram illustrating the structure of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes the embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Further, "a plurality" herein means two or more. The term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Referring to fig. 1, fig. 1 is a flowchart illustrating an exemplary embodiment of a video stutter detection method according to the present application. Specifically, the method, applied to the client side, may include the following steps:
step S110, in response to the acquired current video frame being a non-first frame video frame, acquiring frame information of the current video frame and frame information of a historical video frame in a video frame sampling queue, wherein the time sequence of the historical video frame is prior to the time sequence of the current video frame, and the frame information comprises an encoding time stamp, a playing time stamp, a frame sequence number and identification information.
For ease of understanding and description, the application environment shown in fig. 2 is taken as an example to describe the video stutter detection method of the present application. Referring to fig. 2, fig. 2 is a schematic view of an exemplary application environment of the video stutter detection method of the present application. The application environment includes a video stutter detection system comprising a device side, a streaming media server side, and a client side. The device side may be a device with a video acquisition function, used to capture and encode video and to send video frame data to the streaming media server side. The streaming media server side provides a signaling service, a stream forwarding service, and the like: the signaling service provides functions such as stream requesting and control, and the stream forwarding service provides functions such as stream data forwarding. The client may be a user access terminal loaded with application software and includes a signaling processing module, a data receiving module, a decoding module, a stutter detection module, and the like. The signaling processing module interacts with the signaling service of the streaming media server, sending stream requests and the like. The stream data receiving module receives the video stream data sent by the streaming media server over the network; after receiving the video stream data, it demultiplexes the data and sends it to the decoding module to be decoded into YUV format. Before the YUV video frame data is rendered and played, its frame information is sent to the stutter detection module for stutter detection, after which the frame is played and displayed.
The first video frame refers to the first frame of the video stream when the client pulls the stream from the streaming media server (the process by which the client obtains real-time video stream content from the server or platform for display); similarly, a non-first video frame refers to any frame of the video stream other than the first.
It can be understood that if the current video frame is the first video frame, the video stream has just started playing; that is, the current video frame is the first picture the user sees when the client plays the video stream, and no other video frame exists before it. Therefore, after the first video frame is acquired, a video frame sampling queue needs to be created, and the acquired video frame is stored in the queue for playback and display. If the current video frame is a non-first video frame, the client has already obtained several video frames of the video stream (i.e., historical video frames) from the streaming media server and stored them in the video frame sampling queue, so the queue does not need to be created again after the current video frame of the same video stream is acquired; the current video frame is simply stored into the video frame sampling queue of the video stream in order. The video frame sampling queue is used to store or cache the frame information of video frames.
Specifically, when the acquired current video frame is a non-first video frame, the frame information of the current video frame and the frame information of the historical video frames in the current video frame sampling queue are acquired. As described above, the historical video frames of the same video stream precede the current video frame in time sequence, and the frame information includes an encoding timestamp, a playing timestamp, a frame sequence number, and identification information.
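The per-frame record and the sampling queue described above can be sketched as follows. The field and variable names are illustrative assumptions, not identifiers from the patent, and the queue bound of 256 entries is an arbitrary example.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class FrameInfo:
    """Hypothetical frame-information record for one video frame."""
    encode_ts: float      # encoding timestamp written by the capture device
    play_ts: float        # playing timestamp (when the frame reaches the detector)
    seq: int              # frame sequence number
    mark: str = "smooth"  # identification information: "smooth" or "stutter"

# Bounded sampling queue caching frame information for recent frames.
sample_queue = deque(maxlen=256)
```

A bounded deque keeps memory constant while retaining enough history for the interval and degree calculations described later.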
Step S120: determine whether the current video frame and the historical video frame are consecutive video frames according to the frame-sequence-number interval between the frame sequence number of the current video frame and that of the historical video frame.
As can be seen from the foregoing description, the historical video frame of the same video stream precedes the current video frame in time sequence, so the frame sequence number of the current video frame differs from that of the historical video frame, and the frame sequence numbers of the frames of the same video stream obey a certain ordering relationship.
Specifically, the difference between the frame sequence number of the current video frame and that of the historical video frame is calculated to obtain the frame-sequence-number interval between them, and whether the current video frame and the historical video frame are consecutive video frames (two consecutive frames in the frame sequence) is determined from this interval.
Illustratively, the historical video frame is defined as the frame immediately preceding the current video frame; that is, the historical video frame and the current video frame are two adjacent video frames. The difference between the frame sequence number of the current video frame and that of the historical video frame is calculated to obtain the frame-sequence-number interval. If the interval is 1, the current video frame and the historical video frame are consecutive video frames; conversely, if the interval is not 1, they are not consecutive, and frame loss may have occurred between them (frame loss generally means that some video frames are lost during video streaming due to packet loss, signal attenuation, network congestion, and similar causes).
Similarly, in another embodiment, the historical video frame may be defined as the frame N positions before the current video frame, where N is a positive integer. The difference between the frame sequence number of the current video frame and that of the historical video frame is calculated to obtain a frame-sequence-number interval n, and the number m of video frames between the current video frame and the historical video frame is counted. If the interval matches the number of intervening video frames (n = m + 1), the current video frame and the historical video frame are consecutive video frames; otherwise, they are not.
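Both continuity checks above reduce to one comparison: with m frames counted between the two frames (m = 0 for the adjacent-frame case), the sequence-number interval n must equal m + 1. A minimal sketch:

```python
def is_continuous(cur_seq, hist_seq, frames_between=0):
    """Return True if the current and historical frames are consecutive.

    For an immediately preceding historical frame, frames_between is 0 and
    the interval must be exactly 1; in general, with m intervening frames,
    the interval n must satisfy n == m + 1.
    """
    n = cur_seq - hist_seq
    return n == frames_between + 1
```

For example, is_continuous(10, 9) holds, is_continuous(10, 7) does not (two frames were lost), and is_continuous(10, 7, frames_between=2) holds when both intervening frames were actually received.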
Step S130: in response to the current video frame and the historical video frame being consecutive video frames, calculate the encoding time difference between the encoding timestamp of the current video frame and that of the historical video frame, and the playing time difference between the playing timestamp of the current video frame and that of the historical video frame.
The encoding timestamp is the timestamp written by the device side when encoding the video frame.
The playing timestamp is the timestamp at which the video frame is sent to the stutter detection module. Because the client plays and displays a video frame right after it is sent to the stutter detection module, and the data processing time of the stutter detection module is very short, the timestamp at which the frame is sent to the stutter detection module is equivalent to the timestamp at which it is played, if the processing time of the stutter detection module is neglected.
It can be appreciated that if the current video frame and the historical video frame are consecutive video frames, no frame loss has occurred between them, so the two frames can be analyzed in the time dimension.
Specifically, the encoding time difference between the encoding timestamp of the current video frame and that of the historical video frame is calculated; it indicates the time gap between the two frames when the video was captured at the device side. In addition, the playing time difference between the playing timestamp of the current video frame and that of the historical video frame is calculated; it indicates the time gap between the two frames when the client plays and displays them. In general, if no stutter occurs, the encoding time difference equals the playing time difference, or the gap between them is very small. Therefore, a time-difference threshold for judging the gap between the playing time difference and the encoding time difference can be preset, and whether stutter occurs between the current video frame and the historical video frame can be determined by comparing the numerical comparison result between the playing time difference and the encoding time difference with this threshold.
Step S140: if the numerical comparison result between the playing time difference and the encoding time difference is greater than the preset time-difference threshold, mark the identification information of the current video frame as a stuttered video frame.
The identification information of a video frame may be used to mark whether the frame is in a stuttered state or a smooth state.
As can be seen from the above description, if the numerical comparison result between the playing time difference and the encoding time difference is greater than the preset time-difference threshold, stutter exists between the current video frame and the historical video frame, so the identification information of the current video frame is marked as a stuttered video frame and the frame is stored into the video frame sampling queue. If the numerical comparison result is less than or equal to the preset time-difference threshold, no stutter exists between the current video frame and the historical video frame, so the identification information of the current video frame is marked as a smooth video frame and the frame is stored into the video frame sampling queue.
Illustratively, the numerical comparison result between the play time difference and the code time difference is calculated, and can be determined by calculating the ratio result between the play time difference and the code time difference. Setting the time difference threshold to 1, comparing the playing time difference a with the encoding time difference b, if the ratio results And if the play time difference is larger than 1, indicating that the play time difference is larger than the coding time difference, and marking the identification information of the current video frame as a cartoon video frame.
It should be noted that the larger the ratio is, the longer the frame is delayed at playback relative to the time it took to encode, and the worse the fidelity and smoothness of playback, i.e. the more severe the stutter. Therefore, using this ratio as the criterion for judging whether a frame is stuck makes the judgment more accurate.
Similarly, in another embodiment, the numerical comparison result between the play time difference and the encoding time difference may be determined by calculating the difference between the two. Setting the time difference threshold to 0, the encoding time difference b is subtracted from the play time difference a; if the difference a-b is greater than 0, this indicates that the play time difference is larger than the encoding time difference, and the identification information of the current video frame is marked as a stuck video frame.
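Putting the two comparison schemes above together, a minimal Python sketch of the per-frame check follows. The function name, the dict layout, the millisecond unit and the example values are illustrative assumptions, not taken from the patent:

```python
def is_stuck(prev, cur, ratio_threshold=1.0):
    """Judge whether `cur` should be marked stuck relative to `prev`.

    `prev` and `cur` are dicts with 'enc_ts' (encoding time stamp) and
    'play_ts' (play time stamp), both assumed to be in milliseconds.
    """
    enc_diff = cur["enc_ts"] - prev["enc_ts"]     # time gap at the equipment end
    play_diff = cur["play_ts"] - prev["play_ts"]  # time gap at the client
    if enc_diff <= 0:
        return False  # malformed timestamps: treat as fluent
    # ratio scheme: a/b > 1  (equivalent to the difference scheme a - b > 0)
    return play_diff / enc_diff > ratio_threshold

prev = {"enc_ts": 0, "play_ts": 1000}
on_time = {"enc_ts": 40, "play_ts": 1040}   # 25 fps pacing preserved: fluent
delayed = {"enc_ts": 40, "play_ts": 1120}   # played 80 ms late: stuck
```

With these sample frames, `is_stuck(prev, on_time)` is false (ratio exactly 1) while `is_stuck(prev, delayed)` is true (ratio 3).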
It can be seen that the present application judges whether the current video frame and the historical video frame are continuous video frames by acquiring the frame sequence number of the current video frame and the frame sequence number of the historical video frame in the video frame sampling queue; for a continuous pair of current and historical video frames, it calculates the encoding time difference and the play time difference between them; if the numerical comparison result between the play time difference and the encoding time difference is greater than the preset time difference threshold, the gap between the encoding time difference and the play time difference is too large and a stutter may exist, so the identification information of the current video frame is marked as a stuck video frame. Video stutter detection can thus be achieved, and its efficiency improved.
On the basis of the above embodiments, the present embodiment describes steps after determining whether or not a current video frame and a historical video frame are consecutive video frames according to a frame number interval between the frame number of the current video frame and the frame number of the historical video frame. Specifically, the method of the embodiment comprises the following steps:
in response to the current video frame and the historical video frame being discontinuous video frames, adjusting the encoding time difference between the current video frame and the historical video frame to obtain an adjusted encoding time difference, wherein the numerical comparison result between the play time difference and the adjusted encoding time difference is greater than the time difference threshold; and marking the identification information of the current video frame as a stuck video frame to obtain the frame information of the marked current video frame.
In the foregoing description, if the current video frame and the historical video frame are discontinuous video frames, this indicates that frame loss has occurred between them, so a stutter can also be considered to have occurred between the current video frame and the historical video frame, and the identification information of the current video frame needs to be marked as a stuck video frame.
Specifically, in response to the current video frame and the historical video frame being discontinuous video frames, the encoding time difference between them is adjusted to obtain an adjusted encoding time difference, such that the numerical comparison result between the play time difference and the adjusted encoding time difference is greater than the time difference threshold, representing that a stutter exists between the current video frame and the historical video frame; the identification information of the current video frame is then marked as a stuck video frame to obtain the frame information of the marked current video frame, which is stored into the video frame sampling queue.
Illustratively, taking the ratio of the play time difference a to the adjusted encoding time difference c as the numerical comparison result: if the current video frame and the historical video frame are discontinuous video frames, the original encoding time difference b is adjusted to obtain the adjusted encoding time difference c, such that the ratio a/c is greater than 1, so that the current video frame is necessarily marked as a stuck video frame.
In another executable implementation, if the current video frame and the historical video frame are discontinuous video frames, the identification information of the current video frame is directly marked as a stuck video frame to obtain the frame information of the marked current video frame, which is then stored into the video frame sampling queue, without performing the time difference calculation and numerical comparison.
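This branch can be sketched as follows: when the sequence numbers reveal frame loss, the frame is marked stuck directly (the second implementation above), bypassing the time-difference math. All names and the example values are illustrative assumptions:

```python
def mark_frame(prev, cur, ratio_threshold=1.0):
    """Return 'stuck' or 'fluent' for `cur`.

    Frames are dicts carrying 'seq' (frame sequence number),
    'enc_ts' and 'play_ts' (timestamps, assumed in milliseconds).
    """
    if cur["seq"] - prev["seq"] != 1:
        # discontinuous sequence numbers -> frame loss -> mark stuck directly
        return "stuck"
    enc_diff = cur["enc_ts"] - prev["enc_ts"]
    play_diff = cur["play_ts"] - prev["play_ts"]
    return "stuck" if enc_diff > 0 and play_diff / enc_diff > ratio_threshold else "fluent"

prev = {"seq": 7, "enc_ts": 280, "play_ts": 1280}
lost = {"seq": 9, "enc_ts": 360, "play_ts": 1360}  # seq 8 missing: stuck
ok = {"seq": 8, "enc_ts": 320, "play_ts": 1320}    # continuous and on time: fluent
```

Here `mark_frame(prev, lost)` returns "stuck" without touching the timestamps, while `mark_frame(prev, ok)` falls through to the ratio check and returns "fluent".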
On the basis of the above embodiments, the present embodiment describes steps after calculating the encoding time difference between the encoding time stamp of the current video frame and the encoding time stamp of the historical video frame, and the play time difference between the play time stamp of the current video frame and the play time stamp of the historical video frame. Specifically, the method of the embodiment comprises the following steps:
Comparing the numerical comparison result between the play time difference and the encoding time difference with the time difference threshold; if the numerical comparison result is less than or equal to the time difference threshold, marking the identification information of the current video frame as a fluent video frame to obtain the frame information of the marked current video frame; and storing the frame information of the marked current video frame into the video frame sampling queue.
In the foregoing description, after the encoding time difference and the play time difference between the current video frame and the historical video frame are calculated, if the numerical comparison result between the play time difference and the encoding time difference is less than or equal to the time difference threshold, it is indicated that no stutter exists between the current video frame and the historical video frame, so the identification information of the current video frame is marked as a fluent video frame, and the marked frame information is then stored into the video frame sampling queue.
Illustratively, taking the ratio of the play time difference a to the encoding time difference b as the numerical comparison result, and setting the time difference threshold to 1: the play time difference a is compared with the encoding time difference b, and if the ratio a/b is less than or equal to 1, this indicates that the play time difference is less than or equal to the encoding time difference, and the identification information of the current video frame is marked as a fluent video frame. It can be understood that if the play time difference is equal to the encoding time difference, playback of the video frames follows the same time sequence as their encoding, so no stutter exists; if the play time difference is smaller than the encoding time difference, the video frame may occur in a speed-up playback or skip playback scene (fast-forwarding or rewinding to a certain time node of the media file, skipping the intermediate video content); if the video frame can be played normally, this also indicates that no stutter exists.
On the basis of the above embodiments, the embodiments of the present application describe other steps that may be implemented in the method. Specifically, the method of the embodiment comprises the following steps:
in response to the acquired current video frame being the first video frame, creating a video frame sampling queue; marking the identification information of the current video frame as a fluent video frame to obtain the frame information of the marked current video frame; and storing the frame information of the marked current video frame into the video frame sampling queue.
As described in connection with the foregoing embodiments, if the current video frame is the first video frame of the video stream, the video frame sampling queue for that stream has not yet been created in the client. Therefore, in response to the acquired current video frame being the first video frame, a video frame sampling queue is first created; there is no need to judge whether the first video frame of the video stream is stuck, so the identification information of the current video frame is marked as a fluent video frame, and its frame information is then stored into the video frame sampling queue. It will be appreciated that after a number of subsequent video frames are acquired, the frame information of this frame becomes frame information of a historical video frame in the video frame sampling queue.
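A minimal sketch of this first-frame branch, with the sampling queue kept per stream under a hypothetical stream id (all names are assumptions):

```python
queues = {}  # stream_id -> list of frame_info dicts (the sampling queue)

def accept_frame(stream_id, frame_info):
    """Store `frame_info`; the first frame creates the queue and is marked fluent."""
    queue = queues.get(stream_id)
    if queue is None:
        queues[stream_id] = queue = []  # first frame of this stream: create queue
        frame_info["mark"] = "fluent"   # no history to compare against
    queue.append(frame_info)
    return frame_info.get("mark")       # later frames are marked by the detector

first = {"seq": 1, "enc_ts": 0, "play_ts": 0}
```

Calling `accept_frame("cam-1", first)` creates the queue and returns "fluent"; a second frame is simply enqueued and returns no mark, leaving the stuck/fluent judgment to the comparison steps above.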
On the basis of the above embodiments, the present embodiment describes steps after marking the identification information of the current video frame as a stuck video frame. Specifically, the method of the embodiment comprises the following steps:
storing the frame information of the marked current video frame into the video frame sampling queue, wherein the frame information in the video frame sampling queue comprises stuck video frames and/or fluent video frames; in response to the number of stuck video frames stored in the video frame sampling queue being greater than a preset stuck-frame-count threshold, determining the stuck interval corresponding to the stuck video frames in the video frame sampling queue based on the frame information of the stuck video frames in the video frame sampling queue; obtaining the degree of stutter of the stuck interval; and if the degree of stutter of the stuck interval is greater than a preset stutter threshold, generating video stutter information for notifying the target object of the video stutter.
It should be noted that in some video playback scenarios, to mitigate the effect of video stutter, the definition during playback, such as standard definition, high definition, ultra-high definition and other video quality levels, is generally adjusted adaptively. If the video definition were switched immediately whenever a stuck video frame is detected, the user experience would be poor.
Therefore, this embodiment distinguishes severe stutter from slight stutter by evaluating the degree of stutter of the stuck interval, so as to optimize the user's experience when watching the video.
Specifically, after the identification information of the current video frame is marked as a stuck video frame, its frame information is stored into the video frame sampling queue, and the stuck video frames stored in the queue are counted to obtain the number of stuck video frames. If the number of stuck video frames stored in the video frame sampling queue is greater than the preset stuck-frame-count threshold, the stuck interval corresponding to the stuck video frames in the current video frame sampling queue is determined based on their frame information; the stuck interval may contain stuck video frames and/or fluent video frames. The number of stuck video frames in the stuck interval and the total number of all video frames (stuck video frames plus fluent video frames) in the stuck interval are then acquired. If the numerical comparison result (the degree of stutter) between the number of stuck video frames in the stuck interval and the total number of all video frames in the stuck interval is greater than the preset stutter threshold, the degree of stutter in the interval is high, and video stutter information is accordingly generated and displayed to notify the target object of the video stutter. In addition, the video definition during playback can be adjusted correspondingly.
Similarly, if the numerical comparison result (the degree of stutter) between the number of stuck video frames in the stuck interval and the total number of all video frames in the stuck interval is less than or equal to the stutter threshold, the concentration of stuck video frames in the interval is low; the stutter may have been caused by momentary network fluctuation at some instant, so there is no need to generate video stutter information or to adjust the video definition during playback.
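The notification decision can be sketched as follows, representing the stuck interval simply as a list of per-frame marks (the threshold values and all names are illustrative assumptions):

```python
def stutter_degree(interval_marks):
    """Ratio of stuck frames to all frames in the stuck interval."""
    stuck = sum(1 for m in interval_marks if m == "stuck")
    return stuck / len(interval_marks)

def should_notify(interval_marks, degree_threshold=0.5):
    # Notify (and possibly lower the playback definition) only for concentrated
    # stutter, filtering out isolated frames caused by momentary network jitter.
    return stutter_degree(interval_marks) > degree_threshold

severe = ["stuck"] * 6 + ["fluent"] * 4  # degree 0.6: sustained stutter
mild = ["stuck"] * 2 + ["fluent"] * 8    # degree 0.2: momentary fluctuation
```

With the assumed threshold of 0.5, `should_notify(severe)` is true and `should_notify(mild)` is false, matching the severe/slight distinction described above.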
On the basis of the above embodiments, the embodiments of the present application describe the step of obtaining the degree of stutter of the stuck interval. Specifically, the method of the embodiment comprises the following steps:
acquiring the number of stuck video frames in the stuck interval and the total number of video frames in the stuck interval; and determining the degree of stutter of the stuck interval based on the numerical comparison result between the number of stuck video frames in the stuck interval and the total number of video frames in the stuck interval.
For the method of calculating the degree of stutter, refer to the description of the numerical comparison result in the foregoing embodiments; it includes, but is not limited to, calculating the ratio between the number of stuck video frames in the stuck interval and the total number of video frames in the stuck interval, or calculating the difference between the two.
Taking the ratio between the number of stuck video frames in the stuck interval and the total number of video frames in the stuck interval as an example: if this ratio is greater than the preset stutter threshold, the concentration of stuck video frames in the stuck interval is high, and video stutter information is correspondingly generated and displayed to notify the target object of the video stutter. Other implementations can be described similarly with reference to the foregoing embodiments and are not repeated here.
As can be seen from the above, this embodiment reflects the degree of stutter in the stuck interval by calculating the numerical comparison result between the number of stuck video frames in the interval and the total number of all video frames in the interval; by comparing this result with the stutter threshold, stutter caused by momentary factors can be better filtered out, so that the user is prompted with video stutter information dynamically.
On the basis of the above embodiments, the embodiments of the present application describe the step of determining the stuck interval corresponding to the stuck video frames in the video frame sampling queue based on the frame information of the stuck video frames in the video frame sampling queue. Specifically, the method of the embodiment comprises the following steps:
taking the stuck video frame corresponding to the current video frame in the video frame sampling queue as the right boundary of the stuck interval; searching the video frame sampling queue for the stuck video frame that is a preset number of stuck video frames away from the right boundary, as the left boundary of the stuck interval; and determining the stuck interval based on the left and right boundaries.
In the process of determining the stuck interval, the most recently obtained stuck video frame, i.e. the current video frame marked as a stuck video frame in the video frame sampling queue, serves as the right boundary of the stuck interval; accordingly, each time a current video frame is marked as a stuck video frame and its frame information is stored into the video frame sampling queue, the right boundary of the stuck interval is updated.
Further, starting from the right boundary, the queue is searched backward in time order for the stuck video frame that is a preset number of stuck video frames away from the right boundary, which serves as the left boundary of the stuck interval; the preset number may be N (N > 1, N a positive integer). The range of the stuck interval is thus determined by the right and left boundaries.
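The boundary search can be sketched as follows, with the queue represented as a list of per-frame marks in time order and N the assumed preset count (names and values are illustrative assumptions):

```python
def stuck_interval(queue_marks, n=3):
    """Return (left, right) queue indices of the stuck interval, or None.

    The right boundary is the newest stuck frame; the left boundary is the
    stuck frame that lies n stuck frames earlier in the queue.
    """
    stuck_idx = [i for i, m in enumerate(queue_marks) if m == "stuck"]
    if len(stuck_idx) <= n:
        return None              # not enough stuck frames for an interval yet
    right = stuck_idx[-1]        # updated each time a stuck frame arrives
    left = stuck_idx[-1 - n]     # search backward n stuck frames from the right
    return left, right

marks = ["fluent", "stuck", "fluent", "stuck", "stuck", "fluent", "stuck"]
```

For this queue, `stuck_interval(marks, n=2)` returns `(3, 6)`: the newest stuck frame at index 6 is the right boundary, and the stuck frame two stuck frames earlier, at index 3, is the left boundary.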
Fig. 3 is a block diagram of a video clip detection system according to an exemplary embodiment of the present application. As shown in fig. 3, the exemplary video clip detection system at least includes a streaming media server and a client. Specifically, the client 310 includes:
The signaling processing module 311 is configured to interact with the streaming media server; the interaction includes sending a data pull request to the streaming media server, so that the streaming media server sends video stream data to the client after receiving the data pull request.
The data receiving module 312 is configured to receive video streaming data sent by the streaming media server.
A decoding module 313, configured to decode the received video stream data into video frames in a target format.
The stuck detection module 314 is configured to: in response to the acquired current video frame being a non-first video frame, acquire frame information of the current video frame and frame information of a historical video frame in the video frame sampling queue, wherein the time sequence of the historical video frame is earlier than the time sequence of the current video frame, and the frame information comprises an encoding time stamp, a play time stamp, a frame sequence number and identification information; judge whether the current video frame and the historical video frame are continuous video frames according to the frame sequence number interval between the frame sequence number of the current video frame and the frame sequence number of the historical video frame; in response to the current video frame and the historical video frame being continuous video frames, calculate the encoding time difference between the encoding time stamp of the current video frame and the encoding time stamp of the historical video frame, and the play time difference between the play time stamp of the current video frame and the play time stamp of the historical video frame; and if the numerical comparison result between the play time difference and the encoding time difference is greater than a preset time difference threshold, mark the identification information of the current video frame as a stuck video frame.
The streaming server 320 includes:
the stream forwarding service module 321 is configured to send video stream data to a data receiving module of the client.
The signaling service module 322 is configured to interact with the signaling processing module of the client, where the interaction process includes receiving a data pull request sent by the signaling processing module of the client, processing the data pull request, and so on.
It should be noted that if the video stream is stored on the streaming media server, the streaming media server may send the video stream data to the client directly from local storage after receiving the client's data pull request. If the streaming media server serves a video surveillance or live-broadcast scene, as shown in fig. 2, it may also communicate with the equipment end: the equipment end is responsible for capturing and encoding the video stream data, and the streaming media server either pulls the stream from the equipment end, or the equipment end pushes the stream to the streaming media server.
In the exemplary video clip detection system, whether the current video frame and the historical video frame are continuous video frames is judged by acquiring the frame sequence number of the current video frame and the frame sequence number of the historical video frame in the video frame sampling queue; for a continuous pair of current and historical video frames, the encoding time difference and the play time difference between them are calculated; if the numerical comparison result between the play time difference and the encoding time difference is greater than the preset time difference threshold, the gap between the encoding time difference and the play time difference is too large and a stutter may exist, so the identification information of the current video frame is marked as a stuck video frame. Video stutter detection can thus be achieved, and its efficiency improved.
The functions of each module may be referred to an embodiment of a video clip detection method, which is not described herein.
It should be further noted that the execution subject of the video clip detection method may be a video clip detection apparatus. For example, the method may be performed by a terminal device, a server or another processing device, where the terminal device may be a User Equipment (UE), a computer, a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the video clip detection method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Fig. 4 is a block diagram of a video clip detection apparatus according to an exemplary embodiment of the present application. As shown in fig. 4, the exemplary video clip detection apparatus 400 includes: an acquisition module 410, a continuity judging module 420, a calculation module 430, and a stuck marking module 440. Specifically:
the obtaining module 410 is configured to obtain, in response to the obtained current video frame being a non-first frame video frame, frame information of the current video frame and frame information of a historical video frame in a video frame sampling queue, where a time sequence of the historical video frame is earlier than a time sequence of the current video frame, where the frame information includes an encoding time stamp, a playing time stamp, a frame sequence number, and identification information.
The continuity judging module 420 is configured to judge whether the current video frame and the historical video frame are continuous video frames according to the frame sequence number interval between the frame sequence number of the current video frame and the frame sequence number of the historical video frame.
The calculating module 430 is configured to calculate, in response to the current video frame and the historical video frame being consecutive video frames, an encoding time difference between an encoding time stamp of the current video frame and an encoding time stamp of the historical video frame, and a playing time difference between a playing time stamp of the current video frame and a playing time stamp of the historical video frame.
The stuck marking module 440 is configured to mark the identification information of the current video frame as a stuck video frame if the numerical comparison result between the play time difference and the encoding time difference is greater than a preset time difference threshold.
In the exemplary video clip detection apparatus, whether the current video frame and the historical video frame are continuous video frames is judged by acquiring the frame sequence number of the current video frame and the frame sequence number of the historical video frame in the video frame sampling queue; for a continuous pair of current and historical video frames, the encoding time difference and the play time difference between them are calculated; if the numerical comparison result between the play time difference and the encoding time difference is greater than the preset time difference threshold, the gap between the encoding time difference and the play time difference is too large and a stutter may exist, so the identification information of the current video frame is marked as a stuck video frame. Video stutter detection can thus be achieved, and its efficiency improved.
The functions of each module may be referred to an embodiment of a video clip detection method, which is not described herein.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of an electronic device of the present application. The electronic device 500 comprises a memory 501 and a processor 502, the processor 502 being configured to execute program instructions stored in the memory 501 to implement the steps of any of the embodiments of the video clip detection method described above. In one particular implementation scenario, the electronic device 500 may include, but is not limited to, mobile devices such as a notebook computer and a tablet computer, which is not limited herein.
In particular, the processor 502 is configured to control itself and the memory 501 to implement the steps of any of the video clip detection method embodiments described above. The processor 502 may also be referred to as a CPU (Central Processing Unit). The processor 502 may be an integrated circuit chip with signal processing capabilities. The processor 502 may also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 502 may be implemented jointly by a plurality of integrated circuit chips.
According to the above scheme, whether the current video frame and the historical video frame are continuous video frames is judged by acquiring the frame sequence number of the current video frame and the frame sequence number of the historical video frame in the video frame sampling queue; for a continuous pair of current and historical video frames, the encoding time difference and the play time difference between them are calculated; if the numerical comparison result between the play time difference and the encoding time difference is greater than the preset time difference threshold, the gap between the encoding time difference and the play time difference is too large and a stutter may exist, so the identification information of the current video frame is marked as a stuck video frame. Video stutter detection can thus be achieved, and its efficiency improved.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a computer readable storage medium of the present application. The computer readable storage medium 610 stores program instructions 611 executable by a processor, the program instructions 611 for implementing the steps of any of the video clip detection method embodiments described above.
According to the above scheme, whether the current video frame and the historical video frame are continuous video frames is judged by acquiring the frame sequence number of the current video frame and the frame sequence number of the historical video frame in the video frame sampling queue; for a continuous pair of current and historical video frames, the encoding time difference and the play time difference between them are calculated; if the numerical comparison result between the play time difference and the encoding time difference is greater than the preset time difference threshold, the gap between the encoding time difference and the play time difference is too large and a stutter may exist, so the identification information of the current video frame is marked as a stuck video frame. Video stutter detection can thus be achieved, and its efficiency improved.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing description of the embodiments focuses on the differences between them; for parts that are the same or similar, the embodiments may be referred to each other, which is not repeated herein for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units. The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all or part of the technical solution contributing to the prior art or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.

Claims (10)

1. A video stutter detection method, the method comprising:
in response to the acquired current video frame being a non-first video frame, acquiring frame information of the current video frame and frame information of a historical video frame in a video frame sampling queue, wherein the historical video frame precedes the current video frame in time sequence, and the frame information comprises an encoding timestamp, a playing timestamp, a frame sequence number, and identification information;
determining whether the current video frame and the historical video frame are consecutive video frames according to the frame sequence number interval between the frame sequence number of the current video frame and the frame sequence number of the historical video frame;
in response to the current video frame and the historical video frame being consecutive video frames, calculating an encoding time difference between the encoding timestamp of the current video frame and the encoding timestamp of the historical video frame, and a playing time difference between the playing timestamp of the current video frame and the playing timestamp of the historical video frame;
and if the numerical comparison result between the playing time difference and the encoding time difference is larger than a preset time difference threshold, marking the identification information of the current video frame as a stutter video frame.
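The per-frame check of claim 1 can be sketched as follows. This is a minimal illustration, not the claimed implementation: the `Frame` class, the name `is_stutter`, the 0.2-second threshold, and the reading of the "numerical comparison result" as a simple difference are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    seq: int                # frame sequence number
    dts: float              # encoding timestamp, in seconds
    pts: float              # playing (presentation) timestamp, in seconds
    stutter: bool = False   # identification information

def is_stutter(cur: Frame, prev: Frame, threshold: float = 0.2) -> bool:
    """Mark the current frame as a stutter frame when the playback
    interval exceeds the encoding interval by more than the threshold."""
    if cur.seq - prev.seq != 1:   # non-consecutive frames are handled separately (claim 2)
        return False
    encoding_diff = cur.dts - prev.dts
    playing_diff = cur.pts - prev.pts
    return (playing_diff - encoding_diff) > threshold

prev = Frame(seq=1, dts=0.00, pts=0.00)
cur = Frame(seq=2, dts=0.04, pts=0.50)   # shown 0.5 s later despite a 0.04 s encode gap
cur.stutter = is_stutter(cur, prev)
print(cur.stutter)  # True
```

The intuition: the encoding-timestamp interval reflects the pace at which frames were produced, the playing-timestamp interval the pace at which they were actually presented, so a playback gap that outgrows the encoding gap by more than the threshold indicates stutter.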
2. The method of claim 1, wherein after the step of determining whether the current video frame and the historical video frame are consecutive video frames according to the frame sequence number interval between the frame sequence number of the current video frame and the frame sequence number of the historical video frame, the method further comprises:
in response to the current video frame and the historical video frame being non-consecutive video frames, adjusting the encoding time difference between the current video frame and the historical video frame to obtain an adjusted encoding time difference, wherein the numerical comparison result between the adjusted encoding time difference and the playing time difference is larger than the time difference threshold;
and marking the identification information of the current video frame as a stutter video frame to obtain marked frame information of the current video frame.
3. The method of claim 1, wherein after the step of calculating the encoding time difference between the encoding timestamp of the current video frame and the encoding timestamp of the historical video frame, and the playing time difference between the playing timestamp of the current video frame and the playing timestamp of the historical video frame, the method further comprises:
comparing the numerical comparison result between the playing time difference and the encoding time difference with the time difference threshold;
if the numerical comparison result between the playing time difference and the encoding time difference is smaller than or equal to the time difference threshold, marking the identification information of the current video frame as a smooth video frame to obtain marked frame information of the current video frame;
and storing the marked frame information of the current video frame into the video frame sampling queue.
4. The method of claim 1, wherein the method further comprises:
in response to the acquired current video frame being the first video frame, creating the video frame sampling queue;
marking the identification information of the current video frame as a smooth video frame to obtain marked frame information of the current video frame;
and storing the marked frame information of the current video frame into the video frame sampling queue.
5. The method of claim 2, wherein after the step of marking the identification information of the current video frame as a stutter video frame to obtain the marked frame information of the current video frame, the method further comprises:
storing the marked frame information of the current video frame into the video frame sampling queue, wherein the frame information in the video frame sampling queue comprises stutter video frames and/or smooth video frames;
in response to the number of stutter video frames stored in the video frame sampling queue being larger than a preset stutter frame number threshold, determining a stutter interval corresponding to the stutter video frames in the video frame sampling queue based on the frame information of the stutter video frames in the video frame sampling queue;
obtaining the stutter degree of the stutter interval;
and if the stutter degree of the stutter interval is larger than a preset stutter threshold, generating video stutter information, wherein the video stutter information is used for notifying a target object of the video stutter.
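The orchestration in claim 5 can be compressed into one function: once the sampling queue holds more stutter frames than a count threshold, compute a stutter degree and emit an alert when it exceeds a degree threshold. A self-contained sketch under assumptions: the queue is reduced to per-frame stutter flags, and the threshold values and the name `check_alert` are illustrative, not taken from the claims.

```python
def check_alert(sample_queue, count_threshold=10, degree_threshold=0.5):
    """sample_queue: per-frame stutter flags (True = stutter video frame),
    oldest first. Returns alert info, or None when playback is acceptable."""
    stutter_count = sum(sample_queue)
    if stutter_count <= count_threshold:
        return None                     # too few stutter frames to bother
    degree = stutter_count / len(sample_queue)
    if degree > degree_threshold:
        return f"video stutter: {degree:.0%} of sampled frames stuttered"
    return None

print(check_alert([True] * 12 + [False] * 8))   # 12 of 20 frames stuttered
```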
6. The method of claim 5, wherein the step of obtaining the stutter degree of the stutter interval comprises:
acquiring the number of stutter video frames in the stutter interval and the total number of video frames in the stutter interval;
and determining the stutter degree of the stutter interval based on a numerical comparison result of the number of stutter video frames in the stutter interval and the total number of video frames in the stutter interval.
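One plausible reading of the "numerical comparison result" in claim 6 is the ratio of stutter frames to all frames in the interval; the claims do not fix the exact form, so the sketch below is an assumption.

```python
def stutter_degree(stutter_flags):
    """Fraction of frames in the interval marked as stutter frames.
    stutter_flags: one boolean per video frame in the stutter interval."""
    total = len(stutter_flags)
    return sum(stutter_flags) / total if total else 0.0

print(stutter_degree([True, True, False, True, False]))  # 0.6
```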
7. The method of claim 5, wherein the step of determining the stutter interval corresponding to the stutter video frames in the video frame sampling queue based on the frame information of the stutter video frames in the video frame sampling queue comprises:
taking the stutter video frame corresponding to the current video frame in the video frame sampling queue as a right boundary of the stutter interval;
searching the video frame sampling queue for a stutter video frame that is separated from the right boundary by a preset number of stutter video frames, as a left boundary of the stutter interval;
and determining the stutter interval based on the left and right boundaries.
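The boundary search of claim 7 can be sketched as follows: the newest stutter frame in the queue is the right boundary, and the stutter frame a preset number of stutter frames earlier is the left boundary. The function name, the `window` parameter, and clamping the left boundary to the oldest stutter frame when the queue is short are all illustrative assumptions.

```python
def stutter_interval(queue, window=5):
    """queue: list of (frame sequence number, is_stutter) pairs, oldest
    first. Returns the (left, right) frame sequence numbers bounding the
    stutter interval, or None if the queue holds no stutter frames."""
    stutter_idx = [i for i, (_, s) in enumerate(queue) if s]
    if not stutter_idx:
        return None
    right = stutter_idx[-1]                               # newest stutter frame
    left = stutter_idx[max(0, len(stutter_idx) - 1 - window)]  # `window` stutter frames back
    return queue[left][0], queue[right][0]

q = [(1, False), (2, True), (3, True), (4, False), (5, True)]
print(stutter_interval(q, window=2))  # (2, 5)
```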
8. A video stutter detection system, the system comprising a streaming media server and a client, the client comprising:
a signaling processing module, configured to interact with the streaming media server, wherein the interaction process comprises sending a stream request to the streaming media server, so that the streaming media server sends video stream data to the client after receiving the stream request;
a data receiving module, configured to receive the video stream data sent by the streaming media server;
a decoding module, configured to decode the received video stream data into video frames of a target format;
and a stutter detection module, configured to: in response to the acquired current video frame being a non-first video frame, acquire frame information of the current video frame and frame information of a historical video frame in a video frame sampling queue, wherein the historical video frame precedes the current video frame in time sequence, and the frame information comprises an encoding timestamp, a playing timestamp, a frame sequence number, and identification information; determine whether the current video frame and the historical video frame are consecutive video frames according to the frame sequence number interval between the frame sequence number of the current video frame and the frame sequence number of the historical video frame; in response to the current video frame and the historical video frame being consecutive video frames, calculate an encoding time difference between the encoding timestamp of the current video frame and the encoding timestamp of the historical video frame, and a playing time difference between the playing timestamp of the current video frame and the playing timestamp of the historical video frame; and if the numerical comparison result between the playing time difference and the encoding time difference is larger than a preset time difference threshold, mark the identification information of the current video frame as a stutter video frame.
9. An electronic device comprising a memory and a processor for executing program instructions stored in the memory to implement the method of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the method of any of claims 1 to 7.
CN202311295365.7A 2023-10-08 2023-10-08 Video clip detection method, system, equipment and storage medium Pending CN117596444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311295365.7A CN117596444A (en) 2023-10-08 2023-10-08 Video clip detection method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117596444A true CN117596444A (en) 2024-02-23

Family

ID=89910386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311295365.7A Pending CN117596444A (en) 2023-10-08 2023-10-08 Video clip detection method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117596444A (en)

Similar Documents

Publication Publication Date Title
EP3253064A1 (en) Frame loss method for video frame and video sending apparatus
CN109660879B (en) Live broadcast frame loss method, system, computer equipment and storage medium
EP1592251B1 (en) Ticker processing in video sequences
US20130100152A1 (en) Method and apparatus for processing image display
CN113099272A (en) Video processing method and device, electronic equipment and storage medium
CN112019873A (en) Video code rate adjusting method and device and electronic equipment
US10075670B2 (en) Profile for frame rate conversion
CN113490055A (en) Data processing method and device
CN114095722A (en) Definition determining method, device and equipment
CN112948627B (en) Alarm video generation method, display method and device
CN110300326B (en) Video jamming detection method and device, electronic equipment and storage medium
CN117596444A (en) Video clip detection method, system, equipment and storage medium
JP4620516B2 (en) Image comparison method, image comparison system, and program
US8903223B1 (en) Video driver over a network
CN113271496B (en) Video smooth playing method and system in network live broadcast and readable storage medium
CN115348409A (en) Video data processing method and device, terminal equipment and storage medium
CN113055744B (en) Video decoding method and device
CN114449344A (en) Video stream transmission method and device, electronic equipment and storage medium
CN112087635A (en) Image coding control method, device, equipment and computer readable storage medium
CN113573142A (en) Resolution adjustment method and device
CN112822552A (en) Multimedia resource loading method, device, equipment and computer storage medium
CN110602507A (en) Frame loss processing method, device and system
EP4038892B1 (en) Methods, systems, and media for streaming video content using adaptive buffers
CN113596556B (en) Video transmission method, server and storage medium
CN102523513A (en) Implementation method for accurately obtaining images of original video file on basis of video player

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination