CN112511821A - Video jamming detection method and device and storage medium


Info

Publication number
CN112511821A
CN112511821A
Authority
CN
China
Prior art keywords
video
video frames
frame
adjacent
frame difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110144611.3A
Other languages
Chinese (zh)
Other versions
CN112511821B (en)
Inventor
余冠东
易高雄
吴庆波
龚桂良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110144611.3A
Publication of CN112511821A
Application granted
Publication of CN112511821B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N17/004: Diagnosis, testing or measuring for television systems or their details for digital television systems

Abstract

A video stuck detection method, an apparatus and a storage medium are provided. The method includes the following steps: acquiring a plurality of video frames to be detected from a video to be detected; performing difference processing on adjacent video frames among the plurality of video frames to be detected to obtain a pixel intensity map of the frame difference of the adjacent video frames; performing binarization processing on the pixel intensity map of the frame difference of the adjacent video frames to obtain a binarized image of the frame difference of the adjacent video frames; and performing video stuck detection according to the binarized image of the frame difference of the adjacent video frames, thereby helping to realize simple and effective stuck detection.

Description

Video jamming detection method and device and storage medium
Technical Field
The present invention relates to the field of video image processing, and more particularly, to a video stuck detection method, apparatus and storage medium.
Background
The rapid development of multimedia technology keeps raising the requirements on the user experience of multimedia files (such as videos and advertisements), and the fluency of video playback is of particular concern to users. Video stuck detection in the related art mainly applies neural networks. Deep learning shows excellent performance when data is sufficient, but the performance of many deep learning models depends on huge amounts of training data, which limits their use in practical application scenarios; in particular, normally playing video frames with small picture motion amplitude and slow change are easily confused with stuck frames. How to realize simple and effective video stuck detection is therefore an urgent problem to be solved.
Disclosure of Invention
The application provides a video stuck detection method, a video stuck detection device and a storage medium, which can realize simple and effective video stuck detection.
In a first aspect, a video stuck detection method is provided, including: acquiring a plurality of video frames to be detected from a video to be detected; performing difference processing on adjacent video frames among the plurality of video frames to be detected to obtain a pixel intensity map of the frame difference of the adjacent video frames; performing binarization processing on the pixel intensity map of the frame difference of the adjacent video frames to obtain a binarized image of the frame difference of the adjacent video frames; and performing video stuck detection according to the binarized image of the frame difference of the adjacent video frames.
In a second aspect, a video stuck detection apparatus is provided, including:
the acquisition module is used for acquiring a plurality of video frames to be detected in a video to be detected;
the frame difference processing module is used for performing difference processing on adjacent video frames in the plurality of video frames to be detected to obtain a pixel intensity map of the frame difference of the adjacent video frames;
the binarization processing module is used for performing binarization processing on the pixel intensity map of the frame difference of the adjacent video frames to obtain a binarized image of the frame difference of the adjacent video frames;
and the stuck detection module is used for performing video stuck detection according to the binarized image of the frame difference of the adjacent video frames.
In a third aspect, a video stuck detection apparatus is provided, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the video stuck detection method described above via execution of executable instructions.
In a fourth aspect, a storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the video stuck detection method described above.
Based on the above technical scheme, binarization processing is performed on the pixel intensity maps of the frame differences of adjacent video frames among the video frames to be detected, so that normally playing video frames whose picture changes slowly with small motion amplitude can be effectively distinguished from stuck frames. Smoothly playing frames and stuck frames are thus separated more distinctly, which improves the accuracy of video stuck detection. Moreover, the method does not depend on a neural network, avoids the huge computation of model training and the like, and reduces the difficulty of algorithm implementation.
Drawings
Fig. 1 is a schematic flowchart of a video stuck detection method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of the transition from a normal picture to a white screen.
Fig. 3 is a schematic diagram of the transition from a white screen to a normal picture.
Fig. 4 (a) and (b) are two image frames whose picture content changes slowly and with small amplitude.
Fig. 4 (c) is a pixel intensity map of the frame difference between the two frame images of fig. 4 (a) and (b).
Fig. 4 (d) is an image obtained by binarizing the pixel intensity map of the frame difference in fig. 4 (c).
Fig. 5 (a) and (b) are two stuck frames.
Fig. 5 (c) is a pixel intensity map of the frame difference between the two frame images of fig. 5 (a) and (b).
Fig. 5 (d) is an image obtained by binarizing the pixel intensity map of the frame difference in fig. 5 (c).
Fig. 6 is a schematic flow chart of a video stuck detection method according to an exemplary embodiment of the present application.
Fig. 7 (a) is a schematic diagram of pixel intensity obtained by a video stuck detection method based on a pixel intensity map of frame differences, and fig. 7 (b) is a schematic diagram of a target expected value of a binarized image obtained by a video stuck detection method according to the present application.
Fig. 8 (a) is a graph of the stuck count indicator of the video stuck detection method based on the pixel intensity map of the frame difference, and Fig. 8 (b) is a graph of the stuck duration indicator of that method.
Fig. 9 (a) is a schematic diagram of the stuck count indicator of the video stuck detection method of the present application, and Fig. 9 (b) is a schematic diagram of the stuck duration indicator of that method.
Fig. 10 is a schematic block diagram of a video stuck detection apparatus provided according to an embodiment of the present application.
Fig. 11 is a schematic block diagram of another video stuck detection apparatus provided according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art without making any creative effort with respect to the embodiments in the present application belong to the protection scope of the present application.
It should be understood that the drawings are schematic illustrations of the present application, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the present application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
It should also be understood that some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or may be embodied in different networks, processor devices, or micro-control devices.
Cloud technology refers to a hosting technology that unifies hardware, software, network and other resources in a wide area network or a local area network to realize the computation, storage, processing and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied in the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support. Background services of technical network systems, such as video websites, picture websites and other web portals, require a large amount of computing and storage resources. With the rapid development and application of the internet industry, each item may have its own identification mark that needs to be transmitted to a background system for logical processing; data at different levels are processed separately, and all kinds of industry data require strong system background support, which can only be realized through cloud computing.
The embodiments of the application provide a video stuck detection method, apparatus, and storage medium. In practical applications, each functional module in the video stuck detection apparatus may be cooperatively implemented by the hardware resources of a device (e.g., a terminal device, a server, or a server cluster), such as computing resources like processors, and communication resources (e.g., for supporting communication over optical cable, cellular networks, and the like).
In the following embodiments, the description is given from the perspective of a video stuck detection apparatus, which may be any device with computing capability. The video stuck detection apparatus may be integrated into a terminal or a server having a memory, a processor, and computing capability, such as a tablet computer, a personal computer, or a notebook computer, or the video stuck detection apparatus may itself be the terminal or the server.
In some embodiments, the video-stuck detection apparatus may be applied to a client or a server having a video playing function, for example, video conference software.
Fig. 1 shows a schematic flow diagram of a video stuck detection method 100 according to an embodiment of the application.
As shown in fig. 1, the method 100 may include at least some of the following:
s101, a plurality of video frames to be detected in a video to be detected are obtained.
In some embodiments of the present application, the video to be detected is decoded video. The video to be detected may be a video played in a recent period of time, for example, the video stuck detection device may obtain the video to be detected at a certain time interval and determine stuck information of the video to be detected. As an example, the time interval may be 10 minutes, or 15 minutes, etc.
It should be understood that, in the embodiment of the present application, the obtaining manner of the video to be detected is not limited, for example, the client of the video to be detected may send the video to the video stuck detection device, or the video stuck detection device obtains the video by itself, which is not limited in the embodiment of the present application.
In some embodiments of the present application, the video stuck detection device may be integrated in a client or a server of the video to be detected, or the video stuck detection device may also be independent of the client and the server of the video to be detected, that is, the video stuck detection device may be a third-party client or a server, in which case, the video stuck detection device may obtain the video to be detected from the client and the server of the video to be detected.
In some embodiments of the present application, the plurality of video frames to be detected may include all video frames in the video to be detected.
In other embodiments of the present application, the plurality of video frames to be detected may include a portion of video frames in the video to be detected. For example, frames of a video to be detected may be extracted to obtain a plurality of video frames to be detected. In the following, a description is given by taking an example of extracting frames from a video to be detected to obtain a plurality of video frames to be detected, but the present application is not limited thereto.
In some embodiments of the present application, a plurality of video frames obtained by frame extraction may be further preprocessed to obtain a plurality of video frames to be detected.
In some exemplary embodiments, the preprocessing may include conversion to a grayscale map, or a downsampling process, or the like.
For example, the frame extraction may be performed on a video to be detected to obtain a plurality of video frames, the plurality of video frames may be further converted into a gray scale image, and the downsampling processing may be performed on the plurality of video frames to obtain the plurality of video frames to be detected.
As an example, the downsampling may reduce the resolution of the image, for example, to 1/10 of the original size while keeping the aspect ratio unchanged, which helps reduce the computation of the real-time video stuck detection algorithm.
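Purely for illustration (this sketch is not part of the patent text), the grayscale conversion and downsampling described above could look roughly as follows in Python with OpenCV; the function name preprocess and the exact 1/10 scale factor are assumptions taken from the example:

    import cv2

    def preprocess(frame, scale=0.1):
        """Sketch: convert a decoded BGR frame to grayscale and downsample it.

        Reducing the resolution to about 1/10 of the original size while
        keeping the aspect ratio lowers the cost of real-time stuck detection.
        """
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        size = (max(1, int(w * scale)), max(1, int(h * scale)))  # (width, height)
        return cv2.resize(gray, size, interpolation=cv2.INTER_AREA)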
In some embodiments of the present application, the frame rate used for extracting frames from the video to be detected may be based on the minimum time for the human eye to perceive a playback stutter.
For example, since the minimum time for the human eye to perceive a playback stutter is about 200 ms, stuck events of 200 ms or less can be ignored in the algorithm design, and attention is focused on stuck events of 200 ms or more. Meanwhile, the larger the inter-frame interval, the larger the gap between the frame difference of stuck frames and the frame difference of non-stuck frames, which helps discriminate stuck frames. Therefore, when designing the frame-extraction rate, the embodiments of the present application need to both avoid missing any stuck event lasting more than 200 ms and effectively distinguish stuck frames from non-stuck frames.
As an example, to ensure that no stuck event lasting 200 ms or more is missed, frames are extracted for detection at least every 100 ms, according to the Nyquist sampling theorem.
For example, for a video with a frame rate of 30, 3 frames are played in 100 ms; according to the video stuck detection method of the embodiment of the present application, 1 frame may be extracted every 3 frames, i.e., the frame rate after extraction is 10.
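For illustration only, the relation between the source frame rate and the extraction stride implied by the 100 ms interval can be written as follows (the helper name decimation_stride is hypothetical):

    def decimation_stride(fps, max_interval_s=0.1):
        """Largest stride c such that frames are still checked at least every
        100 ms, so no stuck event lasting 200 ms or more is missed."""
        return max(1, int(fps * max_interval_s))

    # For a 30 fps video the stride is 3, i.e. the extracted frame rate is 10.
    assert decimation_stride(30) == 3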
In practical applications, some special frames may exist in the video to be detected, such as a black screen or a white screen during scene transitions. Fig. 2 and Fig. 3 respectively show schematic diagrams of the transition from a normal picture to a white screen and from a white screen back to a normal picture. To avoid identifying these unchanging video frames as stuck frames, such frames need to be distinguished and culled before stuck detection.
In some embodiments of the present application, whether an extracted video frame is a special frame may be determined according to the expected value of the frame, where the expected value of a video frame is the average of the pixel values in the frame, denoted as E(I), where I denotes the video frame.
For example, if the expected value of the video frame is less than the fifth threshold, the video frame may be determined to be a transition black screen.
As an example, the fifth threshold may be 0.5, or 1, etc., and may be set according to specific requirements, which is not limited in this application.
For another example, if the expected value of the video frame is greater than the sixth threshold, the video frame may be determined to be a transition white screen.
As an example, the sixth threshold may be 225, 230, etc., and may be set according to specific requirements, which is not limited in this application.
In some embodiments of the present application, the determined special frames may be culled prior to determining the pixel intensity map of frame differences for adjacent video frames. That is, the video frames with the expected value smaller than the fifth threshold and the video frames with the expected value larger than the sixth threshold are not included in the video frames to be detected.
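As a minimal sketch of this culling step (assuming grayscale frames with pixel values in [0, 255]; the thresholds 0.5 and 225 are the example values above, not fixed parameters):

    import numpy as np

    def is_special_frame(gray, fifth_thresh=0.5, sixth_thresh=225):
        """Sketch: flag transition black/white screens via the frame's
        expected value E(I), i.e. the mean of its pixel values."""
        e = float(np.mean(gray))
        return e < fifth_thresh or e > sixth_thresh  # black screen / white screen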
In summary, in the embodiment of the present application, all video frames in the video to be detected may be used as the video frames to be detected; or frames may be extracted from the video to be detected and only part of the frames used; or the video frames may be preprocessed and the preprocessed frames used as the video frames to be detected; special frames may also be culled first and then preprocessing and frame extraction performed, or frame extraction and preprocessing performed first and the special frames culled afterwards. The specific manner of obtaining the plurality of video frames to be detected is not limited in the embodiment of the present application.
S102, performing difference processing on adjacent video frames in the video frames to be detected to obtain a pixel intensity image of the frame difference of the adjacent video frames.
In some embodiments, differencing the adjacent video frames may be subtracting temporally earlier video frames from temporally later ones of the adjacent video frames.
In some cases, for example, case 1: the video transitions away from a white screen, e.g., a white screen changes to a normal picture; or case 2: the video transitions to a black screen, e.g., a normal picture changes to a black screen. In both cases, the pixel intensity of the temporally later frame is lower than that of the earlier frame, so subtracting the earlier frame from the later one yields an expected value E(I) of the frame difference of 0, and the subsequent expectation-based stuck detection would mistake these frames for stuck frames.
In some embodiments of the present application, the order of subtraction used to obtain the frame difference may be determined according to the difference between the expected values of the adjacent video frames. It can be understood that the expected values of transition black-screen and transition white-screen frames differ greatly from those of normal picture frames, so the above two cases can be recognized according to the difference between the expected values of the adjacent video frames.
In some embodiments, the adjacent video frames include a first video frame and a second video frame, the first video frame being a temporally previous video frame and the second video frame being a temporally subsequent video frame.
As an example, if the absolute value of the difference between the expected value of the first video frame and the expected value of the second video frame is less than or equal to the fourth threshold, the first video frame is subtracted from the second video frame to obtain the pixel intensity map of the frame difference of the adjacent video frames.
As another example, if the absolute value of the difference between the expected value of the first video frame and the expected value of the second video frame is greater than the fourth threshold, the video frame with the smaller expected value is subtracted from the video frame with the larger expected value to obtain the pixel intensity map of the frame difference of the adjacent video frames.
For example, if the difference between the expected value of the first video frame and the expected value of the second video frame is greater than the fourth threshold, the pixel intensity map of the frame difference is obtained by subtracting the second video frame from the first video frame.
For another example, if the difference between the expected value of the second video frame and the expected value of the first video frame is greater than the fourth threshold, the pixel intensity map of the frame difference is obtained by subtracting the first video frame from the second video frame.
In some embodiments, the expected value for the first video frame is an average of pixel values in the first video frame and the expected value for the second video frame is an average of pixel values in the second video frame.
In some embodiments, the fourth threshold may be 10, or 15, etc., and may be set according to specific requirements, which is not limited in this application.
In some embodiments of the present application, the frame difference pixel intensity map of two video frames may be obtained by subtracting pixel values of corresponding pixel points of the two video frames.
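A hedged sketch of the subtraction-order rule described above, assuming uint8 grayscale frames; cv2.subtract saturates negative results at 0, which is exactly why the operand order matters:

    import cv2
    import numpy as np

    def frame_difference(first, second, fourth_thresh=10):
        """Sketch: pixel intensity map of the frame difference of adjacent frames.

        first  -- the temporally earlier frame
        second -- the temporally later frame
        If the expected values are close, subtract the earlier frame from the
        later one; otherwise subtract the frame with the smaller expected value
        from the one with the larger, so that black/white-screen transitions
        do not collapse to an all-zero difference.
        """
        e1, e2 = float(np.mean(first)), float(np.mean(second))
        if abs(e1 - e2) <= fourth_thresh:
            return cv2.subtract(second, first)
        return cv2.subtract(first, second) if e1 > e2 else cv2.subtract(second, first)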
In some scenarios, two situations may occur when playback stutters: (1) a spinning circle or a text prompt indicating the stall appears in the central area of the picture; (2) the picture is completely still without any graphical reminder. The key of the frame-difference-based stuck detection method is the threshold: with a reasonable threshold set on E(I) of the pixel intensity map of the frame difference, stuck frames can be distinguished from normally playing frames. However, the frame-difference-based algorithm easily confuses these two situations with normal playback in which the picture content changes slowly and objects move with small amplitude.
Fig. 4 (a) and (b) are two images whose picture content changes slowly and with small amplitude, and Fig. 4 (c) is the pixel intensity map of the frame difference between the two images. Fig. 5 (a) and (b) are two stuck frames, and Fig. 5 (c) is the pixel intensity map of the frame difference between the two images. By comparison, the pixel values in the pixel intensity map of the frame difference of images with slowly changing content and small amplitude are generally small, and likewise the pixel values in the pixel intensity map of the frame difference of stuck frames are small; it is therefore not easy to set a reasonable threshold to discriminate between them.
In the embodiment of the present application, as shown in fig. 1, the method 100 further includes:
s103, carrying out binarization processing on the pixel intensity map of the frame difference of the adjacent video frames to obtain a binarized image of the frame difference of the adjacent video frames.
In some embodiments, the pixel values greater than or equal to the third threshold in the pixel intensity map of the frame difference of the adjacent video frames are set to 255, and the pixel values less than the third threshold are set to zero, resulting in the binarized image of the frame difference of the adjacent video frames.
As an example, the third threshold may be 0.5, or 1, or 2, or other values, and may be set according to specific requirements, which is not limited in this application.
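The binarization itself is a one-line operation; a sketch using the example third threshold of 1:

    import numpy as np

    def binarize(diff, third_thresh=1):
        """Sketch: pixels >= third_thresh become 255, all others become 0."""
        return np.where(diff >= third_thresh, 255, 0).astype(np.uint8)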
Binarizing the pixel intensity map of the frame difference amplifies the difference between adjacent video frames. For a stuck picture with a spinning-circle reminder in the middle, only the small circle region has non-zero values after binarization, while the pixel values of all remaining regions are 0. For normal playback in which the picture changes slowly and objects move with small amplitude, binarization captures the tiny changes and amplifies them to the maximum extent, so that the two easily confused situations become clearly distinguishable.
For comparison, Fig. 4 (d) is the image obtained by binarizing the pixel intensity map of the frame difference in Fig. 4 (c), and Fig. 5 (d) is the image obtained by binarizing the pixel intensity map of the frame difference in Fig. 5 (c); the binarization processing thus effectively distinguishes image frames whose content changes slowly and with small amplitude from stuck frames.
In the embodiment of the present application, as shown in Fig. 1, the method 100 further includes:
S104, performing video stuck detection according to the binarized image of the frame difference of the adjacent video frames.
In some embodiments of the present application, performing video stuck detection based on the binarized image of the frame difference between adjacent video frames includes:
calculating the expected value of the binarized image of the frame difference of adjacent video frames, wherein the expected value of the binarized image is the average of the pixel values in the binarized image;
and performing video stuck detection according to the expected value of the binarized image of the frame difference of the adjacent video frames.
I.e. the indicator for video stuck detection may be the expected value of the binarized image of the frame difference of adjacent video frames.
As an example, if the expected value of the binarized image is greater than the first threshold value, it is determined that a stuck event has not occurred.
As another example, if the expected value of the binarized image is less than or equal to the first threshold value, it is determined that a stuck event has occurred.
In some embodiments, the first threshold may be 5, 10, or other values, which may be set according to specific requirements, and this application is not limited thereto.
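A sketch of this expectation-based decision, assuming the binarized image from the previous step and the example first threshold of 5:

    import numpy as np

    def is_stuck(binary, first_thresh=5):
        """Sketch: a frame pair is judged stuck when the expected value
        (mean pixel value) of the binarized frame difference does not
        exceed the first threshold."""
        return float(np.mean(binary)) <= first_thresh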
In other embodiments of the present application, performing video stuck detection based on the binarized image of the frame difference of adjacent video frames includes:
taking the logarithm of the expected value of the binarized image of the frame difference of the adjacent video frames to obtain the target expected value of the binarized image of the frame difference of the adjacent video frames, wherein the expected value of the binarized image is the average of the pixel values in the binarized image;
and performing video stuck detection according to the target expected value of the binarized image of the frame difference of the adjacent video frames.
That is, the index for video stuck detection may be the target expectation value E' (I) of the binarized image of the frame difference of the adjacent video frames.
It should be understood that taking the logarithm of the expected value of the binarized image is equivalent to a normalization, which helps stretch the differences among small values and limit the range of large values, thereby keeping the indicator for video stuck detection in a proper range and making it convenient to select a suitable threshold for distinguishing stuck frames from smoothly playing frames.
In some embodiments, the target expectation value E' (I) of the binarized image of frame differences for adjacent video frames is determined according to the following formula:
E'(I) = log2(1 + E(I))
where I denotes the binarized image of the frame difference of the adjacent video frames, and E(I) denotes the expected value of the binarized image.
As an example, if the target expected value of the binarized image is greater than the second threshold, it is determined that a stuck event has not occurred.
As another example, if the target expectation value of the binarized image is less than or equal to the second threshold, it is determined that a stuck event has occurred.
In some embodiments, the second threshold may be 0.5, 0.2, or other values, which may be set according to specific requirements, and this application is not limited thereto.
In some embodiments, the second threshold may be set by statistically plotting the experimental data, further observing the plot.
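A sketch of the log-normalized indicator and its threshold test, using the example second threshold of 0.5 (both function names are assumptions):

    import numpy as np

    def target_expectation(binary):
        """Sketch: E'(I) = log2(1 + E(I)), where E(I) is the mean pixel
        value of the binarized frame-difference image."""
        return float(np.log2(1.0 + np.mean(binary)))

    def is_stuck_log(binary, second_thresh=0.5):
        """A frame pair is judged stuck when E'(I) <= the second threshold."""
        return target_expectation(binary) <= second_thresh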
In some embodiments of the present application, the method 100 further comprises:
determining the stuck information of the video to be detected according to the stuck frames among the plurality of video frames to be detected, wherein the stuck information of the video to be detected includes at least one of the following items:
the total number of stuck events of the video to be detected, the total stuck duration, and the duration of a single stuck event.
Specifically, the video stuck detection apparatus may determine stuck information of each stuck event in the video to be detected according to the manner of the foregoing embodiment, and further determine stuck information of the video to be detected.
It should be understood that in the embodiment of the present application, a plurality of consecutive stuck frames may be regarded as one stuck event, and the duration of that stuck event is counted accordingly.
Taking the second threshold value as 0.5 as an example, the stuck detection is implemented by the following algorithm:
if E'(I) < 0.5:              # the current frame pair is judged as stuck
    if s == 0:               # a new stuck event begins
        Sn = Sn + 1
        s = 1
    Sd = Sd + c / fps        # accumulate the total stuck duration
    Sdi = Sdi + c / fps      # accumulate the duration of the i-th event, i = Sn
else:
    s = 0                    # playback is normal
wherein Sd represents the total stuck duration of the detected video so far, with an initial value of 0 for each video to be detected; Sn represents the total number of stuck events detected so far, with an initial value of 0 for each video to be detected; Sdi represents the duration of the i-th stuck event, with an initial value of 0 for each stuck event; s represents the playing state of the current video frame, 1 indicating stuck playback and 0 indicating normal playback, with an initial value of 0 for each video to be detected; fps represents the frame rate of the video to be detected; and c is the frame-extraction stride: c = 3 indicates that 1 frame is extracted from every 3 frames for stuck detection, and c = 1 indicates that all frames are used for stuck detection.
In some embodiments, in the presence of a frame extraction operation, the method 100 further comprises:
compensating the duration of each stuck event according to the time interval between adjacent video frames among the plurality of video frames to be detected, to obtain the target duration of each stuck event.
In the case of frame extraction, several video frames immediately before and after an extracted video frame may be missed; therefore, the error caused by frame extraction needs to be considered when counting the duration of a single stuck event.
For example, half of the time interval between adjacent video frames may be compensated before and after the duration of a single stuck event, respectively, as the target duration of the single stuck event.
Taking stuck detection on a video with a frame rate of 30 as an example, assuming that 1 frame is extracted every 3 frames, 1 to 2 video frames immediately before and after each detected video frame may be missed in actual detection. To compensate for the error caused by frame extraction, half of the detection interval (the interval being 100 ms) may be added before and after each detected stuck event.
For example, for each Sdi, after detecting the end of the stuck event, the following is done: sdi = Sdi + c/fps.
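In code form the compensation is a single addition per finished event; a sketch under the variable naming of the algorithm above:

    def compensate(duration_s, c, fps):
        """Sketch: add half a detection interval (c / fps / 2) before and
        after a stuck event, i.e. one full interval c / fps in total
        (100 ms when fps = 30 and c = 3)."""
        return duration_s + c / fps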
Further, the video stuck detection apparatus may output the stuck information of the video to be detected, for example, to a server corresponding to the client of the video to be detected, so that the server can optimize the video playing configuration according to the stuck information and improve user experience.
With reference to fig. 6, an overall flow according to an exemplary embodiment of the present application is described. As shown in fig. 6, at least some of the following may be included:
s601, extracting a plurality of video frames from the video to be detected.
The frame rate of the frame extraction can be determined according to the minimum time for the human eye to perceive a video stutter.
S602, preprocessing the extracted multiple video frames.
For example, the extracted multiple video frames are subjected to gray scale conversion and downsampling processing to reduce the complexity of subsequent algorithm processing.
S603, the video frame processed in S602 is preprocessed before frame difference is obtained.
For example, a transition black screen, a transition white screen, etc. are removed.
For another example, the order of subtraction for the frame difference is determined according to the difference between the expected values of adjacent video frames.
The specific implementation of the above steps refers to the related description of the foregoing embodiments.
It should be understood that in this embodiment, the removal of the transition black screen and the transition white screen may be performed after S602, or before S602, which is not limited in this application.
S604, calculating the pixel intensity map of the frame difference of the adjacent video frames according to the subtraction order determined in S603.
And S605, performing binarization processing on the pixel intensity image of the frame difference of the adjacent video frames to obtain a binarized image of the frame difference of the adjacent video frames.
S606, calculating the target expected value of the binary image of the frame difference of the adjacent video frames.
For example, the target expectation value of the binarized image of the frame difference of the adjacent video frames may be calculated from the expectation value of the binarized image of the frame difference of the adjacent video frames. For concrete implementation, reference is made to the related description of the foregoing embodiments, and for brevity, no further description is provided here.
S607, determining whether a stuck event has occurred according to the target expected value of the binarized image of the frame difference of the adjacent video frames.
For example, the second threshold value is 0.5, and it is determined that a stuck event has occurred if the expected target value E '(I) of the binarized image is less than 0.5, or it is determined that no stuck event has occurred if the expected target value E' (I) of the binarized image is equal to or greater than 0.5.
S608, counting the stuck information of the whole video to be detected in the above manner.
S609, correcting the counted stuck information according to the frame rate of the frame extraction.
S610, outputting the stuck information of the video to be detected.
For example, the stuck information may be output to the client of the video, so that the client optimizes the video playing configuration and improves user experience.
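To tie S601 through S610 together, the following hedged end-to-end sketch reuses the helper functions sketched earlier (preprocess, is_special_frame, frame_difference, binarize); the OpenCV file-based driver and all names are illustrative assumptions, not the patented implementation:

    import cv2
    import numpy as np

    def detect_stutter(path, second_thresh=0.5):
        """Sketch of S601-S610: extract frames, preprocess, cull special
        frames, compute directed frame differences, binarize, apply the
        log-normalized indicator, and accumulate the stuck statistics."""
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        c = max(1, int(fps * 0.1))              # S601: sample every <= 100 ms
        prev, idx, s = None, 0, 0
        total, events = 0.0, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % c == 0:                    # frame extraction
                gray = preprocess(frame)        # S602: grayscale + downsample
                if not is_special_frame(gray):  # S603: cull transition frames
                    if prev is not None:
                        diff = frame_difference(prev, gray)            # S604
                        binary = binarize(diff)                        # S605
                        e_t = float(np.log2(1.0 + np.mean(binary)))    # S606
                        if e_t < second_thresh:                 # S607: stuck
                            if s == 0:
                                s = 1
                                events.append(0.0)
                            events[-1] += c / fps
                            total += c / fps
                        elif s == 1:            # S609: event ended, compensate
                            events[-1] += c / fps
                            total += c / fps
                            s = 0
                    prev = gray
            idx += 1
        cap.release()
        return {"stuck_count": len(events),     # S608/S610: stuck information
                "total_stuck_duration_s": total,
                "event_durations_s": events}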
The performance difference between the video stuck detection method according to the embodiment of the present application and the video stuck detection method based on the pixel intensity map of the frame difference is described with reference to fig. 7 to 9.
Fig. 7 (a) is a schematic diagram of pixel intensity of the video stuck detection method based on a pixel intensity map of frame difference, where the abscissa is the number of the video frame and the ordinate is the change in pixel intensity of the frame difference, and fig. 7 (b) is a schematic diagram of the target expected value of the binarized image based on the technical solution of the present application, where the abscissa is the number of the video frame and the ordinate is the target expected value of the binarized image of the frame difference.
As can be seen from the comparison between (a) and (b) in Fig. 7, from the perspective of threshold setting, it is difficult for the scheme based on the pixel intensity map of the frame difference to select an appropriate threshold parameter from the visualized line graph, whereas the scheme of the present application can clearly locate the positions where stutters occur in the visualized line graph, which facilitates threshold setting. In addition, since the scheme applies logarithmic normalization to the expected value of the binarized image, it can be applied to different scenes and different data sets.
From the perspective of detection accuracy, for the video stuck detection method based on the pixel intensity map of the frame difference, if the indicator value at the position where a stutter actually occurs is taken as the threshold, normal playback with slowly changing picture content and small object motion is easily judged as stuck, which lowers the accuracy and greatly reduces the usability in actual scenes. The video stuck detection algorithm of the present application captures the slight changes of the frame difference through binarization and amplifies them to the maximum extent, so that a proper threshold can be conveniently selected to distinguish stuck frames from smoothly playing frames.
Fig. 8 (a) is a schematic diagram of the stuck count indicator of the video stuck detection method based on the pixel intensity map of the frame difference, where the abscissa is the detected stuck count and the ordinate is the actual stuck count; Fig. 8 (b) is a schematic diagram of the stuck duration indicator of the same method, where the abscissa is the detected stuck duration and the ordinate is the actual stuck duration.
Fig. 9 (a) is a schematic diagram of the stuck count indicator of the video stuck detection method of the present application, where the abscissa is the detected stuck count and the ordinate is the actual stuck count; Fig. 9 (b) is a schematic diagram of the stuck duration indicator of the same method, where the abscissa is the detected stuck duration and the ordinate is the actual stuck duration.
As can be seen by comparing the corresponding indicators in Fig. 8 and Fig. 9, the stuck count and stuck duration indicators obtained with the scheme of the present application show a better linear correlation with the actual values, i.e., higher accuracy.
While method embodiments of the present application are described in detail above with reference to fig. 1-9, apparatus embodiments of the present application are described in detail below with reference to fig. 10-11, it being understood that apparatus embodiments correspond to method embodiments and that similar descriptions may be had with reference to method embodiments.
Fig. 10 is a schematic structural diagram of a video stuck detection apparatus 1000 according to an embodiment of the present application, and as shown in fig. 10, the video stuck detection apparatus 1000 may include:
an obtaining module 1001, configured to obtain multiple video frames to be detected in a video to be detected;
the frame difference processing module 1002 is configured to perform difference processing on adjacent video frames in the multiple video frames to be detected to obtain a pixel intensity map of a frame difference of the adjacent video frames;
a binarization processing module 1003, configured to perform binarization processing on the pixel intensity map of the frame difference between adjacent video frames to obtain a binarized image of the frame difference between adjacent video frames;
and a stuck detection module 1004, configured to perform video stuck detection according to the binarized image of the frame difference between adjacent video frames.
In some embodiments of the present application, the stuck detection module 1004 is further configured to:
calculating the expected value of the binarized image of the frame difference of adjacent video frames, wherein the expected value of the binarized image is the average of the pixel values in the binarized image;
and performing video stuck detection according to the expected value of the binarized image of the frame difference of the adjacent video frames.
In some embodiments of the present application, the stuck detection module 1004 is further configured to:
if the expected value of the binarized image is greater than the first threshold, determining that no stuck event has occurred; or
if the expected value of the binarized image is less than or equal to the first threshold, determining that a stuck event has occurred.
In some embodiments of the present application, the video stuck detection apparatus 1000 further includes:
a logarithm processing module, used for taking the logarithm of the expected value of the binarized image of the frame difference of the adjacent video frames to obtain the target expected value of the binarized image of the frame difference of the adjacent video frames;
the stuck detection module 1004 is further configured to: perform video stuck detection according to the target expected value of the binarized image of the frame difference of the adjacent video frames.
In some embodiments of the present application, the logarithm processing module is further configured to:
determining a target expectation value of the binarized image of the frame difference of adjacent video frames according to the following formula:
E'(I) = log2(1 + E(I))
wherein I represents the binarized image of the frame difference of the adjacent video frames, E(I) represents the expected value of the binarized image, and E'(I) represents the target expected value of the binarized image.
In some embodiments of the present application, the stuck detection module 1004 is further configured to:
if the target expected value of the binarized image is greater than the second threshold, determining that no stuck event has occurred; or
if the target expected value of the binarized image is less than or equal to the second threshold, determining that a stuck event has occurred.
In some embodiments of the present application, the binarization processing module 1003 is further configured to:
and setting the pixel value which is larger than or equal to the third threshold value in the pixel intensity image of the frame difference of the adjacent video frame as 255, and setting the pixel value which is smaller than the third threshold value in the pixel intensity image of the frame difference of the adjacent video frame as zero to obtain the binarized image of the frame difference of the adjacent video frame.
In some embodiments of the present application, the adjacent video frames include a first video frame and a second video frame, the first video frame is a video frame in the adjacent video frames in a previous time, and the second video frame is a video frame in the adjacent video frames in a later time;
wherein the frame difference processing module 1002 is further configured to:
if the absolute value of the difference value between the expected value of the second video frame and the expected value of the first video frame is smaller than or equal to a fourth threshold value, subtracting the first video frame from the second video frame to obtain a pixel intensity map of the frame difference of the adjacent video frames; or
If the absolute value of the difference value between the expected value of the first video frame and the expected value of the second video frame is larger than a fourth threshold value, subtracting the video frame with the smaller expected value from the video frame with the larger expected value in the second video frame and the first video frame to obtain a pixel intensity image of the frame difference of the adjacent video frames;
the expected value of the first video frame is the average value of the pixel values in the first video frame, and the expected value of the second video frame is the average value of the pixel values in the second video frame.
In some embodiments of the present application, the obtaining module 1001 is further configured to:
determining the frame rate of frame extraction according to the minimum time for the human eye to perceive a video stutter;
acquiring a plurality of video frames in a video to be detected according to a frame rate;
and converting the plurality of video frames into a gray image, and performing downsampling processing on the plurality of video frames to obtain a plurality of video frames to be detected.
In some embodiments of the present application, the video frames with the expected value smaller than the fifth threshold and the video frames with the expected value larger than the sixth threshold are not included in the plurality of video frames to be detected.
In some embodiments of the present application, the stuck detection module 1004 is further configured to:
determining the stuck information of the video to be detected according to the stuck information of each stuck event among the plurality of video frames to be detected, wherein the stuck information of the video to be detected includes at least one of the following items:
the total number of stuck events of the video to be detected, the total stuck duration, and the duration of a single stuck event.
In some embodiments of the present application, the stuck detection module 1004 is further configured to:
compensating the duration of each stuck event according to the time interval between adjacent video frames among the plurality of video frames to be detected, to obtain the target duration of each stuck event.
It should be noted that, the functions of each module in the video stuck detection apparatus 1000 in the embodiment of the present application may refer to the specific implementation manner of any embodiment in fig. 1 to fig. 10 in each method embodiment described above, and are not described herein again.
The modules in the video stuck detection device can be wholly or partially realized by software, hardware and a combination thereof. The modules may be embedded in hardware or independent of a processor in the computer device, or may be stored in a memory in the computer device in software, so that the processor can call and execute operations corresponding to the modules.
The video stuck detection apparatus 1000 may be integrated into a terminal or a server having a memory, a processor, and computing capability, such as a tablet computer, a personal computer, or a notebook computer, or the video stuck detection apparatus 1000 may itself be the terminal or the server.
Fig. 11 is a further schematic structural diagram of a video stuck detection apparatus according to an embodiment of the present application, and as shown in fig. 11, the video stuck detection apparatus 1100 may include: a communication interface 1101, a memory 1102, a processor 1103, and a communication bus 1104. The communication interface 1101, the memory 1102, and the processor 1103 communicate with each other via a communication bus 1104. The communication interface 1101 is used for the apparatus 1100 to perform data communication with an external device. The memory 1102 may be used for storing software programs and modules, and the processor 1103 may execute the software programs and modules stored in the memory 1102, such as the software programs of the corresponding operations in the foregoing method embodiments.
In some embodiments, the processor 1103 may invoke software programs and modules stored in the memory 1102 to perform the following operations:
acquiring a plurality of video frames to be detected in a video to be detected;
performing difference processing on adjacent video frames in the plurality of video frames to be detected to obtain a pixel intensity map of frame differences of the adjacent video frames;
performing binarization processing on the pixel intensity map of the frame difference of the adjacent video frames to obtain a binarized image of the frame difference of the adjacent video frames;
and performing video stuck detection according to the binarized image of the frame difference of the adjacent video frames.
In some embodiments, the video stuck detection apparatus 1100 may be integrated into a terminal or a server having a memory, a processor, and computing capability, such as a tablet computer, a personal computer, or a notebook computer, or the video stuck detection apparatus 1100 may itself be the terminal or the server.
In some embodiments, the present application further provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the above method embodiments when executing the computer program.
The embodiment of the application also provides a computer readable storage medium for storing the computer program. The computer-readable storage medium can be applied to a computer device, and the computer program enables the computer device to execute a corresponding process in the video stuck detection method in the embodiment of the present application, which is not described herein again for brevity.
Embodiments of the present application also provide a computer program product including computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instruction from the computer-readable storage medium, and executes the computer instruction, so that the computer device executes a corresponding process in the video stuck detection method in the embodiment of the present application, which is not described herein again for brevity.
Embodiments of the present application also provide a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instruction from the computer-readable storage medium, and executes the computer instruction, so that the computer device executes a corresponding process in the video stuck detection method in the embodiment of the present application, which is not described herein again for brevity.
It should be understood that the processor of the embodiments of the present application may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The Processor may be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
It will be appreciated that the memory in the embodiments of the subject application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. Volatile Memory can be Random Access Memory (RAM), which acts as external cache Memory. By way of example, but not limitation, many forms of RAM are available, such as Static random access memory (Static RAM, SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic random access memory (Synchronous DRAM, SDRAM), Double Data Rate Synchronous Dynamic random access memory (DDR SDRAM), Enhanced Synchronous SDRAM (ESDRAM), Synchronous link SDRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should be understood that the above memories are exemplary but not limiting illustrations, for example, the memories in the embodiments of the present application may also be Static Random Access Memory (SRAM), dynamic random access memory (dynamic RAM, DRAM), Synchronous Dynamic Random Access Memory (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (enhanced SDRAM, ESDRAM), Synchronous Link DRAM (SLDRAM), Direct Rambus RAM (DR RAM), and the like. That is, the memory in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a logical division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or the part contributing over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer or a server) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A video stuck detection method, characterized by comprising the following steps:
acquiring a plurality of video frames to be detected in a video to be detected;
performing difference processing on adjacent video frames in the plurality of video frames to be detected to obtain a pixel intensity map of the frame difference of the adjacent video frames;
performing binarization processing on the pixel intensity map of the frame difference of the adjacent video frames to obtain a binarized image of the frame difference of the adjacent video frames;
and performing video stuck detection according to the binarized image of the frame difference of the adjacent video frames.
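(Illustrative note, not part of the claims.) A minimal Python sketch of the claim 1 pipeline, using OpenCV and NumPy on grayscale frames; the values BIN_THRESH and STUCK_THRESH and the use of an absolute difference are assumptions for demonstration, not choices fixed by the claims:

```python
import cv2
import numpy as np

BIN_THRESH = 10     # assumed example of the "third threshold" of claim 7
STUCK_THRESH = 0.5  # assumed example of the "first threshold" of claim 3

def detect_stuck_events(frames):
    """Flag a stuck event for each pair of adjacent grayscale frames."""
    events = []
    for prev, curr in zip(frames, frames[1:]):
        # Difference processing of adjacent frames -> pixel intensity map.
        # absdiff is used for simplicity; the direction-aware subtraction of
        # claim 8 is sketched separately below.
        diff = cv2.absdiff(curr, prev)
        # Binarization: pixels >= BIN_THRESH map to 255, the rest to 0.
        # cv2.threshold uses a strict ">" comparison, hence BIN_THRESH - 1.
        _, binary = cv2.threshold(diff, BIN_THRESH - 1, 255, cv2.THRESH_BINARY)
        # Expected value = mean pixel value of the binarized image (claim 2).
        expected = float(np.mean(binary))
        # Almost no changed pixels -> the pair is flagged as stuck (claim 3).
        events.append(expected <= STUCK_THRESH)
    return events
```

A truly frozen pair of frames yields a binarized image that is almost entirely zero, so its expected value collapses toward zero; tuning the two thresholds on real data is what distinguishes a stall from slow but genuine motion.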
2. The method according to claim 1, wherein the performing video stuck detection based on the binarized image of the frame difference of the adjacent video frames comprises:
calculating an expected value of the binarized image of the frame difference of the adjacent video frames, wherein the expected value of the binarized image is the average value of the pixel values in the binarized image;
and performing video stuck detection according to the expected value of the binarized image of the frame difference of the adjacent video frames.
3. The method according to claim 2, wherein the performing video stuck detection based on the expected value of the binarized image of the frame difference of the adjacent video frames comprises:
if the expected value of the binarized image is larger than a first threshold, determining that no stuck event occurs; or
if the expected value of the binarized image is less than or equal to the first threshold, determining that a stuck event occurs.
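(Illustrative note, not part of the claims.) A worked example with assumed numbers: for a 100×100 binarized image in which 40 pixels equal 255 and the rest are zero, the expected value is E(I) = 255 × 40 / 10000 = 1.02. With an assumed first threshold of 0.5, claim 3 would classify this pair of frames as not stuck, whereas a perfectly static pair (E(I) = 0) would be determined to be a stuck event.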
4. The method according to claim 2, wherein the performing video stuck detection based on the expected value of the binarized image of the frame difference of the adjacent video frames comprises:
performing logarithmic processing on the expected value of the binarized image of the frame difference of the adjacent video frames to obtain a target expected value of the binarized image of the frame difference of the adjacent video frames;
and performing video stuck detection according to the target expected value of the binarized image of the frame difference of the adjacent video frames.
5. The method according to claim 4, wherein the performing logarithmic processing on the expected value of the binarized image of the frame difference of the adjacent video frames to obtain the target expected value of the binarized image of the frame difference of the adjacent video frames comprises:
determining the target expected value of the binarized image of the frame difference of the adjacent video frames according to the following formula:
E'(I) = log2(1 + E(I))
wherein I denotes the binarized image of the frame difference of the adjacent video frames, E(I) denotes the expected value of the binarized image, and E'(I) denotes the target expected value of the binarized image.
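(Illustrative note, not part of the claims.) Continuing the assumed numbers above: for E(I) = 1.02, the target expected value is E'(I) = log2(1 + 1.02) = log2(2.02) ≈ 1.01, while E(I) = 0 maps to E'(I) = 0. The transform thus preserves zero for perfectly static frame pairs while compressing large expected values, which makes the second threshold of claim 6 less sensitive to scenes with heavy motion.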
6. The method according to claim 4, wherein the performing video stuck detection based on the target expected value of the binarized image of the frame difference of the adjacent video frames comprises:
if the target expected value of the binarized image is larger than a second threshold, determining that no stuck event occurs; or
if the target expected value of the binarized image is less than or equal to the second threshold, determining that a stuck event occurs.
7. The method according to any one of claims 1-6, wherein the performing binarization processing on the pixel intensity map of the frame difference of the adjacent video frames to obtain the binarized image of the frame difference of the adjacent video frames comprises:
setting pixel values in the pixel intensity map of the frame difference of the adjacent video frames that are greater than or equal to a third threshold to 255, and setting pixel values smaller than the third threshold to zero, to obtain the binarized image of the frame difference of the adjacent video frames.
8. The method according to any one of claims 1-6, wherein the adjacent video frames comprise a first video frame and a second video frame, the first video frame being the temporally earlier of the adjacent video frames and the second video frame being the temporally later of the adjacent video frames;
wherein the performing difference processing on the adjacent video frames in the plurality of video frames to be detected to obtain the pixel intensity map of the frame difference of the adjacent video frames comprises:
if the absolute value of the difference between the expected value of the second video frame and the expected value of the first video frame is smaller than or equal to a fourth threshold, subtracting the first video frame from the second video frame to obtain the pixel intensity map of the frame difference of the adjacent video frames; or
if the absolute value of the difference between the expected value of the first video frame and the expected value of the second video frame is larger than the fourth threshold, subtracting the one of the first and second video frames with the smaller expected value from the one with the larger expected value to obtain the pixel intensity map of the frame difference of the adjacent video frames;
wherein the expected value of the first video frame is the average value of the pixel values in the first video frame, and the expected value of the second video frame is the average value of the pixel values in the second video frame.
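(Illustrative note, not part of the claims.) A sketch of the direction-aware subtraction of claim 8; the fourth threshold value is an assumed example, and cv2.subtract is chosen because it saturates negative results to zero for uint8 frames:

```python
import cv2
import numpy as np

def frame_difference(prev_frame, curr_frame, fourth_threshold=5.0):
    """Pixel intensity map of the frame difference per claim 8 (sketch)."""
    e_prev = float(np.mean(prev_frame))  # expected value of the first frame
    e_curr = float(np.mean(curr_frame))  # expected value of the second frame
    if abs(e_curr - e_prev) <= fourth_threshold:
        # Overall brightness is stable: later frame minus earlier frame.
        return cv2.subtract(curr_frame, prev_frame)
    # Overall brightness jumped (e.g. a fade or flash): subtract the frame
    # with the smaller expected value from the one with the larger.
    if e_curr >= e_prev:
        return cv2.subtract(curr_frame, prev_frame)
    return cv2.subtract(prev_frame, curr_frame)
```

Subtracting the dimmer frame from the brighter one keeps the brightness change visible in the difference map instead of letting saturation discard it.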
9. The method according to any one of claims 1-6, wherein the acquiring a plurality of video frames to be detected in the video to be detected comprises:
determining a frame extraction rate according to the minimum time for human eyes to perceive a video stuck event;
acquiring a plurality of video frames from the video to be detected according to the frame extraction rate;
and converting the plurality of video frames into grayscale images and downsampling them to obtain the plurality of video frames to be detected.
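(Illustrative note, not part of the claims.) A possible preprocessing sketch for claim 9; the 0.2 s minimum perceivable stall duration and the 1/4 downsampling factor are assumptions for illustration, not values given in the claim:

```python
import cv2

def extract_frames(video_path, min_perceivable_stall_s=0.2, scale=0.25):
    """Sample, grayscale, and downsample frames from a video (claim 9 sketch)."""
    sample_rate = 1.0 / min_perceivable_stall_s  # e.g. 5 frames per second
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or sample_rate
    step = max(1, round(native_fps / sample_rate))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            small = cv2.resize(gray, None, fx=scale, fy=scale,
                               interpolation=cv2.INTER_AREA)
            frames.append(small)
        index += 1
    cap.release()
    return frames
```

Sampling no faster than the human perception limit keeps the computation light without missing stalls a viewer could actually notice, and downsampling further reduces the per-pair cost of the difference and binarization steps.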
10. The method according to claim 9, wherein the plurality of video frames to be detected include neither video frames whose expected values are smaller than a fifth threshold nor video frames whose expected values are larger than a sixth threshold.
11. The method according to any one of claims 1-6, further comprising:
determining stuck information of the video to be detected according to the stuck information of each stuck event in the plurality of video frames to be detected, wherein the stuck information of the video to be detected comprises at least one of the following:
the total number of stuck events, the total stuck duration, and the duration of a single stuck event of the video to be detected.
12. The method according to claim 11, further comprising:
compensating the duration of each stuck event according to the time interval between adjacent video frames in the plurality of video frames to be detected, to obtain a target duration of each stuck event.
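(Illustrative note, not part of the claims.) A sketch aggregating the statistics of claim 11 with the compensation of claim 12; adding exactly one inter-frame interval per event is an assumed compensation scheme, since the claim does not fix the formula:

```python
def summarize_stuck(events, frame_interval_s):
    """Aggregate per-pair stuck flags into video-level stuck information.

    events: booleans, one per adjacent-frame pair (True = pair flagged stuck).
    frame_interval_s: time between the sampled frames, in seconds.
    """
    durations, run = [], 0
    for stuck in events:
        if stuck:
            run += 1
        elif run:
            durations.append(run * frame_interval_s)
            run = 0
    if run:
        durations.append(run * frame_interval_s)
    # Compensation (claim 12, assumed scheme): an event of k flagged pairs
    # spans k + 1 sampled frames, so extend each event by one interval.
    target = [d + frame_interval_s for d in durations]
    return {
        "total_stuck_events": len(target),
        "total_stuck_duration_s": sum(target),
        "single_event_durations_s": target,
    }
```

For example, with a 0.2 s sampling interval, three consecutive flagged pairs form one event with a compensated duration of 0.8 s.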
13. A video stuck detection apparatus, comprising:
the acquisition module is used for acquiring a plurality of video frames to be detected in a video to be detected;
the frame difference processing module is used for performing difference processing on adjacent video frames in the plurality of video frames to be detected to obtain a pixel intensity map of the frame difference of the adjacent video frames;
the binarization processing module is used for performing binarization processing on the pixel intensity map of the frame difference of the adjacent video frames to obtain a binarized image of the frame difference of the adjacent video frames;
and the video stuck detection module is used for performing video stuck detection according to the binarized image of the frame difference of the adjacent video frames.
14. A video stuck detection apparatus, comprising: a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-12 via execution of the executable instructions.
15. A storage medium for storing a computer program which causes a computer to perform the method of any one of claims 1 to 12.
CN202110144611.3A 2021-02-03 2021-02-03 Video jamming detection method and device and storage medium Active CN112511821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110144611.3A CN112511821B (en) 2021-02-03 2021-02-03 Video jamming detection method and device and storage medium

Publications (2)

Publication Number Publication Date
CN112511821A true CN112511821A (en) 2021-03-16
CN112511821B CN112511821B (en) 2021-05-28

Family

ID=74952433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110144611.3A Active CN112511821B (en) 2021-02-03 2021-02-03 Video jamming detection method and device and storage medium

Country Status (1)

Country Link
CN (1) CN112511821B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105761255A (en) * 2016-02-04 2016-07-13 网易(杭州)网络有限公司 Game frame stagnation test method and device
CN107734326A (en) * 2017-11-01 2018-02-23 上海斐讯数据通信技术有限公司 The method of testing and system of a kind of video stabilisation
CN111654756A (en) * 2020-06-03 2020-09-11 腾讯科技(深圳)有限公司 Method, device and equipment for detecting stuck and readable storage medium
CN111888758A (en) * 2020-07-09 2020-11-06 深圳市腾讯网域计算机网络有限公司 Fluency detection method, device, equipment and storage medium
CN112153373A (en) * 2020-09-23 2020-12-29 平安国际智慧城市科技股份有限公司 Fault identification method and device for bright kitchen range equipment and storage medium
CN112183406A (en) * 2020-09-30 2021-01-05 魏芳芳 Video blockage detection method and detection system

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113766306A (en) * 2021-04-21 2021-12-07 腾讯科技(北京)有限公司 Method and device for detecting video jamming, computer equipment and storage medium
CN113766306B (en) * 2021-04-21 2023-11-14 腾讯科技(北京)有限公司 Method, device, computer equipment and storage medium for detecting video clamping
CN113382233A (en) * 2021-06-10 2021-09-10 展讯通信(上海)有限公司 Method, system, medium and terminal for detecting video recording abnormity of terminal
CN113408440A (en) * 2021-06-24 2021-09-17 展讯通信(上海)有限公司 Video data jam detection method, device, equipment and storage medium
CN113657218A (en) * 2021-08-02 2021-11-16 上海影谱科技有限公司 Video object detection method and device capable of reducing redundant data
CN114040197A (en) * 2021-11-29 2022-02-11 北京字节跳动网络技术有限公司 Video detection method, device, equipment and storage medium
CN114040197B (en) * 2021-11-29 2023-07-28 北京字节跳动网络技术有限公司 Video detection method, device, equipment and storage medium
CN114202728A (en) * 2021-12-10 2022-03-18 北京百度网讯科技有限公司 Video detection method, device, electronic equipment, medium and product
CN114202728B (en) * 2021-12-10 2022-09-02 北京百度网讯科技有限公司 Video detection method, device, electronic equipment and medium
CN116916093A (en) * 2023-09-12 2023-10-20 荣耀终端有限公司 Method for identifying clamping, electronic equipment and storage medium
CN116916093B (en) * 2023-09-12 2023-11-17 荣耀终端有限公司 Method for identifying clamping, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112511821B (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112511821B (en) Video jamming detection method and device and storage medium
CN110691259B (en) Video playing method, system, device, electronic equipment and storage medium
CN110365973B (en) Video detection method and device, electronic equipment and computer readable storage medium
CN110287891B (en) Gesture control method and device based on human body key points and electronic equipment
KR20130025944A (en) Method, apparatus and computer program product for providing object tracking using template switching and feature adaptation
US10332243B2 (en) Tampering detection for digital images
JP6924064B2 (en) Image processing device and its control method, and image pickup device
CN111325096A (en) Live stream sampling method and device and electronic equipment
CN113038176B (en) Video frame extraction method and device and electronic equipment
CN111191556A (en) Face recognition method and device and electronic equipment
US10628681B2 (en) Method, device, and non-transitory computer readable medium for searching video event
Yang et al. No‐reference image quality assessment via structural information fluctuation
CN111369557A (en) Image processing method, image processing device, computing equipment and storage medium
CN110751120A (en) Detection method and device and electronic equipment
CN116129316A (en) Image processing method, device, computer equipment and storage medium
US11637953B2 (en) Method, apparatus, electronic device, storage medium and system for vision task execution
CN111339367B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN110213457B (en) Image transmission method and device
Bai et al. Detection and localization of video object removal by spatio-temporal lbp coherence analysis
CN113033552A (en) Text recognition method and device and electronic equipment
CN112492333B (en) Image generation method and apparatus, cover replacement method, medium, and device
KR20200094940A (en) Device and method for generating heat map
CN114173194B (en) Page smoothness detection method and device, server and storage medium
CN117176979B (en) Method, device, equipment and storage medium for extracting content frames of multi-source heterogeneous video
CN112995488B (en) High-resolution video image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40041041; Country of ref document: HK)