CN116996666A - Video jamming detection method, device, equipment and medium - Google Patents

Video jamming detection method, device, equipment and medium

Info

Publication number
CN116996666A
Authority
CN
China
Prior art keywords
video
video frames
motion
different
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210441530.4A
Other languages
Chinese (zh)
Inventor
张开元
朱保丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210441530.4A
Publication of CN116996666A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/04: Diagnosis, testing or measuring for television systems or their details, for receivers
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the disclosure relate to a video jamming detection method, device, equipment and medium, wherein the method comprises the following steps: acquiring a video to be detected, wherein the video to be detected comprises a plurality of video frames; determining the motion amount between different video frames of the video to be detected, wherein the different video frames are separated by a preset number of video frames, and the motion amount represents the degree of change between the different video frames; determining corresponding jamming thresholds according to the motion amounts included in different time periods; and determining, according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs. According to the embodiments of the disclosure, the jamming threshold can be determined and adjusted adaptively, which improves flexibility and saves labor cost, and the threshold can differ across the scenes of different time periods, which improves the accuracy and recall of jamming detection.

Description

Video jamming detection method, device, equipment and medium
Technical Field
The disclosure relates to the technical field of video processing, and in particular to a video jamming detection method, device, equipment and medium.
Background
With the continuous development of internet technology and mobile communication technology, watching videos online has become the first choice of many users because of its convenience. The smoothness of a video greatly influences the viewing experience and largely determines user retention.
Currently, video jamming detection is usually performed at the user-perception level: jamming is detected from the video itself, for example by comparing the pixel differences between preceding and following frames with a certain threshold, to judge whether the video jams. However, this approach relies on manually setting and adjusting the threshold, which is inflexible and consumes considerable labor cost, and a single threshold applied to different scenes makes the accuracy and recall of the detection result uncontrollable.
Disclosure of Invention
In order to solve the above technical problems, the present disclosure provides a video jamming detection method, device, equipment and medium.
The embodiments of the disclosure provide a video jamming detection method, which comprises the following steps:
acquiring a video to be detected, wherein the video to be detected comprises a plurality of video frames;
determining the motion amount between different video frames of the video to be detected, wherein the different video frames are separated by a preset number of video frames, and the motion amount represents the degree of change between the different video frames;
determining a corresponding jamming threshold according to the motion amounts included in different time periods;
and determining, according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs.
The embodiments of the disclosure also provide a video jamming detection apparatus, which comprises:
a video module, configured to acquire a video to be detected, wherein the video to be detected comprises a plurality of video frames;
a motion amount module, configured to determine the motion amount between different video frames of the video to be detected, wherein the different video frames are separated by a preset number of video frames, and the motion amount represents the degree of change between the different video frames;
a threshold module, configured to determine a corresponding jamming threshold according to the motion amounts included in different time periods;
and a jamming module, configured to determine, according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs.
The embodiments of the disclosure also provide an electronic device, which comprises: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the video jamming detection method provided by the embodiments of the present disclosure.
The embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program for executing the video jamming detection method provided by the embodiments of the present disclosure.
Compared with the prior art, the technical solution provided by the embodiments of the present disclosure has the following advantages. According to the video jamming detection scheme provided by the embodiments of the disclosure, a video to be detected is acquired, the video to be detected comprising a plurality of video frames; the motion amount between different video frames of the video to be detected is determined, wherein the different video frames are separated by a preset number of video frames and the motion amount represents the degree of change between the different video frames; corresponding jamming thresholds are determined according to the motion amounts included in different time periods; and a target video frame in the video to be detected in which jamming occurs is determined according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located. By determining the motion amount between different video frames of the video, the jamming thresholds of different time periods can be determined adaptively and jamming detection can then be performed on that basis; the jamming threshold is thus determined and adjusted adaptively, which improves flexibility and saves labor cost, and the threshold can differ across the scenes of different time periods, which improves the accuracy and recall of jamming detection.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flow chart of a video jamming detection method according to an embodiment of the disclosure;
Fig. 2 is a flow chart of another video jamming detection method according to an embodiment of the present disclosure;
Fig. 3 is a flow chart of still another video jamming detection method according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of a video jamming detection apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are schematic rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Video jamming may have a variety of causes, such as a large number of users watching the video at the same time, unreasonable design and policies of the application, and limitations of the device itself. Because the causes of video jamming are complex, detecting it at the network or code level is too costly; at present, jamming is therefore usually detected from the video itself at the user-perception level, which effectively simulates the user experience and detects the real jamming condition of the video, for example by judging whether the video jams based on the pixel differences between preceding and following frames and a certain threshold. However, this approach has the following drawbacks. On the one hand, the threshold needs to be adjusted for different types of videos: for videos with more dynamic scenes the threshold should be raised appropriately, and for videos with more static scenes it should be lowered appropriately; this adjustment depends on manual experience, and manually setting the threshold is not flexible enough, requires repeated debugging and trials, and consumes considerable labor cost. On the other hand, the scheme may still work for a single scene, but once switching between multiple scenes is involved, the recall and accuracy of the jamming detection result become uncontrollable.
In order to solve the above problems, embodiments of the present disclosure provide a video jamming detection method, which is described below with reference to specific embodiments.
Fig. 1 is a flow chart of a video jamming detection method according to an embodiment of the present disclosure. The method may be performed by a video jamming detection apparatus, which may be implemented by software and/or hardware and may generally be integrated in an electronic device. As shown in Fig. 1, the method includes:
step 101, obtaining a video to be detected, wherein the video to be detected comprises a plurality of video frames.
The video to be detected can be any video on which jamming detection is to be performed; its type and source are not limited. For example, the video to be detected can be a video shot in real time or a video downloaded from the internet.
Step 102, determining the motion amount between different video frames of the video to be detected, wherein the different video frames are separated by a preset number of video frames, and the motion amount represents the degree of change between the different video frames.
The motion amount may be a parameter introduced in the embodiments of the present disclosure to characterize the degree of change between different video frames. A video frame is the minimum unit constituting a video; it can be extracted from the video to be detected and can be understood as one image. Because the video to be detected may include a plurality of video frames, the preset number of video frames separating the different video frames may be set according to the actual situation. In the embodiments of the disclosure, an interval of one video frame is taken as an example, that is, the different video frames are adjacent video frames, namely two successive video frames.
In an embodiment of the present disclosure, determining the motion amount between different video frames of the video to be detected may include: acquiring grayscale images of the different video frames; determining the pixel difference of each pixel point between the grayscale images of the different video frames to obtain a plurality of pixel differences; and determining the motion amount between the different video frames based on the plurality of pixel differences. Optionally, determining the motion amount between the different video frames based on the plurality of pixel differences includes: determining target pixel differences among the plurality of pixel differences and determining the square values of the target pixel differences; and determining the sum of the square values of the plurality of target pixel differences divided by the number of pixel points as the motion amount between the different video frames.
Specifically, after acquiring the video to be detected, the video jamming detection apparatus can extract the plurality of video frames included in the video to be detected. The different video frames can be characterized as a first video frame and a second video frame; when the different video frames are adjacent video frames, the first video frame is the earlier of the two adjacent frames and the second video frame is the later one, and the first and second video frames comprise the same number of pixel points. The first video frame and the second video frame are respectively converted into corresponding grayscale images, pixel-wise subtraction is performed on each pixel point of the two grayscale images to obtain the pixel difference of each pixel point, and the absolute value of each pixel difference is taken to obtain a plurality of pixel differences. Each pixel difference is then traversed to determine the target pixel differences, the square value of each target pixel difference is calculated, the square values of the target pixel differences are summed, the sum is divided by the number of pixel points included in the first video frame (or the second video frame), and the resulting quotient is determined as the motion amount between the first video frame and the second video frame. In this way, the motion amount between every two different video frames in the video to be detected can be obtained.
Optionally, determining the target pixel differences among the plurality of pixel differences may include: determining a pixel difference, among the plurality of pixel differences, that is greater than or equal to a pixel difference threshold as a target pixel difference. A target pixel difference can be understood as a pixel difference screened out from the plurality of pixel differences that meets the threshold requirement. The pixel difference threshold may be set according to the actual situation.
Specifically, when determining the target pixel differences among the plurality of pixel differences, the video jamming detection apparatus may traverse each pixel difference and judge whether it is greater than or equal to the pixel difference threshold; if so, the pixel difference is determined as a target pixel difference and its square value is calculated; otherwise, the pixel difference is assigned 0.
Optionally, when determining the motion amount between different video frames of the video to be detected, after the pixel difference of each pixel point between the grayscale images of the different video frames is determined to obtain a plurality of pixel differences, the sum of the pixel differences can also be directly divided by the number of pixel points to obtain the motion amount between the different video frames.
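As a concrete illustration of the computation above, the following Python sketch (using OpenCV and NumPy) derives the motion amount for one pair of frames; the function name and the example pixel difference threshold value of 10 are illustrative assumptions, not values specified by the disclosure.

```python
import cv2
import numpy as np

def motion_amount(first_frame, second_frame, pixel_diff_threshold=10):
    """Compute the motion amount between two video frames (e.g. adjacent frames)."""
    # Convert both frames into grayscale images.
    first_gray = cv2.cvtColor(first_frame, cv2.COLOR_BGR2GRAY)
    second_gray = cv2.cvtColor(second_frame, cv2.COLOR_BGR2GRAY)

    # Absolute pixel difference of each pixel point between the two grayscale images.
    diff = cv2.absdiff(first_gray, second_gray).astype(np.float64)

    # Keep only the target pixel differences (>= threshold); the others are assigned 0.
    diff[diff < pixel_diff_threshold] = 0.0

    # Sum of the squared target pixel differences divided by the number of pixel points.
    return float(np.sum(diff ** 2) / diff.size)
```

Applying this function to every pair of frames separated by the preset number of frames yields one motion amount per frame pair.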
Step 103, determining corresponding jamming thresholds according to the motion amounts included in different time periods.
The jamming threshold may be the minimum threshold of motion amount corresponding to a time period, and the jamming thresholds corresponding to different time periods may differ; that is, the jamming threshold in the embodiments of the disclosure supports dynamic change according to the scenes of different time periods. A time period may be a portion of the playing duration of the video to be detected.
Fig. 2 is a schematic flow chart of another video jamming detection method according to an embodiment of the present disclosure. As shown in Fig. 2, determining a corresponding jamming threshold according to the motion amounts included in different time periods may include the following steps:
step 201, dividing the playing time of the video to be tested into a plurality of time periods according to a preset time interval.
The preset time interval can be understood as a time interval for determining a blocking threshold, and can be set according to specific service requirements, for example, when the video to be detected is a short video, the preset time interval can be set to 15 seconds; when the video to be measured is a long video, the preset time interval may be set to 5 minutes.
Specifically, the video clip detecting device may divide a time period at intervals of a preset time interval for a playing duration of a video to be detected, and finally divide the playing duration into a plurality of time periods, where the duration of each time period is the preset time interval, for example, when the preset time interval is 15 seconds and the playing duration is 150 seconds, 10 time periods may be finally obtained, and the duration of each time period is also 15 seconds.
Step 202, determining the sum of the motion amounts between different video frames of each time period.
After dividing the playing duration of the video to be detected into a plurality of time periods, the video jamming detection apparatus may determine, for each time period, the sum of the motion amounts between the different video frames corresponding to the plurality of video frames included in that time period.
Step 203, determining a corresponding jamming threshold according to the sum of the motion amounts of each time period and the number of video frames included.
In some embodiments, determining the corresponding jamming threshold based on the sum of the motion amounts of each time period and the number of video frames included comprises: for each time period, determining the value obtained by dividing the sum of the motion amounts by the number of video frames as an average motion amount, and determining the corresponding jamming threshold according to the average motion amount and a preset threshold formula.
The average motion amount can be understood as the average of the motion amounts between the different video frames in each time period. For each time period, the sum of the motion amounts between the different video frames corresponding to the plurality of video frames included in the time period is divided by the number of video frames included in the time period, and the resulting value is determined as the average motion amount. The average motion amount is then input into the preset threshold formula Y = c[(a · log_e(Mi)) + b] to obtain the corresponding threshold of each time period. In the formula, Y represents the threshold corresponding to the i-th time period, Mi represents the average motion amount corresponding to the i-th time period, i = 1, 2, 3, …, n, where n represents the number of time periods included in the video to be detected; a and c are two coefficients and b is a constant, whose specific values can be determined according to the actual situation, for example a and c may be set to 1 and b to 0.
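As an illustration of steps 201 to 203 and the threshold formula above, the following sketch groups the per-pair motion amounts by timestamp and computes one threshold per time period. The function and variable names are assumptions; the 15-second interval and a = c = 1, b = 0 follow the examples in the text, while dividing by the number of frame pairs (rather than frames) and the guard for fully static periods are simplifications not specified by the disclosure.

```python
import math

def jamming_thresholds(motion_amounts, pair_times, interval_s=15.0, a=1.0, b=0.0, c=1.0):
    """Divide the playing duration into periods of interval_s seconds and compute
    one jamming threshold per period: Y_i = c * (a * log_e(M_i) + b), where M_i is
    the average motion amount of the i-th period."""
    periods = {}
    for m, t in zip(motion_amounts, pair_times):
        periods.setdefault(int(t // interval_s), []).append(m)

    thresholds = {}
    for i, values in periods.items():
        avg = sum(values) / len(values)   # average motion amount M_i (frame pairs used in place of frame count)
        avg = max(avg, 1e-6)              # assumed guard: log is undefined for a fully static period
        thresholds[i] = c * (a * math.log(avg) + b)
    return thresholds
```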
Step 104, determining, according to the motion amount between different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs.
A target video frame can be understood as a video frame in which jamming exists.
In an embodiment of the present disclosure, determining, according to the motion amount between different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs may include: when the motion amount between the different video frames is smaller than the jamming threshold of the time period in which they are located, determining the current different video frames as target video frames in which jamming occurs.
The video jamming detection apparatus has determined the motion amount between every two different video frames in step 102 above, and performs jamming detection in units of two different video frames. That is, for two different video frames, the motion amount between them is compared with the jamming threshold of the time period in which they are located. If the motion amount is smaller than the jamming threshold, the change between the two different video frames is very small, and both of the current different video frames are target video frames in which jamming occurs; if the motion amount is greater than or equal to the jamming threshold, the change between the two different video frames is large, and no jamming exists for the current two different video frames.
The detection process stops when all video frames of the video to be detected have been detected, and the target video frames in which jamming occurs are finally obtained and stored. Optionally, the target video frames may be stored in a database, for example in MySQL, a relational database management system, so as to facilitate subsequent investigation of jamming problems.
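A minimal sketch of the comparison in step 104, assuming the per-pair motion amounts, timestamps and per-period thresholds produced by the earlier sketches; the data layout is an illustrative assumption.

```python
def detect_jamming_frames(motion_amounts, pair_times, thresholds, interval_s=15.0):
    """Return the indices of frame pairs whose motion amount is below the jamming
    threshold of their time period; both frames of such a pair are target video
    frames in which jamming occurs."""
    jamming_pairs = []
    for idx, (m, t) in enumerate(zip(motion_amounts, pair_times)):
        period = int(t // interval_s)
        if m < thresholds[period]:        # smaller than the period's jamming threshold
            jamming_pairs.append(idx)
    return jamming_pairs
```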
Compared with the prior art, the technical solution provided by the embodiments of the present disclosure has the following advantages. According to the video jamming detection scheme provided by the embodiments of the disclosure, a video to be detected is acquired, the video to be detected comprising a plurality of video frames; the motion amount between different video frames of the video to be detected is determined; corresponding jamming thresholds are determined according to the motion amounts included in different time periods; and a target video frame in the video to be detected in which jamming occurs is determined according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located. By determining the motion amount between different video frames of the video, the jamming thresholds of different time periods can be determined adaptively and jamming detection can then be performed; the jamming threshold is thus determined and adjusted adaptively, which improves flexibility and saves labor cost, and the threshold can differ across the scenes of different time periods, which improves the accuracy and recall of jamming detection.
In some embodiments, after the video to be detected is acquired, the video jamming detection method may further include: preprocessing the video to be detected, wherein the preprocessing comprises at least one of cropping, resizing and noise reduction.
Preprocessing can be understood as processing performed on the video to be detected in advance, which can improve detection efficiency. In the embodiments of the disclosure, the preprocessing may include at least one of cropping, resizing and noise reduction. Cropping can be understood as cutting out a partial region of the video to be detected, which may be a region of interest or a key region for the user. Resizing may scale each video frame of the video to be detected down to a preset fraction of its original size; the preset fraction can be set according to the actual situation, for example 1/4 or 1/6, which improves subsequent processing speed and, by sacrificing some detail, also has an indirect noise-reduction effect. Since the video to be detected may contain a large amount of noise, noise reduction may use a noise-reduction algorithm to smooth most of the noise.
In this scheme, a preprocessing step can be added before jamming detection is performed on the video to be detected, which can improve the processing speed and the accuracy of subsequent detection.
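The three preprocessing operations could look as follows; the crop box, the 1/4 scale factor and Gaussian-blur denoising are example choices, since the disclosure leaves the cropped region, the preset fraction and the noise-reduction algorithm to the actual situation.

```python
import cv2

def preprocess_frame(frame, crop_box=None, scale=0.25, denoise=True):
    """Optionally crop a region of interest, scale the frame down, and smooth noise."""
    if crop_box is not None:
        x, y, w, h = crop_box                                   # region of interest / key region
        frame = frame[y:y + h, x:x + w]
    if scale != 1.0:
        frame = cv2.resize(frame, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_AREA)        # e.g. reduce to 1/4 of the original size
    if denoise:
        frame = cv2.GaussianBlur(frame, (3, 3), 0)              # smooth most of the noise
    return frame
```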
The video jamming detection process in the embodiments of the present disclosure is further described below through a specific example. Fig. 3 is a schematic flow chart of still another video jamming detection method according to an embodiment of the present disclosure. As shown in Fig. 3, the video jamming detection process may include: Step 301, acquiring the video to be detected. Step 302, preprocessing: the video to be detected is preprocessed, including at least one of cropping, resizing and noise reduction. Step 303, calculating the motion amount between different video frames, specifically in the manner of calculation described in the above embodiments, which is not repeated here. Step 304, calculating the dynamic thresholds of different time periods; the dynamic threshold is the jamming threshold described above, different time periods correspond to different dynamic thresholds, and the specific dynamic threshold is determined in the manner described in the above embodiments. Step 305, judging whether the motion amount between different video frames is smaller than the dynamic threshold of the time period in which they are located; if so, executing step 306; otherwise, returning to step 305 to continue the judgment for other different video frames. Step 306, counting the frames as jamming, that is, determining the current different video frames as target video frames in which jamming occurs. Step 307, merging the jamming data, that is, merging the plurality of target video frames. Step 308, storing the result in a database, which may for example be the relational database management system MySQL, merely as an example.
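The disclosure does not detail how the jamming data are merged in step 307; one plausible reading, sketched below, is to merge runs of consecutive jamming frame pairs into jamming intervals before they are written to the database.

```python
def merge_jamming_pairs(jamming_pair_indices):
    """Merge runs of consecutive jamming frame-pair indices into (start, end) intervals.
    This is an assumed interpretation of "merging the jamming data"; the disclosure
    only states that multiple target video frames are merged."""
    intervals = []
    for idx in sorted(jamming_pair_indices):
        if intervals and idx == intervals[-1][1] + 1:
            intervals[-1] = (intervals[-1][0], idx)   # extend the current run
        else:
            intervals.append((idx, idx))              # start a new run
    return intervals
```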
In this scheme, based on the amplitude difference between different frames (such as adjacent frames), the concept of motion amount is proposed to characterize the degree of change between different frames; by determining the motion amounts of different frames in the video, the corresponding thresholds can be determined adaptively, yielding a complete adaptive-threshold video jamming detection method. By computing the threshold, the method removes the dependence on, and inflexibility of, manually set thresholds. Meanwhile, the method computes dynamic thresholds at the set time interval and judges, according to each dynamic threshold, whether the video jams within the corresponding period of time; that is, a corresponding local threshold can be determined for each of the scenes in the plurality of time periods, which solves the problem that the recall and accuracy of a single threshold applied to multiple scenes are uncontrollable and improves the accuracy and recall of jamming detection.
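Putting the pieces together, the following sketch mirrors the flow of Fig. 3 using the hypothetical helpers defined in the earlier sketches (preprocessing, motion amount, per-period thresholds, detection, merging); storing the result in a database is omitted, and loading every frame into memory is a simplification for illustration only.

```python
import cv2

def detect_video_jamming(path, pair_gap=1, interval_s=15.0):
    """End-to-end sketch of the flow in Fig. 3, reusing the helpers defined above."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0       # fall back to an assumed frame rate
    frames, times = [], []
    ok, frame = cap.read()
    while ok:
        frames.append(preprocess_frame(frame))    # step 302: preprocessing
        times.append(len(times) / fps)
        ok, frame = cap.read()
    cap.release()

    # Step 303: motion amount between frames separated by pair_gap (adjacent frames by default).
    motions = [motion_amount(frames[i], frames[i + pair_gap])
               for i in range(len(frames) - pair_gap)]
    pair_times = times[:len(motions)]

    thresholds = jamming_thresholds(motions, pair_times, interval_s)           # step 304
    jams = detect_jamming_frames(motions, pair_times, thresholds, interval_s)  # steps 305-306
    return merge_jamming_pairs(jams)              # step 307
```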
Fig. 4 is a schematic structural diagram of a video jamming detection apparatus according to an embodiment of the present disclosure. The apparatus may be implemented by software and/or hardware and may generally be integrated in an electronic device. As shown in Fig. 4, the apparatus includes:
a video module 401, configured to acquire a video to be detected, wherein the video to be detected comprises a plurality of video frames;
a motion amount module 402, configured to determine the motion amount between different video frames of the video to be detected, wherein the different video frames are separated by a preset number of video frames, and the motion amount represents the degree of change between the different video frames;
a threshold module 403, configured to determine a corresponding jamming threshold according to the motion amounts included in different time periods;
and a jamming module 404, configured to determine, according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs.
Optionally, the motion amount module 402 includes:
a first unit, configured to acquire grayscale images of the different video frames;
a second unit, configured to determine the pixel difference of each pixel point between the grayscale images of the different video frames to obtain a plurality of pixel differences;
and a third unit, configured to determine the motion amount between the different video frames based on the plurality of pixel differences.
Optionally, the third unit includes:
a first subunit configured to determine a target pixel difference of the plurality of pixel differences, and determine a square value of the target pixel difference;
a second subunit configured to determine a value obtained by dividing a sum of square values of the plurality of target pixel differences by the number of the pixel points as an amount of motion between the different video frames.
Optionally, the first subunit is configured to:
a pixel difference of the plurality of pixel differences that is greater than or equal to a pixel difference threshold is determined as the target pixel difference.
Optionally, the threshold module 403 includes:
the time period unit is used for dividing the playing time length of the video to be detected into a plurality of time periods according to a preset time interval;
a motion amount unit for determining a sum of motion amounts between different video frames for each of the time periods;
and the determining unit is used for determining a corresponding jamming threshold according to the sum of the motion amounts of each time period and the number of video frames included.
Optionally, the determining unit is configured to:
and for each time period, determining a value obtained by dividing the sum of the motion amounts by the number of video frames as an average motion amount, and determining a corresponding cartoon threshold according to the average motion amount and a preset threshold formula.
Optionally, the jamming module 404 is configured to:
when the motion amount between the different video frames is smaller than the jamming threshold of the time period in which the different video frames are located, determine the current different video frames as target video frames in which jamming occurs.
Optionally, the apparatus further comprises a preprocessing module, configured to: after the video to be detected is acquired,
preprocess the video to be detected, wherein the preprocessing comprises at least one of cropping, resizing and noise reduction.
Optionally, the different video frames comprise adjacent video frames.
The video jamming detection apparatus provided by the embodiments of the disclosure can execute the video jamming detection method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the executed method.
Embodiments of the present disclosure also provide a computer program product comprising a computer program/instructions which, when executed by a processor, implement the video jamming detection method provided by any embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring now in particular to fig. 5, a schematic diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. When executed by the processing device 501, the computer program performs the above-described functions defined in the video jamming detection method of the embodiments of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a video to be detected, wherein the video to be detected comprises a plurality of video frames; determine the motion amount between different video frames of the video to be detected, wherein the different video frames are separated by a preset number of video frames, and the motion amount represents the degree of change between the different video frames; determine a corresponding jamming threshold according to the motion amounts included in different time periods; and determine, according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (11)

1. A video jamming detection method, comprising:
acquiring a video to be detected, wherein the video to be detected comprises a plurality of video frames;
determining the motion amount between different video frames of the video to be detected, wherein the different video frames are separated by a preset number of video frames, and the motion amount represents the degree of change between the different video frames;
determining a corresponding jamming threshold according to the motion amounts included in different time periods;
and determining, according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs.
2. The method of claim 1, wherein determining the motion amount between different video frames of the video to be detected comprises:
acquiring grayscale images of the different video frames;
determining the pixel difference of each pixel point between the grayscale images of the different video frames to obtain a plurality of pixel differences;
and determining the motion amount between the different video frames based on the plurality of pixel differences.
3. The method of claim 2, wherein determining the motion amount between the different video frames based on the plurality of pixel differences comprises:
determining a target pixel difference of the plurality of pixel differences and determining a square value of the target pixel difference;
and determining the value obtained by dividing the sum of the square values of the plurality of target pixel differences by the number of pixel points as the motion amount between the different video frames.
4. A method according to claim 3, wherein determining a target pixel difference of the plurality of pixel differences comprises:
a pixel difference of the plurality of pixel differences that is greater than or equal to a pixel difference threshold is determined as the target pixel difference.
5. The method of claim 1, wherein determining the corresponding jamming threshold according to the motion amounts included in different time periods comprises:
dividing the playing duration of the video to be detected into a plurality of time periods according to a preset time interval;
determining the sum of the motion amounts between different video frames for each of the time periods;
and determining a corresponding jamming threshold according to the sum of the motion amounts of each time period and the number of video frames included.
6. The method of claim 5, wherein determining the corresponding jamming threshold according to the sum of the motion amounts of each of the time periods and the number of video frames included comprises:
for each time period, determining the value obtained by dividing the sum of the motion amounts by the number of video frames as an average motion amount, and determining the corresponding jamming threshold according to the average motion amount and a preset threshold formula.
7. The method of claim 1, wherein determining, according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located, the target video frame in the video to be detected in which jamming occurs comprises:
when the motion amount between the different video frames is smaller than the jamming threshold of the time period in which the different video frames are located, determining the current different video frames as target video frames in which jamming occurs.
8. The method of claim 1, wherein after acquiring the video to be detected, the method further comprises:
preprocessing the video to be detected, wherein the preprocessing comprises at least one of cropping, resizing and noise reduction.
9. A video jamming detection apparatus, comprising:
a video module, configured to acquire a video to be detected, wherein the video to be detected comprises a plurality of video frames;
a motion amount module, configured to determine the motion amount between different video frames of the video to be detected, wherein the different video frames are separated by a preset number of video frames, and the motion amount represents the degree of change between the different video frames;
a threshold module, configured to determine a corresponding jamming threshold according to the motion amounts included in different time periods;
and a jamming module, configured to determine, according to the motion amount between the different video frames and the jamming threshold of the time period in which the different video frames are located, a target video frame in the video to be detected in which jamming occurs.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the video jamming detection method according to any one of claims 1-8.
11. A computer readable storage medium, characterized in that the storage medium stores a computer program for executing the video jamming detection method according to any one of the preceding claims 1-8.
CN202210441530.4A 2022-04-25 2022-04-25 Video jamming detection method, device, equipment and medium Pending CN116996666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210441530.4A CN116996666A (en) 2022-04-25 2022-04-25 Video jamming detection method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210441530.4A CN116996666A (en) 2022-04-25 2022-04-25 Video jamming detection method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN116996666A true CN116996666A (en) 2023-11-03

Family

ID=88525320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210441530.4A Pending CN116996666A (en) 2022-04-25 2022-04-25 Video jamming detection method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116996666A (en)

Similar Documents

Publication Publication Date Title
CN114584849B (en) Video quality evaluation method, device, electronic equipment and computer storage medium
CN112561840B (en) Video clipping method and device, storage medium and electronic equipment
CN111258736B (en) Information processing method and device and electronic equipment
CN110347875B (en) Video scene classification method and device, mobile terminal and storage medium
CN115103210B (en) Information processing method, device, terminal and storage medium
CN111191556A (en) Face recognition method and device and electronic equipment
CN111783632B (en) Face detection method and device for video stream, electronic equipment and storage medium
CN110809166B (en) Video data processing method and device and electronic equipment
CN112561779B (en) Image stylization processing method, device, equipment and storage medium
CN114584709B (en) Method, device, equipment and storage medium for generating zooming special effects
CN110751120A (en) Detection method and device and electronic equipment
CN114528433B (en) Template selection method and device, electronic equipment and storage medium
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN116996666A (en) Video jamming detection method, device, equipment and medium
CN111737575B (en) Content distribution method, content distribution device, readable medium and electronic equipment
CN111414921B (en) Sample image processing method, device, electronic equipment and computer storage medium
CN113407344A (en) Method and device for processing stuck
CN114915849B (en) Video preloading method, device, equipment and medium
CN110991312A (en) Method, apparatus, electronic device, and medium for generating detection information
CN115103023B (en) Video caching method, device, equipment and storage medium
CN113744259B (en) Forest fire smoke detection method and equipment based on gray value increasing number sequence
CN112488943B (en) Model training and image defogging method, device and equipment
CN111524085B (en) Adaptive image processing method, adaptive image processing device, electronic equipment and storage medium
CN112651909B (en) Image synthesis method, device, electronic equipment and computer readable storage medium
CN114359673B (en) Small sample smoke detection method, device and equipment based on metric learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination