CN117135399A - Video stutter detection method and device, electronic equipment and storage medium - Google Patents

Video stutter detection method and device, electronic equipment and storage medium

Info

Publication number
CN117135399A
CN117135399A (application CN202310726212.7A)
Authority
CN
China
Prior art keywords
video
image
marking information
marking
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310726212.7A
Other languages
Chinese (zh)
Inventor
吴静兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202310726212.7A priority Critical patent/CN117135399A/en
Publication of CN117135399A publication Critical patent/CN117135399A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched

Abstract

The invention discloses a video stutter detection method and device, electronic equipment, and a storage medium. By adopting the embodiments of the invention, stutter detection can be performed on a video to be detected without relying on the video information of that video, which improves the efficiency of video stutter detection.

Description

Video stutter detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a video stutter detection method, a video stutter detection device, an electronic device, and a storage medium.
Background
With the rapid development of multimedia technology, expectations for the user experience of multimedia files such as video keep rising, and the smoothness of video playing is a particular concern for users.
In the related art, the fluency of video playing is usually determined by having a person observe whether video playback stutters on the test device. Because this judgment is manual and subjective, video stutter detection suffers from low detection precision and a high error rate, and cannot meet the requirements of different users for smooth video playing.
Therefore, how to implement simple and effective video stutter detection is a problem to be solved.
Disclosure of Invention
The embodiments of the invention aim to provide a video stutter detection method, a video stutter detection device, an electronic device, and a storage medium, so as to solve the technical problems of low detection precision and high error rate in video stutter detection in the related art.
In a first aspect, an embodiment of the present invention provides a video stutter detection method, including:
marking the video to be detected to obtain a first video containing marking information;
recording the playing process of the first video to obtain a second video;
acquiring the marking information collected within a target time period of the second video;
and determining a stutter result of the playing process according to the video frame rate of the second video and the collected marking information.
In a second aspect, an embodiment of the present invention provides a video stutter detection apparatus, including:
a marking module, configured to mark the video to be detected to obtain a first video containing marking information;
a recording module, configured to record the playing process of the first video to obtain a second video;
an acquisition module, configured to acquire the marking information collected within a target time period of the second video;
and a stutter detection module, configured to determine the stutter result of the playing process according to the video frame rate of the second video and the collected marking information.
In some embodiments, the tagging module comprises an extraction unit, a tagging unit, and a synthesis unit;
the extraction unit is used for carrying out image frame extraction processing on the video to be detected to obtain a plurality of first images;
the marking unit is used for marking the first image according to the image frame number of the first image in the video to be detected to obtain a plurality of second images containing marking information, wherein the marking information corresponds to the image frame number;
and the synthesis unit is used for carrying out video synthesis processing on the plurality of second images to obtain a first video containing marking information.
In some embodiments, the marking unit is specifically configured to generate marking information associated with the image frame number according to the image frame number of the first image; and adding the marking information into the first image to obtain a plurality of second images containing the marking information.
In some embodiments, the tag information includes two-dimensional code information;
the marking unit is specifically further configured to add the two-dimensional code information to the first image in an image form to obtain a plurality of second images containing two-dimensional code images, where the size of the two-dimensional code images is smaller than that of the first image.
In some embodiments, the marking unit is specifically further configured to construct a marked region in the first image, where a pixel value of the marked region is different from a pixel value of a non-marked region, and the non-marked region is a region in the first image other than the marked region; and adding the marking information into the marking area to obtain a plurality of second images containing the marking information.
In some embodiments, the synthesizing unit is specifically configured to sort the plurality of second images according to the image frame sequence numbers corresponding to the marking information, so as to obtain a plurality of sorted second images; and carrying out video reconstruction on the plurality of sequenced second images at the target video frame rate to obtain a first video containing marking information.
In some embodiments, the stutter detection module comprises a first determining unit, a second determining unit, and a stutter detection unit;
the first determining unit is used for determining the image frame number of each frame image corresponding to the collected marking information;
the second determining unit is used for determining, among the image frame numbers of the frame images, a first image frame number with the smallest value and a second image frame number with the largest value;
the stutter detection unit is used for determining a stutter result of the playing process according to the difference between the first image frame number and the second image frame number, the length of the target time period, and the video frame rate of the second video.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and the steps of any one of the video stutter detection methods described above are implemented when the processor executes the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any one of the video stutter detection methods described above.
The embodiments of the invention provide a video stutter detection method and device, electronic equipment, and a storage medium. By adopting the embodiments of the invention, stutter detection can be performed on a video to be detected without relying on the video information of that video, which improves the efficiency of video stutter detection.
Drawings
Fig. 1 is a schematic flow chart of a video stutter detection method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of a second image containing marking information provided in accordance with an embodiment of the present invention;
FIG. 3 is another schematic illustration of a second image containing marking information provided in an embodiment of the present invention;
fig. 4 is a schematic flow chart of the video stutter detection principle according to an embodiment of the present invention;
fig. 5 is a schematic view of an application scenario corresponding to fig. 4;
fig. 6 is a schematic structural diagram of a video stutter detection device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a marking module according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a stutter detection module according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 10 is a schematic diagram of another structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
In the related art, the fluency of video playing is usually determined by having a person observe whether video playback stutters on the test device. Because this judgment is manual and subjective, video stutter detection suffers from low detection precision and a high error rate, and cannot meet the requirements of different users for smooth video playing.
Therefore, how to implement simple and effective video stutter detection is a problem to be solved.
In order to solve the above technical problems in the related art, an embodiment of the present application provides a video stutter detection method. Referring to fig. 1, fig. 1 is a flow chart of the video stutter detection method provided in the embodiment of the present application; the method includes steps 101 to 104.
Step 101, marking the video to be detected to obtain a first video containing marking information.
In the related art, video stutter detection is usually accomplished by watching the video to be detected and observing whether playback stutters, but judging stutter subjectively with the naked eye has a high error rate, so the detection precision of video stutter detection cannot be improved. Therefore, a first video containing marking information can be obtained by marking the video to be detected; the first video is then played, the marking information in it is identified during the playing process, and the stutter condition of the playing process can be determined based on that marking information.
In some embodiments, the marking processing provided by this embodiment may mark each frame of the video to be detected, generating specific marking information in each frame, so that when stutter detection is later performed on the marked video (that is, the first video), the stutter condition of video playing can be determined based on the marking information contained in each frame.
Specifically, the step of performing marking processing on the video to be detected to obtain the first video containing marking information provided in this embodiment may be: carrying out image frame extraction processing on the video to be detected to obtain a plurality of first images; marking the first image according to the image frame number of the first image in the video to be detected to obtain a plurality of second images containing marking information; and carrying out video synthesis processing on the plurality of second images to obtain a first video containing the marking information.
Since a video is composed of a plurality of images with consecutive frame numbers, the image frame extraction processing provided in this embodiment mainly splits the video to be detected into a plurality of images with consecutive frame numbers, that is, the first images provided in this embodiment. Then, each first image is marked according to its image frame number, so as to obtain a plurality of second images containing marking information. The marking information provided by this embodiment corresponds to the image frame number, so the image frame number of a second image in the corresponding video can be determined from its marking information.
The marking information provided in this embodiment may be information associated with the image frame number of each frame, so that the image frame number of each frame played during the playing process can be determined from the marking information, which facilitates subsequent detection processing based on the image frame numbers. Specifically, the step of marking the first image according to its image frame number in the video to be detected to obtain a plurality of second images containing marking information may be: generating marking information associated with the image frame number according to the image frame number of the first image; and adding the marking information to the first image to obtain a plurality of second images containing the marking information.
The video to be detected provided in this embodiment may include n frames of images, the image frame numbers provided in this embodiment may be 1, 2, 3, …, n, and the corresponding image frame number can be recovered by identifying the marking information.
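The mapping between a frame and its marking information can be sketched in Python. This is an illustrative sketch, not the patent's implementation: the payload format (`frame:<n>/<total>`) and both function names are assumptions, standing in for whatever payload a QR encoder would actually carry onto the frame.

```python
def make_marking_info(frame_number: int, total_frames: int) -> str:
    """Build the marking payload for one frame.

    The payload carries the frame number, so identifying the marker
    during playback recovers which frame was on screen.
    """
    if not 1 <= frame_number <= total_frames:
        raise ValueError("frame number out of range")
    # A simple self-describing payload; a QR encoder could turn this
    # string into a marker image stamped onto the frame.
    return f"frame:{frame_number}/{total_frames}"


def parse_marking_info(payload: str) -> int:
    """Recover the image frame number from an identified payload."""
    head, _, _ = payload.partition("/")
    return int(head.removeprefix("frame:"))
```

Round-tripping `make_marking_info(7, 100)` through `parse_marking_info` yields 7, which is all the later stutter judgment needs.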
As an alternative embodiment, please refer to fig. 2, which is a schematic diagram of a second image containing marking information provided by an embodiment of the present invention. As shown in fig. 2, the marking information provided by this embodiment may be two-dimensional code (QR code) information. Adding the two-dimensional code information to the first image avoids the marking information being obscured by the content of the first image, which improves the recognition rate of the marking information and thus the detection accuracy of video stutter detection. Specifically, the step of adding the marking information to the first image to obtain a plurality of second images containing the marking information may be: adding the two-dimensional code information to the first image in the form of an image, obtaining a plurality of second images containing two-dimensional code images, where the size of each two-dimensional code image is smaller than that of the first image.
As an alternative embodiment, referring to fig. 3, which is another schematic diagram of a second image containing marking information provided in an embodiment of the present invention, the marking information provided in this embodiment may be added to a preset area of the first image, so as to further improve the recognition rate of the marking information. Specifically, the step of adding the marking information to the first image to obtain a plurality of second images containing the marking information may be: constructing a marking area in the first image; and adding the marking information to the marking area to obtain a plurality of second images containing the marking information.
In this embodiment, the preset area is the marking area, which may be selected in advance. For example, the area shown in fig. 3 is located at the lower end of the first image, but it may also be located at the upper end, the middle, or elsewhere in the first image; the marking area may occupy 1/5, 1/3, 1/2, or the like of the first image, as long as it is no larger than the first image. The pixel values of the marking area provided in this embodiment differ from those of the non-marking area, which is the area of the first image other than the marking area. Optionally, the pixel value of the marking area may be 0, i.e., black as shown in fig. 3; setting the pixel value of the marking area to 0 and placing the marking information in this black marking area further improves the recognition rate of the marking information, and thereby further improves the detection accuracy of video stutter detection.
The pixel value of the marking area provided in this embodiment is not limited to 0; it may also be 1, 2, 254, 255, or the like, as long as the marking information in the marking area can be accurately identified, which is not limited herein.
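The marking-area construction described above can be sketched as follows. This is a minimal illustration using a nested list of grayscale pixel values as a stand-in for real image data; the function name, the bottom placement, and the 1/`fraction` strip size are assumptions for the sketch.

```python
def add_marker_region(image, fraction=5):
    """Return a copy of `image` (a 2-D list of grayscale pixel values)
    whose bottom 1/`fraction` of rows is zeroed out to form the marking
    area; the marking information would then be drawn inside it."""
    height = len(image)
    strip = max(1, height // fraction)  # rows reserved for the marker
    marked = [row[:] for row in image]  # copy; leave the input intact
    for row in marked[height - strip:]:
        for col in range(len(row)):
            row[col] = 0  # pixel value 0: black marking area
    return marked
```

Because the strip is uniformly black, a detector can locate the marking area by pixel value alone before decoding the marker inside it, which is the recognition-rate benefit the embodiment describes.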
In some embodiments, after the marking information is added to the first images to obtain the second images, the second images containing the marking information need to undergo video synthesis processing to obtain the first video containing the marking information. Optionally, in this embodiment, the second images containing the marking information are synthesized at a fixed video frame rate, so that the marking information in the second images can be conveniently identified later while the recorded second video is being played. Specifically, the step of performing video synthesis processing on the plurality of second images to obtain the first video containing the marking information may be: sorting the plurality of second images according to the image frame numbers corresponding to the marking information to obtain a plurality of sorted second images; and performing video reconstruction on the sorted second images at the target video frame rate to obtain the first video containing the marking information.
The target video frame rate provided in this embodiment may be 30 fps, or a higher rate such as 45 fps or 60 fps, which is not specifically limited here. The video reconstruction provided in this embodiment mainly synthesizes the second images with consecutive image frame numbers into consecutive frames, so as to obtain the first video containing the marking information.
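The sorting-then-reconstruction step can be sketched as below. This is an illustration under assumptions: the marked frames are taken to arrive as `(frame_number, image)` pairs, and a real pipeline would hand the ordered frames to a video encoder at the target frame rate rather than merely compute their timestamps.

```python
def order_frames(marked_frames):
    """Sort (frame_number, image) pairs by frame number so the
    reconstructed first video plays frames in their original order."""
    return [img for _, img in sorted(marked_frames, key=lambda p: p[0])]


def frame_timestamps(n_frames, target_fps=30):
    """Presentation timestamp (seconds) of each frame when the sorted
    frames are reassembled at the target video frame rate."""
    return [i / target_fps for i in range(n_frames)]
```

At 30 fps the frames land at 0 s, 1/30 s, 2/30 s, and so on; the fixed spacing is what makes the per-second marker counting in the later steps meaningful.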
Step 102, recording the playing process of the first video to obtain a second video.
In this embodiment, a capture card or a recording tool carried by the test device may be used to record the playing process of the first video, so as to obtain the second video. When recording the playing process of the first video, a specific frame rate should be used; it may be 30 fps, or a higher rate such as 45 fps or 60 fps, which is not specifically limited here.
Step 103, acquiring the marking information collected within the target time period of the second video.
In this embodiment, after the recorded second video is obtained, if video stutter detection is required in a specified third-party application, the recorded second video is published to that application to generate a first external network link, and the second video is played in the third-party application through that link; if video stutter detection is required online, the recorded second video is published to an external network server to generate a second external network link, and the second video is played online through that link; if video stutter detection is required locally, the recorded second video is stored on the local device and played there.
In order to complete stutter detection of the playing process, this embodiment acquires the marking information collected within the target time period of the second video, so as to determine, from the collected marking information, whether the number of image frames played in the target time period matches the video frame rate of the second video, and thereby determine the stutter result of the playing process. Specifically, this embodiment collects the second images played within the target time period of the second video, obtains the corresponding marking information (such as two-dimensional code information) from the played second images, and identifies the marking information to determine the image frame numbers of the images played within the target time period, which facilitates completing the subsequent video stutter detection according to those image frame numbers.
The target time period provided in this embodiment may be 1 second, or a value such as 2 or 3 seconds, as long as it does not exceed the playing duration of the second video, which is not specifically limited here.
Step 104, determining the stutter result of the playing process according to the video frame rate of the second video and the collected marking information.
In this embodiment, the video frame rate of the second video may be the specific frame rate set during recording. When the first video was not recorded at a fixed frame rate, the average frame rate of the second video during playback, computed from the playing time of the second video and the number of image frames it contains, can be used as the video frame rate of the second video.
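The fallback average frame rate described in the paragraph above is a single division of the frame count by the playing time. A hedged sketch (the function name is an assumption):

```python
def average_frame_rate(frame_count: int, duration_seconds: float) -> float:
    """Fallback video frame rate for the second video: total number of
    recorded image frames divided by the recording's playing time."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return frame_count / duration_seconds
```

For example, a 10-second recording containing 300 frames has an average frame rate of 30 fps.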
As an optional embodiment, this implementation may determine the actual frame rate of the second video during the playing process according to the image frame numbers corresponding to the collected marking information, then compare the actual frame rate with the video frame rate of the second video, and determine that the playing process of the second video stuttered when the two satisfy a preset relationship. The preset relationship provided in this embodiment may be that the actual frame rate is less than half the video frame rate of the second video. Specifically, the step of determining the stutter result of the playing process according to the video frame rate of the second video and the collected marking information may be: determining the image frame number of each frame image corresponding to the collected marking information; determining, among those image frame numbers, a first image frame number with the smallest value and a second image frame number with the largest value; and determining the stutter result of the playing process according to the difference between the first image frame number and the second image frame number, the length of the target time period, and the video frame rate of the second video.
For example, when the target time period is 1 second, the image frame numbers corresponding to the marking information collected within that second are 1, 2, 3, …, 25, and the video frame rate of the second video is 60 fps, the first image frame number is determined to be 1 and the second image frame number to be 25, so the difference between them is 24. It is then judged whether this difference of 24 is less than half of the second video's frame rate of 60, that is, 30. Since 24 is less than 30, the stutter result of the playing process of the second video is determined to be stuttering.
When the target time period provided in this embodiment is longer than 1 second, for example 2 seconds, the difference between the first image frame number and the second image frame number must be divided by the length of the target time period, that is, 2, to obtain the actual frame rate of the playing process within the target time period.
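The judgment in the last three paragraphs — actual frame rate taken as the spread of the collected frame numbers over the period, stuttering declared when it falls below half the recorded frame rate — can be sketched as a single function. The name and return shape are assumptions; the half-rate threshold is the preset relationship this embodiment uses.

```python
def detect_stutter(frame_numbers, video_fps, period_seconds=1.0):
    """Decide whether playback stuttered in the target time period.

    `frame_numbers` are the image frame numbers recovered from the
    markers collected within the period.  The actual frame rate is the
    difference between the largest and smallest frame number divided by
    the period length; playback is judged to stutter when it falls
    below half the second video's frame rate.
    Returns (stuttered, actual_fps).
    """
    first, second = min(frame_numbers), max(frame_numbers)
    actual_fps = (second - first) / period_seconds
    return actual_fps < video_fps / 2, actual_fps
```

With the numbers from the worked example above — frame numbers 1 through 25 collected in one second, second video at 60 fps — the actual rate is 24, below the threshold of 30, so the result is stuttering; a full 60 frames in the same second would not trip the threshold.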
To better explain the video stutter detection principle of the embodiment of the present invention, please refer to fig. 4 and fig. 5; fig. 4 is a schematic flow chart of the video stutter detection principle provided by an embodiment of the present invention, and fig. 5 is a schematic diagram of the application scenario corresponding to fig. 4. As shown in fig. 4 and fig. 5, when video stutter detection needs to be performed on a video to be detected, image frame extraction processing is first performed to split the video into a plurality of first images with image frame numbers (n frames of first images in total). Marking information is then added to each first image; specifically, two-dimensional code information (the marking information) representing the image frame number is added to each frame, yielding a plurality of second images containing two-dimensional code information, and video reconstruction processing is performed on the second images to obtain the first video containing two-dimensional code information. Next, a capture card or a recording tool is used to record the playing process of the first video, obtaining the second video. After the second video is obtained, its video frame rate is read, the second video is split into frames to obtain a plurality of second images containing two-dimensional code information, the two-dimensional code information in each second image played within the target time period is identified, and the actual frame rate of the playing process within the target time period is determined from the image frame numbers identified from the two-dimensional codes.
Finally, it is judged whether the actual frame rate is less than half the video frame rate of the second video; if so, the stutter result of the playing process of the second video is determined to be stuttering, and otherwise no stutter occurred.
This completes the description of the video stutter detection method provided in this embodiment.
In summary, the invention discloses a video stutter detection method: marking the video to be detected to obtain a first video containing marking information; recording the playing process of the first video to obtain a second video; acquiring the marking information collected within a target time period of the second video; and determining the stutter result of the playing process according to the video frame rate of the second video and the collected marking information. By adopting the embodiments of the invention, stutter detection can be performed on a video to be detected without relying on the video information of that video, which improves the efficiency of video stutter detection.
According to the method described in the above embodiments, the present embodiment will be further described from the perspective of a video clip detecting device, which may be implemented as a separate entity, or may be implemented as an integrated electronic device, such as a terminal, where the terminal may include a mobile phone, a tablet computer, and so on.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a video jamming detection apparatus according to an embodiment of the present invention. As shown in fig. 6, a video jamming detection apparatus 600 according to an embodiment of the present invention includes: a marking module 601, a recording module 602, an obtaining module 603, and a jamming detection module 604;
The marking module 601 is configured to perform marking processing on a video to be detected, so as to obtain a first video containing marking information.
The recording module 602 is configured to record a playing process of the first video to obtain a second video.
The obtaining module 603 is configured to obtain the marking information collected in the target time period in the second video.
The jamming detection module 604 is configured to determine a jamming result of the playing process according to the video frame rate of the second video and the collected marking information.
In some embodiments, please refer to fig. 7, fig. 7 is a schematic structural diagram of a marking module provided in an embodiment of the present invention, and as shown in fig. 7, a marking module 601 provided in an embodiment of the present invention includes an extraction unit 6011, a marking unit 6012, and a synthesis unit 6013;
the extracting unit 6011 is configured to perform image frame extraction processing on a video to be detected, so as to obtain a plurality of first images.
The marking unit 6012 is configured to perform marking processing on the first image according to an image frame number of the first image in the video to be detected, so as to obtain a plurality of second images containing marking information, where the marking information corresponds to the image frame number.
And a synthesizing unit 6013 for performing video synthesizing processing on the plurality of second images to obtain a first video containing the marking information.
In some embodiments, the marking unit 6012 is specifically configured to generate marking information associated with the image frame number according to the image frame number of the first image; and adding the marking information into the first image to obtain a plurality of second images containing the marking information.
In some embodiments, the marking information includes two-dimensional code information, and the marking unit 6012 is specifically further configured to add the two-dimensional code information to the first image in the form of an image, so as to obtain a plurality of second images containing two-dimensional code images, where the size of the two-dimensional code images is smaller than that of the first image.
In some embodiments, the marking unit 6012 is specifically further configured to construct a marked area in the first image, where a pixel value of the marked area is different from a pixel value of a non-marked area, and the non-marked area is an area other than the marked area in the first image; and adding the marking information into the marking area to obtain a plurality of second images containing the marking information.
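The marked-area variant above can be sketched as follows, assuming grayscale images represented as 2-D lists of pixel values; the function name and field names are illustrative, not the patent's API:

```python
# Construct a marked area in the first image: force a top-left block to pixel
# values different from the surrounding (non-marked) area, then attach the
# marking information to the resulting second image.

def add_mark_region(image, marking_info, size=2):
    marked = [row[:] for row in image]           # copy so the first image is preserved
    for y in range(size):
        for x in range(size):
            marked[y][x] = 255 - marked[y][x]    # invert → guaranteed to differ from non-marked area
    return {"pixels": marked, "mark": marking_info}

second = add_mark_region([[0] * 4 for _ in range(4)], marking_info=7)
print(second["pixels"][0][:3])  # → [255, 255, 0]
```

Inverting the pixel values is just one way to make the marked area distinguishable; the text only requires that its pixel values differ from those of the non-marked area.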
In some embodiments, the synthesizing unit 6013 is specifically configured to sort the plurality of second images according to the image frame numbers corresponding to the marking information, so as to obtain a plurality of sorted second images; and carrying out video reconstruction on the plurality of ordered second images at the target video frame rate to obtain a first video containing the marking information.
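The synthesis step can be sketched as below, assuming each second image carries its frame number in a `mark` field (a hypothetical stand-in for the decoded marking information):

```python
# Sort the second images by the frame number carried in their marking
# information, then rebuild them into the first video at the target frame rate.

def synthesize_first_video(second_images, target_fps):
    ordered = sorted(second_images, key=lambda img: img["mark"])
    return {"fps": target_fps, "frames": ordered}

shuffled = [{"mark": 2}, {"mark": 0}, {"mark": 1}]
video = synthesize_first_video(shuffled, target_fps=60)
print([f["mark"] for f in video["frames"]])  # → [0, 1, 2]
```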
In some embodiments, please refer to fig. 8. Fig. 8 is a schematic structural diagram of a jamming detection module provided in the embodiment of the present invention. As shown in fig. 8, the jamming detection module 604 provided in this embodiment includes a first determining unit 6041, a second determining unit 6042, and a jamming detection unit 6043;
wherein the first determining unit 6041 is configured to determine the image frame number of each frame image corresponding to the collected marking information.
The second determining unit 6042 is configured to determine a first image frame number having the smallest sequence number and a second image frame number having the largest sequence number among the image frame numbers of the respective frame images.
The jamming detection unit 6043 is configured to determine a jamming result of the playing process according to the difference between the first image frame number and the second image frame number, the time length of the target time period, and the video frame rate of the second video.
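The computation spread across these three units can be sketched in a few lines; names are illustrative. `marks` stands for the image frame numbers decoded within the target time period, and playback progress is estimated from the span between the smallest and largest decoded frame numbers, as in the text:

```python
# Estimate the actual frame rate over the target time period from the
# smallest and largest decoded frame numbers, then apply the half-rate rule.

def jamming_result(marks, duration_s, video_frame_rate):
    first, last = min(marks), max(marks)
    actual_frame_rate = (last - first + 1) / duration_s
    return "jamming" if actual_frame_rate < video_frame_rate / 2 else "no jamming"

print(jamming_result(list(range(6)), duration_s=1.0, video_frame_rate=30))   # 6 fps  → "jamming"
print(jamming_result(list(range(28)), duration_s=1.0, video_frame_rate=30))  # 28 fps → "no jamming"
```

Because the span of frame numbers (rather than the count of captured frames) drives the estimate, frozen or slowed playback shrinks the measured rate even when the recording itself runs at full speed.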
In implementation, each module and/or unit may be implemented as an independent entity, or combined arbitrarily and implemented as the same entity or several entities. For the implementation of each module and/or unit and the specific beneficial effects that can be achieved, reference may be made to the foregoing method embodiments, which are not described herein again.
In addition, referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device may be a mobile terminal, such as a smart phone, a tablet computer, or the like. As shown in fig. 9, the electronic device 900 includes a processor 901 and a memory 902. The processor 901 is electrically connected to the memory 902.
Processor 901 is a control center of electronic device 900 that connects various portions of the overall electronic device using various interfaces and lines, and performs various functions of electronic device 900 and processes data by running or loading applications stored in memory 902, and invoking data stored in memory 902, thereby performing overall monitoring of electronic device 900.
In this embodiment, the processor 901 in the electronic device 900 loads instructions corresponding to the processes of one or more application programs into the memory 902, and runs the application programs stored in the memory 902, so as to implement any step of the video jamming detection method provided in the foregoing embodiments.
The electronic device 900 can implement the steps in any embodiment of the video jamming detection method provided by the embodiment of the present invention, and can therefore achieve the beneficial effects of any video jamming detection method provided by the embodiment of the present invention, which are described in detail in the previous embodiments and are not repeated herein.
Referring to fig. 10, fig. 10 is another schematic structural diagram of an electronic device provided in the embodiment of the present invention, showing a specific structural block diagram of the electronic device, which may be used to implement the video jamming detection method provided in the above embodiments. The electronic device 1000 may be a mobile terminal such as a smart phone or a notebook computer.
The RF circuit 1010 is configured to receive and transmit electromagnetic waves and to perform conversion between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. RF circuitry 1010 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, memory, and so forth. The RF circuitry 1010 may communicate with various networks, such as the internet, an intranet, or a wireless network, or with other devices via a wireless network. The wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-MAX), other protocols for mail, instant messaging, and short messaging, as well as any other suitable communication protocols, including protocols not yet developed at the time of filing.
The memory 1020 may be used to store software programs and modules, such as the program instructions/modules corresponding to the video jamming detection method in the above embodiments, and the processor 1080 may execute the software programs and modules stored in the memory 1020 to perform various functional applications and video jamming detection.
Memory 1020 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 1020 may further include memory located remotely from processor 1080, which may be connected to the electronic device 1000 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 1030 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit 1030 may include a touch-sensitive surface 1031 and other input devices 1032. The touch-sensitive surface 1031, also referred to as a touch display screen or touch pad, may collect touch operations by a user on or near it (e.g., operations performed by the user on or near the touch-sensitive surface 1031 using any suitable object or accessory such as a finger or a stylus), and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface 1031 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 1080; it can also receive commands from the processor 1080 and execute them. In addition, the touch-sensitive surface 1031 may be implemented in a variety of types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface 1031, the input unit 1030 may include other input devices 1032. In particular, the other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 1040 may be used to display information input by the user or information provided to the user, as well as various graphical user interfaces of the electronic device 1000, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 1040 may include a display panel 1041; optionally, the display panel 1041 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 1031 may overlay the display panel 1041; when the touch-sensitive surface 1031 detects a touch operation on or near it, the operation is communicated to the processor 1080 to determine the type of touch event, and the processor 1080 then provides a corresponding visual output on the display panel 1041 according to the type of touch event. Although in the figures the touch-sensitive surface 1031 and the display panel 1041 are implemented as two separate components, in some embodiments the touch-sensitive surface 1031 may be integrated with the display panel 1041 to implement the input and output functions.
The electronic device 1000 may also include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of ambient light, and the proximity sensor may generate an interrupt when the flip cover is closed or shut. As one type of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration), vibration-recognition related functions (such as a pedometer and tapping), and so on. Other sensors that may also be configured in the electronic device 1000, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described in detail herein.
Audio circuitry 1060, a speaker 1061, and a microphone 1062 may provide an audio interface between a user and the electronic device 1000. The audio circuit 1060 may convert received audio data into an electrical signal and transmit it to the speaker 1061, which converts it into a sound signal for output; on the other hand, the microphone 1062 converts collected sound signals into electrical signals, which are received by the audio circuit 1060 and converted into audio data; the audio data is then output to the processor 1080 for processing and transmitted via the RF circuit 1010 to, for example, another terminal, or output to the memory 1020 for further processing. The audio circuitry 1060 may also include an earphone jack to provide communication between peripheral headphones and the electronic device 1000.
The electronic device 1000, via the transmission module 1070 (e.g., a Wi-Fi module), can help the user receive requests, send information, and the like, providing the user with wireless broadband internet access. Although a transmission module 1070 is shown, it is understood that it is not an essential component of the electronic device 1000 and may be omitted as desired without changing the essence of the invention.
Processor 1080 is the control center of electronic device 1000; it connects the various parts of the overall electronic device using various interfaces and lines, and performs various functions of electronic device 1000 and processes data by running or executing software programs and/or modules stored in memory 1020 and invoking data stored in memory 1020, thereby monitoring the electronic device as a whole. Optionally, processor 1080 may include one or more processing cores; in some embodiments, processor 1080 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, with a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor may also not be integrated into processor 1080.
The electronic device 1000 also includes a power source 1090 (e.g., a battery) that supplies power to the various components; in some embodiments, the power source is logically coupled to the processor 1080 via a power management system to manage charging, discharging, and power consumption. The power source 1090 may also include one or more of a direct current or alternating current power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the electronic device 1000 may further include a camera (such as a front camera and a rear camera), a Bluetooth module, and the like, which are not described herein. Specifically, in this embodiment, the display unit of the electronic device is a touch screen display, and the mobile terminal further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs implement any step in the video jamming detection method provided in the foregoing embodiments.
In the implementation, each module may be implemented as an independent entity, or may be combined arbitrarily, and implemented as the same entity or several entities, and the implementation of each module may be referred to the foregoing method embodiment, which is not described herein again.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, an embodiment of the present application provides a storage medium in which a plurality of instructions are stored; when executed by a processor, the instructions can implement any of the steps of the video jamming detection method provided in the above embodiments.
The storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
Because of the instructions stored in the storage medium, the steps in any embodiment of the video jamming detection method provided by the embodiment of the present application can be executed, so the beneficial effects achievable by any video jamming detection method provided by the embodiment of the present application can be achieved; see the foregoing embodiments for details, which are not repeated herein.
The foregoing describes in detail the video jamming detection method, apparatus, electronic device, and storage medium provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the present application; therefore, the contents of this description should not be construed as limiting the present application. Moreover, it will be apparent to those skilled in the art that various modifications and variations can be made without departing from the principles of the present application, and such modifications and variations are also considered to be within the scope of the present application.

Claims (10)

1. A video jamming detection method, comprising:
marking the video to be detected to obtain a first video containing marking information;
recording the playing process of the first video to obtain a second video;
acquiring marking information acquired in a target time period in the second video;
and determining a jamming result of the playing process according to the video frame rate of the second video and the collected marking information.
2. The method of claim 1, wherein the marking the video to be detected to obtain the first video containing marking information comprises:
carrying out image frame extraction processing on the video to be detected to obtain a plurality of first images;
marking the first image according to the image frame number of the first image in the video to be detected to obtain a plurality of second images containing marking information, wherein the marking information corresponds to the image frame number;
and carrying out video synthesis processing on the plurality of second images to obtain a first video containing marking information.
3. The method according to claim 2, wherein the marking the first image according to the image frame number of the first image in the video to be detected to obtain a plurality of second images containing marking information includes:
generating marking information associated with the image frame number according to the image frame number of the first image;
and adding the marking information into the first image to obtain a plurality of second images containing the marking information.
4. The method of claim 3, wherein the tag information comprises two-dimensional code information;
the step of adding the marking information to the first image to obtain a plurality of second images containing the marking information, which comprises the following steps:
and adding the two-dimensional code information into the first image in the form of an image to obtain a plurality of second images containing two-dimensional code images, wherein the size of the two-dimensional code images is smaller than that of the first image.
5. The method according to claim 3, wherein the adding the marking information to the first image to obtain a plurality of second images containing marking information comprises:
constructing a marked area in the first image, wherein the pixel value of the marked area is different from the pixel value of a non-marked area, and the non-marked area is an area except the marked area in the first image;
and adding the marking information into the marking area to obtain a plurality of second images containing the marking information.
6. The method of claim 2, wherein said video composition of said plurality of second images to obtain a first video containing marking information comprises:
sorting the plurality of second images according to the image frame numbers corresponding to the marking information to obtain a plurality of sorted second images;
and carrying out video reconstruction on the plurality of sequenced second images at the target video frame rate to obtain a first video containing marking information.
7. The method of claim 2, wherein determining the jamming result of the playing process based on the video frame rate of the second video and the collected marking information comprises:
determining the image frame number of each frame image corresponding to the collected marking information;
determining a first image frame number with the minimum sequence number and a second image frame number with the maximum sequence number in the image frame numbers of the frame images;
and determining a jamming result of the playing process according to the difference between the first image frame number and the second image frame number, the time length of the target time period, and the video frame rate of the second video.
8. A video jamming detection apparatus, comprising:
the marking module is used for marking the video to be detected to obtain a first video containing marking information;
the recording module is used for recording the playing process of the first video to obtain a second video;
the acquisition module is used for acquiring the mark information acquired in the target time period in the second video;
and the jamming detection module is used for determining a jamming result of the playing process according to the video frame rate of the second video and the collected marking information.
9. An electronic device comprising a processor, a memory and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps in the method according to any one of claims 1 to 7.
CN202310726212.7A 2023-06-16 2023-06-16 Video jamming detection method and device, electronic equipment and storage medium Pending CN117135399A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310726212.7A CN117135399A (en) 2023-06-16 2023-06-16 Video jamming detection method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117135399A true CN117135399A (en) 2023-11-28

Family

ID=88855470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310726212.7A Pending CN117135399A (en) 2023-06-16 2023-06-16 Video jamming detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117135399A (en)

Similar Documents

Publication Publication Date Title
CN110109593B (en) Screen capturing method and terminal equipment
CN107241552B (en) Image acquisition method, device, storage medium and terminal
CN111078523B (en) Log acquisition method and device, storage medium and electronic equipment
WO2018161540A1 (en) Fingerprint registration method and related product
CN110069407B (en) Function test method and device for application program
CN111405043A (en) Information processing method and device and electronic equipment
US10706282B2 (en) Method and mobile terminal for processing image and storage medium
WO2019015574A1 (en) Unlocking control method and related product
CN112489082A (en) Position detection method, position detection device, electronic equipment and readable storage medium
US11200437B2 (en) Method for iris-based living body detection and related products
CN111427644A (en) Target behavior identification method and electronic equipment
CN109726726B (en) Event detection method and device in video
CN109213398A (en) A kind of application quick start method, terminal and computer readable storage medium
CN117135399A (en) Video jamming detection method and device, electronic equipment and storage medium
CN109451295A (en) A kind of method and system obtaining virtual information
CN111355991B (en) Video playing method and device, storage medium and mobile terminal
CN114140655A (en) Image classification method and device, storage medium and electronic equipment
CN108829600B (en) Method and device for testing algorithm library, storage medium and electronic equipment
CN113744736A (en) Command word recognition method and device, electronic equipment and storage medium
CN115221888A (en) Entity mention identification method, device, equipment and storage medium
CN110750193A (en) Scene topology determination method and device based on artificial intelligence
CN112379857B (en) Audio data processing method and device, storage medium and mobile terminal
CN109084771A (en) Air navigation aid and device based on airport icon
CN112181266B (en) Graphic code identification method, device, terminal and storage medium
CN111221782B (en) File searching method and device, storage medium and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination