CN116760968A - Video playing effect detection method and device and computer readable storage medium - Google Patents


Info

Publication number
CN116760968A
Authority
CN
China
Prior art keywords
gray, difference, images, value, gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310672753.6A
Other languages
Chinese (zh)
Inventor
唐晓微
戴忠旭
魏治国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Huaqin Electronic Technology Co ltd
Original Assignee
Nanchang Huaqin Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Huaqin Electronic Technology Co ltd
Priority to CN202310672753.6A
Publication of CN116760968A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23418 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a video playing effect detection method and device, and a computer readable storage medium, comprising the following steps: obtaining K gray maps corresponding respectively to K consecutive frame images in a video to be detected; determining K-1 gray difference maps from the K gray maps; determining the corresponding K-1 gray averages respectively from the K-1 gray difference maps; determining K-2 gray-average differences from the K-1 gray averages; and, when the absolute values of the K-2 gray-average differences are all smaller than a first preset threshold, using a classifier model on the K-1 gray difference maps to judge whether the K frame images contain a stutter. The method solves the problem that existing video detection methods are too simplistic and are prone to missing or falsely detecting video stutter.

Description

Video playing effect detection method and device and computer readable storage medium
Technical Field
The present invention relates to the field of video detection technologies, and in particular to a video playing effect detection method and device, and a computer readable storage medium.
Background
Electronic devices such as mobile phones, tablets and vehicle-mounted screens are used to play video, and defects such as stutter, flash-back and black screen can occur during playback. To avoid these defects after an electronic device is put into use, the video playback effect of the device must be detected before it leaves the factory.
At present, video playback effects are detected mainly by manual testing and automated equipment testing. In manual testing, long test sessions fatigue the tester, making it difficult to observe stutter accurately at every moment, so the results are subjective and uncertain. Automated equipment testing comprises invasive testing and non-invasive testing. Invasive testing requires signal-level interaction with the electronic device, which disturbs the very metrics being measured. Non-invasive testing uses a robotic arm to simulate a human user operating the device while a high-speed camera records the playback, and then analyses the recorded video algorithmically.
Most current non-invasive algorithms first compute the difference between adjacent frames of the video and then compare it against a preset threshold to decide whether a stutter exists. This approach is simple and one-dimensional and depends heavily on the threshold: if the threshold is set too high, subtle changes go undetected and stutters are missed; if it is set too low, normal frames are flagged as stutters, causing false detections. A detection method that can accurately determine whether a video stutters is therefore needed.
Disclosure of Invention
The invention provides a video playing effect detection method and device, and a computer readable storage medium, to solve the problem that existing video detection methods are too simplistic and are prone to missing or falsely detecting video stutter.
In a first aspect, an embodiment of the present invention provides a video playing effect detection method, including:
obtaining K gray maps, where the K gray maps correspond respectively to K consecutive frame images in a video to be detected, and K is a positive integer greater than or equal to 3;
determining K-1 gray difference maps from the K gray maps, where the i-th gray difference map is determined from the i-th and (i+1)-th of the K gray maps, and i is a positive integer less than or equal to K-1;
determining the corresponding K-1 gray averages respectively from the K-1 gray difference maps, where the i-th gray average is the average gray value of the motion pixels in the i-th gray difference map, a motion pixel being a pixel whose gray value exceeds a preset gray value;
determining K-2 gray-average differences from the K-1 gray averages, where the j-th gray-average difference is the difference between the j-th and (j+1)-th of the K-1 gray averages, and j is a positive integer less than or equal to K-2;
and, when the absolute values of the K-2 gray-average differences are all smaller than a first preset threshold, judging, with a classifier model and from the K-1 gray difference maps, whether the K frame images contain a stutter.
With this method, the gray maps corresponding to several consecutive frames are obtained, the corresponding gray difference maps are derived from them, and the average gray value of the motion pixels in each gray difference map is determined. The difference between every two adjacent gray averages is then computed. If the absolute value of a gray-average difference is greater than or equal to the preset threshold, the probability that the corresponding frames contain a stutter is low, so the classifier is not needed. Only when all absolute gray-average differences fall below the preset threshold is the classifier model used to decide whether a stutter exists, which improves the accuracy of the judgment while saving time and resources. This solves the problem that existing stutter detection methods are too simplistic and prone to missed or false judgments; the method is both accurate and efficient.
Optionally, before judging with the classifier model and from the K-1 gray difference maps whether the K consecutive frame images contain a stutter, the method further includes:
determining the corresponding K-1 motion-pixel counts respectively from the K-1 gray difference maps;
dividing each of the K-1 motion-pixel counts by the product of the width and height of the corresponding gray difference map to obtain the corresponding K-1 motion-pixel ratios;
and determining that the K-1 motion-pixel ratios are all smaller than a second preset threshold.
With this method, the classifier model is invoked only after the motion-pixel ratios of the gray difference maps fall below the preset threshold, so the images it judges already have a high probability of containing a stutter. This improves the accuracy of the stutter judgment and avoids running the classifier on every image, which would waste resources and time.
Optionally, before judging with the classifier model and from the K-1 gray difference maps whether the K consecutive frame images contain a stutter, the method further includes:
sorting the motion pixels in each of the K-1 gray difference maps in descending order of gray value, and summing the gray values of the top N motion pixels of each map to obtain the gray-value sums respectively corresponding to the K-1 gray difference maps, where N is a positive integer;
and determining that the gray-value sums respectively corresponding to the K-1 gray difference maps are all smaller than a third preset threshold.
With this method, the classifier model is invoked only after the gray-value sums of the gray difference maps fall below the preset threshold, so the images it judges already have a high probability of containing a stutter. This improves the accuracy of the stutter judgment and avoids running the classifier on every image, which would waste resources and time.
Optionally, after the K gray maps are obtained, the method further includes:
judging, with the classifier model and from the K gray maps, whether the K frame images contain a black screen or a flash-back, where the classifier model is a lightweight mobile network (MobileNet) combined with a convolutional neural network.
By incorporating the convolutional neural network, the classifier model can handle images of various sizes.
In a second aspect, an embodiment of the present invention provides a video playing effect detection apparatus, including:
a transceiver unit, configured to obtain K gray maps, where the K gray maps correspond respectively to K consecutive frame images in a video to be detected, and K is a positive integer greater than or equal to 3;
a processing unit, configured to determine K-1 gray difference maps from the K gray maps, where the i-th gray difference map is determined from the i-th and (i+1)-th of the K gray maps, and i is a positive integer less than or equal to K-1; to determine the corresponding K-1 gray averages respectively from the K-1 gray difference maps, where the i-th gray average is the average gray value of the motion pixels in the i-th gray difference map, a motion pixel being a pixel whose gray value exceeds a preset gray value; and to determine K-2 gray-average differences from the K-1 gray averages, where the j-th gray-average difference is the difference between the j-th and (j+1)-th of the K-1 gray averages, and j is a positive integer less than or equal to K-2;
the processing unit is further configured to judge, with a classifier model and from the K-1 gray difference maps, whether the K frame images contain a stutter when the absolute values of the K-2 gray-average differences are all smaller than a first preset threshold.
Optionally, before judging with the classifier model whether the K consecutive frame images contain a stutter, the processing unit is configured to determine the corresponding K-1 motion-pixel counts respectively from the K-1 gray difference maps; to divide each count by the product of the width and height of the corresponding gray difference map to obtain the K-1 motion-pixel ratios; and to determine that the K-1 motion-pixel ratios are all smaller than a second preset threshold.
Optionally, before judging with the classifier model whether the K consecutive frame images contain a stutter, the processing unit is configured to sort the motion pixels in each of the K-1 gray difference maps in descending order of gray value, to sum the gray values of the top N motion pixels of each map to obtain the gray-value sums corresponding to the K-1 gray difference maps, where N is a positive integer, and to determine that these sums are all smaller than a third preset threshold.
Optionally, after the K gray maps are obtained, the processing unit judges, with the classifier model and from the K gray maps, whether the K frame images contain a black screen or a flash-back, where the classifier model is a lightweight mobile network (MobileNet) combined with a convolutional neural network.
In a third aspect, the present application further provides an apparatus that can carry out the above method designs. The apparatus may be a chip or a circuit capable of performing the functions of the above methods, or a device including such a chip or circuit.
In one possible implementation, the apparatus includes: a memory for storing computer-executable program code; and a processor coupled to the memory. The program code stored in the memory comprises instructions which, when executed by the processor, cause the apparatus, or the device in which the apparatus is installed, to carry out the method of any one of the possible designs described above.
The apparatus may further comprise a communication interface, which may be a transceiver; if the apparatus is a chip or a circuit, the communication interface may be the input/output interface of the chip, such as input/output pins.
In one possible design, the apparatus comprises functional units that respectively implement the steps of the above method. These functions may be realized by hardware, or by hardware executing corresponding software; the hardware or software includes one or more units corresponding to the above functions.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when run on a device, performs the method of any one of the above possible designs.
The technical effects of any implementation of the third and fourth aspects can be found in the description of the corresponding implementations of the first aspect and are not repeated here.
Drawings
fig. 1 is a schematic flow chart of acquiring a video to be detected according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a video playing effect detection method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of the second precondition for judging with a classifier model whether the K frame images contain a stutter according to an embodiment of the present application;
fig. 4 is a schematic flow chart of the third precondition for judging with a classifier model whether the K frame images contain a stutter according to an embodiment of the present application;
fig. 5 is a schematic diagram of a communication device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of another communication device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without inventive effort fall within the scope of the invention.
The application scenarios described in the embodiments of the invention are intended to explain the technical solutions of the embodiments more clearly and do not limit them; as a person of ordinary skill in the art will appreciate, the technical solutions provided remain applicable to similar technical problems as new application scenarios appear. In the description of the present invention, unless otherwise indicated, "a plurality" means two or more.
Electronic devices are used to play video, and defects such as stutter, black screen and flash-back can occur during playback; to avoid these defects after a device is put into use, its video playback effect must be detected before it leaves the factory.
On this basis, the application provides a video playing effect detection method and device, to solve the missed-detection and false-detection problems caused by existing video detection methods being too simplistic.
Before the video playing effect is detected, the video to be detected must be acquired. As shown in fig. 1, the flow of acquiring the video to be detected is as follows.
Step 101: acquire the icon of the software that plays the video to be detected, and extract the feature points and feature descriptors of the software icon.
For example, the software for playing the video to be detected is preset, its icon is acquired, and the feature points and feature descriptors of the icon are extracted. Any software with a video playing function may be used; the application is not limited in this respect.
Illustratively, the feature points and feature descriptors of the software icon are extracted by an algorithm. The algorithm may be the Scale-Invariant Feature Transform (SIFT) or any other suitable algorithm; the application is not limited in this respect.
Specifically, when the SIFT algorithm is adopted, it takes five parameters: the number of feature points, the number of layers per octave in the pyramid, a contrast threshold, an edge threshold and the Gaussian filter coefficient. Because the software icon is a small image, the contrast threshold is set below its default value so that more feature points and feature descriptors are obtained, which facilitates the subsequent image matching.
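A minimal sketch of step 101 under stated assumptions: OpenCV's SIFT implementation is used, the icon path is hypothetical, and the lowered contrast threshold value is illustrative, since the patent only says it is set below the default.

```python
import cv2

icon = cv2.imread("player_icon.png", cv2.IMREAD_GRAYSCALE)  # hypothetical icon path

# The five SIFT parameters named above; contrastThreshold is set below the
# OpenCV default of 0.04 so that more keypoints survive on a small icon.
sift_icon = cv2.SIFT_create(
    nfeatures=0,             # 0 = keep all feature points
    nOctaveLayers=3,         # layers per octave in the pyramid
    contrastThreshold=0.01,  # lowered from the default (assumed value)
    edgeThreshold=10,        # edge response threshold
    sigma=1.6,               # Gaussian filter coefficient
)
icon_kp, icon_desc = sift_icon.detectAndCompute(icon, None)
```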
Step 102: acquire the feature points and feature descriptors of the image displayed by the electronic device, and match them against those of the icon of the software that plays the video to be detected.
Illustratively, after the software for playing the video to be detected is chosen in step 101, it is installed on the electronic device, i.e. the device that needs to play the video to be detected.
After the feature points and feature descriptors of the software icon have been extracted in step 101 and the software has been installed, the electronic device is placed at the designated detection position and detection is triggered. The feature points and feature descriptors of the device's displayed image are extracted by an algorithm and matched against those of the icon extracted in step 101. The displayed image is the desktop of the electronic device and contains a number of software icons. The algorithm may be SIFT, with default parameters, or any other suitable algorithm; the application is not limited in this respect.
Specifically, if the matching succeeds, the displayed image of the electronic device contains the icon of the software that plays the video to be detected, and the flow jumps to step 103; if the matching fails, the displayed image does not contain that icon, and the flow ends.
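Continuing the sketch above (reusing icon_desc), the matching of step 102 could look as follows; the brute-force matcher with Lowe's ratio test and the minimum match count are assumptions, since the patent does not specify the matching rule.

```python
screen = cv2.imread("device_screen.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
sift = cv2.SIFT_create()  # default parameters for the display image
screen_kp, screen_desc = sift.detectAndCompute(screen, None)

# Brute-force matching with Lowe's ratio test (assumed matching rule).
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(icon_desc, screen_desc, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]

MIN_MATCHES = 10  # assumed; the patent gives no count
if len(good) >= MIN_MATCHES:
    print("icon found on the desktop -> jump to step 103")
else:
    print("icon not found -> end the flow")
```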
Step 103: trigger the mechanical arm to tap the software that plays the video to be detected, and record the playback picture of the electronic device with a camera to obtain the video to be detected.
Illustratively, once step 102 determines that the displayed image contains the icon of the playback software, the mechanical arm is triggered to tap the software and then tap the video to be detected, and a camera records the playback picture of the electronic device to obtain the video to be detected. Any video in the software may be played; the application is not limited in this respect. The recorded picture may be cut into one-minute segments, each placed into a designated folder as a video to be detected, or into segments of any other duration; the application is not limited in this respect.
For example, after the video to be detected is obtained, K consecutive frame images are extracted from it and converted into the corresponding K gray maps, where K is a positive integer greater than or equal to 3. To keep the subsequent stutter detection tractable, K should be as small as possible; typically K is a positive integer of 5 or less.
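A minimal sketch of this frame extraction, assuming OpenCV and a hypothetical path for the recorded video; K = 5 follows the guidance above.

```python
import cv2

K = 5  # "K <= 5" per the guidance above
cap = cv2.VideoCapture("video_to_detect.mp4")  # hypothetical recorded video
gray_maps = []
while len(gray_maps) < K:
    ok, frame = cap.read()
    if not ok:
        break  # fewer than K frames were available
    gray_maps.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()
```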
For example, before judging whether the images contain a stutter, it can first be judged whether they contain a black screen or a flash-back. After the K gray maps are obtained, the classifier model is applied to them to judge whether the corresponding K frame images contain a black screen or a flash-back. Specifically, when the classifier model judges that a gray map is a completely black image, the state of the corresponding frame is determined to be a black screen and the video playing category output for that frame is "black screen"; when the classifier model judges that a gray map corresponds to the desktop image, the state of the corresponding frame is determined to be a flash-back and the category output is "flash-back". In particular, if the first frame's gray map corresponds to the desktop image, the video to be detected failed to play at all.
Specifically, the classifier model is a lightweight mobile network (MobileNet) combined with a convolutional neural network; the combination is realized by replacing the penultimate fully connected layer of the network with a convolutional layer. The classifier model outputs four categories: flash-back, motion, black screen and other. The flash-back category means the image is the desktop; the motion category means the image is in a normal playing state; the black screen category means the image is completely black; and the other category means the image is in an abnormal state, such as a stutter. Thanks to the convolutional layer, the classifier model can handle images of different sizes.
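The patent does not publish the network, so the following is only a sketch of the design idea: a small stand-in backbone whose classification head is a 1x1 convolution followed by global average pooling, which is one common way the "fully connected layer replaced by a convolutional layer" change makes a classifier size-agnostic. PyTorch is assumed; the class name and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class PlaybackClassifier(nn.Module):
    """Illustrative stand-in, not the patented network."""

    def __init__(self, num_classes=4):  # flash-back, motion, black screen, other
        super().__init__()
        self.features = nn.Sequential(  # stand-in for the lightweight backbone
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # The penultimate fully connected layer replaced by a convolution:
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):                # x: (B, 1, H, W) for any H and W
        x = self.head(self.features(x))  # (B, num_classes, H', W')
        return x.mean(dim=(2, 3))        # global average pool -> (B, num_classes)

logits = PlaybackClassifier()(torch.rand(1, 1, 720, 1280))  # any input size works
```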
For example, after the classifier model has judged whether the K frame images contain a black screen or a flash-back, the judgment results are written into a flash-back queue and a black-screen queue created for this purpose, with the serial number of each frame mapped to its judgment result in both queues. Taking the first two frames as an example: if the state of frame 1 is a black screen, FALSE is written at its position in the flash-back queue and TRUE in the black-screen queue; if the state of frame 2 is a flash-back, TRUE is written at its position in the flash-back queue and FALSE in the black-screen queue. By counting each queue, for instance with a counter, the flash-back duration or black-screen duration of the K frame images, and hence of the video to be detected, can be obtained.
After the K gray maps of the video to be detected have been obtained and the black-screen and flash-back detection has been performed, stutter detection can be carried out on the video. As shown in fig. 2, the flow of the video playing effect detection method provided by the application is as follows.
Step 200: obtain K gray maps.
Specifically, the K gray maps are the ones obtained after acquiring the video to be detected in step 103.
Step 210: determine K-1 gray difference maps from the K gray maps.
Illustratively, after the K gray maps are obtained in step 200, K-1 gray difference maps can be determined from them. The i-th gray difference map is determined from the i-th and (i+1)-th of the K gray maps, where i is a positive integer less than or equal to K-1.
Specifically, the i-th gray difference map is the absolute value of the per-pixel difference between the gray values of the i-th and (i+1)-th gray maps at the same position coordinates.
For example, after the K-1 gray difference maps are obtained, they can be placed into a gray-difference-map queue created for this purpose, for the subsequent stutter judgment.
Illustratively, take K = 5 and denote the 5 gray maps A, B, C, D, E. Subtracting, pixel by pixel at the same position coordinates, the gray values of the 2nd gray map B from those of the 1st gray map A, and taking the absolute value of each difference, yields the 1st gray difference map AB. Similarly, the 2nd gray difference map BC, the 3rd gray difference map CD and the 4th gray difference map DE are obtained.
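A sketch of step 210, assuming OpenCV and NumPy; the random arrays are stand-ins for the K = 5 gray maps A..E from step 200, and the loop yields the four difference maps AB, BC, CD, DE described above.

```python
import cv2
import numpy as np

# Stand-ins for the K = 5 gray maps A..E obtained in step 200.
gray_maps = [np.random.randint(0, 256, (720, 1280), dtype=np.uint8) for _ in range(5)]

# K-1 gray difference maps: per-pixel absolute difference of adjacent gray maps.
diff_maps = [cv2.absdiff(gray_maps[i], gray_maps[i + 1])
             for i in range(len(gray_maps) - 1)]
```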
Step 220: determine the corresponding K-1 gray averages respectively from the K-1 gray difference maps.
Illustratively, after the K-1 gray difference maps are obtained in step 210, the corresponding K-1 gray averages can be determined from them. The i-th gray average is the average gray value of the motion pixels in the i-th gray difference map, i being a positive integer less than or equal to K-1. Specifically, a motion pixel is a pixel whose gray value exceeds a preset gray value; the preset gray value is determined from experience and is typically 10.
Illustratively, after the K-1 gray averages are obtained, they can be placed into a gray-average queue created for this purpose, for use in subsequent calculations.
Illustratively, with K = 5, gray maps A, B, C, D, E and gray difference maps AB, BC, CD, DE: the 1st gray average is the average gray value of the motion pixels in the 1st gray difference map AB, the 2nd that of BC, the 3rd that of CD, and the 4th that of DE.
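A sketch of step 220, continuing from the difference maps of step 210 and using the preset gray value 10 mentioned above; NumPy is assumed.

```python
import numpy as np

PRESET_GRAY = 10  # preset gray value from the description above

def motion_pixel_mean(diff_map: np.ndarray) -> float:
    """Average gray value of the motion pixels of one gray difference map."""
    motion = diff_map[diff_map > PRESET_GRAY]        # motion pixels only
    return float(motion.mean()) if motion.size else 0.0

# diff_maps as built in step 210 (stand-ins shown there).
gray_means = [motion_pixel_mean(d) for d in diff_maps]
```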
Step 230: determine the K-2 gray-average differences from the K-1 gray averages.
Illustratively, after the K-1 gray averages are obtained in step 220, the K-2 gray-average differences can be determined from them, specifically by taking the K-1 gray averages from the gray-average queue. The j-th gray-average difference is the difference between the j-th and (j+1)-th of the K-1 gray averages, j being a positive integer less than or equal to K-2.
Illustratively, with K = 5, gray maps A, B, C, D, E and gray difference maps AB, BC, CD, DE: let the 1st gray average, corresponding to AB, be a; the 2nd, corresponding to BC, be b; the 3rd, corresponding to CD, be c; and the 4th, corresponding to DE, be d. Then the 1st gray-average difference, from the 1st and 2nd gray averages, is a-b; the 2nd, from the 2nd and 3rd, is b-c; and the 3rd, from the 3rd and 4th, is c-d.
Step 240: when the absolute values of the K-2 gray-average differences are all smaller than a first preset threshold, judge, with a classifier model and from the K-1 gray difference maps, whether the K frame images contain a stutter.
Illustratively, after the K-2 gray-average differences are obtained in step 230, their absolute values are compared with a first preset threshold, which is determined from experience. When all of them are smaller than the threshold, the K-1 gray difference maps placed in the queue in step 210 are retrieved and the classifier model is applied to them to judge whether the K frame images contain a stutter.
Specifically, when the absolute value of a gray-average difference is smaller than the first preset threshold, the gray averages of two adjacent gray difference maps differ little and the probability of a stutter is high; when it is greater than the first preset threshold, the averages differ markedly and the probability of a stutter is low.
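A sketch of steps 230 and 240 taken together; the gray averages and the first preset threshold are stand-in values, since the patent determines the threshold from experience without publishing it.

```python
import numpy as np

gray_means = [12.0, 12.4, 11.8, 12.1]  # stand-in K-1 gray averages from step 220
FIRST_THRESHOLD = 2.0                  # assumed empirical value

mean_diffs = np.diff(gray_means)       # the K-2 gray-average differences
first_ok = bool(np.all(np.abs(mean_diffs) < FIRST_THRESHOLD))
if first_ok:
    # Retrieve the K-1 difference maps from the queue and apply the
    # classifier model to them (the model itself is not shown here).
    print("first precondition met -> run the stutter classifier")
```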
Illustratively, step 240 is the first precondition for judging with the classifier model whether the K frame images contain a stutter. Besides it there is a second precondition, shown in fig. 3, specifically:
Step 301: determine the corresponding K-1 motion-pixel counts respectively from the K-1 gray difference maps.
The K-1 gray difference maps placed in the queue in step 210 are retrieved and the corresponding K-1 motion-pixel counts are determined from them. Specifically, after the K-1 counts are obtained, they may be placed into a motion-pixel-count queue created for this purpose, for use in subsequent calculations.
Step 302: determine the corresponding K-1 motion-pixel ratios respectively from the K-1 motion-pixel counts.
Illustratively, after the K-1 motion-pixel counts are placed into the queue in step 301, each count is divided by the product of the width and height of the corresponding gray difference map to obtain the corresponding K-1 motion-pixel ratios.
Illustratively, with K = 5, gray maps A, B, C, D, E and gray difference maps AB, BC, CD, DE: let the motion-pixel count of AB be x1 and the product of its width and height be y1; of BC, x2 and y2; of CD, x3 and y3; and of DE, x4 and y4. Then the motion-pixel ratios of AB, BC, CD and DE are x1/y1, x2/y2, x3/y3 and x4/y4 respectively.
Step 303: judge that the K-1 motion-pixel ratios are all smaller than a second preset threshold.
Illustratively, after the K-1 motion-pixel ratios are obtained, they are compared with a second preset threshold, which is determined from experience. When all of them are smaller than the threshold, the K-1 gray difference maps placed in the queue in step 210 are retrieved and the classifier model is applied to them to judge whether the K frame images contain a stutter.
Specifically, when a motion-pixel ratio is smaller than the second preset threshold, the proportion of motion pixels among all pixels is low and the probability of a stutter is high; when it is greater than the second preset threshold, the proportion is high and the probability of a stutter is low.
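A sketch of steps 301 to 303 under the same assumptions as above; the second preset threshold is a stand-in, as the patent determines it from experience without publishing a value.

```python
import numpy as np

PRESET_GRAY = 10
# Stand-ins for the K-1 gray difference maps from step 210.
diff_maps = [np.random.randint(0, 30, (720, 1280), dtype=np.uint8) for _ in range(4)]

# Motion-pixel ratio = motion-pixel count / (width * height) of each map.
motion_ratios = [float((d > PRESET_GRAY).sum()) / (d.shape[1] * d.shape[0])
                 for d in diff_maps]

SECOND_THRESHOLD = 0.05  # assumed empirical value
second_ok = all(r < SECOND_THRESHOLD for r in motion_ratios)
```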
Illustratively, besides the first precondition of step 240 and the second precondition of steps 301-303, judging with the classifier model whether the K frame images contain a stutter also has a third precondition, shown in fig. 4, specifically:
Step 401: determine the corresponding K-1 gray-value sums respectively from the K-1 gray difference maps.
The K-1 gray difference maps placed in the queue in step 210 are retrieved, the motion pixels of each map are sorted in descending order of gray value, and the gray values of the top N motion pixels of each map are summed, giving the gray-value sums respectively corresponding to the K-1 gray difference maps. N is a positive integer determined from experience. Specifically, after the K-1 gray-value sums are obtained, they may be placed into a gray-value-sum queue created for this purpose, for use in subsequent calculations.
Step 402: judge that the K-1 gray-value sums are all smaller than a third preset threshold.
Illustratively, after the K-1 gray-value sums are obtained in step 401, they are compared with a third preset threshold, which is determined from experience. When all of them are smaller than the threshold, the K-1 gray difference maps placed in the queue in step 210 are retrieved and the classifier model is applied to them to judge whether the K frame images contain a stutter.
Specifically, when a gray-value sum is smaller than the third preset threshold, even the top-N motion pixels have small gray values, meaning the gray values of the corresponding gray difference map are small and the two gray maps it was derived from differ little, so the probability of a stutter is high; otherwise, the probability of a stutter is low.
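A sketch of steps 401 and 402; N and the third preset threshold are stand-ins, since the patent only says they are determined from experience.

```python
import numpy as np

PRESET_GRAY = 10
N = 100                  # assumed; determined from experience in the patent
THIRD_THRESHOLD = 5000   # assumed empirical value

# Stand-ins for the K-1 gray difference maps from step 210.
diff_maps = [np.random.randint(0, 30, (720, 1280), dtype=np.uint8) for _ in range(4)]

top_n_sums = []
for d in diff_maps:
    motion = np.sort(d[d > PRESET_GRAY])[::-1]  # motion pixels, descending gray
    top_n_sums.append(int(motion[:N].sum()))    # sum of the top N gray values

third_ok = all(s < THIRD_THRESHOLD for s in top_n_sums)
```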
In summary, the application provides three preconditions for judging with the classifier model whether the K frame images contain a stutter, specifically:
first: judge that the absolute values of the K-2 gray-average differences are all smaller than a first preset threshold;
second: judge that the K-1 motion-pixel ratios are all smaller than a second preset threshold;
third: judge that the K-1 gray-value sums are all smaller than a third preset threshold.
The classifier-based stutter judgment may be performed when any one, any two, or all three of these preconditions are met; the application is not limited in this respect.
Illustratively, the classifier model judges whether the K frame images contain a stutter as follows: when the classifier model judges that a gray difference map is in a non-motion state, the state of the corresponding frame is determined to be a stutter and the video playing category output for it is "other"; when the classifier model judges that a gray difference map is in a motion state, the state of the corresponding frame is determined to be motion and the category output is "motion".
After the classifier model has judged whether the K frame images contain a stutter, the judgment results are written into a stutter queue created for this purpose, with the serial number of each frame mapped to its judgment result in the queue. Taking the first two frames as an example: if the state of frame 1 is a stutter, TRUE is written at its position in the stutter queue; if the state of frame 2 is motion, FALSE is written at its position. By counting the runs of adjacent TRUE records in the stutter queue, for instance with a counter, the stutter duration and the specific stutter paragraphs of the K frame images, and hence of the video to be detected, can be obtained.
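A sketch of this run-length bookkeeping: scanning the stutter queue for contiguous TRUE runs yields each stutter paragraph and its duration. The frame rate is an assumed parameter, and the same routine applies to the flash-back and black-screen queues.

```python
def stutter_segments(stutter_queue, fps=30.0):
    """Return (start_frame, end_frame, seconds) for each run of TRUE records."""
    segments, start = [], None
    for idx, stuck in enumerate(stutter_queue):
        if stuck and start is None:
            start = idx                      # a stutter paragraph begins
        elif not stuck and start is not None:
            segments.append((start, idx - 1, (idx - start) / fps))
            start = None
    if start is not None:                    # queue ends inside a stutter
        end = len(stutter_queue)
        segments.append((start, end - 1, (end - start) / fps))
    return segments

# Indices 1-2 form one stutter paragraph, index 4 another.
print(stutter_segments([False, True, True, False, True]))
```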
With the above method, the gray maps of several frames of the video to be detected are obtained, the gray difference maps are derived from them, and, based on the difference maps and the extracted motion pixels, three quantities are determined: the absolute values of the gray-average differences, the motion-pixel ratios of the gray difference maps, and the gray-value sums. When any one, any two, or all three of these quantities satisfy the corresponding preset threshold, the classifier model is used to judge whether a stutter exists. Thus not every image is checked for a stutter, only those with a high stutter probability, which saves time. By combining the preconditions, the preset thresholds and the model, the method solves the problem that existing stutter detection methods are too simplistic and prone to missed or false judgments, and improves the accuracy of the stutter judgment. In addition, black screens and flash-backs can be judged directly by the classifier model, making the detection comprehensive.
The division into units in the embodiments of the present invention is schematic and is merely a logical functional division; other divisions are possible in actual implementations. In addition, the functional units in the embodiments may be integrated in one processor, may exist separately and physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units.
The embodiment of the present invention further provides a communication device 500; referring to fig. 5, the communication device 500 includes a processing module 510 and a transceiver module 520.
The transceiver module 520 may include a receiving module and a transmitting module. The processing module 510 controls and manages the operation of the communication device 500, and the transceiver module 520 supports communication between the communication device 500 and other devices. Optionally, the communication device 500 may further comprise a storage module for storing the program code and data of the communication device 500.
Optionally, each module in the communication device 500 may be implemented in software.
Optionally, the processing module 510 may be a processor or a controller, and the transceiver module 520 may be a communication interface, a transceiver, a transceiver circuit or the like; "communication interface" is a collective term and may in practice comprise multiple interfaces. The storage module may be a memory.
In one possible implementation, the communication apparatus 500 is adapted for use with a radio access controller device or a radio access point device;
the transceiver module 520 is configured to obtain K gray maps, where the K gray maps correspond respectively to K consecutive frame images in a video to be detected, and K is a positive integer greater than or equal to 3;
the processing module 510 is configured to determine K-1 gray difference maps from the K gray maps, where the i-th gray difference map is determined from the i-th and (i+1)-th of the K gray maps, and i is a positive integer less than or equal to K-1; to determine the corresponding K-1 gray averages respectively from the K-1 gray difference maps, where the i-th gray average is the average gray value of the motion pixels in the i-th gray difference map, a motion pixel being a pixel whose gray value exceeds a preset gray value; and to determine K-2 gray-average differences from the K-1 gray averages, where the j-th gray-average difference is the difference between the j-th and (j+1)-th of the K-1 gray averages, and j is a positive integer less than or equal to K-2;
the processing module 510 is further configured to judge, with a classifier model and from the K-1 gray difference maps, whether the K frame images contain a stutter when the absolute values of the K-2 gray-average differences are all smaller than a first preset threshold.
The embodiment of the present invention further provides another communication apparatus 600, where the communication apparatus 600 may be a terminal device or a chip system inside the terminal device, as shown in fig. 6, including:
a communication interface 601, a memory 602, and a processor 603;
wherein the communication device 600 communicates with other apparatuses through the communication interface 601, for example to receive and send messages; the memory 602 is used to store program instructions; and the processor 603 is used to call the program instructions stored in the memory 602 and perform the method according to the obtained program.
Through the communication interface 601, the processor 603 calls and executes the program instructions stored in the memory 602 to:
obtain K gray maps, where the K gray maps correspond respectively to K consecutive frame images in a video to be detected, and K is a positive integer greater than or equal to 3;
determine K-1 gray difference maps from the K gray maps, where the i-th gray difference map is determined from the i-th and (i+1)-th of the K gray maps, and i is a positive integer less than or equal to K-1;
determine the corresponding K-1 gray averages respectively from the K-1 gray difference maps, where the i-th gray average is the average gray value of the motion pixels in the i-th gray difference map, a motion pixel being a pixel whose gray value exceeds a preset gray value;
determine K-2 gray-average differences from the K-1 gray averages, where the j-th gray-average difference is the difference between the j-th and (j+1)-th of the K-1 gray averages, and j is a positive integer less than or equal to K-2;
and, when the absolute values of the K-2 gray-average differences are all smaller than a first preset threshold, judge, with a classifier model and from the K-1 gray difference maps, whether the K frame images contain a stutter.
The embodiment of the present invention does not limit the specific connection medium between the communication interface 601, the memory 602 and the processor 603; it may, for example, be a bus such as an address bus, a data bus or a control bus.
In the embodiment of the present invention, the processor may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present invention. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
In the embodiment of the present invention, the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory such as a random-access memory (RAM). The memory may also be any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, without being limited thereto. The memory in the embodiments of the present invention may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
The embodiment of the present invention also provides a computer readable storage medium including program code for causing a computer to execute the steps of the method provided in the embodiment of the present invention.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A video playing effect detection method, the method comprising:
obtaining K gray images, wherein the K gray images are gray images respectively corresponding to K consecutive frame images in a video to be detected, and K is a positive integer greater than or equal to 3;
determining K-1 gray difference maps according to the K gray images, wherein an i-th gray difference map is determined according to an i-th gray image and an (i+1)-th gray image in the K gray images, and i is a positive integer less than or equal to K-1;
respectively determining K-1 corresponding gray averages according to the K-1 gray difference maps, wherein an i-th gray average is the average gray value of the motion pixels in the i-th gray difference map, and a motion pixel is a pixel whose gray value is greater than a preset gray value;
determining K-2 gray average differences according to the K-1 gray averages, wherein a j-th gray average difference is the difference between the j-th gray average and the (j+1)-th gray average among the K-1 gray averages, and j is a positive integer less than or equal to K-2; and
when the absolute values of the K-2 gray average differences are all smaller than a first preset threshold, judging, by using a classifier model and according to the K-1 gray difference maps, whether stutter exists in the K consecutive frame images.
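Purely as an illustration and not part of the claims: the following Python sketch shows one way the pre-check of claim 1 could be realized with OpenCV and NumPy. GRAY_THRESH (the preset gray value) and DIFF_THRESH (the first preset threshold) are hypothetical values chosen for demonstration, not values taken from this disclosure.

# Illustrative sketch of the claim 1 pipeline (assumed OpenCV/NumPy setup).
import cv2

GRAY_THRESH = 10    # preset gray value separating motion pixels (assumed)
DIFF_THRESH = 2.0   # first preset threshold on the mean differences (assumed)

def stutter_precheck(frames):
    """frames: K >= 3 consecutive BGR frames from the video under test."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    # K-1 gray difference maps between consecutive gray images
    diffs = [cv2.absdiff(grays[i], grays[i + 1]) for i in range(len(grays) - 1)]
    # K-1 gray averages over the motion pixels of each difference map
    means = []
    for d in diffs:
        motion = d[d > GRAY_THRESH]
        means.append(float(motion.mean()) if motion.size else 0.0)
    # K-2 differences between adjacent gray averages
    mean_diffs = [means[j] - means[j + 1] for j in range(len(means) - 1)]
    # The classifier stage runs only when every absolute difference is
    # below the first preset threshold.
    run_classifier = all(abs(md) < DIFF_THRESH for md in mean_diffs)
    return run_classifier, diffs

A roughly constant motion level across the window is consistent with either smooth playback or a freeze, which is presumably why the claim defers the final stutter decision to the classifier model.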
2. The method of claim 1, further comprising, before judging, by using the classifier model and according to the K-1 gray difference maps, whether stutter exists in the K consecutive frame images:
respectively determining K-1 corresponding motion pixel counts according to the K-1 gray difference maps;
dividing each of the K-1 motion pixel counts by the product of the width and the height of the corresponding gray difference map to obtain K-1 corresponding motion pixel ratios; and
determining that the K-1 motion pixel ratios are all smaller than a second preset threshold.
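As a continuation of the sketch above (illustrative only, not part of the claims), the claim 2 pre-check can be expressed over the same difference maps; RATIO_THRESH (the second preset threshold) is a hypothetical value.

# Illustrative sketch of the claim 2 motion-pixel-ratio pre-check.
RATIO_THRESH = 0.05  # assumed second preset threshold

def motion_ratio_ok(diffs, gray_thresh=10):
    for d in diffs:
        num_motion = int((d > gray_thresh).sum())
        height, width = d.shape
        if num_motion / (width * height) >= RATIO_THRESH:
            return False  # large moving area: do not run the stutter check
    return True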
3. The method of claim 1 or 2, further comprising, before judging, by using the classifier model and according to the K-1 gray difference maps, whether stutter exists in the K consecutive frame images:
sorting the motion pixels in each of the K-1 gray difference maps in descending order of gray value, and computing the sum of the gray values of the top N motion pixels in each of the K-1 gray difference maps to obtain gray value sums respectively corresponding to the K-1 gray difference maps, wherein N is a positive integer; and
determining that the gray value sums respectively corresponding to the K-1 gray difference maps are all smaller than a third preset threshold.
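Again illustrative only: the claim 3 pre-check ranks motion pixels by gray value and bounds the energy of the strongest ones. N and SUM_THRESH (the third preset threshold) are hypothetical values.

# Illustrative sketch of the claim 3 top-N gray value sum pre-check.
import numpy as np

N = 100              # number of top-ranked motion pixels (assumed)
SUM_THRESH = 20000   # assumed third preset threshold

def top_n_sum_ok(diffs, gray_thresh=10):
    for d in diffs:
        # Motion pixels sorted in descending order of gray value
        motion = np.sort(d[d > gray_thresh], axis=None)[::-1]
        if int(motion[:N].astype(np.int64).sum()) >= SUM_THRESH:
            return False  # strong localized motion: skip the stutter check
    return True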
4. The method of claim 1, further comprising, after obtaining the K gray images:
judging, by using the classifier model and according to the K gray images, whether a black screen or a flash back exists in the K frame images;
wherein the classifier model is a lightweight MobileNet convolutional neural network.
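Claim 4 names a lightweight MobileNet classifier but fixes no framework or architecture details. The following sketch is one assumed PyTorch/torchvision realization in which the K gray images are stacked as input channels; the class layout (normal, black screen, flash back) is likewise an assumption.

# Illustrative sketch only; the stem replacement and class layout are
# design assumptions, not taken from this disclosure.
import torch
from torchvision.models import mobilenet_v3_small

def build_classifier(k: int, num_classes: int = 3):
    model = mobilenet_v3_small(num_classes=num_classes)
    # Swap the 3-channel RGB stem for a k-channel stem so the network
    # accepts k stacked gray images.
    model.features[0][0] = torch.nn.Conv2d(
        k, 16, kernel_size=3, stride=2, padding=1, bias=False)
    return model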
5. A video playing effect detection apparatus, the apparatus comprising:
a transceiver unit, configured to obtain K gray images, wherein the K gray images are gray images respectively corresponding to K consecutive frame images in a video to be detected, and K is a positive integer greater than or equal to 3; and
a processing unit, configured to determine K-1 gray difference maps according to the K gray images, wherein an i-th gray difference map is determined according to an i-th gray image and an (i+1)-th gray image in the K gray images, and i is a positive integer less than or equal to K-1; respectively determine K-1 corresponding gray averages according to the K-1 gray difference maps, wherein an i-th gray average is the average gray value of the motion pixels in the i-th gray difference map, and a motion pixel is a pixel whose gray value is greater than a preset gray value; and determine K-2 gray average differences according to the K-1 gray averages, wherein a j-th gray average difference is the difference between the j-th gray average and the (j+1)-th gray average among the K-1 gray averages, and j is a positive integer less than or equal to K-2;
wherein the processing unit is further configured to, when the absolute values of the K-2 gray average differences are all smaller than a first preset threshold, judge, by using a classifier model and according to the K-1 gray difference maps, whether stutter exists in the K frame images.
6. The apparatus of claim 5, wherein the processing unit is further configured to, before judging, by using the classifier model and according to the K-1 gray difference maps, whether stutter exists in the K consecutive frame images: respectively determine K-1 corresponding motion pixel counts according to the K-1 gray difference maps; divide each of the K-1 motion pixel counts by the product of the width and the height of the corresponding gray difference map to obtain K-1 corresponding motion pixel ratios; and determine that the K-1 motion pixel ratios are all smaller than a second preset threshold.
7. The apparatus of claim 5 or 6, wherein the processing unit is further configured to, before judging, by using the classifier model and according to the K-1 gray difference maps, whether stutter exists in the K consecutive frame images: sort the motion pixels in each of the K-1 gray difference maps in descending order of gray value, and compute the sum of the gray values of the top N motion pixels in each of the K-1 gray difference maps to obtain gray value sums respectively corresponding to the K-1 gray difference maps, wherein N is a positive integer; and determine that the gray value sums respectively corresponding to the K-1 gray difference maps are all smaller than a third preset threshold.
8. The apparatus of claim 5, wherein the processing unit is further configured to, after the K gray images are obtained, judge, by using the classifier model and according to the K gray images, whether a black screen or a flash back exists in the K frame images, wherein the classifier model is a lightweight MobileNet convolutional neural network.
9. A video playing effect detection device, wherein the device comprises a processor and an interface circuit, the interface circuit being configured to receive signals from devices other than the device and transmit the signals to the processor, or to send signals from the processor to devices other than the device, and the processor being configured to implement the method of any one of claims 1 to 4 by means of logic circuits or by executing code instructions.
10. A computer readable storage medium storing computer instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 4.
CN202310672753.6A 2023-06-08 2023-06-08 Video playing effect detection method and device and computer readable storage medium Pending CN116760968A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310672753.6A CN116760968A (en) 2023-06-08 2023-06-08 Video playing effect detection method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310672753.6A CN116760968A (en) 2023-06-08 2023-06-08 Video playing effect detection method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116760968A true CN116760968A (en) 2023-09-15

Family

ID=87954511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310672753.6A Pending CN116760968A (en) 2023-06-08 2023-06-08 Video playing effect detection method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116760968A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745664A * 2023-12-15 2024-03-22 苏州智华汽车电子有限公司 Image dynamic detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107509107B (en) Method, device and equipment for detecting video playing fault and readable medium
CN110288599B (en) Dead pixel detection method and device, electronic equipment and storage medium
US20080137800A1 (en) Method, Apparatus, and Program for Detecting the Correlation Between Repeating Events
CN116760968A (en) Video playing effect detection method and device and computer readable storage medium
CN100375530C (en) Movement detecting method
CN110114801B (en) Image foreground detection device and method and electronic equipment
CN112465871A (en) Method and system for evaluating accuracy of visual tracking algorithm
CN109284700B (en) Method, storage medium, device and system for detecting multiple faces in image
CN109102026A (en) A kind of vehicle image detection method, apparatus and system
CN114742992A (en) Video abnormity detection method and device and electronic equipment
CN109324967A (en) The method, apparatus and terminal device of application program pop-up components testing
CN116824311A (en) Performance detection method, device, equipment and storage medium of crowd analysis algorithm
CN111639578A (en) Method, device, equipment and storage medium for intelligently identifying illegal parabola
CN112162888A (en) Method and device for determining reason of black screen of display and computer storage medium
CN105354833A (en) Shadow detection method and apparatus
CN115222699A (en) Defect detection method, defect detection device and system
CN109472772A (en) Image smear detection method, device and equipment
CN109145821A (en) The method and device that pupil image is positioned in a kind of pair of eye image
CN114998283A (en) Lens blocking object detection method and device
AU2021240277A1 (en) Methods and apparatuses for classifying game props and training neural network
CN112949490A (en) Device action detection method and device, electronic device and readable storage medium
CN115004245A (en) Target detection method, target detection device, electronic equipment and computer storage medium
CN111199179B (en) Target object tracking method, terminal equipment and medium
CN113360402A (en) Test method, electronic device, chip and storage medium
CN113284141A (en) Model determination method, device and equipment for defect detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination