CN114827712A - Video playing detection method and device and electronic equipment - Google Patents

Video playing detection method and device and electronic equipment

Info

Publication number
CN114827712A
Authority
CN
China
Prior art keywords
image, video, frame, frame image, images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110061115.1A
Other languages
Chinese (zh)
Inventor
胡玉同
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN202110061115.1A
Publication of CN114827712A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a video playing detection method, a video playing detection device and an electronic device, and solves the low testing efficiency of existing schemes for testing video playing performance indexes. The method comprises the following steps: acquiring a target recorded video, where the target recorded video is a video segment containing a first image and the first frame image of a first video, the first image being the frame image displayed at the moment the first video is triggered to play; framing the target recorded video to obtain consecutive multi-frame images; identifying the first image and the first frame image from the multi-frame images, where the first frame image is determined based on the brightness change amplitude between adjacent frame images; and determining the time interval between the first image and the first frame image. The invention uses the brightness characteristics of the images to identify the first frame image, which reduces the complexity of the first-frame recognition algorithm and improves image recognition efficiency; in addition, the test process does not depend on the code of the video application, so the test process is simple and the test efficiency is improved.

Description

Video playing detection method and device and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a video playback detection method and apparatus, and an electronic device.
Background
As is well known, the video play-start time, i.e. the time a video takes from being triggered to play until its first frame image is displayed, is one of the important performance indexes for play-oriented video applications. In one class of existing schemes, the first-frame image characteristics of the target video must be obtained in advance, so the characteristics have to be re-acquired whenever the target video source changes. Another scheme obtains the video rendering progress through a dynamic link library, but it requires building a target dynamic library, embedding it into the target folder and generating a new target application, i.e. the tester must possess the application's source code. Both test schemes suffer from a complicated test process, heavy labor and time consumption, and low test efficiency.
Disclosure of Invention
The invention aims to provide a video playing detection method, a video playing detection device and electronic equipment, which are used for solving the problem of low testing efficiency of the existing video playing performance index testing scheme.
In order to achieve the above object, the present invention provides a video playing detection method, including:
acquiring a target recorded video, wherein the target recorded video is a target video segment comprising a first image and a first frame image of the first video, and the first image is a frame image corresponding to the first video when the first video is triggered to be played;
framing the target recorded video to obtain continuous multi-frame images;
identifying the first image and the first frame image according to the multi-frame images, wherein the first frame image is determined based on the brightness change amplitude between the adjacent frame images of the multi-frame images;
and determining the time interval between the first image and the first frame image, and taking the time interval as the video playing starting duration of the first video.
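Under stated assumptions (each frame reduced to its mean luminance; all names and numbers illustrative, not from the patent), the four steps above can be sketched end-to-end:

```python
# End-to-end sketch of the claimed method on synthetic data. Each
# "frame" is reduced to its mean luminance; in practice the frames
# would come from de-framing a screen recording.

def detect_start_duration(lum_means, trigger_idx, threshold, fps):
    """Play-start duration in seconds, or None if no first frame is found.

    lum_means   -- per-frame mean luminance, in display order
    trigger_idx -- index of the first image (the frame in which playback
                   is triggered; located by image recognition)
    threshold   -- preset brightness-change threshold
    fps         -- screen-recording frame rate
    """
    # Compare consecutive frames *after* the first image, as claimed.
    for i in range(trigger_idx + 2, len(lum_means)):
        if abs(lum_means[i] - lum_means[i - 1]) > threshold:
            return (i - trigger_idx) / fps  # frame i is the first frame image
    return None

# Loading frames hover near 16; the first frame image jumps to 120.
means = [16, 16, 17, 15, 16, 120, 118]
print(detect_start_duration(means, trigger_idx=1, threshold=10, fps=30))
```

At 30 fps the jump four frames after the trigger yields 4/30 ≈ 0.133 s.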
Wherein, the acquiring of the target recorded video includes:
after the screen recording is started, triggering and playing the first video;
and when the recording duration reaches the preset duration, stopping recording the screen, and storing the recorded video as the target recorded video.
The framing the target recorded video to obtain continuous multi-frame images comprises the following steps:
acquiring a screen recording frame rate of the target recorded video;
cutting the target recorded video, and storing the video content in the video playing area as the cut recorded video;
and framing the cut recorded video according to the screen recording frame rate to obtain continuous multi-frame images.
Wherein identifying the first image according to the plurality of frames of images comprises:
traversing the multi-frame images for image recognition according to the time sequence of the multi-frame image display during video playing;
and determining the current frame image as the first image when the current frame image is identified to have the preset image characteristic which triggers the playing of the first video.
Wherein, according to the multi-frame image, identifying the first frame image comprises:
after the first image is identified, traversing frame images behind the first image to extract brightness components to obtain a brightness mean value of an ith frame image and a brightness mean value of an (i + 1) th frame image in the frame images behind the first image, wherein the (i + 1) th frame image is a current frame image, i is not less than 1, and i is a positive integer;
calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image;
under the condition that the brightness change amplitude is smaller than a preset threshold value, adding 1 to i, and returning to the step of calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image until the brightness change amplitude is larger than the preset threshold value;
and under the condition that the brightness change amplitude is larger than a preset threshold value, determining the (i + 1) th frame image as the first frame image.
Before the target recorded video is obtained, the method further comprises the following steps:
determining the maximum value of the brightness change amplitude between adjacent frame images in the multi-frame loaded image before the first frame image of the first video is played as the preset threshold value; or,
and obtaining a preset threshold value according to the corresponding relation between the video application and the threshold value, wherein the preset threshold value corresponds to the video application playing the first video.
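The first option above — taking the preset threshold from the largest brightness change observed between adjacent frames while the video is still loading — can be sketched as follows (the `margin` safety factor is an added assumption, not from the text):

```python
def calibrate_threshold(loading_means, margin=1.0):
    """Preset threshold from a calibration recording: the largest
    adjacent-frame luminance change observed before the first frame
    image appears. `margin` is a hypothetical safety factor."""
    changes = [abs(b - a) for a, b in zip(loading_means, loading_means[1:])]
    return max(changes) * margin

# Luminance means of loading frames from a calibration run (illustrative).
print(calibrate_threshold([16, 18, 15, 16, 17]))
```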
Wherein the method further comprises:
acquiring a screen recording frame rate of the target recorded video, a frame number corresponding to the first image and a frame number corresponding to the first frame image;
the determining a time interval between the first image and the first frame image comprises:
and calculating the time interval between the first image and the first frame image according to the screen recording frame rate, the frame number corresponding to the first image and the frame number corresponding to the first frame image.
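With the two frame numbers and the screen-recording frame rate in hand, the interval computation reduces to one line (the concrete frame numbers below are illustrative):

```python
def start_duration(fps, trigger_frame_no, first_frame_no):
    """Play-start duration in seconds from the frame number of the
    first image, the frame number of the first frame image, and the
    screen-recording frame rate (one frame lasts 1/fps seconds)."""
    return (first_frame_no - trigger_frame_no) / fps

# 30 fps recording: trigger at frame 12, first frame image at frame 48.
print(start_duration(30, 12, 48))
```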
The invention also provides a video playing detection device, which comprises:
the first acquisition module is used for acquiring a target recorded video, wherein the target recorded video is a target video segment comprising a first image and a first frame image of the first video, and the first image is a frame image corresponding to the first video when the first video is triggered to be played;
the framing module is used for framing the target recorded video to obtain continuous multi-frame images;
the image identification module is used for identifying the first image and the first frame image according to the multi-frame images, wherein the first frame image is determined based on the brightness change amplitude between the adjacent frame images of the multi-frame images;
and the processing module is used for determining the time interval between the first image and the first frame image and taking the time interval as the video playing starting duration of the first video.
Wherein, the first obtaining module comprises:
the first processing unit is used for triggering and playing the first video after the screen recording is started;
and the first acquisition unit is used for stopping recording the screen when the recording duration reaches the preset duration, and storing the recorded video as the target recorded video.
Wherein, the framing module comprises:
the second acquisition unit is used for acquiring the screen recording frame rate of the target recorded video;
the second processing unit is used for cutting the target recorded video and storing the video content in the video playing area as the cut recorded video;
and the framing unit is used for framing the cut recorded video according to the screen recording frame rate to obtain continuous multi-frame images.
Wherein the image recognition module comprises:
the third processing unit is used for traversing the multi-frame images for image recognition according to the time sequence of the multi-frame image display during video playing;
and the first image identification unit is used for determining the current frame image as the first image under the condition that the current frame image is identified to have the preset image characteristic which triggers the playing of the first video.
Wherein the image recognition module comprises:
the fourth processing unit is used for traversing the frame images after the first image to extract the brightness component after identifying the first image, and obtaining the brightness mean value of the ith frame image and the brightness mean value of the (i + 1) th frame image in the frame images after the first image, wherein the (i + 1) th frame image is the current frame image, i is more than or equal to 1, and i is a positive integer;
a first calculating unit, configured to calculate a luminance change amplitude between the i-th frame image and the i + 1-th frame image;
a fifth processing unit, configured to add 1 to i when the brightness change amplitude is smaller than a preset threshold, and then return to the step of calculating the brightness change amplitude between the i-th frame image and the i + 1-th frame image until the brightness change amplitude is larger than the preset threshold;
and the first frame image identification unit is used for determining the (i + 1) th frame image as the first frame image under the condition that the brightness change amplitude is greater than a preset threshold value.
Wherein the apparatus further comprises:
a first threshold value obtaining module, configured to determine a maximum value of a luminance change amplitude between adjacent frame images in a multi-frame loaded image before a first frame image of the first video is played as the preset threshold value; or,
and the second threshold acquisition module is used for acquiring a preset threshold according to the corresponding relation between the video application and the threshold, wherein the preset threshold corresponds to the video application playing the first video.
Wherein the apparatus further comprises:
the second acquisition module is used for acquiring the screen recording frame rate of the target recorded video, the frame number corresponding to the first image and the frame number corresponding to the first frame image;
the processing module comprises:
and the second calculating unit is used for calculating the time interval between the first image and the first frame image according to the screen recording frame rate, the frame number corresponding to the first image and the frame number corresponding to the first frame image.
The invention also provides an electronic device comprising a processor and a transceiver, the transceiver receiving and transmitting data under the control of the processor, characterized in that the processor is configured to:
acquiring a target recorded video, wherein the target recorded video is a target video segment comprising a first image and a first frame image of the first video, and the first image is a frame image corresponding to the first video when the first video is triggered to be played;
framing the target recorded video to obtain continuous multi-frame images;
identifying the first image and the first frame image according to the multi-frame images, wherein the first frame image is determined based on the brightness change amplitude between the adjacent frame images of the multi-frame images;
and determining the time interval between the first image and the first frame image, and taking the time interval as the video playing starting duration of the first video.
Wherein the processor is further configured to:
after the screen recording is started, triggering and playing the first video;
and when the recording duration reaches the preset duration, stopping recording the screen, and storing the recorded video as the target recorded video.
Wherein the processor is further configured to:
acquiring a screen recording frame rate of the target recorded video;
cutting the target recorded video, and storing the video content in the video playing area as the cut recorded video;
and framing the cut recorded video according to the screen recording frame rate to obtain continuous multi-frame images.
Wherein the processor is further configured to:
traversing the multi-frame images for image recognition according to the time sequence of the multi-frame image display during video playing;
and determining the current frame image as the first image when the current frame image is identified to have the preset image characteristic which triggers the playing of the first video.
Wherein the processor is further configured to:
after the first image is identified, traversing frame images behind the first image to extract brightness components to obtain a brightness mean value of an ith frame image and a brightness mean value of an (i + 1) th frame image in the frame images behind the first image, wherein the (i + 1) th frame image is a current frame image, i is not less than 1, and i is a positive integer;
calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image;
under the condition that the brightness change amplitude is smaller than a preset threshold value, adding 1 to i, and returning to the step of calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image until the brightness change amplitude is larger than the preset threshold value;
and under the condition that the brightness change amplitude is larger than a preset threshold value, determining the (i + 1) th frame image as the first frame image.
Wherein the processor is further configured to:
determining the maximum value of the brightness change amplitude between adjacent frame images in the multi-frame loaded image before the first frame image of the first video is played as the preset threshold value; or,
and obtaining a preset threshold value according to the corresponding relation between the video application and the threshold value, wherein the preset threshold value corresponds to the video application for playing the first video.
Wherein the processor is further configured to:
acquiring a screen recording frame rate of the target recorded video, a frame number corresponding to the first image and a frame number corresponding to the first frame image;
and calculating the time interval between the first image and the first frame image according to the screen recording frame rate, the frame number corresponding to the first image and the frame number corresponding to the first frame image.
The invention also provides an electronic device, which comprises a memory, a processor and a program which is stored on the memory and can run on the processor; when the processor executes the program, the video playing detection method is realized.
The present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps in the video play detection method as described above.
The technical scheme of the invention at least has the following beneficial effects:
In the embodiment of the invention, a target recorded video is obtained, where the target recorded video is a video segment containing a first image and the first frame image of the first video, the first image being the frame image displayed when the first video is triggered to play; the target recorded video is framed to obtain consecutive multi-frame images; the first image and the first frame image are identified from the multi-frame images, the first frame image being determined based on the brightness change amplitude between adjacent frame images; and the time interval between the first image and the first frame image is determined and taken as the video play-start duration of the first video. In this way the brightness characteristics of the images are used to identify the first frame image, reducing the complexity of the first-frame recognition algorithm and improving image recognition efficiency; moreover, the test process does not depend on the code of the video application, so the procedure is simple, non-technical testers can also execute it, and the test efficiency is improved.
Drawings
Fig. 1 is a schematic flow chart illustrating a video play detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram showing the storage layout of YV12;
FIG. 3 is a schematic diagram showing the storage layout of NV12;
FIG. 4 is a block diagram of a video playback detection apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The invention provides a video playing detection method, a video playing detection device and an electronic device, aiming at the problem of low testing efficiency of existing schemes for testing video playing performance indexes.
Fig. 1 is a schematic flow chart of a video play detection method according to an embodiment of the present invention. The method specifically comprises the following steps:
step 101, acquiring a target recorded video, wherein the target recorded video is a target video segment comprising a first image and a first frame image of the first video, and the first image is a frame image corresponding to the first video when the first video is triggered to be played;
in this step, in the recording process of the target recorded video, the first video is played through the video application to be tested.
It should be noted that the target recorded video can be obtained through a screen recording function of the electronic device.
102, framing the target recorded video to obtain continuous multi-frame images;
in this step, a preset framing tool can be used to frame the target recorded video. Optionally, the preset framing tool is FFmpeg or OpenCV.
It should be noted that the color coding method of the target recorded video is a YUV color coding method. Where Y denotes luminance, and U and V denote chrominance.
Because the video stream of the target recorded video is a YUV stream, the image after framing the target recorded video is also an image in a YUV color space.
103, identifying the first image and the first frame image according to the multiple frame images, wherein the first frame image is determined based on the brightness change amplitude between the adjacent frame images of the multiple frame images;
in this step, as can be seen from the previous step, the image of the target recorded video after framing is an image in YUV color space, and the brightness change amplitude between adjacent frame images can be obtained by extracting the brightness Y component of the image. Specifically, the difference between the luminance mean values of the adjacent frame images may be determined as the luminance change amplitude between the adjacent frame images.
It should be noted that the human eye is far more sensitive to the luminance component than to the chrominance components. A video is a sequence of temporally continuous images, and since the frames inside the target recorded video are images in the YUV color space, separating the luminance signal Y from the chrominance signals U and V reduces the transmission bandwidth compared with transmitting the three RGB primaries together. Compared with the complexity of traversing RGB color values or running an OCR recognition algorithm, exploiting the sequential storage of the Y component of a YUV image improves the efficiency of first-frame recognition.
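In code, the per-frame "brightness mean" and the "brightness change amplitude" between two frames reduce to averaging the raw Y samples and differencing the averages (the tiny 2×2 planes below are synthetic):

```python
def mean_luminance(y_plane):
    """Average of the Y (luma) samples of one frame -- the per-frame
    'brightness mean' used throughout the method. `y_plane` holds the
    raw 8-bit Y data of the frame."""
    return sum(y_plane) / len(y_plane)

# Two tiny synthetic 2x2 Y planes: a dark loading frame and a bright one.
dark = bytes([16, 16, 17, 15])
bright = bytes([120, 118, 122, 120])
amplitude = abs(mean_luminance(bright) - mean_luminance(dark))
print(amplitude)
```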
And 104, determining a time interval between the first image and the first frame image, and taking the time interval as the video playing starting duration of the first video.
The video playing detection method obtains a target recorded video, where the target recorded video is a video segment containing a first image and the first frame image of the first video, the first image being the frame image displayed when the first video is triggered to play; frames the target recorded video to obtain consecutive multi-frame images; identifies the first image and the first frame image from the multi-frame images, the first frame image being determined based on the brightness change amplitude between adjacent frame images; and determines the time interval between the first image and the first frame image, taking it as the video play-start duration of the first video. In this way the brightness characteristics of the images are used to identify the first frame image, reducing the complexity of the first-frame recognition algorithm and improving image recognition efficiency; moreover, the test process does not depend on the code of the video application, so the procedure is simple, non-technical testers can also execute it, and the test efficiency is improved.
The embodiment of the invention records the target recorded video by means of an automated script, where the automated script may use a UI automated test framework such as UiAutomator or Appium. That is, the target recorded video is acquired through an automated script. As an optional implementation manner, in step 101 of the method according to the embodiment of the present invention, acquiring the target recorded video includes:
after the screen recording is started, triggering and playing the first video;
it should be noted that the electronic device executing the method has a screen recording function.
And when the recording duration reaches the preset duration, stopping recording the screen, and storing the recorded video as the target recorded video.
Here, the preset duration is typically 5 seconds and may be adjusted according to actual conditions to ensure that the first frame image has been loaded before recording stops.
It should be noted that after the electronic device starts the screen recording, the video recording is performed at a preset screen recording frame rate, wherein in the recording process, a first image corresponding to the first video and a first frame image of the first video are triggered to be played and recorded.
It should be noted that the detection method of the present embodiment is driven by an automated script in the whole process, and does not depend on the code of a video application program, so that a non-technical tester can also perform a test.
As an alternative implementation, step 102 may include:
acquiring a screen recording frame rate of the target recorded video;
optionally, when the target recorded video is obtained through the screen recording, the screen recording parameters are saved at the same time. Here, the screen recording parameters include, but are not limited to: the screen recording frame rate and the screen recording frame number.
Cutting the target recorded video, and storing the video content in the video playing area as the cut recorded video;
It should be noted that this step is performed to prevent screen regions outside the video playing area of the target recorded video from interfering with the identification of the first frame image.
Here, the video playback area is the playback area of the first video. That is, during recording, the first video is played in the video playing area of the video application under test.
And framing the cut recorded video according to the screen recording frame rate to obtain continuous multi-frame images.
The cut recorded video can be framed according to the recording frame rate through an FFmpeg tool, and continuous multi-frame images are obtained. Alternatively, the multiple frame images are saved in a jpg format in chronological order.
Here, the continuous multi-frame image refers to a temporally continuous multi-frame image.
As an optional implementation manner, in step 103, identifying the first image according to the multiple frames of images may include:
traversing the multi-frame images for image recognition according to the time sequence of the multi-frame image display during video playing;
here, since the multi-frame images are temporally continuous multi-frame images, there is a chronological order.
And determining the current frame image as the first image when the current frame image is identified to have the preset image characteristic which triggers the playing of the first video.
It should be noted that the preset image feature is a pre-stored image feature of the picture that triggers video playback, for example the state of the play control button showing the play mark. In particular, the frames preceding the first image exhibit the "not playing" feature, which distinguishes the first image from them.
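The patent leaves the recognition of the preset image feature open. As one illustrative stand-in (the region coordinates, tolerance and all names are assumptions, not from the patent), a fixed play-button region of the frame can be compared against a stored reference patch:

```python
def region_matches(frame, ref_patch, region, tol=8):
    """True if the pixels of `frame` inside `region` match the stored
    reference patch within `tol` per sample. `frame` is a nested list
    of gray values; region = (row, col, height, width). A stand-in for
    whatever feature matching the tester actually uses."""
    r, c, h, w = region
    for i in range(h):
        for j in range(w):
            if abs(frame[r + i][c + j] - ref_patch[i][j]) > tol:
                return False
    return True

ref = [[200, 200], [200, 200]]            # stored play-button patch
frame = [[0, 0, 0, 0],
         [0, 198, 203, 0],
         [0, 201, 200, 0],
         [0, 0, 0, 0]]
print(region_matches(frame, ref, (1, 1, 2, 2)))
```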
It should be noted that the multiple frames of images have respective corresponding frame numbers in chronological order, and when a first image is identified, the frame number corresponding to the first image is acquired. Alternatively, after the first image is identified, a frame number corresponding to the first image, for example, number 1, is assigned.
As an optional implementation manner, in the method step 103, identifying the first frame image according to the multiple frame images may include:
after the first image is identified, traversing frame images behind the first image to extract brightness components to obtain a brightness mean value of an ith frame image and a brightness mean value of an (i + 1) th frame image in the frame images behind the first image, wherein the (i + 1) th frame image is a current frame image, i is not less than 1, and i is a positive integer;
it should be noted that there are three YUV sampling methods, YUV4:4:4, YUV4:2:2, and YUV4:2: 0. The storage mode of the YUV stream is closely related to the sampling mode, wherein the storage mode includes YV12, YU12, NV12, NV21 and the like. As shown in fig. 2, the schematic diagram of the storage method of YV12 and fig. 3, the schematic diagram of the storage method of NV12 show that the luminance Y component is stored continuously at the start address regardless of the storage method, so that the extracted Y component can be read continuously and efficiently. Here, the storage of YUV is different from RGB in that data of each point of RGB is continuously stored together, and YUV reduces U, V component space in order to save space.
Calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image;
specifically, a difference value between a luminance component of an ith frame image and a luminance component of an (i + 1) th frame image is calculated to obtain a luminance change amplitude between the ith frame image and the (i + 1) th frame image.
Under the condition that the brightness change amplitude is smaller than a preset threshold value, adding 1 to i, and returning to the step of calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image until the brightness change amplitude is larger than the preset threshold value;
for example, after the first image is recognized, the luminance component of the 1st frame image after the first image is extracted and its luminance mean is calculated; the luminance component of the 2nd frame image is then extracted and its luminance mean is calculated. The difference between the two means gives the luminance change amplitude between the 1st and 2nd frame images. If this amplitude is smaller than the preset threshold, i is increased by 1 and the luminance change amplitude between the 2nd and 3rd frame images is calculated, and so on, until the amplitude between the current frame image and the previous frame image is larger than the preset threshold.
After i is increased by 1, the luminance component of the 3rd frame image is extracted and its luminance mean is calculated, and the luminance change amplitude between the 2nd and 3rd frame images is then computed.
And under the condition that the brightness change amplitude is larger than a preset threshold value, determining the (i + 1) th frame image as the first frame image.
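The iteration described in this step can be sketched as follows, walking the per-frame luminance means that follow the first image (0-based indexing here; all names are illustrative):

```python
def find_first_frame(luma_means, threshold):
    """Compare each frame's luminance mean with its predecessor's;
    the first frame whose change amplitude exceeds `threshold` is
    taken as the first played video frame.

    luma_means -- luminance means of the frames after the first image
    Returns the 0-based index of the first frame image, or None.
    """
    i = 0
    while i + 1 < len(luma_means):
        amplitude = abs(luma_means[i + 1] - luma_means[i])
        if amplitude > threshold:
            return i + 1          # the (i+1)th frame is the first frame image
        i += 1                    # amplitude below threshold: advance and retry
    return None
```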
In this implementation manner, it should be noted that before the first video frame is played, the video playing area usually shows a black screen, an advertisement, a video logo, or the like. The image in this area changes little, so the luminance change amplitude between adjacent frames is also small; comparing the amplitude between adjacent frames with a preset threshold therefore serves as a way to detect the appearance of the first video frame.
Further, in order to solve the problem of how to obtain the preset threshold in this embodiment, as an optional implementation manner, before obtaining the target recorded video, the method further includes:
determining the maximum value of the brightness change amplitude between adjacent frame images in the multi-frame loaded image before the first frame image of the first video is played as the preset threshold value;
specifically, before testing, screenshot is carried out on a picture before playing a first frame image of a first video, only an image in a video playing area is reserved, a brightness Y component of the image is extracted, a brightness mean value is obtained through calculation, and the maximum value of the brightness change amplitude of adjacent frames is taken as a preset threshold value.
Or obtaining a preset threshold value according to the corresponding relation between the video application and the threshold value, wherein the preset threshold value corresponds to the video application playing the first video.
Here, within the same video application, the pictures shown before the first frame of a video is played are basically the same, so the same threshold can be used when testing different videos. If the picture shown before the first video frame is updated, or the video application under test changes, the threshold needs to be recalculated. Therefore, when multiple video applications are to be tested, the user can establish a correspondence between video applications and thresholds, that is, different video applications correspond to different preset thresholds.
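Under the first option, the threshold computation reduces to taking the largest adjacent-frame luminance change over the pre-play loading frames. A minimal sketch with an assumed function name:

```python
def preset_threshold(preplay_luma_means):
    """Preset threshold = the largest adjacent-frame luminance change
    observed in the loading frames captured before the first video
    frame is played."""
    return max(abs(b - a)
               for a, b in zip(preplay_luma_means, preplay_luma_means[1:]))
```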
As an optional implementation manner, the method according to the embodiment of the present invention may further include:
acquiring a screen recording frame rate of the target recorded video, a frame number corresponding to the first image and a frame number corresponding to the first frame image;
it should be noted that the multiple frames of images have respective corresponding frame numbers in chronological order, and when a first image is identified, the frame number corresponding to the first image is acquired, and when a first frame of image is identified, the frame number corresponding to the first frame of image is acquired.
Alternatively, after the first image is identified, a frame number is assigned to the first image, and the frame images after it are numbered cumulatively in chronological order.
For example, the frame number assigned to the first image is 1, and the frame numbers of the images subsequent to the first image are 2, 3, ….
Correspondingly, the determining the time interval between the first image and the first frame image includes:
and calculating the time interval between the first image and the first frame image according to the screen recording frame rate, the frame number corresponding to the first image and the frame number corresponding to the first frame image.
In this step, the time interval between the first image and the first frame image is calculated according to the formula T = (N2 − N1)/FPS.
Here, N1 denotes a frame number corresponding to the first image, N2 denotes a frame number corresponding to the first frame image, FPS denotes a screen recording frame rate, and T denotes a time interval between the first image and the first frame image.
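The formula can be written directly as a helper; for example, with N1 = 1, N2 = 31, and a 30 fps recording, the start duration is 1 second (the function name is illustrative):

```python
def start_duration(n1, n2, fps):
    """T = (N2 - N1) / FPS: the number of frames between the trigger
    image and the first video frame, divided by the screen recording
    frame rate, gives the play start duration in seconds."""
    return (n2 - n1) / fps
```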
The video playing detection method of the embodiment of the present invention obtains a target recorded video, where the target recorded video is a target video segment including a first image and a first frame image of the first video, and the first image is the frame image corresponding to the moment playing of the first video is triggered; frames the target recorded video to obtain continuous multi-frame images; identifies the first image and the first frame image from the multi-frame images, where the first frame image is determined based on the luminance change amplitude between adjacent frame images; and determines the time interval between the first image and the first frame image, which is used as the video playing start duration of the first video. In this way, the luminance characteristics of the images are used to identify the first frame image, which reduces the complexity of the first-frame identification algorithm and improves image recognition efficiency. Moreover, the test process does not depend on the code of the video application, is simple to perform, and can be executed by non-technical testers, which improves test efficiency.
As shown in fig. 4, an embodiment of the present invention further provides a video playback detection apparatus, where the apparatus includes:
a first obtaining module 401, configured to obtain a target recorded video, where the target recorded video is a target video segment including a first image and a first frame image of the first video, and the first image is a frame image corresponding to a trigger to play the first video;
a framing module 402, configured to frame the target recorded video to obtain continuous multi-frame images;
an image recognition module 403, configured to recognize the first image and the first frame image according to the multiple frame images, where the first frame image is determined based on a brightness change amplitude between adjacent frame images of the multiple frame images;
a processing module 404, configured to determine a time interval between the first image and the first frame image, and use the time interval as a video play start duration of the first video.
Optionally, the first obtaining module 401 includes:
the first processing unit is used for triggering and playing the first video after the screen recording is started;
and the first acquisition unit is used for stopping recording the screen when the recording duration reaches the preset duration, and storing the recorded video as the target recorded video.
Optionally, the framing module 402 includes:
the second acquisition unit is used for acquiring the screen recording frame rate of the target recorded video;
the second processing unit is used for cutting the target recorded video and storing the video content in the video playing area as the cut recorded video;
and the framing unit is used for framing the cut recorded video according to the screen recording frame rate to obtain continuous multi-frame images.
Optionally, the image recognition module 403 includes:
the third processing unit is used for traversing the multi-frame images for image recognition according to the time sequence of the multi-frame image display during video playing;
and the first image identification unit is used for determining the current frame image as the first image under the condition that the current frame image is identified to have the preset image characteristic which triggers the playing of the first video.
Optionally, the image recognition module 403 includes:
the fourth processing unit is used for traversing the frame images after the first image to extract the brightness component after identifying the first image, and obtaining the brightness mean value of the ith frame image and the brightness mean value of the (i + 1) th frame image in the frame images after the first image, wherein the (i + 1) th frame image is the current frame image, i is more than or equal to 1, and i is a positive integer;
a first calculating unit, configured to calculate a luminance change amplitude between the i-th frame image and the i + 1-th frame image;
a fifth processing unit, configured to add 1 to i when the brightness change amplitude is smaller than a preset threshold, and then return to the step of calculating the brightness change amplitude between the i-th frame image and the i + 1-th frame image until the brightness change amplitude is larger than the preset threshold;
and the first frame image identification unit is used for determining the (i + 1) th frame image as the first frame image under the condition that the brightness change amplitude is greater than a preset threshold value.
Optionally, the apparatus further comprises:
a first threshold value obtaining module, configured to determine a maximum value of a luminance change amplitude between adjacent frame images in a multi-frame loaded image before a first frame image of the first video is played as the preset threshold value; or,
and the second threshold acquisition module is used for acquiring a preset threshold according to the corresponding relation between the video application and the threshold, wherein the preset threshold corresponds to the video application playing the first video.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring the screen recording frame rate of the target recorded video, the frame number corresponding to the first image and the frame number corresponding to the first frame image;
the processing module 404 includes:
and the second calculating unit is used for calculating the time interval between the first image and the first frame image according to the screen recording frame rate, the frame number corresponding to the first image and the frame number corresponding to the first frame image.
The video playing detection device of the embodiment of the invention obtains a target recorded video, where the target recorded video is a target video segment including a first image and a first frame image of the first video, and the first image is the frame image corresponding to the moment playing of the first video is triggered; frames the target recorded video to obtain continuous multi-frame images; identifies the first image and the first frame image from the multi-frame images, where the first frame image is determined based on the luminance change amplitude between adjacent frame images; and determines the time interval between the first image and the first frame image, which is used as the video playing start duration of the first video. In this way, the luminance characteristics of the images are used to identify the first frame image, which reduces the complexity of the first-frame identification algorithm and improves image recognition efficiency. Moreover, the test process does not depend on the code of the video application, is simple to perform, and can be executed by non-technical testers, which improves test efficiency.
It should be noted that, the apparatus provided in the embodiment of the present invention can implement all the method steps implemented by the method embodiment and achieve the same technical effect, and detailed descriptions of the same parts and beneficial effects as the method embodiment in this embodiment are omitted here.
In order to better achieve the above object, as shown in fig. 5, an embodiment of the present invention further provides an electronic device, which includes a processor 500 and a transceiver 510, where the transceiver 510 receives and transmits data under the control of the processor, and the processor 500 is configured to perform the following processes:
acquiring a target recorded video, wherein the target recorded video is a target video segment comprising a first image and a first frame image of the first video, and the first image is a frame image corresponding to the first video when the first video is triggered to be played;
framing the target recorded video to obtain continuous multi-frame images;
identifying the first image and the first frame image according to the multi-frame images, wherein the first frame image is determined based on the brightness change amplitude between the adjacent frame images of the multi-frame images;
and determining the time interval between the first image and the first frame image, and taking the time interval as the video playing starting duration of the first video.
Optionally, the processor 500 is further configured to:
after the screen recording is started, triggering and playing the first video;
and when the recording duration reaches the preset duration, stopping recording the screen, and storing the recorded video as the target recorded video.
Optionally, the processor 500 is further configured to:
acquiring a screen recording frame rate of the target recorded video;
cutting the target recorded video, and storing the video content in the video playing area as the cut recorded video;
and framing the cut recorded video according to the screen recording frame rate to obtain continuous multi-frame images.
Optionally, the processor 500 is further configured to:
traversing the multi-frame images for image identification according to the time sequence of the multi-frame image display during video playing;
and determining the current frame image as the first image when the current frame image is identified to have the preset image characteristic which triggers the playing of the first video.
Optionally, the processor 500 is further configured to:
after the first image is identified, traversing frame images behind the first image to extract brightness components to obtain a brightness mean value of an ith frame image and a brightness mean value of an (i + 1) th frame image in the frame images behind the first image, wherein the (i + 1) th frame image is a current frame image, i is not less than 1, and i is a positive integer;
calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image;
under the condition that the brightness change amplitude is smaller than a preset threshold value, adding 1 to i, and returning to the step of calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image until the brightness change amplitude is larger than the preset threshold value;
and under the condition that the brightness change amplitude is larger than a preset threshold value, determining the (i + 1) th frame image as the first frame image.
Optionally, the processor 500 is further configured to:
determining the maximum value of the brightness change amplitude between adjacent frame images in the multi-frame loaded image before the first frame image of the first video is played as the preset threshold value; or,
and obtaining a preset threshold value according to the corresponding relation between the video application and the threshold value, wherein the preset threshold value corresponds to the video application playing the first video.
Optionally, the processor 500 is further configured to:
acquiring a screen recording frame rate of the target recorded video, a frame number corresponding to the first image and a frame number corresponding to the first frame image;
and calculating the time interval between the first image and the first frame image according to the screen recording frame rate, the frame number corresponding to the first image and the frame number corresponding to the first frame image.
The electronic device provided by the embodiment of the invention obtains a target recorded video, where the target recorded video is a target video segment including a first image and a first frame image of the first video, and the first image is the frame image corresponding to the moment playing of the first video is triggered; frames the target recorded video to obtain continuous multi-frame images; identifies the first image and the first frame image from the multi-frame images, where the first frame image is determined based on the luminance change amplitude between adjacent frame images; and determines the time interval between the first image and the first frame image, which is used as the video playing start duration of the first video. In this way, the luminance characteristics of the images are used to identify the first frame image, which reduces the complexity of the first-frame identification algorithm and improves image recognition efficiency. Moreover, the test process does not depend on the code of the video application, is simple to perform, and can be executed by non-technical testers, which improves test efficiency.
An embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program that is stored in the memory and can be run on the processor, where the processor implements each process in the video playback detection method embodiment described above when executing the program, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements each process in the video playing detection method embodiment described above, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block or blocks.
These computer program instructions may also be stored in a computer-readable storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (17)

1. A video play detection method is characterized by comprising the following steps:
acquiring a target recorded video, wherein the target recorded video is a target video segment comprising a first image and a first frame image of the first video, and the first image is a frame image corresponding to the first video when the first video is triggered to be played;
framing the target recorded video to obtain continuous multi-frame images;
identifying the first image and the first frame image according to the multi-frame images, wherein the first frame image is determined based on the brightness change amplitude between the adjacent frame images of the multi-frame images;
and determining the time interval between the first image and the first frame image, and taking the time interval as the video playing starting duration of the first video.
2. The method of claim 1, wherein obtaining the target recording comprises:
after the screen recording is started, triggering and playing the first video;
and when the recording duration reaches the preset duration, stopping recording the screen, and storing the recorded video as the target recorded video.
3. The method of claim 1, wherein the framing the target recorded video to obtain a plurality of consecutive frame images comprises:
acquiring a screen recording frame rate of the target recorded video;
cutting the target recorded video, and storing the video content in the video playing area as the cut recorded video;
and framing the cut recorded video according to the screen recording frame rate to obtain continuous multi-frame images.
4. The method of claim 1, wherein identifying the first image from the plurality of frames of images comprises:
traversing the multi-frame images for image recognition according to the time sequence of the multi-frame image display during video playing;
and determining the current frame image as the first image when the current frame image is identified to have the preset image characteristic which triggers the playing of the first video.
5. The method of claim 1, wherein identifying the first frame of image from the plurality of frames of images comprises:
after the first image is identified, traversing frame images behind the first image to extract brightness components to obtain a brightness mean value of an ith frame image and a brightness mean value of an (i + 1) th frame image in the frame images behind the first image, wherein the (i + 1) th frame image is a current frame image, i is not less than 1, and i is a positive integer;
calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image;
under the condition that the brightness change amplitude is smaller than a preset threshold value, adding 1 to i, and returning to the step of calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image until the brightness change amplitude is larger than the preset threshold value;
and under the condition that the brightness change amplitude is larger than a preset threshold value, determining the (i + 1) th frame image as the first frame image.
6. The method of claim 5, wherein prior to obtaining the target recorded video, the method further comprises:
determining the maximum value of the brightness change amplitude between adjacent frame images in the multi-frame loaded image before the first frame image of the first video is played as the preset threshold value; or,
and obtaining a preset threshold value according to the corresponding relation between the video application and the threshold value, wherein the preset threshold value corresponds to the video application playing the first video.
7. The method of claim 1, further comprising:
acquiring a screen recording frame rate of the target recorded video, a frame number corresponding to the first image and a frame number corresponding to the first frame image;
the determining a time interval between the first image and the first frame image comprises:
and calculating the time interval between the first image and the first frame image according to the screen recording frame rate, the frame number corresponding to the first image and the frame number corresponding to the first frame image.
8. A video playback detection apparatus, comprising:
the first acquisition module is used for acquiring a target recorded video, wherein the target recorded video is a target video segment comprising a first image and a first frame image of the first video, and the first image is a frame image corresponding to the first video when the first video is triggered to be played;
the framing module is used for framing the target recorded video to obtain continuous multi-frame images;
the image identification module is used for identifying the first image and the first frame image according to the multi-frame images, wherein the first frame image is determined based on the brightness change amplitude between the adjacent frame images of the multi-frame images;
and the processing module is used for determining a time interval between the first image and the first frame image and taking the time interval as the video playing starting time of the first video.
9. An electronic device comprising a processor and a transceiver, the transceiver receiving and transmitting data under control of the processor, characterized in that the processor is adapted to:
acquiring a target recorded video, wherein the target recorded video is a target video segment comprising a first image and a first frame image of the first video, and the first image is a frame image corresponding to the first video when the first video is triggered to be played;
framing the target recorded video to obtain continuous multi-frame images;
identifying the first image and the first frame image according to the multi-frame images, wherein the first frame image is determined based on the brightness change amplitude between the adjacent frame images of the multi-frame images;
and determining the time interval between the first image and the first frame image, and taking the time interval as the video playing starting duration of the first video.
10. The electronic device of claim 9, wherein the processor is further configured to:
after the screen recording is started, triggering and playing the first video;
and when the recording time reaches the preset time, stopping recording the screen, and storing the recorded video as the target recorded video.
11. The electronic device of claim 9, wherein the processor is further configured to:
acquiring a screen recording frame rate of the target recorded video;
cutting the target recorded video, and storing the video content in a video playing area as the cut recorded video;
and framing the cut recorded video according to the screen recording frame rate to obtain continuous multi-frame images.
12. The electronic device of claim 9, wherein the processor is further configured to:
traversing the multi-frame images for image recognition according to the time sequence of the multi-frame image display during video playing;
and determining the current frame image as the first image when the current frame image is identified to have the preset image characteristic which triggers the playing of the first video.
13. The electronic device of claim 9, wherein the processor is further configured to:
after the first image is identified, traversing frame images behind the first image to extract brightness components to obtain a brightness mean value of an ith frame image and a brightness mean value of an (i + 1) th frame image in the frame images behind the first image, wherein the (i + 1) th frame image is a current frame image, i is not less than 1, and i is a positive integer;
calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image;
under the condition that the brightness change amplitude is smaller than a preset threshold value, adding 1 to i, and returning to the step of calculating the brightness change amplitude between the ith frame image and the (i + 1) th frame image until the brightness change amplitude is larger than the preset threshold value;
and under the condition that the brightness change amplitude is larger than a preset threshold value, determining the (i + 1) th frame image as the first frame image.
14. The electronic device of claim 13, wherein the processor is further configured to:
determining, as the preset threshold value, the maximum brightness change amplitude between adjacent frame images among the multi-frame loading images displayed before the first frame image of the first video is played; or,
and obtaining the preset threshold value according to a correspondence between video applications and threshold values, wherein the preset threshold value corresponds to the video application playing the first video.
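The first alternative of claim 14 — taking the largest adjacent-frame brightness change observed during the loading animation as the threshold — might look like the following, again assuming per-frame mean-brightness values are already available:

```python
def loading_phase_threshold(loading_means):
    """Preset threshold per claim 14: the maximum brightness-change
    amplitude between adjacent frames of the loading images shown
    before the first video frame is played."""
    return max(abs(b - a) for a, b in zip(loading_means, loading_means[1:]))
```

Any change larger than this value is, by construction, bigger than anything the loading animation produced, so it is attributed to actual video content appearing.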
15. The electronic device of claim 9, wherein the processor is further configured to:
acquiring the screen recording frame rate of the target recorded video, the frame number corresponding to the first image, and the frame number corresponding to the first frame image;
and calculating the time interval between the first image and the first frame image according to the screen recording frame rate, the frame number corresponding to the first image, and the frame number corresponding to the first frame image.
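Claim 15's calculation reduces to dividing the frame-number difference by the screen recording frame rate; a sketch with hypothetical parameter names:

```python
def start_delay_seconds(fps, trigger_frame_no, first_frame_no):
    """Time interval between the first image (the frame where playback
    was triggered) and the first frame image (the frame where playback
    visibly started), in seconds, given the screen recording frame rate."""
    return (first_frame_no - trigger_frame_no) / fps
```

At 60 fps, a trigger at frame 120 and a first played frame at frame 210 gives `start_delay_seconds(60, 120, 210)`, i.e. 1.5 seconds of start-up delay.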
16. An electronic device comprising a memory, a processor, and a program stored on the memory and executable on the processor, characterized in that the processor implements the video playing detection method according to any one of claims 1 to 7 when executing the program.
17. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the video playing detection method according to any one of claims 1 to 7.
CN202110061115.1A 2021-01-18 2021-01-18 Video playing detection method and device and electronic equipment Pending CN114827712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110061115.1A CN114827712A (en) 2021-01-18 2021-01-18 Video playing detection method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114827712A true CN114827712A (en) 2022-07-29

Family

ID=82523637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110061115.1A Pending CN114827712A (en) 2021-01-18 2021-01-18 Video playing detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114827712A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5083860A (en) * 1990-08-31 1992-01-28 Institut For Personalized Information Environment Method for detecting change points in motion picture images
US5732146A (en) * 1994-04-18 1998-03-24 Matsushita Electric Industrial Co., Ltd. Scene change detecting method for video and movie
CN108882019A (en) * 2017-05-09 2018-11-23 腾讯科技(深圳)有限公司 Video playing test method, electronic equipment and system
CN110177270A (en) * 2019-06-05 2019-08-27 北京字节跳动网络技术有限公司 Video head frame test method and device
CN111131812A (en) * 2019-12-31 2020-05-08 北京奇艺世纪科技有限公司 Broadcast time testing method and device and computer readable storage medium
CN112203150A (en) * 2020-09-30 2021-01-08 腾讯科技(深圳)有限公司 Time-consuming acquisition method, device, equipment and computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396705A (en) * 2022-08-19 2022-11-25 上海哔哩哔哩科技有限公司 Screen projection operation verification method, platform and system
CN115396705B (en) * 2022-08-19 2024-03-19 上海哔哩哔哩科技有限公司 Screen operation verification method, platform and system

Similar Documents

Publication Publication Date Title
US8379154B2 (en) Key-frame extraction from video
CN107509107B (en) Method, device and equipment for detecting video playing fault and readable medium
US8913195B2 (en) Information processing device, information processing method and program
JP6343430B2 (en) Video detection apparatus and missing video frame detection method
CN107801093B (en) Video rendering method and device, computer equipment and readable storage medium
CN115396705B (en) Screen operation verification method, platform and system
CN114902687A (en) Game screen recording method and device and computer readable storage medium
CN114026874A (en) Video processing method and device, mobile device and readable storage medium
CN110933406A (en) Objective evaluation method for short video music matching quality
CN114827712A (en) Video playing detection method and device and electronic equipment
CN110582016A (en) video information display method, device, server and storage medium
US20130242116A1 (en) Image processing apparatus, electronic device, image processing method, and program
CN110322525B (en) Method and terminal for processing dynamic diagram
JP2007304948A (en) Image quality objective evaluation device and method
CN110113630B (en) Video detection method and device, electronic equipment and storage medium
CN114745537A (en) Sound and picture delay testing method and device, electronic equipment and storage medium
WO2023280117A1 (en) Indication signal recognition method and device, and computer storage medium
CN114422777A (en) Image recognition-based time delay testing method and device and storage medium
EP1988405A2 (en) Photographic subject tracking method, computer program and photographic subject tracking device
CN113141433B (en) Method and device for testing screen sensitivity and processor
JPH07236153A (en) Detection of cut point of moving picture and device for detecting cut picture group
CN112770080B (en) Meter reading method, meter reading device and electronic equipment
CN115243073A (en) Video processing method, device, equipment and storage medium
CN114116464A (en) Image processing test method and device
US20110097000A1 (en) Face-detection Processing Methods, Image Processing Devices, And Articles Of Manufacture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination