CN111669574A - Video playing quality detection method and device - Google Patents

Video playing quality detection method and device

Info

Publication number
CN111669574A (application number CN202010563626.9A)
Authority: CN (China)
Prior art keywords: frame, image, sample video, images, determining
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis): Withdrawn
Application number: CN202010563626.9A
Other languages: Chinese (zh)
Inventor: 廖钜城
Current Assignee: New H3C Big Data Technologies Co Ltd
Original Assignee: New H3C Big Data Technologies Co Ltd
Application filed by New H3C Big Data Technologies Co Ltd
Priority to CN202010563626.9A
Publication of CN111669574A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/004: Diagnosis, testing or measuring for television systems or their details, for digital television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application relates to the technical field of virtual desktops, and in particular to a video playing quality detection method and device. The method comprises the following steps: playing a first sample video on a virtual machine, wherein the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a unique identifier corresponding to that frame image; collecting a second sample video displayed on the client corresponding to the virtual machine, splitting the second sample video into frame-by-frame images, and obtaining a unique-identifier list of the frame images based on their time order; and determining the playing quality detection result of the first sample video based on the obtained unique-identifier list. This video playing quality detection method achieves automatic detection of video playing quality, reduces labor cost, and improves the precision and accuracy of the detection result by quantifying the result as data.

Description

Video playing quality detection method and device
Technical Field
The application relates to the technical field of virtual desktops, in particular to a video playing quality detection method and device.
Background
VDI (Virtual Desktop Infrastructure) mainly provides services to users through virtual machines partitioned from servers: the virtual machines run on servers in the data center, and virtual desktops are presented to users at the client side through VDI technology, so that users can control the virtual machines through their clients. The Simple Protocol for Independent Computing Environments (SPICE) is a virtualization transport protocol used in VDI; a client (e.g., a thin client (TC) or a home PC) connects to a virtual machine through SPICE and exchanges data with it.
When video playing data is transmitted between the client and the virtual machine through SPICE, various causes can lead to problems such as stuttering (freezing) and screen corruption ("screen splash") during playback at the client. At present, the usual way to detect whether the video playing process has a problem is for a tester to manually watch the video playback at the client and judge whether problems such as stuttering or screen corruption occur; if so, a developer is contacted to locate and reproduce the problem.
This detection method requires constant monitoring by testers, so labor cost is high. Meanwhile, the severity of a problem must be judged manually during playback, and slight problems are easily overlooked, causing detection omissions and low detection precision.
Disclosure of Invention
The embodiment of the application provides a video playing quality detection method and device, which are used for solving the problems of low detection precision and inaccurate detection result in the prior art.
The embodiment of the application provides the following specific technical scheme:
in a first aspect, the present application provides a method for detecting video playing quality, where the method includes:
playing a first sample video on a virtual machine, wherein the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a unique identifier corresponding to each frame image;
acquiring a second sample video displayed on a client corresponding to the virtual machine, splitting the second sample video into frame-by-frame images, and acquiring a unique identification list of each frame of image based on a time sequence of each frame of image;
and determining the play quality detection result of the first sample video based on the acquired unique identification list of each frame of image.
Optionally, before playing the first sample video on the virtual machine, the method further includes:
splitting a video to be detected into frame-by-frame images;
adding a unique identifier for uniquely identifying each frame of image at the designated position of each frame of image;
and merging the frames of images added with the unique identification into the first sample video.
Optionally, the step of determining the play quality detection result of the first sample video based on the obtained unique identifier list of each frame of image includes:
determining a first number of image frames to be played by the first sample video in a period based on the frame rate and the period of the first sample video, and respectively determining a second number of non-repeated unique identifiers appearing in each period based on the period;
judging whether the difference value between the first number and the second number respectively corresponding to each period is greater than or equal to a first preset threshold value;
and if the difference value between the first number and the second number corresponding to one period is judged to be greater than or equal to the first preset threshold value, determining that frame loss occurs when the first sample video is played in the period.
Optionally, the step of determining the play quality detection result of the first sample video based on the obtained unique identifier list of each frame of image includes:
judging whether the continuous occurrence frequency of the unique identifier of the same frame of image is greater than or equal to a second preset threshold value or not based on the acquired unique identifier list of each frame of image;
and if the continuous occurrence frequency of the unique identifier of one frame of image is judged to be greater than or equal to a second preset threshold value, determining that the frame of image is stuck when being played.
Optionally, after splitting the second sample video into frame-by-frame images, the method further comprises:
the following operations are respectively performed for each frame image of the frame images of the second sample video: determining the frame image of the first sample video corresponding to the target frame image based on the unique identifier of the target frame image, and calculating the similarity of the two corresponding frame images; if the similarity of the two frame images is larger than or equal to a first threshold value, determining that the image restoration degree is high when the target frame image is played; and if the similarity of the two images is less than or equal to a second threshold value, determining that the image restoration degree is low when the target frame image is played.
Optionally, the method further comprises:
if the similarity of the two frames of images is smaller than a first threshold and larger than a second threshold, dividing the target frame of image into N blocks of images, and respectively calculating peak signal-to-noise ratios of the N blocks of images;
and calculating the mean value of the peak signal-to-noise ratios of the N block images, and if the difference value between the mean value of the peak signal-to-noise ratios of the N block images and the peak signal-to-noise ratio of one of the N block images is judged to be greater than or equal to a third threshold value, determining that screen corruption (screen splash) occurs at that block of the target frame image.
In a second aspect, the present application provides an apparatus for detecting video playing quality, the apparatus comprising:
the system comprises a playing unit, a processing unit and a processing unit, wherein the playing unit is used for playing a first sample video on a virtual machine, the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a unique identifier corresponding to each frame image;
the execution unit is used for acquiring a second sample video displayed on a client corresponding to the virtual machine, splitting the second sample video into frame-by-frame images, and acquiring a unique identification list of each frame of image based on a time sequence of each frame of image;
and the determining unit is used for determining the playing quality detection result of the first sample video based on the acquired unique identification list of each frame of image.
Optionally, before playing the first sample video on the virtual machine, the execution unit is further configured to:
splitting a video to be detected into frame-by-frame images;
adding a unique identifier for uniquely identifying each frame of image at the designated position of each frame of image;
and merging the frames of images added with the unique identification into the first sample video.
Optionally, when determining the play quality detection result of the first sample video based on the obtained unique identifier list of each frame of image, the determining unit is specifically configured to:
determining a first number of image frames to be played by the first sample video in a period based on the frame rate and the period of the first sample video, and respectively determining a second number of non-repeated unique identifiers appearing in each period based on the period;
judging whether the difference value between the first number and the second number respectively corresponding to each period is greater than or equal to a first preset threshold value;
and if the difference value between the first number and the second number corresponding to one period is judged to be greater than or equal to the first preset threshold value, determining that frame loss occurs when the first sample video is played in the period.
Optionally, when determining the play quality detection result of the first sample video based on the obtained unique identifier list of each frame of image, the determining unit is specifically configured to:
judging whether the continuous occurrence frequency of the unique identifier of the same frame of image is greater than or equal to a second preset threshold value or not based on the acquired unique identifier list of each frame of image;
and if the continuous occurrence frequency of the unique identifier of one frame of image is judged to be greater than or equal to a second preset threshold value, determining that the frame of image is stuck when being played.
Optionally, after splitting the second sample video into frame-by-frame images, the determining unit is further configured to:
the following operations are respectively performed for each frame image of the frame images of the second sample video: determining the frame image of the first sample video corresponding to the target frame image based on the unique identifier of the target frame image, and calculating the similarity of the two corresponding frame images; if the similarity of the two frame images is larger than or equal to a first threshold value, determining that the image restoration degree is high when the target frame image is played; and if the similarity of the two images is less than or equal to a second threshold value, determining that the image restoration degree is low when the target frame image is played.
Optionally, the determining unit is further configured to:
if the similarity of the two frames of images is smaller than a first threshold and larger than a second threshold, dividing the target frame of image into N blocks of images, and respectively calculating peak signal-to-noise ratios of the N blocks of images;
and calculating the mean value of the peak signal-to-noise ratios of the N block images, and if the difference value between the mean value of the peak signal-to-noise ratios of the N block images and the peak signal-to-noise ratio of one of the N block images is judged to be greater than or equal to a third threshold value, determining that screen corruption (screen splash) occurs at that block of the target frame image.
The beneficial effects of this application are as follows:
to sum up, the video playing quality detection method provided by the present application plays a first sample video on a virtual machine, where the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a unique identifier corresponding to the frame image; acquiring a second sample video displayed on a client corresponding to the virtual machine, splitting the second sample video into frame-by-frame images, and acquiring a unique identification list of each frame of image based on a time sequence of each frame of image; and determining the play quality detection result of the first sample video based on the acquired unique identification list of each frame of image.
With this video playing quality detection method, a unique identifier that uniquely identifies each frame image of the video to be detected is added to each frame image; the video data transmitted from the virtual machine side is collected at the client where the user views the video; the collected video is split into frame-by-frame images; the unique identifier of each frame image is recognized to obtain a unique-identifier list; and the playing quality of the video to be detected is judged from that list. This achieves automatic detection of video playing quality, reduces labor cost, and improves the precision and accuracy of the detection result by quantifying it as data.
Drawings
Fig. 1 is a schematic diagram of a framework of a VDI system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a video playing quality detection method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a process of acquiring a second sample video according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video playback quality detection apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First, the term "and/or" in the embodiments of the present application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
When the present application refers to the ordinal numbers "first", "second", "third" or "fourth", etc., it should be understood that this is done for differentiation only, unless it is clear from the context that the order is actually expressed.
The scheme of the present application will be described in detail by specific examples, but the present application is not limited to the following examples.
Illustratively, referring to fig. 1, an architectural diagram of a VDI system provided by the present application is shown, in the embodiment of the present application, a desktop is virtualized (i.e., a virtual machine is created) on a server in a data center, and a client is remotely connected to the virtual desktop through a virtualization transport protocol (e.g., SPICE protocol), so that related services (e.g., playing video) executed on the virtual machine can be displayed on the client.
Specifically, in this embodiment the virtualization transport protocol is described as the SPICE protocol. SPICE consists of four parts: the protocol, the client, the server, and the virtual machine side. The protocol is the specification followed by the client, server, and virtual machine side when they interact;
client (Client): the data processing system is responsible for receiving and converting virtual machine data and sending user input data to the virtual machine, so that a user can interact with the virtual machine; the Spice client handles different events by establishing different channels (channels). The client implements each channel as a separate thread, the client channel comprising: RedClient (main), Display Channel, Cursor Channel, Inputs Channel, Playback Channel, Recordchannel, etc. RedClient is responsible for building the main channel, which then creates the following channels via channel _ type: 1. display Channel: the system is responsible for processing graphic commands, pictures and video stream display; 2. inputs Channel: the system is responsible for processing keyboard and mouse input; 3. cursor Channel: responsible for handling the display of pointer device location, visibility, and shape; 4. playback Channel: the server is responsible for receiving the sound data of the server and playing the sound data at the client; 5. RecordChannel: and the system is responsible for capturing sound of the sound equipment of the client and transferring the sound into the virtual machine.
Server (Server): a user-layer component integrated inside the Hypervisor so that the Hypervisor (such as QEMU) supports the SPICE protocol. The SPICE server likewise has multiple channels corresponding to those of the client. These channels are mainly responsible for forwarding client-side user input to the virtual machine's virtual devices (such as keyboard and mouse), and for receiving and displaying the picture produced by the virtual machine's virtual display card (QXL). To keep the SPICE server relatively independent, the desktop cloud interacts with the virtual devices seen by the virtual machines through the various virtual-device back-end interfaces provided by QEMU, such as the Playback Interface.
Virtual machine side (VM): refers to all necessary components deployed inside the virtual machine, such as the QXL driver and the SPICE agent.
In actual operation, a graphics command begins with a graphics application inside the virtual machine requesting a drawing operation (e.g., a GDI command) from the OS. SPICE then uses the QXL driver installed in the virtual machine to capture the application's drawing operation, converts it into a SPICE QXL command, and transmits it to the back end of the server's QEMU virtual QXL device. LibSpice then reads the QXL command, recombines and optimizes it, encapsulates it in the SPICE protocol message format, and sends it to the client. Finally, the client parses the graphics operation message according to the SPICE protocol and completes the screen update.
Exemplarily, referring to fig. 2, a schematic flow chart of the video playing quality detection method provided by the present application is shown. The method is applied to a VDI system; its detailed flow is as follows:
step 200: playing a first sample video on a virtual machine, wherein the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a corresponding unique identifier.
In this embodiment of the application, before performing step 200, the video playing quality detection method may further include: splitting a video to be detected into frame-by-frame images; adding a unique identifier for uniquely identifying each frame of image at the designated position of each frame of image; and merging the frames of images added with the unique identification into the first sample video.
For example, first, a video splitting tool may be used to split the video to be detected into frame-by-frame images; then, a frame number is added to the upper left corner of each frame image; finally, a video merging tool merges the frame images with added frame numbers into the first sample video, which is the video played on the virtual machine.
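The stamping step above can be sketched in pure Python. As an illustrative variation (not the patent's method, which prints a visible frame number and reads it back with OCR), this sketch encodes the frame number directly into the top-left pixel values of a grayscale frame; all function names here are hypothetical.

```python
# Minimal sketch: embed a frame number in the top-left pixels of a
# grayscale frame and read it back. This is an assumption-labelled
# alternative to the patent's printed-number-plus-OCR approach.

def stamp_frame_number(frame, number, bits=16):
    """Encode `number` into the first `bits` pixels of row 0.

    `frame` is a list of rows, each row a list of 0-255 gray values.
    Each bit is written as a saturated (255) or black (0) pixel so it
    is robust to mild noise; the input frame is not mutated.
    """
    stamped = [row[:] for row in frame]
    for i in range(bits):
        bit = (number >> (bits - 1 - i)) & 1
        stamped[0][i] = 255 if bit else 0
    return stamped

def read_frame_number(frame, bits=16, threshold=128):
    """Decode the identifier written by stamp_frame_number."""
    number = 0
    for i in range(bits):
        bit = 1 if frame[0][i] >= threshold else 0
        number = (number << 1) | bit
    return number
```

With 16 bits this scheme distinguishes up to 65536 frames, which covers roughly 45 minutes of 24 frames/second video.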
Specifically, in this embodiment, when step 200 is executed, the virtual machine plays the first sample video locally upon receiving an externally input instruction to play the first sample video. As described above, while the virtual machine plays the first sample video locally, it sends the video-playing-related data to the client; the client parses that data and then restores and displays the video locally.
Step 210: the method comprises the steps of collecting a second sample video displayed on a client corresponding to the virtual machine, splitting the second sample video into frame-by-frame images, and acquiring a unique identification list of each frame of image based on a time sequence of each frame of image.
In practical application, when the first sample video is played on the virtual machine, the virtual machine transmits the data related to the first sample video to the client, so this data can be collected at the client through a collection tool; the client parses the received data and displays the video picture locally. In the embodiment of the present application, the video corresponding to the collected data related to the first sample video is referred to as the second sample video. Specifically, after collecting the data related to the first sample video, the collection tool obtains, through parsing, a second sample video corresponding to the first sample video. It should be noted that the second sample video can be understood as the video corresponding to the first sample video as played on the client.
It should be noted that the acquisition frequency used by the acquisition box is greater than the frame rate of the video to be detected. For example, if the frame rate of the video to be detected is 24 frames/second, the acquisition frequency of the acquisition box may be 60 frames/second.
Referring to fig. 3, a schematic diagram of the process of acquiring the second sample video is provided. When a virtual machine (not shown in the figure) plays the first sample video, the data related to the first sample video is sent to the corresponding VDI client, and the video picture is displayed on the VDI client. The acquisition box is connected to the VDI client through an HDMI (High-Definition Multimedia Interface) cable, so that it can acquire the data related to the first sample video from the VDI client; the acquisition box is connected to the acquisition terminal through a Type-C-to-USB cable, so that it can send the acquired data to the acquisition terminal; and the acquisition terminal parses the data related to the first sample video to obtain the second sample video.
Further, after the acquisition terminal obtains the second sample video, a video splitting tool can be used to split the second sample video into frame-by-frame images, and the unique identifier of each frame image is read in the time order of the frame images to form the unique-identifier list.
For example, assuming that the unique identification information on each frame image is a frame number, after splitting the second sample video into frame-by-frame images the acquisition terminal automatically recognizes the frame number on each frame image through Tesseract-OCR and generates a frame-number list. Illustratively, the frame number of the first frame image is 1, the frame number of the second frame image is 2, ..., and the frame number of the nth frame image is n. Then, if the frame rate of the second sample video is 24 frames/second and the acquisition frequency of the acquisition tool is 60 frames/second, frames are duplicated as normal: the 24 source frames are captured as 60 frames. That is, under normal conditions, the 60 captured frames contain frame images with 24 non-repeating frame numbers, and each frame image is captured 2-3 times in succession, i.e., each frame number repeats 2-3 times.
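The duplication pattern described above can be verified with a small simulation (an illustrative sketch, not the patent's tooling): sampling a 24 frames/second video at 60 captures/second yields 24 distinct frame numbers per 60 captures, each repeated 2-3 times.

```python
# Simulate capturing a src_fps video at cap_fps: each capture sample k
# (taken at time k / cap_fps) records the source frame on screen at
# that instant. Integer arithmetic avoids floating-point edge cases at
# exact frame boundaries.

def simulate_capture(src_fps=24, cap_fps=60, seconds=1):
    """Return the list of frame numbers a capture tool would record."""
    return [k * src_fps // cap_fps + 1
            for k in range(cap_fps * seconds)]
```

Running `simulate_capture()` produces 60 identifiers covering frame numbers 1 through 24, with each number appearing 2 or 3 consecutive times, matching the normal pattern described in the text.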
Step 220: and determining the play quality detection result of the first sample video based on the acquired unique identification list of each frame of image.
In the embodiment of the present application, when determining the playing quality detection result of the first sample video based on the obtained unique-identifier list of each frame image, a preferred implementation is to determine, based on the frame rate of the first sample video and a period, a first number of image frames the first sample video should play in that period, and to determine, period by period, a second number of non-repeating unique identifiers appearing in each period; to judge whether the difference between the first number and the second number for each period is greater than or equal to a first preset threshold; and, if the difference for some period is greater than or equal to the first preset threshold, to determine that frame loss occurred when the first sample video was played in that period.
For example, assume the period is 1 second, the frame rate of the first sample video is 24 frames/second, and the acquisition frequency of the acquisition box is 60 frames/second. First, the first number of image frames the first sample video should play within 1 second is determined from its frame rate and the period (24 frames/second × 1 second = 24 frames); then, with a period of 1 second, the second number of distinct unique identifiers occurring within each period (i.e., within every 60 captured identifiers) is determined. The second number is, in effect, the frame rate actually achieved by the second sample video; under normal conditions it is 24. If the second number is much smaller than the first number (24), for example 20, it indicates that frame loss occurred while the data related to the first sample video was transmitted from the virtual machine to the client.
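The per-period check above can be sketched as follows (a hedged illustration; the function name and the example threshold of 4 frames are assumptions, since the patent leaves the first preset threshold unspecified):

```python
# Frame-loss check: count distinct identifiers in each period's worth
# of captures and compare against the number of frames the source
# should have played ("first number" minus "second number").

def detect_frame_loss(id_list, captures_per_period=60,
                      expected_frames=24, threshold=4):
    """Return the indices of periods in which frame loss is detected."""
    lost_periods = []
    for p in range(len(id_list) // captures_per_period):
        chunk = id_list[p * captures_per_period:
                        (p + 1) * captures_per_period]
        distinct = len(set(chunk))        # the "second number"
        if expected_frames - distinct >= threshold:
            lost_periods.append(p)
    return lost_periods
```

A healthy period (24 distinct identifiers in 60 captures) is not flagged; a period with only 20 distinct identifiers is flagged, matching the example in the text.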
Optionally, in this embodiment of the application, when determining the play quality detection result of the first sample video based on the acquired unique identifier list of each frame of image, another preferable implementation manner is to determine whether the number of times that the unique identifier of the same frame of image appears continuously is greater than or equal to a second preset threshold value based on the acquired unique identifier list of each frame of image; and if the continuous occurrence frequency of the unique identifier of one frame of image is judged to be greater than or equal to a second preset threshold value, determining that the frame of image is stuck when being played.
Still taking a frame rate of 24 frames/second for the video to be detected and an acquisition frequency of 60 frames/second for the acquisition box as an example: normally, each unique identifier in the unique-identifier list repeats 2-3 times in succession, so when one unique identifier repeats many times in succession, it can be considered that the video was stuck while that frame image was being played.
For example, in practical applications, the human eye can perceive a stall when the video freezes for about 200 milliseconds (ms); at an acquisition frequency of 60 frames/second this corresponds to 12 consecutive captures, so when the unique identifier of the same frame image appears more than 12 times in succession in the unique-identifier list, it is determined that the video stalled while playing that frame image.
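The stall check reduces to finding runs of identical identifiers; a minimal sketch (function name assumed, threshold taken from the 200 ms example above):

```python
# Stall check: at 60 captures/second, 200 ms corresponds to 12
# consecutive captures, so a run of at least max_run identical
# identifiers marks a perceptible stall.

from itertools import groupby

def detect_stalls(id_list, max_run=12):
    """Return the identifiers whose consecutive runs reach max_run."""
    stalled = []
    for ident, run in groupby(id_list):
        if len(list(run)) >= max_run:
            stalled.append(ident)
    return stalled
```

Normal 2-3 capture repeats fall far below the threshold, so only genuinely frozen frames are reported.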
It should be noted that, in the embodiment of the present application, the first preset threshold and the second preset threshold may be preset based on a user requirement and/or a specific application scenario, and in the embodiment of the present application, the first preset threshold and the second preset threshold are not specifically limited herein.
Further, in this embodiment of the application, after splitting the second sample video into frame-by-frame images, the method may further include: the following operations are respectively performed for each frame image of the frame images of the second sample video: determining the frame image of the first sample video corresponding to the target frame image based on the unique identifier of the target frame image, and calculating the similarity of the two corresponding frame images; if the similarity of the two frame images is larger than or equal to a first threshold value, determining that the image restoration degree is high when the target frame image is played; and if the similarity of the two images is less than or equal to a second threshold value, determining that the image restoration degree is low when the target frame image is played.
Furthermore, if the similarity of the two frame images is smaller than the first threshold and larger than the second threshold, the target frame image is divided into N blocks of images, and the peak signal-to-noise ratio of each of the N blocks is calculated; the mean of the N peak signal-to-noise ratios is then calculated, and if the difference between that mean and the peak signal-to-noise ratio of one of the N blocks is judged to be greater than or equal to a third threshold value, it is determined that screen corruption (screen splash) occurs at that block of the target frame image.
For example, after splitting a second sample video acquired by an acquisition terminal into frame-by-frame images, the frame image of the first sample video corresponding to each frame image of the second sample video may be determined based on the unique identifiers of the frame images, the two corresponding frame images may be converted into grayscale images, and their similarity (e.g., Structural Similarity (SSIM)) may be calculated.
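The similarity computation can be illustrated with a simplified, single-window SSIM over flattened grayscale images; this is a sketch only — production implementations (e.g., scikit-image's `structural_similarity`) use a sliding local window, which is omitted here:

```python
def global_ssim(img_a, img_b, max_val=255.0):
    """Single-window SSIM between two equally sized flattened gray images."""
    n = len(img_a)
    mu_a = sum(img_a) / n
    mu_b = sum(img_b) / n
    var_a = sum((x - mu_a) ** 2 for x in img_a) / n
    var_b = sum((x - mu_b) ** 2 for x in img_b) / n
    cov = sum((a - mu_a) * (b - mu_b) for a, b in zip(img_a, img_b)) / n
    c1 = (0.01 * max_val) ** 2          # standard SSIM stabilizers
    c2 = (0.03 * max_val) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

img = [float(x) for x in range(256)]
print(round(global_ssim(img, img), 4))  # → 1.0 for identical images
```

A result near 1.0 indicates high restoration fidelity; inverted or heavily distorted content drives the score toward (or below) zero.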
Further, if the similarity is greater than 70% and less than 92%, the frame image of the second sample video is segmented into image blocks of 200 pixels by 200 pixels, the Peak Signal-to-Noise Ratio (PSNR) of each image block is calculated, the mean PSNR over all image blocks is computed, and each block's PSNR is compared with that mean; if there is an image block whose PSNR is far below the mean, it is determined that an image quality problem (e.g., image tearing, partial blurring) occurs at that block.
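The block-wise PSNR screening might be sketched as follows; the helper names and the 10 dB gap (a stand-in for the third threshold) are illustrative assumptions, and blocks are flattened grayscale pixel lists:

```python
import math

def psnr(block_a, block_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized gray blocks."""
    mse = sum((a - b) ** 2 for a, b in zip(block_a, block_b)) / len(block_a)
    if mse == 0:
        return float("inf")             # identical blocks
    return 10.0 * math.log10(max_val ** 2 / mse)

def find_corrupt_blocks(ref_blocks, cap_blocks, delta=10.0):
    """Flag blocks whose PSNR falls `delta` dB or more below the mean PSNR."""
    scores = [psnr(r, c) for r, c in zip(ref_blocks, cap_blocks)]
    finite = [s for s in scores if s != float("inf")]
    if not finite:
        return []                       # every block matched exactly
    mean = sum(finite) / len(finite)
    return [i for i, s in enumerate(scores) if mean - s >= delta]

# Three blocks with mild noise and one badly corrupted block:
ref = [[100] * 16] * 4
cap = [[101] * 16, [101] * 16, [101] * 16, [50] * 16]
print(find_corrupt_blocks(ref, cap))  # → [3]
```

Comparing each block against the mean rather than a fixed PSNR floor makes the check insensitive to uniform compression loss while still isolating localized corruption.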
In practical application, the first sample video is stored in advance locally in the acquisition terminal, so that the acquisition terminal can locally split the first sample video into frame-by-frame images, and during subsequent comparison, the corresponding frame image can be determined from the frame-by-frame images split from the first sample video according to the unique identifier of each frame image in the second sample video.
It should be noted that the first threshold, the second threshold, and the third threshold may be preset based on user requirements and/or the specific application scenario, and are not specifically limited in the embodiments of the present application.
Based on the foregoing embodiment, referring to fig. 4, a schematic structural diagram of a video playing quality detection apparatus provided by the present application is shown, where the apparatus includes:
the playing unit 40 is configured to play a first sample video on a virtual machine, where the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a unique identifier corresponding to each frame image;
the execution unit 41 is configured to collect a second sample video displayed on a client corresponding to the virtual machine, split the second sample video into frame-by-frame images, and obtain a unique identifier list of each frame image based on a time sequence of each frame image;
and a determining unit 42, configured to determine a playing quality detection result of the first sample video based on the obtained unique identifier list of each frame of image.
Optionally, before playing the first sample video on the virtual machine, the execution unit 41 is further configured to:
splitting a video to be detected into frame-by-frame images;
adding a unique identifier for uniquely identifying each frame of image at the designated position of each frame of image;
and merging the frames of images added with the unique identification into the first sample video.
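The three operations above (split, stamp, merge) can be illustrated with a toy scheme in which the frame number is stamped as an 8-bit black-and-white strip at a designated position; this scheme and the helper names are hypothetical, and a practical system would stamp a more robust mark (e.g., a QR code) that survives video encoding:

```python
def stamp_frame(frame, frame_no, bits=8):
    """Write frame_no as black/white pixels at the designated position."""
    stamped = [row[:] for row in frame]          # copy, leave input intact
    for i in range(bits):
        bit = (frame_no >> (bits - 1 - i)) & 1
        stamped[0][i] = 255 if bit else 0        # top-left binary strip
    return stamped

def read_stamp(frame, bits=8):
    """Recover the identifier from a (possibly re-captured) frame."""
    value = 0
    for i in range(bits):
        value = (value << 1) | (1 if frame[0][i] >= 128 else 0)
    return value

frame = [[64] * 16 for _ in range(16)]           # 16x16 gray frame
print(read_stamp(stamp_frame(frame, 42)))  # → 42
```

Thresholding at mid-gray when reading back makes the identifier recoverable even after mild compression noise shifts the stamped pixel values.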
Optionally, when determining the play quality detection result of the first sample video based on the obtained unique identifier list of each frame of image, the determining unit 42 is specifically configured to:
determining a first number of image frames to be played by the first sample video in a period based on the frame rate and the period of the first sample video, and respectively determining a second number of non-repeated unique identifiers appearing in each period based on the period;
judging whether the difference value between the first number and the second number respectively corresponding to each period is greater than or equal to a first preset threshold value;
and if the difference value between the first number and the second number corresponding to one period is judged to be greater than or equal to the first preset threshold value, determining that frames are lost when the first sample video is played in the period.
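This period-based check can be sketched as follows; the function name and the default threshold are illustrative assumptions:

```python
def frames_lost_in_period(ids_in_period, frame_rate, period_s, threshold=1):
    """Compare frames that should have played against distinct ids seen."""
    expected = int(frame_rate * period_s)    # the "first number"
    observed = len(set(ids_in_period))       # the "second number"
    return (expected - observed) >= threshold

# A 1-second period at 30 fps in which only 27 distinct frames arrived:
print(frames_lost_in_period(range(27), 30, 1, threshold=2))  # → True
```

Counting distinct identifiers (rather than captured frames) keeps a stalled frame that repeats across captures from masking the frames dropped around it.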
Optionally, when determining the play quality detection result of the first sample video based on the obtained unique identifier list of each frame of image, the determining unit 42 is specifically configured to:
judging whether the continuous occurrence frequency of the unique identifier of the same frame of image is greater than or equal to a second preset threshold value or not based on the acquired unique identifier list of each frame of image;
and if the continuous occurrence frequency of the unique identifier of one frame of image is judged to be greater than or equal to a second preset threshold value, determining that the frame of image is stuck when being played.
Optionally, after splitting the second sample video into frame-by-frame images, the determining unit 42 is further configured to:
the following operations are respectively performed for each frame image of the frame images of the second sample video: determining the frame image of the first sample video corresponding to the target frame image based on the unique identifier of the target frame image, and calculating the similarity of the two corresponding frame images; if the similarity of the two frame images is larger than or equal to a first threshold value, determining that the image restoration degree is high when the target frame image is played; and if the similarity of the two images is less than or equal to a second threshold value, determining that the image restoration degree is low when the target frame image is played.
Optionally, the determining unit 42 is further configured to:
if the similarity of the two frames of images is smaller than a first threshold and larger than a second threshold, dividing the target frame of image into N blocks of images, and respectively calculating peak signal-to-noise ratios of the N blocks of images;
and calculating the mean value of the peak signal-to-noise ratios of the N image blocks, and if the difference value between the mean value and the peak signal-to-noise ratio of one of the N image blocks is judged to be larger than or equal to a third threshold value, determining that screen corruption occurs at that block of the target frame image.
To sum up, the video playing quality detection method provided by the present application plays a first sample video on a virtual machine, where the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a unique identifier corresponding to the frame image; acquiring a second sample video displayed on a client corresponding to the virtual machine, splitting the second sample video into frame-by-frame images, and acquiring a unique identification list of each frame of image based on a time sequence of each frame of image; and determining the play quality detection result of the first sample video based on the acquired unique identification list of each frame of image.
With the video playing quality detection method described above, a unique identifier that uniquely identifies each frame image of the video to be detected is added to each frame image. The video data transmitted from the virtual machine side is collected at the client that displays the video to the user, the collected video is split into frame-by-frame images, and the unique identifier of each frame image is recognized to obtain the unique identifier list. The playing quality of the video to be detected is then judged from this list, realizing automatic detection of video playing quality, reducing labor cost, and improving the precision and accuracy of the detection result by quantizing it as data.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (12)

1. A video playing quality detection method is characterized by comprising the following steps:
playing a first sample video on a virtual machine, wherein the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a unique identifier corresponding to each frame image;
acquiring a second sample video displayed on a client corresponding to the virtual machine, splitting the second sample video into frame-by-frame images, and acquiring a unique identification list of each frame of image based on a time sequence of each frame of image;
and determining the play quality detection result of the first sample video based on the acquired unique identification list of each frame of image.
2. The method of claim 1, wherein prior to playing the first sample video on the virtual machine, the method further comprises:
splitting a video to be detected into frame-by-frame images;
adding a unique identifier for uniquely identifying each frame of image at the designated position of each frame of image;
and merging the frames of images added with the unique identification into the first sample video.
3. The method according to claim 1 or 2, wherein the step of determining the play quality detection result of the first sample video based on the acquired unique identification list of each frame image comprises:
determining a first number of image frames to be played by the first sample video in a period based on the frame rate and the period of the first sample video, and respectively determining a second number of non-repeated unique identifiers appearing in each period based on the period;
judging whether the difference value between the first number and the second number respectively corresponding to each period is greater than or equal to a first preset threshold value;
and if the difference value between the first number and the second number corresponding to one period is judged to be larger than or equal to a first preset threshold value, determining that frames are lost when the first sample video is played in the period.
4. The method according to claim 1 or 2, wherein the step of determining the play quality detection result of the first sample video based on the acquired unique identification list of each frame image comprises:
judging whether the continuous occurrence frequency of the unique identifier of the same frame of image is greater than or equal to a second preset threshold value or not based on the acquired unique identifier list of each frame of image;
and if the continuous occurrence frequency of the unique identifier of one frame of image is judged to be greater than or equal to a second preset threshold value, determining that the frame of image is stuck when being played.
5. The method of claim 1 or 2, wherein after splitting the second sample video into frame-by-frame images, the method further comprises:
the following operations are respectively performed for each frame image of the frame images of the second sample video: determining the frame image of the first sample video corresponding to the target frame image based on the unique identifier of the target frame image, and calculating the similarity of the two corresponding frame images; if the similarity of the two frame images is larger than or equal to a first threshold value, determining that the image restoration degree is high when the target frame image is played; and if the similarity of the two images is less than or equal to a second threshold value, determining that the image restoration degree is low when the target frame image is played.
6. The method of claim 5, wherein the method further comprises:
if the similarity of the two frames of images is smaller than a first threshold and larger than a second threshold, dividing the target frame of image into N blocks of images, and respectively calculating peak signal-to-noise ratios of the N blocks of images;
and calculating the mean value of the peak signal-to-noise ratios of the N images, and if the difference value between the mean value of the peak signal-to-noise ratios of the N images and the peak signal-to-noise ratio of one image in the N images is judged to be larger than or equal to a third threshold value, determining that screen corruption occurs at that image in the target frame image.
7. An apparatus for detecting video playback quality, the apparatus comprising:
the system comprises a playing unit, a processing unit and a processing unit, wherein the playing unit is used for playing a first sample video on a virtual machine, the video played on the virtual machine is displayed on a client corresponding to the virtual machine, and each frame image of the first sample video carries a unique identifier corresponding to each frame image;
the execution unit is used for acquiring a second sample video displayed on a client corresponding to the virtual machine, splitting the second sample video into frame-by-frame images, and acquiring a unique identification list of each frame of image based on a time sequence of each frame of image;
and the determining unit is used for determining the playing quality detection result of the first sample video based on the acquired unique identification list of each frame of image.
8. The apparatus of claim 7, wherein prior to playing the first sample video on the virtual machine, the execution unit is further to:
splitting a video to be detected into frame-by-frame images;
adding a unique identifier for uniquely identifying each frame of image at the designated position of each frame of image;
and merging the frames of images added with the unique identification into the first sample video.
9. The apparatus according to claim 7 or 8, wherein when determining the result of detecting the playback quality of the first sample video based on the obtained unique identifier list of each frame image, the determining unit is specifically configured to:
determining a first number of image frames to be played by the first sample video in a period based on the frame rate and the period of the first sample video, and respectively determining a second number of non-repeated unique identifiers appearing in each period based on the period;
judging whether the difference value between the first number and the second number respectively corresponding to each period is greater than or equal to a first preset threshold value;
and if the difference value between the first number and the second number corresponding to one period is judged to be larger than or equal to a first preset threshold value, determining that frames are lost when the first sample video is played in the period.
10. The apparatus according to claim 7 or 8, wherein when determining the result of detecting the playback quality of the first sample video based on the obtained unique identifier list of each frame image, the determining unit is specifically configured to:
judging whether the continuous occurrence frequency of the unique identifier of the same frame of image is greater than or equal to a second preset threshold value or not based on the acquired unique identifier list of each frame of image;
and if the continuous occurrence frequency of the unique identifier of one frame of image is judged to be greater than or equal to a second preset threshold value, determining that the frame of image is stuck when being played.
11. The apparatus of claim 7 or 8, wherein after splitting the second sample video into frame-by-frame images, the determination unit is further to:
the following operations are respectively performed for each frame image of the frame images of the second sample video: determining the frame image of the first sample video corresponding to the target frame image based on the unique identifier of the target frame image, and calculating the similarity of the two corresponding frame images; if the similarity of the two frame images is larger than or equal to a first threshold value, determining that the image restoration degree is high when the target frame image is played; and if the similarity of the two images is less than or equal to a second threshold value, determining that the image restoration degree is low when the target frame image is played.
12. The apparatus of claim 11, wherein the determination unit is further to:
if the similarity of the two frames of images is smaller than a first threshold and larger than a second threshold, dividing the target frame of image into N blocks of images, and respectively calculating peak signal-to-noise ratios of the N blocks of images;
and calculating the mean value of the peak signal-to-noise ratios of the N images, and if the difference value between the mean value of the peak signal-to-noise ratios of the N images and the peak signal-to-noise ratio of one image in the N images is judged to be larger than or equal to a third threshold value, determining that screen corruption occurs at that image in the target frame image.
CN202010563626.9A 2020-06-19 2020-06-19 Video playing quality detection method and device Withdrawn CN111669574A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010563626.9A CN111669574A (en) 2020-06-19 2020-06-19 Video playing quality detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010563626.9A CN111669574A (en) 2020-06-19 2020-06-19 Video playing quality detection method and device

Publications (1)

Publication Number Publication Date
CN111669574A true CN111669574A (en) 2020-09-15

Family

ID=72388922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010563626.9A Withdrawn CN111669574A (en) 2020-06-19 2020-06-19 Video playing quality detection method and device

Country Status (1)

Country Link
CN (1) CN111669574A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060093589A (en) * 2005-02-22 2006-08-25 엘지전자 주식회사 Compressed video quality testing method for picture quality estimation
CN1859584A (en) * 2005-11-14 2006-11-08 华为技术有限公司 Video frequency broadcast quality detecting method for medium broadcast terminal device
CN102740111A (en) * 2012-06-15 2012-10-17 福建升腾资讯有限公司 Method and device for testing video fluency based on frame number watermarks under remote desktop
US20140002735A1 (en) * 2011-12-28 2014-01-02 Barry A. O'Mahony Method of and apparatus for performing an objective video quality assessment using non-intrusive video frame tracking
CN105491372A (en) * 2015-11-24 2016-04-13 努比亚技术有限公司 Mobile terminal and information processing method
CN108495120A (en) * 2018-01-31 2018-09-04 华为技术有限公司 A kind of video frame detection, processing method, apparatus and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG Fei; ZHU Zhixiang; LIANG Xiaojiang: "Video Performance Optimization and Improvement Based on the SPICE Protocol", Computer Technology and Development, no. 12 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112135123A (en) * 2020-09-24 2020-12-25 三星电子(中国)研发中心 Video quality detection method and device
CN112153374A (en) * 2020-09-25 2020-12-29 腾讯科技(深圳)有限公司 Method, device and equipment for testing video frame image and computer storage medium
CN112153374B (en) * 2020-09-25 2022-06-07 腾讯科技(深圳)有限公司 Method, device and equipment for testing video frame image and computer storage medium
CN112351273B (en) * 2020-11-04 2022-03-01 新华三大数据技术有限公司 Video playing quality detection method and device
CN112351273A (en) * 2020-11-04 2021-02-09 新华三大数据技术有限公司 Video playing quality detection method and device
CN113676722A (en) * 2021-07-21 2021-11-19 南京巨鲨显示科技有限公司 Video equipment image frame testing method and time delay measuring method
CN113724225A (en) * 2021-08-31 2021-11-30 北京达佳互联信息技术有限公司 Method and device for determining transmission quality of application program
CN113724225B (en) * 2021-08-31 2024-04-09 北京达佳互联信息技术有限公司 Method and device for determining transmission quality of application program
WO2023035662A1 (en) * 2021-09-13 2023-03-16 中兴通讯股份有限公司 Cloud desktop running method, server, and terminal
CN115022675A (en) * 2022-07-01 2022-09-06 天翼数字生活科技有限公司 Video playing detection method and system
CN115022675B (en) * 2022-07-01 2023-12-15 天翼数字生活科技有限公司 Video playing detection method and system
WO2024001000A1 (en) * 2022-07-01 2024-01-04 天翼数字生活科技有限公司 Video playing detection method and system
CN115499708A (en) * 2022-09-26 2022-12-20 深圳前海深蕾半导体有限公司 Video playing processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN111669574A (en) Video playing quality detection method and device
US9578373B2 (en) Remote display performance measurement triggered by application display upgrade
JP6898968B2 (en) Methods and devices for determining response time
US8910228B2 (en) Measurement of remote display performance with image-embedded markers
EP3879839A1 (en) Video processing method and apparatus, and electronic device and computer-readable medium
EP2663925B1 (en) A method and mechanism for performing both server-side and client-side rendering of visual data
US20140229527A1 (en) Real-time, interactive measurement techniques for desktop virtualization
US10645391B2 (en) Graphical instruction data processing method and apparatus, and system
EP3285484A1 (en) Image processing apparatus, image generation method, and program
US9516303B2 (en) Timestamp in performance benchmark
CN110337035B (en) Method and device for detecting video playing quality
CN113596488B (en) Live broadcast room display method and device, electronic equipment and storage medium
US9264704B2 (en) Frame image quality as display quality benchmark for remote desktop
CN111866058B (en) Data processing method and system
US7912803B2 (en) Creating a session log with a table of records for a computing device being studied for usability by a plurality of usability experts
US11037297B2 (en) Image analysis method and device
JP4932741B2 (en) Monitoring device
CN115878379A (en) Data backup method, main server, backup server and storage medium
CN111654702B (en) Data transmission method and system
CN112351273B (en) Video playing quality detection method and device
Vankeirsbilck et al. Automatic fine-grained area detection for thin client systems
CN114035719B (en) Remote desktop fluency performance evaluation method, system and medium
KR101767929B1 (en) System for evaluating performance and method thereof
CN114120195A (en) Mouse delay detection method and device
EP4340370A1 (en) Cloud-based input latency measurement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200915