CN110337035B - Method and device for detecting video playing quality - Google Patents

Method and device for detecting video playing quality

Info

Publication number
CN110337035B
CN110337035B
Authority
CN
China
Prior art keywords
screen recording
recording image
frame
video
definition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910680154.2A
Other languages
Chinese (zh)
Other versions
CN110337035A (en)
Inventor
廖钜城
丁志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Big Data Technologies Co Ltd
Original Assignee
New H3C Big Data Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Big Data Technologies Co Ltd filed Critical New H3C Big Data Technologies Co Ltd
Priority to CN201910680154.2A
Publication of CN110337035A
Application granted
Publication of CN110337035B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4437 Implementing a Virtual Machine [VM]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a method and a device for detecting video playing quality. A first definition (sharpness) of each first screen recording image, obtained by recording the screen while a video to be detected is played in a virtual machine, and a second definition of each second screen recording image, obtained by recording the screen while the same video is played at a client, are acquired automatically so as to obtain a playing quality detection result of the video to be detected; no manual detection is needed, labor cost is saved, and detection precision is improved. The method comprises the following steps: acquiring a first definition of each frame of first screen recording image in a first screen recording image set, and acquiring a second definition of each frame of second screen recording image in a second screen recording image set, wherein the first screen recording image set is obtained by recording the video to be detected played by the virtual machine, and the second screen recording image set is obtained by recording the video to be detected played by the client; and acquiring a playing quality detection result of the video to be detected according to the first definition of each frame of first screen recording image and the second definition of each frame of second screen recording image.

Description

Method and device for detecting video playing quality
Technical Field
The application relates to the technical field of virtual machines, in particular to a method and a device for detecting video playing quality.
Background
A hypervisor is an intermediate software layer running between the physical server and the operating systems; it allows multiple operating systems and applications to share one set of physical hardware and coordinates access to all the physical devices and virtual machines on the server. VDI (Virtual Desktop Infrastructure) mainly relies on the virtual machines partitioned by the hypervisor on a server to provide services for users. The virtual machine runs on the server, and the virtual desktop is presented to the user at the client through VDI technology, so that the user can control the virtual machine through the client. The Simple Protocol for Independent Computing Environments (SPICE) is a virtualized transmission protocol used in VDI; the client connects to the virtual machine through SPICE and exchanges data with it.
When video playing data are transmitted between the client and the virtual machine through SPICE, problems such as pause (stuttering) and screen splash (garbled frames) can appear when the video is played at the client. Generally, whether a video has a problem is detected by having a tester manually watch the playing process of the video at the client and judge whether the playing is problematic; if so, a developer is contacted to locate and reproduce the problem.
This detection method has high labor cost because testers must monitor the playing at all times; moreover, the severity of a problem has to be judged manually during playing, slight problems are easily overlooked, detections are missed, and the detection precision is therefore low.
Disclosure of Invention
In view of this, an object of the embodiments of the present application is to provide a method and an apparatus for detecting video playing quality, which can achieve automatic and high-precision detection of problems occurring in a video playing process.
In a first aspect, a method for detecting video playing quality is provided, where the method is applied to a virtual machine or a control end; the method comprises the following steps:
acquiring a first definition of each frame of first screen recording image in the first screen recording image set, and acquiring a second definition of each frame of second screen recording image in the second screen recording image set; the first screen recording image set is obtained by recording a to-be-detected video played by a virtual machine; the second screen recording image set is obtained by recording a to-be-detected video played by the client; the first screen recording images and the second screen recording images are matched one by one, and the playing progress of the to-be-detected video corresponding to the matched first screen recording images and the matched second screen recording images is the same;
and acquiring a playing quality detection result of the video to be detected according to the first definition of the first screen recording image of each frame and the second definition of the second screen recording image of each frame.
In a second aspect, an apparatus for detecting video playing quality is provided, where the apparatus is applied to a virtual machine or a control end; the device comprises:
the acquisition module is used for acquiring the first definition of each frame of first screen recording image in the first screen recording image set and acquiring the second definition of each frame of second screen recording image in the second screen recording image set; the first screen recording image set is obtained by recording a to-be-detected video played by a virtual machine; the second screen recording image set is obtained by recording a to-be-detected video played by the client; the first screen recording images and the second screen recording images are matched one by one, and the playing progress of the to-be-detected video corresponding to the matched first screen recording images and the matched second screen recording images is the same;
and the detection module is used for acquiring a playing quality detection result of the video to be detected according to the first definition of the first screen recording image of each frame and the second definition of the second screen recording image of each frame.
In a third aspect, an embodiment of the present application further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, this application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps in the first aspect or any one of the possible implementation manners of the first aspect.
According to the embodiment of the application, the first definition of the first screen recording image obtained by recording the screen when the video to be detected is played in the virtual machine and the second definition of the second screen recording image obtained by recording the screen when the video to be detected is played in the client are obtained automatically, so that the playing quality detection result of the video to be detected is obtained, manual detection is not needed, and the labor cost is saved.
Meanwhile, the first screen recording image and the second screen recording image are matched one by one, the playing progress of the to-be-detected video corresponding to the matched first screen recording image and the matched second screen recording image is the same, detection is carried out based on the first screen recording image and the second screen recording image of each frame, slight problems can be accurately detected, missing detection caused by judgment is avoided, and detection precision is improved.
In addition, the time and the number of image frames of the playing quality with problems can be recorded, the situation that the problems can be located only by repeatedly reproducing the problems due to the fact that detailed information of the problems cannot be reserved during manual detection is avoided, and the detection precision is further improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating a method for detecting video according to an embodiment of the present application;
fig. 2 shows a networking schematic diagram implemented at a control end by the video detection method provided by the embodiment of the present application;
FIG. 3 is a flowchart illustrating a specific method for obtaining a first sharpness of each frame of an original image according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a specific method for obtaining a play quality detection result of a video to be detected according to an embodiment of the present application;
fig. 5 is a flowchart illustrating another specific method for obtaining a play quality detection result of a video to be detected according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating an apparatus for video detection provided by an embodiment of the present application;
fig. 7 shows a schematic diagram of a computer device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
On the hardware level, VDI comprises a Client, a Server and a Virtual Machine (VM); SPICE is the specification that the client, the server and the virtual machine follow when exchanging data. On the software level, SPICE contains a client side (SPICE Client), a server side (SPICE Server), a virtual machine side (SPICE VM) and the protocol.
The protocol is the specification followed when the three parts, the SPICE Client, the SPICE Server and the SPICE VM, interact with one another.
The Client is responsible for receiving and converting virtual machine data, presenting the virtual desktop to the user, and sending data input by the user to the virtual machine, so that the user can interact with the virtual machine VM through the Client. The SPICE Client runs inside the Client; it handles different events by establishing different channels and implements each channel as a separate thread. The SPICE Client channels include: Red Client (main channel), Display Channel, Cursor Channel, Inputs Channel, Playback Channel, Record Channel, etc. The Red Client is responsible for establishing the main channel, and the main channel then creates the following channels: 1. Display Channel: responsible for processing graphic commands, pictures and video stream display; 2. Inputs Channel: responsible for processing keyboard and mouse input; 3. Cursor Channel: responsible for handling the display of pointer device position, visibility and shape; 4. Playback Channel: responsible for receiving the sound data sent by the virtual machine and playing it at the client; 5. Record Channel: responsible for capturing sound from the client's sound device and transferring it to the virtual machine.
The SPICE Server is a user-layer component integrated into the Hypervisor so that the Hypervisor supports the SPICE protocol; the SPICE Server corresponds to the SPICE Client and likewise has several channels. These channels are mainly responsible for transmitting the client's input to the virtual devices (such as the keyboard and mouse) of the SPICE VM, and for receiving and displaying the pictures produced by the virtual video card of the SPICE VM. In order to keep the SPICE Server relatively independent, the desktop cloud interacts with the virtual devices seen by the virtual machine VM through the various virtual device back-end interfaces provided by the Hypervisor.
The SPICE VM refers to all the necessary components deployed inside the virtual machine VM, such as the QXL driver and the SPICE Agent. The functions realized by QXL include a Display Driver; the Display Driver mainly provides an Application Program Interface (API) to the Graphics Device Interface (GDI), so that when an upper-layer application needs to draw, it calls the GDI API, and GDI in turn calls the drawing API of QXL to complete the drawing.
Specifically, the virtual machine controls the process of displaying the video at the client; that is, the virtual machine performs a graphics drawing process for each frame of image in the video based on SPICE. A graphics command starts from a graphics application inside the virtual machine, which requests a drawing operation from the virtual machine's operating system; the QXL driver installed in the virtual machine captures the application's drawing operations, converts them into SPICE QXL commands, and transmits them to the QXL device back end virtualized by the SPICE Server. The SPICE Server then reads the SPICE QXL commands, recombines and optimizes them, encapsulates them as video playing data in the SPICE protocol message format, and sends them to the SPICE Client; the SPICE Client parses the video playing data according to the SPICE protocol to complete the picture update.
In this process, because the video playing data has to be processed by the SPICE Server and transmitted over the network, problems such as pause (stuttering) and screen splash (garbled frames) can arise during picture updating, which in turn cause the video to stutter or show a corrupted picture during playing. Problems occurring in the video playing process are currently detected manually. However, this detection method has high labor cost because testers must monitor the playing at all times; moreover, the severity of a problem has to be judged manually during playing, and slight problems are easily overlooked, leading to missed detections; when a problem does occur, its detailed information cannot be retained, it can only be located by repeatedly reproducing it, and problems that occur sporadically are easily missed, so the detection precision is low.
Based on this research, the present application provides a method and a device for detecting video playing quality, which automatically acquire a first definition of each first screen recording image obtained by recording the screen while the video to be detected is played in the virtual machine, and a second definition of each second screen recording image obtained by recording the screen while the video to be detected is played at the client, so as to obtain a playing quality detection result of the video to be detected; no manual detection is needed, labor cost is saved, and the detection precision is higher.
The drawbacks described above are findings obtained by the inventor through practice and careful study; therefore, both the discovery of these problems and the solution proposed below should be regarded as the inventor's contribution to the present application.
The technical solutions in the present application will be described clearly and completely with reference to the drawings in the present application, and it should be understood that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the method for detecting video playing quality disclosed in the embodiments of the present application is first described in detail. The execution subject of the method for detecting video playing quality provided in the embodiments of the present application may be a virtual machine or a control end, where the control end is a third-party device independent of both the virtual machine and the client.
Example one
Referring to fig. 1, a flowchart of a method for detecting a video according to an embodiment of the present application is shown, where the method includes steps S101 to S102, where:
s101: the method comprises the steps of obtaining a first definition of each frame of first screen recording image in a first screen recording image set, and obtaining a second definition of each frame of second screen recording image in a second screen recording image set.
S102: and acquiring a playing quality detection result of the video to be detected according to the first definition of each frame of the first screen recording image and the second definition of each frame of the second screen recording image.
The following are descriptions of the above S101 to S102, respectively:
i: in the step S101, the first screen recording image set is obtained by recording a to-be-detected video played by the virtual machine; the second screen recording image set is obtained by recording a to-be-detected video played by the client; the first screen recording image and the second screen recording image are matched one by one, and the playing progress of the video to be detected corresponding to the matched first screen recording image and the matched second screen recording image is the same.
In order to obtain the first screen recording image set and the second screen recording image set, the virtual machine is controlled to play the video to be detected both locally and at the client; the virtual machine records the screen while the video to be detected is played to generate the first screen recording image set, and the client records the screen while the video to be detected is played to generate the second screen recording image set. Then, the first definition of each frame of first screen recording image is obtained by processing the data of each frame of first screen recording image in the first screen recording image set, and the second definition of each frame of second screen recording image is obtained by processing the data of each frame of second screen recording image in the second screen recording image set.
Specifically, the method comprises the following steps:
a: aiming at the situation that the method for detecting the video playing quality provided by the embodiment of the application is executed in the virtual machine:
and the virtual machine loads the video to be detected and generates video playing data.
The video playing data comprises reconstructed image data of each frame of original image in the video to be detected; the video playing data is used for playing the video to be detected locally and at the client.
After the virtual machine generates video playing data, the video to be detected is played locally based on the video playing data, and screen recording is performed when the video to be detected is played, so that a first screen recording image set is generated.
The virtual machine locally plays the video to be detected based on the video playing data, and simultaneously sends the video playing data to the client based on the SPICE; and after receiving the video playing data, the client plays the video based on the video playing data.
The virtual machine may also send a third screen recording instruction and a third definition obtaining instruction to the client.
The third screen recording instruction instructs the client to record the screen while the video to be detected is played based on the video playing data, generating the second screen recording image set; the third definition obtaining instruction instructs the client to obtain the second definition of each frame of second screen recording image in the second screen recording image set.
When obtaining the second definition of each frame of second screen recording image, the client may compute the definition while recording (obtaining it as each frame is captured), or may record first and obtain the second definition after the screen recording is finished.
B: for the situation that the method for detecting a video provided by the embodiment of the present application is executed at a control end, referring to fig. 2, a networking schematic diagram implemented at the control end by the method for detecting a video is provided, which includes: a client 10, a virtual machine 20, and a control end 30; wherein, the client 10 and the virtual machine 20 establish links with the control end 30 respectively; the link is, for example, a socket link, a Point-to-Point Tunneling Protocol (PPTP) link, and the like, and these connection modes can ensure reliable transmission of data between the virtual machine and the control end and between the client and the control end. The virtual machine 20 runs in a server; the virtual machine 20 and the client 10 interact through SPICE.
b 1: in one embodiment:
and the control terminal sends a video playing instruction and a first screen recording instruction to the virtual machine based on the link between the control terminal and the virtual machine.
The video playing instruction is used for instructing the virtual machine to control the video to be detected to be played locally and at the client; the first screen recording instruction is used for indicating the virtual machine to record the screen when the video to be detected is controlled to be played locally.
The control end also sends a second screen recording instruction to the client end based on the link between the control end and the client end. The video playing instruction is used for instructing the virtual machine to control the video to be detected to be played at the client; and the second screen recording instruction is used for indicating the client to record the screen when the video to be detected is played, so that a second screen recording image set is generated.
After receiving a video playing instruction sent by a control end, the virtual machine generates video playing data based on the loaded original video data of the video to be detected, and realizes local playing of the video to be detected based on the video playing data; the virtual machine records a screen when controlling the video to be detected to be locally played, generates a first screen recording image set, and sends the generated first screen recording image set to the control end.
The virtual machine also sends the video playing data to the client so that the client can play the video to be detected on the client based on the video playing data.
And the client records the screen based on the received second screen recording instruction while playing the video to be detected, generates a second screen recording image set and sends the generated second screen recording image set to the control end.
After receiving the first screen recording image set, the control terminal generates a first definition of each frame of first screen recording image based on the relevant data of each frame of first screen recording image.
And after receiving the second screen recording image set, the control terminal generates a second definition of each frame of second screen recording image based on the related data of each frame of second screen recording image.
b 2: in another embodiment:
the control end sends a video playing instruction, a first screen recording instruction and a first definition obtaining instruction to the virtual machine based on the link between the control end and the virtual machine.
The video playing instruction is used for instructing the virtual machine to control the video to be detected to be played locally and at the client; the first screen recording instruction is used for indicating the virtual machine to record the screen when controlling the video to be detected to be played locally, and generating a first screen recording image set; the first definition obtaining instruction is used for instructing the virtual machine to obtain the first definition of the first screen recording image of each frame in the first screen recording image set.
After receiving a video playing instruction sent by a control end, the virtual machine generates video playing data based on the loaded original video data of the video to be detected, and realizes local playing of the video to be detected based on the video playing data; the virtual machine records a screen when controlling a video to be detected to be locally played, generates a first screen recording image set, generates a first definition of each frame of first screen recording image based on relevant data of each frame of first screen recording image, and transmits the first definition of each frame of first screen recording image to the control end.
And the control terminal also sends a second screen recording instruction and a second definition obtaining instruction to the client terminal based on the link between the control terminal and the client terminal.
The second screen recording instruction is used for indicating the client to record the screen when the video to be detected is played, and a second screen recording image set is generated; and the second definition obtaining instruction is used for indicating the client to obtain the second definition of each frame of second screen recording image in the second screen recording image set.
And the client records the screen based on the received second screen recording instruction while playing the video to be detected to generate a second screen recording image set, generates a second definition of each frame of second screen recording image based on the relevant data of each frame of second screen recording image, and sends the second definition to the control end.
In the above embodiments, the video to be detected may be played full screen or non-full screen when the virtual machine is controlled to play it. For full-screen playing, the obtained first screen recording images and second screen recording images can be used directly in the subsequent S101 and S102. For non-full-screen playing, before S101 and S102 are executed, the first screen recording images and second screen recording images need to be cropped to obtain images containing only the playing content of the video to be detected; the cropped images are then used as the first screen recording images and second screen recording images in the subsequent S101 and S102, as sketched below.
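As an illustrative sketch of the non-full-screen case, the snippet below crops a screen recording frame down to the region showing the video under test before it is used as a first or second screen recording image. OpenCV is assumed to be available, and the file path and region coordinates are placeholders rather than values from the patent.

```python
# Minimal sketch: crop a screen recording frame to the video playback region.
import cv2

def crop_to_video_region(frame, x, y, width, height):
    """Return only the part of the screen recording frame that contains the
    video to be detected; frame is an H x W x 3 BGR array from OpenCV."""
    return frame[y:y + height, x:x + width]

if __name__ == "__main__":
    frame = cv2.imread("second_recording_frame_0001.png")  # hypothetical frame file
    video_only = crop_to_video_region(frame, x=100, y=80, width=1280, height=720)
    cv2.imwrite("second_recording_frame_0001_cropped.png", video_only)
```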
Referring to fig. 3, an embodiment of the present application further provides a specific method for obtaining the definition of each frame of screen recording image in a screen recording image set, where the screen recording image set may be a first screen recording image set or a second screen recording image set; accordingly, the screen recording image may be the first screen recording image or the second screen recording image.
Specifically, the method for acquiring the definition of the screen recording image comprises the following steps:
s301: aiming at each frame of screen recording image, converting the frame of screen recording image into a gray image to obtain a target pixel matrix of the frame of screen recording image; and the value of each element in the target pixel matrix represents the gray value of the pixel point corresponding to each element of the frame screen recording image.
S302: and taking the Laplace operator as a convolution kernel, and performing convolution operation on the target pixel matrix to obtain a characteristic pixel matrix corresponding to the target pixel matrix.
Illustratively, a 3 × 3 Laplacian operator (given as a matrix in the original figure, not reproduced here) may be used as the convolution kernel to perform the convolution operation on the target pixel matrix and obtain the feature pixel matrix.
S303: calculating the variance corresponding to the characteristic pixel matrix according to the value of each element in the characteristic pixel matrix; and taking the variance as the definition of the frame screen recording image.
Here, the higher the variance, the higher the definition characterizing the screen-recorded image.
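Steps S301 to S303 can be sketched in a few lines of Python. The snippet below assumes OpenCV and NumPy are available and uses the common 3 × 3 Laplacian kernel with a center weight of -4; the exact kernel given in the patent's figure is not reproduced here, so the kernel choice should be treated as an assumption.

```python
# Minimal sketch of S301-S303: grayscale -> Laplacian convolution -> variance as definition.
import cv2
import numpy as np

# Assumed standard 3x3 Laplacian operator (the patent's own kernel is shown only in a figure).
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float64)

def definition(frame_bgr):
    """Definition (sharpness) of one screen recording frame; higher = sharper."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)                  # S301: target pixel matrix
    feat = cv2.filter2D(gray.astype(np.float64), -1, LAPLACIAN_KERNEL)  # S302: feature pixel matrix
    return float(feat.var())                                            # S303: variance = definition
```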
II: in the above S102, the result of detecting the playing quality of the video to be detected includes: whether the video to be detected is blocked and/or whether a screen is shown or not is detected.
Specifically, for different playing problems of the video to be detected, one or more of the following modes can be adopted to determine the playing quality detection result of the video to be detected:
a: referring to fig. 4, the following method may be adopted to obtain the play quality detection result of the video to be detected:
s401: and detecting whether the second definition of each frame of second screen recording image is equal to the second definition of the previous frame of second screen recording image adjacent to the frame of second screen recording image or not. If yes, jumping to S402; if not, then jump to S404.
S402: detecting whether the first definition of the first screen recording image matched with the second screen recording image of the frame is equal to the first definition of the first screen recording image matched with the second screen recording image of the previous frame; if not, jumping to S403; if so, then a jump is made to S404.
S403: and confirming that the original image corresponding to the second screen recording image of the previous frame is blocked when the client plays.
S404: and confirming that the original image corresponding to the second screen recording image of the previous frame is not blocked when the client plays.
In addition, in another embodiment of the present application, for each frame of second screen recording image, when it is detected that the original image corresponding to that frame is blocked (stalls) when played at the client, the playing time corresponding to that original image and its frame number are recorded.
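A minimal, non-authoritative Python sketch of the check in S401 to S404 (and of recording the playing time and frame number mentioned above) is shown below. It assumes the two definition lists and the timestamp list are aligned frame by frame so that index i of each list corresponds to the same playing progress; in practice an exact equality test on definitions might be replaced by a small tolerance, which is an assumption and not stated in the patent.

```python
# Minimal sketch of S401-S404: flag a stall when the client-side definition repeats
# while the matched virtual-machine-side definition changes.
def detect_stalls(first_definitions, second_definitions, timestamps):
    """Return (frame_number, playing_time) pairs for frames judged to stall at the client."""
    stalled = []
    for i in range(1, len(second_definitions)):
        client_unchanged = second_definitions[i] == second_definitions[i - 1]  # S401
        vm_changed = first_definitions[i] != first_definitions[i - 1]          # S402
        if client_unchanged and vm_changed:                                    # S403
            stalled.append((i - 1, timestamps[i - 1]))  # the previous frame is the stalled one
    return stalled
```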
B: referring to fig. 5, the following method may be adopted to obtain the play quality detection result of the video to be detected:
s501: detecting whether a difference value between a second definition of each frame of second screen recording image and a first definition of a first screen recording image matched with the frame of second screen recording image is larger than a preset difference value threshold value or not aiming at each frame of second screen recording image; if yes, jumping to S502; if not, it jumps to S503.
S502: and confirming that the original image corresponding to the frame of the second screen recording image appears screen splash when the client plays.
S503: and confirming that the original image corresponding to the frame of the second screen recording image does not generate screen splash when the client plays.
Here, a screen recording image obtained by screen recording generally has a lower definition than the corresponding original image. If the definition drop exceeds the preset difference threshold, it indicates that the original image is displayed with screen splash when played at the client.
In addition, in another embodiment of the present application, for each frame of second screen recording image, when it is detected that the original image corresponding to that frame shows screen splash when played at the client, the playing time corresponding to that original image and its frame number are recorded.
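The corresponding sketch for S501 to S503 is given below. The concrete threshold value is an assumption, since the patent only states that a preset difference threshold is used, and the direction of the comparison follows the explanation above that the client-side definition drops when screen splash occurs.

```python
# Minimal sketch of S501-S503: flag screen splash when the client-side definition is
# lower than the matched virtual-machine-side definition by more than a preset threshold.
def detect_screen_splash(first_definitions, second_definitions, timestamps, threshold=50.0):
    """Return (frame_number, playing_time) pairs for frames judged to show screen splash."""
    corrupted = []
    for i, (first, second) in enumerate(zip(first_definitions, second_definitions)):
        if first - second > threshold:               # S501: definition drop exceeds threshold
            corrupted.append((i, timestamps[i]))     # S502: record frame number and playing time
    return corrupted
```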
According to the embodiment of the application, the first definition of the first screen recording image obtained by recording the screen when the video to be detected is played in the virtual machine and the second definition of the second screen recording image obtained by recording the screen when the video to be detected is played in the client are obtained automatically, so that the playing quality detection result of the video to be detected is obtained, manual detection is not needed, and the labor cost is saved.
Meanwhile, the first screen recording image and the second screen recording image are matched one by one, the playing progress of the to-be-detected video corresponding to the matched first screen recording image and the matched second screen recording image is the same, detection is carried out based on the first screen recording image and the second screen recording image of each frame, slight problems can be accurately detected, missing detection caused by judgment is avoided, and detection precision is improved.
In addition, the time and the number of image frames of the playing quality with problems can be recorded, the situation that the problems can be located only by repeatedly reproducing the problems due to the fact that detailed information of the problems cannot be reserved during manual detection is avoided, and the detection precision is further improved.
Based on the same inventive concept, the embodiment of the present application further provides a device for detecting video playing quality corresponding to the method for detecting video playing quality, and as the principle of solving the problem of the device in the embodiment of the present application is similar to the method for detecting video playing quality in the embodiment of the present application, the implementation of the device can refer to the implementation of the method, and repeated details are not repeated.
Example two
Referring to fig. 6, which is a schematic diagram of an apparatus for detecting video according to a second embodiment of the present application, the apparatus includes: an acquisition module 61 and a detection module 62; wherein,
the acquiring module 61 is configured to acquire a first definition of each frame of a first screen recording image in the first screen recording image set, and acquire a second definition of each frame of a second screen recording image in the second screen recording image set; the first screen recording image set is obtained by recording a to-be-detected video played by the virtual machine; the second screen recording image set is obtained by recording a to-be-detected video played by the client; the first screen recording image and the second screen recording image are matched one by one, and the playing progress of the video to be detected corresponding to the matched first screen recording image and the matched second screen recording image is the same;
the detection module 62 is configured to obtain a playing quality detection result of the video to be detected according to the first definition of each frame of the first screen recording image and the second definition of each frame of the second screen recording image.
According to the embodiment of the application, the first definition of the first screen recording image obtained by recording the screen when the video to be detected is played in the virtual machine and the second definition of the second screen recording image obtained by recording the screen when the video to be detected is played in the client are obtained automatically, so that the playing quality detection result of the video to be detected is obtained, manual detection is not needed, and the labor cost is saved.
Meanwhile, the first screen recording image and the second screen recording image are matched one by one, the playing progress of the to-be-detected video corresponding to the matched first screen recording image and the matched second screen recording image is the same, detection is carried out based on the first screen recording image and the second screen recording image of each frame, slight problems can be accurately detected, missing detection caused by judgment is avoided, and detection precision is improved.
In addition, the time and the number of image frames of the playing quality with problems can be recorded, the situation that the problems can be located only by repeatedly reproducing the problems due to the fact that detailed information of the problems cannot be reserved during manual detection is avoided, and the detection precision is further improved.
In a possible embodiment, the obtaining module 61 is specifically configured to obtain the first definition and the second definition by the following method, for a case where the apparatus is applied to the control end:
sending a video playing instruction and a first screen recording instruction to the virtual machine based on a link with the virtual machine, and sending a second screen recording instruction to the client based on the link with the client; the video playing instruction is used for instructing the virtual machine to control the video to be detected to be played locally and at the client; the first screen recording instruction is used for indicating the virtual machine to record the screen when controlling the video to be detected to be played locally, and generating a first screen recording image set; the second screen recording instruction is used for indicating the client to record the screen when the video to be detected is played, and a second screen recording image set is generated;
receiving a first screen recording image set sent by a virtual machine, and acquiring the first definition of each frame of first screen recording images based on the first screen recording image set;
and receiving a second screen recording image set sent by the client, and acquiring a second definition of each frame of second screen recording image based on the second screen recording image set.
In a possible embodiment, the obtaining module 61 is specifically configured to obtain the first definition and the second definition by the following method, for a case where the apparatus is applied to the control end:
sending a video playing instruction, a first screen recording instruction and a first definition obtaining instruction to the virtual machine based on a link with the virtual machine, and sending a second screen recording instruction and a second definition obtaining instruction to the client based on the link with the client; the video playing instruction is used for instructing the virtual machine to control the video to be detected to be played locally and at the client; the first screen recording instruction is used for indicating the virtual machine to record the screen when controlling the video to be detected to be played locally, and generating a first screen recording image set; the first definition obtaining instruction is used for indicating the virtual machine to obtain the first definition of each frame of the first screen recording image in the first screen recording image set; the second screen recording instruction is used for indicating the client to record the screen when the video to be detected is played, and a second screen recording image set is generated; the second definition obtaining instruction is used for indicating the client to obtain the second definition of each frame of second screen recording image in the second screen recording image set;
receiving a first definition sent by the virtual machine, and receiving a second definition sent by the client.
In a possible implementation manner, for a case where the apparatus is applied to a virtual machine, the obtaining module 61 is specifically configured to obtain the first definition and the second definition by:
loading a video to be detected and generating video playing data; the video playing data comprises reconstructed image data of each frame of original image in the video to be detected; the video playing data is used for playing the video to be detected locally and at the client;
locally playing a video to be detected based on video playing data, recording a screen when the video to be detected is played, generating a first screen recording image set, and acquiring the first definition of each frame of first screen recording image in the first screen recording image set;
sending video playing data, a third screen recording instruction and a third definition obtaining instruction to the client; the third screen recording instruction is used for instructing the client to record the screen when the video to be detected is played based on the video playing data, and a second screen recording image set is generated; a third definition obtaining instruction, configured to instruct the client to obtain a second definition of each frame of the second screen recording image in the second screen recording image set;
and receiving the second definition sent by the client.
In a possible implementation, the obtaining module 61 obtains the definition of each frame of the screen recording image in the screen recording image set, and includes:
aiming at each frame of screen recording image, converting the frame of screen recording image into a gray image to obtain a target pixel matrix of the frame of screen recording image; the value of each element in the target pixel matrix represents the gray value of the pixel point, in the frame of screen recording image, corresponding to that element;
taking a Laplace operator as a convolution kernel, and performing convolution operation on the target pixel matrix to obtain a characteristic pixel matrix corresponding to the target pixel matrix;
calculating the variance corresponding to the characteristic pixel matrix according to the value of each element in the characteristic pixel matrix;
and taking the variance as the definition of the frame screen recording image.
In a possible implementation manner, the detection module 62 is specifically configured to obtain a playing quality detection result of the video to be detected by using the following method:
detecting whether the second definition of each frame of second screen recording image is equal to the second definition of the previous frame of second screen recording image adjacent to the frame of second screen recording image or not aiming at each frame of second screen recording image;
if so, detecting whether the first definition of the first screen recording image matched with the second screen recording image of the frame is equal to the first definition of the first screen recording image matched with the second screen recording image of the previous frame;
and if they are not equal, confirming that the original image corresponding to the previous frame of second screen recording image is blocked when played at the client.
In a possible embodiment, the detecting module 62 is further configured to, for each frame of the second screen recording image, record a playing time corresponding to the frame of the original image and an image frame number of the frame of the original image when detecting that an original image corresponding to the frame of the second screen recording image appears as a pause when the client plays the original image.
In a possible implementation manner, the detection module 62 is specifically configured to obtain a playing quality detection result of the video to be detected by using the following method: detecting whether a difference value between a second definition of each frame of second screen recording image and a first definition of a first screen recording image matched with the frame of second screen recording image is larger than a preset difference value threshold value or not aiming at each frame of second screen recording image;
and if so, confirming that the original image corresponding to the frame of the second screen recording image appears screen splash when the client plays.
In a possible embodiment, the detecting module 62 is further configured to, for each frame of the second screen recording image, when it is detected that an original image corresponding to the frame of the second screen recording image is displayed on the screen when the client plays the original image, record a playing time corresponding to the frame of the original image and an image frame number of the frame of the original image.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
EXAMPLE III
An embodiment of the present application further provides a computer device 70, as shown in fig. 7, which is a schematic structural diagram of the computer device 70 provided in the embodiment of the present application, and includes: a processor 71, a memory 72, and a bus 73. The memory 72 stores machine-readable instructions executable by the processor 71 (for example, the execution instructions corresponding to the obtaining module 61, the detecting module 62, etc. in the apparatus in fig. 6), when the computer device 70 is running, the processor 71 communicates with the memory 72 via the bus 73, and the machine-readable instructions when executed by the processor 71 perform the following processes:
acquiring a first definition of each frame of first screen recording image in the first screen recording image set, and acquiring a second definition of each frame of second screen recording image in the second screen recording image set; the first screen recording image set is obtained by recording a to-be-detected video played by a virtual machine; the second screen recording image set is obtained by recording a to-be-detected video played by the client; the first screen recording images and the second screen recording images are matched one by one, and the playing progress of the to-be-detected video corresponding to the matched first screen recording images and the matched second screen recording images is the same;
and acquiring a playing quality detection result of the video to be detected according to the first definition of the first screen recording image of each frame and the second definition of the second screen recording image of each frame.
In a possible implementation, the processor 71 executes instructions to obtain the first definition and the second definition in the following manner, for the case that the method is applied to the control end:
sending a video playing instruction and a first screen recording instruction to the virtual machine based on a link with the virtual machine, and sending a second screen recording instruction to the client based on the link with the client; the video playing instruction is used for instructing the virtual machine to control the video to be detected to be played locally and at the client; the first screen recording instruction is used for indicating the virtual machine to record a screen when the video to be detected is controlled to be played locally, and generating a first screen recording image set; the second screen recording instruction is used for indicating the client to record the screen when the video to be detected is played, and generating a second screen recording image set;
receiving the first screen recording image set sent by the virtual machine, and acquiring the first definition of each frame of the first screen recording image based on the first screen recording image set;
and receiving the second screen recording image set sent by the client, and acquiring the second definition of each frame of the second screen recording image based on the second screen recording image set.
In a possible implementation, the processor 71 executes instructions to obtain the first definition and the second definition in the following manner, for the case that the method is applied to the control end:
sending a video playing instruction, a first screen recording instruction and a first definition obtaining instruction to the virtual machine based on a link with the virtual machine, and sending a second screen recording instruction and a second definition obtaining instruction to the client based on the link with the client; the video playing instruction is used for instructing the virtual machine to control the video to be detected to be played locally and at the client; the first screen recording instruction is used for indicating the virtual machine to record a screen when the video to be detected is controlled to be played locally, and generating a first screen recording image set; the first definition obtaining instruction is used for indicating the virtual machine to obtain the first definition of each frame of the first screen recording image in the first screen recording image set; the second screen recording instruction is used for indicating the client to record the screen when the video to be detected is played, and generating a second screen recording image set; the second definition obtaining instruction is used for indicating the client to obtain the second definition of each frame of the second screen recording image in the second screen recording image set;
receiving the first definition sent by the virtual machine, and receiving the second definition sent by the client.
In a possible implementation, for the case where the method is applied to the virtual machine, the processor 71 executes instructions to obtain the first definition and the second definition in the following manner:
loading the video to be detected and generating video playing data; the video playing data comprises reconstructed image data of each frame of original image in the video to be detected; the video playing data is used for playing the video to be detected both locally and at the client;
locally playing the video to be detected based on the video playing data, recording a screen when the video to be detected is played, generating a first screen recording image set, and acquiring the first definition of each frame of the first screen recording image in the first screen recording image set;
sending the video playing data, a third screen recording instruction and a third definition obtaining instruction to the client; the third screen recording instruction is used for instructing the client to record the screen when the video to be detected is played based on the video playing data, and generating a second screen recording image set; the third definition obtaining instruction is used for instructing the client to obtain the second definition of each frame of the second screen recording image in the second screen recording image set;
and receiving the second definition sent by the client.
In one possible embodiment, in the instructions executed by the processor 71, obtaining the definition of each frame of screen recording image in a screen recording image set includes:
for each frame of screen recording image, converting the frame of screen recording image into a grayscale image to obtain a target pixel matrix of the frame of screen recording image; the value of each element in the target pixel matrix represents the gray value of the pixel point, in the frame of screen recording image, that corresponds to that element;
taking a Laplace operator as a convolution kernel, and performing convolution operation on the target pixel matrix to obtain a characteristic pixel matrix corresponding to the target pixel matrix;
calculating the variance corresponding to the characteristic pixel matrix according to the value of each element in the characteristic pixel matrix;
and taking the variance as the definition of the frame screen recording image.
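By way of illustration, a minimal sketch of this definition computation using OpenCV and NumPy is given below; it assumes each screen recording frame is available as a BGR array, and the function name is illustrative rather than taken from this application:

```python
import cv2
import numpy as np

def frame_definition(frame_bgr: np.ndarray) -> float:
    # Convert the screen recording frame to a grayscale image: the target pixel
    # matrix, in which each element holds the gray value of one pixel.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Convolve the target pixel matrix with the Laplace operator as the
    # convolution kernel to obtain the characteristic pixel matrix.
    characteristic = cv2.Laplacian(gray, cv2.CV_64F)

    # The variance of the characteristic pixel matrix is taken as the definition
    # of the frame of screen recording image.
    return float(np.var(characteristic))
```

A blurrier frame produces a flatter Laplacian response and therefore a lower variance, so lower values correspond to lower definition.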
In a possible implementation manner, in the instructions executed by processor 71, the acquiring a play quality detection result of the video to be detected includes:
for each frame of second screen recording image, detecting whether the second definition of the frame of second screen recording image is equal to the second definition of the adjacent previous frame of second screen recording image;
if so, detecting whether the first definition of the first screen recording image matched with the frame of second screen recording image is equal to the first definition of the first screen recording image matched with the previous frame of second screen recording image;
and if the two first definitions are not equal, confirming that the original image corresponding to the previous frame of second screen recording image stutters when played at the client.
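The stutter check above can be expressed as a short sketch; the list-based representation and the function name below are illustrative assumptions, with index i standing for the i-th matched frame pair:

```python
from typing import List

def detect_stutter_frames(first_definitions: List[float],
                          second_definitions: List[float]) -> List[int]:
    # Returns indices of frame pairs whose previous frame is judged to stutter
    # at the client: the client-side definition repeats the previous value while
    # the matched virtual-machine-side definition changes.
    stuttered = []
    for i in range(1, len(second_definitions)):
        second_unchanged = second_definitions[i] == second_definitions[i - 1]
        first_changed = first_definitions[i] != first_definitions[i - 1]
        if second_unchanged and first_changed:
            stuttered.append(i - 1)  # the previous frame is the one stuck on screen
    return stuttered
```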
In a possible implementation manner, in the instructions executed by the processor 71, the acquiring a play quality detection result of the video to be detected further includes:
and for each frame of second screen recording image, when it is detected that the original image corresponding to the frame of second screen recording image stutters when played at the client, recording the playing time corresponding to the frame of original image and the image frame number of the frame of original image.
In a possible implementation manner, in the instructions executed by processor 71, the acquiring a play quality detection result of the video to be detected includes:
for each frame of second screen recording image, detecting whether the difference value between the second definition of the frame of second screen recording image and the first definition of the first screen recording image matched with the frame of second screen recording image is larger than a preset difference threshold;
and if so, confirming that screen splash occurs in the original image corresponding to the frame of second screen recording image when it is played at the client.
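This screen splash check can likewise be sketched as follows; taking the absolute difference and the particular default threshold value are assumptions of this sketch, not details fixed by this application:

```python
from typing import List

def detect_screen_splash_frames(first_definitions: List[float],
                                second_definitions: List[float],
                                diff_threshold: float = 50.0) -> List[int]:
    # Returns indices of frame pairs whose client-side definition deviates from
    # the matched virtual-machine-side definition by more than the preset
    # difference threshold, which is treated as screen splash at the client.
    splashed = []
    for i, (first_def, second_def) in enumerate(zip(first_definitions, second_definitions)):
        if abs(second_def - first_def) > diff_threshold:
            splashed.append(i)
    return splashed
```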
In a possible implementation manner, in the instructions executed by the processor 71, the acquiring a play quality detection result of the video to be detected further includes:
and for each frame of second screen recording image, when it is detected that screen splash occurs in the original image corresponding to the frame of second screen recording image when it is played at the client, recording the playing time corresponding to the frame of original image and the image frame number of the frame of original image.
The present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the method for detecting playback quality of a video in the foregoing method embodiments.
The computer program product of the method for detecting video playing quality provided in the embodiment of the present application includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, and is not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method for detecting video playing quality, characterized in that the method is applied to a virtual machine or a control end; the method comprises the following steps:
acquiring a first definition of each frame of first screen recording image in the first screen recording image set, and acquiring a second definition of each frame of second screen recording image in the second screen recording image set; the first screen recording image set is obtained by recording a to-be-detected video played by a virtual machine; the second screen recording image set is obtained by recording a to-be-detected video played by the client; the first screen recording images and the second screen recording images are matched one by one, and the playing progress of the to-be-detected video corresponding to the matched first screen recording images and the matched second screen recording images is the same;
acquiring a play quality detection result of the video to be detected according to the first definition of the first screen recording image of each frame and the second definition of the second screen recording image of each frame;
the acquiring of the play quality detection result of the video to be detected comprises:
for each frame of second screen recording image, detecting whether the second definition of the frame of second screen recording image is equal to the second definition of the adjacent previous frame of second screen recording image;
if so, detecting whether the first definition of the first screen recording image matched with the frame of second screen recording image is equal to the first definition of the first screen recording image matched with the previous frame of second screen recording image;
and if the two first definitions are not equal, confirming that the original image corresponding to the previous frame of second screen recording image stutters when played at the client.
2. The method according to claim 1, wherein the first definition and the second definition are obtained in the following manner for the case that the method is applied to the control end:
sending a video playing instruction and a first screen recording instruction to the virtual machine based on a link with the virtual machine, and sending a second screen recording instruction to the client based on a link with the client; the video playing instruction is used for instructing the virtual machine to control the video to be detected to be played both locally and at the client; the first screen recording instruction is used for instructing the virtual machine to record its screen while the video to be detected is played locally, and to generate the first screen recording image set; the second screen recording instruction is used for instructing the client to record its screen while the video to be detected is played, and to generate the second screen recording image set;
receiving the first screen recording image set sent by the virtual machine, and acquiring the first definition of each frame of the first screen recording image based on the first screen recording image set;
and receiving the second screen recording image set sent by the client, and acquiring the second definition of each frame of the second screen recording image based on the second screen recording image set.
3. The method according to claim 1, wherein the first definition and the second definition are obtained in the following manner for the case that the method is applied to the control end:
sending a video playing instruction, a first screen recording instruction and a first definition obtaining instruction to the virtual machine based on a link with the virtual machine, and sending a second screen recording instruction and a second definition obtaining instruction to the client based on a link with the client; the video playing instruction is used for instructing the virtual machine to control the video to be detected to be played both locally and at the client; the first screen recording instruction is used for instructing the virtual machine to record its screen while the video to be detected is played locally, and to generate the first screen recording image set; the first definition obtaining instruction is used for instructing the virtual machine to obtain the first definition of each frame of first screen recording image in the first screen recording image set; the second screen recording instruction is used for instructing the client to record its screen while the video to be detected is played, and to generate the second screen recording image set; the second definition obtaining instruction is used for instructing the client to obtain the second definition of each frame of second screen recording image in the second screen recording image set;
receiving the first definition sent by the virtual machine, and receiving the second definition sent by the client.
4. The method of claim 1, wherein the first definition and the second definition are obtained as follows for the case where the method is applied to the virtual machine:
loading the video to be detected and generating video playing data; the video playing data comprises reconstructed image data of each frame of original image in the video to be detected; the video playing data is used for playing the video to be detected both locally and at the client;
locally playing the video to be detected based on the video playing data, recording a screen when the video to be detected is played, generating a first screen recording image set, and acquiring the first definition of each frame of the first screen recording image in the first screen recording image set;
sending the video playing data, a third screen recording instruction and a third definition obtaining instruction to the client; the third screen recording instruction is used for instructing the client to record the screen when the video to be detected is played based on the video playing data, and generating a second screen recording image set; the third definition obtaining instruction is used for instructing the client to obtain the second definition of each frame of the second screen recording image in the second screen recording image set;
and receiving the second definition sent by the client.
5. The method according to any one of claims 1 to 4, wherein obtaining the definition of each frame of screen recording image in a screen recording image set comprises:
for each frame of screen recording image, converting the frame of screen recording image into a grayscale image to obtain a target pixel matrix of the frame of screen recording image; the value of each element in the target pixel matrix represents the gray value of the pixel point, in the frame of screen recording image, that corresponds to that element;
taking a Laplace operator as a convolution kernel, and performing convolution operation on the target pixel matrix to obtain a characteristic pixel matrix corresponding to the target pixel matrix;
calculating the variance corresponding to the characteristic pixel matrix according to the value of each element in the characteristic pixel matrix;
and taking the variance as the definition of the frame screen recording image.
6. The method according to any one of claims 1 to 4, wherein the obtaining of the detection result of the playing quality of the video to be detected further comprises:
and for each frame of second screen recording image, when it is detected that the original image corresponding to the frame of second screen recording image stutters when played at the client, recording the playing time corresponding to the frame of original image and the image frame number of the frame of original image.
7. The method according to any one of claims 1 to 4, wherein the obtaining of the detection result of the playing quality of the video to be detected comprises:
for each frame of second screen recording image, detecting whether the difference value between the second definition of the frame of second screen recording image and the first definition of the first screen recording image matched with the frame of second screen recording image is larger than a preset difference threshold;
and if so, confirming that screen splash occurs in the original image corresponding to the frame of second screen recording image when it is played at the client.
8. The method according to claim 7, wherein the obtaining of the play quality detection result of the video to be detected further comprises:
and for each frame of second screen recording image, when it is detected that screen splash occurs in the original image corresponding to the frame of second screen recording image when it is played at the client, recording the playing time corresponding to the frame of original image and the image frame number of the frame of original image.
9. A device for detecting video playing quality, characterized in that the device is applied to a virtual machine or a control end; the device comprises:
the acquisition module is used for acquiring the first definition of each frame of first screen recording image in the first screen recording image set and acquiring the second definition of each frame of second screen recording image in the second screen recording image set; the first screen recording image set is obtained by recording a to-be-detected video played by a virtual machine; the second screen recording image set is obtained by recording a to-be-detected video played by the client; the first screen recording images and the second screen recording images are matched one by one, and the playing progress of the to-be-detected video corresponding to the matched first screen recording images and the matched second screen recording images is the same;
the detection module is used for acquiring a play quality detection result of the video to be detected according to the first definition of the first screen recording image of each frame and the second definition of the second screen recording image of each frame;
the acquiring of the play quality detection result of the video to be detected comprises:
for each frame of second screen recording image, detecting whether the second definition of the frame of second screen recording image is equal to the second definition of the adjacent previous frame of second screen recording image;
if so, detecting whether the first definition of the first screen recording image matched with the frame of second screen recording image is equal to the first definition of the first screen recording image matched with the previous frame of second screen recording image;
and if the two first definitions are not equal, confirming that the original image corresponding to the previous frame of second screen recording image stutters when played at the client.
10. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the steps of the method of detecting video playback quality as claimed in any one of claims 1 to 8.
11. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method for detecting video playback quality according to any one of claims 1 to 8.
CN201910680154.2A 2019-07-26 2019-07-26 Method and device for detecting video playing quality Active CN110337035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910680154.2A CN110337035B (en) 2019-07-26 2019-07-26 Method and device for detecting video playing quality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910680154.2A CN110337035B (en) 2019-07-26 2019-07-26 Method and device for detecting video playing quality

Publications (2)

Publication Number Publication Date
CN110337035A CN110337035A (en) 2019-10-15
CN110337035B (en) 2021-11-09

Family

ID=68147653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910680154.2A Active CN110337035B (en) 2019-07-26 2019-07-26 Method and device for detecting video playing quality

Country Status (1)

Country Link
CN (1) CN110337035B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111639235B (en) * 2020-06-01 2023-08-25 重庆紫光华山智安科技有限公司 Video recording quality detection method and device, storage medium and electronic equipment
CN112135123B (en) * 2020-09-24 2023-04-21 三星电子(中国)研发中心 Video quality detection method and device
CN113033292B (en) * 2021-02-02 2024-08-02 佛山市青松科技股份有限公司 Display content detection method and system of LED display screen and client

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160021161A1 (en) * 2014-07-16 2016-01-21 Alcatel-Lucent Usa, Inc. Mobile network video optimization for centralized processing base stations
CN108243033A (en) * 2016-12-26 2018-07-03 中国移动通信有限公司研究院 A kind of method given a mark to video quality, cloud server terminal and system

Also Published As

Publication number Publication date
CN110337035A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN110337035B (en) Method and device for detecting video playing quality
US11677806B2 (en) Platform-independent content generation for thin client applications
CN109091861B (en) Interactive control method in game, electronic device and storage medium
CN110559651A (en) Control method and device of cloud game, computer storage medium and electronic equipment
EP3311565B1 (en) Low latency application streaming using temporal frame transformation
CN111669574A (en) Video playing quality detection method and device
EP3202472A1 (en) Method for selecting a display capturing mode
US20090096909A1 (en) Information processing apparatus, remote indication system, and computer readable medium
CN111760267A (en) Information sending method and device in game, storage medium and electronic equipment
EP3285484A1 (en) Image processing apparatus, image generation method, and program
US20140229527A1 (en) Real-time, interactive measurement techniques for desktop virtualization
CN109343922B (en) GPU (graphics processing Unit) virtual picture display method and device
CN113407086B (en) Object dragging method, device and storage medium
JPWO2013128709A1 (en) Information processing system, information processing method, information processing program, computer-readable recording medium recording the information processing program, and information processing apparatus
CN111870948A (en) Window management method and system under cloud game single-host multi-user environment
US9152872B2 (en) User experience analysis system to analyze events in a computer desktop
US20140223380A1 (en) Image processing apparatus, method of controlling the same, and storage medium
CN111290722A (en) Screen sharing method, device and system, electronic equipment and storage medium
CN110545415A (en) Data transmission method and device and server
CN114237481A (en) Handwriting display processing method, system, device, equipment and storage medium
CN110782530B (en) Method and device for displaying vehicle information in automatic driving simulation system
CN109739648B (en) Animation playing control method, device, equipment and storage medium
US8125525B2 (en) Information processing apparatus, remote indication system, and computer readable medium
CN114020396A (en) Display method of application program and data generation method of application program
CN106484561B (en) Virtual computer system and external device control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant