WO2017092343A1 - Method and apparatus for detecting video data - Google Patents

Method and apparatus for detecting video data

Info

Publication number
WO2017092343A1
WO2017092343A1 (PCT/CN2016/089357, CN2016089357W)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
image frame
video
data
display time
Prior art date
Application number
PCT/CN2016/089357
Other languages
English (en)
French (fr)
Inventor
李云龙
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Publication of WO2017092343A1 publication Critical patent/WO2017092343A1/zh



Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/46: Embedding additional information in the video signal during the compression process
    • H04N 19/467: Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302: Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4305: Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H04N 21/436: Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/43615: Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • H04N 21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44204: Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • H04N 21/44213: Monitoring of end-user related data
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/835: Generation of protective data, e.g. certificates
    • H04N 21/8358: Generation of protective data, e.g. certificates, involving watermark
    • H04N 21/85: Assembly of content; Generation of multimedia applications
    • H04N 21/854: Content authoring
    • H04N 21/8547: Content authoring involving timestamps for synchronizing content

Definitions

  • the present invention relates to the field of video technologies, and in particular, to a method for detecting video data and a device for detecting video data.
  • In the prior art, a test engineer determines whether video playback stutters or video data is lost by watching the video; alternatively, problems of playback stutter and video data loss in online video are discovered through user feedback.
  • the technical problem to be solved by the embodiments of the present invention is to provide a method for detecting video data, which solves the problem that the video detection progress is limited due to manual detection, and improves the detection efficiency while improving the detection accuracy.
  • an embodiment of the present invention further provides a video data detecting apparatus for ensuring implementation and application of the foregoing method.
  • an embodiment of the present invention discloses a method for detecting video data, including:
  • the continuity of the image frame is detected based on the display time of each image frame, and the detection result of the video is generated.
  • an embodiment of the present invention further discloses a video data checking apparatus, including:
  • a receiving module configured to receive frame data of each image frame of the played video, where the frame data includes watermark timing information
  • a display time determining module configured to respectively extract watermark timing information from each frame data, and determine a display time of the image frame corresponding to each frame data
  • a detecting module configured to detect continuity of the image frame based on a display time of each image frame, and generate a detection result of the video.
  • Embodiments of the present invention provide a computer program comprising computer readable code that, when executed on a server, causes the server to perform the above-described method of detecting video data.
  • Embodiments of the present invention provide a computer readable medium in which the above computer program is stored.
  • the embodiment of the invention provides a server, including:
  • one or more processors; and
  • a memory for storing processor-executable instructions;
  • wherein the processor is configured to:
  • the continuity of the image frame is detected based on the display time of each image frame, and the detection result of the video is generated.
  • the embodiments of the invention include the following advantages:
  • The embodiment of the present invention can determine the display time of the image frame corresponding to each frame data by extracting the watermark timing information contained in the frame data, detect the continuity of the image frames based on the display time of each image frame, and generate a detection result for the video; that is, the probability of playback stutter and the time points at which stutter occurs can be accurately counted, improving detection accuracy. At the same time, the slow detection progress caused by manually detecting videos is avoided, improving detection efficiency while reducing the workload of test engineers.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a method for detecting video data according to the present invention
  • FIG. 2 is a flow chart showing the steps of a preferred embodiment of a method for detecting video data according to the present invention
  • FIG. 3A is a structural block diagram of an embodiment of a device for detecting video data according to the present invention.
  • FIG. 3B is a structural block diagram of a preferred embodiment of a video data detecting apparatus according to the present invention.
  • Figure 4 shows schematically a block diagram of a server for carrying out the method according to the invention
  • Fig. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
  • 1. The human resources of the test engineer are wasted;
  • 2. Automatic detection cannot be performed, that is, idle time (such as evenings) cannot be effectively utilized, so the detection progress is limited and the detection efficiency is low;
  • 3. The probability of stutter during video playback and the stutter time points cannot be accurately counted, that is, the detection accuracy is low.
  • One of the core concepts of the embodiments of the present invention is to extract watermark timing information from the frame data of each image frame, determine the display time of the image frame corresponding to each frame data according to the watermark timing information, detect the continuity of the video's image frames based on the display time of each image frame, and generate a detection result for the video; that is, the probability of stutter and the time points at which stutter occurs can be accurately counted, which improves both the detection accuracy and the detection efficiency.
  • Referring to FIG. 1, a flow chart of the steps of a method for detecting video data according to the present invention is shown, which may specifically include the following steps:
  • Step 101 Receive frame data of each image frame of the played video.
  • the frame data includes watermark timing information.
  • watermark timing information can be added to each image frame of the video during encoding.
  • In a specific implementation, a watermark may be added to each image frame of the video, and the content of the watermark includes display timing information of each image frame of the video, such as a frame number and timestamp of the image frame, so that the frame data of each image frame includes watermark timing information.
  • A watermark may be added in a specified area of each image frame, that is, the watermark is embedded in a relatively unchanged area of the video. For example, the video mark (logo) occupies a substantially unchanged position in each image frame, so fragile transparent watermark timing information may be added to the logo of the video.
  • the frame number or time stamp of each image frame of the video source is converted into a 32-bit binary number by quantization, such as a quantized display time stamp (PTS).
  • the quantized PTS is embedded as a fragile transparent watermark into the P macroblock of the logo. Since the watermark is fragile and transparent, the watermark is invisible to the human eye when displayed, that is, it does not affect the display of the image frame, and the integrity of the image frame display of the video is ensured.
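  • The embedding step described above can be sketched in Python. This is a simplified model, not the patent's actual encoder: a macroblock is represented here as a plain integer array whose sample-sum parity is nudged to carry one bit, whereas a real implementation would operate on the P macroblocks of the encoded logo region. The function names are illustrative.

```python
# Hypothetical sketch: quantize a frame's PTS into a 32-bit binary
# number and embed it as macroblock parity in the logo area.

def quantize_pts(pts_ms: int) -> list[int]:
    """Quantize a PTS (in milliseconds) into 32 bits, MSB first."""
    return [(pts_ms >> (31 - i)) & 1 for i in range(32)]

def embed_bit(macroblock: list[int], bit: int) -> list[int]:
    """Force the parity of the macroblock's sample sum to encode one bit:
    odd sum -> 1, even sum -> 0 (the mapping used on the decoding side)."""
    mb = list(macroblock)
    if sum(mb) % 2 != bit:
        mb[0] += 1  # minimal, visually negligible change in this model
    return mb

def embed_pts(macroblocks: list[list[int]], pts_ms: int) -> list[list[int]]:
    """Embed one bit of the quantized PTS into each of 32 macroblocks."""
    bits = quantize_pts(pts_ms)
    return [embed_bit(mb, b) for mb, b in zip(macroblocks, bits)]
```

  • After `embed_pts`, the sum parity of the i-th logo macroblock carries the i-th bit of the quantized PTS, so the decoder can restore the display time by the odd-to-1, even-to-0 mapping described below.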
  • An intelligent terminal (such as a smartphone) plays the video and can capture the frame data of each image frame after decoding and forward it to the stutter and frame-loss analysis server (referred to as the server).
  • For example, the smart terminal can obtain the frame data sent to the display screen by calling a screen capture interface, such as the Surface Flinger service provided by the Android system; alternatively, the frame data of each image frame can be acquired by directly capturing data of a designated area (such as the logo area) of the display screen through the liquid crystal display (LCD) driver.
  • the frame data may be data generated in accordance with the YUV format, that is, YUV data.
  • Y indicates brightness (Luminance or Luma)
  • U indicates chromaticity (Chrominance or Chroma).
  • After receiving the frame data of each image frame of the video, the server can automatically detect the frame data of each image frame and generate a detection result of the video; the specific detection process is described later.
  • The frame data can be transmitted between the intelligent terminal and the server through the network, such as through TCP_SOCKET, or through a universal serial bus (USB) serial port, such as the Android Debug Bridge (ADB); the embodiment of the present invention does not limit this.
  • the foregoing step 101 may include the following substeps:
  • Sub-step 10101: Connect, by wireless or wired means, to the smart terminal where the player is located.
  • Sub-step 10103: Receive frame data extracted by the smart terminal from a designated area of each image frame.
  • Each image frame is an image frame played in the video being played by the player.
  • Step 103 Extract watermark timing information from each frame data respectively, and determine a display time of the image frame corresponding to each frame data.
  • In a specific implementation, the server may extract the watermark timing information from each frame data by inverse transform, and parse the watermark timing information to determine the display time of the image frame corresponding to each frame data.
  • watermark timing information is added in the logo area.
  • The server can determine the parity of each macroblock by traversing the macroblocks of the logo area, thereby restoring the timing information embedded in the watermark during encoding, that is, determining the display time of each image frame. For example, the server can obtain a 32-bit binary number by traversing the parity of 32 macroblocks, mapping odd to 1 and even to 0, and thereby restore the PTS embedded in the watermark, that is, determine the display time of each image frame.
  • The display time of an image frame may be represented as a hexadecimal number. If the display time of the first image frame of the video is 16 milliseconds (ms), the display time of the first image frame can be determined by calculation to be 0x00000010ms.
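  • The restoration described above can be sketched under the same simplified model (a macroblock is an integer array whose sample-sum parity carries one bit); `extract_pts` and `display_time_hex` are hypothetical names.

```python
# Hypothetical sketch of the server-side restoration step: traverse the
# parity of 32 macroblocks in the logo area (odd -> 1, even -> 0) to
# rebuild the 32-bit quantized PTS, i.e. the frame's display time.

def extract_pts(macroblocks: list[list[int]]) -> int:
    """Restore the quantized PTS from 32 macroblocks, one bit each."""
    assert len(macroblocks) == 32, "one bit per macroblock"
    pts = 0
    for mb in macroblocks:
        bit = sum(mb) % 2  # odd sum maps to 1, even sum to 0
        pts = (pts << 1) | bit
    return pts

def display_time_hex(pts_ms: int) -> str:
    """Render the display time as in the text, e.g. 16 ms -> 0x00000010ms."""
    return f"0x{pts_ms:08x}ms"
```

  • For a first image frame whose display time is 16 ms, `display_time_hex(16)` yields the `0x00000010ms` representation used in the examples of this document.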
  • Alternatively, the watermark may be embedded as a sequence of values X and Y, where X and Y can each be 1 or 0 and the parity of a macroblock is determined based on the value of X or Y, so that the display time of one image frame is determined by traversing the parity of 64 macroblocks. The embodiment of the invention does not limit the implementation by which the watermark timing information is restored to determine the display time of each image frame.
  • step 103 may include the following sub-steps:
  • Sub-step 10301 for each frame data, watermark timing information is extracted from the frame data.
  • Sub-step 10303: Parse the extracted watermark timing information to determine the display time of the image frame corresponding to the frame data.
  • Step 105 Detect continuity of the image frame based on display time of each image frame, and generate a detection result of the video.
  • During smooth playback, the display time of each extracted image frame should increase uniformly.
  • By comparing the display time of each image frame with the synchronization time and determining whether the display times are irregular, the stutter situation during video playback can be determined. Specifically, by comparing the display time of each image frame with the server's local NTP (Network Time Protocol) time, the time difference corresponding to each image frame can be obtained.
  • If the time difference corresponding to an image frame is not within the preset playing time range, the image frame may be considered stuck, and the display time of that image frame is recorded as the stuck time point.
  • In a specific implementation, detecting the continuity of the image frames based on the display time of each image frame and generating the detection result of the video may include: calculating the difference between the display time of each image frame and pre-generated synchronization time information to generate the time difference corresponding to each image frame; determining whether the time difference corresponding to each image frame is within the playing time range; and, when the time difference corresponding to an image frame is not within the playing time range, taking the display time point of the image frame as the stuck time point and generating a stutter detection result for the video.
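  • The stutter check just described can be sketched as a small function. The (0, 40) millisecond playing-time range follows the worked example later in this document; the function name and the strict boundary handling are assumptions.

```python
def detect_stutter(display_ms, sync_ms, play_range=(0, 40)):
    """Flag frames whose timegap (synchronization time minus watermark
    display time) falls outside the allowed playing-time range; the
    display time of a flagged frame is recorded as a stuck time point."""
    lo, hi = play_range
    stuck_points = []
    for display, sync in zip(display_ms, sync_ms):
        timegap = sync - display  # time difference for this frame
        if not (lo < timegap < hi):
            stuck_points.append(display)
    return stuck_points
```

  • With the display times 16, 48 and 80 ms and synchronization times 17, 55 and 189 ms from the worked example in this document, the timegaps are 1, 7 and 109 ms, and only the third frame (display time 80 ms) is flagged as stuck.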
  • Based on whether the display time of each image frame increases uniformly, it can also be determined whether the video has frame loss. Specifically, when the interval between the display times of two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it may be determined that an image frame was lost between the two image frames, that is, data was lost between them. By recording the display times of the two image frames, the time point at which the video dropped frames can be determined.
  • In a specific implementation, detecting the continuity of the image frames based on the display time of each image frame and generating the detection result of the video may further include: performing statistics on the display time of each image frame to determine the inter-frame time difference corresponding to the video and the display time difference corresponding to each pair of adjacent image frames; determining whether the display time difference corresponding to two adjacent image frames is greater than the inter-frame time difference; and, when the display time difference is greater than the inter-frame time difference, determining that data was lost between the two image frames and generating a frame-loss detection result for the video.
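  • The frame-loss check can be sketched the same way; `detect_dropped_frames` is an illustrative name, and the caller supplies the video's inter-frame time difference.

```python
def detect_dropped_frames(display_ms, inter_frame_ms):
    """Return (previous, current) display-time pairs whose gap exceeds
    the video's inter-frame time difference, i.e. places where data
    was lost between two adjacent image frames."""
    lost = []
    for prev, cur in zip(display_ms, display_ms[1:]):
        if cur - prev > inter_frame_ms:
            lost.append((prev, cur))  # record both display times
    return lost
```

  • For display times 0, 32, 64 and 128 ms with a 32 ms inter-frame time difference, the single 64 ms gap marks data lost between the frames shown at 64 ms and 128 ms.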
  • In the embodiment of the present invention, the stutter and frame-loss analysis server can automatically generate video detection results, such as the stutter detection result and the frame-loss detection result.
  • The detection result may include the time points of video frame loss, the stuck time points, and the like, so that the probability of stutter and the stutter time points during video playback can be accurately counted, improving the accuracy of the detection.
  • The server can automatically detect video playback stutter and video data loss by extracting the watermark timing information in the frame data of each image frame and generate a video detection result, thereby reducing the workload of the test engineer, saving human resources, and reducing the cost of testing.
  • the server can automatically detect, that is, the video can be detected by using idle time, which speeds up the detection process and improves the detection efficiency.
  • Referring to FIG. 2, a flow chart of the steps of a preferred embodiment of a method for detecting video data according to the present invention is shown. Specifically, the method may include the following steps:
  • Step 201 Connect the smart terminal where the player is located by wireless or wired.
  • the server can connect to the smart terminal where the player is located by wireless or wired.
  • The wireless method refers to communication that transmits information using radio wave signals propagating in free space.
  • the wired method refers to a method of transmitting information by using a tangible medium such as a metal wire or an optical fiber.
  • the video is usually played by the player of the terminal, such as playing through a player of the smart phone, or playing through a web player of the smart terminal.
  • the smart terminal where the player is located can connect to the server through a wireless connection, such as a WI-FI connection.
  • the smart terminal can also connect to the server through a wired method such as a universal serial bus.
  • the embodiment of the present invention does not limit the connection manner between the server and the smart terminal.
  • Step 203 Receive frame data extracted by the smart terminal from a designated area of each image frame.
  • Each image frame is an image frame played in a video that has been played by the player. Specifically, the player can realize video playback by continuously displaying image frames of the video.
  • the display of each image frame is equivalent to displaying one picture.
  • the intelligent terminal can obtain the YUV data displayed on the display screen through the Surface Flinger service provided by the system itself; or directly capture the data in the designated area of the display screen through the LCD driver.
  • the smart terminal can extract the frame data from the designated area of each image frame of the video, such as obtaining the YUV data from the area displayed by the logo.
  • The frame data of each image frame contains the display time information of the image frame, that is, the watermark timing information.
  • The intelligent terminal transmits the extracted frame data to the server.
  • the server can receive the frame data sent by the smart terminal through a network interface such as a TCP_SOCKET interface, a USB serial port, or the like.
  • Step 205 Extract watermark timing information from the frame data for each frame data.
  • In a specific implementation, the server may extract the watermark timing information of the image frame corresponding to the frame data from the frame data by inverse transform, such as extracting the watermark timing information from the YUV data of the designated area.
  • Step 207 Parse the extracted watermark timing information to determine a display time of the image frame corresponding to the frame data.
  • the watermark timing information is embedded as a watermark in a macroblock of the designated area.
  • the display time of each image frame can be determined by inversely transforming the macroblock in which the watermark timing information is located, that is, by traversing the macroblock of the designated area to determine the parity of the macroblock.
  • In the embodiment of the present invention, the continuity of the image frames is detected based on the display time of each image frame, and the detection result of the video is generated.
  • The detection result of the video includes a stutter detection result and a frame-loss detection result, as follows.
  • Generating the stutter detection result of the video may specifically include step 209, step 211, and step 213; generating the frame-loss detection result of the video may specifically include step 215, step 217, and step 219.
  • Step 209 Calculate a difference between the display time of each image frame and the synchronization time information generated in advance, and generate a time difference corresponding to each image frame.
  • Based on the server's local NTP time, such as the current system time (System.currentTime), the time at which each image frame is actually displayed during playback can be determined. The difference between the actual playback time of each image frame and the start time of the video may then be computed, generating the synchronization time information corresponding to each image frame.
  • For example, suppose System.currentTime at the start of the video is 3500 milliseconds. If the display time of the first image frame is 16 milliseconds and its actual playback time is 3517 milliseconds, the synchronization time information corresponding to the first image frame is 17 milliseconds; if the display time of the second image frame is 48 milliseconds and its actual playback time is 3555 milliseconds, the synchronization time information corresponding to the second image frame is 55 milliseconds; if the display time of the third image frame is 80 milliseconds and its actual playback time is 3689 milliseconds, the synchronization time information corresponding to the third image frame is 189 milliseconds.
  • the time difference timegaps corresponding to each image frame can be generated. For example, for the first image frame, the difference between the display time of the image frame and the corresponding synchronization time information of 17 milliseconds is calculated, and the time difference timegaps corresponding to the first image frame can be generated to be 1 millisecond. Similarly, the time difference timegaps corresponding to the second image frame can be generated to be 7 milliseconds, and the time difference timegaps corresponding to the third image frame is 109 milliseconds.
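  • The arithmetic of this example can be reproduced directly (the variable names are ours, not part of the patent):

```python
# Reproducing the worked example: synchronization time is actual
# playback time minus the video start time, and the per-frame
# timegap is synchronization time minus watermark display time.
start_ms = 3500
actual_ms = [3517, 3555, 3689]  # actual playback times of frames 1-3
display_ms = [16, 48, 80]       # watermark display times of frames 1-3

sync_ms = [a - start_ms for a in actual_ms]              # [17, 55, 189]
timegaps = [s - d for s, d in zip(sync_ms, display_ms)]  # [1, 7, 109]
```

  • The resulting timegaps of 1, 7 and 109 milliseconds match the values stated above for the first, second and third image frames.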
  • Step 211 Determine whether the time difference corresponding to each image frame is within the playing time range.
  • Even in a good network transmission environment, online video playback can be delayed. If an image frame of the video can be played within the preset playing time range, the image frame is considered to play smoothly. Assume that an image frame delayed by no more than 40 milliseconds can be considered to play smoothly, that is, the image frame does not stutter; when an image frame of the video is delayed by more than 40 milliseconds, the image frame is considered stuck.
  • the server can determine whether the time difference timegaps corresponding to each image frame of the video is within the play time range (A, B), where A can be 0 milliseconds, and B can be 40 milliseconds.
  • If so, the image frame can be considered to play smoothly; otherwise, the image frame is considered stuck.
  • For example, the time difference timegaps of the first image frame is 1 millisecond, which is within the playback time range (0 milliseconds, 40 milliseconds), so the first image frame of the video can be considered smooth; the time difference timegaps of the third image frame is 109 milliseconds, which is not within the playback time range (0 milliseconds, 40 milliseconds), so the third image frame of the video may be considered stuck, and step 213 is then performed.
  • Step 213 When the time difference corresponding to an image frame is not within the playing time range, use the display time point of the image frame as the stuck time point, and generate the stutter detection result of the video.
  • For example, the time difference timegaps of the third image frame, 109 milliseconds, is not within the playback time range, so the display time of the third image frame, 80 milliseconds, is taken as the stuck time point to generate the stutter detection result of the video.
  • Step 215 Perform statistics on the display time of each image frame, determine an inter-frame time difference corresponding to the video, and a display time difference corresponding to each adjacent two image frames.
  • For example, the inter-frame time difference of the video is 32 milliseconds, and the display time difference corresponding to each pair of adjacent image frames is determined; for example, the display time difference corresponding to the first image frame and the second image frame is 32 milliseconds, that is, the difference between the display time 0x00000030ms of the second image frame and the display time 0x00000010ms of the first image frame.
  • Similarly, the display time difference corresponding to other adjacent image frames can be calculated; for example, the display time difference corresponding to the second and third image frames is 32 milliseconds, that corresponding to the third and fourth image frames is 64 milliseconds, that corresponding to the fourth and fifth image frames is 32 milliseconds, and that corresponding to the fifth and sixth image frames is 64 milliseconds...
  • Step 217 Determine whether a display time difference corresponding to the two image frames is greater than the inter-frame time difference.
  • In the embodiment of the present invention, it is determined whether the display time difference corresponding to two adjacent image frames is greater than the inter-frame time difference corresponding to the video. If not, it is considered that no data was lost between the two image frames, that is, no frames were dropped. If the display time difference corresponding to two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it is considered that data was lost between the two image frames, that is, frames were dropped; for example, the display time difference of 64 milliseconds corresponding to the third and fourth image frames is greater than the inter-frame time difference of 32 milliseconds corresponding to the video, and step 219 is then performed.
  • Step 219 When the display time difference is greater than the inter-frame time difference, determine that data was lost between the two image frames, and generate the frame-loss detection result of the video.
  • The frame-loss detection result of the video can record between which image frames frames were lost, such as recording that frames were lost between the third and fourth image frames and between the fifth and sixth image frames; or it may record between which display times frames were lost, such as a lost frame between display times 0x00000050ms and 0x00000090ms and a lost frame between display times 0x000000b0ms and 0x000000f0ms, which is not limited by the embodiment of the present invention.
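  • The frame-loss records quoted above follow from the hex display times in the example; the record string format below is illustrative only:

```python
# Adjacent display-time gaps larger than the 32 ms inter-frame
# difference mark a dropped frame; each record keeps both display
# times in the 0x%08x hex notation used in this document.
display_ms = [0x10, 0x30, 0x50, 0x90, 0xB0, 0xF0]  # 16..240 ms
inter_frame_ms = 32

records = [
    f"lost frame between 0x{prev:08x}ms and 0x{cur:08x}ms"
    for prev, cur in zip(display_ms, display_ms[1:])
    if cur - prev > inter_frame_ms
]
```

  • Only the gaps between the third and fourth frames (0x50 to 0x90) and the fifth and sixth frames (0xb0 to 0xf0) exceed 32 ms, matching the two lost-frame records in the example.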
  • the frame loss detection result of the video is generated, so that the probability of the frame loss of the video and the time point of the frame loss can be accurately counted, and the accuracy of the detection is improved.
  • Referring to FIG. 3A, a structural block diagram of an embodiment of a device for detecting video data according to the present invention is shown, which may specifically include the following modules:
  • The receiving module 301 is configured to receive frame data of each image frame of the played video, where the frame data includes watermark timing information.
  • The display time determining module 303 is configured to extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data.
  • The detecting module 305 is configured to detect the continuity of the image frames based on the display time of each image frame and generate the detection result of the video.
  • The receiving module 301 may include a connection sub-module 30101 and a data receiving sub-module 30103, with reference to FIG. 3B.
  • The connection sub-module 30101 is configured to connect to the smart terminal where the player is located in a wireless or wired manner.
  • The data receiving sub-module 30103 is configured to receive the frame data extracted by the smart terminal from a designated area of each image frame, where each image frame is an image frame played in the video that has been played by the player.
  • the display time determination module 303 can include the following sub-modules:
  • The extraction sub-module 30301 is configured to extract, for each piece of frame data, the watermark timing information from the frame data.
  • The parsing sub-module 30303 is configured to parse the extracted watermark timing information and determine the display time of the image frame corresponding to the frame data.
  • the detection module 305 can include the following sub-modules:
  • the time difference generation sub-module 30501 is configured to calculate a difference between the display time of each image frame and the pre-generated synchronization time information, and generate a time difference corresponding to each image frame.
  • The stutter judgment sub-module 30503 is configured to determine whether the time difference corresponding to each image frame is within the play time range.
  • The stutter result generation sub-module 30505 is configured to, when the time difference corresponding to an image frame is not within the play time range, take the display time point of the image frame as a stutter time point and generate the stutter detection result of the video.
  • the detecting module 305 may further include the following submodules:
  • the statistics sub-module 30507 is configured to perform statistics on the display time of each image frame, determine an inter-frame time difference corresponding to the video, and a display time difference corresponding to each adjacent two image frames.
  • the frame loss judging sub-module 30509 is configured to determine whether a display time difference corresponding to the two image frames is greater than the inter-frame time difference.
  • The frame loss result generation sub-module 30511 is configured to determine, when the display time difference is greater than the inter-frame time difference, that data is lost between the two image frames, and generate the frame loss detection result of the video.
  • In the embodiment of the present invention, after the frame data of each image frame of the played video is received, the display time of the image frame corresponding to each piece of frame data may be determined by extracting the watermark timing information included in the frame data, the continuity of the image frames is detected based on the display time of each image frame, and the detection result of the video is generated. In this way, the probability of the stutter phenomenon occurring and the stutter time points can be counted accurately, improving the accuracy of the detection; at the same time, the slow detection progress caused by manually detecting video playback stutter is avoided, which improves the detection efficiency while reducing the workload of the test technicians.
  • As for the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for relevant parts reference may be made to the description of the method embodiment.
  • Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus, or a computer program product.
  • Therefore, the embodiments of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • Moreover, the embodiments of the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • Figure 4 illustrates a server in which the present invention can be implemented.
  • the server conventionally includes a processor 410 and a computer program product or computer readable medium in the form of a memory 420.
  • the memory 420 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • Memory 420 has a memory space 430 for program code 431 for performing any of the method steps described above.
  • storage space 430 for program code may include various program code 431 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such computer program products are typically portable or fixed storage units as described with reference to FIG. 5.
  • The storage unit may have storage segments, storage space, and the like arranged similarly to the memory 420 in the server of FIG. 4.
  • The program code may, for example, be compressed in a suitable form.
  • Typically, the storage unit includes computer-readable code 431', that is, code that can be read by a processor such as the processor 410, which, when run by a server, causes the server to perform the various steps of the methods described above.
  • The embodiments of the invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to the embodiments of the invention. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or another programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device, the instruction device implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present invention provide a method and a device for detecting video data. The method includes: receiving frame data of each image frame of a played video, the frame data including watermark timing information; extracting the watermark timing information from each piece of frame data and determining the display time of the image frame corresponding to each piece of frame data; and detecting the continuity of the image frames based on the display time of each image frame to generate a detection result of the video. By extracting the watermark timing information contained in the frame data, detecting the continuity of the image frames, and generating the detection result of the video, the embodiments of the present invention can accurately count the probability of stutter occurring and the stutter time points, improving the accuracy of detection; at the same time, the slow detection progress caused by manually detecting video playback stutter is avoided, which improves detection efficiency while reducing the workload of test technicians.

Description

Method and device for detecting video data
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 4, 2015 with application number 201510889781.9 and entitled "Method and device for detecting video data", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of video technology, and in particular to a method for detecting video data and a device for detecting video data.
Background
During playback, a video is prone to playback stutter or data loss (such as loss of video frames), preventing the video from playing smoothly. To ensure smooth video playback and improve the user experience, the playback stutter and data loss of the video need to be detected.
In the course of implementing the present invention, the inventor found that at present, video playback is mainly checked by technicians to determine playback stutter and data loss. Taking online video playback as an example, during playback a test engineer watches the video to determine its playback stutter and data loss; alternatively, playback stutter and video data loss are discovered through problems reported by users. With manual detection, however, it is difficult to record the time points at which stutter occurs during playback, and the detection progress is limited.
Obviously, the current manual detection of video playback stutter cannot accurately count the probability of stutter occurring and the stutter time points, and its detection efficiency is low.
Summary
The technical problem to be solved by the embodiments of the present invention is to provide a method for detecting video data that solves the problem of limited video detection progress caused by manual detection, improving the accuracy of detection while improving detection efficiency.
Correspondingly, the embodiments of the present invention further provide a device for detecting video data, so as to ensure the implementation and application of the above method.
To solve the above problem, an embodiment of the present invention discloses a method for detecting video data, including:
receiving frame data of each image frame of a played video, the frame data including watermark timing information;
extracting the watermark timing information from each piece of frame data and determining the display time of the image frame corresponding to each piece of frame data;
detecting the continuity of the image frames based on the display time of each image frame, and generating a detection result of the video.
Correspondingly, an embodiment of the present invention further discloses a device for detecting video data, including:
a receiving module, configured to receive frame data of each image frame of a played video, the frame data including watermark timing information;
a display time determining module, configured to extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data;
a detecting module, configured to detect the continuity of the image frames based on the display time of each image frame and generate a detection result of the video.
An embodiment of the present invention provides a computer program including computer-readable code which, when run on a server, causes the server to perform the above method for detecting video data.
An embodiment of the present invention provides a computer-readable medium in which the above computer program is stored.
An embodiment of the present invention provides a server, including:
one or more processors;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receive frame data of each image frame of a played video, the frame data including watermark timing information;
extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data;
detect the continuity of the image frames based on the display time of each image frame, and generate a detection result of the video.
Compared with the prior art, the embodiments of the present invention have the following advantages:
After receiving the frame data of each image frame of a played video, the embodiments of the present invention may determine the display time of the image frame corresponding to each piece of frame data by extracting the watermark timing information contained in the frame data, detect the continuity of the image frames based on the display time of each image frame, and generate a detection result of the video. In this way, the probability of stutter occurring and the stutter time points can be counted accurately, improving the accuracy of detection; at the same time, the slow detection progress caused by manually detecting video playback stutter is avoided, which improves detection efficiency while reducing the workload of test technicians.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of the steps of an embodiment of a method for detecting video data according to the present invention;
FIG. 2 is a flowchart of the steps of a preferred embodiment of a method for detecting video data according to the present invention;
FIG. 3A is a structural block diagram of an embodiment of a device for detecting video data according to the present invention;
FIG. 3B is a structural block diagram of a preferred embodiment of a device for detecting video data according to the present invention;
FIG. 4 schematically shows a block diagram of a server for performing the method according to the present invention; and
FIG. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
At present, detecting video playback stutter and video data loss by manual discovery has the following problems: 1. It wastes the human resources of test engineers. 2. It cannot run automatically, that is, idle time (such as nighttime) cannot be used effectively, so the detection progress is limited and the detection efficiency is low. 3. The stutter probability and stutter time points of video playback cannot be counted accurately, that is, the accuracy of detection is low.
In view of the above problems, one of the core ideas of the embodiments of the present invention is to extract watermark timing information from the frame data of each image frame, determine the display time of the image frame corresponding to each piece of frame data according to the watermark timing information, detect the continuity of the video image frames through the display time of each image frame, and generate the detection result of the video, so that the probability of stutter occurring and the stutter time points can be counted accurately, improving the accuracy of detection as well as the detection efficiency.
Referring to FIG. 1, a flowchart of the steps of an embodiment of a method for detecting video data according to the present invention is shown, which may specifically include the following steps:
Step 101: Receive frame data of each image frame of a played video.
The frame data includes watermark timing information. In practice, watermark timing information can be added to every image frame of the video during encoding. Specifically, during video encoding, a watermark can be added to each image frame of the video by means of watermarking technology. The content of the watermark contains the display timing information of each image frame of the video, such as the frame number and timestamp of the image frame, so that the frame data of each image frame includes watermark timing information.
Preferably, the watermark can be added to a designated area of each image frame, that is, the watermark is embedded into a relatively unchanging area of the video. For example, since the position of the video logo (logotype) in each image frame is essentially fixed, fragile transparent watermark timing information can be added to the video logo. Specifically, through quantization, the frame number or timestamp of each image frame of the video source is transformed into a 32-bit binary number, such as a quantized Presentation Time Stamp (PTS). During encoding, the quantized PTS (equivalent to the timing information) is embedded as a fragile transparent watermark into the P macroblocks of the logo. Since the watermark is fragile and transparent, it cannot be seen by the human eye during display, that is, it does not affect the display of the image frame, ensuring the integrity of the displayed image frames of the video.
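As a rough sketch of this embedding step, assuming purely for illustration that each P macroblock can be reduced to one integer coefficient whose parity can be nudged (the patent operates on real codec macroblocks; `embed_pts` and the coefficient list are hypothetical names):

```python
def quantize_pts(pts_ms: int) -> list:
    """Quantize a display timestamp to 32 bits, most significant bit first."""
    return [(pts_ms >> (31 - i)) & 1 for i in range(32)]

def embed_pts(macroblocks: list, pts_ms: int) -> list:
    """Force the parity of each of 32 macroblock coefficients to carry one
    bit of the PTS: odd coefficient = bit 1, even coefficient = bit 0."""
    bits = quantize_pts(pts_ms)
    out = []
    for coeff, bit in zip(macroblocks, bits):
        if coeff % 2 != bit:
            coeff += 1  # minimally perturb the coefficient to fix its parity
        out.append(coeff)
    return out

# The first image frame's display time of 16 ms becomes 0x00000010.
marked = embed_pts(list(range(100, 132)), 16)
```

Reading the parities of `marked` back, odd as 1 and even as 0, yields the 32-bit value 16 again, which is the inverse transform described for step 103 below.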
In practical applications, a smart terminal such as a smartphone playing the video can capture the decoded frame data of each image frame and forward it to a stutter and frame-loss analysis server (server for short). Specifically, the smart terminal can obtain the frame data sent to the display by calling a screenshot interface, such as the interface provided by the Surface Flinger service of the Android system; it can also directly intercept the data of a designated area of the display screen (such as the logo area) through the Liquid Crystal Display (LCD) driver, thereby obtaining the frame data of each image frame. For example, the frame data may be data generated in the YUV format, that is, YUV data, where "Y" denotes luminance (Luma) and "U" and "V" denote chrominance (Chroma). After receiving the frame data of each image frame of the played video, the server can automatically detect the frame data of each image frame of the video and generate the detection result of the video; the specific detection process is described later.
It should be noted that the frame data between the smart terminal and the server can be transmitted over a network, for example via TCP_SOCKET, or over a Universal Serial Bus (USB) serial connection, such as the Android Debug Bridge (ADB), which is not limited by the embodiment of the present invention.
Optionally, the above step 101 may include the following sub-steps:
Sub-step 10101: Connect to the smart terminal where the player is located in a wireless or wired manner.
Sub-step 10103: Receive the frame data extracted by the smart terminal from a designated area of each image frame.
Each image frame is an image frame played in the video that has been played by the player.
Step 103: Extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data.
After receiving the frame data, the server can extract the watermark timing information from each piece of frame data through an inverse transform and parse the watermark timing information to determine the display time of the image frame corresponding to each piece of frame data. As in the above example where watermark timing information is added in the logo area, after receiving the frame data the server can traverse the macroblocks of the logo area and determine the parity of each macroblock, thereby recovering the timing information embedded in the watermark during encoding, that is, determining the display time of each image frame. For example, the server can traverse the parity of 32 macroblocks, mapping odd to 1 and even to 0, to obtain a 32-bit binary number and recover the PTS embedded in the watermark during encoding, that is, determine the display time of each image frame. For example, using hexadecimal numbers to represent the display time of an image frame, and assuming that the display time of the first image frame of the video is 16 milliseconds (ms), it can be calculated that the display time of the first image frame of the video is 0x00000010ms.
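The inverse transform described above (traverse 32 macroblocks of the logo area, map odd to 1 and even to 0, and reassemble the 32-bit PTS) can be sketched as follows, with the same simplifying assumption that each macroblock is represented by one integer coefficient:

```python
def recover_pts(macroblocks: list) -> int:
    """Recover the display time by reading one bit per macroblock:
    an odd coefficient maps to 1, an even coefficient to 0."""
    pts = 0
    for coeff in macroblocks[:32]:
        pts = (pts << 1) | (coeff % 2)
    return pts

# 27 even coefficients, one odd, then 4 even: bits ...00010000 = 16 ms.
coeffs = [2] * 27 + [3, 2, 2, 2, 2]
print(hex(recover_pts(coeffs)))  # 0x10
```

For the two-macroblock robust variant mentioned below, the same loop would read 64 coefficients and combine each (X, Y) pair into one bit.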
Of course, for robustness, two macroblocks can also be traversed to determine one binary bit (X, Y), that is, the parity of 64 macroblocks is traversed to determine the display time of one image frame, where X and Y can each be 1 or 0. When X and Y are the same, for example both 1 or both 0, the parity of the macroblocks can be determined from the value of X or Y. The embodiment of the present invention does not limit how the watermark timing information is recovered into the display time of each image frame.
Optionally, the above step 103 may include the following sub-steps:
Sub-step 10301: For each piece of frame data, extract the watermark timing information from the frame data.
Sub-step 10303: Parse the extracted watermark timing information to determine the display time of the image frame corresponding to the frame data.
Step 105: Detect the continuity of the image frames based on the display time of each image frame, and generate a detection result of the video.
In fact, when the video plays normally and smoothly, the extracted display times of the image frames should increase uniformly. By comparing the display time of each image frame with the synchronization time and determining whether the display times of the frames vary irregularly, the stutter situation during playback of the video can be determined. Specifically, by comparing the display time of each image frame with the server's local NTP (Network Time Protocol) time, a time difference corresponding to each image frame can be obtained. When the time difference corresponding to an image frame is not within a preset play time range, the image frame can be considered to have stuttered during playback, and the display time of the image frame is taken as a stutter time point and recorded.
Optionally, detecting the continuity of the image frames based on the display time of each image frame and generating the detection result of the video may include: calculating the difference between the display time of each image frame and pre-generated synchronization time information to generate a time difference corresponding to each image frame; determining whether the time difference corresponding to each image frame is within the play time range; and when the time difference corresponding to an image frame is not within the play time range, taking the display time point of the image frame as a stutter time point and generating the stutter detection result of the video.
In addition, by determining whether the display times of the image frames increase uniformly, it can be determined whether frames of the video are lost. Specifically, when the interval between the display times of two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it can be determined that there is a lost image frame between the two image frames, that is, data is lost between the two image frames. By recording the display times of the two image frames, the time points at which frames of the video were lost can be determined.
In a preferred embodiment of the present invention, detecting the continuity of the image frames based on the display time of each image frame and generating the detection result of the video may further include: performing statistics on the display time of each image frame to determine the inter-frame time difference corresponding to the video and the display time difference corresponding to each pair of adjacent image frames; determining whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference; and when the display time difference is greater than the inter-frame time difference, determining that data is lost between the two image frames and generating the frame loss detection result of the video.
Based on the frame loss situation and/or stutter situation of the video, the frame-loss and stutter analysis server can automatically generate the detection result of the video, such as a stutter detection result and a frame loss detection result. The detection result can include the time points at which frames of the video were lost, the stutter time points, and so on, so that the probability of stutter occurring during video playback and the stutter time points can be counted accurately, improving the accuracy of detection.
In the embodiment of the present invention, the server can automatically detect video playback stutter and video data loss by extracting the watermark timing information from the frame data of each image frame and generate the detection result of the video, which reduces the workload of test engineers and saves human resources, thereby reducing the cost of detection. In addition, the server can detect automatically, that is, it can use idle time to detect the video, which speeds up the detection progress and improves detection efficiency.
Referring to FIG. 2, a flowchart of the steps of a preferred embodiment of a method for detecting video data according to the present invention is shown, which may specifically include the following steps:
Step 201: Connect to the smart terminal where the player is located in a wireless or wired manner.
In fact, on the Internet, for example in a local area network, the server can connect to the smart terminal where the player is located wirelessly or over a wire. Wireless here refers to wireless communication, a mode of communication that exchanges information using the property that radio signals can propagate in free space; wired refers to wired communication, which transmits information over tangible media such as metal wires and optical fiber.
Specifically, videos are usually played through a player on a terminal, such as the player of a smartphone or a web player on a smart terminal. While the player is playing a video, the smart terminal where the player is located can connect to the server wirelessly, for example over a WI-FI connection. Of course, the smart terminal can also connect to the server over a wire, such as a Universal Serial Bus. The embodiment of the present invention does not limit how the server and the smart terminal are connected.
Step 203: Receive the frame data extracted by the smart terminal from a designated area of each image frame.
Each image frame is an image frame played in the video that has been played by the player. Specifically, the player can play the video by displaying the image frames of the video continuously, where displaying each image frame amounts to displaying one picture.
The smart terminal can obtain the YUV data sent to the display through the Surface Flinger service provided by the system itself, or directly intercept the data of a designated area of the display screen through the LCD driver. While the player is playing the video, the smart terminal can extract frame data from a designated area of each image frame of the video, for example obtaining YUV data from the area where the logo is displayed. The frame data of each image frame contains the display time information of that image frame, that is, it contains the watermark timing information. After extracting the frame data from the designated area of each image frame, the smart terminal sends the extracted frame data to the server. The server can receive the frame data sent by the smart terminal through a network interface such as a TCP_SOCKET interface, a USB serial port, or the like.
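As a small illustration of extracting the designated (logo) area from a captured frame, assuming a row-major 8-bit Y (luma) plane; the buffer layout and the helper name are assumptions, not the Surface Flinger or LCD-driver interface itself:

```python
def crop_luma_region(y_plane, width, x, y, w, h):
    """Crop a w-by-h window from a row-major 8-bit Y (luma) plane,
    returning one list of pixel values per row of the region."""
    return [y_plane[(y + r) * width + x : (y + r) * width + x + w]
            for r in range(h)]

# A 4x4 frame with pixel values 0..15; crop the 2x2 region at (1, 1).
frame = list(range(16))
print(crop_luma_region(frame, 4, 1, 1, 2, 2))  # [[5, 6], [9, 10]]
```

Only this small region needs to be sent to the server, since the watermark timing information lives entirely inside the logo area.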
Step 205: For each piece of frame data, extract the watermark timing information from the frame data.
After receiving the frame data, for each piece of frame data the server can extract, through an inverse transform, the watermark timing information of the image frame corresponding to the frame data, for example extracting the watermark timing information from the YUV data of the designated area.
Step 207: Parse the extracted watermark timing information to determine the display time of the image frame corresponding to the frame data.
Specifically, the watermark timing information is embedded as a watermark into the macroblocks of the designated area. By applying an inverse transform to the macroblocks containing the watermark timing information, that is, by traversing the macroblocks of the designated area and determining their parity, the display time of each image frame can be determined.
The continuity of the image frames can then be detected based on the display time of each image frame to generate the detection result of the video. In this embodiment, the detection result of the video includes a stutter detection result and a frame loss detection result, as follows.
Generating the stutter detection result of the video may specifically include steps 209, 211, and 213; generating the frame loss detection result of the video may specifically include steps 215, 217, and 219.
I. Specific steps for generating the stutter detection result of the video
Step 209: Calculate the difference between the display time of each image frame and pre-generated synchronization time information to generate a time difference corresponding to each image frame.
Based on the server's local NTP time, such as the current system time (System.currentTime), the actual time at which each image frame is displayed during playback can be determined. During video playback, the time difference between the actual playback time of each image frame and the start time of the video playback can be used to generate the synchronization time information corresponding to each image frame.
As a specific example of the embodiment of the present invention, assume that System.currentTime is 3500 milliseconds and the start time of the video playback is 3500 milliseconds. If the display time of the first image frame is 16 milliseconds and its actual playback time is 3517 milliseconds, the synchronization time information corresponding to the first image frame is 17 milliseconds; if the display time of the second image frame is 48 milliseconds and its actual playback time is 3555 milliseconds, the synchronization time information corresponding to the second image frame is 55 milliseconds; if the display time of the third image frame is 80 milliseconds and its actual playback time is 3689 milliseconds, the synchronization time information corresponding to the third image frame is 189 milliseconds.
By calculating the difference between the display time of each image frame and its corresponding synchronization time information, the time difference (timegaps) corresponding to each image frame can be generated. For example, for the first image frame, calculating the difference between its display time of 16 milliseconds and its corresponding synchronization time information of 17 milliseconds generates a timegaps of 1 millisecond for the first image frame. Similarly, a timegaps of 7 milliseconds can be generated for the second image frame, and a timegaps of 109 milliseconds for the third image frame.
Step 211: Determine whether the time difference corresponding to each image frame is within the play time range.
For each level of definition, online video playback has an acceptable delay in a good network transmission environment. Specifically, if an image frame of the video can be played within a preset play time range, the image frame is considered to play smoothly. Suppose that whenever the playback of an image frame of the video is delayed by no more than 40 milliseconds, the image frame can be considered to play smoothly, that is, the image frame does not stutter; when the playback of an image frame of the video is delayed by more than 40 milliseconds, the image is considered to stutter. The server can determine whether the timegaps corresponding to each image frame of the video is within the play time range (A, B), where A can be 0 milliseconds and B can be 40 milliseconds. If the timegaps corresponding to an image frame is greater than A and less than B, the video frame can be considered to play smoothly; otherwise, the video frame is considered to stutter. In the above example, the timegaps of 1 millisecond corresponding to the first image frame is within the play time range (0 milliseconds, 40 milliseconds), so the first image frame of the video can be considered to play smoothly; the timegaps of 109 milliseconds corresponding to the third image frame is not within the play time range (0 milliseconds, 40 milliseconds), so the third image frame of the video can be considered to stutter, and then step 213 is performed.
Step 213: When the time difference corresponding to an image frame is not within the play time range, take the display time point of the image frame as a stutter time point and generate the stutter detection result of the video.
When the timegaps corresponding to an image frame is not within the play time range, the display time point of the image frame is taken as a stutter time point to generate the stutter detection result of the video. In the above example, the timegaps of 109 milliseconds corresponding to the third image frame is not within the play time range, so the display time of the third image frame, 80 milliseconds, is taken as a stutter time point to generate the stutter detection result of the video.
By determining whether the time differences corresponding to all image frames of the video are within the play time range, all stutter time points of the video can be recorded and the stutter detection result of the video generated, so that the probability of stutter occurring during video playback and the stutter time points can be counted accurately, improving the accuracy of detection.
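The stutter check of steps 209 to 213 can be sketched as follows, reusing the example numbers above (display times 16/48/80 ms, actual playback times 3517/3555/3689 ms, start time 3500 ms); the function name, the absolute-value comparison, and the open-interval check are assumptions consistent with the text:

```python
def detect_stutter(display_ms, actual_ms, start_ms, play_range=(0, 40)):
    """Compare each frame's watermark display time with its synchronization
    time (actual playback time minus the video's start time); a frame whose
    timegaps falls outside the play time range is recorded as a stutter point."""
    low, high = play_range
    stutter_points = []
    for shown, actual in zip(display_ms, actual_ms):
        sync = actual - start_ms          # synchronization time information
        timegaps = abs(shown - sync)      # time difference for this frame
        if not (low < timegaps < high):   # outside the play time range (A, B)
            stutter_points.append(shown)  # record the display time point
    return stutter_points

print(detect_stutter([16, 48, 80], [3517, 3555, 3689], 3500))  # [80]
```

As in the worked example, only the third frame (timegaps of 109 ms) falls outside the (0, 40) ms range, so its display time of 80 ms is reported as a stutter time point.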
II. Specific steps for generating the frame loss detection result of the video
Step 215: Perform statistics on the display time of each image frame to determine the inter-frame time difference corresponding to the video and the display time difference corresponding to each pair of adjacent image frames.
As a specific example of the embodiment of the present invention, assume that the display times of the image frames of a certain video are as shown in Table 1, where "Frame 1" denotes the first image frame and "0x" denotes hexadecimal notation, so that, for example, 0x00000010 denotes 16.
Frame number of image frame  Display time
Frame 1  0x00000010ms
Frame 2  0x00000030ms
Frame 3  0x00000050ms
Frame 4  0x00000090ms
Frame 5  0x000000b0ms
Frame 6  0x000000f0ms
......  ......
Table 1
By performing statistics on the display times of all image frames of the video, it can be determined that the inter-frame time difference of the video is 32 milliseconds, together with the display time difference corresponding to each pair of adjacent image frames. For example, the display time difference corresponding to the first and second image frames is 32 milliseconds, that is, the difference obtained by subtracting the display time 0x00000010ms of the first image frame from the display time 0x00000030ms of the second image frame. Similarly, the display time differences corresponding to the other pairs of adjacent image frames can be calculated: the display time difference corresponding to the second and third image frames is 32 milliseconds, that corresponding to the third and fourth image frames is 64 milliseconds, that corresponding to the fourth and fifth image frames is 32 milliseconds, that corresponding to the fifth and sixth image frames is 64 milliseconds, and so on.
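These statistics can be checked directly against Table 1; taking the most common adjacent gap as the video's inter-frame time difference is an assumption (the text only says the display times are counted):

```python
from collections import Counter

display_times = [0x10, 0x30, 0x50, 0x90, 0xb0, 0xf0]  # Table 1, in ms
adjacent_diffs = [b - a for a, b in zip(display_times, display_times[1:])]
print(adjacent_diffs)  # [32, 32, 64, 32, 64]

# The video's inter-frame time difference: the most common adjacent gap.
inter_frame = Counter(adjacent_diffs).most_common(1)[0][0]
print(inter_frame)  # 32
```

The two 64 ms gaps are exactly the pairs that step 217 below flags as lost frames.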
Step 217: Determine whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference.
For each pair of adjacent image frames, determine whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference corresponding to the video. If the display time difference corresponding to two adjacent image frames is not greater than the inter-frame time difference corresponding to the video, it is considered that no data is lost between the two image frames, that is, no frame is lost. If the display time difference corresponding to two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it is considered that data is lost between the two image frames, that is, frames are lost. For example, the display time difference of 64 milliseconds corresponding to the third and fourth image frames is greater than the inter-frame time difference of 32 milliseconds corresponding to the video, and then step 219 is performed.
Step 219: When the display time difference is greater than the inter-frame time difference, determine that data is lost between the two image frames and generate the frame loss detection result of the video.
When the display time difference between two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it can be determined that data is lost between the two image frames, for example that frames are lost between the third and fourth image frames, and the frame loss detection result of the video is generated. The frame loss detection result of the video may record between which image frames frames were lost, such as recording that frames were lost between the third and fourth image frames and between the fifth and sixth image frames; or it may record between which display times frames were lost, such as frames lost between the display times 0x00000050ms and 0x00000090ms and between the display times 0x000000b0ms and 0x000000f0ms, which is not limited by the embodiment of the present invention.
In the embodiment of the present invention, by determining whether the display time difference between two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it can be determined whether frames are lost between the two adjacent image frames; the frame loss situation of the video is recorded and the frame loss detection result of the video generated, so that the probability of frame loss in the video and the time points of frame loss can be counted accurately, improving the accuracy of detection.
It should be noted that the method embodiments are all described as combinations of a series of actions for simplicity of description, but those skilled in the art should know that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps can be performed in another order or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to FIG. 3A, a structural block diagram of an embodiment of a device for detecting video data according to the present invention is shown, which may specifically include the following modules:
a receiving module 301, configured to receive frame data of each image frame of a played video, the frame data including watermark timing information;
a display time determining module 303, configured to extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data;
a detecting module 305, configured to detect the continuity of the image frames based on the display time of each image frame and generate a detection result of the video.
On the basis of FIG. 3A, optionally, the receiving module 301 may include a connection sub-module 30101 and a data receiving sub-module 30103, with reference to FIG. 3B.
The connection sub-module 30101 is configured to connect to the smart terminal where the player is located in a wireless or wired manner. The data receiving sub-module 30103 is configured to receive the frame data extracted by the smart terminal from a designated area of each image frame, where each image frame is an image frame played in the video that has been played by the player.
In a preferred embodiment of the present invention, the display time determining module 303 may include the following sub-modules:
an extraction sub-module 30301, configured to extract, for each piece of frame data, the watermark timing information from the frame data;
a parsing sub-module 30303, configured to parse the extracted watermark timing information and determine the display time of the image frame corresponding to the frame data.
In a preferred embodiment of the present invention, the detecting module 305 may include the following sub-modules:
a time difference generation sub-module 30501, configured to calculate the difference between the display time of each image frame and pre-generated synchronization time information and generate a time difference corresponding to each image frame;
a stutter judgment sub-module 30503, configured to determine whether the time difference corresponding to each image frame is within the play time range;
a stutter result generation sub-module 30505, configured to, when the time difference corresponding to an image frame is not within the play time range, take the display time point of the image frame as a stutter time point and generate the stutter detection result of the video.
Optionally, the detecting module 305 may further include the following sub-modules:
a statistics sub-module 30507, configured to perform statistics on the display time of each image frame and determine the inter-frame time difference corresponding to the video and the display time difference corresponding to each pair of adjacent image frames;
a frame loss judgment sub-module 30509, configured to determine whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference;
a frame loss result generation sub-module 30511, configured to determine, when the display time difference is greater than the inter-frame time difference, that data is lost between the two image frames, and generate the frame loss detection result of the video.
After receiving the frame data of each image frame of a played video, the embodiments of the present invention may determine the display time of the image frame corresponding to each piece of frame data by extracting the watermark timing information contained in the frame data, detect the continuity of the image frames based on the display time of each image frame, and generate the detection result of the video. In this way, the probability of stutter occurring and the stutter time points can be counted accurately, improving the accuracy of detection; at the same time, the slow detection progress caused by manually detecting video playback stutter is avoided, which improves detection efficiency while reducing the workload of test technicians.
As for the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for relevant parts reference may be made to the description of the method embodiment.
Each embodiment in this specification is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and for the same or similar parts of the embodiments reference may be made to one another.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
For example, FIG. 4 shows a server in which the present invention can be implemented. The server conventionally includes a processor 410 and a computer program product or computer-readable medium in the form of a memory 420. The memory 420 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, a hard disk, or ROM. The memory 420 has a memory space 430 for program code 431 for performing any of the method steps described above. For example, the memory space 430 for program code may include individual program codes 431 for implementing the various steps of the above methods. These program codes can be read from or written to one or more computer program products. These computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. Such computer program products are typically portable or fixed storage units as described with reference to FIG. 5. The storage unit may have storage segments, storage space, and the like arranged similarly to the memory 420 in the server of FIG. 4. The program code may, for example, be compressed in a suitable form. Typically, the storage unit includes computer-readable code 431', that is, code that can be read by a processor such as the processor 410, which, when run by a server, causes the server to perform the various steps of the methods described above.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or another programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device, the instruction device implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that in this document relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or terminal device that includes the element.
The method for detecting video data and the device for detecting video data provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are only intended to help understand the method of the present invention and its core idea. At the same time, for those of ordinary skill in the art, there will be changes in the specific implementations and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (13)

  1. 一种视频数据的检测方法,其特征在于,包括:
    接收已播放视频的各图像帧的帧数据,所述帧数据包括水印时序信息;
    分别从每个帧数据中提取水印时序信息,确定每个帧数据对应图像帧的显示时间;
    基于各图像帧的显示时间检测图像帧的连续性,生成所述视频的检测结果。
  2. 根据权利要求1所述的方法,其特征在于,所述接收已播放视频的各图像帧的帧数据,包括:
    通过无线或者有线方式连接播放器所在的智能终端;
    接收所述智能终端从各图像帧的指定区域提取的帧数据,其中,所述各图像帧为所述播放器已播放视频中播放的图像帧。
  3. 根据权利要求1所述的方法,其特征在于,所述分别从每个帧数据中提取水印时序信息,确定每个帧数据对应图像帧的显示时间,包括:
    针对每个帧数据,从所述帧数据中提取水印时序信息;
    对所提取的水印时序信息进行解析,确定所述帧数据对应图像帧的显示时间。
  4. 根据权利要求1至3任一所述的方法,其特征在于,所述基于各图像帧的显示时间检测图像帧的连续性,生成所述视频的检测结果,包括:
    计算各图像帧的显示时间与预先生成的同步时间信息的差值,生成各图像帧对应的时间差值;
    分别判断每个图像帧对应的时间差值是否在播放时间范围内;
    当图像帧对应的时间差值不在所述播放时间范围内,将所述图像帧的显示时间点作为卡顿时间点,生成所述视频的卡顿检测结果。
  5. 根据权利要求4所述的方法,其特征在于,所述基于各图像帧的显示 时间检测图像帧的连续性,生成所述视频的检测结果,还包括:
    对各图像帧的显示时间进行统计,确定所述视频对应的帧间时间差,以及各相邻的两个图像帧所对应的显示时间差;
    判断所述两个图像帧所对应的显示时间差是否大于所述帧间时间差;
    当所述显示时间差大于所述帧间时间差时,确定在所述两个图像帧之间丢失数据,生成所述视频的丢帧检测结果。
  6. A device for detecting video data, comprising:
    a receiving module, configured to receive frame data of each image frame of a played video, the frame data including watermark timing information;
    a display time determining module, configured to extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data;
    a detecting module, configured to detect the continuity of the image frames based on the display time of each image frame and generate a detection result of the video.
  7. The device according to claim 6, wherein the receiving module comprises:
    a connection sub-module, configured to connect to the smart terminal where the player is located in a wireless or wired manner;
    a data receiving sub-module, configured to receive the frame data extracted by the smart terminal from a designated area of each image frame, wherein each image frame is an image frame played in the video that has been played by the player.
  8. The device according to claim 6, wherein the display time determining module comprises:
    an extraction sub-module, configured to extract, for each piece of frame data, the watermark timing information from the frame data;
    a parsing sub-module, configured to parse the extracted watermark timing information and determine the display time of the image frame corresponding to the frame data.
  9. The device according to any one of claims 6 to 8, wherein the detecting module comprises:
    a time difference generation sub-module, configured to calculate the difference between the display time of each image frame and pre-generated synchronization time information and generate a time difference corresponding to each image frame;
    a stutter judgment sub-module, configured to determine whether the time difference corresponding to each image frame is within the play time range;
    a stutter result generation sub-module, configured to, when the time difference corresponding to an image frame is not within the play time range, take the display time point of the image frame as a stutter time point and generate a stutter detection result of the video.
  10. The device according to claim 9, wherein the detecting module further comprises:
    a statistics sub-module, configured to perform statistics on the display time of each image frame and determine the inter-frame time difference corresponding to the video and the display time difference corresponding to each pair of adjacent image frames;
    a frame loss judgment sub-module, configured to determine whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference;
    a frame loss result generation sub-module, configured to determine, when the display time difference is greater than the inter-frame time difference, that data is lost between the two image frames, and generate a frame loss detection result of the video.
  11. A computer program, comprising computer-readable code which, when run on a server, causes the server to perform the method for detecting video data according to any one of claims 1 to 5.
  12. A computer-readable medium in which the computer program according to claim 11 is stored.
  13. A server, comprising:
    one or more processors;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to:
    receive frame data of each image frame of a played video, the frame data including watermark timing information;
    extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data;
    detect the continuity of the image frames based on the display time of each image frame, and generate a detection result of the video.
PCT/CN2016/089357 2015-12-04 2016-07-08 Method and device for detecting video data WO2017092343A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510889781.9A CN105979332A (zh) 2015-12-04 2015-12-04 Method and device for detecting video data
CN201510889781.9 2015-12-04

Publications (1)

Publication Number Publication Date
WO2017092343A1 true WO2017092343A1 (zh) 2017-06-08

Family

ID=56988250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089357 WO2017092343A1 (zh) 2015-12-04 2016-07-08 一种视频数据的检测方法和装置

Country Status (2)

Country Link
CN (1) CN105979332A (zh)
WO (1) WO2017092343A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426603A (zh) * 2017-08-21 2019-03-05 北京京东尚科信息技术有限公司 一种分析应用程序卡顿的方法和装置
CN110457177A (zh) * 2019-07-24 2019-11-15 Oppo广东移动通信有限公司 开机异常检测方法及装置、电子设备、存储介质
CN111973994A (zh) * 2020-09-08 2020-11-24 网易(杭州)网络有限公司 游戏配置的调整方法、装置、设备及存储介质
CN112711519A (zh) * 2019-10-25 2021-04-27 腾讯科技(深圳)有限公司 画面流畅度检测方法、装置、存储介质和计算机设备
CN113541832A (zh) * 2021-06-24 2021-10-22 青岛海信移动通信技术股份有限公司 一种终端、网络传输质量检测方法及存储介质
CN114845164A (zh) * 2021-02-02 2022-08-02 中国移动通信有限公司研究院 一种数据处理方法、装置及设备
CN114928769A (zh) * 2022-04-21 2022-08-19 瑞芯微电子股份有限公司 用于显示帧数据的方法和电子设备
CN115022675A (zh) * 2022-07-01 2022-09-06 天翼数字生活科技有限公司 一种视频播放检测的方法和系统
CN116760973A (zh) * 2023-08-18 2023-09-15 天津华来科技股份有限公司 基于二维码时钟的智能摄像机长连接性能测试方法及系统

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106775403B (zh) * 2016-12-14 2020-03-03 北京小米移动软件有限公司 获取卡顿信息的方法及装置
CN106604135A (zh) * 2016-12-21 2017-04-26 深圳市泰普森科技有限公司 基于机顶盒的视频播放方法及机顶盒
CN108270635B (zh) * 2016-12-30 2020-08-07 亿度慧达教育科技(北京)有限公司 网络卡顿判断方法、装置及在线课程直播系统
CN106851384A (zh) * 2017-01-18 2017-06-13 环球智达科技(北京)有限公司 一种android系统播放器检测缓冲的方案
CN106878703B (zh) * 2017-03-14 2019-01-04 珠海全志科技股份有限公司 一种行车记录仪录像检测方法
CN106973321B (zh) * 2017-03-31 2019-07-30 广州酷狗计算机科技有限公司 确定视频卡顿的方法及装置
US10306270B2 (en) * 2017-06-26 2019-05-28 Netflix, Inc. Techniques for detecting media playback errors
CN107451066A (zh) * 2017-08-22 2017-12-08 网易(杭州)网络有限公司 卡顿处理方法和装置、存储介质、终端
CN109698961B (zh) * 2017-10-24 2021-06-22 阿里巴巴集团控股有限公司 一种监控方法、装置及电子设备
CN108495120A (zh) * 2018-01-31 2018-09-04 华为技术有限公司 一种视频帧检测、处理方法、装置及系统
CN108449626A (zh) * 2018-03-16 2018-08-24 北京视觉世界科技有限公司 视频处理、视频的识别方法、装置、设备和介质
CN110602481B (zh) * 2018-06-12 2021-11-16 浙江宇视科技有限公司 一种视频监控系统中视频质量检测方法及装置
CN108924575B (zh) * 2018-07-09 2021-02-02 武汉斗鱼网络科技有限公司 一种视频解码分析方法、装置、设备及介质
CN109120995B (zh) * 2018-07-09 2021-01-01 武汉斗鱼网络科技有限公司 一种视频缓存分析方法、装置、设备及介质
CN110704268B (zh) * 2018-07-10 2023-10-27 浙江宇视科技有限公司 一种视频图像自动化测试方法及装置
CN109144858B (zh) * 2018-08-02 2022-02-25 腾讯科技(北京)有限公司 流畅度检测方法、装置、计算设备及存储介质
CN109412901B (zh) * 2018-12-07 2022-09-27 成都博宇利华科技有限公司 基于时域处理的采集数据连续性检测方法及检测系统
CN111314640B (zh) * 2020-02-23 2022-06-07 苏州浪潮智能科技有限公司 一种视频压缩方法、设备以及介质
CN112073713B (zh) * 2020-09-07 2023-04-25 三六零科技集团有限公司 视频漏录测试方法、装置、设备及存储介质
CN112073714A (zh) * 2020-09-09 2020-12-11 福建新大陆软件工程有限公司 视频播放质量自动检测方法、装置、设备及可读存储介质
CN114512077B (zh) * 2020-10-23 2023-12-05 西安诺瓦星云科技股份有限公司 接收卡输出的驱动时序检测方法、装置及系统
CN113034430B (zh) * 2020-12-02 2023-06-20 武汉大千信息技术有限公司 基于时间水印变化分析的视频真伪检验鉴定方法和系统
CN115408071A (zh) * 2021-05-26 2022-11-29 华为技术有限公司 一种动效计算方法及装置
CN114240830A (zh) * 2021-11-05 2022-03-25 珠海全志科技股份有限公司 基于ffmpeg和图像识别的行车记录仪录像检测方法及装置
CN114915846B (zh) * 2022-05-10 2024-06-21 中移(杭州)信息技术有限公司 数据处理方法、装置、设备及计算机可读存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020174425A1 (en) * 2000-10-26 2002-11-21 Markel Steven O. Collection of affinity data from television, video, or similar transmissions
JP2006270634A (ja) * 2005-03-24 2006-10-05 Victor Co Of Japan Ltd デジタル放送同期再生装置、ストリーム同期再生装置及びストリーム同期再生システム
CN101322410A (zh) * 2005-12-02 2008-12-10 皇家飞利浦电子股份有限公司 检测视频数据错误的方法及装置
CN101888513A (zh) * 2010-06-29 2010-11-17 深圳市融创天下科技发展有限公司 一种视频帧率转换方法
CN103283251A (zh) * 2010-12-26 2013-09-04 Lg电子株式会社 广播服务发送方法、广播服务接收方法和广播服务接收设备
CN104519372A (zh) * 2014-12-19 2015-04-15 深圳市九洲电器有限公司 一种流媒体播放的切换方法和系统
CN104918133A (zh) * 2014-03-12 2015-09-16 北京视联动力国际信息技术有限公司 一种视联网中视频流的播放方法和装置

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426603A (zh) * 2017-08-21 2019-03-05 北京京东尚科信息技术有限公司 一种分析应用程序卡顿的方法和装置
CN110457177A (zh) * 2019-07-24 2019-11-15 Oppo广东移动通信有限公司 开机异常检测方法及装置、电子设备、存储介质
CN112711519A (zh) * 2019-10-25 2021-04-27 腾讯科技(深圳)有限公司 画面流畅度检测方法、装置、存储介质和计算机设备
CN111973994A (zh) * 2020-09-08 2020-11-24 网易(杭州)网络有限公司 游戏配置的调整方法、装置、设备及存储介质
CN114845164A (zh) * 2021-02-02 2022-08-02 中国移动通信有限公司研究院 一种数据处理方法、装置及设备
CN113541832B (zh) * 2021-06-24 2023-11-03 青岛海信移动通信技术有限公司 一种终端、网络传输质量检测方法及存储介质
CN113541832A (zh) * 2021-06-24 2021-10-22 青岛海信移动通信技术股份有限公司 一种终端、网络传输质量检测方法及存储介质
CN114928769A (zh) * 2022-04-21 2022-08-19 瑞芯微电子股份有限公司 用于显示帧数据的方法和电子设备
CN114928769B (zh) * 2022-04-21 2024-05-14 瑞芯微电子股份有限公司 用于显示帧数据的方法和电子设备
CN115022675A (zh) * 2022-07-01 2022-09-06 天翼数字生活科技有限公司 一种视频播放检测的方法和系统
CN115022675B (zh) * 2022-07-01 2023-12-15 天翼数字生活科技有限公司 一种视频播放检测的方法和系统
CN116760973A (zh) * 2023-08-18 2023-09-15 天津华来科技股份有限公司 基于二维码时钟的智能摄像机长连接性能测试方法及系统
CN116760973B (zh) * 2023-08-18 2023-10-24 天津华来科技股份有限公司 基于二维码时钟的智能摄像机长连接性能测试方法及系统

Also Published As

Publication number Publication date
CN105979332A (zh) 2016-09-28

Similar Documents

Publication Publication Date Title
WO2017092343A1 (zh) Method and device for detecting video data
US10425679B2 (en) Method and device for displaying information on video image
US20210350828A1 (en) Reference and Non-Reference Video Quality Evaluation
WO2017107649A1 (zh) 一种视频传输方法和装置
US20170164026A1 (en) Method and device for detecting video data
CN109842795B (zh) 音视频同步性能测试方法、装置、电子设备、存储介质
CN108989883B (zh) 一种直播广告方法、装置、设备及介质
US11763431B2 (en) Scene-based image processing method, apparatus, smart terminal and storage medium
WO2017067489A1 (zh) 机顶盒音视频同步的方法及装置、存储介质
WO2021244224A1 (zh) 卡顿检测方法、装置、设备及可读存储介质
JP4267649B2 (ja) ビデオ番組の処理方法、関連装置及び関連媒体
CN104967903A (zh) 一种视频播放的检测方法及装置
CN110475156B (zh) 一种视频延迟值的计算方法及装置
US9516303B2 (en) Timestamp in performance benchmark
US10237593B2 (en) Monitoring quality of experience (QoE) at audio/video (AV) endpoints using a no-reference (NR) method
CN106331820A (zh) 音视频的同步处理方法和装置
CN108696713B (zh) 码流的安全测试方法、装置及测试设备
CN111641758B (zh) 一种视音频录制方法及装置、计算机可读存储介质
CN110300326B (zh) 一种视频卡顿的检测方法、装置、电子设备及存储介质
CN113839829A (zh) 云游戏延时测试方法、装置、系统及电子设备
CN115878379A (zh) 一种数据备份方法、主服务器、备份服务器及存储介质
CN114339284A (zh) 直播延迟的监控方法、设备、存储介质及程序产品
US20200286120A1 (en) Advertising monitoring method, system, apparatus, and electronic equipment
CN110381308B (zh) 一种测试直播视频处理效果的系统
TWI735297B (zh) 具有初始化片段之視訊及音訊之寫碼

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16869656

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16869656

Country of ref document: EP

Kind code of ref document: A1