WO2017092343A1 - Video data detection method and device - Google Patents

Video data detection method and device

Info

Publication number
WO2017092343A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
image frame
video
data
display time
Prior art date
Application number
PCT/CN2016/089357
Other languages
French (fr)
Chinese (zh)
Inventor
李云龙 (Li Yunlong)
Original Assignee
乐视控股(北京)有限公司 (Le Holdings (Beijing) Co., Ltd.)
乐视致新电子科技(天津)有限公司 (Leshi Zhixin Electronic Technology (Tianjin) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Le Holdings (Beijing) Co., Ltd. (乐视控股(北京)有限公司) and Leshi Zhixin Electronic Technology (Tianjin) Co., Ltd. (乐视致新电子科技(天津)有限公司)
Publication of WO2017092343A1 publication Critical patent/WO2017092343A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/467 Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204 Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4305 Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/43615 Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8358 Generation of protective data, e.g. certificates involving watermark
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Definitions

  • the present invention relates to the field of video technologies, and in particular, to a method for detecting video data and a device for detecting video data.
  • in the prior art, the test engineer determines video stutter and video data loss by watching the video; or, through user feedback, discovers that the online video stutters and that video data has been lost.
  • the technical problem to be solved by the embodiments of the present invention is to provide a method for detecting video data, which solves the problem that detection progress is limited by manual detection, improving detection efficiency while also improving detection accuracy.
  • an embodiment of the present invention further provides a video data detecting apparatus for ensuring implementation and application of the foregoing method.
  • an embodiment of the present invention discloses a method for detecting video data, including:
  • the continuity of the image frame is detected based on the display time of each image frame, and the detection result of the video is generated.
  • an embodiment of the present invention further discloses a video data detecting apparatus, including:
  • a receiving module configured to receive frame data of each image frame of the played video, where the frame data includes watermark timing information
  • a display time determining module configured to respectively extract watermark timing information from each frame data, and determine a display time of the image frame corresponding to each frame data
  • a detecting module configured to detect continuity of the image frame based on a display time of each image frame, and generate a detection result of the video.
  • Embodiments of the present invention provide a computer program comprising computer readable code that, when executed on a server, causes the server to perform the above-described method of detecting video data.
  • Embodiments of the present invention provide a computer readable medium in which the above computer program is stored.
  • the embodiment of the invention provides a server, including:
  • one or more processors;
  • a memory for storing processor-executable instructions;
  • wherein the processor is configured to:
  • the continuity of the image frame is detected based on the display time of each image frame, and the detection result of the video is generated.
  • the embodiments of the invention include the following advantages:
  • the embodiment of the present invention may determine the display time of the image frame corresponding to each frame data by extracting the watermark timing information included in the frame data, detect the continuity of the image frames based on the display time of each image frame, and generate the detection result of the video; that is, the probability of stutter and the stuck time points can be accurately counted, improving the accuracy of detection. At the same time, the slow detection progress caused by manually detecting the video is avoided, improving detection efficiency while reducing the workload of test technicians.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a method for detecting video data according to the present invention
  • FIG. 2 is a flow chart showing the steps of a preferred embodiment of a method for detecting video data according to the present invention
  • FIG. 3A is a structural block diagram of an embodiment of a device for detecting video data according to the present invention.
  • FIG. 3B is a structural block diagram of a preferred embodiment of a video data detecting apparatus according to the present invention.
  • FIG. 4 schematically shows a block diagram of a server for carrying out the method according to the invention.
  • FIG. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
  • manual detection has the following disadvantages: 1. the human resources of the test engineer are wasted; 2. automatic detection cannot be performed, that is, idle time (such as evenings) cannot be effectively utilized, resulting in limited detection progress and low detection efficiency; 3. video stutter and the stuck time points cannot be accurately counted, that is, the detection accuracy is low.
  • One of the core concepts of the embodiments of the present invention is that the watermark timing information is extracted from the frame data of each image frame, the display time of the image frame corresponding to each frame data is determined according to the watermark timing information, and the continuity of the video image frames is detected based on the display time of each image frame to generate the detection result of the video; that is, the probability of stutter and the stuck time points can be accurately counted, improving both the accuracy and the efficiency of detection.
  • FIG. 1 a flow chart of steps of a method for detecting video data according to the present invention is shown, which may specifically include the following steps:
  • Step 101 Receive frame data of each image frame of the played video.
  • the frame data includes watermark timing information.
  • watermark timing information can be added to each image frame of the video during encoding.
  • a watermark may be added to each image frame of the video, where the content of the watermark includes the display timing information of each image frame, such as a frame number or a timestamp of the image frame, so that the frame data of each image frame includes watermark timing information.
  • a watermark may be added in a specified area of each image frame, that is, the watermark is embedded in a relatively unchanged area of the video. For example, the position of the video mark (logo) in each image frame is substantially unchanged, so fragile transparent watermark timing information may be added to the logo of the video.
  • the frame number or time stamp of each image frame of the video source is converted into a 32-bit binary number by quantization, such as a quantized display time stamp (PTS).
  • the quantized PTS is embedded as a fragile transparent watermark into the P macroblock of the logo. Since the watermark is fragile and transparent, the watermark is invisible to the human eye when displayed, that is, it does not affect the display of the image frame, and the integrity of the image frame display of the video is ensured.
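The quantization-and-parity embedding described above can be sketched as follows. Modeling each logo-area macroblock as a single integer that can be nudged by 1 is an illustrative assumption (a real encoder would adjust a coefficient inside the P macroblock), and the function names are not from the patent:

```python
def quantize_pts(pts_ms: int) -> list[int]:
    """Quantize a display time stamp (PTS, in ms) into a 32-bit binary number, MSB first."""
    return [(pts_ms >> (31 - i)) & 1 for i in range(32)]

def embed_parity(macroblock_values: list[int], pts_ms: int) -> list[int]:
    """Embed the quantized PTS into 32 logo-area macroblocks by forcing parity:
    a macroblock carrying bit 1 is made odd, one carrying bit 0 is made even."""
    bits = quantize_pts(pts_ms)
    embedded = []
    for value, bit in zip(macroblock_values, bits):
        if value % 2 != bit:
            value += 1  # minimal perturbation keeps the watermark fragile and transparent
        embedded.append(value)
    return embedded
```

After embedding, the parity pattern of the 32 macroblocks carries the 32-bit PTS while the pixel values change by at most 1, which is why the watermark stays invisible to the human eye.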
  • an intelligent terminal (such as a smartphone) plays the video and can capture the frame data of each image frame after decoding, then forward it to the stutter/frame-loss analysis server (referred to as the server).
  • the smart terminal can obtain the frame data sent to the display screen by calling a screen-capture interface, such as the Surface Flinger interface provided by the Android system; alternatively, the frame data of each image frame can be acquired directly through the liquid crystal display (LCD) driver from a designated area (such as the logo area) of the display screen.
  • the frame data may be data generated in accordance with the YUV format, that is, YUV data.
  • Y indicates brightness (Luminance or Luma)
  • U indicates chromaticity (Chrominance or Chroma).
  • After receiving the frame data of each image frame of the video, the server can automatically detect the frame data of each image frame and generate the detection result of the video; the specific detection process is described later.
  • the frame data between the intelligent terminal and the server can be transmitted through the network, such as via TCP_SOCKET, or through a universal serial bus (USB) serial connection, such as the Android Debug Bridge (ADB); the embodiment of the present invention does not limit this.
  • the foregoing step 101 may include the following substeps:
  • Sub-step 10101 Connect to the smart terminal where the player is located in a wireless or wired manner.
  • Sub-step 10103 Receive frame data extracted by the smart terminal from a designated area of each image frame, where each image frame is an image frame played in the video played by the player.
  • Step 103 Extract watermark timing information from each frame data respectively, and determine a display time of the image frame corresponding to each frame data.
  • the server may extract the watermark timing information from each frame data by inverse transform, and parse the watermark timing information to determine the display time of the image frame corresponding to each frame data.
  • watermark timing information is added in the logo area.
  • the server can determine the parity of the macroblocks by traversing the macroblocks of the logo area, thereby restoring the timing information embedded in the watermark during encoding, that is, determining the display time of each image frame. For example, the server can obtain a 32-bit binary number by traversing the parity of 32 macroblocks, mapping odd values to 1 and even values to 0, thereby restoring the PTS embedded in the watermark, that is, determining the display time of each image frame.
  • in the embodiment of the present invention, the display time of the image frame is represented by a hexadecimal number; if the display time of the first image frame of the video is 16 milliseconds (ms), the display time of the first image frame can be determined by calculation to be 0x00000010ms.
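The inverse traversal above can be sketched as follows; as before, representing each macroblock by one integer whose parity carries the watermark bit is an illustrative assumption:

```python
def extract_pts(macroblock_values: list[int]) -> int:
    """Restore the PTS by traversing 32 macroblocks of the logo area:
    map an odd value to bit 1 and an even value to bit 0, MSB first."""
    pts = 0
    for value in macroblock_values:
        pts = (pts << 1) | (value & 1)
    return pts

# A frame whose display time is 16 ms: only the macroblock carrying the
# bit with weight 16 has odd parity.
macroblocks = [100 + ((16 >> (31 - i)) & 1) for i in range(32)]
print(f"0x{extract_pts(macroblocks):08x}ms")  # prints "0x00000010ms"
```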
  • alternatively, the display time of one image frame may be determined by traversing the parity of 64 macroblocks, where each macroblock maps to a value X or Y (each being 1 or 0) according to its parity. The embodiment of the invention does not limit the implementation by which the watermark timing information is restored to determine the display time of each image frame.
  • step 103 may include the following sub-steps:
  • Sub-step 10301 for each frame data, watermark timing information is extracted from the frame data.
  • Sub-step 10303 parsing the extracted watermark timing information to determine a display time of the image frame corresponding to the frame data.
  • Step 105 Detect continuity of the image frame based on display time of each image frame, and generate a detection result of the video.
  • in the embodiment of the present invention, the display time of each extracted image frame should increase uniformly. By comparing the display time of each image frame with a synchronization time and determining whether the display time of each image frame is irregular, the stutter situation during video playback can be determined. Specifically, by comparing the display time of each image frame with the server's local NTP (Network Time Protocol) time, the time difference corresponding to each image frame can be obtained. If the time difference corresponding to an image frame is not within the preset playing time range, the image frame may be considered stuck, and the display time of the image frame is recorded as the stuck time point.
  • detecting the continuity of the image frames based on the display time of each image frame and generating the detection result of the video may include: calculating the difference between the display time of each image frame and pre-generated synchronization time information to generate the time difference corresponding to each image frame; determining whether the time difference corresponding to each image frame is within the playing time range; and, when the time difference corresponding to an image frame is not within the playing time range, using the display time point of that image frame as the stuck time point and generating the stutter detection result of the video.
  • since the display time of each image frame increases uniformly, it can also be determined whether the video has frame loss. Specifically, when the interval between the display times of two adjacent image frames is greater than the inter-frame time difference of the video, it may be determined that an image frame was lost between the two image frames, that is, data was lost between them. By recording the display times of the two image frames, the time point at which the video dropped frames can be determined.
  • detecting the continuity of the image frames based on the display time of each image frame and generating the detection result of the video may further include: performing statistics on the display time of each image frame to determine the inter-frame time difference corresponding to the video and the display time difference corresponding to each pair of adjacent image frames; determining whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference; and, when the display time difference is greater than the inter-frame time difference, determining that data was lost between the two image frames and generating the frame-loss detection result of the video.
  • the stutter/frame-loss analysis server can automatically generate video detection results, such as the stutter detection result and the frame-loss detection result.
  • the detection result may include the time points of video frame loss, the stuck time points, and the like, so that the probability of stutter during video playback and the stuck time points can be accurately counted, improving the accuracy of detection.
  • in the embodiment of the present invention, the server can automatically detect video playback stutter and video data loss by extracting the watermark timing information in the frame data of each image frame and generate the video detection result, thereby reducing the workload of the test engineer, saving human resources, and reducing the cost of testing.
  • the server can automatically detect, that is, the video can be detected by using idle time, which speeds up the detection process and improves the detection efficiency.
  • FIG. 2 a flow chart of steps of a method for detecting video data according to the present invention is shown. Specifically, the method may include the following steps:
  • Step 201 Connect the smart terminal where the player is located by wireless or wired.
  • the server can connect to the smart terminal where the player is located by wireless or wired.
  • the wireless method refers to a communication mode in which information is transmitted using radio-wave signals that can propagate in free space.
  • the wired method refers to a method of transmitting information by using a tangible medium such as a metal wire or an optical fiber.
  • the video is usually played by the player of the terminal, such as playing through a player of the smart phone, or playing through a web player of the smart terminal.
  • the smart terminal where the player is located can connect to the server through a wireless connection, such as a WI-FI connection.
  • the smart terminal can also connect to the server through a wired method such as a universal serial bus.
  • the embodiment of the present invention does not limit the connection manner between the server and the smart terminal.
  • Step 203 Receive frame data extracted by the smart terminal from a designated area of each image frame.
  • Each image frame is an image frame played in a video that has been played by the player. Specifically, the player can realize video playback by continuously displaying image frames of the video.
  • the display of each image frame is equivalent to displaying one picture.
  • the intelligent terminal can obtain the YUV data displayed on the display screen through the Surface Flinger service provided by the system itself; or directly capture the data in the designated area of the display screen through the LCD driver.
  • the smart terminal can extract the frame data from the designated area of each image frame of the video, such as obtaining the YUV data from the area displayed by the logo.
  • the frame data of each image frame contains the display time information of the image frame, that is, the watermark timing information.
  • the intelligent terminal transmits the extracted frame data to the server.
  • the server can receive the frame data sent by the smart terminal through a network interface such as a TCP_SOCKET interface, a USB serial port, or the like.
  • Step 205 Extract watermark timing information from the frame data for each frame data.
  • the server may extract the watermark timing information of the image frame corresponding to the frame data from the frame data by inverse transform, such as extracting the watermark timing information in the YUV data of the designated area. .
  • Step 207 Parse the extracted watermark timing information to determine a display time of the image frame corresponding to the frame data.
  • the watermark timing information is embedded as a watermark in a macroblock of the designated area.
  • the display time of each image frame can be determined by inversely transforming the macroblock in which the watermark timing information is located, that is, by traversing the macroblock of the designated area to determine the parity of the macroblock.
  • the continuity of the image frames is then detected based on the display time of each image frame, and the detection result of the video is generated.
  • the detection result of the video includes: a stutter detection result and a frame-loss detection result, as follows.
  • generating the stutter detection result of the video may specifically include step 209, step 211, and step 213; generating the frame-loss detection result of the video may specifically include step 215, step 217, and step 219.
  • Step 209 Calculate a difference between the display time of each image frame and the synchronization time information generated in advance, and generate a time difference corresponding to each image frame.
  • by using the server's local NTP time, such as the current system time (System.currentTime), the time at which each image frame is actually displayed during playback can be determined.
  • the difference between the actual playback time of each image frame and the start time of the video may then be calculated to generate the synchronization time information corresponding to each image frame.
  • for example, suppose the start time of the video (System.currentTime) is 3500 milliseconds. If the display time of the first image frame is 16 milliseconds and its actual playback time is 3517 milliseconds, the synchronization time information corresponding to the first image frame is 17 milliseconds; if the display time of the second image frame is 48 milliseconds and its actual playback time is 3555 milliseconds, the synchronization time information corresponding to the second image frame is 55 milliseconds; and if the display time of the third image frame is 80 milliseconds and its actual playback time is 3689 milliseconds, the synchronization time information corresponding to the third image frame is 189 milliseconds.
  • the time difference (timegaps) corresponding to each image frame can then be generated. For example, for the first image frame, calculating the difference between its display time (16 milliseconds) and its corresponding synchronization time information (17 milliseconds) yields a time difference of 1 millisecond. Similarly, the time difference corresponding to the second image frame is 7 milliseconds, and the time difference corresponding to the third image frame is 109 milliseconds.
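The synchronization-time and timegap arithmetic of step 209 can be sketched as follows, using the example's numbers (the function names are illustrative):

```python
def sync_times(actual_play_ms: list[int], start_ms: int) -> list[int]:
    """Synchronization time of each frame = actual playback time - video start time."""
    return [t - start_ms for t in actual_play_ms]

def timegaps(display_ms: list[int], sync_ms: list[int]) -> list[int]:
    """Time difference (timegaps) per frame = |synchronization time - display time|."""
    return [abs(s - d) for d, s in zip(display_ms, sync_ms)]

display = [16, 48, 80]           # watermark display times of frames 1-3
actual = [3517, 3555, 3689]      # System.currentTime when each frame actually appeared
sync = sync_times(actual, 3500)  # [17, 55, 189], as in the example
gaps = timegaps(display, sync)   # [1, 7, 109], as in the example
```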
  • Step 211 Determine whether the time difference corresponding to each image frame is within the playing time range.
  • in actual applications, online video may be played with some delay even in a good network transmission environment. If an image frame of the video is played within the preset playing time range, the image frame is considered to play smoothly. Assume that an image frame delayed by no more than 40 milliseconds is considered to play smoothly, that is, the frame is not stuck; when an image frame of the video is delayed by more than 40 milliseconds, the frame is considered stuck.
  • the server can determine whether the time difference (timegaps) corresponding to each image frame of the video is within the play time range (A, B), where A may be 0 milliseconds and B may be 40 milliseconds. If it is, the image frame can be considered smooth; otherwise, the image frame is considered stuck.
  • for example, the time difference of the first image frame is 1 millisecond, which is within the playback time range (0 milliseconds, 40 milliseconds), so the first image frame of the video can be considered smooth; the time difference of the third image frame is 109 milliseconds, which is not within the playback time range (0 milliseconds, 40 milliseconds), so the third image frame of the video may be considered stuck, and step 213 is then performed.
  • Step 213 When the time difference corresponding to an image frame is not within the playing time range, use the display time point of the image frame as the stuck time point, and generate the stutter detection result of the video.
  • for example, the time difference of the third image frame, 109 milliseconds, is not within the playback time range, so its display time of 80 milliseconds is taken as the stuck time point to generate the stutter detection result of the video.
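Steps 211 and 213 reduce to a range check on the timegaps; a minimal sketch, with the (0 ms, 40 ms) bounds taken from the example and illustrative names:

```python
def stuck_points(display_ms: list[int], gaps_ms: list[int],
                 play_range: tuple[int, int] = (0, 40)) -> list[int]:
    """Return the display times (stuck time points) of frames whose timegap
    falls outside the preset playing time range (A, B)."""
    low, high = play_range
    return [d for d, g in zip(display_ms, gaps_ms) if not (low <= g <= high)]

# Frames 1-3 with timegaps 1, 7, 109 ms: only the third frame (80 ms) is stuck.
print(stuck_points([16, 48, 80], [1, 7, 109]))  # prints [80]
```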
  • Step 215 Perform statistics on the display time of each image frame, determine an inter-frame time difference corresponding to the video, and a display time difference corresponding to each adjacent two image frames.
  • suppose the inter-frame time difference of the video is 32 milliseconds. The display time difference corresponding to each pair of adjacent image frames is then computed; for example, the display time difference corresponding to the first and second image frames is 32 milliseconds, that is, the difference between the display time 0x00000030ms of the second image frame and the display time 0x00000010ms of the first image frame.
  • similarly, the display time differences of the other adjacent pairs can be calculated: the display time difference corresponding to the second and third image frames is 32 milliseconds, that of the third and fourth image frames is 64 milliseconds, that of the fourth and fifth image frames is 32 milliseconds, that of the fifth and sixth image frames is 64 milliseconds, and so on.
  • Step 217 Determine whether a display time difference corresponding to the two image frames is greater than the inter-frame time difference.
  • that is, determine whether the display time difference corresponding to two adjacent image frames is greater than the inter-frame time difference corresponding to the video. If it is not greater, it is considered that no data was lost between the two image frames, that is, no frames were dropped. If it is greater, it is considered that data was lost between the two image frames, that is, frames were dropped; for example, the display time difference of 64 milliseconds corresponding to the third and fourth image frames is greater than the video's inter-frame time difference of 32 milliseconds, and step 219 is then performed.
  • Step 219 When the display time difference is greater than the inter-frame time difference, determine that data is lost between the two image frames, and generate a frame loss detection result of the video.
  • the frame-loss detection result of the video can record between which image frames data was lost, for example between the third and fourth image frames and between the fifth and sixth image frames; or it may record between which display times frames were lost, such as between display times 0x00000050ms and 0x00000090ms, and between 0x000000b0ms and 0x000000f0ms, which is not limited by the embodiment of the present invention.
  • in the embodiment of the present invention, the frame-loss detection result of the video is generated so that the probability of frame loss and the time points of frame loss can be accurately counted, improving the accuracy of detection.
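Steps 215 through 219 can be sketched as one pass over adjacent display times; the 32 ms inter-frame difference and the display times follow the example above, while the function name is illustrative:

```python
def dropped_spans(display_ms: list[int], frame_gap_ms: int) -> list[tuple[int, int]]:
    """Return (earlier, later) display-time pairs whose difference exceeds the
    video's inter-frame time difference, i.e. spans where frames were lost."""
    return [(a, b) for a, b in zip(display_ms, display_ms[1:]) if b - a > frame_gap_ms]

# Display times 0x10, 0x30, 0x50, 0x90, 0xb0, 0xf0 ms with a 32 ms frame interval:
spans = dropped_spans([0x10, 0x30, 0x50, 0x90, 0xB0, 0xF0], 32)
print([(f"0x{a:08x}", f"0x{b:08x}") for a, b in spans])
# the 0x00000050-0x00000090 and 0x000000b0-0x000000f0 gaps of the example
```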
  • Referring to FIG. 3A, a structural block diagram of an embodiment of a device for detecting video data according to the present invention is shown; the device may specifically include the following modules:
  • the receiving module 301 is configured to receive frame data of each image frame of the played video, where the frame data includes watermark timing information.
  • the display time determining module 303 is configured to extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data.
  • the detecting module 305 is configured to detect continuity of the image frame based on display time of each image frame, and generate a detection result of the video.
  • Referring to FIG. 3B, the receiving module 301 may include a connection sub-module 30101 and a data receiving sub-module 30103.
  • the connection sub-module 30101 is configured to connect, wirelessly or by wire, to the smart terminal where the player is located.
  • the data receiving sub-module 30103 is configured to receive frame data extracted by the smart terminal from a designated area of each image frame, where each image frame is an image frame played in a video that has been played by the player.
  • the display time determination module 303 can include the following sub-modules:
  • the extraction sub-module 30301 is configured to extract watermark timing information from the frame data for each frame data.
  • the parsing sub-module 30303 is configured to parse the extracted watermark timing information to determine the display time of the image frame corresponding to the frame data.
  • the detection module 305 can include the following sub-modules:
  • the time difference generation sub-module 30501 is configured to calculate a difference between the display time of each image frame and the pre-generated synchronization time information, and generate a time difference corresponding to each image frame.
  • the stutter judgment sub-module 30503 is configured to determine whether the time difference corresponding to each image frame is within the play time range.
  • the stutter result generation sub-module 30505 is configured to generate a stutter detection result of the video when the time difference corresponding to an image frame is not within the play time range, taking the display time of that image frame as a stutter time point.
  • the detecting module 305 may further include the following submodules:
  • the statistics sub-module 30507 is configured to perform statistics on the display time of each image frame, determine an inter-frame time difference corresponding to the video, and a display time difference corresponding to each adjacent two image frames.
  • the frame loss judging sub-module 30509 is configured to determine whether a display time difference corresponding to the two image frames is greater than the inter-frame time difference.
  • the frame loss result generation sub-module 30511 is configured to determine, when the display time difference is greater than the inter-frame time difference, that data is lost between the two image frames, and to generate a frame loss detection result for the video.
  • After receiving the frame data of each image frame of a played video, the embodiments of the present invention may determine the display time of the image frame corresponding to each piece of frame data by extracting the watermark timing information contained in the frame data, detect the continuity of the image frames based on their display times, and generate a detection result for the video. In this way, the probability of stutter and the stutter time points can be accurately recorded, improving detection accuracy; at the same time, the slow detection progress caused by manually detecting video playback stutter is avoided, improving detection efficiency while reducing the workload of test technicians.
  • Since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
  • Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, an apparatus, or a computer program product.
  • Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • Moreover, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • Figure 4 illustrates a server in which the present invention can be implemented.
  • the server conventionally includes a processor 410 and a computer program product or computer readable medium in the form of a memory 420.
  • the memory 420 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • Memory 420 has a memory space 430 for program code 431 for performing any of the method steps described above.
  • storage space 430 for program code may include various program code 431 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such computer program products are typically portable or fixed storage units as described with reference to FIG. 5.
  • The storage unit may have storage segments and storage spaces arranged similarly to the memory 420 in the server of FIG. 4.
  • The program code may, for example, be compressed in a suitable form.
  • Typically, the storage unit includes computer-readable code 431', that is, code that can be read by a processor such as the processor 410; when executed by the server, the code causes the server to perform the steps of the methods described above.
  • Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

Provided are a video data detection method and device. The method comprises: receiving frame data of each image frame of a played video, wherein the frame data comprises watermark timing information; extracting the watermark timing information from each piece of frame data, and determining the display time of the image frame corresponding to each piece of frame data; and detecting the continuity of the image frames based on their display times, and generating a detection result for the video. According to the embodiments of the present invention, by extracting the watermark timing information contained in the frame data, detecting the continuity of the image frames, and generating a detection result for the video, the probability of stutter and the time points at which stutter occurs can be accurately recorded, and detection accuracy is improved. Moreover, the slow detection progress caused by manually detecting stutter during video playback is avoided, and the workload of test technicians is reduced while detection efficiency is improved.

Description

Method and device for detecting video data
This application claims priority to Chinese Patent Application No. 201510889781.9, filed with the Chinese Patent Office on December 4, 2015 and entitled "Method and Device for Detecting Video Data", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of video technologies, and in particular, to a method for detecting video data and a device for detecting video data.
Background
During playback, a video is prone to stutter or to data loss (such as lost video frames), so that it cannot be played smoothly. To ensure smooth playback and a good user experience, stutter and data loss during video playback need to be detected.
In the process of implementing the present invention, the inventor found that, at present, video playback is mainly checked by technicians to determine whether stutter or data loss occurs. Taking online video as an example, during playback a test engineer watches the video to determine whether it stutters or loses data; alternatively, stutter and video data loss are discovered through problems reported by users. However, with manual detection it is difficult to record the time points at which stutter occurs during playback, and the detection progress is limited.
Clearly, manually detecting video playback stutter cannot accurately record the probability of stutter or the stutter time points, and its detection efficiency is low.
Summary
The technical problem to be solved by the embodiments of the present invention is to provide a method for detecting video data, which removes the limit that manual detection places on detection progress and improves both detection efficiency and detection accuracy.
Correspondingly, the embodiments of the present invention further provide a device for detecting video data, to ensure the implementation and application of the above method.
To solve the above problem, an embodiment of the present invention discloses a method for detecting video data, including:
receiving frame data of each image frame of a played video, the frame data including watermark timing information;
extracting the watermark timing information from each piece of frame data, and determining the display time of the image frame corresponding to each piece of frame data; and
detecting the continuity of the image frames based on the display time of each image frame, and generating a detection result for the video.
Correspondingly, an embodiment of the present invention further discloses a device for detecting video data, including:
a receiving module, configured to receive frame data of each image frame of a played video, the frame data including watermark timing information;
a display time determining module, configured to extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data; and
a detecting module, configured to detect the continuity of the image frames based on the display time of each image frame and generate a detection result for the video.
An embodiment of the present invention provides a computer program including computer-readable code that, when run on a server, causes the server to perform the above method for detecting video data.
An embodiment of the present invention provides a computer-readable medium in which the above computer program is stored.
An embodiment of the present invention provides a server, including:
one or more processors; and
a memory for storing processor-executable instructions,
wherein the processor is configured to:
receive frame data of each image frame of a played video, the frame data including watermark timing information;
extract the watermark timing information from each piece of frame data, and determine the display time of the image frame corresponding to each piece of frame data; and
detect the continuity of the image frames based on the display time of each image frame, and generate a detection result for the video.
Compared with the prior art, the embodiments of the present invention have the following advantages:
After receiving the frame data of each image frame of a played video, an embodiment of the present invention may determine the display time of the image frame corresponding to each piece of frame data by extracting the watermark timing information contained in the frame data, detect the continuity of the image frames based on their display times, and generate a detection result for the video. In this way, the probability of stutter and the stutter time points can be accurately recorded, improving detection accuracy; at the same time, the slow detection progress caused by manually detecting video playback stutter is avoided, reducing the workload of test technicians while improving detection efficiency.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of the steps of an embodiment of a method for detecting video data according to the present invention;
FIG. 2 is a flowchart of the steps of a preferred embodiment of a method for detecting video data according to the present invention;
FIG. 3A is a structural block diagram of an embodiment of a device for detecting video data according to the present invention;
FIG. 3B is a structural block diagram of a preferred embodiment of a device for detecting video data according to the present invention;
FIG. 4 schematically shows a block diagram of a server for performing the method according to the present invention; and
FIG. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
At present, detecting video playback stutter and video data loss through manual discovery has the following problems: 1. it wastes the human resources of test engineers; 2. it cannot run automatically, that is, idle time (such as at night) cannot be used effectively, so detection progress is limited and detection efficiency is low; 3. the stutter probability and stutter time points of video playback cannot be recorded accurately, that is, detection accuracy is low.
In view of the above problems, one of the core concepts of the embodiments of the present invention is to extract watermark timing information from the frame data of each image frame, determine the display time of the image frame corresponding to each piece of frame data from that watermark timing information, detect the continuity of the video's image frames based on these display times, and generate a detection result for the video. In this way, the probability of stutter and the stutter time points can be recorded accurately, improving both detection accuracy and detection efficiency.
Referring to FIG. 1, a flowchart of the steps of an embodiment of a method for detecting video data according to the present invention is shown. The method may specifically include the following steps.
Step 101: Receive frame data of each image frame of a played video.
The frame data includes watermark timing information. In practice, watermark timing information can be added to each image frame of the video during encoding. Specifically, during video encoding, a watermark can be added to each image frame by means of watermarking technology; the content of the watermark contains the display timing information of each image frame, such as its frame number or timestamp, so that the frame data of each image frame includes watermark timing information.
Preferably, the watermark may be added in a specified area of each image frame, that is, embedded in a relatively unchanging area of the video. For example, since the position of the video logo (logotype) in each image frame is essentially fixed, fragile, transparent watermark timing information may be added at the logo. Specifically, through quantization, the frame number or timestamp of each image frame of the video source is converted into a 32-bit binary number, such as a quantized Presentation Time Stamp (PTS). During encoding, the quantized PTS (which carries the timing information) is embedded as a fragile, transparent watermark into the P macroblocks of the logo. Since the watermark is fragile and transparent, it cannot be seen by the human eye during display, that is, it does not affect the display of the image frame, preserving the integrity of the displayed video image frames.
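A rough Python sketch of the quantization and parity embedding described above. A real encoder would operate on quantized DCT coefficients inside the codec; the coefficient nudge used here, and the data layout, are only illustrative assumptions.

```python
def quantize_pts_to_bits(pts_ms):
    """Quantize a display timestamp (PTS, in milliseconds) to the 32-bit
    binary string that is embedded into the logo's P macroblocks."""
    return format(pts_ms & 0xFFFFFFFF, "032b")

def embed_bits_as_parity(macroblocks, bits):
    """Make each macroblock's coefficient sum odd for a 1 bit and even for
    a 0 bit by nudging one coefficient (an assumed, minimal scheme)."""
    out = []
    for block, bit in zip(macroblocks, bits):
        block = list(block)
        if sum(block) % 2 != int(bit):
            block[-1] += 1  # smallest change, keeping the watermark subtle
        out.append(block)
    return out

bits = quantize_pts_to_bits(16)  # a frame displayed at 16 ms -> 0x00000010
```

Because only parity is adjusted, the perturbation per macroblock is at most one quantization step, which is one way to keep the mark fragile and visually transparent.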
In practical applications, a smart terminal such as a smartphone, while playing a video, can capture the frame data of each image frame after decoding and forward it to a stutter and frame-loss analysis server (server for short). Specifically, the smart terminal may obtain the frame data sent to the display by calling a screen-capture interface, such as the interface provided by the Surface Flinger service of the Android system, or it may capture the data of a specified area of the display screen (such as the logo area) directly through the Liquid Crystal Display (LCD) driver, thereby acquiring the frame data of each image frame. For example, the frame data may be data generated in the YUV format, that is, YUV data, where "Y" denotes luminance (Luma) and "U" and "V" denote chrominance (Chroma). After receiving the frame data of each image frame of the played video, the server can automatically check it and generate a detection result for the video; the specific detection process is described below.
It should be noted that frame data may be transferred between the smart terminal and the server over the network, for example via TCP_SOCKET, or over a Universal Serial Bus (USB) serial link, for example via the Android Debug Bridge (ADB); the embodiments of the present invention do not limit this.
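The patent leaves the transport open (a TCP socket or a USB serial link such as ADB). As one possible shape, a length-prefixed TCP framing for the captured YUV bytes might look like this; the framing scheme itself is an assumption, not part of the patent.

```python
import socket
import struct

def send_frame(sock, yuv_bytes):
    """Send one captured frame-data buffer with a 4-byte length prefix."""
    sock.sendall(struct.pack("!I", len(yuv_bytes)) + yuv_bytes)

def recv_frame(sock):
    """Receive one length-prefixed frame-data buffer."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock, n):
    """Read exactly n bytes, since recv() may return short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf
```

The explicit length prefix matters because TCP is a byte stream: without it, the server cannot tell where one logo-region buffer ends and the next begins.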
Optionally, the above step 101 may include the following sub-steps.
Sub-step 10101: Connect, wirelessly or by wire, to the smart terminal on which the player runs.
Sub-step 10103: Receive frame data extracted by the smart terminal from a specified area of each image frame.
The image frames are the image frames played in the video that the player has played.
Step 103: Extract the watermark timing information from each piece of frame data, and determine the display time of the image frame corresponding to each piece of frame data.
After receiving the frame data, the server can extract the watermark timing information from each piece of frame data through an inverse transform and parse it to determine the display time of the corresponding image frame. Continuing the example above, where watermark timing information was added in the logo area, the server can traverse the macroblocks of the logo area to determine their parity after receiving the frame data, thereby restoring the timing information embedded in the watermark during encoding, that is, determining the display time of each image frame. For example, by traversing the parity of 32 macroblocks and mapping odd to 1 and even to 0, the server obtains a 32-bit binary number and restores the PTS embedded in the watermark during encoding, that is, the display time of each image frame. Using hexadecimal notation for display times, if the display time of the first image frame of the video is 16 milliseconds (ms), the server can determine by calculation that this display time is 0x00000010 ms.
Of course, to improve robustness, two macroblocks may also be traversed to determine one binary coefficient bit (X, Y), that is, the display time of one image frame is determined by traversing the parity of 64 macroblocks, where X and Y may each be 1 or 0. When X and Y are the same, for example both 1 or both 0, the parity can be determined from the value of X or Y. The embodiments of the present invention do not limit how the watermark timing information is restored to the display time of each image frame.
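A Python sketch of the basic 32-macroblock parity traversal just described, assuming each bit is read from one macroblock's coefficient sum (odd maps to 1, even to 0); the input representation is illustrative.

```python
def decode_pts_from_parity(macroblock_sums):
    """Recover the quantized PTS from 32 macroblocks of the logo region:
    an odd coefficient sum yields bit 1, an even sum yields bit 0."""
    bits = "".join("1" if s % 2 else "0" for s in macroblock_sums)
    return int(bits, 2)  # display time in milliseconds

# Parities spelling out 0x00000010: the first image frame shown at 16 ms.
sums = [2] * 27 + [3] + [2] * 4
```

The redundant two-macroblock variant would read 64 sums and accept a bit only when both copies agree, trading capacity for robustness.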
Optionally, the above step 103 may include the following sub-steps.
Sub-step 10301: For each piece of frame data, extract the watermark timing information from the frame data.
Sub-step 10303: Parse the extracted watermark timing information to determine the display time of the image frame corresponding to the frame data.
Step 105: Detect the continuity of the image frames based on the display time of each image frame, and generate a detection result for the video.
In practice, when the video plays normally and smoothly, the extracted display times of the image frames should increase uniformly. By comparing the display time of each image frame with a synchronization time and checking whether the display times vary irregularly, the stutter behavior of the video playback can be determined. Specifically, comparing the display time of each image frame with the server's local NTP (Network Time Protocol) time yields a time difference for each image frame. When the time difference of an image frame is not within the preset play time range, that image frame can be considered to have stuttered, and its display time is recorded as a stutter time point.
Optionally, detecting the continuity of the image frames based on the display time of each image frame and generating a detection result for the video may include: calculating the difference between the display time of each image frame and pre-generated synchronization time information to produce a time difference for each image frame; determining whether the time difference of each image frame is within the play time range; and, when the time difference of an image frame is not within the play time range, taking the display time of that image frame as a stutter time point and generating a stutter detection result for the video.
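The optional stutter check can be sketched as follows. The pairing of each frame's recovered display time with a server-clock sample, and the 50 ms window, are assumptions for illustration; the patent only requires comparing against synchronized (e.g. NTP) time and a preset play time range.

```python
def detect_stutter(samples, allowed_drift_ms=(0, 50)):
    """Return the display times of frames whose clock drift falls outside
    the preset range; each such display time is a stutter time point.

    samples: list of (display_time_ms, server_ntp_time_ms) pairs."""
    lo, hi = allowed_drift_ms
    stutter_points = []
    for display_ms, server_ms in samples:
        drift = server_ms - display_ms
        if not lo <= drift <= hi:
            stutter_points.append(display_ms)  # recorded as a stutter point
    return stutter_points

# The frame shown at 96 ms arrived 120 ms behind the server clock.
points = detect_stutter([(32, 40), (64, 70), (96, 216)])
```

Using a range rather than a single threshold lets the check flag both late frames (stutter) and implausibly early ones (clock or capture errors).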
In addition, by checking whether the display times of the image frames increase uniformly, it can be determined whether the video has dropped frames. Specifically, when the interval between the display times of two adjacent image frames is greater than the inter-frame time difference of the video, it can be determined that an image frame is missing between the two, that is, data has been lost between them. By recording the display times of these two image frames, the time point at which the video dropped frames can be determined.
In a preferred embodiment of the present invention, detecting the continuity of the image frames based on the display time of each image frame and generating a detection result for the video may further include: collecting statistics on the display times of the image frames to determine the inter-frame time difference of the video and the display time difference of each pair of adjacent image frames; determining whether the display time difference of the two image frames is greater than the inter-frame time difference; and, when the display time difference is greater than the inter-frame time difference, determining that data has been lost between the two image frames and generating a frame loss detection result for the video.
Based on the frame loss and/or stutter of the video, the frame-loss and stutter analysis server can automatically generate detection results for the video, such as a stutter detection result and a frame loss detection result. The detection results may include the time points of frame loss, the stutter time points, and so on, so that the probability of stutter during video playback and the stutter time points can be recorded accurately, improving detection accuracy.
In the embodiments of the present invention, the server can automatically detect video playback stutter and video data loss by extracting the watermark timing information from the frame data of each image frame and generate a detection result for the video, reducing the workload of test engineers and saving human resources, thereby lowering the cost of detection. In addition, the server can run detection automatically, that is, it can detect videos during idle time, which speeds up detection and improves detection efficiency.
Referring to FIG. 2, a flow chart of the steps of an embodiment of a video data detection method according to the present invention is shown. Specifically, the method may include the following steps.
Step 201: connect, in a wireless or wired manner, to the smart terminal where the player is located.
In practice, in a network such as the Internet or a local area network, the server can connect to the smart terminal where the player is located in a wireless or wired manner. Here, the wireless manner refers to wireless communication, a communication method that exchanges information by exploiting the ability of radio signals to propagate in free space. The wired manner refers to wired communication, a method of transmitting information over a tangible medium such as metal wire or optical fiber.
Specifically, a video is usually played by a player on a terminal, for example a player on a smart phone or a web player on a smart terminal. While the player is playing a video, the smart terminal where the player is located can connect to the server through a wireless connection such as a WI-FI connection. Of course, the smart terminal can also connect to the server through a wired connection such as a universal serial bus. The embodiment of the present invention places no restriction on the connection manner between the server and the smart terminal.
Step 203: receive the frame data extracted by the smart terminal from the designated area of each image frame.
Here, each image frame is an image frame played in the video that the player has played. Specifically, the player plays the video by displaying its image frames in succession; displaying one image frame is equivalent to displaying one picture.
The smart terminal can obtain the YUV data sent to the display screen through the Surface Flinger service provided by the system itself, or directly capture the data of the designated area of the display screen through the LCD driver. While the player is playing the video, the smart terminal can extract frame data from the designated area of each image frame of the video, for example obtaining the YUV data from the area where the logo is displayed. The frame data of each image frame contains the display time information of that frame, i.e. the watermark timing information. After extracting the frame data from the designated area of each image frame, the smart terminal sends the extracted frame data to the server. The server can receive the frame data sent by the smart terminal through a network interface such as a TCP_SOCKET interface or a USB serial port.
Step 205: for each piece of frame data, extract the watermark timing information from the frame data.
After receiving the frame data, for each piece of frame data, the server can extract, through an inverse transform, the watermark timing information of the image frame corresponding to that frame data, for example extracting the watermark timing information from the YUV data of the designated area.
Step 207: parse the extracted watermark timing information to determine the display time of the image frame corresponding to the frame data.
Specifically, the watermark timing information is embedded as a watermark in the macroblocks of the designated area. By applying an inverse transform to the macroblocks in which the watermark timing information resides, that is, by traversing the macroblocks of the designated area and determining the parity of each macroblock, the display time of each image frame can be determined.
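As an illustration only (the exact embedding scheme is not specified above), the parity-based readout can be sketched as follows. The assumption, which is hypothetical and not stated in the text, is that each macroblock of the designated area carries one bit of the timestamp, encoded in the parity of its mean luma value, with bits ordered from most to least significant.

```python
def decode_display_time(macroblock_mean_lumas):
    """Recover a display time (ms) from macroblock parities.

    Hypothetical scheme: one bit per macroblock; an even mean luma
    encodes 0, an odd mean luma encodes 1, most-significant bit first.
    """
    time_ms = 0
    for luma in macroblock_mean_lumas:
        time_ms = (time_ms << 1) | (luma % 2)  # parity decides the bit
    return time_ms

# A region of 32 macroblocks whose parities spell out 0x00000010 (16 ms):
lumas = [128] * 27 + [129, 128, 128, 128, 128]  # only bit 4 is odd
print(decode_display_time(lumas))  # → 16
```

Under this assumption, traversing the designated area once per frame yields the per-frame display times used by the steps below.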
Thereafter, the continuity of the image frames can be detected based on the display time of each image frame to generate the detection result of the video. In this embodiment, the detection result of the video includes a stutter detection result and a frame-loss detection result, as follows.
Generating the stutter detection result of the video may specifically include step 209, step 211 and step 213; generating the frame-loss detection result of the video may specifically include step 215, step 217 and step 219.
1. Specific steps for generating the stutter detection result of the video
Step 209: calculate the difference between the display time of each image frame and the pre-generated synchronization time information to generate the time difference value corresponding to each image frame.
Based on the server's local NTP time, such as the current system time (System.currentTime), the time at which each image frame is actually displayed during playback can be determined. During video playback, the time difference between each image frame's actual playback time and the video's starting playback time can be used to generate the synchronization time information corresponding to that image frame.
As a specific example of the embodiment of the present invention, assume that System.currentTime is 3500 milliseconds and the starting playback time of the video is 3500 milliseconds. The display time of the 1st image frame is 16 milliseconds and its actual playback time is 3517 milliseconds, so the synchronization time information corresponding to the 1st image frame is 17 milliseconds. The display time of the 2nd image frame is 48 milliseconds and its actual playback time is 3555 milliseconds, so the synchronization time information corresponding to the 2nd image frame is 55 milliseconds. The display time of the 3rd image frame is 80 milliseconds and its actual playback time is 3689 milliseconds, so the synchronization time information corresponding to the 3rd image frame is 189 milliseconds.
By calculating the difference between the display time of each image frame and its corresponding synchronization time information, the time difference value (timegaps) corresponding to each image frame can be generated. For example, for the 1st image frame, calculating the difference between its display time of 16 milliseconds and its corresponding synchronization time information of 17 milliseconds yields a time difference value of 1 millisecond. Similarly, the time difference value corresponding to the 2nd image frame is 7 milliseconds, and the time difference value corresponding to the 3rd image frame is 109 milliseconds.
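The computation in the example above can be sketched as follows (a minimal sketch; the function names are illustrative, and the absolute difference is taken so that the 16 ms vs. 17 ms case yields the 1 ms value stated above):

```python
def sync_time_info(actual_play_times_ms, video_start_ms):
    """Synchronization time info: each frame's actual playback time
    relative to the video's starting playback time."""
    return [t - video_start_ms for t in actual_play_times_ms]

def time_gaps(display_times_ms, sync_times_ms):
    """Time difference value (timegaps) per frame: display time vs.
    synchronization time info, as an absolute value."""
    return [abs(d - s) for d, s in zip(display_times_ms, sync_times_ms)]

# Numbers from the example above: playback starts at 3500 ms.
sync = sync_time_info([3517, 3555, 3689], 3500)  # [17, 55, 189]
gaps = time_gaps([16, 48, 80], sync)
print(gaps)  # → [1, 7, 109]
```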
Step 211: determine, for each image frame, whether the time difference value corresponding to the image frame is within the playback time range.
For each definition level, online video has an acceptable playback delay under good network transmission conditions. Specifically, if an image frame of the video can be played within the preset playback time range, the image frame is considered to play smoothly. Suppose that whenever an image frame of the video is delayed by no more than 40 milliseconds, the image frame is considered to play smoothly, i.e. its playback does not stutter; when an image frame is delayed by more than 40 milliseconds, its playback is considered to stutter. The server can determine whether the time difference value (timegaps) corresponding to each image frame of the video is within the playback time range (A, B), where A may be 0 milliseconds and B may be 40 milliseconds. If the time difference value corresponding to an image frame is greater than A and less than B, the frame is considered to play smoothly; otherwise the frame is considered to stutter. In the above example, the time difference value of the 1st image frame, 1 millisecond, is within the playback time range (0 milliseconds, 40 milliseconds), so the 1st image frame of the video is considered to play smoothly; the time difference value of the 3rd image frame, 109 milliseconds, is not within the playback time range (0 milliseconds, 40 milliseconds), so the 3rd image frame of the video is considered to stutter, and step 213 is then performed.
Step 213: when the time difference value corresponding to an image frame is not within the playback time range, take the display time point of the image frame as a stutter time point and generate the stutter detection result of the video.
When the time difference value (timegaps) corresponding to a certain image frame is not within the playback time range, the display time point of that image frame is taken as a stutter time point to generate the stutter detection result of the video. In the above example, the time difference value of the 3rd image frame, 109 milliseconds, is not within the playback time range, so the display time of the 3rd image frame, 80 milliseconds, is taken as a stutter time point to generate the stutter detection result of the video.
By determining whether the time difference values corresponding to all image frames of the video are within the playback time range, all the stutter time points of the video can be recorded and the stutter detection result of the video generated, so that the probability of stutter during video playback and the stutter time points can be counted precisely, improving the accuracy of detection.
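Steps 209 to 213 can be sketched together as follows, using the (0 ms, 40 ms) playback time range from the example. The open-interval comparison follows the "greater than A and less than B" wording above; the function name is illustrative:

```python
def stutter_time_points(display_times_ms, timegaps_ms, a_ms=0, b_ms=40):
    """Return the display times of frames whose time difference value
    is not within the playback time range (a_ms, b_ms)."""
    return [display
            for display, gap in zip(display_times_ms, timegaps_ms)
            if not (a_ms < gap < b_ms)]

# Frames from the example: gaps of 1 ms and 7 ms play smoothly; the
# 109 ms gap marks frame 3 (display time 80 ms) as a stutter point.
print(stutter_time_points([16, 48, 80], [1, 7, 109]))  # → [80]
```

The resulting list of display times is exactly the set of stutter time points recorded in the stutter detection result.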
2. Specific steps for generating the frame-loss detection result of the video
Step 215: perform statistics on the display times of the image frames to determine the inter-frame time difference corresponding to the video and the display time difference corresponding to each pair of adjacent image frames.
As a specific example of the embodiment of the present invention, assume that the display times of the image frames of a certain video are as shown in Table 1, where "Frame 1" denotes the 1st image frame and "0x" denotes hexadecimal notation, e.g. 0x00000010 represents 16.
Frame number | Display time
Frame 1      | 0x00000010 ms
Frame 2      | 0x00000030 ms
Frame 3      | 0x00000050 ms
Frame 4      | 0x00000090 ms
Frame 5      | 0x000000b0 ms
Frame 6      | 0x000000f0 ms
...          | ...
Table 1
By performing statistics on the display times of all image frames of the video, the inter-frame time difference of the video can be determined to be 32 milliseconds, along with the display time difference corresponding to each pair of adjacent image frames. For example, the display time difference corresponding to the 1st and 2nd image frames is 32 milliseconds, i.e. the display time of the 2nd image frame, 0x00000030 ms, minus the display time of the 1st image frame, 0x00000010 ms. Similarly, the display time differences of the other pairs of adjacent image frames can be calculated: the display time difference corresponding to the 2nd and 3rd image frames is 32 milliseconds, that corresponding to the 3rd and 4th image frames is 64 milliseconds, that corresponding to the 4th and 5th image frames is 32 milliseconds, that corresponding to the 5th and 6th image frames is 64 milliseconds, and so on.
Step 217: determine whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference.
For each pair of adjacent image frames, it is determined whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference corresponding to the video. If the display time difference corresponding to two adjacent image frames is not greater than the inter-frame time difference corresponding to the video, it is considered that no data is lost between the two image frames, i.e. no frame is lost. If the display time difference corresponding to two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it is considered that data is lost between the two image frames, i.e. a frame is lost; for example, the display time difference corresponding to the 3rd and 4th image frames, 64 milliseconds, is greater than the inter-frame time difference corresponding to the video, 32 milliseconds, and step 219 is then performed.
Step 219: when the display time difference is greater than the inter-frame time difference, determine that data is lost between the two image frames and generate the frame-loss detection result of the video.
When the display time difference between two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it can be determined that data is lost between the two image frames, for example that a frame is lost between the 3rd and 4th image frames, and the frame-loss detection result of the video is generated. The frame-loss detection result of the video may record between which image frames a frame was lost, for example recording that a frame was lost between the 3rd and 4th image frames and between the 5th and 6th image frames; alternatively, it may record between which display times a frame was lost, for example that a frame was lost between display times 0x00000050 ms and 0x00000090 ms and between display times 0x000000b0 ms and 0x000000f0 ms. The embodiment of the present invention places no restriction on this.
In the embodiment of the present invention, by determining whether the display time difference between two adjacent image frames is greater than the inter-frame time difference corresponding to the video, it can be determined whether a frame is lost between the two adjacent image frames, and the frame-loss situation of the video can be recorded to generate the frame-loss detection result of the video, so that the probability of frame loss and the time points of frame loss can be counted precisely, improving the accuracy of detection.
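Steps 215 to 219 can be sketched as follows. Determining the inter-frame time difference "by statistics" is interpreted here as taking the most common adjacent-frame difference; this choice of statistic is an assumption, since the text does not fix it, and the function name is illustrative:

```python
from collections import Counter

def frame_loss_detection(display_times_ms):
    """Return (frame_i, frame_j) pairs (1-indexed) of adjacent frames
    between which data was lost: pairs whose display time difference
    exceeds the video's inter-frame time difference."""
    diffs = [b - a for a, b in zip(display_times_ms, display_times_ms[1:])]
    # Assumed statistic: the inter-frame time difference is the most
    # common adjacent-frame display time difference.
    inter_frame_ms = Counter(diffs).most_common(1)[0][0]
    return [(i + 1, i + 2)
            for i, d in enumerate(diffs)
            if d > inter_frame_ms]

# Display times from Table 1: 0x10, 0x30, 0x50, 0x90, 0xb0, 0xf0 ms.
times = [0x10, 0x30, 0x50, 0x90, 0xB0, 0xF0]
print(frame_loss_detection(times))  # → [(3, 4), (5, 6)]
```

On the Table 1 data this yields the two losses discussed above, between the 3rd and 4th and between the 5th and 6th image frames.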
It should be noted that, for simplicity of description, the method embodiments are each expressed as a series of action combinations; however, those skilled in the art should understand that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention, certain steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to FIG. 3A, a structural block diagram of an embodiment of a video data detection device according to the present invention is shown, which may specifically include the following modules.
A receiving module 301, configured to receive the frame data of each image frame of a played video, the frame data including watermark timing information.
A display time determining module 303, configured to extract the watermark timing information from each piece of frame data and determine the display time of the image frame corresponding to each piece of frame data.
A detecting module 305, configured to detect the continuity of the image frames based on the display time of each image frame and generate the detection result of the video.
On the basis of FIG. 3A, optionally, the receiving module 301 may include a connection submodule 30101 and a data receiving submodule 30103, referring to FIG. 3B.
The connection submodule 30101 is configured to connect, in a wireless or wired manner, to the smart terminal where the player is located. The data receiving submodule 30103 is configured to receive the frame data extracted by the smart terminal from the designated area of each image frame, each image frame being an image frame played in the video that the player has played.
In a preferred embodiment of the present invention, the display time determining module 303 may include the following submodules:
An extraction submodule 30301, configured to extract, for each piece of frame data, the watermark timing information from the frame data.
A parsing submodule 30303, configured to parse the extracted watermark timing information and determine the display time of the image frame corresponding to the frame data.
In a preferred embodiment of the present invention, the detecting module 305 may include the following submodules:
A time difference generation submodule 30501, configured to calculate the difference between the display time of each image frame and the pre-generated synchronization time information and generate the time difference value corresponding to each image frame.
A stutter judgment submodule 30503, configured to determine, for each image frame, whether the time difference value corresponding to the image frame is within the playback time range.
A stutter result generation submodule 30505, configured to, when the time difference value corresponding to an image frame is not within the playback time range, take the display time point of the image frame as a stutter time point and generate the stutter detection result of the video.
Optionally, the detecting module 305 may further include the following submodules:
A statistics submodule 30507, configured to perform statistics on the display times of the image frames and determine the inter-frame time difference corresponding to the video and the display time difference corresponding to each pair of adjacent image frames.
A frame-loss judgment submodule 30509, configured to determine whether the display time difference corresponding to the two image frames is greater than the inter-frame time difference.
A frame-loss result generation submodule 30511, configured to, when the display time difference is greater than the inter-frame time difference, determine that data is lost between the two image frames and generate the frame-loss detection result of the video.
In the embodiment of the present invention, after the frame data of each image frame of a played video is received, the display time of the image frame corresponding to each piece of frame data can be determined by extracting the watermark timing information contained in the frame data, and the continuity of the image frames can be detected based on the display time of each image frame to generate the detection result of the video; that is, the probability of stutter and the stutter time points can be counted precisely, improving the accuracy of detection. At the same time, the problem of slow detection progress caused by manually checking video playback for stutter is avoided, which reduces the workload of test technicians while improving detection efficiency.
As for the device embodiment, since it is basically similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the description of the method embodiment.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are identical or similar between embodiments, reference may be made to one another.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
For example, FIG. 4 shows a server in which the present invention may be implemented. The server conventionally includes a processor 410 and a computer program product or computer-readable medium in the form of a memory 420. The memory 420 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. The memory 420 has a storage space 430 for program code 431 for performing any of the method steps described above. For example, the storage space 430 for program code may include individual pieces of program code 431 for implementing the various steps of the above methods. The program code can be read from or written into one or more computer program products. These computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. Such a computer program product is usually a portable or fixed storage unit as described with reference to FIG. 5. The storage unit may have storage segments, storage space, and the like arranged similarly to the memory 420 in the server of FIG. 4. The program code may, for example, be compressed in an appropriate form. Usually, the storage unit includes computer-readable code 431', i.e. code that can be read by a processor such as the processor 410; when run by a server, this code causes the server to perform the steps of the methods described above.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be loaded onto a computer or another programmable data processing terminal device, such that a series of operational steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that includes the element.
The video data detection method and the video data detection device provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. At the same time, those of ordinary skill in the art, following the idea of the present invention, will make changes to the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (13)

  1. A method for detecting video data, comprising:
    receiving frame data of each image frame of a played video, the frame data including watermark timing information;
    extracting watermark timing information from each piece of frame data respectively, and determining a display time of the image frame corresponding to each piece of frame data;
    detecting continuity of the image frames based on the display time of each image frame, and generating a detection result of the video.
  2. The method according to claim 1, wherein the receiving frame data of each image frame of the played video comprises:
    connecting, in a wireless or wired manner, to the smart terminal where the player is located;
    receiving frame data extracted by the smart terminal from a designated area of each image frame, wherein each image frame is an image frame played in the video that the player has played.
  3. The method according to claim 1, wherein extracting the watermark timing information from each piece of frame data and determining the display time of the corresponding image frame comprises:
    for each piece of frame data, extracting the watermark timing information from the frame data; and
    parsing the extracted watermark timing information to determine the display time of the image frame corresponding to the frame data.
  4. The method according to any one of claims 1 to 3, wherein detecting the continuity of the image frames based on their display times and generating the detection result for the video comprises:
    calculating a difference between the display time of each image frame and pre-generated synchronization time information, to generate a time difference corresponding to each image frame;
    determining, for each image frame, whether its corresponding time difference falls within a playback time range; and
    when the time difference corresponding to an image frame is not within the playback time range, taking the display time point of that image frame as a stutter time point, and generating a stutter detection result for the video.
  5. The method according to claim 4, wherein detecting the continuity of the image frames based on their display times and generating the detection result for the video further comprises:
    performing statistics on the display times of the image frames to determine an inter-frame time difference for the video, and a display time difference for each pair of adjacent image frames;
    determining whether the display time difference of the two adjacent image frames is greater than the inter-frame time difference; and
    when the display time difference is greater than the inter-frame time difference, determining that data has been lost between the two image frames, and generating a frame-loss detection result for the video.
  6. An apparatus for detecting video data, comprising:
    a receiving module, configured to receive frame data of each image frame of a played video, the frame data including watermark timing information;
    a display time determining module, configured to extract the watermark timing information from each piece of frame data, and determine a display time of the image frame corresponding to each piece of frame data; and
    a detecting module, configured to detect continuity of the image frames based on the display times of the image frames, and generate a detection result for the video.
  7. The apparatus according to claim 6, wherein the receiving module comprises:
    a connecting sub-module, configured to connect, wirelessly or by wire, to a smart terminal on which a player resides; and
    a data receiving sub-module, configured to receive frame data extracted by the smart terminal from a designated area of each image frame, wherein each image frame is an image frame played in the video that has been played by the player.
  8. The apparatus according to claim 6, wherein the display time determining module comprises:
    an extracting sub-module, configured to extract, for each piece of frame data, the watermark timing information from the frame data; and
    a parsing sub-module, configured to parse the extracted watermark timing information and determine the display time of the image frame corresponding to the frame data.
  9. The apparatus according to any one of claims 6 to 8, wherein the detecting module comprises:
    a time difference generating sub-module, configured to calculate a difference between the display time of each image frame and pre-generated synchronization time information, to generate a time difference corresponding to each image frame;
    a stutter judging sub-module, configured to determine, for each image frame, whether its corresponding time difference falls within a playback time range; and
    a stutter result generating sub-module, configured to, when the time difference corresponding to an image frame is not within the playback time range, take the display time point of that image frame as a stutter time point and generate a stutter detection result for the video.
  10. The apparatus according to claim 9, wherein the detecting module further comprises:
    a statistics sub-module, configured to perform statistics on the display times of the image frames to determine an inter-frame time difference for the video, and a display time difference for each pair of adjacent image frames;
    a frame-loss judging sub-module, configured to determine whether the display time difference of the two adjacent image frames is greater than the inter-frame time difference; and
    a frame-loss result generating sub-module, configured to, when the display time difference is greater than the inter-frame time difference, determine that data has been lost between the two image frames and generate a frame-loss detection result for the video.
  11. A computer program comprising computer-readable code which, when run on a server, causes the server to perform the method for detecting video data according to any one of claims 1 to 5.
  12. A computer-readable medium storing the computer program according to claim 11.
  13. A server, comprising:
    one or more processors; and
    a memory for storing processor-executable instructions;
    wherein the one or more processors are configured to:
    receive frame data of each image frame of a played video, the frame data including watermark timing information;
    extract the watermark timing information from each piece of frame data, and determine a display time of the image frame corresponding to each piece of frame data; and
    detect continuity of the image frames based on the display times of the image frames, and generate a detection result for the video.
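For illustration only, the continuity check recited in claims 4 and 5 can be sketched as follows. The function name, the tolerance value, the median-gap estimate of the inter-frame interval, and the 1.5x jitter margin are all assumptions made for this sketch, not part of the claimed implementation:

```python
def detect_continuity(display_times, sync_times, tolerance_ms=50.0):
    """Sketch of the continuity check of claims 4-5 (names/values assumed).

    display_times: display time (ms) decoded from each frame's watermark.
    sync_times:    pre-generated synchronization time (ms) per frame.
    tolerance_ms:  half-width of the allowed "playback time range" (assumed).
    Returns (stutter_time_points, lost_frame_gaps).
    """
    # Claim 4: a frame whose deviation from the synchronization time falls
    # outside the playback time range marks a stutter time point.
    stutter_time_points = [
        t for t, ref in zip(display_times, sync_times)
        if abs(t - ref) > tolerance_ms
    ]

    # Claim 5: estimate the video's nominal inter-frame time difference from
    # the observed display times (the median gap is one reasonable statistic),
    # then flag any adjacent pair whose gap exceeds it as lost data.
    gaps = [b - a for a, b in zip(display_times, display_times[1:])]
    inter_frame = sorted(gaps)[len(gaps) // 2] if gaps else 0.0
    lost_frame_gaps = [
        (display_times[i], display_times[i + 1])
        for i, g in enumerate(gaps)
        # a 1.5x margin (an assumption) tolerates timing jitter; the claim
        # itself compares the gap against the inter-frame interval directly
        if g > inter_frame * 1.5
    ]
    return stutter_time_points, lost_frame_gaps
```

For example, frames stamped at 0, 40, 80, 200 and 240 ms checked against a 40 ms reference clock yield the last two frames as stutter time points and the 80-to-200 ms gap as lost data.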
PCT/CN2016/089357 2015-12-04 2016-07-08 Video data detection method and device WO2017092343A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510889781.9 2015-12-04
CN201510889781.9A CN105979332A (en) 2015-12-04 2015-12-04 Video data detection method and device

Publications (1)

Publication Number Publication Date
WO2017092343A1 true WO2017092343A1 (en) 2017-06-08

Family

ID=56988250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089357 WO2017092343A1 (en) 2015-12-04 2016-07-08 Video data detection method and device

Country Status (2)

Country Link
CN (1) CN105979332A (en)
WO (1) WO2017092343A1 (en)


Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106775403B (en) * 2016-12-14 2020-03-03 北京小米移动软件有限公司 Method and device for acquiring stuck information
CN106604135A (en) * 2016-12-21 2017-04-26 深圳市泰普森科技有限公司 Video playing method based on set top box and set top box
CN108270635B (en) * 2016-12-30 2020-08-07 亿度慧达教育科技(北京)有限公司 Network blockage judging method and device and online course live broadcast system
CN106851384A (en) * 2017-01-18 2017-06-13 环球智达科技(北京)有限公司 A kind of scheme of Android system player detection buffering
CN106878703B (en) * 2017-03-14 2019-01-04 珠海全志科技股份有限公司 A kind of automobile data recorder video recording detection method
CN106973321B (en) * 2017-03-31 2019-07-30 广州酷狗计算机科技有限公司 Determine the method and device of video cardton
US10306270B2 (en) * 2017-06-26 2019-05-28 Netflix, Inc. Techniques for detecting media playback errors
CN107451066A (en) * 2017-08-22 2017-12-08 网易(杭州)网络有限公司 Interim card treating method and apparatus, storage medium, terminal
CN109698961B (en) * 2017-10-24 2021-06-22 阿里巴巴集团控股有限公司 Monitoring method and device and electronic equipment
CN108495120A (en) * 2018-01-31 2018-09-04 华为技术有限公司 A kind of video frame detection, processing method, apparatus and system
CN108449626A (en) * 2018-03-16 2018-08-24 北京视觉世界科技有限公司 Video processing, the recognition methods of video, device, equipment and medium
CN110602481B (en) * 2018-06-12 2021-11-16 浙江宇视科技有限公司 Video quality detection method and device in video monitoring system
CN108924575B (en) * 2018-07-09 2021-02-02 武汉斗鱼网络科技有限公司 Video decoding analysis method, device, equipment and medium
CN109120995B (en) * 2018-07-09 2021-01-01 武汉斗鱼网络科技有限公司 Video cache analysis method, device, equipment and medium
CN110704268B (en) * 2018-07-10 2023-10-27 浙江宇视科技有限公司 Automatic testing method and device for video images
CN109144858B (en) * 2018-08-02 2022-02-25 腾讯科技(北京)有限公司 Fluency detection method and device, computing equipment and storage medium
CN109412901B (en) * 2018-12-07 2022-09-27 成都博宇利华科技有限公司 Method and system for detecting continuity of acquired data based on time domain processing
CN111314640B (en) * 2020-02-23 2022-06-07 苏州浪潮智能科技有限公司 Video compression method, device and medium
CN112073713B (en) * 2020-09-07 2023-04-25 三六零科技集团有限公司 Video leakage test method, device, equipment and storage medium
CN112073714A (en) * 2020-09-09 2020-12-11 福建新大陆软件工程有限公司 Video playing quality automatic detection method, device, equipment and readable storage medium
CN114512077B (en) * 2020-10-23 2023-12-05 西安诺瓦星云科技股份有限公司 Method, device and system for detecting driving time sequence of receiving card output
CN113034430B (en) * 2020-12-02 2023-06-20 武汉大千信息技术有限公司 Video authenticity verification and identification method and system based on time watermark change analysis
CN115408071A (en) * 2021-05-26 2022-11-29 华为技术有限公司 Dynamic effect calculation method and device
CN114240830A (en) * 2021-11-05 2022-03-25 珠海全志科技股份有限公司 Automobile data recorder video detection method and device based on ffmpeg and image recognition
CN114915846B (en) * 2022-05-10 2024-06-21 中移(杭州)信息技术有限公司 Data processing method, device, equipment and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020174425A1 (en) * 2000-10-26 2002-11-21 Markel Steven O. Collection of affinity data from television, video, or similar transmissions
JP2006270634A (en) * 2005-03-24 2006-10-05 Victor Co Of Japan Ltd Digital broadcast synchronizing reproducing apparatus, stream synchronization reproducing apparatus, and stream synchronization reproducing system
CN101322410A (en) * 2005-12-02 2008-12-10 皇家飞利浦电子股份有限公司 Method and device for detecting video data error
CN101888513A (en) * 2010-06-29 2010-11-17 深圳市融创天下科技发展有限公司 Method for converting video frame rate
CN103283251A (en) * 2010-12-26 2013-09-04 Lg电子株式会社 Broadcast service transmitting method, broadcast service receiving method and broadcast service receiving apparatus
CN104519372A (en) * 2014-12-19 2015-04-15 深圳市九洲电器有限公司 Switching method and switching system for streaming media playing
CN104918133A (en) * 2014-03-12 2015-09-16 北京视联动力国际信息技术有限公司 Method and device for playing video streams in articulated naturality web


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426603A (en) * 2017-08-21 2019-03-05 北京京东尚科信息技术有限公司 A kind of method and apparatus for analyzing application program Caton
CN110457177A (en) * 2019-07-24 2019-11-15 Oppo广东移动通信有限公司 Be switched on method for detecting abnormality and device, electronic equipment, storage medium
CN112711519A (en) * 2019-10-25 2021-04-27 腾讯科技(深圳)有限公司 Method and device for detecting fluency of picture, storage medium and computer equipment
CN111973994A (en) * 2020-09-08 2020-11-24 网易(杭州)网络有限公司 Game configuration adjusting method, device, equipment and storage medium
CN114845164A (en) * 2021-02-02 2022-08-02 中国移动通信有限公司研究院 Data processing method, device and equipment
CN113541832B (en) * 2021-06-24 2023-11-03 青岛海信移动通信技术有限公司 Terminal, network transmission quality detection method and storage medium
CN113541832A (en) * 2021-06-24 2021-10-22 青岛海信移动通信技术股份有限公司 Terminal, network transmission quality detection method and storage medium
CN114928769A (en) * 2022-04-21 2022-08-19 瑞芯微电子股份有限公司 Method and electronic device for displaying frame data
CN114928769B (en) * 2022-04-21 2024-05-14 瑞芯微电子股份有限公司 Method and electronic device for displaying frame data
CN115022675A (en) * 2022-07-01 2022-09-06 天翼数字生活科技有限公司 Video playing detection method and system
CN115022675B (en) * 2022-07-01 2023-12-15 天翼数字生活科技有限公司 Video playing detection method and system
CN116760973A (en) * 2023-08-18 2023-09-15 天津华来科技股份有限公司 Intelligent camera long connection performance test method and system based on two-dimensional code clock
CN116760973B (en) * 2023-08-18 2023-10-24 天津华来科技股份有限公司 Intelligent camera long connection performance test method and system based on two-dimensional code clock

Also Published As

Publication number Publication date
CN105979332A (en) 2016-09-28

Similar Documents

Publication Publication Date Title
WO2017092343A1 (en) Video data detection method and device
US10425679B2 (en) Method and device for displaying information on video image
US20210350828A1 (en) Reference and Non-Reference Video Quality Evaluation
WO2017107649A1 (en) Video transmission method and device
US20170164026A1 (en) Method and device for detecting video data
CN109842795B (en) Audio and video synchronization performance testing method and device, electronic equipment and storage medium
CN108989883B (en) Live broadcast advertisement method, device, equipment and medium
US11763431B2 (en) Scene-based image processing method, apparatus, smart terminal and storage medium
WO2017067489A1 (en) Set-top box audio-visual synchronization method, device and storage medium
WO2021244224A1 (en) Lagging detection method and apparatus, and device and readable storage medium
JP4267649B2 (en) VIDEO PROGRAM PROCESSING METHOD, RELATED DEVICE, AND RELATED MEDIUM
CN104967903A (en) Video play detection method and device
CN110475156B (en) Method and device for calculating video delay value
WO2018166162A1 (en) System and method for detecting playing status of client in audio and video live broadcast
US9516303B2 (en) Timestamp in performance benchmark
US10237593B2 (en) Monitoring quality of experience (QoE) at audio/video (AV) endpoints using a no-reference (NR) method
CN106331820A (en) Synchronous audio and video processing method and device
CN108696713B (en) Code stream safety test method, device and test equipment
CN111641758B (en) Video and audio recording method and device and computer readable storage medium
CN110300326B (en) Video jamming detection method and device, electronic equipment and storage medium
CN113839829A (en) Cloud game delay testing method, device and system and electronic equipment
CN113542888B (en) Video processing method and device, electronic equipment and storage medium
CN115878379A (en) Data backup method, main server, backup server and storage medium
CN114339284A (en) Method, device, storage medium and program product for monitoring live broadcast delay
US20200286120A1 (en) Advertising monitoring method, system, apparatus, and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16869656

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16869656

Country of ref document: EP

Kind code of ref document: A1