CN114845164A - Data processing method, device and equipment - Google Patents

Data processing method, device and equipment

Info

Publication number
CN114845164A
CN114845164A CN202110145485.3A
Authority
CN
China
Prior art keywords
video
pause
target video
point
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110145485.3A
Other languages
Chinese (zh)
Inventor
张伟
胡玉同
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Communications Ltd Research Institute filed Critical China Mobile Communications Group Co Ltd
Priority to CN202110145485.3A priority Critical patent/CN114845164A/en
Publication of CN114845164A publication Critical patent/CN114845164A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a data processing method, apparatus and device, wherein the data processing method comprises the following steps: when a pause occurs in a target video, dividing the target video into a front-segment video and a rear-segment video; obtaining a pause start point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video; obtaining the pause end point corresponding to the pause start point in the target video according to the pause start point; obtaining the pause information of the target video according to the pause start point and the pause end point; the target video is video data obtained from actual playing of the source video. On the basis of judging that a pause exists, the scheme divides the video, locates the first paused frame by comparing the last frame of each divided segment, and then determines the last paused frame, thereby reducing the comparison workload, improving the comparison efficiency and thus improving the processing efficiency of the scheme.

Description

Data processing method, device and equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method, apparatus, and device.
Background
Video playing fluency is an important factor influencing the viewing experience of a user, and a pause interrupts smooth playing; pause detection has therefore become one of the important concerns of video quality detection.
Existing video pause detection methods either analyze the full data (video, cache data, and the like) or analyze data obtained by periodic sampling. Full-data analysis proceeds from the first data or first frame picture to the last data or last frame picture, so the processing workload is large and the efficiency is low; the accuracy of periodic sampling depends on the sampling interval, and a short interval likewise leads to a large workload and low efficiency.
As can be seen from the above, existing data processing schemes for video pause detection suffer from low processing efficiency.
Disclosure of Invention
The invention aims to provide a data processing method, apparatus and device, so as to solve the problem that prior-art data processing schemes for video pause detection are low in processing efficiency.
In order to solve the above technical problem, an embodiment of the present invention provides a data processing method, including:
when a pause occurs in a target video, dividing the target video into a front-segment video and a rear-segment video;
acquiring a pause starting point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame of the corresponding time point in the source video;
acquiring a pause end point corresponding to the pause starting point in the target video according to the pause starting point;
obtaining the pause information of the target video according to the pause starting point and the pause ending point;
and the target video is video data obtained according to the actual playing of the source video.
Optionally, the obtaining a pause start point in the target video from the front-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes:
confirming whether the front-segment video is a minimum segmentation unit when the similarity between the last frame and the reference frame is smaller than a first threshold;
taking the last frame as the pause start point in the target video when the front-segment video is the minimum segmentation unit;
and when the front-segment video is not the minimum segmentation unit, taking the front-segment video as a new target video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause start point is obtained.
Optionally, the obtaining a pause start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes:
confirming whether the rear-segment video is a minimum segmentation unit when the similarity between the last frame and the reference frame is greater than or equal to a first threshold;
and when the rear-segment video is not the minimum segmentation unit, taking the rear-segment video as a new target video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause start point is obtained.
Optionally, the obtaining a pause end point corresponding to the pause start point in the target video according to the pause start point includes:
and acquiring a pause end point corresponding to the pause starting point in the target video according to the next frame corresponding to the pause starting point in the source video.
Optionally, the obtaining a pause start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video further includes:
taking the start point of the rear-segment video as the pause start point in the target video when the rear-segment video is the minimum segmentation unit;
the obtaining a pause end point corresponding to the pause start point in the target video according to the pause start point includes:
and taking the end point of the rear-segment video corresponding to the start point of the rear-segment video as the pause end point corresponding to the pause start point in the target video.
Optionally, before dividing the target video into a front-segment video and a back-segment video, the method further includes:
acquiring a first difference value between a second time length corresponding to the target video and a first time length corresponding to the source video;
and determining that a pause occurs in the target video when the first difference is greater than or equal to a second threshold.
Optionally, the obtaining the pause information of the target video according to the pause start point and the pause end point includes:
obtaining the number of paused frames and the paused image according to the pause start point and the pause end point;
obtaining the pause duration according to the number of paused frames and the frame rate used for actual playing of the source video;
and obtaining the pause information of the target video according to at least one of the number of paused frames, the paused image and the pause duration.
Optionally, after the pause information of the target video is obtained according to the pause start point and the pause end point, the method further includes:
acquiring a second difference value between a second time length corresponding to the target video and the pause time length in the pause information;
and under the condition that the second difference value is larger than the first time length corresponding to the source video, taking the video after the pause end point in the target video as a new target video, taking the video after the frame corresponding to the pause end point in the source video as a new source video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video.
An embodiment of the present invention further provides a data processing apparatus, including:
the first segmentation module is used for dividing a target video into a front-segment video and a rear-segment video when a pause occurs in the target video;
a first obtaining module, configured to obtain a pause start point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in a source video;
the second obtaining module is used for obtaining a pause end point corresponding to the pause starting point in the target video according to the pause starting point;
the first processing module is used for obtaining the pause information of the target video according to the pause starting point and the pause ending point;
and the target video is video data obtained according to the actual playing of the source video.
Optionally, the obtaining a pause start point in the target video from the front-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes:
confirming whether the front-segment video is a minimum segmentation unit when the similarity between the last frame and the reference frame is smaller than a first threshold;
taking the last frame as the pause start point in the target video when the front-segment video is the minimum segmentation unit;
and when the front-segment video is not the minimum segmentation unit, taking the front-segment video as a new target video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause start point is obtained.
Optionally, the obtaining a pause start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes:
confirming whether the rear-segment video is a minimum segmentation unit when the similarity between the last frame and the reference frame is greater than or equal to a first threshold;
and when the rear-segment video is not the minimum segmentation unit, taking the rear-segment video as a new target video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause start point is obtained.
Optionally, the obtaining a pause end point corresponding to the pause start point in the target video according to the pause start point includes:
and acquiring a pause end point corresponding to the pause starting point in the target video according to the next frame corresponding to the pause starting point in the source video.
Optionally, the obtaining a pause start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video further includes:
taking the start point of the rear-segment video as the pause start point in the target video when the rear-segment video is the minimum segmentation unit;
the obtaining a pause end point corresponding to the pause start point in the target video according to the pause start point includes:
and taking the end point of the rear-segment video corresponding to the start point of the rear-segment video as the pause end point corresponding to the pause start point in the target video.
Optionally, the method further includes:
a third obtaining module, configured to obtain a first difference between a second time length corresponding to the target video and a first time length corresponding to the source video before the target video is divided into a front-segment video and a rear-segment video;
and the first determining module is used for determining that a pause occurs in the target video when the first difference is greater than or equal to a second threshold.
Optionally, the obtaining the pause information of the target video according to the pause start point and the pause end point includes:
obtaining the number of paused frames and the paused image according to the pause start point and the pause end point;
obtaining the pause duration according to the number of paused frames and the frame rate used for actual playing of the source video;
and obtaining the pause information of the target video according to at least one of the number of paused frames, the paused image and the pause duration.
Optionally, the method further includes:
a fourth obtaining module, configured to obtain a second difference between a second time length corresponding to the target video and a pause time length in the pause information after the pause information of the target video is obtained according to the pause starting point and the pause ending point;
and a second processing module, configured to, when the second difference is greater than the first time length corresponding to the source video, take the video after the pause end point in the target video as a new target video, take the video after the frame corresponding to the pause end point in the source video as a new source video, and return to the step of dividing the target video into a front-segment video and a rear-segment video.
An embodiment of the present invention further provides a data processing apparatus, including: a processor;
the processor is used for dividing the target video into a front-segment video and a rear-segment video when a pause occurs in the target video;
acquiring a pause starting point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame of the corresponding time point in the source video;
acquiring a pause end point corresponding to the pause starting point in the target video according to the pause starting point;
acquiring the pause information of the target video according to the pause starting point and the pause ending point;
and the target video is video data obtained according to the actual playing of the source video.
Optionally, the obtaining a pause start point in the target video from the front-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes:
confirming whether the front-segment video is a minimum segmentation unit when the similarity between the last frame and the reference frame is smaller than a first threshold;
taking the last frame as the pause start point in the target video when the front-segment video is the minimum segmentation unit;
and when the front-segment video is not the minimum segmentation unit, taking the front-segment video as a new target video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause start point is obtained.
Optionally, the obtaining a pause start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes:
confirming whether the rear-segment video is a minimum segmentation unit when the similarity between the last frame and the reference frame is greater than or equal to a first threshold;
and when the rear-segment video is not the minimum segmentation unit, taking the rear-segment video as a new target video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause start point is obtained.
Optionally, the obtaining a pause end point corresponding to the pause start point in the target video according to the pause start point includes:
and acquiring a pause end point corresponding to the pause starting point in the target video according to the next frame corresponding to the pause starting point in the source video.
Optionally, the obtaining a pause start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video further includes:
taking the start point of the rear-segment video as the pause start point in the target video when the rear-segment video is the minimum segmentation unit;
the obtaining a pause end point corresponding to the pause start point in the target video according to the pause start point includes:
and taking the end point of the rear-segment video corresponding to the start point of the rear-segment video as the pause end point corresponding to the pause start point in the target video.
Optionally, the processor is further configured to:
before the target video is divided into a front-segment video and a rear-segment video, acquiring a first difference value between a second time length corresponding to the target video and a first time length corresponding to the source video;
and determining that a pause occurs in the target video when the first difference value is greater than or equal to a second threshold.
Optionally, the obtaining the pause information of the target video according to the pause start point and the pause end point includes:
obtaining the number of paused frames and the paused image according to the pause start point and the pause end point;
obtaining the pause duration according to the number of paused frames and the frame rate used for actual playing of the source video;
and obtaining the pause information of the target video according to at least one of the number of paused frames, the paused image and the pause duration.
Optionally, the processor is further configured to:
after the pause information of the target video is obtained according to the pause starting point and the pause ending point, a second difference value between a second time length corresponding to the target video and the pause time length in the pause information is obtained;
and under the condition that the second difference value is larger than the first time length corresponding to the source video, taking the video after the pause end point in the target video as a new target video, taking the video after the frame corresponding to the pause end point in the source video as a new source video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video.
The embodiment of the invention provides data processing equipment, which comprises a memory, a processor and a program, wherein the program is stored on the memory and can run on the processor; the processor implements the above-described data processing method when executing the program.
An embodiment of the present invention provides a readable storage medium, on which a program is stored, which when executed by a processor implements the steps in the data processing method described above.
The technical scheme of the invention has the following beneficial effects:
in the above scheme, the data processing method divides a target video into a front-segment video and a rear-segment video when a pause occurs in the target video; obtains a pause start point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video; obtains the pause end point corresponding to the pause start point in the target video according to the pause start point; obtains the pause information of the target video according to the pause start point and the pause end point; the target video is video data obtained from actual playing of the source video. The video can be divided on the basis of judging that a pause exists, and the last frame of each divided segment is compared to locate the first paused frame, after which the last paused frame is determined; this reduces the comparison workload, improves the comparison efficiency, and thus improves the processing efficiency of the scheme, well solving the problem of low processing efficiency in prior-art data processing schemes for video pause detection. In addition, the scheme can cover all pauses meeting the preset standard, so the accuracy and precision of the processing result are better than those of existing schemes.
Drawings
FIG. 1 is a flow chart of a data processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a specific implementation of a data processing method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
To address the low processing efficiency of prior-art data processing schemes for video pause detection, the present invention provides a data processing method, as shown in fig. 1, including the following steps:
step 11: dividing a target video into a front-segment video and a rear-segment video when a pause occurs in the target video;
step 12: acquiring a pause starting point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame of the corresponding time point in the source video;
step 13: acquiring a pause end point corresponding to the pause starting point in the target video according to the pause starting point;
step 14: obtaining the pause information of the target video according to the pause starting point and the pause ending point; and the target video is video data obtained according to the actual playing of the source video.
According to the data processing method provided by the embodiment of the invention, when a pause occurs in a target video, the target video is divided into a front-segment video and a rear-segment video; a pause start point in the target video is obtained from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video; the pause end point corresponding to the pause start point in the target video is obtained according to the pause start point; the pause information of the target video is obtained according to the pause start point and the pause end point; the target video is video data obtained from actual playing of the source video. The video can be divided on the basis of judging that a pause exists, and the last frame of each divided segment is used to locate the first paused frame, after which the last paused frame is determined; this reduces the comparison workload, improves the comparison efficiency, and thus improves the processing efficiency of the scheme, well solving the problem of low processing efficiency in prior-art data processing schemes for video pause detection. In addition, the scheme can cover all pauses meeting the preset standard, so the accuracy and precision of the processing result are better than those of existing schemes.
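The segment-splitting search described above amounts to a bisection over the recorded frames. The sketch below is an illustration under stated assumptions, not the patent's implementation: frames are represented as plain comparable values, `similarity` is a trivial stand-in for a real image-similarity metric, and the two videos are assumed to be aligned frame-by-frame by time point.

```python
def similarity(frame_a, frame_b):
    # Stand-in metric: 1.0 for identical frames, 0.0 otherwise.
    # A real implementation would compare decoded images (e.g. with SSIM).
    return 1.0 if frame_a == frame_b else 0.0

def find_pause_start(target, source, threshold=0.5):
    """Locate the first paused frame in `target` by repeated bisection.

    `target` is the actually played video, `source` the reference video,
    both given as lists of frames aligned by time point. A pause repeats
    a frame, so every target frame from the pause start onward differs
    from the source frame at the same time point.
    """
    lo, hi = 0, len(target)              # current segment: target[lo:hi]
    while hi - lo > 1:                   # stop at the minimum segmentation unit
        mid = (lo + hi) // 2             # split into front and rear segments
        last_frame = target[mid - 1]     # last frame of the front segment
        reference = source[mid - 1]      # reference frame at the same time point
        if similarity(last_frame, reference) < threshold:
            hi = mid                     # pause start lies in the front segment
        else:
            lo = mid                     # front segment clean: search the rear
    return lo                            # index of the pause start point

source = list(range(10))                     # reference frames 0..9
target = [0, 1, 2, 3, 3, 3, 4, 5, 6, 7]     # frame 3 frozen for two extra frames
print(find_pause_start(target, source))      # -> 4
```

With ten frames the bisection needs about four comparisons instead of a frame-by-frame scan, which illustrates the reduced comparison workload the scheme claims.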
Wherein, the obtaining a pause start point in the target video from the front-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes: confirming whether the front-segment video is a minimum segmentation unit when the similarity between the last frame and the reference frame is smaller than a first threshold; taking the last frame as the pause start point in the target video when the front-segment video is the minimum segmentation unit; and when the front-segment video is not the minimum segmentation unit, taking the front-segment video as a new target video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause start point is obtained.
In this case, it is considered that a pause exists in the front-segment video.
In an embodiment of the present invention, the obtaining a pause start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes: confirming whether the rear-segment video is a minimum segmentation unit when the similarity between the last frame and the reference frame is greater than or equal to a first threshold; and when the rear-segment video is not the minimum segmentation unit, taking the rear-segment video as a new target video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause start point is obtained.
In this case, it is considered that no pause exists in the front-segment video and a pause exists in the rear-segment video.
Correspondingly, the obtaining a pause end point corresponding to the pause start point in the target video according to the pause start point includes: and acquiring a pause end point corresponding to the pause starting point in the target video according to the next frame corresponding to the pause starting point in the source video.
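A minimal sketch of this end-point search follows. It is illustrative only: frames are plain comparable values, and the source frame at the pause-start time point is taken as the "next frame" that signals playback has resumed (an assumption about how the patent's comparison would be realized).

```python
def find_pause_end(target, source, pause_start):
    """Given the pause start index, scan forward until a target frame
    matches the source frame that should have played at that moment;
    the frame just before the match is the pause end point."""
    resume_frame = source[pause_start]   # frame expected right after the freeze
    for i in range(pause_start, len(target)):
        if target[i] == resume_frame:    # playback has resumed
            return i - 1                 # last repeated frame: pause end point
    return len(target) - 1               # pause runs to the end of the video

source = list(range(10))
target = [0, 1, 2, 3, 3, 3, 4, 5, 6, 7]   # pause start located at index 4
print(find_pause_end(target, source, 4))   # -> 5
```

Only the frames after the pause start are scanned, so this step stays local to the pause rather than re-examining the whole video.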
In this embodiment of the present invention, the obtaining a pause start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video further includes: taking the start point of the rear-segment video as the pause start point in the target video when the rear-segment video is the minimum segmentation unit; and the obtaining a pause end point corresponding to the pause start point in the target video according to the pause start point includes: taking the end point of the rear-segment video corresponding to its start point as the pause end point corresponding to the pause start point in the target video.
It can also be understood that when the rear-segment video is actually a single frame and a pause exists in it, that frame is the pause.
Further, before the target video is divided into a front-segment video and a rear-segment video, the method further includes: acquiring a first difference value between a second time length corresponding to the target video and a first time length corresponding to the source video; and determining that a pause occurs in the target video when the first difference value is greater than or equal to a second threshold.
The second threshold may be set according to the actual requirement on pause duration; for example, the default setting is 0.1 second.
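This duration comparison can be illustrated as follows; the function name is hypothetical and the 0.1 s default merely mirrors the example threshold in the text.

```python
def has_pause(target_duration_s, source_duration_s, threshold_s=0.1):
    """A target video that played longer than the source by at least the
    second threshold is judged to contain a pause (the first difference)."""
    return target_duration_s - source_duration_s >= threshold_s

print(has_pause(10.25, 10.0))   # True: 0.25 s of excess playback time
print(has_pause(10.05, 10.0))   # False: within the 0.1 s tolerance
```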
In this embodiment of the present invention, the obtaining of the pause information of the target video according to the pause starting point and the pause end point includes: obtaining a pause frame count and pause images according to the pause starting point and the pause end point; obtaining the pause duration according to the pause frame count and the frame rate used for actual playing of the source video; and obtaining the pause information of the target video according to at least one of the pause frame count, the pause images and the pause duration.
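A hedged sketch of this computation, assuming the starting and end points are inclusive frame indices in the target video (names are illustrative):

```python
def pause_info(start_frame, end_frame, frame_rate):
    """Assemble pause information from the pause starting and end points
    (inclusive frame indices in the target video) and the frame rate used
    for actual playback of the source video."""
    pause_frame_count = end_frame - start_frame + 1   # frozen frames, inclusive
    pause_duration = pause_frame_count / frame_rate   # seconds
    return {"frame_count": pause_frame_count, "duration": pause_duration}
```

For instance, a pause spanning frames 100 to 124 at 25 fps yields 25 frozen frames and a 1.0 s pause duration.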
Further, after the pause information of the target video is obtained according to the pause starting point and the pause end point, the method further includes: acquiring a second difference between the second duration corresponding to the target video and the pause duration in the pause information; and in the case that the second difference is greater than the first duration corresponding to the source video, taking the video after the pause end point in the target video as a new target video, taking the video after the frame corresponding to the pause end point in the source video as a new source video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video.
In other words, when another pause still exists after the present pause (corresponding to the pause duration) has been confirmed, the pause detection operation above (similar to the present pause determination flow) can be executed again.
The data processing method provided by the embodiment of the present invention is illustrated below.
In view of the above technical problems, an embodiment of the present invention provides a data processing method, which may be specifically implemented as a video pause detection method: on the basis of determining that a pause exists through comparison of playing durations (corresponding to the first duration and the second duration), the video is segmented, and the last frame of each segmented video is compared in order to locate the first frame of the pause; the last frame of the pause is then determined frame by frame. This reduces the comparison workload and improves comparison efficiency. On this basis, the scheme can cover all pauses meeting the preset standard, and compared with a sampling-based detection method, the accuracy and precision of the result are better.
The scheme can also be understood as providing a video pause detection and measurement method, which judges whether the target video pauses based on a duration comparison of the source video and the target video, quickly determines the starting point and end point of each pause by repeatedly segmenting the videos and comparing frames, and further calculates information such as the number of pauses and their durations.
Specifically, the scheme provided by the embodiment of the present invention may be as shown in fig. 2 (taking the minimum partitionable frame as an example of the minimum segmentation unit), and includes:
step 21: acquiring a video to be played (the source video), and determining its playing duration (corresponding to the first duration);
it can also be understood that the video to be detected is obtained as the source video, and the time length required by playing the video is determined.
Step 22: playing a video, and recording the video (a target video) according to the actually played frame rate to obtain the actually played time length;
it can also be understood that the video is played and recorded at the actual playback frame rate, so as to obtain the target video and its playing duration (the actual playing duration, corresponding to the second duration), used to detect whether a pause exists.
Step 23: comparing the playing durations (source video vs. target video); if the playing duration of the source video is equal to the actual playing duration of the target video, go to step 24; if the playing duration of the source video is less than the actual playing duration of the target video, go to step 25;
specifically, the playing duration of the source video is compared with the actual playing duration of the target video; if they are not equal, it is determined that the video pauses, and the video segmentation process is entered, that is, step 25 is executed and then step 26.
Regarding the "playing duration comparison (source video vs. target video)", the following may apply specifically: determining whether the first difference between the actual playing duration of the target video and the playing duration of the source video is greater than or equal to the second threshold, where the second threshold may be set as desired, for example defaulting to 0.1 s.
Step 24: determining that the durations are the same and no pause exists;
that is, if the playing duration of the source video is equal to the actual playing duration of the target video, the target video is considered to have no pause, and the judgment ends.
Step 25: determining that the durations differ and the target video pauses;
step 26: dividing a target video according to a certain standard, and acquiring the duration, the initial frame and the end frame of a divided video block;
step 27: comparing whether the end frame of the first half section of the target video (namely the front section of the target video) is the same as the frame of the corresponding time of the source video; if yes, go to step 28; if not, go to step 211;
step 28: the video block (i.e., the first-half video) has no pause and needs no further comparison; go to step 29;
step 29: judging whether the second-half video (namely the rear-segment video) is a minimum partitionable frame (corresponding to the minimum segmentation unit); if yes, go to step 210; if not, return to step 26;
step 210: recording the time point corresponding to the frame, and taking the frame as both the starting point and the end point of the pause; go to step 213 (or, instead of going to step 213, directly calculate the duration of the current pause according to the frame rate actually used for playback and the number of frames, and record the obtained pause information, without limitation here);
the above can be understood in particular as: the target video is divided into two parts (the division is not necessarily half) according to certain rules and standards (such as time point halving, or other clue prompts of the monitored playing process (such as plot development)). And comparing the similarity between the last frame (namely the last frame) of the former part (corresponding to the front-segment video) and the frame (namely the reference frame) of the source video at the same time point, if the similarity reaches a preset standard (namely the first threshold), determining that the first half part does not have pause, continuously dividing the second half part (corresponding to the rear-segment video) and comparing the last frame of the same time point (namely comparing the last frame of the divided video with the frame of the source video at the corresponding time point) until finding the initial frame of the pause.
Specifically, the method comprises the following steps: in the case of no stuck in the first half, next, it can be determined first whether the second half is a minimum divisible frame (corresponding to the minimum partition unit); if not, the latter half is taken as a new target video, the source video part corresponding to the latter half is taken as a new source video (that is, the starting point of the latter half is taken as the starting frame of the new target video, and the ending point is taken as the ending frame of the new target video, the frame corresponding to the time point in the source video with the starting point of the latter half is taken as the starting frame of the new source video, and the ending point of the source video is taken as the ending frame of the new source video), and the procedure returns to step 26.
Step 211: determining that the first-half video pauses, and judging whether the first-half video is a minimum divisible frame; if yes, go to step 212; otherwise, return to step 26;
step 212: recording the time point corresponding to the frame, taking the next frame of the source video as the end-point comparison frame, and comparing it with the following frames of the target video frame by frame until the source comparison frame is matched; the matched frame is taken as the pause end point.
Regarding "recording the time point corresponding to the frame, taking the next frame of the source video as the end-point comparison frame, and comparing it with the following frames of the target video frame by frame until the source comparison frame is matched", this can be understood as: recording the time point in the target video, and acquiring the next frame in the source video as the end frame according to that time point; the end frame is then matched against the target video until a match is found, thereby obtaining the end point of this pause.
That is: if the ending frame of the first-half video of the target video does not match the source-video frame at the corresponding time (does not meet the preset standard, i.e., is not the same), the first half is considered to pause; the first half is divided again and the last-frame comparison at the same time point is repeated, with step 28 or step 211 executed according to the comparison result, until the starting frame of this pause is found.
In the embodiment of the present invention, the "minimum partitionable frame" may be understood as follows: the minimum unit of division is one frame. Specifically, in implementation, if the similarity between the two frames before and after a division is lower than the first threshold, it is not a minimum divisible frame; if the similarity is equal to or higher than the first threshold, it is a minimum divisible frame.
In the embodiment of the present invention, after the pause starting frame is determined, the frame following the pause starting frame in the source video may be taken and compared frame by frame with the frames following the pause starting frame in the target video until a frame whose similarity reaches the standard is found; that frame is recorded as the last frame of the pause (i.e., the pause end point, or pause end frame).
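This frame-by-frame end-point search might look as follows in Python. The sketch assumes index-aligned frame sequences in which `source[pause_start]` is the next source frame after the frozen content, and `frames_similar` is a placeholder similarity test; it is an illustration under those assumptions, not the patented implementation.

```python
def find_pause_end(target, source, pause_start, frames_similar):
    """Locate the pause end point: take the next source frame after the pause
    starting point and scan the target video frame by frame until it matches.
    Returns the index of the last frozen target frame (the pause end point)."""
    probe = source[pause_start]  # next source frame after the frozen content
    for i in range(pause_start, len(target)):
        if frames_similar(target[i], probe):
            return i - 1         # playback resumed at i, so the pause ended at i - 1
    return len(target) - 1       # the pause lasts to the end of the recording
```

With the earlier toy data (`source = list(range(12))`, `target = [0, 1, 2, 3, 4, 4, 4, 5, 6, 7, 8, 9]`, pause starting at index 5), the scan matches at index 7 and returns 6, the last frozen frame.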
In embodiments of the present invention, the search for the last frame of the pause may span multiple divided video blocks.
Step 213: calculating the duration of the current pause according to the frame rate and the number of frames.
Specifically, the frame number of the start frame and the frame number of the end frame may be calculated, and the duration of the current pause may be calculated according to the frame rate of the video during actual playing.
Step 214: subtracting the pause time length from the actual video playing time length to obtain a difference value (corresponding to the second difference value);
step 215: comparing the difference value with the source video time length (namely the source video playing time length); if the difference is greater than the source video duration, go to step 216; if the difference is equal to the source video duration, go to step 217;
that is, whether the difference is greater than the source video duration is determined, if yes, step 216 is performed, and if not, step 217 is performed;
step 216: determining that the difference is greater than the source video duration and the target video still pauses; taking the frame following the last frame of the pause as the starting point (i.e., the starting point of a new target video), and returning to step 26 (to continue segmentation and matching).
Specifically, the difference obtained by subtracting the current pause duration from the target video duration (i.e., the actual playing duration of the target video) is compared with the source video duration. If the former is greater than the latter, a pause still exists, and the cutting, comparing, judging and calculating processes are continued, taking the frame following the last frame of the pause as the starting point, up to the last frame of the target video, until no pause remains.
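The repeat check can be sketched as follows (illustrative names; durations in seconds):

```python
def pause_remains(target_duration, total_pause_duration, source_duration):
    """After each confirmed pause, subtract the accumulated pause duration
    from the target duration; if the remainder still exceeds the source
    duration, at least one further pause exists and the split-and-compare
    flow is run again from the frame after the pause end point."""
    second_difference = target_duration - total_pause_duration
    return second_difference > source_duration
```

When the remainder equals the source duration, every pause has been accounted for and detection ends.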
The scheme can detect the pause of the video during actual playing.
Step 217: determining that the difference equals the source video duration and no further pause exists; ending the operation and saving the information;
regarding "saving the information", what is saved may be pause-related information, such as: pause duration, pause time point, paused frames, etc.;
that is, if the difference equals the source video duration (it can no longer be greater), it is determined that no further pause exists.
Subsequently, information such as the pause time point, the pictures (frame images) and the duration can be saved. This information can be used for subsequent reporting.
As can be seen from the above, in the scheme provided in the embodiment of the present invention, a source video for reference comparison and a target video from actual playing are first constructed, the target video being recorded while the source video plays. The durations of the source video and the target video are compared, and if the target video duration is greater than the source video duration, it is judged that the video pauses. The video is then cut following a bisection idea, and the last frame of the cut front-half video is compared with the source-video frame at the corresponding time point; if they are the same, the pause lies in the second half, which is divided again; if they differ, the pause lies in the first half, which is divided again, until the first frame of the pause is found. Subsequently, the frame following the pause's first frame in the source video is compared frame by frame with the later frames of the target video until the last frame of the pause is found. The duration between the first and last frames of the pause can then be calculated as the pause duration. The difference of the target video duration minus the pause duration is compared with the source video duration; if it is still greater, it is judged that a pause remains, and division continues. If the two are the same, it is judged that no pause remains, and the comparison ends.
It is also understood that the solution provided by the embodiments of the present invention involves: starting from a target video, when the existence of the pause is judged through time length comparison, firstly, the target video is divided, and then whether the video is a pause starting point (namely a pause starting frame) is judged by comparing the last frame of the cut previous video with the frame of the corresponding time point in the source video; after the start point of the pause is determined, the method continues from the next frame in the source video, finds a corresponding matching frame (i.e., the end point of the pause, which may also be referred to as the end frame of the pause) in the target video, and then calculates the pause duration by the number of frames between the two detected frames (i.e., the start frame and the end frame of the pause) in the target video. In the embodiment of the present invention, the divided video may be continuously split into a plurality of segments (at least two segments) according to the time length, which is not limited herein.
In summary, the scheme provided by the embodiment of the invention can realize that:
1) The playing durations of the source video and the target video are compared first to judge whether a pause exists. The advantage of adding this judgment step is that comparison is avoided when the target video has no pause, reducing the consumption of evaluation resources.
2) By performing frame comparison after the video is segmented, and comparing only down to the first frame of the pause, comparison resources are not spent on the normally playing part, reducing the frame-comparison workload.
3) After the first frame of the pause is determined, the next frame of the source video is compared step by step with the frames of the target video, so that segmentation of the source video and frame comparisons are kept to a minimum, further reducing the comparison workload.
4) After one pause is determined, duration comparison continues to judge whether further pauses exist, and segmentation and comparison continue accordingly, so that all pauses can be exhaustively found, giving better accuracy.
In addition, the scheme provided by the embodiment of the invention performs local analysis after the source video and the target video are obtained, with no additional network connection needed; it judges the paused frames by frame comparison after cutting the video, so fewer working resources are required and efficiency is higher; and by locating the specific first and last frames of each pause, the pictures and video during the pause can be obtained intuitively, which is closer to and more consistent with the user's perceptual experience.
An embodiment of the present invention further provides a data processing apparatus, as shown in fig. 3, including:
the first segmentation module 31 is configured to divide a target video into a front-segment video and a rear-segment video in the case that the target video pauses;
a first obtaining module 32, configured to obtain a pause starting point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in a source video;
a second obtaining module 33, configured to obtain, according to the pause starting point, a pause end point corresponding to the pause starting point in the target video;
a first processing module 34, configured to obtain the pause information of the target video according to the pause starting point and the pause end point;
wherein the target video is video data obtained from actual playing of the source video.
The data processing apparatus provided by the embodiment of the invention divides a target video into a front-segment video and a rear-segment video in the case that the target video pauses; obtains a pause starting point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video; obtains, according to the pause starting point, a pause end point corresponding to the pause starting point in the target video; and obtains the pause information of the target video according to the pause starting point and the pause end point, the target video being video data obtained from actual playing of the source video. The video can thus be segmented once a pause is judged to exist, the last frames of the segmented videos are used to locate the first frame of the pause, and the last frame of the pause is then determined, which reduces the comparison workload, improves comparison efficiency, and improves the processing efficiency of the scheme; the problem of low processing efficiency of prior-art data processing schemes for video pause detection is thereby well solved. In addition, the scheme can cover all pauses meeting the preset standard, and compared with existing schemes, the accuracy and precision of the processing result are better.
Wherein the obtaining of the pause starting point in the target video from the front-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes: confirming whether the front-segment video is a minimum segmentation unit in the case that the similarity between the last frame and the reference frame is smaller than a first threshold; taking the last frame as the pause starting point in the target video in the case that the front-segment video is a minimum segmentation unit; and in the case that the front-segment video is not the minimum segmentation unit, taking the front-segment video as a new target video and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause starting point is obtained.
In an embodiment of the present invention, the obtaining of the pause starting point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes: confirming whether the rear-segment video is a minimum segmentation unit in the case that the similarity between the last frame and the reference frame is greater than or equal to the first threshold; and in the case that the rear-segment video is not the minimum segmentation unit, taking the rear-segment video as a new target video and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause starting point is obtained.
Wherein the obtaining of the pause end point corresponding to the pause starting point in the target video according to the pause starting point includes: acquiring the pause end point corresponding to the pause starting point in the target video according to the next frame corresponding to the pause starting point in the source video.
In this embodiment of the present invention, the obtaining of the pause starting point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video further includes: taking the starting point of the rear-segment video as the pause starting point in the target video in the case that the rear-segment video is the minimum segmentation unit; and the obtaining of the pause end point corresponding to the pause starting point in the target video according to the pause starting point includes: taking the end point of the rear-segment video corresponding to its starting point as the pause end point corresponding to the pause starting point in the target video.
Further, the data processing apparatus further includes: a third obtaining module, configured to obtain a first difference between the second duration corresponding to the target video and the first duration corresponding to the source video before the target video is divided into a front-segment video and a rear-segment video; and a first determining module, configured to determine that a pause occurs in the target video in the case that the first difference is greater than or equal to a second threshold.
The obtaining of the pause information of the target video according to the pause starting point and the pause end point includes: obtaining a pause frame count and pause images according to the pause starting point and the pause end point; obtaining the pause duration according to the pause frame count and the frame rate used for actual playing of the source video; and obtaining the pause information of the target video according to at least one of the pause frame count, the pause images and the pause duration.
Further, the data processing apparatus further includes: a fourth obtaining module, configured to obtain a second difference between a second time length corresponding to the target video and a pause time length in the pause information after the pause information of the target video is obtained according to the pause starting point and the pause ending point; and a second processing module, configured to, when the second difference is greater than the first duration corresponding to the source video, take a video after a pause end point in the target video as a new target video, take a video after a frame corresponding to the pause end point in the source video as a new source video, and return to the step of dividing the target video into a front-stage video and a rear-stage video.
The implementation embodiments of the data processing method are all applicable to the embodiment of the data processing device, and the same technical effects can be achieved.
An embodiment of the present invention further provides a data processing apparatus, as shown in fig. 4, including: a processor 41;
the processor 41 is configured to divide a target video into a front-segment video and a rear-segment video in the case that the target video pauses;
acquiring a pause starting point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame of the corresponding time point in the source video;
acquiring a pause end point corresponding to the pause starting point in the target video according to the pause starting point;
obtaining the pause information of the target video according to the pause starting point and the pause ending point;
and the target video is video data obtained according to the actual playing of the source video.
Of course, in the embodiment of the present invention, the data processing device may further include a transceiver in communication with the processor, which is not limited herein.
The data processing equipment provided by the embodiment of the invention divides a target video into a front-segment video and a rear-segment video in the case that the target video pauses; obtains a pause starting point in the target video from the front-segment video or the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video; obtains, according to the pause starting point, a pause end point corresponding to the pause starting point in the target video; and obtains the pause information of the target video according to the pause starting point and the pause end point, the target video being video data obtained from actual playing of the source video. The video can thus be segmented once a pause is judged to exist, the last frames of the segmented videos are used to locate the first frame of the pause, and the last frame of the pause is then determined, which reduces the comparison workload, improves comparison efficiency, and improves the processing efficiency of the scheme; the problem of low processing efficiency of prior-art data processing schemes for video pause detection is thereby well solved. In addition, the scheme can cover all pauses meeting the preset standard, and compared with existing schemes, the accuracy and precision of the processing result are better.
Wherein the obtaining of the pause starting point in the target video from the front-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes: confirming whether the front-segment video is a minimum segmentation unit in the case that the similarity between the last frame and the reference frame is smaller than a first threshold; taking the last frame as the pause starting point in the target video in the case that the front-segment video is a minimum segmentation unit; and in the case that the front-segment video is not the minimum segmentation unit, taking the front-segment video as a new target video and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause starting point is obtained.
In an embodiment of the present invention, the obtaining of the pause starting point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video includes: confirming whether the rear-segment video is a minimum segmentation unit in the case that the similarity between the last frame and the reference frame is greater than or equal to the first threshold; and in the case that the rear-segment video is not the minimum segmentation unit, taking the rear-segment video as a new target video and returning to the step of dividing the target video into a front-segment video and a rear-segment video until the pause starting point is obtained.
Wherein the obtaining of the pause end point corresponding to the pause starting point in the target video according to the pause starting point includes: acquiring the pause end point corresponding to the pause starting point in the target video according to the next frame corresponding to the pause starting point in the source video.
In this embodiment of the present invention, the obtaining of the pause starting point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video further includes: taking the starting point of the rear-segment video as the pause starting point in the target video in the case that the rear-segment video is the minimum segmentation unit; and the obtaining of the pause end point corresponding to the pause starting point in the target video according to the pause starting point includes: taking the end point of the rear-segment video corresponding to its starting point as the pause end point corresponding to the pause starting point in the target video.
Further, the processor is further configured to: before the target video is divided into a front-segment video and a rear-segment video, acquire a first difference between the second duration corresponding to the target video and the first duration corresponding to the source video; and determine that a pause occurs in the target video in the case that the first difference is greater than or equal to a second threshold.
The obtaining of the pause information of the target video according to the pause starting point and the pause end point includes: obtaining a pause frame count and pause images according to the pause starting point and the pause end point; obtaining the pause duration according to the pause frame count and the frame rate used for actual playing of the source video; and obtaining the pause information of the target video according to at least one of the pause frame count, the pause images and the pause duration.
Further, the processor is further configured to: after the pause information of the target video is obtained according to the pause starting point and the pause ending point, a second difference value between a second time length corresponding to the target video and the pause time length in the pause information is obtained; and under the condition that the second difference value is larger than the first time length corresponding to the source video, taking the video after the pause end point in the target video as a new target video, taking the video after the frame corresponding to the pause end point in the source video as a new source video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video.
The implementations described for the embodiments of the data processing method are all applicable to this embodiment of the data processing apparatus, and the same technical effects can be achieved.
An embodiment of the present invention further provides a data processing device, which includes a memory, a processor, and a program stored on the memory and executable on the processor; the processor implements the above-described data processing method when executing the program.
The implementations described for the embodiments of the data processing method are all applicable to this embodiment of the data processing device, and the same technical effects can be achieved.
An embodiment of the present invention further provides a readable storage medium on which a program is stored; when executed by a processor, the program implements the steps of the above data processing method.
The implementations described for the embodiments of the data processing method are all applicable to this embodiment of the readable storage medium, and the same technical effects can be achieved.
It should be noted that many of the functional units described in this specification are referred to as modules in order to particularly emphasize the independence of their implementation.
In embodiments of the present invention, modules may be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored at different locations which, when joined logically together, constitute the module and achieve the stated purpose of the module.
Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Likewise, operational data may be identified within the modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
Where a module can be implemented in software, given the level of existing hardware technology and leaving cost aside, a corresponding hardware circuit can also be built to implement the same function; such a hardware circuit may include a conventional very-large-scale integration (VLSI) circuit or a gate array, existing semiconductors such as logic chips and transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field-programmable gate arrays, programmable array logic, or programmable logic devices.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (19)

1. A data processing method, comprising:
dividing a target video into a front-segment video and a rear-segment video in a case that stutter occurs in the target video;
obtaining a stutter start point in the target video from the front-segment video or the rear-segment video according to a last frame of the front-segment video and a reference frame at a corresponding time point in a source video;
obtaining, according to the stutter start point, a stutter end point corresponding to the stutter start point in the target video; and
obtaining stutter information of the target video according to the stutter start point and the stutter end point;
wherein the target video is video data obtained from actual playback of the source video.
2. The data processing method of claim 1, wherein the obtaining a stutter start point in the target video from the front-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video comprises:
confirming whether the front-segment video is a minimum segmentation unit in a case that a similarity between the last frame and the reference frame is smaller than a first threshold;
taking the last frame as the stutter start point in the target video in a case that the front-segment video is the minimum segmentation unit; and
in a case that the front-segment video is not the minimum segmentation unit, taking the front-segment video as a new target video and returning to the step of dividing the target video into a front-segment video and a rear-segment video, until the stutter start point is obtained.
3. The data processing method of claim 1, wherein the obtaining a stutter start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video comprises:
confirming whether the rear-segment video is a minimum segmentation unit in a case that the similarity between the last frame and the reference frame is greater than or equal to a first threshold; and
in a case that the rear-segment video is not the minimum segmentation unit, taking the rear-segment video as a new target video and returning to the step of dividing the target video into a front-segment video and a rear-segment video, until the stutter start point is obtained.
4. The data processing method according to claim 2 or 3, wherein the obtaining, according to the stutter start point, a stutter end point corresponding to the stutter start point in the target video comprises:
obtaining the stutter end point corresponding to the stutter start point in the target video according to a next frame corresponding to the stutter start point in the source video.
5. The data processing method according to claim 3, wherein the obtaining a stutter start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video further comprises:
taking a start point of the rear-segment video as the stutter start point in the target video in a case that the rear-segment video is the minimum segmentation unit; and
the obtaining, according to the stutter start point, a stutter end point corresponding to the stutter start point in the target video comprises:
taking an end point of the rear-segment video corresponding to the start point of the rear-segment video as the stutter end point corresponding to the stutter start point in the target video.
6. The data processing method according to claim 1, further comprising, before the dividing the target video into a front-segment video and a rear-segment video:
obtaining a first difference between a second duration corresponding to the target video and a first duration corresponding to the source video; and
determining that stutter occurs in the target video in a case that the first difference is greater than or equal to a second threshold.
7. The data processing method of claim 1, wherein the obtaining stutter information of the target video according to the stutter start point and the stutter end point comprises:
obtaining a stutter frame count and stutter images according to the stutter start point and the stutter end point;
obtaining a stutter duration according to the stutter frame count and a frame rate used in the actual playback of the source video; and
obtaining the stutter information of the target video according to at least one of the stutter frame count, the stutter images, and the stutter duration.
8. The data processing method of claim 1, further comprising, after the obtaining stutter information of the target video according to the stutter start point and the stutter end point:
obtaining a second difference between the second duration corresponding to the target video and the stutter duration in the stutter information; and
in a case that the second difference is greater than the first duration corresponding to the source video, taking the video after the stutter end point in the target video as a new target video, taking the video after the frame corresponding to the stutter end point in the source video as a new source video, and returning to the step of dividing the target video into a front-segment video and a rear-segment video.
9. A data processing apparatus, comprising:
a first segmentation module, configured to divide a target video into a front-segment video and a rear-segment video in a case that stutter occurs in the target video;
a first obtaining module, configured to obtain a stutter start point in the target video from the front-segment video or the rear-segment video according to a last frame of the front-segment video and a reference frame at a corresponding time point in a source video;
a second obtaining module, configured to obtain, according to the stutter start point, a stutter end point corresponding to the stutter start point in the target video; and
a first processing module, configured to obtain stutter information of the target video according to the stutter start point and the stutter end point;
wherein the target video is video data obtained from actual playback of the source video.
10. The data processing apparatus according to claim 9, wherein the obtaining a stutter start point in the target video from the front-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video comprises:
confirming whether the front-segment video is a minimum segmentation unit in a case that a similarity between the last frame and the reference frame is smaller than a first threshold;
taking the last frame as the stutter start point in the target video in a case that the front-segment video is the minimum segmentation unit; and
in a case that the front-segment video is not the minimum segmentation unit, taking the front-segment video as a new target video and returning to the step of dividing the target video into a front-segment video and a rear-segment video, until the stutter start point is obtained.
11. The data processing apparatus of claim 9, wherein the obtaining a stutter start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video comprises:
confirming whether the rear-segment video is a minimum segmentation unit in a case that the similarity between the last frame and the reference frame is greater than or equal to a first threshold; and
in a case that the rear-segment video is not the minimum segmentation unit, taking the rear-segment video as a new target video and returning to the step of dividing the target video into a front-segment video and a rear-segment video, until the stutter start point is obtained.
12. The data processing apparatus according to claim 10 or 11, wherein the obtaining, according to the stutter start point, a stutter end point corresponding to the stutter start point in the target video comprises:
obtaining the stutter end point corresponding to the stutter start point in the target video according to a next frame corresponding to the stutter start point in the source video.
13. The data processing apparatus according to claim 11, wherein the obtaining a stutter start point in the target video from the rear-segment video according to the last frame of the front-segment video and the reference frame at the corresponding time point in the source video further comprises:
taking a start point of the rear-segment video as the stutter start point in the target video in a case that the rear-segment video is the minimum segmentation unit; and
the obtaining, according to the stutter start point, a stutter end point corresponding to the stutter start point in the target video comprises:
taking an end point of the rear-segment video corresponding to the start point of the rear-segment video as the stutter end point corresponding to the stutter start point in the target video.
14. The data processing apparatus of claim 9, further comprising:
a third obtaining module, configured to obtain, before the target video is divided into a front-segment video and a rear-segment video, a first difference between a second duration corresponding to the target video and a first duration corresponding to the source video; and
a first determining module, configured to determine that stutter occurs in the target video in a case that the first difference is greater than or equal to a second threshold.
15. The data processing apparatus according to claim 9, wherein the obtaining stutter information of the target video according to the stutter start point and the stutter end point comprises:
obtaining a stutter frame count and stutter images according to the stutter start point and the stutter end point;
obtaining a stutter duration according to the stutter frame count and a frame rate used in the actual playback of the source video; and
obtaining the stutter information of the target video according to at least one of the stutter frame count, the stutter images, and the stutter duration.
16. The data processing apparatus of claim 9, further comprising:
a fourth obtaining module, configured to obtain, after the stutter information of the target video is obtained according to the stutter start point and the stutter end point, a second difference between the second duration corresponding to the target video and the stutter duration in the stutter information; and
a second processing module, configured to, in a case that the second difference is greater than the first duration corresponding to the source video, take the video after the stutter end point in the target video as a new target video, take the video after the frame corresponding to the stutter end point in the source video as a new source video, and return to the step of dividing the target video into a front-segment video and a rear-segment video.
17. A data processing apparatus, characterized by comprising: a processor;
wherein the processor is configured to divide a target video into a front-segment video and a rear-segment video in a case that stutter occurs in the target video;
obtain a stutter start point in the target video from the front-segment video or the rear-segment video according to a last frame of the front-segment video and a reference frame at a corresponding time point in a source video;
obtain, according to the stutter start point, a stutter end point corresponding to the stutter start point in the target video; and
obtain stutter information of the target video according to the stutter start point and the stutter end point;
wherein the target video is video data obtained from actual playback of the source video.
18. A data processing apparatus comprising a memory, a processor and a program stored on the memory and executable on the processor; characterized in that the processor implements the data processing method according to any one of claims 1 to 8 when executing the program.
19. A readable storage medium, on which a program is stored, which, when being executed by a processor, carries out the steps of the data processing method according to any one of claims 1 to 8.
CN202110145485.3A 2021-02-02 2021-02-02 Data processing method, device and equipment Pending CN114845164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110145485.3A CN114845164A (en) 2021-02-02 2021-02-02 Data processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110145485.3A CN114845164A (en) 2021-02-02 2021-02-02 Data processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN114845164A true CN114845164A (en) 2022-08-02

Family

ID=82562896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110145485.3A Pending CN114845164A (en) 2021-02-02 2021-02-02 Data processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN114845164A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847977A (en) * 2016-03-28 2016-08-10 Leshi Holding (Beijing) Co., Ltd. Streaming media file processing method and streaming media file processing device
WO2017092343A1 (en) * 2015-12-04 2017-06-08 Leshi Holding (Beijing) Co., Ltd. Video data detection method and device
CN107147947A (en) * 2017-05-11 2017-09-08 Tencent Technology (Shenzhen) Co., Ltd. Key frame recognition method and device
CN108664377A (en) * 2018-05-02 2018-10-16 Tencent Music Entertainment Technology (Shenzhen) Co., Ltd. Method, apparatus and storage medium for determining user interface stutter
CN109587551A (en) * 2017-09-29 2019-04-05 Beijing Kingsoft Cloud Network Technology Co., Ltd. Method, device, equipment and storage medium for determining live streaming media stutter
CN110418170A (en) * 2019-07-03 2019-11-05 Tencent Technology (Shenzhen) Co., Ltd. Detection method and device, storage medium and electronic device
CN111327964A (en) * 2018-12-17 2020-06-23 China Mobile Group Beijing Co., Ltd. Method and device for locating video playback stutter


Similar Documents

Publication Publication Date Title
US7038736B2 (en) Moving image processing apparatus and method, and computer readable memory
EP2659387B1 (en) Predictive software streaming
CN111401315B (en) Face recognition method based on video, recognition device and storage device
CN110941553A (en) Code detection method, device, equipment and readable storage medium
CN113869137A (en) Event detection method and device, terminal equipment and storage medium
CN114584836B (en) Method, device, system and medium for detecting using behavior of electronic product
CN112214394B (en) Memory leakage detection method, device and equipment
CN114845164A (en) Data processing method, device and equipment
CN116761020A (en) Video processing method, device, equipment and medium
CN114819110B (en) Method and device for identifying speaker in video in real time
US20230326036A1 (en) Method for detecting and tracking a target, electronic device, and storage medium
CN116258995A (en) Video transition identification method, device, computing equipment and computer storage medium
CN115878379A (en) Data backup method, main server, backup server and storage medium
JP2023107399A (en) Endoscope system and operation method thereof
US20110154304A1 (en) Determining compiler efficiency
CN111858313A Interface stutter detection method, device and storage medium
US6795879B2 (en) Apparatus and method for wait state analysis in a digital signal processing system
CN114710685B (en) Video stream processing method and device, terminal equipment and storage medium
KR102285039B1 (en) Shot boundary detection method and apparatus using multi-classing
CN114780400A (en) Method for blocking cyclic calling among services based on periodic data balance statistics
US20230351613A1 (en) Method of detecting object in video and video analysis terminal
CN112581506A (en) Face tracking method, system and computer readable storage medium
CN116166466A (en) Rendering card frame detection method, electronic device and computer storage medium
CN116149968A (en) Chip timer diagnostic method and device
CN101329730B (en) Method for identifying symbol image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination