CN117278802A - Video clip trace comparison method and device

Info

Publication number: CN117278802A
Application number: CN202311568483.0A
Authority: CN (China)
Prior art keywords: clip, array, target segment, video, fragment
Legal status: Granted, Active (the legal status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Other languages: Chinese (zh)
Other versions: CN117278802B
Inventors: 彭斌斌 (Peng Binbin), 周红丽 (Zhou Hongli), 肖中渠 (Xiao Zhongqu), 关捷 (Guan Jie), 熊爱平 (Xiong Aiping)
Original and current assignee: Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Granted publication: CN117278802B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a method and a device for comparing video clip traces. The method comprises: after a first video is clipped into a second video, acquiring a first clip segment array and a second clip segment array; if neither array is an empty set, removing the first element from the first clip segment array to obtain a first target segment, and removing the first element from the second clip segment array to obtain a second target segment; applying the corresponding operations according to the main video IDs and start-stop times of the first target segment and the second target segment, to obtain a first result array and a second result array; and displaying the clip traces between the first video and the second video based on the clip type identifiers of the segments in the first result array and the second result array. Video clip traces can thus be compared without manually sorting out the differences between videos, which improves comparison efficiency and guarantees comparison accuracy.

Description

Video clip trace comparison method and device
Technical Field
The invention relates to the technical field of video processing, in particular to a method and a device for comparing video clip traces.
Background
Video clipping refers to the process of generating a new video from an existing one through operations such as cutting, merging, recombining and re-encoding. Comparing video clip traces is useful in at least the following ways: it helps editors quickly locate the effects before and after video clipping, and it helps reviewers quickly locate and examine the differences between the versions before and after clipping.
At present, the comparison of video clip traces mainly relies on manual work: the differences between video versions are sorted out by hand. This is time-consuming and labor-intensive, prone to oversights, makes the comparison of video clip traces inefficient, and cannot guarantee its accuracy.
Disclosure of Invention
Therefore, embodiments of the invention provide a method and a device for comparing video clip traces, to solve the problems of the existing approach that relies on manually sorting out video clip traces: low comparison efficiency and no guarantee of comparison accuracy.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
an embodiment of the invention discloses a method for comparing video clip traces, which comprises the following steps:
After a first video is clipped into a second video, a first clip segment array and a second clip segment array are obtained, wherein the first clip segment array comprises a plurality of segments constituting the first video, the second clip segment array comprises a plurality of segments constituting the second video, and the first video is a source video or a video obtained by clipping on the basis of the source video;
judging whether the first clip segment array and the second clip segment array are empty sets;
if neither the first clip segment array nor the second clip segment array is an empty set, removing the first element from the first clip segment array to obtain a first target segment, and removing the first element from the second clip segment array to obtain a second target segment;
under the condition that there is time overlap between the first target segment and the second target segment and they have the same main video ID, pushing the first target segment and the second target segment into a first result array corresponding to the first video and a second result array corresponding to the second video respectively, or inserting the first target segment and the second target segment, after time padding, into the first positions of the first clip segment array and the second clip segment array respectively; and returning to the step of judging whether the first clip segment array and the second clip segment array are empty sets;
when the first target segment and the second target segment have the same main video ID but no time overlap, and the first target segment begins at the end of the second target segment, modifying the clip type identifier of the first target segment into a deletion identifier and pushing it into the second result array, modifying the clip type identifier of the second target segment into a deletion identifier and pushing it into the first result array, and returning to the step of judging whether the first clip segment array and the second clip segment array are empty sets;
in the case that the first target segment and the second target segment have different main video IDs, updating the first elements of the first clip segment array and the second clip segment array based on the first target segment and the second target segment, and returning to the step of judging whether the first clip segment array and the second clip segment array are empty sets;
and if the first clip segment array and the second clip segment array are both empty, displaying clip traces between the first video and the second video based on the clip type identifiers of the segments in the first result array and the second result array, wherein the clip traces at least comprise deletion, insertion, and/or replacement.
Preferably, when there is time overlap between the first target segment and the second target segment and they have the same main video ID, the pushing of the first target segment and the second target segment into the first result array corresponding to the first video and the second result array corresponding to the second video respectively, or the inserting of the first target segment and the second target segment, after time padding, into the first positions of the first clip segment array and the second clip segment array respectively, includes:
when the first target segment and the second target segment have the same main video ID and the same start-stop times, pushing the first target segment into a first result array corresponding to the first video and pushing the second target segment into a second result array corresponding to the second video;
inserting a first portion and a second portion of the first target segment into the first position of the first clip segment array when the first target segment begins before the second target segment; padding the second target segment with the first portion of the first target segment, and inserting the padded second target segment into the first position of the second clip segment array; wherein the time period corresponding to the first portion of the first target segment runs from the start time of the first target segment to the start time of the second target segment, and the time period corresponding to the second portion runs from the start time of the second target segment to the end time of the first target segment;
inserting a first portion and a second portion of the second target segment into the first position of the second clip segment array when the second target segment begins before the first target segment; padding the first target segment with the first portion of the second target segment, and inserting the padded first target segment into the first position of the first clip segment array; wherein the time period corresponding to the first portion of the second target segment runs from the start time of the second target segment to the start time of the first target segment, and the time period corresponding to the second portion runs from the start time of the first target segment to the end time of the second target segment;
when the first target segment ends before the second target segment, creating a first new item to pad the first target segment, and inserting the padded first target segment into the first position of the first clip segment array, wherein the time period corresponding to the first new item runs from the end time of the first target segment to the end time of the second target segment;
when the first target segment ends after the second target segment, creating a second new item to pad the second target segment, and inserting the padded second target segment into the first position of the second clip segment array, wherein the time period corresponding to the second new item runs from the end time of the second target segment to the end time of the first target segment.
Preferably, in the case that the first target segment and the second target segment have different main video IDs, the process of updating the first elements of the first clip segment array and the second clip segment array based on the first target segment and the second target segment includes:
in the case that the first target segment and the second target segment have different main video IDs, copying the second target segment when the main video ID of the first target segment is the same as the main video ID of the source video; setting the clip type identifier of the copied second target segment to a deletion identifier and inserting the copy into the head of the first clip segment array, then modifying the clip type identifier of the second target segment to an insertion identifier and inserting it into the head of the second clip segment array;
copying the first target segment when the main video ID of the second target segment is the same as the main video ID of the source video; setting the clip type identifier of the copied first target segment to a deletion identifier and inserting the copy into the head of the second clip segment array, then modifying the clip type identifier of the first target segment to a replacement identifier and inserting it into the head of the first clip segment array.
Preferably, after determining whether the first clip array and the second clip array are empty sets, the method further includes:
if the first clip segment array is an empty set and the second clip segment array is not an empty set, modifying the clip type identifiers of the segments remaining in the second clip segment array to insertion identifiers and pushing them into a first result array corresponding to the first video, and pushing the segments remaining in the second clip segment array into a second result array corresponding to the second video;
if the first clip segment array is not an empty set and the second clip segment array is an empty set, modifying the clip type identifiers of the segments remaining in the first clip segment array to deletion identifiers and pushing them into the second result array, and pushing the segments remaining in the first clip segment array into the first result array.
Preferably, if the first clip segment array and the second clip segment array are both empty, the displaying of clip traces between the first video and the second video based on the clip type identifiers of the segments in the first result array and the second result array includes:
if the first clip segment array and the second clip segment array are both empty, displaying clip traces between the first video and the second video in the form of video tracks, based on the clip type identifiers of the segments in the first result array and the second result array.
The second aspect of the embodiment of the invention discloses a device for comparing video clip traces, which comprises:
an acquisition unit, configured to acquire a first clip segment array and a second clip segment array after a first video is clipped into a second video, wherein the first clip segment array comprises a plurality of segments constituting the first video, the second clip segment array comprises a plurality of segments constituting the second video, and the first video is a source video or a video obtained by clipping on the basis of the source video;
a judging unit, configured to judge whether the first clip segment array and the second clip segment array are empty sets;
a shift-out unit, configured to, if neither the first clip segment array nor the second clip segment array is an empty set, shift the first element out of the first clip segment array to obtain a first target segment and shift the first element out of the second clip segment array to obtain a second target segment;
a first processing unit, configured to, when there is time overlap between the first target segment and the second target segment and they have the same main video ID, push the first target segment and the second target segment into a first result array corresponding to the first video and a second result array corresponding to the second video respectively, or insert the first target segment and the second target segment, after time padding, into the first positions of the first clip segment array and the second clip segment array respectively, and return to executing the judging unit;
a second processing unit, configured to, when the first target segment and the second target segment have the same main video ID but no time overlap, and the first target segment begins at the end of the second target segment, modify the clip type identifier of the first target segment to a deletion identifier and push it into the second result array, modify the clip type identifier of the second target segment to a deletion identifier and push it into the first result array, and return to executing the judging unit;
a third processing unit, configured to, in a case where the first target segment and the second target segment have different main video IDs, update first elements of the first clip segment array and the second clip segment array based on the first target segment and the second target segment, and return to execute the determining unit;
and a display unit, configured to display the clip traces between the first video and the second video based on the clip type identifiers of the segments in the first result array and the second result array if the first clip segment array and the second clip segment array are both empty, wherein the clip traces at least comprise deletion, insertion, and/or replacement.
Preferably, the first processing unit includes:
a pushing module, configured to, when there is time overlap between the first target segment and the second target segment and they have the same main video ID, push the first target segment into the first result array corresponding to the first video and push the second target segment into the second result array corresponding to the second video when the start-stop times of the first target segment and the second target segment are the same;
a first processing module, configured to insert a first portion and a second portion of the first target segment into the first position of the first clip segment array when the first target segment begins before the second target segment; pad the second target segment with the first portion of the first target segment, and insert the padded second target segment into the first position of the second clip segment array; wherein the time period corresponding to the first portion of the first target segment runs from the start time of the first target segment to the start time of the second target segment, and the time period corresponding to the second portion runs from the start time of the second target segment to the end time of the first target segment;
a second processing module, configured to insert a first portion and a second portion of the second target segment into the first position of the second clip segment array when the second target segment begins before the first target segment; pad the first target segment with the first portion of the second target segment, and insert the padded first target segment into the first position of the first clip segment array; wherein the time period corresponding to the first portion of the second target segment runs from the start time of the second target segment to the start time of the first target segment, and the time period corresponding to the second portion runs from the start time of the first target segment to the end time of the second target segment;
a third processing module, configured to, when the first target segment ends before the second target segment, create a first new item to pad the first target segment and insert the padded first target segment into the first position of the first clip segment array, wherein the time period corresponding to the first new item runs from the end time of the first target segment to the end time of the second target segment;
and a fourth processing module, configured to, when the first target segment ends after the second target segment, create a second new item to pad the second target segment and insert the padded second target segment into the first position of the second clip segment array, wherein the time period corresponding to the second new item runs from the end time of the second target segment to the end time of the first target segment.
Preferably, the third processing unit includes:
a first processing module, configured to, in the case that the first target segment and the second target segment have different main video IDs, copy the second target segment when the main video ID of the first target segment is the same as the main video ID of the source video; set the clip type identifier of the copied second target segment to a deletion identifier and insert the copy into the head of the first clip segment array, then modify the clip type identifier of the second target segment to an insertion identifier and insert it into the head of the second clip segment array;
and a second processing module, configured to copy the first target segment when the main video ID of the second target segment is the same as the main video ID of the source video; set the clip type identifier of the copied first target segment to a deletion identifier and insert the copy into the head of the second clip segment array, then modify the clip type identifier of the first target segment to a replacement identifier and insert it into the head of the first clip segment array.
Preferably, the device further includes:
a fourth processing unit, configured to, if the first clip segment array is an empty set and the second clip segment array is not an empty set, modify the clip type identifiers of the segments remaining in the second clip segment array to insertion identifiers and push them into the first result array corresponding to the first video, and push the segments remaining in the second clip segment array into the second result array corresponding to the second video;
and a fifth processing unit, configured to, if the first clip segment array is not an empty set and the second clip segment array is an empty set, modify the clip type identifiers of the segments remaining in the first clip segment array to deletion identifiers and push them into the second result array, and push the segments remaining in the first clip segment array into the first result array.
Preferably, the display unit is specifically configured to: and if the first clip fragment array and the second clip fragment array are empty, displaying clip traces between the first video and the second video in a video track mode based on clip type identifiers of the fragments in the first result array and the second result array.
The method and device for comparing video clip traces provided by the embodiments of the invention work as follows: after a first video is clipped into a second video, a first clip segment array and a second clip segment array are acquired; whether the two arrays are empty sets is judged; if neither is empty, the first element is removed from the first clip segment array to obtain a first target segment, and the first element is removed from the second clip segment array to obtain a second target segment; the corresponding operations are applied according to the main video IDs and start-stop times of the two target segments, yielding a first result array corresponding to the first video and a second result array corresponding to the second video; and the clip traces between the first video and the second video are displayed based on the clip type identifiers of the segments in the two result arrays. Video clip traces can thus be compared without manually sorting out the differences between videos, which improves comparison efficiency and guarantees comparison accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for comparing video clip traces provided in an embodiment of the present invention;
FIG. 2 (A) is an exemplary diagram of a first video track provided by an embodiment of the present invention; FIG. 2 (B) is an exemplary diagram of a second video track provided by an embodiment of the present invention;
FIGS. 3 (A) -3 (D) are diagrams illustrating comparison results of clip traces according to embodiments of the present invention;
FIG. 4 is another flow chart of a method for comparing video clip traces provided by an embodiment of the present invention;
FIG. 5 is a structural block diagram of a device for comparing video clip traces according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In this application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises it.
As noted in the background, video clipping refers to the process of generating a new video from an existing one through operations such as cutting, merging, recombining and re-encoding. Comparing video clip traces is useful in at least the following ways: it intuitively displays the cutting, merging, recombining, re-encoding and other operations performed between the versions before and after clipping; it helps editors quickly locate the effects before and after video clipping; and it helps reviewers quickly locate the differences between the delivered versions before and after clipping. At present, the comparison of video clip traces mainly relies on manual work, i.e., the differences between video versions are sorted out by hand; this is time-consuming and labor-intensive, prone to oversights, inefficient, and cannot guarantee the accuracy of the comparison.
Therefore, this scheme provides a method and a device for comparing video clip traces. After a first video is clipped into a second video, a first clip segment array and a second clip segment array are obtained. If neither array is empty, the first element is shifted out of the first clip segment array to obtain a first target segment, and the first element is shifted out of the second clip segment array to obtain a second target segment. The corresponding operations are applied according to the main video IDs and start-stop times of the first and second target segments, yielding a first result array corresponding to the first video and a second result array corresponding to the second video. Based on the clip type identifiers of the segments in the two result arrays, the clip traces between the first video and the second video are displayed. Video clip traces can thus be compared without manually sorting out the differences between videos, which improves comparison efficiency and guarantees comparison accuracy.
In practical application, the scheme can compare video clip traces based on a transcoding project file, i.e., the file that records the parameters used to turn a clipping project into calls to audio/video processing software (such as ffmpeg). The scheme is described in detail through the following embodiments.
Referring to fig. 1, a flowchart of a method for comparing video clip traces provided by an embodiment of the present invention is shown, the method includes the following steps:
step S101: and after the first video is clipped into the second video, a first clip array and a second clip array are obtained.
In the specific implementation of step S101, after the first video (the video before clipping) is clipped into the second video (the video after clipping), a first clip segment array and a second clip segment array are obtained; the first clip segment array comprises the segments (i.e., clips) constituting the first video, and the second clip segment array comprises the segments constituting the second video; the first video is the source video or a video obtained by clipping on the basis of the source video.
It should be noted that the first video and the second video are the two videos whose clip traces need to be compared; the first clip segment array may be denoted basevideo and the second clip segment array diffvideo; basevideo and diffvideo are the arrays of clip segments that need to be aligned for comparing clip traces.
A first result array (denoted basevideo_) corresponding to the first video and a second result array (denoted diffvideo_) corresponding to the second video are constructed in advance; the result arrays store the clip segments after padding and synchronous alignment.
Each clip segment in the first clip segment array and the second clip segment array carries a FileId, i.e., the main video ID, which is used to determine whether a segment's attribute is insert (also called mid-insert, add), replace (rep), or delete (del); the sketch below fixes a concrete shape for these records.
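For concreteness, the sketches in the remainder of this description use the following TypeScript shape for a clip segment. The field names (fileId, start, end, clipType) are illustrative assumptions, not taken from the patent or from any particular transcoding project file format.

    // Hypothetical clip-segment record; field names are assumptions.
    type ClipType = "init" | "add" | "rep" | "del";

    interface Clip {
      fileId: string;     // main video ID (FileId) of the video the segment comes from
      start: number;      // segment start time, e.g. in seconds
      end: number;        // segment end time
      clipType: ClipType; // clip type identifier: init / add / rep / del
    }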
Step S102: it is determined whether the first clip array and the second clip array are empty sets. If neither the first clip array nor the second clip array is empty, executing step S103-step S106; if the first clip array and the second clip array are empty, step S107 is performed.
In the specific implementation of step S102, it is determined whether the first clip segment array and the second clip segment array are empty sets, that is, whether both arrays have been exhausted.
If neither the first clip segment array nor the second clip segment array is empty (neither array is exhausted), steps S103-S106 are performed.
If both the first clip segment array and the second clip segment array are empty (both arrays are exhausted), step S107 is performed.
In some embodiments, while elements are being taken from the first and second clip segment arrays for comparison (step S103 below), the first target segment or the second target segment may be missing (i.e., the first or the second clip segment array is exhausted). In that case, the remaining elements are pushed into the result arrays, with their clip type identifiers modified (to add or del) as appropriate. In a specific implementation, if the first clip segment array is an empty set and the second clip segment array is not, the clip type identifiers of the segments remaining in the second clip segment array are modified to insertion identifiers and the modified copies are pushed into the first result array corresponding to the first video, while the segments remaining in the second clip segment array are pushed into the second result array corresponding to the second video.
If the first clip segment array is not an empty set and the second clip segment array is, the clip type identifiers of the segments remaining in the first clip segment array are modified to deletion identifiers and the modified copies are pushed into the second result array, while the segments remaining in the first clip segment array are pushed into the first result array.
That is, a segment's clip type identifier is modified when it is added to the counterpart's result array, and left unchanged when it is added to its own result array; a code sketch of this draining step follows.
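A minimal sketch of the draining step, using the Clip shape assumed above (drainRemaining is a hypothetical helper name):

    // Drain whichever array is still non-empty. The copy pushed into the
    // counterpart result array gets a rewritten clip type identifier; the
    // segment pushed into its own result array keeps its identifier.
    function drainRemaining(
      baseVideos: Clip[], baseVideos_: Clip[],
      diffVideos: Clip[], diffVideos_: Clip[],
    ): void {
      while (diffVideos.length > 0) {                   // first array exhausted
        const item = diffVideos.shift()!;
        baseVideos_.push({ ...item, clipType: "add" }); // insertion identifier for the base side
        diffVideos_.push(item);
      }
      while (baseVideos.length > 0) {                   // second array exhausted
        const item = baseVideos.shift()!;
        diffVideos_.push({ ...item, clipType: "del" }); // deletion identifier for the diff side
        baseVideos_.push(item);
      }
    }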
Step S103: if neither the first clip array nor the second clip array is empty, the first element is moved out of the first clip array to obtain a first target segment, and the first element is moved out of the second clip array to obtain a second target segment.
In the specific implementation of step S103, if neither the first clip segment array nor the second clip segment array is empty, the first element is shifted out of the first clip segment array and stored as baseItem, and the first element is shifted out of the second clip segment array and stored as diffItem; baseItem is the first target segment and diffItem is the second target segment.
After the first target segment and the second target segment are obtained, different measures are taken according to whether the first target segment and the second target segment have the same main video ID (fileID), whether there is time overlap between the first target segment and the second target segment, and the like, and the specific content of these measures is described in the following steps S104-S106.
Step S104: under the condition that time overlapping exists between the first target segment and the second target segment and the same main video ID exists, the first target segment and the second target segment are respectively pushed into a first result array corresponding to the first video and a second result array corresponding to the second video, or the first target segment and the second target segment are respectively inserted into the first positions of the first clip segment array and the second clip segment array after time filling; the process returns to step S102.
In the specific implementation of step S104, when there is time overlap between the first target segment and the second target segment and they have the same main video ID (that is, when baseItem and diffItem have the same FileId and their times overlap), the first target segment and the second target segment are pushed into the first result array (basevideo_) corresponding to the first video and the second result array (diffvideo_) corresponding to the second video respectively, and the process returns to step S102 for the next round of comparison;
or, after time padding, the first target segment and the second target segment are inserted into the first positions of the first clip segment array and the second clip segment array respectively, and the process returns to step S102 to continue the next round of comparison.
In some embodiments, when there is time overlap between the first target segment and the second target segment and they have the same main video ID, the specific measures taken are as follows (a code sketch follows this list):
when the start and stop time of the first target segment and the second target segment are identical, the first target segment is pushed to a first result array (basevideo_) corresponding to the first video, and the second target segment is pushed to a second result array (diffvideo_) corresponding to the second video.
When the first target segment begins before the second target segment, its first portion and second portion are inserted into the first position of the first clip segment array; the first portion of the first target segment is padded onto the second target segment, and the padded second target segment is inserted into the first position of the second clip segment array. The time period corresponding to the first portion of the first target segment runs from the start time of the first target segment to the start time of the second target segment, and the second portion covers the remainder, from the start time of the second target segment to the end time of the first target segment.
When the second target segment begins before the first target segment, its first portion and second portion are inserted into the first position of the second clip segment array; the first portion of the second target segment is padded onto the first target segment, and the padded first target segment is inserted into the first position of the first clip segment array. The time period corresponding to the first portion of the second target segment runs from the start time of the second target segment to the start time of the first target segment, and the second portion covers the remainder, from the start time of the first target segment to the end time of the second target segment.
When the first target segment ends before the second target segment, a first new item is created to pad the first target segment, and the padded first target segment is inserted into the first position of the first clip segment array; the time period corresponding to the first new item runs from the end time of the first target segment to the end time of the second target segment, i.e., the first new item represents the time period after the first target segment ends.
When the first target segment ends after the second target segment, a second new item is created to pad the second target segment, and the padded second target segment is inserted into the first position of the second clip segment array; the time period corresponding to the second new item runs from the end time of the second target segment to the end time of the first target segment, i.e., the second new item represents the time period after the second target segment ends.
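The following sketch shows one self-consistent reading of this same-main-video-ID, time-overlap branch. The splitting of the longer segment in the equal-start cases, and the clip type identifiers kept by the padded copies, are assumptions made so that later rounds always compare segments with identical start-stop times:

    // Align two overlapping segments with the same FileId; adjusted pieces
    // are re-inserted at the heads of the input arrays for the next round.
    function alignOverlap(
      baseItem: Clip, diffItem: Clip,
      baseVideos: Clip[], baseVideos_: Clip[],
      diffVideos: Clip[], diffVideos_: Clip[],
    ): void {
      if (baseItem.start === diffItem.start && baseItem.end === diffItem.end) {
        baseVideos_.push(baseItem);  // identical start-stop times: push directly
        diffVideos_.push(diffItem);
      } else if (baseItem.start < diffItem.start) {
        // baseItem begins first: split it and pad diffItem with the first part
        const first = { ...baseItem, end: diffItem.start };
        const second = { ...baseItem, start: diffItem.start };
        baseVideos.unshift(first, second);
        diffVideos.unshift({ ...first }, diffItem);
      } else if (diffItem.start < baseItem.start) {
        // diffItem begins first: the symmetric case
        const first = { ...diffItem, end: baseItem.start };
        const second = { ...diffItem, start: baseItem.start };
        diffVideos.unshift(first, second);
        baseVideos.unshift({ ...first }, baseItem);
      } else if (baseItem.end < diffItem.end) {
        // equal starts, baseItem ends first: a new item pads the base side
        const newItem = { ...baseItem, start: baseItem.end, end: diffItem.end };
        baseVideos.unshift(baseItem, newItem);
        diffVideos.unshift(                    // assumed split of diffItem
          { ...diffItem, end: baseItem.end },
          { ...diffItem, start: baseItem.end },
        );
      } else {
        // equal starts, diffItem ends first: the symmetric case
        const newItem = { ...diffItem, start: diffItem.end, end: baseItem.end };
        diffVideos.unshift(diffItem, newItem);
        baseVideos.unshift(
          { ...baseItem, end: diffItem.end },
          { ...baseItem, start: diffItem.end },
        );
      }
    }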
Step S105: when the first target segment has the same main video ID as the second target segment but there is no time overlap, the clip type identifier of the first target segment is modified to be a pruned identifier and then pushed into the second result array, and the clip type identifier of the second target segment is modified to be a pruned identifier and then pushed into the first result array, when the first target segment ends at the second target segment, and the step S102 is executed.
In the specific implementation of step S105, when the first target segment starts at the end of the second target segment (corresponding to the first target segment starting immediately after the end of the second target segment) and the same main video ID exists between the first target segment and the second target segment, the clip type identifier of the first target segment is modified to be a pruning identifier and then pushed into the second result array, and the clip type identifier of the second target segment is modified to be a pruning identifier and then pushed into the first result array, and the next round of comparison is performed in step S102.
It should be noted that, when the first target segment does not exist, the clip type identifier of the second target segment is modified to an insertion identifier and then pushed into the first result array. A sketch of the no-overlap branch follows.
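A minimal sketch of the no-overlap branch of step S105, following the crosswise pushes described above:

    // Same FileId, no time overlap: each segment appears on the opposite
    // track as deleted content.
    function handleNoOverlap(
      baseItem: Clip, diffItem: Clip,
      baseVideos_: Clip[], diffVideos_: Clip[],
    ): void {
      diffVideos_.push({ ...baseItem, clipType: "del" }); // baseItem absent from the second video
      baseVideos_.push({ ...diffItem, clipType: "del" }); // diffItem absent from the first video
    }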
Step S106: in the case where the first target segment and the second target segment have different main video IDs, the first element of the first clip segment array and the second clip segment array is updated based on the first target segment and the second target segment, and the process returns to step S102.
In the specific implementation process of step S106, in the case that the first target segment and the second target segment have different main video IDs, updating the first element of the first clip segment array and the second clip segment array based on the first target segment and the second target segment, and returning to execute step S102 to continue the next round of comparison.
In a specific implementation, in the case that the first target segment and the second target segment have different main video IDs, the first elements of the first and second clip segment arrays are updated based on the first target segment and the second target segment as follows (a code sketch follows these two cases):
In the case that the first target segment and the second target segment have different main video IDs, the second target segment is copied when the main video ID of the first target segment is the same as the main video ID of the source video; the clip type identifier of the copied second target segment is set to a deletion identifier (del) and the copy is inserted into the head of the first clip segment array, and the clip type identifier of the second target segment itself is modified to an insertion identifier (add) before it is inserted into the head of the second clip segment array.
The first target segment is copied when the main video ID of the second target segment is the same as the main video ID of the source video; the clip type identifier of the copied first target segment is set to a deletion identifier and the copy is inserted into the head of the second clip segment array, and the clip type identifier of the first target segment itself is modified to a replacement identifier (rep) before it is inserted into the head of the first clip segment array.
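A sketch of this different-main-video-ID branch. Re-inserting the shifted-out segment behind the del-marked copy is an assumption; the patent text leaves the fate of that segment implicit:

    // Different FileIds: pad the side still on the source video with a
    // del-marked copy; re-insert the other segment marked add or rep.
    function handleDifferentFileId(
      sourceFileId: string,                 // main video ID of the source video
      baseItem: Clip, diffItem: Clip,
      baseVideos: Clip[], diffVideos: Clip[],
    ): void {
      if (baseItem.fileId === sourceFileId) {
        // content was inserted into the second video
        baseVideos.unshift({ ...diffItem, clipType: "del" }, baseItem);
        diffVideos.unshift({ ...diffItem, clipType: "add" });
      } else if (diffItem.fileId === sourceFileId) {
        // content of the first video was replaced in the second video
        diffVideos.unshift({ ...baseItem, clipType: "del" }, diffItem);
        baseVideos.unshift({ ...baseItem, clipType: "rep" });
      }
      // the patent specifies no handling when neither FileId matches the source
    }

In the next round, the del-marked copy and the re-inserted segment at the two heads have the same FileId and the same start-stop times, so they are pushed straight into the result arrays, which is exactly what produces the black/red segment pairs described for fig. 2 below.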
Step S107: and if the first clip fragment array and the second clip fragment array are empty, displaying the clip trace between the first video and the second video based on the clip type identification of each fragment in the first result array and the second result array.
It should be noted that the clip trace includes at least deletion, insertion, and/or substitution.
In the process of implementing step S107, if the first clip array and the second clip array are empty, displaying the clip trace between the first video and the second video in a video track manner based on the clip type identifiers of the clips in the first result array and the second result array.
Specifically, the first result array and the second result array are obtained through the processing of steps S102-S106. The segments in the two result arrays are in one-to-one correspondence (their start-stop times are identical). Each segment in the first and second result arrays is rendered in turn according to its clip type identifier (segments with different clip type identifiers can be shown in different colors), producing a first video track formed by the segments of the first result array and a second video track formed by the segments of the second result array. The two tracks carry at least the clip type identifier of every segment, and displaying them in the same interface intuitively reflects the clip traces between the first video and the second video.
For example: by the processing of the above steps S101 to S107, a video track as shown in fig. 2 (a) and 2 (B) can be obtained, where fig. 2 (a) is a first video track (also referred to as an upper track) corresponding to a first video, and fig. 2 (B) is a second video track (also referred to as a lower track) corresponding to a second video. The clip type identification of each clip is represented by a different color, where blue (init) represents the source video (main video), red (add, rep) represents the other video, and black (del) represents the deleted clip.
The "|" between the segments in fig. 2 (a) and 2 (B) is the anchor point (moment) at which the two video clip operations are aligned, indicating that the clip operation occurred at the anchor point, and can be intuitively fed back to the operator: at which moments the upper track was clipped becomes the lower track.
In figs. 2(A) and 2(B), a blue segment on the upper track indicates a segment that was not clipped. A black segment on the upper track arises in two ways: when the main video IDs of the baseItem and the diffItem differ and the main video ID of the baseItem is the same as that of the source video, the clip type identifier of a copy of the diffItem is modified to a deletion identifier (del) and the copy is padded into basevideo; or, when the baseItem and the diffItem have no time intersection, the clip type identifier of the diffItem is modified to a deletion identifier (del) and pushed into basevideo_. A red segment on the upper track arises when the baseItem does not exist (basevideo is exhausted) while the diffItem exists: the clip type identifier of the diffItem is modified to an insertion identifier (add, red) and pushed into basevideo_.
A blue segment on the lower track indicates a segment that was not padded in from basevideo and was not clipped. A black segment on the lower track indicates a segment that was simply deleted. A red segment on the lower track indicates a segment with no counterpart on the upper track, whose main video ID differs from that of the source video and whose clip type identifier is an insertion or replacement identifier (rep).
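The color coding described for fig. 2 can be summarised as a simple lookup (TRACK_COLORS is a hypothetical name and the concrete color values are assumptions based on the figure description):

    // Assumed mapping from clip type identifier to track color.
    const TRACK_COLORS: Record<ClipType, string> = {
      init: "blue",  // source (main) video
      add:  "red",   // inserted content
      rep:  "red",   // replacement content
      del:  "black", // deleted content
    };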
In the embodiment of the invention, after the first video is clipped into the second video, a first clip segment array and a second clip segment array are obtained. If neither array is empty, the first element is shifted out of the first clip segment array to obtain a first target segment, and the first element is shifted out of the second clip segment array to obtain a second target segment. The corresponding operations are applied according to the main video IDs and start-stop times of the first and second target segments, yielding a first result array corresponding to the first video and a second result array corresponding to the second video. Based on the clip type identifiers of the segments in the two result arrays, the clip traces between the first video and the second video are displayed. Video clip traces can thus be compared without manually sorting out the differences between videos, which improves comparison efficiency and guarantees comparison accuracy.
In practical applications, the scheme can be implemented by defining a program function; a specific implementation is illustrated by the following processes A1-A5.
A1. An AlignVideos function is defined for padding and synchronously aligning the segments of two videos; the AlignVideos function receives the following five parameters: FileId, baseVideos, baseVideos_, diffVideos, diffVideos_.
Here, FileId is the main video ID (used to distinguish whether a segment's attribute is insert, replace, or delete); baseVideos and diffVideos are the arrays of clip segments whose clip traces need to be aligned and compared; baseVideos_ and diffVideos_ store the clip segments after padding and synchronous alignment.
A2. When the AlignVideos function starts, the first item is shifted out of baseVideos and diffVideos and stored as baseItem and diffItem respectively.
A3. Check whether baseItem and diffItem both exist; if they do, take the following actions depending on whether baseItem and diffItem have the same FileId, whether their times overlap, and so on:
1) If baseItem and diffItem have the same FileId and time overlap:
a. If the baseItem and diffItem are two segments with the same start-stop times, the baseItem is pushed into baseVideos_ and the diffItem is pushed into diffVideos_.
b. If the baseItem begins before the diffItem, split the baseItem into two parts; insert the first and second parts of the baseItem into baseVideos, and pad the first part onto the diffItem before inserting it into diffVideos, so that the baseItem and diffItem fetched in the next round have aligned start times.
c. If diffItem starts before baseItem, processing is performed in a similar manner as in "b" above.
d. If the baseItem ends before the diffItem, a new item is created to represent the time period after the end of the baseItem and pushed into the corresponding array.
e. If the baseItem ends after the diffItem, a new item is created to represent the time period after the diffItem ends and pushed into the corresponding array.
2) If the baseItem and diffItem have the same FileId but no time overlap, and the baseItem begins immediately after the diffItem ends, the clip type identifier of the diffItem is modified to del and pushed into baseVideos_, and the clip type identifier of the baseItem is modified to del and pushed into diffVideos_.
3) If the FileIds of baseItem and diffItem are not the same:
a. If the FileId of the baseItem is the same as the main video ID of the source video, a copy of the diffItem (with its clip type identifier set to del) is inserted into the first position of baseVideos, and then the clip type identifier of the diffItem itself is modified to add and the diffItem is inserted into the first position of diffVideos.
b. If the FileId of the diffItem is the same as the main video ID of the source video, a copy of the baseItem (with its clip type identifier set to del) is inserted into the first position of diffVideos, and then the clip type identifier of the baseItem itself is modified to rep and the baseItem is inserted into the first position of baseVideos.
A4. If the baseItem or the diffItem is missing (i.e., baseVideos or diffVideos is exhausted), the remaining segments are pushed into baseVideos_ and diffVideos_, with their clip type identifiers modified (to add or del) as the situation requires.
A5. The AlignVideos function keeps calling itself recursively until both baseVideos and diffVideos are empty, yielding the final baseVideos_ and diffVideos_ arrays. A condensed sketch of this flow is given below.
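A condensed sketch of the recursive flow of A1-A5, wiring together the helper sketches given earlier; this is an assumed reconstruction, not the patent's literal implementation:

    // Recursively pad and align baseVideos/diffVideos until both are empty.
    function alignVideos(
      fileId: string,                       // main video ID of the source video
      baseVideos: Clip[], baseVideos_: Clip[],
      diffVideos: Clip[], diffVideos_: Clip[],
    ): void {
      if (baseVideos.length === 0 && diffVideos.length === 0) return; // A5: done
      if (baseVideos.length === 0 || diffVideos.length === 0) {       // A4
        drainRemaining(baseVideos, baseVideos_, diffVideos, diffVideos_);
        return;
      }
      const baseItem = baseVideos.shift()!;                           // A2
      const diffItem = diffVideos.shift()!;
      if (baseItem.fileId !== diffItem.fileId) {                      // A3 case 3)
        handleDifferentFileId(fileId, baseItem, diffItem, baseVideos, diffVideos);
      } else if (baseItem.start < diffItem.end && diffItem.start < baseItem.end) {
        alignOverlap(baseItem, diffItem,                              // A3 case 1)
          baseVideos, baseVideos_, diffVideos, diffVideos_);
      } else {
        // A3 case 2); treating any non-overlap this way is an assumption
        handleNoOverlap(baseItem, diffItem, baseVideos_, diffVideos_);
      }
      alignVideos(fileId, baseVideos, baseVideos_, diffVideos, diffVideos_); // A5: recurse
    }

Called with two initially empty result arrays, e.g. alignVideos(sourceId, baseVideos, baseVideos_, diffVideos, diffVideos_), the function would leave the padded, synchronously aligned segments in baseVideos_ and diffVideos_.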
By running the AlignVideos function, the clip trace comparison results shown in figs. 3(A)-3(D) can be obtained. Fig. 3(A) shows the clip trace comparison result (the version difference) between version A and version A1; fig. 3(B) the version difference between version A1 and version A11; fig. 3(C) the version difference between version A and version A11; and fig. 3(D) the version difference between version A11 and version A1.
Note that the blanks (interrupted portions) in figs. 3(A)-3(D) result from the time padding performed to better show the comparison results. Specifically, "-" indicates a continuous segment of the source video, "O", "P" and "I" indicate segments from non-source videos, "X" indicates deleted segments, and the blanks are continuous in the time dimension (mainly so that the alignment effect is easy to see).
A denotes the source video; a video clipped from A is named by appending a digit to A: clipping A yields A1, and clipping A1 yields A11.
To better explain how the AlignVideos function runs, fig. 4 shows another flowchart of the video clip trace comparison method, comprising the following steps:
step S401: the alignvideo function is run.
Step S402: determine whether both baseVideos and diffVideos are non-empty. If not, push the remaining segments of the non-empty array into the corresponding result arrays; if yes, go to step S403.
Step S403: shift the first elements out of baseVideos and diffVideos to obtain baseItem and diffItem.
Step S404: check whether the FileIds of baseItem and diffItem are equal. If they are not equal, segments are padded as follows: 1. a copy of the segment whose FileId differs from that of the source video is used, by preference, to pad the other array; 2. the segment itself is re-inserted; the process then returns to step S401. If they are equal, step S405 is performed.
Step S405: check whether baseItem and diffItem overlap in time. If there is no time overlap, the segments are padded (specifically, only forward padding, i.e., the earlier segment is padded onto the other side), and the process returns to step S401. If there is time overlap, step S406 is performed.
Step S406: check whether the start-stop times of baseItem and diffItem are equal. If they are not equal, the following is executed: 1. partition the segments, i.e., separate the time-overlapping parts; 2. pad the segments (forward padding only); 3. insert the partitioned and padded segments, in order, at the heads of their respective arrays; the process then returns to step S401. If the start-stop times are equal, step S407 is performed.
Step S407: push the baseItem into baseVideos_ and the diffItem into diffVideos_, and return to step S401.
The above illustrates the operation of the AlignVideos function; the execution principles of steps S401-S407 have been described in detail in the embodiments above and are not repeated here.
Corresponding to the method for comparing video clip traces provided by the embodiments of the present invention, and referring to fig. 5, an embodiment of the present invention further provides a device for comparing video clip traces, which includes: an acquisition unit 501, a judging unit 502, a shift-out unit 503, a first processing unit 504, a second processing unit 505, a third processing unit 506, and a display unit 507.
the acquisition unit 501 is configured to acquire a first clip array and a second clip array after a first video is clipped into a second video, where the first clip array includes a plurality of clips composing the first video, the second clip array includes a plurality of clips composing the second video, and the first video is the source video or a video obtained by clipping on the basis of the source video.
The judging unit 502 is configured to judge whether the first clip array and the second clip array are empty sets.
The shift-out unit 503 is configured to, if neither the first clip array nor the second clip array is empty, shift the first element out of the first clip array to obtain a first target segment, and shift the first element out of the second clip array to obtain a second target segment.
The first processing unit 504 is configured to, in the case that the first target segment and the second target segment overlap in time and have the same main video ID, push the first target segment and the second target segment into a first result array corresponding to the first video and a second result array corresponding to the second video respectively, or insert them, after time padding, at the heads of the first clip array and the second clip array respectively; and then return to the judging unit 502.
The second processing unit 505 is configured to, in the case that the first target segment and the second target segment have the same main video ID but do not overlap in time, when the first target segment ends before the second target segment starts, modify the clip type identifier of the first target segment to a deletion identifier and push it into the second result array, modify the clip type identifier of the second target segment to a deletion identifier and push it into the first result array, and then return to the judging unit 502.
The third processing unit 506 is configured to, in the case that the first target segment and the second target segment have different main video IDs, update the first elements of the first clip array and the second clip array based on the first target segment and the second target segment, and then return to the judging unit 502.
The display unit 507 is configured to, if the first clip array and the second clip array are both empty, display the clip trace between the first video and the second video based on the clip type identifiers of the clips in the first result array and the second result array, where the clip trace includes at least deletion, insertion, and/or replacement.
In a specific implementation, the display unit 507 is specifically configured to: if the first clip array and the second clip array are both empty, display the clip trace between the first video and the second video in the form of video tracks, based on the clip type identifiers of the clips in the first result array and the second result array; a toy rendering sketch is given below.
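As a toy illustration of the video-track display, the sketch below maps a result array onto a row of symbols in the spirit of the legend given for fig. 3 (A) -3 (D). The Clip shape is the one assumed in the earlier sketch (repeated so the snippet stands alone); the one-character-per-second scale and the rule of drawing a non-source segment with the first letter of its fileId are purely illustrative assumptions.

```typescript
// Assumed Clip shape, repeated so the snippet stands alone.
interface Clip {
  fileId: string;
  start: number;
  end: number;
  clipType: "normal" | "deleted" | "inserted" | "replaced";
}

// Toy renderer: one character per second of segment duration.
// "-" draws a source-video segment, "X" a deleted segment, and the first
// letter of the fileId a non-source segment (echoing fig. 3's "O"/"P"/"I").
function renderTrack(result: Clip[], sourceFileId: string): string {
  return result
    .map((clip) => {
      const width = Math.max(1, Math.round(clip.end - clip.start));
      const symbol =
        clip.clipType === "deleted" ? "X"
        : clip.fileId === sourceFileId ? "-"
        : clip.fileId.charAt(0).toUpperCase();
      return symbol.repeat(width);
    })
    .join("");
}

// Example: 5 s of source material, 3 s deleted, 2 s inserted from "O.mp4".
const track = renderTrack(
  [
    { fileId: "A", start: 0, end: 5, clipType: "normal" },
    { fileId: "A", start: 5, end: 8, clipType: "deleted" },
    { fileId: "O.mp4", start: 0, end: 2, clipType: "inserted" },
  ],
  "A",
);
console.log(track); // "-----XXXOO"
```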
In the embodiment of the invention, after the first video is clipped into the second video, a first clip array and a second clip array are acquired. If neither array is empty, the first element is shifted out of the first clip array to obtain a first target segment, and the first element is shifted out of the second clip array to obtain a second target segment. Corresponding operations are then applied according to the main video IDs and the start and stop times of the two target segments, yielding a first result array corresponding to the first video and a second result array corresponding to the second video. The clip trace between the first video and the second video is displayed based on the clip type identifiers of the clips in the result arrays. Video clip traces can thus be compared without manually sorting out the differences between videos, which improves the efficiency of the comparison while also ensuring its accuracy.
Preferably, in conjunction with the content shown in fig. 5, the first processing unit 504 includes a pushing module, a first processing module, a second processing module, a third processing module, and a fourth processing module; the execution principles of the modules are as follows:
the pushing module is configured to, in the case that the first target segment and the second target segment overlap in time and have the same main video ID, push the first target segment into the first result array corresponding to the first video and push the second target segment into the second result array corresponding to the second video when the two segments have the same start and stop times;
The first processing module is configured to, when the first target segment begins before the second target segment, insert a first portion and a second portion of the first target segment at the head of the first clip array, supplement the first portion of the first target segment to the second target segment, and insert the supplemented second target segment at the head of the second clip array; the time period corresponding to the first portion of the first target segment runs from the start time of the first target segment to the start time of the second target segment, and the time period corresponding to the second portion runs from the start time of the second target segment to the end time of the first target segment.
The second processing module is configured to, when the second target segment begins before the first target segment, insert a first portion and a second portion of the second target segment at the head of the second clip array, supplement the first portion of the second target segment to the first target segment, and insert the supplemented first target segment at the head of the first clip array; the time period corresponding to the first portion of the second target segment runs from the start time of the second target segment to the start time of the first target segment, and the time period corresponding to the second portion runs from the start time of the first target segment to the end time of the second target segment.
The third processing module is configured to, when the first target segment ends before the second target segment, create a first new item to pad the first target segment and insert the padded first target segment at the head of the first clip array, where the time period corresponding to the first new item runs from the end time of the first target segment to the end time of the second target segment.
The fourth processing module is configured to, when the first target segment ends after the second target segment, create a second new item to pad the second target segment and insert the padded second target segment at the head of the second clip array, where the time period corresponding to the second new item runs from the end time of the second target segment to the end time of the first target segment. A sketch of these four modules is given below.
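One possible reading of these four modules, continuing the earlier sketch, is the splitOverlap helper below. The splitting of the longer segment's tail on both sides and the "deleted" marking of the padded placeholders are assumptions made here so that the loop terminates and the tracks stay time-aligned; they are not details prescribed verbatim by the embodiment.

```typescript
// Reusing the assumed Clip shape (repeated so the snippet stands alone).
interface Clip {
  fileId: string;
  start: number;
  end: number;
  clipType: "normal" | "deleted" | "inserted" | "replaced";
}

// Step S406 branch: same fileId, temporal overlap, unequal start/stop times.
// Non-overlapping parts are split off and padded to the other side, and all
// pieces go back to the heads of the arrays for the next loop iteration.
function splitOverlap(baseItem: Clip, diffItem: Clip,
                      basevideo: Clip[], diffvideo: Clip[]): void {
  if (baseItem.start < diffItem.start) {
    // The first target begins earlier: split it at the second target's
    // start, and supplement its leading part to the other side as a
    // "deleted" placeholder (forward padding).
    const part1 = { ...baseItem, end: diffItem.start };
    const part2 = { ...baseItem, start: diffItem.start };
    basevideo.unshift(part1, part2);
    diffvideo.unshift({ ...part1, clipType: "deleted" }, diffItem);
  } else if (diffItem.start < baseItem.start) {
    // Symmetric case: the second target begins earlier.
    const part1 = { ...diffItem, end: baseItem.start };
    const part2 = { ...diffItem, start: baseItem.start };
    diffvideo.unshift(part1, part2);
    basevideo.unshift({ ...part1, clipType: "deleted" }, baseItem);
  } else if (baseItem.end < diffItem.end) {
    // Starts aligned, first target ends earlier: a new item covering the
    // tail pads the first target; splitting the second target at the same
    // point is an assumption added so the loop terminates.
    basevideo.unshift(baseItem, { ...diffItem, start: baseItem.end, clipType: "deleted" });
    diffvideo.unshift({ ...diffItem, end: baseItem.end }, { ...diffItem, start: baseItem.end });
  } else {
    // Symmetric tail padding when the first target ends later.
    diffvideo.unshift(diffItem, { ...baseItem, start: diffItem.end, clipType: "deleted" });
    basevideo.unshift({ ...baseItem, end: diffItem.end }, { ...baseItem, start: diffItem.end });
  }
}
```

With this reading, each call strictly shortens the unmatched portion at the heads of the two arrays, so the main loop always reaches the fully aligned case of step S407.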
Preferably, in conjunction with the content shown in fig. 5, the third processing unit 506 includes a first processing module and a second processing module; the execution principles of the modules are as follows:
the first processing module is configured to, in the case that the first target segment and the second target segment have different main video IDs, copy the second target segment when the main video ID of the first target segment is the same as the main video ID of the source video, set the clip type identifier of the copied second target segment to a deletion identifier and insert the copy at the head of the first clip array, and modify the clip type identifier of the second target segment to an insertion identifier and insert it at the head of the second clip array.
The second processing module is configured to copy the first target segment when the main video ID of the second target segment is the same as the main video ID of the source video, set the clip type identifier of the copied first target segment to a deletion identifier and insert the copy at the head of the second clip array, and modify the clip type identifier of the first target segment to a replacement identifier and insert it at the head of the first clip array. A sketch of this branch is given below.
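Continuing the sketch, the step S404 branch might look as follows. The module-level SOURCE_FILE_ID constant, the re-insertion of the surviving target segment behind the padded copy, and the concrete clipType values are assumptions made for illustration.

```typescript
// Assumed Clip shape, repeated so the snippet stands alone.
interface Clip {
  fileId: string;
  start: number;
  end: number;
  clipType: "normal" | "deleted" | "inserted" | "replaced";
}

// Assumption: the source video's main video ID is known up front.
const SOURCE_FILE_ID = "A";

// Step S404 branch: the two head segments come from different source files.
function padDifferentSource(baseItem: Clip, diffItem: Clip,
                            basevideo: Clip[], diffvideo: Clip[]): void {
  if (baseItem.fileId === SOURCE_FILE_ID) {
    // The first target is source material, so the second target is foreign
    // content: a copy marked "deleted" leads the first array (it pairs with
    // the "inserted" original on the next iteration), and the first target
    // is re-inserted behind it for a later comparison (assumption).
    basevideo.unshift({ ...diffItem, clipType: "deleted" }, baseItem);
    diffvideo.unshift({ ...diffItem, clipType: "inserted" });
  } else {
    // Symmetric: the second target is source material and the first target
    // is foreign content, which is marked as a replacement.
    diffvideo.unshift({ ...baseItem, clipType: "deleted" }, diffItem);
    basevideo.unshift({ ...baseItem, clipType: "replaced" });
  }
}
```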
Preferably, in combination with the content shown in fig. 5, the apparatus further includes:
The fourth processing unit is configured to, if the first clip array is an empty set and the second clip array is not, modify the clip type identifiers of the remaining clips in the second clip array to insertion identifiers and push the modified clips into the first result array corresponding to the first video, and push the remaining clips in the second clip array into the second result array corresponding to the second video.
The fifth processing unit is configured to, if the first clip array is not an empty set and the second clip array is, modify the clip type identifiers of the remaining clips in the first clip array to deletion identifiers and push the modified clips into the second result array, and push the remaining clips in the first clip array into the first result array. A sketch of this drain step is given below.
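These two units refine the plain drain at the end of the earlier loop sketch; a hedged version might look like this, where pushing the same leftover segments into both result arrays follows the description above and the Clip shape is again the assumed one.

```typescript
// Assumed Clip shape, repeated so the snippet stands alone.
interface Clip {
  fileId: string;
  start: number;
  end: number;
  clipType: "normal" | "deleted" | "inserted" | "replaced";
}

// Once one clip array is exhausted, the remainder of the other is flushed:
// leftovers of the second array are marked "inserted" on the first video's
// track; leftovers of the first array are marked "deleted" on the second's.
function drainRemainder(basevideo: Clip[], diffvideo: Clip[],
                        basevideo_: Clip[], diffvideo_: Clip[]): void {
  if (basevideo.length === 0 && diffvideo.length > 0) {
    for (const clip of diffvideo) {
      basevideo_.push({ ...clip, clipType: "inserted" }); // first video's track
      diffvideo_.push(clip);                              // second video's track, as-is
    }
    diffvideo.length = 0;
  } else if (diffvideo.length === 0 && basevideo.length > 0) {
    for (const clip of basevideo) {
      diffvideo_.push({ ...clip, clipType: "deleted" });  // second video's track
      basevideo_.push(clip);                              // first video's track, as-is
    }
    basevideo.length = 0;
  }
}
```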
In summary, the embodiment of the invention provides a method and a device for comparing video clip traces. After a first video is clipped into a second video, a first clip array and a second clip array are acquired. If neither array is empty, the first element is shifted out of the first clip array to obtain a first target segment, and the first element is shifted out of the second clip array to obtain a second target segment. Corresponding operations are then applied according to the main video IDs and the start and stop times of the two target segments, yielding a first result array corresponding to the first video and a second result array corresponding to the second video. The clip trace between the two videos is displayed based on the clip type identifiers of the clips in the result arrays, so video clip traces can be compared without manually sorting out the differences between videos, improving both the efficiency and the accuracy of the comparison.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments can be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the system or device embodiment is substantially similar to the method embodiment, its description is relatively brief, and reference may be made to the description of the method embodiment for the relevant parts. The device and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of video clip trace comparison, the method comprising:
after a first video is clipped into a second video, a first clip array and a second clip array are obtained, wherein the first clip array comprises a plurality of clips forming the first video, the second clip array comprises a plurality of clips forming the second video, and the first video is a source video or a video obtained by clipping on the basis of the source video;
judging whether the first clip fragment array and the second clip fragment array are empty sets or not;
if the first clip fragment array and the second clip fragment array are not empty sets, moving out a first element from the first clip fragment array to obtain a first target fragment, and moving out a first element from the second clip fragment array to obtain a second target fragment;
under the condition that there is time overlap between the first target segment and the second target segment and they have the same main video ID, pushing the first target segment and the second target segment into a first result array corresponding to the first video and a second result array corresponding to the second video respectively, or inserting the first target segment and the second target segment at the first bits of the first clip fragment array and the second clip fragment array respectively after time padding; and returning to the step of judging whether the first clip fragment array and the second clip fragment array are empty sets;
under the condition that the first target segment and the second target segment have the same main video ID and there is no time overlap between them, when the first target segment ends before the second target segment starts, modifying the clip type identifier of the first target segment to a deletion identifier and pushing it into the second result array, modifying the clip type identifier of the second target segment to a deletion identifier and pushing it into the first result array, and returning to the step of judging whether the first clip fragment array and the second clip fragment array are empty sets;
under the condition that the first target segment and the second target segment have different main video IDs, updating the first elements of the first clip fragment array and the second clip fragment array based on the first target segment and the second target segment, and returning to the step of judging whether the first clip fragment array and the second clip fragment array are empty sets;
and if the first clip fragment array and the second clip fragment array are empty, displaying the clip trace between the first video and the second video based on the clip type identifiers of the fragments in the first result array and the second result array, wherein the clip trace at least comprises deletion, insertion and/or replacement.
2. The method of claim 1, wherein pushing the first target segment and the second target segment into a first result array corresponding to the first video and a second result array corresponding to the second video respectively, or inserting the first target segment and the second target segment at the first bits of the first clip array and the second clip array respectively after time padding, when there is time overlap between the first target segment and the second target segment and they have the same main video ID, comprises:
when the first target segment and the second target segment have the same main video ID and have the same start and stop times, pushing the first target segment into a first result array corresponding to the first video and pushing the second target segment into a second result array corresponding to the second video;
inserting a first portion and a second portion of the first target segment at the first bit of the first clip array when the first target segment begins before the second target segment, supplementing the first portion of the first target segment to the second target segment, and inserting the supplemented second target segment at the first bit of the second clip segment array; wherein the time period corresponding to the first portion of the first target segment is from the start time of the first target segment to the start time of the second target segment, and the time period corresponding to the second portion of the first target segment is from the start time of the second target segment to the end time of the first target segment;
inserting a first portion and a second portion of the second target segment at the first bit of the second clip segment array when the second target segment begins before the first target segment, supplementing the first portion of the second target segment to the first target segment, and inserting the supplemented first target segment at the first bit of the first clip segment array; wherein the time period corresponding to the first portion of the second target segment is from the start time of the second target segment to the start time of the first target segment, and the time period corresponding to the second portion of the second target segment is from the start time of the first target segment to the end time of the second target segment;
when the first target segment ends before the second target segment, creating a first new item to pad the first target segment, and inserting the padded first target segment at the first bit of the first clip segment array, wherein the time period corresponding to the first new item is from the end time of the first target segment to the end time of the second target segment;
when the first target segment ends after the second target segment, creating a second new item to pad the second target segment, and inserting the padded second target segment at the first bit of the second clip segment array, wherein the time period corresponding to the second new item is from the end time of the second target segment to the end time of the first target segment.
3. The method of claim 1, wherein updating the first elements of the first clip array and the second clip array based on the first target segment and the second target segment, in the case that the first target segment and the second target segment have different main video IDs, comprises:
in the case that the first target segment and the second target segment have different main video IDs, copying the second target segment when the main video ID of the first target segment is the same as the main video ID of the source video; setting the clip type identifier of the copied second target segment to a deletion identifier and then inserting the copy at the head of the first clip fragment array, and modifying the clip type identifier of the second target segment to an insertion identifier and then inserting it at the head of the second clip fragment array;
copying the first target segment when the main video ID of the second target segment is the same as the main video ID of the source video; setting the clip type identifier of the copied first target segment to a deletion identifier and then inserting the copy at the head of the second clip fragment array, and modifying the clip type identifier of the first target segment to a replacement identifier and then inserting it at the head of the first clip fragment array.
4. The method of claim 1, wherein after judging whether the first clip array and the second clip array are empty sets, the method further comprises:
if the first clip array is an empty set and the second clip array is not an empty set, modifying the clip type identifiers of the remaining clips in the second clip array to insertion identifiers and pushing the modified clips into a first result array corresponding to the first video, and pushing the remaining clips in the second clip array into a second result array corresponding to the second video;
if the first clip fragment array is not an empty set and the second clip fragment array is an empty set, modifying the clip type identifiers of the remaining fragments in the first clip fragment array to deletion identifiers and pushing the modified fragments into the second result array, and pushing the remaining fragments in the first clip fragment array into the first result array.
5. The method of any of claims 1-4, wherein if the first clip array and the second clip array are empty, displaying a clip trace between the first video and the second video based on clip type identifications of the respective clips in the first result array and the second result array, comprises:
And if the first clip fragment array and the second clip fragment array are empty, displaying clip traces between the first video and the second video in a video track mode based on clip type identifiers of the fragments in the first result array and the second result array.
6. A video clip trace comparison apparatus, the apparatus comprising:
an acquisition unit, configured to acquire a first clip fragment array and a second clip fragment array after a first video is clipped into a second video, wherein the first clip fragment array comprises a plurality of fragments composing the first video, the second clip fragment array comprises a plurality of fragments composing the second video, and the first video is a source video or a video obtained by clipping on the basis of the source video;
a judging unit, configured to judge whether the first clip fragment array and the second clip fragment array are empty sets;
a shift-out unit, configured to, if neither the first clip fragment array nor the second clip fragment array is an empty set, shift a first element out of the first clip fragment array to obtain a first target segment, and shift a first element out of the second clip fragment array to obtain a second target segment;
a first processing unit, configured to, when there is time overlap between the first target segment and the second target segment and they have the same main video ID, push the first target segment and the second target segment into a first result array corresponding to the first video and a second result array corresponding to the second video respectively, or insert the first target segment and the second target segment at the first positions of the first clip segment array and the second clip segment array respectively after time padding; and return to the judging unit;
a second processing unit, configured to, when the first target segment and the second target segment have the same main video ID and there is no time overlap between them, and the first target segment ends before the second target segment starts, modify the clip type identifier of the first target segment to a deletion identifier and push it into the second result array, modify the clip type identifier of the second target segment to a deletion identifier and push it into the first result array, and return to the judging unit;
a third processing unit, configured to, in the case that the first target segment and the second target segment have different main video IDs, update the first elements of the first clip segment array and the second clip segment array based on the first target segment and the second target segment, and return to the judging unit;
and a display unit, configured to, if the first clip fragment array and the second clip fragment array are empty, display the clip trace between the first video and the second video based on the clip type identifiers of the fragments in the first result array and the second result array, wherein the clip trace at least comprises deletion, insertion and/or replacement.
7. The apparatus of claim 6, wherein the first processing unit comprises:
the pushing module is configured to, in the case that there is time overlap between the first target segment and the second target segment and they have the same main video ID, push the first target segment into a first result array corresponding to the first video and push the second target segment into a second result array corresponding to the second video when the first target segment and the second target segment have the same start and stop times;
a first processing module, configured to, when the first target segment begins before the second target segment, insert a first portion and a second portion of the first target segment at the first position of the first clip segment array, supplement the first portion of the first target segment to the second target segment, and insert the supplemented second target segment at the first position of the second clip segment array; wherein the time period corresponding to the first portion of the first target segment is from the start time of the first target segment to the start time of the second target segment, and the time period corresponding to the second portion of the first target segment is from the start time of the second target segment to the end time of the first target segment;
a second processing module, configured to, when the second target segment begins before the first target segment, insert a first portion and a second portion of the second target segment at the first position of the second clip segment array, supplement the first portion of the second target segment to the first target segment, and insert the supplemented first target segment at the first position of the first clip segment array; wherein the time period corresponding to the first portion of the second target segment is from the start time of the second target segment to the start time of the first target segment, and the time period corresponding to the second portion of the second target segment is from the start time of the first target segment to the end time of the second target segment;
a third processing module, configured to, when the first target segment ends before the second target segment, create a first new item to pad the first target segment, and insert the padded first target segment at the first position of the first clip segment array, where the time period corresponding to the first new item is from the end time of the first target segment to the end time of the second target segment;
and a fourth processing module, configured to, when the first target segment ends after the second target segment, create a second new item to pad the second target segment, and insert the padded second target segment at the first position of the second clip segment array, where the time period corresponding to the second new item is from the end time of the second target segment to the end time of the first target segment.
8. The apparatus of claim 6, wherein the third processing unit comprises:
a first processing module, configured to, in the case that the first target segment and the second target segment have different main video IDs, copy the second target segment when the main video ID of the first target segment is the same as the main video ID of the source video; set the clip type identifier of the copied second target segment to a deletion identifier and then insert the copy at the head of the first clip fragment array, and modify the clip type identifier of the second target segment to an insertion identifier and then insert it at the head of the second clip fragment array;
and a second processing module, configured to copy the first target segment when the main video ID of the second target segment is the same as the main video ID of the source video; set the clip type identifier of the copied first target segment to a deletion identifier and then insert the copy at the head of the second clip fragment array, and modify the clip type identifier of the first target segment to a replacement identifier and then insert it at the head of the first clip fragment array.
9. The apparatus as recited in claim 6, further comprising:
a fourth processing unit, configured to, if the first clip array is an empty set and the second clip array is not, modify the clip type identifiers of the remaining fragments in the second clip array to insertion identifiers and push the modified fragments into the first result array corresponding to the first video, and push the remaining fragments in the second clip array into the second result array corresponding to the second video;
and a fifth processing unit, configured to, if the first clip array is not an empty set and the second clip array is, modify the clip type identifiers of the remaining fragments in the first clip array to deletion identifiers and push the modified fragments into the second result array, and push the remaining fragments in the first clip array into the first result array.
10. The device according to any one of claims 6 to 9, wherein the display unit is specifically configured to: and if the first clip fragment array and the second clip fragment array are empty, displaying clip traces between the first video and the second video in a video track mode based on clip type identifiers of the fragments in the first result array and the second result array.
CN202311568483.0A 2023-11-23 2023-11-23 Video clip trace comparison method and device Active CN117278802B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311568483.0A CN117278802B (en) 2023-11-23 2023-11-23 Video clip trace comparison method and device


Publications (2)

Publication Number Publication Date
CN117278802A 2023-12-22
CN117278802B CN117278802B (en) 2024-02-13

Family

ID=89220044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311568483.0A Active CN117278802B (en) 2023-11-23 2023-11-23 Video clip trace comparison method and device

Country Status (1)

Country Link
CN (1) CN117278802B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999173A (en) * 1992-04-03 1999-12-07 Adobe Systems Incorporated Method and apparatus for video editing with video clip representations displayed along a time line
US20030002851A1 (en) * 2001-06-28 2003-01-02 Kenny Hsiao Video editing method and device for editing a video project
US6560620B1 (en) * 1999-08-03 2003-05-06 Aplix Research, Inc. Hierarchical document comparison system and method
KR20090002076A (en) * 2007-06-04 2009-01-09 (주)엔써즈 Method and apparatus for determining sameness and detecting common frame of moving picture data
US20120136729A1 (en) * 2010-11-30 2012-05-31 At&T Intellectual Property I, L.P. Method and system for snippet-modified television advertising
CN107111620A (en) * 2014-10-10 2017-08-29 三星电子株式会社 Video editing using context data and the content discovery using group
CN108241598A (en) * 2016-12-26 2018-07-03 北京奇虎科技有限公司 The production method and device of a kind of PowerPoint
CN109246446A (en) * 2018-11-09 2019-01-18 东方明珠新媒体股份有限公司 Compare the method, apparatus and equipment of video content similitude
KR20190061621A (en) * 2017-11-28 2019-06-05 주식회사 트라이웍스 Video comparison method and video comparison system having the method
CN112437344A (en) * 2020-10-30 2021-03-02 福建星网视易信息系统有限公司 Video matching method and terminal
US20210406157A1 (en) * 2020-06-24 2021-12-30 Webomates LLC Software defect creation
CN113923472A (en) * 2021-09-01 2022-01-11 北京奇艺世纪科技有限公司 Video content analysis method and device, electronic equipment and storage medium
US20220180899A1 (en) * 2019-09-06 2022-06-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Matching method, terminal and readable storage medium
CN114666637A (en) * 2022-03-10 2022-06-24 阿里巴巴(中国)有限公司 Video editing method, audio editing method and electronic equipment
CN115499707A (en) * 2022-09-22 2022-12-20 北京百度网讯科技有限公司 Method and device for determining video similarity
WO2023030098A1 (en) * 2021-08-31 2023-03-09 华为技术有限公司 Video editing method, electronic device, and storage medium
WO2023088484A1 (en) * 2021-11-22 2023-05-25 北京字跳网络技术有限公司 Method and apparatus for editing multimedia resource scene, device, and storage medium

Also Published As

Publication number Publication date
CN117278802B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
CN106033436B (en) Database merging method
CN105302533B (en) Code synchronization method and device
WO2017084410A1 (en) Network management data synchronization method and apparatus
CN109885581A (en) Synchronous method, device, equipment and the storage medium of database
CN107491429B (en) Method for solving conflict of simultaneously editing document contents
CN109840194B (en) Method and system for detecting configuration file
CN105912628A (en) Synchronization method and device for master database and slave database
CA2530395A1 (en) Method and system for updating versions of content stored in a storage device
US20220179642A1 (en) Software code change method and apparatus
CN106055334B (en) Code management system and method
CN108399082B (en) Method and system for generating continuous integration assembly line
CN109783451A (en) File updating method, device, equipment and medium based on Message Digest 5
CN109165169B (en) Branch management method and system for testing
CN105867903A (en) Method and device or splitting code library
CN105554044A (en) Method and apparatus for synchronizing object in local object storage node
CN117278802B (en) Video clip trace comparison method and device
CN110633101A (en) Program code management method, device and equipment and readable storage medium
CN112905441A (en) Test case generation method, test method, device and equipment
CN111125067B (en) Data maintenance method and device
US9037539B2 (en) Data synchronization
CN110532006B (en) Complex configuration file upgrading method based on state machine
CN117076574B (en) Method and device capable of arranging multiple data sources for synchronous aggregation of data
CN111506583A (en) Update method, update apparatus, server, computer device, and storage medium
CN113553373A (en) Data synchronization method and device, storage medium and electronic equipment
CN104135628A (en) Video editing method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant