CN109257545B - Multi-source video editing method and device and storage medium - Google Patents
- Publication number
- CN109257545B (application CN201810983670.8A)
- Authority
- CN
- China
- Prior art keywords
- music
- video
- duration
- piece
- segment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Abstract
The invention discloses a multi-source video editing method comprising the following steps: acquiring target music and dividing it into at least two music pieces; selecting a second number of video clips from at least one source video file according to a first number of music pieces; judging whether the duration of the current music piece satisfies a set constraint condition; allocating the current music piece to a corresponding video clip based on the judgment result, until all of the at least two music pieces have been allocated to corresponding video clips; determining the unallocated maximum-duration video frame interval in the video clips and dividing that interval according to a set first proportion; and associating each video clip with its allocated music pieces based on the division result to generate a target video file that follows the music rhythm. The invention also discloses a multi-source video editing device and a storage medium.
Description
Technical Field
The present invention relates to the field of multimedia data processing technologies, and in particular, to a multi-source video editing method, apparatus, and storage medium.
Background
At present, beat-synchronized music videos are usually edited manually. For example, a user selects a number of source videos, picks out video segments that match his or her interests, and specially marks the selected segments; an operator then splices the marked segments in a preset order using video editing software, edits the background music for the video, and finally loads the background music into the video so that playback is accompanied by the music rhythm.
However, because this method relies on manual selection of video segments, its efficiency is limited by the number and duration of the source videos; when many videos need to be edited, the workload is heavy and efficiency is low. In practical application, video segments can instead be extracted at random, but a video edited in this way cannot take the content of multiple videos into account and cannot satisfy the requirements of beat synchronization and visual diversity, which degrades the quality of the edited video and reduces the user experience.
Disclosure of Invention
In view of the above, embodiments of the present invention are directed to a multi-source video editing method, apparatus and storage medium, which at least address the difficulty in the related art of keeping the editing operation simple while effectively improving the quality of the edited video.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a multi-source video clipping method, where the method includes:
acquiring target music, and dividing the target music into at least two music pieces;
selecting a second number of video clips from at least one source video file according to the first number of the music clips, wherein the first number is matched with the second number;
judging whether the duration of the current music piece meets the set constraint condition or not;
distributing the current music segment to the corresponding video segment based on the judgment result until the at least two music segments are completely distributed to the corresponding video segments;
determining unallocated maximum duration video frame intervals in the video clips, and performing interval division on the maximum duration video frame intervals on the basis of a set first proportion;
and associating each video segment with the distributed music segments based on the interval division result to generate a target video file with music rhythm.
In a second aspect, an embodiment of the present invention further provides a multi-source video editing apparatus, where the apparatus includes: an acquisition module, a division module, a selection module, a judgment module, an allocation module and an association module; wherein,
the acquisition module is used for acquiring target music;
the dividing module is used for dividing the target music into at least two music pieces;
the selecting module is used for selecting a second number of video clips from at least one source video file according to the first number of the music clips, wherein the first number is matched with the second number;
the judging module is used for judging whether the duration of the current music piece meets the set constraint condition or not;
the distribution module is used for distributing the current music segments to corresponding video segments based on the judgment result until the at least two music segments are completely distributed to the corresponding video segments;
the dividing module is further configured to determine an unallocated maximum duration video frame interval in the video segment, and perform interval division on the maximum duration video frame interval based on a set first proportion;
and the association module is used for associating each video clip with the distributed music clips based on the interval division result to generate a target video file with music rhythm.
In a third aspect, an embodiment of the present invention further provides a multi-source video clipping device, which includes a memory, a processor, and an executable program stored on the memory and capable of being executed by the processor, where the processor executes the steps of the multi-source video clipping method provided in the embodiment of the present invention when executing the executable program.
In a fourth aspect, an embodiment of the present invention further provides a storage medium, on which an executable program is stored, where the executable program, when executed by a processor, implements the steps of the multi-source video clipping method provided by the embodiment of the present invention.
According to the multi-source video clipping method, apparatus and storage medium, a matching second number of video clips is selected from at least one source video file according to the first number of divided music pieces; whether the duration of the current music piece satisfies a set constraint condition is judged; the current music piece is allocated to the corresponding video clip based on the judgment result, until all of the at least two music pieces have been allocated to corresponding video clips; the unallocated maximum-duration video frame interval in the video clips is determined and divided according to a set first proportion; and each video clip is associated with its allocated music pieces based on the division result to generate a target video file with the music rhythm. This avoids the heavy workload and low efficiency of manual clipping; at the same time, a video edited with the technical solution of the embodiment of the invention takes the content of multiple videos into account and satisfies the requirements of beat synchronization and visual diversity, so that the operation is simple, the quality of the edited video is effectively improved, and the user experience is greatly enhanced.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of a multi-source video editing method according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of a multi-source video editing apparatus according to an embodiment of the present invention;
FIG. 3 is a functional block diagram of another multi-source video editing apparatus according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of another multi-source video editing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of a multi-source video editing apparatus according to an embodiment of the present invention.
Detailed Description
So that the features and aspects of the embodiments of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, is given below with reference to the embodiments, some of which are illustrated in the appended drawings. Those skilled in the art should understand that the technical solutions described in the embodiments of the present invention may be combined arbitrarily provided they do not conflict.
Fig. 1 is a schematic flow chart of an implementation of a multi-source video editing method provided in an embodiment of the present invention, where the multi-source video editing method is applicable to a terminal device; as shown in fig. 1, an implementation flow of the multi-source video clipping method in the embodiment of the present invention may include the following steps:
step 101: the method comprises the steps of obtaining target music and dividing the target music into at least two music fragments.
In the embodiment of the invention, the obtained target music includes at least one rhythm point, and the target music can be divided into at least two music pieces according to these rhythm points. For example, if the target music includes N rhythm points, the N rhythm points may divide the target music into N+1 music pieces, where N is a positive integer greater than or equal to 1.
Here, the rhythm point of the target music may be set according to a music characteristic of the target music, the music characteristic including a beat characteristic, wherein the beat characteristic may include sound amplitude information of the target music. The process of setting the rhythm point of the target music according to the sound amplitude information of the target music may specifically be: extracting sound amplitude information of a preset frequency domain from the target music, and selecting a time point of sound amplitude surge in the preset frequency domain as a rhythm point of the target music to enable the time interval duration between adjacent rhythm points to be larger than the preset duration.
A time point at which the sound amplitude surges may be understood as an inflection point at which, within the preset frequency band, the amplitude turns from increasing to decreasing. Requiring the time interval between adjacent rhythm points to exceed the preset duration prevents intervals so short that the matched video segments become very brief, which would hurt the playback effect of the edited video.
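As an illustrative sketch (not the patent's implementation), the inflection-point rule above can be written as a simple peak picker over a per-frame amplitude envelope for the preset frequency band; the envelope values and the `min_gap` spacing are hypothetical inputs.

```python
def rhythm_points(amplitudes, min_gap):
    """Pick time indices where the band amplitude turns from rising to
    falling (local peaks), keeping adjacent points at least `min_gap`
    frames apart."""
    points = []
    for t in range(1, len(amplitudes) - 1):
        rising = amplitudes[t] > amplitudes[t - 1]
        falling = amplitudes[t] >= amplitudes[t + 1]
        if rising and falling and (not points or t - points[-1] >= min_gap):
            points.append(t)
    return points

# A toy envelope: peaks at indices 2, 6 and 9 survive the spacing rule.
env = [0.1, 0.5, 0.9, 0.4, 0.3, 0.8, 1.0, 0.2, 0.6, 0.7, 0.1]
print(rhythm_points(env, min_gap=3))  # [2, 6, 9]
```

In practice the envelope would come from a band-limited short-time amplitude analysis of the target music rather than a hand-written list.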
Step 102: and selecting a second number of video clips from at least one source video file according to the first number of the music clips, wherein the first number is matched with the second number.
Here, the first number matches the second number. For example, if the target music includes N rhythm points, the N rhythm points may divide the target music into N+1 music pieces (N being a positive integer greater than or equal to 1), and N+1 video clips may then be cut from M source video files, where M is a positive integer greater than or equal to 2.
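The N-points-to-N+1-pieces relation can be sketched as follows; this is an illustrative helper whose boundary handling (pieces as half-open `(start, end)` spans from time 0 to the music duration) is an assumption, not quoted from the patent.

```python
def split_music(duration, rhythm_points):
    # Split [0, duration) at each rhythm point: N points give N+1 pieces.
    bounds = [0.0] + list(rhythm_points) + [float(duration)]
    return list(zip(bounds[:-1], bounds[1:]))

pieces = split_music(30.0, [8.0, 21.0])
print(pieces)  # two rhythm points -> three music pieces
```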
Step 103: and judging whether the duration of the current music piece meets the set constraint condition.
Here, the constraint condition includes a first constraint condition and a second constraint condition.
In this embodiment of the present invention, before determining whether the duration of the current music piece satisfies the set constraint condition in this step 103, the multi-source video editing method further includes:
detecting whether each video frame in the video clip comprises a face image, determining the video frame comprising the face image, and marking the video frame comprising the face image so as to distinguish the video frame not comprising the face image;
allocating a corresponding fourth music piece to the video frames that include a face image, and, if it is detected that the at least two music pieces have not all been allocated, judging whether the duration of the current music piece satisfies the set constraint condition;
wherein the current music piece is the other music pieces except the fourth music piece in the at least two music pieces.
It should be noted that, in the embodiment of the present invention, a face recognition module in the Open Source Computer Vision Library (OpenCV) architecture is called to recognize the face images included in the video segments selected from the plurality of video sources, and the video frames in which face images are recognized are marked so as to distinguish them from video frames that do not include face images. OpenCV can be used to develop real-time image processing, computer vision, and pattern recognition programs.
Here, the allocating a corresponding fourth music piece to the video frame including the face image specifically includes:
detecting a first duration of the video frame interval comprising the face image;
traversing all video frames comprising face images, and searching music segments matched with the first time length;
and if the fourth music segment successfully matched with the first time length is found, taking the found fourth music segment as the music segment distributed for the video frame comprising the face image.
Wherein, the multi-source video clipping method further comprises: and if the fourth music segment successfully matched with the first time length is not found, adjusting the first time length of the video frame interval including the face image.
Here, for adjusting the first duration of the video frame interval including the face image, the following may be implemented:
when detecting that the second time length of the fourth music segment is longer than the first time length, extending the starting time and/or the ending time of the video frame interval comprising the face image so as to enable the first time length to be matched with the second time length;
and when the second time length is smaller than the first time length, shortening the starting time and/or the ending time of the video frame interval including the face image so as to enable the first time length to be matched with the second time length.
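The two adjustment cases above can be sketched with a single resize rule. This assumes the interval is stretched or shrunk symmetrically about its midpoint; the patent only says the start and/or end time is moved, so the symmetric policy is an illustrative choice.

```python
def fit_interval(start, end, target_duration):
    # Resize the face-frame interval so its duration equals the matched
    # music piece's duration (the "second duration"), keeping the
    # midpoint fixed (an assumed policy).
    mid = (start + end) / 2.0
    return (mid - target_duration / 2.0, mid + target_duration / 2.0)

print(fit_interval(10.0, 14.0, 6.0))   # music longer -> extend both ends
print(fit_interval(10.0, 14.0, 2.0))   # music shorter -> shorten both ends
```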
Therefore, the embodiment of the invention automatically selects the video frames that include face images and preferentially allocates corresponding music pieces to them. This ensures that more face images appear in the finally edited target video file with the music rhythm, satisfies users' requirements for beat synchronization and visual diversity, and gives the result more human interest.
It should be noted that, in the embodiment of the present invention, after the corresponding fourth music piece is allocated to the video frames including face images: if it is detected that the at least two music pieces have all been allocated, the step of judging whether the duration of the current music piece satisfies the set constraint condition need not be performed; instead, each video clip may be directly associated with its allocated music pieces to generate the target video file with the music rhythm. If it is detected that the at least two music pieces have not all been allocated, step 104 is executed, allocating the remaining music pieces to corresponding video clips over multiple allocation iterations until the at least two music pieces are all allocated.
Step 104: and distributing the current music segment to the corresponding video segment based on the judgment result until the at least two music segments are completely distributed to the corresponding video segments.
In this embodiment of the present invention, when the constraint condition is the first constraint condition, the following may be implemented for the step 104 of allocating the current music piece to the corresponding video piece based on the above determination result:
traversing the at least two music segments in descending order according to the duration of the music segments, and traversing the second number of video segments in descending order according to the duration of the videos which are not distributed;
and when determining that at least one first music piece which is not distributed exists in the at least two music pieces and the duration of the first music piece meets the first constraint condition, distributing the at least one first music piece to the corresponding at least one first video piece until the duration of the first music piece is detected not to meet the first constraint condition.
Wherein the duration of the first music piece satisfies the first constraint condition, including:
the duration of the first music piece is less than the duration of the video of the first video piece which is not allocated, and the duration of the first music piece is less than the duration of the music piece to which the video of the first video piece is allocated.
Here, the music piece duration to which the video of the first video piece should be allocated may be determined by: determining the proportion of the duration of each first video clip in the total duration of the second number of video clips according to the duration of each first video clip; and determining the duration of the music piece to which the video of the first video piece is to be distributed according to the proportion and the total duration of at least one first music piece which is not distributed in the at least two music pieces.
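The proportional target duration described above can be written directly from those two sentences (names are illustrative): each clip's target share of the unallocated music total is proportional to that clip's share of the total clip duration.

```python
def target_music_durations(clip_durations, unallocated_music_total):
    # Each clip's target music duration, proportional to the clip's
    # share of the total duration of the second number of video clips.
    total = sum(clip_durations)
    return [d / total * unallocated_music_total for d in clip_durations]

print(target_music_durations([10.0, 30.0], 20.0))  # [5.0, 15.0]
```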
It should be noted that the step of allocating at least one first music piece to at least one corresponding first video clip is repeated while the duration of the first music piece satisfies the first constraint condition. When it is detected that the duration of the first music piece no longer satisfies the first constraint condition and allocation cannot continue, it is detected whether at least one unallocated second music piece remains among the at least two music pieces; if so, the constraint condition is relaxed when allocating the at least one second music piece to the at least one corresponding second video clip, so as to ensure that the at least two music pieces can all be allocated.
In this embodiment of the present invention, when the constraint condition is the second constraint condition, the following may be implemented for the step 104 of allocating the current music piece to the corresponding video piece based on the above determination result:
when the duration of the first music piece is detected not to meet the first constraint condition, detecting whether at least one second music piece which is not distributed exists in the at least two music pieces;
when detecting that at least one second music piece which is not distributed exists in the at least two music pieces, traversing the at least one second music piece in a descending order according to the duration of the music pieces, and traversing other video pieces except the at least one first video piece in the second number of video pieces in a descending order according to the duration that the video is not distributed;
when the duration of the second music piece is determined to meet the second constraint condition, at least one second music piece is distributed to at least one corresponding second video piece until the duration of the second music piece is detected not to meet the second constraint condition;
wherein the duration of the second music piece satisfies the second constraint condition, which includes:
the duration of the second music piece is less than the video unassigned duration of the second video piece.
It should be noted that the step of allocating at least one second music piece to at least one corresponding second video clip is repeated while the duration of the second music piece satisfies the second constraint condition. When it is detected that the duration of the second music piece no longer satisfies the second constraint condition and allocation cannot continue, it is detected whether at least one unallocated third music piece remains among the at least two music pieces; if so, the constraint condition is removed when allocating the at least one third music piece to the corresponding video clip, so as to ensure that the at least two music pieces can all be allocated.
In the embodiment of the present invention, the current music piece is allocated to the corresponding video piece based on the above determination result in this step 104, which may be implemented in the following manner:
when the duration of the second music piece is detected not to meet the second constraint condition, detecting whether at least one third music piece which is not allocated exists in the at least two music pieces;
when detecting that at least one third music fragment which is not allocated exists in the at least two music fragments, traversing the at least one third music fragment in a descending order according to the duration of the music fragments, and traversing other video fragments except the at least one first video fragment and the second video fragment in the second number of video fragments in a descending order according to the duration of the video which is not allocated;
and distributing the music piece with the longest duration in at least one third music piece to the corresponding third video piece with the longest duration until the at least two music pieces are completely distributed.
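The progressively relaxed passes above can be condensed into a greedy sketch. This is a simplification under stated assumptions: pieces are traversed longest-first; the first pass only checks that a piece fits into a clip's unallocated duration (the patent's additional proportional-share check is omitted for brevity); and leftover pieces are force-assigned to the clip with the most free time, mirroring the constraint-free final pass.

```python
def allocate(music, clips_free):
    """music: list of music-piece durations; clips_free: list of each
    clip's unallocated duration. Returns, per music piece, the index of
    the clip it was assigned to."""
    clips_free = list(clips_free)          # don't mutate the caller's list
    assignment = [None] * len(music)
    order = sorted(range(len(music)), key=lambda i: -music[i])
    for m in order:                        # pass 1: constrained (must fit)
        c = max(range(len(clips_free)), key=lambda i: clips_free[i])
        if music[m] < clips_free[c]:
            assignment[m] = c
            clips_free[c] -= music[m]
    for m in order:                        # final pass: constraint removed
        if assignment[m] is None:
            c = max(range(len(clips_free)), key=lambda i: clips_free[i])
            assignment[m] = c
            clips_free[c] -= music[m]
    return assignment

print(allocate([5.0, 3.0, 4.0], [6.0, 6.0]))  # [0, 1, 1]
```

The guarantee the patent aims for — every music piece ends up allocated — follows from the unconditional final pass.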
In this embodiment of the present invention, after the current music piece is allocated to the corresponding video piece in this step 104 until the at least two music pieces are completely allocated to the corresponding video piece, the multi-source video clipping method may further include:
determining whether the video frames in the selected video clips have an overlapped position relation;
when it is determined that the video frames in the video segment have an overlapping positional relationship, the video frames having the overlapping positional relationship are adjusted so that the overlapping portions are staggered.
Here, the video frames having the overlapping positional relationship may be adjusted to stagger the overlapping portions by increasing or decreasing the start time point and the end time point of the video frame segment using the time axis shift method.
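A minimal sketch of the time-axis shift: each interval that overlaps its predecessor is pushed later until the overlap disappears. The patent also allows moving start points earlier; shifting only forward is an assumed simplification.

```python
def stagger(intervals):
    # intervals: (start, end) pairs; returns them sorted and shifted so
    # that no interval overlaps the one before it.
    out = []
    for start, end in sorted(intervals):
        if out and start < out[-1][1]:
            shift = out[-1][1] - start     # push past the predecessor's end
            start, end = start + shift, end + shift
        out.append((start, end))
    return out

print(stagger([(0, 5), (3, 8)]))  # [(0, 5), (5, 10)]
```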
Step 105: determining the unallocated maximum duration video frame interval in the video clip, and performing interval division on the maximum duration video frame interval based on a set first proportion.
In this embodiment of the present invention, for the interval division of the maximum duration video frame interval based on the set first ratio in this step 105, the following manner may be adopted: based on the first proportion, dividing the maximum duration video frame interval into a first subinterval and a second subinterval.
Here, the first ratio is any value between 15% and 35%. Preferably, the first ratio is 25%; that is, the maximum-duration video frame interval is divided into a first sub-interval and a second sub-interval at the point 25% of the way through the interval. This reduces fragmentation during video clipping and uses the video frame space more efficiently.
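The division rule is a one-liner; the 25% default below follows the preferred value stated above.

```python
def split_interval(start, end, ratio=0.25):
    # Divide the longest unallocated frame interval at `ratio` of its
    # length into a first and a second sub-interval.
    cut = start + (end - start) * ratio
    return (start, cut), (cut, end)

print(split_interval(0.0, 8.0))  # ((0.0, 2.0), (2.0, 8.0))
```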
It should be noted that the durations of the video frame intervals that are not allocated in the video segment may be sequentially arranged to find the video frame interval with the maximum duration, which is not described in detail herein.
Step 106: and associating each video segment with the distributed music segments based on the interval division result to generate a target video file with music rhythm.
In the embodiment of the present invention, for associating each video segment with the assigned music segment based on the above-mentioned interval division result in this step 106, the following manner can be adopted:
determining the music piece with the longest duration in the at least two allocated music pieces;
detecting whether a first length corresponding to the music piece with the longest duration is smaller than or equal to a second length corresponding to the first subinterval, and if the first length is smaller than or equal to the second length, placing the music piece with the longest duration at the initial position of the first subinterval for association;
detecting whether the first length is smaller than or equal to a third length corresponding to the second subinterval, and if the first length is smaller than or equal to the third length, placing the music piece with the longest duration at the initial position of the second subinterval for association;
and detecting whether the first length is greater than the second length and the third length, and if the first length is greater than the second length and the third length, placing the music piece with the longest duration at the initial position of the first subinterval for association.
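The three placement checks above reduce to a small decision function: anchor the longest music piece at the start of the first sub-interval if it fits there, else at the start of the second sub-interval if it fits there, else fall back to the start of the first sub-interval.

```python
def place_longest(piece_len, sub1, sub2):
    # sub1, sub2: (start, end) sub-intervals from the first-proportion
    # division; returns the anchor time for the longest music piece.
    if piece_len <= sub1[1] - sub1[0]:   # first length <= second length
        return sub1[0]
    if piece_len <= sub2[1] - sub2[0]:   # first length <= third length
        return sub2[0]
    return sub1[0]                       # longer than both: fallback

print(place_longest(3.0, (0.0, 2.0), (2.0, 8.0)))  # 2.0
```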
Here, after each video segment is associated with its assigned music segments, each music segment maps to a video segment. The video segments corresponding to the assigned music segments are carried on the video track in the playing order of the music segments; the music segments carried on the audio track and the video segments carried on the video track are then combined by the synthesis plug-in, and a target video file with a music rhythm is finally synthesized and output.
By adopting the technical scheme of the embodiment of the present invention, video segments are automatically selected from multiple source videos using a cross-discrete algorithm. This ensures that the source video files are sampled uniformly, preserves the continuity and non-repetition of the pictures, and avoids the heavy workload and low efficiency of manual clipping.
In order to implement the above multi-source video editing method, an embodiment of the present invention further provides a multi-source video editing apparatus, which can be applied to a terminal device. Fig. 2 is a schematic diagram of the functional structure of the multi-source video editing apparatus provided in the embodiment of the present invention. As shown in fig. 2, the multi-source video editing apparatus includes: an acquisition module 21, a dividing module 22, a selection module 23, a judgment module 24, an allocation module 25 and an association module 26. The functions of these program modules are described in detail below.
the obtaining module 21 is configured to obtain target music;
the dividing module 22 is configured to divide the target music into at least two music pieces;
the selecting module 23 is configured to select a second number of video segments from at least one source video file according to the first number of music segments, where the first number matches the second number;
the judging module 24 is configured to judge whether the duration of the current music piece meets a set constraint condition;
the allocating module 25 is configured to allocate the current music piece to the corresponding video piece based on the above-mentioned determination result of the determining module 24 until the at least two music pieces are completely allocated to the corresponding video piece;
the dividing module 22 is further configured to determine an unallocated maximum duration video frame interval in the video segment, and perform interval division on the maximum duration video frame interval based on a set first ratio;
the associating module 26 is configured to associate each video segment with the assigned music segment based on the interval division result, and generate a target video file with a music rhythm.
In the embodiment of the present invention, the constraint condition includes a first constraint condition; for the distributing module 25 to distribute the current music segment to the corresponding video segment based on the above judgment result, the following manner can be adopted:
traversing the at least two music pieces in descending order of music piece duration, and traversing the second number of video segments in descending order of unallocated video duration;
when determining that at least one first music piece which is not distributed exists in the at least two music pieces and the duration of the first music piece meets the first constraint condition, distributing the at least one first music piece to the corresponding at least one first video piece until the duration of the first music piece is detected not to meet the first constraint condition;
wherein the duration of the first music piece satisfies the first constraint condition, including:
the duration of the first music piece is less than the unallocated video duration of the first video segment, and is less than the duration of every music piece already allocated to that first video segment.
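The first constraint can be written as a predicate roughly like the following (a sketch; the function and parameter names are not from the patent):

```python
def meets_first_constraint(music_dur, video_unallocated_dur, allocated_music_durs):
    """True when the candidate music piece is shorter than the video
    segment's unallocated duration AND shorter than every music piece
    already allocated to that video segment."""
    return (music_dur < video_unallocated_dur
            and all(music_dur < d for d in allocated_music_durs))
```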
In the embodiment of the present invention, the constraint condition further includes a second constraint condition; for the distributing module 25 to distribute the current music segment to the corresponding video segment based on the above judgment result, the following manner can be adopted:
when the duration of the first music piece is detected not to meet the first constraint condition, detecting whether at least one second music piece which is not distributed exists in the at least two music pieces;
when detecting that at least one unallocated second music piece exists among the at least two music pieces, traversing the at least one second music piece in descending order of music piece duration, and traversing the video segments other than the at least one first video segment among the second number of video segments in descending order of unallocated video duration;
when the duration of the second music piece is determined to meet the second constraint condition, at least one second music piece is distributed to at least one corresponding second video piece until the duration of the second music piece is detected not to meet the second constraint condition;
wherein the duration of the second music piece satisfies the second constraint condition, which includes:
the duration of the second music piece is less than the unallocated video duration of the second video segment.
In the embodiment of the present invention, the allocating module 25 may allocate the current music segment to the corresponding video segment based on the above determination result, in the following manner:
when the duration of the second music piece is detected not to meet the second constraint condition, detecting whether at least one third music piece which is not allocated exists in the at least two music pieces;
when detecting that at least one unallocated third music piece exists among the at least two music pieces, traversing the at least one third music piece in descending order of music piece duration, and traversing the video segments other than the at least one first video segment and the second video segment among the second number of video segments in descending order of unallocated video duration;
and distributing the music piece with the longest duration in at least one third music piece to the corresponding third video piece with the longest duration until the at least two music pieces are completely distributed.
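Taken together, the three tiers amount to a greedy, duration-descending allocation. The following simplified sketch (hypothetical names; it collapses the per-tier bookkeeping into a single "first segment the piece still fits" rule) illustrates the overall shape:

```python
def allocate_greedy(music_durs, video_durs):
    """Assign music pieces (longest first) to video segments (longest
    remaining duration first); a piece that fits nowhere falls back to
    the segment with the most remaining time, mirroring the third tier."""
    segments = [{"capacity": d, "remaining": d, "pieces": []}
                for d in sorted(video_durs, reverse=True)]
    for m in sorted(music_durs, reverse=True):
        # re-rank segments by unallocated duration before each pass
        segments.sort(key=lambda s: s["remaining"], reverse=True)
        # first segment the piece fits into, else the roomiest one
        target = next((s for s in segments if m < s["remaining"]), segments[0])
        target["pieces"].append(m)
        target["remaining"] -= m
    return segments
```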
In the embodiment of the present invention, for the dividing module 22 to divide the maximum duration video frame interval based on the set first ratio, the following manner may be adopted: based on the first proportion, dividing the maximum duration video frame interval into a first subinterval and a second subinterval.
For the association module 26 to associate each video segment with the assigned music segment based on the above-mentioned interval division result, the following manner may be adopted:
determining the music piece with the longest duration in the at least two allocated music pieces;
detecting whether a first length corresponding to the music piece with the longest duration is smaller than or equal to a second length corresponding to the first subinterval, and if the first length is smaller than or equal to the second length, placing the music piece with the longest duration at the initial position of the first subinterval for association;
detecting whether the first length is smaller than or equal to a third length corresponding to the second subinterval, and if the first length is smaller than or equal to the third length, placing the music piece with the longest duration at the initial position of the second subinterval for association;
and detecting whether the first length is greater than the second length and the third length, and if the first length is greater than the second length and the third length, placing the music piece with the longest duration at the initial position of the first subinterval for association.
As an implementation manner, fig. 3 is a functional structure diagram of another multi-source video editing apparatus according to an embodiment of the present invention; as shown in fig. 3, the multi-source video clipping device further includes:
a determining module 27, configured to determine, after the allocating module 25 has allocated all of the at least two music segments to their corresponding video segments, whether the video frames in the selected video segments have an overlapping positional relationship;
an adjusting module 28, configured to, when the determining module 27 determines that the video frames in the video segment have an overlapping positional relationship, adjust the video frames having the overlapping positional relationship so that the overlapping portions are staggered.
As an implementation manner, fig. 4 is a functional structure diagram of another multi-source video editing apparatus provided in an embodiment of the present invention; as shown in fig. 4, the multi-source video clipping device further includes:
a detecting module 29, configured to detect whether each video frame in the video segment includes a face image before the determining module 24 determines whether the duration of the current music segment satisfies a set constraint condition, and determine a video frame including the face image;
a marking module 210, configured to mark the video frames including the face images to distinguish video frames not including the face images;
the allocating module 25 is further configured to allocate a corresponding fourth music segment to the video frame including the face image, and if it is detected that the at least two music segments are not completely allocated, determine whether the duration of the current music segment meets a set constraint condition;
wherein the current music piece is the other music pieces except the fourth music piece in the at least two music pieces.
It should be noted that, if it is detected that the at least two music pieces are completely allocated, the step of determining whether the duration of the current music piece meets the set constraint condition is not required, but each video piece may be directly associated with the allocated music piece to generate the target video file with the music tempo.
In the embodiment of the present invention, for the allocating module 25 to allocate the corresponding fourth music piece to the video frame including the face image, the following method may be adopted: detecting a first duration of the video frame interval comprising the face image;
traversing all video frames comprising face images, and searching music segments matched with the first time length;
and if the fourth music segment successfully matched with the first time length is found, taking the found fourth music segment as the music segment distributed for the video frame comprising the face image.
In this embodiment of the present invention, the adjusting module 28 is further configured to adjust the first duration of the video frame interval including the face image when the fourth music piece successfully matched with the first duration is not found.
Here, for the adjusting module 28 to adjust the first duration of the video frame interval including the face image, the following may be implemented:
when detecting that the second time length of the fourth music segment is longer than the first time length, extending the starting time and/or the ending time of the video frame interval comprising the face image so as to enable the first time length to be matched with the second time length;
and when the second time length is smaller than the first time length, shortening the starting time and/or the ending time of the video frame interval including the face image so as to enable the first time length to be matched with the second time length.
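A sketch of this adjustment (the helper name is hypothetical; for simplicity it applies the whole change at the end time, whereas the text allows moving either endpoint):

```python
def fit_face_interval(start, end, music_dur):
    """Stretch or shrink the face-image video frame interval so that its
    duration (the 'first duration') equals the matched music piece's
    duration (the 'second duration')."""
    first_dur = end - start
    if music_dur > first_dur:
        end = start + music_dur   # music longer: extend the interval
    elif music_dur < first_dur:
        end = start + music_dur   # music shorter: shorten the interval
    return start, end
```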
It should be noted that: in the multi-source video editing apparatus provided in the above embodiment, when the multi-source video editing operation is performed, only the division of the program modules is illustrated, and in practical applications, the processing distribution may be completed by different program modules according to needs, that is, the internal structure of the multi-source video editing apparatus is divided into different program modules to complete all or part of the processing described above. In addition, the multi-source video editing device provided by the above embodiment and the multi-source video editing method embodiment belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described in detail herein.
In practical applications, each of the above program modules may be implemented by a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or the like, located on the terminal device.
In order to implement the multi-source video clipping method, an embodiment of the present invention further provides a hardware structure of the multi-source video editing apparatus. The apparatus may be implemented in various forms of terminal devices (e.g., desktop computers, notebook computers, smart phones) and is described below with reference to the accompanying drawings. It is to be understood that fig. 5 shows only an exemplary structure of the multi-source video editing apparatus, not its entire structure; part or all of the structure shown in fig. 5 may be implemented as needed.
Referring to fig. 5, fig. 5 is a schematic diagram of the hardware structure of a multi-source video editing apparatus according to an embodiment of the present invention; in practical applications, the apparatus may be applied to various terminal devices running an application program. The multi-source video editing apparatus 500 shown in fig. 5 includes: at least one processor 501, a memory 502, a user interface 503, and at least one network interface 504. The various components in the multi-source video editing apparatus 500 are coupled together by a bus system 505; it will be appreciated that the bus system 505 enables connection and communication among these components. In addition to a data bus, the bus system 505 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 505 in fig. 5.
The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, a touch screen, or the like, among others.
It will be appreciated that the memory 502 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory.
The multi-source video clipping method disclosed in the embodiment of the present invention may be applied to, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the multi-source video clipping method described above may be completed by hardware integrated logic circuits in the processor 501 or by instructions in the form of software. The processor 501 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. The processor 501 may implement or perform the multi-source video clipping methods, steps, and logic blocks provided in the embodiments of the present invention. A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the multi-source video clipping method provided by the embodiment of the present invention may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium in the memory 502; the processor 501 reads the information in the memory 502 and, in combination with its hardware, completes the steps of the multi-source video clipping method provided by the embodiment of the present invention.
In the embodiment of the present invention, the multi-source video clipping device 500 comprises a memory 502, a processor 501 and an executable program 5021 stored on the memory 502 and capable of being executed by the processor 501, and when the processor 501 executes the executable program 5021, the multi-source video clipping device 500 realizes that: acquiring target music, and dividing the target music into at least two music pieces; selecting a second number of video clips from at least one source video file according to the first number of the music clips, wherein the first number is matched with the second number; judging whether the duration of the current music piece meets the set constraint condition or not; distributing the current music segment to the corresponding video segment based on the judgment result until the at least two music segments are completely distributed to the corresponding video segments; determining unallocated maximum duration video frame intervals in the video clips, and performing interval division on the maximum duration video frame intervals on the basis of a set first proportion; and associating each video segment with the distributed music segments based on the interval division result to generate a target video file with music rhythm.
As an embodiment, when the processor 501 runs the executable program 5021, the following are implemented: the constraint comprises a first constraint; traversing the at least two music segments in descending order according to the duration of the music segments, and traversing the second number of video segments in descending order according to the duration of the videos which are not distributed; and when determining that at least one first music piece which is not distributed exists in the at least two music pieces and the duration of the first music piece meets the first constraint condition, distributing the at least one first music piece to the corresponding at least one first video piece until the duration of the first music piece is detected not to meet the first constraint condition.
As an embodiment, when the processor 501 runs the executable program 5021, the following are implemented: the constraints further comprise a second constraint; when the duration of the first music piece is detected not to meet the first constraint condition, detecting whether at least one second music piece which is not distributed exists in the at least two music pieces; when detecting that at least one second music piece which is not distributed exists in the at least two music pieces, traversing the at least one second music piece in a descending order according to the duration of the music pieces, and traversing other video pieces except the at least one first video piece in the second number of video pieces in a descending order according to the duration that the video is not distributed; and when the duration of the second music piece is determined to meet the second constraint condition, distributing at least one second music piece to at least one corresponding second video piece until the duration of the second music piece is detected not to meet the second constraint condition.
As an embodiment, when the processor 501 runs the executable program 5021, the following are implemented: when the duration of the second music piece is detected not to meet the second constraint condition, detecting whether at least one third music piece which is not allocated exists in the at least two music pieces; when detecting that at least one third music fragment which is not allocated exists in the at least two music fragments, traversing the at least one third music fragment in a descending order according to the duration of the music fragments, and traversing other video fragments except the at least one first video fragment and the second video fragment in the second number of video fragments in a descending order according to the duration of the video which is not allocated; and distributing the music piece with the longest duration in at least one third music piece to the corresponding third video piece with the longest duration until the at least two music pieces are completely distributed.
As an embodiment, when the processor 501 runs the executable program 5021, the following are implemented: dividing the maximum duration video frame interval into a first subinterval and a second subinterval based on the first ratio; determining the music piece with the longest duration in the at least two allocated music pieces; detecting whether a first length corresponding to the music piece with the longest duration is smaller than or equal to a second length corresponding to the first subinterval, and if the first length is smaller than or equal to the second length, placing the music piece with the longest duration at the initial position of the first subinterval for association; detecting whether the first length is smaller than or equal to a third length corresponding to the second subinterval, and if the first length is smaller than or equal to the third length, placing the music piece with the longest duration at the initial position of the second subinterval for association; and detecting whether the first length is greater than the second length and the third length, and if the first length is greater than the second length and the third length, placing the music piece with the longest duration at the initial position of the first subinterval for association.
As an embodiment, when the processor 501 runs the executable program 5021, the following are implemented: after the current music piece is distributed to the corresponding video piece until the at least two music pieces are completely distributed to the corresponding video piece, determining whether video frames in the selected video piece have an overlapped position relation; when it is determined that the video frames in the video segment have an overlapping positional relationship, the video frames having the overlapping positional relationship are adjusted so that the overlapping portions are staggered.
As an embodiment, when the processor 501 runs the executable program 5021, the following are implemented: before judging whether the duration of the current music segment meets the set constraint condition, detecting whether each video frame in the video segment comprises a face image, determining the video frame comprising the face image, and marking the video frame comprising the face image so as to distinguish the video frame not comprising the face image; distributing a corresponding fourth music segment for the video frame comprising the face image, and if the situation that the at least two music segments are not completely distributed is detected, judging whether the duration of the current music segment meets a set constraint condition or not; wherein the current music piece is the other music pieces except the fourth music piece in the at least two music pieces.
As an embodiment, when the processor 501 runs the executable program 5021, the following are implemented: detecting a first duration of the video frame interval comprising the face image; traversing all video frames comprising face images, and searching music segments matched with the first time length; and if the fourth music segment successfully matched with the first time length is found, taking the found fourth music segment as the music segment distributed for the video frame comprising the face image.
As an embodiment, when the processor 501 runs the executable program 5021, the following are implemented: if a fourth music segment successfully matched with the first time length is not found, and the second time length of the fourth music segment is detected to be longer than the first time length, extending the starting time and/or the ending time of the video frame interval including the face image so as to enable the first time length to be matched with the second time length; and when the second time length is smaller than the first time length, shortening the starting time and/or the ending time of the video frame interval including the face image so as to enable the first time length to be matched with the second time length.
In an exemplary embodiment, an embodiment of the present invention further provides a storage medium, which may be a storage medium such as an optical disc, a flash memory, or a magnetic disc, and may be a non-transitory storage medium. The storage medium in the embodiment of the present invention stores an executable program 5021, and when the executable program 5021 is executed by the processor 501, the executable program 5021 implements: acquiring target music, and dividing the target music into at least two music pieces; selecting a second number of video clips from at least one source video file according to the first number of the music clips, wherein the first number is matched with the second number; judging whether the duration of the current music piece meets the set constraint condition or not; distributing the current music segment to the corresponding video segment based on the judgment result until the at least two music segments are completely distributed to the corresponding video segments; determining unallocated maximum duration video frame intervals in the video clips, and performing interval division on the maximum duration video frame intervals on the basis of a set first proportion; and associating each video segment with the distributed music segments based on the interval division result to generate a target video file with music rhythm.
As an embodiment, the executable program 5021 when executed by the processor 501 implements: the constraint comprises a first constraint; traversing the at least two music segments in descending order according to the duration of the music segments, and traversing the second number of video segments in descending order according to the duration of the videos which are not distributed; and when determining that at least one first music piece which is not distributed exists in the at least two music pieces and the duration of the first music piece meets the first constraint condition, distributing the at least one first music piece to the corresponding at least one first video piece until the duration of the first music piece is detected not to meet the first constraint condition.
As an embodiment, the executable program 5021 when executed by the processor 501 implements: the constraints further comprise a second constraint; when the duration of the first music piece is detected not to meet the first constraint condition, detecting whether at least one second music piece which is not distributed exists in the at least two music pieces; when detecting that at least one second music piece which is not distributed exists in the at least two music pieces, traversing the at least one second music piece in a descending order according to the duration of the music pieces, and traversing other video pieces except the at least one first video piece in the second number of video pieces in a descending order according to the duration that the video is not distributed; and when the duration of the second music piece is determined to meet the second constraint condition, distributing at least one second music piece to at least one corresponding second video piece until the duration of the second music piece is detected not to meet the second constraint condition.
As an embodiment, the executable program 5021 when executed by the processor 501 implements: when the duration of the second music piece is detected not to meet the second constraint condition, detecting whether at least one third music piece which is not allocated exists in the at least two music pieces; when detecting that at least one third music fragment which is not allocated exists in the at least two music fragments, traversing the at least one third music fragment in a descending order according to the duration of the music fragments, and traversing other video fragments except the at least one first video fragment and the second video fragment in the second number of video fragments in a descending order according to the duration of the video which is not allocated; and distributing the music piece with the longest duration in at least one third music piece to the corresponding third video piece with the longest duration until the at least two music pieces are completely distributed.
As an embodiment, the executable program 5021 when executed by the processor 501 implements: dividing the maximum duration video frame interval into a first subinterval and a second subinterval based on the first ratio; determining the music piece with the longest duration in the at least two allocated music pieces; detecting whether a first length corresponding to the music piece with the longest duration is smaller than or equal to a second length corresponding to the first subinterval, and if the first length is smaller than or equal to the second length, placing the music piece with the longest duration at the initial position of the first subinterval for association; detecting whether the first length is smaller than or equal to a third length corresponding to the second subinterval, and if the first length is smaller than or equal to the third length, placing the music piece with the longest duration at the initial position of the second subinterval for association; and detecting whether the first length is greater than the second length and the third length, and if the first length is greater than the second length and the third length, placing the music piece with the longest duration at the initial position of the first subinterval for association.
As an embodiment, the executable program 5021 when executed by the processor 501 implements: after the current music piece is distributed to the corresponding video piece until the at least two music pieces are completely distributed to the corresponding video piece, determining whether video frames in the selected video piece have an overlapped position relation; when it is determined that the video frames in the video segment have an overlapping positional relationship, the video frames having the overlapping positional relationship are adjusted so that the overlapping portions are staggered.
As an embodiment, the executable program 5021 when executed by the processor 501 implements: before judging whether the duration of the current music segment meets the set constraint condition, detecting whether each video frame in the video segment comprises a face image, determining the video frame comprising the face image, and marking the video frame comprising the face image so as to distinguish the video frame not comprising the face image; distributing a corresponding fourth music segment for the video frame comprising the face image, and if the situation that the at least two music segments are not completely distributed is detected, judging whether the duration of the current music segment meets a set constraint condition or not; wherein the current music piece is the other music pieces except the fourth music piece in the at least two music pieces.
As an embodiment, the executable program 5021 when executed by the processor 501 implements: detecting a first duration of the video frame interval comprising the face image; traversing all video frames comprising face images, and searching music segments matched with the first time length; and if the fourth music segment successfully matched with the first time length is found, taking the found fourth music segment as the music segment distributed for the video frame comprising the face image.
As an embodiment, the executable program 5021 when executed by the processor 501 implements: if a fourth music segment successfully matched with the first time length is not found, and the second time length of the fourth music segment is detected to be longer than the first time length, extending the starting time and/or the ending time of the video frame interval including the face image so as to enable the first time length to be matched with the second time length; and when the second time length is smaller than the first time length, shortening the starting time and/or the ending time of the video frame interval including the face image so as to enable the first time length to be matched with the second time length.
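The extend/shorten adjustment in this embodiment can be sketched as follows. The text allows changing the starting time and/or the ending time; splitting the difference evenly across both endpoints is our assumption, as is the plain-number time model.

```python
def fit_interval_to_music(start, end, music_duration):
    """Adjust a face-image video frame interval so its first duration
    (end - start) matches the second duration of the assigned music
    piece. A positive delta extends the interval, a negative delta
    shortens it; the symmetric split is an assumption."""
    delta = music_duration - (end - start)
    return start - delta / 2.0, end + delta / 2.0
```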
In summary, in the multi-source video clipping method provided by the embodiment of the present invention, a second number of matched video clips are selected from at least one source video file according to the first number of divided music pieces; whether the duration of the current music piece meets the set constraint condition is determined, and based on the determination result the current music piece is allocated to the corresponding video clip until the at least two music pieces are completely allocated; the unallocated maximum duration video frame interval in the video clips is determined and divided into subintervals based on the set first ratio; and each video clip is associated with its allocated music pieces based on the interval division result to generate a target video file with musical rhythm. Therefore, the problems of large workload and low efficiency caused by manual clipping can be solved. Meanwhile, a video clipped by the technical scheme of the embodiment of the present invention can take the contents of multiple videos into account, meets the requirements of musical rhythm and visual diversity, achieves the purpose of simple, convenient operation while effectively improving the quality of the clipped video, and greatly improves the user experience.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or executable program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of an executable program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and executable program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by executable program instructions. These executable program instructions may be provided to a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These executable program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These executable program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present invention and is not intended to limit the scope of the present invention; any modifications, equivalents, improvements, and the like made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (13)
1. A multi-source video clipping method, the method comprising:
acquiring target music, and dividing the target music into at least two music pieces;
selecting a second number of video clips from at least one source video file according to the first number of the music clips, wherein the first number is matched with the second number;
distributing a corresponding fourth music segment for the video frames including the face images in the second number of video segments, and if the situation that the at least two music segments are not completely distributed is detected, judging whether the duration of the current music segment meets a set constraint condition, wherein the current music segment is other music segments except the fourth music segment in the at least two music segments;
distributing the current music segment to the corresponding video segment based on the judgment result until the at least two music segments are completely distributed to the corresponding video segments;
determining unallocated maximum duration video frame intervals in the video clips, and performing interval division on the maximum duration video frame intervals on the basis of a set first proportion;
and associating each video segment with the distributed music segments based on the interval division result to generate a target video file with music rhythm.
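The selection step in claim 1 can be sketched as below. The claim only says the first number is "matched" with the second number; assuming the two numbers are equal and that the longest source clips are preferred is our illustration, and the clip dictionaries are a hypothetical data model, not from the patent.

```python
def select_clips(source_clips, first_number):
    """Select a second number of video clips matched to the first
    number of music pieces. The patent leaves the matching rule
    unspecified; here second_number == first_number and longer
    clips are preferred (both assumptions)."""
    ranked = sorted(source_clips, key=lambda c: c["duration"], reverse=True)
    return ranked[:first_number]
```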
2. The multi-source video clipping method of claim 1, wherein the constraint comprises a first constraint;
the allocating the current music segment to the corresponding video segment based on the judgment result comprises:
traversing the at least two music segments in descending order according to the duration of the music segments, and traversing the second number of video segments in descending order according to the duration of the videos which are not distributed;
when determining that at least one first music piece which is not distributed exists in the at least two music pieces and the duration of the first music piece meets the first constraint condition, distributing the at least one first music piece to the corresponding at least one first video piece until the duration of the first music piece is detected not to meet the first constraint condition;
wherein the duration of the first music piece satisfies the first constraint condition, including:
the duration of the first music piece is less than the duration of the video of the first video piece which is not allocated, and the duration of the first music piece is less than the duration of the music piece to which the video of the first video piece is allocated;
the music segment duration to which the video of the first video segment should be allocated may be determined according to a ratio of the duration of each first video segment to the total duration of the second number of video segments and the total duration of at least one first music segment that is not allocated in the at least two music segments.
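The per-clip music-duration budget at the end of claim 2 can be written out as a one-line formula: each first video segment's share of the unallocated music time is proportional to its share of the total video duration. The exact arithmetic is our reading of the claim, not a formula the patent states explicitly.

```python
def target_music_duration(clip_duration, total_clip_duration,
                          total_unallocated_music):
    """Music duration a clip 'should be allocated' per claim 2:
    its share of unallocated music time is proportional to its
    share of the total duration of the second number of clips."""
    return (clip_duration / total_clip_duration) * total_unallocated_music
```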
3. The multi-source video clipping method of claim 2, wherein the constraints further include a second constraint;
the allocating the current music segment to the corresponding video segment based on the judgment result comprises:
when the duration of the first music piece is detected not to meet the first constraint condition, detecting whether at least one second music piece which is not distributed exists in the at least two music pieces;
when detecting that at least one second music piece which is not distributed exists in the at least two music pieces, traversing the at least one second music piece in a descending order according to the duration of the music pieces, and traversing other video pieces except the at least one first video piece in the second number of video pieces in a descending order according to the duration that the video is not distributed;
when the duration of the second music piece is determined to meet the second constraint condition, at least one second music piece is distributed to at least one corresponding second video piece until the duration of the second music piece is detected not to meet the second constraint condition;
wherein the duration of the second music piece satisfies the second constraint condition, which includes:
the duration of the second music piece is less than the video unassigned duration of the second video piece.
4. The multi-source video clipping method according to claim 3, wherein the assigning the current music piece to the corresponding video piece based on the determination result comprises:
when the duration of the second music piece is detected not to meet the second constraint condition, detecting whether at least one third music piece which is not allocated exists in the at least two music pieces;
when detecting that at least one third music fragment which is not allocated exists in the at least two music fragments, traversing the at least one third music fragment in a descending order according to the duration of the music fragments, and traversing other video fragments except the at least one first video fragment and the second video fragment in the second number of video fragments in a descending order according to the duration of the video which is not allocated;
and distributing the music piece with the longest duration in at least one third music piece to the corresponding third video piece with the longest duration until the at least two music pieces are completely distributed.
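Claims 2 through 4 together describe a longest-first greedy pass with a fallback. A compressed sketch follows, with the first and second constraints collapsed into a single "piece fits the remaining capacity" check (a deliberate simplification) and claim 4's fallback of assigning the longest leftover piece to the clip with the most unallocated time regardless of fit. Data model and names are ours.

```python
def allocate_greedy(music_durations, clip_capacities):
    """Allocate music pieces to clips, longest piece first, each to
    the clip with the most unallocated time. Capacity is decremented
    only when the piece fits; otherwise the piece is still assigned
    (claim 4's fallback). Returns (piece_duration, clip_index) pairs."""
    remaining = list(clip_capacities)
    assignments = []
    for piece in sorted(music_durations, reverse=True):
        idx = max(range(len(remaining)), key=lambda i: remaining[i])
        if piece <= remaining[idx]:
            remaining[idx] -= piece
        assignments.append((piece, idx))
    return assignments
```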
5. The multi-source video clipping method according to claim 1, wherein the interval division of the maximum duration video frame interval based on the set first ratio comprises:
dividing the maximum duration video frame interval into a first subinterval and a second subinterval based on the first ratio;
the associating each video segment with the assigned music segment based on the interval division result includes:
determining the music piece with the longest duration in the at least two allocated music pieces;
detecting whether a first length corresponding to the music piece with the longest duration is smaller than or equal to a second length corresponding to the first subinterval, and if the first length is smaller than or equal to the second length, placing the music piece with the longest duration at the initial position of the first subinterval for association;
detecting whether the first length is smaller than or equal to a third length corresponding to the second subinterval, and if the first length is smaller than or equal to the third length, placing the music piece with the longest duration at the initial position of the second subinterval for association;
and detecting whether the first length is greater than the second length and the third length, and if the first length is greater than the second length and the third length, placing the music piece with the longest duration at the initial position of the first subinterval for association.
6. The multi-source video clipping method of claim 1, wherein after said assigning the current music piece into the corresponding video piece until the at least two music pieces are completely assigned into the corresponding video piece, the method further comprises:
determining whether the video frames in the selected video clips have an overlapped position relation;
when it is determined that the video frames in the video segment have an overlapping positional relationship, the video frames having the overlapping positional relationship are adjusted so that the overlapping portions are staggered.
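The overlap adjustment of claim 6 can be sketched as follows. The claim only requires that overlapping portions be staggered; sorting intervals by start time and pushing each overlapping interval to the right until it begins after its predecessor ends is our assumed policy, and the `(start, end)` tuple model is illustrative.

```python
def stagger_overlaps(intervals):
    """Shift overlapping (start, end) video frame intervals so they
    no longer overlap, in the spirit of claim 6. The rightward-shift
    policy is an assumption, not stated by the patent."""
    out = []
    for start, end in sorted(intervals):
        if out and start < out[-1][1]:
            shift = out[-1][1] - start  # push past the predecessor's end
            start, end = start + shift, end + shift
        out.append((start, end))
    return out
```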
7. The multi-source video clipping method of claim 1, wherein before assigning a corresponding fourth music piece to a video frame comprising a face image in the video piece, the method further comprises:
and detecting whether each video frame in the video segment comprises a face image, determining the video frame comprising the face image, and marking the video frame comprising the face image so as to distinguish the video frame not comprising the face image.
8. The multi-source video clipping method of claim 7, wherein said assigning a corresponding fourth music piece to said video frame comprising a face image comprises:
detecting a first duration of the video frame interval comprising the face image;
traversing all video frames comprising face images, and searching music segments matched with the first time length;
and if the fourth music segment successfully matched with the first time length is found, taking the found fourth music segment as the music segment distributed for the video frame comprising the face image.
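The duration-matching search of claim 8 can be sketched as below. The patent does not define how close a "successful match" must be, so the `tolerance` parameter is an assumption, as is the dictionary data model for music pieces.

```python
def find_matching_piece(first_duration, pieces, tolerance=0.5):
    """Search the music pieces for a 'fourth' piece whose duration
    matches the first duration of a face-image frame interval.
    Returns the closest piece within tolerance, or None."""
    best = None
    for piece in pieces:
        gap = abs(piece["duration"] - first_duration)
        if gap <= tolerance and (best is None or gap < best[0]):
            best = (gap, piece)
    return best[1] if best else None
```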
9. The multi-source video clipping method of claim 8, wherein the method further comprises:
if the fourth music segment successfully matched with the first time length is not found, adjusting the first time length of the video frame interval including the face image;
the adjusting the first duration of the video frame interval including the face image comprises:
when detecting that the second time length of the fourth music segment is longer than the first time length, extending the starting time and/or the ending time of the video frame interval comprising the face image so as to enable the first time length to be matched with the second time length;
and when the second time length is smaller than the first time length, shortening the starting time and/or the ending time of the video frame interval including the face image so as to enable the first time length to be matched with the second time length.
10. The multi-source video clipping method according to any of claims 1 to 9, wherein the first proportion is any value between 15% and 35%.
11. A multi-source video editing apparatus, the apparatus comprising: an acquisition module, a division module, a selection module, a judgment module, a distribution module and an association module; wherein:
the acquisition module is used for acquiring target music;
the dividing module is used for dividing the target music into at least two music pieces;
the selecting module is used for selecting a second number of video clips from at least one source video file according to the first number of the music clips, wherein the first number is matched with the second number;
the judging module is used for judging whether the duration of the current music piece meets the set constraint condition or not;
the distribution module is used for distributing the current music segments to corresponding video segments based on the judgment result until the at least two music segments are completely distributed to the corresponding video segments;
the dividing module is further configured to determine an unallocated maximum duration video frame interval in the video segment, and perform interval division on the maximum duration video frame interval based on a set first proportion;
and the association module is used for associating each video clip with the distributed music clips based on the interval division result to generate a target video file with music rhythm.
12. A multi-source video clipping device comprising a memory, a processor and an executable program stored on the memory and executable by the processor, wherein the steps of the multi-source video clipping method according to any one of claims 1 to 10 are performed when the executable program is executed by the processor.
13. A storage medium having stored thereon an executable program, the executable program when executed by a processor implementing the steps of the multi-source video clipping method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810983670.8A CN109257545B (en) | 2018-08-27 | 2018-08-27 | Multi-source video editing method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109257545A CN109257545A (en) | 2019-01-22 |
CN109257545B true CN109257545B (en) | 2021-04-13 |
Family
ID=65049416
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810983670.8A Active CN109257545B (en) | 2018-08-27 | 2018-08-27 | Multi-source video editing method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109257545B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112235631B (en) | 2019-07-15 | 2022-05-03 | 北京字节跳动网络技术有限公司 | Video processing method and device, electronic equipment and storage medium |
CN110519638B (en) * | 2019-09-06 | 2023-05-16 | Oppo广东移动通信有限公司 | Processing method, processing device, electronic device, and storage medium |
CN112822563A (en) | 2019-11-15 | 2021-05-18 | 北京字节跳动网络技术有限公司 | Method, device, electronic equipment and computer readable medium for generating video |
CN112822541B (en) * | 2019-11-18 | 2022-05-20 | 北京字节跳动网络技术有限公司 | Video generation method and device, electronic equipment and computer readable medium |
CN110913271B (en) * | 2019-11-29 | 2022-01-18 | Oppo广东移动通信有限公司 | Video processing method, mobile terminal and non-volatile computer-readable storage medium |
CN111225274B (en) * | 2019-11-29 | 2021-12-07 | 成都品果科技有限公司 | Photo music video arrangement system based on deep learning |
CN110992993B (en) * | 2019-12-17 | 2022-12-09 | Oppo广东移动通信有限公司 | Video editing method, video editing device, terminal and readable storage medium |
CN114339392B (en) * | 2021-11-12 | 2023-09-12 | 腾讯科技(深圳)有限公司 | Video editing method, device, computer equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003259302A (en) * | 2002-02-28 | 2003-09-12 | Fuji Xerox Co Ltd | Method for automatically producing music video, product including information storage medium for storing information, and program |
CN106649703A (en) * | 2016-12-20 | 2017-05-10 | 中国科学院深圳先进技术研究院 | Method and device for visualizing audio data |
CN107124624A (en) * | 2017-04-21 | 2017-09-01 | 腾讯科技(深圳)有限公司 | The method and apparatus of video data generation |
CN107393569A (en) * | 2017-08-16 | 2017-11-24 | 成都品果科技有限公司 | Audio frequency and video clipping method and device |
CN108028054A (en) * | 2015-09-30 | 2018-05-11 | 苹果公司 | The Voice & Video component of audio /video show to automatically generating synchronizes |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5752585B2 (en) * | 2011-12-16 | 2015-07-22 | 株式会社東芝 | Video processing apparatus, method and program |
- 2018-08-27: application CN201810983670.8A filed in China; granted as CN109257545B, status Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003259302A (en) * | 2002-02-28 | 2003-09-12 | Fuji Xerox Co Ltd | Method for automatically producing music video, product including information storage medium for storing information, and program |
CN108028054A (en) * | 2015-09-30 | 2018-05-11 | 苹果公司 | The Voice & Video component of audio /video show to automatically generating synchronizes |
CN106649703A (en) * | 2016-12-20 | 2017-05-10 | 中国科学院深圳先进技术研究院 | Method and device for visualizing audio data |
CN107124624A (en) * | 2017-04-21 | 2017-09-01 | 腾讯科技(深圳)有限公司 | The method and apparatus of video data generation |
CN107393569A (en) * | 2017-08-16 | 2017-11-24 | 成都品果科技有限公司 | Audio frequency and video clipping method and device |
Also Published As
Publication number | Publication date |
---|---|
CN109257545A (en) | 2019-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109257545B (en) | Multi-source video editing method and device and storage medium | |
CN107393569B (en) | Audio-video clipping method and device | |
CN107483843B (en) | Audio-video matches clipping method and device | |
CN109168084B (en) | Video editing method and device | |
US8319086B1 (en) | Video editing matched to musical beats | |
JP2023099568A (en) | Generating video presentation accompanied by voice | |
US20170060520A1 (en) | Systems and methods for dynamically editable social media | |
US9111519B1 (en) | System and method for generating cuepoints for mixing song data | |
CN104980773B (en) | streaming media processing method and device, terminal and server | |
CN106658226B (en) | Playing method and device | |
CN111274415B (en) | Method, device and computer storage medium for determining replacement video material | |
CN108319413B (en) | Music playing method, device and storage medium | |
CN107450874B (en) | Multimedia data double-screen playing method and system | |
JP2021009666A (en) | Method and device for analyzing data and storage medium | |
US9383965B1 (en) | Media library analyzer | |
US10534777B2 (en) | Systems and methods for continuously detecting and identifying songs in a continuous audio stream | |
KR101648931B1 (en) | Apparatus and method for producing a rhythm game, and computer program for executing the method | |
CN108364338B (en) | Image data processing method and device and electronic equipment | |
WO2016171900A1 (en) | Gapless media generation | |
CN109936762B (en) | Method for synchronously playing similar audio or video files and electronic equipment | |
CN105323652B (en) | Method and device for playing multimedia file | |
CN109729380B (en) | Audio and video playing method and equipment | |
CN113747233B (en) | Music replacement method and device, electronic equipment and storage medium | |
CN111491060B (en) | Information click log and ticket splicing method and device | |
CN106547768B (en) | Media file playing control method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||