CN110519655B - Video editing method, device and storage medium - Google Patents


Info

Publication number
CN110519655B
CN110519655B
Authority
CN
China
Prior art keywords
time point
segment
candidate
determining
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810489728.3A
Other languages
Chinese (zh)
Other versions
CN110519655A (en)
Inventor
杨俊毅
汪锦武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN201810489728.3A priority Critical patent/CN110519655B/en
Publication of CN110519655A publication Critical patent/CN110519655A/en
Application granted granted Critical
Publication of CN110519655B publication Critical patent/CN110519655B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to a video clipping method and apparatus. The method comprises the following steps: determining candidate segments from a video to be processed; determining scene change time points in the video to be processed; and determining the time range of the clip segment corresponding to each candidate segment according to the scene change time points. Because clipping is performed at scene change time points, the integrity of the video content of the clip segments cut from the video can be ensured, avoiding an abrupt, truncated feel for the user.

Description

Video editing method, device and storage medium
Technical Field
The present disclosure relates to the field of video technologies, and in particular, to a video editing method and apparatus.
Background
A short video is a video clip distributed as internet content, generally meaning video content under five minutes long that is spread over new internet media. With the popularization of mobile terminals and ever-faster networks, this short, fast, high-traffic form of content has gradually gained favor with major platforms and users.
In the related art, it is difficult for a short video clipped out of a longer video to preserve the integrity of the video content, so the short video easily gives the user an abrupt, truncated feel. For example, a short video may end while a person is still in mid-sentence.
Disclosure of Invention
In view of the above, the present disclosure provides a video editing method and apparatus.
According to an aspect of the present disclosure, there is provided a video clipping method including:
determining candidate segments from a video to be processed;
determining a scene change time point in the video to be processed;
and determining the time range of the clip segment corresponding to the candidate segment according to the scene change time point.
In one possible implementation, determining candidate segments from a video to be processed includes:
determining a viewpoint segment and/or a segment containing a specified object from the video to be processed;
and determining candidate segments according to the viewpoint segments and/or the segments containing the specified objects.
In a possible implementation manner, determining a candidate segment according to the viewpoint segment and/or the segment containing the specified object includes:
if the viewpoint segment and the segment containing the specified object have the overlapped time period, combining the overlapped viewpoint segment and the segment containing the specified object, and determining the candidate segment.
In one possible implementation, determining a scene change time point in the video to be processed includes:
determining a shot switching time point in the video to be processed;
determining the time range without subtitles in the video to be processed;
the shot cut time point within the subtitle-free time range is taken as a scene change time point.
In a possible implementation manner, determining a time range of a clip segment corresponding to the candidate segment according to the scene change time point includes:
if the duration of the candidate segment is greater than a first duration, determining, in the candidate segment, a first time point at a distance of the first duration from the starting time point of the candidate segment;
and taking the last scene change time point before the first time point as the ending time point of the clip segment corresponding to the candidate segment.
In a possible implementation manner, determining a time range of a clip segment corresponding to the candidate segment according to the scene change time point includes:
if the duration of the candidate segment is greater than or equal to a second duration and less than or equal to a first duration, determining, in the candidate segment, a second time point at a distance of the second duration from the starting time point of the candidate segment and a third time point at a distance of the first duration from the starting time point, wherein the second duration is less than the first duration and the second time point is earlier than the third time point;
And determining the ending time point of the clip segment corresponding to the candidate segment according to the scene change time point between the second time point and the third time point.
In a possible implementation manner, determining an ending time point of a clip segment corresponding to the candidate segment according to a scene transition time point between the second time point and the third time point includes:
and taking the scene change time point with the minimum distance from the ending time point of the candidate segment in the scene change time points between the second time point and the third time point as the ending time point of the clip segment corresponding to the candidate segment.
In a possible implementation manner, determining a time range of a clip segment corresponding to the candidate segment according to the scene change time point includes:
if the duration of the candidate segment is less than the second duration, determining a fourth time point at a distance of the second duration from the starting time point of the candidate segment, wherein the fourth time point is later than the starting time point of the candidate segment;
and taking the first scene change time point after the fourth time point as the ending time point of the clip segment corresponding to the candidate segment.
In one possible implementation, the method further includes:
taking the ratio of the maximum expected duration of the target video to the number of the candidate segments as a first duration.
In one possible implementation, the method further includes:
taking the ratio of the minimum expected duration of the target video to the number of the candidate segments as a second duration.
In one possible implementation, after determining the time range of the clip segment corresponding to the candidate segment, the method further includes:
if the number of the candidate segments is multiple, combining the clip segments corresponding to the candidate segments to obtain a target video;
and if the number of the candidate segments is one, taking the clipped segment corresponding to the candidate segment as the target video.
According to another aspect of the present disclosure, there is provided a video clipping device including:
the device comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for determining candidate segments from a video to be processed;
the second determining module is used for determining a scene change time point in the video to be processed;
and the third determining module is used for determining the time range of the clipping segment corresponding to the candidate segment according to the scene change time point.
In one possible implementation manner, the first determining module includes:
the first determining submodule is used for determining a viewpoint segment and/or a segment containing a specified object from the video to be processed;
and the second determining submodule is used for determining candidate segments according to the viewpoint segments and/or the segments containing the specified objects.
In one possible implementation manner, the second determining submodule is configured to:
if the viewpoint segment and the segment containing the specified object have the overlapped time period, combining the overlapped viewpoint segment and the segment containing the specified object, and determining the candidate segment.
In one possible implementation manner, the second determining module includes:
the third determining submodule is used for determining a shot switching time point in the video to be processed;
the fourth determining submodule is used for determining the time range without subtitles in the video to be processed;
and the fifth determining submodule is used for taking the shot switching time point in the time range without the subtitles as the scene switching time point.
In one possible implementation manner, the third determining module includes:
a sixth determining submodule, configured to determine, if the duration of the candidate segment is greater than the first duration, a first time point in the candidate segment at a distance of the first duration from the start time point of the candidate segment;
A seventh determining sub-module, configured to use a last scene change time point before the first time point as an end time point of a clip segment corresponding to the candidate segment.
In one possible implementation manner, the third determining module includes:
an eighth determining submodule, configured to determine, in the candidate segment, if the duration of the candidate segment is greater than or equal to the second duration and less than or equal to the first duration, a second time point at a distance of the second duration from the starting time point of the candidate segment and a third time point at a distance of the first duration from the starting time point, where the second duration is less than the first duration and the second time point is earlier than the third time point;
a ninth determining sub-module, configured to determine, according to a scene transition time point between the second time point and the third time point, an end time point of the clip segment corresponding to the candidate segment.
In one possible implementation, the ninth determining sub-module is configured to:
and taking the scene transition time point with the minimum distance from the end time point of the candidate segment in the scene transition time points between the second time point and the third time point as the end time point of the clip segment corresponding to the candidate segment.
In one possible implementation manner, the third determining module includes:
a tenth determining submodule, configured to determine, if the duration of the candidate segment is less than the second duration, a fourth time point at a distance of the second duration from the start time point of the candidate segment, where the fourth time point is later than the start time point of the candidate segment;
an eleventh determining sub-module, configured to take a first scene change time point after the fourth time point as an end time point of a clip segment corresponding to the candidate segment.
In one possible implementation, the apparatus further includes:
and the fourth determining module is used for taking the ratio of the maximum expected duration of the target video to the number of the candidate segments as the first duration.
In one possible implementation, the apparatus further includes:
and the fifth determining module is used for taking the ratio of the minimum expected duration of the target video to the number of the candidate segments as the second duration.
In one possible implementation, the apparatus further includes:
a sixth determining module, configured to, if the number of the candidate segments is multiple, merge clip segments corresponding to the candidate segments to obtain a target video; and if the number of the candidate segments is one, taking the clipped segment corresponding to the candidate segment as the target video.
According to another aspect of the present disclosure, there is provided a video clipping device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
According to the video clipping method and device, candidate segments are determined from the video to be processed, scene change time points in the video are determined, and the time range of the clip segment corresponding to each candidate segment is determined according to the scene change time points. Because clipping is performed at scene change time points, the integrity of the video content of the clip segments cut from the video can be ensured, avoiding an abrupt, truncated feel for the user.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a video clipping method according to an embodiment of the present disclosure.
FIG. 2 shows an exemplary flowchart of video clipping method step S12 according to an embodiment of the present disclosure.
FIG. 3 shows an exemplary flowchart of video clipping method step S13 according to an embodiment of the present disclosure.
FIG. 4 shows another exemplary flowchart of video clipping method step S13 according to an embodiment of the present disclosure.
FIG. 5 shows another exemplary flowchart of video clipping method step S13 according to an embodiment of the present disclosure.
FIG. 6 shows an exemplary flowchart of video clipping method step S11 according to an embodiment of the present disclosure.
Fig. 7 shows a schematic diagram of a video clipping method according to an embodiment of the present disclosure.
FIG. 8 shows a block diagram of a video clipping device according to an embodiment of the present disclosure.
FIG. 9 shows an exemplary block diagram of a video clipping device according to an embodiment of the present disclosure.
FIG. 10 is a block diagram illustrating an apparatus 800 for video clips in accordance with an example embodiment.
FIG. 11 is a block diagram illustrating an apparatus 1900 for video clips in accordance with an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
FIG. 1 shows a flow diagram of a video clipping method according to an embodiment of the present disclosure. As shown in fig. 1, the method includes steps S11 through S13.
In step S11, candidate segments are determined from the video to be processed.
In this embodiment, the video to be processed may be any video that needs to be subjected to video clipping. For example, the video to be processed may be a movie video or a television play video.
In a possible implementation manner, the N segments with the highest play count in the video to be processed may be used as candidate segments, where N is a positive integer.
In step S12, a scene change time point in the video to be processed is determined.
The scene change time point in the video to be processed may refer to a time point corresponding to a frame of the scene change in the video to be processed.
In step S13, the time range of the clip segment corresponding to the candidate segment is determined according to the scene change time point.
In a possible implementation manner, determining a time range of a clip segment corresponding to a candidate segment according to a scene change time point includes: and determining the ending time point of the clip segment corresponding to the candidate segment according to the scene change time point.
In one possible implementation, the starting time point of the candidate segment may be taken as the starting time point of the clip segment corresponding to the candidate segment.
In another possible implementation manner, the last scene transition time point before the start time point of the candidate segment may be taken as the start time point of the clip segment corresponding to the candidate segment.
In another possible implementation manner, the last shot cut time point before the start time point of the candidate segment may be taken as the start time point of the clip segment corresponding to the candidate segment.
In another possible implementation manner, a first scene transition time point after the start time point of the candidate segment may be taken as the start time point of the clip segment corresponding to the candidate segment.
In another possible implementation, the first shot-cut time point after the start time point of the candidate segment may be taken as the start time point of the clip segment corresponding to the candidate segment.
In this embodiment, candidate segments are determined from the video to be processed, scene change time points in the video are determined, and the time range of the clip segment corresponding to each candidate segment is determined according to the scene change time points. Because clipping is performed at scene change time points, the integrity of the video content of the clip segments can be ensured, avoiding an abrupt, truncated feel for the user.
FIG. 2 shows an exemplary flowchart of video clipping method step S12 according to an embodiment of the present disclosure. As shown in fig. 2, step S12 may include steps S121 through S123.
In step S121, a shot cut time point in the video to be processed is determined.
In this embodiment, a shot-cut time point in a video to be processed may be determined by using a related technique. For example, FFmpeg may be used to determine shot cut time points in the video to be processed. In one possible implementation manner, the video frames corresponding to the shot-cut time point are all key frames.
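The patent names FFmpeg but does not specify an invocation. As one illustrative possibility (the command line, the `0.4` scene threshold, and the helper name below are assumptions for this sketch, not part of the disclosure), shot cuts can be selected with FFmpeg's `scene` expression and the timestamps read back from its `showinfo` log:

```python
import re

# A common (assumed) way to log scene-change frames with FFmpeg:
#   ffmpeg -i input.mp4 -vf "select='gt(scene,0.4)',showinfo" -f null - 2> log.txt
# Each selected frame produces a showinfo line containing "pts_time:<seconds>".
PTS_RE = re.compile(r"pts_time:(\d+(?:\.\d+)?)")

def parse_shot_cuts(showinfo_log: str) -> list[float]:
    """Extract shot-cut timestamps (in seconds) from an ffmpeg showinfo log."""
    return [float(m.group(1)) for m in PTS_RE.finditer(showinfo_log)]

sample_log = (
    "[Parsed_showinfo_1] n:0 pts:307200 pts_time:12.0 ...\n"
    "[Parsed_showinfo_1] n:1 pts:768000 pts_time:30.5 ...\n"
)
print(parse_shot_cuts(sample_log))  # [12.0, 30.5]
```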
In step S122, a time range without subtitles in the video to be processed is determined.
In this embodiment, according to a video frame that does not include subtitles in the video to be processed, a time range without subtitles in the video to be processed can be determined.
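However the subtitle spans are obtained (the patent leaves the detection method open), the subtitle-free time ranges are their complement within the video's duration. A minimal sketch, assuming the subtitle spans are already available as sorted, non-overlapping intervals:

```python
def subtitle_free_ranges(subtitle_spans, duration):
    """Return the time ranges of [0, duration] not covered by any subtitle span.

    subtitle_spans: sorted, non-overlapping (start, end) pairs in seconds.
    """
    free, cursor = [], 0.0
    for start, end in subtitle_spans:
        if start > cursor:              # gap before this subtitle span
            free.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:               # tail after the last subtitle span
        free.append((cursor, duration))
    return free

print(subtitle_free_ranges([(3.0, 8.0), (15.0, 20.0)], 30.0))
# [(0.0, 3.0), (8.0, 15.0), (20.0, 30.0)]
```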
It should be noted that the present embodiment does not limit the execution sequence of step S121 and step S122, as long as step S121 and step S122 are executed before step S123. For example, step S121 may be executed first and then step S122 may be executed, or step S122 may be executed first and then step S121 may be executed.
In step S123, the shot cut time point in the subtitle-free time range is taken as the scene change time point.
Fig. 7 shows a schematic diagram of a video clipping method according to an embodiment of the present disclosure. As shown in fig. 7, if a shot cut time point is within the time range without subtitles, the shot cut time point can be regarded as a scene change time point.
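The rule illustrated in fig. 7 — keep a shot-cut time point only if it falls inside a subtitle-free range — can be sketched as a small filter (function and variable names are illustrative, not from the patent):

```python
def scene_change_points(shot_cuts, no_subtitle_ranges):
    """Keep only the shot-cut time points that lie in a subtitle-free range."""
    return [t for t in shot_cuts
            if any(start <= t <= end for start, end in no_subtitle_ranges)]

cuts = [2.0, 5.5, 12.0, 21.0]
free = [(0.0, 3.0), (10.0, 25.0)]
print(scene_change_points(cuts, free))  # [2.0, 12.0, 21.0]
```

Here 5.5 s is dropped because it is a shot cut that occurs while subtitles are on screen, so cutting there could truncate a line of dialogue.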
FIG. 3 shows an exemplary flowchart of video clipping method step S13 according to an embodiment of the present disclosure. As shown in fig. 3, step S13 may include step S131 and step S132.
In step S131, if the duration of the candidate segment is greater than the first duration, a first time point that is a first duration from the start time point of the candidate segment is determined in the candidate segment.
In step S132, the last scene change time point before the first time point is taken as the end time point of the clip segment corresponding to the candidate segment.
In fig. 7, the maximum length represents the first duration, and the minimum length represents the second duration. If the duration of the first candidate segment is greater than the first duration, a first time point at a distance of the first duration from the start time point of the first candidate segment may be determined in the first candidate segment, and the last scene change time point before the first time point may be used as the end time point of the clip segment corresponding to the candidate segment.
In this example, if the duration of the candidate segment is greater than the first duration, a first time point at a distance of the first duration from the start time point is determined in the candidate segment, and the last scene change time point before the first time point is taken as the end time point of the corresponding clip segment. This prevents the clip segment from exceeding the first duration, and therefore prevents the target video assembled from the clip segments from exceeding the maximum expected duration. Because clipping happens at a scene change time point, the integrity of the clip segment's video content is preserved and an abrupt, truncated feel is avoided.
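The long-candidate rule of steps S131-S132 reduces to picking the last scene change strictly before the first time point. A sketch (function name and the fallback of returning `None` when no scene change qualifies are assumptions of this illustration):

```python
def clip_end_long(start, scene_changes, first_duration):
    """Candidate longer than first_duration: end the clip at the last
    scene-change time point before start + first_duration."""
    first_point = start + first_duration
    eligible = [t for t in scene_changes if t < first_point]
    return max(eligible) if eligible else None

# Candidate starts at 10 s, first duration 60 s -> first time point at 70 s;
# the last scene change before 70 s is at 66 s.
print(clip_end_long(10.0, [20.0, 45.0, 66.0, 75.0], 60.0))  # 66.0
```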
FIG. 4 shows another exemplary flowchart of video clipping method step S13 according to an embodiment of the present disclosure. As shown in fig. 4, step S13 may include step S133 and step S134.
In step S133, if the duration of the candidate segment is greater than or equal to the second duration and less than or equal to the first duration, a second time point at a distance of the second duration from the starting time point of the candidate segment and a third time point at a distance of the first duration from the starting time point are determined in the candidate segment, where the second duration is less than the first duration and the second time point is earlier than the third time point.
In step S134, an end time point of the clip segment corresponding to the candidate segment is determined according to the scene change time point between the second time point and the third time point.
In a possible implementation manner, determining an end time point of a clip segment corresponding to a candidate segment according to a scene transition time point between the second time point and the third time point includes: and taking the scene change time point with the minimum distance from the end time point of the candidate segment in the scene change time points between the second time point and the third time point as the end time point of the clip segment corresponding to the candidate segment. As shown in fig. 7, for the third candidate segment, a scene transition time point having the smallest distance from the end time point of the candidate segment among scene transition time points between the second time point and the third time point may be taken as the end time point of the clip segment corresponding to the candidate segment.
In another possible implementation manner, if the duration of the candidate segment is greater than or equal to the second duration and less than or equal to the first duration, the last scene change time point before the ending time point of the candidate segment is taken as the ending time point of the clip segment corresponding to the candidate segment. As shown in fig. 7, for the third candidate segment and the fourth candidate segment, the last scene change time point before the end time point of the candidate segment may be taken as the end time point of the clip segment corresponding to the candidate segment.
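The first selection rule above — among the scene changes between the second and third time points, pick the one nearest the candidate's own end — can be sketched as follows (names are illustrative; returning `None` when the window is empty is an assumption of this sketch):

```python
def clip_end_mid(start, end, scene_changes, first_duration, second_duration):
    """Candidate with second_duration <= length <= first_duration: among the
    scene changes between start+second_duration and start+first_duration,
    return the one closest to the candidate's end time point."""
    second_point = start + second_duration
    third_point = start + first_duration
    window = [t for t in scene_changes if second_point <= t <= third_point]
    return min(window, key=lambda t: abs(t - end)) if window else None

# Candidate spans 0-50 s; window is [36 s, 60 s]; of 40, 48 and 58 s,
# the 48 s scene change is closest to the 50 s end point.
print(clip_end_mid(0.0, 50.0, [10.0, 40.0, 48.0, 58.0], 60.0, 36.0))  # 48.0
```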
FIG. 5 shows another exemplary flowchart of video clipping method step S13 according to an embodiment of the present disclosure. As shown in fig. 5, step S13 may include step S135 and step S136.
In step S135, if the duration of the candidate segment is less than the second duration, a fourth time point which is distant from the starting time point of the candidate segment by the second duration is determined, wherein the fourth time point is later than the starting time point of the candidate segment.
In step S136, the first scene change time point after the fourth time point is taken as the end time point of the clip segment corresponding to the candidate segment.
In this example, if the duration of the candidate segment is less than the second duration, a fourth time point at a distance of the second duration from the start time point of the candidate segment is determined, the fourth time point being later than the candidate segment's start time point, and the first scene change time point after the fourth time point is taken as the end time point of the corresponding clip segment. This prevents the clip segment from being shorter than the second duration, and therefore prevents the target video assembled from the clip segments from falling below the minimum expected duration. Because clipping happens at a scene change time point, the integrity of the clip segment's video content is preserved and an abrupt, truncated feel is avoided.
In another possible implementation manner, if the duration of the candidate segment is less than the second duration, the candidate segment is taken as the clip segment corresponding to the candidate segment. As shown in fig. 7, if the duration of the second candidate segment is less than the second duration, the candidate segment may be regarded as the corresponding clip segment of the candidate segment.
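The short-candidate rule of steps S135-S136 is the mirror image of the long-candidate case: take the first scene change after the fourth time point. A sketch under the same assumptions as before:

```python
def clip_end_short(start, scene_changes, second_duration):
    """Candidate shorter than second_duration: end the clip at the first
    scene-change time point after start + second_duration (the fourth point)."""
    fourth_point = start + second_duration
    later = [t for t in scene_changes if t > fourth_point]
    return min(later) if later else None

# Candidate starts at 100 s, second duration 36 s -> fourth point at 136 s;
# the first scene change after it is at 140 s.
print(clip_end_short(100.0, [120.0, 140.0, 160.0], 36.0))  # 140.0
```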
In one possible implementation, the method further includes: taking the ratio of the maximum expected duration of the target video to the number of the candidate segments as the first duration. For example, if the maximum expected duration of the target video is 5 minutes and the number of candidate segments is 5, the first duration is 60 seconds.
According to this implementation, using the ratio of the maximum expected duration of the target video to the number of the candidate segments as the first duration prevents the duration of the generated target video from exceeding the maximum expected duration.
In one possible implementation, the method further includes: taking the ratio of the minimum expected duration of the target video to the number of the candidate segments as the second duration. For example, if the minimum expected duration of the target video is 3 minutes and the number of candidate segments is 5, the second duration is 36 seconds.
According to this implementation, using the ratio of the minimum expected duration of the target video to the number of the candidate segments as the second duration prevents the duration of the generated target video from falling below the minimum expected duration.
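The two per-segment bounds are simple ratios; with the worked numbers from the examples above (5-minute maximum, 3-minute minimum, five candidate segments):

```python
# Expected durations of the target video, in seconds, and the segment count.
max_expected, min_expected, n_candidates = 5 * 60, 3 * 60, 5

first_duration = max_expected / n_candidates   # per-segment upper bound
second_duration = min_expected / n_candidates  # per-segment lower bound

print(first_duration, second_duration)  # 60.0 36.0
```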
In one possible implementation, determining candidate segments from a video to be processed includes: determining a viewpoint segment and a segment containing a specified object from a video to be processed; and determining candidate segments according to the viewpoint segments and the segments containing the specified objects.
In another possible implementation manner, determining candidate segments from the video to be processed includes: determining a viewpoint segment from a video to be processed; and determining candidate segments according to the viewpoint segments.
In another possible implementation manner, determining candidate segments from the video to be processed includes: determining a segment containing a specified object from a video to be processed; according to the segment containing the specified object, a candidate segment is determined.
FIG. 6 shows an exemplary flowchart of step S11 of the video clipping method according to an embodiment of the present disclosure. As shown in fig. 6, step S11 may include step S111 and step S112.
In step S111, a viewpoint segment and/or a segment containing a specified object is determined from the video to be processed.
In one possible implementation, the viewpoint segment of the video to be processed may be a segment of highlight content marked in the video to be processed by an uploader of the video to be processed.
In another possible implementation manner, M segments with the highest playing amount in the video to be processed may be used as the viewpoint segments of the video to be processed, where M is a positive integer.
It should be noted that, although the above two implementations describe manners of determining the viewpoint segment from the video to be processed, those skilled in the art will understand that the disclosure is not limited thereto. The manner of determining the viewpoint segment from the video to be processed may be set flexibly according to the actual application scenario and/or personal preference.
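The second manner above, selecting the M segments with the highest playing amount, can be sketched as a simple top-M selection (illustrative only; the function name and the parallel-list data layout are assumptions):

```python
import heapq

def top_viewpoint_segments(segments, play_counts, m):
    """Pick the M segments with the highest play count as viewpoint
    segments. `segments` is a list of (start, end) tuples in seconds and
    `play_counts` is a parallel list of play counts."""
    # nlargest compares (count, segment) tuples, so ordering is by count.
    return heapq.nlargest(m, zip(play_counts, segments))

print(top_viewpoint_segments([(0, 10), (10, 20), (20, 30)], [7, 42, 19], 2))
# → [(42, (10, 20)), (19, (20, 30))]
```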
In one possible implementation, the specified object may include an object determined from the clip request. In this implementation, the user may determine the specified object according to the actual clipping requirements.
In another possible implementation, the specified object may include a popular (trending) object. For example, popular objects may include popular stars and the like.
In step S112, candidate segments are determined based on the viewpoint segment and/or the segment containing the designated object.
In one possible implementation, determining candidate segments according to the viewpoint segment and the segment containing the specified object includes: if a viewpoint segment and a segment containing the specified object have an overlapping time period, combining the two into one candidate segment. For example, if a viewpoint segment spans 5 seconds to 60 seconds and a segment containing the specified object spans 20 seconds to 80 seconds, the two have an overlapping time period; combining them yields a candidate segment spanning 5 seconds to 80 seconds.
In one possible implementation, determining candidate segments according to the viewpoint segment and the segment containing the specified object includes: if a viewpoint segment has no time period overlapping with any segment containing the specified object, taking that viewpoint segment as a candidate segment.
In one possible implementation, determining candidate segments according to the viewpoint segment and the segment containing the specified object includes: if a segment containing the specified object has no time period overlapping with any viewpoint segment, taking that segment as a candidate segment.
This embodiment determines candidate segments from both the viewpoint segments and the segments containing the specified object, which mitigates the problem of an insufficient number of viewpoint segments.
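The three overlap rules above amount to a standard interval merge over the union of both segment lists. A minimal sketch (names and the tuple representation are assumptions; merging chains of overlaps in one pass is an implementation choice, not stated in the disclosure):

```python
def merge_candidates(viewpoint_segments, object_segments):
    """Combine viewpoint segments and segments containing the specified
    object: segments with an overlapping time period are merged into one
    candidate; segments overlapping nothing become candidates as-is.
    Each segment is an (start, end) tuple in seconds."""
    segments = sorted(viewpoint_segments + object_segments)
    candidates = []
    for start, end in segments:
        if candidates and start <= candidates[-1][1]:  # overlaps the previous one
            candidates[-1][1] = max(candidates[-1][1], end)
        else:
            candidates.append([start, end])
    return [tuple(c) for c in candidates]

# The example from the text: viewpoint 5–60 s overlaps object 20–80 s,
# so they combine into a single candidate spanning 5–80 s.
print(merge_candidates([(5, 60)], [(20, 80)]))  # → [(5, 80)]
```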
In one possible implementation, after determining the time range of the clip segment corresponding to the candidate segment, the method further includes: if the number of the candidate segments is multiple, combining the clip segments corresponding to the candidate segments to obtain a target video; and if the number of the candidate segments is one, taking the clip segment corresponding to the candidate segment as the target video.
In another possible implementation, each clip segment corresponding to a candidate segment may instead be used as a separate target video.
FIG. 8 shows a block diagram of a video clipping device according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus includes: a first determining module 81, configured to determine a candidate segment from a video to be processed; a second determining module 82, configured to determine a scene change time point in the video to be processed; and a third determining module 83, configured to determine, according to the scene change time point, a time range of the clip segment corresponding to the candidate segment.
FIG. 9 shows an exemplary block diagram of a video clipping device according to an embodiment of the present disclosure. As shown in fig. 9:
in one possible implementation, the first determining module 81 includes: a first determining submodule 811, configured to determine a viewpoint segment and/or a segment containing a specified object from the video to be processed; and a second determining submodule 812, configured to determine candidate segments according to the viewpoint segment and/or the segment containing the specified object.
In one possible implementation, the second determining submodule 812 is configured to: if the viewpoint segment and the segment containing the specified object have the overlapped time period, combining the overlapped viewpoint segment and the segment containing the specified object, and determining the candidate segment.
In one possible implementation, the second determining module 82 includes: a third determining submodule 821, configured to determine a shot switching time point in the video to be processed; a fourth determining submodule 822, configured to determine a time range without subtitles in the video to be processed; the fifth determination sub-module 823 is configured to set the shot cut time point in the subtitle-free time range as the scene change time point.
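The cooperation of the third, fourth, and fifth determining submodules, keeping only shot-cut time points that fall inside a subtitle-free time range, can be sketched as follows (illustrative only; the function and parameter names are assumptions):

```python
def scene_change_points(shot_cuts, subtitle_free_ranges):
    """Treat a shot-cut time point as a scene change time point only if it
    lies inside a subtitle-free time range, so that a clip boundary never
    falls in the middle of a line of dialogue. Times are in seconds."""
    return [t for t in shot_cuts
            if any(lo <= t <= hi for lo, hi in subtitle_free_ranges)]

# Cuts at 12 s and 58 s fall inside subtitle-free ranges; the cut at 35 s
# occurs while subtitles are on screen and is discarded.
print(scene_change_points([12, 35, 58], [(0, 20), (50, 70)]))  # → [12, 58]
```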
In one possible implementation, the third determining module 83 includes: a sixth determining submodule 831, configured to determine, if the duration of the candidate segment is greater than the first duration, a first time point that is a distance of the first duration from the start time point of the candidate segment in the candidate segment; a seventh determining sub-module 832 for determining a last scene change time point before the first time point as an ending time point of the clip segment corresponding to the candidate segment.
In one possible implementation, the third determining module 83 includes: an eighth determining submodule 833 configured to determine, in the candidate segment, a second time point that is apart from the start time point of the candidate segment by the second duration and a third time point that is apart from the start time point of the candidate segment by the first duration if the duration of the candidate segment is greater than or equal to the second duration and less than or equal to the first duration, where the second duration is less than the first duration and the second time point is earlier than the third time point; a ninth determining sub-module 834 for determining an ending time point of the clip segment corresponding to the candidate segment according to the scene transition time point between the second time point and the third time point.
In one possible implementation, the ninth determining submodule 834 is configured to: and taking the scene change time point with the minimum distance from the ending time point of the candidate segment in the scene change time points between the second time point and the third time point as the ending time point of the clip segment corresponding to the candidate segment.
In one possible implementation, the third determining module 83 includes: a tenth determining submodule 835, configured to determine, if the duration of the candidate segment is smaller than the second duration, a fourth time point which is distant from the start time point of the candidate segment by the second duration, where the fourth time point is later than the start time point of the candidate segment; an eleventh determining submodule 836 configured to take the first scene change time point after the fourth time point as the ending time point of the clip segment corresponding to the candidate segment.
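The three cases handled by the sixth through eleventh determining submodules can be sketched together in one function (an illustrative sketch assuming scene change time points are given as a sorted list of seconds; the function name, parameters, and the `None` fallback when no suitable scene change exists are assumptions):

```python
import bisect

def clip_end_time(start, end, scene_changes, first_dur, second_dur):
    """Pick the ending time point of the clip segment for a candidate
    segment (start, end), given sorted scene change time points:
    - duration > first_dur: last scene change before start + first_dur;
    - second_dur <= duration <= first_dur: the scene change between
      start + second_dur and start + first_dur closest to the candidate's
      own ending time point;
    - duration < second_dur: first scene change after start + second_dur."""
    duration = end - start
    if duration > first_dur:
        first_point = start + first_dur
        i = bisect.bisect_left(scene_changes, first_point)
        return scene_changes[i - 1] if i > 0 else None
    if duration >= second_dur:
        lo, hi = start + second_dur, start + first_dur
        window = [t for t in scene_changes if lo <= t <= hi]
        return min(window, key=lambda t: abs(t - end)) if window else None
    fourth_point = start + second_dur
    i = bisect.bisect_right(scene_changes, fourth_point)
    return scene_changes[i] if i < len(scene_changes) else None

# A 100 s candidate with first_dur = 60 s ends at the last scene change
# before the 60 s mark.
print(clip_end_time(0, 100, [30, 55, 70, 90], 60, 36))  # → 55
```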
In one possible implementation, the apparatus further includes: a fourth determining module 84, configured to use a ratio of the maximum expected duration of the target video to the number of candidate segments as the first duration.
In one possible implementation, the apparatus further includes: and a fifth determining module 85, configured to use a ratio of the minimum expected duration of the target video to the number of candidate segments as the second duration.
In one possible implementation, the apparatus further includes: a sixth determining module 86, configured to, if the number of the candidate segments is multiple, merge clip segments corresponding to the candidate segments to obtain a target video; and if the number of the candidate segments is one, taking the clip segment corresponding to the candidate segment as the target video.
In this embodiment, candidate segments are determined from the video to be processed, scene change time points in the video are determined, and the time range of the clip segment corresponding to each candidate segment is determined according to the scene change time points. Because clipping is performed at scene change time points, the integrity of the video content of each clipped segment can be ensured, avoiding a sense of jumping or truncation for the user.
FIG. 10 is a block diagram illustrating an apparatus 800 for video clipping in accordance with an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 10, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; it may also detect a change in the position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
FIG. 11 is a block diagram illustrating an apparatus 1900 for video clipping in accordance with an example embodiment. For example, the apparatus 1900 may be provided as a server. Referring to FIG. 11, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by memory 1932, for storing instructions executable by the processing component 1922, e.g., applications. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (22)

1. A video clipping method, comprising:
determining candidate segments from a video to be processed;
determining a scene change time point in the video to be processed, wherein the scene change time point is a time point corresponding to a frame of scene change in the video to be processed;
determining the time range of the clipping segment corresponding to the candidate segment according to the scene change time point;
determining a time range of a clip segment corresponding to the candidate segment according to the scene change time point, wherein the determining includes:
if the duration of the candidate segment is greater than a first duration, determining a first time point which is away from the starting time point of the candidate segment by the first duration in the candidate segment;
And taking the last scene change time point before the first time point as the ending time point of the clip segment corresponding to the candidate segment.
2. The method of claim 1, wherein determining candidate segments from the video to be processed comprises:
determining a viewpoint segment and/or a segment containing a specified object from the video to be processed;
and determining candidate segments according to the viewpoint segments and/or the segments containing the specified objects.
3. The method of claim 2, wherein determining candidate segments from the viewpoint segments and/or the segments containing the specified object comprises:
if the viewpoint segment and the segment containing the specified object have the overlapped time period, combining the overlapped viewpoint segment and the segment containing the specified object, and determining the candidate segment.
4. The method of claim 1, wherein determining a scene change time point in the video to be processed comprises:
determining a shot switching time point in the video to be processed;
determining the time range without subtitles in the video to be processed;
the shot cut time point within the subtitle-free time range is taken as a scene change time point.
5. The method of claim 1, wherein determining a time range of a clip segment corresponding to the candidate segment according to the scene transition time point comprises:
if the duration of the candidate segment is greater than or equal to a second duration and less than or equal to a first duration, determining a second time point which is away from the starting time point of the candidate segment by the second duration and a third time point which is away from the starting time point of the candidate segment by the first duration in the candidate segment, wherein the second duration is less than the first duration, and the second time point is earlier than the third time point;
and determining the ending time point of the clip segment corresponding to the candidate segment according to the scene change time point between the second time point and the third time point.
6. The method of claim 5, wherein determining an ending time point of a clip segment corresponding to the candidate segment according to a scene transition time point between the second time point and the third time point comprises:
and taking the scene transition time point with the minimum distance from the end time point of the candidate segment in the scene transition time points between the second time point and the third time point as the end time point of the clip segment corresponding to the candidate segment.
7. The method of claim 1, wherein determining a time range of a clip segment corresponding to the candidate segment according to the scene change time point comprises:
if the duration of the candidate segment is less than the second duration, determining a fourth time point which is away from the starting time point of the candidate segment by the second duration, wherein the fourth time point is later than the starting time point of the candidate segment;
and taking the first scene change time point after the fourth time point as the ending time point of the clip segment corresponding to the candidate segment.
8. The method according to claim 1 or 5, characterized in that the method further comprises:
and taking the ratio of the maximum expected time length of the target video to the number of the candidate segments as a first time length.
9. The method according to any one of claims 5 to 7, further comprising:
and taking the ratio of the minimum expected time length of the target video to the number of the candidate segments as a second time length.
10. The method of claim 1, wherein after determining the time range of the clip segment corresponding to the candidate segment, the method further comprises:
If the number of the candidate segments is multiple, combining the clip segments corresponding to the candidate segments to obtain a target video;
and if the number of the candidate segments is one, taking the clipped segment corresponding to the candidate segment as the target video.
11. A video clipping apparatus, comprising:
the device comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for determining candidate segments from a video to be processed;
a second determining module, configured to determine a scene change time point in the video to be processed, where the scene change time point is a time point corresponding to a frame of a scene change in the video to be processed;
a third determining module, configured to determine, according to the scene change time point, a time range of a clip segment corresponding to the candidate segment;
wherein the third determining module comprises:
a sixth determining submodule, configured to determine, if the duration of the candidate segment is greater than the first duration, a first time point that is a distance from a start time point of the candidate segment by the first duration in the candidate segment;
a seventh determining sub-module, configured to use a last scene change time point before the first time point as an end time point of a clip segment corresponding to the candidate segment.
12. The apparatus of claim 11, wherein the first determining module comprises:
the first determining submodule is used for determining a viewpoint segment and/or a segment containing a specified object from the video to be processed;
and the second determining submodule is used for determining candidate segments according to the viewpoint segment and/or the segment containing the specified object.
13. The apparatus of claim 12, wherein the second determination submodule is configured to:
if the viewpoint segment and the segment containing the specified object have an overlapping time period, merging the overlapping viewpoint segment and segment containing the specified object to determine the candidate segment.
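One plausible reading of this merging step is a standard interval union: segments are modelled as (start, end) pairs, and any viewpoint segment that overlaps a specified-object segment collapses into a single candidate interval. The sketch below is illustrative only; the representation and names are not taken from the patent.

```python
def merge_overlapping(segments):
    """Merge (start, end) intervals that overlap in time into candidate segments.

    segments: iterable of (start, end) pairs, in any order.
    Returns a sorted list of non-overlapping merged intervals.
    """
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous candidate: extend its end point.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

For example, a viewpoint segment (10, 20) overlapping an object segment (15, 30) becomes the single candidate (10, 30), while a disjoint segment (40, 50) stays separate.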
14. The apparatus of claim 11, wherein the second determining module comprises:
a third determining submodule, configured to determine shot switching time points in the video to be processed;
a fourth determining submodule, configured to determine time ranges without subtitles in the video to be processed;
and a fifth determining submodule, configured to take a shot switching time point within a time range without subtitles as the scene change time point.
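Taken together, the three submodules act as a filter: a shot switch only counts as a scene change if it falls inside a subtitle-free range. A minimal sketch of that filter, with hypothetical names and data shapes (time points as numbers, ranges as inclusive (start, end) pairs):

```python
def scene_change_points(shot_cuts, subtitle_free_ranges):
    """Keep only shot switching time points that lie in a subtitle-free range.

    shot_cuts: iterable of shot switching time points (seconds).
    subtitle_free_ranges: iterable of (start, end) ranges without subtitles.
    """
    return [t for t in shot_cuts
            if any(lo <= t <= hi for lo, hi in subtitle_free_ranges)]
```

So a cut at 5 s is discarded if subtitles are on screen then, while cuts at 1 s and 9 s inside subtitle-free windows survive as scene change time points.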
15. The apparatus of claim 11, wherein the third determining module comprises:
an eighth determining submodule, configured to determine, in the candidate segment, if the duration of the candidate segment is greater than or equal to a second duration and less than or equal to the first duration, a second time point that is the second duration away from the start time point of the candidate segment and a third time point that is the first duration away from the start time point, where the second duration is less than the first duration, and the second time point is earlier than the third time point;
and a ninth determining submodule, configured to determine, according to a scene change time point between the second time point and the third time point, the end time point of the clip segment corresponding to the candidate segment.
16. The apparatus of claim 15, wherein the ninth determination submodule is configured to:
and taking, from among the scene change time points between the second time point and the third time point, the scene change time point closest to the end time point of the candidate segment as the end time point of the clip segment corresponding to the candidate segment.
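The selection in claim 16 can be sketched as a windowed nearest-neighbour search. This is an illustrative Python sketch with hypothetical names; the patent does not prescribe a data representation, and the window bounds are treated as inclusive here:

```python
def closest_scene_change(second_tp, third_tp, candidate_end, scene_changes):
    """Among scene changes in [second_tp, third_tp], pick the one closest to
    the candidate segment's own end time point.

    Returns None if the window contains no scene change.
    """
    window = [t for t in scene_changes if second_tp <= t <= third_tp]
    if not window:
        return None
    return min(window, key=lambda t: abs(t - candidate_end))
```

For a window of 20-30 s, scene changes at 22 s, 26 s and 29 s, and a candidate ending at 27 s, the 26 s change is chosen as the clip's end time point.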
17. The apparatus of claim 11, wherein the third determining module comprises:
a tenth determining submodule, configured to determine, if the duration of the candidate segment is less than the second duration, a fourth time point that is the second duration away from the start time point of the candidate segment, where the fourth time point is later than the start time point of the candidate segment;
and an eleventh determining submodule, configured to take the first scene change time point after the fourth time point as the end time point of the clip segment corresponding to the candidate segment.
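For a candidate shorter than the second duration, the claim extends the clip forward to the next scene change. A minimal sketch of that lookup, with hypothetical names; "after the fourth time point" is read here as strictly after:

```python
import bisect

def end_point_for_short_candidate(start, second_duration, scene_changes):
    """First scene change time point strictly after start + second_duration.

    scene_changes: sorted list of scene change time points (seconds).
    Returns None if there is no later scene change.
    """
    fourth_time_point = start + second_duration
    # bisect_right skips any scene change exactly at the fourth time point.
    i = bisect.bisect_right(scene_changes, fourth_time_point)
    return scene_changes[i] if i < len(scene_changes) else None
```

With scene changes at 3 s, 8 s and 14 s and a fourth time point of 6 s, the clip would end at the 8 s scene change.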
18. The apparatus of claim 11 or 15, further comprising:
and a fourth determining module, configured to take the ratio of the maximum expected duration of the target video to the number of candidate segments as the first duration.
19. The apparatus of any one of claims 15 to 17, further comprising:
and a fifth determining module, configured to take the ratio of the minimum expected duration of the target video to the number of candidate segments as the second duration.
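Claims 18 and 19 derive the two duration thresholds by dividing the expected length of the target video evenly over the candidates. As a worked sketch (names are illustrative, units are seconds):

```python
def clip_thresholds(max_expected, min_expected, num_candidates):
    """first duration  = max expected target duration / number of candidates
    second duration = min expected target duration / number of candidates
    """
    first_duration = max_expected / num_candidates
    second_duration = min_expected / num_candidates
    return first_duration, second_duration
```

For a target video expected to run 30-60 s built from 3 candidates, each clip segment is steered toward a length between 10 s and 20 s.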
20. The apparatus of claim 11, further comprising:
a sixth determining module, configured to merge the clip segments corresponding to the candidate segments to obtain a target video if there are multiple candidate segments, and to take the clip segment corresponding to the candidate segment as the target video if there is one candidate segment.
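The final assembly step of claims 10 and 20 is a plain concatenation. The sketch below is illustrative only: each clip segment is modelled as a list of frames (any payload), which stands in for whatever container-level merge a real implementation would use.

```python
def build_target_video(clip_segments):
    """Multiple clip segments are concatenated in order into the target video;
    a single clip segment is the target video itself.

    clip_segments: non-empty list, each segment a list of frames.
    """
    if len(clip_segments) == 1:
        return clip_segments[0]
    target = []
    for segment in clip_segments:
        target.extend(segment)
    return target
```

In practice the same ordered-concatenation semantics would be realised at the container level (e.g. by a stream-copy concatenation tool) rather than frame lists in memory.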
21. A video clipping apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 10.
22. A non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 10.
CN201810489728.3A 2018-05-21 2018-05-21 Video editing method, device and storage medium Active CN110519655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810489728.3A CN110519655B (en) 2018-05-21 2018-05-21 Video editing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810489728.3A CN110519655B (en) 2018-05-21 2018-05-21 Video editing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN110519655A CN110519655A (en) 2019-11-29
CN110519655B true CN110519655B (en) 2022-06-10

Family

ID=68622109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810489728.3A Active CN110519655B (en) 2018-05-21 2018-05-21 Video editing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110519655B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447505B (en) * 2020-03-09 2022-05-31 咪咕文化科技有限公司 Video clipping method, network device, and computer-readable storage medium
CN113301430B (en) * 2021-07-27 2021-12-07 腾讯科技(深圳)有限公司 Video clipping method, video clipping device, electronic equipment and storage medium
CN113556486B (en) * 2021-07-27 2024-02-06 北京达佳互联信息技术有限公司 Video generation method, device, electronic equipment and storage medium
CN114302174A (en) * 2021-12-31 2022-04-08 上海爱奇艺新媒体科技有限公司 Video editing method and device, computing equipment and storage medium
CN114205671A (en) * 2022-01-17 2022-03-18 百度在线网络技术(北京)有限公司 Video content editing method and device based on scene alignment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6807362B1 (en) * 2000-07-18 2004-10-19 Fuji Xerox Co., Ltd. System and method for determining video clip boundaries for interactive custom video creation system
US7362946B1 (en) * 1999-04-12 2008-04-22 Canon Kabushiki Kaisha Automated visual image editing system
CN107330392A (en) * 2017-06-26 2017-11-07 司马大大(北京)智能系统有限公司 Video scene annotation equipment and method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5091806B2 (en) * 2008-09-01 2012-12-05 株式会社東芝 Video processing apparatus and method
CN102348049B (en) * 2011-09-16 2013-09-18 央视国际网络有限公司 Method and device for detecting position of cut point of video segment
CN104519401B (en) * 2013-09-30 2018-04-17 贺锦伟 Video segmentation point preparation method and equipment
CN104284216B (en) * 2014-10-23 2018-07-13 Tcl集团股份有限公司 A kind of method and its system generating video essence editing
CN104394422B (en) * 2014-11-12 2017-11-17 华为软件技术有限公司 A kind of Video segmentation point acquisition methods and device
CN104394488B (en) * 2014-11-28 2018-08-17 苏州科达科技股份有限公司 A kind of generation method and system of video frequency abstract
CN104754415B (en) * 2015-03-30 2018-02-09 北京奇艺世纪科技有限公司 A kind of video broadcasting method and device
CN104768082B (en) * 2015-04-01 2019-01-29 北京搜狗科技发展有限公司 A kind of audio and video playing information processing method and server
CN106162324A (en) * 2015-04-09 2016-11-23 腾讯科技(深圳)有限公司 The processing method and processing device of video file
CN105657537B (en) * 2015-12-23 2018-06-19 小米科技有限责任公司 Video clipping method and device
CN107623860A (en) * 2017-08-09 2018-01-23 北京奇艺世纪科技有限公司 Multi-medium data dividing method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7362946B1 (en) * 1999-04-12 2008-04-22 Canon Kabushiki Kaisha Automated visual image editing system
US6807362B1 (en) * 2000-07-18 2004-10-19 Fuji Xerox Co., Ltd. System and method for determining video clip boundaries for interactive custom video creation system
CN107330392A (en) * 2017-06-26 2017-11-07 司马大大(北京)智能系统有限公司 Video scene annotation equipment and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Application of DSP-Based Automatic Extraction of Feature Video Segments; Song Zhiqin; China Master's Theses Full-text Database (Information Science and Technology); 2017-03-15; full text *

Also Published As

Publication number Publication date
CN110519655A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110519655B (en) Video editing method, device and storage medium
CN108093315B (en) Video generation method and device
CN108259991B (en) Video processing method and device
CN107948708B (en) Bullet screen display method and device
CN109947981B (en) Video sharing method and device
CN108924644B (en) Video clip extraction method and device
CN108495168B (en) Bullet screen information display method and device
CN106960014B (en) Associated user recommendation method and device
CN108174269B (en) Visual audio playing method and device
CN107820131B (en) Comment information sharing method and device
CN107122430B (en) Search result display method and device
CN109063101B (en) Video cover generation method and device
CN106991018B (en) Interface skin changing method and device
CN107147936B (en) Display control method and device for barrage
CN108833952B (en) Video advertisement putting method and device
CN110913244A (en) Video processing method and device, electronic equipment and storage medium
CN108845749B (en) Page display method and device
CN106782576B (en) Audio mixing method and device
CN106790018B (en) Resource sharing playing method and device
CN108521579B (en) Bullet screen information display method and device
CN109992754B (en) Document processing method and device
CN108289229B (en) Interaction method and device for multimedia resources
CN108574860B (en) Multimedia resource playing method and device
CN108469991B (en) Multimedia data processing method and device
CN110121115B (en) Method and device for determining wonderful video clip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200518

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 100080 Beijing Haidian District city Haidian street A Sinosteel International Plaza No. 8 block 5 layer A, C

Applicant before: Youku network technology (Beijing) Co., Ltd.

CB02 Change of applicant information

Address after: Room 554, 5 / F, building 3, 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 310052 room 508, 5th floor, building 4, No. 699 Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba (China) Co.,Ltd.

GR01 Patent grant