CN105657537A - Video editing method and device - Google Patents


Info

Publication number
CN105657537A
CN105657537A
Authority
CN
China
Prior art keywords
video
target
duration
fragment
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510980311.3A
Other languages
Chinese (zh)
Other versions
CN105657537B (en)
Inventor
陈涛
刘华君
刘华一君
吴珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201510980311.3A priority Critical patent/CN105657537B/en
Publication of CN105657537A publication Critical patent/CN105657537A/en
Application granted granted Critical
Publication of CN105657537B publication Critical patent/CN105657537B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The invention relates to a video editing method and device, belonging to the technical field of video editing. The method comprises the following steps: receiving a video editing instruction triggered by a user, wherein the video editing instruction includes a target label; querying pre-generated video spotting information according to the target label; determining target spotting information according to the query result; cutting a target video according to the target spotting information to obtain a target video fragment; and editing the target video fragment to obtain an edited video. The invention solves the problem that the video editing process is complex and achieves the effect of simplifying the video editing process. The video editing method and device disclosed by the invention are used for video editing.

Description

Video editing method and device
Technical field
The present disclosure relates to the field of video editing technologies, and in particular to a video editing method and device.
Background technology
Video editing refers to the process of cutting video fragments out of a video and then splicing the cut fragments together to obtain the video desired by the user.
In the related art, video editing is mainly performed manually. For example, while watching a video, when the user finds a target video fragment (for example, a fragment the user considers excellent), the user can use video editing software to cut the target video fragment out of the video being watched. After the user has cut out at least two target video fragments, the user can use the video editing software to splice the at least two target video fragments to obtain the desired video.
Summary of the invention
To achieve the effect of simplifying the video editing process, the present disclosure provides a video editing method and device. The technical solutions are as follows:
According to a first aspect of the present disclosure, a video editing method is provided. The method includes:
receiving a video editing instruction triggered by a user, the video editing instruction including a target label;
querying pre-generated video spotting information according to the target label;
determining target spotting information according to the query result;
cutting a target video according to the target spotting information to obtain a target video fragment; and
editing the target video fragment to obtain an edited video.
Optionally, the method further includes:
receiving a video shooting instruction;
performing video shooting according to the video shooting instruction to obtain a first video, the first video including at least one video content;
during shooting, identifying each video content in the at least one video content to obtain a label of each video content;
determining a start time and an end time of each video content; and
generating the spotting information of the first video according to the label of each video content and the start time and end time of each video content.
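The "spotting information" described in the steps above can be pictured concretely. The following Python sketch models one possible shape for it, with one entry per identified video content recording its label and start/end times; all names (`SpottingEntry`, `SpottingInfo`, `query`) and the sample timestamps are illustrative assumptions, not structures defined by the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class SpottingEntry:
    label: str    # label of the video content, e.g. "kitten"
    start: float  # start time of the content within the video, in seconds
    end: float    # end time of the content within the video, in seconds

@dataclass
class SpottingInfo:
    video_id: str
    entries: list = field(default_factory=list)

    def add(self, label: str, start: float, end: float) -> None:
        # Record one identified video content and its time span.
        self.entries.append(SpottingEntry(label, start, end))

    def query(self, target_label: str) -> list:
        # Return every entry whose label matches the target label.
        return [e for e in self.entries if e.label == target_label]

# Assumed example: a first video containing three identified contents.
info = SpottingInfo("first_video")
info.add("kitten", 0.0, 12.5)
info.add("puppy", 12.5, 30.0)
info.add("kitten", 45.0, 60.0)
spans = [(e.start, e.end) for e in info.query("kitten")]
print(spans)
```

Querying by a target label then yields the time spans from which target video fragments can be cut.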
Optionally, identifying each video content during shooting to obtain the label of each video content includes:
during shooting, at every preset time interval, extracting a video frame from the video content shot within the preset time interval;
extracting feature information of the video frame; and
querying a preset correspondence between feature information and labels to obtain the label corresponding to the feature information of the video frame.
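The three identification steps above can be sketched as follows. The preset interval value, the feature names, and the feature-to-label table are all invented for illustration, and real feature extraction (e.g. by an image classifier) is stubbed out.

```python
PRESET_INTERVAL = 5  # preset time interval in seconds (illustrative value)

# Assumed preset correspondence between feature information and labels.
FEATURE_TO_LABEL = {"fur_small": "kitten", "fur_large": "puppy", "face": "person"}

def extract_feature(frame: dict) -> str:
    # Stand-in for real feature extraction from a video frame.
    return frame["feature"]

def label_contents(frames_by_time: dict) -> list:
    """For each preset interval, take the frame shot at its boundary,
    extract its feature, and look the feature up in the preset table."""
    labels = []
    for t in sorted(frames_by_time):
        if t % PRESET_INTERVAL == 0:  # one sampled frame per interval
            feature = extract_feature(frames_by_time[t])
            label = FEATURE_TO_LABEL.get(feature)  # None if feature unknown
            if label is not None:
                labels.append((t, label))
    return labels

frames = {0: {"feature": "fur_small"}, 5: {"feature": "face"}, 7: {"feature": "face"}}
print(label_contents(frames))  # the frame at t=7 falls between sampling points
```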
Optionally, the video editing instruction further includes a target duration, and editing the target video fragment to obtain the edited video includes:
when the number of target video fragments is one, comparing the duration of the target video fragment with the target duration; and
editing the target video fragment according to the comparison result to obtain the edited video.
Optionally, editing the target video fragment according to the comparison result to obtain the edited video includes:
when the duration of the target video fragment is equal to the target duration, determining the target video fragment as the edited video;
when the duration of the target video fragment is greater than the target duration, cutting the target video fragment according to the target duration to obtain the edited video; and
when the duration of the target video fragment is less than the target duration, presenting prompt information, and obtaining the edited video according to an operation instruction triggered by the user operating on the prompt information.
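The three comparison branches above amount to a small decision rule. A minimal sketch follows, using durations in seconds as stand-ins for the fragments themselves; `clip_single` is a hypothetical helper, and the returned strings merely name the branch taken rather than performing an actual cut.

```python
def clip_single(fragment_duration: float, target_duration: float) -> str:
    """Decide how to edit a single target video fragment against the
    target duration; a real implementation would perform the cut."""
    if fragment_duration == target_duration:
        return "use fragment as the edited video"
    if fragment_duration > target_duration:
        return "cut fragment down to the target duration"
    # Fragment shorter than the target: prompt the user and follow
    # the operation instruction they trigger.
    return "present prompt information to the user"

print(clip_single(30.0, 30.0))
print(clip_single(45.0, 30.0))
print(clip_single(20.0, 30.0))
```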
Optionally, the video editing instruction further includes a target duration, and editing the target video fragment to obtain the edited video includes:
when the number of target video fragments is at least two, determining the duration of each of the at least two target video fragments;
judging whether a target video fragment whose duration is equal to the target duration exists among the at least two target video fragments; and
when a target video fragment whose duration is equal to the target duration exists among the at least two target video fragments, determining that target video fragment as the edited video.
Optionally, the method further includes:
when no target video fragment whose duration is equal to the target duration exists among the at least two target video fragments, judging whether a target video fragment whose duration is greater than the target duration exists among the at least two target video fragments; and
when a target video fragment whose duration is greater than the target duration exists among the at least two target video fragments, cutting that fragment according to the target duration to obtain the edited video.
Optionally, the method further includes:
when no target video fragment whose duration is greater than the target duration exists among the at least two target video fragments, judging whether a fragment group to be edited exists among the video fragment groups formed from the at least two target video fragments, where each fragment group to be edited includes at least two of the target video fragments and the total duration of all target video fragments in any fragment group to be edited is greater than the target duration;
when a fragment group to be edited exists among the video fragment groups formed from the at least two target video fragments, determining, among all fragment groups to be edited, the fragment group whose total duration is closest to the target duration as the target editing fragment group;
splicing all target video fragments in the target editing fragment group to obtain a spliced video; and
cutting the spliced video according to the target duration to obtain the edited video.
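Taken together, the optional branches above for at least two fragments form a three-stage selection strategy: an exact duration match first, then a longer fragment cut down to the target, then a spliced group whose total duration exceeds and is closest to the target. The sketch below implements that strategy over fragment durations only (in seconds); the function name and return format are illustrative, and picking the shortest sufficient fragment in stage 2 is an assumption the disclosure leaves open.

```python
from itertools import combinations

def select_fragments(durations: list, target: float) -> tuple:
    # Stage 1: a fragment whose duration equals the target wins outright.
    for d in durations:
        if d == target:
            return ("exact", [d])
    # Stage 2: cut a fragment longer than the target down to the target.
    longer = [d for d in durations if d > target]
    if longer:
        return ("cut", [min(longer)])  # assumption: waste as little as possible
    # Stage 3: among groups of >= 2 fragments whose total duration exceeds
    # the target, pick the group whose total is closest to the target;
    # its fragments would then be spliced and the result cut to the target.
    best = None
    for r in range(2, len(durations) + 1):
        for group in combinations(durations, r):
            if sum(group) > target and (best is None or sum(group) < sum(best)):
                best = group
    return ("splice_and_cut", list(best)) if best else ("not_possible", [])

print(select_fragments([10.0, 30.0, 50.0], 30.0))  # stage 1 applies
print(select_fragments([10.0, 12.0], 15.0))        # stage 3 applies
```

The exhaustive group search is exponential in the number of fragments; it is written for clarity, not efficiency.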
According to a second aspect of the present disclosure, a video editing device is provided. The device includes:
a first receiving module configured to receive a video editing instruction triggered by a user, the video editing instruction including a target label;
a query module configured to query pre-generated video spotting information according to the target label;
a first determining module configured to determine target spotting information according to the query result;
a cutting module configured to cut a target video according to the target spotting information to obtain a target video fragment; and
an editing module configured to edit the target video fragment to obtain an edited video.
Optionally, the device further includes:
a second receiving module configured to receive a video shooting instruction;
a shooting module configured to perform video shooting according to the video shooting instruction to obtain a first video, the first video including at least one video content;
an identification module configured to identify, during shooting, each video content in the at least one video content to obtain a label of each video content;
a second determining module configured to determine a start time and an end time of each video content; and
a generation module configured to generate the spotting information of the first video according to the label of each video content and the start time and end time of each video content.
Optionally, the identification module is configured to:
during shooting, at every preset time interval, extract a video frame from the video content shot within the preset time interval;
extract feature information of the video frame; and
query a preset correspondence between feature information and labels to obtain the label corresponding to the feature information of the video frame.
Optionally, the video editing instruction further includes a target duration, and the editing module includes:
a comparison submodule configured to compare, when the number of target video fragments is one, the duration of the target video fragment with the target duration; and
an editing submodule configured to edit the target video fragment according to the comparison result to obtain the edited video.
Optionally, the editing submodule is configured to:
when the duration of the target video fragment is equal to the target duration, determine the target video fragment as the edited video;
when the duration of the target video fragment is greater than the target duration, cut the target video fragment according to the target duration to obtain the edited video; and
when the duration of the target video fragment is less than the target duration, present prompt information, and obtain the edited video according to an operation instruction triggered by the user operating on the prompt information.
Optionally, the video editing instruction further includes a target duration, and the editing module includes:
a first determining submodule configured to determine, when the number of target video fragments is at least two, the duration of each of the at least two target video fragments;
a first judging submodule configured to judge whether a target video fragment whose duration is equal to the target duration exists among the at least two target video fragments; and
a second determining submodule configured to determine, when a target video fragment whose duration is equal to the target duration exists among the at least two target video fragments, that target video fragment as the edited video.
Optionally, the editing module further includes:
a second judging submodule configured to judge, when no target video fragment whose duration is equal to the target duration exists among the at least two target video fragments, whether a target video fragment whose duration is greater than the target duration exists among the at least two target video fragments; and
a first cutting submodule configured to cut, when a target video fragment whose duration is greater than the target duration exists among the at least two target video fragments, that fragment according to the target duration to obtain the edited video.
Optionally, the editing module further includes:
a third judging submodule configured to judge, when no target video fragment whose duration is greater than the target duration exists among the at least two target video fragments, whether a fragment group to be edited exists among the video fragment groups formed from the at least two target video fragments, where each fragment group to be edited includes at least two of the target video fragments and the total duration of all target video fragments in any fragment group to be edited is greater than the target duration;
a third determining submodule configured to determine, when a fragment group to be edited exists, the fragment group whose total duration is closest to the target duration among all fragment groups to be edited as the target editing fragment group;
a splicing submodule configured to splice all target video fragments in the target editing fragment group to obtain a spliced video; and
a second cutting submodule configured to cut the spliced video according to the target duration to obtain the edited video.
According to a third aspect of the present disclosure, a video editing device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
receive a video editing instruction triggered by a user, the video editing instruction including a target label;
query pre-generated video spotting information according to the target label;
determine target spotting information according to the query result;
cut a target video according to the target spotting information to obtain a target video fragment; and
edit the target video fragment to obtain an edited video.
The technical solutions provided by the present disclosure can include the following beneficial effects:
In the video editing method and device provided by the present disclosure, the method includes: receiving a video editing instruction triggered by a user, the video editing instruction including a target label; querying pre-generated video spotting information according to the target label; determining target spotting information according to the query result; cutting a target video according to the target spotting information to obtain a target video fragment; and editing the target video fragment to obtain an edited video. Since the edited video can be obtained according to the video editing instruction once the instruction is received, the user neither has to watch the video nor cut it manually. This solves the problem in the related art that the video editing process is complex, and achieves the effect of simplifying the video editing process.
It should be appreciated that the above general description and the following detailed description are merely illustrative and do not limit the present disclosure.
Brief description of the drawings
To describe the embodiments of the present disclosure more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a video editing method according to an exemplary embodiment;
Fig. 2-1 is a flowchart of a video editing method according to another exemplary embodiment;
Fig. 2-2 is a flowchart of a method for identifying video content provided by the embodiment shown in Fig. 2-1;
Fig. 2-3 is an interface diagram for triggering a video editing instruction provided by the embodiment shown in Fig. 2-1;
Fig. 2-4 is a flowchart of a method for editing a target video fragment to obtain an edited video provided by the embodiment shown in Fig. 2-1;
Fig. 2-5 is a flowchart of a method for editing a target video fragment according to a comparison result to obtain an edited video provided by the embodiment shown in Fig. 2-4;
Fig. 2-6 is a cutting schematic diagram provided by the embodiment shown in Fig. 2-5;
Fig. 2-7 is another cutting schematic diagram provided by the embodiment shown in Fig. 2-5;
Fig. 2-8 is yet another cutting schematic diagram provided by the embodiment shown in Fig. 2-5;
Fig. 2-9 is a prompt interface diagram provided by the embodiment shown in Fig. 2-5;
Fig. 2-10 is a flowchart of another method for editing a target video fragment to obtain an edited video provided by the embodiment shown in Fig. 2-1;
Fig. 3 is a block diagram of a video editing device according to an exemplary embodiment;
Fig. 4-1 is a block diagram of a video editing device according to another exemplary embodiment;
Fig. 4-2 is a block diagram of an editing module provided by the embodiment shown in Fig. 4-1;
Fig. 4-3 is a block diagram of another editing module provided by the embodiment shown in Fig. 4-1;
Fig. 5 is a block diagram of a video editing device according to an exemplary embodiment.
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Detailed description of the invention
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
The video editing method provided by the embodiments of the present disclosure can be performed by a video editing device, where the video editing device can be a smartphone, a tablet computer, a smart television, a Moving Picture Experts Group Audio Layer IV (MP4) player, a laptop portable computer, a desktop computer, or the like. The video editing device is not limited by the embodiments of the present disclosure.
Fig. 1 is a flowchart of a video editing method according to an exemplary embodiment. This embodiment is illustrated with the video editing method applied to a video editing device. Referring to Fig. 1, the video editing method can include the following steps:
In step 101, a video editing instruction triggered by a user is received, the video editing instruction including a target label.
In step 102, pre-generated video spotting information is queried according to the target label.
The video spotting information records the spotting information of each video on the video editing device; each piece of spotting information records at least one label, and the start time and end time of the video content indicated by each label in the at least one label.
In step 103, target spotting information is determined according to the query result.
The target spotting information records a target label, and the target start time and target end time of the target video content indicated by the target label.
In step 104, a target video is cut according to the target spotting information to obtain a target video fragment.
In step 105, the target video fragment is edited to obtain an edited video.
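Steps 101 to 105 can be strung together in one short sketch. The dictionary-based spotting information and the instruction format are assumptions for illustration; here each label maps to a single (start, end) span within the target video, and step 105 simply trims a too-long fragment to the target duration.

```python
# Assumed pre-generated video spotting information: label -> (start, end).
spotting_info = {
    "kitten": (45.0, 60.0),
    "puppy": (12.5, 30.0),
}

def edit_video(instruction: dict, spotting: dict):
    """instruction carries the target label (step 101) and, optionally,
    a target duration; returns the (start, end) span of the edited video."""
    span = spotting.get(instruction["label"])  # steps 102-103: query, determine
    if span is None:
        return None                            # no matching spotting information
    start, end = span                          # step 104: cut the target fragment
    duration = end - start
    target = instruction.get("target_duration", duration)
    # Step 105: if the fragment is longer than the target duration, trim it.
    return (start, start + min(duration, target))

print(edit_video({"label": "kitten", "target_duration": 10.0}, spotting_info))
```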
In summary, in the video editing method provided by the embodiments of the present disclosure, a video editing instruction triggered by a user and including a target label is received; pre-generated video spotting information is queried according to the target label; target spotting information is determined according to the query result; a target video is cut according to the target spotting information to obtain a target video fragment; and the target video fragment is edited to obtain an edited video. Since the edited video can be obtained according to the video editing instruction once the instruction is received, the user neither has to watch the video nor cut it manually. This solves the problem in the related art that the video editing process is complex, and achieves the effect of simplifying the video editing process.
Optionally, before step 101, the video editing method can further include:
receiving a video shooting instruction;
performing video shooting according to the video shooting instruction to obtain a first video, the first video including at least one video content;
during shooting, identifying each video content in the at least one video content to obtain a label of each video content;
determining a start time and an end time of each video content; and
generating the spotting information of the first video according to the label of each video content and the start time and end time of each video content.
Optionally, identifying each video content during shooting to obtain the label of each video content includes:
during shooting, at every preset time interval, extracting a video frame from the video content shot within the preset time interval;
extracting feature information of the video frame; and
querying a preset correspondence between feature information and labels to obtain the label corresponding to the feature information of the video frame.
Optionally, the video editing instruction further includes a target duration, and step 105 may include:
when the number of target video fragments is one, comparing the duration of the target video fragment with the target duration; and
editing the target video fragment according to the comparison result to obtain the edited video.
Optionally, editing the target video fragment according to the comparison result to obtain the edited video includes:
when the duration of the target video fragment is equal to the target duration, determining the target video fragment as the edited video;
when the duration of the target video fragment is greater than the target duration, cutting the target video fragment according to the target duration to obtain the edited video; and
when the duration of the target video fragment is less than the target duration, presenting prompt information, and obtaining the edited video according to an operation instruction triggered by the user operating on the prompt information.
Optionally, the video editing instruction further includes a target duration, and step 105 may include:
when the number of target video fragments is at least two, determining the duration of each of the at least two target video fragments;
judging whether a target video fragment whose duration is equal to the target duration exists among the at least two target video fragments; and
when a target video fragment whose duration is equal to the target duration exists among the at least two target video fragments, determining that target video fragment as the edited video.
Optionally, after judging whether a target video fragment whose duration is equal to the target duration exists among the at least two target video fragments, the video editing method can further include:
when no target video fragment whose duration is equal to the target duration exists among the at least two target video fragments, judging whether a target video fragment whose duration is greater than the target duration exists among the at least two target video fragments; and
when a target video fragment whose duration is greater than the target duration exists among the at least two target video fragments, cutting that fragment according to the target duration to obtain the edited video.
Optionally, after judging whether a target video fragment whose duration is greater than the target duration exists among the at least two target video fragments, the video editing method can further include:
when no target video fragment whose duration is greater than the target duration exists among the at least two target video fragments, judging whether a fragment group to be edited exists among the video fragment groups formed from the at least two target video fragments, where each fragment group to be edited includes at least two of the target video fragments and the total duration of all target video fragments in any fragment group to be edited is greater than the target duration;
when a fragment group to be edited exists, determining, among all fragment groups to be edited, the fragment group whose total duration is closest to the target duration as the target editing fragment group;
splicing all target video fragments in the target editing fragment group to obtain a spliced video; and
cutting the spliced video according to the target duration to obtain the edited video.
In summary, in the video editing method provided by the embodiments of the present disclosure, a video editing instruction triggered by a user and including a target label is received; pre-generated video spotting information is queried according to the target label; target spotting information is determined according to the query result; a target video is cut according to the target spotting information to obtain a target video fragment; and the target video fragment is edited to obtain an edited video. Since the edited video can be obtained according to the video editing instruction once the instruction is received, the user neither has to watch the video nor cut it manually. This solves the problem in the related art that the video editing process is complex, and achieves the effect of simplifying the video editing process.
Fig. 2-1 is the method flow diagram of a kind of video clipping method according to another exemplary embodiment, the present embodiment is applied to video clipping equipment with this video clipping method and is illustrated, referring to Fig. 2-1, this video clipping method can include following several step:
In step 201, a video capture instruction is received. Step 202 is performed.
In the embodiments of the present disclosure, the video clipping device may have a video capture function, and a user may operate the video clipping device to trigger a video capture instruction. Illustratively, the video clipping device may be provided with a video capture button; a video capture instruction may be triggered when the user clicks the video capture button, and the video clipping device may receive the video capture instruction triggered by the user.
In step 202, video capture is performed according to the video capture instruction to obtain a first video, the first video including at least one video content. Step 203 is performed.
After the video clipping device receives the video capture instruction triggered by the user, it may perform video capture according to the instruction to obtain the first video. The first video may include at least one video content, where a video content refers to an object shot by the video clipping device, a shooting location, a shooting time, or the like, which is not limited by the embodiments of the present disclosure. Illustratively, the object shot by the video clipping device may be a person, an animal, a landscape, and so on; optionally, it may be kitten A, doggie B, a selfie of person X or person Y, the Great Wall, etc.; the shooting location may be xx Street in Beijing, the International Trade Center, etc.; and the shooting time may be 10 a.m. on December 11, 2015, etc., none of which is limited by the embodiments of the present disclosure. The embodiments of the present disclosure are illustrated with the video content being an object shot by the video clipping device.
Illustratively, assume that the at least one video content included in the first video is: video content A, video content B, and video content C. Video content A may then be kitten A, video content B may be doggie B, and video content C may be person C, which is not limited by the embodiments of the present disclosure.
In step 203, during shooting, each video content of the at least one video content is identified to obtain a label of each video content. Step 204 is performed.
While shooting the video, the video clipping device may identify each video content of the at least one video content to obtain its label. Optionally, in the embodiments of the present disclosure, the video clipping device may store a correspondence between video contents and labels, and may obtain the label of each video content by querying this correspondence according to each of the at least one video content being shot. Illustratively, the correspondence between video contents and labels stored by the video clipping device may be as shown in Table 1 below:
Table 1
Video content Label
Video content A ID-A
Video content B ID-B
Video content C ID-C
Video content D ID-D
Video content E ID-E
...... ......
By querying the correspondence shown in Table 1 according to video content A, the video clipping device may determine that the label of video content A is ID-A, where label ID-A may be the label of kitten A. By querying the correspondence shown in Table 1 according to video content B, it may determine that the label of video content B is ID-B, where label ID-B may be the label of doggie B. By querying the correspondence shown in Table 1 according to video content C, it may determine that the label of video content C is ID-C, where label ID-C may be the label of person C. This is not limited by the embodiments of the present disclosure.
It should be noted that a video consists of a large number of video frames, and during video capture the video clipping device may shoot the same video content over a period of time (for example, within 3 minutes, what is shot may all be kitten A). Since the number of video frames of this same video content obtained during that period is large, identifying every video frame would make the identification workload very heavy. Therefore, to simplify the identification of video contents, the video clipping device may, during shooting, extract a video frame from the video content shot within each preset time interval, and identify the video content by identifying that video frame.
Illustratively, please refer to Fig. 2-2, which illustrates a method flowchart for identifying a video content provided by the embodiment shown in Fig. 2-1. Referring to Fig. 2-2, the method flow may include the following steps:
In sub-step 2031, during shooting, a video frame is extracted, at every preset time interval, from the video content shot within that preset time interval. Sub-step 2032 is performed.
During shooting, the video clipping device may extract, at every preset time interval, a video frame from the video content shot within that preset time interval. The preset time interval may be configured according to actual needs; illustratively, it may be 1 minute, 30 seconds, 10 seconds, etc., which is not limited by the embodiments of the present disclosure. Illustratively, during shooting, the video clipping device may extract, every 1 minute, a video frame from the video content shot within that 1 minute. A video frame may be one image; for the process by which the video clipping device extracts a video frame, reference may be made to the related art, and the embodiments of the present disclosure do not elaborate here.
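The periodic extraction described above can be sketched as follows; the 30 fps frame rate, the 10-second interval, and the helper name are illustrative assumptions, not values fixed by the disclosure.

```python
# Hypothetical sketch: instead of identifying every video frame, sample
# one frame per preset time interval for identification.
def sample_frames(total_frames, fps, interval_seconds):
    """Return the indices of the frames extracted for identification."""
    step = int(fps * interval_seconds)   # frames per preset time interval
    return list(range(0, total_frames, step))

# One frame every 10 seconds from a 60-second, 30 fps recording:
indices = sample_frames(total_frames=1800, fps=30, interval_seconds=10)
```

Only six frames are identified for the whole minute of video, rather than all 1800.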
In sub-step 2032, feature information of the video frame is extracted. Sub-step 2033 is performed.
After extracting a video frame, the video clipping device may extract the feature information of that video frame. Illustratively, when the video frame includes a face image, the video clipping device may use face recognition technology to extract face feature information as the feature information of the video frame, which is not limited by the embodiments of the present disclosure.
In sub-step 2033, a preset correspondence between feature information and labels is queried to obtain the label corresponding to the feature information of the video frame.
The video clipping device may store a correspondence between feature information and labels. After extracting the feature information of a video frame, the video clipping device may query the correspondence it stores to obtain the label corresponding to that feature information.
Illustratively, assume that what the video clipping device shoots within a preset time interval is video content A; the device may then extract the feature information of a video frame of video content A. Assume the extracted feature information is A1; the video clipping device then queries the correspondence between feature information and labels according to feature information A1 to obtain the label corresponding to the feature information of the video frame.
Optionally, the correspondence between feature information and labels stored by the video clipping device may be as shown in Table 2 below:
Table 2
By querying the correspondence shown in Table 2 according to feature information A1, the video clipping device may obtain that the label corresponding to the feature information of the video frame of video content A is ID-A, and label ID-A is the label of video content A.
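A minimal sketch of the Table 2 lookup, assuming hypothetical feature values as keys: only "A1" → ID-A appears in the disclosure; the "B1" and "C1" entries are invented here purely for illustration.

```python
# Hypothetical feature information -> label correspondence (Table 2 sketch).
# "A1" -> "ID-A" follows the text; the other entries are assumptions.
FEATURE_TO_LABEL = {"A1": "ID-A", "B1": "ID-B", "C1": "ID-C"}

def label_for_frame(feature_info):
    """Query the stored correspondence for the label of a frame's feature."""
    return FEATURE_TO_LABEL.get(feature_info)
```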
In step 204, the start time and end time of each video content are determined. Step 205 is performed.
During shooting, the video clipping device may determine the start time and end time of each video content, where the start time refers to the playback start time and the end time refers to the playback end time. Assume that the start times of video content A are s1 and s3 and its end times are e1 and e3, the start time of video content B is s2 and its end time is e2, and the start time of video content C is s4 and its end time is e4. Optionally, in the embodiments of the present disclosure, s1 may be 00:00 (0 minutes 0 seconds), s2 may be 03:01 (3 minutes 1 second), s3 may be 08:02 (8 minutes 2 seconds), s4 may be 15:03 (15 minutes 3 seconds), e1 may be 03:00 (3 minutes 0 seconds), e2 may be 08:01 (8 minutes 1 second), e3 may be 15:02 (15 minutes 2 seconds), and e4 may be 20:03 (20 minutes 3 seconds). That is, the start times of video content A are 00:00 and 08:02 and its end times are 03:00 and 15:02; the start time of video content B is 03:01 and its end time is 08:01; the start time of video content C is 15:03 and its end time is 20:03. This is not limited by the embodiments of the present disclosure.
In step 205, mark information of the first video is generated according to the label of each video content and the start time and end time of each video content. Step 206 is performed.
The mark information records the labels in a video, and the start time and end time of the video content indicated by each label. Therefore, what the mark information of the first video records is the labels of the video contents in the first video and the start time and end time of the video content indicated by each label.
After determining the label, start time, and end time of each video content, the video clipping device may generate the mark information of the first video according to the label of each video content and the start time and end time of each video content. Illustratively, the mark information of the first video may be as shown in Table 3 below:
Table 3
Label Initial time Finish time
ID-A s1(00:00) e1(03:00)
ID-B s2(03:01) e2(08:01)
ID-A s3(08:02) e3(15:02)
ID-C s4(15:03) e4(20:03)
It should be noted that, in practical applications, the labels shown in Table 3 may be content titles (for example, kitten, doggie, etc.), the start times and end times may serve as the time marks, and Table 3 may generally be referred to as Tag information, which is not limited by the embodiments of the present disclosure.
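The mark information of Table 3 can be sketched as a list of records. Representing the times in seconds and the `records_for` helper are assumptions of this illustration, not part of the disclosure.

```python
# Sketch of Table 3: each record stores a label together with the start
# time and end time (in seconds) of the video content the label indicates.
mark_info = [
    {"label": "ID-A", "start": 0,   "end": 180},   # 00:00 - 03:00
    {"label": "ID-B", "start": 181, "end": 481},   # 03:01 - 08:01
    {"label": "ID-A", "start": 482, "end": 902},   # 08:02 - 15:02
    {"label": "ID-C", "start": 903, "end": 1203},  # 15:03 - 20:03
]

def records_for(label, info):
    """All mark records whose label matches the queried target label."""
    return [r for r in info if r["label"] == label]
```

Querying with target label ID-A returns the two records for video content A.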
In step 206, video mark information is generated according to the mark information of all videos. Step 207 is performed.
Here, all videos refer to all the videos in the video clipping device. After generating the mark information of each video, the video clipping device may generate the video mark information according to the mark information of all videos in the device, where the video mark information records the mark information of each video in the video clipping device. Further, what the video mark information records may be the video title of each video in the video clipping device and the mark information of the video indicated by that video title. Illustratively, assume that the video title of the first video is ID-1, and that the videos in the video clipping device also include a second video whose video title may be ID-2. The video mark information generated by the video clipping device may then be as shown in Table 4 below:
Table 4
In step 207, a video clipping instruction triggered by the user is received, the video clipping instruction including a target label. Step 208 is performed.
The video clipping device may provide a clipping instruction triggering interface, on which the user may perform operations to trigger a video clipping instruction. The video clipping instruction may include a target label and may also include a target duration, where the target label is the label of the video content the user expects to clip, and the target duration is the playing duration the user expects the clipped video content to have. Illustratively, please refer to Fig. 2-3, which illustrates a clipping instruction triggering interface provided by the embodiment shown in Fig. 2-1. Referring to Fig. 2-3, the clipping instruction triggering interface 230 includes a label input box 231, a duration input box 232, a confirm button, and a cancel button; the label input box 231 is used for inputting the target label, and the duration input box 232 is used for inputting the target duration. The user may input the target label in the label input box 231 and the target duration in the duration input box 232, and then click the confirm button to trigger the video clipping instruction; when the user triggers the video clipping instruction, the video clipping device may receive it. Illustratively, in the embodiments of the present disclosure, the target label may be ID-A and the target duration may be 5 minutes.
In step 208, pre-generated video mark information is queried according to the target label. Step 209 is performed.
Here, the video mark information records the mark information of each video in the video clipping device; each piece of mark information records at least one label, and the start time and end time of the video content indicated by each of the at least one label. Illustratively, the video mark information may be as shown in Table 4.
After receiving the video clipping instruction triggered by the user, the video clipping device may query the pre-generated video mark information according to the target label, so as to determine, according to the query result, the target mark information in which the target label is recorded. Illustratively, the video clipping device queries Table 4 according to target label ID-A, and the embodiments of the present disclosure do not elaborate here.
In step 209, target mark information is determined according to the query result. Step 210 is performed.
According to the query result, the video clipping device may determine the target mark information. The target mark information records a target label, and the target start time and target end time of the target video content indicated by the target label. That is, the video clipping device takes the mark information in which the target label is recorded as the target mark information; illustratively, it takes the mark information in which target label ID-A is recorded as the target mark information. As can be seen from Table 4, the mark information in which target label ID-A is recorded is the mark information of the target video indicated by video title ID-1; combined with step 205, the target mark information is thus the mark information of the first video (namely the mark information shown in Table 3).
It should be noted that the embodiments of the present disclosure are illustrated with the target mark information being the mark information of one video. In practical applications, the target mark information may be the mark information of at least one video: all mark information in which the target label is recorded may serve as the target mark information, and the embodiments of the present disclosure do not elaborate here.
In step 210, the target video is cut according to the target mark information to obtain target video fragments. Step 211 is performed.
Here, the target video refers to the video corresponding to the target mark information. Since the target mark information records the target label, and the target start time and target end time of the target video content indicated by the target label, the video clipping device cuts the target video according to the target mark information to obtain the target video fragments; that is, the video clipping device cuts the target video corresponding to the target mark information according to the target start time and target end time in the target mark information, obtaining the target video fragments.
Illustratively, the video clipping device may first determine a target video title according to the target mark information, take the video indicated by the target video title as the target video corresponding to the target mark information, and then cut the target video according to the target start time and target end time corresponding to the target label, obtaining the target video fragments.
Optionally, as can be seen from Table 3, the target start times of the target video content indicated by target label ID-A are s1 and s3, and the target end times are e1 and e3, where target start time s1 corresponds to target end time e1 and target start time s3 corresponds to target end time e3. That is, the video content between target start time s1 and target end time e1 is target video content, and the video content between target start time s3 and target end time e3 is target video content. Therefore, the video clipping device cuts the target video according to target start time s1 and target end time e1, and according to target start time s3 and target end time e3, respectively, obtaining two target video fragments. Assume that these two target video fragments are target video fragment m and target video fragment n; as can be seen from Table 3, the duration of target video fragment m may be 3 minutes and the duration of target video fragment n may be 7 minutes.
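The cutting of step 210 can be sketched as follows, with times in seconds; `cut_fragments` is a hypothetical helper, not part of the disclosure.

```python
# Hypothetical sketch of step 210: cut the target video at each
# (start, end) pair recorded for the target label, yielding one
# target video fragment per pair.
def cut_fragments(mark_records):
    """Return (start, end, duration) for each target video fragment."""
    return [(r["start"], r["end"], r["end"] - r["start"]) for r in mark_records]

# The two ID-A records of Table 3 (00:00-03:00 and 08:02-15:02):
fragments = cut_fragments([
    {"start": 0, "end": 180},
    {"start": 482, "end": 902},
])
```

The resulting durations, 180 s and 420 s, are the 3-minute fragment m and the 7-minute fragment n.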
In step 211, the target video fragments are clipped to obtain the clipped video.
After obtaining the target video fragments, the video clipping device may clip them to obtain the clipped video.
In the embodiments of the present disclosure, depending on the number of target video fragments, the video clipping device may process the target video fragments in different ways to obtain the clipped video.
On the one hand, please refer to Fig. 2-4, which illustrates a method flowchart, provided by the embodiment shown in Fig. 2-1, for clipping the target video fragment to obtain the clipped video. Referring to Fig. 2-4, the method flow may include the following steps:
In sub-step 2111a, when the number of target video fragments is one, the duration of the target video fragment is compared with the target duration.
Optionally, the video clipping device may first determine the duration of the target video fragment and then compare it with the target duration. The duration of the target video fragment refers to its playing duration. In the embodiments of the present disclosure, since the target mark information records the target label, and the target start time and target end time of the target video content indicated by the target label, the video clipping device may determine the duration of the target video fragment according to the target end time and target start time recorded in the target mark information.
Illustratively, the video clipping device may take the time difference between the target end time and the target start time as the duration of the target video fragment. Assume that in step 210 the video clipping device cuts the target video to obtain one target video fragment, which is target video fragment m; its target end time is then e1 and its target start time is s1, so the video clipping device takes the time difference between e1 and s1 as the duration of target video fragment m. As can be seen from Table 3, this duration is 3 minutes.
After determining the duration of the target video fragment, the video clipping device may compare it with the target duration. Illustratively, the video clipping device compares the time difference between target end time e1 and target start time s1 of target video fragment m with the target duration. Since the duration of target video fragment m is 3 minutes and, as can be seen from step 207, the target duration is 5 minutes, comparing the two amounts to comparing a duration of 3 minutes with one of 5 minutes.
In sub-step 2112a, the target video fragment is clipped according to the comparison result to obtain the clipped video.
After comparing the duration of the target video fragment with the target duration, the video clipping device may clip the target video fragment according to the comparison result to obtain the clipped video. Illustratively, please refer to Fig. 2-5, which illustrates a method flowchart, provided by the embodiment shown in Fig. 2-4, for clipping the target video fragment according to the comparison result to obtain the clipped video. Referring to Fig. 2-5, the method flow may include the following steps:
In sub-step 2113a1, when the duration of the target video fragment is equal to the target duration, the target video fragment is determined as the clipped video.
If in sub-step 2112a the video clipping device compares the duration of the target video fragment with the target duration and determines that they are equal, it determines the target video fragment as the clipped video. Illustratively, assume that the duration of target video fragment m is 5 minutes; the video clipping device then determines target video fragment m as the clipped video.
In sub-step 2113a2, when the duration of the target video fragment is greater than the target duration, the target video fragment is cut according to the target duration to obtain the clipped video.
If in sub-step 2112a the video clipping device determines that the duration of the target video fragment is greater than the target duration, it may cut the target video fragment according to the target duration to obtain the clipped video. Illustratively, assume that the duration of target video fragment m is greater than 5 minutes; the video clipping device then cuts target video fragment m according to the target duration to obtain the clipped video. Illustratively, the video clipping device may cut the target video fragment according to the target duration in a preset cutting mode. The preset cutting mode is set in advance and is not limited by the embodiments of the present disclosure; optionally, it may be: starting from the target start time of the target video fragment, cutting out a video fragment of the target duration from the content played after the target start time; or, ending at the target end time of the target video fragment, cutting out a video fragment of the target duration from the content played before the target end time; or, starting from a predetermined moment in the target video fragment, cutting out a video fragment of the target duration from the target video fragment. This is not limited by the embodiments of the present disclosure.
Illustratively, please refer to Fig. 2-6, which illustrates a cutting schematic diagram provided by the embodiment shown in Fig. 2-5. Referring to Fig. 2-6, the duration of the target video fragment is the time difference between the target end time and the target start time and is greater than the target duration; the video clipping device cuts out the video fragment between the target start time and moment t1 in the target video fragment as the clipped video, where the time difference between moment t1 and the target start time is equal to the target duration.
For another example, please refer to Fig. 2-7, which illustrates another cutting schematic diagram provided by the embodiment shown in Fig. 2-5. Referring to Fig. 2-7, the duration of the target video fragment is greater than the target duration; the video clipping device cuts out the video fragment between moment t2 and the target end time in the target video fragment as the clipped video, where the time difference between the target end time and moment t2 is equal to the target duration.
For yet another example, please refer to Fig. 2-8, which illustrates yet another cutting schematic diagram provided by the embodiment shown in Fig. 2-5. Referring to Fig. 2-8, the duration of the target video fragment is greater than the target duration; the video clipping device cuts out the video fragment between moment t3 and moment t4 in the target video fragment as the clipped video, where the time difference between moment t3 and moment t4 is equal to the target duration.
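The three preset cutting modes of Figs. 2-6 to 2-8 can be sketched as follows; the mode names and the `anchor` parameter (the predetermined moment of the third mode) are illustrative assumptions.

```python
# Sketch of the three preset cutting modes on a fragment given as a
# (start, end) interval in seconds; `anchor` is only used by the third mode.
def cut_by_mode(start, end, target, mode, anchor=None):
    """Cut a sub-interval of length `target` out of [start, end]."""
    assert end - start > target  # fragment duration exceeds target duration
    if mode == "from_start":     # Fig. 2-6: keep [start, t1]
        return (start, start + target)
    if mode == "from_end":       # Fig. 2-7: keep [t2, end]
        return (end - target, end)
    if mode == "from_anchor":    # Fig. 2-8: keep [t3, t4] from a preset moment
        return (anchor, anchor + target)
```

For the 7-minute fragment n (482 s to 902 s) and a 5-minute target, the first mode keeps the first 300 s and the second keeps the last 300 s.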
In sub-step 2113a3, when the duration of the target video fragment is less than the target duration, a prompt message is presented. Sub-step 2113a4 is performed.
If in sub-step 2112a the video clipping device determines that the duration of the target video fragment is less than the target duration, it may present a prompt message to the user, the prompt message being used to indicate that the duration of the target video fragment is less than the target duration. Optionally, the video clipping device may first generate the prompt message according to the duration of the target video fragment and the target duration, and then present it to the user. Illustratively, the video clipping device may display a prompt box showing the prompt message, or may report the prompt message to the user in the form of speech, which is not limited by the embodiments of the present disclosure. In the embodiments of the present disclosure, since the duration of target video fragment m is 3 minutes, the target duration is 5 minutes, and the duration of the target video fragment is thus less than the target duration, the video clipping device presents the prompt message.
Illustratively, the embodiments of the present disclosure are illustrated with the video clipping device displaying the prompt message. The prompt message may be: "A target video fragment with a duration of 3 minutes was found; its duration is less than 5 minutes. Use it as the clipped video?" The display interface of the video clipping device may then be as shown in Fig. 2-9. Referring to Fig. 2-9, a prompt box 241 is shown on the display interface 240 of the video clipping device; the prompt box 241 shows the prompt message "A target video fragment with a duration of 3 minutes was found; its duration is less than 5 minutes. Use it as the clipped video?", and a confirm button and a cancel button are also shown on the display interface 240. Of course, other information may also be shown on the display interface 240, and the prompt message may also be other information, which is not limited by the embodiments of the present disclosure.
In sub-step 2113a4, the clipped video is obtained by clipping according to an operation instruction triggered by the user operating on the prompt message.
After the video clipping device presents the prompt message, the user may operate on it to trigger an operation instruction, and the video clipping device may clip according to the operation instruction triggered by the user to obtain the clipped video. Illustratively, when the user clicks the confirm button shown in Fig. 2-9, the video clipping device takes the target video fragment whose duration is less than the target duration as the clipped video, that is, takes target video fragment m as the clipped video; when the user clicks the cancel button shown in Fig. 2-9, the video clipping device may abandon the clipping, which is not limited by the embodiments of the present disclosure.
On the other hand, please refer to Fig. 2-10, which illustrates another method flowchart, provided by the embodiment shown in Fig. 2-1, for clipping the target video fragments to obtain the clipped video. Referring to Fig. 2-10, the method flow may include the following steps:
In sub-step 2111b, when the number of target video fragments is at least two, the duration of each of the at least two target video fragments is determined. Sub-step 2112b is performed.
When the number of target video fragments is at least two, the video clipping device may determine the duration of each of them. Optionally, the video clipping device may first determine the target start time and target end time of each target video fragment, and then determine the duration of each target video fragment accordingly.
In the embodiments of the present disclosure, as can be seen from step 210, the video clipping device cuts the target video to obtain two target video fragments, namely target video fragment m and target video fragment n. The target start time of target video fragment m is s1 and its target end time is e1; the target start time of target video fragment n is s3 and its target end time is e3. Therefore, the video clipping device determines the duration of target video fragment m according to target start time s1 and target end time e1, and the duration of target video fragment n according to target start time s3 and target end time e3. Illustratively, the video clipping device takes the time difference between e1 and s1 as the duration of target video fragment m, and the time difference between e3 and s3 as the duration of target video fragment n; as can be seen from Table 3, the duration of target video fragment m may be 3 minutes and the duration of target video fragment n may be 7 minutes.
In sub-step 2112b, it is judged whether there is, among the at least two target video fragments, a target video fragment whose duration is equal to the target duration. When such a fragment exists, sub-step 2113b is performed; when no such fragment exists, sub-step 2114b is performed.
After determining the duration of each of the at least two target video fragments, the video clipping device may judge whether any of them has a duration equal to the target duration. Optionally, the video clipping device may make this judgment by comparing the duration of each target video fragment with the target duration. Illustratively, the video clipping device compares the duration of target video fragment m with the target duration to judge whether they are equal, and compares the duration of target video fragment n with the target duration to judge whether they are equal, which is not limited by the embodiments of the present disclosure.
In sub-step 2113b, duration is defined as the video after editing equal to the target video fragment of target duration.
If in sub-step 2112b, video clipping equipment determines the target video fragment that there is duration at least two target video fragment equal to target duration, then duration is defined as the video after editing equal to the target video fragment of target duration by video clipping equipment.
It should be noted that if several of the at least two target video fragments have durations equal to the target duration, the video clipping equipment may randomly select one of them and determine it as the clipped video. Alternatively, the video clipping equipment may determine the clipped video in another manner; this is not limited by the embodiments of the disclosure.
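The selection of sub-steps 2112b and 2113b, including the random choice when several fragments qualify, can be sketched as follows (the list-of-durations representation and function name are hypothetical):

```python
import random

def pick_fragment_equal_to_target(durations, target):
    """Return the index of one fragment whose duration equals the target
    duration, chosen at random when several qualify; None when none does."""
    matches = [i for i, d in enumerate(durations) if d == target]
    return random.choice(matches) if matches else None

# With fragments of 3 and 7 minutes and a 5-minute target, no fragment
# qualifies and the method falls through to sub-step 2114b.
result = pick_fragment_equal_to_target([3, 7], 5)  # None
```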
In sub-step 2114b, it is judged whether the at least two target video fragments include a fragment whose duration exceeds the target duration. If such a fragment exists, sub-step 2115b is performed; otherwise, sub-step 2116b is performed.
If, in sub-step 2112b, the video clipping equipment determines that none of the at least two target video fragments has a duration equal to the target duration, it judges whether any of them has a duration greater than the target duration.
Optionally, the video clipping equipment makes this judgment by comparing the duration of each target video fragment with the target duration. Illustratively, the video clipping equipment compares the duration of target video fragment m with the target duration to judge whether it exceeds the target duration, and likewise compares the duration of target video fragment n with the target duration; this is not limited by the embodiments of the disclosure.
In the disclosed embodiments, since the duration of target video fragment m is 3 minutes, the duration of target video fragment n is 7 minutes, and the target duration is 5 minutes, the at least two target video fragments include a fragment whose duration exceeds the target duration, namely target video fragment n.
In sub-step 2115b, the fragment whose duration exceeds the target duration is sheared according to the target duration to obtain the clipped video.
If, in sub-step 2114b, the video clipping equipment determines that one of the at least two target video fragments has a duration greater than the target duration, it shears that fragment to obtain the clipped video. Illustratively, the video clipping equipment shears target video fragment n to obtain the clipped video. For the process by which the video clipping equipment shears a fragment longer than the target duration according to the target duration, reference may be made to sub-step 2113a2 in the embodiment illustrated in Fig. 2-5; this is not limited by the embodiments of the disclosure.
It should be noted that if, in sub-step 2114b, the video clipping equipment determines that several fragments have durations greater than the target duration, it may randomly select one of them and shear the selected fragment according to the target duration to obtain the clipped video. Alternatively, the video clipping equipment may determine the clipped video in another manner; this is not limited by the embodiments of the disclosure.
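Sub-step 2115b trims the over-long fragment down to exactly the target duration. Which portion of the fragment is kept follows sub-step 2113a2, which is not reproduced here, so keeping the opening portion in the sketch below is only an assumption:

```python
def shear_to_target(start, end, target):
    """Shear a fragment longer than the target duration down to exactly
    the target duration, keeping its beginning (an assumed policy; the
    embodiment defers the cut positions to sub-step 2113a2)."""
    if end - start <= target:
        raise ValueError("fragment must be longer than the target duration")
    return start, start + target

# Target video fragment n (7 minutes) sheared to the 5-minute target.
new_start, new_end = shear_to_target(0, 7 * 60, 5 * 60)  # (0, 300)
```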
In sub-step 2116b, it is judged whether the video segment groups formed from the at least two target video fragments include a to-be-clipped fragment group. If such a group exists, sub-step 2117b is performed; otherwise, sub-step 2120b is performed.
If, in sub-step 2114b, the video clipping equipment determines that none of the at least two target video fragments has a duration greater than the target duration, it may judge whether the video segment groups formed from the at least two target video fragments include a to-be-clipped fragment group. A to-be-clipped fragment group includes at least two target video fragments, and the total duration of all target video fragments in any to-be-clipped fragment group exceeds the target duration.
Optionally, the video clipping equipment may combine the at least two target video fragments into multiple video segment groups according to the duration of each fragment, and then judge whether these groups include a to-be-clipped fragment group; this is not limited by the embodiments of the disclosure.
Illustratively, suppose there are three target video fragments: target video fragment p with a duration of 1 minute, target video fragment q with a duration of 3 minutes, and target video fragment f with a duration of 4 minutes, so that each of them is shorter than the target duration of 5 minutes. The video clipping equipment combines p, q and f according to their durations. Illustratively, combining target video fragments p, q and f yields 4 video segment groups, whose relevant information may be as shown in Table 5 below:
Table 5
Referring to Table 5, combining target video fragments p, q and f yields 4 video segment groups: video segment group 1 includes target video fragments p and q (durations 1 minute and 3 minutes; total duration 4 minutes); video segment group 2 includes target video fragments p and f (durations 1 minute and 4 minutes; total duration 5 minutes); video segment group 3 includes target video fragments q and f (durations 3 minutes and 4 minutes; total duration 7 minutes); and video segment group 4 includes target video fragments p, q and f (durations 1 minute, 3 minutes and 4 minutes; total duration 8 minutes).
Illustratively, the video clipping equipment judges whether video segment groups 1 through 4 include a to-be-clipped fragment group, where a to-be-clipped fragment group includes at least two target video fragments and the total duration of all target video fragments in any to-be-clipped fragment group exceeds the target duration. As Table 5 shows, video segment group 3 and video segment group 4 each include at least two target video fragments, and since their total durations of 7 minutes and 8 minutes both exceed the target duration of 5 minutes, the video clipping equipment determines that to-be-clipped fragment groups exist among the video segment groups formed from the at least two target video fragments, namely video segment group 3 and video segment group 4.
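The grouping of sub-step 2116b amounts to enumerating every combination of at least two fragments and keeping those whose total duration exceeds the target; a minimal sketch with durations in minutes (names are hypothetical):

```python
from itertools import combinations

def to_be_clipped_groups(durations, target):
    """Enumerate every group of at least two fragments and keep the
    groups whose total duration exceeds the target duration."""
    groups = []
    for size in range(2, len(durations) + 1):
        for group in combinations(durations, size):
            if sum(group) > target:
                groups.append(group)
    return groups

# Fragments p, q, f of 1, 3 and 4 minutes with a 5-minute target:
# only {q, f} (7 minutes) and {p, q, f} (8 minutes) qualify,
# matching video segment groups 3 and 4 of Table 5.
groups = to_be_clipped_groups([1, 3, 4], 5)
```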
In sub-step 2117b, among all to-be-clipped fragment groups, the group whose total duration is closest to the target duration is determined as the target editing fragment group. Sub-step 2118b is then performed.
If, in sub-step 2116b, the video clipping equipment determines that to-be-clipped fragment groups exist among the video segment groups formed from the at least two target video fragments, it determines the to-be-clipped fragment group whose total duration is closest to the target duration as the target editing fragment group. Illustratively, the video clipping equipment compares the total duration of each to-be-clipped fragment group with the target duration and selects the group whose total duration is closest to the target duration. Illustratively, it compares the total durations of video segment group 3 and video segment group 4 with the target duration and determines the target editing fragment group between them. In the disclosed embodiments, since the total durations of video segment groups 3 and 4 are 7 minutes and 8 minutes respectively, and 7 minutes is closest to the target duration of 5 minutes, the video clipping equipment takes video segment group 3 as the target editing fragment group.
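Selecting the target editing fragment group in sub-step 2117b reduces to minimising the gap between a group's total duration and the target duration; a sketch (durations in minutes, names hypothetical):

```python
def target_editing_group(groups, target):
    """Among the to-be-clipped fragment groups, return the one whose
    total duration is closest to the target duration."""
    return min(groups, key=lambda group: abs(sum(group) - target))

# Between group 3 (3 + 4 = 7 minutes) and group 4 (1 + 3 + 4 = 8 minutes),
# 7 minutes is closer to the 5-minute target, so group 3 is chosen.
chosen = target_editing_group([(3, 4), (1, 3, 4)], 5)  # (3, 4)
```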
In sub-step 2118b, all target video fragments in the target editing fragment group are spliced to obtain the spliced video. Sub-step 2119b is then performed.
After determining the target editing fragment group, the video clipping equipment may splice all target video fragments in that group to obtain the spliced video. Illustratively, it splices all target video fragments in video segment group 3, that is to say, it splices target video fragment q and target video fragment f. Optionally, when splicing the target video fragments, the video clipping equipment may splice them according to a preset rule. Illustratively, it may first sort the fragments of the target editing fragment group according to the preset rule and then splice them in the sorted order to obtain the spliced video. Optionally, the video clipping equipment may sort the fragments of the target editing fragment group by their durations; or it may determine the shearing moment of each fragment and sort the fragments by those moments; or it may determine whether all fragments of the target editing fragment group were sheared from the same target video and, when they were not, sort them by the shooting moments of the target videos from which they were sheared. This is not limited by the embodiments of the disclosure.
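One of the orderings mentioned above, sorting by each fragment's shearing moment before splicing, can be sketched as follows (the `shear_time` field and the dictionary representation are hypothetical):

```python
def splice_order(fragments):
    """Sort the fragments of the target editing fragment group by the
    moment each one was sheared, one of the preset rules mentioned."""
    return sorted(fragments, key=lambda f: f["shear_time"])

fragments = [
    {"name": "f", "shear_time": 12.0},
    {"name": "q", "shear_time": 5.0},
]
ordered = splice_order(fragments)  # q is spliced before f
```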
In sub-step 2119b, the spliced video is sheared according to the target duration to obtain the clipped video.
After splicing all target video fragments in the target editing fragment group to obtain the spliced video, the video clipping equipment may shear the spliced video according to the target duration to obtain the clipped video. For the process by which the video clipping equipment shears the spliced video according to the target duration, reference may be made to sub-step 2113a2 in the embodiment illustrated in Fig. 2-5; this is not limited by the embodiments of the disclosure.
It should be noted that, for the case in which none of the at least two target video fragments has a duration greater than the target duration, the embodiments of the disclosure are described by taking as an example a to-be-clipped fragment group that includes at least two target video fragments whose total duration exceeds the target duration. In practical application, when the video clipping equipment determines that none of the at least two target video fragments has a duration greater than the target duration, it may first judge whether the video segment groups formed from the at least two target video fragments include a to-be-clipped fragment group whose fragments' total duration equals the target duration; if such a group exists, the video clipping equipment splices all target video fragments in that group to obtain the clipped video directly. This is not limited by the embodiments of the disclosure.
In sub-step 2120b, prompt information is presented, the prompt information being used to indicate that a clipped video of the target duration cannot be obtained. Sub-step 2121b is then performed.
If, in sub-step 2116b, the video clipping equipment determines that no to-be-clipped fragment group exists among the video segment groups formed from the at least two target video fragments, it may present prompt information to the user, the prompt information indicating that a clipped video of the target duration cannot be obtained. For the process by which the video clipping equipment presents the prompt information to the user, reference may be made to sub-step 2113a3 in the embodiment illustrated in Fig. 2-5; this is not limited by the embodiments of the disclosure.
In sub-step 2121b, the clipped video is obtained by processing according to the operation instruction triggered by the user operating on the prompt information.
For the implementation of sub-step 2121b, reference may be made to sub-step 2113a4 in the embodiment illustrated in Fig. 2-5, which is not repeated here.
It should be noted that the order of the steps of the video clipping method provided by the embodiments of the disclosure may be adjusted appropriately, and steps may be added or removed as circumstances require. Any variation readily conceived by those familiar with the art within the technical scope disclosed herein shall be covered by the protection scope of the disclosure and is therefore not described further.
In summary, in the video clipping method provided by the embodiments of the disclosure, a video clipping instruction triggered by a user is received, the video clipping instruction including a target label; pre-generated video dotting information is queried according to the target label; target dotting information is determined according to the query result; the target video is sheared according to the target dotting information to obtain target video fragments; and the target video fragments are clipped to obtain the clipped video. Since the clipped video can be obtained according to the video clipping instruction as soon as the instruction is received, the user neither has to watch the video nor shear it manually. This solves the problem in the related art that the video clipping process is complicated, and achieves the effect of simplifying the video clipping process.
Since the video clipping method provided by the embodiments of the disclosure simplifies the video clipping process, the clipped video can be obtained quickly; moreover, users without professional editing experience can also clip videos, so the operation threshold can be lowered.
The video clipping method provided by the embodiments of the disclosure can reduce clipping difficulty, save clipping time, and conveniently achieve video clipping.
The following are device embodiments of the disclosure, which may be used to perform the method embodiments of the disclosure. For details not disclosed in the device embodiments, refer to the method embodiments of the disclosure.
Fig. 3 is a block diagram of a video clipping device 300 according to an exemplary embodiment. The video clipping device 300 may be implemented, through software, hardware or a combination of both, as part or all of the video clipping equipment. Referring to Fig. 3, the video clipping device 300 may include:
a first receiving module 310, configured to receive a video clipping instruction triggered by a user, the video clipping instruction including a target label;
a query module 320, configured to query pre-generated video dotting information according to the target label;
a first determining module 330, configured to determine target dotting information according to the query result;
a shearing module 340, configured to shear the target video according to the target dotting information to obtain target video fragments; and
a clipping module 350, configured to clip the target video fragments to obtain the clipped video.
In summary, in the video clipping device provided by the embodiments of the disclosure, a video clipping instruction triggered by a user is received, the video clipping instruction including a target label; pre-generated video dotting information is queried according to the target label; target dotting information is determined according to the query result; the target video is sheared according to the target dotting information to obtain target video fragments; and the target video fragments are clipped to obtain the clipped video. Since the clipped video can be obtained according to the video clipping instruction as soon as the instruction is received, the user neither has to watch the video nor shear it manually. This solves the problem in the related art that the video clipping process is complicated, and achieves the effect of simplifying the video clipping process.
Fig. 4-1 is a block diagram of a video clipping device 400 according to another exemplary embodiment. The video clipping device 400 may be implemented, through software, hardware or a combination of both, as part or all of the video clipping equipment. Referring to Fig. 4-1, the video clipping device 400 may include but is not limited to:
a first receiving module 401, configured to receive a video clipping instruction triggered by a user, the video clipping instruction including a target label;
a query module 402, configured to query pre-generated video dotting information according to the target label;
wherein the video dotting information records the dotting information of each video in the video clipping equipment, each piece of dotting information records at least one label, and the initial time and finish time of the video content indicated by each of the at least one label;
a first determining module 403, configured to determine target dotting information according to the query result;
wherein the target dotting information records one target label, and the target initial time and target finish time of the target video content indicated by the target label;
a shearing module 404, configured to shear the target video according to the target dotting information to obtain target video fragments;
wherein the target dotting information records the target label, and the target initial time and target finish time of the target video content indicated by the target label, and the target video is the video corresponding to the target dotting information; shearing the target video according to the target dotting information to obtain the target video fragments means shearing, according to the target initial time and the target finish time, the target video corresponding to the target dotting information to obtain the target video fragments; and
a clipping module 405, configured to clip the target video fragments to obtain the clipped video.
Optionally, still referring to Fig. 4-1, the video clipping device 400 may further include:
a second receiving module 406, configured to receive a video shooting instruction;
a shooting module 407, configured to shoot video according to the video shooting instruction to obtain a first video, the first video including at least one video content;
an identification module 408, configured to identify, during shooting, each of the at least one video content to obtain the label of each video content;
a second determining module 409, configured to determine the initial time and finish time of each video content; and
a generating module 410, configured to generate the dotting information of the first video according to the label of each video content and the initial time and finish time of each video content.
Optionally, the identification module 408 is configured to:
during shooting, extract, at every preset time interval, a video frame from the video content shot within the preset time interval;
extract the characteristic information of the video frame; and
query a preset correspondence between characteristic information and labels to obtain the label corresponding to the characteristic information of the video frame.
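The three operations of the identification module can be sketched as follows; the frame, feature and correspondence-table representations are all placeholders, since the embodiment does not fix them:

```python
def identify_labels(frames_by_interval, feature_of, feature_to_label):
    """For each preset time interval, take the extracted video frame,
    compute its characteristic information, and look that information up
    in the preset characteristic-information-to-label correspondence."""
    labels = []
    for frame in frames_by_interval:
        feature = feature_of(frame)
        labels.append(feature_to_label.get(feature))
    return labels

# Toy stand-ins: the "characteristic information" is just a string key.
table = {"green": "landscape", "skin": "portrait"}
labels = identify_labels(
    ["frame1", "frame2"],
    {"frame1": "green", "frame2": "skin"}.get,
    table,
)  # ["landscape", "portrait"]
```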
Optionally, the video clipping instruction further includes a target duration. Fig. 4-2 is a block diagram of a clipping module 405 provided by the embodiment illustrated in Fig. 4-1. Referring to Fig. 4-2, the clipping module 405 may include:
a comparing submodule 4051a, configured to compare the duration of the target video fragment with the target duration when the number of target video fragments is one; and
a clipping submodule 4052a, configured to clip the target video fragment according to the comparison result to obtain the clipped video.
Optionally, the clipping submodule 4052a is configured to:
when the duration of the target video fragment equals the target duration, determine the target video fragment as the clipped video;
when the duration of the target video fragment exceeds the target duration, shear the target video fragment according to the target duration to obtain the clipped video; and
when the duration of the target video fragment is less than the target duration, present prompt information and obtain the clipped video according to the operation instruction triggered by the user operating on the prompt information.
Optionally, the video clipping instruction further includes a target duration. Fig. 4-3 is a block diagram of another clipping module 405 provided by the embodiment illustrated in Fig. 4-1. Referring to Fig. 4-3, the clipping module 405 may include but is not limited to:
a first determining submodule 4051b, configured to determine the duration of each of at least two target video fragments when the number of target video fragments is at least two;
a first judging submodule 4052b, configured to judge whether the at least two target video fragments include a fragment whose duration equals the target duration; and
a second determining submodule 4053b, configured to determine, when the at least two target video fragments include a fragment whose duration equals the target duration, that fragment as the clipped video.
Further, still referring to Fig. 4-3, the clipping module 405 may further include:
a second judging submodule 4054b, configured to judge, when none of the at least two target video fragments has a duration equal to the target duration, whether the at least two target video fragments include a fragment whose duration exceeds the target duration; and
a first shearing submodule 4055b, configured to shear, when the at least two target video fragments include a fragment whose duration exceeds the target duration, that fragment according to the target duration to obtain the clipped video.
Optionally, still referring to Fig. 4-3, the clipping module 405 may further include:
a third judging submodule 4056b, configured to judge, when none of the at least two target video fragments has a duration exceeding the target duration, whether the video segment groups formed from the at least two target video fragments include a to-be-clipped fragment group, where a to-be-clipped fragment group includes at least two target video fragments and the total duration of all target video fragments in any to-be-clipped fragment group exceeds the target duration;
a third determining submodule 4057b, configured to determine, when the video segment groups formed from the at least two target video fragments include a to-be-clipped fragment group, the to-be-clipped fragment group whose total duration is closest to the target duration as the target editing fragment group;
a splicing submodule 4058b, configured to splice all target video fragments in the target editing fragment group to obtain the spliced video; and
a second shearing submodule 4059b, configured to shear the spliced video according to the target duration to obtain the clipped video.
In summary, in the video clipping device provided by the embodiments of the disclosure, a video clipping instruction triggered by a user is received, the video clipping instruction including a target label; pre-generated video dotting information is queried according to the target label; target dotting information is determined according to the query result; the target video is sheared according to the target dotting information to obtain target video fragments; and the target video fragments are clipped to obtain the clipped video. Since the clipped video can be obtained according to the video clipping instruction as soon as the instruction is received, the user neither has to watch the video nor shear it manually. This solves the problem in the related art that the video clipping process is complicated, and achieves the effect of simplifying the video clipping process.
Since the video clipping device provided by the embodiments of the disclosure simplifies the video clipping process, the clipped video can be obtained quickly; moreover, users without professional editing experience can also clip videos, so the operation threshold can be lowered.
The video clipping device provided by the embodiments of the disclosure can reduce clipping difficulty, save clipping time, and conveniently achieve video clipping.
As for the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and is not elaborated here.
Fig. 5 is a block diagram of a video clipping device 500 according to an exemplary embodiment. For example, the device 500 may be a smartphone, a tablet computer, a desktop computer, an MP4 player, a smart television, a laptop computer, or the like.
Referring to Fig. 5, the device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls the overall operation of the device 500, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 502 may include one or more processors 520 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions of any application or method operating on the device 500, contact data, phonebook data, messages, pictures, videos, and so on. The memory 504 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The power component 506 supplies power to the various components of the device 500. The power component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 500.
The multimedia component 508 includes a screen providing an output interface between the device 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the device 500 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC), which is configured to receive external audio signals when the device 500 is in an operating mode, such as a call mode, a recording mode or a speech recognition mode. The received audio signals may be further stored in the memory 504 or sent via the communication component 516. In some embodiments, the audio component 510 also includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. The buttons may include, but are not limited to, a home button, a volume button, a starting button, and a locking button.
The sensor component 514 includes one or more sensors to provide status assessments of various aspects of the device 500. For example, the sensor component 514 may detect an open/closed status of the device 500 and the relative positioning of components, e.g., the display and the keypad, of the device 500; the sensor component 514 may also detect a change in position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices. The device 500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 516 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In exemplary embodiments, the device 500 may be implemented with one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-described methods.
In exemplary embodiments, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 504 including instructions, that are executable by the processor 520 of the device 500 to perform the above-described methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided, such that when the instructions in the storage medium are executed by the processor of the device 500, the device 500 is enabled to perform a video clipping method, the method including:
receiving a video clipping instruction triggered by a user, the video clipping instruction including a target tag;
querying pre-generated dotting information of a video according to the target tag;
determining target dotting information according to the query result;
cutting a target video according to the target dotting information to obtain a target video segment; and
clipping the target video segment to obtain a clipped video.
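The flow above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: `DotRecord`, `find_target_dots`, and `cut_segments` are assumed names, and a segment is modeled as a `(start, end)` span rather than actual video data.

```python
from dataclasses import dataclass

@dataclass
class DotRecord:
    tag: str        # label recognized for the video content, e.g. "birthday"
    start: float    # start time of the content, in seconds
    end: float      # end time of the content, in seconds

def find_target_dots(dot_info, target_tag):
    """Query the pre-generated dotting information for records matching the target tag."""
    return [d for d in dot_info if d.tag == target_tag]

def cut_segments(dots):
    """Cut the target video at each record's start/end time, yielding segments.

    A real implementation would invoke a video framework; here a segment is
    represented only by its (start, end) span.
    """
    return [(d.start, d.end) for d in dots]

dot_info = [DotRecord("birthday", 0.0, 12.0), DotRecord("travel", 12.0, 30.0)]
segments = cut_segments(find_target_dots(dot_info, "birthday"))
```

Each resulting segment then goes through the clipping step described below, so the user never has to watch or manually cut the video.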
Optionally, the method further includes:
receiving a video shooting instruction;
performing video shooting according to the video shooting instruction to obtain a first video, the first video including at least one video content;
recognizing, during shooting, each video content in the at least one video content to obtain a tag of each video content;
determining a start time and an end time of each video content; and
generating dotting information of the first video according to the tag of each video content and the start time and the end time of each video content.
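One plausible way to turn the per-interval recognition results into per-content dotting records is to merge consecutive identical labels, taking the first occurrence as the content's start time and the last as its end time. The function name and record layout below are assumptions for illustration only.

```python
def build_dot_info(samples, interval):
    """Merge per-interval labels into dotting records with start/end times.

    samples: labels recognized during shooting, one per sampling interval,
             in shooting order. interval: the preset sampling interval (s).
    """
    records = []
    for i, label in enumerate(samples):
        start = i * interval
        if records and records[-1]["tag"] == label:
            records[-1]["end"] = start + interval   # same content continues
        else:
            records.append({"tag": label, "start": start, "end": start + interval})
    return records

info = build_dot_info(["party", "party", "beach"], interval=5.0)
```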
Optionally, recognizing, during shooting, each video content in the at least one video content to obtain the tag of each video content includes:
extracting, during shooting and at every preset time interval, a video frame from the video content shot within the preset time interval;
extracting feature information of the video frame; and
querying a preset correspondence between feature information and tags to obtain the tag corresponding to the feature information of the video frame.
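The lookup step amounts to a table query from feature information to tags. In this minimal sketch, `extract_features` is a stand-in for a real image-analysis routine and `PRESET_LABELS` is a hypothetical preset correspondence table; neither comes from the patent.

```python
# Hypothetical preset correspondence between feature information and tags.
PRESET_LABELS = {"cake+candles": "birthday", "sand+sea": "beach"}

def extract_features(frame):
    # Stand-in: a real system would analyze the frame's pixels to
    # produce feature information.
    return frame["features"]

def label_for_frame(frame):
    """Query the preset feature/tag correspondence for one sampled frame."""
    return PRESET_LABELS.get(extract_features(frame), "unknown")
```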
Optionally, the video clipping instruction further includes a target duration, and clipping the target video segment to obtain the clipped video includes:
when the number of target video segments is one, comparing the duration of the target video segment with the target duration; and
clipping the target video segment according to the comparison result to obtain the clipped video.
Optionally, clipping the target video segment according to the comparison result to obtain the clipped video includes:
when the duration of the target video segment is equal to the target duration, determining the target video segment as the clipped video;
when the duration of the target video segment is greater than the target duration, cutting the target video segment according to the target duration to obtain the clipped video; and
when the duration of the target video segment is less than the target duration, presenting prompt information, and clipping according to an operation instruction triggered by a user operation on the prompt information to obtain the clipped video.
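The three-way comparison for a single segment can be sketched as follows; this is an assumption-laden illustration (a segment is a `(start, end)` span, and the sentinel string "prompt" models the too-short case where the user must respond to the prompt information).

```python
def clip_single(segment, target):
    """Clip one target video segment against the target duration."""
    start, end = segment
    duration = end - start
    if duration == target:
        return segment                   # already the clipped video
    if duration > target:
        return (start, start + target)   # cut down to the target duration
    return "prompt"                      # too short: present prompt information
```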
Optionally, the video clipping instruction further includes a target duration, and clipping the target video segment to obtain the clipped video includes:
when the number of target video segments is at least two, determining the duration of each of the at least two target video segments;
judging whether a target video segment whose duration is equal to the target duration exists among the at least two target video segments; and
when a target video segment whose duration is equal to the target duration exists among the at least two target video segments, determining that target video segment as the clipped video.
Optionally, the method further includes:
when no target video segment whose duration is equal to the target duration exists among the at least two target video segments, judging whether a target video segment whose duration is greater than the target duration exists among the at least two target video segments; and
when a target video segment whose duration is greater than the target duration exists among the at least two target video segments, cutting that target video segment according to the target duration to obtain the clipped video.
Optionally, the method further includes:
when no target video segment whose duration is greater than the target duration exists among the at least two target video segments, judging whether a to-be-clipped segment group exists among the video segment groups formed by the at least two target video segments, where each to-be-clipped segment group includes at least two of the target video segments and the sum of the durations of all target video segments in any to-be-clipped segment group is greater than the target duration;
when a to-be-clipped segment group exists among the video segment groups formed by the at least two target video segments, determining, from all the to-be-clipped segment groups, the to-be-clipped segment group whose total duration is closest to the target duration as a target clipping segment group;
splicing all the target video segments in the target clipping segment group to obtain a spliced video; and
cutting the spliced video according to the target duration to obtain the clipped video.
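The whole multi-segment cascade (exact match, then trim a longer segment, then splice the group whose total duration exceeds and is closest to the target) can be sketched as below. This is an illustrative reading of the steps, not the patent's exact procedure: segments are `(start, end)` spans, and the function returns an `(action, payload)` pair rather than performing real video operations.

```python
from itertools import combinations

def clip_multi(segments, target):
    """Return (action, payload) describing how the clipped video is obtained."""
    # 1. Prefer a segment whose duration equals the target duration.
    for start, end in segments:
        if end - start == target:
            return ("use", (start, end))
    # 2. Otherwise cut a segment whose duration exceeds the target duration.
    for start, end in segments:
        if end - start > target:
            return ("trim", (start, start + target))
    # 3. Otherwise pick the group of >= 2 segments whose total duration
    #    exceeds the target and is closest to it, to be spliced then cut.
    groups = [g for n in range(2, len(segments) + 1)
              for g in combinations(segments, n)
              if sum(e - s for s, e in g) > target]
    if not groups:
        return ("prompt", None)  # nothing long enough even when combined
    best = min(groups, key=lambda g: sum(e - s for s, e in g))
    return ("splice_and_trim", list(best))
```

Because every candidate group already exceeds the target duration, the group with the smallest total duration is the one closest to it, so a plain `min` over the sums suffices.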
In summary, the video clipping device provided by the embodiments of the present disclosure receives a video clipping instruction triggered by a user, the video clipping instruction including a target tag; queries pre-generated dotting information of a video according to the target tag; determines target dotting information according to the query result; cuts a target video according to the target dotting information to obtain a target video segment; and clips the target video segment to obtain a clipped video. Since the clipped video can be obtained according to the video clipping instruction as soon as the instruction is received, without the user having to watch the video or cut it manually, the problem of the complicated video clipping process in the related art is solved, and the effect of simplifying the video clipping process is achieved.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be appreciated that the present disclosure is not limited to the exact constructions described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the disclosure is limited only by the appended claims.

Claims (17)

1. A video clipping method, characterized in that the method comprises:
receiving a video clipping instruction triggered by a user, the video clipping instruction comprising a target tag;
querying pre-generated dotting information of a video according to the target tag;
determining target dotting information according to a query result;
cutting a target video according to the target dotting information to obtain a target video segment; and
clipping the target video segment to obtain a clipped video.
2. The method according to claim 1, characterized in that the method further comprises:
receiving a video shooting instruction;
performing video shooting according to the video shooting instruction to obtain a first video, the first video comprising at least one video content;
recognizing, during shooting, each video content in the at least one video content to obtain a tag of each video content;
determining a start time and an end time of each video content; and
generating dotting information of the first video according to the tag of each video content and the start time and the end time of each video content.
3. The method according to claim 2, characterized in that recognizing, during shooting, each video content in the at least one video content to obtain the tag of each video content comprises:
extracting, during shooting and at every preset time interval, a video frame from the video content shot within the preset time interval;
extracting feature information of the video frame; and
querying a preset correspondence between feature information and tags to obtain the tag corresponding to the feature information of the video frame.
4. The method according to any one of claims 1 to 3, characterized in that the video clipping instruction further comprises a target duration, and clipping the target video segment to obtain the clipped video comprises:
when the number of target video segments is one, comparing the duration of the target video segment with the target duration; and
clipping the target video segment according to a comparison result to obtain the clipped video.
5. The method according to claim 4, characterized in that clipping the target video segment according to the comparison result to obtain the clipped video comprises:
when the duration of the target video segment is equal to the target duration, determining the target video segment as the clipped video;
when the duration of the target video segment is greater than the target duration, cutting the target video segment according to the target duration to obtain the clipped video; and
when the duration of the target video segment is less than the target duration, presenting prompt information, and clipping according to an operation instruction triggered by a user operation on the prompt information to obtain the clipped video.
6. The method according to any one of claims 1 to 3, characterized in that the video clipping instruction further comprises a target duration, and clipping the target video segment to obtain the clipped video comprises:
when the number of target video segments is at least two, determining the duration of each of the at least two target video segments;
judging whether a target video segment whose duration is equal to the target duration exists among the at least two target video segments; and
when a target video segment whose duration is equal to the target duration exists among the at least two target video segments, determining that target video segment as the clipped video.
7. The method according to claim 6, characterized in that the method further comprises:
when no target video segment whose duration is equal to the target duration exists among the at least two target video segments, judging whether a target video segment whose duration is greater than the target duration exists among the at least two target video segments; and
when a target video segment whose duration is greater than the target duration exists among the at least two target video segments, cutting that target video segment according to the target duration to obtain the clipped video.
8. The method according to claim 7, characterized in that the method further comprises:
when no target video segment whose duration is greater than the target duration exists among the at least two target video segments, judging whether a to-be-clipped segment group exists among the video segment groups formed by the at least two target video segments, where each to-be-clipped segment group comprises at least two of the target video segments and the sum of the durations of all target video segments in any to-be-clipped segment group is greater than the target duration;
when a to-be-clipped segment group exists among the video segment groups formed by the at least two target video segments, determining, from all the to-be-clipped segment groups, the to-be-clipped segment group whose total duration is closest to the target duration as a target clipping segment group;
splicing all the target video segments in the target clipping segment group to obtain a spliced video; and
cutting the spliced video according to the target duration to obtain the clipped video.
9. A video clipping device, characterized in that the device comprises:
a first receiving module configured to receive a video clipping instruction triggered by a user, the video clipping instruction comprising a target tag;
a query module configured to query pre-generated dotting information of a video according to the target tag;
a first determining module configured to determine target dotting information according to a query result;
a cutting module configured to cut a target video according to the target dotting information to obtain a target video segment; and
a clipping module configured to clip the target video segment to obtain a clipped video.
10. The device according to claim 9, characterized in that the device further comprises:
a second receiving module configured to receive a video shooting instruction;
a shooting module configured to perform video shooting according to the video shooting instruction to obtain a first video, the first video comprising at least one video content;
a recognition module configured to recognize, during shooting, each video content in the at least one video content to obtain a tag of each video content;
a second determining module configured to determine a start time and an end time of each video content; and
a generating module configured to generate dotting information of the first video according to the tag of each video content and the start time and the end time of each video content.
11. The device according to claim 10, characterized in that the recognition module is configured to:
extract, during shooting and at every preset time interval, a video frame from the video content shot within the preset time interval;
extract feature information of the video frame; and
query a preset correspondence between feature information and tags to obtain the tag corresponding to the feature information of the video frame.
12. The device according to any one of claims 9 to 11, characterized in that the video clipping instruction further comprises a target duration, and the clipping module comprises:
a comparison submodule configured to compare, when the number of target video segments is one, the duration of the target video segment with the target duration; and
a clipping submodule configured to clip the target video segment according to a comparison result to obtain the clipped video.
13. The device according to claim 12, characterized in that the clipping submodule is configured to:
when the duration of the target video segment is equal to the target duration, determine the target video segment as the clipped video;
when the duration of the target video segment is greater than the target duration, cut the target video segment according to the target duration to obtain the clipped video; and
when the duration of the target video segment is less than the target duration, present prompt information, and clip according to an operation instruction triggered by a user operation on the prompt information to obtain the clipped video.
14. The device according to any one of claims 9 to 11, characterized in that the video clipping instruction further comprises a target duration, and the clipping module comprises:
a first determining submodule configured to determine, when the number of target video segments is at least two, the duration of each of the at least two target video segments;
a first judging submodule configured to judge whether a target video segment whose duration is equal to the target duration exists among the at least two target video segments; and
a second determining submodule configured to determine, when a target video segment whose duration is equal to the target duration exists among the at least two target video segments, that target video segment as the clipped video.
15. The device according to claim 14, characterized in that the clipping module further comprises:
a second judging submodule configured to judge, when no target video segment whose duration is equal to the target duration exists among the at least two target video segments, whether a target video segment whose duration is greater than the target duration exists among the at least two target video segments; and
a first cutting submodule configured to cut, when a target video segment whose duration is greater than the target duration exists among the at least two target video segments, that target video segment according to the target duration to obtain the clipped video.
16. The device according to claim 15, characterized in that the clipping module further comprises:
a third judging submodule configured to judge, when no target video segment whose duration is greater than the target duration exists among the at least two target video segments, whether a to-be-clipped segment group exists among the video segment groups formed by the at least two target video segments, where each to-be-clipped segment group comprises at least two of the target video segments and the sum of the durations of all target video segments in any to-be-clipped segment group is greater than the target duration;
a third determining submodule configured to determine, when a to-be-clipped segment group exists among the video segment groups formed by the at least two target video segments, from all the to-be-clipped segment groups, the to-be-clipped segment group whose total duration is closest to the target duration as a target clipping segment group;
a splicing submodule configured to splice all the target video segments in the target clipping segment group to obtain a spliced video; and
a second cutting submodule configured to cut the spliced video according to the target duration to obtain the clipped video.
17. A video clipping device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
receive a video clipping instruction triggered by a user, the video clipping instruction comprising a target tag;
query pre-generated dotting information of a video according to the target tag;
determine target dotting information according to a query result;
cut a target video according to the target dotting information to obtain a target video segment; and
clip the target video segment to obtain a clipped video.
CN201510980311.3A 2015-12-23 2015-12-23 Video clipping method and device Active CN105657537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510980311.3A CN105657537B (en) 2015-12-23 2015-12-23 Video clipping method and device

Publications (2)

Publication Number Publication Date
CN105657537A true CN105657537A (en) 2016-06-08
CN105657537B CN105657537B (en) 2018-06-19

Family

ID=56476752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510980311.3A Active CN105657537B (en) 2015-12-23 2015-12-23 Video clipping method and device

Country Status (1)

Country Link
CN (1) CN105657537B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080155590A1 (en) * 2006-12-22 2008-06-26 Nortel Networks Limited Dynamic advertising control
CA2731706A1 (en) * 2010-02-26 2011-08-26 Comcast Cable Communications, Llc Program segmentation of linear transmission
US8930849B2 (en) * 2010-03-31 2015-01-06 Verizon Patent And Licensing Inc. Enhanced media content tagging systems and methods
WO2014001607A1 (en) * 2012-06-29 2014-01-03 Nokia Corporation Video remixing system
CN104202658A (en) * 2014-08-29 2014-12-10 北京奇虎科技有限公司 Method and system for playing video in groups
CN104284241A (en) * 2014-09-22 2015-01-14 北京奇艺世纪科技有限公司 Video editing method and device
CN104581380A (en) * 2014-12-30 2015-04-29 联想(北京)有限公司 Information processing method and mobile terminal

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106131627B (en) * 2016-07-07 2019-03-26 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, apparatus and system
CN106131627A (en) * 2016-07-07 2016-11-16 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, Apparatus and system
CN106375860B (en) * 2016-09-30 2020-03-03 腾讯科技(深圳)有限公司 Video playing method, device, terminal and server
CN106375860A (en) * 2016-09-30 2017-02-01 腾讯科技(深圳)有限公司 Video playing method and device, and terminal and server
CN106488324A (en) * 2016-10-10 2017-03-08 广东小天才科技有限公司 A kind of video clipping method and system
CN106803992A (en) * 2017-02-14 2017-06-06 北京时间股份有限公司 Video clipping method and device
CN106803992B (en) * 2017-02-14 2020-05-22 北京时间股份有限公司 Video editing method and device
CN109429093A (en) * 2017-08-31 2019-03-05 中兴通讯股份有限公司 A kind of method and terminal of video clipping
WO2019042341A1 (en) * 2017-09-04 2019-03-07 优酷网络技术(北京)有限公司 Video editing method and device
CN107888988A (en) * 2017-11-17 2018-04-06 广东小天才科技有限公司 A kind of video clipping method and electronic equipment
CN109936763B (en) * 2017-12-15 2022-07-01 腾讯科技(深圳)有限公司 Video processing and publishing method
CN109936763A (en) * 2017-12-15 2019-06-25 腾讯科技(深圳)有限公司 The processing of video and dissemination method
CN108121959A (en) * 2017-12-20 2018-06-05 浙江网仓科技有限公司 Visualize method for tracing and system
CN109963071A (en) * 2017-12-26 2019-07-02 深圳市优必选科技有限公司 A kind of method, system and the terminal device of automatic editing image
CN108093315A (en) * 2017-12-28 2018-05-29 优酷网络技术(北京)有限公司 Video generation method and device
CN108093315B (en) * 2017-12-28 2021-01-29 优酷网络技术(北京)有限公司 Video generation method and device
CN108540854A (en) * 2018-03-29 2018-09-14 努比亚技术有限公司 Live video clipping method, terminal and computer readable storage medium
CN110519655A (en) * 2018-05-21 2019-11-29 优酷网络技术(北京)有限公司 Video clipping method and device
CN110602560A (en) * 2018-06-12 2019-12-20 优酷网络技术(北京)有限公司 Video processing method and device
CN108769733A (en) * 2018-06-22 2018-11-06 三星电子(中国)研发中心 Video clipping method and video clipping device
CN108900905A (en) * 2018-08-08 2018-11-27 北京未来媒体科技股份有限公司 A kind of video clipping method and device
CN109040834A (en) * 2018-08-14 2018-12-18 阿基米德(上海)传媒有限公司 A kind of short audio computer-aided production method and system
CN110035330A (en) * 2019-04-16 2019-07-19 威比网络科技(上海)有限公司 Video generation method, system, equipment and storage medium based on online education
CN111836100A (en) * 2019-04-16 2020-10-27 阿里巴巴集团控股有限公司 Method, apparatus, device and storage medium for creating clip track data
CN110035330B (en) * 2019-04-16 2021-11-23 上海平安智慧教育科技有限公司 Video generation method, system, device and storage medium based on online education
CN110121103A (en) * 2019-05-06 2019-08-13 郭凌含 The automatic editing synthetic method of video and device
CN110401878A (en) * 2019-07-08 2019-11-01 天脉聚源(杭州)传媒科技有限公司 A kind of video clipping method, system and storage medium
CN110703976A (en) * 2019-08-28 2020-01-17 咪咕文化科技有限公司 Clipping method, electronic device, and computer-readable storage medium
CN110703976B (en) * 2019-08-28 2021-04-13 咪咕文化科技有限公司 Clipping method, electronic device, and computer-readable storage medium
CN113132752A (en) * 2019-12-30 2021-07-16 阿里巴巴集团控股有限公司 Video processing method and device
CN111447505A (en) * 2020-03-09 2020-07-24 咪咕文化科技有限公司 Video clipping method, network device, and computer-readable storage medium
CN111506771A (en) * 2020-04-22 2020-08-07 上海极链网络科技有限公司 Video retrieval method, device, equipment and storage medium
CN111506771B (en) * 2020-04-22 2021-04-02 上海极链网络科技有限公司 Video retrieval method, device, equipment and storage medium
CN111510787A (en) * 2020-04-28 2020-08-07 Oppo广东移动通信有限公司 Multimedia editing method, device, terminal and storage medium
CN112423113A (en) * 2020-11-20 2021-02-26 广州欢网科技有限责任公司 Television program marking method and device, and electronic terminal

Also Published As

Publication number Publication date
CN105657537B (en) 2018-06-19

Similar Documents

Publication Publication Date Title
CN105657537A (en) Video editing method and device
CN104684048B (en) WIFI list display method and apparatus
CN105487863A (en) Interface setting method and device based on scene
CN105120191A (en) Video recording method and device
CN105578275A (en) Video display method and apparatus
CN104933170A (en) Information exhibition method and device
CN105488112A (en) Information pushing method and device
CN106331761A (en) Live broadcast list display method and apparatus
CN104834665A (en) Target picture acquiring method and device
CN105828201A (en) Video processing method and device
CN106202194A (en) Screenshot picture storage method and device
CN105808050B (en) Information search method and device
CN105549944B (en) Equipment display method and device
CN105407433A (en) Method and device for controlling sound output equipment
CN106020634A (en) Screen capture method and device
CN105426515A (en) Video classification method and apparatus
CN104268129A (en) Message reply method and message reply device
CN105451037A (en) Equipment working method and apparatus
CN104166604A (en) Video backup method and device
CN104331503A (en) Information push method and device
CN105301183A (en) Air quality detecting method and device
CN104539812A (en) Recommendation information acquisition method, terminal and server
CN104991910A (en) Album creation method and apparatus
CN104933071A (en) Information retrieval method and corresponding device
CN105549300A (en) Automatic focusing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant