CN111447505A - Video clipping method, network device, and computer-readable storage medium - Google Patents
Info
- Publication number
- CN111447505A CN111447505A CN202010156612.5A CN202010156612A CN111447505A CN 111447505 A CN111447505 A CN 111447505A CN 202010156612 A CN202010156612 A CN 202010156612A CN 111447505 A CN111447505 A CN 111447505A
- Authority
- CN
- China
- Prior art keywords
- time
- video
- highlight
- trigger
- segment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4334—Recording operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47202—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8352—Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Computer Security & Cryptography (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The embodiments of the invention relate to the field of multimedia technology and disclose a video clipping method, a network device, and a computer-readable storage medium. The video clipping method includes: acquiring the trigger time at which a user performs a special operation on a video to be clipped, where the special operation includes at least one of liking, favoriting, forwarding, sharing, and commenting; determining the start time and end time of a highlight segment in the video to be clipped according to the trigger time; and clipping the audio/video segment corresponding to the highlight segment out of the video to be clipped according to the start time and end time. The video clipping method, network device, and computer-readable storage medium provided by the invention can accurately capture the user's preferences while watching a video and automatically clip out the segments the user considers highlights, making it convenient for the user to review the highlights of a video.
Description
Technical Field
The present invention relates to the field of multimedia technology, and in particular to a video clipping method, a network device, and a computer-readable storage medium.
Background
With the development of video playback technology, more and more terminal devices integrate a video playback function; mobile phones, computers, and the like can all play videos and, in particular, provide online video services. At present, a user typically watches videos online by retrieving them from a video website with keywords and then clicking a result to watch. In the prior art, the user's preference for a live broadcast is inferred from factors such as viewing duration and viewing behavior while the user watches it.

The inventors found at least the following problems in the prior art: it is difficult to accurately judge the user's preference for a live broadcast from viewing duration and viewing behavior alone, and after the broadcast the user cannot quickly find and review the segments he or she considers highlights.
Disclosure of Invention
An object of embodiments of the present invention is to provide a video clipping method, a network device, and a computer-readable storage medium that can accurately capture the user's preferences while watching a video and automatically clip out the segments the user considers highlights, making it convenient for the user to review the highlights of a video.
To solve the above technical problem, an embodiment of the present invention provides a video clipping method, including:
acquiring the trigger time at which a user performs a special operation on a video to be clipped, where the special operation includes at least one of liking, favoriting, forwarding, sharing, and commenting; determining the start time and end time of a highlight segment in the video to be clipped according to the trigger time; and clipping the audio/video segment corresponding to the highlight segment out of the video to be clipped according to the start time and end time.
An embodiment of the present invention further provides a network device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video clipping method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video clipping method described above.
Compared with the prior art, the embodiments of the invention obtain the trigger time at which the user performs a special operation on the video to be clipped (for example, the playback interface provides a like button; the user clicks it upon seeing highlight content, and the moment of the click is the trigger time of the special operation), so whether the user performs the special operation reveals the user's preference for the video accurately. The start time and end time of the highlight segment are then determined from the trigger time, so the segment the user is interested in is identified accurately, and the corresponding audio/video segment is clipped out of the video automatically. This avoids the otherwise cumbersome manual process a user would need to extract a highlight from a video, and makes it convenient for the user to review the highlights of a video.
In addition, determining the start time and end time of the highlight segment in the video to be clipped according to the trigger time specifically includes: determining the highlight segment by taking the trigger time as its end time and the moment N seconds before the trigger time as its start time, where N is a natural number greater than 0.
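As a minimal sketch of this rule (function and parameter names are illustrative, not from the patent; times are plain seconds):

```python
def highlight_bounds(trigger_time_s, window_s):
    """The trigger time becomes the end of the highlight segment; the
    moment window_s (i.e. N) seconds earlier becomes its start, clamped
    at the beginning of the video."""
    start = max(0, trigger_time_s - window_s)
    return start, trigger_time_s
```

For a like at 320 s with N = 15, this yields the segment (305, 320).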
In addition, before taking the moment N seconds before the trigger time as the start time of the highlight segment, the method further includes: judging whether the video segment from N seconds before the trigger time to the trigger time contains a key time point, a key time point being a moment at which new information appears in the video segment; if no key time point is contained, performing the step of taking the moment N seconds before the trigger time as the start time; if a key time point is contained, taking the earliest key time point in the video segment as the start time. In this way, every frame of the highlight segment is one the user actually wants, long stretches of unnecessary footage are kept out of the segment, and the user's viewing experience is improved.
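The key-time-point variant above can be sketched as follows (hypothetical names; key_points is assumed to be a list of timestamps in seconds):

```python
def highlight_start(trigger_s, window_s, key_points):
    """Start of the highlight segment: the earliest key time point inside
    [trigger_s - window_s, trigger_s] if any exists, otherwise simply
    trigger_s - window_s."""
    window_start = max(0, trigger_s - window_s)
    in_window = [t for t in key_points if window_start <= t <= trigger_s]
    return min(in_window) if in_window else window_start
```

With a like at 320 s, a 15-second window, and a key point at 310 s this returns 310; with no key points in the window it returns 305.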
In addition, the key time points include one of the following types or any combination thereof: the moment a new person appears in the video segment, the moment of a scene change, the moment of a shot cut, the moment a new animal or object appears, and the start of each line of dialogue.
In addition, after the start time and end time of the highlight segment in the video to be clipped are determined according to the trigger time, the method further includes: acquiring multiple identification tags of the highlight segment within the determined time period (the period from the start time to the end time) together with the trigger time of each tag; providing the tags to the user and acquiring the target tag the user selects from them; and updating the start time of the highlight segment according to the trigger time of the target tag. In this way, the next time the user likes something in a video, the highlight segment can be determined from the updated start time and the like time, so that the clipped audio/video segment better matches the user's viewing habits and further improves the viewing experience.
In addition, when there are multiple target tags, updating the start time of the highlight segment according to the trigger time point corresponding to the target tag specifically includes: selecting, from the multiple target tags, the one with the earliest trigger time point as the update tag; and updating the start time of the highlight segment according to the trigger time point corresponding to the update tag.
In addition, the start time of the highlight segment is updated according to the formula T = T - (T - T′) × k, where T is the difference between the start time of the highlight segment and the trigger time, T′ is the difference between the trigger time point corresponding to the target tag and the trigger time, and k is a constant greater than 0 and less than 1.
In addition, after the multiple tags in the highlight segment and their corresponding trigger time points are obtained, the method further includes: judging whether any of the tags are identical; if identical tags exist, keeping only the one with the earliest trigger time point among them and removing the rest. In this way, the user's viewing experience is further improved.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a flow chart of a video clipping method provided in accordance with a first embodiment of the present invention;
FIG. 2 is a flow chart of a video clipping method provided in accordance with a second embodiment of the present invention;
FIG. 3 is a flow chart of a video clipping method provided in accordance with a third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a network device provided according to a fourth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate, however, that numerous technical details are set forth in these embodiments only to give the reader a better understanding of the invention; the claimed technical solution can be practiced without these details and with various changes and modifications based on the embodiments below.
A first embodiment of the present invention relates to a video clipping method; the specific flow, shown in FIG. 1, includes:

S101: acquire the trigger time at which the user performs a special operation on the video to be clipped.
In step S101, the special operation includes at least one of liking, favoriting, forwarding, sharing, and commenting. Taking "liking" as an example, the backend server places a like button on the playback interface of the video to be clipped, and the user likes any content in the video by triggering that button. For each like, the backend server can store a corresponding like record, which records the time at which the like occurred; that is, by looking up the like record, the backend server obtains the trigger time of the like in the video to be clipped. It should be noted that the backend server may also skip generating a like record and directly use the trigger time of the like operation as the start or end time of the highlight segment.
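A like record of the kind described above might be modeled like this (a hypothetical minimal schema, not the patent's actual data model):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LikeRecord:
    user_id: str
    video_id: str
    trigger_time_s: float  # playback position when the like button was pressed

def trigger_times(records, user_id, video_id):
    """Look up the trigger times of one user's likes on one video."""
    return [r.trigger_time_s for r in records
            if r.user_id == user_id and r.video_id == video_id]
```

The backend server would then feed each returned trigger time into the segment-determination step.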
It should be noted that the video to be clipped in this embodiment may be a live video or a recorded video (such as a movie or TV series downloaded by the user); this embodiment does not limit the type of the video to be clipped.

S102: determine the start time and end time of the highlight segment in the video to be clipped according to the trigger time.
In step S102, the trigger time of the like record corresponds to the playback progress of the video to be clipped. For example, user A watches the video and, at the fifth minute, triggers the like button on it, generating a like record. The record includes the playback progress of the audio/video file when the button was triggered, namely that the file had played to the fifth minute; that playback progress (the 5th minute) is the trigger time of the like record.
It should be noted that, in this embodiment, determining the highlight segment in the video to be clipped according to the trigger time may mean: taking the trigger time as the end time of the highlight segment and the moment N seconds before the trigger time as its start time, where N is a natural number greater than 0. That is, a time interval can be preset, the trigger time set as the end time of the highlight segment, and the moment N seconds earlier set as its start time. This is because users are generally accustomed to pressing like after a highlight has finished, so the trigger time of a like record usually marks the end of a highlight.

It is understood that, in this embodiment, the trigger time may instead be taken as the start time of the highlight segment and the moment N seconds after the trigger time as its end time, achieving the same technical effect.

S103: clip the audio/video segment corresponding to the highlight segment out of the video to be clipped according to the start time and end time.
In step S103, after the audio/video segment is clipped out of the video, the backend server may additionally perform the following operations to improve the user's review experience: 1. intercept, from the video to be clipped, segments related to at least one actor, according to the roles appearing in the audio/video segment; 2. intercept, from the video to be clipped, further audio/video segments corresponding to at least one complete passage of dialogue, according to the completeness of the dialogue in the audio/video segment.
It should be noted that, after the audio/video segment is obtained, it can be provided to the user watching the video to be clipped and can also be pushed to other users; the clipped segments can be stored locally or on the backend server, and the user can look up the audio/video segments clipped from earlier likes.
Compared with the prior art, this embodiment obtains the trigger time at which the user performs a special operation on the video to be clipped (for example, the playback interface provides a like button; the user clicks it upon seeing highlight content, and the moment of the click is the trigger time of the special operation), so the special operation reveals the user's preference for the video accurately. The start time and end time of the highlight segment are then determined from the trigger time, the segment the user is interested in is identified accurately, and the corresponding audio/video segment is clipped out of the video automatically. This avoids the otherwise cumbersome manual process of extracting a highlight from a video and makes it convenient for the user to review the highlights of a video.
A second embodiment of the present invention relates to a video clipping method and is a further improvement on the first embodiment. The specific improvement is that the second embodiment also judges whether the video segment from M seconds before the trigger time to the trigger time contains a key time point and, when it does, takes the earliest key time point as the start time of the highlight segment. In this way, every frame of the highlight segment is one the user actually wants, long stretches of unnecessary footage are kept out of the segment, and the user's viewing experience is improved.
As shown in FIG. 2, the specific flow of this embodiment includes:

S201: acquire the trigger time at which the user performs a special operation on the video to be clipped.

S202: take the trigger time as the end time of the highlight segment and judge whether the video segment from M seconds before the trigger time to the trigger time contains a key time point; if so, execute step S203; if not, execute step S204.
In step S202, the value of M in "M seconds before the trigger time" may be the same as or different from the value of N in the first embodiment; this embodiment does not limit the size of M. A key time point is a moment at which new information appears in the video segment and includes one of the following types or any combination thereof: the moment a new person appears, the moment of a scene change, the moment of a shot cut, the moment a new animal or object appears, and the start of each line of dialogue. More specifically, during playback of the video to be clipped, the backend server may recognize every frame in real time; the recognized content includes the following, and each recognition result is stored as a tag with a timestamp:

(1) Person: identify the people appearing in the frame through face recognition, and record the moment a new person appears (compared with the previous frame) as a key time point. (2) Scene: identify the scene shown in the frame, such as an office, a basketball court, or a coffee shop, through scene recognition, and record the moment the scene changes as a key time point. (3) Shot: detect shot cuts through shot-boundary detection, and record each cut as a key time point. (4) Animal/object: identify the animals and objects appearing in the frame through object detection, and record the moment a new animal or object appears (compared with the previous frame) as a key time point. (5) Dialogue: obtain the dialogue content and the speaker through speech recognition, and record the start of each line of dialogue as a key time point.
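The recognition results above, each stored as a tag with a timestamp, might be modeled as follows (a sketch; the tag kinds mirror the five categories listed, everything else is illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tag:
    kind: str      # "person", "scene", "shot", "object" or "line"
    value: str     # e.g. a recognized name, scene label, or line of dialogue
    time_s: float  # timestamp of the frame where the tag was produced

def key_time_points(tags):
    """A tag whose (kind, value) pair has not occurred in any earlier
    frame marks new information, i.e. a key time point."""
    seen, points = set(), []
    for tag in sorted(tags, key=lambda t: t.time_s):
        if (tag.kind, tag.value) not in seen:
            seen.add((tag.kind, tag.value))
            points.append(tag.time_s)
    return points
```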
S203: determine the highlight segment by taking the earliest key time point in the video segment as its start time.
Regarding step S203: take a video segment covering the 15 seconds before the trigger time of a like record as an example. If seconds 0 to 5 show a single unchanging scene with no new person and no dialogue, that part can be regarded as an unnecessary segment, one the user would most likely not treat as part of the highlight. Excluding it prevents the highlight segment from containing long unnecessary stretches and improves the user's experience when reviewing it.

S204: determine the highlight segment by taking the moment M seconds before the trigger time as its start time.

S205: clip the audio/video segment corresponding to the highlight segment out of the video to be clipped.
Steps S201 and S204 to S205 in this embodiment are similar to steps S101 to S103 in the first embodiment and are not repeated here.
For ease of understanding, an application scenario of the video clipping method in this embodiment is described below, taking M = 15 as an example:
Suppose that while watching a video the user performs a like operation at playback time 5:20. The backend server stores the trigger time of the like record (5:20) and takes 5:20 as the end time of the highlight segment. It then judges whether a key time point exists in the preceding 15 seconds (5:05 to 5:20). If not, 5:05 is taken as the start time and the segment from 5:05 to 5:20 is saved as the highlight; if the earliest key time point is 5:10, then 5:10 is taken as the start time and the segment from 5:10 to 5:20 is saved as the highlight.
Compared with the prior art, this embodiment obtains the trigger time at which the user performs a special operation on the video to be clipped (for example, the playback interface provides a like button; the user clicks it upon seeing highlight content, and the moment of the click is the trigger time of the special operation), so the special operation reveals the user's preference for the video accurately. The start time and end time of the highlight segment are then determined from the trigger time, the segment the user is interested in is identified accurately, and the corresponding audio/video segment is clipped out of the video automatically. This avoids the otherwise cumbersome manual process of extracting a highlight from a video and makes it convenient for the user to review the highlights of a video.
A third embodiment of the present invention relates to a video clipping method and is a further improvement on the second embodiment. The specific improvement is that, after the multiple tags in the highlight segment and their corresponding trigger time points are acquired, the tags are deduplicated and the deduplicated tags are offered to the user for selection; after the user selects, the start time of the highlight segment is updated according to the earliest of the selected tags. In this way, the next time the user likes something in a video, the highlight segment can be determined from the updated start time and the like time, so that the clipped audio/video segment better matches the user's viewing habits and further improves the viewing experience.
As shown in FIG. 3, the specific flow of this embodiment includes:

S301: acquire the trigger time at which the user performs a special operation on the video to be clipped.

S302: take the trigger time as the end time of the highlight segment and judge whether the video segment from N seconds before the trigger time to the trigger time contains a key time point; if so, execute step S303; if not, execute step S304.

S303: determine the highlight segment by taking the earliest key time point in the video segment as its start time.

S304: determine the highlight segment by taking the moment N seconds before the trigger time as its start time.

S305: clip the audio/video segment corresponding to the highlight segment out of the video to be clipped and provide it to the user.

S306: perform tag identification on the highlight segment to obtain its multiple tags and the trigger time points corresponding to them.
Regarding step S306, specifically, in this embodiment, performing label identification on the highlight in the highlight may be understood as performing label identification on each frame of picture in the highlight, that is, identifying a person, a scene, an animal, or another object appearing in each frame of picture, where each person, scene, animal, or other object in each frame of picture corresponds to one label.
S307: Judge whether identical tags exist among the plurality of tags; if so, retain the tag with the earliest trigger time point among the identical tags and remove the other identical tags.
Regarding step S307: after the highlight segment is obtained, its tags need to be deduplicated. If, for example, a person A appears throughout the highlight segment, many tags for person A are generated, but only the tag of person A with the earliest trigger time point should finally be displayed to the user, which further improves the viewing experience.
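To make the deduplication of step S307 concrete, the following is a minimal sketch in Python. The representation of a tag as a (label, trigger-time-point) pair and the function name are assumptions for illustration, not part of the embodiment:

```python
def dedup_tags(tags):
    """Deduplicate recognized tags (step S307): for each distinct label,
    keep only the record with the earliest trigger time point."""
    earliest = {}
    for label, t in tags:  # t: trigger time point, in seconds
        if label not in earliest or t < earliest[label]:
            earliest[label] = t
    # Return (label, trigger time point) pairs, earliest first
    return sorted(earliest.items(), key=lambda item: item[1])
```

For example, repeated sightings of person A across many frames collapse to a single tag for person A at its earliest trigger time point.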
S308: Provide the plurality of deduplicated tags to the user, and determine the target tag selected by the user from the plurality of deduplicated tags.
Regarding step S308: when the deduplicated tags are provided to the user, they may be sorted by trigger time point from late to early and displayed in that order, which makes it easier for the user to select a target tag and prevents the user from spending a long time searching through disordered tags.
S309: Update the start time of the highlight segment according to the trigger time point corresponding to the target tag.
Regarding step S309: the user may also select multiple target tags. In that case, the background server selects the target tag with the earliest trigger time point among them as the update tag, and then updates the start time of the highlight segment according to the trigger time point corresponding to the update tag.
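Under the same assumed (label, trigger-time-point) representation, the selection of the update tag among multiple user-selected target tags can be sketched as:

```python
def pick_update_tag(selected_tags):
    """Among the target tags the user selected, pick the one with the
    earliest trigger time point as the update tag (step S309)."""
    return min(selected_tags, key=lambda tag: tag[1])
```

In the application scenario later in this embodiment, where the user selects person A (5 min 12 s) and scene C (5 min 14 s), person A would become the update tag.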
It should be noted that, in the present embodiment, the start time of the highlight segment is updated according to the following formula:
T = T − (T − T′) × k;
where T is the difference between the start time of the highlight segment and the trigger time, T′ is the difference between the trigger time point corresponding to the target tag and the trigger time, and k is a constant greater than 0 and less than 1. It should be understood that this embodiment does not particularly limit the value of the update coefficient k; a value of 0.2 is preferred.
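The update formula can be written as a small helper. This is a sketch; the function name and the use of seconds for all offsets are assumptions:

```python
def update_offset(T, T_prime, k=0.2):
    """Apply T = T - (T - T') * k.
    T: difference (s) between the highlight start time and the trigger time.
    T_prime: difference (s) between the update tag's trigger time point
             and the trigger time.
    k: update coefficient, 0 < k < 1; 0.2 is the value preferred here."""
    return T - (T - T_prime) * k
```

With T = 10 s and T′ = 8 s, this yields an updated offset of 9.6 s, matching the worked numbers later in this embodiment.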
For ease of understanding, an application scenario of the video clipping method in this embodiment is described below, taking N = 15 as an example:
While watching a video, the user performs a like operation when playback reaches 5 min 20 s. The background server stores the trigger time of this like record (i.e., 5 min 20 s) and takes 5 min 20 s as the end time of the highlight segment. The background server then judges whether a key time point exists in the video segment spanning the 15 seconds before the trigger time (i.e., from 5 min 5 s to 5 min 20 s). If not, 5 min 5 s is taken as the start time of the highlight segment, and the video segment from 5 min 5 s to 5 min 20 s is saved as the highlight segment. If a key time point exists and the earliest one is at 5 min 10 s, then 5 min 10 s is taken as the start time of the highlight segment, and the video segment from 5 min 10 s to 5 min 20 s is saved as the highlight segment.
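The window selection walked through above (steps S301–S305) can be sketched as follows. Times are in seconds from the start of the video, and the function name is an assumption:

```python
def highlight_window(trigger_time, key_points, n=15):
    """Determine the highlight segment: the end time is the trigger time;
    the start time is the earliest key time point inside the n-second
    look-back window, or trigger_time - n if there is none."""
    end = trigger_time
    window_start = trigger_time - n
    # Key time points inside [trigger_time - n, trigger_time]
    in_window = [t for t in key_points if window_start <= t <= end]
    start = min(in_window) if in_window else window_start
    return start, end
```

With the trigger at 5 min 20 s (320 s) and a key time point at 5 min 10 s (310 s), this yields the 310–320 s segment; with no key time point in the window, the 305–320 s segment.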
Take the video segment from 5 min 10 s to 5 min 20 s as the highlight segment. The background server performs tag identification on each frame of this segment to obtain a plurality of tags and deduplicates them. Suppose the deduplicated tags are person A, object B, and scene C, each corresponding to one trigger time point, and the background server sends person A, object B, and scene C to the user. Suppose the user selects person A and scene C, with trigger time points of 5 min 12 s and 5 min 14 s respectively; then 5 min 12 s is taken as the trigger time point of the update tag. The initial T is 5 min 20 s − 5 min 10 s = 10 s, and T′ is 5 min 20 s − 5 min 12 s = 8 s, so the updated T is 10 s − (10 s − 8 s) × 0.2 = 9.6 s. That is, the next time the user performs a like operation, the background server will save the video segment covering the 9.6 seconds before the trigger time of the like record as the highlight segment.
Compared with the prior art, the present embodiment obtains the trigger time at which the user performs a special operation on the video to be clipped. For example, when the user watches the video, a like button is provided on the playback interface; the user can click it while watching the highlight content, and the time of the click is the trigger time of the special operation. The user's preference regarding the watched video can thus be accurately learned from the special operation. The start time and end time of the highlight segment in the video to be clipped are determined according to the trigger time, so the segment the user is interested in can be accurately obtained, and the corresponding audio/video segment is clipped from the video automatically. This avoids the situation where highlight segments in a video cannot be clipped automatically and where reviewing the highlights of a video requires a relatively cumbersome manual process; instead, the highlight segments the user is interested in are clipped automatically, making it convenient for the user to review them.
A fourth embodiment of the present invention relates to a network device, as shown in fig. 4, including:
at least one processor 401; and
a memory 402 communicatively coupled to the at least one processor 401; wherein,
the memory 402 stores instructions executable by the at least one processor 401; the instructions are executed by the at least one processor 401 to enable the at least one processor 401 to perform the video clipping method described above.
The memory 402 and the processor 401 are coupled by a bus, which may include any number of interconnected buses and bridges linking various circuits of the processor 401 and the memory 402 together. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 401 may be transmitted over a wireless medium via an antenna, which may also receive data and transmit it to the processor 401.
The processor 401 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 402 may be used to store data used by the processor 401 in performing operations.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the method embodiments described above.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and detail may be made in practice without departing from the spirit and scope of the invention.
Claims (10)
1. A video clipping method, comprising:
acquiring a trigger time at which a user performs a special operation on a video to be clipped, wherein the special operation comprises at least one of liking, favoriting, forwarding, sharing, and commenting;
determining a start time and an end time of a highlight segment in the video to be clipped according to the trigger time;
clipping, according to the start time and the end time, an audio/video segment corresponding to the highlight segment from the video to be clipped.
2. The video clipping method according to claim 1, wherein the determining a start time and an end time of a highlight segment in the video to be clipped according to the trigger time specifically comprises:
determining the highlight segment by taking the trigger time as the end time and the time N seconds before the trigger time as the start time, wherein N is a natural number greater than 0.
3. The video clipping method according to claim 2, wherein before taking the time N seconds before the trigger time as the start time, the method further comprises:
judging whether the video to be clipped contains a key time point in the video segment from N seconds before the trigger time to the trigger time, wherein a key time point is a time point at which new information appears in the video segment;
when it is judged that no key time point is contained, executing the step of taking the time N seconds before the trigger time as the start time;
when it is judged that a key time point is contained, taking the earliest key time point in the video segment as the start time.
4. The video clipping method according to claim 3, wherein the key time points comprise one or any combination of the following types:
the time point at which a new person appears in the video to be clipped, the time point at which a scene change occurs, the time point at which a shot change occurs, the time point at which a new animal or object appears, and the start time point of each line of dialogue.
5. The video clipping method according to claim 3, further comprising, after determining the start time and the end time of the highlight segment in the video to be clipped according to the trigger time:
acquiring a plurality of identification tags of the highlight segment within a determined time period and a trigger time of each identification tag, wherein the determined time period is the time period from the start time to the end time;
providing the plurality of identification tags to the user, and acquiring a target tag selected by the user from the plurality of identification tags;
updating the start time of the highlight segment according to the trigger time of the target tag.
6. The video clipping method according to claim 5, wherein there are a plurality of the target tags, and the updating the start time of the highlight segment according to the trigger time of the target tag specifically comprises:
selecting, from the plurality of target tags, the target tag with the earliest trigger time as an update tag;
updating the start time of the highlight segment according to the trigger time of the update tag.
7. The video clipping method according to claim 5 or 6, wherein the start time of the highlight segment is updated according to the following formula:
T = T − (T − T′) × k;
wherein T is the difference between the start time and the trigger time, T′ is the difference between the generation time point corresponding to the target tag and the trigger time, and k is a constant greater than 0 and less than 1.
8. The video clipping method according to claim 5, further comprising, after acquiring the plurality of tags in the highlight segment and the plurality of generation time points corresponding to the tags:
judging whether identical tags exist among the plurality of tags;
when it is judged that identical tags exist, retaining the tag with the earliest generation time point among the identical tags, and removing the other tags except the tag with the earliest generation time point.
9. A network device, comprising: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video clipping method of any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the video clipping method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010156612.5A CN111447505B (en) | 2020-03-09 | 2020-03-09 | Video clipping method, network device, and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111447505A true CN111447505A (en) | 2020-07-24 |
CN111447505B CN111447505B (en) | 2022-05-31 |
Family
ID=71653153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010156612.5A Active CN111447505B (en) | 2020-03-09 | 2020-03-09 | Video clipping method, network device, and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111447505B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113965805A (en) * | 2021-10-22 | 2022-01-21 | 北京达佳互联信息技术有限公司 | Prediction model training method and device and target video editing method and device |
CN114143575A (en) * | 2021-12-31 | 2022-03-04 | 上海爱奇艺新媒体科技有限公司 | Video editing method and device, computing equipment and storage medium |
CN114339399A (en) * | 2021-12-27 | 2022-04-12 | 咪咕文化科技有限公司 | Multimedia file editing method and device and computing equipment |
CN114666637A (en) * | 2022-03-10 | 2022-06-24 | 阿里巴巴(中国)有限公司 | Video editing method, audio editing method and electronic equipment |
CN114827454A (en) * | 2022-03-15 | 2022-07-29 | 荣耀终端有限公司 | Video acquisition method and device |
CN115022659A (en) * | 2022-05-31 | 2022-09-06 | 广州虎牙科技有限公司 | Live video processing method and system and live device |
CN115190356A (en) * | 2022-06-10 | 2022-10-14 | 北京达佳互联信息技术有限公司 | Multimedia data processing method and device, electronic equipment and storage medium |
CN115914739A (en) * | 2022-11-10 | 2023-04-04 | 南京伟柏软件技术有限公司 | Video sharing method and device and electronic equipment |
CN116830195A (en) * | 2020-10-28 | 2023-09-29 | 唯众挚美影视技术公司 | Automated post-production editing of user-generated multimedia content |
US12014752B2 (en) | 2020-05-08 | 2024-06-18 | WeMovie Technologies | Fully automated post-production editing for movies, tv shows and multimedia contents |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050044561A1 (en) * | 2003-08-20 | 2005-02-24 | Gotuit Audio, Inc. | Methods and apparatus for identifying program segments by detecting duplicate signal patterns |
US20110176733A1 (en) * | 2010-01-18 | 2011-07-21 | Pixart Imaging Inc. | Image recognition method |
US20120114167A1 (en) * | 2005-11-07 | 2012-05-10 | Nanyang Technological University | Repeat clip identification in video data |
CN104410920A (en) * | 2014-12-31 | 2015-03-11 | 合一网络技术(北京)有限公司 | Video segment playback amount-based method for labeling highlights |
CN104636162A (en) * | 2013-11-11 | 2015-05-20 | 宏达国际电子股份有限公司 | Method for performing multimedia management utilizing tags, and associated apparatus and associated computer program product |
US20150185965A1 (en) * | 2013-12-31 | 2015-07-02 | Disney Enterprises, Inc. | Systems and methods for video clip creation, curation, and interaction |
CN105657537A (en) * | 2015-12-23 | 2016-06-08 | 小米科技有限责任公司 | Video editing method and device |
CN105939494A (en) * | 2016-05-25 | 2016-09-14 | 乐视控股(北京)有限公司 | Audio/video segment providing method and device |
CN106210902A (en) * | 2016-07-06 | 2016-12-07 | 华东师范大学 | A kind of cameo shot clipping method based on barrage comment data |
CN106658231A (en) * | 2015-10-29 | 2017-05-10 | 亦非云信息技术(上海)有限公司 | Design method for sharing video clip in real time |
US20180205974A1 (en) * | 2017-01-13 | 2018-07-19 | Panasonic Intellectual Property Management Co., Ltd. | Video transmission system and video transmission method |
CN108540854A (en) * | 2018-03-29 | 2018-09-14 | 努比亚技术有限公司 | Live video clipping method, terminal and computer readable storage medium |
WO2018171325A1 (en) * | 2017-03-21 | 2018-09-27 | 华为技术有限公司 | Video hotspot fragment extraction method, user equipment, and server |
US20180343502A1 (en) * | 2017-05-27 | 2018-11-29 | Nanning Fugui Precision Industrial Co., Ltd. | Multimedia control method and server |
CN109151532A (en) * | 2018-08-13 | 2019-01-04 | 冼钇冰 | A kind of video intercepting method, apparatus, terminal and computer readable storage medium |
CN109672922A (en) * | 2017-10-17 | 2019-04-23 | 腾讯科技(深圳)有限公司 | A kind of game video clipping method and device |
CN109819325A (en) * | 2019-01-11 | 2019-05-28 | 平安科技(深圳)有限公司 | Hot video marks processing method, device, computer equipment and storage medium |
CN109889856A (en) * | 2019-01-21 | 2019-06-14 | 南京微特喜网络科技有限公司 | A kind of live streaming editing system based on artificial intelligence |
CN109905780A (en) * | 2019-03-30 | 2019-06-18 | 山东云缦智能科技有限公司 | A kind of video clip sharing method and Intelligent set top box |
CN110234037A (en) * | 2019-05-16 | 2019-09-13 | 北京百度网讯科技有限公司 | Generation method and device, the computer equipment and readable medium of video clip |
CN110519655A (en) * | 2018-05-21 | 2019-11-29 | 优酷网络技术(北京)有限公司 | Video clipping method and device |
CN110703976A (en) * | 2019-08-28 | 2020-01-17 | 咪咕文化科技有限公司 | Clipping method, electronic device, and computer-readable storage medium |
2020-03-09: CN202010156612.5A — patent CN111447505B granted, status Active
Non-Patent Citations (1)
Title |
---|
QU Xin: "Research on a Fine-Grained Video Tag Mechanism and Its Applications", China Master's Theses Full-text Database (Information Science and Technology) *
Also Published As
Publication number | Publication date |
---|---|
CN111447505B (en) | 2022-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111447505B (en) | Video clipping method, network device, and computer-readable storage medium | |
CN110209843B (en) | Multimedia resource playing method, device, equipment and storage medium | |
US20180253173A1 (en) | Personalized content from indexed archives | |
US20180039627A1 (en) | Creating a content index using data on user actions | |
CN106331778B (en) | Video recommendation method and device | |
US11190821B2 (en) | Methods and apparatus for alerting users to media events of interest using social media analysis | |
CN104798346B (en) | For supplementing the method and computing system of electronic information relevant to broadcast medium | |
US20130179172A1 (en) | Image reproducing device, image reproducing method | |
CN107846629B (en) | Method, device and server for recommending videos to users | |
US20100088726A1 (en) | Automatic one-click bookmarks and bookmark headings for user-generated videos | |
US8245253B2 (en) | Displaying music information associated with a television program | |
US20100192188A1 (en) | Systems and methods for linking media content | |
US20210144418A1 (en) | Providing video recommendation | |
US20150156227A1 (en) | Synchronize Tape Delay and Social Networking Experience | |
US11930058B2 (en) | Skipping the opening sequence of streaming content | |
CN112752121B (en) | Video cover generation method and device | |
CN111327968A (en) | Short video generation method, short video generation platform, electronic equipment and storage medium | |
US9824722B2 (en) | Method to mark and exploit at least one sequence record of a video presentation | |
CN110545475B (en) | Video playing method and device and electronic equipment | |
CN111263186A (en) | Video generation, playing, searching and processing method, device and storage medium | |
US20170272793A1 (en) | Media content recommendation method and device | |
US20140355957A1 (en) | Marking Media Files | |
CN113449144A (en) | Video processing method and device and electronic equipment | |
JP2009200918A (en) | Program recording and playback apparatus | |
JP2013150221A (en) | Information processor, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||