CN109688463B - Clip video generation method and device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN109688463B
Authority
CN
China
Prior art keywords
video
video segment
segment
segments
clip
Prior art date
Legal status
Active
Application number
CN201811612134.3A
Other languages
Chinese (zh)
Other versions
CN109688463A (en)
Inventor
姜宇宁 (Jiang Yuning)
王猛 (Wang Meng)
解佳琦 (Xie Jiaqi)
徐力 (Xu Li)
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201811612134.3A
Publication of CN109688463A
Application granted
Publication of CN109688463B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The disclosure provides a clip video generation method and apparatus, a terminal device, and a storage medium. The method includes: acquiring, from a pre-established video library, at least one video matched with a video theme; clipping the at least one video to generate a video segment set; selecting at least two video segments from the video segment set to generate a video segment sequence, and determining, according to the feature information of any two adjacent target video segments in the sequence, a transition special effect matched with those two segments; and splicing the video segment sequence into a clip video according to the position of each target video segment in the sequence, using the video special effect matched with each target video segment and the transition special effect matched with each pair of adjacent target video segments. Embodiments of the disclosure improve the efficiency of generating clip videos as well as the flexibility and diversity of the generated videos.

Description

Clip video generation method and device, terminal equipment and storage medium
Technical Field
The present disclosure relates to data technologies, and in particular, to a method and an apparatus for generating a clip video, a terminal device, and a storage medium.
Background
With the development of communication technology, terminal devices such as Android phones, Apple phones, and computers have become an indispensable part of people's work and life. To meet users' demand for information, applications developed for these devices usually display a large number of pages.
At present, existing page video display methods generally capture video segments from multiple videos and splice them for playing. This step is completed manually: independent and appropriate video segments can be captured only by repeatedly comparing the content of preceding and following video frames, and the splicing order and special effects of the clip video are likewise designed manually. Because of the limitations of manual design, the expression form of the clip video is neither flexible nor diverse, and the time needed to produce the clip video and its special effects is relatively long, so the pace of information updating cannot be kept up with.
Disclosure of Invention
The embodiments of the disclosure provide a clip video generation method and apparatus, a terminal device, and a storage medium, which improve the efficiency of generating clip videos as well as the flexibility and diversity of the generated videos.
In a first aspect, an embodiment of the present disclosure provides a clip video generation method, where the method includes:
acquiring at least one video matched with a video theme from a pre-established video library;
editing the at least one video to generate a video segment set;
selecting at least two video segments from the video segment set, generating a video segment sequence, and determining a transition special effect matched with each two adjacent target video segments according to the characteristic information of any two adjacent target video segments in the video segment sequence;
and according to the position sequence of each target video segment in the video segment sequence, splicing the video segment sequence to generate a clip video by using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments.
Further, the clipping the at least one video to generate a video segment set includes:
splitting the at least one video into video frames respectively, and acquiring the characteristic information of each video frame respectively;
performing clustering analysis on each video frame according to the characteristic information of each video frame to generate at least two video segments;
and screening at least two video segments meeting quality and content conditions to generate a video segment set according to the video quality and the video content in the at least two generated video segments.
Further, the screening of at least two video segments that satisfy quality and content conditions includes:
and when the video quality of a video segment exceeds a set threshold, the video segment does not contain transition content, and the video segment contains video content matched with the video theme, determining that the video segment satisfies the quality and content conditions.
Further, the selecting at least two video segments from the video segment collection to generate a video segment sequence includes:
and selecting at least two video segments matched with the video theme and the video clipping duration according to the duration of each video segment in the video segment set and the characteristic information of each video segment, and generating a video segment sequence.
Further, before using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments, the method further comprises the following steps:
and determining the video special effect matched with each target video segment according to the video theme and the characteristic information of each target video segment in the video segment sequence.
Further, after splicing the sequence of video segments to generate a clip video, the method further includes:
determining style information of the clip video according to the video theme and the feature information of each target video segment in the video segment sequence;
and selecting music matched with the style information as video music according to the music characteristics of each piece of music in the music library, and synthesizing the video music with the clip video to generate a target clip video.
Further, after generating the target clip video, the method further includes:
inputting the target clip video into a display page evaluation model to obtain a display prediction evaluation result corresponding to the target clip video;
judging whether the display prediction evaluation result meets a threshold condition;
if so, displaying the target clip video;
otherwise, returning to the step of acquiring at least one video matched with the video subject of the clip video from the pre-established video library until the target clip video meeting the threshold condition is acquired for display.
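The evaluate-and-retry flow in the steps above can be sketched as follows. This is a minimal illustration only: the function and parameter names are hypothetical, and the display page evaluation model is treated as a black-box scoring callable.

```python
def generate_until_acceptable(generate_clip, evaluate, threshold, max_tries=10):
    """Regenerate target clip videos until the display prediction result
    meets the threshold condition (names are illustrative; the evaluation
    model is assumed to return a comparable score)."""
    for _ in range(max_tries):
        clip = generate_clip()           # build a candidate target clip video
        if evaluate(clip) >= threshold:  # display prediction evaluation result
            return clip                  # meets the threshold condition: display it
    return None                          # give up after max_tries regenerations
```

A `max_tries` cap is added so the retry loop cannot run forever; the patent text itself only says to repeat until the condition is met.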
In a second aspect, an embodiment of the present disclosure further provides a clip video generating apparatus, including:
the video acquisition module is used for acquiring at least one video matched with a video theme from a pre-established video library;
the video segment set generation module is used for clipping the at least one video to generate a video segment set;
the video segment sequence generation module is used for selecting at least two video segments from the video segment set to generate a video segment sequence and determining a transition special effect matched with each two adjacent target video segments according to the characteristic information of any two adjacent target video segments in the video segment sequence;
and the clip video generation module is used for splicing the video segment sequence to generate a clip video by using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments according to the position sequence of each target video segment in the video segment sequence.
Further, the video segment collection generation module includes:
the video frame splitting module is used for splitting the at least one video into video frames respectively and acquiring the characteristic information of each video frame respectively;
the video segment generation module is used for performing clustering analysis on each video frame according to the characteristic information of each video frame to generate at least two video segments;
and the video segment set determining module is used for screening at least two video segments meeting the quality and content conditions to generate a video segment set according to the video quality and the video content in the at least two generated video segments.
Further, the video segment set determining module includes:
and the video segment quality and content judgment module is used for determining that a video segment satisfies the quality and content conditions when the video quality of the video segment exceeds a set threshold, the video segment does not contain transition content, and the video segment contains video content matched with the video theme.
Further, the video segment sequence generating module includes:
and the video segment screening module is used for selecting at least two video segments matched with the video theme and the video editing duration according to the duration of each video segment in the video segment set and the characteristic information of each video segment to generate a video segment sequence.
Further, the clip video generation apparatus further includes:
and the video special effect acquisition module is used for determining the video special effect matched with each target video segment according to the video theme and the characteristic information of each target video segment in the video segment sequence.
Further, the clip video generation apparatus further includes:
the style information determining module is used for determining style information of the clip video according to the video theme and the characteristic information of each target video segment in the video segment sequence;
and the target clip video generation module is used for selecting music matched with the style information as video music according to the music characteristics of each piece of music in the music library, and synthesizing the video music with the clip video to generate a target clip video.
Further, the clip video generation apparatus further includes:
the display prediction evaluation result acquisition module is used for inputting the target clip video into a display page evaluation model to obtain a display prediction evaluation result corresponding to the target clip video;
the display judgment module is used for judging whether the display prediction evaluation result meets a threshold condition; if so, displaying the target clip video; otherwise, returning to the step of acquiring at least one video matched with the video subject of the clip video from the pre-established video library until the target clip video meeting the threshold condition is acquired for display.
In a third aspect, an embodiment of the present disclosure further provides a terminal device, where the terminal device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the clip video generation method according to the embodiments of the present disclosure.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the clip video generation method according to the disclosed embodiments.
In the embodiments of the disclosure, videos matched with a video theme are selected and clipped into video segments to generate a video segment set; at least two video segments are selected from the set to generate a video segment sequence; the transition special effects matched with adjacent target video segments in the sequence and the video special effects matched with each target video segment are obtained; and the target video segments in the sequence are spliced to generate the clip video. This improves the efficiency of generating clip videos as well as the flexibility and diversity of the generated videos.
Drawings
Fig. 1 is a flowchart of a clip video generation method in one embodiment of the present disclosure;
fig. 2 is a flowchart of a clip video generation method in the second embodiment of the disclosure;
fig. 3 is a schematic structural diagram of a clip video generation apparatus in a third embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a terminal device in a fourth embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not limiting of the disclosure. It should be further noted that, for the convenience of description, only some of the structures relevant to the present disclosure are shown in the drawings, not all of them.
Example one
Fig. 1 is a flowchart of a clip video generation method provided in an embodiment of the present disclosure. This embodiment is applicable to the case of generating a clip video. The method may be executed by a clip video generation apparatus, which may be implemented in software and/or hardware and configured in a terminal device such as a computer. As shown in fig. 1, the method specifically includes the following steps:
s110, at least one video matched with the video theme is obtained from a pre-established video library.
The video library is a database of historically displayed videos; a historically displayed video may be a complete video published on a network or an intercepted video segment. The video theme refers to the theme content displayed to the user, and may include at least one kind of content information such as people, backgrounds, colors, and contexts. A video matched with the video theme is a video whose content conforms to the video theme, for example, a video whose content is similar or identical to the content required by the video theme.
Specifically, the displayed historical videos are obtained, feature labels are added to the historical videos according to the video content of the historical videos, and meanwhile the historical videos with the feature labels are added to a video library. Therefore, at least one video corresponding to at least one feature label matched with the video theme can be determined according to the feature labels of the videos in the video library.
Alternatively, a content-based video retrieval algorithm may be employed: the video is divided into a series of video frames, at least one key frame representing the abstract content of the video is extracted, and a description sentence matched with each key frame is determined according to the image content of that key frame, thereby obtaining summary description sentences for the video. At least one video whose summary description sentences match the video theme can then be selected as the at least one video matched with the video theme. Determining the description sentence matched with each key frame according to its image content may be implemented with a visual semantic embedding algorithm, which expresses images and sentences as fixed-length vectors embedded in the same vector space; matching and retrieval of images and sentences can then be achieved by neighbor search in that vector space.
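The neighbor search in the shared embedding space can be sketched as below. This assumes the embedding model has already produced fixed-length vectors for the theme query and for each video's summary description; the function names and the use of cosine similarity as the distance measure are illustrative choices, not specified by the patent.

```python
import math

def cosine(u, v):
    """Cosine similarity between two fixed-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def retrieve(query_vec, video_vecs, top_k=1):
    """Neighbor search in the shared vector space: return the indices of
    the top_k video summary embeddings closest to the query embedding."""
    ranked = sorted(range(len(video_vecs)),
                    key=lambda i: cosine(query_vec, video_vecs[i]),
                    reverse=True)
    return ranked[:top_k]
```

In practice the vectors would come from a trained visual-semantic embedding model and an approximate nearest-neighbor index would replace the exhaustive sort for large libraries.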
S120, clipping the at least one video to generate a video segment set.
By clipping the displayed videos, high-quality video segments can be obtained, and a video segment set is generated from them, improving the video quality of the segments in the set.
Optionally, the clipping the at least one video and generating a video segment set may include: splitting the at least one video into video frames respectively, and acquiring the characteristic information of each video frame respectively; performing clustering analysis on each video frame according to the characteristic information of each video frame to generate at least two video segments; and screening at least two video segments meeting quality and content conditions to generate a video segment set according to the video quality and the video content in the at least two generated video segments.
The feature information may include at least one of the image elements, attribute information, and content information contained in a video frame. Image elements may include the foreground image, the background image, text in the image, and the like. Attribute information may include the structure of the image and the color, size, position, shape, and style of image elements, for example the layer position of an image element in the picture (such as text lying in front of the foreground image), the color of each pixel in the pixel map corresponding to the image, the contrast of the image, or the brightness of the image. Content information refers to the description content of the video frame, more specifically the text content that can be recognized in it.
Cluster analysis is performed on the video frames according to their feature information, so that the video frames within each class set are related to each other, for example with pairwise similarity exceeding a set threshold. The clustering method may be the k-means algorithm, a spectral clustering algorithm, or the like. Specifically, clustering is performed according to the image elements shown in the video frames; image elements illustratively include faces, products, and scenes (such as background images). The video frames in each class set are then spliced in their original time order in the video to generate a video segment.
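The clustering step above can be sketched with a plain k-means over per-frame feature vectors. This is a minimal illustration under simplifying assumptions (deterministic initialization, squared Euclidean distance); a spectral method, as the text notes, could be substituted.

```python
def kmeans(frames, k, iters=20):
    """Plain k-means over frame feature vectors. Returns, for each frame,
    the index of the class set it belongs to; frames sharing an index are
    later spliced, in time order, into one video segment."""
    centers = [list(f) for f in frames[:k]]  # deterministic init: first k frames
    assign = [0] * len(frames)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, f in enumerate(frames):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(f, centers[c])))
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [frames[i] for i in range(len(frames)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign
```

Real frame features would be image-element descriptors rather than the raw 2-D points used here for illustration.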
The quality and content condition refers to a condition for limiting the video quality and the video content of the video segment, and is used for screening out video segments with high quality and video content related to the video theme.
The video is split and recombined to generate video segments, and the segments satisfying the quality and content conditions form the video segment set, which improves the video quality of the segments in the set and thus the quality of the finally generated clip video.
Optionally, the screening of at least two video segments that satisfy the quality and content conditions may include: when the video quality of a video segment exceeds a set threshold, the video segment does not contain transition content, and the video segment contains video content matched with the video theme, determining that the video segment satisfies the quality and content conditions.
The video quality of a video segment may be determined by evaluating the sharpness of each video frame in it; sharpness may be calculated with a sharpness algorithm such as the Brenner gradient function, the Tenengrad gradient function, or the SMD (grayscale variance) function, and the embodiments of the present disclosure are not limited in this respect. A transition refers to the change between one paragraph or scene and the next. Whether a video segment contains transition content is determined by whether information such as the color and content of several consecutive video frames changes abruptly; for example, if the background color of the first three video frames in a segment is blue and that of the last three video frames is red, the segment is determined to contain transition content and does not satisfy the condition. Judging whether a video frame contains video content matched with the video theme amounts to searching for the target content in the frame, which can be realized by template similarity matching: other images of the target content are used as templates and compared with each video frame, and a frame whose similarity exceeds a set threshold is determined to contain video content matched with the video theme.
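Two of the checks above can be sketched directly: the Brenner gradient as the sharpness measure, and an abrupt-change test as a crude stand-in for transition detection. Frames are modeled here as 2-D lists of grayscale intensities; the `jump` threshold is an arbitrary illustrative value.

```python
def brenner_sharpness(gray):
    """Brenner gradient: sum of squared differences between pixels two
    columns apart. Higher values indicate a sharper frame."""
    return sum((row[x + 2] - row[x]) ** 2
               for row in gray for x in range(len(row) - 2))

def has_transition(frames, jump=64):
    """Flag an abrupt change in mean intensity between consecutive frames,
    a simple stand-in for the sudden color/content change described above."""
    means = [sum(p for row in f for p in row) / sum(len(row) for row in f)
             for f in frames]
    return any(abs(a - b) > jump for a, b in zip(means, means[1:]))
```

A production system would compare richer features (per-channel color histograms, detected content) rather than a single mean intensity.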
By screening out clear video segments that include no transition pictures and do contain products related to the video theme, high-quality video segments for generating the clip video can be accurately selected, improving the quality of the generated clip video.
S130, selecting at least two video segments from the video segment set, generating a video segment sequence, and determining a transition special effect matched with each two adjacent target video segments according to the characteristic information of any two adjacent target video segments in the video segment sequence.
A video segment sequence refers to video segments arranged in order. A transition special effect refers to the video effect shown when switching from the presentation of one video segment to another. The feature information of a video segment may be formed by integrating the feature information of each video frame in the segment, or the feature information of a key frame may be selected to represent the segment; in either case it is determined by acquiring the feature information of each video frame in the segment.
The transition special effect is displayed while switching between two adjacent target video segments. According to the feature information of two adjacent target video segments, switching information matched with them, such as their content, style, spatial structure, and display theme, is analyzed, and a matching transition special effect is selected from a transition special effect database and added to the switching operation, making the transition between the two segments richer and more natural. Illustratively, the feature information of the two adjacent target video segments is the description content and image elements extracted from them. The transition special effect database is a database of pre-defined code that realizes special effects when target video segments are switched, illustratively including at least one of resizing image elements, fading out the last video frame of the preceding target segment, and fading in the first video frame of the following target segment. The database may also include other effects; the embodiments of the present disclosure are not limited in this respect.
At least two video segments can be selected from the video segment collection, and the selected video segments are arranged according to a set sequence to generate a video segment sequence. Optionally, the selecting at least two video segments from the video segment collection and generating a video segment sequence may include: and selecting at least two video segments matched with the video theme and the video clipping duration according to the duration of each video segment in the video segment set and the characteristic information of each video segment, and generating a video segment sequence.
Generally, the duration of the generated clip video is set in advance. According to the duration of each video segment in the video segment set, several video segments are selected such that the difference between the sum of their durations and the set clip video duration is below a set threshold, ensuring that the clip video formed by splicing the segments meets the set duration.
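The duration-matching selection can be sketched as an exhaustive search over segment combinations. The patent does not specify a search strategy, so brute force is used purely for illustration; all names are hypothetical.

```python
from itertools import combinations

def pick_segments(durations, target, tolerance):
    """Search for a combination of at least two segments whose total
    duration is within `tolerance` seconds of the target clip length,
    preferring the closest match; returns segment indices or None."""
    best = None
    for r in range(2, len(durations) + 1):
        for combo in combinations(range(len(durations)), r):
            diff = abs(sum(durations[i] for i in combo) - target)
            if diff <= tolerance and (best is None or diff < best[0]):
                best = (diff, combo)
    return list(best[1]) if best else None
```

For large segment sets a greedy or dynamic-programming selection would replace the exponential enumeration.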
The similarity between each video segment and the video theme is determined according to the feature information of the segment; the segments are sorted by similarity, and several segments are selected according to the sorting result, for example in order from high similarity to low, to generate a video segment sequence. The similarity between a video segment and the video theme can be determined by computing the similarity between the segment's feature information and the theme. For example, if the video theme is a set image element (such as a bicycle) and the feature information of a segment indicates that half of its video frames contain that element, the proportion of frames containing the element among all frames can be taken as the similarity; here the similarity between the segment and the theme is 0.5.
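The frame-proportion similarity and the resulting ranking can be sketched as follows; representing per-frame theme detection as a list of booleans is an assumption made for illustration.

```python
def theme_similarity(flags):
    """Fraction of a segment's frames that contain the themed image element
    (0.5 in the bicycle example above, where half the frames contain it)."""
    return sum(flags) / len(flags)

def rank_segments(segments):
    """segments: (name, flags) pairs; returns segment names ordered from
    high to low similarity with the video theme."""
    return [name for name, flags in
            sorted(segments, key=lambda s: theme_similarity(s[1]),
                   reverse=True)]
```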
The ordering of the video segments in the sequence may be random or may follow set conditions; for example, the segments may be ordered by the darkness of their characteristic background color, which can be represented by brightness, from light to dark or from dark to light.
By selecting several video segments based on their duration and feature information to generate the video segment sequence, the matching degree between each segment in the sequence and the video theme can be improved, so that the generated clip video better conforms to the theme.
S140, according to the position sequence of each target video segment in the video segment sequence, splicing the video segment sequence to generate a clip video using the video special effect matched with each target video segment and the transition special effect matched with any two adjacent target video segments.
Wherein the video effect is a video effect shown in the target video segment.
Optionally, before using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments, the method may further include: and determining the video special effect matched with each target video segment according to the video theme and the characteristic information of each target video segment in the video segment sequence.
A matched video special effect is selected from a video special effect database according to the feature information of each target video segment in the sequence and the video theme, and the effect is added to each segment. The video special effect database is a database of pre-defined code realizing effects that highlight the video content and the video theme, illustratively including at least one of facial expression generation, human skeleton structure migration, and rotating a static image.
A suitable video special effect is retrieved from the video special effect database using the description content, image elements, background color, and other features extracted from the target video segment, and the retrieved special effect is fused with the target video segment by video synthesis, for example by covering or overlaying the special effect onto the video frames.
Specifically, each target video segment in the video segment sequence is taken as a video segment of the clip video, and the position sequence of each target video segment in the video segment sequence is taken as the playing sequence of the clip video. And adding the matched video special effect in each target video segment, and adding the transition special effect matched with the two adjacent video segments when the two adjacent video segments are switched, thereby forming a complete clip video.
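The splicing rule of S140 — one matched effect per target segment, one matched transition per adjacent pair — can be sketched as a flat timeline builder. The `effect_for`/`transition_for` callables and the timeline tuples are illustrative stand-ins for the patent's effect-matching and synthesis steps.

```python
def splice_sequence(sequence, effect_for, transition_for):
    """Build a play-ordered timeline: each segment with its matched effect,
    plus a transition between every pair of adjacent target segments."""
    timeline = []
    for i, seg in enumerate(sequence):
        timeline.append(("segment", seg, effect_for(seg)))
        if i + 1 < len(sequence):
            # Transitions are keyed on the adjacent pair, as in the patent.
            timeline.append(("transition", transition_for(seg, sequence[i + 1])))
    return timeline

timeline = splice_sequence(
    ["a", "b", "c"],
    effect_for=lambda s: f"effect:{s}",
    transition_for=lambda s, t: f"fade:{s}->{t}",
)
print(len(timeline))  # → 5: three segments plus two transitions
```

Note the invariant: a sequence of N segments always yields N-1 transitions, which matches the "any two adjacent target video segments" phrasing.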
A video matched with the video theme is selected and clipped into video segments to generate a video segment set; at least two video segments are selected from the set to generate a video segment sequence; the transition special effect matched with each pair of adjacent target video segments and the video special effect matched with each target video segment are obtained; and the target video segments in the video segment sequence are spliced to generate the clip video.
On the basis of the foregoing embodiment, optionally, after splicing the sequence of video segments to generate a clip video, the method may further include: determining style information of the clip video according to the video theme and the feature information of each target video segment in the video segment sequence; and selecting music matched with the style information as video music according to the music characteristics of each piece of music in the music library, and synthesizing the video music with the clip video to generate a target clip video.
Illustratively, the style information may refer to at least one of: the theme tone of the clip video, such as warm tones; the rhythm of the clip video, such as the play speed of the target video segments; and the theme elements of the clip video, such as an element (for example, a bag) contained in each target video segment.
The music library is a predefined database comprising at least one piece of music and music characteristics matched with each piece of music, and the music matched with the music characteristics with the highest matching degree is used as video music by calculating the matching degree of the style information and the music characteristics. By adding video music to the clip video, the clip video can be further enriched.
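The "highest matching degree" selection can be illustrated with a simple tag-overlap score. The tag-set representation, track names, and overlap metric are assumptions for the sketch; the patent only requires some matching degree between style information and music features.

```python
def pick_music(style_tags, library):
    """Return the library track whose features best match the clip's style,
    using tag overlap as a toy matching-degree measure."""
    def match_degree(track):
        return len(set(style_tags) & set(track["features"]))
    return max(library, key=match_degree)

library = [
    {"name": "calm_piano", "features": ["slow", "warm"]},
    {"name": "fast_synth", "features": ["fast", "cold"]},
]
best = pick_music(["warm", "slow", "bag"], library)
print(best["name"])  # → calm_piano (two overlapping tags vs. zero)
```

A production system would replace the overlap count with learned embeddings, but the selection logic — score every track, keep the maximum — is the same shape.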
On the basis of the above embodiment, optionally, the target clip video is input into a display page evaluation model to obtain a display prediction evaluation result corresponding to the target clip video; whether the display prediction evaluation result meets a threshold condition is judged; if so, the target clip video is displayed; otherwise, the method returns to the step of acquiring at least one video matched with the video theme from the pre-established video library, until a target clip video meeting the threshold condition is acquired for display.
The threshold condition may be a threshold of the same magnitude or numeric type as the display prediction evaluation result, used to determine whether the target clip video meets the display standard, that is, whether it meets the quality standard. The display page evaluation model may be a pre-trained machine learning model for evaluating the effect of the formed target clip video when displayed in a page. The finally obtained target clip video is evaluated and displayed only when the threshold condition is met, so that high-quality clip videos are displayed and the user experience is improved.
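The evaluate-then-regenerate loop can be sketched as a gate around the generation pipeline. The callables, the attempt budget, and the numeric scores below are illustrative assumptions; the patent does not specify a retry limit.

```python
def acquire_presentable_clip(generate, evaluate, threshold, max_attempts=10):
    """Regenerate clips until the predicted evaluation result meets the
    threshold condition, then return that clip for display."""
    for _ in range(max_attempts):
        clip = generate()
        if evaluate(clip) >= threshold:
            return clip
    return None  # no acceptable clip within the attempt budget

# Each generated clip is a (video, predicted-score) pair in this toy setup.
clips = iter([("v1", 0.4), ("v2", 0.55), ("v3", 0.9)])
result = acquire_presentable_clip(
    generate=lambda: next(clips),
    evaluate=lambda c: c[1],
    threshold=0.8,
)
print(result)  # → ('v3', 0.9), the first clip meeting the threshold
```

Without some cap on attempts, the loop in the patent's flowchart could in principle run indefinitely, so a budget like `max_attempts` is a practical addition.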
Example two
Fig. 2 is a flowchart of a clip video generation method according to a second embodiment of the disclosure. The present embodiment is embodied on the basis of various alternatives in the above-described embodiments.
Correspondingly, the method of the embodiment may include:
S201, acquiring at least one video matched with the video theme from a pre-established video library.
It should be noted that, in this embodiment, the video theme, video library, video segment sequence, feature information, video special effect, transition special effect, clip video, and the like may all refer to the description in the foregoing embodiment.
S202, splitting the at least one video into video frames respectively, and acquiring the characteristic information of each video frame respectively.
S203, performing cluster analysis on each video frame according to the characteristic information of each video frame to generate at least two video segments.
S204, screening at least two video segments meeting quality and content conditions to generate a video segment set according to the video quality and the video content in the at least two generated video segments.
Wherein the screening of the at least two video segments satisfying the quality and content conditions may include: determining that a video segment satisfies the quality and content conditions when its video quality exceeds a set threshold, it does not contain transition content, and it contains video content matched with the video theme.
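The three-part screening rule of S204 is a simple conjunction, sketched below. The segment field names (`quality`, `has_transition_content`, `content_tags`) are invented for the example; the patent leaves the concrete representation open.

```python
def satisfies_conditions(segment, quality_threshold, theme):
    """A segment passes only if quality exceeds the threshold, it has no
    transition content, and its content matches the video theme."""
    return (segment["quality"] > quality_threshold
            and not segment["has_transition_content"]
            and theme in segment["content_tags"])

segments = [
    {"quality": 0.9, "has_transition_content": False, "content_tags": ["travel"]},
    {"quality": 0.9, "has_transition_content": True,  "content_tags": ["travel"]},
    {"quality": 0.3, "has_transition_content": False, "content_tags": ["travel"]},
]
kept = [s for s in segments if satisfies_conditions(s, 0.5, "travel")]
print(len(kept))  # → 1: the second fails the transition check, the third the quality check
```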
S205, according to the duration of each video segment in the video segment set and the characteristic information of each video segment, selecting at least two video segments matched with the video theme and the duration of the clip video, and generating a video segment sequence.
S206, determining a transition special effect matched with each two adjacent target video segments according to the feature information of any two adjacent target video segments in the video segment sequence.
And S207, determining the video special effect matched with each target video segment according to the video theme and the characteristic information of each target video segment in the video segment sequence.
And S208, according to the position sequence of each target video segment in the video segment sequence, splicing the video segment sequence to generate a clip video by using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments.
S209, determining style information of the clip video according to the video theme and the feature information of each target video segment in the video segment sequence; and selecting music matched with the style information as video music according to the music characteristics of each music in the music library, and synthesizing the video music with the clip video to generate a target clip video.
S210, inputting the target clip video into a display page evaluation model to obtain a display prediction evaluation result corresponding to the target clip video.
Specifically, the display page evaluation model may include a feature extraction layer and a fully connected layer. The image features of each video frame in the clip video are acquired through an image embedding layer in the feature extraction layer; the character features of each video frame are acquired through a character embedding layer in the feature extraction layer, or through optical character recognition; a feature vector of each video frame is generated from the image features and the character features, specifically by concatenating the image features and the character features of the frame; and the display prediction evaluation result of the clip video is obtained from the feature vectors through the fully connected layer and output in numeric form.
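The scoring path — concatenate per-frame image and character features, pass each through a fully connected layer, reduce to one number — can be shown with toy weights. All feature values and weights here are made up; a real model would learn them during pre-training.

```python
def fully_connected(vec, weights, bias):
    """One fully connected unit: weighted sum plus bias."""
    return sum(v * w for v, w in zip(vec, weights)) + bias

def score_clip(frames, weights, bias):
    """Score a clip: per frame, concatenate image and character features,
    apply the fully connected layer, then average the frame scores."""
    scores = []
    for image_feat, char_feat in frames:
        feat = image_feat + char_feat  # concatenation, as in the model text
        scores.append(fully_connected(feat, weights, bias))
    return sum(scores) / len(scores)

# Two frames, each with 2 image features and 1 character feature (toy values).
frames = [([0.2, 0.4], [0.1]), ([0.6, 0.0], [0.3])]
w, b = [1.0, 0.5, 2.0], 0.1
print(round(score_clip(frames, w, b), 3))  # → 1.0
```

Averaging frame scores is one plausible reduction; the patent only states that the fully connected layer produces a numeric result from the feature vectors.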
S211, judging whether the display prediction evaluation result meets a threshold condition, if so, executing S212; otherwise, executing S201 until the target clip video meeting the threshold value condition is acquired for displaying.
S212, displaying the target clip video.
Example three
Fig. 3 is a schematic structural diagram of a clipped video generating apparatus according to an embodiment of the present disclosure, which is applicable to generating a clipped video. The apparatus may be implemented in software and/or hardware, and may be configured in a terminal device. As shown in fig. 3, the apparatus may include: a video acquisition module 310, a video segment set generation module 320, a video segment sequence generation module 330, and a clip video generation module 340.
The video acquiring module 310 is configured to acquire at least one video matched with a video topic from a pre-established video library;
a video segment set generating module 320, configured to clip the at least one video to generate a video segment set;
a video segment sequence generating module 330, configured to select at least two video segments from the video segment set, generate a video segment sequence, and determine a transition special effect matched with each of two adjacent target video segments according to feature information of any two adjacent target video segments in the video segment sequence;
the clip video generation module 340 is configured to splice the video segment sequence to generate a clip video according to the position sequence of each target video segment in the video segment sequence by using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments.
A video matched with the video theme is selected and clipped into video segments to generate a video segment set; at least two video segments are selected from the set to generate a video segment sequence; the transition special effect matched with each pair of adjacent target video segments and the video special effect matched with each target video segment are obtained; and the target video segments in the video segment sequence are spliced to generate the clip video.
Further, the video segment collection generation module 320 includes: the video frame splitting module is used for splitting the at least one video into video frames respectively and acquiring the characteristic information of each video frame respectively; the video segment generation module is used for performing clustering analysis on each video frame according to the characteristic information of each video frame to generate at least two video segments; and the video segment set determining module is used for screening at least two video segments meeting the quality and content conditions to generate a video segment set according to the video quality and the video content in the at least two generated video segments.
Further, the video segment set determining module includes: and the video segment quality and content judgment module is used for determining that the video segment meets the quality and content conditions when the video quality of the video segment exceeds a set threshold, meets the condition that the video segment does not contain transition content and contains video content matched with the video theme.
Further, the video segment sequence generating module 330 includes: and the video segment screening module is used for selecting at least two video segments matched with the video theme and the video editing duration according to the duration of each video segment in the video segment set and the characteristic information of each video segment to generate a video segment sequence.
Further, the clip video generation apparatus further includes: and the video special effect acquisition module is used for determining the video special effect matched with each target video segment according to the video theme and the characteristic information of each target video segment in the video segment sequence.
Further, the clip video generation apparatus further includes: the style information determining module is used for determining style information of the clip video according to the video theme and the characteristic information of each target video segment in the video segment sequence; and the target clip video generation module is used for selecting music matched with the style information as video music according to the music characteristics of each piece of music in the music library, and synthesizing the video music with the clip video to generate a target clip video.
Further, the clip video generation apparatus further includes: the display prediction evaluation result acquisition module is used for inputting the clip video into a display page evaluation model to obtain a display prediction evaluation result corresponding to the target clip video; the display judgment module is used for judging whether the display prediction evaluation result meets a threshold condition; if so, displaying the target clip video; otherwise, returning to the step of acquiring at least one video matched with the video subject of the clip video from the pre-established video library until the target clip video meeting the threshold condition is acquired for display.
The clipped video generating device provided by the embodiment of the disclosure belongs to the same inventive concept as the clipped video generating method provided by the first embodiment, and the technical details which are not described in detail in the embodiment of the disclosure can be referred to in the first embodiment, and the first embodiment and the second embodiment of the disclosure have the same beneficial effects.
Example four
The disclosed embodiment provides a terminal device, and referring to fig. 4 below, a schematic structural diagram of a terminal device (e.g., a client or a server) 400 suitable for implementing the disclosed embodiment is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the terminal device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the terminal apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the terminal device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 illustrates a terminal apparatus 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
Example five
Embodiments of the present disclosure also provide a computer readable storage medium, which may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device.
The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquiring at least one video matched with a video theme from a pre-established video library; editing the at least one video to generate a video segment set; selecting at least two video segments from the video segment set, generating a video segment sequence, and determining a transition special effect matched with each two adjacent target video segments according to the characteristic information of any two adjacent target video segments in the video segment sequence; and according to the position sequence of each target video segment in the video segment sequence, splicing the video segment sequence to generate a clip video by using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the module does not constitute a limitation to the module itself in some cases, for example, the video capture module may also be described as a "module for capturing at least one video matching the video topic from a pre-established video library".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (12)

1. A clip video generation method, comprising:
acquiring at least one video matched with a video theme from a pre-established video library;
editing the at least one video to generate a video segment set;
selecting at least two video segments from the video segment set, generating a video segment sequence, and determining a transition special effect matched with each two adjacent target video segments according to the characteristic information of any two adjacent target video segments in the video segment sequence;
according to the position sequence of each target video segment in the video segment sequence, splicing the video segment sequence to generate a clip video by using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments;
wherein said selecting at least two video segments from said set of video segments to generate a sequence of video segments comprises:
selecting at least two video segments matched with the video theme and the video editing duration according to the duration of each video segment in the video segment set and the characteristic information of each video segment, and generating a video segment sequence;
the selecting at least two video segments matching with the video theme comprises:
determining the similarity between the video segment and the video theme according to the characteristic information of the video segment;
sorting the video segments according to the similarity, and selecting the at least two video segments according to a sorting result;
the generating the sequence of video segments comprises:
sequencing each video segment in the at least two selected video segments according to the depth of the characteristic background color of the video segments to generate a video segment sequence;
wherein said clipping said at least one video to generate a set of video segments comprises:
splitting the at least one video into video frames respectively, and acquiring the characteristic information of each video frame respectively;
performing clustering analysis on each video frame according to the characteristic information of each video frame to generate at least two video segments;
and screening at least two video segments meeting quality and content conditions to generate a video segment set according to the video quality and the video content in the at least two generated video segments.
2. The method according to claim 1, wherein said filtering at least two video segments that satisfy quality and content conditions comprises:
and when the video quality of the video segment exceeds a set threshold value, the condition that the video segment does not contain transition content is met, and the video segment contains video content matched with the video theme, determining that the video segment meets the quality and content conditions.
3. The method according to claim 1, further comprising, prior to using the video effect matched to each target video segment in the sequence of video segments and the transition effect matched to any two adjacent target video segments:
and determining the video special effect matched with each target video segment according to the video theme and the characteristic information of each target video segment in the video segment sequence.
4. The method according to any of claims 1-3, further comprising, after splicing the sequence of video segments to generate a clip video:
determining style information of the clip video according to the video theme and the feature information of each target video segment in the video segment sequence;
and selecting music matched with the style information as video music according to the music characteristics of each piece of music in the music library, and synthesizing the video music with the clip video to generate a target clip video.
5. The method of claim 4, after generating the target clip video, further comprising:
inputting the target clip video into a display page evaluation model to obtain a display prediction evaluation result corresponding to the target clip video;
judging whether the display prediction evaluation result meets a threshold condition;
if so, displaying the target clip video;
and otherwise, returning to the step of acquiring at least one video matched with the video theme from the pre-established video library until the target clip video meeting the threshold condition is acquired and displayed.
6. A clip video generation apparatus, comprising:
the video acquisition module is used for acquiring at least one video matched with a video theme from a pre-established video library;
the video segment set generation module is used for clipping the at least one video to generate a video segment set;
the video segment sequence generation module is used for selecting at least two video segments from the video segment set to generate a video segment sequence and determining a transition special effect matched with each two adjacent target video segments according to the characteristic information of any two adjacent target video segments in the video segment sequence;
the video clip generation module is used for splicing the video segment sequence to generate a clip video by using the video special effect matched with each target video segment in the video segment sequence and the transition special effect matched with any two adjacent target video segments according to the position sequence of each target video segment in the video segment sequence;
wherein the video segment sequence generation module comprises:
a video segment screening module, configured to select at least two video segments matching the video theme and the video editing duration according to the duration of each video segment in the video segment set and the feature information of each video segment, and generate a video segment sequence;
the video segment screening module is specifically configured to:
determining the similarity between the video segment and the video theme according to the characteristic information of the video segment;
sorting the video segments according to the similarity, and selecting the at least two video segments according to a sorting result;
the video segment screening module is further specifically configured to:
sequencing each video segment in the at least two selected video segments according to the depth of the characteristic background color of the video segments to generate a video segment sequence;
wherein the video segment collection generation module comprises:
the video frame splitting module is used for splitting the at least one video into video frames respectively and acquiring the characteristic information of each video frame respectively;
the video segment generation module is used for performing clustering analysis on each video frame according to the characteristic information of each video frame to generate at least two video segments;
and the video segment set determining module is used for screening at least two video segments meeting the quality and content conditions to generate a video segment set according to the video quality and the video content in the at least two generated video segments.
7. The apparatus of claim 6, wherein said video segment collection determination module comprises:
and the video segment quality and content judgment module is used for determining that the video segment meets the quality and content conditions when the video quality of the video segment exceeds a set threshold, meets the condition that the video segment does not contain transition content and contains video content matched with the video theme.
8. The apparatus of claim 6, further comprising:
and the video special effect acquisition module is used for determining the video special effect matched with each target video segment according to the video theme and the characteristic information of each target video segment in the video segment sequence.
9. The apparatus of any of claims 6-8, further comprising:
the style information determining module is used for determining style information of the clip video according to the video theme and the characteristic information of each target video segment in the video segment sequence;
and the target clip video generation module is used for selecting music matched with the style information as video music according to the music characteristics of each piece of music in the music library, and synthesizing the video music with the clip video to generate a target clip video.
10. The apparatus of claim 9, further comprising:
a display prediction evaluation result acquiring module, configured to input the clip video into a display page evaluation model to obtain a display prediction evaluation result corresponding to the target clip video; and
a display judging module, configured to judge whether the display prediction evaluation result meets a threshold condition; if so, to display the target clip video; otherwise, to return to the step of acquiring, from the pre-established video library, at least one video matching the video theme of the clip video, until a target clip video meeting the threshold condition is obtained for display.
11. A terminal device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the clip video generation method of any one of claims 1-5.
12. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the clip video generation method of any one of claims 1-5.
CN201811612134.3A 2018-12-27 2018-12-27 Clip video generation method and device, terminal equipment and storage medium Active CN109688463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811612134.3A CN109688463B (en) 2018-12-27 2018-12-27 Clip video generation method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811612134.3A CN109688463B (en) 2018-12-27 2018-12-27 Clip video generation method and device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109688463A CN109688463A (en) 2019-04-26
CN109688463B true CN109688463B (en) 2020-02-18

Family

ID=66190562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811612134.3A Active CN109688463B (en) 2018-12-27 2018-12-27 Clip video generation method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109688463B (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110139159B (en) * 2019-06-21 2021-04-06 上海摩象网络科技有限公司 Video material processing method and device and storage medium
CN110381367B (en) * 2019-07-10 2022-01-25 咪咕文化科技有限公司 Video processing method, video processing equipment and computer readable storage medium
CN110730381A (en) * 2019-07-12 2020-01-24 北京达佳互联信息技术有限公司 Method, device, terminal and storage medium for synthesizing video based on video template
CN110415723B (en) * 2019-07-30 2021-12-03 广州酷狗计算机科技有限公司 Method, device, server and computer readable storage medium for audio segmentation
CN110381371B (en) * 2019-07-30 2021-08-31 维沃移动通信有限公司 Video editing method and electronic equipment
CN111083393B (en) * 2019-12-06 2021-09-14 央视国际网络无锡有限公司 Method for intelligently making short video
CN110958470A (en) * 2019-12-09 2020-04-03 北京字节跳动网络技术有限公司 Multimedia content processing method, device, medium and electronic equipment
CN111107392B (en) * 2019-12-31 2023-02-07 北京百度网讯科技有限公司 Video processing method and device and electronic equipment
CN113225488B (en) * 2020-02-05 2023-10-20 字节跳动有限公司 Video processing method and device, electronic equipment and storage medium
CN111432141B (en) * 2020-03-31 2022-06-17 北京字节跳动网络技术有限公司 Method, device and equipment for determining mixed-cut video and storage medium
CN111432142B (en) * 2020-04-03 2022-11-22 腾讯云计算(北京)有限责任公司 Video synthesis method, device, equipment and storage medium
CN111918146B (en) * 2020-07-28 2021-06-01 广州筷子信息科技有限公司 Video synthesis method and system
CN113840099B (en) * 2020-06-23 2023-07-07 北京字节跳动网络技术有限公司 Video processing method, device, equipment and computer readable storage medium
CN113938744B (en) * 2020-06-29 2024-01-23 抖音视界有限公司 Video transition type processing method, device and storage medium
CN113938751B (en) * 2020-06-29 2023-12-22 抖音视界有限公司 Video transition type determining method, device and storage medium
CN112312161A (en) * 2020-06-29 2021-02-02 北京沃东天骏信息技术有限公司 Method and device for generating video, electronic equipment and readable storage medium
CN111541946A (en) * 2020-07-10 2020-08-14 成都品果科技有限公司 Automatic video generation method and system for resource matching based on materials
CN112200739A (en) * 2020-09-30 2021-01-08 北京大米科技有限公司 Video processing method and device, readable storage medium and electronic equipment
CN112367481A (en) * 2020-10-28 2021-02-12 郑州阿帕斯科技有限公司 Video clip processing method and device
CN112801861A (en) * 2021-01-29 2021-05-14 恒安嘉新(北京)科技股份公司 Method, device and equipment for manufacturing film and television works and storage medium
CN113115106B (en) * 2021-03-31 2023-05-05 影石创新科技股份有限公司 Automatic editing method, device, terminal and storage medium for panoramic video
CN113038243B (en) * 2021-05-28 2021-09-17 卡莱特云科技股份有限公司 Transparency adjusting method and device in video source picture playing process
CN115996274A (en) * 2021-10-18 2023-04-21 华为技术有限公司 Video production method and electronic equipment
CN116137672A (en) * 2021-11-18 2023-05-19 脸萌有限公司 Video generation method, device, apparatus, storage medium and program product
CN114268741B (en) * 2022-02-24 2023-01-31 荣耀终端有限公司 Transition dynamic effect generation method, electronic device, and storage medium
CN114666657B (en) * 2022-03-18 2024-03-19 北京达佳互联信息技术有限公司 Video editing method and device, electronic equipment and storage medium
CN115134646B (en) * 2022-08-25 2023-02-10 荣耀终端有限公司 Video editing method and electronic equipment
CN115866347B (en) * 2023-02-22 2023-08-01 北京百度网讯科技有限公司 Video processing method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394331A (en) * 2014-12-05 2015-03-04 厦门美图之家科技有限公司 Video processing method for adding matching sound effect in video picture
CN106534971A (en) * 2016-12-05 2017-03-22 腾讯科技(深圳)有限公司 Audio/ video clipping method and device
CN107566756A (en) * 2017-08-03 2018-01-09 广东小天才科技有限公司 A kind of processing method and terminal device of video transition
CN108289180A (en) * 2018-01-30 2018-07-17 广州市百果园信息技术有限公司 Method, medium and the terminal installation of video are handled according to limb action
CN108900905A (en) * 2018-08-08 2018-11-27 北京未来媒体科技股份有限公司 A kind of video clipping method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170134714A1 (en) * 2015-11-11 2017-05-11 Microsoft Technology Licensing, Llc Device and method for creating videoclips from omnidirectional video

Also Published As

Publication number Publication date
CN109688463A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN109688463B (en) Clip video generation method and device, terminal equipment and storage medium
CN109618222B (en) A kind of splicing video generation method, device, terminal device and storage medium
US20240107127A1 (en) Video display method and apparatus, video processing method, apparatus, and system, device, and medium
CN109803180B (en) Video preview generation method and device, computer equipment and storage medium
CN110914872A (en) Navigating video scenes with cognitive insights
CN110446066B (en) Method and apparatus for generating video
CN109348277B (en) Motion pixel video special effect adding method and device, terminal equipment and storage medium
KR20220103112A (en) Video generation method and apparatus, electronic device, and computer readable medium
CN113010698B (en) Multimedia interaction method, information interaction method, device, equipment and medium
CN112287168A (en) Method and apparatus for generating video
CN112929746B (en) Video generation method and device, storage medium and electronic equipment
CN109815448B (en) Slide generation method and device
WO2023056835A1 (en) Video cover generation method and apparatus, and electronic device and readable medium
CN109816670B (en) Method and apparatus for generating image segmentation model
WO2023124793A1 (en) Image pushing method and device
CN114513706B (en) Video generation method and device, computer equipment and storage medium
CN114708443A (en) Screenshot processing method and device, electronic equipment and computer readable medium
CN112905838A (en) Information retrieval method and device, storage medium and electronic equipment
CN113535031A (en) Page display method, device, equipment and medium
US11792494B1 (en) Processing method and apparatus, electronic device and medium
CN114697761B (en) Processing method, processing device, terminal equipment and medium
CN114697762B (en) Processing method, processing device, terminal equipment and medium
CN114339356B (en) Video recording method, device, equipment and storage medium
WO2023160515A1 (en) Video processing method and apparatus, device and medium
CN112764601B (en) Information display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant