CN117201858A - Video generation method, device and equipment - Google Patents

Video generation method, device and equipment

Info

Publication number
CN117201858A
Authority
CN
China
Prior art keywords
video
library
template
target
video template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311145383.7A
Other languages
Chinese (zh)
Inventor
张智谦
吕晓磊
罗嘉鑫
高龙成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Momo Information Technology Co Ltd
Original Assignee
Beijing Momo Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Momo Information Technology Co Ltd filed Critical Beijing Momo Information Technology Co Ltd
Priority to CN202311145383.7A priority Critical patent/CN117201858A/en
Publication of CN117201858A publication Critical patent/CN117201858A/en
Pending legal-status Critical Current

Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

The present application provides a video generation method, apparatus, and device. The video generation method provided by the present application includes: in response to a video production request, searching a pre-created video template library for a plurality of video templates matching the material carried by the request, and displaying the plurality of video templates to a user; in response to a selection operation in which the user selects a target video template from the plurality of video templates, displaying an editing interface for the target video template; in response to an editing operation performed by the user on the target video template through the editing interface, generating an edited video template; and generating a video corresponding to the material by using the material and the edited video template. The video generation method, apparatus, and device provided by the present application enable users to produce personalized videos conveniently and quickly.

Description

Video generation method, device and equipment
Technical Field
The present application relates to the field of video generation technologies, and in particular, to a video generation method, apparatus, and device.
Background
With the rise of short-form video, video has become an important medium through which users express their ideas on social platforms. However, producing a high-quality, personalized video often takes a user a great deal of time. Helping users produce videos more conveniently and quickly has therefore become an important way for Internet platforms to differentiate themselves and improve user stickiness. How to help users produce personalized videos conveniently and quickly is thus an urgent problem to be solved.
Disclosure of Invention
In view of the above, the present application provides a video generation method, apparatus, and device for producing personalized videos for users conveniently and quickly.
Specifically, the present application is implemented through the following technical solutions:
a first aspect of the present application provides a video generation method, the method comprising:
responding to a video making request, searching a plurality of video templates matched with materials carried by the video making request from a pre-created video template library, and displaying the plurality of video templates to a user;
responding to the selection operation of a user for selecting a target video template from the plurality of video templates, and displaying an editing interface aiming at the target video template;
responding to the editing operation of the user on the target video template based on the editing interface, and generating an edited video template;
and generating a video corresponding to the material by using the material and the edited video template.
A second aspect of the present application provides a video generation apparatus, comprising a search module, a display module, an editing module, and a processing module, wherein:
the search module is configured to, in response to a video production request, search a pre-created video template library for a plurality of video templates matching the material carried by the video production request, and display the plurality of video templates to a user;
the display module is configured to display an editing interface for a target video template in response to a selection operation in which the user selects the target video template from the plurality of video templates;
the editing module is configured to generate an edited video template in response to an editing operation performed by the user on the target video template through the editing interface;
and the processing module is configured to generate a video corresponding to the material by using the material and the edited video template.
A third aspect of the present application provides a video generation device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the methods provided in the first aspect when executing the program.
A fourth aspect of the present application provides a storage medium storing a program which, when executed by a processor, implements the steps of any one of the methods provided in the first aspect.
According to the video generation method, apparatus, and device provided by the present application, in response to a video production request, a plurality of video templates matching the material carried by the request are retrieved from a pre-created video template library and displayed to a user; in response to a selection operation in which the user selects a target video template from the plurality of video templates, an editing interface for the target video template is displayed; in response to an editing operation performed by the user on the target video template through the editing interface, an edited video template is generated; and finally, a video corresponding to the material is generated using the material and the edited video template. In this way, a plurality of video templates can be presented to the user according to the material carried by the video production request, so that the user can pick a preferred template from them. In addition, editing operations on the template are supported, so that the user can adjust the selected template according to personal preference; in other words, the user can customize a personalized video template on the basis of an existing one. This better satisfies the user's personalized requirements, helps the user produce the desired personalized video more quickly and conveniently, and improves the user experience.
Drawings
Fig. 1 is a flowchart of a first embodiment of a video generation method provided by the present application;
Fig. 2 is a schematic diagram of a visualized video template according to an exemplary embodiment of the present application;
Fig. 3 is a flowchart of a second embodiment of a video generation method provided by the present application;
Fig. 4 is a flowchart of a third embodiment of a video generation method provided by the present application;
Fig. 5 is a hardware structure diagram of a video generation device in which the video generation apparatus provided by the present application is located;
Fig. 6 is a schematic structural diagram of a first embodiment of a video generation apparatus provided by the present application;
Fig. 7 is a schematic diagram of a second embodiment of a video generation apparatus provided by the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and so on may be used herein to describe various pieces of information, the information should not be limited by these terms, which are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The present application provides a video generation method, apparatus, and device for producing personalized videos for users conveniently and quickly.
According to the video generation method, apparatus, and device provided by the present application, in response to a video production request, a plurality of video templates matching the material carried by the request are retrieved from a pre-created video template library and displayed to a user; in response to a selection operation in which the user selects a target video template from the plurality of video templates, an editing interface for the target video template is displayed; in response to an editing operation performed by the user on the target video template through the editing interface, an edited video template is generated; and finally, a video corresponding to the material is generated using the material and the edited video template. In this way, a plurality of video templates can be presented to the user according to the material carried by the video production request, so that the user can pick a preferred template from them. In addition, editing operations on the template are supported, so that the user can adjust the selected template according to personal preference; in other words, the user can customize a personalized video template on the basis of an existing one. This better satisfies the user's personalized requirements, helps the user produce the desired personalized video more quickly and conveniently, and improves the user experience.
Specific embodiments are described below to explain the technical solution of the present application in detail.
Fig. 1 is a flowchart of a first embodiment of the video generation method provided by the present application. Referring to Fig. 1, the method provided by this embodiment includes the following steps.
s101, responding to a video production request, searching a plurality of video templates matched with materials carried by the video production request from a pre-created video template library, and displaying the plurality of video templates to a user.
Specifically, the video generation method provided by the embodiment is applied to video generation equipment. When a user wants to make a video using a material, the user may instruct the video generating device to generate a video based on the material by triggering a video making request carrying the material. For example, in one embodiment, the video generating device provides a video production button, after the user clicks the button, the video generating device displays a material upload page, at this time, the user may upload the material based on the material upload page and trigger a submit operation, and further, the video generating device generates a video production request carrying the material in response to the submit operation triggered by the user on the material upload page.
Specifically, the video production request may carry at least one material, and in addition, the material carried by the video production request may be a material such as a picture, a video, a text, and the like.
It should be noted that, the video template library is created in advance according to actual requirements, for example, in an embodiment, the video template library may be designed and created by a designer according to actual requirements. Further, the pre-created video template library includes a plurality of video templates, wherein each video template may be a video template generated based on a music element, a sticker element, and a transition special effect element. The video templates will be described in detail below, and will not be described in detail here.
Specifically, in one possible implementation, the video template library contains a plurality of video templates, and attribute information of each video template is recorded in the library. The attribute information describes the characteristics of a template; for example, it may include the category information obtained by classifying the template according to different classification criteria. In one embodiment, the attribute information of a video template may include first category information indicating which of a set of first preset categories, such as person, landscape, or animal, the template belongs to; it may further include second category information indicating which of several preset styles the template belongs to. The first preset categories and the preset styles are set according to actual needs and are not limited in this embodiment.
Specifically, when the attribute information of each video template is recorded in the library, searching the pre-created video template library for a plurality of video templates matching the material may be implemented as follows: first obtain description information of the material, and then search the library for a plurality of matching video templates according to the description information of the material and the attribute information of each video template, where the description information characterizes the features of the material.
In a specific implementation, video templates whose attribute information matches the description information are retrieved from the library according to the description information of the material and the attribute information of each template; the retrieved templates are the video templates matching the material.
For example, in one embodiment, the description information indicates that the material is sports-style material; in that case, video templates whose attribute information is the sports style are retrieved directly from the library.
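As a minimal illustration of this attribute-based lookup, the following sketch retrieves templates whose attribute information agrees with the material's description information. The data structures, field names, and exact-match rule are assumptions made for illustration only, not the patent's actual implementation:

```python
# Minimal sketch of attribute-based template matching (illustrative only).
from dataclasses import dataclass, field


@dataclass
class VideoTemplate:
    template_id: str
    attributes: dict = field(default_factory=dict)  # e.g. {"style": "sports", "category": "person"}


def find_matching_templates(material_description: dict,
                            template_library: list[VideoTemplate]) -> list[VideoTemplate]:
    """Return templates whose attribute information matches the material's description."""
    matches = []
    for template in template_library:
        # A template matches if every described feature of the material
        # (e.g. its style) agrees with the template's recorded attribute.
        if all(template.attributes.get(key) == value
               for key, value in material_description.items()):
            matches.append(template)
    return matches


library = [VideoTemplate("A", {"style": "sports"}), VideoTemplate("B", {"style": "landscape"})]
print([t.template_id for t in find_matching_templates({"style": "sports"}, library)])  # ['A']
```

A real system would likely rank the candidates (for example, by a similarity score produced by a model) rather than require exact attribute equality.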
In another possible implementation, if the attribute information of each video template is not recorded in the library, the attribute information of each template may first be obtained, and a plurality of video templates matching the material may then be retrieved from the library according to the description information of the material and the attribute information of each template.
Specifically, the attribute information of a video template or the description information of the material may be obtained based on a pre-trained model or a preset feature extraction method; the specific implementation principle is not described here.
In one possible implementation, the user may also specify the style of the video to be produced when uploading the material, in which case the video production request carries the style specified by the user. The search is then performed in a manner similar to that described above and is not repeated here.
Specifically, in one possible implementation, when the plurality of video templates are displayed to the user, the attribute information of each template or its rendering effect may be displayed together with the template itself, so that the user can quickly select a preferred template based on the displayed information, further improving the user experience.
S102: In response to a selection operation in which the user selects a target video template from the plurality of video templates, display an editing interface for the target video template.
For example, in one embodiment, the video templates found in the library in response to the video production request are video templates A, B, C, and D. After these templates are displayed, the user selects video template A according to personal preference, and an editing interface for video template A is then displayed, so that the user can edit template A as needed and obtain a satisfactory template that meets the user's personalized requirements.
As described above, a video template may be generated from music elements, sticker elements, and transition effect elements. In this step, after the editing interface for the video template is presented, the user may edit the music elements, sticker elements, and transition effect elements of the template through the interface. For example, the user may replace music elements, delete sticker elements, or change transition effect elements; the user may also set the stacking order of the sticker elements and transition effect elements.
It should be noted that the editing interface may also display, in real time, a preview of the template currently being edited, so that the user can check the preview at any time and arrive at a satisfactory template.
S103: In response to an editing operation performed by the user on the target video template through the editing interface, generate an edited video template.
For example, in one embodiment, the user modifies the filter of the video template in the editing interface, and a new video template is generated.
For another example, the user may replace the original music elements, move or add sticker elements, or change filters through the editing interface, so that the edited video template achieves the desired effect.
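A minimal sketch of how such editing operations might be applied to a selected template is given below. The operation names, the dictionary-based template structure, and the copy-then-edit behaviour are assumptions for illustration only, not the patent's API:

```python
# Illustrative sketch: apply user edit operations (replace music, remove a
# sticker, change a filter, set the stacking order) to a copy of the template.
import copy


def apply_edits(template: dict, operations: list[dict]) -> dict:
    """Return an edited copy of the template after applying each operation in order."""
    edited = copy.deepcopy(template)  # keep the original template in the library untouched
    for op in operations:
        if op["type"] == "replace_music":
            edited["music"] = op["value"]
        elif op["type"] == "remove_sticker":
            edited["stickers"].remove(op["value"])
        elif op["type"] == "set_filter":
            edited["filter"] = op["value"]
        elif op["type"] == "set_stacking_order":
            edited["stacking_order"] = op["value"]
    return edited


template_a = {"music": "bgm_1.mp3", "stickers": ["fire", "star"],
              "filter": "warm", "stacking_order": ["sticker", "filter", "transition"]}
edits = [{"type": "replace_music", "value": "bgm_2.mp3"},
         {"type": "remove_sticker", "value": "star"},
         {"type": "set_filter", "value": "cool"}]
edited_template = apply_edits(template_a, edits)
```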
S104: Generate a video corresponding to the material by using the material and the edited video template.
For example, Fig. 2 is a schematic diagram of a visualized video template according to an exemplary embodiment of the present application. Referring to Fig. 2, the video template in this example includes: a special video clip, a music element, a sticker element, a filter element, three transition effect elements, and three blank material segments reserved for user material. Accordingly, when this template is used to produce a video, the corresponding video can be generated by filling the three blank material segments with the material uploaded by the user.
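As an illustration of how a template with blank material segments could be filled with user material, consider the following sketch; the Template and MaterialSlot structures and the final rendering step are assumptions, not the patent's implementation:

```python
# Illustrative sketch: fill a template's blank material segments with user material.
from dataclasses import dataclass


@dataclass
class MaterialSlot:
    index: int
    content: str = ""  # path to a user picture/video; empty until filled


@dataclass
class Template:
    music: str
    stickers: list[str]
    filters: list[str]
    transitions: list[str]
    slots: list[MaterialSlot]


def fill_template(template: Template, materials: list[str]) -> Template:
    """Place user material into the template's blank segments, in order."""
    if len(materials) != len(template.slots):
        raise ValueError("number of materials must equal the number of blank segments")
    for slot, material in zip(template.slots, materials):
        slot.content = material
    return template


template = Template(
    music="sports_bgm.mp3",
    stickers=["sticker_1.png"],
    filters=["warm_filter"],
    transitions=["flip", "fade", "zoom"],
    slots=[MaterialSlot(i) for i in range(3)],
)
filled = fill_template(template, ["clip1.mp4", "photo1.jpg", "clip2.mp4"])
# A real system would then render the filled template on a timeline to produce the output video.
```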
According to the method provided by this embodiment, in response to a video production request, a plurality of video templates matching the material carried by the request are retrieved from a pre-created video template library and displayed to a user; in response to a selection operation in which the user selects a target video template from the plurality of video templates, an editing interface for the target video template is displayed; in response to an editing operation performed by the user on the target video template through the editing interface, an edited video template is generated; and finally, a video corresponding to the material is generated using the material and the edited video template. In this way, a plurality of video templates can be presented to the user according to the material carried by the video production request, so that the user can pick a preferred template from them. In addition, editing operations on the template are supported, so that the user can adjust the selected template according to personal preference; in other words, the user can customize a personalized video template on the basis of an existing one. This better satisfies the user's personalized requirements, helps the user produce the desired personalized video more quickly and conveniently, and improves the user experience.
Optionally, after the video corresponding to the material is generated, the method may further include:
(1) Displaying a video display interface for presenting the video, where the video display interface is provided with a publish control.
(2) Publishing the video to a designated video publishing platform in response to a publishing operation performed by the user through the publish control.
Specifically, the video display interface is provided with a publish control, through which the user can publish the generated video to a designated video publishing platform. In other words, the video generation device provides a one-click publishing function, which helps the user publish the video conveniently and quickly after it is generated and further improves the user experience.
The designated video publishing platform is specified according to actual requirements and is not limited in this embodiment.
When a specified operation is performed on the publish control, the video generation device detects the publishing operation triggered by the user and publishes the video. The specified operation is set according to actual needs and is not limited in this embodiment. For example, in one embodiment, the specified operation may be a click; in that case, when the user clicks the publish control, a publishing operation is triggered, and the video generation device detects it and publishes the generated video.
According to the video generation method provided by this embodiment, after the video is generated, a video display interface provided with a publish control is displayed. The publish control enables one-click publishing, so that the user can publish the video conveniently and quickly, which improves the user experience.
Fig. 3 is a flowchart of a second embodiment of the video generation method provided by the present application. Referring to Fig. 3, on the basis of the foregoing embodiment, the process of creating each video template in the video template library in the method provided by this embodiment may include the following steps.
S301: Receive a video template creation request, where the video template creation request instructs the creation of a video template of a specified style.
It should be noted that when a designer wants to produce a video template, the designer may trigger a video template creation request to instruct the video generation device to create the template.
In particular, the video template creation request may carry the style of the video template that the designer wants to create, to indicate that a template of that style should be created. For example, in one embodiment, a video template creation request requests the creation of a sports-style video template.
S302: Select a music element matching the specified style from a music element library in a preset multi-class element library, where the multi-class element library includes at least the music element library, a transition effect element library, and a sticker element library.
The preset multi-class element library includes a plurality of element libraries of different types; the types are set according to actual needs and are not limited in this embodiment. For example, in one embodiment, the multi-class element library includes at least a music element library, a transition effect element library, and a sticker element library. In another embodiment, it may further include a filter element library and a special video clip element library. In yet another embodiment, it may additionally include a keyframe element library, and so on.
Specifically, the music element library contains a plurality of different music elements and the attribute information of each music element. A music element is the background music of a video; its attribute information may include style information, beat point information, playback duration information, and so on.
In particular, a music element may be one selected by a designer according to style and cleared for copyright. After a music element is selected, its beat point information and playback duration information can be obtained through a beat detection model; the methods for obtaining the other attribute information are not described here.
Further, the transition effect element library contains a plurality of different transition effect elements and the attribute information of each transition effect element. A transition effect element is the effect used when several pieces of user material (mainly pictures, videos, and the like) are joined together, for example flip, wave, zoom-in, zoom-out, or fade. The attribute information of a transition effect element may include style information, priority information, and so on, where the priority information indicates the rendering order of the transition effect element.
Further, the sticker element library contains a plurality of different sticker elements and the attribute information of each sticker element. A sticker element may be a dynamic sticker and/or a static sticker; note that text content is also a sticker element. The attribute information of a sticker element may include style information, position information, size information, priority information, and so on, where the priority information indicates the rendering order of the sticker element.
Further, the filter element library contains a plurality of different filter elements and the attribute information of each filter element. A filter element refers to a tone transformation or similar processing applied to the material. The attribute information of a filter element may include style information, priority information, and so on, where the priority information indicates the rendering order of the filter element.
Specifically, the transition effect elements, sticker elements, and filter elements may be designed by a designer using professional software such as AE (Adobe After Effects).
Further, the special video clip element library contains a plurality of different special video clip elements and the attribute information of each special video clip. A special video clip element is an independent clip, which may itself be a video, such as a special video scene. Its attribute information may include the style information of the special video clip, and so on.
It should be noted that, as described above, after the designer collects the various elements and the attribute information of each element, the collected elements are classified and stored in the corresponding libraries, which completes the construction of the element libraries.
Specifically, in this step, the music element matching the specified style may be selected according to the degree of matching between the attribute information of each music element in the music element library and the specified style. For example, in one embodiment, the specified style is the sports style; in that case, music element 1, whose style information is the sports style, is selected from the music element library.
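A minimal sketch of how the element libraries and the style-based selection of a music element could look is shown below; the dictionary layout, field names, and exact-match rule are illustrative assumptions rather than the patent's implementation:

```python
# Illustrative element libraries keyed by element name, each entry carrying
# attribute information. Layout and field names are assumed for illustration.

music_library = {
    "music_1": {"style": "sports", "beat_points": [2.0, 4.5, 7.0], "duration": 15.0},
    "music_2": {"style": "relaxed", "beat_points": [3.0, 6.0], "duration": 20.0},
}
transition_library = {
    "flip": {"style": "sports", "priority": 3},
    "fade": {"style": "relaxed", "priority": 3},
    "zoom": {"style": "sports", "priority": 3},
    "wave": {"style": "sports", "priority": 3},
}
sticker_library = {
    "sticker_fire": {"style": "sports", "priority": 1, "position": (0.1, 0.1), "size": (0.2, 0.2)},
    "sticker_leaf": {"style": "relaxed", "priority": 1, "position": (0.8, 0.1), "size": (0.1, 0.1)},
}


def select_by_style(library: dict, style: str) -> list[str]:
    """Return the names of elements whose style attribute matches the requested style."""
    return [name for name, attrs in library.items() if attrs.get("style") == style]


# Selecting a sports-style music element, as in the example above.
sports_music = select_by_style(music_library, "sports")  # ['music_1']
```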
S303: Select, from the other element libraries, target elements matching the attribute information of the music element, where the other element libraries are the element libraries in the multi-class element library other than the music element library.
Continuing the example above, the attribute information of music element 1 indicates that it is a sports-style music element. For example, in one embodiment in which the multi-class element library includes the music element library, the transition effect element library, and the sticker element library, this step selects, from the transition effect element library and the sticker element library respectively, transition effect elements and sticker elements whose attribute information indicates the sports style.
Building on the example above, in another possible implementation in which the multi-class element library further includes a filter element library and a special video clip element library, this step additionally selects, from those two libraries respectively, filter elements and special video clips whose attribute information indicates the sports style.
S304: Assemble the music element and the target elements on a timeline to obtain a video template.
It should be noted that timeline assembly refers to splicing the different elements (for example, the music element, transition effect elements, and sticker elements) together in chronological order to create a complete video work.
Specifically, when the different elements are assembled, one stacking order may be selected at random from the possible stacking orders, or each possible stacking order may be used once to produce a corresponding assembly. In one embodiment, for example, the elements to be assembled include a music element, transition effect elements, and sticker elements; in that case, the assembly may stack the sticker elements first and then the transition effect elements, or stack the transition effect elements first and then the sticker elements.
Further, in one possible implementation of the present application, the video template creation request carries a specified stacking order; in that case, when the music element and the target elements are assembled on the timeline, they may be assembled according to the specified stacking order to obtain the video template.
For example, the stacking order can be specified together with the video template creation request, so that the request carries the specified stacking order; the assembly is then performed in that order.
Optionally, in one possible implementation of the present application, when the attribute information of each transition effect in the transition effect library and of each sticker in the sticker element library includes priority information, the stacking order corresponding to the target elements may first be determined according to the priority of each target element, and the music element and the target elements are then assembled on the timeline according to that stacking order.
For example, in one embodiment, the elements to be assembled include a music element, a sticker element, a filter element, and a transition effect element, and the priorities from high to low are the sticker element, the filter element, and the transition effect element; the stacking order is then determined as: stack the sticker element first, then the filter element, and finally the transition effect element.
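The following sketch illustrates the priority-driven stacking described above; the Element structure, the numeric priority convention (smaller value = higher priority), and the list-of-layers output are assumptions for illustration only:

```python
# Illustrative sketch: determine the stacking order from per-element priority
# information, then assemble the elements on a timeline.
from dataclasses import dataclass


@dataclass
class Element:
    name: str
    kind: str        # "music", "sticker", "filter", "transition"
    priority: int    # smaller value = higher priority = stacked earlier


def assemble_timeline(music: Element, targets: list[Element]) -> list[Element]:
    """Stack target elements by priority (high to low) on top of the music track."""
    ordered_targets = sorted(targets, key=lambda e: e.priority)
    # The music element spans the whole timeline; visual elements are layered above it.
    return [music] + ordered_targets


music = Element("music_1", "music", priority=0)
targets = [
    Element("flip_transition", "transition", priority=3),
    Element("warm_filter", "filter", priority=2),
    Element("sticker_fire", "sticker", priority=1),
]
template_layers = assemble_timeline(music, targets)
print([e.name for e in template_layers])
# ['music_1', 'sticker_fire', 'warm_filter', 'flip_transition']
```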
According to the video generation method provided by this embodiment, a video template creation request instructing the creation of a video template of a specified style is received; a music element matching the specified style is selected from the music element library in a preset multi-class element library, where the multi-class element library includes at least the music element library, a transition effect element library, and a sticker element library; target elements matching the attribute information of the music element are then selected from the other element libraries, that is, the element libraries other than the music element library; and the music element and the target elements are assembled on a timeline to obtain a video template. In this way, different elements are stored in different element libraries, so that the elements are well decoupled; moreover, based on the different element libraries, elements can be recombined in a personalized manner to create a rich set of video templates, which better meets users' personalized requirements.
Fig. 4 is a flowchart of a third embodiment of the video generation method provided by the present application. Referring to Fig. 4, on the basis of the foregoing embodiments, selecting target elements matching the attribute information of the music element from the transition effect element library may include the following steps.
S401: Determine, according to the beat point information in the attribute information of the music element, the number of target transition effects to be selected from the transition effect element library.
The beat point information includes the number of beat points and the position of each beat point. Specifically, the positions and number of the transition effects are set according to the beat point information. For example, in one embodiment, the video template places a transition effect element at the position of each beat point of the music element; in other words, the number of transition effect elements equals the number of beat points of the music element. For instance, if the music element contains beat point 1, beat point 2, and beat point 3, then transition effect 1 is placed at the position of beat point 1, transition effect 2 at the position of beat point 2, and transition effect 3 at the position of beat point 3.
S402: Select, from the transition effect element library, that number of target transition effects matching the music element according to the style information in the attribute information of the music element and the style information in the attribute information of each transition effect in the library.
Specifically, continuing the example above, in one embodiment the selected music element contains three beat points and its style information is the sports style; in that case, three sports-style transition effect elements are selected from the transition effect element library.
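A small sketch of this beat-point-driven selection follows; the random choice among style-matching effects and the data layout are assumptions used only for illustration:

```python
# Illustrative sketch: pick as many transition effects as the music has beat
# points, restricted to effects whose style matches the music's style.
import random


def select_transitions(music_attrs: dict, transition_library: dict) -> list[str]:
    """Select len(beat_points) style-matching transition effects from the library."""
    count = len(music_attrs["beat_points"])  # one transition effect per beat point
    candidates = [name for name, attrs in transition_library.items()
                  if attrs.get("style") == music_attrs.get("style")]
    if len(candidates) < count:
        raise ValueError("not enough style-matching transition effects in the library")
    return random.sample(candidates, count)  # assumed policy: random pick among matches


music_attrs = {"style": "sports", "beat_points": [2.0, 4.5, 7.0]}
transition_library = {
    "flip": {"style": "sports"}, "zoom": {"style": "sports"},
    "wave": {"style": "sports"}, "fade": {"style": "relaxed"},
}
print(select_transitions(music_attrs, transition_library))  # e.g. ['flip', 'wave', 'zoom']
```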
According to the video generation method provided by this embodiment, the number of target transition effects to be selected from the transition effect element library is determined according to the beat point information in the attribute information of the music element, and that number of transition effects matching the music element are selected according to the style information of the music element and the style information of each transition effect in the library. By adapting the number and style of the transition effects to the music element, the resulting video template is better designed aesthetically and better meets users' personalized requirements.
Corresponding to the foregoing embodiments of the video generation method, the present application also provides embodiments of a video generation apparatus.
The embodiments of the video generation apparatus of the present application can be applied to a video generation device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, the apparatus, as a logical entity, is formed by the processor of the video generation device in which it is located reading the corresponding computer program instructions from a non-volatile memory into memory and running them. In terms of hardware, Fig. 5 is a hardware structure diagram of a video generation device in which the video generation apparatus provided by the present application is located. In addition to the processor, memory, network interface, and non-volatile memory shown in Fig. 5, the video generation device in an embodiment may further include other hardware according to the actual function of the apparatus, which is not described here.
Fig. 6 is a schematic diagram of a first embodiment of a video generation apparatus provided by the present application. Referring to Fig. 6, the apparatus provided by this embodiment may include a search module 610, a display module 620, an editing module 630, and a processing module 640, wherein:
the search module 610 is configured to, in response to a video production request, search a pre-created video template library for a plurality of video templates matching the material carried by the video production request, and display the plurality of video templates to a user;
the display module 620 is configured to display an editing interface for a target video template in response to a selection operation in which the user selects the target video template from the plurality of video templates;
the editing module 630 is configured to generate an edited video template in response to an editing operation performed by the user on the target video template through the editing interface;
the processing module 640 is configured to generate a video corresponding to the material by using the material and the edited video template.
The apparatus provided by this embodiment is configured to perform the steps of the method embodiment shown in Fig. 1; its implementation principle and effects are similar and are not repeated here.
Optionally, the search module 610 is specifically configured to obtain description information of the material and to search the video template library for a plurality of video templates matching the material according to the description information of the material and the attribute information of each video template in the library, where the description information characterizes the features of the material.
Fig. 7 is a schematic diagram of a second embodiment of a video generation apparatus provided by the present application. Referring to Fig. 7, on the basis of the above embodiment, the apparatus further includes a creation module 650, wherein:
the creation module is configured to create the video template library in advance.
Specifically, the creation module 650 is specifically configured to:
receive a video template creation request, where the video template creation request instructs the creation of a video template of a specified style;
select a music element matching the specified style from a music element library in a preset multi-class element library, where the multi-class element library includes at least the music element library, a transition effect element library, and a sticker element library;
select, from the other element libraries, target elements matching the attribute information of the music element, where the other element libraries are the element libraries in the multi-class element library other than the music element library;
and assemble the music element and the target elements on a timeline to obtain a video template.
Optionally, the creation module 650 is further specifically configured to select target elements matching the attribute information of the music element from the transition effect element library by:
determining, according to the beat point information in the attribute information of the music element, the number of target transition effects to be selected from the transition effect element library;
and selecting, from the transition effect element library, that number of target transition effects matching the music element according to the style information in the attribute information of the music element and the style information in the attribute information of each transition effect in the library.
Optionally, the video template creation request carries a specified stacking order, and the creation module 650 is specifically configured to assemble the music element and the target elements on a timeline according to the specified stacking order to obtain a video template.
Optionally, the attribute information of each transition effect in the transition effect library and of each sticker in the sticker element library includes priority information, and the creation module 650 is specifically configured to determine the stacking order corresponding to the target elements according to the priority information of each target element and to assemble the music element and the target elements on a timeline according to that stacking order to obtain a video template.
Optionally, the display module 620 is further configured to display, after the video is generated, a video display interface for presenting the video, where the video display interface is provided with a publish control;
and the processing module 640 is further configured to publish the video to a designated video publishing platform in response to a publishing operation performed by the user through the publish control.
The implementation of the functions and roles of the units in the above apparatus is described in detail in the implementation of the corresponding steps of the above method and is not repeated here.
Since the apparatus embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant details. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present application, and a person of ordinary skill in the art can understand and implement it without creative effort.
The foregoing describes only preferred embodiments of the present application and is not intended to limit the present application; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (10)

1. A video generation method, comprising:
in response to a video production request, searching a pre-created video template library for a plurality of video templates matching the material carried by the video production request, and displaying the plurality of video templates to a user;
in response to a selection operation in which the user selects a target video template from the plurality of video templates, displaying an editing interface for the target video template;
in response to an editing operation performed by the user on the target video template through the editing interface, generating an edited video template;
and generating a video corresponding to the material by using the material and the edited video template.
2. The method according to claim 1, wherein searching the pre-created video template library for a plurality of video templates matching the material comprises:
obtaining description information of the material, where the description information characterizes the features of the material;
and searching the video template library for a plurality of video templates matching the material according to the description information of the material and the attribute information of each video template in the video template library.
3. The method according to claim 1, wherein the process of creating each video template in the video template library comprises:
receiving a video template creation request, where the video template creation request instructs the creation of a video template of a specified style;
selecting a music element matching the specified style from a music element library in a preset multi-class element library, where the multi-class element library includes at least the music element library, a transition effect element library, and a sticker element library;
selecting, from the other element libraries, target elements matching the attribute information of the music element, where the other element libraries are the element libraries in the multi-class element library other than the music element library;
and assembling the music element and the target elements on a timeline to obtain a video template.
4. The method according to claim 3, wherein selecting target elements matching the attribute information of the music element from the transition effect element library comprises:
determining, according to the beat point information in the attribute information of the music element, the number of target transition effects to be selected from the transition effect element library;
and selecting, from the transition effect element library, that number of target transition effects matching the music element according to the style information in the attribute information of the music element and the style information in the attribute information of each transition effect in the transition effect element library.
5. The method according to claim 3 or 4, wherein the video template creation request carries a specified stacking order, and assembling the music element and the target elements on a timeline to obtain a video template comprises:
assembling the music element and the target elements on a timeline according to the specified stacking order to obtain a video template.
6. The method according to claim 3 or 4, wherein the attribute information of each transition effect in the transition effect element library and of each sticker in the sticker element library includes priority information, and assembling the music element and the target elements on a timeline to obtain a video template comprises:
determining a stacking order corresponding to the target elements according to the priority information of each of the target elements;
and assembling the music element and the target elements on a timeline according to the stacking order to obtain a video template.
7. The method according to claim 1, wherein after the video corresponding to the material is generated, the method further comprises:
displaying a video display interface for presenting the video, where the video display interface is provided with a publish control;
and publishing the video to a designated video publishing platform in response to a publishing operation performed by the user through the publish control.
8. A video generation apparatus, comprising a search module, a display module, an editing module, and a processing module, wherein:
the search module is configured to, in response to a video production request, search a pre-created video template library for a plurality of video templates matching the material carried by the video production request, and display the plurality of video templates to a user;
the display module is configured to display an editing interface for a target video template in response to a selection operation in which the user selects the target video template from the plurality of video templates;
the editing module is configured to generate an edited video template in response to an editing operation performed by the user on the target video template through the editing interface;
and the processing module is configured to generate a video corresponding to the material by using the material and the edited video template.
9. A video generation device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1-7 when executing the program.
10. A storage medium storing a program, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
CN202311145383.7A 2023-09-06 2023-09-06 Video generation method, device and equipment Pending CN117201858A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311145383.7A CN117201858A (en) 2023-09-06 2023-09-06 Video generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311145383.7A CN117201858A (en) 2023-09-06 2023-09-06 Video generation method, device and equipment

Publications (1)

Publication Number Publication Date
CN117201858A true CN117201858A (en) 2023-12-08

Family

ID=89001003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311145383.7A Pending CN117201858A (en) 2023-09-06 2023-09-06 Video generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN117201858A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117615084A (en) * 2024-01-22 2024-02-27 南京爱照飞打影像科技有限公司 Video synthesis method and computer readable storage medium
CN117615084B (en) * 2024-01-22 2024-03-29 南京爱照飞打影像科技有限公司 Video synthesis method and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN112449231B (en) Multimedia file material processing method and device, electronic equipment and storage medium
EP3475848B1 (en) Generating theme-based videos
US8548249B2 (en) Information processing apparatus, information processing method, and program
US8958662B1 (en) Methods and systems for automating insertion of content into media-based projects
US7752548B2 (en) Features such as titles, transitions, and/or effects which vary according to positions
US7908556B2 (en) Method and system for media landmark identification
CN104581380B (en) A kind of method and mobile terminal of information processing
CN103702039B (en) image editing apparatus and image editing method
AU2006249239B2 (en) A method of ordering and presenting images with smooth metadata transitions
JP4702743B2 (en) Content display control apparatus and content display control method
CN108028054A (en) The Voice & Video component of audio /video show to automatically generating synchronizes
CN108712665A (en) A kind of generation method, device, server and the storage medium of live streaming list
WO2016029745A1 (en) Method and device for generating video slide
JP5988798B2 (en) Image display apparatus, control method therefor, program, and storage medium
JP2004048735A (en) Method and graphical user interface for displaying video composition
CN111541946A (en) Automatic video generation method and system for resource matching based on materials
KR20070095431A (en) Multimedia presentation creation
CN117201858A (en) Video generation method, device and equipment
CN113395605B (en) Video note generation method and device
CN104350455A (en) Causing elements to be displayed
CN112004031A (en) Video generation method, device and equipment
US7610554B2 (en) Template-based multimedia capturing
JP5955035B2 (en) Video generation apparatus and control method thereof
WO2006067659A1 (en) Method and apparatus for editing program search information
JP2014207527A (en) Video generation device, and method of controlling the same

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination