CN116309964A - Video generation method, device, equipment and storage medium - Google Patents

Video generation method, device, equipment and storage medium Download PDF

Info

Publication number
CN116309964A
CN116309964A CN202111584766.5A
Authority
CN
China
Prior art keywords
data
video
animation
template
template data
Prior art date
Legal status
Pending
Application number
CN202111584766.5A
Other languages
Chinese (zh)
Inventor
陈小明
黄先帅
王传海
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN202111584766.5A
Publication of CN116309964A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to the field of video production technologies, and in particular to a video generation method, apparatus, device, and storage medium. In the disclosed embodiment, initial animation template data is first acquired to prepare for subsequent material addition; the material path of each material to be added for making the video is then added to the initial animation template data to obtain target video template data; finally, the target video template data is parsed to obtain the target animation video. The user does not need to edit the video: the target animation video is generated automatically once the materials for making the video are added to the initial animation template data.

Description

Video generation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of video production technologies, and in particular, to a video generation method, apparatus, device, and storage medium.
Background
Short-video apps are among the most popular applications on the market, but making a good short video requires substantial time to learn, and the production process is overly complex and inefficient.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a video generation method, a device, equipment and a storage medium, and aims to solve the technical problem that the process of making short videos is too complex in the prior art.
To achieve the above object, the present invention provides a video generating method, comprising the steps of:
when an animation production instruction is received, obtaining initial animation template data;
determining a material to be added according to the animation production instruction, and acquiring a material path of the material to be added;
updating the initial animation template data according to the material path to obtain target video template data;
and carrying out data analysis on the target video template data to obtain a target animation video.
optionally, before the initial animation template data is obtained when the animation instruction is received, the method further includes:
when a template making instruction is received, determining initial template data according to the template making instruction;
and carrying out data packaging according to the initial template data to obtain initial animation template data.
Optionally, the determining initial template data according to the template making instruction when the template making instruction is received includes:
When a template making instruction is received, determining original template data according to the template making instruction;
generating an original animation template by the original template data through a preset template design script;
and carrying out data export on the original animation template based on a preset data format to obtain initial template data.
Optionally, the data packaging according to the initial template data to obtain initial animation template data includes:
determining animation core data according to the template making instruction, wherein the animation core data comprises: animation template duration and core image number;
and updating the initial template data according to the animation core data, and carrying out data packaging on the updated initial template data to obtain initial animation template data.
Optionally, the deriving the data of the original animation template based on a preset data format to obtain initial template data includes:
performing data export based on a preset data format according to the original animation template to obtain template data to be detected;
analyzing the template data to be detected to obtain a template data type;
judging whether the template data type meets a preset data condition or not;
And when the template data type meets the preset data condition, taking the template data to be detected as the original animation template.
Optionally, the data parsing of the target video template data is performed to obtain a target animation video, including:
carrying out data analysis on the target video template data through a preset data analysis model to obtain a preview animation video;
extracting adjacent core images in the preview animation video, and generating frame images according to the adjacent core images;
and generating a target animation video based on the frame image and the preview animation video.
Optionally, after the data analysis is performed on the target video template data to obtain the target animation video, the method further includes:
receiving an audio insertion instruction input by a user, and determining local background music and music insertion time according to the audio insertion instruction;
and generating an animation video based on the music insertion time, the local background music and the target animation video.
Optionally, the generating the animated video based on the music insertion time, the local background music and the target animated video includes:
acquiring the playing time of a current target animation video;
And when the playing time of the current target animation video reaches the music insertion time, generating the animation video according to the local background music and the target animation video.
Optionally, the generating the animation video according to the local background music and the target animation video includes:
and generating the animation video by the local background music and the target animation video through a preset video synthesis strategy.
Optionally, the material to be added includes an image to be added and audio to be added, and the material path includes an image path and an audio path;
the step of determining the material to be added according to the animation production instruction and obtaining the material path of the material to be added comprises the following steps:
carrying out data analysis on the initial animation template data to obtain the number of core images;
displaying the number of the core images to a user;
receiving images to be added input by a user based on the number of the core images, and acquiring an image path of the images to be added;
and determining audio to be added according to the animation instruction, and acquiring an audio path of the audio to be added.
Optionally, the receiving the images to be added input by the user based on the number of the core images and acquiring an image path of the images to be added includes:
Receiving images to be added input by a user based on the number of the core images, and acquiring the number of the images to be added;
comparing the number of images with the number of core images;
and when the number of the images is the same as the number of the core images, acquiring an image path of the image to be added.
Optionally, after comparing the number of images with the number of core images, the method further includes:
when the number of the images is different from the number of the core images, generating reminding information, and sending the reminding information to a user side for display;
and when receiving the images to be added which are input by the user based on the reminding information, executing the step of comparing the number of the images with the number of the core images.
In addition, in order to achieve the above object, the present invention also proposes a video generating apparatus including: the system comprises a data acquisition module, a path determination module, a data updating module and a data analysis module;
the data acquisition module is used for acquiring initial animation template data when receiving an animation production instruction;
the path determining module is used for determining materials to be added according to the animation production instruction and obtaining a material path of the materials to be added;
The data updating module is used for updating the initial animation template data according to the material path to obtain target video template data;
and the data analysis module is used for carrying out data analysis on the target video template data to obtain a target animation video.
Optionally, the data acquisition module is further configured to determine initial template data according to the template making instruction when the template making instruction is received;
and carrying out data packaging according to the initial template data to obtain initial animation template data.
Optionally, the data acquisition module is further configured to determine original template data according to the template making instruction when the template making instruction is received;
generating an original animation template by the original template data through a preset template design script;
and carrying out data export on the original animation template based on a preset data format to obtain initial template data.
Optionally, the data obtaining module is further configured to determine animation core data according to the template making instruction, where the animation core data includes: animation template duration and core image number;
and updating the initial template data according to the animation core data, and carrying out data packaging on the updated initial template data to obtain initial animation template data.
Optionally, the data acquisition module is further configured to perform data export based on a preset data format according to the original animation template, so as to obtain template data to be detected;
analyzing the template data to be detected to obtain a template data type;
judging whether the template data type meets a preset data condition or not;
and when the template data type meets the preset data condition, taking the template data to be detected as the original animation template.
Optionally, the data analysis module is further configured to perform data analysis on the target video template data through a preset data analysis model to obtain a preview animation video;
extracting adjacent core images in the preview animation video, and generating frame images according to the adjacent core images;
and generating a target animation video based on the frame image and the preview animation video.
In addition, in order to achieve the above object, the present invention also proposes a video generating apparatus including: a memory, a processor, and a video generation program stored on the memory and executable on the processor, the video generation program configured to implement the steps of the video generation method as described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a video generation program which, when executed by a processor, implements the steps of the video generation method as described above.
The invention discloses: when an animation production instruction is received, obtaining initial animation template data; determining a material to be added according to the animation production instruction and acquiring a material path of the material to be added; updating the initial animation template data according to the material path to obtain target video template data; and parsing the target video template data to obtain a target animation video. Compared with the prior art, the initial animation template data is first acquired to prepare for subsequent material addition; the material path of each material to be added for making the video is then added to the initial animation template data to obtain the target video template data; and the target video template data is parsed to obtain the target animation video. No video editing by the user is required: the target animation video is generated automatically once the materials for making the video are added to the initial animation template data, which avoids the technical problem in the prior art that the process of making short videos is overly complex.
Drawings
FIG. 1 is a schematic diagram of a video generating device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a video generating method according to a first embodiment of the present invention;
FIG. 3 is a flowchart of a video generating method according to a second embodiment of the present invention;
FIG. 4 is a flowchart of a third embodiment of a video generating method according to the present invention;
fig. 5 is a block diagram of a first embodiment of the video generating apparatus of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a video generating device in a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the video generating apparatus may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a high-speed random access Memory (Random Access Memory, RAM) or a stable nonvolatile Memory (NVM), such as a disk Memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation on the video generating apparatus, and may include more or fewer components than shown, or may combine certain components, or may be a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a video generation program may be included in the memory 1005 as one type of storage medium.
In the video generating apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the video generating apparatus of the present invention may be provided in a video generating apparatus that calls a video generating program stored in the memory 1005 through the processor 1001 and executes the video generating method provided by the embodiment of the present invention.
An embodiment of the present invention provides a video generating method, referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of a video generating method according to the present invention.
In this embodiment, the video generating method includes the following steps:
step S10: and when an animation production instruction is received, acquiring initial animation template data.
It should be noted that, the execution body of the method in this embodiment may be an animation generating device having functions of data processing, network communication and program running, such as a mobile phone, a computer, or other electronic devices capable of implementing the same or similar functions, which is not limited in this embodiment, and in the following embodiments, a control computer is exemplified.
It should be noted that the animation production instruction may be operation information sent by the user through a client, and is used for making the image video according to the user's needs. The client may be a control terminal based on any of several operating systems, for example a control terminal based on iOS, on Android, or on the Web; this embodiment imposes no particular limitation.
It will be appreciated that the initial animation template data is used to generate an animation video from user-added material, where the initial animation template data may be template data designed by a developer to generate the animation video.
In a specific implementation, when a user needs to make a video, the user only needs to add the required materials to the initial animation template data, and the video can then be generated automatically.
Step S20: and determining the material to be added according to the animation production instruction, and acquiring a material path of the material to be added.
It should be understood that the material to be added may be a picture to be added or audio to be added, where the picture to be added refers to a picture used for making video in the local client, and the audio to be added refers to audio used for configuring synchronous sound for the made video in the local client.
It is easy to understand that the material path refers to the storage location of the material to be added in the local client.
In a specific implementation, when the material to be added is stored in the local client, the local client generates a corresponding index identifier according to the material's storage location, and the storage path of the material (i.e., the material path) can be found from that identifier. For example, if a picture to be added is stored in the pictures-to-add folder inside the materials-to-add folder on the C drive, its material path is C drive/material to be added/picture to be added; this embodiment is not particularly limited thereto.
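The patent gives no code for building material paths; the following minimal Python sketch illustrates how a client might join a storage location into a material path like the C-drive example above. The helper name and folder layout are illustrative assumptions, not from the patent.

```python
import os

def material_path(root: str, *parts: str) -> str:
    # Hypothetical helper: join a storage root and its sub-folders into a
    # material path such as "C:/materials-to-add/pictures-to-add/A.png".
    return os.path.join(root, *parts)

# Mirrors the example above: a picture stored under the materials-to-add folder
picture_path = material_path("C:/", "materials-to-add", "pictures-to-add", "A.png")
```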
Further, the step S20 includes:
the material to be added comprises: the material path comprises an image path and an audio path;
The step of determining the material to be added according to the animation production instruction and obtaining the material path of the material to be added comprises the following steps:
carrying out data analysis on the initial animation template data to obtain the number of core images;
displaying the number of the core images to a user;
receiving images to be added input by a user based on the number of the core images, and acquiring an image path of the images to be added;
and determining audio to be added according to the animation instruction, and acquiring an audio path of the audio to be added.
It should be noted that the number of core images refers to the number of images used for making the video. It is the most basic condition for making the video and reminds the user how many pictures are needed, so that the user inputs the corresponding number of images to be added.
In a specific implementation, if the number of images to be added input by a user does not meet the requirement of the number of core pictures, a video file cannot be generated, so the step of receiving the images to be added input by the user based on the number of core images and acquiring the image paths of the images to be added includes: receiving images to be added input by a user based on the number of the core images, and acquiring the number of the images to be added; comparing the number of images with the number of core images; and when the number of the images is the same as the number of the core images, acquiring an image path of the image to be added.
Meanwhile, if the number of the images to be added is different from the number of the core images, in order to ensure that the video file can be successfully generated, reminding information can be generated when the number of the images is different from the number of the core images, and the reminding information is sent to a user side for display; and when receiving the images to be added which are input by the user based on the reminding information, executing the step of comparing the number of the images with the number of the core images.
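The count check and reminder flow described above can be sketched as follows; this is an illustrative assumption of the logic, since the patent specifies only the behavior, not an implementation.

```python
def check_image_count(images_to_add, core_image_count):
    # Compare the number of user-supplied images with the number of core
    # images parsed from the template; only when they match may the image
    # paths be collected, otherwise a reminder is returned for display.
    if len(images_to_add) == core_image_count:
        return True, None
    reminder = (f"Expected {core_image_count} images for this template, "
                f"got {len(images_to_add)}; please add the missing ones.")
    return False, reminder
```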
In addition, if the audio data is added at the same time of adding the picture, the data addition of the audio to be added can be completed through an audio path for adding the audio to be added.
In a specific implementation, when the audio to be added is stored in the local client, the local client generates a corresponding index identifier according to the audio's storage location, and the storage path of the material (i.e., the audio path) can be found from that identifier. For example, if the audio to be added is stored in the audio-to-add folder inside the materials-to-add folder on the C drive, its material path is C drive/material to be added/audio to be added; this embodiment is not particularly limited thereto.
Step S30: and updating the initial animation template data according to the material path to obtain target video template data.
It should be noted that, the target video template data refers to data obtained after updating the initial animation template data according to a material path, and the target video template data is used for subsequent data analysis to obtain a video made by a user according to a material to be added.
In a specific implementation, the initial animation template data can leave a data interface for inserting a material path, and a user can add or delete the data interface for the material path according to the needs in consideration of inserting a plurality of pictures into the initial animation template data.
For example: three existing pictures need to be made into a corresponding video. The storage location of picture A in the local client is C drive/material to be added/picture to be added/A, the storage location of picture B is C drive/material to be added/picture to be added/B, and the storage location of picture C is C drive/material to be added/picture to be added/C; the paths C drive/material to be added/picture to be added/A, C drive/material to be added/picture to be added/B, and C drive/material to be added/picture to be added/C are then added to the initial animation template data respectively.
Further, if there are a plurality of materials to be added, the order in which they are added must be considered. For example, if the picture paths corresponding to pictures A, B, and C are to be added to the initial animation template data, the adding order is determined from the user's operation information; with the order A-C-B, the paths C drive/material to be added/picture to be added/A, C drive/material to be added/picture to be added/C, and C drive/material to be added/picture to be added/B are added to the initial animation template data in that order.
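Updating the initial animation template data with material paths (step S30) might look like the sketch below, assuming the template is Lottie-style JSON whose image assets carry Bodymovin's "u" (directory) and "p" (filename) fields; the exact data interface is an assumption, as the patent does not specify it.

```python
import json

def fill_template(template_json: str, image_paths: list) -> str:
    # Insert each material path, in the user's chosen order, into the
    # image-asset slots of a Lottie-style template export.
    data = json.loads(template_json)
    slots = [a for a in data.get("assets", []) if "p" in a]
    for slot, path in zip(slots, image_paths):
        directory, _, filename = path.rpartition("/")
        slot["u"] = directory + "/"   # asset directory
        slot["p"] = filename          # asset file name
    return json.dumps(data)

# Order A, C, B as in the example above
template = json.dumps(
    {"assets": [{"id": f"image_{i}", "u": "", "p": ""} for i in range(3)]})
target = fill_template(template, [
    "C:/material-to-add/picture-to-add/A.png",
    "C:/material-to-add/picture-to-add/C.png",
    "C:/material-to-add/picture-to-add/B.png",
])
```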
Step S40: and carrying out data analysis on the target video template data to obtain a target animation video.
The target animation video is a video file obtained by parsing the target video template data, to which the material paths have been added, through the Lottie third-party library.
In a specific implementation, if the material to be added contains only images, audio may still need to be added according to the user's needs after the target video template data is parsed into the target animation video. In that case the audio path of the audio to be added need not be added to the target video template data; the audio can be added to the video file in other ways, for example by playing the audio out loud and recording it.
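End to end, steps S10 to S40 reduce to: obtain the template data, splice in the material paths, and hand the result to a parser. The sketch below uses an injected `render` callable standing in for the Lottie parsing library, since the patent does not describe the parser's API.

```python
def generate_video(initial_template: dict, material_paths: list, render):
    # Step S30: update the initial animation template data with the
    # material paths to obtain the target video template data.
    target_template = dict(initial_template)
    target_template["materials"] = list(material_paths)
    # Step S40: "data parsing" of the target video template data; 'render'
    # is a stand-in for the Lottie library that produces the video file.
    return render(target_template)
```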
This embodiment discloses: when an animation production instruction is received, obtaining initial animation template data; determining a material to be added according to the animation production instruction and acquiring a material path of the material to be added; updating the initial animation template data according to the material path to obtain target video template data; and parsing the target video template data to obtain a target animation video. In this embodiment, the initial animation template data is first acquired to prepare for subsequent material addition; the material path of each material to be added for making the video is then added to the initial animation template data to obtain the target video template data; and the target video template data is parsed to obtain the target animation video.
Referring to fig. 3, fig. 3 is a flowchart of a video generating method according to a second embodiment of the present invention.
Based on the first embodiment, in this embodiment, before step S10, the method further includes:
Step S1: when a template making instruction is received, initial template data is determined according to the template making instruction.
It should be noted that the template making instruction refers to an operation instruction input by a designer through the control terminal to drive a preset template design script to generate template data; the initial template data refers to the template data generated by the designer through the preset script, and serves as the input for generating the initial animation template data.
Further, the step S1 includes:
when a template making instruction is received, determining original template data according to the template making instruction;
generating an original animation template by the original template data through a preset template design script;
and carrying out data export on the original animation template based on a preset data format to obtain initial template data.
It should be noted that, the original template data is data input by a designer and used for controlling a preset template design script, and the original template data is used for generating an original animation template through the preset template design script, wherein video format information and the like need to be confirmed before the original animation template is generated, so that the original animation template can determine the corresponding animation template according to the video format information.
In addition, the preset template design script refers to Adobe After Effects (AE) software; that is, the original animation template is generated from the original template data through AE.
It can be understood that the preset data format may be the JSON data format. In practice, clients that make videos may run different operating systems, and initial animation data generated for one system might not run on another, harming the user experience; the JSON format is chosen because it is compatible with all operating systems and offers better robustness. In this embodiment, the data export may be implemented through the Bodymovin plug-in for AE; if the Bodymovin plug-in is not installed in the AE software of the control terminal, it must be installed before the data export can be performed.
In a specific implementation, a designer inputs a template making instruction at the control terminal. The template making instruction includes the original template data, which is used to generate the original animation template through AE; the original animation template is then exported through the Bodymovin plug-in to obtain the initial template data.
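A Bodymovin export is a JSON document; the fragment below shows a minimal, heavily elided stand-in for such an export ("fr", "ip", "op", and "assets" are standard Lottie fields, but a real export holds far more) and how a duration could be derived from it.

```python
import json

# Minimal stand-in for a Bodymovin/AE export; real exports also carry
# layers, version info, composition dimensions, and more.
exported = json.dumps({"v": "5.7.4", "fr": 30, "ip": 0, "op": 60, "assets": []})

data = json.loads(exported)
# Duration in seconds: (out point - in point) / frame rate
duration_seconds = (data["op"] - data["ip"]) / data["fr"]
```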
In addition, the data export of the original animation template based on a preset data format to obtain initial template data includes:
performing data export based on a preset data format according to the original animation template to obtain template data to be detected;
analyzing the template data to be detected to obtain a template data type;
judging whether the template data type meets a preset data condition or not;
and when the template data type meets the preset data condition, taking the template data to be detected as the original animation template.
It should be noted that the template data to be detected is data sampled from the original animation template; it is used to check whether the exported data type meets the designer's conditions.
In a specific implementation, design errors in the template may cause its video format information to fail the preset conditions. Because checking the video format information directly on the original animation template is too complex, the exported data is analyzed instead to obtain the corresponding template data type, which improves the accuracy of template detection.
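The type check on the exported template data might be sketched as below; the required keys are an assumption based on the Lottie JSON format, since the patent only says the "template data type" is checked against a preset data condition.

```python
REQUIRED_KEYS = ("fr", "ip", "op", "assets")  # assumed preset data condition

def template_type_ok(sampled_data: dict) -> bool:
    # Analyze the sampled template data and check whether its type
    # (here: presence of the expected Lottie fields) meets the condition.
    return all(key in sampled_data for key in REQUIRED_KEYS)
```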
Step S2: and carrying out data packaging according to the initial template data to obtain initial animation template data.
It should be appreciated that the initial animation template data is used to generate the animation video from the material added by the user, wherein the initial animation template data may be template data designed by a developer to generate the animation video.
Further, the step S2 includes:
determining animation core data according to the template making instruction, wherein the animation core data comprises: animation template duration and core image number;
and updating the initial template data according to the animation core data, and carrying out data packaging on the updated initial template data to obtain initial animation template data.
It is easy to understand that the animation core data refers to data information necessary for producing an animation video, wherein the animation core data includes an animation template duration and a core image number.
The animation template duration refers to the duration of the video generated after the material is added; the core image number refers to the number of images used to make the video. The core image number is the most basic condition for making the video and reminds the user how many pictures are required, so that the user inputs the corresponding number of images to be added according to the core image number.
In a specific implementation, packaging the initial template data to obtain the initial animation template data may consist of adding the animation core data, i.e., the video production conditions of the template data, to the initial template data. For example: the animation template duration in the animation core data is set to 2 min and the core image number in the animation core data is set to 20; the initial template data is then updated according to the animation template duration and the core image number to obtain the initial animation template data, so that the user can input 20 pictures to be added and, furthermore, a 2 min video file can be generated from the materials to be added and the initial animation template data.
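The packaging step described above can be sketched as attaching the animation core data to the exported initial template data. The key names ("duration_s", "core_image_count") are assumptions for illustration; the patent only names the two fields conceptually.

```python
# Minimal sketch of the "data packaging" step: attach animation core data
# (template duration, core image count) to the exported initial template data.
def package_template(initial_template: dict,
                     duration_s: int,
                     core_image_count: int) -> dict:
    packaged = dict(initial_template)                 # keep the exported data intact
    packaged["duration_s"] = duration_s               # length of the generated video
    packaged["core_image_count"] = core_image_count   # images the user must supply
    return packaged

initial = {"fr": 30, "layers": []}
template_data = package_template(initial, duration_s=120, core_image_count=20)
print(template_data["core_image_count"])  # 20
```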
The embodiment discloses that, when a template making instruction is received, initial template data is determined according to the template making instruction, and data packaging is performed according to the initial template data to obtain initial animation template data. Because the initial template data is determined according to the designer's template making instruction and then packaged into the initial animation template data, the construction of the initial animation template data can be realized, providing the necessary conditions for a user to make a video.
Referring to fig. 4, fig. 4 is a flowchart of a third embodiment of a video generating method according to the present invention.
Based on the above second embodiment, in this embodiment, the step S40 includes:
step S401: and carrying out data analysis on the target video template data through a preset data analysis model to obtain a preview animation video.
It should be understood that the preview animation video refers to a video file obtained by performing data parsing on the target video template data through a preset data parsing model, where the preview animation video is used to adjust the sharpness of the video before the formal video file is generated, so as to improve the user's viewing experience.
Step S402: and extracting adjacent core images in the preview animation video, and generating frame images according to the adjacent core images.
It should be noted that, the adjacent core image refers to a core image input by a user forming the preview animation video, that is, a picture to be added corresponding to a picture adding path in the target video template data.
It can be understood that a frame image refers to an image obtained by expanding the scene, action, character image, etc. between adjacent core images, so that adjacent core images transition more continuously and the clarity of the video can be improved.
Step S403: and generating a target animation video based on the frame image and the preview animation video.
The embodiment discloses that data analysis is carried out on the target video template data through a preset data analysis model to obtain a preview animation video; extracting adjacent core images in the preview animation video, and generating frame images according to the adjacent core images; and generating a target animation video based on the frame image and the preview animation video. The embodiment increases the definition and resolution of the video when playing by adding frame images between adjacent core images.
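The frame-image generation above can be sketched as inserting intermediate frames between two adjacent core images. The linear cross-fade here is only an illustrative stand-in (the patent does not specify the interpolation model; real systems may use optical-flow or learned interpolation), and images are reduced to flat lists of pixel intensities so the sketch stays runnable.

```python
# Insert n intermediate frames between two adjacent core images by linear
# cross-fading; each frame blends the two images with weight t toward img_b.
def interpolate_frames(img_a, img_b, n_frames):
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)  # blend weight for img_b
        frames.append([(1 - t) * a + t * b for a, b in zip(img_a, img_b)])
    return frames

core_a = [0.0, 0.0, 0.0]   # a dark core image
core_b = [1.0, 1.0, 1.0]   # a bright core image
mid = interpolate_frames(core_a, core_b, 3)
print(len(mid))  # 3 intermediate frames
```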
In this embodiment, after the step S40, the method further includes:
step S50: and receiving an audio inserting instruction input by a user, and determining local background music and music inserting time according to the audio inserting instruction.
It should be noted that the audio insertion instruction refers to a control instruction input by the user through the user terminal, where the audio insertion instruction is used to select the audio information to be inserted and the music insertion time.
It can be appreciated that the local background music may be an audio file stored on the local client. In this embodiment, if background music is to be added to the target animation video, the target animation video and the local background music may be recorded simultaneously while the local background music is played, so as to obtain the animation video.
It is easy to understand that the music insertion time refers to the moment during playback of the target animation video at which the local background music is inserted.
Step S60: and generating an animation video based on the music insertion time, the local background music and the target animation video.
It should be understood that the animated video refers to a video file generated from a picture to be added and added background music input by a user, and the video file should be consistent with a video format and a video duration designed by a designer.
Further, the step S60 includes:
acquiring the playing time of a current target animation video;
and when the playing time of the current target animation video reaches the music insertion time, generating the animation video according to the local background music and the target animation video.
It should be noted that the current target animation video playing time refers to the time for which the target animation video has already been played. When the current playing time reaches the music insertion time, the local background music is played synchronously; at this point the local device starts recording the local background music and the target animation video simultaneously, generating the animation video and completing the video production process.
In addition, the animation video may also be generated from the local background music and the target animation video through a preset video synthesis strategy, where the preset video synthesis strategy may be to synchronously export the local background music and the target animation video from the music insertion time onward through the client's underlying video composition and editing library, so as to obtain the animation video.
In a specific implementation, the local background music may be too long, so that it has not yet ended when the target animation video finishes playing; at this point the terminal may stop recording directly, or stop exporting the video file through the client's underlying video composition and editing library.
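The timing logic of the two paragraphs above can be sketched as a small planning function: the background track starts at the insertion time and is trimmed so it never runs past the end of the video. Durations are in seconds; a real implementation would drive the client's video composition and editing library rather than return a plan dictionary, and the function name is an assumption.

```python
# Plan where the background music plays inside the target animation video.
def plan_audio_insertion(video_duration, music_duration, insert_at):
    if not 0 <= insert_at <= video_duration:
        raise ValueError("insertion time must fall within the video")
    # Trim the music so it never outlasts the video (the "too long" case).
    playable = min(music_duration, video_duration - insert_at)
    return {"music_start": insert_at, "music_end": insert_at + playable}

plan = plan_audio_insertion(video_duration=120, music_duration=200, insert_at=30)
print(plan)  # {'music_start': 30, 'music_end': 120}
```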
The embodiment discloses receiving an audio insertion instruction input by a user and determining the local background music and the music insertion time according to the audio insertion instruction, then generating an animation video based on the music insertion time, the local background music, and the target animation video. By selecting the insertion time of the local background music, this embodiment gives the target animation video background music, perfecting the composition of the animation video and improving the user experience.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores a video generation program, and the video generation program realizes the steps of the video generation method when being executed by a processor.
Because the storage medium adopts all the technical schemes of all the embodiments, the storage medium has at least all the beneficial effects brought by the technical schemes of the embodiments, and the description is omitted here.
Referring to fig. 5, fig. 5 is a block diagram showing the structure of a first embodiment of the video generating apparatus according to the present invention.
As shown in fig. 5, a video generating apparatus according to an embodiment of the present invention includes: a data acquisition module 10, a path determination module 20, a data update module 30, and a data analysis module 40;
the data acquisition module 10 is used for acquiring initial animation template data when receiving an animation production instruction.
And the path determining module 20 is used for determining the material to be added according to the animation production instruction and acquiring a material path of the material to be added.
And the data updating module 30 is used for updating the initial animation template data according to the material path to obtain target video template data.
And the data analysis module 40 is used for carrying out data analysis on the target video template data to obtain a target animation video.
The embodiment discloses obtaining initial animation template data when an animation production instruction is received; determining the material to be added according to the animation production instruction and acquiring a material path of the material to be added; and updating the initial animation template data according to the material path to obtain target video template data. In this embodiment, the initial animation template data is acquired first to facilitate subsequent material addition, the material path of the material to be added for making the video is then added into the initial animation template data to obtain the target video template data, and finally the target video template data is parsed to obtain the target animation video.
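The four modules above can be sketched as one pipeline. All function names and the shape of the template data are assumptions for illustration, since the patent describes the modules only at this level of detail.

```python
# Data acquisition module: obtain initial animation template data.
def acquire_template() -> dict:
    return {"duration_s": 120, "core_image_count": 2, "image_paths": []}

# Path determination module: pull material paths from the production instruction.
def determine_material_paths(instruction: dict) -> list:
    return instruction["images"]

# Data updating module: splice the material paths into the template data.
def update_template(template: dict, paths: list) -> dict:
    updated = dict(template)
    updated["image_paths"] = list(paths)
    return updated

# Data parsing module: stand-in for parsing + rendering the final video.
def parse_to_video(template: dict) -> str:
    return f"video({template['duration_s']}s, {len(template['image_paths'])} images)"

instruction = {"images": ["a.png", "b.png"]}
video = parse_to_video(
    update_template(acquire_template(), determine_material_paths(instruction)))
print(video)  # video(120s, 2 images)
```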
It should be understood that the foregoing is illustrative only and is not limiting, and that in specific applications, those skilled in the art may set the invention as desired, and the invention is not limited thereto.
It should be noted that the above-described working procedure is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select part or all of them according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
In addition, technical details that are not described in detail in this embodiment may refer to the video generation method provided in any embodiment of the present invention, and are not described herein again.
Furthermore, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially, or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
The invention discloses an A1 video generation method, which comprises the following steps:
When an animation production instruction is received, obtaining initial animation template data;
determining a material to be added according to the animation production instruction, and acquiring a material path of the material to be added;
updating the initial animation template data according to the material path to obtain target video template data;
and carrying out data analysis on the target video template data to obtain a target animation video.
A2, the video generating method as described in A1, wherein when receiving the animation instruction, the method further comprises, before obtaining the initial animation template data:
when a template making instruction is received, determining initial template data according to the template making instruction;
and carrying out data packaging according to the initial template data to obtain initial animation template data.
A3, the video generating method according to A2, when receiving the template making instruction, determining initial template data according to the template making instruction, including:
when a template making instruction is received, determining original template data according to the template making instruction;
generating an original animation template by the original template data through a preset template design script;
and carrying out data export on the original animation template based on a preset data format to obtain initial template data.
A4, the video generation method as described in A2, wherein the step of obtaining the initial animation template data by data packaging according to the initial template data comprises the following steps:
determining animation core data according to the template making instruction, wherein the animation core data comprises: animation template duration and core image number;
and updating the initial template data according to the animation core data, and carrying out data packaging on the updated initial template data to obtain initial animation template data.
A5, the video generation method according to A3, wherein the step of data export the original animation template based on a preset data format to obtain initial template data comprises the following steps:
performing data export based on a preset data format according to the original animation template to obtain template data to be detected;
analyzing the template data to be detected to obtain a template data type;
judging whether the template data type meets a preset data condition or not;
and when the template data type meets the preset data condition, taking the template data to be detected as the initial template data.
A6, the video generation method as described in A1, wherein the step of performing data analysis on the target video template data to obtain a target animation video comprises the following steps:
Carrying out data analysis on the target video template data through a preset data analysis model to obtain a preview animation video;
extracting adjacent core images in the preview animation video, and generating frame images according to the adjacent core images;
and generating a target animation video based on the frame image and the preview animation video.
A7, the video generation method according to A1, wherein the step of performing data analysis on the target video template data to obtain a target animation video, further comprises:
receiving an audio insertion instruction input by a user, and determining local background music and music insertion time according to the audio insertion instruction;
and generating an animation video based on the music insertion time, the local background music and the target animation video.
A8, the video generating method of A7, wherein the generating an animated video based on the music insertion time, the local background music and the target animated video includes:
acquiring the playing time of a current target animation video;
and when the playing time of the current target animation video reaches the music insertion time, generating the animation video according to the local background music and the target animation video.
A9, the video generation method of A8, the generating the animation video according to the local background music and the target animation video, includes:
and generating the animation video by the local background music and the target animation video through a preset video synthesis strategy.
A10, the video generation method as described in A1, wherein the material to be added comprises: an image to be added and audio to be added; the material path comprises an image path and an audio path;
the step of determining the material to be added according to the animation production instruction and obtaining the material path of the material to be added comprises the following steps:
carrying out data analysis on the initial animation template data to obtain the number of core images;
displaying the number of the core images to a user;
receiving images to be added input by a user based on the number of the core images, and acquiring an image path of the images to be added;
and determining the audio to be added according to the animation production instruction, and acquiring an audio path of the audio to be added.
A11, the video generating method according to A10, wherein the steps of receiving the images to be added input by the user based on the number of the core images, and obtaining the image path of the images to be added include:
Receiving images to be added input by a user based on the number of the core images, and acquiring the number of the images to be added;
comparing the number of images with the number of core images;
and when the number of the images is the same as the number of the core images, acquiring an image path of the image to be added.
A12, the video generating method according to A11, wherein after comparing the number of images with the number of core images, further comprises:
when the number of the images is different from the number of the core images, generating reminding information, and sending the reminding information to a user side for display;
and when receiving the images to be added which are input by the user based on the reminding information, executing the step of comparing the number of the images with the number of the core images.
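The comparison flow of A11/A12 can be sketched as a single check: the user's images are accepted only when their count matches the template's core image number; otherwise reminder information is produced and the user is asked to resubmit. The function and key names are assumptions for illustration.

```python
# Accept the images to be added only when the count matches the core image
# number; otherwise return reminder information for the user side to display.
def check_image_count(images, core_image_count):
    if len(images) == core_image_count:
        return {"ok": True, "paths": list(images)}
    return {
        "ok": False,
        "reminder": f"please provide exactly {core_image_count} images "
                    f"(received {len(images)})",
    }

result = check_image_count(["a.png", "b.png"], core_image_count=3)
print(result["ok"])  # False until the user adds the missing image
```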
The invention also discloses a B13 and a video generating device, wherein the video generating device comprises: the system comprises a data acquisition module, a path determination module, a data updating module and a data analysis module;
the data acquisition module is used for acquiring initial animation template data when receiving an animation production instruction;
the path determining module is used for determining materials to be added according to the animation production instruction and obtaining a material path of the materials to be added;
The data updating module is used for updating the initial animation template data according to the material path to obtain target video template data;
and the data analysis module is used for carrying out data analysis on the target video template data to obtain a target animation video.
B14, the video generating device as described in B13, wherein the data acquisition module is further configured to determine initial template data according to a template making instruction when the template making instruction is received;
and carrying out data packaging according to the initial template data to obtain initial animation template data.
B15, the video generating device as described in B14, wherein the data acquisition module is further configured to determine original template data according to the template making instruction when the template making instruction is received;
generating an original animation template by the original template data through a preset template design script;
and carrying out data export on the original animation template based on a preset data format to obtain initial template data.
B16, the video generating apparatus of B15, wherein the data obtaining module is further configured to determine animation core data according to the template making instruction, where the animation core data includes: animation template duration and core image number;
And updating the initial template data according to the animation core data, and carrying out data packaging on the updated initial template data to obtain initial animation template data.
B17, the video generating device as described in B15, wherein the data acquisition module is further configured to perform data export based on a preset data format according to the original animation template, so as to obtain template data to be detected;
analyzing the template data to be detected to obtain a template data type;
judging whether the template data type meets a preset data condition or not;
and when the template data type meets the preset data condition, taking the template data to be detected as the initial template data.
B18, the video generating device as described in B13, wherein the data parsing module is further configured to parse the target video template data through a preset data parsing model to obtain a preview animation video;
extracting adjacent core images in the preview animation video, and generating frame images according to the adjacent core images;
and generating a target animation video based on the frame image and the preview animation video.
The invention also discloses C19, a video generating device, the video generating device includes: a memory, a processor, and a video generation program stored on the memory and executable on the processor, the video generation program configured to implement the video generation method as described above.
The invention also discloses D20, a storage medium, wherein the storage medium stores a video generation program, and the video generation program realizes the video generation method when being executed by a processor.

Claims (10)

1. A video generation method, the video generation method comprising:
when an animation production instruction is received, obtaining initial animation template data;
determining a material to be added according to the animation production instruction, and acquiring a material path of the material to be added;
updating the initial animation template data according to the material path to obtain target video template data;
and carrying out data analysis on the target video template data to obtain a target animation video.
2. The video generation method of claim 1, wherein upon receiving an animation instruction, before acquiring initial animation template data, further comprising:
when a template making instruction is received, determining initial template data according to the template making instruction;
and carrying out data packaging according to the initial template data to obtain initial animation template data.
3. The video generation method of claim 2, wherein the determining initial template data according to the template creation instruction when the template creation instruction is received comprises:
When a template making instruction is received, determining original template data according to the template making instruction;
generating an original animation template by the original template data through a preset template design script;
and carrying out data export on the original animation template based on a preset data format to obtain initial template data.
4. The video generating method according to claim 2, wherein the data packaging according to the initial template data to obtain initial animation template data comprises:
determining animation core data according to the template making instruction, wherein the animation core data comprises: animation template duration and core image number;
and updating the initial template data according to the animation core data, and carrying out data packaging on the updated initial template data to obtain initial animation template data.
5. A video generating method according to claim 3, wherein said deriving data of said original animation template based on a preset data format to obtain initial template data comprises:
performing data export based on a preset data format according to the original animation template to obtain template data to be detected;
analyzing the template data to be detected to obtain a template data type;
Judging whether the template data type meets a preset data condition or not;
and when the template data type meets the preset data condition, taking the template data to be detected as the initial template data.
6. The method of claim 1, wherein the performing data parsing on the target video template data to obtain a target animated video comprises:
carrying out data analysis on the target video template data through a preset data analysis model to obtain a preview animation video;
extracting adjacent core images in the preview animation video, and generating frame images according to the adjacent core images;
and generating a target animation video based on the frame image and the preview animation video.
7. The method for generating video according to claim 1, wherein after the data parsing is performed on the target video template data to obtain the target animation video, the method further comprises:
receiving an audio insertion instruction input by a user, and determining local background music and music insertion time according to the audio insertion instruction;
and generating an animation video based on the music insertion time, the local background music and the target animation video.
8. A video generating apparatus, characterized in that the video generating apparatus comprises: the system comprises a data acquisition module, a path determination module, a data updating module and a data analysis module;
the data acquisition module is used for acquiring initial animation template data when receiving an animation production instruction;
the path determining module is used for determining materials to be added according to the animation production instruction and obtaining a material path of the materials to be added;
the data updating module is used for updating the initial animation template data according to the material path to obtain target video template data;
and the data analysis module is used for carrying out data analysis on the target video template data to obtain a target animation video.
9. A video generating apparatus, characterized in that the video generating apparatus comprises: a memory, a processor, and a video generation program stored on the memory and executable on the processor, the video generation program configured to implement the video generation method of any one of claims 1 to 7.
10. A storage medium having stored thereon a video generation program which, when executed by a processor, implements the video generation method of any one of claims 1 to 7.
CN202111584766.5A 2021-12-21 2021-12-21 Video generation method, device, equipment and storage medium Pending CN116309964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111584766.5A CN116309964A (en) 2021-12-21 2021-12-21 Video generation method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116309964A true CN116309964A (en) 2023-06-23

Family

ID=86822645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111584766.5A Pending CN116309964A (en) 2021-12-21 2021-12-21 Video generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116309964A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117271270A (en) * 2023-11-21 2023-12-22 麒麟软件有限公司 Method for monitoring Android music playing on Web operating system
CN117271270B (en) * 2023-11-21 2024-04-05 麒麟软件有限公司 Method for monitoring Android music playing on Web operating system

Similar Documents

Publication Publication Date Title
CN108228188B (en) View component processing method, electronic device and readable storage medium
CN106375696B (en) A kind of film recording method and device
CN110134600B (en) Test script recording method, device and storage medium
CN107770626A (en) Processing method, image synthesizing method, device and the storage medium of video material
CN108833787B (en) Method and apparatus for generating short video
CN106998494B (en) Video recording method and related device
CN111556329B (en) Method and device for inserting media content in live broadcast
CN110070593B (en) Method, device, equipment and medium for displaying picture preview information
CN109966742B (en) Method and device for acquiring rendering performance data in game running
US20240177365A1 (en) Previewing method and apparatus for effect application, and device, and storage medium
CN111782184B (en) Apparatus, method, apparatus and medium for performing a customized artificial intelligence production line
CN112866776A (en) Video generation method and device
CN111901695A (en) Video content interception method, device and equipment and computer storage medium
CN114880062B (en) Chat expression display method, device, electronic device and storage medium
CN113014934A (en) Product display method, product display device, computer equipment and storage medium
CN116309964A (en) Video generation method, device, equipment and storage medium
CN112637518B (en) Method, device, equipment and medium for generating simulated shooting special effect
CN108108143B (en) Recording playback method, mobile terminal and device with storage function
CN114363688A (en) Video processing method and device and non-volatile computer readable storage medium
CN113559503A (en) Video generation method, device and computer readable medium
CN108614656B (en) Information processing method, medium, device and computing equipment
CN113207037B (en) Template clipping method, device, terminal, system and medium for panoramic video animation
CN114286179B (en) Video editing method, apparatus, and computer-readable storage medium
CN112734940B (en) VR content playing modification method, device, computer equipment and storage medium
WO2021208330A1 (en) Method and apparatus for generating expression for game character

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination