CN113891017A - Automatic video generation method and device, terminal equipment and storage device

Info

Publication number: CN113891017A
Application number: CN202111336263.6A
Authority: CN (China)
Prior art keywords: video, information, lens, script, mirror
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 孙忠武
Current Assignee: Chengdu Tangmi Technology Co., Ltd.
Original Assignee: Chengdu Tangmi Technology Co., Ltd.
Priority date / Filing date: 2021-11-12
Publication date: 2022-01-04
Priority to: CN202111336263.6A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; studio devices; studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Abstract

The invention discloses an automatic video generation method and apparatus, a terminal device, and a storage device. It relates to the field of video processing and solves the problem that video generation is inconvenient in the prior art. The technical scheme is as follows: an automatic video generation method comprising the steps of acquiring a video script, the video script comprising shot information for at least one storyboard shot; comparing the shot information of the storyboard shots with the video materials in a video material library and selecting the video materials that satisfy the shot information; establishing a correspondence between the video materials and the storyboard shots; and performing video synthesis once every storyboard shot corresponds to a video material. The purpose of reducing the difference between the generated video and the sample video is achieved.

Description

Automatic video generation method and device, terminal equipment and storage device
Technical Field
The present invention relates to the field of video processing, and more particularly, to an automatic video generation method and apparatus, a terminal device, and a storage device.
Background
More and more people choose to share short videos of their pets on social platforms. To obtain a better result, users need to edit and synthesize the video. A relatively mature existing video editing platform is "Jianying" (CapCut): the user manually selects the video materials to use from a video library according to a video script, and the video is then generated, where the video script contains only the required number of video materials. However, this generation method requires manual operation by the user, and selection becomes inconvenient when the amount of video material is large. Moreover, the content of the selected materials may differ from that of the sample video, so the generated video can differ substantially from the sample.
Disclosure of Invention
The invention aims to provide an automatic video generation method and apparatus, a terminal device, and a storage device that reduce the difference between the generated video and a sample video.
The technical purpose of the invention is realized by the following technical scheme: an automatic video generation method comprising the steps of acquiring a video script, the video script comprising shot information for at least one storyboard shot; comparing the shot information of the storyboard shots with the video materials in a video material library and selecting the video materials that satisfy the shot information; establishing a correspondence between the video materials and the storyboard shots; and performing video synthesis once every storyboard shot corresponds to a video material.
Compared with the prior art, where the video script contains only the required number of video materials, the shot information carried by each storyboard shot in the video script allows the video materials to be screened more accurately, finding materials similar to the shots of the sample video. Synthesizing the video on this basis reduces the difference between the generated video and the sample.
Further, the video material comprises material information, the material information comprising at least one of a camera movement mode, picture description information, and a picture frame count.
Because the video material carries material information, it can conveniently be compared with the shot information of each storyboard shot in the script.
In one possible embodiment, a video material annotation model is trained in advance using a neural network, and the video material information is annotated by this pre-trained model. This greatly improves the efficiency of annotating video material information.
Further, the video script further comprises at least one of background music and a shot transition effect.
Background music and shot transition effects give the generated video richer expressiveness.
Further, the shot information of a storyboard shot comprises at least one of a storyboard shot number, a camera movement mode, picture description information, filter information, subtitle information, and a picture frame count.
Further, the picture description information is at least one of eating, sticking out the tongue, and stretching lazily.
Further, the video synthesis comprises the following steps:
adding a filter to the corresponding video material according to the shot information of the storyboard shots in the video script;
ordering the corresponding video materials according to the storyboard shot numbers in the video script;
adding shot transition effects according to the transition effects in the video script;
and adding the background music to the video according to the background music in the video script.
Further, the method also comprises the step of sending the synthesized video to the user.
The present invention also provides a video generation apparatus, comprising:
a script acquisition module, configured to acquire a video script, wherein the video script comprises background music, a shot transition effect, and shot information for at least one storyboard shot, the shot information comprising at least one of a storyboard shot number, a camera movement mode, picture description information, filter information, subtitle information, and a picture frame count;
a material comparison module, configured to compare the shot information of the storyboard shots with the video materials in a video material library, select the video materials that satisfy the shot information, and establish the correspondence between the video materials and the storyboard shots, wherein the video material comprises material information comprising at least one of a camera movement mode, picture description information, and a picture frame count;
a video synthesis module, configured to synthesize the selected video materials into a video according to the video script, the video synthesis module comprising: a filter adding module, configured to add a filter to the corresponding video material according to the filter information in the video script; a video ordering module, configured to order the corresponding video materials according to the storyboard shot numbers in the video script; a transition effect module, configured to add shot transition effects according to the transition effects in the video script; and a music adding module, configured to add background music to the video according to the background music in the video script.
The present invention also provides a terminal device, comprising: one or more processors; and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the automatic video generation method according to the present invention.
The present invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the automatic video generation method according to the present invention.
In summary, the invention has the following beneficial effects:
1. When the method is executed, videos are synthesized automatically, reducing the difficulty of operation for the user.
2. The method reduces the difference between the generated video and the sample video, yielding a better video result.
Drawings
FIG. 1 is a flowchart of the method of Embodiment 1;
FIG. 2 is a flowchart of the method of Embodiment 2;
FIG. 3 is a flowchart of the method of Embodiment 3;
FIG. 4 is a schematic diagram of the script information of Embodiment 1.
Detailed Description
In order to make the technical problems to be solved, the technical solutions, and the advantageous effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly or indirectly on the other element. When an element is referred to as being "connected" to another element, it can be directly or indirectly connected to the other element; "connected" does not specify whether the connection is fixed or movable, and the specific connection should be determined according to the technical problem being solved.
It will be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships shown in the drawings solely to facilitate and simplify the description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and are therefore not to be construed as limiting the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Embodiment 1:
This embodiment is applicable to video synthesis. The method may be executed by a video generation apparatus, which may be implemented in software and/or hardware and configured in a terminal device such as a computer or mobile phone. The method specifically comprises the following steps:
S1, acquiring a video script, wherein the video script comprises background music, a shot transition effect, and shot information for at least one storyboard shot, the shot information comprising at least one of a storyboard shot number, a camera movement mode, picture description information, filter information, subtitle information, and a picture frame count;
In one possible embodiment, the user arranges and numbers the storyboard shots, builds a storyboard list, and inputs the shot information for each storyboard shot, and the computer generates an executable video script. For example, the script information (as shown in FIG. 4) is entered into an Excel table, and a program generates executable computer code from it. Alternatively, the user fills in the required script information in an application according to its prompts, and the application creates the executable video script. The script information shown in the drawings of this embodiment is exemplary; it can be adjusted as required, and adjustments under the concept of the invention fall within the protection scope of this patent.
In another possible embodiment, the script information is obtained by having a computer program analyze a sample video, which may come from another creator or be a video the user considers ideally shot. For example, when a popular pet video clip appears on a social platform, the computer program analyzes the clip as follows:
the sample video is input into a shot segmentation model (for example, a trained deep neural network) for shot segmentation; the video is segmented into several storyboard shots, which are ordered and numbered to obtain a storyboard list, and the frame count of each storyboard shot is determined. A single storyboard shot consists of consecutive video frames and contains at least one video frame.
Storyboard shots can be identified by computing the difference between each pair of adjacent video frames in the video; if the difference reaches a preset threshold, the two adjacent frames are considered to come from two different storyboard shots, and that point is used as a split point to divide the video into several storyboard shots (a code sketch of this follows these steps).
The storyboard shots are input into a shot analysis model (for example, a trained deep neural network) to analyze the picture and obtain information such as the camera movement mode, angle, and camera position.
The storyboard shots are input into a shot recognition model (for example, a trained deep neural network) to analyze the picture and obtain filter information, picture description information, and subtitle information; the picture description information may include at least one of eating, sticking out the tongue, and stretching lazily.
Finally, this information is converted into an executable video script.
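A minimal sketch of the frame-difference segmentation just described, using OpenCV; the mean-absolute-difference metric and the threshold value are illustrative assumptions, since the patent fixes neither:

```python
# Sketch of frame-difference shot segmentation: adjacent frames whose
# difference reaches a preset threshold are treated as a shot boundary.
import cv2
import numpy as np

def split_into_shots(video_path: str, threshold: float = 30.0) -> list[tuple[int, int]]:
    """Return (start_frame, end_frame) index pairs, one per detected shot."""
    cap = cv2.VideoCapture(video_path)
    boundaries = [0]
    prev_gray, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # mean absolute difference between adjacent frames
            diff = float(np.mean(cv2.absdiff(gray, prev_gray)))
            if diff >= threshold:          # preset threshold reached:
                boundaries.append(idx)     # use this point as a split point
        prev_gray = gray
        idx += 1
    cap.release()
    boundaries.append(idx)
    return [(boundaries[i], boundaries[i + 1] - 1) for i in range(len(boundaries) - 1)]
```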
S2, comparing the shot information of the storyboard shots with the video materials in a video material library, and selecting the video materials that satisfy the shot information. The video material comprises material information, which comprises at least one of a camera movement mode, picture description information, and a picture frame count.
The video material library may be located on a terminal device, such as a mobile phone or computer, or on a server.
In one possible embodiment, the video material information is annotated manually by the user. For example, after shooting a video, the user inputs the camera movement mode and the picture description information following the prompts of the application software, and likewise inputs the picture frame count.
In another possible embodiment, the video material information is obtained by having a computer program segment and analyze a video, which may have been shot by the user or by other users. The specific steps are as follows:
a video is input into a video segmentation model (for example, a trained deep neural network) for shot segmentation; the video is segmented into several video materials, and the frame count of each is determined. A single video material consists of consecutive video frames and contains at least one video frame.
The video is split in the same way as the storyboard shots: the difference between each pair of adjacent video frames is computed, and if it reaches a preset threshold, the two adjacent frames are considered to come from two different shots, and that point is used as a split point to divide the video into several video materials.
A video material is input into a video analysis model (for example, a trained deep neural network) to analyze the picture and obtain information such as the camera movement mode, angle, and camera position.
The video material is input into a video recognition model (for example, a trained deep neural network) to analyze the picture and obtain picture description information; the picture description information may include at least one of eating, sticking out the tongue, and stretching lazily.
Through this scheme, a new video material library is obtained, together with the material information of each video material (an illustrative sketch of such a recognition model follows).
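As one illustration of the recognition step, a pretrained action-recognition network can tag a material clip. The patent names no specific model; torchvision's r3d_18 stands in here, and mapping its generic labels onto the pet-specific tags is left as an assumed extra step:

```python
# Hedged sketch: annotate a material clip with a pretrained video classifier.
# The model choice (r3d_18) and the tagging pipeline are assumptions.
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

weights = R3D_18_Weights.DEFAULT
model = r3d_18(weights=weights).eval()
preprocess = weights.transforms()

def describe_clip(frames: torch.Tensor) -> str:
    """frames: uint8 tensor of shape (T, H, W, C), sampled from one material."""
    batch = preprocess(frames.permute(0, 3, 1, 2)).unsqueeze(0)  # (1, C, T, H, W)
    with torch.no_grad():
        logits = model(batch)
    label = weights.meta["categories"][int(logits.argmax())]
    # A production system would map this generic Kinetics-400 action label
    # onto the patent's tags such as "eating" or "stretching lazily".
    return label
```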
In one possible embodiment, the material information is recorded in the file details; the video material information may also be managed in a table.
It should be noted that, because the duration the videographer shoots may not coincide with the duration of a storyboard shot, the picture frame counts will differ. When comparing the frame count required by a storyboard shot with the frame count of a video material, the frame count is therefore given lower importance. In one possible embodiment, consistency of frame counts is ignored when the other conditions are met; in other embodiments, the materials are considered to match if the difference in frame counts is within a certain threshold.
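A minimal sketch of this matching step, under the assumptions that material information is stored as dicts and that the frame count is a soft criterion; the tolerance value is illustrative, not from the patent:

```python
# Compare a storyboard shot's info against library materials; frame count is
# checked last and with tolerance, as shooting time rarely matches exactly.
from typing import Optional

FRAME_TOLERANCE = 30  # acceptable frame-count difference, assumed value

def matches(shot: dict, material: dict) -> bool:
    if shot["camera_move"] != material["camera_move"]:
        return False
    if shot["description"] != material["description"]:
        return False
    return abs(shot["frame_count"] - material["frame_count"]) <= FRAME_TOLERANCE

def select_material(shot: dict, library: list[dict]) -> Optional[dict]:
    """Return the first material in the library that satisfies the shot info."""
    return next((m for m in library if matches(shot, m)), None)
```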
S3, establishing a correspondence between the video materials and the storyboard shots;
When a storyboard shot is matched with a suitable video material, that is, the material information satisfies the requirements of the shot information, the correspondence between the video material and the storyboard shot is established. Each storyboard shot corresponds to at least one segment of video material.
In one possible embodiment, once each storyboard shot has found its corresponding video material, the file name of the video material is changed to the storyboard shot number of the corresponding video script, for example: script name + shot number.
In another possible embodiment, the storyboard shots in the video script point to the video materials through pointers.
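A sketch of the two correspondence mechanisms mentioned above: renaming the material file to "script name + shot number", or keeping a pointer-style mapping in memory. Paths and the naming pattern are illustrative assumptions:

```python
import os

def bind_by_rename(material_path: str, script_name: str, shot_no: int) -> str:
    """Rename a matched material file to '<script name>_<shot number>'."""
    ext = os.path.splitext(material_path)[1]
    new_path = os.path.join(os.path.dirname(material_path),
                            f"{script_name}_{shot_no:03d}{ext}")
    os.rename(material_path, new_path)
    return new_path

# Pointer-style alternative: storyboard shot number -> material path.
correspondence: dict[int, str] = {}
```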
S4, performing video synthesis once each storyboard shot corresponds to a video material.
When each storyboard shot corresponds to a video material, the video synthesis operation is executed. The specific steps are as follows:
S41, adding a filter to the corresponding video material according to the storyboard shot information in the video script;
In other possible embodiments, information such as subtitles is also added to the video material according to the storyboard shot information.
In other possible embodiments, the video material is also cut according to the frame count in the script; generally, frames at the end of the material are cut off so that the material meets the frame count required by the script.
S42, ordering the corresponding video materials according to the storyboard shot numbers in the video script. Automatic file sorting is prior art and is not described here.
S43, adding shot transition effects according to the transition effects in the video script;
S44, adding background music to the video according to the background music in the video script.
Adding transition effects between shots and adding background music to a video are common prior art and are not described again here; a combined sketch of S41-S44 follows.
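A hedged sketch of steps S41-S44 using moviepy. The black-and-white filter and the crossfade stand in for the script's filter and transition effect, and the file names and function are assumptions, not the patent's implementation:

```python
from moviepy.editor import (AudioFileClip, VideoFileClip,
                            concatenate_videoclips, vfx)

def synthesize(ordered_paths: list[str], frame_counts: list[int],
               bgm_path: str, out_path: str) -> None:
    clips = []
    for path, n_frames in zip(ordered_paths, frame_counts):
        clip = VideoFileClip(path)
        clip = clip.subclip(0, n_frames / clip.fps)   # cut tail to script frame count
        clip = clip.fx(vfx.blackwhite)                # S41: stand-in "filter"
        clips.append(clip.crossfadein(0.5))           # S43: simple transition
    video = concatenate_videoclips(clips, method="compose")   # S42: keep order
    bgm = AudioFileClip(bgm_path).subclip(0, video.duration)  # S44: add BGM
    video.set_audio(bgm).write_videofile(out_path)

# Example call (paths are hypothetical):
# synthesize(["001.mp4", "002.mp4"], [120, 90], "bgm.mp3", "out.mp4")
```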
S5, sending the synthesized video, or a notification of completion, to the user.
When video synthesis is finished, the video is sent to the user, or a notification message is sent to the user.
To better illustrate the technical effect of this embodiment, one possible usage scenario is as follows:
one application implements the segmentation and analysis of video material information of step S2 of this embodiment; the user uses it to segment and analyze the video materials on a mobile phone or computer, obtaining a video material library.
Another application implements steps S1, S3, S4, and S5 of this embodiment.
When the user finds a popular pet short video on the network, downloading that video's script causes the application to automatically execute the corresponding operations and generate a video with the downloaded script as a template, notifying the user when generation is complete. If no corresponding video material can be found for some storyboard shot, the program pauses until the user shoots new material, and the corresponding steps are then executed.
Embodiment 2
This embodiment is applicable to video synthesis; the method may be executed by a video generation apparatus, which may be implemented in software and/or hardware and configured on a server. The main feature of the synthesis method provided by this embodiment is that videos can be synthesized from multiple video scripts. The specific method comprises the following steps:
S1, obtaining multiple video scripts. Each video script comprises background music, a shot transition effect, and shot information for at least one storyboard shot, the shot information comprising at least one of a storyboard shot number, a camera movement mode, picture description information, filter information, subtitle information, and a picture frame count.
How a video script is obtained has been described above and is not repeated.
In one possible embodiment, the method further comprises grading the video scripts, that is, scripts with higher priority are selected first for further processing. The priority may be set by the user through manual ordering, or the popularity of the video scripts may be analyzed from big data and the scripts ordered by popularity.
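A sketch of this grading step: scripts are processed in priority order, by a user-set rank where present, otherwise by a popularity ("heat") score. Both field names are assumptions for the sketch:

```python
def order_scripts(scripts: list[dict]) -> list[dict]:
    # user_rank wins when present; otherwise fall back to descending heat
    return sorted(scripts,
                  key=lambda s: (s.get("user_rank", float("inf")),
                                 -s.get("heat", 0.0)))
```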
S2, comparing the shot information of the video scripts with the video materials in the video material library, and selecting the video materials that satisfy the shot information. The video material comprises material information comprising at least one of a camera movement mode, picture description information, and a picture frame count; how the material information is obtained is not repeated. The videos in this embodiment may be uploaded to the server by users, or captured and uploaded automatically by terminal devices (such as cameras); after uploading, a video library dedicated to the user or device is established. The specific steps are as follows:
S21, obtaining one of the multiple video scripts;
In one possible embodiment, the video scripts are ordered according to the settings or popularity and are selected in turn in that order for the subsequent steps.
S22, selecting a storyboard shot in the video script for which no correspondence with a video material has been established;
If no such storyboard shot remains in the script, step S3 is executed.
S23, traversing the video materials in the video material library and selecting a video material that meets the requirements.
If a suitable video material is selected, the correspondence between the storyboard shot and the video material is established, and step S22 is executed again.
If no suitable video material is found, S21 is executed to select the next video script.
Establishing the correspondence between storyboard shots and video materials is not described again; a combined sketch of steps S21-S23 follows.
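The sketch below walks the scripts in priority order, tries to bind every storyboard shot to a material, and skips to the next script as soon as one shot cannot be satisfied. It relies on the order_scripts and select_material helpers sketched earlier, all assumed names rather than patent-defined APIs:

```python
def match_scripts(scripts: list[dict], library: list[dict]) -> list[dict]:
    """Return the scripts whose every shot found a material (ready for S3)."""
    completed = []
    for script in order_scripts(scripts):                 # S21
        binding = {}
        for shot in script["shots"]:                      # S22
            material = select_material(shot, library)     # S23
            if material is None:
                break                                     # try the next script
            binding[shot["shot_no"]] = material
        else:
            script["binding"] = binding                   # all shots bound
            completed.append(script)
    return completed
```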
S3, video synthesis.
The synthesis method is not described again.
S4, sending the synthesized video, or a notification of completion, to the user.
With this embodiment, a user can select multiple video scripts at once, and the scripts automatically execute the video synthesis procedure in order.
Embodiment 3
This embodiment is applicable to video synthesis; the method may be executed by a video generation apparatus, which may be implemented in software and/or hardware and configured on a server. The main feature of the synthesis method provided by this embodiment is that videos can be synthesized for multiple devices/users and multiple video scripts.
In this embodiment, each user/device establishes an account on the server; each account corresponds to a dedicated video material library, which is uploaded automatically by the user/device. Each account can hold multiple video scripts, which are graded and organized into a script list. Video script acquisition and script grading are not repeated. The specific steps are as follows:
S1, obtaining the video script list of the current user/device; if the list contains no video script, leaving the current user/device and moving to the next user/device.
S2, comparing the shot information of the video scripts with the video materials in the video material library, and selecting the video materials that satisfy the shot information. The specific steps are as follows:
S21, obtaining a video script from the list;
In this step, the video scripts in the list are selected in turn, from highest priority to lowest, according to the grading information of the scripts.
If there is no next video script, the current user/device is left, the next user/device is taken, and step S1 is executed.
S22, selecting a storyboard shot in the video script for which no correspondence with a video material has been established;
If no such storyboard shot remains in the script, step S3 is executed.
S23, traversing the video materials in the current user/device's video material library and selecting a video material that meets the requirements.
If a suitable video material is selected, the correspondence between the storyboard shot and the video material is established, and step S22 is executed again.
If no suitable video material is found, S21 is executed to select the next video script.
Establishing the correspondence between storyboard shots and video materials is not described again here.
S3, video synthesis.
The synthesis method is not described again.
S4, sending the synthesized video, or a notification of completion, to the current user.
After the current user/device has been notified, step S21 is executed.
In this way, the server can cyclically process the video scripts of multiple users/devices and synthesize videos automatically.
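A server-side sketch of this embodiment: iterate over accounts and, within each, run the per-script matching of Embodiment 2 against that account's private material library. The account structure and the synthesize_script and notify helpers are assumed, not defined by the patent:

```python
def serve_accounts(accounts: list[dict]) -> None:
    for account in accounts:                        # next user/device (S1)
        scripts = account.get("scripts", [])
        if not scripts:
            continue                                # empty list: skip account
        for script in match_scripts(scripts, account["library"]):
            video = synthesize_script(script)       # S3 (assumed helper)
            notify(account["user_id"], video)       # S4: send video or notice
```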
Embodiment 4
This embodiment provides a video generation apparatus, comprising:
a script acquisition module, configured to acquire a video script, wherein the video script comprises background music, a shot transition effect, and shot information for at least one storyboard shot, the shot information comprising at least one of a storyboard shot number, a camera movement mode, picture description information, filter information, subtitle information, and a picture frame count;
a material comparison module, configured to compare the shot information of the storyboard shots with the video materials in a video material library, select the video materials that satisfy the shot information, and establish the correspondence between the video materials and the storyboard shots, wherein the video material comprises material information comprising at least one of a camera movement mode, picture description information, and a picture frame count;
a video synthesis module, configured to synthesize the selected video materials into a video according to the video script, the video synthesis module comprising: a filter adding module, configured to add a filter to the corresponding video material according to the filter information in the video script; a video ordering module, configured to order the corresponding video materials according to the storyboard shot numbers in the video script; a transition effect module, configured to add shot transition effects according to the transition effects in the video script; and a music adding module, configured to add background music to the video according to the background music in the video script.
In one possible embodiment, a user/device management module is included, configured to establish a dedicated account for each user/device and to store that user/device's video materials and video scripts.
Embodiment 5
This embodiment provides a terminal device, which may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs, desktop computers, and servers. The methods of the preceding embodiments may be implemented as computer software code, and this software may be installed on the terminal device provided by this embodiment.
Embodiment 6
This embodiment is a computer-readable storage medium that may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device.
The computer readable medium carries one or more programs, and when the one or more programs are executed by the terminal device, the terminal device is enabled to implement the methods in embodiments 1 to 3.
The embodiments above only explain the invention and do not limit it. After reading this specification, those skilled in the art may modify the embodiments as needed without inventive contribution, and such modifications remain protected by patent law within the scope of the claims of the invention.

Claims (10)

1. An automatic video generation method, comprising the following steps:
acquiring a video script, wherein the video script comprises shot information for at least one storyboard shot;
comparing the shot information of the storyboard shots with video materials in a video material library, and selecting the video materials that satisfy the shot information;
establishing a correspondence between the video materials and the storyboard shots;
and performing video synthesis when each storyboard shot corresponds to a video material.
2. The automatic video generation method of claim 1, wherein the video material comprises material information, the material information comprising at least one of a camera movement mode, picture description information, and a picture frame count.
3. The automatic video generation method of claim 1, wherein the video script further comprises at least one of background music and a shot transition effect.
4. The automatic video generation method of claim 3, wherein the shot information of a storyboard shot comprises at least one of a storyboard shot number, a camera movement mode, picture description information, filter information, subtitle information, and a picture frame count.
5. The automatic video generation method of claim 4, wherein the picture description information is at least one of eating, sticking out the tongue, and stretching lazily.
6. The automatic video generation method of claim 5, wherein the video synthesis comprises the following steps:
adding a filter to the corresponding video material according to the shot information of the storyboard shots in the video script;
ordering the corresponding video materials according to the storyboard shot numbers in the video script;
adding shot transition effects according to the transition effects in the video script;
and adding the background music to the video according to the background music in the video script.
7. The automatic video generation method of claim 1, further comprising the step of sending the synthesized video to a user.
8. A video generation apparatus, comprising:
a script acquisition module, configured to acquire a video script, wherein the video script comprises background music, a shot transition effect, and shot information for at least one storyboard shot, the shot information comprising at least one of a storyboard shot number, a camera movement mode, picture description information, filter information, subtitle information, and a picture frame count;
a material comparison module, configured to compare the shot information of the storyboard shots with video materials in a video material library, select the video materials that satisfy the shot information, and establish a correspondence between the video materials and the storyboard shots, wherein the video material comprises material information comprising at least one of a camera movement mode, picture description information, and a picture frame count;
a video synthesis module, configured to synthesize the selected video materials into a video according to the video script, the video synthesis module comprising:
a filter adding module, configured to add a filter to the corresponding video material according to the filter information in the video script;
a video ordering module, configured to order the corresponding video materials according to the storyboard shot numbers in the video script;
a transition effect module, configured to add shot transition effects according to the transition effects in the video script;
and a music adding module, configured to add background music to the video according to the background music in the video script.
9. A terminal device, comprising: one or more processors; and a memory for storing one or more programs; wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the automatic video generation method of any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the automatic video generation method of any one of claims 1-7.
Priority Applications (1)

Application Number: CN202111336263.6A
Priority Date / Filing Date: 2021-11-12
Title: Automatic video generation method and device, terminal equipment and storage device

Publications (1)

Publication Number: CN113891017A
Publication Date: 2022-01-04

Family

ID=79017336

Country Status (1)

Country: CN
Publication: CN (1) CN113891017A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022253349A1 (en) * 2021-06-04 2022-12-08 北京字跳网络技术有限公司 Video editing method and apparatus, and device and storage medium
CN114567819A (en) * 2022-02-23 2022-05-31 中国平安人寿保险股份有限公司 Video generation method and device, electronic equipment and storage medium
CN114567819B (en) * 2022-02-23 2023-08-18 中国平安人寿保险股份有限公司 Video generation method, device, electronic equipment and storage medium
CN115883818A (en) * 2022-11-29 2023-03-31 北京优酷科技有限公司 Automatic statistical method and device for video frame number, electronic equipment and storage medium
CN115883818B (en) * 2022-11-29 2023-09-19 北京优酷科技有限公司 Video frame number automatic counting method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220104