CN112422831A - Video generation method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112422831A
Authority
CN
China
Prior art keywords
video
user
shooting
page
preset
Prior art date
Legal status
Pending
Application number
CN202011311051.8A
Other languages
Chinese (zh)
Inventor
刘伟烨
Current Assignee
Guangzhou Pacific Computer Information Consulting Co ltd
Original Assignee
Guangzhou Pacific Computer Information Consulting Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Pacific Computer Information Consulting Co ltd
Priority to CN202011311051.8A
Publication of CN112422831A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to a video generation method and apparatus, a computer device, and a storage medium. With the method and apparatus, the user neither needs to spend a long time on editing and clipping nor needs to produce subtitles and synchronize them with a time axis, which saves the user a large amount of video editing and clipping time and improves video production efficiency. The method comprises the following steps: displaying a template editing page corresponding to a video shooting template selected by a user; displaying a shooting page in response to a first shooting instruction triggered by the user on the template editing page, a preset composition reference line being displayed on the shooting page; displaying preset subtitles on the shooting page according to preset time nodes, and acquiring original data, input by the user according to the preset composition reference line, for generating a video; and generating the video based on the original data and the preset subtitles.

Description

Video generation method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video generation method and apparatus, a computer device, and a storage medium.
Background
Short video services are now widely deployed. In the automobile media industry in particular, users generally market automobiles in short video form: they collect materials in advance or write a video copy script, then shoot with shooting software. During shooting, the user must recite the subtitles from memory or rely on a separate teleprompter for subtitle prompts, and afterwards must edit the footage against the shot video content and subtitles, entering the video text and fine-tuning it against the time axis to keep the two synchronized. This approach costs the user considerable operation time, so video production efficiency is low.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a video generation method, apparatus, computer device and storage medium for solving the above technical problems.
A method of video generation, the method comprising:
displaying a template editing page corresponding to the video shooting template selected by the user;
responding to a first shooting instruction triggered by the user on the template editing page, and displaying a shooting page; the shooting page is displayed with a preset composition reference line;
displaying a preset subtitle on the shooting page according to a preset time node, and acquiring original data which are input by the user according to the preset composition reference line and are used for generating a video;
and generating the video based on the original data and the preset subtitles.
In one embodiment, a plurality of split-mirror shots corresponding to the video shooting template is further displayed on the template editing page; the displaying of a shooting page in response to a first shooting instruction triggered by the user on the template editing page further comprises:
acquiring a split-mirror shot specified by the user among the plurality of split-mirror shots;
displaying an effect display page corresponding to the specified split-mirror shot;
and responding to the first shooting instruction triggered by the user on the effect display page, and displaying a shooting page corresponding to the specified split-mirror shot.
In one embodiment, each of the split-mirror shots contains a subtitle set; after the step of displaying the effect display page corresponding to the specified split-mirror shot, the method further includes:
responding to a subtitle modification request triggered by the user on the effect display page, and displaying a subtitle modification page corresponding to the specified split-mirror shot; a subtitle set comprising a plurality of subtitle sequences is displayed on the subtitle modification page;
acquiring subtitle update content of the user for a specified subtitle sequence;
and updating the subtitle set according to the subtitle updating content.
In one embodiment, the specified split-mirror shot contains a default duration; the original data includes image data and voice data; the displaying of preset subtitles on the shooting page according to preset time nodes and the acquiring of original data, input by the user according to the preset composition reference line, for generating a video comprises:
responding to a second shooting instruction triggered by the user on the shooting page, and continuously acquiring image data through a shooting lens until the default time length is finished to obtain an image data packet;
meanwhile, displaying a preset subtitle corresponding to the preset time node on the current shooting page according to the preset time node;
continuously acquiring the voice data input by the user based on the preset subtitles through voice equipment until the default time length is finished to obtain a voice data packet;
and storing the image data packet and the voice data packet, and returning to the template editing page.
In one embodiment, the generating the video based on the original data and the preset subtitles includes:
generating video data based on the image data packet and the voice data packet;
establishing an association relation between the video data and the preset subtitle based on the time axis of the video data and the time axis of the preset subtitle;
outputting the video data to a first display layer according to the association relation so that the first display layer displays the video data;
meanwhile, outputting the preset subtitles to a second display layer according to the association relation so that the preset subtitles are displayed on the second display layer; the second display layer is positioned above the first display layer;
and synthesizing the first display layer and the second display layer to generate the video.
In one embodiment, the method further comprises:
acquiring a style adjusting request input by the user; the style adjustment request comprises a text style adjustment request, a special effect style insertion request and a background music insertion request;
adjusting a video style in the video according to the style adjustment request.
In one embodiment, the method further comprises:
responding to a picture import instruction triggered by the user on the effect display page, and displaying a picture list;
and acquiring the picture appointed by the user in the picture list, and storing the picture.
A video generation apparatus, the apparatus comprising:
the template editing page display module is used for displaying a template editing page corresponding to the video shooting template selected by the user;
the shooting page display module is used for responding to a first shooting instruction triggered by the user on the template editing page and displaying a shooting page; the shooting page is displayed with a preset composition reference line;
the original data acquisition module is used for displaying preset subtitles on the shooting page according to preset time nodes and acquiring original data which are input by the user according to the preset composition reference line and are used for generating a video;
and the video generation module is used for generating the video based on the original data and the preset subtitles.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the video generation method as described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video generation method as described above.
According to the video generation method and apparatus, the computer device, and the storage medium, a template editing page corresponding to a video shooting template selected by a user is displayed; a shooting page is displayed in response to a first shooting instruction triggered by the user on the template editing page, a preset composition reference line being displayed on the shooting page; preset subtitles are displayed on the shooting page according to preset time nodes, and original data, input by the user according to the preset composition reference line, for generating a video is acquired; and the video is generated based on the original data and the preset subtitles. With this method the user neither needs to spend a long time on editing and clipping nor needs to produce subtitles and synchronize them with a time axis, which saves a great deal of video editing and clipping time and improves video production efficiency.
Drawings
FIG. 1 is a schematic flow chart diagram of a video generation method in one embodiment;
FIG. 2 is a schematic diagram of a sample video shooting template in one embodiment;
FIG. 3 is a diagram illustrating a sample template editing page in accordance with an embodiment;
FIG. 4 is a schematic diagram of a sample composition reference line in one embodiment;
FIG. 5 is a diagram illustrating an example of a subtitle prompt in one embodiment;
FIG. 6 is a diagram illustrating a sample batch modification of subtitles according to an embodiment;
FIG. 7 is a flowchart illustrating a video generation method according to another embodiment;
FIG. 8 is a block diagram showing the structure of a video generating apparatus according to an embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The video generation method provided by the application is mainly applied to terminal equipment, and the terminal equipment can be but is not limited to various personal computers, notebook computers, smart phones, tablet computers, portable wearable equipment and the like which are provided with shooting lenses.
In one embodiment, as shown in fig. 1, there is provided a video generation method comprising the steps of:
step S101, a template editing page corresponding to the video shooting template selected by the user is displayed.
A video shooting template is a data organization structure preset according to a certain pattern and organization. As shown in fig. 2, which is an exemplary diagram of video shooting templates, a video shooting template is a data packet with a fixed organization structure, and the content in each part of the structure can be modified according to the user's needs.
Specifically, a series of video shooting templates is preset in the video shooting APP, as shown in fig. 2. The user selects the required video shooting template from the series, and after receiving the user's selection instruction the terminal enters the template editing page corresponding to that template, as shown in fig. 3. The template editing page displays the template's split-mirror shots in time order, with an effect preview image in the center of the page showing the split-mirror shot currently selected by the user.
Optionally, a tutorial option is also displayed on the template editing page, and the user can enter the tutorial page through the option, and the tutorial page shows the user how to use the template to shoot the video.
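For illustration only (this sketch is not part of the patent disclosure), a fixed-organization template packet of the kind described above can be modeled as nested records; all type and field names here (SubtitleLine, Shot, VideoTemplate, and so on) are assumptions chosen for the example:

```python
from dataclasses import dataclass, field

@dataclass
class SubtitleLine:
    text: str
    start: float        # seconds from the start of the shot
    duration: float

@dataclass
class Shot:
    """One split-mirror shot inside the template."""
    name: str
    default_duration: float
    composition_guide: str                        # id of the reference-line layer
    subtitles: list = field(default_factory=list)

@dataclass
class VideoTemplate:
    """The fixed-organization data packet; content in each field may be edited."""
    title: str
    shots: list = field(default_factory=list)

template = VideoTemplate(
    title="car-review",
    shots=[Shot("full-vehicle", 10.0, "guide-full",
                [SubtitleLine("This is the 2020 model.", 0.0, 3.0)])],
)
```

Because the structure is fixed while the content is editable, replacing a line of dialogue or a shot duration leaves the rest of the packet untouched, which matches the template-editing behavior described above.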
Step S102, responding to a first shooting instruction triggered by a user on the template editing page, and displaying a shooting page; the shooting page is displayed with a preset composition reference line.
The preset composition reference line is a reference line layer embedded in the template file when the template file is manually made, as shown in fig. 4.
Specifically, the user triggers a first shooting instruction on the template editing page, for example by tapping the "start shooting" button on the template editing page shown in fig. 3; the terminal then automatically turns on the camera of the user terminal, enters the shooting page, and displays the preset composition reference line layer on the shooting page.
And step S103, displaying preset subtitles on the shooting page according to a preset time node, and acquiring original data which is input by a user according to the preset composition reference line and is used for generating a video.
The preset subtitles are lines that were prepared according to time nodes when the template file was made manually, so that they remain synchronized with the pictures.
Specifically, after entering the shooting page, the user taps the "start shooting" button, and the terminal starts acquiring original data through the camera device and the recording device, recording and storing it until the preset duration ends. During recording, the terminal displays the lines from the subtitle script carried by the user-selected video shooting template on the shooting page according to the preset time nodes. Each line is preceded by countdown information, giving the user preparation time; when the countdown ends, the user can read the line directly from the prompt, as shown in fig. 5.
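The timed prompting just described, a countdown window before each line followed by the line itself, can be sketched as a lookup over timed line records. This is an illustrative assumption rather than the patent's implementation; the function name and tuple layout are invented for the example:

```python
def subtitle_at(lines, t, countdown=5.0):
    """Return (text, is_countdown) for playback time t (seconds).

    `lines` is a list of (text, start, duration) tuples ordered by start
    time; a line's countdown window immediately precedes its start time.
    """
    for text, start, duration in lines:
        if start - countdown <= t < start:
            return text, True          # preparation window: show line with countdown
        if start <= t < start + duration:
            return text, False         # active window: user reads the line now
    return None, False                 # no line scheduled at this moment

lines = [("Welcome to the showroom.", 5.0, 3.0),
         ("Note the body lines.", 10.0, 4.0)]
```

A UI loop would call `subtitle_at` each frame and render the countdown overlay whenever the second element is `True`.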
And step S104, generating a video based on the original data and the preset subtitles.
Specifically, the original data comprises video information and voice information obtained by recording, and the terminal automatically synthesizes a video according to the video information and the voice information and a subtitle script attached to a video shooting template selected by a user.
In this embodiment, a template editing page corresponding to the video shooting template selected by the user is displayed; a shooting page is displayed in response to a first shooting instruction triggered by the user on the template editing page, a preset composition reference line being displayed on the shooting page; preset subtitles are displayed on the shooting page according to preset time nodes, and original data, input by the user according to the preset composition reference line, for generating a video is acquired; and the video is generated based on the original data and the preset subtitles. As a result, the user neither needs to spend a long time editing and clipping nor needs to produce subtitles and synchronize them with a time axis, thereby saving a great deal of video editing and clipping time and improving video production efficiency.
In one embodiment, a plurality of split-mirror shots corresponding to the video shooting template is further displayed on the template editing page; step S102 further includes:
acquiring a split-mirror shot specified by the user among the plurality of split-mirror shots; displaying an effect display page corresponding to the specified split-mirror shot; and responding to a first shooting instruction triggered by the user on the effect display page, and displaying a shooting page corresponding to the specified split-mirror shot.
Specifically, each video template includes a certain number of split-mirror scripts. For example, in a shot car video, the footage framing the entire automobile (with its line script) is one split-mirror script, while a close-up of a particular part of the automobile, with its corresponding lines, is another. In this embodiment, the user first selects the required split-mirror shot from the plurality of split-mirror shots, and the terminal displays the effect display page corresponding to that shot on the current template editing page according to the user's selection, so that the user can preview it. If the user is satisfied with the currently displayed effect display page, the user can directly select "start shooting" on the current template editing page; the terminal responds to this first shooting instruction, opens the camera device, and displays the shooting page corresponding to the split-mirror shot specified by the user, with the preset composition reference line corresponding to the current split-mirror shot displayed on it.
According to this embodiment, a plurality of split-mirror shots is arranged in the video shooting template, each corresponding to a different composition reference line, which gives the user more personalized choices when shooting video. For example, according to actual needs, the user may shoot the whole-vehicle shot before the close-up shots, or shoot the close-ups first, improving the flexibility and diversity of shooting.
In one embodiment, each of the split-mirror shots includes a subtitle set; after the step of displaying the effect display page corresponding to the specified split-mirror shot, the method further includes:
responding to a subtitle modification request triggered by the user on the effect display page, and displaying a subtitle modification page corresponding to the specified split-mirror shot; a subtitle set containing a plurality of line sequences is displayed on the subtitle modification page; acquiring the line update content of the user for a specified line sequence; and updating the subtitle set according to the line update content.
The subtitle set refers to the line data packet preset in each split-mirror shot; as shown in fig. 5, each line in the set is ordered according to its preset time period.
Specifically, the template editing page is further provided with a "modify subtitle" button. After selecting a split-mirror shot, if the user is not satisfied with the preset subtitles, the user may select this button; the terminal responds to the subtitle modification request and displays the subtitle modification page corresponding to the specified split-mirror shot, as shown in fig. 6. The subtitle modification page displays the subtitle set corresponding to the current shot, which comprises a preset sequence of lines arranged in time order, with a "modify" button after each line so that the user can modify any of them. When the user taps "modify", the current line can be edited; after finishing, the user taps "done", and the terminal takes the new line entered by the user as the line update content and updates the subtitle set corresponding to the shot with it.
According to this embodiment, the user's batch subtitle modification request is obtained along with the user's line update content, and the subtitle set corresponding to the current shot is updated accordingly; this makes line preparation convenient for the user and provides a data basis for subsequently synthesizing the subtitles and the video automatically.
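As a hedged sketch of this update step (the data layout is an assumption, not taken from the patent), the subtitle set can be held as a list of timed line records and updated in place, so that a text change preserves the timing the prompter depends on:

```python
def update_subtitle(subtitle_set, index, new_text):
    """Replace the text of the line at `index`, keeping its timing intact.

    `subtitle_set` is a list of dicts with 'text', 'start', and 'duration'
    keys; start/duration are preserved so the prompter stays in sync.
    """
    if not 0 <= index < len(subtitle_set):
        raise IndexError("no such line in this shot")
    subtitle_set[index] = {**subtitle_set[index], "text": new_text}
    return subtitle_set

shot_subtitles = [{"text": "Old line.", "start": 0.0, "duration": 3.0}]
update_subtitle(shot_subtitles, 0, "New line.")
```

Keeping the start time and duration untouched is what lets the later subtitle-video synthesis reuse the template's time axis without any manual realignment.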
In one embodiment, the specified split-mirror shot includes a default duration; the original data includes image data and voice data, and step S103 includes:
responding to a second shooting instruction triggered by the user on the shooting page, and continuously acquiring image data through the shooting lens until the default time length is finished to obtain an image data packet; meanwhile, displaying a preset subtitle corresponding to the preset time node on the current shooting page according to the preset time node; continuously acquiring voice data input by a user based on a preset subtitle through voice equipment until the default duration is finished to obtain a voice data packet; and storing the image data packet and the voice data packet and returning to the template editing page.
Specifically, after the user selects a split-mirror shot and enters its shooting page, a "start shooting" button is displayed on the page. When the user taps it, the terminal responds to this second shooting instruction and starts continuously acquiring image data through its camera until the default duration ends, obtaining an image data packet. Meanwhile, the terminal displays the preset subtitles corresponding to the shot on the current shooting page according to the preset time nodes; each line comes with a countdown, so the user only needs to read the line displayed on the screen when prompted, and the terminal continuously captures the user's reading through the voice device until the default duration ends, obtaining a voice data packet. The terminal stores the image data packet and the voice data packet and automatically returns to the template editing page, as shown in fig. 3.
In this embodiment, the image data and voice data input by the user are acquired, and a line prompter is provided so that the user can read the lines directly from the preset prompts without preparing a separate prompt; with a separately prepared prompt it is hard to hit the time nodes exactly, which may force the video to be recorded repeatedly.
In one embodiment, the step S104 includes generating video data based on the image data packet and the voice data packet; establishing an association relation between the video data and the preset subtitles based on a time axis of the video data and a time axis of the preset subtitles; outputting the video data to the first display layer according to the association relation so that the first display layer displays the video data; meanwhile, outputting the preset subtitles to a second display layer according to the association relationship so that the preset subtitles are displayed on the second display layer; the second display layer is positioned above the first display layer; and synthesizing the first display layer and the second display layer to generate a video.
Specifically, the terminal first generates video data, i.e. a video layer, from the image data packet and the voice data packet as the first display layer; then, based on AVFoundation (the system video framework), it converts the subtitles into layer animations and adds them to the video layer so as to synthesize a new video carrying the subtitles. The process is as follows:
the preset speech in each lens is separated according to lines, the speech sequences separated according to lines are stored in an array, and each set in the array comprises information such as characters, start time, end time, duration time and the like of the speech sequences of the lines; the terminal establishes an association relation between the video data and the preset subtitles based on the time axis information of the video layer and the time axis of the preset subtitles; generating a corresponding CATextLayer text layer according to each set of an array containing all the lines, wherein the content of the text layer is corresponding line information, then creating a CALAYER layer, adding the CATextLayer text layer to the CALAYER layer to obtain a second display layer, wherein CALAYER is animation, the starting time of the animation is the starting time of the lines corresponding to the CATextLayer layer, the duration of the animation is the duration of the corresponding lines, and the CALAYER layer (namely the second display layer) is added to a video layer (namely the first display layer) of the editable video; and then synthesizing a new video, wherein the new video starts video layer animations while playing the video, and the animations display corresponding CATextLayer layers according to a time axis, so that the starting time point of a certain CATextLayer layer is corresponding to the video playing time point, and the corresponding subtitles are displayed at the corresponding time point from the perspective of a user.
According to the embodiment, the association relationship between the video data and the preset subtitle set is established, and the video data and the preset subtitle set are automatically synthesized into the video according to the association relationship, so that the video production is quickly completed, and the video production efficiency is improved.
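The association relation between the video time axis and the per-line records can be illustrated with a small Python sketch; the AVFoundation/CALayer specifics are replaced here with plain data, and all names are assumptions for the example:

```python
def build_text_layer_schedule(shots):
    """Flatten per-shot subtitle lines onto the merged video's time axis.

    `shots` is a list of (shot_duration, lines) pairs, where each line is a
    (text, start_within_shot, duration) tuple. Returns one entry per line,
    mirroring what a text-layer animation needs: an absolute begin time on
    the merged video's time axis and a display duration.
    """
    schedule, offset = [], 0.0
    for shot_duration, lines in shots:
        for text, start, duration in lines:
            schedule.append({"text": text,
                             "beginTime": offset + start,  # absolute time
                             "duration": duration})
        offset += shot_duration                            # next shot starts here
    return schedule

shots = [(8.0, [("Full view.", 1.0, 3.0)]),
         (6.0, [("Engine close-up.", 0.5, 2.5)])]
sched = build_text_layer_schedule(shots)
```

Each resulting entry corresponds to one text layer whose animation begin time and duration come straight from the template, which is why no manual subtitle-to-timeline alignment is needed.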
In an embodiment, the video generation method further includes: acquiring a style adjusting request input by a user; the style adjustment request comprises a text style adjustment request, a special effect style insertion request and a background music insertion request; and adjusting the video style in the video according to the style adjustment request.
Specifically, after the user finishes shooting each split-mirror shot in sequence, the terminal automatically returns to the template editing page, on which a "next" button is also displayed; when the user taps it, the terminal executes step S104. The terminal then obtains the style adjustment request input by the user and adjusts the text style in the video according to the style the user selects, or inserts the special effects or music the user specifies.
According to the embodiment, various style adjustment options are set, so that the diversified requirements of users are met.
In an embodiment, the method further includes: responding to a picture import instruction triggered by a user on a template editing page, and displaying a picture list; and acquiring the pictures specified by the user in the picture list, and storing the pictures.
Specifically, an "import picture" button is further displayed on the template editing page. After selecting a split-mirror shot, the user taps the "import picture" button; the terminal responds to the user's picture import instruction and opens a picture list, and the user selects a picture in the list to import into the current split-mirror shot, so that a static picture is stored in it.
According to the embodiment, the picture importing function is set, so that the user can use the static picture as the material for video production, more choices are provided for the user to produce the video, and the video production efficiency is improved.
In one embodiment, as shown in fig. 7, fig. 7 shows a flow chart of a video generation method in a specific application scenario, where:
In step S701, the user selects and downloads the "follow-me-shoot" template corresponding to the required car video and enters the template editing page for it, as shown in fig. 3.
In step S702, the video is shot section by section according to the template's split-mirror settings. The template editing page of fig. 3 displays the number of split-mirror shots the template specifies and requires, and the preview content of each split-mirror shot is shown in a video preview window; "effect" and "tutorial" options are arranged above the preview content, and the teaching video on how to shoot each split-mirror shot can be viewed under the tutorial option. Tapping "next" quickly locates the split-mirror content that has not yet been shot.
In step S703, tapping the batch "modify subtitle" button in fig. 3 brings up the car copy script specified by the template, as shown in fig. 6. The user can use the script directly without modification, or tap the "modify" button beside a line to change it to the desired copy. The copy shown here appears simultaneously in the prompter subtitles during shooting.
In step S704, each split-mirror shot in the template provides a composition standard for car shooting. After tapping the "start shooting" button of fig. 3, the user immediately enters the shooting interface shown in fig. 4. Composition reference lines are overlaid on the camera view during shooting, guiding the user to complete the shot along them. The composition reference line shown in fig. 4 belongs to a set of automobile-industry shooting standards configured for the APP: a layout and composition standard between automobiles and people specific to the automobile industry, and a vertical-screen video shooting standard.
In step S705, tapping the "shoot" button of fig. 4 automatically enters the prompter shooting mode of fig. 5, with a 5-second countdown to prepare the prompter subtitles. After the countdown, the spoken-line script corresponding to the current split-mirror shot is displayed; the user reads along with the words highlighted in yellow, the highlight advancing at the pace required for each line. The prompter script is the copy from the batch "modify subtitle" page. When the split-mirror duration ends, recording stops automatically and the interface of fig. 3 is shown again so that the user can shoot the next split-mirror shot, until all shots are taken.
Step S706, the user then clicks the "next" button to enter the video clip fine-tuning page. At this point the APP automatically inserts all the subtitles from the batch text-modification page into the video, so the user does not need to adjust subtitle timing manually. The user can still adjust the video subtitles' content and duration, edit title styles, and the like within this page.
In step S707, the guided ("follow me") product flow is essentially complete. Finally, the user clicks to generate the video file and publish the video to a certain automobile network, and can simultaneously publish it to the major video or social platforms with one tap.
It should be understood that, although the steps in the flowcharts of fig. 1 and fig. 7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict ordering constraint on these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1 and fig. 7 may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a video generating apparatus 800 comprising: a template editing page display module 801, a shooting page display module 802, an original data acquisition module 803, and a video generation module 804, wherein:
a template editing page display module 801, configured to display a template editing page corresponding to the video shooting template selected by the user;
a shooting page display module 802, configured to display a shooting page in response to a first shooting instruction triggered by the user on the template editing page, a preset composition reference line being displayed on the shooting page;
an original data acquisition module 803, configured to display a preset subtitle on the shooting page according to a preset time node, and acquire original data, input by the user according to the preset composition reference line, for generating a video;
a video generation module 804, configured to generate the video based on the original data and the preset subtitles.
In an embodiment, the shooting page display module 802 is further configured to: acquire the split-mirror shot specified by the user among the plurality of split-mirror shots; display an effect display page corresponding to the specified shot; and, in response to the first shooting instruction triggered by the user on the effect display page, display a shooting page corresponding to the specified shot.
In an embodiment, the shooting page display module 802 is further configured to: in response to a subtitle modification request triggered by the user on the effect display page, display a subtitle modification page corresponding to the specified shot, on which a subtitle set comprising a plurality of dialogue-line sequences is displayed; acquire the user's line update content for a specified line sequence; and update the subtitle set according to the line update content.
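The subtitle-set update described here can be sketched as a simple list operation. Modeling the subtitle set as an ordered list of dialogue lines is an assumption of this illustration (the patent only states that the set "comprises a plurality of dialogue-line sequences"), and `update_subtitle_set` is a hypothetical helper name.

```python
def update_subtitle_set(subtitle_set, line_index, new_text):
    """Apply the user's update to one dialogue line of a split-mirror shot.

    subtitle_set: ordered list of dialogue lines (assumed structure).
    line_index:   position of the line the user chose to modify.
    new_text:     the user's replacement copy.
    Returns a new list so the template's original copy stays intact.
    """
    updated = list(subtitle_set)       # copy; don't mutate the template
    updated[line_index] = new_text
    return updated
```

Since the same copy drives the teleprompter, the updated set would also be handed to the shooting page so the prompter shows the edited line.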
In an embodiment, the original data acquisition module 803 is further configured to:
in response to a second shooting instruction triggered by the user on the shooting page, continuously acquire image data through the camera lens until the default duration ends, obtaining an image data packet; meanwhile, display the preset subtitle corresponding to each preset time node on the current shooting page according to the preset time nodes; continuously acquire, through a voice device, the voice data input by the user based on the preset subtitles until the default duration ends, obtaining a voice data packet; and store the image data packet and the voice data packet and return to the template editing page.
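The time-node-driven subtitle display during recording can be sketched as a lookup over the preset nodes. Encoding the nodes as sorted `(start_seconds, subtitle)` pairs is an assumption of this illustration; the patent does not specify the data format.

```python
def subtitle_at(time_nodes, t):
    """Pick the preset subtitle to show at recording time t (seconds).

    time_nodes: list of (start_seconds, subtitle) pairs sorted by start
    time — a hypothetical encoding of the patent's "preset time nodes".
    The most recent node whose start time has been reached wins; before
    the first node, no subtitle is shown.
    """
    current = None
    for start, text in time_nodes:
        if t >= start:
            current = text
        else:
            break                      # nodes are sorted; later ones haven't started
    return current
```

The recording loop would call this each frame with the elapsed time and render the returned subtitle on the shooting page.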
In an embodiment, the video generation module 804 is further configured to:
generate video data based on the image data packet and the voice data packet; establish an association between the video data and the preset subtitles based on the time axis of the video data and the time axis of the preset subtitles; output the video data to a first display layer according to the association so that the first display layer displays the video data; meanwhile, output the preset subtitles to a second display layer according to the association so that the second display layer displays the preset subtitles, the second display layer being positioned above the first display layer; and synthesize the first display layer and the second display layer to generate the video.
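The timeline association between the two display layers can be sketched as pairing each frame of the first layer with the subtitle the second layer shows at that instant. The per-frame data shapes here are illustrative only, and `composite_frames` is a hypothetical helper; a real implementation would render the subtitle layer over the video layer rather than returning tuples.

```python
def composite_frames(frames, time_nodes, fps=30):
    """Associate preset subtitles with video frames along the timeline.

    frames:     per-frame payloads of the first display layer (video).
    time_nodes: sorted (start_seconds, subtitle) pairs for the second
                layer — an assumed encoding of the preset subtitles.
    Returns (frame, subtitle) pairs, standing in for compositing the
    second layer above the first to generate the final video.
    """
    out = []
    for i, frame in enumerate(frames):
        t = i / fps                    # frame index -> video time axis
        subtitle = None
        for start, text in time_nodes:
            if t >= start:
                subtitle = text        # latest node that has started
        out.append((frame, subtitle))
    return out
```

Because the association is keyed on the shared time axis, the subtitles stay aligned with the recorded speech without the user adjusting timing by hand, which is what step S706 relies on.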
In an embodiment, as shown in fig. 8, the video generating apparatus 800 further includes a style adjusting unit 805, where the style adjusting unit 805 is configured to: acquire a style adjustment request input by the user, the style adjustment request comprising a text style adjustment request, a special-effect style insertion request, and a background music insertion request; and adjust the video style in the video according to the style adjustment request.
In an embodiment, the video generating apparatus 800 further includes a picture obtaining unit 806, where the picture obtaining unit 806 is configured to: respond to a picture import instruction triggered by the user on the template editing page and display a picture list; and acquire the picture specified by the user in the picture list and store the picture.
For specific limitations of the video generation apparatus, reference may be made to the limitations of the video generation method above, which are not repeated here. Each module in the video generation apparatus can be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, whose internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control functions. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a video generation method. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen, a key, trackball, or touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that implements the steps of any of the video generation methods described above when the processor executes the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of any of the above-mentioned video generation methods.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of video generation, the method comprising:
displaying a template editing page corresponding to the video shooting template selected by the user;
responding to a first shooting instruction triggered by the user on the template editing page, and displaying a shooting page, wherein a preset composition reference line is displayed on the shooting page;
displaying a preset subtitle on the shooting page according to a preset time node, and acquiring original data, input by the user according to the preset composition reference line, for generating a video;
and generating the video based on the original data and the preset subtitles.
2. The method according to claim 1, wherein a plurality of split-mirror shots corresponding to the video shooting template are also displayed on the template editing page; the displaying a shooting page in response to a first shooting instruction triggered by the user on the template editing page further comprises:
acquiring the split-mirror shot specified by the user among the plurality of split-mirror shots;
displaying an effect display page corresponding to the specified shot;
and responding to the first shooting instruction triggered by the user on the effect display page, and displaying a shooting page corresponding to the specified shot.
3. The method according to claim 2, wherein each of the split-mirror shots contains a subtitle set; after the step of displaying the effect display page corresponding to the specified shot, the method further includes:
responding to a subtitle modification request triggered by the user on the effect display page, and displaying a subtitle modification page corresponding to the specified shot, wherein a subtitle set comprising a plurality of dialogue-line sequences is displayed on the subtitle modification page;
acquiring the user's line update content for a specified line sequence;
and updating the subtitle set according to the line update content.
4. The method of claim 3, wherein the specified split-mirror shot contains a default duration, and the original data includes image data and voice data; the displaying of the corresponding preset subtitles on the shooting page according to the preset time nodes and the acquiring of the original data, input by the user according to the preset composition reference line, for generating the video comprise:
responding to a second shooting instruction triggered by the user on the shooting page, and continuously acquiring the image data through a camera lens until the default duration ends, to obtain an image data packet;
meanwhile, displaying the preset subtitle corresponding to the preset time node on the current shooting page according to the preset time node;
continuously acquiring, through a voice device, the voice data input by the user based on the preset subtitles until the default duration ends, to obtain a voice data packet;
and storing the image data packet and the voice data packet, and returning to the template editing page.
5. The method of claim 4, wherein the generating the video based on the original data and the preset subtitles comprises:
generating video data based on the image data packet and the voice data packet;
establishing an association relation between the video data and the preset subtitle based on the time axis of the video data and the time axis of the preset subtitle;
outputting the video data to a first display layer according to the association relation so that the first display layer displays the video data;
meanwhile,
outputting the preset subtitles to a second display layer according to the association relation so that the preset subtitles are displayed on the second display layer; the second display layer is positioned above the first display layer;
and synthesizing the first display layer and the second display layer to generate the video.
6. The method of claim 5, further comprising:
acquiring a style adjusting request input by the user; the style adjustment request comprises a text style adjustment request, a special effect style insertion request and a background music insertion request;
adjusting a video style in the video according to the style adjustment request.
7. The method of claim 2, further comprising:
responding to a picture import instruction triggered by the user on the template editing page, and displaying a picture list;
and acquiring the picture specified by the user in the picture list, and storing the picture.
8. A video generation apparatus, characterized in that the apparatus comprises:
the template editing page display module is used for displaying a template editing page corresponding to the video shooting template selected by the user;
the shooting page display module is used for responding to a first shooting instruction triggered by the user on the template editing page and displaying a shooting page; the shooting page is displayed with a preset composition reference line;
the original data acquisition module is used for displaying preset subtitles on the shooting page according to preset time nodes and acquiring original data which are input by the user according to the preset composition reference line and are used for generating a video;
and the video generation module is used for generating the video based on the original data and the preset subtitles.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011311051.8A 2020-11-20 2020-11-20 Video generation method and device, computer equipment and storage medium Pending CN112422831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011311051.8A CN112422831A (en) 2020-11-20 2020-11-20 Video generation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112422831A true CN112422831A (en) 2021-02-26

Family

ID=74778152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011311051.8A Pending CN112422831A (en) 2020-11-20 2020-11-20 Video generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112422831A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103502969A (en) * 2009-06-13 2014-01-08 罗莱斯塔尔有限公司 System for sequential juxtaposition of separately recorded scenes
CN104967900A (en) * 2015-05-04 2015-10-07 腾讯科技(深圳)有限公司 Video generating method and video generating device
CN107172476A (en) * 2017-06-09 2017-09-15 创视未来科技(深圳)有限公司 A kind of system and implementation method of interactive script recorded video resume
CN109151356A (en) * 2018-09-05 2019-01-04 传线网络科技(上海)有限公司 video recording method and device
CN110855893A (en) * 2019-11-28 2020-02-28 维沃移动通信有限公司 Video shooting method and electronic equipment
CN111163274A (en) * 2020-01-21 2020-05-15 海信视像科技股份有限公司 Video recording method and display equipment
CN111372119A (en) * 2020-04-17 2020-07-03 维沃移动通信有限公司 Multimedia data recording method and device and electronic equipment
CN111629269A (en) * 2020-05-25 2020-09-04 厦门大学 Method for automatically shooting and generating mobile terminal short video advertisement based on mechanical arm
CN111726536A (en) * 2020-07-03 2020-09-29 腾讯科技(深圳)有限公司 Video generation method and device, storage medium and computer equipment

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709575A (en) * 2021-04-07 2021-11-26 腾讯科技(深圳)有限公司 Video editing processing method and device, electronic equipment and storage medium
CN113709575B (en) * 2021-04-07 2024-04-16 腾讯科技(深圳)有限公司 Video editing processing method and device, electronic equipment and storage medium
CN113422996B (en) * 2021-05-10 2023-01-20 北京达佳互联信息技术有限公司 Subtitle information editing method, device and storage medium
CN113422996A (en) * 2021-05-10 2021-09-21 北京达佳互联信息技术有限公司 Subtitle information editing method, device and storage medium
CN113411490A (en) * 2021-05-11 2021-09-17 北京达佳互联信息技术有限公司 Multimedia work publishing method and device, electronic equipment and storage medium
CN113411490B (en) * 2021-05-11 2022-11-11 北京达佳互联信息技术有限公司 Multimedia work publishing method and device, electronic equipment and storage medium
WO2022237189A1 (en) * 2021-05-11 2022-11-17 北京达佳互联信息技术有限公司 Publishing method and apparatus for multimedia work
CN113516783A (en) * 2021-05-19 2021-10-19 上海爱客博信息技术有限公司 3D model online editing method and device, computer equipment and storage medium
CN115442538A (en) * 2021-06-04 2022-12-06 北京字跳网络技术有限公司 Video generation method, device, equipment and storage medium
WO2022253350A1 (en) * 2021-06-04 2022-12-08 北京字跳网络技术有限公司 Video generation method and apparatus, and device and storage medium
WO2022262680A1 (en) * 2021-06-17 2022-12-22 北京字跳网络技术有限公司 Display method and apparatus, and readable storage medium
CN115701093A (en) * 2021-07-15 2023-02-07 上海幻电信息科技有限公司 Video shooting information acquisition method and video shooting and processing indication method
WO2023045963A1 (en) * 2021-09-23 2023-03-30 北京字跳网络技术有限公司 Video generation method and apparatus, device and storage medium
WO2023104078A1 (en) * 2021-12-09 2023-06-15 北京字跳网络技术有限公司 Method and apparatus for generating video editing template, device, and storage medium
CN114928753A (en) * 2022-04-12 2022-08-19 广州阿凡提电子科技有限公司 Video splitting processing method, system and device
CN115134662A (en) * 2022-06-28 2022-09-30 广州阿凡提电子科技有限公司 Multi-sample processing method and system based on artificial intelligence
WO2024056023A1 (en) * 2022-09-14 2024-03-21 北京字跳网络技术有限公司 Video editing method and apparatus, and device and storage medium

Similar Documents

Publication Publication Date Title
CN112422831A (en) Video generation method and device, computer equipment and storage medium
CN112073649B (en) Multimedia data processing method, multimedia data generating method and related equipment
CN111935505B (en) Video cover generation method, device, equipment and storage medium
CN111970577B (en) Subtitle editing method and device and electronic equipment
CN108900771B (en) Video processing method and device, terminal equipment and storage medium
CN113452941B (en) Video generation method and device, electronic equipment and storage medium
CN109525884B (en) Video sticker adding method, device, equipment and storage medium based on split screen
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
CN112437353B (en) Video processing method, video processing device, electronic apparatus, and readable storage medium
CN109379633B (en) Video editing method and device, computer equipment and readable storage medium
CN113099297B (en) Method and device for generating click video, electronic equipment and storage medium
CN110636365B (en) Video character adding method and device, electronic equipment and storage medium
CN109348155A (en) Video recording method, device, computer equipment and storage medium
CN113721810A (en) Display method, device, equipment and storage medium
CN109474855A (en) Video editing method, device, computer equipment and readable storage medium storing program for executing
CN103699621A (en) Method for recording graphic and text information on materials recorded by mobile device
CN112565882A (en) Video generation method and device, electronic equipment and computer readable medium
CN104333699A (en) Synthetic method and device of user-defined photographing area
CN114598819A (en) Video recording method and device and electronic equipment
CN109413352B (en) Video data processing method, device, equipment and storage medium
CN114091422A (en) Display page generation method, device, equipment and medium for exhibition
WO2023241377A1 (en) Video data processing method and device, equipment, system, and storage medium
CN113364999B (en) Video generation method and device, electronic equipment and storage medium
CN114025237A (en) Video generation method and device and electronic equipment
CN112995770B (en) Video playing method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210226