CN113992866A - Video production method and device - Google Patents

Video production method and device

Info

Publication number
CN113992866A
CN113992866A (application CN202111283301.6A; granted as CN113992866B)
Authority
CN
China
Prior art keywords
video
target
editing
image
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111283301.6A
Other languages
Chinese (zh)
Other versions
CN113992866B (en)
Inventor
常青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN202111283301.6A
Publication of CN113992866A
Application granted
Publication of CN113992866B
Legal status: Active

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N5/00 Details of television systems
                    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
                        • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
                            • H04N5/265 Mixing
                • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                                • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
                                    • H04N21/440281 Reformatting operations by altering the temporal resolution, e.g. by frame skipping
                        • H04N21/47 End-user applications
                            • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
                                • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
                • H04N7/00 Television systems
                    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
                        • H04N7/0127 Conversion of standards by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

Embodiments of the present application provide a video production method and apparatus. The video production method includes: in response to a video editing interface call instruction, acquiring configuration parameters of at least two reference objects uploaded through a video editing interface, the configuration parameters being determined according to a target video template; parsing the configuration parameters to obtain object types and index addresses of the at least two reference objects; acquiring the at least two reference objects according to the index addresses and loading them onto at least two editing tracks according to the object types; and processing the reference objects in each editing track according to the object editing data of each editing track contained in the target video template, to generate and return a target video.

Description

Video production method and device
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a video production method. One or more embodiments of the present application also relate to a video production apparatus, a computing device, and a computer-readable storage medium.
Background
With the development of Internet technology, more and more users record their lives in pictures. Users can also turn those pictures into photo-album videos, compiling multiple pictures into a single video for playback.
Some album-video production tools already exist on the market. These tools can generate an album video from a group of pictures, but they usually require manual participation in a complex editing and production process. Moreover, to present a better visual experience, video effect designers often design elaborate visual effects for application developers to implement. Such complex visual effects, however, are difficult to realize quickly and simply with existing technology and interfaces, and the video models used to implement them generally lack universal applicability and struggle to meet the intended design effect.
Disclosure of Invention
In view of this, the present application provides a video production method. One or more embodiments of the present application also relate to a video production apparatus, a computing device, and a computer-readable storage medium, so as to address the technical defects that, in the prior art, complex video visual effects cannot be achieved quickly and simply and existing video production methods lack general applicability.
According to a first aspect of embodiments of the present application, there is provided a video production method, including:
responding to a video editing interface calling instruction, and acquiring configuration parameters of at least two reference objects uploaded through a video editing interface, wherein the configuration parameters are determined according to a target video template;
analyzing the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtaining the at least two reference objects according to the index addresses, and loading the at least two reference objects to at least two editing tracks according to the object types;
and processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template, to generate and return a target video.
According to a second aspect of embodiments of the present application, there is provided a video production apparatus including:
the acquisition module is configured to respond to a video editing interface calling instruction and acquire configuration parameters of at least two reference objects uploaded through the video editing interface, wherein the configuration parameters are determined according to a target video template;
the analysis module is configured to analyze the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtain the at least two reference objects according to the index addresses, and load the at least two reference objects to at least two editing tracks according to the object types;
and the processing module is configured to process the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generate a target video and return the target video.
According to a third aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, wherein the processor implements the steps of the video production method when executing the computer-executable instructions.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the video production method.
Embodiments of the present application implement a video production method and apparatus. The video production method includes: in response to a video editing interface call instruction, acquiring configuration parameters of at least two reference objects uploaded through a video editing interface, the configuration parameters being determined according to a target video template; parsing the configuration parameters to obtain object types and index addresses of the at least two reference objects; acquiring the at least two reference objects according to the index addresses and loading them onto at least two editing tracks according to the object types; and processing the reference objects in each editing track according to the object editing data of each editing track contained in the target video template, to generate and return a target video.
According to the video production method provided by the embodiments of the present application, multiple video templates are generated in advance for users to choose from. When a user needs to produce a video, the user can directly upload, by calling a video editing interface, the configuration parameters of at least two reference objects for the chosen target video template; the multimedia platform can then generate a target video that meets expectations based on the configuration parameters and the target video template.
In this process, the user only needs to submit the configuration parameters required for video production, without participating in the subsequent complex production process. Moreover, the video production method provided by the embodiments of the present application is generally applicable to different types of video production scenarios, helps realize target videos with complex display effects quickly from a small number of reference images, and saves the time otherwise needed to develop videos with complex transformations, thereby improving the convenience and efficiency of video production.
Drawings
Fig. 1 is a flowchart of a video production method according to an embodiment of the present application;
FIG. 2 is a block diagram of a video template provided in one embodiment of the present application;
fig. 3 is a flowchart of a processing procedure of applying the video production method to a video production process according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video production apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application, however, can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments of the present application to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
First, the noun terms to which one or more embodiments of the present application relate are explained.
Video template: a video template is a structure file that stores video clip data (including video clips, music, pictures, special effects, filters, and track structure data) and can be reused (imported into corresponding video clip software or parsed by a program).
In the present application, a video production method is provided. One or more embodiments of the present application are also directed to a video production apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
The video production method provided by the embodiments of the present application can be applied to any field where videos need to be produced, such as producing video animations in the video field, producing voice videos in the communications field, and producing special-effect videos in the self-media field. For ease of understanding, the embodiments of the present application describe the method in detail as applied to video production in the video field, but the method is not limited thereto.
In specific implementations, the target video in the embodiments of the present application may be presented on display terminals such as large video playback devices, game consoles, desktop computers, smartphones, tablet computers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, and e-book readers.
In addition, the target video of the embodiments of the present application can be applied to any scenario capable of presenting video; for example, special-effect videos can be presented within live-broadcast or recorded videos, and within online or offline video playback.
Referring to fig. 1, fig. 1 shows a flow chart of a video production method according to an embodiment of the present application, including the following steps:
step 102, responding to a video editing interface calling instruction, and acquiring configuration parameters of at least two reference objects uploaded through a video editing interface, wherein the configuration parameters are determined according to a target video template.
Specifically, the video production method provided by the embodiments of the present application is applied to a multimedia platform. The multimedia platform defines a video template structure based on its own video processing capability, abstracts the historical video data across the platform, and processes the abstraction result according to the video template structure to generate video templates. In addition, the multimedia platform can expose a video editing interface for users to call. A user can define, according to a video template, the video data of the target video to be produced, and call the video editing interface to upload this video data for the target video template. After receiving the video data, the multimedia platform can generate the target video from the target video template together with this video data, and output it.
The user can be an ordinary user within the organization or enterprise that develops the multimedia platform, or a third-party user. After determining the target video template to use, the user can determine the video data to upload according to that template. In the embodiments of the present application, a user can upload the configuration parameters of at least two reference objects, so that the multimedia platform can produce a target video based on the target video template and the configuration parameters. In practical applications, the at least two reference objects include, but are not limited to, at least two reference images, audio segments, and/or video segments; the configuration parameters include, but are not limited to, the index address and the name of each image, audio, or video.
In specific implementations, before receiving a user's video editing interface call instruction, the multimedia platform may generate multiple video templates in advance for users to choose from. Specifically, it may acquire a historical video and construct a video template based on the video attribute data of the historical video, the object attribute data of the at least two reference objects contained in the historical video, and the object editing data corresponding to the at least two editing tracks used to generate the historical video.
Specifically, a historical video is a video that was produced on the multimedia platform before the current moment. A historical video may have been produced interactively: the multimedia platform provides a presentation interface, and the user uploads at least two reference objects through it, the reference objects including, but not limited to, at least two reference images, audio, or video. After receiving the reference objects, the multimedia platform may load each onto an editing track according to its object type; for example, pictures and videos are loaded onto a video editing track to create the video's key frames, and audio is loaded onto an audio editing track to create the video's background music.
After the multimedia platform loads the reference objects onto the editing tracks, it can generate the initial multimedia segments corresponding to the reference objects and display them to the user through the presentation interface; the initial multimedia segments include, but are not limited to, video segments and audio segments. If the user determines that a generated initial multimedia segment does not achieve the expected presentation effect, the reference objects in the editing tracks can be adjusted so as to generate a target video that matches the expected effect.
In this case, the user can process the reference images: for each reference image, crop out at least two local (partial) images, and perform interpolation based on the reference image and its local images to obtain at least one interpolated image. One of the at least two local images serves as the start-frame image and another as the end-frame image, with the interpolated images and any remaining local images as intermediate-frame images; these are synthesized into an intermediate video segment for each reference image, in which every video frame can differ from the others.
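As a concrete illustration of this interpolation step, the sketch below linearly blends pixel values between two same-sized local images and chains the crops into an intermediate segment. It is a minimal sketch under stated assumptions: frames are modeled as flat lists of grayscale values, and all function names are invented for illustration, not taken from the patent.

```python
def interpolate_frames(start_frame, end_frame, num_interp):
    """Linearly interpolate pixel values between two same-sized frames.

    Frames are modeled as flat lists of grayscale values (0-255); a real
    implementation would blend decoded image planes channel by channel.
    """
    frames = []
    for k in range(1, num_interp + 1):
        t = k / (num_interp + 1)  # interpolation weight in (0, 1)
        frames.append([round(a * (1 - t) + b * t)
                       for a, b in zip(start_frame, end_frame)])
    return frames


def build_intermediate_clip(local_images, num_interp=1):
    """Chain the local images cropped from one reference image into an
    intermediate video segment: adjacent crops are bridged by interpolated
    frames, so every frame of the segment can differ from the others."""
    clip = [local_images[0]]  # first local image is the start-frame image
    for start, end in zip(local_images, local_images[1:]):
        clip.extend(interpolate_frames(start, end, num_interp))
        clip.append(end)      # last local image becomes the end-frame image
    return clip
```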
Besides reference images, the reference objects may include audio. The user may or may not modify the initial audio segment corresponding to the audio. If not, the multiple intermediate video segments in the video editing track can be spliced together, and the splicing result can then be superimposed with the initial audio segment in the audio editing track to generate the corresponding video. The superimposing can be frame-by-frame and sequential, i.e., the i-th frame of the video to be synthesized is superimposed with the i-th frame of the initial multimedia segment.
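The splicing-and-superimposing step can be sketched in the same toy model. The pairing below stands in for whatever per-frame compositing the platform actually performs and is purely illustrative:

```python
def splice_clips(clips):
    """Concatenate the intermediate video segments along the video track."""
    spliced = []
    for clip in clips:
        spliced.extend(clip)
    return spliced


def superimpose(video_frames, audio_frames):
    """Frame-by-frame sequential superimposing: pair the i-th video frame
    with the i-th frame of the initial audio segment. How unequal lengths
    are handled (padding, looping, truncation) is not specified by the
    patent; this sketch simply truncates to the shorter stream."""
    return [(v, a) for v, a in zip(video_frames, audio_frames)]
```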
After the video is generated, it can be used as a historical video, and a video template can be constructed based on the video attribute data of the historical video, the object attribute data of the at least two reference objects it contains, and the object editing data corresponding to the at least two editing tracks used to generate it. The video attribute data includes, but is not limited to, the video type, size, and background color; the object attribute data includes, but is not limited to, the index address, name, and type; the object editing data includes, but is not limited to, the position of an object in its track and the brightness and color of the object.
A structural diagram of a video template provided by an embodiment of the present application is shown in Fig. 2. The structure of the video template may be defined as an XML structure or a JSON structure. Some of the parameters in the video template are described as follows:
Video description: type is the type of the exported video, which may be MP4 or MOV; size is the size of the exported video, for example 1028 × 720; background is the background color of the exported video, which may be black or white.
Resource: the reference objects are transmitted to the multimedia platform, and a plurality of resources can be provided; resource path and resource name: a resource URL path uploaded by a user and an uploaded resource name; resource type: can be pictures, video and audio; for the case of multiple tracks, for example, the picture in the first track can be used to determine the key frame of the target video, and the picture in the second track can be used as a picture-in-picture (pip) or a Public Network Group (PNG) picture with a transparent channel, and can also be used as a static special effect; videos can be overlapped to make special effects; audio may be used as background music.
The generator, the playlist, and the multitrack form a multi-level hierarchy: one multitrack contains multiple playlists, and one playlist contains multiple generators.
Generator: a wrapper around the resources transmitted to the multimedia platform and the filters applied to them. A filter is a class defined by the platform; each filter has a method for processing resources. A filter can be understood as a tool for processing resources: for example, some filters are responsible for cropping resources, others for rendering them, and so on.
Playlist: a playlist can be understood as one track among the multiple tracks.
Multi-track: each track is composed of a playlist + a filter, and the filter processes not a single resource but the content of the whole track, for example: the entire track corresponds to the in-field, out-field, brightness and color of the video.
In addition, after the configuration parameters of the at least two reference objects uploaded through the video editing interface are acquired, it can be determined whether the configuration parameters conform to a target data format;
if not, the configuration parameters undergo data format conversion to generate configuration parameters that conform to the target data format.
Specifically, when the multimedia platform generates a video template, the template structure can be defined as an XML or JSON structure, and when parsing user-uploaded configuration parameters the platform can only parse data in these two structures. Therefore, when uploading configuration parameters, the user can convert them into either of the two structures and then upload the converted data through the video editing interface. Alternatively, the user can upload configuration parameters in any structure; after receiving them, the multimedia platform determines whether their structure is the target data structure (target data format). If so, it can parse and process them directly; if not, it converts the data into the target data format and then parses the converted data.
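A minimal sketch of this check-and-convert step, assuming JSON is the target format (the platform could equally well target XML), follows. Only a flat one-level XML payload is handled; a production converter would be schema-driven.

```python
import json
import xml.etree.ElementTree as ET


def is_target_format(payload: str) -> bool:
    """Return True when the uploaded configuration parameters already
    parse as JSON, the target data format assumed by this sketch."""
    try:
        json.loads(payload)
        return True
    except ValueError:
        return False


def to_target_format(payload: str) -> str:
    """Pass JSON payloads through unchanged; convert a flat XML payload
    such as '<config><path>u</path></config>' into JSON."""
    if is_target_format(payload):
        return payload
    root = ET.fromstring(payload)
    return json.dumps({child.tag: child.text for child in root})
```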
By providing video templates to the user, the whole video production process requires the user only to upload the corresponding configuration parameters for the target video template, without participating in the subsequent complex production process. Interaction with the user is still embodied even while the complexity of production is reduced for the user, which helps improve the user's sense of participation in video production.
Step 104, parsing the configuration parameters to obtain the object types and index addresses of the at least two reference objects, acquiring the at least two reference objects according to the index addresses, and loading the at least two reference objects onto at least two editing tracks according to the object types.
Specifically, in the embodiments of the present application, when producing a video based on at least two reference objects with the help of editing tracks, after the configuration parameters of the at least two reference objects uploaded by the user are acquired, the configuration parameters can be parsed to obtain each reference object's object type and index address. The object type (image, audio, video, etc.) is used to load the reference object onto the corresponding editing track; the index address is used to read the reference object. Therefore, after the index addresses and object types are parsed out, the reference objects can be acquired from the index addresses and each loaded onto its corresponding editing track according to its object type, generating initial multimedia segments. The presentation time, position, and effects of the reference objects in the editing tracks can then be edited based on the object editing data in the target video template, yielding a target video that matches the expected effect.
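The parsing step above could be sketched as follows. The JSON shape is an assumption carried over from the template sketch; the patent only requires that each reference object's type and index address be recoverable from the parameters.

```python
import json


def parse_config_parameters(payload: str):
    """Extract (object_type, index_address) pairs from uploaded
    configuration parameters, so each reference object can later be
    fetched by address and routed to a track by type."""
    resources = json.loads(payload)["resources"]
    return [(res["type"], res["path"]) for res in resources]
```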
In a specific implementation, loading the at least two reference objects into at least two editing tracks according to the object type includes:
when the object type of a target reference object among the at least two reference objects is image, determining whether the target reference object belongs to a target image category;
if yes, loading the target reference object to a first editing track;
if not, loading the target reference object to a second editing track;
and loading the target reference object to a third editing track under the condition that the object type is audio.
Specifically, as mentioned above, the at least two reference objects include, but are not limited to, at least two reference images, an audio and/or video segment, and the like.
For example, one part of the reference images mainly serves to determine the key video frames for producing the target video, while another part is mainly used to produce the target video's related special effects, and audio mostly serves as the target video's background music. Therefore, after the at least two reference objects are acquired, the object type of each reference object is determined first. If a reference object's type is image (i.e., it is a reference image), its image category can be further determined: if the reference image belongs to the target image category, i.e., the category used for determining the key video frames of the target video, it is loaded onto the first editing track; if it does not, i.e., its main function may be producing video special effects rather than determining key frames, it is loaded onto the second editing track.
In addition, if the object type of a reference object is determined to be audio, the reference object is loaded to a third editing track. The first editing track and the second editing track are video editing tracks, and the third editing track is an audio editing track; each editing track can be loaded with one or at least two reference objects, which can be determined according to actual requirements.
In the embodiment of the application, after a reference object is loaded to the corresponding editing track, the corresponding initial multimedia segment can be generated. In practical applications, the target video template can contain the durations of the initial multimedia segments corresponding to reference objects of different object types, so that after different reference objects are loaded to the editing tracks, initial multimedia segments of the corresponding durations can be generated. For example, after a reference image is loaded to an editing track, an initial video segment with a duration of 4 seconds corresponding to the reference image is generated, and every video frame in the initial video segment is the reference image; after a piece of audio is loaded to the audio editing track, an initial audio segment with a duration of 8 seconds corresponding to the audio is generated, where the initial audio segment may be a part of the audio or a splicing result of multiple sections of the audio.
In a specific application process, the preset durations of different video segments in the first editing track, the second editing track or the third editing track may be determined according to actual requirements, and are not limited herein.
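The two examples above can be sketched as follows (illustrative; the frame/sample representation and function names are assumptions — the patent only specifies that the template prescribes the durations):

```python
# Hypothetical sketch: build initial segments of template-prescribed duration.

def image_to_clip(image_frame, duration_s, fps):
    """Repeat a still image so that every frame of the initial
    video segment is the reference image."""
    n_frames = round(duration_s * fps)
    return [image_frame] * n_frames

def audio_to_clip(samples, target_len):
    """Trim or splice audio to the template duration: either a part
    of the audio, or multiple sections of it spliced together."""
    if len(samples) >= target_len:
        return samples[:target_len]
    reps = -(-target_len // len(samples))  # ceiling division
    return (samples * reps)[:target_len]
```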
Step 106, processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generating a target video and returning.
Specifically, the object editing data, that is, the relevant editing data characterizing the editing of the reference object in the editing track, includes, but is not limited to, the position of the object in the track, the brightness and color of the object, and the cropping mode of the object. After the multimedia platform loads the at least two reference objects to the at least two editing tracks, the reference objects in each editing track can be processed according to the object editing data of each editing track contained in the target video template, so as to generate a target video and return the target video.
In specific implementation, according to the object editing data of each editing track included in the target video template, processing the reference object in each editing track to generate a target video and return the target video, including:
acquiring at least two target images corresponding to reference images in a first editing track according to object editing data corresponding to each editing track contained in the target video template, wherein the reference images are one of the at least two reference objects;
processing the reference image in the first editing track based on the reference image and the at least two target images to generate a first multimedia fragment;
and generating a target video based on the first multimedia segment and a second multimedia segment corresponding to the reference object in the second editing track, and returning.
Further, obtaining the at least two target images corresponding to the reference image in the first editing track according to the object editing data corresponding to each editing track included in the target video template means performing local image interception on the reference image according to the position information, contained in the target video template, of the local images to be intercepted from the reference image in the first editing track, thereby obtaining the at least two target images corresponding to the reference image in the first editing track.
Specifically, during generation of the historical video, reference objects such as images, audio, or videos can be loaded to the corresponding editing tracks to generate initial video segments corresponding to the reference objects. If the user determines that an initial video segment does not meet the expected display effect, the user can submit a segment content modification request for the initial multimedia segment to the multimedia platform through the display interface of the multimedia platform. After receiving the request, the multimedia platform can display the reference image in the initial multimedia segment to the user through the display interface, and the user can then submit a local image interception instruction for that reference image through the display interface, so that local image interception is performed on the reference image to obtain the intercepted target images. Interpolation calculation can then be performed based on each reference image and its local images to obtain at least one interpolation image; one of the at least two local images is used as the start frame image, another local image is used as the end frame image, and the interpolation images together with the remaining local images are used as intermediate frame images for video segment synthesis, so as to realize video production.
Therefore, the video template generated based on the historical video may contain object configuration parameters, for example: the intercepted-region data used when local images are intercepted from the reference image, and data such as the interpolation coefficients and interpolation conversion values used when interpolation calculation is performed on the reference image and its local images. Processing the reference object in each editing track according to the object editing data corresponding to each editing track contained in the target video template then means performing local image interception on the reference image in the first editing track according to the intercepted-region data corresponding to the first editing track in the target video template, to obtain at least two target images. Processing can then be performed based on the at least two target images and the reference image to generate a first multimedia segment, and the target video can be generated and output based on the first multimedia segment and the second multimedia segment corresponding to the reference object in the second editing track.
According to the method and apparatus of the present application, interpolation calculation is performed by an interpolation algorithm on the target images intercepted from the reference image, interpolation images are determined according to the interpolation calculation results, and the first multimedia segment is synthesized from the interpolation images and the target images. A target video with a complex display effect can thus be realized quickly from a small number of target images and their configuration parameters, saving the time required to develop a video with complex transformations and improving the convenience and efficiency of video production.
Wherein processing the reference image in the first editing track based on the reference image and the at least two target images to generate a first multimedia clip comprises:
generating at least one interpolation image between any two adjacent target images based on an interpolation algorithm and configuration parameters of the at least two target images;
and generating a first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image, wherein the at least one interpolation image is a local image of the reference object.
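As an illustrative sketch of the step above (the linear-interpolation scheme and parameter names are assumptions; the patent only states that interpolation coefficients and conversion values from the template are applied to the target images' configuration parameters), interpolating crop-rectangle parameters between two adjacent target images might look like:

```python
# Hypothetical sketch: linearly interpolate configuration parameters
# (coordinates and size of the intercepted region) between two
# adjacent target images.

def interpolate_params(start, end, n_between):
    """Return n_between intermediate parameter sets between start and end."""
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # interpolation coefficient in (0, 1)
        frames.append({
            k: start[k] + t * (end[k] - start[k])
            for k in ("x", "y", "w", "h")
        })
    return frames
```

Each interpolated parameter set then defines one local image to intercept from the reference image, consistent with the description below.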
Specifically, as described above, the video template generated based on the historical video may include, as object configuration parameters, data such as the interpolation coefficients and interpolation conversion values used for performing interpolation calculation on the reference image and the local images by the interpolation algorithm. Therefore, after the at least two target images are obtained, interpolation calculation can be performed according to the interpolation coefficients, the interpolation conversion values, the configuration parameters of the at least two target images, and the reference image contained in the target video template, to generate at least one interpolation image between any two adjacent target images. A first multimedia segment corresponding to the reference image can then be synthesized based on the target images and the interpolation images, and the video frames in the first multimedia segment may differ from one another.
In practical application, in the process of making a history video, after a target image is captured, configuration parameters of the target image need to be determined, and the configuration parameters of each target image can be determined according to the position relationship between a local area to be captured of the target image and a display canvas of a reference image in a display interface.
The configuration parameters include, but are not limited to, coordinate parameters of the target image, length and width of the target image, or scaling ratios of the length and width of the target image with respect to the length and width of the reference image, and the like, so that the parameters such as coordinates, length, or width of at least two target images can be determined according to a position relationship between the local region to be intercepted and the display canvas of the reference image in the display interface.
In practical application, the coordinate parameter may be a vertex coordinate of the target image in the display interface, specifically, a vertex of an upper left corner of the display interface may be used as a coordinate origin, a planar rectangular coordinate system is established, and the coordinate parameter of the target image is determined according to a position of the vertex of the upper left corner of the target image in the rectangular coordinate system, where the coordinate parameter is a horizontal and vertical coordinate of the vertex of the upper left corner of the target image.
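For instance (an illustrative sketch; the top-left-origin convention is taken from the description above, while the function and key names are assumptions), the configuration parameters of a target image could be derived from its crop region and the display canvas as:

```python
# Hypothetical sketch: coordinate parameter = top-left vertex of the
# target image in a plane coordinate system whose origin is the
# top-left corner of the display interface; scaling ratios relate the
# crop size to the reference image's canvas size.

def coord_params(crop_left, crop_top, crop_w, crop_h, canvas_w, canvas_h):
    return {
        "x": crop_left,                 # abscissa of top-left vertex
        "y": crop_top,                  # ordinate of top-left vertex
        "scale_w": crop_w / canvas_w,   # width ratio vs. reference image
        "scale_h": crop_h / canvas_h,   # height ratio vs. reference image
    }
```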
Apart from the coordinate parameters, the determination of the length, width, scaling ratio, and other parameters in the configuration parameters is similar to that of the coordinate parameters, and is not repeated here.
After the configuration parameters of the target image are determined, interpolation calculation can be carried out according to the configuration parameters, so that an interpolation image is determined according to the interpolation calculation result, and a video is manufactured according to the interpolation image and the target image, wherein the interpolation image is also a local image of a reference object.
Since the configuration parameters of a target image include, but are not limited to, the coordinate parameters of the target image and its length and width, or the scaling ratios of its length and width relative to those of the reference object, interpolation calculation can be performed according to the configuration parameters of the target images, and the configuration parameters of the resulting interpolation image may likewise include the coordinate parameters of the interpolation image and its length and width, or the scaling ratios of its length and width relative to those of the reference object. That is, the coordinate parameters of the interpolation image are obtained by performing interpolation calculation on the coordinate parameters in the configuration parameters of the target images; similarly, the width or height of the interpolation image is obtained by performing interpolation calculation on the width or height of the target images, respectively.
After the configuration parameters of the interpolation image are determined, the local area to be intercepted of the reference image in the display interface can be determined according to the configuration parameters, and the local image interception is carried out on the reference image based on the local area to be intercepted, so that at least one intercepted interpolation image is obtained.
Therefore, in addition to obtaining the interpolation images by interpolation calculation from the interpolation coefficients, the interpolation conversion values, and the configuration parameters of the target images, the target video template may further include the configuration parameters of each interpolation image used in the historical video generation process. In that case, the embodiment of the present application may directly perform local image interception on the reference image according to the configuration parameters of each interpolation image to obtain the required interpolation images.
In practical applications, the number of the interpolation images between the target images may be determined according to the duration and the frame rate of the multimedia segment, and is not limited herein.
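For example (a hedged illustration; the exact counting convention is an assumption), if a segment consists of one start frame, one end frame, and the interpolated intermediate frames, the number of interpolation images between two target images follows directly from the segment duration and frame rate:

```python
# Hypothetical sketch: total frames minus the two target images
# (start frame and end frame) gives the interpolated frame count.

def n_interpolated(duration_s, fps):
    total_frames = round(duration_s * fps)
    return max(total_frames - 2, 0)
```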
In specific implementation, generating a first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image includes:
taking a first target image of the at least two target images as a starting frame image of a first multimedia segment, taking a second target image of the at least two target images as an ending frame image of the first multimedia segment, and taking the at least one interpolation image as an intermediate frame image of the first multimedia segment;
and generating a first multimedia segment corresponding to the reference image based on the starting frame image, the intermediate frame image and the ending frame image.
Specifically, after obtaining at least one interpolated image, a first target image of the at least two target images may be used as a start frame image of a first multimedia segment, a second target image of the at least two target images may be used as an end frame image of the first multimedia segment, the at least one interpolated image may be used as an intermediate frame image of the first multimedia segment, and the first multimedia segment may be produced based on the start frame image, the intermediate frame image, and the end frame image.
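Putting the pieces together (illustrative only; the frame-list representation is an assumption), the first multimedia segment is simply the ordered sequence of start frame, intermediate frames, and end frame:

```python
# Hypothetical sketch: assemble the first multimedia segment from the
# first target image (start frame), the interpolation images
# (intermediate frames), and the second target image (end frame).

def build_segment(target_images, interpolated_images):
    start, end = target_images[0], target_images[-1]
    return [start, *interpolated_images, end]
```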
Further, generating the target video based on the first multimedia segment and the second multimedia segment corresponding to the reference object in the second editing track and returning means superposing the first multimedia segment and the second multimedia segment corresponding to the reference object in the second editing track to generate the target video, which is then returned.
Specifically, the at least two reference objects include at least two reference images. If the at least two reference images include reference images of at least two target image categories, those reference images can be loaded to the same editing track (the first editing track), and initial video segments corresponding to them are generated. The reference images of the target image categories can then be processed in sequence: each reference image is intercepted using the intercepted-region data in the target video template to obtain at least two local images, interpolation calculation is performed based on the local images to obtain interpolation images, and a first multimedia segment corresponding to the reference image is then generated based on the at least two local images and the interpolation images.
Furthermore, since the first editing track then contains at least two first multimedia segments corresponding to the respective reference images, when the target video is generated, the first multimedia segments in the first editing track can first be spliced to produce a video to be synthesized. If the initial multimedia segment corresponding to the reference object in the second editing track has not been modified, the video to be synthesized and that initial multimedia segment (the second multimedia segment) can be superposed to generate the target video. The superposition can be frame-by-frame and sequential, that is, the i-th frame of the video to be synthesized is superposed with the i-th frame of the initial multimedia segment; the target video is then generated and returned.
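The frame-by-frame superposition described above can be sketched as follows (illustrative; `blend` stands in for whatever compositing operation the template prescribes, which is an assumption on our part):

```python
# Hypothetical sketch: splice per-image segments, then superpose the
# result with the second track's segment frame by frame.

def splice(segments):
    """Concatenate the first multimedia segments into one frame list."""
    return [frame for seg in segments for frame in seg]

def overlay_clips(clip_a, clip_b, blend):
    """Frame i of the result is blend(a[i], b[i]); the shorter clip
    bounds the output length."""
    return [blend(a, b) for a, b in zip(clip_a, clip_b)]
```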
In addition, after loading the at least two reference objects into at least two editing tracks according to the object type, the method further includes:
generating initial multimedia fragments corresponding to the at least two reference objects;
correspondingly, after the generating the first multimedia segment, the method further includes:
modifying an initial multimedia segment corresponding to a reference object of a second editing track according to an object editing parameter corresponding to the second editing track contained in the target video template to generate a third multimedia segment;
and overlapping the first multimedia segment and the third multimedia segment to generate a target video and returning.
Specifically, in the process of producing the historical video, the user can modify the initial multimedia segment corresponding to the reference image to generate the first multimedia segment corresponding to the reference image, and can also modify the initial multimedia segments corresponding to the other reference objects to obtain the corresponding second multimedia segments; the multimedia segment corresponding to the reference image and the multimedia segments corresponding to the other reference objects are then superposed to generate the target video.
Therefore, in the process of making the target video, the initial multimedia segment corresponding to the reference object of the second editing track can be modified according to the object editing parameters corresponding to the second editing track contained in the target video template, so as to generate a third multimedia segment; and then, similarly, splicing the first multimedia clips respectively corresponding to at least two reference images in the first editing track to generate a video to be synthesized, and then overlapping the video to be synthesized and a third multimedia clip corresponding to a reference object in the second editing track to generate a target video.
By providing video templates to the user, the whole video production process only requires the user to upload the corresponding configuration parameters for the target video template, without participating in the subsequent complex production process. Interaction with the user is still embodied while the complexity of video production is reduced for the user, which is conducive to improving the user's sense of participation in the video production process.
An embodiment of the application provides a video production method and apparatus. The video production method includes: responding to a video editing interface call instruction, obtaining configuration parameters of at least two reference objects uploaded through the video editing interface, where the configuration parameters are determined according to a target video template; parsing the configuration parameters to obtain the object types and index addresses of the at least two reference objects; obtaining the at least two reference objects according to the index addresses, and loading them to at least two editing tracks according to the object types; and processing the reference objects in each editing track according to the object editing data of each editing track contained in the target video template, generating a target video, and returning it.
According to the video making method provided by the embodiment of the application, the plurality of video templates are generated in advance for a user to select, the user can directly upload the configuration parameters of at least two reference objects for video making aiming at the target video template in a mode of calling a video editing interface under the condition that the video making requirement exists, and then the multimedia platform can generate the target video meeting the expectation based on the configuration parameters and the target video template.
In the process, only configuration parameters required by video production need to be submitted by a user without participating in a subsequent complex video production process; in addition, the video production method provided by the embodiment of the application has universal applicability for different types of video production scenes, is beneficial to quickly realizing the target video with a complex display effect through a small number of reference images, and is beneficial to saving the time required for developing the video with complex transformation, thereby being beneficial to improving the convenience and efficiency of video production.
Referring to fig. 3, the video production method provided in the embodiment of the present application is further described by taking an application of the video production method in a video production process in the video field as an example. Fig. 3 shows a flow chart of a processing procedure of applying a video production method provided in an embodiment of the present application to a video production process, and specifically includes the following steps:
step 302, obtaining historical video.
Step 304, constructing a video template based on the video attribute data of the historical video, the object attribute data of at least two reference objects contained in the historical video and the object editing data corresponding to at least two editing tracks used for generating the historical video.
Step 306, receiving a video editing interface call instruction of a third party.
Step 308, acquiring configuration parameters of at least two reference objects uploaded by the third party through the video editing interface, wherein the configuration parameters are determined according to a target video template.
Step 310, analyzing the configuration parameters to obtain the object types and the index addresses of the at least two reference objects.
Step 312, reading the at least two reference objects according to the index addresses, and loading the at least two reference objects to at least two editing tracks according to the object types.
Step 314, processing the reference object in each editing track according to the object editing data of each editing track included in the target video template, and generating a target video.
Step 316, returning the target video to the third party.
According to the video making method provided by the embodiment of the application, the plurality of video templates are generated in advance for a user to select, the user can directly upload the configuration parameters of at least two reference objects for video making aiming at the target video template in a mode of calling a video editing interface under the condition that the video making requirement exists, and then the multimedia platform can generate the target video meeting the expectation based on the configuration parameters and the target video template.
In the process, only configuration parameters required by video production need to be submitted by a user without participating in a subsequent complex video production process; in addition, the video production method provided by the embodiment of the application has universal applicability for different types of video production scenes, is beneficial to quickly realizing the target video with a complex display effect through a small number of reference images, and is beneficial to saving the time required for developing the video with complex transformation, thereby being beneficial to improving the convenience and efficiency of video production.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video production apparatus, and fig. 4 shows a schematic structural diagram of a video production apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
an obtaining module 402, configured to respond to a video editing interface call instruction, to obtain configuration parameters of at least two reference objects uploaded through the video editing interface, where the configuration parameters are determined according to a target video template;
an analyzing module 404 configured to analyze the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtain the at least two reference objects according to the index addresses, and load the at least two reference objects to at least two editing tracks according to the object types;
and the processing module 406 is configured to process the reference object in each editing track according to the object editing data of each editing track included in the target video template, generate a target video, and return the target video.
Optionally, the processing module 406 includes:
the acquisition sub-module is configured to acquire at least two target images corresponding to reference images in a first editing track according to object editing data corresponding to each editing track included in the target video template, wherein the reference images are one of the at least two reference objects;
a generating sub-module configured to process the reference image in the first editing track based on the reference image and the at least two target images, generating a first multimedia segment;
and the return submodule is configured to generate a target video and return the target video based on the first multimedia segment and a second multimedia segment corresponding to the reference object in the second editing track.
Optionally, the obtaining sub-module is further configured to:
and according to the position information of the local image to be intercepted corresponding to the reference image in the first editing track, which is contained in the target video template, carrying out local image interception on the reference image to obtain at least two target images corresponding to the reference image in the first editing track.
Optionally, the generation submodule is further configured to:
generating at least one interpolation image between any two adjacent target images based on an interpolation algorithm and configuration parameters of the at least two target images;
and generating a first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image, wherein the at least one interpolation image is a local image of the reference object.
Optionally, the generation submodule is further configured to:
taking a first target image of the at least two target images as a starting frame image of a first multimedia segment, taking a second target image of the at least two target images as an ending frame image of the first multimedia segment, and taking the at least one interpolation image as an intermediate frame image of the first multimedia segment;
and generating a first multimedia segment corresponding to the reference image based on the starting frame image, the intermediate frame image and the ending frame image.
Optionally, the processing module is further configured to:
and overlapping the first multimedia clip and a second multimedia clip corresponding to the reference object in the second editing track to generate a target video and returning.
Optionally, the video production apparatus further includes a first generation module configured to: generating initial multimedia fragments corresponding to the at least two reference objects;
a first generation module configured to:
modifying an initial multimedia segment corresponding to a reference object of a second editing track according to an object editing parameter corresponding to the second editing track contained in the target video template to generate a third multimedia segment;
and overlapping the first multimedia segment and the third multimedia segment to generate a target video and returning.
Optionally, the video production apparatus further includes a construction module configured to:
acquiring a historical video, and constructing a video template based on video attribute data of the historical video, object attribute data of at least two reference objects contained in the historical video and object editing data corresponding to at least two editing tracks for generating the historical video.
Optionally, the video production apparatus further includes a conversion module configured to:
judging whether the configuration parameters belong to a target data format or not;
if not, carrying out data format conversion on the configuration parameters to generate the configuration parameters conforming to the target data format.
Optionally, the parsing module 404 is further configured to:
determining whether a target reference object of the at least two reference objects belongs to a target image class in case that an object type of the target reference object is an image;
if yes, loading the target reference object to a first editing track;
if not, loading the target reference object to a second editing track;
and loading the target reference object to a third editing track under the condition that the object type is audio.
The above is a schematic scheme of a video production apparatus of the present embodiment. It should be noted that the technical solution of the video production apparatus and the technical solution of the video production method belong to the same concept, and details that are not described in detail in the technical solution of the video production apparatus can be found in the description of the technical solution of the video production method.
FIG. 5 illustrates a block diagram of a computing device 500 provided according to an embodiment of the present application. The components of the computing device 500 include, but are not limited to, a memory 510 and a processor 520. Processor 520 is coupled to memory 510 via bus 530, and database 550 is used to store data.
Computing device 500 also includes access device 540, access device 540 enabling computing device 500 to communicate via one or more networks 560. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 540 may include one or more of any type of network interface, e.g., a Network Interface Card (NIC), wired or wireless, such as an IEEE802.11 Wireless Local Area Network (WLAN) wireless interface, a worldwide interoperability for microwave access (Wi-MAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of computing device 500 and other components not shown in FIG. 5 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 5 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 500 may also be a mobile or stationary server.
The processor 520 is configured to execute computer-executable instructions, wherein the steps of the video production method are implemented when the processor executes the computer-executable instructions.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video production method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the video production method.
An embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions, which when executed by a processor, implement the steps of the video production method.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the video production method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the video production method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will appreciate that the embodiments of the present application are not limited by the order of acts described, because some steps may be performed in other orders or simultaneously according to the embodiments of the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required to implement the embodiments of the application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. The alternative embodiments are not described exhaustively, and the application is not limited to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments of the application and their practical application, thereby enabling others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (13)

1. A method of video production, comprising:
responding to a video editing interface calling instruction, and acquiring configuration parameters of at least two reference objects uploaded through a video editing interface, wherein the configuration parameters are determined according to a target video template;
analyzing the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtaining the at least two reference objects according to the index addresses, and loading the at least two reference objects to at least two editing tracks according to the object types;
and processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template to generate a target video and return the target video.
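As a minimal illustration of the parsing step in claim 1 above, the configuration parameters might be decoded into object types and index addresses as follows. This is a sketch only: the JSON schema and field names (`objects`, `type`, `address`) are assumptions, since the patent does not fix a concrete configuration format.

```python
import json

def parse_config(config_json: str):
    """Parse configuration parameters into (object_type, index_address) pairs.

    The JSON schema and field names here are hypothetical; the patent does
    not specify a concrete configuration format.
    """
    params = json.loads(config_json)
    return [(obj["type"], obj["address"]) for obj in params["objects"]]

# Example configuration with two reference objects (addresses are made up)
config = json.dumps({"objects": [
    {"type": "image", "address": "cdn://assets/cover.png"},
    {"type": "audio", "address": "cdn://assets/theme.mp3"},
]})
objects = parse_config(config)
```

In a full system, each index address would then be dereferenced to fetch the reference object before loading it onto an editing track.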
2. The video production method according to claim 1, wherein the processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template to generate and return the target video comprises:
acquiring at least two target images corresponding to reference images in a first editing track according to object editing data corresponding to each editing track contained in the target video template, wherein the reference images are one of the at least two reference objects;
processing the reference image in the first editing track based on the reference image and the at least two target images to generate a first multimedia fragment;
and generating and returning a target video based on the first multimedia segment and a second multimedia segment corresponding to the reference object in the second editing track.
3. The video production method according to claim 2, wherein the obtaining at least two target images corresponding to the reference image in the first editing track according to the object editing data corresponding to each editing track included in the target video template comprises:
and according to the position information, contained in the target video template, of the local image to be cropped corresponding to the reference image in the first editing track, performing local image cropping on the reference image to obtain at least two target images corresponding to the reference image in the first editing track.
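The cropping step of claim 3 above might be sketched as follows, with an image modeled as a 2-D list of pixel rows and each region given as hypothetical (top, left, height, width) coordinates; in practice the region coordinates would come from the target video template, and a real implementation would operate on decoded image buffers rather than nested lists.

```python
def crop_regions(image, regions):
    """Crop target sub-images out of a reference image.

    `image` is a 2-D list of pixel rows; each region is a hypothetical
    (top, left, height, width) tuple taken from the video template.
    """
    return [
        [row[left:left + w] for row in image[top:top + h]]
        for (top, left, h, w) in regions
    ]

# A 4x4 toy "image" whose pixel value encodes its row and column
ref = [[r * 10 + c for c in range(4)] for r in range(4)]
targets = crop_regions(ref, [(0, 0, 2, 2), (2, 2, 2, 2)])
```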
4. The method of claim 2, wherein the processing the reference image in the first edit track based on the reference image and the at least two target images to generate a first multimedia clip comprises:
generating at least one interpolation image between any two adjacent target images based on an interpolation algorithm and configuration parameters of the at least two target images;
and generating a first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image, wherein the at least one interpolation image is a local image of the reference object.
5. The method of claim 4, wherein the generating the first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolated image comprises:
taking a first target image of the at least two target images as a starting frame image of a first multimedia segment, taking a second target image of the at least two target images as an ending frame image of the first multimedia segment, and taking the at least one interpolation image as an intermediate frame image of the first multimedia segment;
and generating a first multimedia segment corresponding to the reference image based on the starting frame image, the intermediate frame image and the ending frame image.
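The frame construction of claims 4 and 5 above, i.e. a start frame, interpolated intermediate frames, and an end frame, can be sketched with plain linear interpolation. This is an assumption for illustration only: the patent says an interpolation algorithm is used but does not name one, and frames are modeled here as flat lists of pixel values.

```python
def lerp_frames(start, end, n_mid):
    """Build a clip: start frame, n_mid interpolated frames, end frame.

    Linear interpolation stands in for whatever interpolation algorithm
    the template would actually configure; frames are flat pixel lists.
    """
    frames = [start]
    for i in range(1, n_mid + 1):
        t = i / (n_mid + 1)  # fraction of the way from start to end
        frames.append([a + (b - a) * t for a, b in zip(start, end)])
    frames.append(end)
    return frames

# Two 2-pixel key frames with three interpolated frames between them
clip = lerp_frames([0.0, 0.0], [1.0, 2.0], 3)
```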
6. The video production method according to claim 2 or 3, wherein the generating and returning a target video based on the first multimedia clip and a second multimedia clip corresponding to a reference object in a second editing track comprises:
and overlapping the first multimedia clip and a second multimedia clip corresponding to the reference object in the second editing track to generate and return a target video.
7. A method of video production according to claim 2 or 3, wherein said loading said at least two reference objects into at least two editing tracks according to said object type further comprises:
generating initial multimedia fragments corresponding to the at least two reference objects;
correspondingly, after the generating the first multimedia segment, the method further includes:
modifying an initial multimedia segment corresponding to a reference object of a second editing track according to an object editing parameter corresponding to the second editing track contained in the target video template to generate a third multimedia segment;
and overlapping the first multimedia segment and the third multimedia segment to generate and return a target video.
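The overlapping step in claims 6 and 7 above might be sketched as a frame-by-frame blend of two equal-length clips. A fixed-alpha blend is an assumption: the patent does not specify the compositing rule used when superimposing the two editing tracks.

```python
def overlay(base_frames, top_frames, alpha=0.5):
    """Blend two equal-length clips frame by frame.

    A fixed-alpha blend is only a stand-in for whatever compositing rule
    the video template would define for the two editing tracks.
    """
    return [
        [(1 - alpha) * b + alpha * t for b, t in zip(bf, tf)]
        for bf, tf in zip(base_frames, top_frames)
    ]

# One-frame clips of two pixels each, blended 50/50
merged = overlay([[0.0, 1.0]], [[1.0, 0.0]], alpha=0.5)
```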
8. The video production method according to claim 1, further comprising:
acquiring a historical video, and constructing a video template based on video attribute data of the historical video, object attribute data of at least two reference objects contained in the historical video and object editing data corresponding to at least two editing tracks for generating the historical video.
9. The method according to claim 1, wherein said obtaining configuration parameters of at least two reference objects uploaded through the video editing interface comprises:
judging whether the configuration parameters belong to a target data format or not;
if not, carrying out data format conversion on the configuration parameters to generate the configuration parameters conforming to the target data format.
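The format check and conversion of claim 9 above might look like the following sketch. JSON is assumed purely for illustration; the patent does not name the target data format.

```python
import json

def ensure_json(config):
    """Return the configuration as a JSON string (the assumed target format).

    A dict is converted to JSON; a string is validated and passed through.
    JSON as the target data format is an assumption for illustration.
    """
    if isinstance(config, str):
        json.loads(config)  # raises ValueError if not valid JSON
        return config
    return json.dumps(config)
```

In use, `ensure_json({"a": 1})` converts the dict, while an already-valid JSON string is returned unchanged.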
10. The method of claim 1, wherein said loading the at least two reference objects into at least two edit tracks according to the object type comprises:
determining, in the case that an object type of a target reference object of the at least two reference objects is an image, whether the target reference object belongs to a target image class;
if yes, loading the target reference object to a first editing track;
if not, loading the target reference object to a second editing track;
and loading the target reference object to a third editing track under the condition that the object type is audio.
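The track routing of claim 10 above can be sketched as a small dispatch function. The numeric track identifiers simply mirror the claim's first, second, and third editing tracks; how a "target image class" is detected is left out, since the patent does not specify the classifier.

```python
def assign_track(obj_type, is_target_image=False):
    """Route a reference object to an editing track by its type.

    Mirrors claim 10: target-class images go to the first track, other
    images to the second, audio to the third. Detection of the target
    image class is assumed to happen elsewhere.
    """
    if obj_type == "image":
        return 1 if is_target_image else 2
    if obj_type == "audio":
        return 3
    raise ValueError(f"unsupported object type: {obj_type}")
```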
11. A video production apparatus, comprising:
the acquisition module is configured to respond to a video editing interface calling instruction and acquire configuration parameters of at least two reference objects uploaded through the video editing interface, wherein the configuration parameters are determined according to a target video template;
the analysis module is configured to analyze the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtain the at least two reference objects according to the index addresses, and load the at least two reference objects to at least two editing tracks according to the object types;
and the processing module is configured to process the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generate a target video and return the target video.
12. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, wherein the processor implements the steps of the video production method according to any one of claims 1 to 10 when executing the computer-executable instructions.
13. A computer-readable storage medium storing computer instructions which, when executed by a processor, carry out the steps of the video production method of any one of claims 1 to 10.
CN202111283301.6A 2021-11-01 2021-11-01 Video production method and device Active CN113992866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111283301.6A CN113992866B (en) 2021-11-01 2021-11-01 Video production method and device

Publications (2)

Publication Number Publication Date
CN113992866A true CN113992866A (en) 2022-01-28
CN113992866B CN113992866B (en) 2024-03-26

Family

ID=79745367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111283301.6A Active CN113992866B (en) 2021-11-01 2021-11-01 Video production method and device

Country Status (1)

Country Link
CN (1) CN113992866B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842325A (en) * 2012-08-28 2012-12-26 深圳市万兴软件有限公司 Method and device for managing audio and video editing tracks
CN110287368A (en) * 2019-05-31 2019-09-27 上海萌鱼网络科技有限公司 Short video template design diagram generating apparatus and short video template generating method
CN112333536A (en) * 2020-10-28 2021-02-05 深圳创维-Rgb电子有限公司 Audio and video editing method, equipment and computer readable storage medium
CN112367551A (en) * 2020-10-30 2021-02-12 维沃移动通信有限公司 Video editing method and device, electronic equipment and readable storage medium
CN112954391A (en) * 2021-02-05 2021-06-11 北京百度网讯科技有限公司 Video editing method and device and electronic equipment
CN112995533A (en) * 2021-02-04 2021-06-18 上海哔哩哔哩科技有限公司 Video production method and device

Also Published As

Publication number Publication date
CN113992866B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
CN111935504B (en) Video production method, device, equipment and storage medium
CN111899322B (en) Video processing method, animation rendering SDK, equipment and computer storage medium
CN107770626A (en) Processing method, image synthesizing method, device and the storage medium of video material
CN109168026A (en) Instant video display methods, device, terminal device and storage medium
US20100085363A1 (en) Photo Realistic Talking Head Creation, Content Creation, and Distribution System and Method
CN112738627B (en) Play control method and device
CN112637670B (en) Video generation method and device
WO2019227429A1 (en) Method, device, apparatus, terminal, server for generating multimedia content
CN113542624A (en) Method and device for generating commodity object explanation video
CN113038185B (en) Bullet screen processing method and device
WO2018098340A1 (en) Intelligent graphical feature generation for user content
CN112995533A (en) Video production method and device
CN111835985A (en) Video editing method, device, apparatus and storage medium
CN112954459A (en) Video data processing method and device
CN113395569B (en) Video generation method and device
CN112988306B (en) Animation processing method and device
CN105760420B (en) Realize the method and device with multimedia file content interaction
CN114140564A (en) Expression creating method and device
KR101850285B1 (en) Device and method for generating video script, and video producing system and method based on video script, computer program media
CN114025103A (en) Video production method and device
CN117319582A (en) Method and device for human action video acquisition and fluent synthesis
CN113992866B (en) Video production method and device
CN115314732A (en) Multi-user collaborative film examination method and system
US20230419997A1 (en) Automatic Non-Linear Editing Style Transfer
CN114374872A (en) Video generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant