CN113992866B - Video production method and device - Google Patents

Video production method and device

Info

Publication number
CN113992866B
CN113992866B (application CN202111283301.6A)
Authority
CN
China
Prior art keywords
video
target
editing
image
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111283301.6A
Other languages
Chinese (zh)
Other versions
CN113992866A (en)
Inventor
常青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202111283301.6A
Publication of CN113992866A
Application granted
Publication of CN113992866B
Legal status: Active


Classifications

    • H ELECTRICITY — H04 ELECTRIC COMMUNICATION TECHNIQUE — H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 — Mixing
    • H04N 7/0127 — Conversion of standards by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • H04N 21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/440281 — Reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • H04N 21/47205 — End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiment of the application provides a video production method and device. The video production method includes: in response to a video editing interface call instruction, acquiring configuration parameters of at least two reference objects uploaded through the video editing interface, the configuration parameters being determined according to a target video template; parsing the configuration parameters to obtain object types and index addresses of the at least two reference objects; acquiring the at least two reference objects according to the index addresses and loading them onto at least two editing tracks according to the object types; and processing the reference objects in each editing track according to the object editing data of each editing track contained in the target video template, generating a target video, and returning it.

Description

Video production method and device
Technical Field
The embodiments of the application relate to the field of computer technology, and in particular to a video production method. One or more embodiments of the application further relate to a video production apparatus, a computing device, and a computer-readable storage medium.
Background
With the development of internet technology, more and more users record their lives through pictures, and users can also turn those pictures into photo-album videos, making a series of pictures into a video for playback.
There are album-video production tools on the market that can turn a group of pictures into a corresponding album video, but current tools usually require people to take part in a complex editing and production process. In addition, to present better visual effects to users, video-effect designers often design relatively dazzling visual effects for application developers to implement. With existing technology and interfaces, however, it is difficult to realize complex video visual effects quickly and simply; moreover, the video models used to implement video effects in the prior art lack general applicability and can hardly meet the demands of video design.
Disclosure of Invention
In view of this, the embodiment of the application provides a video production method. One or more embodiments of the application further relate to a video production apparatus, a computing device, and a computer-readable storage medium, so as to address the technical defects that, in the prior art, complex video visual effects cannot be achieved quickly and simply and video production methods lack universal applicability.
According to a first aspect of an embodiment of the present application, there is provided a video production method, including:
Responding to a video editing interface calling instruction, and acquiring configuration parameters of at least two reference objects uploaded through the video editing interface, wherein the configuration parameters are determined according to a target video template;
analyzing the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtaining the at least two reference objects according to the index addresses, and loading the at least two reference objects to at least two editing tracks according to the object types;
and processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generating a target video, and returning it.
According to a second aspect of embodiments of the present application, there is provided a video production apparatus, including:
the acquisition module is configured to respond to a video editing interface calling instruction and acquire configuration parameters of at least two reference objects uploaded through the video editing interface, wherein the configuration parameters are determined according to a target video template;
the analysis module is configured to analyze the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtain the at least two reference objects according to the index addresses, and load the at least two reference objects to at least two editing tracks according to the object types;
and the processing module is configured to process the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generate a target video, and return the target video.
According to a third aspect of embodiments of the present application, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, wherein the processor, when executing the computer-executable instructions, performs the steps of the video production method.
According to a fourth aspect of embodiments of the present application, there is provided a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the steps of the video production method.
An embodiment of the application provides a video production method and device. The video production method includes: in response to a video editing interface call instruction, obtaining configuration parameters of at least two reference objects uploaded through a video editing interface, the configuration parameters being determined according to a target video template; parsing the configuration parameters to obtain object types and index addresses of the at least two reference objects; obtaining the at least two reference objects according to the index addresses and loading them onto at least two editing tracks according to the object types; and processing the reference objects in each editing track according to the object editing data of each editing track contained in the target video template, generating a target video, and returning it.
In this video production method, a plurality of video templates are generated in advance for the user to choose from. When a user has a video production requirement, the user can directly upload configuration parameters of at least two reference objects for the target video template by calling the video editing interface, and the multimedia platform can then generate a target video that meets expectations based on the configuration parameters and the target video template.
In this process, the user only needs to submit the configuration parameters required for video production and does not need to take part in the subsequent complex production process. In addition, the video production method provided by the embodiment of the application is universally applicable to different types of video production scenarios, helps to quickly realize target videos with complex display effects from a small number of reference images, and saves the time required to develop videos with complex transformations, thereby improving the convenience and efficiency of video production.
Drawings
FIG. 1 is a flow chart of a video production method according to one embodiment of the present application;
FIG. 2 is a block diagram of a video template provided in one embodiment of the present application;
FIG. 3 is a flowchart of a video production process according to one embodiment of the present application;
fig. 4 is a schematic structural diagram of a video production device according to an embodiment of the present application;
FIG. 5 is a block diagram of a computing device provided in one embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the application. The application can, however, be embodied in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in one or more embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of one or more embodiments of the application. As used in this application in one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second and, similarly, a second may also be referred to as a first, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, terms related to one or more embodiments of the present application will be explained.
Video template: a video template is a structure file that stores video editing data (including video clips, music, pictures, special effects, filters, and track structure data) and can be reused (imported into corresponding video editing software or parsed by a program).
In the present application, a video production method is provided. One or more embodiments of the present application relate to a video production apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments.
The video production method provided by the embodiment of the application can be applied to any field in which videos need to be produced, such as the production of video animations in the video field, the production of voice videos in the communication field, and the production of special-effect videos in the self-media field. For ease of understanding, the embodiment of the application is described in detail using the example of applying the video production method to video production in the video field, but is not limited thereto.
In particular, the target video of the embodiments of the present application may be presented on a large video playing device, a game console, a desktop computer, a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, an e-book reader, and other display terminals.
In addition, the target video in the embodiment of the application can be applied to any scenario capable of presenting video; for example, special-effect videos can be presented in live and recorded video, and videos can be presented during online or offline video playback.
Referring to fig. 1, fig. 1 shows a flowchart of a video production method according to an embodiment of the present application, including the following steps:
step 102, responding to a video editing interface call instruction, and acquiring configuration parameters of at least two reference objects uploaded through the video editing interface, wherein the configuration parameters are determined according to a target video template.
Specifically, the video production method provided by the embodiment of the application is applied to a multimedia platform. The multimedia platform defines a video template structure based on its video processing capability, and generates video templates by abstracting historical video data across the whole platform and then organizing the abstraction results according to the video template structure.
The user can be an ordinary user in the organization or enterprise that develops the multimedia platform, or a third-party user. After determining the target video template to be used, the user can determine the video data to be uploaded according to that template. In the embodiment of the application, the user can upload the configuration parameters of at least two reference objects so that the multimedia platform produces the target video based on the target video template and the configuration parameters. In practical applications, the at least two reference objects include, but are not limited to, at least two reference images, a piece of audio, and/or a piece of video; the configuration parameters include, but are not limited to, the index address of an image, audio, or video, the name of an image, audio, or video, and the like.
In implementation, before receiving a user's video editing interface call instruction, the multimedia platform may generate a plurality of video templates in advance for the user to choose from. Specifically, it may obtain a historical video and construct a video template based on the video attribute data of the historical video, the object attribute data of at least two reference objects contained in the historical video, and the object editing data corresponding to the at least two editing tracks used to generate the historical video.
Specifically, a historical video is a video produced on the multimedia platform before the current time. It can be produced through interaction between a user and the multimedia platform: the multimedia platform provides a display interface, and the user uploads at least two reference objects through it, the reference objects including, but not limited to, at least two reference images, audio, video, and the like. After receiving the reference objects, the multimedia platform can load each onto an editing track according to its object type; for example, a picture or a video is loaded onto a video editing track to produce key video frames, and audio is loaded onto an audio editing track to produce the video's background music.
After loading the reference objects onto the editing tracks, the multimedia platform can generate the initial multimedia segments corresponding to the reference objects and display them to the user through the display interface, the initial multimedia segments including, but not limited to, video segments, audio segments, and the like. If the user determines that a generated initial multimedia segment does not meet the expected display effect, the reference object in the editing track can be adjusted to generate a target video that does.
Because the at least two reference objects include at least two reference images, after a reference image is loaded onto an editing track, every video frame of the initial multimedia segment corresponding to that reference image is the reference image itself. In this case, the user can process the reference image: for each reference image, at least two partial images are cropped from it; interpolation is then performed based on the reference image and its partial images to obtain at least one interpolated image; one of the at least two partial images is used as the start-frame image and another as the end-frame image, with the interpolated images and any remaining partial images used as intermediate-frame images; and an intermediate video segment corresponding to the reference image is synthesized, in which each video frame may differ.
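The per-image processing described above can be sketched as follows. This is an illustrative outline, not the patent's implementation: linearly interpolating crop rectangles (a pan/zoom-style motion between two partial images) is one assumed way the interpolated frames could be derived.

```python
def interpolate_crops(start_box, end_box, num_frames):
    """Linearly interpolate crop rectangles between a start partial image
    and an end partial image, yielding one crop box per frame of the
    intermediate video segment. Boxes are (left, top, width, height)."""
    boxes = []
    for i in range(num_frames):
        # t runs from 0 (start-frame image) to 1 (end-frame image).
        t = i / (num_frames - 1) if num_frames > 1 else 0.0
        boxes.append(tuple(
            round(s + (e - s) * t) for s, e in zip(start_box, end_box)
        ))
    return boxes

# Three frames moving from a 100x100 crop to a 200x200 crop.
frames = interpolate_crops((0, 0, 100, 100), (50, 50, 200, 200), 3)
```

Each resulting box would then be cut from the reference image and scaled to the output size, so that every frame of the intermediate segment differs.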
Besides reference images, the reference objects can also include audio. The user may or may not modify the initial audio segment corresponding to the audio; if not, the multiple intermediate video segments in the video editing track can be spliced together, and the splicing result can then be superimposed on the initial audio segment in the audio editing track to generate the corresponding video. The superposition can proceed sequentially frame by frame, that is, the i-th frame of the video to be synthesized is superimposed with the i-th frame of the initial multimedia segment.
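A minimal sketch of this splice-and-overlay step, with frames modelled as placeholder values (real frames would be decoded images and audio samples; all names here are illustrative):

```python
from itertools import chain

def splice_and_overlay(video_segments, audio_frames):
    """Splice several intermediate video segments into one sequence, then
    pair the i-th spliced video frame with the i-th audio frame, mirroring
    the frame-by-frame superposition described above."""
    spliced = list(chain.from_iterable(video_segments))
    return list(zip(spliced, audio_frames))
```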
After the video is generated, it can be used as a historical video, and a video template can be constructed based on the video attribute data of the historical video, the object attribute data of the at least two reference objects it contains, and the object editing data corresponding to the at least two editing tracks used to generate it. The video attribute data includes, but is not limited to, the video type, size, background color, and the like; the object attribute data includes, but is not limited to, the index address, name, type, and the like; the object editing data includes, but is not limited to, the position of an object in a track, the brightness and color of an object, and the like.
A structural diagram of the video template provided in the embodiment of the application is shown in fig. 2. The structure of the video template can be defined as an XML structure or a JSON structure; some of the parameters in the video template are described as follows:
Video description: type is the type of the exported video, which can be MP4 or MOV; size is the size of the exported video, for example 1028×720; background is the background color of the exported video, which can be black or white.
Resource: a reference object passed to the multimedia platform; there can be multiple resources. Resource path and resource name: the URL path of the resource uploaded by the user and the name of the uploaded resource. Resource type: can be picture, video, or audio. In the multi-track case, for example, a picture in the first track can be used to determine the key frames of the target video, while a picture in the second track can serve as a picture-in-picture, a PNG picture with a transparency channel, or a static special effect; videos can be superimposed to produce special effects; audio can be used as background music.
The producer, the playlist, and the multitrack form a multi-layered concept: one multitrack contains multiple playlists, and one playlist contains multiple producers.
Producer: wraps a resource passed to the multimedia platform together with filters. A filter is a class defined by the platform, and each filter has a method for processing resources; a filter can be understood as a tool for processing resources. For example, some filters are responsible for clipping resources, others for rendering them, and so on.
Playlist: a playlist can be understood as one of multiple tracks.
Multitrack: each track consists of a playlist plus filters, where the filters do not process a single resource but the content of the whole track, for example the entrance and exit effects, brightness, and color of the video on that track.
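As a concrete illustration, the layered structure described above — video description, resources, and a multitrack of playlists holding producers — could be rendered in JSON roughly as follows. Every field name here is an assumption for illustration, not the patent's actual schema:

```python
import json

# Hypothetical JSON rendering of the video template structure.
TEMPLATE = json.loads("""
{
  "video": {"type": "MP4", "size": "1028x720", "background": "black"},
  "resources": [
    {"path": "https://example.com/a.png", "name": "a.png", "type": "image"},
    {"path": "https://example.com/bgm.mp3", "name": "bgm.mp3", "type": "audio"}
  ],
  "multitrack": [
    {"playlist": [
       {"producer": {"resource": "a.png", "filters": ["crop", "render"]}}
     ],
     "track_filters": ["fade_in", "brightness"]}
  ]
}
""")
```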
In addition, after the configuration parameters of the at least two reference objects uploaded through the video editing interface are obtained, it can be determined whether the configuration parameters conform to a target data format;
if not, data format conversion is performed on the configuration parameters to generate configuration parameters conforming to the target data format.
Specifically, when the multimedia platform generates a video template, the structure of the video template can be defined as an XML structure or a JSON structure, and the uploaded configuration parameters are parsed against it. The user can convert the configuration parameters into either of these two structures before uploading and then upload the converted data through the video editing interface. Alternatively, the user can upload the configuration parameters in any structure; after receiving them, the multimedia platform determines whether their structure conforms to the target data structure (target data format). If so, it parses them directly; if not, it converts them into the target data format and then parses the converted data.
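A sketch of this format check: try the (assumed) JSON target format first, and fall back to a shallow XML-to-dict conversion. A real converter would need to handle nesting and attributes; this only illustrates the branch described above.

```python
import json
import xml.etree.ElementTree as ET

def to_target_format(raw):
    """Return the configuration parameters as a dict, parsing JSON directly
    and converting a flat XML document when the input is not JSON."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        root = ET.fromstring(raw)
        return {child.tag: child.text for child in root}
```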
By providing video templates to the user, the user only needs to upload the corresponding configuration parameters for the target video template throughout the video production process, without taking part in the subsequent complex editing. This spares the user a complex production process while still reflecting interaction with the user, thereby improving the user's sense of participation in video production.
And 104, analyzing the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtaining the at least two reference objects according to the index addresses, and loading the at least two reference objects to at least two editing tracks according to the object types.
Specifically, in the embodiment of the present application, editing tracks are used when producing a video from at least two reference objects. After the configuration parameters of the at least two reference objects uploaded by the user are obtained, they can be parsed to obtain the object type and index address of each reference object. The object type (image, audio, video, etc.) is used to load each reference object onto the corresponding editing track; the index address is used to read the reference object from that address. Therefore, after the index addresses and object types are obtained through parsing, the reference objects can be fetched according to the index addresses, and each can be loaded onto the corresponding editing track according to its object type, generating initial multimedia segments. The display time, display position, display effect, and so on of the reference object in each editing track can then be edited based on the object editing data in the target video template, yielding a target video that meets the expected effect.
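The parsing step can be outlined as below; field names such as "type" and "path" are illustrative assumptions about the parameter layout, and fetching by index address is left out:

```python
def parse_config(config_params):
    """Extract the object type and index address of each reference object
    from the uploaded configuration parameters, normalising them into
    records that a track loader could consume."""
    parsed = []
    for entry in config_params:
        parsed.append({
            "type": entry["type"],    # image / audio / video
            "url": entry["path"],     # index address to fetch the object from
            "name": entry.get("name", ""),
        })
    return parsed
```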
In specific implementation, loading the at least two reference objects to at least two editing tracks according to the object type includes:
in the case that the object type of a target reference object among the at least two reference objects is image, determining whether the target reference object belongs to a target image category;
if yes, loading the target reference object to a first editing track;
if not, loading the target reference object to a second editing track;
and loading the target reference object to a third editing track in the case that the object type is audio.
Specifically, as previously described, the at least two reference objects include, but are not limited to, at least two reference images, a piece of audio and/or a piece of video, and the like.
The functions served by the at least two reference images may differ: for example, the main function of some reference images is to determine the key video frames used to produce the target video, while that of others is to produce related special effects; audio mostly serves as the target video's background music. Therefore, after the at least two reference objects are obtained, the object type of each is determined. If the object type of a reference object is image, that is, the reference object is a reference image, its image category can be further determined. If the reference image belongs to the target image category, that is, the category whose main function is to determine key video frames for producing the target video, it is loaded onto the first editing track; if it does not belong to the target image category, that is, its main function may be to produce video special effects rather than to determine key video frames, it is loaded onto the second editing track.
In addition, when the object type of a reference object is determined to be audio, it is loaded onto the third editing track. The first and second editing tracks are video editing tracks, and the third editing track is an audio editing track; each editing track can carry one or at least two reference objects, as determined by actual requirements.
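The routing rule spelled out above can be condensed into a small dispatch function; the `is_keyframe` flag stands in for the target-image-category check and is an assumption for illustration:

```python
def load_to_tracks(objects):
    """Route reference objects to editing tracks: key-frame images to the
    first (video) track, other images to the second (video) track, and
    audio to the third (audio) track."""
    tracks = {1: [], 2: [], 3: []}
    for obj in objects:
        if obj["type"] == "image":
            # Target image category -> track 1, otherwise track 2.
            track_id = 1 if obj.get("is_keyframe") else 2
            tracks[track_id].append(obj["name"])
        elif obj["type"] == "audio":
            tracks[3].append(obj["name"])
    return tracks
```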
In the embodiment of the application, after the reference objects are loaded to the corresponding editing tracks, the corresponding initial multimedia segments can be generated. In practical application, the target video template can contain the durations of the initial multimedia segments corresponding to reference objects of different object types, so that an initial multimedia segment of the corresponding duration is generated after each reference object is loaded to an editing track. For example, after a reference image is loaded to an editing track, an initial video segment with a duration of 4 seconds is generated for the reference image, where every video frame in the initial video segment is the reference image; after a section of audio is loaded to the audio editing track, an initial audio segment with a duration of 8 seconds is generated for the audio, where the initial audio segment may be a part of the audio or the result of splicing multiple sections of the audio.
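A minimal sketch of the image example above: every frame of the initial video segment is the reference image itself, and the frame count follows from the template duration and a frame rate. The frame rate of 25 fps and the function name are illustrative assumptions:

```python
def initial_image_clip(reference_image, duration_s=4, fps=25):
    """Build an initial video segment whose every frame is the reference image."""
    return [reference_image] * (duration_s * fps)
```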
In a specific application process, the preset durations of different video clips in the first editing track, the second editing track or the third editing track can be determined according to actual requirements, which is not limited herein.
And step 106, processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generating a target video and returning.
Specifically, the object editing data is the editing data representing how a reference object in an editing track is edited, including but not limited to the position of the object in the track, the brightness and color of the object, the clipping mode of the object, and the like. After the multimedia platform loads the at least two reference objects to the at least two editing tracks, it can process the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generate the target video and return it.
In the implementation, according to the object editing data of each editing track contained in the target video template, processing the reference object in each editing track to generate a target video and returning, including:
According to object editing data corresponding to each editing track contained in the target video template, at least two target images corresponding to reference images in a first editing track are obtained, wherein the reference images are one of the at least two reference objects;
processing the reference image in the first editing track based on the reference image and the at least two target images to generate a first multimedia fragment;
and generating a target video based on the first multimedia segment and a second multimedia segment corresponding to the reference object in the second editing track, and returning.
Further, obtaining at least two target images corresponding to the reference image in the first editing track according to the object editing data corresponding to each editing track contained in the target video template means: performing local image interception on the reference image according to the position information, contained in the target video template, of the local images to be intercepted from the reference image in the first editing track, so as to obtain the at least two target images corresponding to the reference image in the first editing track.
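The local image interception can be sketched as a rectangular crop, with the position information modeled as an `(x, y, w, h)` region. The image is represented here as a row-major list of rows purely for illustration; a real implementation would use an image library:

```python
def crop_local_image(image_rows, region):
    """Intercept the local region (x, y, w, h) from a row-major image."""
    x, y, w, h = region
    return [row[x:x + w] for row in image_rows[y:y + h]]
```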
Specifically, in the process of generating a historical video, reference objects such as images, audio or video can be loaded to the corresponding editing tracks to generate initial video segments corresponding to the reference objects. When a user determines that an initial video segment does not meet the expected display effect, the user can submit a segment content modification request for the initial multimedia segment to the multimedia platform through the display interface of the multimedia platform. After receiving the request, the multimedia platform can display the reference image in the initial multimedia segment to the user through the display interface, and the user can then submit local image interception instructions for the reference image through the display interface so as to intercept local images of the reference image and obtain the intercepted target images. Interpolation calculation can then be carried out based on each reference image and its local images to obtain at least one interpolation image; one of the at least two local images is taken as the start frame image, another as the end frame image, and the interpolation images together with the remaining local images are taken as intermediate frame images to synthesize a video segment, thereby realizing video production.
Thus, in a video template generated based on a historical video, the object configuration parameters contained in the template may include, for example: the interception region data used when intercepting local images of a reference image, and the interpolation coefficients and interpolation conversion values used for interpolation calculation on the reference image and its local images. Processing the reference object in each editing track according to the object editing data corresponding to each editing track contained in the target video template then means performing local image interception on the reference image in the first editing track according to the interception region data corresponding to the first editing track in the target video template, so as to obtain at least two target images. A first multimedia segment can then be generated based on the at least two target images and the reference image, and a target video can be generated and output based on the first multimedia segment and a second multimedia segment corresponding to the reference object in the second editing track.
According to the embodiment of the application, interpolation calculation is performed on the target images intercepted from the reference image, interpolation images are determined according to the interpolation results, and the first multimedia segment is synthesized from the interpolation images and the target images. A target video with a complex display effect can thus be realized rapidly from a small number of target images and their configuration parameters, saving the time needed to develop a video with complex transformations and thereby improving the convenience and efficiency of video production.
Wherein processing the reference image in the first editing track based on the reference image and the at least two target images to generate a first multimedia segment comprises:
generating at least one interpolation image between any two adjacent target images based on an interpolation algorithm and configuration parameters of the at least two target images;
and generating a first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image, wherein the at least one interpolation image is a local image of the reference object.
Specifically, as described above, a video template generated based on a historical video may contain, among its object configuration parameters, data such as the interpolation coefficients and interpolation conversion values used when performing interpolation calculation on the reference image and its local images. Therefore, after the at least two target images are obtained, interpolation calculation can be performed according to the interpolation coefficients and interpolation conversion values contained in the target video template, the configuration parameters of the at least two target images, and the reference image, so as to generate at least one interpolation image between any two adjacent target images. A first multimedia segment corresponding to the reference image can then be synthesized based on the target images and the interpolation images, where each video frame may be different.
In practical application, after capturing the target image in the process of making the historical video, the configuration parameters of the target image need to be determined, and specifically, the configuration parameters of each target image can be determined according to the position relationship between the to-be-captured local area of the target image and the display canvas of the reference image in the display interface.
The configuration parameters include, but are not limited to, coordinate parameters of the target image and length and width of the target image, or parameters such as scaling of the length and width of the target image relative to the length and width of the reference image, so that the coordinate, length or width of at least two target images can be determined according to the positional relationship between the local area to be intercepted and the display canvas of the reference image in the display interface.
In practical application, the coordinate parameters may be the vertex coordinates of the target image in the display interface, specifically, the vertex at the upper left corner of the display interface may be the origin of coordinates, a plane rectangular coordinate system is established, and the coordinate parameters of the target image are determined according to the position of the vertex at the upper left corner of the target image in the rectangular coordinate system, where the coordinate parameters are the abscissa and ordinate of the vertex at the upper left corner of the target image.
Apart from the coordinate parameters, parameters such as the length, width or scaling ratio in the configuration parameters are determined in a similar way, which is not described herein again.
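As a hedged illustration of the configuration parameters described above, the following sketch derives a target image's parameters from its to-be-intercepted region and the reference image's display canvas: the region's top-left vertex gives the coordinate parameters, and the width and height are stored as scaling ratios relative to the reference image. The dictionary keys are invented for this example:

```python
def config_params(region, ref_w, ref_h):
    """Derive illustrative configuration parameters for a target image.

    region is (x, y, w, h) relative to the reference image's canvas, with the
    origin at the top-left corner as described in the text.
    """
    x, y, w, h = region
    return {"x": x, "y": y, "scale_w": w / ref_w, "scale_h": h / ref_h}
```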
After the configuration parameters of the target image are determined, interpolation calculation can be performed according to the configuration parameters, so that an interpolation image is determined according to interpolation calculation results, and a video is produced according to the interpolation image and the target image, wherein the interpolation image is also a local image of a reference object.
Since the configuration parameters of a target image include, but are not limited to, its coordinate parameters and its length and width, or the scaling of its length and width relative to those of the reference object, the configuration parameters of an interpolation image obtained by interpolation calculation are derived from the configuration parameters of the target images, and may likewise include the coordinate parameters of the interpolation image and its length and width, or the scaling of its length and width relative to those of the reference object. The coordinate parameters of the interpolation image are obtained by performing interpolation calculation on the coordinate parameters of the target images; likewise, the width or height of the interpolation image is obtained by performing interpolation calculation on the width or height of the target images, respectively.
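The parameter-wise interpolation can be sketched as follows. Linear interpolation is an assumption here — the embodiment leaves the exact interpolation algorithm, coefficients and conversion values open — and the coefficient `t` would come from the template:

```python
def lerp_params(p0, p1, t):
    """Linearly interpolate each configuration parameter between two target images."""
    return {k: p0[k] + (p1[k] - p0[k]) * t for k in p0}
```

With `t = 0.5`, each coordinate and each width/height of the interpolation image lands midway between the corresponding parameters of the two adjacent target images.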
After the configuration parameters of the interpolation image are determined, a local area to be intercepted of the reference image in the display interface can be determined according to the configuration parameters, and the reference image is intercepted based on the local area to be intercepted, so that at least one intercepted interpolation image is obtained.
Therefore, in addition to obtaining the interpolation image by performing interpolation calculation according to the interpolation coefficient, the interpolation conversion value and the configuration parameters of the target image, the configuration parameters of each interpolation image used in the history video generation process may be further included in the target video template.
In practical applications, the number of interpolation images between the target images may be determined according to the duration and frame rate of the multimedia segment, which is not limited herein.
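For instance, with two target images serving as the start and end frames, the number of intermediate (interpolated) frames follows directly from the segment's duration and frame rate. The arithmetic below is illustrative, not prescribed by the embodiment:

```python
def intermediate_frame_count(duration_s, fps, n_targets=2):
    """Frames to interpolate: total frames minus the target images themselves."""
    return duration_s * fps - n_targets
```

A 4-second segment at 25 fps bounded by two target images would thus need 98 interpolation images.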
In a specific implementation, generating a first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image includes:
taking a first target image of the at least two target images as a starting frame image of a first multimedia fragment, taking a second target image of the at least two target images as an ending frame image of the first multimedia fragment, and taking the at least one interpolation image as an intermediate frame image of the first multimedia fragment;
And generating a first multimedia fragment corresponding to the reference image based on the initial frame image, the intermediate frame image and the end frame image.
Specifically, after at least one interpolation image is obtained, a first target image of the at least two target images may be used as a start frame image of a first multimedia segment, a second target image of the at least two target images may be used as an end frame image of the first multimedia segment, the at least one interpolation image may be used as an intermediate frame image of the first multimedia segment, and the first multimedia segment may be manufactured based on the start frame image, the intermediate frame image, and the end frame image.
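The assembly step above reduces to concatenating the frames in order — start frame, intermediate frames, end frame. A minimal sketch with illustrative names:

```python
def build_segment(start_frame, intermediate_frames, end_frame):
    """Assemble the first multimedia segment from its ordered frame images."""
    return [start_frame, *intermediate_frames, end_frame]
```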
Further, generating a target video and returning based on the first multimedia segment and the second multimedia segment corresponding to the reference object in the second editing track means superimposing the first multimedia segment on the second multimedia segment corresponding to the reference object in the second editing track, generating the target video and returning it.
Specifically, the at least two reference objects include at least two reference images. In the case that they include at least two reference images of the target image category, these reference images can be loaded into the same editing track (the first editing track), so as to generate initial video segments corresponding to each of them. The reference images of the target image category can then be processed in turn: each reference image is intercepted using the interception region data in the target video template to obtain at least two local images, interpolation calculation is performed on the basis of the local images to obtain interpolation images, and a first multimedia segment corresponding to the reference image is generated based on the at least two local images and the interpolation images.
Further, because the first editing track contains at least two first multimedia segments, each corresponding to a reference image, the first multimedia segments in the first editing track can be spliced when the target video is generated, so as to produce the video to be synthesized. Then, in the case that the initial multimedia segment corresponding to the reference object in the second editing track has not been modified, the video to be synthesized is superimposed on that initial multimedia segment (the second multimedia segment) to generate the target video. The superposition is carried out frame by frame in sequence, that is, the i-th frame of the video to be synthesized is superimposed on the i-th frame of the initial multimedia segment; the target video is then generated and returned.
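The frame-by-frame superposition can be sketched as pairing the i-th frames of the two segments. The per-frame overlay operation is deliberately left abstract (a real compositor would blend pixels); here it is an injectable function, and all names are illustrative:

```python
def superimpose(segment_a, segment_b, overlay):
    """Overlay segment_a onto segment_b frame by frame (i-th onto i-th)."""
    assert len(segment_a) == len(segment_b), "segments must be frame-aligned"
    return [overlay(fa, fb) for fa, fb in zip(segment_a, segment_b)]
```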
In addition, after loading the at least two reference objects to the at least two editing tracks according to the object type, the method further comprises:
generating initial multimedia fragments corresponding to the at least two reference objects;
correspondingly, after the first multimedia segment is generated, the method further includes:
modifying an initial multimedia segment corresponding to a reference object of a second editing track according to an object editing parameter corresponding to the second editing track contained in the target video template, and generating a third multimedia segment;
And carrying out superposition processing on the first multimedia fragment and the third multimedia fragment, generating a target video and returning.
Specifically, in the process of producing the historical video, a user can modify the initial multimedia segment corresponding to the reference image to generate a first multimedia segment corresponding to the reference image, and modify the initial multimedia segments corresponding to the other reference objects to obtain the corresponding second multimedia segments; the resulting multimedia segments corresponding to the reference image and those corresponding to the other reference objects are then superimposed to generate the target video.
Therefore, in the process of manufacturing the target video, the initial multimedia segment corresponding to the reference object of the second editing track can be modified according to the object editing parameter corresponding to the second editing track contained in the target video template to generate a third multimedia segment; and then, the first multimedia fragments corresponding to at least two reference images in the first editing track can be spliced to generate a video to be synthesized, and the video to be synthesized and the third multimedia fragments corresponding to the reference objects in the second editing track are subjected to superposition processing to generate a target video.
By providing video templates to the user, the user only needs to upload the corresponding configuration parameters for the target video template during the whole video production process, without participating in the subsequent complex production steps. This reduces the complexity of video production for the user while still reflecting interaction with the user, thereby improving the user's sense of participation in the video production process.
An embodiment of the application provides a video production method and device, wherein the video production method includes: in response to a video editing interface call instruction, obtaining configuration parameters of at least two reference objects uploaded through the video editing interface, the configuration parameters being determined according to a target video template; parsing the configuration parameters to obtain the object types and index addresses of the at least two reference objects; obtaining the at least two reference objects according to the index addresses and loading them to at least two editing tracks according to the object types; and processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generating a target video and returning it.
According to the video production method, the plurality of video templates are generated in advance for selection by a user, the user can directly upload configuration parameters of at least two reference objects for video production to the target video templates in a mode of calling a video editing interface under the condition that video production requirements exist, and then the multimedia platform can generate target videos meeting expectations based on the configuration parameters and the target video templates.
In the process, only the configuration parameters required for video production are submitted by a user, and the follow-up complex video production process is not required to be participated; in addition, the video manufacturing method provided by the embodiment of the application has universal applicability to different types of video manufacturing scenes, is beneficial to quickly realizing target videos with complex display effects through a small number of reference images, and is beneficial to saving time required for developing videos with complex transformation, so that convenience and efficiency of video manufacturing are improved.
Referring to fig. 3, an application of the video production method provided in the embodiment of the present application to a video production process in the video field is taken as an example, and the video production method is further described. Fig. 3 shows a flowchart of a processing procedure of applying a video production method to a video production process according to an embodiment of the present application, which specifically includes the following steps:
Step 302, a history video is acquired.
Step 304, a video template is constructed based on video attribute data of the historical video, object attribute data of at least two reference objects contained in the historical video, and object editing data corresponding to at least two editing tracks for generating the historical video.
Step 306, receiving a video editing interface call instruction of a third party.
Step 308, obtaining configuration parameters of at least two reference objects uploaded by a third party through a video editing interface, wherein the configuration parameters are determined according to a target video template.
Step 310, parsing the configuration parameters to obtain the object type and index address of the at least two reference objects.
Step 312, reading the at least two reference objects according to the index address, and loading the at least two reference objects to at least two editing tracks according to the object type.
And step 314, processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template to generate a target video.
And step 316, returning the target video to the third party.
According to the video production method, the plurality of video templates are generated in advance for selection by a user, the user can directly upload configuration parameters of at least two reference objects for video production to the target video templates in a mode of calling a video editing interface under the condition that video production requirements exist, and then the multimedia platform can generate target videos meeting expectations based on the configuration parameters and the target video templates.
In the process, only the configuration parameters required for video production are submitted by a user, and the follow-up complex video production process is not required to be participated; in addition, the video manufacturing method provided by the embodiment of the application has universal applicability to different types of video manufacturing scenes, is beneficial to quickly realizing target videos with complex display effects through a small number of reference images, and is beneficial to saving time required for developing videos with complex transformation, so that convenience and efficiency of video manufacturing are improved.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video production device, and fig. 4 shows a schematic structural diagram of a video production device according to one embodiment of the present application. As shown in fig. 4, the apparatus includes:
an obtaining module 402, configured to obtain configuration parameters of at least two reference objects uploaded through the video editing interface in response to a video editing interface call instruction, wherein the configuration parameters are determined according to a target video template;
the parsing module 404 is configured to parse the configuration parameters to obtain object types and index addresses of the at least two reference objects, obtain the at least two reference objects according to the index addresses, and load the at least two reference objects to at least two editing tracks according to the object types;
And the processing module 406 is configured to process the reference object in each editing track according to the object editing data of each editing track contained in the target video template, generate a target video and return the target video.
Optionally, the processing module 406 includes:
the acquisition sub-module is configured to acquire at least two target images corresponding to reference images in a first editing track according to object editing data corresponding to each editing track contained in the target video template, wherein the reference images are one of the at least two reference objects;
a generation sub-module configured to process the reference image in the first editing track based on the reference image and the at least two target images, generating a first multimedia clip;
and the returning sub-module is configured to generate a target video based on the first multimedia fragment and a second multimedia fragment corresponding to the reference object in the second editing track and return the target video.
Optionally, the acquiring submodule is further configured to:
and according to the position information of the partial image to be intercepted, which is contained in the target video template, corresponding to the reference image in the first editing track, carrying out partial image interception on the reference image to obtain at least two target images, which are corresponding to the reference image in the first editing track.
Optionally, the generating sub-module is further configured to:
generating at least one interpolation image between any two adjacent target images based on an interpolation algorithm and configuration parameters of the at least two target images;
and generating a first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image, wherein the at least one interpolation image is a local image of the reference object.
Optionally, the generating sub-module is further configured to:
taking a first target image of the at least two target images as a starting frame image of a first multimedia fragment, taking a second target image of the at least two target images as an ending frame image of the first multimedia fragment, and taking the at least one interpolation image as an intermediate frame image of the first multimedia fragment;
and generating a first multimedia fragment corresponding to the reference image based on the initial frame image, the intermediate frame image and the end frame image.
Optionally, the processing module is further configured to:
and carrying out superposition processing on the first multimedia fragment and a second multimedia fragment corresponding to the reference object in the second editing track, generating a target video and returning.
Optionally, the video production device further includes a first generation module configured to: generating initial multimedia fragments corresponding to the at least two reference objects;
a first generation module configured to:
modifying an initial multimedia segment corresponding to a reference object of a second editing track according to an object editing parameter corresponding to the second editing track contained in the target video template, and generating a third multimedia segment;
and carrying out superposition processing on the first multimedia fragment and the third multimedia fragment, generating a target video and returning.
Optionally, the video production device further includes a construction module configured to:
and acquiring the historical video, and constructing a video template based on the video attribute data of the historical video, the object attribute data of at least two reference objects contained in the historical video and the object editing data corresponding to at least two editing tracks for generating the historical video.
Optionally, the video production device further includes a conversion module configured to:
judging whether the configuration parameters belong to a target data format or not;
if not, carrying out data format conversion on the configuration parameters to generate the configuration parameters conforming to the target data format.
Optionally, the parsing module 404 is further configured to:
determining whether a target reference object belongs to a target image category or not under the condition that the object type of the target reference object in the at least two reference objects is an image;
if yes, loading the target reference object to a first editing track;
if not, loading the target reference object to a second editing track;
and loading the target reference object to a third editing track in the case that the object type is audio.
The above is a schematic solution of the video production apparatus of this embodiment. It should be noted that the technical solution of the video production apparatus and the technical solution of the video production method belong to the same concept; for details of the technical solution of the video production apparatus not described in detail here, refer to the description of the technical solution of the video production method.
Fig. 5 illustrates a block diagram of a computing device 500 provided in accordance with one embodiment of the present application. The components of the computing device 500 include, but are not limited to, a memory 510 and a processor 520. Processor 520 is coupled to memory 510 via bus 530 and database 550 is used to hold data.
Computing device 500 also includes access device 540, access device 540 enabling computing device 500 to communicate via one or more networks 560. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 540 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-described components of computing device 500, as well as other components not shown in FIG. 5, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 5 is for exemplary purposes only and is not intended to limit the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 500 may also be a mobile or stationary server.
The processor 520 is configured to execute computer-executable instructions which, when executed by the processor, perform the steps of the video production method.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video production method belong to the same concept; for details of the technical solution of the computing device that are not described in detail, reference may be made to the description of the technical solution of the video production method.
An embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the video production method.
The above is an exemplary version of a computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the video production method belong to the same concept; for details of the technical solution of the storage medium that are not described in detail, reference may be made to the description of the technical solution of the video production method.
The foregoing describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer-readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of combined actions, but those skilled in the art will appreciate that the embodiments are not limited by the order of actions described, as some steps may be performed in another order or simultaneously in accordance with the embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules referred to are not necessarily required by the embodiments of the present application.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The above-disclosed preferred embodiments of the present application are provided only as an aid to the elucidation of the present application. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the teachings of the embodiments of the present application. These embodiments were chosen and described in order to best explain the principles of the embodiments and their practical application, thereby enabling others skilled in the art to best understand and utilize the invention. This application is to be limited only by the claims and their full scope and equivalents.

Claims (12)

1. A method of video production, comprising:
responding to a video editing interface call instruction, acquiring configuration parameters of at least two reference objects uploaded through the video editing interface, and converting the configuration parameters into configuration parameters conforming to a target data format in the case that the configuration parameters do not belong to the target data format, wherein the configuration parameters conforming to the target data format are determined according to a target video template;
parsing the configuration parameters conforming to the target data format to obtain object types and index addresses of the at least two reference objects, acquiring the at least two reference objects according to the index addresses, and loading the at least two reference objects onto at least two editing tracks according to the object types;
and processing the reference object in each editing track according to object editing data of each editing track contained in the target video template, and generating and returning a target video, wherein the object editing data comprise the position of the reference object in the track, the clipping mode of the reference object, and the brightness and color of the reference object.
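The flow of claim 1 — parse configuration parameters into object types and index addresses, load the fetched reference objects onto editing tracks by type, then apply per-track editing data — can be sketched as below. This is an illustrative sketch, not part of the claims; the function names (`parse_config`, `load_to_tracks`, `make_video`) and parameter keys are hypothetical.

```python
def parse_config(params):
    """Extract the object type and index address of each reference object."""
    return [(p["type"], p["address"]) for p in params]

def load_to_tracks(typed_objects):
    """Load each reference object onto an editing track chosen by its type."""
    tracks = {}
    for obj_type, address in typed_objects:
        tracks.setdefault(obj_type, []).append(address)
    return tracks

def make_video(params):
    """Sketch of the overall method: parse, load, then edit per track."""
    tracks = load_to_tracks(parse_config(params))
    # Per-track object editing data (position in the track, clipping mode,
    # brightness and color) would be applied here before rendering.
    return {"tracks": tracks}

video = make_video([
    {"type": "image", "address": "img://cover"},
    {"type": "audio", "address": "bgm://theme"},
])
```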
2. The video production method according to claim 1, wherein processing the reference object in each editing track according to the object editing data of each editing track contained in the target video template and generating and returning a target video comprises:
acquiring, according to the object editing data corresponding to each editing track contained in the target video template, at least two target images corresponding to a reference image in a first editing track, wherein the reference image is one of the at least two reference objects;
processing the reference image in the first editing track based on the reference image and the at least two target images to generate a first multimedia segment;
and generating and returning a target video based on the first multimedia segment and a second multimedia segment corresponding to the reference object in a second editing track.
3. The video production method according to claim 2, wherein acquiring the at least two target images corresponding to the reference image in the first editing track according to the object editing data corresponding to each editing track contained in the target video template comprises:
cropping partial images from the reference image according to the position information of the partial images to be cropped, contained in the target video template and corresponding to the reference image in the first editing track, to obtain the at least two target images corresponding to the reference image in the first editing track.
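The partial-image cropping of claim 3 can be pictured with a small sketch. The images are plain nested lists and the `(top, left, height, width)` box format is an assumption made for illustration only:

```python
def crop(image, box):
    """Crop a partial image given (top, left, height, width) position info."""
    top, left, height, width = box
    return [row[left:left + width] for row in image[top:top + height]]

# A 4x4 test image whose pixel value encodes its (row, column) position.
img = [[r * 10 + c for c in range(4)] for r in range(4)]
patch = crop(img, (1, 1, 2, 2))  # rows 1-2, columns 1-2
```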
4. The video production method according to claim 2, wherein processing the reference image in the first editing track based on the reference image and the at least two target images to generate a first multimedia segment comprises:
generating at least one interpolation image between any two adjacent target images based on an interpolation algorithm and the configuration parameters of the at least two target images;
and generating a first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image, wherein the at least one interpolation image is a partial image of the reference object.
5. The method according to claim 4, wherein generating the first multimedia segment corresponding to the reference image based on the at least two target images and the at least one interpolation image comprises:
taking a first target image of the at least two target images as a start frame image of the first multimedia segment, taking a second target image of the at least two target images as an end frame image of the first multimedia segment, and taking the at least one interpolation image as an intermediate frame image of the first multimedia segment;
and generating the first multimedia segment corresponding to the reference image based on the start frame image, the intermediate frame image, and the end frame image.
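Claims 4 and 5 together describe assembling a segment from a start frame, interpolated intermediate frames, and an end frame. A minimal sketch follows, using frames represented as flat lists of pixel intensities; the claims do not fix a particular interpolation algorithm, so the linear blend used here is an assumption:

```python
def interpolate_frames(start, end, n):
    """Generate n interpolation images between two adjacent target images."""
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        frames.append([(1 - t) * a + t * b for a, b in zip(start, end)])
    return frames

def build_segment(start, end, n_intermediate):
    """Start frame image, intermediate frame images, then end frame image."""
    return [start, *interpolate_frames(start, end, n_intermediate), end]

segment = build_segment([0.0, 0.0], [1.0, 1.0], 3)
```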
6. The video production method according to claim 2 or 3, wherein generating and returning a target video based on the first multimedia segment and a second multimedia segment corresponding to a reference object in a second editing track comprises:
superimposing the first multimedia segment and the second multimedia segment corresponding to the reference object in the second editing track to generate and return a target video.
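The superposition of claim 6 can be pictured as a frame-by-frame blend of two equal-length segments. The alpha-blending operator below is an illustrative assumption, since the claim does not specify a particular compositing rule:

```python
def superimpose(base, overlay, alpha=0.5):
    """Blend two equal-length frame sequences pixel by pixel."""
    return [
        [(1 - alpha) * b + alpha * o for b, o in zip(bf, of)]
        for bf, of in zip(base, overlay)
    ]

target = superimpose([[0.0, 1.0]], [[1.0, 0.0]])
```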
7. The video production method according to claim 2 or 3, further comprising, after loading the at least two reference objects onto the at least two editing tracks according to the object types:
generating initial multimedia segments corresponding to the at least two reference objects;
correspondingly, after the first multimedia segment is generated, the method further comprises:
modifying an initial multimedia segment corresponding to a reference object of a second editing track according to an object editing parameter corresponding to the second editing track contained in the target video template, to generate a third multimedia segment;
and superimposing the first multimedia segment and the third multimedia segment to generate and return a target video.
8. The video production method according to claim 1, further comprising:
acquiring a historical video, and constructing a video template based on video attribute data of the historical video, object attribute data of at least two reference objects contained in the historical video, and object editing data corresponding to at least two editing tracks used to generate the historical video.
9. The video production method according to claim 1, wherein loading the at least two reference objects onto at least two editing tracks according to the object types comprises:
determining, in the case that the object type of a target reference object of the at least two reference objects is an image, whether the target reference object belongs to a target image category;
if so, loading the target reference object onto a first editing track;
if not, loading the target reference object onto a second editing track;
and loading the target reference object onto a third editing track in the case that the object type is audio.
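The track-selection rule of claim 9 amounts to a small dispatch on object type and image category. The track numbers below are hypothetical labels for the claim's first, second, and third editing tracks:

```python
def route_track(obj_type, in_target_image_category=False):
    """Choose an editing track per the claim-9-style rule (illustrative)."""
    if obj_type == "image":
        # Images in the target image category go to the first editing track,
        # all other images to the second.
        return 1 if in_target_image_category else 2
    if obj_type == "audio":
        return 3
    raise ValueError(f"unsupported object type: {obj_type}")
```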
10. A video production apparatus, comprising:
the acquisition module is configured to respond to a video editing interface call instruction, acquire configuration parameters of at least two reference objects uploaded through the video editing interface, and convert the configuration parameters into configuration parameters conforming to a target data format in the case that the configuration parameters do not belong to the target data format, wherein the configuration parameters conforming to the target data format are determined according to a target video template;
the analysis module is configured to parse the configuration parameters conforming to the target data format to obtain object types and index addresses of the at least two reference objects, acquire the at least two reference objects according to the index addresses, and load the at least two reference objects onto at least two editing tracks according to the object types;
and the processing module is configured to process the reference object in each editing track according to the object editing data of each editing track contained in the target video template, and to generate and return a target video, wherein the object editing data comprise the position of the reference object in the track, the clipping mode of the reference object, and the brightness and color of the reference object.
11. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer executable instructions and the processor is configured to execute the computer executable instructions, wherein the processor, when executing the computer executable instructions, performs the steps of the video production method of any one of claims 1-9.
12. A computer readable storage medium, characterized in that it stores computer instructions which, when executed by a processor, implement the steps of the video production method of any one of claims 1-9.
CN202111283301.6A 2021-11-01 2021-11-01 Video production method and device Active CN113992866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111283301.6A CN113992866B (en) 2021-11-01 2021-11-01 Video production method and device


Publications (2)

Publication Number Publication Date
CN113992866A CN113992866A (en) 2022-01-28
CN113992866B (en) 2024-03-26

Family

ID=79745367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111283301.6A Active CN113992866B (en) 2021-11-01 2021-11-01 Video production method and device

Country Status (1)

Country Link
CN (1) CN113992866B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842325A (en) * 2012-08-28 2012-12-26 深圳市万兴软件有限公司 Method and device for managing audio and video editing tracks
CN110287368A (en) * 2019-05-31 2019-09-27 上海萌鱼网络科技有限公司 The generation method of short-sighted frequency stencil design figure generating means and short video template
CN112333536A (en) * 2020-10-28 2021-02-05 深圳创维-Rgb电子有限公司 Audio and video editing method, equipment and computer readable storage medium
CN112367551A (en) * 2020-10-30 2021-02-12 维沃移动通信有限公司 Video editing method and device, electronic equipment and readable storage medium
CN112954391A (en) * 2021-02-05 2021-06-11 北京百度网讯科技有限公司 Video editing method and device and electronic equipment
CN112995533A (en) * 2021-02-04 2021-06-18 上海哔哩哔哩科技有限公司 Video production method and device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant