CN111935504B - Video production method, device, equipment and storage medium

Video production method, device, equipment and storage medium

Info

Publication number
CN111935504B
Authority
CN
China
Prior art keywords
video
target
image
publishing
page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010745265.XA
Other languages
Chinese (zh)
Other versions
CN111935504A (en)
Inventor
曾衍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huanju Mark Network Information Co ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN202010745265.XA
Publication of CN111935504A
Application granted
Publication of CN111935504B
Legal status: Active

Classifications

    All classifications fall under H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04N PICTORIAL COMMUNICATION, e.g. TELEVISION > H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]:

    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs (server-side, under H04N 21/20 and H04N 21/23)
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs (client-side, under H04N 21/40 and H04N 21/43)
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally (under H04N 21/47 and H04N 21/472)
    • H04N 21/8146: Monomedia components involving graphical data, e.g. 3D object, 2D graphics (under H04N 21/80 and H04N 21/81)
    • H04N 21/816: Monomedia components involving special video data, e.g. 3D video (under H04N 21/80 and H04N 21/81)
    • H04N 21/84: Generation or processing of descriptive data, e.g. content descriptors (under H04N 21/80 and H04N 21/83)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Processing Or Creating Images (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a video production method, apparatus, device and storage medium in the field of video processing. A composite template is selected from preset video materials on a video editing page; an original image selected by the user is rendered at a designated position on the composite template to generate an initial video; upon receiving a synthesis processing instruction, the current page jumps from the video editing page to a video publishing page while the initial video is synchronously synthesized into a target video in the background; and the target video is uploaded for publishing. Because the jump to the video publishing page does not wait for synthesis to finish, the time from video production to video publishing is shortened and the user experience is improved.

Description

Video production method, device, equipment and storage medium
Technical Field
The present application relates to the field of video processing, and in particular, to a video production method, apparatus, device, and storage medium.
Background
In daily life and work, people often turn pictures they are interested in into an electronic photo album: a short video made from a given group of pictures, optionally accompanied by special effects such as background music, descriptive text and stickers.
In the related art, while producing such an album video a user can add special effects to some pictures to make them more expressive. Compositing these effects into the video is time-consuming, and the long wait degrades the user experience. Moreover, most applications pop up a modal window during compositing and hold the user on the current page: the user can do nothing but wait, and only after composition finishes does the application jump to the video publishing page to publish the video. The whole path from video production to publishing therefore wastes time.
Disclosure of Invention
The object of the present application is to solve at least one of the above technical drawbacks, in particular the long time it takes to produce and publish a short video.
In a first aspect, an embodiment of the present application provides a video production method, including the following steps:
selecting a composite template from preset video materials through a video editing page;
rendering an original image selected by a user at a designated position on the composite template, and generating an initial video;
receiving a synthesis processing instruction, jumping the current page from the video editing page to a video publishing page, and synchronously synthesizing the initial video into a target video in the background;
and uploading the target video for publishing.
In an embodiment, before receiving the synthesis processing instruction, the method further comprises:
in the process of previewing the initial video, acquiring an editing operation input by a user on one or more frames of original images in the initial video, and determining a target transformation parameter corresponding to the editing operation;
the synthesizing the initial video into the target video comprises the following steps:
acquiring target transformation parameters corresponding to the original images, and performing corresponding transformation processing on the corresponding original images in the initial video according to the target transformation parameters to obtain transformed images;
and performing secondary synthesis on the initial video by using the transformed image to generate the target video.
In an embodiment, the step of obtaining an editing operation input by a user on one or more frames of original images in the initial video, and determining the target transformation parameter corresponding to the editing operation includes:
determining one or more frames of original images to be processed in the initial video;
acquiring an editing operation acting on the original image, processing the original image according to the editing operation, and displaying the processed real-time effect graph;
and acquiring a target real-time effect image confirmed by a user from the real-time effect image, calculating a target transformation parameter of the target real-time effect image relative to the original image, and storing the target transformation parameter.
In one embodiment, before the step of rendering the original image selected by the user at the specified position on the composite template, the method further comprises:
decoding the video material;
and in the process of decoding the composite template, cropping the selected original image according to the image fill size of the composite template.
In an embodiment, the synthesizing the initial video into the target video in the background further comprises: receiving description information input by a user on the video publishing page;
the step of uploading the target video for publishing comprises the following steps:
receiving a publishing instruction;
and uploading the description information and the target video to a server for publishing according to the publishing instruction.
In one embodiment, the step of selecting a composite template comprises:
acquiring a selection instruction of a user for the composite template, and playing a material video corresponding to the composite template;
and decoding the composite template in the process of playing the material video.
In one embodiment, the editing operation comprises: at least one of image cropping, image scaling, image color transformation, image rotation, image filtering, adding special effects, image beautification, adding sticker text, and inserting background music.
In a second aspect, an embodiment of the present application further provides a video production apparatus, including:
the composite template selecting module is used for selecting a composite template from preset video materials through a video editing page;
an initial video generation module, configured to render an original image selected by a user at a specified position on the composite template, and generate an initial video;
the target video synthesis module is used for receiving a synthesis processing instruction, jumping the current page from the video editing page to a video publishing page, and synchronously synthesizing the initial video into a target video in the background;
and the target video publishing module is used for uploading the target video to publish.
In a third aspect, an embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the processor implements the steps of the video production method according to the first aspect.
In a fourth aspect, the present application further provides a storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are configured to perform the steps of the video production method according to the first aspect.
According to the video production method, apparatus, device and storage medium, a composite template is selected from preset video materials on a video editing page; an original image selected by the user is rendered at a designated position on the composite template to generate an initial video; upon receiving a synthesis processing instruction, the current page jumps from the video editing page to a video publishing page while the initial video is synchronously synthesized into a target video in the background; and the target video is uploaded for publishing. Because there is no need to wait for synthesis to finish before jumping to the video publishing page, the time from video production to video publishing is shortened and the user experience is improved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic interface diagram of a modal window presented during composition processing in the related art;
FIG. 2 is an interface schematic of a video publication page;
FIG. 3 is a flow diagram of a method for video production provided by an embodiment;
FIG. 4 is a diagram of an application scenario of a video production method according to an embodiment;
FIG. 5 is a diagrammatic illustration of a preview interface for an initial video;
FIG. 6 is a schematic diagram of an interface for editing an original image;
FIG. 7 is a schematic interface diagram based on the target real-time effects graph of FIG. 6;
FIG. 8 is a schematic view of another interface for editing an original image;
FIG. 9 is a schematic diagram of an interface based on the target real-time effects graph of FIG. 8;
fig. 10 is a schematic structural diagram of a video production apparatus according to an embodiment.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
To better explain the technical solution of the present disclosure, a description is first given of a flow of video production, especially small video production in the related art.
In related short-video production, a user produces a video through a short-video client, such as an electronic-album application or a short-video production application. During production, the user selects the original images from which the short video is to be synthesized. FIG. 1 is a schematic interface diagram of the modal window presented during composition in the related art. As shown in FIG. 1, the interface stays on the current page until the composition action completes, and a modal window (note the circled area in FIG. 1) holds the current page so that the user cannot perform other operations. Only after the short video is synthesized does the client jump to the video publishing page. FIG. 2 is a schematic interface diagram of a video publishing page. As shown in FIG. 2, a description-information input box prompts 'write a title and use a suitable topic so that more people can see it', reminding the user to enter description information on the video publishing page, such as 'happy tour', to attract other users to browse the short video.
However, video composition is a time-consuming process, especially for high-definition images and large videos; the user must wait a long time for composition to complete before publishing, which harms the user experience.
Based on this, the embodiments of the present disclosure provide a video production scheme that saves the time from video production to publishing and shortens the user's waiting time, thereby improving the user experience.
FIG. 3 is a flowchart of a video production method according to an embodiment; the method is applicable to a video production device such as a client. FIG. 4 is an application scenario diagram of the video production method according to an embodiment. Referring to FIG. 4, the client 10 may be a portable device such as a smartphone, smart camera, palmtop computer, tablet computer, e-book reader or notebook computer, and provides functions such as video production. Optionally, the client 10 has a touch screen on which the user performs operations to realize functions such as image processing, video synthesis and video cover generation.
Specifically, as shown in fig. 3, the video production method may include the following steps:
s210, selecting a composite template from preset video materials through a video editing page.
In this embodiment, a client for video production is installed on the terminal device. The user starts the client and enters a video editing page, where video materials are displayed; the video materials include resources for video production such as composite templates, music, pictures, stickers, text and special effects.
Optionally, in one embodiment the video material ships with the system: when the user downloads the client, the corresponding video material is downloaded automatically. In another embodiment, the client starts downloading the video material as soon as the user enters the video editing page, so that the user need not wait for a download at the moment of use, which saves time.
In one embodiment, the step of selecting a composite template comprises:
s2101, a selection instruction of the user on the synthesis template is obtained, and a material video corresponding to the synthesis template is played.
In this embodiment, the video materials are displayed to the user through a material bar and can be presented as a list. Each video material corresponds to one material video, through which the user can see the display effect of that material.
The user makes a selection from the video materials in the material bar. The client receives the selection instruction for the chosen composite template, obtains the preview link corresponding to the template, and plays the corresponding material video through a video player.
S2102, in the process of playing the material video, the composite template is decoded.
While the material video of the composite template plays, the client obtains the compressed file corresponding to the composite template and decodes it. Because the template is decoded while the user watches the material video, the user perceives nothing, and the time that would be wasted waiting for decoding is avoided.
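As an illustration of this overlap, the following Python sketch decodes a hypothetical zip-packaged template on a worker thread while a stand-in player runs; the file names, the zip packaging and play_video() are assumptions for the example, not details from the patent.

```python
import threading
import zipfile

def play_video(path: str) -> None:
    # Stand-in for the client's video player; the real call is platform-specific.
    print(f"playing {path} ...")

def decode_template(archive: str, out_dir: str, done: threading.Event) -> None:
    # Unpack (decode) the compressed template file off the UI thread.
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out_dir)
    done.set()

decoded = threading.Event()
threading.Thread(target=decode_template,
                 args=("template_42.zip", "cache/template_42", decoded),
                 daemon=True).start()      # decoding starts as soon as the template is selected
play_video("preview_42.mp4")               # the user watches the material video meanwhile
decoded.wait()                             # usually already set by the time playback ends
```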
S220, rendering the original image selected by the user at the designated position on the composite template, and generating an initial video.
The user selects an original image according to the composite template, or the client automatically matches a composite template to the original image the user selected. Different images are rendered and displayed at designated positions on the composite template. In this embodiment, the client renders the user-selected original image at the designated position of the composite template, replacing the template's default image, and generates the initial video from the composite template.
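A minimal sketch of this rendering step, assuming Pillow and an invented slot position and size for the template (the patent does not specify a template format):

```python
from PIL import Image

# Hypothetical slot geometry; a real template would declare these per frame.
SLOT_POS, SLOT_SIZE = (120, 80), (400, 600)

frame = Image.open("cache/template_42/frame_bg.png").convert("RGBA")
user_img = Image.open("user_photo.jpg").convert("RGBA").resize(SLOT_SIZE)
frame.paste(user_img, SLOT_POS)        # replace the template's default image
frame.save("initial_frame_001.png")    # such frames are then encoded into the initial video
```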
And S230, receiving a synthesis processing instruction, jumping the current page from the video editing page to a video publishing page, and synchronously synthesizing the initial video into a target video in the background.
In this embodiment, if the user is unsatisfied with the effect of the provided video material or has other editing ideas, the initial video may be re-edited, for example by adding text, stickers or filter effects. When the user finishes editing, the client receives a synthesis processing instruction and synthesizes the content edited by the user. At this point the synthesis operation moves to the background for processing, and the current page jumps from the video editing page to the video publishing page, so that the user can conveniently enter video description information there. In other words, after receiving the synthesis processing instruction the client jumps immediately from the video editing page to the video publishing page and performs synthesis in the background, instead of waiting for synthesis to complete before jumping.
Optionally, in an embodiment, a pause entry and/or a stop entry may be provided on the video publishing page: the pause entry pauses synthesizing the initial video into the target video, and the stop entry stops the synthesis, so that the user can intervene in the target-video synthesis process.
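The jump-then-synthesize behavior, together with the optional pause and stop entries and the progress value used later for the floating ball, could look roughly like the following Python sketch; every name in it (compose_frame, the event flags, the frame count) is assumed for illustration, not taken from the patent.

```python
import threading
import time

pause_evt = threading.Event()   # set -> paused   (the "pause entry")
stop_evt = threading.Event()    # set -> aborted  (the "stop entry")
progress = {"pct": 0}           # read by the UI to draw the floating ball

def compose_frame(i: int) -> None:
    time.sleep(0.01)            # placeholder for real per-frame transform + encode work

def background_compose(frame_count: int) -> None:
    for i in range(frame_count):
        while pause_evt.is_set():          # user paused from the publishing page
            if stop_evt.is_set():
                return
            time.sleep(0.1)
        if stop_evt.is_set():              # user stopped compositing entirely
            return
        compose_frame(i)
        progress["pct"] = int(100 * (i + 1) / frame_count)

threading.Thread(target=background_compose, args=(150,), daemon=True).start()
# UI thread: jump to the publishing page immediately, without waiting.
print("now on video publishing page; compositing at", progress["pct"], "%")
```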
S240, uploading the target video to a server for publishing.
In one embodiment, the initial video is synthesized into the target video in the background, and once synthesis finishes the target video is automatically uploaded to the server for publishing. In another embodiment, after the background synthesis finishes, the target video is presented on the current page to prompt the user that it is ready, and it is uploaded to the server for publishing only after a publishing instruction is received from the user.
In an embodiment, a prompt icon showing the video synthesis progress may be presented on the current interface: for example, the progress of the target video is shown as a floating ball, and if the ball displays the number '80%', synthesis of the target video is 80% complete.
According to the video production method provided by this embodiment, a composite template is selected from preset video materials on a video editing page; an original image selected by the user is rendered at a designated position on the composite template to generate an initial video; upon receiving a synthesis processing instruction, the current page jumps from the video editing page to a video publishing page while the initial video is synchronously synthesized into a target video in the background; and the target video is uploaded to the server for publishing. Because the jump to the video publishing page does not wait for synthesis to finish, the time from video production to video publishing is shortened and the user experience is improved.
In an embodiment, before receiving the composition processing instruction in step S230, the method further includes:
s130, in the process of previewing the initial video, the editing operation input by a user on one or more frames of original images in the initial video is obtained, and the target transformation parameters corresponding to the editing operation are determined.
In this embodiment, the user can preview the effect of the created initial video and re-edit the initial video in a preview state.
Wherein the editing operation comprises: at least one of image cropping, image scaling, image color transformation, image rotation, image filtering, adding special effects, image beautification, adding sticker text, and inserting background music. Image cropping cuts away part of the original image to delete a partial region; image scaling enlarges or reduces the original image; image color transformation changes the color of part or all of the original image; image rotation rotates the original image to change its presentation angle; image filtering adds a filter effect to the original image; adding special effects adds a static or dynamic image; image beautification beautifies people in the original image by whitening, eye enlargement, skin smoothing, blemish removal and the like; adding sticker text adds decorations such as stickers and text to the original image; and inserting background music adds background music to the video. Of course, other embodiments may include other editing operations.
In an embodiment, obtaining, in step S130, the editing operation input by the user on one or more frames of original images in the initial video and determining the target transformation parameter corresponding to the editing operation may include the following steps:
s1301, determining one or more frames of original images to be processed in the initial video.
In the preview state, one or more frames of original images to be processed in the initial video are acquired, and the editing operation is input on these original images. Optionally, the MD5 value of the original image, or the image's corresponding timestamp in the initial video, is obtained to determine the position of the original image the user selected for editing.
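A small sketch of the two identification options the embodiment mentions, assuming image files on disk and a constant frame rate:

```python
import hashlib

def image_md5(path: str) -> str:
    # Digest of the image bytes uniquely keys the frame being edited.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def frame_timestamp(frame_index: int, fps: float = 25.0) -> float:
    # Alternative key: the frame's position on the initial video's timeline.
    # Assumes a constant frame rate; a real client might read container timestamps.
    return frame_index / fps

edit_target = image_md5("original_1.png")   # or: frame_timestamp(0)
```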
S1302, obtaining an editing operation acting on the original image, performing corresponding processing on the original image according to the editing operation, and displaying a processed real-time effect picture.
The client processes the original image according to the editing operation input by the user and displays the processed real-time effect graph in the preview; the real-time effect graph shows the image effect of the original image after the corresponding editing operation, such as the real-time effect after a sticker is added or after a filter is applied.
S1303, obtaining a target real-time effect image confirmed by a user from the real-time effect image, calculating target transformation parameters of the target real-time effect image relative to the original image, and storing the target transformation parameters.
In this embodiment there may be one or more real-time effect graphs. Taking image scaling as an example of an editing operation, the client acquires the scaling ratios the user inputs over several attempts and displays the corresponding real-time effect graphs on its display interface in real time. The user can view the real-time effect graphs at different scaling ratios through 'previous' and 'next' key operations, or by sliding left and right; each real-time effect graph corresponds to one set of transformation parameters. Likewise, the target real-time effect graphs corresponding to different original images can be viewed by paging or switching.
The user selects one of the real-time effect graphs as the target real-time effect graph. The client compares the target real-time effect graph with the original image, calculates the transformation parameters of the target real-time effect graph relative to the original image, and stores these target transformation parameters, so that in the subsequent synthesis of the target video the corresponding original image in the initial video can be transformed according to them. The target transformation parameters include: cropping-region position information, scaling ratio, color transformation information, rotation angle and rotation direction, filter type, special-effect type, beautification level, sticker text content, music information, and the like.
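One plausible way to hold these saved parameters is a small record keyed by the frame identifier; the field names below are assumptions for illustration, and is_identity() mirrors the 'preset reference value' check described later:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TargetTransformParams:
    frame_key: str                        # MD5 digest or timestamp of the original image
    crop_box: Optional[Tuple[int, int, int, int]] = None  # (left, top, right, bottom)
    scale: float = 1.0                    # 1.0 is the preset reference value
    rotation_deg: float = 0.0
    color_info: Optional[str] = None      # color transformation information
    filter_type: Optional[str] = None
    effect_type: Optional[str] = None
    beauty_level: int = 0                 # beautification level
    sticker_text: Optional[str] = None
    music_info: Optional[str] = None

    def is_identity(self) -> bool:
        # True when every field still holds its preset reference value,
        # i.e. this frame needs no transformation at synthesis time.
        return self == TargetTransformParams(frame_key=self.frame_key)

saved_params: dict = {}   # frame_key -> TargetTransformParams, written on each "save"
```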
In one embodiment, the synthesizing the initial video into the target video in step S230 includes:
s2301, obtaining target transformation parameters corresponding to the original image, and performing corresponding transformation processing on the original image corresponding to the initial video according to the target transformation parameters to obtain a transformed image.
The transformed image carries, on top of the original image, the effect corresponding to the target transformation parameters. A key for synthesis processing, such as a 'compose' or 'OK' key, is presented on the client's display interface. In this embodiment, the synthesis key pops up after the user triggers an editing operation on the initial video; if the user triggers no editing operation, the synthesis key remains hidden.
When the user taps the synthesis key, the synthesis processing operation is performed. The client receives the operation and triggers the synthesis instruction, then detects whether each frame of original image in the initial video has corresponding target transformation parameters. Optionally, each original image carries a parameter value: if the value equals a preset reference value, the image has not been changed and has no target transformation parameters; if it does not, target transformation parameters exist for that image.
After determining that an original image has corresponding target transformation parameters, the client obtains them; these are the parameters with which the image is transformed when synthesizing the target video. Optionally, the first original image is obtained and transformed according to its target transformation parameters to yield a first transformed image; the second original image is then obtained automatically and transformed according to its target transformation parameters to yield a second transformed image; and so on, until every original image has been transformed according to its target transformation parameters into a corresponding transformed image.
And S2302, performing secondary synthesis on the initial video by using the transformed image to generate the target video.
In this embodiment, the original images of the initial video are transformed with the target transformation parameters, such as scaling, adding a filter effect, adding sticker text, or changing color, brightness and contrast, to obtain transformed images; each transformed image then replaces its corresponding original image, and synthesis yields the target video. The user first pre-processes the original images to obtain satisfactory target real-time effect graphs, and only the transformation parameters corresponding to those graphs are stored: no pre-processed images are saved, so no image redundancy arises and storage space is not occupied.
After the user pre-transforms one original image, the corresponding target transformation parameters are stored; the user then switches to another original image, pre-transforms it, and its target transformation parameters are stored, and so on, until all the original images to be edited in the initial video have been pre-transformed. After pre-transformation is finished, the transformations are actually performed in one pass to synthesize the target video. This reduces the resource consumption of producing a transformed image every time a single original image is edited, and avoids the image redundancy and wasted storage of generating new images each time the user revises an effect.
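A hedged sketch of this one-pass secondary synthesis, reusing the TargetTransformParams record from the earlier sketch and Pillow for the geometric transforms; filters, beautification, stickers and the final encode are left as placeholders:

```python
from PIL import Image

def apply_params(img: Image.Image, p: "TargetTransformParams") -> Image.Image:
    # Apply only the geometric parameters here; filters, beautification,
    # stickers and music would be handled by further stages in a real client.
    if p.crop_box:
        img = img.crop(p.crop_box)
    if p.scale != 1.0:
        img = img.resize((round(img.width * p.scale), round(img.height * p.scale)))
    if p.rotation_deg:
        img = img.rotate(p.rotation_deg, expand=True)
    return img

def secondary_synthesis(frame_paths, params):
    frames = []
    for path in frame_paths:
        img = Image.open(path)
        p = params.get(path)
        if p is not None and not p.is_identity():   # untouched frames pass through
            img = apply_params(img, p)
        frames.append(img)
    return frames   # hand these to the encoder to produce the target video
```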
In one embodiment, before the step S220 renders the original image selected by the user at the designated position on the composite template, the method includes:
and S1201, decoding the video material.
And S1202, in the process of decoding the composite template, cropping the selected original image according to the image fill size of the composite template.

Because the fill size at the original-image position differs from one composite template to another, the image the user selects may not meet the template's requirements. For example, if the template's fill size is 400px × 600px, the selected original image is scaled and cropped to that fill size, which reduces memory usage and the synthesis overhead during video production.

It should be noted that cropping the original image to the composite template's fill size while the template is being decoded shortens the overall video production time.
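For example, a cover-then-center-crop to the template's fill size could be sketched with Pillow as follows; the 400px × 600px figure comes from the example above, and the file name is assumed:

```python
from PIL import Image

def crop_to_fill(path: str, fill_w: int, fill_h: int) -> Image.Image:
    img = Image.open(path)
    # Scale so the image fully covers the fill box, then center-crop the excess.
    scale = max(fill_w / img.width, fill_h / img.height)
    img = img.resize((round(img.width * scale), round(img.height * scale)))
    left = (img.width - fill_w) // 2
    top = (img.height - fill_h) // 2
    return img.crop((left, top, left + fill_w, top + fill_h))

slot_image = crop_to_fill("user_photo.jpg", 400, 600)   # matches the template slot
```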
In an embodiment, the synthesizing the initial video into the target video in the background further comprises: s330, receiving the description information input by the user on the video publishing page.
In this embodiment, the client receives the synthesis processing operation and moves synthesis of the target video according to the target transformation parameters to the background, while the current page jumps from the video editing page to the video publishing page, where the description information input by the user is received. The description information includes a video title, a video description, a video type and the like, briefly introducing the video's content or attracting other users to browse the target video.
The step S240 of uploading the target video to a server for publishing includes:
s2401 receives an issue instruction.
A publishing key, such as a 'publish' key, is presented on the video publishing page. The user clicks the publishing key to perform the publishing operation, and the client receives the publishing instruction corresponding to that operation.
S2402, uploading the description information and the target video to a server for publishing according to the publishing instruction.
After receiving the publishing instruction, the client obtains the description information from the video publishing page and uploads it together with the target video to the server for publishing. Optionally, if the user entered no description information, the obtained description is empty and the target video is uploaded to the server for publishing as it is.
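A minimal sketch of this publish step, assuming a hypothetical endpoint and field names and the requests library; an empty description is sent when the user typed nothing:

```python
import requests

def publish(video_path: str, description: str = "") -> None:
    with open(video_path, "rb") as f:
        resp = requests.post(
            "https://example.com/api/videos",        # hypothetical server endpoint
            data={"description": description},        # empty if the user typed nothing
            files={"video": ("target.mp4", f, "video/mp4")},
        )
    resp.raise_for_status()

publish("target.mp4", "happy tour")
```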
For the purpose of illustrating the concepts of the present application, reference is made to the following drawings and examples.
FIG. 5 is a schematic view of a preview interface for the initial video. As shown in FIG. 5, when the user previews the initial video, the upper half of the interface plays the video and the lower half shows its playing track. The track corresponds to the original images (i.e., the video frames) of the initial video being played: original image 1 represents the first video frame, original image 2 the second, and so on. The user selects, from the original images of the initial video, those that need re-editing.
FIG. 6 is a schematic diagram of an interface for editing an original image. As shown in FIG. 6, while previewing the initial video the user can slide the playing track to select the original image to edit. Here the user selects original image 1; an 'edit' key at a set position of the interface (for example, the upper right corner) puts the current original image into an editable state, in which editing operations such as scaling, adding text, stickers, filters and beautification can be input. For example, the user adds a 'smiley face' sticker covering the face in original image 1, as shown in FIG. 7, which is a schematic interface diagram based on the target real-time effect graph of FIG. 6.
After the 'edit' key is clicked, a 'save' key appears at the set position. When the user finishes editing and clicks 'save', the transformation parameters corresponding to the currently presented effect of the original image are stored. The 'save' key then reverts to the 'edit' key; to continue editing, the user clicks 'edit' again and repeats the operation. This continues until the user has finished editing every selected original image; an image that needs no editing is simply skipped by switching to the next one.
FIG. 8 is another schematic interface diagram for editing an original image. As shown in FIG. 8, the preview window switches to original image 5 for editing (pre-transformation). FIG. 9 is a schematic interface diagram based on the target real-time effect graph of FIG. 8: the user adds the text 'pig blown with oO' on original image 5 and clicks 'save', and the transformation parameters corresponding to the target real-time effect graph shown in FIG. 9 are stored.
As shown in FIGS. 7 and 9, in the non-editing state the display interface presents a synthesis key, which may be a 'Done', 'OK' or 'one-key synthesis' key, to trigger synthesis of the initial video and process all the selected original images at once. When the user has finished editing the selected original images and their transformation parameters are stored, clicking the 'one-key synthesis' key obtains each original image and its corresponding transformation parameters, transforms the images, and synthesizes them into a target video carrying the newly added editing effects.
In this embodiment, when the user re-edits the initial video, the target transformation parameters are stored frame by frame as each edited frame's target real-time effect graph is confirmed; once the target transformation parameters for all the edited original images are available, the images are transformed in a single pass and the target video is synthesized, avoiding the resource occupation and loss of transforming and synthesizing immediately after each edit. A related embodiment of the video production apparatus is described in detail below.
FIG. 10 is a schematic structural diagram of a video production apparatus according to an embodiment; the apparatus can run on a video production device.
Specifically, as shown in FIG. 10, the video production apparatus 200 includes: a composite template selecting module 210, an initial video generation module 220, a target video synthesis module 230, and a target video publishing module 240.
A composite template selecting module 210, configured to select a composite template from pre-downloaded video materials through a video editing page;
an initial video generation module 220, configured to render an original image selected by a user at a specified position on the composite template, and generate an initial video;
a target video synthesizing module 230, configured to receive a synthesizing processing instruction, jump the current page from the video editing page to a video publishing page, and synchronously synthesize the initial video into a target video in the background;
and the target video publishing module 240 is configured to upload the target video to a server in the background for publishing.
The video production apparatus provided by this embodiment selects a composite template from preset video materials through a video editing page; renders the original image selected by the user at a designated position on the composite template to generate an initial video; upon receiving a synthesis processing instruction, jumps the current page from the video editing page to a video publishing page while synchronously synthesizing the initial video into a target video in the background; and uploads the target video for publishing. Because there is no need to wait for synthesis to finish before jumping to the video publishing page, the time from video production to video publishing is shortened and the user experience is improved.
In one embodiment, the video production apparatus 200 further comprises: the conversion parameter determining module is used for acquiring the editing operation input by a user on one or more frames of original images in the initial video in the process of previewing the initial video and determining a target conversion parameter corresponding to the editing operation;
the target video composition module 230 includes: a transformed image obtaining unit and a target video generating unit;
a transformed image obtaining unit, configured to obtain a target transformation parameter corresponding to the original image, and perform corresponding transformation processing on the original image corresponding to the initial video according to the target transformation parameter to obtain a transformed image;
and the target video generating unit is used for performing secondary synthesis on the initial video by utilizing the transformed image to generate the target video.
In one embodiment, the transformation parameter determination module comprises: the device comprises an original image determining unit, a real-time effect graph display unit and a transformation parameter storage unit;
the original image determining unit is used for determining one or more frames of original images to be processed in the initial video;
the real-time effect graph display unit is used for acquiring editing operation acting on the original image, correspondingly processing the original image according to the editing operation and displaying the processed real-time effect graph;
and the transformation parameter storage unit is used for acquiring a target real-time effect image confirmed by a user from the real-time effect image, calculating a target transformation parameter of the target real-time effect image relative to the original image, and storing the target transformation parameter.
In one embodiment, the video production apparatus 200 further includes: a decoding module and a cropping module;
the decoding module is used for decoding the video material;
and the cropping module is used for cropping the selected original image according to the image fill size of the composite template in the process of decoding the composite template.
In one embodiment, the video production apparatus 200 further includes: the description information receiving module is used for receiving the description information input by the user on the video publishing page;
the target video publishing module comprises: a publishing instruction receiving unit and a target video publishing unit;
the publishing instruction receiving unit is configured to receive a publishing instruction;
and the target video publishing unit is used for uploading the description information and the target video to a server for publishing according to the publishing instruction.
In one embodiment, the composite template selection module 210 includes: a material video playing unit and a composite template decoding unit;
the material video playing unit is used for acquiring a selection instruction of a user on the composite template and playing a material video corresponding to the composite template;
and the composite template decoding unit is used for decoding the composite template in the process of playing the material video.
In one embodiment, the editing operation comprises: at least one of image cropping, image scaling, image color transformation, image rotation, image filtering, adding special effects, image beautification, adding sticker text, and inserting background music.
The video production apparatus of the embodiments of the present disclosure can execute the video production method provided by the embodiments of the present disclosure, and its implementation principle is similar. The actions performed by each module of the apparatus correspond to the steps of the video production method in the embodiments above; for a detailed functional description of each module, refer to the description of the corresponding video production method shown above, which is not repeated here.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the video production method in any of the above embodiments is implemented.
When the computer device provided by the above-mentioned embodiment executes the video production method provided by any of the above-mentioned embodiments, the computer device has corresponding functions and advantageous effects.
An embodiment of the present invention provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a video production method, comprising:
selecting a composite template from preset video materials through a video editing page;
rendering an original image selected by a user at a designated position on the composite template, and generating an initial video;
receiving a synthesis processing instruction, jumping the current page from the video editing page to a video publishing page, and synchronously synthesizing the initial video into a target video in the background;
and uploading the target video to a server for publishing.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the operations of the video production method described above, and has corresponding functions and advantages.
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware, or by hardware alone, though the former is the better implementation in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, the product including several instructions that enable a computer device (which may be a personal computer, a server, or a network device) to execute the video production method according to any embodiment of the present invention.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, their execution is not strictly ordered, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages that are not necessarily completed at the same moment but may execute at different moments, and not necessarily sequentially: they may run in turn or alternately with other steps, or with sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the present application, and these improvements and refinements also fall within the protection scope of the present application.

Claims (10)

1. A method of video production, comprising the steps of:
selecting a composite template from preset video materials through a video editing page;
rendering the selected original image at a designated position on the composite template, and generating an initial video;
receiving a synthesis processing instruction, jumping the current page from the video editing page to a video publishing page, and synchronously synthesizing the initial video into a target video in the background; in the process of synthesizing the target video in the background, responding to operations on the video publishing page other than those instructing synthesis or publishing of the target video; wherein the video publishing page is provided with a pause entry and/or a stop entry, the pause entry being used to pause synthesizing the initial video into the target video during the synthesis process, and the stop entry being used to stop synthesizing the initial video into the target video during the synthesis process;
and after the target video is synthesized, uploading the target video for publishing.
2. The video production method according to claim 1, further comprising, before receiving the synthesis processing instruction:
in the process of previewing the initial video, acquiring an editing operation input by a user on one or more frames of original images in the initial video, and determining a target transformation parameter corresponding to the editing operation;
the synthesizing the initial video into the target video comprises the following steps:
acquiring target transformation parameters corresponding to the original images, and performing corresponding transformation processing on the corresponding original images in the initial video according to the target transformation parameters to obtain transformed images;
and performing secondary synthesis on the initial video by using the transformed image to generate the target video.
3. The video production method according to claim 2, wherein the step of obtaining an editing operation input by a user on one or more frames of original images in the initial video, and determining the target transformation parameter corresponding to the editing operation comprises:
determining one or more frames of original images to be processed in the initial video;
acquiring an editing operation acted on the original image, carrying out corresponding processing on the original image according to the editing operation, and displaying a processed real-time effect graph;
and acquiring a target real-time effect image confirmed by a user from the real-time effect image, calculating target transformation parameters of the target real-time effect image relative to the original image, and storing the target transformation parameters.
4. The video production method according to claim 1, further comprising, before the step of rendering the original image selected by the user at the specified position on the composite template:
decoding the video material;
and in the process of decoding the composite template, cropping the selected original image according to the image fill size of the composite template.
5. The method of claim 1, wherein the step of synthesizing the initial video into the target video in the background further comprises: receiving description information input by a user on the video publishing page;
the step of uploading the target video for publishing comprises the following steps:
receiving a publishing instruction;
and uploading the description information and the target video to a server for publishing according to the publishing instruction.
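As a sketch of this publishing step, assuming an HTTP backend with a hypothetical /videos endpoint and response field (the patent does not specify the upload protocol):

```python
import requests

def publish(video_path: str, description: str,
            server: str = "https://example.com/videos") -> str:
    """Upload the synthesized target video together with the description
    text the user typed on the publishing page during background synthesis."""
    with open(video_path, "rb") as f:
        resp = requests.post(
            server,
            data={"description": description},
            files={"video": ("target.mp4", f, "video/mp4")},
        )
    resp.raise_for_status()
    return resp.json()["id"]               # hypothetical response field
```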
6. The video production method according to claim 1, wherein the step of selecting a composite template comprises:
acquiring a selection instruction of a user for the composite template, and playing a material video corresponding to the composite template;
and decoding the composite template in the process of playing the material video.
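Claim 6 overlaps template decoding with preview playback so the template is ready by the time the user commits. A sketch with concurrent.futures, where decode_template and play_preview are assumed stand-ins for the real decode and playback routines:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def decode_template(template_id: str) -> dict:
    time.sleep(0.5)                        # stand-in for real decode work
    return {"id": template_id, "tracks": []}

def play_preview(template_id: str) -> None:
    print(f"playing material video for {template_id}")  # stand-in for playback

def select_template(template_id: str) -> dict:
    with ThreadPoolExecutor(max_workers=1) as pool:
        decoding = pool.submit(decode_template, template_id)  # starts immediately
        play_preview(template_id)          # user watches the material video meanwhile
        return decoding.result()           # template is ideally decoded by now
```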
7. The video production method according to claim 2, wherein the editing operation comprises at least one of: image cropping, image scaling, image color transformation, image rotation, image filtering, special effect addition, image beautification, sticker and text addition, and background music insertion.
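For completeness, the editing-operation vocabulary of claim 7 maps naturally onto an enumeration; the names below are illustrative assumptions only:

```python
from enum import Enum, auto

class EditOp(Enum):
    CROP = auto()
    SCALE = auto()
    COLOR_TRANSFORM = auto()
    ROTATE = auto()
    FILTER = auto()
    SPECIAL_EFFECT = auto()
    BEAUTIFY = auto()
    STICKER_TEXT = auto()
    BACKGROUND_MUSIC = auto()
```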
8. A video production apparatus, comprising:
a composite template selection module, configured to select a composite template from preset video materials through a video editing page;
an initial video generation module, configured to render an original image selected by a user at a designated position on the composite template to generate an initial video;
a target video synthesis module, configured to receive a synthesis processing instruction, jump the current page from the video editing page to a video publishing page, and synchronously synthesize the initial video into a target video in the background; and to respond, on the video publishing page, to user operations other than those instructing synthesis or publishing of the target video while the target video is being synthesized in the background; wherein the video publishing page is provided with a pause entry and/or a stop entry, the pause entry being used for pausing the synthesis of the initial video into the target video during the synthesis process, and the stop entry being used for stopping the synthesis of the initial video into the target video during the synthesis process;
and a target video publishing module, configured to upload the target video for publishing after the target video is synthesized.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the video production method according to any one of claims 1 to 7 when executing said program.
10. A storage medium containing computer-executable instructions for performing the steps of the video production method of any one of claims 1 to 7 when executed by a computer processor.
CN202010745265.XA 2020-07-29 2020-07-29 Video production method, device, equipment and storage medium Active CN111935504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010745265.XA CN111935504B (en) 2020-07-29 2020-07-29 Video production method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111935504A CN111935504A (en) 2020-11-13
CN111935504B true CN111935504B (en) 2023-04-14

Family

ID=73315896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010745265.XA Active CN111935504B (en) 2020-07-29 2020-07-29 Video production method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111935504B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112104894B (en) * 2020-11-18 2021-03-09 成都索贝数码科技股份有限公司 Ultra-high-definition video editing method based on breadth transformation
CN112866798B (en) * 2020-12-31 2023-05-05 北京字跳网络技术有限公司 Video generation method, device, equipment and storage medium
CN112887794B (en) * 2021-01-26 2023-07-18 维沃移动通信有限公司 Video editing method and device
CN113115095B (en) * 2021-03-18 2022-09-09 北京达佳互联信息技术有限公司 Video processing method, video processing device, electronic equipment and storage medium
CN113111222B (en) * 2021-03-26 2024-03-19 北京达佳互联信息技术有限公司 Short video template generation method, device, server and storage medium
CN113411633A (en) * 2021-04-30 2021-09-17 成都东方盛行电子有限责任公司 Template editing method based on non-editing engineering and application thereof
CN113115099B (en) * 2021-05-14 2022-07-05 北京市商汤科技开发有限公司 Video recording method and device, electronic equipment and storage medium
CN113411655A (en) * 2021-05-18 2021-09-17 北京达佳互联信息技术有限公司 Method and device for generating video on demand, electronic equipment and storage medium
CN113395588A (en) * 2021-06-23 2021-09-14 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN114025103A (en) * 2021-11-01 2022-02-08 上海哔哩哔哩科技有限公司 Video production method and device
CN116132719A (en) * 2021-11-15 2023-05-16 北京字跳网络技术有限公司 Video processing method, device, electronic equipment and readable storage medium
CN116170549A (en) * 2021-11-25 2023-05-26 北京字跳网络技术有限公司 Video processing method and device
CN114900734B (en) * 2022-05-18 2024-05-03 广州太平洋电脑信息咨询有限公司 Vehicle type comparison video generation method and device, storage medium and computer equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4010227B2 (en) * 2002-11-11 2007-11-21 ソニー株式会社 Imaging apparatus, content production method, program, program recording medium
CN110049266A (en) * 2019-04-10 2019-07-23 北京字节跳动网络技术有限公司 Video data issues method, apparatus, electronic equipment and storage medium
CN110708596A (en) * 2019-09-29 2020-01-17 北京达佳互联信息技术有限公司 Method and device for generating video, electronic equipment and readable storage medium
CN111222571B (en) * 2020-01-06 2021-12-14 腾讯科技(深圳)有限公司 Image special effect processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111935504A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN111935504B (en) Video production method, device, equipment and storage medium
EP3758364B1 (en) Dynamic emoticon-generating method, computer-readable storage medium and computer device
CN111935505B (en) Video cover generation method, device, equipment and storage medium
US8860865B2 (en) Assisted video creation utilizing a camera
KR100480076B1 (en) Method for processing still video image
WO2020107297A1 (en) Video clipping control method, terminal device, system
US20160328877A1 (en) Method and apparatus for making personalized dynamic emoticon
CN104581380A (en) Information processing method and mobile terminal
KR20160055813A (en) Gesture based interactive graphical user interface for video editing on smartphone/camera with touchscreen
CN112804459A (en) Image display method and device based on virtual camera, storage medium and electronic equipment
CN113891113A (en) Video clip synthesis method and electronic equipment
CN113099287A (en) Video production method and device
CN113691854A (en) Video creation method and device, electronic equipment and computer program product
US11394888B2 (en) Personalized videos
CN109379631B (en) Method for editing video captions through mobile terminal
CN113099288A (en) Video production method and device
CN109960549B (en) GIF picture generation method and device
CN103312981A (en) Synthetic multi-picture taking method and shooting device
CN114520876A (en) Time-delay shooting video recording method and device and electronic equipment
CN114693827A (en) Expression generation method and device, computer equipment and storage medium
CN107564084B (en) Method and device for synthesizing motion picture and storage equipment
CN104991950A (en) Picture generating method, display method and corresponding devices
KR101457045B1 (en) The manufacturing method for Ani Comic by applying effects for 2 dimensional comic contents and computer-readable recording medium having Ani comic program manufacturing Ani comic by applying effects for 2 dimensional comic contents
US9122923B2 (en) Image generation apparatus and control method
CN111951353A (en) Electronic album synthesis method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230817

Address after: No. 79 Wanbo Second Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province, 5114303802 (self declared)

Patentee after: Guangzhou Huanju Mark Network Information Co.,Ltd.

Address before: 29th floor, building B-1, Wanda Plaza, Wanbo business district, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.
