CN112819927A - Video generation method and device based on pictures - Google Patents


Info

Publication number
CN112819927A
Authority
CN
China
Prior art keywords
picture, video, frame, edited, pictures
Prior art date
Legal status
Pending
Application number
CN202110166271.4A
Other languages
Chinese (zh)
Inventor
常青
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN202110166271.4A
Publication of CN112819927A
Priority to PCT/CN2022/072854 (WO2022166595A1)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241: Advertisements
    • G06T 2213/00: Indexing scheme for animation
    • G06T 2213/08: Animation software package

Abstract

The application discloses a video generation method based on pictures. The method comprises the following steps: providing a picture editing interface for selecting a picture to be edited; acquiring the selected picture to be edited and displaying it in a display area of the picture editing interface; generating configuration information of an animation effect of the picture to be edited according to the selected video parameters and the picture to be edited; and generating a video with the animation effect corresponding to the picture to be edited according to the configuration information. The method and device enable a user to conveniently generate a video from a single picture.

Description

Video generation method and device based on pictures
Technical Field
The present application relates to the field of video technologies, and in particular, to a method and an apparatus for generating a video based on a picture.
Background
With the development of internet services, people increasingly conduct entertainment, transactions and other daily activities through network platforms. How to expose goods or service information to customers or potential customers through the network has become a concern for all parties. One widely known solution is to place an advertisement on a web page or to display a commodity detail page. The value of an advertisement or commodity detail page lies in attracting users to click and consume.
Most advertisement or commodity detail pages are presented to the user as static pictures, which have a poor display effect and little attraction for the user. Some advertisements are presented as video, which displays better than a still picture. In the prior art, however, presenting in video form requires the user either to shoot the video to be displayed, or to shoot a plurality of pictures and then synthesize them into a video. Both ways of obtaining a commodity-detail video are time-consuming and labor-intensive.
Disclosure of Invention
In view of the above, a method, an apparatus, a computer device and a computer-readable storage medium for generating a video based on a picture are provided to solve the problem in the prior art that a video of commodity details cannot be obtained conveniently.
The application provides a video generation method based on pictures, which comprises the following steps:
providing a picture editing interface, wherein the picture editing interface is used for selecting a picture to be edited;
acquiring a selected picture to be edited, and displaying the picture to be edited in a display area of the picture editing interface;
generating configuration information of the animation effect of the picture to be edited according to the selected video parameters and the picture to be edited;
and generating the video with the animation effect corresponding to the picture to be edited according to the configuration information.
Optionally, the display area includes a first display area and a second display area, and the acquiring the initial-state picture selected based on the picture to be edited includes:
acquiring a first framing operation on the picture to be edited, and displaying first position information of the first picture framed by the first framing operation in the first display area based on the first framing operation, wherein the first position information comprises coordinate information of the initial picture and first size information of the frame used for framing the initial picture;
acquiring a first setting instruction for setting the first picture as an initial-state picture, displaying a copied picture of the picture to be edited in the second display area based on the first setting instruction, and frame-selecting the initial-state picture on the copied picture.
Optionally, the obtaining the last-state picture selected based on the picture to be edited includes:
acquiring second framing operation on the picture to be edited, and displaying second position information of the second picture framed by the second framing operation in the first display area based on the second framing operation, wherein the second position information comprises coordinate information of the last picture and second size information of a frame used for framing the last picture;
and acquiring a second setting instruction for setting the second picture as a final-state picture, and frame-selecting the final-state picture on the copied picture based on the second setting instruction.
Optionally, the generating a video with an animation effect corresponding to the picture to be edited according to the configuration information includes:
decomposing the picture to be edited into N frames of pictures according to the configuration information, wherein N is the product of the video duration and the frame rate;
and synthesizing the N frames of pictures into the video with the animation effect.
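The claim above only fixes that N equals the video duration multiplied by the frame rate; it does not say how the N frames are derived. A minimal sketch, assuming the crop rectangle is moved and resized linearly between the initial-state box and the final-state box (function and variable names are illustrative, not from the patent):

```python
def interpolate_crop_boxes(start_box, end_box, duration_s, frame_rate):
    """Return N crop boxes (x, y, w, h) that move linearly from the
    initial-state box to the final-state box."""
    n = int(duration_s * frame_rate)  # N = video duration x frame rate
    boxes = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0  # 0.0 at the first frame, 1.0 at the Nth
        boxes.append(tuple(round(s + (e - s) * t)
                           for s, e in zip(start_box, end_box)))
    return boxes

# Example values taken from Figs. 3 and 5: initial box at (0, 74) sized
# 498x498, final box at (120, 58) sized 223x223; 2 seconds at 25 fps.
boxes = interpolate_crop_boxes((0, 74, 498, 498), (120, 58, 223, 223), 2, 25)
```

Each box would then be cropped out of the picture to be edited and scaled to the output resolution, giving the N frames to synthesize.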
Optionally, the generating a video with an animation effect corresponding to the picture to be edited according to the configuration information includes:
uploading the configuration information to a server, so that the server decomposes the picture to be edited into N frames of pictures according to the configuration information and synthesizes the N frames of pictures into the video with the animation effect, wherein N is the product of the video duration and the frame rate;
and receiving the video of the animation effect returned by the server.
Optionally, the decomposing the picture to be edited into N frames of pictures according to the configuration information includes:
calculating the pixel offset by which each subsequent frame picture in the N frames of pictures moves relative to the previous frame picture, according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the frame number of the video;
calculating the scaling of each subsequent frame picture relative to the previous frame picture in the N frames of pictures according to the first size information, the second size information and the frame number of the video in the configuration information;
determining a picture to be interpolated corresponding to each frame of picture from the first frame picture to the Nth frame picture in the N frames of pictures according to the pixel offset, the scaling and the picture to be edited, wherein the initial-state picture is used as the picture to be interpolated of the first frame picture, and the final-state picture is used as the picture to be interpolated of the Nth frame picture;
and sequentially performing interpolation processing on each picture to be interpolated by adopting a preset interpolation algorithm to obtain the N frames of pictures.
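The per-frame movement and scaling described in the steps above can be sketched as follows. The linear pixel step and the constant per-frame scaling ratio are assumptions; the patent does not state which progression is used (a linearly changing box size is equally plausible):

```python
def per_frame_motion(config, n_frames):
    """Derive the (dx, dy) pixel movement per frame and the scaling ratio
    between consecutive frames from the configuration information."""
    x0, y0 = config["start_xy"]       # coordinates of the initial-state picture
    x1, y1 = config["end_xy"]         # coordinates of the final-state picture
    w0 = config["start_size"]         # first size information (box width)
    w1 = config["end_size"]           # second size information (box width)
    steps = n_frames - 1              # N frames have N - 1 transitions
    dx = (x1 - x0) / steps            # horizontal pixels moved per frame
    dy = (y1 - y0) / steps            # vertical pixels moved per frame
    scale = (w1 / w0) ** (1 / steps)  # constant ratio between consecutive frames
    return dx, dy, scale

# Using the example values from Figs. 3 and 5, for a 50-frame video.
config = {"start_xy": (0, 74), "end_xy": (120, 58),
          "start_size": 498, "end_size": 223}
dx, dy, scale = per_frame_motion(config, 50)
```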
Optionally, the method for generating a video based on pictures further includes:
acquiring an audio corresponding to the video;
incorporating the audio into the video.
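The patent does not specify how the audio is incorporated into the video. One common approach is to mux the two streams with the ffmpeg command-line tool; the file names below are placeholders:

```python
import subprocess

def build_mux_command(video_path, audio_path, out_path):
    """Build an ffmpeg command that copies the video stream unchanged,
    encodes the audio as AAC, and stops at the shorter stream."""
    return ["ffmpeg", "-i", video_path, "-i", audio_path,
            "-c:v", "copy", "-c:a", "aac", "-shortest", out_path]

cmd = build_mux_command("animation.mp4", "soundtrack.mp3", "output.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
```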
The application also provides a video generation method based on the picture, which comprises the following steps:
acquiring a picture to be edited and configuration information of an animation effect generated based on the picture to be edited, wherein the configuration information comprises coordinate information of an initial-state picture selected based on the picture to be edited, first size information of a frame used for framing the initial-state picture, coordinate information of a final-state picture selected based on the picture to be edited, second size information of a frame used for framing the final-state picture, video duration information and a frame rate;
calculating the pixel offset by which each subsequent frame picture in the N frames of pictures moves relative to the previous frame picture according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the frame number of the video, wherein N is the product of the video duration and the frame rate;
calculating the scaling of each subsequent frame picture relative to the previous frame picture in the N frames of pictures according to the first size information, the second size information and the frame number of the video in the configuration information;
determining a picture to be interpolated corresponding to each frame of picture from the first frame picture to the Nth frame picture in the N frames of pictures according to the pixel offset, the scaling and the picture to be edited, wherein the initial-state picture is used as the picture to be interpolated of the first frame picture, and the final-state picture is used as the picture to be interpolated of the Nth frame picture;
performing interpolation processing on each picture to be interpolated in sequence by adopting a preset interpolation algorithm to obtain the N frames of pictures;
and synthesizing the N frames of pictures into the video with the animation effect.
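An end-to-end sketch of the server-side steps above, on a toy "picture" represented as a nested list of pixels. Nearest-neighbor sampling stands in for the unspecified preset interpolation algorithm, and the linear box interpolation is likewise an assumption:

```python
def crop_and_resize(picture, box, out_w, out_h):
    """Crop box = (x, y, w, h) out of the picture and resize it to
    out_w x out_h using nearest-neighbor sampling."""
    x, y, w, h = box
    return [[picture[y + (r * h) // out_h][x + (c * w) // out_w]
             for c in range(out_w)] for r in range(out_h)]

def render_frames(picture, start_box, end_box, duration_s, frame_rate,
                  out_w, out_h):
    """Decompose the picture into N frames that pan and zoom from the
    initial-state box to the final-state box (N = duration x frame rate)."""
    n = int(duration_s * frame_rate)
    frames = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0
        box = tuple(round(s + (e - s) * t)
                    for s, e in zip(start_box, end_box))
        frames.append(crop_and_resize(picture, box, out_w, out_h))
    return frames  # ready to be encoded into the video with an animation effect

# Toy 8x8 picture; 1 second at 10 fps zooming into the top-left quarter.
picture = [[(r, c) for c in range(8)] for r in range(8)]
frames = render_frames(picture, (0, 0, 8, 8), (0, 0, 4, 4), 1, 10, 4, 4)
```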
Optionally, the video generation method further includes:
acquiring an audio corresponding to the video;
incorporating the audio into the video.
The present application further provides a video generation device based on pictures, including:
a providing module, configured to provide a picture editing interface, wherein the picture editing interface is used for selecting a picture to be edited;
the display module is used for acquiring the selected picture to be edited and displaying the picture to be edited in a display area of the picture editing interface;
the video parameter selection module is used for selecting a video parameter based on a picture to be edited, wherein the video parameter comprises at least one of an initial state picture, a final state picture, a video duration and a video frame rate;
the first generation module is used for generating configuration information of the animation effect of the picture to be edited according to the selected video parameters and the picture to be edited;
and the second generation module is used for generating the video with the animation effect corresponding to the picture to be edited according to the configuration information.
The present application further provides a video generation device based on pictures, including:
an acquisition module, configured to acquire a picture to be edited and configuration information of an animation effect generated based on the picture to be edited, wherein the configuration information comprises coordinate information of an initial-state picture selected based on the picture to be edited, first size information of a frame used for framing the initial-state picture, coordinate information of a final-state picture selected based on the picture to be edited, second size information of a frame used for framing the final-state picture, video duration information and a frame rate;
a first calculating module, configured to calculate, according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the frame number of the video, the pixel offset by which each subsequent frame picture in the N frames of pictures moves relative to the previous frame picture, wherein N is the product of the video duration and the frame rate;
the second calculation module is used for calculating the scaling of a next frame picture relative to a previous frame picture in the N frame pictures according to the first size information, the second size information and the frame number of the video in the configuration information;
a determining module, configured to determine, according to the pixel offset, the scaling and the picture to be edited, a picture to be interpolated corresponding to each frame of picture from the first frame picture to the Nth frame picture in the N frames of pictures, wherein the initial-state picture is used as the picture to be interpolated of the first frame picture, and the final-state picture is used as the picture to be interpolated of the Nth frame picture;
a generating module, configured to sequentially perform interpolation processing on each picture to be interpolated by adopting a preset interpolation algorithm to obtain the N frames of pictures;
and the synthesis module is used for synthesizing the N frames of pictures into the video with the animation effect.
The present application further provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method.
The beneficial effects of the above technical scheme are that:
In the embodiment of the application, a picture editing interface is provided for selecting a picture to be edited; the selected picture to be edited is acquired and displayed in a display area of the picture editing interface; configuration information of the animation effect of the picture to be edited is generated according to the selected video parameters and the picture to be edited; and the video with the animation effect corresponding to the picture to be edited is generated according to the configuration information. By providing a picture editing page, the embodiment allows a picture to be edited to be turned into a video with an animation effect in a visual manner, so that users can make videos according to their own requirements without video production skills. In addition, since the video is generated from a single picture, a video of commodity details can be obtained more conveniently.
Drawings
Fig. 1 is a schematic diagram of an architecture of a video generation method based on pictures according to an embodiment of the present application;
FIG. 2 is a flowchart of an embodiment of a method for picture-based video generation according to the present application;
fig. 3 is a schematic diagram of an initial-state picture frame-selected on the picture to be edited based on the initial-state picture setting instruction in an embodiment of the present application;
fig. 4 is a detailed schematic diagram of the step of acquiring an initial state picture selected based on a picture to be edited in an embodiment of the present application;
fig. 5 is a schematic diagram of a final-state picture frame-selected on the picture to be edited based on the final-state picture setting instruction in an embodiment of the present application;
fig. 6 is a detailed schematic diagram of a step of acquiring a last-state picture selected based on a picture to be edited in an embodiment of the present application;
fig. 7 is a detailed schematic diagram of a step of generating a video with an animation effect corresponding to the picture to be edited according to the configuration information in an embodiment of the present application;
fig. 8 is a detailed schematic diagram of a step of generating a video with an animation effect corresponding to the picture to be edited according to the configuration information in another embodiment of the present application;
fig. 9 is a detailed schematic diagram of a step of decomposing the picture to be edited into N frames of pictures according to the configuration information in an embodiment of the present application;
FIG. 10 is a flow chart of a method for picture-based video generation according to another embodiment of the present application;
FIG. 11 is a block diagram of a program for an embodiment of an apparatus for picture-based video generation according to the present application;
FIG. 12 is a block diagram of another embodiment of a picture-based video generation apparatus according to the present application;
fig. 13 is a schematic hardware structure diagram of a computer device for executing a picture-based video generation method according to an embodiment of the present application.
Detailed Description
The advantages of the present application are further illustrated below with reference to the accompanying drawings and specific embodiments.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
In the description of the present application, it should be understood that the numerical references before the steps do not identify the order of performing the steps, but merely serve to facilitate the description of the present application and to distinguish each step, and therefore should not be construed as limiting the present application.
Fig. 1 schematically shows an environment diagram of a picture-based video generation method according to an embodiment of the present application.
The terminal device 2 may be configured to generate a corresponding video based on one picture. The terminal device 2 may comprise an electronic device, such as a smartphone, a tablet device, a laptop, a workstation, etc., which generates a corresponding video based on one picture.
The terminal device 2 may comprise a client 2A, such as an application for generating video based on pictures. The client 2A may output (e.g., display, render, present) the generated video to the user. The application includes a visual editing interface on which the user performs video compositing. Visual editing enables non-technical users to create custom videos: it changes the way video is produced, from an interface mode suitable only for technical personnel to a visual editing mode suitable for both technical and non-technical personnel, so that the display of dynamic video effects is no longer limited to technical personnel and the learning cost is low. The server 4 can connect a plurality of terminal devices 2 through the network 3. The server 4 may be located in a data center, such as a single site, or distributed in different physical locations (e.g., at multiple sites). The server 4 may provide services via one or more networks 3. The network 3 includes various network devices, such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, proxy devices, and/or the like. The network 3 may include physical links, such as coaxial cable links, twisted pair cable links, fiber optic links, and combinations thereof. The network 3 may include wireless links, such as cellular links, satellite links, and Wi-Fi links.
The server 4 may be configured to synthesize a video according to the configuration information and the picture to be edited, for example, synthesize a video with an animation effect. The server 4 may be an application server for providing some functional services. The server 4 comprises a plurality of network nodes. Multiple network nodes may handle tasks associated with a message service. The plurality of network nodes may be implemented as one or more computing devices, one or more processors, one or more virtual compute instances, combinations thereof, and/or the like. The plurality of network nodes may be implemented by one or more computer devices. One or more computer devices may include virtualized compute instances. The virtualized compute instance may include an emulation of a virtual machine, such as a computer system, operating system, server, and the like. The computer device may load a virtual machine from the computer device based on the virtual image and/or other data defining the particular software (e.g., operating system, dedicated application, server) used for emulation. As the demand for different types of processing services changes, different virtual machines may be loaded and/or terminated on one or more computer devices. A hypervisor may be implemented to manage the use of different virtual machines on the same computer device.
Fig. 2 is a schematic flowchart of a video generation method based on pictures according to an embodiment of the present application. The video generation method of the embodiment is applied to terminal equipment, wherein the terminal equipment can be electronic equipment such as a smart phone, a tablet device and a notebook computer. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed. As can be seen from the figure, the method for generating a video based on a picture provided in this embodiment includes:
and step S20, providing a picture editing interface, wherein the picture editing interface is used for selecting a picture to be edited.
Step S21, acquiring the selected picture to be edited, and displaying the picture to be edited in the display area of the picture editing interface.
Specifically, the terminal device may provide a picture editing interface, and a user (such as an operator) may select a picture to be edited based on the picture editing interface. In an exemplary implementation, the user may enter a picture selection interface by clicking an "add fragment" control in the picture editing interface; candidate pictures to be edited are then displayed in the selection interface so that the user can pick the picture to edit. When the user selects a picture to be edited from the candidates, the picture is acquired and displayed in a display area of the picture editing interface. In another embodiment, the user may also display a picture to be edited in the display area by dragging it into the editing interface. It should be noted that the operator may be an operator of a platform using the picture-based video generation method, such as an operator of the Bilibili (B station) platform.
Step S22, video parameters selected based on the picture to be edited are obtained, and the video parameters comprise at least one of an initial picture, a final picture, video duration and video frame rate.
Specifically, the initial-state picture is the picture corresponding to the first frame of the generated video.
As an example, the user may frame-select an initial-state picture on the picture to be edited by triggering an initial-state picture setting instruction. After receiving the instruction, the terminal device frame-selects the initial-state picture on the picture to be edited accordingly, thereby completing the operation of acquiring the initial-state picture selected based on the picture to be edited. In one embodiment, in order to let the user intuitively see the selected initial-state picture, after the user selects it, the initial-state picture and the first position information of the initial-state picture are displayed on the screen.
In an exemplary embodiment, the display area includes a first display area and a second display area, and referring to fig. 4, the acquiring the initial picture selected based on the picture to be edited includes:
step S50, acquiring a first framing operation on the picture to be edited, and displaying, in the first display area, first position information of the first picture framed by the first framing operation based on the first framing operation, where the first position information includes coordinate information of the initial picture and first size information of a frame used for framing the initial picture.
Specifically, the user may perform a first framing operation on the picture to be edited by using the rectangular frame, and after the user completes the framing operation, the terminal device may display, in the first display area, first position information of the first picture framed and selected by the user based on the first framing operation, that is, coordinate information of the first picture and first size information of the frame for framing the first picture. The first position information includes coordinate information of the initial-state picture and first size information of a frame used for framing the initial-state picture.
The coordinate information may include an abscissa (X) and an ordinate (Y) of a top left corner vertex of the initial state picture, or may include an abscissa and an ordinate of a bottom left corner vertex of the initial state picture, or an abscissa and an ordinate of a bottom right corner vertex of the initial state picture, or an abscissa and an ordinate of a top right corner vertex of the initial state picture, which is not limited in this embodiment. As an example, referring to fig. 3, it is assumed that the coordinate information is the abscissa and the ordinate of the vertex at the lower left corner of the initial picture, and is 0 and 74, respectively.
The first size information is a Width (Width) and a Height (Height) of the box, and referring to fig. 3, the Width and the Height of the box are 498 and 498, respectively, as an example.
In other embodiments of the present application, the first position information may be displayed in the second display area.
Step S51, acquiring a first setting instruction for setting the first picture as an initial picture, displaying a copy picture of the picture to be edited in the second display area based on the first setting instruction, and selecting the initial picture in the copy picture.
Specifically, after completing the framing of the first picture, the user may set the first picture as an initial-state picture or as a final-state picture. In a specific embodiment, referring to fig. 3, a user may trigger the first setting instruction by clicking a "set to initial state" control displayed on the interface to be edited, so as to set the first picture as an initial state picture.
In this embodiment, after receiving the first setting instruction, the terminal device displays a copy picture of the picture to be edited in the second display area based on the first setting instruction, and frames the initial picture in the copy picture.
It is understood that, in other embodiments of the present application, the initial picture may be directly framed in the picture to be edited, and the initial picture does not need to be framed in the copied picture.
The final-state picture is the picture corresponding to the last frame of the generated video.
As an example, based on a final-state picture setting instruction triggered on the picture to be edited, the final-state picture may be frame-selected on the picture to be edited, thereby completing the operation of acquiring the final-state picture selected based on the picture to be edited. In one embodiment, in order to let the user intuitively see the selected final-state picture, after the user selects it, the final-state picture and the second position information of the final-state picture are displayed on the screen.
In an exemplary embodiment, referring to fig. 6, the acquiring a final picture selected based on a picture to be edited includes:
step S70, acquiring a second framing operation on the picture to be edited, and displaying, in the first display area, second position information of a second picture framed by the second framing operation based on the second framing operation, where the second position information includes coordinate information of the last picture and second size information of a frame used for framing the last picture.
Specifically, the coordinate information may include an abscissa (X) and an ordinate (Y) of a top left corner vertex of the final state picture, or may include an abscissa and an ordinate of a bottom left corner vertex of the final state picture, or an abscissa and an ordinate of a bottom right corner vertex of the final state picture, or an abscissa and an ordinate of a top right corner vertex of the final state picture, which is not limited in this embodiment. As an example, referring to fig. 5, it is assumed that the coordinate information is the abscissa and the ordinate of the vertex at the lower left corner of the final picture, and is 120 and 58, respectively.
The second size information is a Width (Width) and a Height (Height) of a frame for framing the last picture, and referring to fig. 5, the Width and the Height of the frame are 223 and 223, respectively, as an example.
The user may perform a second framing operation on the picture to be edited by using the rectangular frame, and after the user completes the framing operation, the terminal device may display, in the first display area, second position information of the second picture framed and selected by the user based on the second framing operation, that is, coordinate information of the second picture and second size information of the frame for framing the second picture.
In other embodiments of the present application, the second position information may also be displayed in the second display area.

Step S71, a second setting instruction for setting the second picture as the final-state picture is acquired, and the final-state picture is framed and selected on the copied picture based on the second setting instruction.
Specifically, after the framing of the second picture is completed, the second picture may be set as an initial-state picture or as a final-state picture. In a specific embodiment, referring to fig. 5, the user may trigger the second setting instruction by clicking a "set to last state" control displayed on the interface to be edited, so as to set the second picture as a last state picture.
In this embodiment, after the terminal device receives the second setting instruction, the terminal device may select the last picture in the copied picture based on the second setting instruction.
In this embodiment, in order to distinguish the initial state picture from the final state picture, the initial state picture and the final state picture may be selected by using rectangular frames with different colors, or may be distinguished by adding text marks near the initial state picture and the final state picture, or by using a combination of the two modes.
It is understood that, in other embodiments of the present application, the last picture may also be directly framed in the picture to be edited, and the last picture does not need to be framed in the copied picture.
Specifically, the user may select the video duration through a timeline provided on the picture editing interface, or may set the video duration through a video duration configuration interface provided on the picture editing interface. As an example, the video duration is 2 seconds.
The frame rate is the frequency (rate), measured in frames per second, at which bitmap images appear consecutively on the display.
In this embodiment, the frame rate of the video may be set by the user, or may take a default value, such as 60. It should be noted that, when the acquired video parameters include only the initial-state picture, the picture to be edited may be used as the final-state picture, and the video duration and the video frame rate are set to default values; when the acquired video parameters include only the final-state picture, the picture to be edited may be used as the initial-state picture, and the video duration and the video frame rate are set to default values; when the acquired video parameters include the initial-state picture and the final-state picture, the video duration and the video frame rate may be set to default values; when the acquired video parameters include the initial-state picture, the final-state picture and the video duration, the video frame rate may be set to a default value; and when the acquired video parameters include the initial-state picture, the final-state picture and the video frame rate, the video duration may be set to a default value. That is, at least the initial-state picture or the final-state picture selected based on the picture to be edited needs to be acquired; the remaining video parameters may take default values and be added automatically when the configuration information is generated.
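The default-filling rule above can be sketched as follows. This is an illustrative sketch only: the patent does not name any functions or fields, so `fill_defaults` and the parameter keys are assumptions.

```python
# Hypothetical sketch of the default-filling rule: at least one of the
# initial- or final-state pictures must be supplied; every other video
# parameter falls back to a default (field names are illustrative).
DEFAULT_DURATION = 2      # seconds, the example value in this embodiment
DEFAULT_FRAME_RATE = 60   # the default frame rate mentioned above

def fill_defaults(params, picture_to_be_edited):
    """Return a complete video-parameter dict, filling gaps with defaults."""
    if "initial_picture" not in params and "final_picture" not in params:
        raise ValueError("at least the initial or final picture must be selected")
    # A missing initial/final state falls back to the full picture to be edited.
    params.setdefault("initial_picture", picture_to_be_edited)
    params.setdefault("final_picture", picture_to_be_edited)
    params.setdefault("duration", DEFAULT_DURATION)
    params.setdefault("frame_rate", DEFAULT_FRAME_RATE)
    return params
```

For example, supplying only an initial-state picture yields a parameter set in which the picture to be edited serves as the final state and the duration and frame rate take their defaults.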
And step S25, generating configuration information of the animation effect of the picture to be edited according to the selected video parameters and the picture to be edited.
Specifically, the configuration information includes the initial-state picture of the video, the final-state picture, the video duration, the frame rate, and description information of the picture to be edited. As an example, when the configuration information is generated and not all video parameters have been selected, the missing video parameters may be filled in with their default values.
It is understood that, in order that other devices (such as a server) or other application software may generate a video of an animation effect according to the information, configuration information in a preset format, such as a json format, may be generated according to the selected video parameter and the picture to be edited.
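As a concrete illustration of configuration information in a preset json format, the snippet below builds one such object. The patent does not specify the schema, so every field name here is an assumption.

```python
import json

# Hypothetical json configuration for the animation effect (step S25).
# Field names and the example file name are assumptions; the coordinate and
# size values reuse the worked example from this embodiment.
config = {
    "picture": "product_detail.png",   # description of the picture to be edited
    "initial_picture": {"x": 0, "y": 74, "width": 498, "height": 498},
    "final_picture":   {"x": 120, "y": 58, "width": 223, "height": 223},
    "duration": 2,                     # video duration in seconds
    "frame_rate": 60,                  # frames per second
}
config_json = json.dumps(config)
```

A server or other application software could parse `config_json` and generate the video from it without any knowledge of the editing interface.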
The animation effect is the Ken Burns effect, which refers to a dynamic visual effect produced by displaying a static image with zooming, panning, transparency changes and the like.
And step S26, generating a video with animation effect corresponding to the picture to be edited according to the configuration information.
Specifically, after the configuration information is obtained, the picture to be edited may be decomposed into N frames of pictures according to the configuration information, and the N frames of pictures may then be synthesized into a video with the animation effect. The value N is determined by the video duration and the frame rate, i.e., N = video duration × frame rate. It is understood that when the calculated value of N is a decimal, it may be rounded.
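The frame-count rule is a one-liner; the sketch below applies the rounding mentioned above (the function name is illustrative).

```python
# N = video duration x frame rate, rounded when the product is not an integer.
def frame_count(duration_s, frame_rate):
    """Number of frames N to decompose the picture into."""
    return round(duration_s * frame_rate)
```

With the example values from this embodiment (duration 2 s, frame rate 60), N is 120.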
In an exemplary implementation manner, referring to fig. 7, the generating a video of an animation effect corresponding to the picture to be edited according to the configuration information includes:
step S80, decomposing the picture to be edited into N frames of pictures according to the configuration information, where N = the video duration × the frame rate;
and step S81, synthesizing the N frames of pictures into the video with the animation effect.
Specifically, when synthesizing the video with the animation effect, the video may be synthesized locally by the terminal device, that is, the terminal device decomposes the picture to be edited into N frames of pictures according to the configuration information, and then synthesizes the N frames of pictures into a video through a video synthesis tool, where the video synthesis tool may be FFmpeg. FFmpeg is a set of open-source computer programs that can be used to record and convert digital audio and video, and to turn them into streams. It includes the leading audio/video codec library libavcodec.
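One plausible way the terminal device could invoke FFmpeg for this synthesis step is sketched below. The patent only names FFmpeg as the tool; the specific flags, the frame-file naming pattern, and the output codec are assumptions, not the patented implementation.

```python
# Hypothetical FFmpeg invocation for synthesizing the N decomposed frames
# into a video. Flags and file names are assumptions.
def build_ffmpeg_cmd(frame_pattern, frame_rate, out_path):
    """Build (without running) an ffmpeg command that stitches numbered
    frame images into a video at the configured frame rate."""
    return [
        "ffmpeg",
        "-framerate", str(frame_rate),  # frame rate from the configuration
        "-i", frame_pattern,            # e.g. frame_%04d.png, the N frames
        "-c:v", "libx264",              # a common H.264 encoder choice
        "-pix_fmt", "yuv420p",          # widely compatible pixel format
        out_path,
    ]

cmd = build_ffmpeg_cmd("frame_%04d.png", 60, "animation.mp4")
```

The command list could then be executed with `subprocess.run(cmd)` on a machine where FFmpeg is installed.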
In an exemplary embodiment, in order to increase the video composition rate, referring to fig. 8, the generating a video with an animation effect corresponding to the picture to be edited according to the configuration information includes:
step S90, uploading the configuration information to a server, so that the server decomposes the picture to be edited into N frames of pictures according to the configuration information, and synthesizes the N frames of pictures into the video with the animation effect, where N is the video duration and the frame rate.
And step S91, receiving the video of the animation effect returned by the server.
Specifically, when synthesizing a video, the video synthesis may be performed in a server, that is, the terminal device uploads configuration information to the server, then the server decomposes the picture to be edited into N frames of pictures according to the received configuration information, and then synthesizes the N frames of pictures into a video. It is understood that the method of composing the video at the server may be the same as the method of composing the video at the terminal device.
In an exemplary embodiment, referring to fig. 9, the decomposing the picture to be edited into N frames of pictures according to the configuration information includes:
and step S100, calculating the moving pixels of the next frame picture relative to the previous frame picture in the N frame pictures according to the coordinate information of the initial state picture and the final state picture in the configuration information and the frame number of the video.
As an example, the pixels by which the next frame moves relative to the previous frame are the pixels moved on the X axis and the Y axis, respectively. For example, if the coordinate information of the initial-state picture and the final-state picture is (0, 74) and (120, 58), respectively, and N is 100, the pixels by which each subsequent frame shifts on the X axis relative to the previous frame are: (120 - 0)/(100 - 1) = 120/99, and the pixels shifted on the Y axis are: (58 - 74)/(100 - 1) = -16/99. It should be noted that the sign of the shifted pixel value in this embodiment merely indicates the direction in which the next frame moves relative to the previous frame; for example, on the X axis, "+" may represent a rightward shift and "-" a leftward shift, and similarly, on the Y axis, "+" may represent a downward shift and "-" an upward shift.
In the present embodiment, the pixel moved by the next frame picture relative to the previous frame picture may be determined to be (120/99, -16/99).
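The per-frame shift computed in step S100 can be written directly from the worked example above (the function name is illustrative; the linear change is the one this embodiment assumes).

```python
# Per-frame shift of each subsequent frame relative to the previous one,
# over N frames with a linear change (step S100).
def per_frame_shift(initial_xy, final_xy, n_frames):
    """Return (dx, dy): pixels moved on the X and Y axes per frame."""
    dx = (final_xy[0] - initial_xy[0]) / (n_frames - 1)
    dy = (final_xy[1] - initial_xy[1]) / (n_frames - 1)
    return dx, dy

# The example from this embodiment: (0, 74) -> (120, 58) over N = 100 frames.
dx, dy = per_frame_shift((0, 74), (120, 58), 100)   # dx = 120/99, dy = -16/99
```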
Step S101, calculating the scaling of the next frame picture relative to the previous frame picture in the N frame pictures according to the first size information, the second size information and the frame number of the video in the configuration information.
As an example, the scaling of the next frame relative to the previous frame is the scaling in terms of area. Since the size information of the frame is the size information of the framed picture, the size of the initial-state picture can be obtained from the first size information and the size of the final-state picture from the second size information. For example, if the first size information and the second size information are (498, 498) and (223, 223), respectively, the area of the initial-state picture is 498 × 498 = 248004, the area of the final-state picture is 223 × 223 = 49729, and the scaling of the next frame relative to the previous frame is 49729/(248004 × (100 - 1)) = 49729/24552396.
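The scaling arithmetic from this example can be sketched as follows. It mirrors the formula used in step S101 of this embodiment; the function name is illustrative.

```python
# Per-frame area scaling from the example above:
# final area / (initial area x (N - 1)), for square framing boxes.
def per_frame_scaling(initial_side, final_side, n_frames):
    initial_area = initial_side * initial_side
    final_area = final_side * final_side
    return final_area / (initial_area * (n_frames - 1))

# Example values from Fig. 5: sides 498 and 223, N = 100 frames.
s = per_frame_scaling(498, 223, 100)   # 49729 / 24552396
```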
It should be noted that the pixel shift and scaling calculated above assume that each frame changes linearly relative to the previous frame. It can be understood that, in other embodiments of the present application, the change from one frame to the next need not be linear; the specific change values can be calculated with reference to the method above and are not detailed in this embodiment.
Step S102, determining a picture to be interpolated corresponding to each frame of pictures from a first frame of picture to an Nth frame of picture in the N frames of pictures according to the pixels, the scaling and the picture to be edited, wherein the initial-state picture is used as the picture to be interpolated of the first frame of picture, and the final-state picture is used as the picture to be interpolated of the Nth frame of picture.
Specifically, a picture to be interpolated is an original picture on which interpolation processing needs to be performed.
In this embodiment, the initial-state picture is taken as the original picture of the first of the N frames; the picture obtained by moving the initial-state picture by 1 × the pixels and scaling it by 1 × the scaling is taken as the original picture of the second frame; similarly, the picture obtained by moving the initial-state picture by 2 × the pixels and scaling it by 2 × the scaling is taken as the original picture of the third frame; and the picture obtained by moving the initial-state picture by (N - 1) × the pixels and scaling it by (N - 1) × the scaling is taken as the original picture of the Nth frame.
As an example, assuming that the coordinates of the top-left vertex of the initial-state picture are (12, 25), the size of the initial-state picture is (10 × 10), the per-frame pixel shift is (2, 2), and the per-frame side scaling factor is 0.2, the coordinates of the top-left vertex of the original picture of the second frame are (12 + 2, 25 + 2), i.e., (14, 27), and its size is (10 × 0.2, 10 × 0.2), i.e., (2, 2). That is, the original picture of the second frame is the picture whose top-left vertex is at (14, 27) and whose size is (2, 2).
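The arithmetic in the example above is terse, so the sketch below adopts one consistent reading of steps S100 to S102: the crop window for frame k interpolates linearly between the initial- and final-state windows, so that frame 1 is the initial-state picture and frame N the final-state picture as step S102 requires. The function name and this reading are assumptions, not the patent's exact formula.

```python
# One consistent reading of the per-frame window: linear interpolation of
# (x, y, width, height) between the initial- and final-state windows.
def window_for_frame(initial, final, n_frames, k):
    """initial/final: (x, y, width, height); k is 1-based frame index."""
    t = (k - 1) / (n_frames - 1)
    return tuple(a + t * (b - a) for a, b in zip(initial, final))

# With the coordinates from Fig. 5, frame 1 reproduces the initial-state
# window and frame N the final-state window.
first = window_for_frame((0, 74, 498, 498), (120, 58, 223, 223), 100, 1)
last = window_for_frame((0, 74, 498, 498), (120, 58, 223, 223), 100, 100)
```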
And step S103, performing interpolation processing on each picture to be interpolated in turn by using a preset interpolation algorithm to obtain the N frames of pictures.
Specifically, the interpolation algorithm may be nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, or the like, and is not limited in this embodiment.
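As a minimal illustration of the first algorithm listed, the sketch below resamples a picture with nearest-neighbor interpolation, treating the picture as a plain list of pixel rows (a toy representation; real pipelines operate on image buffers).

```python
# Minimal nearest-neighbour resampling: each output pixel copies the nearest
# source pixel. This is a toy sketch, not the patent's implementation.
def nearest_neighbor_resize(pixels, out_w, out_h):
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[int(y * in_h / out_h)][int(x * in_w / out_w)]
         for x in range(out_w)]
        for y in range(out_h)
    ]

# Upscaling a 2x2 picture to 4x4 repeats each source pixel in a 2x2 block.
small = [[1, 2],
         [3, 4]]
big = nearest_neighbor_resize(small, 4, 4)
```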
In a specific embodiment, the terminal device may perform the interpolation processing on each picture to be interpolated through FFmpeg to obtain the N frames of pictures.
It should be noted that, in this embodiment, the size of the obtained N-frame picture is the same as the size of the picture to be edited.
It can be understood that, when the frame size of the finally synthesized video differs from the size of the picture to be edited, after the N frames of pictures are obtained they further need to be scaled to the required size. Of course, in another embodiment of the present application, pictures of the required size may also be output directly during the interpolation processing.
In the embodiment of the application, a picture editing interface is provided for selecting a picture to be edited; the selected picture to be edited is acquired and displayed in the display area of the picture editing interface; configuration information of the animation effect of the picture to be edited is generated according to the selected video parameters and the picture to be edited; and the video with the animation effect corresponding to the picture to be edited is generated according to the configuration information. By providing a picture editing page, the picture to be edited can be turned into a video with an animation effect in a visual manner, so that users can make videos according to their own requirements without any video-production skills. In addition, since the video is generated from a single picture, a video of commodity details can be obtained more conveniently.
In an exemplary embodiment, when the video needs to be provided with audio, the method for generating the video based on the picture further includes:
and acquiring the audio corresponding to the video.
Incorporating the audio into the video.
Specifically, when adding a picture to be edited to the picture editing interface, the user may also add audio corresponding to the video to the picture editing interface. The user can also add audio to the picture editing interface after the video is generated.
After acquiring the audio, the terminal device may incorporate the audio into the video through FFmpeg.
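One plausible FFmpeg command line for this audio-merging step is sketched below. The patent only states that FFmpeg is used; the specific flags and file names are assumptions.

```python
# Hypothetical FFmpeg invocation for merging audio into the generated video.
# Flags and file names are assumptions.
def build_merge_cmd(video_path, audio_path, out_path):
    """Build (without running) an ffmpeg command muxing an audio track
    into the video."""
    return [
        "ffmpeg",
        "-i", video_path,   # the generated animation video
        "-i", audio_path,   # the audio added on the picture editing interface
        "-c:v", "copy",     # keep the video stream as-is
        "-c:a", "aac",      # a common audio codec choice
        "-shortest",        # stop at the shorter of the two streams
        out_path,
    ]

cmd = build_merge_cmd("animation.mp4", "music.mp3", "animation_with_audio.mp4")
```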
In this embodiment, by merging the audio into the video, the obtained video can include the audio, so as to improve the user experience of watching the video by the user.
Fig. 10 schematically shows a flow chart of a picture-based video generation method according to another embodiment of the present application. The video generation method of the embodiment is applied to a server, wherein the server may be a rack server, a blade server, a tower server, or a cabinet server. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed. As can be seen from the figure, the method for generating a video based on a picture provided in this embodiment includes:
step S110, obtaining a picture to be edited and configuration information of an animation effect generated based on the picture to be edited, wherein the configuration information comprises coordinate information of a primary picture selected based on the picture to be edited, first size information of a frame used for framing the primary picture, coordinate information of a final picture selected based on the picture to be edited, second size information of a frame used for framing the final picture, video duration information and a frame rate.
Specifically, the terminal device may upload the picture to be edited and the configuration information to the server, so that the server may generate a video according to the information.
Step S111, calculating the pixels by which the next frame moves relative to the previous frame among N frames of pictures according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the frame number of the video, where N = the video duration × the frame rate;

Step S112, calculating the scaling of the next frame relative to the previous frame among the N frames of pictures according to the first size information, the second size information and the frame number of the video in the configuration information;

Step S113, determining a picture to be interpolated corresponding to each of the first frame to the Nth frame of the N frames according to the pixels, the scaling and the picture to be edited, where the initial-state picture is used as the picture to be interpolated of the first frame and the final-state picture is used as the picture to be interpolated of the Nth frame;

Step S114, performing interpolation processing on each picture to be interpolated in turn by using a preset interpolation algorithm to obtain the N frames of pictures;

And step S115, synthesizing the N frames of pictures into the video with the animation effect.

Specifically, steps S111 to S115 are similar to steps S100 to S103 and step S81, and are not described in detail in this embodiment.
In the embodiment, the video is generated according to the picture to be edited and the configuration information at the server side, so that the video generation efficiency can be improved.
In an exemplary embodiment, when the video needs to be provided with audio, the method for generating the video based on the picture further includes:
and acquiring the audio corresponding to the video.
Incorporating the audio into the video.
Specifically, when adding a picture to be edited to the picture editing interface, the user may also add audio corresponding to the video to the picture editing interface. The user can also add audio to the picture editing interface after the video is generated.
After acquiring the audio, the terminal device may incorporate the audio into the video through FFmpeg.
In this embodiment, by merging the audio into the video, the obtained video can include the audio, so as to improve the user experience of watching the video by the user.
Fig. 11 is a block diagram of an embodiment of a picture-based video generating apparatus 120 according to the present invention. The video generating apparatus 120 may be applied to a terminal device.
In this embodiment, the picture-based video generation apparatus 120 includes a series of computer program instructions stored on a memory, and when the computer program instructions are executed by a processor, the picture-based video generation function of the embodiments of the present application can be realized. In some embodiments, picture-based video generation apparatus 120 may be divided into one or more modules based on the particular operations implemented by the portions of the computer program instructions. For example, in fig. 11, the picture-based video generating apparatus 120 may be divided into a providing module 121, a displaying module 122, an obtaining module 123, a first generating module 124, and a second generating module 125. Wherein:
a providing module 121, configured to provide a picture editing interface, where the picture editing interface is used to select a picture to be edited;
the display module 122 is configured to obtain the selected picture to be edited and display the picture to be edited in a display area of the picture editing interface;
the obtaining module 123 is configured to obtain a video parameter selected based on a picture to be edited, where the video parameter includes at least one of an initial picture, a final picture, a video duration, and a video frame rate;
a first generating module 124, configured to generate configuration information of an animation effect of the picture to be edited according to the selected video parameter and the picture to be edited;
and a second generating module 125, configured to generate, according to the configuration information, a video with an animation effect corresponding to the picture to be edited.
In an exemplary embodiment, the display area includes a first display area and a second display area, the obtaining module 123 is further configured to obtain a first framing operation on the picture to be edited, and display, in the first display area, first position information of the first picture framed by the first framing operation based on the first framing operation, where the first position information includes coordinate information of the initial picture and first size information of a frame used for framing the initial picture; acquiring a first setting instruction for setting the first picture as an initial picture, displaying a copied picture of the picture to be edited in the second display area based on the first setting instruction, and selecting the initial picture from a frame on the copied picture.
In an exemplary embodiment, the obtaining module 123 is further configured to obtain a second frame selection operation on the picture to be edited, and display, in the first display area, second position information of a second picture selected by the second frame selection operation based on the second frame selection operation, where the second position information includes coordinate information of the last picture and second size information of a frame used for framing the last picture; and acquiring a second setting instruction for setting the second picture as a final picture, and selecting the final picture from the frame on the copied picture based on the second setting instruction.
In an exemplary embodiment, the second generating module 125 is further configured to decompose the picture to be edited into N frames of pictures according to the configuration information, where N is the video duration and the frame rate; and synthesizing the N frames of pictures into the video with the animation effect.
In an exemplary embodiment, the second generating module 125 is further configured to upload the configuration information to a server, so that the server decomposes the picture to be edited into N frames of pictures according to the configuration information and synthesizes the N frames of pictures into the video with the animation effect, where N = the video duration × the frame rate; and to receive the video with the animation effect returned by the server.
In an exemplary embodiment, the second generating module 125 is further configured to calculate, according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the frame number of the video, the pixels by which the next frame moves relative to the previous frame among the N frames of pictures; calculate, according to the first size information, the second size information and the frame number of the video in the configuration information, the scaling of the next frame relative to the previous frame among the N frames of pictures; determine, according to the pixels, the scaling and the picture to be edited, a picture to be interpolated corresponding to each of the first frame to the Nth frame of the N frames, where the initial-state picture is used as the picture to be interpolated of the first frame and the final-state picture is used as the picture to be interpolated of the Nth frame; and perform interpolation processing on each picture to be interpolated in turn by using a preset interpolation algorithm to obtain the N frames of pictures.
In an exemplary embodiment, the picture-based video generating apparatus 120 further includes an obtaining module and a merging module.
The acquisition module is used for acquiring the audio corresponding to the video.
The merging module is used for merging the audio into the video.
In the embodiment of the application, a picture editing interface is provided and used for selecting a picture to be edited; acquiring a selected picture to be edited, and displaying the picture to be edited in a display area of the picture editing interface; generating configuration information of the animation effect of the picture to be edited according to the selected video parameters and the picture to be edited; and generating the video with the animation effect corresponding to the picture to be edited according to the configuration information. In the embodiment of the application, a picture editing page is provided, so that the picture to be edited can be generated into the video with the animation effect in a visual mode, all videos can be made according to the requirements of the users, and video making skills are not needed. In addition, in the embodiment, the video is generated in a single picture mode, so that the video of the commodity details can be obtained more conveniently.
Fig. 12 is a block diagram of an embodiment of a picture-based video generating apparatus 130 according to the present invention. The video generation apparatus 130 is applied to a server.
In this embodiment, the picture-based video generation apparatus 130 includes a series of computer program instructions stored on a memory, and when the computer program instructions are executed by a processor, the picture-based video generation function of the embodiments of the present application can be realized. In some embodiments, picture-based video generation apparatus 130 may be divided into one or more modules based on the particular operations implemented by the portions of the computer program instructions. For example, in fig. 12, the picture-based video generating apparatus 130 may be divided into an acquisition module 131, a first calculation module 132, a second calculation module 133, a determination module 134, a generation module 135, and a composition module 136. Wherein:
the obtaining module 131 is configured to obtain a picture to be edited and configuration information of an animation effect generated based on the picture to be edited, where the configuration information includes coordinate information of a primary picture framed and selected based on the picture to be edited, first size information of a frame used for framing the primary picture, coordinate information of a final picture framed and selected based on the picture to be edited, second size information of a frame used for framing the final picture, video duration information, and a frame rate;
a first calculating module 132, configured to calculate, according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the frame number of the video, the pixels by which the next frame moves relative to the previous frame among N frames of pictures, where N = the video duration × the frame rate;
a second calculating module 133, configured to calculate, according to the first size information and the second size information in the configuration information and the frame number of the video, a scaling ratio of a next frame picture in the N frame pictures with respect to a previous frame picture;
a determining module 134, configured to determine, according to the pixel, the scaling and the to-be-edited picture, a to-be-interpolated picture corresponding to each frame of pictures from a first frame of picture to an nth frame of picture in the N frames of pictures, where the initial-state picture is used as the to-be-interpolated picture of the first frame of picture, and the final-state picture is used as the to-be-interpolated picture of the nth frame of picture;
a generating module 135, configured to perform interpolation processing on each picture to be interpolated in turn by using a preset interpolation algorithm, so as to obtain the N frames of pictures;
and a synthesizing module 136, configured to synthesize the N frames of pictures into a video with the animation effect.
In an exemplary embodiment, the picture-based video generating apparatus 130 further includes an obtaining module and a merging module.
The acquisition module is used for acquiring the audio corresponding to the video.
The merging module is used for merging the audio into the video.
In the embodiment, the video is generated according to the picture to be edited and the configuration information at the server side, so that the video generation efficiency can be improved.
Fig. 13 schematically shows a hardware architecture diagram of a computer device 20 adapted to implement the picture-based video generation method according to an embodiment of the present application. In the present embodiment, the computer device 20 is a device capable of automatically performing numerical calculation and/or information processing in accordance with instructions that are preset or stored in advance. For example, the computer device may be a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of a plurality of servers). As shown in fig. 13, the computer device 20 at least includes, but is not limited to: a memory 140, a processor 141, and a network interface 143, which may be communicatively linked to each other via a system bus. Wherein:
the memory 140 includes at least one type of computer-readable storage medium, which may be volatile or non-volatile, and particularly, includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 140 may be an internal storage module of the computer device 20, such as a hard disk or a memory of the computer device 20. In other embodiments, the memory 140 may also be an external storage device of the computer device 20, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the computer device 20. Of course, memory 140 may also include both internal and external memory modules of computer device 20. In this embodiment, the memory 140 is generally used for storing an operating system installed in the computer device 20 and various types of application software, such as program codes of a picture-based video generation method. In addition, the memory 140 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 141 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip in some embodiments. The processor 141 is generally configured to control the overall operation of the computer device 20, for example, to perform control and processing related to the data interaction or communication of the computer device 20. In this embodiment, the processor 141 is configured to execute the program codes stored in the memory 140 or to process data.
The network interface 143 may comprise a wireless network interface or a wired network interface, and is typically used to establish a communication link between the computer device 20 and other computer devices. For example, the network interface 143 is used to connect the computer device 20 with an external terminal via a network, and to establish a data transmission channel and a communication link between the computer device 20 and the external terminal. The network may be a wireless or wired network such as an Intranet, the Internet, a Global System for Mobile communications (GSM) network, Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, or Wi-Fi.
It is noted that Fig. 13 only shows a computer device having the components 140-143, but it should be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
In this embodiment, the picture-based video generation method stored in the memory 140 may be divided into one or more program modules, which are executed by one or more processors (the processor 141 in this embodiment) to implement the present application.
The present application further provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the steps of the picture-based video generation method in the embodiment.
In this embodiment, the computer-readable storage medium includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the computer-readable storage medium may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. In other embodiments, the computer-readable storage medium may be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the computer device. Of course, the computer-readable storage medium may also include both internal and external storage devices of the computer device. In this embodiment, the computer-readable storage medium is generally used for storing the operating system and the various types of application software installed in the computer device, for example, the program code of the picture-based video generation method in the embodiment. Further, the computer-readable storage medium may also be used to temporarily store various types of data that have been output or are to be output.
The above-described apparatus embodiments are merely illustrative. The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over at least two network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application. One of ordinary skill in the art can understand and implement the solution without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Those skilled in the art will also understand that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A method for generating a video based on pictures, comprising:
providing a picture editing interface, wherein the picture editing interface is used for selecting a picture to be edited;
acquiring a selected picture to be edited, and displaying the picture to be edited in a display area of the picture editing interface;
acquiring video parameters selected based on the picture to be edited, wherein the video parameters comprise at least one of an initial state picture, a final state picture, video duration and video frame rate;
generating configuration information of the animation effect of the picture to be edited according to the selected video parameters and the picture to be edited;
and generating the video with the animation effect corresponding to the picture to be edited according to the configuration information.
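Purely as an illustration (not part of the claims), the configuration information generated in claim 1 could be serialized in a shape like the following; every field name here is hypothetical, since the claims only require that the selected video parameters and the picture to be edited be recorded:

```json
{
  "picture": "input.jpg",
  "initial_state": { "x": 0,   "y": 0,  "width": 400, "height": 300 },
  "final_state":   { "x": 100, "y": 50, "width": 200, "height": 150 },
  "duration_seconds": 2,
  "frame_rate": 25
}
```

Under this reading, the two framed rectangles carry the coordinate and size information of the initial-state and final-state pictures, and the duration and frame rate together determine the number of frames N.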
2. The picture-based video generation method according to claim 1, wherein the display area comprises a first display area and a second display area, and the acquiring of the initial-state picture selected based on the picture to be edited comprises:
acquiring a first framing operation on the picture to be edited, and displaying, in the first display area based on the first framing operation, first position information of the first picture framed by the first framing operation, wherein the first position information comprises coordinate information of the initial-state picture and first size information of the frame used for framing the initial-state picture;
acquiring a first setting instruction for setting the first picture as the initial-state picture, displaying a copied picture of the picture to be edited in the second display area based on the first setting instruction, and framing the initial-state picture on the copied picture.
3. The picture-based video generation method according to claim 2, wherein the acquiring of the final-state picture selected based on the picture to be edited comprises:
acquiring a second framing operation on the picture to be edited, and displaying, in the first display area based on the second framing operation, second position information of the second picture framed by the second framing operation, wherein the second position information comprises coordinate information of the final-state picture and second size information of the frame used for framing the final-state picture;
and acquiring a second setting instruction for setting the second picture as the final-state picture, and framing the final-state picture on the copied picture based on the second setting instruction.
4. The picture-based video generation method according to claim 1, wherein the generating a video of an animation effect corresponding to the picture to be edited according to the configuration information includes:
decomposing the picture to be edited into N frames of pictures according to the configuration information, wherein N is the product of the video duration and the video frame rate;
and synthesizing the N frames of pictures into the video with the animation effect.
5. The picture-based video generation method according to claim 1, wherein the generating a video of an animation effect corresponding to the picture to be edited according to the configuration information includes:
uploading the configuration information to a server, so that the server decomposes the picture to be edited into N frames of pictures according to the configuration information and synthesizes the N frames of pictures into the video with the animation effect, wherein N is the product of the video duration and the video frame rate;
and receiving the video of the animation effect returned by the server.
6. The picture-based video generation method according to claim 3, wherein the decomposing of the picture to be edited into N frames of pictures according to the configuration information comprises:
calculating the number of pixels by which each next frame picture among the N frames of pictures moves relative to the previous frame picture, according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the number of frames of the video;
calculating the scaling of each next frame picture relative to the previous frame picture among the N frames of pictures, according to the first size information and the second size information in the configuration information and the number of frames of the video;
determining, according to the number of pixels, the scaling and the picture to be edited, a picture to be interpolated corresponding to each frame of pictures from the first frame picture to the Nth frame picture among the N frames of pictures, wherein the initial-state picture is used as the picture to be interpolated of the first frame picture, and the final-state picture is used as the picture to be interpolated of the Nth frame picture;
and sequentially performing interpolation processing on each picture to be interpolated by adopting a preset interpolation algorithm to obtain the N frames of pictures.
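The decomposition described in claims 4 and 6 can be sketched, for illustration only, as a linear interpolation of the framing rectangle between the initial-state and final-state pictures. All function and parameter names below are hypothetical; the specification does not prescribe this particular implementation:

```python
def frame_crop_boxes(start_box, end_box, duration_s, frame_rate):
    """Linearly interpolate the framing rectangle from the initial-state
    picture (start_box) to the final-state picture (end_box).

    Each box is (x, y, width, height): the coordinate information plus
    the size information of the framing frame.  One box is returned per
    frame, with N = duration * frame rate as in claims 4 and 5.
    """
    n = int(duration_s * frame_rate)  # total number of frames N
    if n < 2:
        return [start_box] * max(n, 1)
    boxes = []
    for i in range(n):
        t = i / (n - 1)  # 0.0 at the first frame, 1.0 at the Nth frame
        boxes.append(tuple(s + (e - s) * t
                           for s, e in zip(start_box, end_box)))
    return boxes

boxes = frame_crop_boxes((0, 0, 400, 300), (100, 50, 200, 150),
                         duration_s=2, frame_rate=5)
# The first box is the initial-state framing, the last the final-state one;
# cropping the picture to each box and resizing would yield the N frames.
```

Cropping the picture to be edited to each successive box (and resizing each crop to the output resolution) would produce the N frame pictures that are then synthesized into the video.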
7. The picture-based video generation method according to any one of claims 1 to 5, further comprising:
acquiring an audio corresponding to the video;
incorporating the audio into the video.
8. A picture-based video generation method, characterized by comprising:
acquiring a picture to be edited and configuration information of an animation effect generated based on the picture to be edited, wherein the configuration information comprises coordinate information of an initial-state picture selected based on the picture to be edited, first size information of a frame used for framing the initial-state picture, coordinate information of a final-state picture selected based on the picture to be edited, second size information of a frame used for framing the final-state picture, video duration information and a frame rate;
calculating the number of pixels by which each next frame picture among N frames of pictures moves relative to the previous frame picture, according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the number of frames of the video, wherein N is the product of the video duration and the frame rate;
calculating the scaling of each next frame picture relative to the previous frame picture among the N frames of pictures, according to the first size information and the second size information in the configuration information and the number of frames of the video;
determining, according to the number of pixels, the scaling and the picture to be edited, a picture to be interpolated corresponding to each frame of pictures from the first frame picture to the Nth frame picture among the N frames of pictures, wherein the initial-state picture is used as the picture to be interpolated of the first frame picture, and the final-state picture is used as the picture to be interpolated of the Nth frame picture;
sequentially performing interpolation processing on each picture to be interpolated by adopting a preset interpolation algorithm to obtain the N frames of pictures;
and synthesizing the N frames of pictures into the video with the animation effect.
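Claim 8 phrases the same decomposition in per-frame terms: the number of pixels each frame moves relative to the previous frame, and the scaling of each frame relative to the previous frame. The sketch below is one consistent reading, for illustration only; the names are hypothetical, and the geometric scale step (so that the zoom rate feels uniform) is an assumption, not the specification's prescription:

```python
import math

def per_frame_deltas(start_box, end_box, duration_s, frame_rate):
    """Boxes are (x, y, width, height): the coordinate information and
    size information of the framing frame from the configuration.
    Returns N plus the per-frame movement and scaling of claim 8."""
    n = int(duration_s * frame_rate)          # N = duration * frame rate
    steps = n - 1                             # transitions between the N frames
    dx = (end_box[0] - start_box[0]) / steps  # horizontal movement per frame (pixels)
    dy = (end_box[1] - start_box[1]) / steps  # vertical movement per frame (pixels)
    # Scaling of each frame relative to the previous one: applied N-1
    # times, it takes the first framing width to the last.
    scale = (end_box[2] / start_box[2]) ** (1.0 / steps)
    return n, dx, dy, scale

# Applying the deltas frame by frame reproduces the final-state framing:
n, dx, dy, scale = per_frame_deltas((0, 0, 400, 300), (100, 50, 200, 150), 2, 5)
x, y, w = 0.0, 0.0, 400.0
for _ in range(n - 1):
    x, y, w = x + dx, y + dy, w * scale
# x, y, w now match the final-state box (100, 50, 200) up to rounding.
```

Each intermediate (x, y, w) defines the picture to be interpolated for that frame; resampling those crops to the output resolution with the preset interpolation algorithm yields the N frames to be synthesized.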
9. The picture-based video generation method according to claim 8, further comprising:
acquiring an audio corresponding to the video;
incorporating the audio into the video.
10. A picture-based video generation apparatus, comprising:
a providing module, configured to provide a picture editing interface, wherein the picture editing interface is used for selecting a picture to be edited;
a display module, configured to acquire the selected picture to be edited and display the picture to be edited in a display area of the picture editing interface;
an acquiring module, configured to acquire video parameters selected based on the picture to be edited, wherein the video parameters comprise at least one of an initial-state picture, a final-state picture, a video duration and a video frame rate;
a first generation module, configured to generate configuration information of the animation effect of the picture to be edited according to the selected video parameters and the picture to be edited;
and a second generation module, configured to generate the video with the animation effect corresponding to the picture to be edited according to the configuration information.
11. A picture-based video generation apparatus, characterized in that the video generation apparatus comprises:
an acquisition module, configured to acquire a picture to be edited and configuration information of an animation effect generated based on the picture to be edited, wherein the configuration information comprises coordinate information of an initial-state picture selected based on the picture to be edited, first size information of a frame used for framing the initial-state picture, coordinate information of a final-state picture selected based on the picture to be edited, second size information of a frame used for framing the final-state picture, video duration information and a frame rate;
a first calculation module, configured to calculate, according to the coordinate information of the initial-state picture and the final-state picture in the configuration information and the number of frames of the video, the number of pixels by which each next frame picture among N frames of pictures moves relative to the previous frame picture, wherein N is the product of the video duration and the frame rate;
a second calculation module, configured to calculate the scaling of each next frame picture relative to the previous frame picture among the N frames of pictures, according to the first size information and the second size information in the configuration information and the number of frames of the video;
a determining module, configured to determine, according to the number of pixels, the scaling and the picture to be edited, a picture to be interpolated corresponding to each frame of pictures from the first frame picture to the Nth frame picture among the N frames of pictures, wherein the initial-state picture is used as the picture to be interpolated of the first frame picture, and the final-state picture is used as the picture to be interpolated of the Nth frame picture;
a generating module, configured to sequentially perform interpolation processing on each picture to be interpolated by adopting a preset interpolation algorithm to obtain the N frames of pictures;
and a synthesis module, configured to synthesize the N frames of pictures into the video with the animation effect.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the picture-based video generation method of any one of claims 1 to 7 or the steps of the picture-based video generation method of any one of claims 8 to 9 when executing the computer program.
13. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implementing the steps of the picture based video generation method of any of claims 1 to 7 or implementing the steps of the picture based video generation method of any of claims 8 to 9.
CN202110166271.4A 2021-02-04 2021-02-04 Video generation method and device based on pictures Pending CN112819927A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110166271.4A CN112819927A (en) 2021-02-04 2021-02-04 Video generation method and device based on pictures
PCT/CN2022/072854 WO2022166595A1 (en) 2021-02-04 2022-01-20 Video generation method and apparatus based on picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110166271.4A CN112819927A (en) 2021-02-04 2021-02-04 Video generation method and device based on pictures

Publications (1)

Publication Number Publication Date
CN112819927A 2021-05-18

Family

ID=75862061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110166271.4A Pending CN112819927A (en) 2021-02-04 2021-02-04 Video generation method and device based on pictures

Country Status (2)

Country Link
CN (1) CN112819927A (en)
WO (1) WO2022166595A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022166595A1 (en) * 2021-02-04 2022-08-11 上海哔哩哔哩科技有限公司 Video generation method and apparatus based on picture

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN116560550A (en) * 2023-05-11 2023-08-08 上海百秋新网商数字科技有限公司 Visual pre-configuration, configuration and video generation system and method

Citations (11)

Publication number Priority date Publication date Assignee Title
WO2000008853A1 (en) * 1998-08-04 2000-02-17 Flashpoint Technology, Inc. Interactive movie creation from one or more still images in a digital imaging device
US20050008343A1 (en) * 2003-04-30 2005-01-13 Frohlich David Mark Producing video and audio-photos from a static digital image
WO2006032209A1 (en) * 2004-09-22 2006-03-30 Yan Feng Dynamic logo generating and displaying method
CN101815197A (en) * 2009-02-23 2010-08-25 佳能株式会社 The control method of image display system, image display and image display
JP2013073572A (en) * 2011-09-29 2013-04-22 Dainippon Printing Co Ltd Advertisement printed matter moving image creation device
CN103176788A (en) * 2011-12-26 2013-06-26 中国移动通信集团福建有限公司 Method and device used for smooth transition of animation content of mobile phone desktop
CN104700444A (en) * 2015-03-10 2015-06-10 上海鸿利数码科技有限公司 Achievement method of picture animation
CN106649541A (en) * 2016-10-26 2017-05-10 广东小天才科技有限公司 Cartoon playing and generating method and device
CN107610207A (en) * 2017-10-23 2018-01-19 北京奇艺世纪科技有限公司 A kind of method, apparatus and system of generation GIF pictures
CN110163932A (en) * 2018-07-12 2019-08-23 腾讯数码(天津)有限公司 Image processing method, device, computer-readable medium and electronic equipment
CN110717962A (en) * 2019-10-18 2020-01-21 厦门美图之家科技有限公司 Dynamic photo generation method and device, photographing equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20170178685A1 (en) * 2015-12-22 2017-06-22 Le Holdings (Beijing) Co., Ltd. Method for intercepting video animation and electronic device
CN111246247A (en) * 2018-11-29 2020-06-05 阿里巴巴集团控股有限公司 Video generation method, device and equipment
CN110266971B (en) * 2019-05-31 2021-10-08 上海萌鱼网络科技有限公司 Short video making method and system
CN112019767A (en) * 2020-08-07 2020-12-01 北京奇艺世纪科技有限公司 Video generation method and device, computer equipment and storage medium
CN112819927A (en) * 2021-02-04 2021-05-18 上海哔哩哔哩科技有限公司 Video generation method and device based on pictures

Also Published As

Publication number Publication date
WO2022166595A1 (en) 2022-08-11

Similar Documents

Publication Publication Date Title
US10984177B2 (en) System and method providing responsive editing and viewing, integrating hierarchical fluid components and dynamic layout
WO2022166595A1 (en) Video generation method and apparatus based on picture
CN109656654B (en) Editing method of large-screen scene and computer-readable storage medium
CN110633436B (en) Visual and user-defined panoramic editing method, system, storage medium and equipment
US8347211B1 (en) Immersive multimedia views for items
CN111161392B (en) Video generation method and device and computer system
US20150286364A1 (en) Editing method of the three-dimensional shopping platform display interface for users
CN114708391B (en) Three-dimensional modeling method, three-dimensional modeling device, computer equipment and storage medium
CN109636885B (en) Sequential frame animation production method and system for H5 page
CN111951356B (en) Animation rendering method based on JSON data format
CN111651966A (en) Data report file generation method and device and electronic equipment
US20160111129A1 (en) Image edits propagation to underlying video sequence via dense motion fields
US9501812B2 (en) Map performance by dynamically reducing map detail
CN113705156A (en) Character processing method and device
CN114997105A (en) Design template, material generation method, computing device and storage medium
CN113282852A (en) Method and device for editing webpage
CN112418902A (en) Multimedia synthesis method and system based on webpage
EP3454207B1 (en) Dynamic preview generation in a product lifecycle management environment
EP2816521A1 (en) Editing method of the three-dimensional shopping platform display interface for users
CN112348924A (en) Image processing method, computing device and storage medium
CN112799745A (en) Page display control method and device
CN114067028A (en) Picture editing method and device
KR101430964B1 (en) Method for controlling display
CN113761830A (en) Data display method, device, system and storage medium
CN116700704A (en) Image processing method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination