CN112312161A - Method and device for generating video, electronic equipment and readable storage medium - Google Patents

Method and device for generating video, electronic equipment and readable storage medium

Info

Publication number
CN112312161A
Authority
CN
China
Prior art keywords
transition
picture
pictures
information
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010602915.5A
Other languages
Chinese (zh)
Inventor
周芳汝
安山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202010602915.5A
Publication of CN112312161A
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video

Abstract

The embodiment of the application discloses a method and a device for generating a video, an electronic device and a computer-readable storage medium, and relates to the field of image data processing. One embodiment of the method comprises: acquiring at least two original pictures and sequencing information of each original picture, wherein the sequencing information represents the display sequencing of the corresponding pictures in a target video; obtaining transition mode information between two original pictures that are adjacent in the sequence, and executing a corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures; and generating a target video according to the sequencing information and the corresponding transition pictures. This embodiment provides a transition picture calculation method based on the pixel dissolving operation; the calculation is simple and convenient, the amount of computation is small, the effect is good, and the overall efficiency is improved.

Description

Method and device for generating video, electronic equipment and readable storage medium
Technical Field
The embodiment of the application relates to the field of data processing, in particular to the field of image data processing.
Background
In order to enrich the ways in which users can make use of electronic photo albums and personal pictures, generating a dynamic video from multiple pictures is an important function.
Disclosure of Invention
The embodiment of the application provides a method and a device for generating a video, electronic equipment and a computer-readable storage medium.
In a first aspect, an embodiment of the present application provides a method for generating a video, including: acquiring at least two original pictures and sequencing information of each original picture, wherein the sequencing information represents the display sequencing of the corresponding pictures in a target video; obtaining transition mode information between two adjacent original pictures in sequence, and executing corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures; and generating a target video according to the sequencing information and the corresponding transition pictures.
In some embodiments, when the transition mode information indicates that the corresponding pixel dissolving operation is performed on the two target original pictures which are adjacent in the sequence, the performing the corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures includes: determining a first transition time period between two target original pictures; and aiming at each first transition moment in the first transition time period, calculating to obtain a current first transition picture corresponding to the current first transition moment according to the respective pixel information of the two target original pictures, the total duration of the first transition time period, the current first transition moment, the transition starting moment and the transition ending moment of the first transition time period.
In some embodiments, for each first transition time in the first transition time period, calculating to obtain a current first transition picture corresponding to the current first transition time according to respective pixel information of the two target original pictures, a total duration of the first transition time period, the current first transition time, a transition start time of the first transition time period, and a transition end time, including: calculating to obtain a current first transition picture corresponding to the current first transition moment by the following formula:
I = ((t2 - t) / (t2 - t1)) * I1 + ((t - t1) / (t2 - t1)) * I2
wherein t is the current first transition time, t1 is the transition start time of the first transition time period, t2 is the transition end time of the first transition time period, I1 is the pixel information of the first of the two target original pictures, I2 is the pixel information of the second of the two target original pictures, I1 is displayed in sequence before I2, and I is the current first transition picture.
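The formula above is a per-pixel linear cross-dissolve: at the transition start the frame equals the earlier picture, at the transition end it equals the later one. A minimal NumPy sketch (the function name and the use of 8-bit images are assumptions, not taken from the patent):

```python
import numpy as np

def dissolve_frame(I1, I2, t, t1, t2):
    """Pixel-dissolve two same-sized pictures at transition time t.

    Implements I = ((t2 - t)/(t2 - t1)) * I1 + ((t - t1)/(t2 - t1)) * I2,
    i.e. a per-pixel weighted sum whose weights vary linearly with t.
    """
    w2 = (t - t1) / (t2 - t1)   # weight of the later picture grows over time
    w1 = 1.0 - w2               # weight of the earlier picture shrinks
    blended = w1 * I1.astype(np.float64) + w2 * I2.astype(np.float64)
    return blended.astype(np.uint8)
```

At t = t1 the result reproduces I1 exactly, and at t = t2 it reproduces I2, so the dissolve frames join seamlessly with the original pictures on both sides.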
In some embodiments, when the transition mode information indicates that the corresponding pixel dissolving operation is performed on one target original picture and the preset background picture, performing the corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures, includes: determining a second transition time period between a target original picture and a preset background picture; and aiming at each second transition moment in the second transition time period, calculating to obtain a current second transition picture corresponding to the current second transition moment according to the pixel information of a target original picture and a preset background picture, the total duration of the second transition time period, the current second transition moment and the transition starting moment of the second transition time period.
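The second transition is described only by its inputs (a target original picture, the preset background picture, the total duration, the current time, and the start time), so the exact formula is not fixed in this passage. A plausible sketch, assuming the background's weight grows linearly over the transition (mirroring the two-picture dissolve), is:

```python
import numpy as np

def fade_to_background(pic, bg, t, t1, total):
    """Blend an original picture toward a preset background picture.

    Assumed form: the background weight grows from 0 at the transition
    start t1 to 1 after `total` time units. The patent lists the inputs
    but not the formula, so this is an illustrative reconstruction.
    """
    alpha = min(max((t - t1) / total, 0.0), 1.0)  # clamp to [0, 1]
    blended = (1.0 - alpha) * pic.astype(np.float64) + alpha * bg.astype(np.float64)
    return blended.astype(np.uint8)
```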
In some embodiments, the method for generating a video further comprises: determining display mode information of each original picture, and processing the corresponding original picture according to the display mode information; and generating a target video according to the sequencing information and the corresponding transition picture, wherein the target video comprises: and connecting the processed original picture and the corresponding transition picture in series according to the sequencing information to generate a target video.
In some embodiments, the presentation information includes information for performing a transform presentation on the picture; and processing the original picture according to the display mode information, comprising: and performing corresponding transformation operation on the original picture according to the display mode information, and processing the black edges in the transformed picture by using a mask to obtain the processed picture.
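The mask-based black-edge handling can be illustrated with a minimal sketch. A simple horizontal shift stands in for the patent's unspecified transform, and all names are illustrative: the idea is that any geometric transform exposes pixels with no source data, and a validity mask lets those pixels be filled from a background instead of staying black.

```python
import numpy as np

def shift_with_mask(pic, bg, dx):
    """Translate a picture horizontally and mask the exposed border.

    The transform writes valid pixels into `out` and records them in
    `mask`; pixels the transform never touched (the would-be black edge)
    are then taken from the background picture `bg`.
    """
    h, w = pic.shape[:2]
    out = np.zeros_like(pic)
    mask = np.zeros((h, w), dtype=bool)
    if dx >= 0:
        out[:, dx:] = pic[:, :w - dx]
        mask[:, dx:] = True
    else:
        out[:, :w + dx] = pic[:, -dx:]
        mask[:, :w + dx] = True
    return np.where(mask[..., None], out, bg)
```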
In some embodiments, after generating the target video, the method further comprises: and correspondingly adding a preset film leader and a preset film trailer at the beginning and the end of the target video respectively.
In a second aspect, an embodiment of the present application provides an apparatus for generating a video, including: the device comprises an original picture and sequencing information acquisition unit, a sequencing information acquisition unit and a sequencing information acquisition unit, wherein the original picture and sequencing information acquisition unit is configured to acquire at least two original pictures and sequencing information of each original picture, and the sequencing information represents display sequencing of the corresponding pictures in a target video; the transition mode information acquisition and pixel dissolution operation execution unit is configured to acquire transition mode information between two sequenced adjacent original pictures and execute corresponding pixel dissolution operation on the two corresponding pictures according to the transition mode information so as to determine a transition picture displayed between the two corresponding pictures; and the target video generation unit is configured to generate a target video according to the sequencing information and the corresponding transition picture.
In some embodiments, when the transition mode information indicates that the corresponding pixel dissolving operation is performed on the two target original pictures which are adjacent to each other in the sorting order, the transition mode information obtaining and pixel dissolving operation performing unit includes: a first transition time period determination subunit configured to determine a first transition time period between two target original pictures; and the first transition picture calculating subunit at each moment is configured to calculate, according to the respective pixel information of the two target original pictures, the total duration of the first transition time period, the current first transition moment, the transition starting moment and the transition ending moment of the first transition time period, and each first transition moment in the first transition time period, to obtain a current first transition picture corresponding to the current first transition moment.
In some embodiments, the first transition picture calculation subunit at each time instant is further configured to: calculating to obtain a current first transition picture corresponding to the current first transition moment by the following formula:
I = ((t2 - t) / (t2 - t1)) * I1 + ((t - t1) / (t2 - t1)) * I2
wherein t is the current first transition time, t1 is the transition start time of the first transition time period, t2 is the transition end time of the first transition time period, I1 is the pixel information of the first of the two target original pictures, I2 is the pixel information of the second of the two target original pictures, I1 is displayed in sequence before I2, and I is the current first transition picture.
In some embodiments, when the transition mode information indicates that the corresponding pixel dissolving operation is performed on one target original picture and the preset background picture, the transition mode information obtaining and pixel dissolving operation performing unit is further configured to: determining a second transition time period between a target original picture and a preset background picture; and aiming at each second transition moment in the second transition time period, calculating to obtain a current second transition picture corresponding to the current second transition moment according to the pixel information of a target original picture and a preset background picture, the total duration of the second transition time period, the current second transition moment and the transition starting moment of the second transition time period.
In some embodiments, the means for generating a video further comprises: the original picture display processing unit is configured to determine display mode information of each original picture and process the corresponding original picture according to the display mode information; and the target video generation unit is further configured to: and connecting the processed original picture and the corresponding transition picture in series according to the sequencing information to generate a target video.
In some embodiments, the presentation information includes information for performing a transform presentation on the picture; and the original picture presentation processing unit is further configured to: and performing corresponding transformation operation on the original picture according to the display mode information, and processing the black edges in the transformed picture by using a mask to obtain the processed picture.
In some embodiments, the means for generating a video further comprises: and the title and tail adding unit is configured to correspondingly add a preset title and a preset tail at the beginning and the end of the target video respectively after the target video is generated.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for generating video as described in any implementation manner of the first aspect when executed.
In a fourth aspect, the present application provides a non-transitory computer-readable storage medium storing computer instructions for enabling a computer to implement the method for generating video as described in any implementation manner of the first aspect when executed.
According to the method, the device, the electronic equipment and the computer-readable storage medium for generating the video, firstly, at least two original pictures and sequencing information of each original picture are obtained, wherein the sequencing information represents the display sequencing of the corresponding pictures in a target video; then, transition mode information between two original pictures that are adjacent in the sequence is obtained, and a corresponding pixel dissolving operation is executed on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures; and finally, a target video is generated according to the sequencing information and the corresponding transition pictures. This technical scheme provides a transition picture calculation method based on the pixel dissolving operation; the calculation is simple and convenient, the amount of computation is small, the effect is good, and the overall efficiency is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating video in accordance with the present application;
FIG. 3 is a flow chart of a method of generating a transition picture in the embodiment shown in FIG. 2;
FIG. 4 is a flow chart of another method of generating a transition picture in the embodiment shown in FIG. 2;
FIG. 5 is a flow diagram of another embodiment of a method for generating video in accordance with the present application;
FIG. 6 is a schematic block diagram illustrating one embodiment of an apparatus for generating video in accordance with the present application;
fig. 7 is a block diagram of an electronic device suitable for use to implement the method for generating video of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods, apparatuses, electronic devices and computer-readable storage media for generating video provided herein may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 and the server 105 may be installed with various applications for implementing information communication between the two, such as a picture transmission application, an electronic photo album application, an instant messaging application, and the like.
The terminal apparatuses 101, 102, 103 and the server 105 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like; when the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple software or software modules (for example, raw pictures stored by a user and sorting information input by the user may be acquired), or may be implemented as a single software or software module, which is not limited in this respect. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server; when the server is software, the server may be implemented as a plurality of software or software modules (for example, to provide a picture-based motion video generation service), or may be implemented as a single software or software module, which is not limited herein.
The server 105 may provide various services through various built-in applications, taking an electronic album type application that may provide a picture-based dynamic video generation service as an example, the server 105 may implement the following effects when running the electronic album type application: firstly, acquiring at least two original pictures and sequencing information of each original picture from terminal equipment 101, 102 and 103 through a network 104, and sequencing transition mode information between two adjacent original pictures, wherein the sequencing information represents display sequencing of the corresponding pictures in a target video; then, the server 105 performs corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures; finally, the server 105 generates a target video according to the sorting information and the corresponding transition pictures. Namely, the server 105 generates a dynamic video from at least two original pictures under the guidance of some relevant parameters through the above processing steps and outputs the dynamic video as a result.
Note that the two original pictures, the sorting information, and the transition mode information may be acquired from the terminal apparatuses 101, 102, and 103 through the network 104, or may be stored locally in the server 105 in advance in various ways. Thus, when the server 105 detects that such data is already stored locally (e.g., a pending dynamic video generation task remaining before starting processing), it may choose to retrieve such data directly from locally, in which case the exemplary system architecture 100 may also not include the terminal devices 101, 102, 103 and the network 104.
Since the generation of the dynamic video based on the original picture needs to occupy more computation resources and stronger computation capability, the method for generating the video provided in the following embodiments of the present application is generally executed by the server 105 having stronger computation capability and more computation resources, and accordingly, the apparatus for generating the video is generally disposed in the server 105. However, it should be noted that, when the terminal devices 101, 102, and 103 also have computing capabilities and computing resources meeting the requirements, the terminal devices 101, 102, and 103 may also complete the above-mentioned operations that are originally delivered to the server 105 through the electronic album application installed thereon, and then output the same result as the server 105. Particularly, when there are a plurality of types of terminal devices having different computing capabilities at the same time, but when it is determined that the terminal device in which the electronic album application is located has a strong computing capability and a large amount of computing resources are left, the terminal device may execute the above-described computation, thereby appropriately reducing the computing load on the server 105, and accordingly, the device for generating a video may be provided in the terminal devices 101, 102, and 103. In such a case, the exemplary system architecture 100 may also not include the server 105 and the network 104.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continuing reference to FIG. 2, an implementation flow 200 of one embodiment of a method for generating video according to the present application is shown, comprising the steps of:
step 201: acquiring at least two original pictures and sequencing information of each original picture;
this step is intended to acquire, by an execution subject (e.g., the server 105 shown in fig. 1) of the method for generating a video, at least two original pictures and ordering information of each original picture, where the ordering information represents a display ordering of the corresponding picture in the target video.
Specifically, the sorting information may be represented in various forms, such as numbering according to a certain sequence or a regular sequence (e.g., 01, 02, 03, etc.), time information sorted according to a time axis (e.g., 00:10, 00:20, 00:30, etc.), or a combination of basic position information and incremental position information relative to the basic position information, etc., and the most suitable way for representing the sorting information may be flexibly selected according to all possible requirements in an actual application scenario, which is not specifically limited herein.
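Whatever representation is chosen, it only needs to induce a total display order over the pictures. A minimal sketch with hypothetical file names and string sort keys (all names are illustrative, not from the patent):

```python
# Each original picture paired with its sequencing information, here
# zero-padded string numbers as in the "01, 02, 03" example above.
pictures = [("beach.jpg", "03"), ("city.jpg", "01"), ("forest.jpg", "02")]

# Sorting by the key yields the display order used to assemble the target video.
display_order = [name for name, key in sorted(pictures, key=lambda p: p[1])]
```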
Since a video generated from a single original picture requires only a small number of parameters and operations, this application does not discuss that situation; instead, it is directed at scenarios requiring more parameters and operations, namely generating a dynamic video based on at least two original pictures.
It should be noted that the at least two original pictures and the ordering information of each original picture may be obtained directly from a local storage device by the execution main body, or may be obtained from a non-local storage device (for example, terminal devices 101, 102, 103 shown in fig. 1). The local storage device may be a data storage module arranged in the execution main body, for example, a server hard disk, in which case, two original pictures and their sorting information can be quickly read locally; the non-local storage device may also be any other electronic device configured to store data, such as some user terminals, in which case the executing entity may obtain the required at least two original pictures and their ordering information by sending a obtaining command to the electronic device.
Step 202: obtaining transition mode information between two adjacent original pictures in sequence, and executing corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures;
on the basis of step 201, in this step, the execution main body continues to acquire transition mode information between two original pictures that are adjacent in sequence, and the execution main body executes a corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information, so as to finally obtain a transition picture displayed between the two corresponding pictures after the pixel dissolving operation is completed.
The pixel dissolving operation is essentially a dissolving operation performed on each pixel constituting a two-dimensional picture, analogous to dissolving a water-soluble substance in water to obtain a mixture: it mixes the color information of same-position pixels of at least two pictures of the same size (for example, by performing a weighted summation of their R, G and B values). Depending on the objects involved, the at least two same-sized pictures may specifically be two original pictures, in which case the generated transition picture serves as a transition between the two original pictures; or they may be one original picture and a preset background picture, in which case the generated transition picture serves as a transition in the process of the original picture gradually tending toward the preset background picture.
It should be understood that the target of the pixel dissolving operation is not limited to two pictures, and in the case of special requirements, the pixel dissolving operation may be performed on three or even more pictures with the same size to obtain a mixed picture having the color characteristics of the multiple pictures at the same time. Since the pixel dissolving operation can be decomposed into the operation of weighted summation in nature, the calculation is simple and convenient, and the operation amount is small.
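Because the operation reduces to a weighted summation, extending it to three or more same-sized pictures is straightforward. A sketch (function name assumed; weights assumed to sum to 1):

```python
import numpy as np

def dissolve_many(pictures, weights):
    """Pixel-dissolve any number of same-sized pictures.

    Accumulates a per-pixel weighted sum of the pictures' color values,
    which is the generalization of the two-picture dissolve described above.
    """
    acc = np.zeros(pictures[0].shape, dtype=np.float64)
    for pic, w in zip(pictures, weights):
        acc += w * pic.astype(np.float64)
    return acc.astype(np.uint8)
```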
Step 203: and generating a target video according to the sequencing information and the corresponding transition pictures.
On the basis of step 201 and step 202, this step aims to determine the actual sequence of the transition pictures generated in step 202 in the target video according to the ordering information of the original pictures acquired in step 201 and by combining the dependency relationship between the transition pictures and the original pictures, so as to finally generate the target video.
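The assembly described here can be sketched as interleaving the ordered original pictures with the transition frames computed between each adjacent pair, producing the flat frame list a video encoder (for example, OpenCV's VideoWriter) would then consume. All names are illustrative:

```python
def assemble_frames(originals, transitions):
    """Interleave display-ordered original pictures with transition frames.

    `originals` is the list of pictures in display order; `transitions[i]`
    holds the dissolve frames computed between originals[i] and
    originals[i + 1]. Returns the full frame sequence of the target video.
    """
    frames = []
    for i, pic in enumerate(originals):
        frames.append(pic)
        if i < len(transitions):
            frames.extend(transitions[i])
    return frames
```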
Further, to improve the visual effect of the final video as much as possible when it is presented to the user, a preset leader and a preset trailer can be added at the beginning and the end of the target video respectively, so that the actual content is better framed by them. Specifically, the preset leader and trailer may be added individually or together, a specific leader or trailer may be selected at random, or the execution subject may automatically match them to the original pictures by analyzing the style, image features, and theme of each original picture; no specific limitation is made here.
The method for generating a video provided by the embodiment of the application comprises: first, acquiring at least two original pictures and the ordering information of each original picture, where the ordering information represents the display order of the corresponding picture in the target video; then, acquiring the transition mode information between two original pictures that are adjacent in sequence and performing the corresponding pixel dissolving operation on the two pictures according to that information, so as to determine the transition picture displayed between them; and finally, generating the target video according to the ordering information and the corresponding transition pictures. This technical scheme provides a transition picture calculation method based on the pixel dissolving operation that is simple to compute, involves a small computational load, produces a better effect, and improves overall efficiency.
On the basis of the above embodiments, the present application further provides two specific implementations for step 202 through fig. 3 and fig. 4, so as to better understand the actual effects that can be produced by the different implementations.
The flow 300 shown in fig. 3 addresses the specific scene in which the transition mode information indicates that the corresponding pixel dissolving operation is to be performed on two target original pictures that are adjacent in sequence; in this scene, a transition picture may be generated through the following steps:
step 301: determining a first transition time period between two target original pictures;
step 302: and aiming at each first transition moment in the first transition time period, calculating to obtain a current first transition picture corresponding to the current first transition moment according to the respective pixel information of the two target original pictures, the total duration of the first transition time period, the current first transition moment, the transition starting moment and the transition ending moment of the first transition time period.
When a video is formed, if only a single transition picture is placed between two sequence-adjacent target original pictures, the visual experience presented to the user is poor. Therefore, to give the user a better visual experience of the transition part and make the transitions between pictures in the generated video smoother, a transition time period is usually used to transition from one target original picture to the next one adjacent to it in the ordering; that is, a corresponding transition picture needs to be generated for each transition moment within the transition time period. Assuming the duration of the transition time period is 3 seconds and the video frame rate is 30 FPS (Frames Per Second), 90 transition pictures are ultimately formed. In the above manner, the present embodiment provides a gradual transition effect from the current original picture to the next original picture.
Specifically, in step 302, the current first transition picture corresponding to the current first transition time may be calculated by the following formula:
I = ((t2 - t) / (t2 - t1)) × I1 + ((t - t1) / (t2 - t1)) × I2
wherein t is the current first transition time, t1 is the transition start time of the first transition time period, t2 is the transition end time of the first transition time period, I1 is the pixel information of the first of the two target original pictures, I2 is the pixel information of the second of the two target original pictures, I1 is ordered before I2, and I is the current first transition picture.
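The per-moment calculation above, together with the 3-second / 30 FPS example, can be sketched as a frame-generation loop in Python; the function name and the way time is stepped per frame are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def first_transition_frames(I1, I2, duration_s=3.0, fps=30):
    """Generate the transition pictures between two sequence-adjacent
    original pictures, following the linear cross-dissolve formula above."""
    t1, t2 = 0.0, duration_s          # transition start / end times
    n = int(duration_s * fps)         # number of transition pictures (90 here)
    frames = []
    for k in range(n):
        t = t1 + k / fps              # current first transition time
        a = (t2 - t) / (t2 - t1)      # weight of the earlier picture I1
        b = (t - t1) / (t2 - t1)      # weight of the later picture I2
        frame = a * I1.astype(float) + b * I2.astype(float)
        frames.append(np.clip(frame, 0, 255).astype(np.uint8))
    return frames

white = np.full((2, 2, 3), 255, dtype=np.uint8)
black = np.zeros((2, 2, 3), dtype=np.uint8)
frames = first_transition_frames(white, black)  # gradual white-to-black fade
```

With the assumed 3-second period at 30 FPS, the loop produces the 90 transition pictures mentioned in the text, starting at I1 and approaching I2.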
Unlike the process 300 shown in fig. 3, the process 400 shown in fig. 4 specifically indicates another scene for performing the corresponding pixel dissolving operation on a target original picture and a preset background picture for transition mode information, where a transition picture can be generated by the following steps:
step 401: determining a second transition time period between a target original picture and a preset background picture;
step 402: and aiming at each second transition moment in the second transition time period, calculating to obtain a current second transition picture corresponding to the current second transition moment according to the pixel information of a target original picture and a preset background picture, the total duration of the second transition time period, the current second transition moment and the transition starting moment of the second transition time period.
It should be understood that the embodiment shown in fig. 3 performs the pixel dissolving operation on two sequence-adjacent original pictures, so the generated transition picture is a gradual transition between those two pictures, which suits scenes that call for a relatively compact transition. The embodiment shown in fig. 4 performs the pixel dissolving operation on one original picture and a preset background picture, so the generated transition picture has the effect of a gradual transition between the original picture and the background picture; this suits fade-in or fade-out transitions of the original picture, that is, scenes that call for a relatively unhurried transition, and the transition between two sequence-adjacent original pictures can then be realized by fading the previous picture out and fading the next picture in.
Similarly, when calculating the second transition picture at each second transition time in this embodiment, the calculation formula above may also be applied, with one of the original pictures simply replaced by the preset background picture.
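A hedged sketch of this second scene follows: an original picture is dissolved toward a preset background picture to fade it out, and reversing the resulting frame list gives the matching fade-in. The function name, durations, and the reversal trick are illustrative assumptions:

```python
import numpy as np

def fade_out_frames(picture, background, duration_s=1.0, fps=30):
    """Dissolve an original picture toward a preset background picture,
    producing a fade-out over the second transition time period."""
    t1, t2 = 0.0, duration_s
    n = int(duration_s * fps)
    frames = []
    for k in range(n):
        t = k / fps
        w_bg = (t - t1) / (t2 - t1)   # background weight grows over time
        frame = ((1.0 - w_bg) * picture.astype(float)
                 + w_bg * background.astype(float))
        frames.append(np.clip(frame, 0, 255).astype(np.uint8))
    return frames

bg = np.zeros((2, 2, 3), dtype=np.uint8)          # preset background picture
pic = np.full((2, 2, 3), 200, dtype=np.uint8)
fade_out = fade_out_frames(pic, bg)               # picture gradually vanishes
fade_in = fade_out[::-1]                          # the next picture fades in
```

Concatenating one picture's fade-out with the next picture's fade-in realizes the combined transition between two sequence-adjacent original pictures described above.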
On the basis of the foregoing embodiment, in order to further enrich the display manner of the original pictures in the generated dynamic video, the present embodiment further provides another implementation flow 500 of the method for generating a video, including the following steps:
step 501: acquiring at least two original pictures and sequencing information of each original picture;
step 502: determining display mode information of each original picture, and processing the corresponding original picture according to the display mode information;
This step aims for the execution subject to determine the display mode information of each original picture and to process the corresponding original picture according to it, so that the target video can conveniently be generated from the processed pictures. The display mode information includes information for displaying the pictures with a transformation, which includes but is not limited to at least one of the following: zooming in, zooming out, translation, 3D flipping, projective transformation, mirroring, 16-grid timed flipping, and photo wall.
The zoom-in or zoom-out display mode can be realized by the following principle: set a starting size start_size at a starting time t_start and an ending size end_size at an ending time t_end for the picture's display in the video, then calculate the actual size current_size of the processed picture at each time t_start < t < t_end by combining the time difference with the frame rate; after obtaining the actual size of each processed frame, set the display position of each frame on the video interface. The calculation may follow the formula below:
current_size = start_size + (end_size - start_size) × (t - t_start) / (t_end - t_start)
The translation display mode can be realized by a similar principle: set the picture's position pos_start at time t_start and position pos_end at time t_end, then calculate the actual position pos_t of the processed picture at each time t_start < t < t_end by combining the time difference with the frame rate. The calculation may follow the formula below:
pos_t = pos_start + (pos_end - pos_start) × (t - t_start) / (t_end - t_start)
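Both the size formula and the position formula above are instances of the same linear interpolation; a minimal sketch, with illustrative names and example values, might be:

```python
def interpolate(start, end, t, t_start, t_end):
    """Linearly interpolate a display property (size or position) from its
    value at t_start to its value at t_end, matching both formulas above."""
    ratio = (t - t_start) / (t_end - t_start)
    return start + (end - start) * ratio

# Zoom: a picture growing from 100 px to 300 px over 2 seconds.
size_mid = interpolate(100, 300, t=1.0, t_start=0.0, t_end=2.0)

# Pan: an x-position sliding from 0 to 640 over the same 2 seconds.
pos_mid = interpolate(0, 640, t=0.5, t_start=0.0, t_end=2.0)
```

Evaluating the helper once per frame (stepping t by 1/frame-rate) yields the per-frame sizes and positions the text describes.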
The 3D flip display mode is usually realized by applying several affine transformations to a picture: acquire the four vertex coordinates of the original picture, obtain the new four vertex coordinates of the processed picture through multiple affine transformations, and display the picture using the new vertex coordinates to realize the 3D flip effect. Further, the transformed image is usually no longer rectangular, which produces noticeable black edges; by superimposing a mask, the black-edge area can let the background show through or be filled with other content.
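The masking step for hiding black edges can be sketched as a simple composite: pixels covered by the transformed picture are kept, and everything else shows filler content. The function and variable names are illustrative, and the "warped" frame here is hand-built rather than produced by a real geometric transform:

```python
import numpy as np

def fill_black_edges(warped, valid_mask, filler):
    """Composite a transformed picture over filler content so the black
    edges outside the transformed quadrilateral are hidden by the mask."""
    out = filler.copy()
    out[valid_mask] = warped[valid_mask]  # keep only the covered pixels
    return out

# A 4x4 "warped" frame whose right half is uncovered black edge.
warped = np.zeros((4, 4, 3), dtype=np.uint8)
warped[:, :2] = 180                       # transformed picture: left half
valid = np.zeros((4, 4), dtype=bool)
valid[:, :2] = True                       # mask of pixels the picture covers
filler = np.full((4, 4, 3), 60, dtype=np.uint8)   # background that shows through
result = fill_black_edges(warped, valid, filler)
```

In a full implementation, the mask would be derived from the same vertex coordinates used for the transformation, so it tracks the quadrilateral exactly.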
When two pictures are displayed with a combination of projective transformation and mirroring, the following processing can be adopted: apply multiple affine transformations to each picture to obtain its new four vertex coordinates, and also the four vertex coordinates after mirroring; then, for each picture, merge the result of the affine transformations with the result of the mirroring. An actual scenario may be: at the start of the video the two pictures are displayed at a small size; one is then enlarged through projective transformation and mirroring; once it is large enough, it is shrunk in the same way while the other picture is simultaneously enlarged. Black edges arising from the affine transformations in this display mode can be handled in the same way as above.
A partial enlargement can be realized by first determining the four vertex coordinates of the part to be enlarged, cropping the partial image based on those coordinates, and then processing the partial image with the enlargement display mode described above.
When the 16-grid timed-flipping display mode is adopted, each frame needs to be divided into 16 parts with a black gap left between adjacent parts. To realize the flipping of each cell, each of the 16 cells is set to start flipping at a random time within a prescribed time period. For example, while a cell turns from 0 to 180 degrees it shows one of the 16 parts of the first original picture, and while it turns from 180 to 360 degrees it shows the corresponding part of the second original picture; once all 16 cells have finished turning, the 16 parts of the second picture combine into a complete picture.
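The two preparatory steps of this mode, splitting a frame into a 4x4 grid and scheduling a random flip start per cell, can be sketched as follows; the helper names, the seed, and the 2-second period are illustrative assumptions (the black gaps are assumed to be drawn at render time, between the tiles):

```python
import random
import numpy as np

def make_16_grid(picture):
    """Split a frame into the 4x4 grid of 16 parts the display mode needs."""
    h, w = picture.shape[:2]
    return [picture[r * h // 4:(r + 1) * h // 4,
                    c * w // 4:(c + 1) * w // 4]
            for r in range(4) for c in range(4)]

def random_flip_starts(period_s, n_cells=16, seed=0):
    """Give every cell a random flip start time within the prescribed
    period, so the 16 parts begin turning at different moments."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, period_s) for _ in range(n_cells)]

frame = np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3)
tiles = make_16_grid(frame)
starts = random_flip_starts(period_s=2.0)
```

At render time, each tile would rotate according to its own start time, swapping to the second picture's corresponding part once it passes 180 degrees.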
The display mode of the photo wall refers to a mode of attaching a plurality of original pictures to a background wall and then sliding the background wall from the right side to the left side of a video so as to display the plurality of pictures at high density.
Step 503: obtaining transition mode information between two adjacent original pictures in sequence, and executing corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures;
step 504: and connecting the processed original picture and the corresponding transition picture in series according to the sequencing information to generate a target video.
Since step 502 is added in this embodiment, this step is adjusted so that the execution subject concatenates the processed original pictures and the corresponding transition pictures according to the ordering information, thereby generating the target video.
Unlike the above embodiments, in order for the dynamic video generated from multiple pictures to provide a richer visual effect to the user, this embodiment not only realizes transitions through transition pictures but also adds a step of processing the original pictures with transformed displays, so that the target video is generated from the processed pictures. That is, what enters the target video is not the original pictures themselves but the processed pictures, achieving the purpose of providing the user with a richer visual effect.
To deepen understanding, the application further provides a specific implementation scheme in combination with a concrete application scene. The scene is as follows: a user triggers the dynamic-video generation function in an electronic photo album application installed on a smartphone, hoping to obtain a video generated from the album pictures on the phone. Suppose the user selects 8 pictures from the phone album by clicking, numbers them 01, 02, 03, 04, 05, 06, 07, and 08 in a self-defined order through the electronic album application, and also selects the option of randomly choosing the picture display modes and transition modes.
The electronic photo album application forwards the received parameters to a corresponding dynamic video generation server through a network, and the dynamic video generation server randomly selects the following display modes and transition modes respectively based on a preset random algorithm: zooming and displaying a picture numbered 01, performing 3D turning and displaying a picture numbered 02, displaying three pictures numbered 03, 04 and 05 in the form of a photo wall, displaying central partial images in the pictures numbered 06 and 07 in detail, performing gradual transition processing as shown in FIG. 3 on three transition scenes numbered 01 to 02, 03 to 04 and 06 to 07, and performing fade-in or fade-out transition processing as shown in FIG. 4 on other transition scenes.
The dynamic video generation server calculates each frame according to the determined display modes and transition modes, combined with the default display duration of a single picture, the transition duration, and the default frame rate, and then splices the frames along the time axis to obtain the target video encoded with ffmpeg.
The dynamic video generation server stores the generated, ffmpeg-encoded target video at a preset storage location and returns information containing that location to the electronic photo album application through the network, so that the user can download the target video from the preset storage location to the local storage space of the smartphone.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for generating a video, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the apparatus 600 for generating video provided by the present embodiment may include: an original picture and ordering information acquisition unit 601, a transition mode information acquisition and pixel dissolution operation execution unit 602, and a target video generation unit 603. The original picture and ordering information acquiring unit 601 is configured to acquire at least two original pictures and ordering information of each original picture, where the ordering information represents display ordering of the corresponding pictures in the target video; a transition mode information obtaining and pixel dissolving operation executing unit 602 configured to obtain transition mode information between two sequenced adjacent original pictures, and execute a corresponding pixel dissolving operation on the corresponding two pictures according to the transition mode information to determine a transition picture displayed between the corresponding two pictures; and a target video generating unit 603 configured to generate a target video according to the sorting information and the corresponding transition picture.
In the apparatus 600 for generating a video of the present embodiment: for the detailed processing of, and technical effects brought by, the original picture and ordering information acquisition unit 601, the transition mode information acquisition and pixel dissolving operation execution unit 602, and the target video generation unit 603, reference may be made to the related descriptions of steps 201 to 203 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, when the transition mode information indicates that the corresponding pixel dissolving operation is performed on two target original pictures that are adjacent to each other in the sequence, the transition mode information obtaining and pixel dissolving operation performing unit 602 may include: a first transition time period determination subunit configured to determine a first transition time period between two target original pictures; and the first transition picture calculating subunit at each moment is configured to calculate, according to the respective pixel information of the two target original pictures, the total duration of the first transition time period, the current first transition moment, the transition starting moment and the transition ending moment of the first transition time period, and each first transition moment in the first transition time period, to obtain a current first transition picture corresponding to the current first transition moment.
In some optional implementations of this embodiment, the first transition picture calculation subunit at each time may be further configured to: calculating to obtain a current first transition picture corresponding to the current first transition moment by the following formula:
I = ((t2 - t) / (t2 - t1)) × I1 + ((t - t1) / (t2 - t1)) × I2
wherein t is the current first transition time, t1 is the transition start time of the first transition time period, t2 is the transition end time of the first transition time period, I1 is the pixel information of the first of the two target original pictures, I2 is the pixel information of the second of the two target original pictures, I1 is ordered before I2, and I is the current first transition picture.
In some optional implementations of the embodiment, when the transition mode information indicates that the corresponding pixel dissolving operation is performed on one target original picture and the preset background picture, the transition mode information obtaining and pixel dissolving operation performing unit 602 may be further configured to: determining a second transition time period between a target original picture and a preset background picture; and aiming at each second transition moment in the second transition time period, calculating to obtain a current second transition picture corresponding to the current second transition moment according to the pixel information of a target original picture and a preset background picture, the total duration of the second transition time period, the current second transition moment and the transition starting moment of the second transition time period.
In some optional implementations of this embodiment, the apparatus 600 for generating a video may further include: the original picture display processing unit is configured to determine display mode information of each original picture and process the corresponding original picture according to the display mode information; and the target video generation unit is further configured to: and connecting the processed original picture and the corresponding transition picture in series according to the sequencing information to generate a target video.
In some optional implementations of this embodiment, the display mode information includes information for performing transform display on the picture; and the original picture presentation processing unit is further configured to: and performing corresponding transformation operation on the original picture according to the display mode information, and processing the black edges in the transformed picture by using a mask to obtain the processed picture.
In some optional implementations of this embodiment, the apparatus 600 for generating a video may further include: a leader and trailer adding unit configured to add a preset leader and a preset trailer at the beginning and the end of the target video respectively after the target video is generated.
The present embodiment exists as an embodiment of an apparatus corresponding to the foregoing method embodiment, and the apparatus for generating a video provided in the present embodiment provides a transition picture calculation method implemented based on a pixel dissolution operation through the foregoing technical solution, and has the advantages of simple calculation, small computation workload, better effect, and improvement of overall efficiency.
According to an embodiment of the present application, an electronic device and a computer-readable storage medium are also provided.
Fig. 7 shows a block diagram of an electronic device suitable for implementing the method for generating video of the embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor in the electronic device to perform the method for generating video provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method for generating video provided herein.
The memory 702, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the method for generating video in the embodiment of the present application (for example, the original picture and ordering information acquisition unit 601, the transition manner information acquisition and pixel dissolution operation execution unit 602, and the target video generation unit 603 shown in fig. 6). The processor 701 executes various functional applications of the server and data processing, i.e., implements the method for generating a video in the above-described method embodiment, by executing the non-transitory software programs, instructions, and modules stored in the memory 702.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store various types of data, etc., created by the electronic device in performing the method for generating a video. Further, the memory 702 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, which may be connected via a network to an electronic device adapted to perform the method for generating video. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device adapted to perform the method for generating a video may further comprise: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of an electronic apparatus suitable for performing the method for generating a video, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointer, one or more mouse buttons, a track ball, a joystick, or the like. The output devices 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the transition picture calculation mode based on the pixel dissolving operation is provided, the calculation is simple and convenient, the calculation amount is small, the effect is better, and the overall efficiency is improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (16)

1. A method for generating video, comprising:
acquiring at least two original pictures and sequencing information of each original picture, wherein the sequencing information represents the display sequencing of the corresponding pictures in a target video;
obtaining transition mode information between two adjacent original pictures in sequence, and executing corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures;
and generating a target video according to the sequencing information and the corresponding transition pictures.
2. The method according to claim 1, wherein when the transition mode information indicates that the corresponding pixel dissolving operation is performed on two target original pictures which are adjacent in sequence, the performing the corresponding pixel dissolving operation on the corresponding two pictures according to the transition mode information to determine a transition picture displayed between the corresponding two pictures comprises:
determining a first transition time period between the two target original pictures;
and for each first transition moment in the first transition time period, calculating to obtain a current first transition picture corresponding to the current first transition moment according to the respective pixel information of the two target original pictures, the total duration of the first transition time period, the current first transition moment, the transition starting moment and the transition ending moment of the first transition time period.
3. The method according to claim 2, wherein for each first transition time in the first transition time period, calculating a current first transition picture corresponding to the current first transition time according to respective pixel information of two target original pictures, a total duration of the first transition time period, the current first transition time, a transition start time of the first transition time period, and a transition end time, and includes:
calculating to obtain a current first transition picture corresponding to the current first transition moment by using the following formula:
I = ((t2 - t) / (t2 - t1)) · I1 + ((t - t1) / (t2 - t1)) · I2
wherein t is the current first transition time, t1 is the transition start time of the first transition time period, t2 is the transition end time of the first transition time period, I1 is the pixel information of the first target original picture of the two target original pictures, I2 is the pixel information of the second target original picture of the two target original pictures, I1 precedes I2 in the display sequencing, and I is the current first transition picture.
4. The method according to claim 1, wherein when the transition mode information indicates that the corresponding pixel dissolving operation is performed on one target original picture and a preset background picture, the performing the corresponding pixel dissolving operation on the two corresponding pictures according to the transition mode information to determine a transition picture displayed between the two corresponding pictures comprises:
determining a second transition time period between the target original picture and the preset background picture;
and for each second transition moment in the second transition time period, calculating to obtain a current second transition picture corresponding to the current second transition moment according to the pixel information of the target original picture and the preset background picture, the total duration of the second transition time period, the current second transition moment and the transition starting moment of the second transition time period.
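As an illustrative sketch of the second transition of claim 4 (the claim lists only the start moment and total duration, so the background weight is assumed here to grow linearly with elapsed time; the function name is illustrative, not from the application):

```python
import numpy as np

def second_transition_frame(picture, background, t, t_start, total_duration):
    """Dissolve a target original picture into a preset background picture.

    Assumption: the background weight grows linearly with the elapsed time
    (t - t_start) over the total duration of the second transition period.
    """
    alpha = (t - t_start) / total_duration  # 0 at the start, 1 at the end
    frame = (1.0 - alpha) * picture.astype(np.float64) \
            + alpha * background.astype(np.float64)
    return frame.astype(np.uint8)
```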
5. The method of claim 1, further comprising:
determining display mode information of each original picture, and processing the corresponding original picture according to the display mode information; and
generating a target video according to the sequencing information and the corresponding transition picture, wherein the generating comprises:
and connecting the processed original picture and the corresponding transition picture in series according to the sequencing information to generate a target video.
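A minimal sketch of this concatenation step (names are illustrative): the processed pictures, already sorted by the sequencing information, are interleaved with their transition clips to form the frame sequence of the target video.

```python
def assemble_target_video(processed_pictures, transition_clips):
    """Interleave pictures and transition frames in display order.

    processed_pictures: pictures sorted by the sequencing information.
    transition_clips: one list of transition frames per adjacent pair,
    so len(transition_clips) == len(processed_pictures) - 1.
    """
    frames = []
    for i, picture in enumerate(processed_pictures):
        frames.append(picture)
        if i < len(transition_clips):
            # frames shown between picture i and picture i + 1
            frames.extend(transition_clips[i])
    return frames
```

A preset film leader and film trailer, as in claim 7, would simply be prepended and appended to this frame list.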
6. The method according to claim 5, wherein the display mode information comprises information for performing transform display on a picture; and
the processing the original picture according to the display mode information comprises:
and performing corresponding transformation operation on the original picture according to the display mode information, and processing the black edges in the transformed picture by using a mask to obtain a processed picture.
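One way to realize the mask-based black-edge processing is sketched below, under the simplifying assumption of a pure translation transform; a production version would typically apply the same affine warp (e.g., cv2.warpAffine) to both the picture and the mask. All names are illustrative.

```python
import numpy as np

def translate_and_fill(picture, dx, dy, background):
    """Shift picture by (dx, dy) on its canvas, then use a mask to
    replace the exposed black border with the background picture."""
    h, w = picture.shape[:2]
    shifted = np.zeros_like(picture)
    mask = np.zeros((h, w), dtype=bool)
    dst_y = slice(max(dy, 0), min(h + dy, h))
    dst_x = slice(max(dx, 0), min(w + dx, w))
    src_y = slice(max(-dy, 0), min(h - dy, h))
    src_x = slice(max(-dx, 0), min(w - dx, w))
    shifted[dst_y, dst_x] = picture[src_y, src_x]
    mask[dst_y, dst_x] = True  # True where transformed picture content landed
    # Outside the mask, the would-be black edge is filled from the background.
    return np.where(mask[..., None], shifted, background)
```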
7. The method of any of claims 1 to 6, after generating the target video, further comprising:
and correspondingly adding a preset film leader and a preset film trailer at the beginning and the end of the target video respectively.
8. An apparatus for generating video, comprising:
an original picture and sequencing information acquisition unit, configured to acquire at least two original pictures and sequencing information of each original picture, wherein the sequencing information represents the display sequencing of the corresponding pictures in a target video;
the transition mode information acquisition and pixel dissolution operation execution unit is configured to acquire transition mode information between two sequenced and adjacent original pictures, and execute corresponding pixel dissolution operation on the two corresponding pictures according to the transition mode information so as to determine a transition picture displayed between the two corresponding pictures;
and the target video generation unit is configured to generate a target video according to the sequencing information and the corresponding transition picture.
9. The apparatus according to claim 8, wherein when the transition mode information indicates that the corresponding pixel dissolving operation is performed on two target original pictures that are adjacent in the sequence, the transition mode information acquisition and pixel dissolution operation execution unit comprises:
a first transition time period determination subunit configured to determine a first transition time period between the two target original pictures;
and a moment-by-moment first transition picture calculation subunit, configured to calculate, for each first transition moment in the first transition time period, a current first transition picture corresponding to the current first transition moment according to the respective pixel information of the two target original pictures, the total duration of the first transition time period, the current first transition moment, and the transition start moment and transition end moment of the first transition time period.
10. The apparatus of claim 9, wherein the moment-by-moment first transition picture calculation subunit is further configured to:
calculating to obtain a current first transition picture corresponding to the current first transition moment by using the following formula:
I = ((t2 - t) / (t2 - t1)) · I1 + ((t - t1) / (t2 - t1)) · I2
wherein t is the current first transition time, t1 is the transition start time of the first transition time period, t2 is the transition end time of the first transition time period, I1 is the pixel information of the first target original picture of the two target original pictures, I2 is the pixel information of the second target original picture of the two target original pictures, I1 precedes I2 in the display sequencing, and I is the current first transition picture.
11. The apparatus according to claim 8, wherein when the transition mode information indicates that the corresponding pixel dissolving operation is performed on a target original picture and a preset background picture, the transition mode information acquisition and pixel dissolution operation execution unit is further configured to:
determining a second transition time period between the target original picture and the preset background picture;
and for each second transition moment in the second transition time period, calculating to obtain a current second transition picture corresponding to the current second transition moment according to the pixel information of the target original picture and the preset background picture, the total duration of the second transition time period, the current second transition moment and the transition starting moment of the second transition time period.
12. The apparatus of claim 8, further comprising:
the original picture display processing unit is configured to determine display mode information of each original picture and process the corresponding original picture according to the display mode information; and
the target video generation unit is further configured to:
and connecting the processed original picture and the corresponding transition picture in series according to the sequencing information to generate a target video.
13. The apparatus of claim 12, wherein the display mode information includes information for performing transform display on a picture; and
the original picture presentation processing unit is further configured to:
and performing corresponding transformation operation on the original picture according to the display mode information, and processing the black edges in the transformed picture by using a mask to obtain a processed picture.
14. The apparatus of any of claims 8 to 13, further comprising:
and a film leader and film trailer adding unit, configured to correspondingly add a preset film leader and a preset film trailer at the beginning and the end of the target video, respectively, after the target video is generated.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for generating video of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method for generating video of any one of claims 1-7.
CN202010602915.5A 2020-06-29 2020-06-29 Method and device for generating video, electronic equipment and readable storage medium Pending CN112312161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010602915.5A CN112312161A (en) 2020-06-29 2020-06-29 Method and device for generating video, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112312161A true CN112312161A (en) 2021-02-02

Family

ID=74483364

Country Status (1)

Country Link
CN (1) CN112312161A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113114955A (en) * 2021-03-25 2021-07-13 苏宁金融科技(南京)有限公司 Video generation method and device and electronic equipment
CN113111035A (en) * 2021-04-09 2021-07-13 上海掌门科技有限公司 Special effect video generation method and equipment
CN113316019A (en) * 2021-05-26 2021-08-27 北京搜房科技发展有限公司 Video synthesis method and device
CN114615513A (en) * 2022-03-08 2022-06-10 北京字跳网络技术有限公司 Video data generation method and device, electronic equipment and storage medium
WO2023125132A1 (en) * 2021-12-30 2023-07-06 北京字跳网络技术有限公司 Special effect image processing method and apparatus, and electronic device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108495171A (en) * 2018-04-03 2018-09-04 优视科技有限公司 Method for processing video frequency and its device, storage medium, electronic product
CN109618222A (en) * 2018-12-27 2019-04-12 北京字节跳动网络技术有限公司 A kind of splicing video generation method, device, terminal device and storage medium
CN109688463A (en) * 2018-12-27 2019-04-26 北京字节跳动网络技术有限公司 A kind of editing video generation method, device, terminal device and storage medium
CN109729365A (en) * 2017-10-27 2019-05-07 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, device and intelligent terminal, storage medium
CN110580691A (en) * 2019-09-09 2019-12-17 京东方科技集团股份有限公司 dynamic processing method, device and equipment of image and computer readable storage medium
CN111182308A (en) * 2018-11-09 2020-05-19 腾讯美国有限责任公司 Video decoding method, video decoding device, computer equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination