CN113055730B - Video generation method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113055730B
Authority
CN
China
Prior art keywords
segment
frame
video
fragment
target
Prior art date
Legal status
Active
Application number
CN202110168756.7A
Other languages
Chinese (zh)
Other versions
CN113055730A
Inventor
文丹霞
吴炫滢
郑子茗
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110168756.7A
Publication of CN113055730A
Application granted
Publication of CN113055730B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H04N 21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/44016: Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/4825: End-user interface for program selection using a list of items to be played back in a given order, e.g. playlists
    • H04N 21/8146: Monomedia components involving graphical data, e.g. 3D object, 2D graphics

Abstract

The application discloses a video generation method, a video generation apparatus, an electronic device, and a storage medium. The method displays a segment queue that includes a plurality of segment frames and a material segment displayed in each segment frame. In response to an adjustment operation acting on at least one material segment, the display positions of at least two material segments are adjusted, and the adjusted material segments are displayed in the segment frames. Each segment frame and the material segment displayed in it form a video unit, and the attribute information of each video unit includes the arrangement order of the segment frame, the limit duration corresponding to the segment frame, and the play duration corresponding to the material segment. A video is then generated based on the plurality of material segments and the attribute information of each video unit. The display position of a material segment displayed in a segment frame can thus be adjusted even when the segment frame has a limit duration, which improves the convenience of adjusting material segments and the efficiency of video generation.

Description

Video generation method, device, electronic equipment and storage medium
Technical Field
The present application relates to the technical field of electronic devices, and in particular, to a video generating method, apparatus, electronic device, and storage medium.
Background
With the development of science and technology, more and more people record their lives by making videos. When a video is produced from a plurality of material segments, the display positions or arrangement order of the material segments often need to be adjusted. However, the current procedure for adjusting the display positions and arrangement order of material segments is cumbersome, which makes video production difficult and time-consuming.
Disclosure of Invention
In view of the above, the present application proposes a video generating method, apparatus, electronic device, and storage medium to solve the above problems.
In a first aspect, an embodiment of the present application provides a video generation method, where the method includes: displaying a segment queue, where the segment queue includes a plurality of segment frames in a fixed arrangement order and a material segment displayed in each segment frame, each segment frame corresponds to a respective limit duration, and the material segment displayed in each segment frame corresponds to a respective play duration; adjusting the display positions of at least two material segments in response to an adjustment operation acting on at least one material segment; displaying the adjusted plurality of material segments in the plurality of segment frames, where each segment frame and the material segment displayed in the segment frame form a video unit, and the attribute information of each video unit includes the arrangement order of the segment frame in the video unit, the limit duration corresponding to the segment frame, and the play duration corresponding to the material segment; and generating a video based on the plurality of material segments and the attribute information of each video unit.
In a second aspect, an embodiment of the present application provides a video generation apparatus, including: a segment queue display module, configured to display a segment queue, where the segment queue includes a plurality of segment frames in a fixed arrangement order and a material segment displayed in each segment frame, each segment frame corresponds to a respective limit duration, and the material segment displayed in each segment frame corresponds to a respective play duration; a display position adjustment module, configured to adjust the display positions of at least two material segments in response to an adjustment operation acting on at least one material segment; a material segment display module, configured to display the adjusted plurality of material segments in the plurality of segment frames, where each segment frame and the material segment displayed in the segment frame form a video unit, and the attribute information of each video unit includes the arrangement order of the segment frame in the video unit, the limit duration corresponding to the segment frame, and the play duration corresponding to the material segment; and a video generation module, configured to generate a video based on the plurality of material segments and the attribute information of each video unit.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory and a processor, the memory coupled to the processor, the memory storing instructions that when executed by the processor perform the above-described method.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having program code stored therein, the program code being callable by a processor to perform the above method.
The video generation method, apparatus, electronic device, and storage medium provided by the embodiments of the present application display a segment queue, where the segment queue includes a plurality of segment frames in a fixed arrangement order and a material segment displayed in each segment frame, each segment frame corresponds to a respective limit duration, and the material segment displayed in each segment frame corresponds to a respective play duration. The display positions of at least two material segments are adjusted in response to an adjustment operation acting on at least one material segment, and the adjusted material segments are displayed in the segment frames. Each segment frame and the material segment displayed in it form a video unit, and the attribute information of each video unit includes the arrangement order of the segment frame in the video unit, the limit duration corresponding to the segment frame, and the play duration corresponding to the material segment. A video is generated based on the plurality of material segments and the attribute information of each video unit. In this way, the display position of a material segment displayed in a segment frame can be adjusted even when the segment frame has a limit duration, which improves the convenience of adjusting material segments and the efficiency of video generation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a video generation method according to an embodiment of the present application;
Fig. 2 is a first interface schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a second interface schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 4 is a third interface schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a fourth interface schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 6 is a flowchart of a video generation method according to another embodiment of the present application;
Fig. 7 is a flowchart of a video generation method according to still another embodiment of the present application;
Fig. 8 is a flowchart of step S350 of the video generation method shown in Fig. 7;
Fig. 9 is a flowchart of step S352 of the video generation method shown in Fig. 8;
Fig. 10 is a flowchart of step S360 of the video generation method shown in Fig. 7;
Fig. 11 is a flowchart of a video generation method according to yet another embodiment of the present application;
Fig. 12 is an interaction schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 13 is a fifth interface schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 14 is a flowchart of a video generation method according to still another embodiment of the present application;
Fig. 15 is a flowchart of step S530 of the video generation method shown in Fig. 14;
Fig. 16 is a block diagram of a video generation apparatus according to an embodiment of the present application;
Fig. 17 is a block diagram of an electronic device for performing a video generation method according to an embodiment of the present application;
Fig. 18 illustrates a storage unit for storing or carrying program code that implements a video generation method according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the present application, the technical solutions of the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings.
When producing a video on an electronic device, a user often needs to select a plurality of material segments and synthesize them into a video according to a preset arrangement order; however, arranging and synthesizing a plurality of material segments is difficult. Therefore, electronic devices currently sort and synthesize the material segments by providing a template, to improve video production efficiency and reduce production difficulty. A template here is a video template produced through manual editing (to which materials such as music, stickers, motion effects, animation, and text can be added); an ordinary user can apply his or her own material segments (such as pictures or videos) to the template and obtain the template's video effect. However, when a frame used to display a material segment in the video template corresponds to a limit duration and the material segment corresponds to a play duration, and the user needs to adjust the display position or arrangement order of the material segments, the electronic device provides two schemes: 1. exit the video template editing, re-enter the material segment selection page, and select the corresponding material segments in the desired arrangement order; 2. after applying the video template, replace the material segments one by one on the editing page according to the desired arrangement order. Neither scheme can directly adjust the display position or arrangement order of the material segments, which makes the operation process cumbersome, so video production remains difficult and time-consuming.
In view of the above problems, the inventors, through long-term research, propose the video generation method, apparatus, electronic device, and storage medium provided by the embodiments of the present application. The display position of a material segment displayed in a segment frame can be adjusted even when the segment frame has a limit duration, which improves the convenience of adjusting material segments and the efficiency of video generation. The specific video generation method is described in detail in the following embodiments.
Referring to Fig. 1, Fig. 1 is a flowchart of a video generation method according to an embodiment of the present application. The video generation method is used to adjust the display position of a material segment displayed in a segment frame when the segment frame has a limit duration, thereby improving the convenience of adjusting material segments and the efficiency of video generation. In a specific embodiment, the video generation method is applied to the video generation apparatus 200 shown in Fig. 16 and to the electronic device 100 (Fig. 17) configured with the video generation apparatus 200. The specific flow of this embodiment is described below by taking an electronic device as an example; it will be understood that the electronic device applied in this embodiment may include a smartphone, a tablet computer, a wearable electronic device, and the like, which is not limited herein. The flowchart shown in Fig. 1 is described in detail below; the video generation method may specifically include the following steps:
Step S110: displaying a segment queue, where the segment queue includes a plurality of segment frames in a fixed arrangement order and a material segment displayed in each segment frame, each segment frame corresponds to a respective limit duration, and the material segment displayed in each segment frame corresponds to a respective play duration.
In some implementations, the electronic device can display a video editing interface and display a clip queue at the video editing interface. As one embodiment, the video editing interface may include a first display area and a second display area, wherein the first display area may be disposed above the second display area, may be disposed below the second display area, may be disposed to the left of the second display area, may be disposed to the right of the second display area, may surround the second display area, and the like, and the area of the first display area may be larger than the area of the second display area, may be smaller than the area of the second display area, may be equal to the area of the second display area, and the like. In this embodiment, the first display area may be used to display a clip queue, and the second display area may be used to display video generated based on the clip queue.
Referring to Fig. 2, Fig. 2 is a first interface schematic diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 2, the electronic device may display a video editing interface, display the segment queue A in a first display area of the video editing interface, and display the generated video B in a second display area of the video editing interface. As shown in Fig. 2, when the segment queue A is displayed in the first display area, the video B displayed in the second display area is paused and the playback progress is displayed in the second display area. In the interface shown in Fig. 2, when the user confirms the adjustment of the display position of a material segment, the "v" control displayed in the first display area may be clicked; when the user abandons the adjustment of the display position of the material segment, the "x" control displayed in the first display area may be clicked. As one way, when a click operation on the "x" control is detected, the display of the segment queue A may be exited and the video B may be displayed full screen; when a click operation on the "v" control is detected, a new video may be generated from the adjusted material segments and displayed full screen.
In some implementations, the electronic device may display a material selection interface and display the segment queue on the material selection interface. As one embodiment, the material selection interface may include a third display area and a fourth display area, where the third display area may be disposed above, below, to the left of, or to the right of the fourth display area, may surround the fourth display area, and the like, and the area of the third display area may be larger than, smaller than, or equal to the area of the fourth display area, and the like. In this embodiment, the third display area may be used to display the segment queue, and the fourth display area may be used to display the material segments to be selected.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a second interface of the electronic device according to the embodiment of the application. As shown in fig. 3, the electronic device may display a material selection interface, display a clip queue a in a third display area of the material selection interface, and display a material C to be selected in a fourth display area of the material selection interface. The material C to be selected may include a picture, a video, or the like stored locally on the electronic device.
In this embodiment, the segment queue may include a plurality of segment frames in a fixed arrangement order and a material segment displayed in each segment frame. That is, the arrangement order of the segment frames included in the segment queue is fixed and always remains unchanged, one material segment is correspondingly displayed in each segment frame, the display position of each material segment can be moved, and the arrangement order of the material segments can be changed. For example, assume that the segment queue includes 4 segment frames, namely segment frame 1, segment frame 2, segment frame 3, and segment frame 4, the 4 segment frames are fixedly arranged in the order segment frame 1 - segment frame 2 - segment frame 3 - segment frame 4, material segment 1 is displayed in segment frame 1, material segment 2 is displayed in segment frame 2, material segment 3 is displayed in segment frame 3, and material segment 4 is displayed in segment frame 4. On this basis, the arrangement order of segment frame 1, segment frame 2, segment frame 3, and segment frame 4 remains unchanged, while the display positions of material segment 1, material segment 2, material segment 3, and material segment 4 in the segment frames may change.
With continued reference to Fig. 2 and Fig. 3, the segment queue may include segment frame 1, segment frame 2, segment frame 3, segment frame 4, and so on, where the segment frames are fixedly arranged in the order segment frame 1 - segment frame 2 - segment frame 3 - segment frame 4, segment frame 1 displays material segment 1 (P1), segment frame 2 displays material segment 2 (P2), segment frame 3 displays material segment 3 (P3), and segment frame 4 displays material segment 4 (P4).
In this embodiment, each segment frame may correspond to a respective limit duration. As one way, the limit durations corresponding to the segment frames may all be the same, may all be different, or may be partially the same. In some embodiments, assume that the limit duration corresponding to segment frame 1 is 5s, the limit duration corresponding to segment frame 2 is 4s, the limit duration corresponding to segment frame 3 is 5s, and the limit duration corresponding to segment frame 4 is 3s. The total duration of the video generated based on the segment queue is then 5s+4s+5s+3s=17s, and within these 17s the material segment in segment frame 1 is displayed for 5s, the material segment in segment frame 2 is displayed for 4s, the material segment in segment frame 3 is displayed for 5s, and the material segment in segment frame 4 is displayed for 3s.
In this embodiment, the material segments displayed in each segment frame correspond to respective playing durations. As a way, the playing durations corresponding to the plurality of material segments may be the same, may be different, or may be partially the same. Therefore, since each segment frame corresponds to a respective restriction duration, and each material segment corresponds to a respective playing duration, when the same material segment is displayed in different segment frames, different situations may occur in which the restriction duration corresponding to the segment frame is longer than the playing duration corresponding to the displayed material segment, the restriction duration corresponding to the segment frame is equal to the playing duration corresponding to the displayed material segment, or the restriction duration corresponding to the segment frame is shorter than the playing duration corresponding to the displayed material segment.
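To make these relationships concrete, the following Kotlin sketch models a segment queue: segment frames with a fixed order and a limit duration each, and material segments with their own play durations. The class and field names (SegmentFrame, MaterialSegment, limitMs, and so on) are illustrative assumptions for this description only and are not taken from the patent.

```kotlin
// Illustrative data model for the segment queue; names are assumptions, not from the patent.
enum class MaterialType { PICTURE, VIDEO }

// A segment frame occupies a fixed position in the queue and has a limit duration.
data class SegmentFrame(val orderIndex: Int, val limitMs: Long)

// A material segment has its own play duration and a material type.
data class MaterialSegment(val id: String, val type: MaterialType, val playMs: Long)

// The frame order never changes; the material list can be reordered so that a
// different material segment is displayed in each frame.
data class SegmentQueue(
    val frames: List<SegmentFrame>,              // fixed order: frame 1, frame 2, ...
    val materials: MutableList<MaterialSegment>  // materials[i] is displayed in frames[i]
)

// Example from the text: limit durations of 5s, 4s, 5s and 3s give a 17s video in total.
fun totalDurationMs(queue: SegmentQueue): Long = queue.frames.sumOf { it.limitMs }
```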
Step S120: and adjusting the display positions of at least two material segments in response to an adjustment operation acting on at least one material segment.
In this embodiment, the electronic device may detect an adjustment operation acting on the material segments displayed in the segment queue during the process of displaying the segment queue, where when the adjustment operation acting on at least one of the material segments displayed in the segment queue is detected, the display positions of at least two of the material segments displayed in the segment queue may be adjusted.
In some embodiments, the electronic device may detect whether to trigger adjustment of a material segment during display of the segment queue, and may detect an adjustment operation on a material segment displayed in the segment queue when adjustment of the trigger material segment is detected, where the display positions of at least two material segments may be adjusted when an adjustment operation on at least one material segment is detected.
As one approach, the electronic device may display a target control, which may be, for example, a "sort" control, which is used to trigger adjustment of the material segments. Therefore, in the process of displaying the segment queues, the electronic device can detect touch operation on the target control, when detecting click operation on the target control, the electronic device can determine to trigger adjustment of the material segments, and then can detect adjustment operation on the material segments displayed in the segment queues, wherein when detecting adjustment operation on at least one material segment, the electronic device can adjust display positions of at least two material segments.
As still another way, the electronic device may detect a touch operation on a material segment displayed in the segment queue during displaying the segment queue, and when detecting that a pressing duration on at least one material segment of the material segments displayed in the segment queue reaches a preset duration, may determine to trigger adjustment of the material segment, then may detect an adjustment operation on the material segment displayed in the segment queue, where when detecting the adjustment operation on the at least one material segment, may adjust display positions of at least two material segments.
For example, the corresponding display material segment 1 in segment box 1, the corresponding display material segment 2 in segment box 2, the corresponding display material segment 3 in segment box 3, and the corresponding display material segment 4 in segment box 4.
Referring to Fig. 4, Fig. 4 is a third interface schematic diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 4, taking the video editing interface as an example, if the display position of material segment 1 (P1) is adjusted into segment frame 2 in response to an adjustment operation acting on material segment 1 (P1), the display position of material segment 2 (P2) is adjusted into segment frame 1, while material segment 3 (P3) remains displayed in segment frame 3 and material segment 4 (P4) remains displayed in segment frame 4.
Referring to fig. 5, fig. 5 shows a fourth interface schematic diagram of an electronic device according to an embodiment of the present application, as shown in fig. 5, taking a video editing interface as an example, if the display position of the material segment 1 (P1) is adjusted to the segment frame 4 in response to the adjustment operation acting on the material segment 1 (P1), the display position of the material segment 2 (P2) is adjusted to the segment frame 1, the display position of the material segment 3 (P3) is adjusted to the segment frame 2, and the display position of the material segment 4 (P4) is adjusted to the segment frame 3.
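The two adjustments illustrated in Fig. 4 and Fig. 5 amount to simple operations on the ordered list of material segments while the segment frames stay fixed: swapping two entries, or removing one entry and reinserting it so that the entries in between shift by one. A minimal sketch with hypothetical function names:

```kotlin
// Swap the material segments displayed in two frames (e.g. Fig. 4: P1 to frame 2, P2 to frame 1).
fun <T> swapMaterials(materials: MutableList<T>, from: Int, to: Int) {
    val tmp = materials[from]
    materials[from] = materials[to]
    materials[to] = tmp
}

// Move one material segment to another frame and shift the segments in between
// (e.g. Fig. 5: P1 to frame 4, so P2, P3 and P4 each move one frame forward).
fun <T> moveMaterial(materials: MutableList<T>, from: Int, to: Int) {
    val moved = materials.removeAt(from)
    materials.add(to, moved)
}
```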
Step S130: displaying the adjusted plurality of material segments in the plurality of segment frames, where each segment frame and the material segment displayed in the segment frame form a video unit, and the attribute information of each video unit includes the arrangement order of the segment frame in the video unit, the limit duration corresponding to the segment frame, and the play duration corresponding to the material segment.
In this embodiment, after the display positions of at least two material segments are adjusted, the adjusted plurality of material segments may be displayed in the plurality of segment frames, where each segment frame and the material segment displayed within it form a video unit. For example, the segment frames are fixedly arranged in the order segment frame 1 - segment frame 2 - segment frame 3 - segment frame 4, and the adjusted material segments are arranged in the order material segment 2 - material segment 1 - material segment 3 - material segment 4. At this time, segment frame 1 displays material segment 2, segment frame 2 displays material segment 1, segment frame 3 displays material segment 3, and segment frame 4 displays material segment 4. Then segment frame 1 and the material segment 2 displayed in it form a video unit, segment frame 2 and the material segment 1 displayed in it form a video unit, segment frame 3 and the material segment 3 displayed in it form a video unit, and segment frame 4 and the material segment 4 displayed in it form a video unit.
In some embodiments, each video unit may be set to have respective attribute information, where the attribute information of each video unit may include an arrangement sequence of clip frames in each video unit, a restriction duration corresponding to the clip frames, and a play duration corresponding to the material clips. For example, taking a video unit formed by the segment frame 1 and the material segment 2 displayed on the segment frame 1 as an example, the attribute information of the video unit includes an arrangement sequence of the segment frame 1, a limitation duration corresponding to the segment frame 1, and a play duration corresponding to the material segment 2; taking the video unit formed by the segment frame 3 and the material segment 3 displayed on the segment frame 3 as an example, the attribute information of the video unit includes the arrangement sequence of the segment frame 3, the limitation duration corresponding to the segment frame 3, and the play duration corresponding to the material segment 3.
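Pairing each segment frame with the material segment now displayed in it yields the video units and their attribute information described above. The sketch below is only illustrative; the types redeclare assumed names so the snippet stands on its own.

```kotlin
// Illustrative video unit: frame order, frame limit duration, and the material's play duration.
data class Frame(val orderIndex: Int, val limitMs: Long)
data class Material(val id: String, val playMs: Long)
data class VideoUnit(val order: Int, val limitMs: Long, val material: Material)

// Pair each frame with the adjusted material segment displayed in it.
fun buildVideoUnits(frames: List<Frame>, adjustedMaterials: List<Material>): List<VideoUnit> =
    frames.zip(adjustedMaterials) { frame, material ->
        VideoUnit(order = frame.orderIndex, limitMs = frame.limitMs, material = material)
    }
```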
Step S140: and generating a video based on the plurality of material fragments and the attribute information of each video unit.
In this embodiment, after the attribute information of each video unit is obtained, a video may be generated based on the plurality of material segments and the attribute information of each video unit. Specifically, the video may be generated based on a plurality of material segments, and an arrangement sequence of segment frames in each video unit, a restriction duration corresponding to the segment frames, and a play duration corresponding to the material segments. As one way, after the attribute information of each video unit is obtained, processing of the playing parameters may be performed on the plurality of material segments based on the attribute information of each unit, and the video may be generated based on the plurality of material segments and the playing parameters corresponding to the respective material segments in the plurality of material segments.
In some embodiments, the plurality of material segments includes material segment 1, material segment 2, material segment 3, and material segment 4, and the attribute information of each video unit includes the attribute information of a first video unit formed by segment frame 1 and the material segment 2 displayed in it, the attribute information of a second video unit formed by segment frame 2 and the material segment 1 displayed in it, the attribute information of a third video unit formed by segment frame 3 and the material segment 3 displayed in it, and the attribute information of a fourth video unit formed by segment frame 4 and the material segment 4 displayed in it. A video may then be generated based on material segment 1, material segment 2, material segment 3, and material segment 4 and on the attribute information of the first, second, third, and fourth video units.
According to the video generation method provided by the embodiment of the present application, a segment queue is displayed, where the segment queue includes a plurality of segment frames in a fixed arrangement order and a material segment displayed in each segment frame, each segment frame corresponds to a respective limit duration, and the material segment displayed in each segment frame corresponds to a respective play duration. The display positions of at least two material segments are adjusted in response to an adjustment operation acting on at least one material segment, and the adjusted material segments are displayed in the segment frames, where each segment frame and the material segment displayed in it form a video unit, and the attribute information of each video unit includes the arrangement order of the segment frame, the limit duration corresponding to the segment frame, and the play duration corresponding to the material segment. A video is generated based on the plurality of material segments and the attribute information of each video unit. In this way, the display position of a material segment displayed in a segment frame can be adjusted even when the segment frame has a limit duration, which improves the convenience of adjusting material segments and the efficiency of video generation.
Referring to Fig. 6, Fig. 6 is a flowchart of a video generation method according to another embodiment of the present application. The flowchart shown in Fig. 6 is described in detail below; the video generation method may specifically include the following steps:
step S210: and displaying a fragment queue, wherein the fragment queue comprises a plurality of fragment frames with fixed arrangement sequence and material fragments displayed in each fragment frame, each fragment frame corresponds to respective limiting duration, and each material fragment displayed in each fragment frame corresponds to respective playing duration.
Step S220: and adjusting the display positions of at least two material segments in response to an adjustment operation acting on at least one material segment.
Step S230: and displaying the adjusted plurality of material fragments in the plurality of fragment frames, wherein each fragment frame and the material fragments displayed in the fragment frame form a video unit, and the attribute information of each video unit comprises the arrangement sequence of the fragment frames in each video unit, the limiting duration corresponding to the fragment frames and the playing duration corresponding to the material fragments.
The specific description of step S210 to step S230 is referred to step S110 to step S130, and will not be repeated here.
Step S240: and acquiring the size relation between the limiting time length corresponding to the fragment frame in each video unit and the playing time length corresponding to the material fragment.
In this embodiment, after each video unit and the attribute information of each video unit are determined, the size relationship between the limit duration corresponding to the segment frame in each video unit and the play duration corresponding to the material segment may be obtained.
In some embodiments, the limit duration corresponding to the segment frame in each video unit may be compared with the play duration corresponding to the material segment to obtain the size relationship between the two. The size relationship between the limit duration corresponding to the segment frame and the play duration corresponding to the material segment in each video unit may include: the limit duration corresponding to the segment frame is longer than the play duration corresponding to the material segment, the limit duration corresponding to the segment frame is equal to the play duration corresponding to the material segment, or the limit duration corresponding to the segment frame is shorter than the play duration corresponding to the material segment.
Step S250: and determining the playing parameters of the material segments in each video unit based on the size relation.
In this embodiment, after obtaining the size relationship between the restriction duration corresponding to the segment frame in each video unit and the playing duration corresponding to the material segment, the playing parameters of the material segment in each video unit may be determined based on the size relationship.
As one way, when the limit duration corresponding to the segment frame in the video unit is longer than the play duration corresponding to the material segment, a first video processing method may be adopted to process the material segment to obtain the play parameters of the material segment. The first video processing method may include, but is not limited to: continuous display, looped playback, and slowed-down playback.
As another way, when the limit duration corresponding to the segment frame in the video unit is equal to the play duration corresponding to the material segment, a second video processing method may be adopted to process the material segment to obtain the play parameters of the material segment. The second video processing method may include, but is not limited to: performing no processing.
As another way, when the limit duration corresponding to the segment frame in the video unit is shorter than the play duration corresponding to the material segment, a third video processing method may be adopted to process the material segment to obtain the play parameters of the material segment. The third video processing method may include, but is not limited to: playback after trimming, and sped-up playback.
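Steps S240 and S250 therefore reduce to comparing the two durations of each video unit and mapping the result to one of the processing methods above. A sketch of that decision with illustrative names; which concrete option is chosen within each branch (hold, loop, speed change, trim) is left open, as in the text:

```kotlin
// Illustrative play-parameter decision for one video unit; names are assumptions.
sealed class PlayParam {
    object AsIs : PlayParam()                                  // limit == play: no processing
    object HoldOrLoop : PlayParam()                            // limit > play: hold a picture or loop a video
    data class ChangeSpeed(val factor: Double) : PlayParam()   // factor < 1 slows down, > 1 speeds up
    data class Trim(val keepMs: Long) : PlayParam()            // limit < play: cut down to the limit
}

fun decidePlayParam(limitMs: Long, playMs: Long): PlayParam = when {
    limitMs == playMs -> PlayParam.AsIs
    limitMs > playMs  -> PlayParam.HoldOrLoop    // alternatively ChangeSpeed(playMs.toDouble() / limitMs)
    else              -> PlayParam.Trim(limitMs) // alternatively ChangeSpeed(playMs.toDouble() / limitMs)
}
```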
Step S260: and generating a video based on the plurality of material fragments, the arrangement sequence of the fragment frames in each video unit and the playing parameters of the material fragments.
In this embodiment, after the playing parameters of the material segments in each video unit are obtained, a video may be generated based on the plurality of material segments, the arrangement order of the segment frames in each video unit, and the playing parameters of the material segments.
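Putting step S260 together: the video units are walked in frame order and each material segment occupies exactly its frame's limit duration, using the play parameters decided above. A minimal sketch; the Clip type is a hypothetical placeholder, not a real editing API.

```kotlin
// Hypothetical output description; clip durations come from the frame limits, not the materials.
data class TimelineUnit(val order: Int, val limitMs: Long, val materialId: String, val playMs: Long)
data class Clip(val materialId: String, val durationMs: Long, val speed: Double)

fun assembleTimeline(units: List<TimelineUnit>): List<Clip> =
    units.sortedBy { it.order }.map { u ->
        Clip(
            materialId = u.materialId,
            durationMs = u.limitMs,                  // each clip fills its frame's limit duration
            speed = u.playMs.toDouble() / u.limitMs  // < 1 stretches, > 1 compresses, 1 keeps as is
        )
    }
```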
Compared with the video generation method shown in Fig. 1, the video generation method according to this embodiment of the present application further obtains the size relationship between the limit duration corresponding to the segment frame in each video unit and the play duration corresponding to the material segment, determines the play parameters of the material segment in each video unit based on that relationship, and generates the video based on the plurality of material segments, the arrangement order of the segment frames in each video unit, and the play parameters of the material segments. This improves the suitability between the segment frame and the material segment in each video unit and thus the quality of the generated video.
Referring to fig. 7, fig. 7 is a flowchart illustrating a video generating method according to still another embodiment of the present application. As will be described in detail below with respect to the flowchart shown in fig. 7, in this embodiment, the video unit includes a target video unit, and the target video unit is formed by a target segment frame and a target material segment displayed in the target segment frame, and the video generating method specifically may include the following steps:
Step S310: and displaying a fragment queue, wherein the fragment queue comprises a plurality of fragment frames with fixed arrangement sequence and material fragments displayed in each fragment frame, each fragment frame corresponds to respective limiting duration, and each material fragment displayed in each fragment frame corresponds to respective playing duration.
Step S320: and adjusting the display positions of at least two material segments in response to an adjustment operation acting on at least one material segment.
Step S330: and displaying the adjusted plurality of material fragments in the plurality of fragment frames, wherein each fragment frame and the material fragments displayed in the fragment frame form a video unit, and the attribute information of each video unit comprises the arrangement sequence of the fragment frames in each video unit, the limiting duration corresponding to the fragment frames and the playing duration corresponding to the material fragments.
For the specific description of step S310 to step S330, refer to step S110 to step S130; details are not repeated here.
Step S340: and acquiring the size relation between the limiting time length corresponding to the fragment frame in each video unit and the playing time length corresponding to the material fragment.
The specific description of step S340 is referred to step S240, and will not be repeated here.
Step S350: and when the limit time length corresponding to the target fragment frame is longer than the play time length corresponding to the target material fragment, increasing the play time length corresponding to the target material fragment to the limit time length.
In this embodiment, the video units may include a target video unit, where the target video unit is formed by a target segment frame and a target material segment displayed in the target segment frame, and the target video unit may be a part of video units in the plurality of video units or may be all video units in the plurality of video units, which is not limited herein.
In some embodiments, when the size relationship between the limit duration corresponding to the target segment frame and the play duration corresponding to the target material segment in the target video unit indicates that the limit duration corresponding to the target segment frame is longer than the play duration corresponding to the target material segment, displaying the target material segment in the target segment frame according to its play duration cannot fill the limit duration set for the target segment frame when generating the video. Therefore, in order to generate the video normally, the play duration corresponding to the target material segment may be increased to the limit duration so that the target material segment is adapted to the target segment frame in which it is displayed.
In some embodiments, when the size relationship indicates that the limit duration corresponding to the target segment frame is equal to the play duration corresponding to the target material segment, displaying the target material segment in the target segment frame according to its play duration exactly reaches the limit duration set for the target segment frame. Therefore, in order to generate the video normally, the play duration corresponding to the target material segment may be kept unchanged so that the target material segment is adapted to the target segment frame in which it is displayed.
Referring to fig. 8, fig. 8 is a flowchart illustrating a step S350 of the video generating method shown in fig. 7 according to the present application. The following details the flow shown in fig. 8, and the method may specifically include the following steps:
step S351: and when the limit time corresponding to the target fragment frame is longer than the play time corresponding to the target material fragment, acquiring the material type of the target material fragment.
The manner of increasing the play duration to the limit duration differs for target material segments of different material types. Therefore, in this embodiment, the material type of the target material segment may be obtained when the limit duration corresponding to the target segment frame is longer than the play duration corresponding to the target material segment.
As one way, when the limit duration corresponding to the target segment frame is longer than the play duration corresponding to the target material segment, the file suffix of the target material segment may be obtained, and the material type of the target material segment may be determined based on the file suffix. For example, whether the material type of the target material segment is a picture or a video is determined based on the file suffix of the target material segment.
As another way, when the limit duration corresponding to the target segment frame is longer than the play duration corresponding to the target material segment, the play duration corresponding to the target material segment may be obtained, and the material type of the target material segment may be determined based on that play duration. For example, whether the material type of the target material segment is a picture or a video is determined based on the play duration corresponding to the target material segment.
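Both ways of determining the material type can be sketched briefly; the suffix lists and helper names below are assumptions, not values given in the patent.

```kotlin
// Determine the material type from the file suffix (illustrative suffix lists).
fun typeFromSuffix(fileName: String): String {
    val suffix = fileName.substringAfterLast('.', "").lowercase()
    return when (suffix) {
        "jpg", "jpeg", "png", "webp" -> "picture"
        "mp4", "mov", "mkv"          -> "video"
        else                         -> "unknown"
    }
}

// Alternatively, treat a material with no intrinsic play duration as a picture.
fun typeFromPlayDuration(playMs: Long?): String =
    if (playMs == null || playMs == 0L) "picture" else "video"
```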
Step S352: and based on the material type, increasing the playing time length corresponding to the target material fragment to the limiting time length.
In this embodiment, after the material type of the target material segment is obtained, the play duration corresponding to the target material segment may be increased to the limit duration based on the material type. For example, when the material type of the target material segment is a first type, the play duration corresponding to the target material segment is increased to the limit duration in a first manner; when the material type of the target material segment is a second type, the play duration corresponding to the target material segment is increased to the limit duration in a second manner, and so on.
Referring to fig. 9, fig. 9 is a flowchart illustrating step S352 of the video generating method shown in fig. 8 according to the present application. The following will describe the flow shown in fig. 9 in detail, and the method specifically may include the following steps:
step S3521: and when the material type is a picture type, continuously displaying the target material segment to the limit duration.
In some embodiments, when the material type of the target material segment is determined to be a picture type, the target material segment may be continuously displayed until the limit duration is reached, so that the target material segment is adapted to the target segment frame in which it is displayed, that is, the target material segment fills the limit duration corresponding to the target segment frame. For example, assuming that the limit duration corresponding to the target segment frame is 5s, the target material segment may be continuously displayed for 5s.
Step S3522: and when the material type is a video type, circularly playing the target material segment to the limit duration.
In some embodiments, when the material type of the target material segment is determined to be a video type, the target material segment may be played in a loop until the limit duration is reached, so that the target material segment is adapted to the target segment frame in which it is displayed. For example, assuming that the limit duration corresponding to the target segment frame is 5s and the play duration corresponding to the target material segment is 2s, the target material segment is played in a loop until the limit duration is reached.
Step S3523: when the material type is a video type, playing the target material segment at a first speed until the limit duration is reached, where the first speed is lower than the original speed.
In some embodiments, when the material type of the target material segment is determined to be a video type, the target material segment may be played at a first speed until the limit duration is reached, where the first speed is lower than the original speed, so that the target material segment is adapted to the target segment frame in which it is displayed. For example, assuming that the limit duration corresponding to the target segment frame is 10s and the play duration corresponding to the target material segment is 5s, the target material segment may be played at 1/2 speed until the limit duration is reached.
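Steps S3521 to S3523 can be collapsed into one branch: a picture is held on screen for the limit duration, and a video is either looped or slowed down until the limit duration is filled. The sketch below is illustrative; whether looping or slowing down is preferred would be a product choice, not something the patent fixes.

```kotlin
// Illustrative handling of a material segment that is shorter than the frame's limit duration.
fun extendToLimit(type: String, playMs: Long, limitMs: Long, preferLoop: Boolean = true): String =
    when {
        type == "picture" -> "hold the still image for $limitMs ms"          // step S3521
        preferLoop        -> "loop the video until $limitMs ms have elapsed" // step S3522
        else              -> {
            val speed = playMs.toDouble() / limitMs                          // < 1.0, i.e. slowed-down playback
            "play the video at ${"%.2f".format(speed)}x speed"               // step S3523
        }
    }
```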
Step S360: and when the limiting time length corresponding to the target fragment frame is smaller than the playing time length corresponding to the target material fragment, reducing the playing time length corresponding to the target material fragment to the limiting time length.
In some embodiments, when the size relationship between the limit duration corresponding to the target segment frame and the play duration corresponding to the target material segment in the target video unit indicates that the limit duration corresponding to the target segment frame is shorter than the play duration corresponding to the target material segment, displaying the target material segment in the target segment frame according to its play duration would exceed the limit duration set for the target segment frame when generating the video. Therefore, in order to generate the video normally, the play duration corresponding to the target material segment may be reduced to the limit duration so that the target material segment is adapted to the target segment frame in which it is displayed.
In some embodiments, when the limit duration corresponding to the target segment frame is inconsistent with the play duration corresponding to the target material segment, a prompt message may be displayed. The prompt message may be used to indicate that the limit duration corresponding to the target segment frame is inconsistent with the play duration corresponding to the target material segment, the cause of the inconsistency, the way the inconsistency is handled, and the like, which is not limited herein. As one way, a first prompt message may be displayed when the limit duration corresponding to the target segment frame is longer than the play duration corresponding to the target material segment, and a second prompt message may be displayed when the limit duration corresponding to the target segment frame is shorter than the play duration corresponding to the target material segment.
Referring to fig. 10, fig. 10 is a flowchart illustrating a step S360 of the video generating method shown in fig. 7 according to the present application. The following will describe the flow chart shown in fig. 10 in detail, and the method specifically may include the following steps:
step S361: when the limiting time length corresponding to the target segment frame is smaller than the playing time length corresponding to the target material segment, cutting the target material segment to obtain the residual target material segment, wherein the playing time length corresponding to the residual target material segment is equal to the limiting time length corresponding to the target segment frame.
In some embodiments, when the limit duration corresponding to the target segment frame is shorter than the play duration corresponding to the target material segment, the target material segment may be trimmed to obtain a remaining target material segment, where the play duration corresponding to the remaining target material segment is equal to the limit duration corresponding to the target segment frame, so that the target material segment is adapted to the target segment frame in which it is displayed. For example, assuming that the limit duration corresponding to the target segment frame is 3s and the play duration corresponding to the target material segment is 5s, 2s may be cut from the target material segment to obtain a remaining target material segment of 3s.
As one mode, the cutting position of the target material segment is not limited, that is, the starting position of the target material segment may be cut to obtain a remaining target material segment, the ending position of the target material segment may be cut to obtain a remaining target material segment, the intermediate position of the target material segment may be cut to obtain a remaining target material segment, and the like.
Step S362: when the limit duration corresponding to the target segment frame is shorter than the play duration corresponding to the target material segment, playing the target material segment at a second speed so that it finishes within the limit duration, where the second speed is higher than the original speed.
In some embodiments, when the limit duration corresponding to the target segment frame is shorter than the play duration corresponding to the target material segment, the target material segment may be played at a second speed so that it finishes within the limit duration, where the second speed is higher than the original speed, so that the target material segment is adapted to the target segment frame in which it is displayed. For example, assuming that the limit duration corresponding to the target segment frame is 5s and the play duration corresponding to the target material segment is 10s, the target material segment may be played at 2x speed.
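Steps S361 and S362 are the mirror image: a material segment longer than the limit duration is either trimmed to that duration or played faster so that it finishes exactly at the limit duration. An illustrative sketch; the default of trimming from the start of the timeline is an assumption, since the text leaves the trim position open.

```kotlin
// Illustrative handling of a material segment that is longer than the frame's limit duration.
sealed class FitResult
data class Trimmed(val startMs: Long, val endMs: Long) : FitResult()  // keep [startMs, endMs)
data class FastPlayed(val speed: Double) : FitResult()                // speed > 1.0

fun shrinkToLimit(playMs: Long, limitMs: Long, trim: Boolean = true): FitResult =
    if (trim) {
        // Step S361: trim so the remaining segment equals the limit duration;
        // trimming from the end or the middle would be equally valid.
        Trimmed(startMs = 0L, endMs = limitMs)
    } else {
        // Step S362: e.g. a 10s segment in a 5s frame plays at 2x speed.
        FastPlayed(speed = playMs.toDouble() / limitMs)
    }
```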
Step S370: and generating a video based on the plurality of material fragments, the arrangement sequence of the fragment frames in each video unit and the playing parameters of the material fragments.
For the specific description of step S370, refer to step S260; it will not be repeated here.
In the video generating method according to this still further embodiment of the present application, compared with the video generating method shown in fig. 1, when the limit duration corresponding to the target segment frame in the target video unit is longer than the play duration corresponding to the target material segment, the play duration corresponding to the target material segment is increased to the limit duration; when the limit duration corresponding to the target segment frame in the target video unit is shorter than the play duration corresponding to the target material segment, the play duration corresponding to the target material segment is reduced to the limit duration. By increasing or reducing the play duration of the target material segment, the suitability between the target segment frame and the target material segment in the target video unit can be improved.
Referring to fig. 11, fig. 11 is a flowchart illustrating a video generating method according to another embodiment of the application. The flowchart shown in fig. 11 will be described in detail below; the video generating method specifically may include the following steps:
step S410: and displaying a fragment queue, wherein the fragment queue comprises a plurality of fragment frames with fixed arrangement sequence and material fragments displayed in each fragment frame, each fragment frame corresponds to respective limiting duration, and each material fragment displayed in each fragment frame corresponds to respective playing duration.
For the specific description of step S410, refer to step S110; it will not be repeated here.
Step S420: when a drag operation acting on at least one of the material segments is detected, a moving distance of at least one of the material segments based on the drag operation is detected.
In this embodiment, when the clip queue is displayed, a touch operation applied to a material clip in the clip queue may be detected, and when a drag operation applied to at least one material clip is detected, a movement distance of the at least one material clip based on the drag operation may be detected. As one way, the moving distance of the at least one material segment based on the drag operation may be detected by a touch sensor.
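The movement-distance measurement can be sketched as follows. The snippet assumes the touch coordinates are available as (x, y) pixel pairs from the platform's touch events; the patent itself only states that a touch sensor is used.

```python
# Sketch of measuring the drag movement distance; the coordinate model is an
# assumption, since the patent only refers to detection by a touch sensor.
import math

def movement_distance(down_pos, current_pos) -> float:
    """Euclidean distance between where the drag started and where it is now."""
    dx = current_pos[0] - down_pos[0]
    dy = current_pos[1] - down_pos[1]
    return math.hypot(dx, dy)

print(movement_distance((100, 200), (160, 280)))  # 100.0
```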
Referring to fig. 12 and 13, fig. 12 shows an interaction schematic diagram of an electronic device provided by an embodiment of the present application, and fig. 13 shows a fifth interface schematic diagram of an electronic device provided by an embodiment of the present application. As shown in fig. 12, when a drag operation acting on the material segment 1 (P1) is detected, the material segment 1 (P1) may be dragged away from the segment frame 1 while the segment frame 1 remains stationary and is displayed blank (as in fig. 13).
Step S430: and when the moving distance is larger than a distance threshold value, adjusting the display positions of at least two material fragments.
In some embodiments, the electronic device may preset and store a distance threshold, where the distance threshold is used as a basis for judging the moving distance of the material segment based on the drag operation. Therefore, in this embodiment, after the moving distance of the at least one material segment based on the drag operation is obtained, the moving distance may be compared with the distance threshold to obtain a comparison result. When the comparison result indicates that the moving distance is greater than the distance threshold, the display positions of at least two material segments are adjusted; when the comparison result indicates that the moving distance is not greater than the distance threshold, the at least one material segment is returned to its initial display position.
In some embodiments, the display positions of at least two material segments are adjusted when the movement distance is greater than the distance threshold and a drag-and-drop operation is detected on the at least one material segment. As one way, when the moving distance is greater than the distance threshold, it may be detected whether the drag operation on the at least one material segment has ended; when it is detected that the drag operation has ended, which indicates that a drag release operation on the at least one material segment has been detected, the display positions of at least two material segments may be adjusted. Whether the drag operation acting on the material segment has ended can be detected by the touch sensor.
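The threshold comparison and release handling described above can be sketched as a small decision function. The threshold value and the returned strings below are illustrative placeholders, not values specified by the patent.

```python
# Sketch of the release handling; the threshold and result labels are assumed.
DISTANCE_THRESHOLD = 48.0  # assumed pixel value; the patent does not specify one

def on_drag_update(distance: float, drag_ended: bool) -> str:
    """Decide what to do with the dragged material segment."""
    if not drag_ended:
        return "keep dragging"                    # wait for the drag release
    if distance > DISTANCE_THRESHOLD:
        return "adjust display positions"         # at least two segments move
    return "return to initial display position"   # threshold not exceeded

print(on_drag_update(100.0, True))  # adjust display positions
print(on_drag_update(10.0, True))   # return to initial display position
```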
Step S440: and displaying the adjusted plurality of material fragments in the plurality of fragment frames, wherein each fragment frame and the material fragments displayed in the fragment frame form a video unit, and the attribute information of each video unit comprises the arrangement sequence of the fragment frames in each video unit, the limiting duration corresponding to the fragment frames and the playing duration corresponding to the material fragments.
Step S450: and generating a video based on the plurality of material fragments and the attribute information of each video unit.
For the specific description of steps S440 to S450, refer to steps S130 to S140; they will not be repeated here.
In the video generating method according to another embodiment of the present application, compared with the video generating method shown in fig. 1, when a drag operation acting on at least one material segment is detected, a moving distance of the at least one material segment based on the drag operation is detected, and when the moving distance is greater than a distance threshold, display positions of at least two material segments are adjusted, so that adjustment of the display positions of the material segments is triggered according to the drag operation acting on the material segments, and convenience in adjustment of the display positions of the material segments is improved.
Referring to fig. 14, fig. 14 is a flowchart illustrating a video generating method according to still another embodiment of the present application. As will be described in detail below with respect to the flowchart shown in fig. 14, in this embodiment, at least one material segment includes a first material segment, and the video generating method specifically may include the following steps:
step S510: and displaying a fragment queue, wherein the fragment queue comprises a plurality of fragment frames with fixed arrangement sequence and material fragments displayed in each fragment frame, each fragment frame corresponds to respective limiting duration, and each material fragment displayed in each fragment frame corresponds to respective playing duration.
For the specific description of step S510, refer to step S110; it will not be repeated here.
Step S520: and responding to the adjustment operation acted on the first material segment, and determining a first segment frame corresponding to the first material segment before adjustment and a second segment frame corresponding to the first material segment after adjustment from the plurality of segment frames.
In some implementations, the at least one material segment includes a first material segment. The number of first material segments may be one or more, which is not limited herein. In this embodiment, when an adjustment operation acting on the first material segment is detected, the first segment frame in which the first material segment is correspondingly displayed before adjustment and the second segment frame in which it is correspondingly displayed after adjustment based on the adjustment operation may be determined from the plurality of segment frames in response to the adjustment operation.
For example, assume that the plurality of segment frames includes segment frame 1, segment frame 2, segment frame 3, and segment frame 4, and the plurality of material segments includes material segment 1, material segment 2, material segment 3, and material segment 4. If the first material segment is the material segment 1, and the material segment 1 is adjusted from being displayed in the segment frame 1 to being displayed in the segment frame 2, it can be determined that the first segment frame corresponding to the material segment 1 before adjustment is the segment frame 1, and the second segment frame corresponding to the material segment 1 after adjustment is the segment frame 2; if the first material segment is the material segment 1, and the material segment 1 is adjusted from being displayed in the segment frame 1 to being displayed in the segment frame 4, it can be determined that the first segment frame corresponding to the material segment 1 before adjustment is the segment frame 1, and the second segment frame corresponding to the material segment 1 after adjustment is the segment frame 4.
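The determination of the first and second segment frames can be sketched with a simple list model. The snippet assumes the segment queue is a list whose index i stands for segment frame i+1 and whose element is the material segment displayed in that frame; the function name and this model are assumptions for illustration.

```python
# Sketch of step S520 under the list model described above; names are assumed.
def locate_frames(queue, first_segment, target_index):
    """Return (first_frame_index, second_frame_index) for an adjustment that
    moves first_segment to the frame at target_index."""
    first_frame = queue.index(first_segment)  # frame shown before adjustment
    second_frame = target_index               # frame shown after adjustment
    return first_frame, second_frame

queue = ["segment 1", "segment 2", "segment 3", "segment 4"]
print(locate_frames(queue, "segment 1", 3))  # (0, 3): segment frame 1 -> segment frame 4
```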
Step S530: and acquiring a second fragment frame and a material fragment displayed in a fragment frame between the first fragment frame and the second fragment frame as a second material fragment, and adjusting the display positions of the first material fragment and the second material fragment.
In this embodiment, after the first segment frame corresponding to the first material segment before adjustment and the second segment frame corresponding to the first material segment after adjustment are determined, the second segment frame and the material segments displayed in the segment frames between the first segment frame and the second segment frame may be obtained as the second material segment, and the display positions of the first material segment and the second material segment may be adjusted.
In some embodiments, a material segment may or may not be displayed in the second segment frame. When a material segment is displayed in the second segment frame, the material segments displayed in the second segment frame and in the segment frames between the first segment frame and the second segment frame can be used as the second material segment, and the display positions of the first material segment and the second material segment can be adjusted; when no material segment is displayed in the second segment frame (it is displayed blank), the material segments displayed in the segment frames between the first segment frame and the second segment frame can be used as the second material segment, and the display positions of the first material segment and the second material segment can be adjusted.
For example, assume that the segment queue includes 4 segment frames, namely, segment frame 1, segment frame 2, segment frame 3, and segment frame 4, where material segment 1 is correspondingly displayed in segment frame 1, material segment 2 in segment frame 2, material segment 3 in segment frame 3, and material segment 4 in segment frame 4.
If the first material segment is the material segment 1, and the material segment 1 is adjusted from being displayed in the segment frame 1 to being displayed in the segment frame 2, it may be determined that the first segment frame corresponding to the material segment 1 before adjustment is the segment frame 1 and the second segment frame corresponding to the material segment 1 after adjustment is the segment frame 2. At this time, the material segments in the segment frame 2 and in the segment frames between the segment frame 1 and the segment frame 2 include the material segment 2, so the display positions of the material segment 1 and the material segment 2 may be adjusted, that is, exchanged.
If the first material segment is the material segment 1 and the material segment 1 is adjusted from being displayed in the segment frame 1 to being displayed in the segment frame 4, it may be determined that the first segment frame corresponding to the material segment 1 before adjustment is the segment frame 1 and the second segment frame corresponding to the material segment 1 after adjustment is the segment frame 4. At this time, the material segments in the segment frame 4 and in the segment frames between the segment frame 1 and the segment frame 4 include the material segment 2, the material segment 3, and the material segment 4, so the display positions of the material segment 1, the material segment 2, the material segment 3, and the material segment 4 may be adjusted; for example, the display position of the material segment 1 is adjusted into the segment frame 4, the display position of the material segment 2 is adjusted into the segment frame 1, the display position of the material segment 3 is adjusted into the segment frame 2, and the display position of the material segment 4 is adjusted into the segment frame 3.
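Under the same list model, the second material segment can be collected as the slice of segments between the first segment frame and the second segment frame, together with the segment displayed in the second segment frame. The snippet below is only a sketch of this selection, not the patent's implementation.

```python
# Sketch of collecting the second material segments; the list model and the
# function name are assumptions used for illustration only.
def second_material_segments(queue, first_frame, second_frame):
    if first_frame < second_frame:               # first material segment moves backward
        return queue[first_frame + 1: second_frame + 1]
    return queue[second_frame: first_frame]      # first material segment moves forward

queue = ["segment 1", "segment 2", "segment 3", "segment 4"]
print(second_material_segments(queue, 0, 3))  # ['segment 2', 'segment 3', 'segment 4']
print(second_material_segments(queue, 0, 1))  # ['segment 2']
```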
Referring to fig. 15, fig. 15 is a flowchart illustrating a step S530 of the video generating method shown in fig. 14 according to the present application. The following will describe the flow shown in fig. 15 in detail, and the method specifically may include the following steps:
step S531: and when the first material segments move forward in the fixed arrangement sequence, the second material segments move backward in the fixed arrangement sequence.
In some embodiments, when the first material segment moves forward in the fixed arrangement sequence, this indicates that the first material segment needs to occupy an earlier arrangement position, and therefore the second material segment needs to be moved backward in the fixed arrangement sequence.
For example, assume that the segment queue includes 4 segment frames in total, namely, segment frame 1, segment frame 2, segment frame 3, and segment frame 4, the 4 segment frames are fixedly arranged in the order of segment frame 1-segment frame 2-segment frame 3-segment frame 4, and material segment 1 is correspondingly displayed in segment frame 1, material segment 2 in segment frame 2, material segment 3 in segment frame 3, and material segment 4 in segment frame 4. If the first material segment is the material segment 4 and the material segment 4 is moved from being displayed in the segment frame 4 to being displayed in the segment frame 1, then the material segment 1 may be moved backward from being displayed in the segment frame 1 to being displayed in the segment frame 2, the material segment 2 may be moved backward from being displayed in the segment frame 2 to being displayed in the segment frame 3, and the material segment 3 may be moved backward from being displayed in the segment frame 3 to being displayed in the segment frame 4.
Step S532: and when the first material segments are moved backwards in the fixed arrangement sequence, the second material segments are moved forwards in the fixed arrangement sequence.
In some embodiments, when the first material segment is moved backward in the fixed arrangement sequence, a blank appears at the arrangement position that the first material segment vacates, and therefore the second material segment needs to be moved forward in the fixed arrangement sequence.
For example, assume that the segment queue includes 4 segment frames in total, namely, segment frame 1, segment frame 2, segment frame 3, and segment frame 4, the 4 segment frames are fixedly arranged in the order of segment frame 1-segment frame 2-segment frame 3-segment frame 4, and material segment 1 is correspondingly displayed in segment frame 1, material segment 2 in segment frame 2, material segment 3 in segment frame 3, and material segment 4 in segment frame 4. If the first material segment is the material segment 1 and the material segment 1 is moved from being displayed in the segment frame 1 to being displayed in the segment frame 4, the material segment 2 can be moved forward from being displayed in the segment frame 2 to being displayed in the segment frame 1, the material segment 3 can be moved forward from being displayed in the segment frame 3 to being displayed in the segment frame 2, and the material segment 4 can be moved forward from being displayed in the segment frame 4 to being displayed in the segment frame 3.
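Steps S531 and S532 can be sketched together with the same list model: moving the first material segment forward shifts the second material segments backward by one frame, and moving it backward shifts them forward by one frame, which amounts to removing the first material segment and reinserting it at the second segment frame. The snippet is an illustration only.

```python
# Sketch of steps S531/S532 under the assumed list model; removing the first
# material segment and reinserting it at the second frame shifts the second
# material segments by one position in the opposite direction.
def adjust_positions(queue, first_frame, second_frame):
    queue = list(queue)
    segment = queue.pop(first_frame)     # take the first material segment out
    queue.insert(second_frame, segment)  # drop it into the second segment frame
    return queue

queue = ["segment 1", "segment 2", "segment 3", "segment 4"]
print(adjust_positions(queue, 3, 0))  # forward:  ['segment 4', 'segment 1', 'segment 2', 'segment 3']
print(adjust_positions(queue, 0, 3))  # backward: ['segment 2', 'segment 3', 'segment 4', 'segment 1']
```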
Step S540: and displaying the adjusted plurality of material fragments in the plurality of fragment frames, wherein each fragment frame and the material fragments displayed in the fragment frame form a video unit, and the attribute information of each video unit comprises the arrangement sequence of the fragment frames in each video unit, the limiting duration corresponding to the fragment frames and the playing duration corresponding to the material fragments.
Step S550: and generating a video based on the plurality of material fragments and the attribute information of each video unit.
For the specific description of steps S540 to S550, refer to steps S130 to S140; they will not be repeated here.
In the video generating method according to still another embodiment of the present application, compared with the video generating method shown in fig. 1, a first segment frame corresponding to the first material segment before adjustment and a second segment frame corresponding to the first material segment after adjustment are further determined from the plurality of segment frames, the second segment frame and the material segments displayed in the segment frames between the first segment frame and the second segment frame are obtained as the second material segment, and the display positions of the first material segment and the second material segment are adjusted, thereby improving the accuracy of determining the material segments whose display positions need to be adjusted and further improving the effect of the generated video.
Referring to fig. 16, fig. 16 is a block diagram illustrating a video generating apparatus according to an embodiment of the present application. The block diagram shown in fig. 16 will be described below; the video generating apparatus 200 includes: a segment queue display module 210, a display position adjustment module 220, a material segment display module 230, and a video generation module 240, wherein:
the segment queue display module 210 is configured to display a segment queue, where the segment queue includes a plurality of segment frames in a fixed arrangement sequence and material segments displayed in each segment frame, each segment frame corresponds to a respective constraint duration, and each material segment displayed in each segment frame corresponds to a respective play duration.
The display position adjustment module 220 is configured to adjust the display positions of at least two of the material segments in response to an adjustment operation acting on at least one of the material segments.
Further, the display position adjustment module 220 includes: a moving distance detection sub-module and a first display position adjustment sub-module, wherein:
and the moving distance detection sub-module is used for detecting the moving distance of at least one material fragment based on the dragging operation when the dragging operation acted on at least one material fragment is detected.
And the first display position adjustment sub-module is used for adjusting the display positions of at least two material fragments when the moving distance is greater than a distance threshold value.
Further, the first display position adjustment submodule includes: a display position adjustment unit in which:
and the display position adjusting unit is used for adjusting the display positions of at least two material fragments when the moving distance is larger than a distance threshold value and the drag-and-drop operation acting on at least one material fragment is detected.
Further, the at least one material segment includes a first material segment, and the display position adjustment module 220 includes: a segment frame determination sub-module and a second display position adjustment sub-module, wherein:
and the segment frame determining submodule is used for responding to the adjusting operation acted on the first material segment and determining a first segment frame corresponding to the first material segment before adjustment and a second segment frame corresponding to the first material segment after adjustment from the plurality of segment frames.
And the second display position adjustment sub-module is used for acquiring a second segment frame and a material segment displayed in a segment frame between the first segment frame and the second segment frame as a second material segment, and adjusting the display positions of the first material segment and the second material segment.
Further, the second display position adjustment submodule includes: a display position backward moving unit and a display position forward moving unit, wherein:
and the display position backward moving unit is used for moving the second material segments backward in the fixed arrangement sequence when the first material segments move forward in the fixed arrangement sequence.
And the display position forward moving unit is used for moving the second material fragments forward in the fixed arrangement sequence when the first material fragments are moved backward in the fixed arrangement sequence.
The material segment display module 230 is configured to display the adjusted plurality of material segments in the plurality of segment frames, where each segment frame and the material segments displayed in the segment frame form a video unit, and the attribute information of each video unit includes an arrangement sequence of the segment frames in each video unit, a limitation duration corresponding to the segment frames, and a play duration corresponding to the material segments.
The video generating module 240 is configured to generate a video based on the plurality of material segments and the attribute information of each video unit.
Further, the video generating module 240 includes: the system comprises a size relation acquisition sub-module, a play parameter determination sub-module and a video generation sub-module, wherein:
And the size relation acquisition sub-module is used for acquiring the size relation between the limiting time length corresponding to the fragment frame in each video unit and the playing time length corresponding to the material fragment.
And the play parameter determination submodule is used for determining the play parameters of the material segments in each video unit based on the size relation.
Further, the video unit includes a target video unit formed of a target segment frame and a target material segment displayed within the target segment frame, and the play parameter determination submodule includes: a play duration increasing unit and a play duration reducing unit, wherein:
and the playing time length increasing unit is used for increasing the playing time length corresponding to the target material fragment to the limiting time length when the limiting time length corresponding to the target fragment frame is longer than the playing time length corresponding to the target material fragment.
Further, the play duration increasing unit includes: a material type acquisition subunit and a playing duration increasing subunit, wherein:
and the material type acquisition subunit is used for acquiring the material type of the target material segment when the limit time length corresponding to the target segment frame is longer than the play time length corresponding to the target material segment.
And the playing time length increasing subunit is used for increasing the playing time length corresponding to the target material fragment to the limiting time length based on the material type.
Further, the play duration increasing subunit includes: a first playing duration increasing subunit, a second playing duration increasing subunit, and a third playing duration increasing subunit, wherein:
and the first playing duration increasing subunit is used for continuously displaying the target material segment to the limiting duration when the material type is the picture type.
And the second playing duration increasing subunit is used for circularly playing the target material segment to the limiting duration when the material type is the video type.
And a third playing time length increasing subunit, configured to play the target material segment to the limiting time length at a speed of a first multiple when the material type is a video type, where the first multiple is greater than the original multiple.
And the playing time length reducing unit is used for reducing the playing time length corresponding to the target material fragment to the limiting time length when the limiting time length corresponding to the target fragment frame is smaller than the playing time length corresponding to the target material fragment.
Further, the play duration reducing unit includes: a first play duration reduction subunit and a second play duration reduction subunit, wherein:
and the first playing duration reducing subunit is used for cutting the target material segment to obtain the residual target material segment when the limiting duration corresponding to the target segment frame is smaller than the playing duration corresponding to the target material segment, and the playing duration corresponding to the residual target material segment is equal to the limiting duration corresponding to the target segment frame.
And the second playing duration reducing subunit is used for playing the target material segment to the limiting duration at a second multiple speed when the limiting duration corresponding to the target segment frame is smaller than the playing duration corresponding to the target material segment, wherein the second multiple is smaller than the original multiple.
Further, the play parameter determination submodule further includes: a prompt information display unit, wherein:
and the prompt information display unit is used for displaying prompt information when the limit duration corresponding to the target fragment frame is inconsistent with the play duration corresponding to the target material fragment.
And the video generation sub-module is used for generating videos based on the plurality of material fragments, the arrangement sequence of the fragment frames in each video unit and the playing parameters of the material fragments.
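Purely as an illustration of how the modules listed above could be composed, the following sketch mirrors the top-level structure; the class names, method signatures, and wiring are assumptions, not the apparatus's actual implementation.

```python
# Illustrative composition of the video generating apparatus 200; the class and
# method names are assumed for this sketch and do not come from the patent.
class SegmentQueueDisplayModule:
    def display(self, segment_queue): ...            # show frames and material segments

class DisplayPositionAdjustmentModule:
    def adjust(self, segment_queue, operation): ...  # react to an adjustment operation

class MaterialSegmentDisplayModule:
    def refresh(self, segment_queue): ...            # redraw the adjusted material segments

class VideoGenerationModule:
    def generate(self, material_segments, unit_attributes): ...  # produce the video

class VideoGeneratingApparatus:
    """Wires the four modules together in the order used by the method."""
    def __init__(self):
        self.queue_display = SegmentQueueDisplayModule()
        self.position_adjustment = DisplayPositionAdjustmentModule()
        self.segment_display = MaterialSegmentDisplayModule()
        self.video_generation = VideoGenerationModule()
```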
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In several embodiments provided by the present application, the coupling of the modules to each other may be electrical, mechanical, or other.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
Referring to fig. 17, a block diagram of an electronic device 100 according to an embodiment of the application is shown. The electronic device 100 may be a smart phone, a tablet computer, an electronic book reader, or another device capable of running an application program. The electronic device 100 of the present application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, wherein the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more application programs being configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 uses various interfaces and lines to connect various portions of the overall electronic device 100, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Alternatively, the processor 110 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, the application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed; and the modem is used to handle wireless communications. It will be appreciated that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The Memory 120 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Memory 120 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 120 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, etc. The storage data area may also store data created by the electronic device 100 in use (e.g., phonebook, audiovisual data, chat log data), and the like.
Referring to fig. 18, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable medium 300 has stored therein program code which can be invoked by a processor to perform the methods described in the method embodiments described above.
The computer readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 300 comprises a non-volatile computer readable medium (non-transitory computer-readable storage medium). The computer readable storage medium 300 has storage space for program code 310 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. Program code 310 may be compressed, for example, in a suitable form.
In summary, the video generating method, apparatus, electronic device, and storage medium provided in the embodiments of the present application display a segment queue, where the segment queue includes a plurality of segment frames in a fixed arrangement sequence and a material segment displayed in each segment frame, each segment frame corresponds to a respective limit duration, and each material segment displayed in each segment frame corresponds to a respective play duration. In response to an adjustment operation acting on at least one material segment, the display positions of at least two material segments are adjusted, and the adjusted plurality of material segments are displayed in the plurality of segment frames, where each segment frame and the material segment displayed in it form a video unit, and the attribute information of each video unit includes the arrangement sequence of the segment frames in each video unit, the limit duration corresponding to the segment frame, and the play duration corresponding to the material segment. A video is then generated based on the plurality of material segments and the attribute information of each video unit. In this way, even though the segment frames have limit durations, the display positions of the material segments displayed in the segment frames can still be adjusted, thereby improving the convenience of adjusting the material segments and the efficiency of video generation.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, it will be appreciated by those of ordinary skill in the art that the technical schemes described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A method of video generation, the method comprising:
displaying a video editing interface, wherein the video editing interface comprises a first display area and a second display area, the first display area is used for displaying a segment queue, the second display area is used for displaying a video generated based on the segment queue, the video is paused, the segment queue comprises a plurality of segment frames with a fixed arrangement sequence and material segments displayed in each segment frame, each segment frame corresponds to a respective limiting duration, and each material segment displayed in each segment frame corresponds to a respective playing duration;
responding to an adjustment operation acting on at least one material segment, and adjusting the display positions of at least two of the material segments, wherein the arrangement sequence of the plurality of segment frames remains unchanged in the process of adjusting the display positions of the at least two material segments;
displaying the adjusted plurality of material fragments in the plurality of fragment frames, wherein each fragment frame and the material fragments displayed in the fragment frame form a video unit, and the attribute information of each video unit comprises the arrangement sequence of the fragment frames in each video unit, the limiting duration corresponding to the fragment frames and the playing duration corresponding to the material fragments;
and generating an adjusted video based on the plurality of material fragments and the attribute information of each video unit.
2. The method of claim 1, wherein generating the adjusted video based on the plurality of material segments and the attribute information of each video unit comprises:
acquiring the size relation between the limiting time length corresponding to the fragment frame in each video unit and the playing time length corresponding to the material fragment;
determining playing parameters of the material segments in each video unit based on the size relation;
And generating an adjusted video based on the plurality of material fragments, the arrangement sequence of the fragment frames in each video unit and the playing parameters of the material fragments.
3. The method of claim 2, wherein the video units comprise target video units formed from target segment frames and target material segments displayed within the target segment frames, wherein determining playback parameters for the material segments within each video unit based on the size relationship comprises:
when the limit time length corresponding to the target fragment frame is longer than the play time length corresponding to the target material fragment, increasing the play time length corresponding to the target material fragment to the limit time length; or
And when the limiting time length corresponding to the target fragment frame is smaller than the playing time length corresponding to the target material fragment, reducing the playing time length corresponding to the target material fragment to the limiting time length.
4. The method of claim 3, wherein when the constraint time period corresponding to the target segment frame is longer than the play time period corresponding to the target material segment, increasing the play time period corresponding to the target material segment to the constraint time period includes:
When the limit time length corresponding to the target fragment frame is longer than the play time length corresponding to the target material fragment, acquiring the material type of the target material fragment;
and based on the material type, increasing the playing time length corresponding to the target material fragment to the limiting time length.
5. The method of claim 4, wherein the adding the playing duration corresponding to the target material segment to the limiting duration based on the material type includes:
when the material type is a picture type, continuously displaying the target material segment to the limit duration;
when the material type is a video type, circularly playing the target material segment to the limit duration; or
and when the material type is a video type, playing the target material segment at a first multiple speed to the limit duration, wherein the first multiple is larger than the original multiple.
6. The method of claim 3, wherein when the constraint duration corresponding to the target segment frame is less than the play duration corresponding to the target material segment, reducing the play duration corresponding to the target material segment to the constraint duration comprises:
when the limiting time length corresponding to the target segment frame is smaller than the playing time length corresponding to the target material segment, cutting the target material segment to obtain residual target material segments, wherein the playing time length corresponding to the residual target material segments is equal to the limiting time length corresponding to the target segment frame; or
And when the limiting time length corresponding to the target fragment frame is smaller than the playing time length corresponding to the target material fragment, playing the target material fragment at a second multiple speed to the limiting time length, wherein the second multiple is smaller than the original multiple.
7. The method according to any one of claims 3-6, further comprising:
and when the limiting time length corresponding to the target fragment frame is inconsistent with the playing time length corresponding to the target material fragment, displaying prompt information.
8. The method of any one of claims 1-6, wherein adjusting the display position of at least two of the material segments in response to an adjustment operation applied to at least one of the material segments comprises:
detecting a moving distance of at least one of the material segments based on a drag operation when the drag operation acting on the at least one of the material segments is detected;
And when the moving distance is larger than a distance threshold value, adjusting the display positions of at least two material fragments.
9. The method of claim 8, wherein adjusting the display position of at least two of the material segments when the movement distance is greater than a distance threshold comprises:
and when the moving distance is larger than a distance threshold value and a drag-and-drop operation acting on at least one material segment is detected, adjusting the display positions of at least two material segments.
10. The method of any of claims 1-6, wherein the at least one material segment comprises a first material segment, and wherein adjusting the display position of at least two of the material segments in response to an adjustment operation on the at least one material segment comprises:
determining a first segment frame corresponding to the first material segment before adjustment and a second segment frame corresponding to the first material segment after adjustment from the plurality of segment frames in response to an adjustment operation acting on the first material segment;
and acquiring a second fragment frame and a material fragment displayed in a fragment frame between the first fragment frame and the second fragment frame as a second material fragment, and adjusting the display positions of the first material fragment and the second material fragment.
11. The method of claim 10, wherein adjusting the display positions of the first material segment and the second material segment comprises:
when the first material segments move forward in the fixed arrangement sequence, the second material segments move backward in the fixed arrangement sequence; or
And when the first material segments are moved backwards in the fixed arrangement sequence, the second material segments are moved forwards in the fixed arrangement sequence.
12. A video generating apparatus, the apparatus comprising:
the interface display module is used for displaying a video editing interface, wherein the video editing interface comprises a first display area and a second display area, the first display area is used for displaying a segment queue, the second display area is used for displaying a video generated based on the segment queue, the video is paused, the segment queue comprises a plurality of segment frames with a fixed arrangement sequence and material segments displayed in each segment frame, each segment frame corresponds to a respective limiting duration, and each material segment displayed in each segment frame corresponds to a respective playing duration;
The display position adjusting module is used for responding to the adjusting operation acted on at least one material segment and adjusting the display positions of at least two material segments, wherein the arrangement sequence of the plurality of segment frames is kept unchanged in the process of adjusting the display positions of at least two material segments;
the material segment display module is used for displaying the adjusted plurality of material segments in the plurality of segment frames, wherein each segment frame and the material segments displayed in the segment frame form a video unit, and the attribute information of each video unit comprises the arrangement sequence of the segment frames in each video unit, the limiting duration corresponding to the segment frames and the playing duration corresponding to the material segments;
and the video generation module is used for generating an adjusted video based on the plurality of material fragments and the attribute information of each video unit.
13. An electronic device comprising a memory and a processor, the memory coupled to the processor, the memory storing instructions that when executed by the processor perform the method of any of claims 1-11.
14. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program code, which is callable by a processor for executing the method according to any one of claims 1-11.
CN202110168756.7A 2021-02-07 2021-02-07 Video generation method, device, electronic equipment and storage medium Active CN113055730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110168756.7A CN113055730B (en) 2021-02-07 2021-02-07 Video generation method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110168756.7A CN113055730B (en) 2021-02-07 2021-02-07 Video generation method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113055730A CN113055730A (en) 2021-06-29
CN113055730B true CN113055730B (en) 2023-08-18

Family

ID=76508726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110168756.7A Active CN113055730B (en) 2021-02-07 2021-02-07 Video generation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113055730B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113365106B (en) * 2021-08-10 2022-01-21 北京达佳互联信息技术有限公司 Multimedia resource generation method and device, electronic equipment and storage medium
WO2023056697A1 (en) * 2021-10-09 2023-04-13 普源精电科技股份有限公司 Waveform sequence processing method and processing apparatus, and electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107438839A (en) * 2016-10-25 2017-12-05 深圳市大疆创新科技有限公司 A kind of multimedia editing method, device and intelligent terminal
CN107770626A (en) * 2017-11-06 2018-03-06 腾讯科技(深圳)有限公司 Processing method, image synthesizing method, device and the storage medium of video material
CN111357277A (en) * 2018-11-28 2020-06-30 深圳市大疆创新科技有限公司 Video clip control method, terminal device and system
CN111464735A (en) * 2019-01-21 2020-07-28 阿里巴巴集团控股有限公司 Video shooting method and device, electronic equipment and computer storage medium


Also Published As

Publication number Publication date
CN113055730A (en) 2021-06-29

Similar Documents

Publication Publication Date Title
US11645804B2 (en) Dynamic emoticon-generating method, computer-readable storage medium and computer device
CN112291627B (en) Video editing method and device, mobile terminal and storage medium
CN110275664B (en) Apparatus, method and graphical user interface for providing audiovisual feedback
WO2020107297A1 (en) Video clipping control method, terminal device, system
CN113055730B (en) Video generation method, device, electronic equipment and storage medium
KR101373020B1 (en) The method and system for generating animated art effects on static images
US20170024110A1 (en) Video editing on mobile platform
CN106804003B (en) Video editing method and device based on ffmpeg
JP2004343683A5 (en)
CN107277411B (en) Video recording method and mobile terminal
CN109379631B (en) Method for editing video captions through mobile terminal
CN112637675B (en) Video generation method, device, electronic equipment and storage medium
CN112004138A (en) Intelligent video material searching and matching method and device
CN112004137A (en) Intelligent video creation method and device
US9773524B1 (en) Video editing using mobile terminal and remote computer
CN108845741B (en) AR expression generation method, client, terminal and storage medium
US7844901B1 (en) Circular timeline for video trimming
KR20180027917A (en) Display apparatus and control method thereof
US20210289266A1 (en) Video playing method and apparatus
US8856251B2 (en) Picture processing method and apparatus for instant communication tool
CN111757177B (en) Video clipping method and device
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN108052578B (en) Method and apparatus for information processing
CN110662104B (en) Video dragging bar generation method and device, electronic equipment and storage medium
CN111984173B (en) Expression package generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant