CN116980544B - Video editing method, device, electronic equipment and computer readable storage medium

Info

Publication number
CN116980544B
Authority
CN
China
Prior art keywords
video
target
layer
video frame
layers
Prior art date
Legal status
Active
Application number
CN202311234920.5A
Other languages
Chinese (zh)
Other versions
CN116980544A (en)
Inventor
Du Zhaofeng (杜照丰)
Current Assignee
Beijing Tricolor Technology Co., Ltd.
Original Assignee
Beijing Tricolor Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Tricolor Technology Co., Ltd.
Priority to CN202311234920.5A
Publication of CN116980544A
Application granted
Publication of CN116980544B
Legal status: Active

Classifications

    • H04N 5/265 Mixing (H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television; H04N 5/00: Details of television systems; H04N 5/222: Studio circuitry, devices and equipment; H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects)
    • H04N 21/23424 Processing of video elementary streams involving splicing one content stream with another, e.g. for inserting or substituting an advertisement (H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; H04N 21/23: Processing of content or additional data; H04N 21/234: Processing of video elementary streams)
    • H04N 21/44016 Processing of video elementary streams involving splicing one content stream with another, e.g. for substituting a video clip (H04N 21/00: Selective content distribution; H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; H04N 21/43: Processing of content or additional data; H04N 21/44: Processing of video elementary streams)

Abstract

The application provides a video editing method, a video editing device, an electronic device, and a computer-readable storage medium. The method includes: in response to a user's combined editing operation on at least two target video layers among the video layers, editing them so that an overlapping portion exists between the target video layers; in response to the user's first layer merging operation on the target video layers, merging all target video layers into one first merged layer and generating a second video frame sequence for display in the first merged layer; and in response to the user's second layer merging operation on the remaining video layers and the first merged layer, merging them into one second merged layer and generating, from the original video frame sequences in the remaining video layers and the second video frame sequence in the first merged layer, a third video frame sequence for display in the second merged layer. The method helps reduce the templating of video editing and improves its flexibility.

Description

Video editing method, device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of video editing technologies, and in particular, to a video editing method, a video editing device, an electronic device, and a computer readable storage medium.
Background
While a video plays, the user often becomes interested in its highlight segments, and in order to view those segments of interest on their own, the user generally clips them out separately, obtaining video clips in short-video form.
After obtaining video clips in short-video form, when the user wants to post them to social media, the presentation form of the clips is typically edited first. In the current editing approach, the video clips are generally fed into an existing video template to generate the edited video. However, video generated this way is heavily templated and offers poor editing flexibility.
Disclosure of Invention
Accordingly, an object of the present application is to provide a video editing method, apparatus, electronic device, and computer readable storage medium, so as to reduce the templating of video editing and improve the flexibility of video editing.
In a first aspect, an embodiment of the present application provides a video editing method, where the method is applied to a video scene editor, and the method includes:
when at least two video layers are contained in the target slide, responding to the combined editing operation of a user on at least two target video layers in the video layers, so as to edit the position and/or the size of the target video layers, and enabling an overlapped part to exist between the target video layers; each video layer is filled with an original video frame sequence of a corresponding target video respectively;
in response to a first layer merging operation of the user on the target video layers, merging all the target video layers into one first merged layer, generating a first video frame sequence corresponding to the overlapped part according to an original video frame sequence positioned in the overlapped part in each target video layer, and generating a second video frame sequence for displaying in the first merged layer according to the first video frame sequence and the original video frame sequence positioned in the non-overlapped part in each target video layer;
in response to the user's second layer merging operation on each of the video layers other than the target video layers and the first merged layer, merging each of the other video layers and the first merged layer into one second merged layer, and generating, from the original video frame sequence in each of the other video layers and the second video frame sequence in the first merged layer, a third video frame sequence for display in the second merged layer;
and encoding the third video frame sequence to generate a video file.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, where, before the step of, when the target slide contains at least two video layers, responding to a user's combined editing operation on at least two target video layers among the video layers to edit the position and/or size of the target video layers so that an overlapping portion exists between them, the method further includes:
responding to a first creation operation of the user in a video editing scene for slides, so as to create at least one slide in the video editing scene;
responding to a second creation operation of the user in a target slide aiming at a layer so as to create at least one basic layer in the target slide; the target slide is any slide;
for each base layer, responding to the video inserting operation of the user for the base layer, so as to add the original video frame sequence of the target video into the base layer, and taking the base layer added with the original video frame sequence as the video layer.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, where the original video frame sequence includes multiple frames of original video frames, and each original video frame corresponds to a respective sequence number; the generating a first video frame sequence corresponding to the overlapping portion according to the original video frame sequences located in the overlapping portion in each target video layer includes:
for each sequence number, respectively selecting a target original video frame corresponding to the sequence number from the corresponding original video frame sequences in each target video layer to obtain each target original video frame corresponding to the sequence number;
for each pixel point in the overlapping portion, generating a target pixel value corresponding to the pixel point according to the pixel value of each target original video frame at the pixel point, so as to obtain a target pixel value corresponding to each pixel point in the overlapping portion;
generating a first video frame in the overlapped part corresponding to the sequence number according to the target pixel value of each pixel point in the overlapped part corresponding to the sequence number;
and generating a first video frame sequence corresponding to the overlapped part according to the first video frames in the overlapped part corresponding to each sequence number.
With reference to the second possible implementation manner of the first aspect, the embodiment of the present application provides a third possible implementation manner of the first aspect, wherein the generating, according to a pixel value at the pixel point in each target original video frame, a target pixel value corresponding to the pixel point includes:
calculating the product of a preset weight corresponding to the target original video frame and a pixel value at the pixel point in the target original video frame aiming at each target original video frame to obtain a first value corresponding to the target original video frame at the pixel point;
after obtaining the corresponding first numerical value of each target original video frame at the pixel point, calculating the sum of all the first numerical values to obtain a second numerical value;
and inputting the sum of the second numerical value and the preset brightness value into a saturation value function to obtain a target pixel value corresponding to the pixel point.
With reference to the third possible implementation manner of the first aspect, the embodiment of the present application provides a fourth possible implementation manner of the first aspect, wherein the preset weight of each target original video frame is calculated by:
K = 1/N
wherein N represents the number of target original video frames corresponding to the same sequence number, and K represents the preset weight of each target original video frame corresponding to the same sequence number; the number of target original video frames corresponding to the same sequence number is the same as the number of target video layers.
In a second aspect, an embodiment of the present application further provides a video editing apparatus, where the apparatus is applied to a video scene editor, the apparatus including:
the combining module is used for responding to the combined editing operation of a user on at least two target video layers in the video layers when the target slide contains at least two video layers, so as to edit the position and/or the size of the target video layers, and an overlapped part exists among the target video layers; each video layer is filled with an original video frame sequence of a corresponding target video respectively;
the first merging module is used for responding to the first layer merging operation of the user for the target video layers, merging all the target video layers into one first merging layer, generating a first video frame sequence corresponding to the overlapped part according to the original video frame sequence positioned at the overlapped part in each target video layer, and generating a second video frame sequence for displaying into the first merging layer according to the first video frame sequence and the original video frame sequence positioned at the non-overlapped part in each target video layer;
A second merging module, configured to, in response to the user's second layer merging operation on each of the video layers other than the target video layers and the first merged layer, merge each of the other video layers and the first merged layer into one second merged layer, and generate, from the original video frame sequence in each of the other video layers and the second video frame sequence in the first merged layer, a third video frame sequence for display in the second merged layer;
and the encoding module is used for encoding the third video frame sequence to generate a video file.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation manner of the second aspect, where the apparatus further includes:
a first creation module, configured to, before the combining module responds to the combined editing operation of the user on at least two target video layers when the target slide contains at least two video layers, respond to a first creation operation of the user for slides in a video editing scene, so as to create at least one slide in the video editing scene;
A second creation module, configured to respond to a second creation operation of the user for layers in the target slide, so as to create at least one base layer in the target slide; the target slide is any one of the slides;
and the adding module is used for responding to the video inserting operation of the user for each base layer so as to add the original video frame sequence of the target video into the base layer and take the base layer added with the original video frame sequence as the video layer.
With reference to the second aspect, an embodiment of the present application provides a second possible implementation manner of the second aspect, where the original video frame sequence includes multiple frames of original video frames, and each original video frame corresponds to a respective sequence number; the first merging module is configured to, when generating a first video frame sequence corresponding to the overlapping portion according to an original video frame sequence located in the overlapping portion in each target video layer, specifically:
for each sequence number, respectively selecting a target original video frame corresponding to the sequence number from the corresponding original video frame sequences in each target video layer to obtain each target original video frame corresponding to the sequence number;
For each pixel point in the overlapping portion, generating a target pixel value corresponding to the pixel point according to the pixel value of each target original video frame at the pixel point, so as to obtain a target pixel value corresponding to each pixel point in the overlapping portion;
generating a first video frame in the overlapped part corresponding to the sequence number according to the target pixel value of each pixel point in the overlapped part corresponding to the sequence number;
and generating a first video frame sequence corresponding to the overlapped part according to the first video frames in the overlapped part corresponding to each sequence number.
With reference to the second possible implementation manner of the second aspect, the embodiment of the present application provides a third possible implementation manner of the second aspect, where, when generating the target pixel value corresponding to the pixel point according to the pixel value at the pixel point in each target original video frame, the first merging module is specifically configured to:
calculating the product of a preset weight corresponding to the target original video frame and a pixel value at the pixel point in the target original video frame aiming at each target original video frame to obtain a first value corresponding to the target original video frame at the pixel point;
After obtaining the corresponding first numerical value of each target original video frame at the pixel point, calculating the sum of all the first numerical values to obtain a second numerical value;
and inputting the sum of the second numerical value and the preset brightness value into a saturation value function to obtain a target pixel value corresponding to the pixel point.
With reference to the third possible implementation manner of the second aspect, the embodiment of the present application provides a fourth possible implementation manner of the second aspect, wherein the preset weight of each target original video frame is calculated by:
K = 1/N
wherein N represents the number of target original video frames corresponding to the same sequence number, and K represents the preset weight of each target original video frame corresponding to the same sequence number; the number of target original video frames corresponding to the same sequence number is the same as the number of target video layers.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the possible implementations of the first aspect described above.
With the video editing method, device, electronic device, and computer-readable storage medium provided by the embodiments of the present application, a user can independently edit the position and/or size of each video layer through the video scene editor, and can freely combine any target video layers among the video layers into a new first merged layer, which helps reduce the templating of video editing and improves its flexibility.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a video editing method according to an embodiment of the present application;
FIG. 2 shows a comparison of the target video layers before and after merging provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of a video editing apparatus according to an embodiment of the present application;
fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
Considering that in the current editing approach for video clips, the clips are usually fed into an existing video template to generate the edited video, and that video generated this way is heavily templated and offers poor editing flexibility, the embodiments of the present application provide a video editing method, apparatus, electronic device, and computer-readable storage medium, so as to reduce the templating of video editing and improve its flexibility, as described in the following embodiments.
Embodiment one:
To facilitate understanding of the present embodiment, the video editing method disclosed in the embodiment of the present application is first described in detail. The method is applied to a video scene editor. FIG. 1 shows a flowchart of a video editing method provided by an embodiment of the application; as shown in FIG. 1, the method includes the following steps S101-S104:
S101: when the target slide contains at least two video layers, in response to a user's combined editing operation on at least two target video layers among the video layers, editing the position and/or size of the target video layers so that an overlapping portion exists between them; each video layer is filled with the original video frame sequence of a corresponding target video.
In this embodiment, the shapes of the video layers in the target slide may be the same or different. The shape of a video layer includes any one or more of: square, rectangle, diamond, trapezoid, polygon, circle, ellipse, triangle. Within the target slide, the positions and sizes of different video layers may also differ. In addition, the video layers have a front-to-back stacking relationship, with different video layers at different levels; for example, when the target slide contains three video layers, video layer 1 is at the bottom, video layer 2 in the middle, and video layer 3 on top. Furthermore, the original video frame sequences filled into different video layers may be the same or different, which is not limited by the present application. An original video frame sequence comprises multiple consecutive original video frames.
In this embodiment, when the video layer is circular and the target video is rectangular (i.e., each frame in the original video frame sequence is a rectangular frame), only the portion of the original video frame sequence falling inside the video layer is displayed (retained) in the video layer, and the portion falling outside the video layer is removed.
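This clipping step can be illustrated with a short sketch. The patent itself contains no code, so the following Python fragment is only a minimal illustration under assumed names (numpy arrays as frames, a boolean mask as the layer shape):

```python
import numpy as np

def circular_layer_mask(height, width):
    """Boolean mask of the circle inscribed in a height x width layer."""
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    radius = min(height, width) / 2.0
    ys, xs = np.ogrid[:height, :width]
    return (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2

def clip_frame_to_layer(frame, mask):
    """Keep only the pixels of a rectangular frame that fall inside the layer."""
    clipped = frame.copy()
    clipped[~mask] = 0  # pixels outside the layer shape are removed
    return clipped

# Usage: clip one 240 x 320 RGB frame to a circular layer of the same size.
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
visible = clip_frame_to_layer(frame, circular_layer_mask(240, 320))
```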
The target video layers can be selected from the video layers according to the requirements of users, and the shapes of different target video layers can be the same or different. The target video layers are exemplified by two, namely a target video layer a and a target video layer b, wherein the target video layer a is square, and the target video layer b is round. In this embodiment, the user may adjust the position and/or size of the target video layer such that the target video layer a and the target video layer b partially overlap.
In this embodiment, the user may edit the position and/or size of other video layers in the target slide in addition to that of the target video layers, specifically: in response to the user's adjustment operation on any video layer in the target slide, the position and/or size of that video layer is adjusted. At this time, the display position and/or display size of the original video frame sequence in each video layer may change with the position and/or size of the layer.
S102: in response to the user's first layer merging operation on the target video layers, merging all target video layers into one first merged layer, generating a first video frame sequence corresponding to the overlapping portion according to the original video frame sequence located in the overlapping portion of each target video layer, and generating a second video frame sequence for display in the first merged layer according to the first video frame sequence and the original video frame sequences located in the non-overlapping portions of each target video layer.
Fig. 2 shows a comparison of the target video layers before and after merging, as provided by the embodiment of the present application. As shown in fig. 2, target video layer a is square and target video layer b is circular; in terms of the front-to-back relationship of the layers, target video layer a is located in front of target video layer b, and there is an overlapping portion between them.
In this embodiment, since there is an overlapping portion between target video layer a and target video layer b, if the two are merged into a first merged layer, the content displayed in the overlapping portion needs to be determined anew. The specific determination process is as follows: when an overlapping portion exists between target video layers a and b, a new first video frame sequence corresponding to the overlapping portion (i.e., the content to be displayed in the overlapping portion) is generated from the original video frame sequence a1 located in the overlapping portion of target video layer a and the original video frame sequence b1 located in the overlapping portion of target video layer b. A second video frame sequence for display in the first merged layer is then generated from the first video frame sequence, the original video frame sequence a2 located in the non-overlapping portion of target video layer a, and the original video frame sequence b2 located in the non-overlapping portion of target video layer b.
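As an illustration of this composition step, the following sketch (illustrative, not from the patent) assumes both target layers have been rasterized to the coordinate system of the first merged layer as RGB frames with boolean coverage masks; the overlap is blended with equal preset weights and saturated, as in steps S10221-S10223 below, while each non-overlapping region keeps its own layer's pixels:

```python
import numpy as np

def merge_two_layers(frame_a, frame_b, mask_a, mask_b, brightness=0.0):
    """Compose one frame of the first merged layer from target layers a and b."""
    overlap = mask_a & mask_b   # pixels covered by both layers
    only_a = mask_a & ~mask_b   # non-overlapping part of layer a
    only_b = mask_b & ~mask_a   # non-overlapping part of layer b

    out = np.zeros_like(frame_a, dtype=np.float32)
    out[only_a] = frame_a[only_a]
    out[only_b] = frame_b[only_b]
    # Overlap: weighted sum (K = 1/2 per layer) plus a preset brightness value,
    # clamped to the valid range by the "saturation value function".
    blended = (0.5 * frame_a.astype(np.float32)
               + 0.5 * frame_b.astype(np.float32) + brightness)
    out[overlap] = blended[overlap]
    return np.clip(out, 0, 255).astype(np.uint8)
```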
S103: in response to the user's second layer merging operation on each of the video layers other than the target video layers and the first merged layer, merging each of the other video layers and the first merged layer into one second merged layer, and generating a third video frame sequence for display in the second merged layer from the original video frame sequence in each of the other video layers and the second video frame sequence in the first merged layer.
When the target slide contains other video layers besides the target video layers, those other video layers and the first merged layer need to be merged into a second merged layer. For example, if the target slide includes other video layers c and d in addition to the target video layers, then other video layer c, other video layer d, and the first merged layer are merged to obtain the second merged layer.
Meanwhile, a third video frame sequence for display into the second merging layer is generated from the original video frame sequence c1 in the other video layer c, the original video frame sequence d1 in the other video layer d, and the second video frame sequence in the first merging layer.
S104: and encoding the third video frame sequence to generate a video file.
The third video frame sequence comprises multiple third video frames, and the video file is obtained by encoding the third video frame sequence into H.264 format. The video file may specifically be an MP4 file.
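The patent does not prescribe a particular encoder. One common way to realize this step is to pipe raw RGB frames into the ffmpeg command-line tool and let libx264 produce the MP4 file; the sketch below is an illustration under that assumption (ffmpeg with libx264 installed and on the PATH), not the patented implementation:

```python
import subprocess

def encode_h264(frames, path, fps=25):
    """Encode a sequence of RGB frames (H x W x 3 uint8) into an H.264 MP4 file."""
    height, width = frames[0].shape[:2]
    cmd = [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                      # read raw frames from stdin
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        path,
    ]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(frame.tobytes())
    proc.stdin.close()
    proc.wait()
```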
In this embodiment, after obtaining the video file, the video file may be uploaded to a video playing platform, so that a user of the video playing platform views the video file.
In one possible implementation, before step S101 is performed, the following steps S1001-S1003 may also be performed:
S1001: in response to a first creation operation of the user for slides in the video editing scene, at least one slide is created in the video editing scene.
In this embodiment, a user may create a video editing scene in a video scene editor and then create at least one slide in the video editing scene.
S1002: in response to a second creation operation of the user for layers in the target slide, at least one base layer is created in the target slide; the target slide is any one of the slides.
In this embodiment, the display interface of the video scene editor includes base layer styles with various shapes. The user can select the layer style required by the user and drag the layer style into the target slide, namely the second creation operation at the moment can be a dragging operation.
In this embodiment, the style of a base layer includes any one or more of the following: square, rectangle, diamond, trapezoid, polygon, circle, ellipse, triangle. The styles of the base layers created in the target slide may be the same or different.
The user can adjust the position and/or size of a base layer in the target slide according to their own needs, specifically: in response to the user's adjustment operation on the base layer, the position and/or size of the base layer in the target slide is adjusted.
S1003: for each base layer, in response to a video insertion operation by the user for that base layer, the original video frame sequence of a target video is added to the base layer, and the base layer with the added original video frame sequence is taken as a video layer.
In this embodiment, for each base layer, in response to a video insertion operation by the user for that base layer, the inserted target video is decoded from H.264 into RGB or YUV format to obtain the original video frame sequence of the target video, which is then added to the base layer. The original video frame sequence comprises multiple consecutive original video frames. When inserting the video into the base layer, the user can directly drag the selected target video into the base layer.
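Decoding can be realized symmetrically, again with ffmpeg as one possible tool (an assumption; the patent only names the formats). The sketch below converts an H.264 file to raw rgb24 frames and assumes the frame dimensions are known in advance, whereas a real implementation would probe them from the file:

```python
import subprocess
import numpy as np

def decode_to_rgb_frames(path, width, height):
    """Decode a video file into RGB frames (the 'original video frame sequence')."""
    cmd = ["ffmpeg", "-i", path, "-f", "rawvideo", "-pix_fmt", "rgb24", "-"]
    raw = subprocess.run(cmd, capture_output=True, check=True).stdout
    frame_size = width * height * 3
    return [
        np.frombuffer(raw[i:i + frame_size], dtype=np.uint8).reshape(height, width, 3)
        for i in range(0, len(raw) - frame_size + 1, frame_size)
    ]
```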
In this embodiment, after the original video frame sequence of the target video is added to the base layer, the portion of the video falling inside the base layer is displayed in the base layer, and the portion of the original video frame sequence falling outside the base layer is not displayed.
In this embodiment, after the original video frame sequence is added to the base layer, the display portion of the original video frame sequence displayed in the base layer may be adjusted, specifically: the display portion of the original video frame sequence in the base layer is adjusted in response to a user adjusting the display size and/or display position of the original video frame sequence in the base layer.
For example, if the original video frame sequence contains both a rabbit and a fox, and only the rabbit is to be displayed in the base layer, the displayed portion of the original video frame sequence can be adjusted by the adjustment method described above.
In one possible implementation, the original video frame sequence includes multiple frames of original video frames, each original video frame corresponding to a respective sequence number; when executing step S102 to generate the first video frame sequence corresponding to the overlapping portion according to the original video frame sequence located in the overlapping portion in each target video layer, the following steps S1021-S1024 may be specifically executed:
S1021: for each sequence number, the target original video frame corresponding to that sequence number is selected from the corresponding original video frame sequence in each target video layer, so as to obtain the target original video frames corresponding to that sequence number.
In this embodiment, the original video frames within the same original video frame sequence are consecutive, and each original video frame corresponds to a respective sequence number. For example, when the original video frame sequence contains 10 original video frames, the sequence numbers corresponding to the 10 original video frames are 1, 2, 3, ..., 10, respectively.
Illustratively, assume that the original video frame sequence in target video layer a is a'(1-10), containing 10 original video frames, and that the original video frame sequence in target video layer b is b'(1-10), also containing 10 original video frames.
For sequence number 1, the target original video frame a'(1) corresponding to sequence number 1 is selected from the original video frame sequence a'(1-10) in target video layer a, and the target original video frame b'(1) corresponding to sequence number 1 is selected from the original video frame sequence b'(1-10) in target video layer b.
S1022: for each pixel point in the overlapping portion, a target pixel value corresponding to the pixel point is generated according to the pixel value of each target original video frame at that pixel point, so as to obtain the target pixel value corresponding to each pixel point in the overlapping portion.
For example, for a pixel point A in the overlapping portion, if the pixel value at pixel point A in target original video frame a'(1) is S1 and the pixel value at pixel point A in target original video frame b'(1) is S2, the target pixel value corresponding to pixel point A is calculated from the pixel values S1 and S2. In this way, the target pixel value corresponding to each pixel point in the overlapping portion corresponding to sequence number 1 is obtained.
S1023: the first video frame in the overlapping portion corresponding to the sequence number is generated according to the target pixel value of each pixel point in the overlapping portion corresponding to that sequence number.
Continuing the above example, after the target pixel value corresponding to each pixel point in the overlapping portion corresponding to sequence number 1 is obtained, the first video frame in the overlapping portion corresponding to sequence number 1 is generated from those target pixel values.
S1024: the first video frame sequence corresponding to the overlapping portion is generated from the first video frames in the overlapping portion corresponding to each sequence number.
After the 10 first video frames in the overlapping portion corresponding to sequence numbers 1-10 are obtained by the above method, the first video frame sequence corresponding to the overlapping portion is generated according to the sequence number of each first video frame. The first video frame sequence is displayed only in the overlapping portion.
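Steps S1021-S1024 can be combined into the following sketch (illustrative names, two target layers assumed), which pairs the frames of the two layers by sequence number and blends each pair inside the overlapping portion using the equal preset weights described below:

```python
import numpy as np

def first_video_frame_sequence(seq_a, seq_b, overlap_mask, brightness=0.0):
    """Build the first video frame sequence for the overlapping portion."""
    weight = 0.5  # K = 1/N with N = 2 target video layers
    first_sequence = []
    for frame_a, frame_b in zip(seq_a, seq_b):  # frames paired by sequence number
        blended = (weight * frame_a.astype(np.float32)
                   + weight * frame_b.astype(np.float32) + brightness)
        frame = np.clip(blended, 0, 255).astype(np.uint8)  # saturation value function
        frame[~overlap_mask] = 0  # the sequence is displayed only in the overlap
        first_sequence.append(frame)
    return first_sequence
```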
In a possible implementation manner, when performing step S1022 to generate, according to the pixel value at the pixel point in each target original video frame, the target pixel value corresponding to the pixel point, the following steps S10221 to S10223 may be specifically performed:
S10221: for each target original video frame, calculating the product of the preset weight corresponding to that frame and the pixel value at the pixel point in that frame, to obtain the first value corresponding to that frame at the pixel point;
S10222: after the first value corresponding to each target original video frame at the pixel point is obtained, calculating the sum of all the first values to obtain a second value;
S10223: inputting the sum of the second value and the preset brightness value into the saturation value function to obtain the target pixel value corresponding to the pixel point.
For example, if there are two target original video frames, a and b respectively, the target pixel value corresponding to the pixel point may be calculated by the following formula:
X = f(K1·S1 + K2·S2 + P)
wherein X represents the target pixel value corresponding to the pixel point; S1 represents the pixel value at the pixel point in target original video frame a, and K1 the preset weight corresponding to target original video frame a; S2 represents the pixel value at the pixel point in target original video frame b, and K2 the preset weight corresponding to target original video frame b; K1·S1 + K2·S2 is the second value; P is the preset brightness value; and f is the saturation value function.
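As a worked instance of this formula (values chosen for illustration only, not taken from the patent):

```python
def target_pixel_value(s1, s2, k1=0.5, k2=0.5, brightness=0, max_value=255):
    """X = f(K1*S1 + K2*S2 + P), where f saturates the result to [0, max_value]."""
    second_value = k1 * s1 + k2 * s2          # sum of the two "first values"
    return max(0, min(max_value, second_value + brightness))

# S1 = 100, S2 = 180, equal weights, preset brightness 20:
# 0.5*100 + 0.5*180 + 20 = 160, already within [0, 255], so X = 160.
print(target_pixel_value(100, 180, brightness=20))  # 160.0
```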
In one possible implementation, the preset weights for each target original video frame are calculated by:
K = 1/N
wherein N represents the number of target original video frames corresponding to the same sequence number, and K represents the preset weight of each target original video frame corresponding to the same sequence number; the number of target original video frames corresponding to the same sequence number is the same as the number of target video layers.
For example, when there are two target video layers, i.e., the target original video frames are a and b, respectively, the preset weight corresponding to the target original video frame a is 50%, and the preset weight corresponding to the target original video frame b is also 50%.
Embodiment two:
based on the same technical concept, the embodiment of the present application further provides a video editing apparatus, where the apparatus is applied to a video scene editor, and fig. 3 shows a schematic structural diagram of the video editing apparatus provided by the embodiment of the present application, and as shown in fig. 3, the apparatus includes:
A combination module 301, configured to, when at least two video layers are included in a target slide, respond to a combined editing operation of a user for at least two target video layers in the video layers, so as to edit a position and/or a size of the target video layers, so that an overlapping portion exists between the target video layers; each video layer is filled with an original video frame sequence of a corresponding target video respectively;
a first merging module 302, configured to, in response to a first layer merging operation of the user on the target video layers, merge all the target video layers into one first merged layer, generate a first video frame sequence corresponding to the overlapping portion according to an original video frame sequence located in the overlapping portion in each of the target video layers, and generate a second video frame sequence for display into the first merged layer according to the first video frame sequence and an original video frame sequence located in a non-overlapping portion in each of the target video layers;
a second merging module 303, configured to, in response to the user's second layer merging operation on each of the video layers other than the target video layers and the first merged layer, merge each of the other video layers and the first merged layer into one second merged layer, and generate, from the original video frame sequence in each of the other video layers and the second video frame sequence in the first merged layer, a third video frame sequence for display in the second merged layer;
The encoding module 304 is configured to encode the third video frame sequence to generate a video file.
Optionally, the apparatus further comprises:
a first creation module, configured to, before the combining module responds to the combined editing operation of the user on at least two target video layers when the target slide contains at least two video layers, respond to a first creation operation of the user for slides in a video editing scene, so as to create at least one slide in the video editing scene;
a second creation module, configured to respond to a second creation operation of the user for layers in the target slide, so as to create at least one base layer in the target slide; the target slide is any one of the slides;
and the adding module is used for responding to the video inserting operation of the user for each base layer so as to add the original video frame sequence of the target video into the base layer and take the base layer added with the original video frame sequence as the video layer.
Optionally, the original video frame sequence includes multiple frames of original video frames, each original video frame corresponding to a respective sequence number; when generating the first video frame sequence corresponding to the overlapping portion according to the original video frame sequence located in the overlapping portion in each target video layer, the first merging module 302 is specifically configured to:
for each sequence number, respectively selecting a target original video frame corresponding to the sequence number from the corresponding original video frame sequences in each target video layer to obtain each target original video frame corresponding to the sequence number;
for each pixel point in the overlapping portion, generating a target pixel value corresponding to the pixel point according to the pixel value of each target original video frame at the pixel point, so as to obtain a target pixel value corresponding to each pixel point in the overlapping portion;
generating a first video frame in the overlapped part corresponding to the sequence number according to the target pixel value of each pixel point in the overlapped part corresponding to the sequence number;
and generating a first video frame sequence corresponding to the overlapped part according to the first video frames in the overlapped part corresponding to each sequence number.
Optionally, when generating the target pixel value corresponding to the pixel point according to the pixel value at the pixel point in each target original video frame, the first merging module 302 is specifically configured to:
calculating the product of a preset weight corresponding to the target original video frame and a pixel value at the pixel point in the target original video frame aiming at each target original video frame to obtain a first value corresponding to the target original video frame at the pixel point;
after obtaining the corresponding first numerical value of each target original video frame at the pixel point, calculating the sum of all the first numerical values to obtain a second numerical value;
and inputting the sum of the second numerical value and the preset brightness value into a saturation value function to obtain a target pixel value corresponding to the pixel point.
Optionally, the preset weight of each target original video frame is calculated by:
K = 1/N
wherein N represents the number of target original video frames corresponding to the same sequence number, and K represents the preset weight of each target original video frame corresponding to the same sequence number; the number of target original video frames corresponding to the same sequence number is the same as the number of target video layers.
Embodiment III:
based on the same technical concept, the embodiment of the present application further provides an electronic device, and fig. 4 shows a schematic structural diagram of the electronic device provided by the embodiment of the present application, as shown in fig. 4, the electronic device 400 includes: a processor 401, a memory 402 and a bus 403, the memory storing machine-readable instructions executable by the processor, the processor 401 executing machine-readable instructions to perform the method steps described in the first embodiment when the electronic device is operating, the processor 401 communicating with the memory 402 via the bus 403.
Embodiment four:
based on the same technical idea, a fourth embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, which when being executed by a processor performs the method steps described in the first embodiment.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working processes of the apparatus, the electronic device and the computer readable storage medium described above may refer to corresponding processes in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, and the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, and for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, indirect coupling or communication connection of devices or modules, electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the above embodiments are only specific implementations of the present application, used to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that anyone familiar with the art may still modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions of some technical features, within the technical scope disclosed in the present application; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of video editing, the method being applied to a video scene editor, the method comprising:
when at least two video layers are contained in the target slide, responding to the combined editing operation of a user on at least two target video layers in the video layers, so as to edit the position and/or the size of the target video layers, and enabling an overlapped part to exist between the target video layers; each video layer is filled with an original video frame sequence of a corresponding target video respectively;
In response to a first layer merging operation of the user on the target video layers, merging all the target video layers into one first merged layer, generating a first video frame sequence corresponding to the overlapped part according to an original video frame sequence positioned in the overlapped part in each target video layer, and generating a second video frame sequence for displaying in the first merged layer according to the first video frame sequence and the original video frame sequence positioned in the non-overlapped part in each target video layer;
in response to the user's second layer merging operation on each of the video layers other than the target video layers and the first merged layer, merging each of the other video layers and the first merged layer into one second merged layer, and generating, from the original video frame sequence in each of the other video layers and the second video frame sequence in the first merged layer, a third video frame sequence for display in the second merged layer;
and encoding the third video frame sequence to generate a video file.
2. The method of claim 1, wherein before the step of, when at least two video layers are contained in the target slide, responding to a user's combined editing operation on at least two target video layers among the video layers to edit the position and/or size of the target video layers so that an overlapping portion exists between them, the method further comprises:
Responding to a first creation operation of the user in a video editing scene for slides, so as to create at least one slide in the video editing scene;
responding to a second creation operation of the user in a target slide aiming at a layer so as to create at least one basic layer in the target slide; the target slide is any slide;
for each base layer, responding to the video inserting operation of the user for the base layer, so as to add the original video frame sequence of the target video into the base layer, and taking the base layer added with the original video frame sequence as the video layer.
3. The method of claim 1, wherein the sequence of original video frames comprises a plurality of frames of original video frames, each original video frame corresponding to a respective sequence number; the generating a first video frame sequence corresponding to the overlapping portion according to the original video frame sequences located in the overlapping portion in each target video layer includes:
for each sequence number, respectively selecting a target original video frame corresponding to the sequence number from the corresponding original video frame sequences in each target video layer to obtain each target original video frame corresponding to the sequence number;
For each pixel point in the overlapping portion, generating a target pixel value corresponding to the pixel point according to the pixel value of each target original video frame at the pixel point, so as to obtain a target pixel value corresponding to each pixel point in the overlapping portion;
generating a first video frame in the overlapped part corresponding to the sequence number according to the target pixel value of each pixel point in the overlapped part corresponding to the sequence number;
and generating a first video frame sequence corresponding to the overlapped part according to the first video frames in the overlapped part corresponding to each sequence number.
4. A method according to claim 3, wherein the generating a target pixel value corresponding to the pixel point according to the pixel value at the pixel point in each target original video frame includes:
calculating the product of a preset weight corresponding to the target original video frame and a pixel value at the pixel point in the target original video frame aiming at each target original video frame to obtain a first value corresponding to the target original video frame at the pixel point;
after obtaining the corresponding first numerical value of each target original video frame at the pixel point, calculating the sum of all the first numerical values to obtain a second numerical value;
And inputting the sum of the second numerical value and the preset brightness value into a saturation value function to obtain a target pixel value corresponding to the pixel point.
5. The method of claim 4, wherein the preset weight for each of the target original video frames is calculated by:
K = 1/N
wherein N represents the number of target original video frames corresponding to the same sequence number, and K represents the preset weight of each target original video frame corresponding to the same sequence number; the number of target original video frames corresponding to the same sequence number is the same as the number of target video layers.
6. A video editing apparatus, the apparatus being applied to a video scene editor, the apparatus comprising:
the combining module is used for responding to the combined editing operation of a user on at least two target video layers in the video layers when the target slide contains at least two video layers, so as to edit the position and/or the size of the target video layers, and an overlapped part exists among the target video layers; each video layer is filled with an original video frame sequence of a corresponding target video respectively;
The first merging module is used for responding to the first layer merging operation of the user for the target video layers, merging all the target video layers into one first merging layer, generating a first video frame sequence corresponding to the overlapped part according to the original video frame sequence positioned at the overlapped part in each target video layer, and generating a second video frame sequence for displaying into the first merging layer according to the first video frame sequence and the original video frame sequence positioned at the non-overlapped part in each target video layer;
a second merging module, configured to, in response to the user's second layer merging operation on each of the video layers other than the target video layers and the first merged layer, merge each of the other video layers and the first merged layer into one second merged layer, and generate, from the original video frame sequence in each of the other video layers and the second video frame sequence in the first merged layer, a third video frame sequence for display in the second merged layer;
and the encoding module is used for encoding the third video frame sequence to generate a video file.
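A structural sketch, in Python, of how these modules could chain together, reducing layer geometry to full-canvas frames with boolean coverage masks; every name is illustrative, the blend omits the brightness offset of claim 4 for brevity, and no particular encoder is assumed (encode_fn is left to the caller):

    import numpy as np

    def composite(stack, blend_overlap=False):
        # stack: list of (frame, mask) pairs for one sequence number;
        # frame is an (H, W, 3) uint8 array, mask an (H, W) bool array.
        canvas = np.zeros(stack[0][0].shape, dtype=np.float64)
        count = np.zeros(canvas.shape[:2], dtype=np.int64)
        for frame, mask in stack:
            if blend_overlap:
                canvas[mask] += frame[mask]  # accumulate for averaging
                count[mask] += 1
            else:
                canvas[mask] = frame[mask]   # later layers occlude earlier
                count[mask] = 1
        covered = count > 0
        canvas[covered] /= count[covered][:, None]  # 1/N blend where overlapped
        return np.clip(canvas, 0, 255).astype(np.uint8)

    def run_pipeline(target_layers, other_layers, encode_fn):
        # Each element of target_layers / other_layers is one video layer:
        # a list of (frame, mask) pairs indexed by sequence number.
        n_frames = len(target_layers[0])
        third_sequence = []
        for i in range(n_frames):
            targets = [layer[i] for layer in target_layers]
            # First merging module: blend the targets' overlap while
            # keeping non-overlapping pixels (second video frame sequence).
            merged = composite(targets, blend_overlap=True)
            merged_mask = np.zeros(merged.shape[:2], dtype=bool)
            for _, mask in targets:
                merged_mask |= mask
            # Second merging module: stack the other video layers and the
            # first merged layer (third video frame sequence).
            others = [layer[i] for layer in other_layers]
            third_sequence.append(composite(others + [(merged, merged_mask)]))
        encode_fn(third_sequence)  # encoding module: write the video file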
7. The apparatus of claim 6, wherein the apparatus further comprises:
a first creating module, configured to respond, before the combining module responds to the combined editing operation, to a first creating operation of the user for slides in a video editing scene, so as to create at least one slide in the video editing scene;
a second creating module, configured to respond to a second creating operation of the user for layers in the target slide, so as to create at least one base layer in the target slide, the target slide being any one of the slides;
and an adding module, configured to respond to a video inserting operation of the user for each base layer, so as to add the original video frame sequence of a target video into the base layer and to treat the base layer filled with the original video frame sequence as a video layer.
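For orientation, a small Python sketch of the creation flow these three modules describe; the types and function names are illustrative only:

    from dataclasses import dataclass, field

    @dataclass
    class BaseLayer:
        frames: list = field(default_factory=list)  # original video frame sequence

    @dataclass
    class Slide:
        layers: list = field(default_factory=list)

    def create_slide(scene):
        # First creating module: add a slide to the video editing scene.
        slide = Slide()
        scene.append(slide)
        return slide

    def create_base_layer(slide):
        # Second creating module: add a base layer to the target slide.
        layer = BaseLayer()
        slide.layers.append(layer)
        return layer

    def insert_video(layer, original_frames):
        # Adding module: fill the base layer with a target video's
        # original video frame sequence, turning it into a video layer.
        layer.frames = list(original_frames)
        return layer

Chaining the three mirrors the claimed order: create a slide, create a base layer in it, then insert a video so the base layer becomes a video layer eligible for the combined editing of claim 6.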
8. The apparatus of claim 6, wherein the original video frame sequence comprises a plurality of original video frames, each original video frame corresponding to a respective sequence number; and when generating the first video frame sequence corresponding to the overlapping portion according to the original video frame sequences located in the overlapping portion in the target video layers, the first merging module is specifically configured to:
for each sequence number, select, from the original video frame sequence of each target video layer, the target original video frame corresponding to the sequence number, to obtain the target original video frames corresponding to the sequence number;
for each pixel point in the overlapping portion, generate a target pixel value for the pixel point according to the pixel value of each target original video frame at that pixel point, thereby obtaining a target pixel value for each pixel point in the overlapping portion;
generate, for the sequence number, a first video frame of the overlapping portion according to the target pixel values of the pixel points in the overlapping portion;
and generate the first video frame sequence corresponding to the overlapping portion according to the first video frames obtained for the respective sequence numbers.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, performs the steps of the method according to any one of claims 1 to 5.
CN202311234920.5A 2023-09-22 2023-09-22 Video editing method, device, electronic equipment and computer readable storage medium Active CN116980544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311234920.5A CN116980544B (en) 2023-09-22 2023-09-22 Video editing method, device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN116980544A (en) 2023-10-31
CN116980544B (en) 2023-12-01

Family

ID=88471663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311234920.5A Active CN116980544B (en) 2023-09-22 2023-09-22 Video editing method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116980544B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722897A (en) * 2012-05-23 2012-10-10 方正国际软件有限公司 Method and system for processing multilayer image
CN110348527A (en) * 2019-07-16 2019-10-18 百度在线网络技术(北京)有限公司 A kind of method for amalgamation processing of picture, device, equipment and storage medium
CN112150591A (en) * 2020-09-30 2020-12-29 广州光锥元信息科技有限公司 Intelligent animation and graphic layer multimedia processing device
CN112184856A (en) * 2020-09-30 2021-01-05 广州光锥元信息科技有限公司 Multimedia processing device supporting multi-layer special effect and animation mixing
CN115080038A (en) * 2022-06-17 2022-09-20 浙江大学 Layer processing method, model generation method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NZ518774A (en) * 1999-10-22 2004-09-24 Activesky Inc An object oriented video system

Similar Documents

Publication Publication Date Title
CN106507200B (en) Video playing content insertion method and system
US10090018B2 (en) Method and device for generating video slides
CN105516618A (en) Method and device for making video and communication terminal
CN109874048B (en) Video window assembly semitransparent display method and device and computer equipment
CN104504447A (en) Method and device for distributing virtual seat images
CN105005599A (en) Photograph sharing method and mobile terminal
CN110688506A (en) Template generation method and device, electronic equipment and storage medium
US10460490B2 (en) Method, terminal, and computer storage medium for processing pictures in batches according to preset rules
CN116980544B (en) Video editing method, device, electronic equipment and computer readable storage medium
CN114501100A (en) Live broadcast page skipping method and system
KR101352203B1 (en) Method of distributing plug-in for configuring effect on mobile movie authoring tool
CN107241635B (en) Bullet screen position switching method and device
CN109640148A (en) A kind of method and device by text box text exhibition content
CN103607629A (en) Multimedia file playing method and electronic terminal
CN110941413B (en) Display screen generation method and related device
CN113596351A (en) Video display method and device
KR101352737B1 (en) Method of setting up effect on mobile movie authoring tool using effect configuring data and computer-readable meduim carring effect configuring data
CN113343027A (en) Interactive video editing and interactive video display method and device
CN106934847B (en) Pattern generation method and device
CN105260345A (en) Method and device for constructing facial characters and electronic equipment
CN110705242A (en) Method and device for manufacturing slide template and electronic equipment
CN113254700B (en) Interactive video editing method, device, computer equipment and storage medium
CN111614912B (en) Video generation method, device, equipment and storage medium
CN113542846B (en) AR barrage display method and device
JP2014192795A (en) Electronic album insufficient image retrieval device and method for controlling operation of the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant