CN114900736A - Video generation method and device and electronic equipment - Google Patents

Video generation method and device and electronic equipment

Info

Publication number
CN114900736A
Authority
CN
China
Prior art keywords
video
frame
target
texture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210312376.0A
Other languages
Chinese (zh)
Inventor
梅立琴
黄少豪
刘嘉星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210312376.0A priority Critical patent/CN114900736A/en
Publication of CN114900736A publication Critical patent/CN114900736A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N 21/47205: End-user interface for interacting with content, for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
              • H04N 21/4355: Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen
              • H04N 21/4398: Processing of audio elementary streams involving reformatting operations of audio signals
              • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
              • H04N 21/440218: Reformatting operations of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a video generation method, a video generation apparatus and an electronic device. The method includes: acquiring a video editing instruction, and determining the arrangement sequence of a target image and the frame images in an initial video; in the frame image storage area of the initial video, performing texture drawing on the target image based on a preset frame rate and duration to obtain bitmap data of the target image, and determining the frame positions of the bitmap data in the target video to be generated based on the arrangement sequence; and obtaining the video frame textures of the target video to be generated based on the bitmap data, the frame positions of the bitmap data in the target video to be generated and the frame images of the initial video, and generating the target video based on the video frame textures. In this manner, texture drawing of the image is performed in the storage area of the initial video, so that the image is mixed into the initial video; mixed editing of video and pictures is thus realized through the MediaCodec interface of the Android system, improving the versatility and efficiency of video editing.

Description

Video generation method and device and electronic equipment
Technical Field
The present invention relates to the field of multimedia technologies, and in particular, to a video generation method and apparatus, and an electronic device.
Background
In the video editing process, an edited video file is obtained by cutting, changing speed, adding music, adding character stickers, and applying beauty filters to original materials such as videos, pictures and music. In the related art, a video editing tool can be developed in C++ based on the FFmpeg media library, but such a tool has high development and debugging costs and low efficiency, its program package is large, and its video encoding and decoding efficiency is also low. Alternatively, a video editing tool can be developed using the native MediaCodec interface of the Android system, but the resulting tool has a single editing function: it can only edit a single video and can hardly support mixed editing of multiple original materials.
Disclosure of Invention
In view of this, the present invention provides a video generation method, a video generation apparatus, and an electronic device, so as to implement mixed editing of video and pictures through the MediaCodec interface of the Android system, and improve the versatility and editing efficiency of video editing.
In a first aspect, an embodiment of the present invention provides a video generation method, where the method includes: acquiring a video editing instruction for an initial video, and determining the arrangement sequence of a target image and a frame image in the initial video based on the video editing instruction; in a frame image storage area of the initial video, texture drawing is carried out on a target image based on a preset frame rate and duration to obtain bitmap data of the target image, and the frame position of the bitmap data in the target video to be generated is determined based on an arrangement sequence; and obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated and the frame image of the initial video, and generating the target video based on the video frame texture.
The step of obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated, and the frame image of the initial video includes: for each frame position of a target video to be generated, if bitmap data corresponding to the frame position is empty, drawing a video frame texture on the frame position based on a frame image of an initial video; and if the bitmap data corresponding to the frame position is not empty, using the bitmap data as the video frame texture at the frame position.
The step of performing texture rendering on the target image based on the preset frame rate and duration to obtain bitmap data of the target image includes: determining the texture drawing times of the target image based on the preset frame rate and duration; and performing texture drawing on the target image in a frame buffer area corresponding to the initial video until the texture drawing times are reached, and obtaining bitmap data of the target image.
The step of performing texture rendering on the target image to obtain bitmap data of the target image includes: determining a drawing mapping matrix based on the size of the frame image of the initial video and the size of the target image; performing texture drawing on the target image based on the drawing mapping matrix to obtain bitmap data of the target image; in the bitmap data, the display area corresponding to the target image is located in the designated area of the bitmap data.
The step of generating the target video based on the video frame texture includes: and coding the video frame texture on each frame position of the target video to be generated to obtain the target video.
The step of encoding the video frame texture at each frame position of the target video to be generated to obtain the target video includes: receiving a special effect adding instruction; wherein the special effect adding instruction includes but is not limited to: adding one or more of character stickers, setting a beauty filter, setting a transition special effect and setting a video proportion; rendering a special effect on the video frame texture of the target video in a frame image storage area of the target video to obtain the video frame texture after the special effect is rendered; and coding the texture of the video frame after the special effect is rendered to obtain the target video.
After the step of generating the target video based on the texture of the video frame, the method further includes: acquiring an audio material, and editing the audio material based on the duration of the target video to obtain audio data; wherein the duration of the audio data is the same as that of the target video; and writing the audio data into the audio track, and writing the target video into the video track so as to synthesize the audio data and the target video and obtain the final target video.
In a second aspect, an embodiment of the present invention provides a video generating apparatus, where the apparatus includes: the instruction acquisition module is used for acquiring a video editing instruction for the initial video and determining the arrangement sequence of the target image and the frame image in the initial video based on the video editing instruction; the texture drawing module is used for performing texture drawing on the target image in a frame image storage area of the initial video based on a preset frame rate and duration to obtain bitmap data of the target image, and determining the frame position of the bitmap data in the target video to be generated based on the arrangement sequence; and the video generation module is used for obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated and the frame image of the initial video, and generating the target video based on the video frame texture.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor executes the machine executable instructions to implement the video generation method.
In a fourth aspect, embodiments of the present invention provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the video generation method described above.
The embodiment of the invention has the following beneficial effects:
the video generation method, the video generation apparatus and the electronic device acquire a video editing instruction for an initial video, and determine the arrangement sequence of a target image and the frame images in the initial video based on the video editing instruction; in the frame image storage area of the initial video, texture drawing is performed on the target image based on a preset frame rate and duration to obtain bitmap data of the target image, and the frame position of the bitmap data in the target video to be generated is determined based on the arrangement sequence; the video frame texture of the target video to be generated is obtained based on the bitmap data, the frame position of the bitmap data in the target video to be generated and the frame images of the initial video, and the target video is generated based on the video frame texture. In this manner, the texture of the target image is drawn in the storage area of the initial video, so that the target image is mixed into the initial video; mixed editing of video and pictures is thus realized through the MediaCodec interface of the Android system, improving the versatility and efficiency of video editing.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a video generation method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an audio editing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a video mixing editing method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an overall implementation manner of audio and video synthesis according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a video generating apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Video editing mainly involves two processes: video decoding and video encoding. During video decoding, the input video material is in a container format such as MP4, typically encoded as H.264, and is decoded into video data in a raw format such as YUV. During video encoding, raw video data such as YUV is encoded into H.264 or H.265 video data, and the audio data and video data are then combined into a video file in a format such as MP4.
In the related art, there are two main video encoding and decoding approaches. In the first approach, audio and video processing is implemented in C++ based on the FFmpeg media library. The disadvantages of this approach mainly include: high development cost, since the FFmpeg media library is large and written entirely in C++, making development and debugging inefficient; the FFmpeg media library transcodes on the Central Processing Unit (CPU), i.e. it uses soft decoding and soft encoding, which is much less efficient than hard decoding and hard encoding on a Graphics Processing Unit (GPU); if hardware decoding and encoding on the Android system are required, a JNI (Java Native Interface) layer must be written to call the MediaCodec interface of the Android system for secondary development; in addition, the compiled FFmpeg library is large, and introducing it as a dependency increases the APK (Android Package) size by more than 10 MB, occupying the storage space of the terminal.
In the second approach, a video editing tool is developed using the native MediaCodec interface of the Android system, but such a tool can only perform basic functions such as cutting a single video and adding a filter; it supports neither splicing and synthesis of multiple materials nor mixed synthesis of pictures and videos. It also lacks audio processing capability, so it can neither mix original sound with music nor perform speed-change processing. In addition, video editing tools developed this way have poor stability and compatibility.
Based on the foregoing, the video generation method, apparatus and electronic device provided in the embodiments of the present invention can be applied to video editing scenes, such as video synthesis, mixing of video and picture, audio processing, and synthesizing of video and audio; the method can be particularly applied to editing of short videos.
To facilitate understanding of the present embodiment, a detailed description is first provided for a video generation method disclosed in the present embodiment, and as shown in fig. 1, the method includes the following steps:
step S102, acquiring a video editing instruction for an initial video, and determining the arrangement sequence of a target image and a frame image in the initial video based on the video editing instruction;
the video generation method in the present embodiment mainly describes a mixing process of a video and a picture, that is, inserting a picture into a video. When a user edits a video, the video editing instruction is generated based on the editing operation of the user, and the video editing instruction comprises the relevant information of the editing operation. When the editing operation of the user is to insert the target image into the designated position of the initial video, the arrangement sequence of the target image and the frame image in the initial video is recorded in the video editing instruction.
For example, 100 frame images of an initial video are displayed in the video editing tool, and the 100 frame images are sequentially arranged according to a time sequence; the user can select and drag the target image to the back of the 50 th frame image, and at this time, the target image is positioned at the back of the 50 th frame image and at the front of the 51 st frame image.
In another implementation manner, after the user performs the editing operation, a piece of editing data may be generated; the editing data records information related to the editing operation and is carried in the video editing instruction. The editing data may also record information such as the aspect ratio of the frame, the start and end times of the clipping operation, and the save path and type of the clipped material.
Step S104, texture drawing is carried out on the target image in a frame image storage area of the initial video based on a preset frame rate and duration to obtain bitmap data of the target image, and the frame position of the bitmap data in the target video to be generated is determined based on the arrangement sequence;
in a MediaCodec interface of an android system, a video encoder and a video decoder both need to bind a storage area, which may also be referred to as a surface; before the initial video is edited, decoding is needed to obtain a frame image of the initial video; the frame image of the initial video is stored in the frame image storage area of the initial video. The frame image storage area of the initial video can also be understood as the surface corresponding to the decoder of the video.
A frame image obtained by decoding the initial video is directly output to the frame image storage area for OpenGL (Open Graphics Library) drawing, and is encoded through that storage area. The MediaCodec interface is not very flexible, and it is difficult to add material types other than video without interfering with the video encoding process in the storage area; it is therefore hard for the MediaCodec interface alone to realize mixed editing of videos and pictures, and hard for videos and pictures to share one OpenGL rendering flow. To overcome this difficulty when inserting the target image into the initial video, in the steps above the target image is processed in the frame image storage area of the initial video.
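As a rough illustration of this binding (a minimal Kotlin sketch, not taken from the patent: it assumes a MediaExtractor already prepared with the video track selected, and the function name and the oesTextureId parameter are illustrative), a decoder can be configured so that its decoded frames land directly in the OpenGL pipeline via a SurfaceTexture:

    import android.graphics.SurfaceTexture
    import android.media.MediaCodec
    import android.media.MediaExtractor
    import android.media.MediaFormat
    import android.view.Surface

    // Bind a video decoder to a Surface backed by a GL_TEXTURE_EXTERNAL_OES texture,
    // so each decoded frame becomes available to OpenGL as the "frame image storage area".
    fun createDecoderBoundToSurface(
        extractor: MediaExtractor,
        videoTrackIndex: Int,
        oesTextureId: Int
    ): Pair<MediaCodec, SurfaceTexture> {
        val format: MediaFormat = extractor.getTrackFormat(videoTrackIndex)
        val mime = requireNotNull(format.getString(MediaFormat.KEY_MIME))
        val surfaceTexture = SurfaceTexture(oesTextureId)          // wraps the OES texture
        val decoder = MediaCodec.createDecoderByType(mime)
        decoder.configure(format, Surface(surfaceTexture), null, 0) // output goes to the surface
        decoder.start()
        return decoder to surfaceTexture
    }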
Specifically, the target image is texture-rendered in the frame image storage area of the initial video. The frame rate and the duration may be preset; the duration can be understood as the display duration of the target image during video playback. For example, when the frame rate is 30 frames per second and the duration is 2 seconds, 60 frame images in the target video are needed to display the target image, so texture drawing is required 60 times for the target image. After texture drawing is performed on the target image, bitmap data of the target image is obtained (also called a Bitmap). When the video is played, the bitmap data drawn in these passes must be displayed consecutively, and based on the arrangement sequence of the target image and the frame images in the initial video, the frame positions of the bitmap data of the target image in the target video to be generated can be determined.
For example, the arrangement order of the target image and the frame image in the initial video is: the target image is positioned behind the 50 th frame image and in front of the 51 st frame image; at this time, in the target video, the 1 st to 50 th frame positions correspond to the first 50 frame images of the initial video, the 51 st to 110 th frame positions correspond to the 60 th bitmap data drawn by the target image, and the 111 th to 160 th frame positions correspond to the last 50 frame images of the initial video.
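The arithmetic behind this example can be sketched as follows (a hypothetical helper, not part of the patent; insertIndex, frameRate and durationSec are illustrative names, with frame positions numbered from 1 as in the example):

    // Frame positions occupied by the target image's bitmap data in the target video.
    fun bitmapFramePositions(insertIndex: Int, frameRate: Int, durationSec: Int): IntRange {
        val drawCount = frameRate * durationSec   // e.g. 30 fps * 2 s = 60 texture draws
        val first = insertIndex + 1               // image follows the insertIndex-th frame
        return first..(first + drawCount - 1)     // 51..110 for the example above
    }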
And S106, obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated and the frame image of the initial video, and generating the target video based on the video frame texture.
To generate the target video, the video frame texture at each frame position of the target video must be drawn in advance. The total number of frame positions in the target video is the number of frame images of the initial video plus the number of frames that display the target image, the latter being determined by the aforementioned frame rate and duration. Meanwhile, the arrangement sequence of the target image and the frame images in the initial video can be determined from the video editing instruction, and from it the frame positions of the bitmap data of the target image in the target video to be generated. On this basis, the image displayed at each frame position of the target video can be determined.
It is understood that the partial frame positions in the target video display the frame images of the original video, and the partial frame positions display bitmap data of the target image. When the frame image at each frame position is drawn, it may be determined whether the frame image of the initial video or the bitmap data of the target image is displayed at the frame position, and then the video frame texture at the frame position is drawn. And after the video frame texture on each frame position of the target video is obtained, coding is carried out to obtain the target video.
The video generation method comprises the steps of obtaining a video editing instruction for an initial video, and determining the arrangement sequence of a target image and the frame images in the initial video based on the video editing instruction; in the frame image storage area of the initial video, performing texture drawing on the target image based on a preset frame rate and duration to obtain bitmap data of the target image, and determining the frame position of the bitmap data in the target video to be generated based on the arrangement sequence; and obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated and the frame images of the initial video, and generating the target video based on the video frame texture. In this manner, the texture of the target image is drawn in the storage area of the initial video, so that the target image is mixed into the initial video; mixed editing of video and pictures is thus realized through the MediaCodec interface of the Android system, improving the versatility and efficiency of video editing.
Another embodiment of a video generation method is provided below. When obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated, and the frame image of the initial video, a frame-by-frame drawing method is usually adopted, and the frame-by-frame drawing method is also called onDrawFrame. Specifically, for each frame position of a target video to be generated, if bitmap data corresponding to the frame position is empty, drawing a video frame texture at the frame position based on a frame image of an initial video; if the bitmap data corresponding to the frame position is not empty, taking the bitmap data as the video frame texture on the frame position; the target video is then generated based on the video frame texture at each frame location.
As can be seen from the foregoing embodiments, the arrangement sequence of the target image and the frame images in the initial video can be determined from the video editing instruction, and from it the frame positions that the bitmap data of the target image occupies in the target video to be generated. Bitmap data of the target image is mapped to these frame positions. The frame-by-frame drawing method is called once for each frame position of the target video and checks whether bitmap data of the target image corresponds to the current frame position. If not, i.e. the bitmap data corresponding to the current frame position is empty, the frame image corresponding to the current position is looked up among the frame images of the initial video, and the video frame texture at that position is drawn based on that frame image. If the current frame position does correspond to bitmap data of the target image, i.e. the bitmap data is not null, the bitmap data of the target image is used as the video frame texture at that position. The bitmap data may also be referred to as imageBitmap. In this embodiment, texture rendering of both the target image and the frame images may be implemented with OpenGL.
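A minimal sketch of this per-frame branch (assuming a map from frame position to imageBitmap, where an absent entry means the position belongs to the initial video; the class name and the two drawing callbacks are hypothetical):

    import android.graphics.Bitmap

    class FrameComposer(
        private val imageBitmaps: Map<Int, Bitmap>,          // frame position -> bitmap data
        private val drawVideoFrameTexture: (Int) -> Unit,    // draw from a decoded video frame
        private val drawBitmapTexture: (Bitmap) -> Unit      // draw pre-rendered bitmap data
    ) {
        // Called once per frame position of the target video (the onDrawFrame pass).
        fun onDrawFrame(framePosition: Int) {
            val imageBitmap = imageBitmaps[framePosition]
            if (imageBitmap == null) {
                drawVideoFrameTexture(framePosition)  // bitmap data empty: use the initial video
            } else {
                drawBitmapTexture(imageBitmap)        // bitmap data present: use it as the texture
            }
        }
    }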
By the method, the target image can be embedded into the initial video, so that the target image participates in video coding, and mixed editing of the picture and the video is realized.
Further, when texture drawing is performed on the target image, the number of texture drawing passes for the target image is determined based on the preset frame rate and duration; texture drawing is then performed on the target image in the frame buffer corresponding to the initial video until that number of passes is reached, yielding the bitmap data of the target image. Generally, the number of passes is the product of the frame rate and the duration. Each time the target image is drawn, the frame-by-frame drawing method must be invoked, and it outputs the bitmap data obtained by texture drawing to a frame buffer, which is the same frame buffer used for the initial video. That is, texture drawing of the target image is performed in the frame buffer corresponding to the initial video.
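For the drawing itself, one plausible step (a sketch under the assumption that standard OpenGL ES 2.0 calls are used; not lifted from the patent) is to upload the target image as a texture so it can be rendered into the shared frame buffer:

    import android.graphics.Bitmap
    import android.opengl.GLES20
    import android.opengl.GLUtils

    // Upload the target image into an OpenGL texture; the caller can then draw a
    // textured quad with it into the frame buffer shared with the initial video.
    fun uploadBitmapTexture(bitmap: Bitmap): Int {
        val ids = IntArray(1)
        GLES20.glGenTextures(1, ids, 0)
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0])
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0)  // copy bitmap pixels to the texture
        return ids[0]
    }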
In addition, size must also be considered when performing texture drawing. Specifically, a drawing mapping matrix is determined based on the size of the frame images of the initial video and the size of the target image; texture drawing is performed on the target image based on this matrix to obtain the bitmap data of the target image, in which the display area corresponding to the target image is located in a designated region of the bitmap data. For example, the video frame displaying the target picture must have the same size as the video frames of the initial video, and the target picture must be displayed centred in the frame. Since the size of the target image usually differs from that of the frame images of the initial video, the drawing mapping matrix must be generated so that the image is displayed to match the video; the matrix represents, for each point of the target image, its corresponding position in the frame that displays it. If the target image is large, it may need to be downsampled, in which case only some of its points have corresponding positions in the displayed frame. If the target image is small, it may need to be enlarged for display, in which case one point of the target image may correspond to several positions in the displayed frame.
Texture drawing of the target image is performed based on the mapping matrix, so that the target image can have an appropriate display size and display position area in the drawn bitmap data.
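One way such a drawing mapping matrix could be built (a sketch only; the patent does not give the formula, so a centred, aspect-preserving fit is assumed here) is as a normalised scale matrix:

    import android.opengl.Matrix

    // Scale the image so it fits entirely inside the video frame, preserving aspect
    // ratio; the identity translation keeps the display area centred in the frame.
    fun buildDrawMappingMatrix(frameW: Int, frameH: Int, imageW: Int, imageH: Int): FloatArray {
        val m = FloatArray(16)
        Matrix.setIdentityM(m, 0)
        val fit = minOf(frameW.toFloat() / imageW, frameH.toFloat() / imageH)
        val sx = imageW * fit / frameW   // normalised width of the display area
        val sy = imageH * fit / frameH   // normalised height of the display area
        Matrix.scaleM(m, 0, sx, sy, 1f)  // downsamples a large image, enlarges a small one
        return m
    }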
After the video frame texture at each frame position is obtained in the above manner, the video frame texture at each frame position of the target video to be generated is encoded to obtain the target video. Specifically, the target video can be obtained by encoding through an encoder of the MediaCodec interface. The encoder is also bound with a storage area, namely a Surface, and the texture of the video frame is input into the encoder through the Surface to be encoded so as to obtain a target video.
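A minimal sketch of binding the encoder to its Surface (assuming H.264 output; width, height, frameRate and bitRate are illustrative parameters, and the surrounding EGL setup that renders the video frame textures onto the Surface is omitted):

    import android.media.MediaCodec
    import android.media.MediaCodecInfo
    import android.media.MediaFormat
    import android.view.Surface

    // Create an encoder whose input is a Surface; video frame textures rendered
    // onto this Surface (via EGL) become the encoder's input frames.
    fun createVideoEncoder(width: Int, height: Int, frameRate: Int, bitRate: Int): Pair<MediaCodec, Surface> {
        val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height).apply {
            setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
            setInteger(MediaFormat.KEY_BIT_RATE, bitRate)
            setInteger(MediaFormat.KEY_FRAME_RATE, frameRate)
            setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
        }
        val encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        val inputSurface = encoder.createInputSurface()  // must follow configure(), precede start()
        encoder.start()
        return encoder to inputSurface
    }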
In the video editing process, a user may need to add a special effect to the video in addition to the mixed editing of the picture and the video. Specifically, a special effect adding instruction is received; wherein, the special effect adding instruction includes but is not limited to: adding one or more of character stickers, setting a beauty filter, setting a transition special effect and setting a video proportion; rendering a special effect on the video frame texture of the target video in a frame image storage area of the target video to obtain the video frame texture after the special effect is rendered; and coding the texture of the video frame after the special effect is rendered to obtain the target video. It should be noted here that, because the target image performs texture rendering in the frame image storage area of the initial video, that is, the target image and the initial video share one Surface, the target image and the initial video can share a set of OpenGL processing flows such as text stickers, beauty filters, transition, and the like, and therefore the target image and the initial video can participate in special effect rendering together, so that video editing is more convenient and faster, and editing effects are more diverse.
In addition to editing video, audio may also be edited for configuring sound for the video. Specifically, an audio material is obtained, and the audio material is edited based on the duration of the target video to obtain audio data; wherein the duration of the audio data is the same as that of the target video; and writing the audio data into the audio track, and writing the target video into the video track so as to synthesize the audio data and the target video and obtain the final target video.
If the original video itself carries sound, the multimedia extractor MediaExtractor can be used to separate the video and audio, and the video is edited in the manner described in the above embodiments, so that the target image is mixed into the original video. Then, the audio is edited separately. The audio separated from the original video is used as the audio material, and since the target image is mixed in the original video, the duration of the finally obtained target video is longer than that of the original video, at this time, the audio material needs to be edited, for example, music is added, so that the duration of the audio data obtained after editing is the same as that of the target video. And then, synthesizing the audio data and the target video to obtain the final target video.
In the video generation method provided by this embodiment, the native MediaCodec interface of the Android system can be used for hardware encoding and decoding running on the GPU; compared with FFmpeg software codecs, this does not consume the CPU, synthesis is fast, and the installation package of the editing tool does not grow. Meanwhile, even though the native codec API (Application Programming Interface), the MediaCodec interface, imposes many restrictions, mixed synthesis of multiple picture and video materials is realized. In addition, the method supports video processing capabilities such as character stickers, beauty filters, material transitions, curtain backgrounds and picture proportion, and audio processing capabilities such as audio speed change and music mixing.
The following embodiment provides a specific audio editing flow, as shown in fig. 2. Audio processing requires decoding the original audio (including music and the video's original sound) separated from the original video into PCM data format before mixing and speed-change processing. Suppose one section of audio material is separated from video 1, another from video 2, and music material then needs to be mixed in; the audio editing includes the following steps:
at step 201, clip decoding is performed on the audio material of the added MP3 format music. And in the decoding process, according to the time stamp of the decoded stream, cutting or repeatedly splicing the music, so that the PCM file output by decoding is aligned with the total time length of the output video.
For example, the audio material of the MP3 music may be a song, say 1 minute in duration. If the total duration of video 1 and video 2 is 2 minutes, the MP3 music needs to be spliced cyclically so that it plays through twice; if the total duration of video 1 and video 2 is 30 s, the MP3 music needs to be clipped, e.g. only its first half is used.
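The alignment rule of step 201 can be sketched as a small decision function (hypothetical names, not from the patent; writtenUs is the amount of PCM already written to the output timeline):

    enum class MusicAction { WRITE, LOOP_TO_START, STOP }

    // Music shorter than the video is spliced cyclically; music longer than the
    // video is clipped, so the decoded PCM aligns with the total video duration.
    fun alignMusicToVideo(writtenUs: Long, videoDurationUs: Long, musicEnded: Boolean): MusicAction =
        when {
            writtenUs >= videoDurationUs -> MusicAction.STOP   // clip: output long enough
            musicEnded -> MusicAction.LOOP_TO_START           // splice: restart the song
            else -> MusicAction.WRITE                          // keep decoding and writing
        }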
Step 202, for the video material 1, separating the audio track for decoding, and outputting the decoded audio track as a PCM file 1.
Step 203, for the video material 2, separating the audio track for decoding, and outputting the decoded audio track as a PCM file 2.
Step 204, to ensure that video 1 and video 2 play smoothly after being spliced and mixed, PCM file 1 and PCM file 2 are converted to a consistent sampling rate and channel count before being written to the audio track. Among several voiced video materials, the largest channel count and sampling rate (such as those of PCM file 1 in fig. 2) are generally adopted as the targets. Thus, PCM file 2 is converted to the channel count and sampling rate of PCM file 1.
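As one concrete case of this conversion (a sketch assuming 16-bit PCM; sample-rate conversion additionally requires resampling and is omitted here), a mono track can be upmixed to a two-channel target layout:

    // Duplicate each 16-bit sample into left and right channels so a mono PCM
    // track matches a stereo target channel count.
    fun monoToStereo(mono: ShortArray): ShortArray {
        val stereo = ShortArray(mono.size * 2)
        for (i in mono.indices) {
            stereo[2 * i] = mono[i]        // left channel
            stereo[2 * i + 1] = mono[i]    // right channel
        }
        return stereo
    }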
Step 205, variable-speed processing is performed on the converted PCM data using the SoundTouch library. SoundTouch is a common speed-change solution in the current audio and video field; it is an open-source library written in C++ and is used in an Android project through a JNI interface.
Step 206, the music data decoded in step 201 is converted to the channel count and sampling rate of PCM file 1.
Step 207, the music data and the speed-changed audio data are mixed using a volume-weighted mixing algorithm, i.e. the sample values of the two audio tracks are superposed with their volumes as weighting factors.
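A sketch of such a volume-weighted mix (assuming 16-bit PCM tracks of matching format and per-track volume weights in [0, 1]; clamping guards against overflow):

    // Superpose two PCM tracks with their volumes as weighting factors.
    fun mixPcm(voice: ShortArray, music: ShortArray, voiceVol: Float, musicVol: Float): ShortArray {
        val out = ShortArray(minOf(voice.size, music.size))
        for (i in out.indices) {
            val mixed = voice[i] * voiceVol + music[i] * musicVol
            out[i] = mixed.toInt()
                .coerceIn(Short.MIN_VALUE.toInt(), Short.MAX_VALUE.toInt())
                .toShort()
        }
        return out
    }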
The following embodiments provide a way of video mix editing. As shown in fig. 3, the method comprises the following steps:
step 301, sampling a video of a material 1 of a video material by using a mediaextra, and separating video data and audio data for each sampling data.
Step 302, initializing video track coding parameters, sending the video data of each sample block into an encoder, and writing the video data into a video track.
Step 303, initializing the audio track encoding parameters, sending the audio data of each sampling block to the encoder, and writing the audio data into the audio track.
Step 304, material 1 is sampled in a loop until the size of the sample data block read by the MediaExtractor is 0; sampling is then judged to be finished and the sampling loop is exited (a sketch of this loop follows the steps below).
Step 305, cyclically sampling the material 2 of the video material, and separating video data and audio data from each sampling data.
Step 306, the video separated from material 1 and material 2 is edited through the video track, the audio separated from material 1 and material 2 is edited through the audio track, and an MP4 file is finally obtained through MediaMuxer synthesis.
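A simplified passthrough sketch of this sampling-and-muxing flow (the patent's flow re-encodes each sample block through an encoder, whose queueing is omitted here; trackIndex and ptsOffsetUs are illustrative names, with ptsOffsetUs shifting material 2 to follow material 1 on the timeline):

    import android.media.MediaCodec
    import android.media.MediaExtractor
    import android.media.MediaMuxer
    import java.nio.ByteBuffer

    // Copy samples from one selected extractor track into a started muxer track
    // until the size of the read sample data block signals the end of the material.
    fun copySamples(extractor: MediaExtractor, muxer: MediaMuxer, trackIndex: Int, ptsOffsetUs: Long) {
        val buffer = ByteBuffer.allocate(1 shl 20)   // 1 MiB sample buffer
        val info = MediaCodec.BufferInfo()
        while (true) {
            val size = extractor.readSampleData(buffer, 0)
            if (size <= 0) break                     // end of material: exit the sampling loop
            info.offset = 0
            info.size = size
            info.presentationTimeUs = extractor.sampleTime + ptsOffsetUs
            info.flags = extractor.sampleFlags       // e.g. sync-sample flag carries over
            muxer.writeSampleData(trackIndex, buffer, info)
            extractor.advance()
        }
    }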
Based on the video editing mode and the audio editing mode provided by the above embodiments, fig. 4 provides an overall implementation mode of audio and video synthesis, which relates to decoding, encoding, OpenGL, audio processing, and the like. As shown in fig. 4, the method comprises the following steps:
step 401, for the video material in MP4 format, a mediaextra is used to separate the audio and video, and the audio and video are decomposed into a video track decoding and an audio track decoding, and the decoded video frame enters the OpenGL video processing flow. And directly entering an OpenGL processing flow aiming at the material files of the picture types.
In the OpenGL processing flow, by the video generation method provided in the above embodiment, mixed editing of a video and a picture can be realized, and the picture is inserted into the video. The OpenGL processing comprises character pasting, a beauty filter, transition, video proportion setting and the like, and image frames drawn by OpenGL enter a video encoder. And clipping and variable speed are processed through video frame time stamps in the encoding process.
In step 402, for MP4-format video material with sound, the audio separated by the MediaExtractor, i.e. the video's original sound, needs to be mixed with the added music.
Audio speed change is more complex than video speed change. Audio processing requires decoding the original audio (including music and the video's original sound) into PCM data format and resampling the sound signal based on the PCM data, on which the mixing and speed-change algorithms are implemented. Mixing uses a volume-weighted mixing algorithm, and speed change adjusts the sound rate using the SoundTouch processing library.
In step 403, after the video and the audio are processed respectively, the audio data and the video data are written into the audio track and the video track in the finally generated MP4 file respectively by the audio and video synthesizer MediaMuxer, so that the whole video synthesis process is completed.
Note that in the video processing procedure, since video processing is generally slower than audio processing, progress is usually calculated from the timestamps of the video frames.
Corresponding to the above method embodiment, referring to a schematic diagram of a video generating apparatus shown in fig. 5, the apparatus includes:
an instruction obtaining module 502, configured to obtain a video editing instruction for an initial video, and determine an arrangement order of a target image and a frame image in the initial video based on the video editing instruction;
a texture drawing module 504, configured to perform texture drawing on the target image in the frame image storage region of the initial video based on a preset frame rate and preset duration to obtain bitmap data of the target image, and determine a frame position of the bitmap data in the target video to be generated based on the arrangement order;
the video generating module 506 is configured to obtain a video frame texture of the target video to be generated based on the bitmap data, a frame position of the bitmap data in the target video to be generated, and a frame image of the initial video, and generate the target video based on the video frame texture.
The video generation apparatus acquires a video editing instruction for the initial video, and determines the arrangement sequence of the target image and the frame images in the initial video based on the video editing instruction; in the frame image storage area of the initial video, texture drawing is performed on the target image based on a preset frame rate and duration to obtain bitmap data of the target image, and the frame position of the bitmap data in the target video to be generated is determined based on the arrangement sequence; the video frame texture of the target video to be generated is obtained based on the bitmap data, the frame position of the bitmap data in the target video to be generated and the frame images of the initial video, and the target video is generated based on the video frame texture. In this manner, the texture of the target image is drawn in the storage area of the initial video, so that the target image is mixed into the initial video; mixed editing of video and pictures is thus realized through the MediaCodec interface of the Android system, improving the versatility and efficiency of video editing.
The video generation module is further configured to: for each frame position of a target video to be generated, if bitmap data corresponding to the frame position is empty, drawing a video frame texture on the frame position based on a frame image of an initial video; and if the bitmap data corresponding to the frame position is not empty, using the bitmap data as the video frame texture at the frame position.
The texture rendering module is further configured to: determining the texture drawing times of the target image based on the preset frame rate and duration; and performing texture drawing on the target image in a frame buffer area corresponding to the initial video until the texture drawing times are reached, and obtaining bitmap data of the target image.
The texture rendering module is further configured to: determining a drawing mapping matrix based on the size of the frame image of the initial video and the size of the target image; performing texture drawing on the target image based on the drawing mapping matrix to obtain bitmap data of the target image; in the bitmap data, the display area corresponding to the target image is located in the designated area of the bitmap data.
The video generation module is further configured to: and coding the video frame texture on each frame position of the target video to be generated to obtain the target video.
The video generation module is further configured to: receiving a special effect adding instruction; wherein the special effect adding instruction includes but is not limited to: adding one or more of character stickers, setting a beauty filter, setting a transition special effect and setting a video proportion; rendering a special effect on the video frame texture of the target video in a frame image storage area of the target video to obtain the video frame texture after the special effect is rendered; and coding the texture of the video frame after the special effect is rendered to obtain the target video.
The above-mentioned device still includes: an audio processing module to: acquiring an audio material, and editing the audio material based on the duration of the target video to obtain audio data; wherein the duration of the audio data is the same as that of the target video; and writing the audio data into the audio track, and writing the target video into the video track so as to synthesize the audio data and the target video and obtain the final target video.
The embodiment also provides an electronic device, which comprises a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to realize the video generation method. The electronic device may be a server or a terminal device.
Referring to fig. 6, the electronic device includes a processor 100 and a memory 101, the memory 101 stores machine executable instructions capable of being executed by the processor 100, and the processor 100 executes the machine executable instructions to implement the video generating method.
Further, the electronic device shown in fig. 6 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The Memory 101 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used. The bus 102 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
Processor 100 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 100. The Processor 100 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and completes the steps of the method of the foregoing embodiment in combination with the hardware thereof.
The present embodiments also provide a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the video generation method described above.
The video generation method, the video generation device, and the computer program product of the electronic device provided in the embodiments of the present invention include a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementations may refer to the method embodiments and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in a specific case to those skilled in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the foregoing embodiments are merely illustrative of the present invention and not restrictive, and that the scope of the present invention is not limited thereto: any person skilled in the art can, within the technical scope of the present disclosure, modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of video generation, the method comprising:
acquiring a video editing instruction for an initial video, and determining an arrangement sequence of a target image and a frame image in the initial video based on the video editing instruction;
performing texture drawing on the target image in a frame image storage area of the initial video based on a preset frame rate and duration to obtain bitmap data of the target image, and determining a frame position of the bitmap data in the target video to be generated based on the arrangement sequence;
and obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated, and the frame image of the initial video; and generating the target video based on the video frame texture.
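Read as an algorithm, claim 1 amounts to: determine where the inserted still image sits relative to the source frames, rasterize it once per output frame it occupies, and assemble the output textures position by position. Below is a minimal Kotlin sketch of that reading; the `Bitmap` class, the `rasterize` helper, and the single-insert-point interpretation of the "arrangement sequence" are all assumptions, not anything named by the claim.

```kotlin
import kotlin.math.ceil

// Hypothetical stand-in for a decoded or drawn frame texture.
class Bitmap(val width: Int, val height: Int)

// Hypothetical rasterizer: the claim's "texture drawing" of the still image.
fun rasterize(image: Bitmap): Bitmap = Bitmap(image.width, image.height)

fun assembleTargetFrames(
    initialFrames: List<Bitmap>, // frame images of the initial video
    targetImage: Bitmap,         // still image spliced in by the edit instruction
    insertBefore: Int,           // arrangement sequence reduced to one insert point
    frameRate: Double,
    imageSeconds: Double,
): List<Bitmap> {
    // One texture draw per output frame the image occupies.
    val imageFrameCount = ceil(frameRate * imageSeconds).toInt()
    val imageBitmaps = List(imageFrameCount) { rasterize(targetImage) }
    // Video frame textures: initial frames with the image's bitmaps at their frame positions.
    return initialFrames.take(insertBefore) + imageBitmaps + initialFrames.drop(insertBefore)
}
```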
2. The method according to claim 1, wherein the step of obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated, and the frame image of the initial video comprises:
for each frame position of the target video to be generated, if the bitmap data corresponding to the frame position is empty, drawing the video frame texture at the frame position based on the frame image of the initial video; and if the bitmap data corresponding to the frame position is not empty, taking the bitmap data as the video frame texture at the frame position.
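Claim 2's selection rule is a per-position null check. A sketch under the same assumptions as above, with a map from frame position to bitmap data standing in for the "empty / not empty" test:

```kotlin
class Bitmap(val width: Int, val height: Int)

fun videoFrameTextureAt(
    position: Int,
    imageBitmaps: Map<Int, Bitmap>, // frame position -> bitmap data; absent means "empty"
    initialFrameFor: Map<Int, Int>, // frame position -> index into the initial video's frames
    initialFrames: List<Bitmap>,
): Bitmap =
    imageBitmaps[position] // not empty: the bitmap data is the video frame texture
        ?: initialFrames[initialFrameFor.getValue(position)] // empty: draw from the initial video frame
```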
3. The method according to claim 1, wherein the step of performing texture drawing on the target image based on a preset frame rate and a preset duration to obtain bitmap data of the target image comprises:
determining the number of texture drawing times for the target image based on the preset frame rate and the preset duration;
and performing texture drawing on the target image in a frame buffer area corresponding to the initial video until the number of texture drawing times is reached, to obtain the bitmap data of the target image.
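The number of texture draws in claim 3 is simply frame rate × duration, rounded up to whole frames; for example, 30 fps over 2.5 s means the still image is drawn 75 times. A sketch:

```kotlin
import kotlin.math.ceil

fun textureDrawCount(frameRate: Double, durationSeconds: Double): Int =
    ceil(frameRate * durationSeconds).toInt()

fun main() {
    println(textureDrawCount(30.0, 2.5)) // prints 75
}
```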
4. The method according to claim 1, wherein the step of performing texture drawing on the target image to obtain bitmap data of the target image comprises:
determining a drawing mapping matrix based on the size of the frame image of the initial video and the size of the target image;
and performing texture drawing on the target image based on the drawing mapping matrix to obtain the bitmap data of the target image; wherein, in the bitmap data, the display area corresponding to the target image is located in a designated area of the bitmap data.
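Claim 4 does not fix the form of the drawing mapping matrix; one natural choice, sketched below, is a uniform scale-to-fit followed by a centering translation, which letterboxes the image into a centered "designated area" of the frame-sized bitmap. Both the letterboxing and the centering are assumptions.

```kotlin
// Row-major 3x3 affine matrix: [sx 0 tx; 0 sy ty; 0 0 1].
fun drawingMappingMatrix(frameW: Int, frameH: Int, imageW: Int, imageH: Int): DoubleArray {
    // Uniform scale so the whole image fits inside the frame.
    val s = minOf(frameW.toDouble() / imageW, frameH.toDouble() / imageH)
    // Translation that centers the scaled image in the frame.
    val tx = (frameW - imageW * s) / 2.0
    val ty = (frameH - imageH * s) / 2.0
    return doubleArrayOf(s, 0.0, tx, 0.0, s, ty, 0.0, 0.0, 1.0)
}
```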
5. The method of claim 1, wherein the step of generating the target video based on the video frame texture comprises:
and encoding the video frame texture at each frame position of the target video to be generated to obtain the target video.
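Claim 5 reduces to feeding each frame position's texture to an encoder in presentation order. A sketch against a hypothetical `Encoder` interface; a real implementation would sit on a platform codec, which the claim leaves open:

```kotlin
class Bitmap(val width: Int, val height: Int)

interface Encoder { // hypothetical encoder abstraction
    fun encode(texture: Bitmap, presentationTimeUs: Long)
    fun finish(): ByteArray // the encoded target video
}

fun encodeTarget(textures: List<Bitmap>, frameRate: Double, encoder: Encoder): ByteArray {
    val frameUs = (1_000_000.0 / frameRate).toLong() // per-frame timestamp step
    textures.forEachIndexed { i, texture -> encoder.encode(texture, i * frameUs) }
    return encoder.finish()
}
```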
6. The method according to claim 5, wherein the step of encoding the video frame texture at each frame position of the target video to be generated to obtain the target video comprises:
receiving a special effect adding instruction; wherein the special effect adding instruction includes, but is not limited to, one or more of: adding a text sticker, setting a beauty filter, setting a transition effect, and setting a video aspect ratio;
rendering a special effect on the video frame texture of the target video in the frame image storage area of the target video to obtain the video frame texture after the special effect is rendered;
and encoding the video frame texture after the special effect is rendered, to obtain the target video.
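Claim 6 inserts an effect pass between assembly and encoding: effects are rendered into the target video's frame image storage area, and only the post-effect texture reaches the encoder. A sketch; the `Effect` values and the `applyEffects` pass are hypothetical placeholders for the sticker/filter/transition/aspect-ratio operations listed above:

```kotlin
class Bitmap(val width: Int, val height: Int)

enum class Effect { TEXT_STICKER, BEAUTY_FILTER, TRANSITION, ASPECT_RATIO }

// Hypothetical offscreen effect pass over one frame texture.
fun applyEffects(texture: Bitmap, effects: List<Effect>): Bitmap = texture

fun renderEffectsThenEncode(
    textures: List<Bitmap>,
    effects: List<Effect>,
    encode: (Bitmap) -> Unit, // e.g. the Encoder sketched under claim 5
) {
    for (texture in textures) {
        val rendered = applyEffects(texture, effects) // render in the frame image storage area
        encode(rendered)                              // encode only the post-effect texture
    }
}
```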
7. The method of claim 1, wherein after the step of generating the target video based on the video frame texture, the method further comprises:
acquiring an audio material, and editing the audio material based on the duration of the target video to obtain audio data; wherein the audio data has the same duration as the target video;
and writing the audio data into an audio track and the target video into a video track, so as to synthesize the audio data and the target video into the final target video.
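Claim 7's audio step is a trim-or-loop to the video's exact duration, after which audio and video are written onto separate tracks of one container. The duration-fitting half is sketched below over a hypothetical PCM clip type; looping short material is an assumption, and the track writing itself would fall to a platform muxer the claim does not name.

```kotlin
// Hypothetical mono PCM clip; duration follows from the sample count.
class AudioClip(val samples: FloatArray, val sampleRate: Int) {
    val durationSeconds: Double get() = samples.size.toDouble() / sampleRate
}

// Trim the material if too long, loop it if too short, so the
// audio duration matches the target video exactly (claim 7's constraint).
fun fitToVideoDuration(clip: AudioClip, videoSeconds: Double): AudioClip {
    require(clip.samples.isNotEmpty()) { "audio material must not be empty" }
    val needed = (videoSeconds * clip.sampleRate).toInt()
    val fitted = FloatArray(needed) { i -> clip.samples[i % clip.samples.size] }
    return AudioClip(fitted, clip.sampleRate)
}
```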
8. A video generation apparatus, characterized in that the apparatus comprises:
the instruction acquisition module is used for acquiring a video editing instruction for an initial video and determining the arrangement sequence of a target image and a frame image in the initial video based on the video editing instruction;
the texture drawing module is used for performing texture drawing on the target image in a frame image storage area of the initial video based on a preset frame rate and duration to obtain bitmap data of the target image, and determining the frame position of the bitmap data in the target video to be generated based on the arrangement sequence;
and the video generation module is used for obtaining the video frame texture of the target video to be generated based on the bitmap data, the frame position of the bitmap data in the target video to be generated and the frame image of the initial video, and generating the target video based on the video frame texture.
9. An electronic device comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the video generation method of any one of claims 1 to 7.
10. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the video generation method of any one of claims 1 to 7.
CN202210312376.0A 2022-03-28 2022-03-28 Video generation method and device and electronic equipment Pending CN114900736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210312376.0A CN114900736A (en) 2022-03-28 2022-03-28 Video generation method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN114900736A true CN114900736A (en) 2022-08-12

Family

ID=82714561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210312376.0A Pending CN114900736A (en) 2022-03-28 2022-03-28 Video generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114900736A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011094537A2 (en) * 2010-01-29 2011-08-04 Hillcrest Laboratories, Inc. Embedding argb data in a rgb stream
CN106534880A (en) * 2016-11-28 2017-03-22 深圳Tcl数字技术有限公司 Video synthesis method and device
CN106899875A (en) * 2017-02-06 2017-06-27 合网络技术(北京)有限公司 The display control method and device of plug-in captions
US20170346864A1 (en) * 2016-05-25 2017-11-30 Mark Nataros System And Method For Video Gathering And Processing
CN110290425A (en) * 2019-07-29 2019-09-27 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, device and storage medium
WO2020097888A1 (en) * 2018-11-15 2020-05-22 深圳市欢太科技有限公司 Video processing method and apparatus, electronic device, and computer-readable storage medium
CN111199166A (en) * 2018-11-16 2020-05-26 首都师范大学 Video riblet detection and recovery method based on frequency domain and spatial domain characteristics
CN111246122A (en) * 2020-01-15 2020-06-05 齐力软件科技(广州)有限公司 Method and device for synthesizing video by multiple photos
CN111754607A (en) * 2019-03-27 2020-10-09 北京小米移动软件有限公司 Picture processing method and device, electronic equipment and computer readable storage medium
CN111899155A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN112019767A (en) * 2020-08-07 2020-12-01 北京奇艺世纪科技有限公司 Video generation method and device, computer equipment and storage medium
CN112256570A (en) * 2020-10-19 2021-01-22 网易(杭州)网络有限公司 Remote debugging method, device, equipment and storage medium
CN112533058A (en) * 2019-09-17 2021-03-19 西安中兴新软件有限责任公司 Video processing method, device, equipment and computer readable storage medium
US20210358524A1 (en) * 2020-05-14 2021-11-18 Shanghai Bilibili Technology Co., Ltd. Method and device of editing a video

Similar Documents

Publication Publication Date Title
CN112184856B (en) Multimedia processing device supporting multi-layer special effect and animation mixing
WO2018192342A1 (en) Method for generating video data, computer device and storage medium
CN113891113B (en) Video clip synthesis method and electronic equipment
CN106804003B (en) Video editing method and device based on ffmpeg
CN111899322B (en) Video processing method, animation rendering SDK, equipment and computer storage medium
CN105933724A (en) Video producing method, device and system
EP2104105A1 (en) Digital audio and video clip encoding
KR102081214B1 (en) Method and device for stitching multimedia files
EP1648172A1 (en) System and method for embedding multimedia editing information in a multimedia bitstream
KR100923993B1 (en) Method and apparatus for encoding/decoding
CN104768025B (en) A kind of video bad frame restorative procedure and device
CN109788212A (en) A kind of processing method of segmenting video, device, terminal and storage medium
CN104091608A (en) Video editing method and device based on IOS equipment
CN109068163A (en) A kind of audio-video synthesis system and its synthetic method
ES2248549T3 (en) EDITION OF AUDIO SIGNALS.
EP2104103A1 (en) Digital audio and video clip assembling
CN107454447B (en) Plug-in loading method and device for player and television
CN114845151A (en) Multi-screen synchronous display method, system, terminal equipment and storage medium
CN112533058A (en) Video processing method, device, equipment and computer readable storage medium
CN112689194B (en) Functional machine video music matching method and device, terminal equipment and storage medium
CN114900736A (en) Video generation method and device and electronic equipment
US20240244299A1 (en) Content providing method and apparatus, and content playback method
CN114079823A (en) Video rendering method, device, equipment and medium based on Flutter
CN115250335A (en) Video processing method, device, equipment and storage medium
CN109285197B (en) GIF image processing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination