CN110582018A - Video file processing method, related device and equipment - Google Patents

Video file processing method, related device and equipment

Info

Publication number
CN110582018A
CN110582018A (application CN201910872350.XA)
Authority
CN
China
Prior art keywords
video
layer
video file
processed
additional element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910872350.XA
Other languages
Chinese (zh)
Other versions
CN110582018B (en)
Inventor
李浩鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910872350.XA priority Critical patent/CN110582018B/en
Publication of CN110582018A publication Critical patent/CN110582018A/en
Application granted granted Critical
Publication of CN110582018B publication Critical patent/CN110582018B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    All classifications fall under H ELECTRICITY → H04 ELECTRIC COMMUNICATION TECHNIQUE → H04N PICTORIAL COMMUNICATION, e.g. TELEVISION → H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]:
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H04N 21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/47217 End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Abstract

The application discloses a video file processing method, comprising the following steps: acquiring a video file to be processed, which is located on a first layer; receiving a video additional element adding operation in an operation editing area; in response to the adding operation, generating a second layer above the first layer on which the video file to be processed in the video file playing area is located, where the time node of the added video additional element on the second layer corresponds to the time node of the video file to be processed on the first layer at which the adding and/or editing operation was triggered; and displaying the synthesized video file, which comprises the content of the video file to be processed and the video additional element. The application also discloses a video processing apparatus and a terminal device. According to the method and the device, only the newly generated layer is edited when an element is added, and no frame of the video needs to be redrawn, so that no waiting time is consumed and the efficiency of video processing is improved.

Description

Video file processing method, related device and equipment
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a method, a related apparatus, and a device for processing a video file.
Background
With the development of science and technology, image acquisition devices improve day by day, and the videos they record become ever clearer. However, a recorded video is only monotonous raw material and cannot satisfy the increasingly personalized requirements of users. Therefore, after a video is recorded, the user may process it further to meet those personalized requirements.
At present, during video creation a user can add elements such as stickers, pendants or text to a video, and these elements are rendered together with the video frames.
However, each time the user edits the elements, the video must be rendered again, that is, a video stream is regenerated from the beginning: after every edit, the video stream has to be re-encoded according to the edited video additional elements (including text, animated images, and the like). If the user edits the video many times, the video stream must be re-encoded just as many times, which makes video processing inefficient and consumes considerable processing resources.
Disclosure of Invention
The embodiments of the application provide a video file processing method, a related apparatus, and a device. The video file and the video additional elements are placed on two separate layers; when a video additional element is added or edited, only the newly generated layer needs to be edited, and the video stream does not have to be re-encoded after every edit. No waiting time is consumed, the user sees the video processing result in real time, and the efficiency of video processing is improved.
In view of the above, a first aspect of the present application provides a method for processing a video file, including:
providing a video file processing interface, where the video file processing interface comprises a video file playing area and an operation editing area, and the operation editing area comprises at least one video additional element operation control;
acquiring a video file to be processed and duration information of the video file to be processed, where the video file to be processed comprises at least one frame of image to be processed, the video file to be processed and the duration information are displayed in the video file playing area, and the video file to be processed is located on a first layer;
receiving a video additional element adding and/or editing operation in the operation editing area, triggered at any time node within the duration information interval during playing, dragging and/or positioning of the video file to be processed;
in response to the video additional element adding and/or editing operation, generating a second layer above the first layer on which the video file to be processed in the video file playing area is located, where the added video additional element is located on the second layer, and the time node of the added video additional element on the second layer corresponds to the time node of the video file to be processed on the first layer at which the adding and/or editing operation was triggered;
and displaying the synthesized video file, which comprises the content of the video file to be processed on the first layer and the video additional element on the second layer.
A second aspect of the present application provides a video processing apparatus, comprising:
a display module, configured to provide a video file processing interface, where the video file processing interface comprises a video file playing area and an operation editing area, and the operation editing area comprises at least one video additional element operation control;
a processing module, configured to acquire a video file to be processed and duration information of the video file to be processed, where the video file to be processed comprises at least one frame of image to be processed, the video file to be processed and the duration information are displayed in the video file playing area, and the video file to be processed is located on a first layer;
a receiving module, configured to receive a video additional element adding and/or editing operation in the operation editing area, triggered at any time node within the duration information interval during playing, dragging and/or positioning of the video file to be processed;
the processing module is further configured to generate, in response to the adding and/or editing operation received by the receiving module, a second layer above the first layer on which the video file to be processed in the video file playing area is located, where the added video additional element is located on the second layer, and the time node of the added video additional element on the second layer corresponds to the time node of the video file to be processed on the first layer at which the adding and/or editing operation was triggered;
and the display module is further configured to display the synthesized video file, which comprises the content of the video file to be processed on the first layer and the video additional element on the second layer.
In a possible design, in a first implementation of the second aspect of the embodiments of the present application, the processing module is specifically configured to:
control the video file to be processed to pause playing in response to a video additional element adding operation, where the adding operation corresponds to the identifier of the added video additional element, and the added video additional element is a dynamic text element or a dynamic image element; and
generate a second layer above the first layer on which the video file to be processed in the video file playing area is located;
and the display module is specifically configured to display the content of the video file to be processed on the first layer, and display the added video additional element on the second layer.
In a possible design, in a second implementation of the second aspect of the embodiments of the present application, the display module is specifically configured to:
acquire an image sequence corresponding to the added video additional element, where the image sequence comprises at least one frame of image;
generate a third layer above the second layer on which the added video additional element is located, where the content attribute of the third layer is assigned the first frame image in the image sequence; and
display the image sequence corresponding to the added video additional element on the third layer of the video file playing area.
In a possible design, in a third implementation of the second aspect of the embodiments of the present application, the processing module is specifically configured to:
control the video to be processed to pause playing in response to a video additional element adding operation, where the adding operation corresponds to the identifier of the video additional element, and the video additional element is a static text element or a static image element; and
generate a second layer above the first layer on which the video file to be processed in the video file playing area is located.
In a possible design, in a fourth implementation of the second aspect of the embodiments of the present application,
the processing module is further configured to acquire a third layer corresponding to the video additional element, the third layer being located above the second layer; and
the processing module is further configured to process the video additional element on the third layer to obtain a processed video additional element, the processed video additional element being displayed on the third layer.
In a possible design, in a fifth implementation of the second aspect of the embodiments of the present application, the processing module is specifically configured to:
add the third layer to the editable view in the operation editing area;
perform translation processing on the video additional element on the editable view to obtain a translation parameter;
record the translation parameter in the data structure corresponding to the third layer; and
redraw the third layer on the second layer.
In a possible design, in a sixth implementation of the second aspect of the embodiments of the present application, the processing module is specifically configured to:
add the third layer to the editable view in the operation editing area;
perform zoom processing on the video additional element on the editable view to obtain a zoom parameter, where the zoom parameter comprises a magnification parameter or a reduction parameter;
record the zoom parameter in the data structure corresponding to the third layer; and
redraw the third layer on the second layer.
In a possible design, in a seventh implementation of the second aspect of the embodiments of the present application, the processing module is specifically configured to:
add the third layer to the editable view in the operation editing area;
perform rotation processing on the video additional element on the editable view to obtain a rotation angle parameter;
record the rotation angle parameter in the data structure corresponding to the third layer; and
redraw the third layer on the second layer.
In a possible design, in an eighth implementation of the second aspect of the embodiments of the present application,
the receiving module is further configured to receive a video additional element deleting operation in the operation editing area, triggered at any time node within the duration information interval during playing, dragging and/or positioning of the video file to be processed; and
the processing module is further configured to delete, in response to the deleting operation, the video additional element on the second layer of the video file playing area.
In a possible design, in a ninth implementation of the second aspect of the embodiments of the present application, the processing module is specifically configured to delete, in response to a video additional element deleting operation, the third layer generated above the second layer on which the video additional element is located, together with the video additional element.
In a possible design, in a tenth implementation of the second aspect of the embodiments of the present application, the display module is specifically configured to display the synthesized video file, which comprises the content of the video file to be processed on the first layer.
A third aspect of the present application provides a terminal device, comprising: a memory, a transceiver, a processor, and a bus system;
where the memory is configured to store a program;
the processor is configured to execute the program in the memory, including the following steps:
providing a video file processing interface, where the video file processing interface comprises a video file playing area and an operation editing area, and the operation editing area comprises at least one video additional element operation control;
acquiring a video file to be processed and duration information of the video file to be processed, where the video file to be processed comprises at least one frame of image to be processed, the video file to be processed and the duration information are displayed in the video file playing area, and the video file to be processed is located on a first layer;
receiving a video additional element adding and/or editing operation in the operation editing area, triggered at any time node within the duration information interval during playing, dragging and/or positioning of the video file to be processed;
in response to the video additional element adding and/or editing operation, generating a second layer above the first layer on which the video file to be processed in the video file playing area is located, where the added video additional element is located on the second layer, and the time node of the added video additional element on the second layer corresponds to the time node of the video file to be processed on the first layer at which the adding and/or editing operation was triggered;
displaying the synthesized video file, which comprises the content of the video file to be processed on the first layer and the video additional element on the second layer;
and the bus system is configured to connect the memory and the processor so that the memory and the processor communicate.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the above-described aspects.
According to the above technical solutions, the embodiments of the application have the following advantages:
In the embodiments of the application, a video file processing interface is provided, comprising a video file playing area and an operation editing area. After the video file to be processed and its duration information are obtained, the video file to be processed is displayed in the video file playing area. During playing, dragging, or positioning of the video file to be processed, a video additional element adding and/or editing operation in the operation editing area can be received at any time node within the duration information interval, whereupon a second layer is generated above the first layer on which the video file to be processed in the video file playing area is located. The second layer carries the added video additional element, whose time node on the second layer corresponds to the time node at which the adding and/or editing operation was triggered on the video file to be processed. The synthesized video file, comprising the content of the video file to be processed on the first layer and the video additional element on the second layer, is then displayed. In this way, a new layer, namely the second layer, is added above the layer on which the video plays, and the new layer is bound to the playing layer so that the two stay synchronized in time. Because the video file and the video additional element sit on two separate layers, adding or editing an element only requires editing the newly generated layer, and the video stream does not need to be re-encoded after every edit: no waiting time is consumed, the user sees the video processing result in real time, and the efficiency of video processing is improved.
Drawings
FIG. 1 is a schematic structural diagram of the video processing system in the embodiment of the present application;
FIG. 2 is a schematic interface diagram of the presentation layer of the client in the video file processing method in the embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of the video file processing method in the embodiment of the present application;
FIG. 4 is a schematic interface diagram of receiving a video additional element adding operation in the video file processing method in the embodiment of the present application;
FIG. 5 is a schematic flowchart of adding video additional elements in the video file processing method in the embodiment of the present application;
FIG. 6 is a schematic interface diagram of acquiring image elements in the video file processing method in the embodiment of the present application;
FIG. 7 is a schematic interface diagram of acquiring text elements in the video file processing method in the embodiment of the present application;
FIG. 8 is a schematic interface diagram of acquiring an adjustment operation in the video file processing method in the embodiment of the present application;
FIG. 9 is a schematic flowchart of performing translation processing on a video additional element in the video file processing method in the embodiment of the present application;
FIG. 10 is a schematic interface diagram of acquiring a video additional element deleting operation in the video file processing method in the embodiment of the present application;
FIG. 11 is a schematic flowchart of performing a deleting operation on a video additional element in the video file processing method in the embodiment of the present application;
FIG. 12 is a schematic interface diagram showing a plurality of elements to be processed in the video file processing method in the embodiment of the present application;
FIG. 13 is a schematic diagram of an embodiment of the video processing apparatus in the embodiment of the present application;
FIG. 14 is a schematic diagram of an embodiment of the terminal device in the embodiment of the present application.
Detailed Description
The embodiments of the application provide a video file processing method, a related apparatus, and a device. The video file and the video additional elements are placed on two separate layers; when a video additional element is added or edited, only the newly generated layer needs to be edited, and the video stream does not have to be re-encoded after every edit. No waiting time is consumed, the user sees the video processing result in real time, and the efficiency of video processing is improved.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that the embodiments of the application apply to various clients that add custom elements to videos. Such a client may be an album or gallery client, or an image capturing client, the latter being a client that has both an image capturing function and an image processing function. As one example, after a video is shot, the video may be reworked through an album client to add a sticker and increase its interest; as another example, after a video is shot, it may be edited a second time directly in an image capturing client to add text. Because the currently adopted technology must regenerate a whole new video containing the target element after the target element is added, which costs a great deal of waiting time, a video editing scheme that requires no waiting time is urgently needed.
Based on the above needs, and to facilitate understanding, the present application provides a method for processing a video file. The client obtains a video file to be processed, which is displayed on a first layer, and generates a second layer according to the video file to be processed, where the second layer covers the first layer and corresponds to the duration information. During playing of the video file to be processed, when a video additional element is obtained at a first time node, a first video processing result is displayed, which includes displaying the video additional element on the second layer. That is, in the present application the video additional element is not added by redrawing the video file to be processed; instead, it is displayed on the second layer covering the first layer, so no waiting time is consumed and the efficiency of video processing is improved.
The clients may be web page clients or application program clients. The server is the background server of the video processing system, and may be a server cluster composed of one or more servers, a cloud computing center, or the like, which is not limited herein. The video file processing method provided in the embodiments of the present application is applied to the video processing system shown in FIG. 1, where FIG. 1 is a schematic structural diagram of the video processing system in the embodiment of the present application. It should be noted that the client is deployed on a terminal device shown in FIG. 1; as shown in the figure, the terminal device includes but is not limited to a tablet computer, a notebook computer, a palm computer, a mobile phone, a voice interaction device, and a personal computer (PC), which is not limited herein. The voice interaction device includes, but is not limited to, smart speakers and smart home appliances. After the client finishes video or image processing, it can upload the processed content to the server to enable sharing, forwarding, and similar functions.
The client and the server can be connected through a wireless network, where the wireless network uses standard communication technologies and/or protocols. The network is typically the Internet, but can be any network, including but not limited to any combination of a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), or a mobile, private, or virtual private network. In some embodiments, custom or dedicated data communication technologies may be used in place of, or in addition to, the technologies described above.
Although only five terminal devices and one server are shown in FIG. 1, it should be understood that the example in FIG. 1 is only intended to aid understanding of this solution; the actual numbers of terminal devices and servers should be determined flexibly according to the situation. In the embodiments of the present application, the method for processing a video file is described taking a communication-type client as an example.
In some embodiments of the present application, the client may provide editing operations on a video additional element such as enlarging, reducing, and translating. Here the presentation layer of the client is introduced; please refer to FIG. 2, which is a schematic interface diagram of the presentation layer of the client in the video file processing method in the embodiment of the present application. The presentation layer of the client may be divided into four parts: a video file playing area, a video additional element display area, an element editing area, and an operation editing area. The video file playing area, the video additional element display area, and the element editing area may overlap with one another, so that the user can intuitively perceive the position of an added video additional element. The video file playing area is responsible for playing the video to be processed and corresponds to the first layer. The video additional element display area is used for displaying all video additional elements; its whole background may be transparent, and every element to be processed in it can support a click operation by the user: when the user clicks an element to be processed in this area, playing of the video to be processed is paused and an editing flow is started to edit that element. This area corresponds to the second layer. The element editing area may have a transparent background and contains an editable view (explained in detail with FIG. 8 below) that supports user editing; the editable view is hidden while the video to be processed is playing and appears only in the editing state. The operation editing area may be used to receive the various video additional element adding operations input by the user; a sketch of one way these areas could be stacked is given below. It should be understood that the example in FIG. 2 is only an example and is not intended to define the actual form of the product.
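For convenience of understanding, the following minimal sketch (in Objective-C; all names are illustrative assumptions, not taken from the patent) shows one way the overlapping presentation areas could be stacked:

#import <UIKit/UIKit.h>

// Illustrative sketch of the four-part presentation layer described above.
static UIView *BuildPresentationAreas(void) {
    CGRect bounds = [UIScreen mainScreen].bounds;
    UIView *playArea = [[UIView alloc] initWithFrame:bounds];    // video file playing area (first layer)
    UIView *stickerArea = [[UIView alloc] initWithFrame:bounds]; // video additional element display area (second layer)
    stickerArea.backgroundColor = [UIColor clearColor];          // whole background transparent
    UIView *editArea = [[UIView alloc] initWithFrame:bounds];    // element editing area holding the editable view
    editArea.backgroundColor = [UIColor clearColor];
    editArea.hidden = YES;                                       // hidden while playing, shown only in the editing state
    [playArea addSubview:stickerArea];                           // the three areas overlap one another
    [playArea addSubview:editArea];
    return playArea; // the operation editing area is laid out separately, e.g. below the player
}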
Beyond the presentation layer of the client, the data layer, logic layer, and control layer are described below. First, the data layer: it is responsible for providing the corresponding data and is managed through data structures. Specifically, when the video additional element is an image element, the data structure adopted may differ from the one adopted when the video additional element is a text element; for example, an image element may use a picture album moment video image model (PGMomentVideoImageModel), while a text element may use a picture album moment text model (PGMomentTextModel). Further, the embodiment of the present application also discloses the relevant fields of these two data structures:
PGMomentResourceType resourceType; // resource type
NSInteger resourceID; // resource ID
NSString *resourceName; // resource name
NSURL *fileURL; // remote address
NSString *localFilePath; // local resource address
CGFloat start; // start time
CGFloat duration; // duration
CGSize targetSize; // size
CGPoint centerOffset; // center point offset, range 0 ~ 1.0
CGFloat rotate; // rotation angle
The above is an example of the fields used by the PGMomentVideoImageModel data structure to record the data content when the video additional element is an image element; since each field carries a corresponding comment on its right, the fields are not explained one by one here.
NSString *text; // text
NSString *translation; // translation
UIColor *textColor; // text color
UIFont *textFont; // text font
UIFont *transFont; // translation font
BOOL needShowTrans; // whether to show the translation
PGMomentTextModelType textType; // plain text or location text
CGFloat start; // start time
CGFloat duration; // duration
CGFloat offsetX; ///< center point x offset, range [-0.5, 0.5]
CGFloat offsetY; ///< center point y offset, range [-0.5, 0.5]
UIImageView *iconView; ///< location icon
The above is an example of the fields used by the PGMomentTextModel data structure to record the data content when the video additional element is a text element; since each field carries a corresponding comment on its right, the fields are not explained one by one here. It should be noted that, when the video additional element is a text element, the text element may be normal text or positioning text, that is, text used for marking a location.
Since the client can treat the video additional element as a sticker on the second layer regardless of whether it is an image element or a text element, it can manage the sticker with a picture album moment item layer (PGMomentItemLayer). The following shows the relevant fields when the video additional element is a dynamic element:
CALayer *contentLayer; // content
CGFloat startTime; // start time
CGFloat durationTime; // duration
CGFloat offsetX; // x offset
CGFloat offsetY; // y offset
CGFloat rotate; // angle
CGFloat animationRate; // animation rate
CAKeyframeAnimation *animation; // animation
id dataModel; // data, points to the videoImageModel
It should be understood that the PGMomentItemLayer may be used in combination with the content of the PGMomentVideoImageModel or the PGMomentTextModel respectively: the PGMomentItemLayer leans toward recording the presentation form of the video additional element, while the PGMomentVideoImageModel and PGMomentTextModel lean toward recording its specific content. It should be noted that the above example is only provided for convenience of understanding this solution and is not intended to limit it; PGMomentItemLayer, PGMomentVideoImageModel, and PGMomentTextModel are only code names, the data structures may be named otherwise in an actual product, and the content actually included in each data structure is likewise not limited to the above examples and should be determined flexibly according to the actual situation.
The logic layer of the client is introduced next. The logic layer is responsible for the operations corresponding to generating and editing the third layer, and mainly exposes two interfaces: one is responsible for generating the corresponding third layer, and the other is responsible for refreshing the third layer upon receiving an editing instruction input by the user, such as a translation, zoom, or delete instruction. The third layer carries a video additional element and is a layer displayed on the second layer; since the third layer is described in detail in the following embodiments, the description is not expanded here. For example, the client may manage the third layer through a picture album moment item layer handler (PGMomentItemLayerHandler): the first interface builds the third layer from a PGMomentVideoImageModel, and the second refreshes the third layer's display given part of its parameters, namely its start time and display duration. It should be understood that these examples are only provided for convenience of understanding this solution and are not intended to limit it.
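The two interface signatures in the published text are garbled; a plausible reconstruction, with names inferred from context ("itemlayerfish" is read as itemLayerRefresh), is:

#import <UIKit/UIKit.h>

@class PGMomentItemLayer, PGMomentVideoImageModel; // data-layer types introduced above

// Reconstructed sketch; the exact names in the original code are assumptions.
@interface PGMomentItemLayerHandler : NSObject
// First interface: generates the third layer for a given video additional element.
+ (PGMomentItemLayer *)itemLayerWithVideoImage:(PGMomentVideoImageModel *)videoImageModel;
// Second interface: refreshes the third layer's display given its start time and display duration.
+ (void)itemLayerRefresh:(PGMomentItemLayer *)itemLayer
                   start:(CGFloat)start
                duration:(CGFloat)duration;
@end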
Finally, the control layer of the client is introduced. The control layer is responsible for managing the cooperation among the above layers; as one example, this management may be implemented by a picture album moment item layer manager (PGMomentItemLayerManager). It should be understood that the above description of each layer of the client and the examples of specific implementations are only provided to facilitate understanding of this solution; in a concrete product the client need not be divided into exactly the above four layers, and may have fewer or more layers. None of the above examples limits this solution.
With reference to FIG. 3, the method for processing a video file in the present application is described below. An embodiment of the method for processing a video file in the embodiment of the present application includes:
101. The client provides a video file processing interface, where the video file processing interface comprises a video file playing area and an operation editing area, and the operation editing area comprises at least one video additional element operation control.
In this embodiment, when a user enters the display interface of the client, the client provides a video file processing interface to the user. The video file processing interface includes a video file playing area and an operation editing area; the operation editing area includes at least one video additional element operation control, which is used to receive adding and/or editing operations on video additional elements. Further, a video additional element may be a text element, an image element, or another type of video additional element, which is not limited herein.
102. The client obtains a video file to be processed and the duration information of the video file to be processed, where the video file to be processed comprises at least one frame of image to be processed, the video file to be processed and the duration information are displayed in the video file playing area, and the video file to be processed is located on a first layer.
In this embodiment, the client obtains a video file to be processed and the duration information of the video file to be processed, where the video file to be processed includes at least one frame of image to be processed and is displayed in the video file playing area on a first layer; optionally, the video file to be processed and the duration information are both displayed in the video file playing area. Further, the first layer is the layer that plays the video file to be processed. For example, when this solution runs on Apple's operating system (iOS), the first layer may be an AVPlayerLayer; when it runs on the Android operating system, the first layer may be another type of layer, which is not enumerated here. Specifically, the client may obtain the video file to be processed by shooting it with its shooting module, or by selecting it from a number of stored videos, which is not limited herein. Optionally, the client may display the total duration and the playing progress bar (i.e., the duration information) of the video file to be processed through the first layer; in addition, the client may display controls for receiving pause and play instructions for the video file to be processed, so that the user can pause and resume playing at any time.
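For convenience of understanding, the following is a minimal sketch of how such a first layer could be created with AVFoundation; the function name and host-layer parameter are illustrative assumptions, not the patent's code:

#import <AVFoundation/AVFoundation.h>

// Sketch: create the first layer that plays the video file to be processed.
static AVPlayerLayer *MakeFirstLayer(NSURL *videoURL, CALayer *hostLayer) {
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:videoURL]; // the video file to be processed
    AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
    AVPlayerLayer *firstLayer = [AVPlayerLayer playerLayerWithPlayer:player];
    firstLayer.frame = hostLayer.bounds; // fills the video file playing area
    [hostLayer addSublayer:firstLayer];
    // item.asset.duration supplies the duration information shown with the progress bar.
    return firstLayer;
}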
103. The client receives a video additional element adding and/or editing operation in the operation editing area, triggered at any time node within the duration information interval during playing, dragging and/or positioning of the video file to be processed.
In this embodiment, after the client finishes loading the video file to be processed, the video file can be played on the first layer. While the client is playing the video file to be processed, an adding and/or editing operation for a video additional element can be received through the operation editing area at any time node within the duration information interval. Optionally, if a progress bar corresponding to the duration information is displayed in the video file playing area, the user may control playback by dragging the progress bar, and the client may also receive adding and/or editing operations through the operation editing area while the progress bar is being dragged (i.e., during dragging of the video file to be processed). The user may also perform a positioning operation on the progress bar, for example by clicking it, and during positioning of the progress bar (i.e., one form of positioning the video file to be processed), adding and/or editing operations may likewise be received through the operation editing area. Further optionally, if controls for receiving pause and play instructions are displayed on the first layer, the client may obtain the video additional element at the first time node while the video file to be processed is playing, and may also receive adding and/or editing operations while playback is paused (another form of positioning the video file to be processed). The client may further receive adding and/or editing operations during any combination of playing, dragging, and/or positioning of the video file to be processed; for example, operations may be received while the video is being played and positioned. Other scenarios are not enumerated here, and adding and/or editing operations may also be received in other ways, which is not limited herein.
104. In response to the video additional element adding and/or editing operation, the client generates a second layer above the first layer on which the video file to be processed in the video file playing area is located, where the added video additional element is located on the second layer, and the time node of the added video additional element on the second layer corresponds to the time node of the video file to be processed on the first layer at which the adding and/or editing operation was triggered.
In this embodiment, after receiving an adding and/or editing operation for a video additional element, the client may obtain the video additional element and then generate a second layer according to the video file to be processed. Specifically, the second layer may be generated above the first layer on which the video file to be processed in the video file playing area is located; the second layer covers the first layer, may be transparent, may have the same size as the first layer, and is present for the same duration as the video file to be processed. Specifically, after generating the second layer, the client may bind it to the video file to be processed, that is, synchronize the duration information of the video file to be processed with the second layer, thereby ensuring that the second layer and the video file to be processed appear and disappear at the same time. More specifically, a one-to-one correspondence is formed between the second layer and the video file to be processed, through which the video file to be processed synchronizes its current playing progress to the second layer in real time. For example, when this solution runs on iOS, the second layer may be an AVSynchronizedLayer, and the client may bind the AVSynchronizedLayer to the video file to be processed (the AVPlayerItem), thereby completing the loading process of the video file to be processed. It should be understood that the second layer may also take other forms, such as an AVVideoCompositionCoreAnimationTool; further, the AVSynchronizedLayer is suited to previewing the first video processing result mentioned later, while the AVVideoCompositionCoreAnimationTool is suited to storing that result. Other representations of the second layer are not exhaustively listed here.
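For convenience of understanding, a minimal sketch of generating the second layer and binding it to the video file to be processed (the AVPlayerItem) might read as follows; the function name is an assumption:

#import <AVFoundation/AVFoundation.h>

// Sketch: the second layer covers the first layer and follows the item's timeline.
static AVSynchronizedLayer *MakeSecondLayer(AVPlayerItem *item, AVPlayerLayer *firstLayer) {
    AVSynchronizedLayer *secondLayer = [AVSynchronizedLayer synchronizedLayerWithPlayerItem:item];
    secondLayer.frame = firstLayer.frame; // same size as the first layer
    [firstLayer.superlayer addSublayer:secondLayer]; // placed above the first layer
    return secondLayer; // transparent; sublayers added to it stay in sync with playback
}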
After generating the second layer, the client may display the added video additional element on the second layer. Specifically, after acquiring the added video additional element, the client may composite it onto the second layer, where the time node of the added element on the second layer corresponds to the time node of the video file to be processed on the first layer at which the adding and/or editing operation was triggered. In particular, the time node at which the operation was triggered may be taken as the starting display time node of the added element on the second layer, achieving a real-time "add now, play now" effect, as the sketch below illustrates. Optionally, the starting and ending display time nodes of the added element may later be adjusted through an adjustment operation on its display duration, which is not limited herein.
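Continuing the sketch, and assuming the added element has already been built as a CALayer, its starting display time node can be aligned with the trigger node roughly as follows (illustrative, not the patent's code):

#import <AVFoundation/AVFoundation.h>

// Sketch: the element starts displaying at the node where the add operation was triggered.
static void AddElementLayer(AVSynchronizedLayer *secondLayer,
                            CALayer *elementLayer,
                            AVPlayer *player) {
    CFTimeInterval triggerNode = CMTimeGetSeconds(player.currentTime); // trigger time node
    elementLayer.beginTime = triggerNode; // interpreted against the player item's timeline
    [secondLayer addSublayer:elementLayer]; // "add now, play now"
}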
105. The client displays the synthesized video file, which comprises the content of the video file to be processed on the first layer and the video additional element on the second layer.
In this embodiment, after the second layer has been generated and the added video additional elements have been composited onto it, the synthesized video file, comprising the content of the video file to be processed on the first layer and the video additional elements on the second layer, may be displayed. Because the second layer is bound to the first layer, the client can display the added video additional elements on the second layer while playing the video file to be processed on the first layer. Further, after the first video processing result is presented, it may be stored; for example, the content of the AVSynchronizedLayer may be transferred to an AVVideoCompositionCoreAnimationTool (sketched below), after which a storage operation can be performed.
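When the result is stored rather than previewed, a minimal sketch of handing the layers to an AVVideoCompositionCoreAnimationTool, with illustrative names, might read:

#import <AVFoundation/AVFoundation.h>

// Sketch: attach the element layer tree to an export-time video composition.
static void AttachAnimationTool(AVMutableVideoComposition *composition,
                                CALayer *videoLayer,
                                CALayer *elementLayer) {
    CALayer *parentLayer = [CALayer layer];
    parentLayer.frame = videoLayer.frame;
    [parentLayer addSublayer:videoLayer];   // frames of the video file to be processed
    [parentLayer addSublayer:elementLayer]; // content carried by the second layer
    composition.animationTool = [AVVideoCompositionCoreAnimationTool
        videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                inLayer:parentLayer];
}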
In the embodiments of the application, a video file processing interface is provided, comprising a video file playing area and an operation editing area. After the video file to be processed and its duration information are obtained, the video file to be processed is displayed in the video file playing area. During playing, dragging, or positioning of the video file to be processed, a video additional element adding and/or editing operation in the operation editing area can be received at any time node within the duration information interval, whereupon a second layer is generated above the first layer on which the video file to be processed in the video file playing area is located. The second layer carries the added video additional element, whose time node on the second layer corresponds to the time node at which the adding and/or editing operation was triggered on the video file to be processed. The synthesized video file, comprising the content of the video file to be processed on the first layer and the video additional element on the second layer, is then displayed. In this way, a new layer, namely the second layer, is added above the layer on which the video plays, and the new layer is bound to the playing layer so that the two stay synchronized in time. Because the video file and the video additional element sit on two separate layers, adding or editing an element only requires editing the newly generated layer, and the video stream does not need to be re-encoded after every edit: no waiting time is consumed, the user sees the video processing result in real time, and the efficiency of video processing is improved.
Optionally, on the basis of the embodiment corresponding to FIG. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, generating, by the client in response to the video additional element adding and/or editing operation, a second layer above the first layer on which the video file to be processed in the video file playing area is located includes:
controlling, by the client in response to a video additional element adding operation, the video file to be processed to pause playing, where the adding operation corresponds to the identifier of the added video additional element, and the added video additional element is a dynamic text element or a dynamic image element; and
generating, by the client, a second layer above the first layer on which the video file to be processed in the video file playing area is located;
and displaying, by the client, the synthesized video file comprising the content of the video file to be processed on the first layer and the video additional element on the second layer includes:
displaying, by the client, the content of the video file to be processed on the first layer, and displaying the added video additional element on the second layer.
In this embodiment, the client may further display the operation editing area to the user while the video file to be processed is being played, dragged, or positioned through the video file playing area, so as to receive through it a video additional element adding operation input by the user, where the video additional element is a dynamic text element or a dynamic image element. In one implementation, the operation editing area may be fixedly displayed on the display interface of the client, and may be placed below, above, or to the left or right of the video playing area. In another implementation, the client may pop up the operation editing area only when the user clicks a preset position on the display interface, or places the cursor at a preset position corresponding to where the operation editing area is displayed; the display manner of the operation editing area is not specifically limited here.
Specifically, the operation editing area may include a receiving control for a dynamic text element addition operation corresponding to a dynamic text element and a receiving control for a dynamic image element addition operation corresponding to a dynamic image element, so that a user may input a video additional element adding and/or editing operation through the operation editing area; the client may then display a receiving interface for the text video additional element addition operation, or a receiving interface for the image video additional element addition operation, so as to receive the video additional element addition operation through the operation editing area. The video additional element addition operation carries an identifier of the video additional element, and the identifier uniquely identifies the corresponding video additional element. It may be expressed as a digital code, for example where a digital code corresponding to each image element is stored on the client, such as "000001", "000002", or "000003"; as a character code, for example where a character code corresponding to each image element is stored on the client and the character part of the code is associated with the series to which the image element belongs, such as "XR0001" or "QX0001"; or as a binary code, for example one describing text information input by a user. The identifier of the video additional element may also take other forms, which are not limited herein.
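For illustration only, the identifier forms described above might be modeled in Swift as follows; the type and case names are hypothetical and not part of the scheme:

```swift
import Foundation

// Hypothetical model of the identifier forms described above; all names are illustrative.
enum VideoElementIdentifier {
    case numeric(String)    // digital code, e.g. "000001"
    case series(String)     // character code whose prefix ties the element to its series, e.g. "XR0001"
    case binary(Data)       // binary code, e.g. encoded user-entered text information
}

// Example: the digital code stored on the client for one image element.
let stickerIdentifier = VideoElementIdentifier.numeric("000001")
```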
After the client receives the video additional element adding operation through the operation editing area, the client can control the video file to be processed to pause playing in response to the operation. Specifically, if the client receives the video additional element adding operation while the video file to be processed is playing, controlling the video file to pause playing means switching it from the playing state to the paused state; optionally, since the client may also display controls for receiving a pause instruction and a play instruction for the video file to be processed, if the client receives the video additional element adding operation while the video file to be processed is already paused, the client maintains the paused state of the video file to be processed.
After determining the video additional element according to the identifier carried in the video additional element adding operation, the client can generate a second layer above the first layer where the video file to be processed is located in the video file playing area, so that the content of the video file to be processed is displayed on the first layer and the added video additional element is displayed on the second layer. Further, the video additional element may be displayed in a video additional element display area of the second layer, where the video additional element display area may coincide with the display area of the video file to be processed, or may be a partial area of that display area, which is not limited herein.
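As a minimal Swift sketch of this step on iOS, assuming the first layer is an AVPlayerLayer driven by an AVPlayer (an assumption consistent with the AVSynchronizedLayer mentioned later, though the text does not mandate it), the pause control and the generation of the second layer might look as follows:

```swift
import AVFoundation

// Sketch: pause the video file to be processed and generate the second layer
// above the first layer, bound to the player item's timeline.
func makeSecondLayer(player: AVPlayer, playerLayer: AVPlayerLayer) -> AVSynchronizedLayer? {
    guard let item = player.currentItem else { return nil }
    if player.timeControlStatus == .playing {
        player.pause()                      // an already-paused item simply keeps its state
    }
    let syncLayer = AVSynchronizedLayer(playerItem: item)
    syncLayer.frame = playerLayer.bounds    // here the element display area coincides with the playing area
    playerLayer.addSublayer(syncLayer)      // second layer above the first layer
    return syncLayer
}
```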
To further understand the present solution, please refer to fig. 4, which is a schematic interface diagram of an operation of adding a video additional element in the video file processing method according to an embodiment of the present application. In fig. 4, the operation editing area is fixedly displayed below the video playing area. A1 refers to the video additional element display area, which coincides with the video playing area; A2 refers to the operation editing area; A3 refers to a control for receiving a text element addition operation; and A4 refers to a control for receiving an image element addition operation. When a user clicks A3 or A4, the video played in A1 pauses, and an entry interface for the identifier of the corresponding image element, or for text information, may then be displayed, so that after the identifier of the image element or the text information is acquired, the added video additional element is displayed in A1. It should be understood that the example in fig. 4 is only for convenience of understanding the scheme and is not intended to limit the scheme.
In the embodiment of the application, after the video additional element adding operation is received at the first time node, the video file to be processed can be controlled to pause playing in response to the operation, a second layer is generated above the first layer where the video file to be processed in the video file playing area is located, the content of the video file to be processed is displayed on the first layer, and the added video additional element is displayed on the second layer, where the added video additional element is a dynamic text element or a dynamic image element. In this way, a user does not need to repeatedly adjust the video progress bar until a proper video frame is selected, manually pause the video, and then add the dynamic video additional element; instead, the dynamic video additional element can be added to the video at any time during playing, ensuring the flexibility and convenience of adding dynamic video additional elements.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, the displaying, by the client, the added video additional element in the video file on the second layer includes:
the client acquires an image sequence corresponding to the added video additional element, wherein the image sequence comprises at least one frame of image;
the client generates a third layer above the second layer where the added video additional element is located, and the content attribute of the third layer is assigned the first frame image in the image sequence;
And the client displays the image sequence corresponding to the added video additional element on the third layer of the video file playing area.
in this embodiment, when the added video additional element is an image element, the client obtains an image sequence corresponding to the added video additional element, where the image sequence includes at least one frame of image, that is, the added video additional element may be a static element including only one frame of image, or may also be a dynamic element including at least two frames of images, and further, the content in the added video additional element may be a picture, a character, or the like, which is not limited herein. Still further, the format of the images in the image sequence corresponding to the added video additional elements includes, but is not limited to, Portable Network Graphics (png), Bitmap (bmp), or other picture formats, among others.
After the client acquires the image sequence corresponding to the added video additional element, a third layer corresponding to the added video additional element can be generated above the second layer, the content attribute of the third layer is assigned the first frame image in the image sequence, and the image sequence corresponding to the added video additional element can then be displayed on the third layer in the video file playing area. Specifically, if the image sequence includes only one frame of image, the content attribute of the third layer is assigned that single frame; if the image sequence includes at least two frames of images, the content attribute of the third layer is first assigned the first of those frames, so as to generate the map corresponding to the content of the added video additional element. The size of the third layer may be consistent with the presentation size of the added video additional element; for example, the third layer may be embodied as a CALayer, where a CALayer is the minimum content presentation unit in a multimedia synchronization layer (AVSynchronizedLayer).
The time node at which the video additional element adding and/or editing operation is triggered on the video file to be processed in the first layer (hereinafter referred to as the "first time node") can be regarded as the initial display time node of the added video additional element, and the user can also input the termination display time of the added video additional element through the client, so that the client can acquire the display duration of the added video additional element. The image sequence corresponding to the added video additional element can then be displayed, through the third layer above the second layer, in the video additional element display area. Specifically, the added video additional element may be displayed from the first time node onward for the acquired display duration; of course, the user may also adjust the specific time point corresponding to the first time node, for example moving it earlier or later.
More specifically, in one case, if the added video additional element is a static element, the image sequence corresponding to it includes only one frame of image; that single frame may be assigned to the content attribute of the third layer and then displayed for the entire display duration. Further, the content attribute, the first time node, and the display duration of the third layer may all be stored in the data structures of the third layer. For example, if the added video additional element is an image element, the content attribute of the third layer may be stored in the data structure PGMomentImageModel, and both the initial display time node and the display duration of the third layer may be stored in the data structure PGMomentTitleLayer; if the added video additional element is a text element, the content attribute of the third layer may be stored in the data structure PGMomentTextModel, with the initial display time node and the display duration again stored in PGMomentTitleLayer; and so on, which is not specifically limited herein.
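The internal shapes of these data structures are not disclosed; purely as an illustration, they might separate content from timing roughly as follows (every field is inferred, not taken from the source code):

```swift
import CoreGraphics

// Hypothetical shapes of the data structures named above; fields are inferred.
struct PGMomentImageModel {
    var contents: CGImage?          // content attribute of the third layer for an image element
}

struct PGMomentTextModel {
    var text: String                // content attribute of the third layer for a text element
}

struct PGMomentTitleLayer {         // modeled here as a plain struct for illustration
    var beginTime: CFTimeInterval   // initial display time node (the first time node)
    var duration: CFTimeInterval    // display duration
}
```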
In another case, if the image sequence corresponding to the added video additional element includes at least two frames of images, that is, the added video additional element is a dynamic element, the third layer may be configured with an animation, and the assignment of the content attribute of the third layer is updated at a preset frequency. Specifically, the at least two frames of images included in the image sequence may be ordered according to the display order, so that each frame of image is assigned in turn to the content attribute of the third layer. More specifically, if the image sequence includes two frames of images, the two frames may be ordered to obtain a first frame image and a second frame image; in one implementation, the first frame image and the second frame image may be assigned to the third layer alternately at the preset frequency until the display duration of the third layer ends; in another implementation, the first frame image may be assigned to the third layer, then the second frame image, after which the assignment of the third layer is not updated until the entire display duration is over.
To further understand the present solution, please refer to fig. 5, which is a schematic flowchart of adding a video additional element in the method for processing a video file according to the embodiment of the present application; fig. 5 takes the added video additional element being a dynamic element as an example. B1, the client obtains the video file to be processed; B2, the client generates a multimedia synchronization layer, AVSynchronizedLayer (namely, an example of the second layer), according to the video file to be processed; B3, the client plays the video file to be processed; B4, the client receives the video additional element adding operation through the operation editing area, where the operation carries the identifier of the added video additional element, and the client controls the video file to be processed to pause playing in response to the operation; B5, the client acquires the png sequence (namely, an example of the image sequence) corresponding to the added video additional element; B6, the client generates a CALayer (namely, an example of the third layer) and assigns the first frame image in the png sequence to its content attribute; B7, the client configures a CAKeyframeAnimation for the CALayer; B8, the CAKeyframeAnimation updates the content attribute of the third layer at a preset frequency, configuring the data structure of the third layer as an animation; specifically, the CAKeyframeAnimation assigns each frame image from the png sequence in turn to the content attribute of the third layer at the frequency configured by the client, and the client also configures the start time and the duration in the data structure of the third layer according to the first time node and the display duration; B9, the client adds the generated CALayer to the generated AVSynchronizedLayer. It should be understood that the example in fig. 5 is only for convenience of understanding the present solution and is not intended to limit it.
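A minimal Swift sketch of steps B5 to B9 follows, assuming `syncLayer` is the second layer generated as in the earlier sketch and `frames` is the decoded png sequence; the function name is illustrative:

```swift
import AVFoundation
import QuartzCore

// Sketch: generate the third layer, configure its keyframe animation,
// and add it to the second layer (steps B5–B9).
func addAnimatedElement(to syncLayer: AVSynchronizedLayer,
                        frames: [CGImage],
                        startTime: CFTimeInterval,        // the first time node
                        displayDuration: CFTimeInterval,
                        frame: CGRect) -> CALayer {
    // B6: third layer; its content attribute is assigned the first frame image.
    let elementLayer = CALayer()
    elementLayer.frame = frame
    elementLayer.contents = frames.first

    // B7/B8: a CAKeyframeAnimation steps through the png sequence.
    let animation = CAKeyframeAnimation(keyPath: "contents")
    animation.values = frames
    animation.calculationMode = .discrete                 // hard cuts between frames
    animation.duration = displayDuration
    // Inside an AVSynchronizedLayer a beginTime of 0 is special-cased,
    // so AVCoreAnimationBeginTimeAtZero stands in for the item's time zero.
    animation.beginTime = startTime == 0 ? AVCoreAnimationBeginTimeAtZero : startTime
    animation.isRemovedOnCompletion = false               // keep the element after it finishes
    elementLayer.add(animation, forKey: "contents")

    // B9: add the generated CALayer to the generated AVSynchronizedLayer.
    syncLayer.addSublayer(elementLayer)
    return elementLayer
}
```

Because the sublayers of an AVSynchronizedLayer take their timing from the player item rather than from wall-clock time, dragging or repositioning the video automatically keeps the element in step with the first layer.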
In the embodiment of the application, an image sequence corresponding to the added video additional element is obtained, a third layer corresponding to the video additional element is generated with its content attribute assigned the first frame image in the image sequence, and the image sequence corresponding to the video additional element is displayed on the third layer of the video file playing area according to the first time node and the display duration. In this way, a specific implementation for displaying the added video additional element in the video additional element display area is provided, improving the realizability of the scheme; and because the added video additional element can comprise at least two frames of images, that is, it can take the form of a video, adopting an image sequence effectively avoids the performance consumption that decoding a video would cause.
optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, in response to a video additional element adding and/or editing operation, a client generates a second layer above a first layer where a to-be-processed video file in a video file playing area is located, where the method includes:
The client controls the video file to be processed to pause playing in response to the video additional element adding operation, where the video additional element adding operation corresponds to the identifier of the video additional element, and the video additional element is a static text element or a static image element;
and generating a second layer above the first layer where the video file to be processed in the video file playing area is located.
In this embodiment, the client may further display the operation editing area to the user in the process of playing, dragging, or positioning the video file to be processed through the video file playing area, so as to receive a video additional element adding operation input by the user through the operation editing area, where the video additional element is a static text element or a static image element.
Specifically, the operation editing area may include a receiving control for a static text element addition operation corresponding to the static text element and a receiving control for a static image element addition operation corresponding to the static image element, so that a user may input a video additional element adding and/or editing operation through the operation editing area; after the client receives the video additional element adding operation through the operation editing area, the client may control the video file to be processed to pause playing in response to the operation.
After determining the static element to be added according to the identifier of the video additional element, the client may generate a second layer above the first layer where the video file to be processed in the video file playing area is located, and then display the content of the video file to be processed on the first layer and display the added video additional element on the second layer.
In the embodiment of the application, through the above manner, a user does not need to repeatedly adjust the video progress bar until a proper video frame is selected, manually pause the video, and then add the static video additional element; instead, the static video additional element can be added to the video at any time during playing, ensuring the flexibility and convenience of adding static video additional elements. In addition, not only an adding manner for dynamic elements but also one for static elements is provided, expanding the implementation scenarios of the scheme.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, in response to the video additional element adding and/or editing operation, the client generates a second layer above the first layer where the video file to be processed in the video file playing area is located, where the method includes:
The method comprises the steps that a client receives a map adding operation at a first time node, wherein the map adding operation carries an identifier of an image element, and the image element belongs to a video additional element;
The client responds to the map adding operation and controls the video file to be processed to pause playing;
and generating the second layer above the first layer where the video file to be processed in the video file playing area is located.
In this embodiment, because a receiving control for the element selection operation corresponding to the image element can be displayed in the operation editing area on the client, after the image element selection operation is received, the client may enter a receiving interface for the map adding operation so as to receive the map adding operation at the first time node. After the map adding operation is received, the client may control the video file to be processed to pause playing and display a receiving interface for the identifier of the image element, so as to receive the identifier of the image element input by the user, where the identifier of the image element may be expressed as a digital code, a character code, or another form. The map corresponding to the image element may be a static map or a dynamic map. Specifically, a plurality of maps can be displayed on the receiving interface for the identifier of the image element, so that a user can enter the identifier of an image element through a selection operation; more specifically, when the map corresponding to an image element is a dynamic map, the client displays the first of the at least two frames of images included in the dynamic map. Furthermore, because the number of maps that can be displayed on the receiving interface is limited, the user can switch the maps displayed there through a sliding operation and then enter the identifier of the image element. After the user inputs the identifier of the image element, the client is considered to have acquired the identifier carried in the map adding operation, and the corresponding image element can then be acquired according to that identifier.
optionally, after acquiring the identifier of the image element, the client may further display a display duration entry interface, specifically, the client may default the first time node as the initial play time, so that the user may enter the display duration of the image element by selecting the play termination time; furthermore, the user may also select the starting playing time of the image element, that is, the user may enter the display duration of the image element by selecting the starting playing time and the ending playing time. Specifically, the user may select the end play time and/or the start play time by a click operation, a slide operation, or the like.
After the client acquires the image element, the client may adjust the to-be-processed video file from the pause playing state to the playing state, and display the image element at a preset position in the video additional element display area, where the video additional element display area and the video playing area may coincide, and the preset position may be any position of the video additional element display area, for example, an upper left corner, an upper right corner, a lower left corner, a lower right corner, a middle position, or any other position of the video additional element display area.
Specifically, the client may regard the playing stage of the entire video file to be processed as the display duration, and then display the image elements along with the playing of the video file to be processed in the playing process of the entire video file to be processed; or the period from the first time node to the end of playing the video file to be processed may be regarded as the display duration, and the image elements are displayed from the first time node to the end of playing the video file to be processed; optionally, if the client may receive the display duration entered by the user, the image element and the like are displayed within the display duration entered by the user, which is not limited herein.
More specifically, if the image element is a static element, the client may continuously display the single frame image corresponding to the image element during its display duration; if the image element is a dynamic element, that is, the image sequence corresponding to the image element includes at least two frames of images, the at least two frames may be displayed repeatedly during the display duration, or the image element may stay on the last of the at least two frames after the sequence has been displayed once. As an example, if the display duration is 3 seconds, the image element includes two frames of images, and 6 frames can be displayed within the 3 seconds, then in one implementation the two frames can be displayed repeatedly 3 times; in another implementation, the latter of the two frames can be displayed continuously after the sequence has been displayed once; and so on.
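Under the same assumptions as the sketches above (a CALayer third layer animated over its contents attribute), the two display strategies of the 3-second, two-frame example might be configured like this; `loadFrames()` is a hypothetical helper:

```swift
import QuartzCore

// Sketch: two display strategies for a 2-frame element shown for 3 seconds.
let frames: [CGImage] = loadFrames()        // hypothetical helper returning the 2 frames
let displayDuration: CFTimeInterval = 3.0
let cycleDuration: CFTimeInterval = 1.0     // one pass through both frames (6 frames in 3 s)

let animation = CAKeyframeAnimation(keyPath: "contents")
animation.values = frames
animation.calculationMode = .discrete
animation.isRemovedOnCompletion = false

// Strategy 1: repeat the 2-frame sequence until the display duration ends.
animation.duration = cycleDuration
animation.repeatCount = Float(displayDuration / cycleDuration)   // 3 passes

// Strategy 2 (alternative): play the sequence once, then stay on the last frame.
// animation.duration = cycleDuration
// animation.repeatCount = 1
// animation.fillMode = .forwards           // holds the final frame for the rest of the time
```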
To further understand the present solution, please refer to fig. 6, which is a schematic interface diagram of acquiring an image element in the video file processing method in the embodiment of the present application. Fig. 6 includes four sub-diagrams (a), (b), (c), and (d). Sub-diagram (a) of fig. 6 is similar to fig. 4 described above and is not repeated here. When a user clicks A4, the client is regarded as starting to receive the user's map adding operation, and sub-diagram (b) of fig. 6 is entered. Because the client may further classify the preset image elements, C1 refers to image elements of an expression class and C2 refers to image elements of a pendant class; when the user clicks C2, sub-diagram (c) of fig. 6 may be entered to acquire the identifier of the image element in the map adding operation, where C3 shows multiple pendants. When the user clicks one target pendant in C3, the selection operation for that pendant is input; when the user then clicks C4 to confirm, the client acquires the identifier of the image element and enters sub-diagram (d) of fig. 6, in which the image element is displayed. Of course, the interface of sub-diagram (b) of fig. 6 may not exist in an actual product; instead, sub-diagram (c) of fig. 6 may be entered directly when the user clicks C2. It should be understood that the example in fig. 6 is only for convenience of understanding the present solution and is not intended to limit it.
in the embodiment of the application, a map adding operation is received at a first time node, wherein the map adding operation carries an identifier of an image element, a video file to be processed is controlled to pause playing in response to the map adding operation, and the image element is displayed in a video additional element display area. By the method, a specific implementation mode that the user adds the image elements to the video file to be processed is provided, and the realizability of the scheme is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, in response to the video additional element adding and/or editing operation, the client generates a second layer above the first layer where the video file to be processed in the video file playing area is located, where the method includes:
the method comprises the steps that a client receives a text adding operation at a first time node, wherein the text adding operation carries an identifier of a text element, and the text element belongs to the video additional elements;
the client responds to the text adding operation and controls the video file to be processed to pause playing;
and generating the second layer above the first layer where the video file to be processed in the video file playing area is located.
In this embodiment, since the client can display a receiving control for the element selection operation corresponding to the text element through the operation editing area, after receiving the text element selection operation, the client can enter a receiving interface for the text adding operation so as to receive the text adding operation at the first time node. After receiving the text adding operation, the client can control the video file to be processed to pause playing and display the receiving interface for text information, so as to receive the text information input by the user. Specifically, a text box for receiving text information can be displayed on the receiving interface so that the user can enter text; a microphone icon may also be displayed there so that the user can enter the text information by voice, which is not limited herein. After the user inputs the text information, the client acquires the text element carried in the text adding operation, where the text element comprises the text information input by the user.
Correspondingly, after the client acquires the text information, a display duration entry interface may also be displayed, and the specific display manner is similar to that of the display duration entry interface described in the embodiment corresponding to fig. 6, which is not repeated here.
after the client acquires the text element, the client can adjust the video file to be processed from the pause playing state to the playing state, and display the text element at a preset position in the video additional element display area. Specifically, the client may regard the playing stage of the entire video file to be processed as the display duration, and then the text elements are displayed along with the playing of the video file to be processed in the playing process of the entire video file to be processed; or the period from the first time node to the end of playing the video file to be processed may be regarded as the display duration, and the text elements are displayed from the first time node to the end of playing the video file to be processed; optionally, if the client may receive the display duration entered by the user, the text element and the like are displayed within the display duration entered by the user, which is not limited herein.
To further understand the present solution, please refer to fig. 7, which is a schematic diagram of an interface for acquiring a text element in the video file processing method according to the embodiment of the present application. Fig. 7 includes four sub-diagrams (a), (b), (c), and (d), and takes as an example text information being received through a text box. Sub-diagram (a) of fig. 7 is similar to fig. 4 described above and is not repeated here. When the user clicks A3, the client is regarded as starting to receive the user's text adding operation, and sub-diagram (b) of fig. 7 is entered. Because the client may further classify the received text information, D1 refers to text information of a subtitle class and D2 refers to text information of an address class; when the user clicks D1, sub-diagram (c) of fig. 7 may be entered to obtain the text information in the text adding operation, where D3 shows a text box for receiving text information. After the user inputs text information through D3, the user may click D4, so that the client obtains the text information and enters sub-diagram (d) of fig. 7, in which the text element is shown. The interface of sub-diagram (b) of fig. 7 may also not exist in an actual product; instead, sub-diagram (c) of fig. 7 may be entered directly when the user clicks D1. It should be understood that the example in fig. 7 is only for convenience of understanding the scheme and is not intended to limit it.
In the embodiment of the application, a text adding operation is received at a first time node, wherein the text adding operation carries an identifier of a text element, a video file to be processed is controlled to pause playing in response to the text adding operation, and the text element is displayed in a video additional element display area. By the method, a specific implementation mode that the user adds the text element to the video file to be processed is provided, and the realizability of the scheme is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, after the client generates a second layer above the first layer where the to-be-processed video file in the video file playing area is located in response to a video additional element adding and/or editing operation, the method further includes:
The client obtains a third layer corresponding to the video additional element, where the third layer is located above the second layer;
And processing the video additional element in the third layer to obtain a processed video additional element, and displaying the processed video additional element on the third layer.
In this embodiment, after acquiring the image sequence corresponding to the added video additional element, the client may generate a third layer corresponding to it above the second layer, where the content attribute of the third layer is assigned the first frame image in the image sequence; the video additional element is thus synthesized onto the second layer through the third layer, and since the second layer is bound to the first layer, synthesis of the video additional element with the first layer is realized. After the client adds the video additional element to the second layer, an adjustment operation for the video additional element may also be acquired; in response to the adjustment operation, the third layer of the video additional element is acquired, the video additional element is processed through the third layer to obtain a processed video additional element, and the processed video additional element is displayed on the third layer. Specifically, the adjustment operation may be acquired before the video additional element is first played with the video file to be processed, so as to acquire the corresponding third layer and process the element through it; or it may be acquired after the video additional element has been played with the video file to be processed and a selection instruction for the element has been entered. The user may enter a selection instruction for the video additional element by clicking it, by entering a preset gesture operation, and the like, which is not limited herein.
The adjustment operation may be embodied in various forms. If a first adjustment operation is received, the client performs translation processing on the video additional element in the video additional element display area to obtain a processed element, where the processed element is displayed on the second layer. Specifically, a user can enter the first adjustment operation by pressing the video additional element and dragging it to a target position, or by clicking a target position in the video additional element display area after the element has been selected; other implementations are not exhausted here. The client can then translate the video additional element in the display area so as to move it to the target position, thereby obtaining the processed element located at the target position.
If a second adjustment operation is received, the client enlarges or reduces the video additional element in the video additional element display area to obtain a processed element, where the processed element is displayed on the second layer. Specifically, in one implementation, the client may display an icon for receiving the second adjustment operation on the adjustment operation receiving interface, so that the user may enter the operation based on the icon; more specifically, the icon may be disposed on the upper left, upper right, lower left, or lower right corner of the video additional element, so that the user may enlarge or reduce the element (that is, enter the second adjustment operation) by dragging the icon outward or inward. In another implementation, the client may display no such icon, so that the user may directly input an enlargement or reduction operation on the video additional element to enter the second adjustment operation; other implementations are not exhausted here. The client can then enlarge or reduce the video additional element in the display area to obtain the processed element.
If a third adjustment operation is received, the client rotates the video additional element in the video additional element display area to obtain a processed element, where the processed element is displayed on the second layer. Specifically, in one implementation, an icon for receiving the third adjustment operation may be displayed on the adjustment operation receiving interface, so that the user may enter the operation based on the icon; more specifically, the icon may be disposed on a corner of the video additional element, or outside its display area, so that the user may rotate the element (that is, enter the third adjustment operation) by dragging the icon, where the dragging track may be roughly circular. Optionally, the icon for receiving the third adjustment operation and the icon for receiving the second adjustment operation may be the same icon. In another implementation, the client may display no such icon, so that the user may directly input a rotation operation on the video additional element to enter the third adjustment operation; other implementations are not exhausted here. The client can then rotate the video additional element in the display area to obtain the processed element.
If a fourth adjustment operation is received, the client adjusts the display duration corresponding to the video additional element in the video additional element display area to obtain a target display duration. Specifically, the client may display an icon for receiving the fourth adjustment operation on the adjustment operation receiving interface; this icon may take the form of a progress bar, which may display the start playing time and the stop playing time corresponding to the display duration of the video additional element, and of course may also display the display duration itself. The start playing time and the stop playing time can be rendered graphically, so that the user can adjust the display duration of the video additional element by sliding them, obtaining the adjusted target display duration.
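A sketch of applying the adjusted window to the third layer, reusing the keyframe approach from the earlier sketches; the function and parameter names are illustrative:

```swift
import AVFoundation
import QuartzCore

// Sketch: rebuild the third layer's animation with the start and stop playing
// times read from the progress bar (the fourth adjustment operation).
func applyDisplayWindow(on elementLayer: CALayer,
                        frames: [CGImage],
                        start: CFTimeInterval,
                        end: CFTimeInterval) {
    elementLayer.removeAnimation(forKey: "contents")
    let animation = CAKeyframeAnimation(keyPath: "contents")
    animation.values = frames
    animation.calculationMode = .discrete
    animation.beginTime = start == 0 ? AVCoreAnimationBeginTimeAtZero : start
    animation.duration = end - start        // the target display duration
    animation.isRemovedOnCompletion = false
    elementLayer.add(animation, forKey: "contents")
}
```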
To further understand the present solution, please refer to fig. 8, which is a schematic interface diagram of acquiring an adjustment operation in the video file processing method according to an embodiment of the present application. E1 refers to a video additional element, and the whole of E1 is used for receiving the first adjustment operation; E2 may be used for receiving the second adjustment operation and also the third adjustment operation; E3 represents the start playing time corresponding to the display duration and E5 the end playing time, so that a user may input the fourth adjustment operation by sliding E3 and/or E5. It should be understood that the example in fig. 8 is only for convenience of understanding the present solution and is not intended to limit it; in an actual product the user may, for example, also input the various adjustment operations by voice, which is not limited herein.
In the embodiment of the application, after the video additional element is added to the second layer, the client may further obtain a third layer corresponding to the video additional element, and process the video additional element through the third layer to obtain the processed video additional element. By the method, the video additional elements can be secondarily processed after being acquired, and display flexibility of the video additional elements is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, if the first adjustment operation is received, the processing, by the client, of the video additional element in the third layer to obtain the processed video additional element includes:
The client adds the third layer to the editable view in the operation editing area;
The client performs translation processing on the video additional element on the editable view to obtain translation parameters;
The client records the translation parameters to a data structure corresponding to the third layer;
and the client redraws the third layer on the second layer.
In this embodiment, if the client receives the first adjustment operation, the client may obtain the third layer corresponding to the video additional element, remove the third layer from the second layer, and add it to an editable view in the operation editing area, through which an element movement instruction can then be obtained. The editable view is located in the element editing area, which has been described with respect to fig. 2 and is not described again here; the editable view may be generated in a lazy-loading fashion when the user first clicks a video additional element.
The client responds to the element movement instruction by translating the video additional element on the editable view to obtain translation parameters. In one case, a certain point in the element editing area may be taken as the origin, with the horizontal direction as the abscissa and the vertical direction as the ordinate; the translation parameters may then include the coordinates of the target position of the video additional element after translation, and may also include the movement angle, the horizontal movement distance, the vertical movement distance, and the like of the element during translation. Further, the origin may be the upper left, lower left, upper right, or lower right vertex or the center point of the element editing area, or the center point of the video additional element may serve as the coordinate origin, and so on. The translation parameters are recorded into the data structures corresponding to the third layer, which have been described in detail in the foregoing embodiment and are not described again here. After the user finishes the adjustment operation on the video additional element, the client determines the position of the third layer on the second layer anew according to the pre-movement position and the translation parameters stored in the data structures of the third layer, and then redraws the third layer on the second layer.
To further understand the present solution, please refer to fig. 9, which is a schematic flowchart of performing translation processing on a video additional element in the video file processing method according to an embodiment of the present application. F1, the client receives the first adjustment operation on the video additional element and controls the video file to be processed to stay paused; F2, the client acquires the CALayer corresponding to the video additional element (namely, an example of the third layer) and removes it from the AVSynchronizedLayer (namely, an example of the second layer); F3, the client adds the CALayer to the editable view; F4, the client obtains the element movement instruction through the editable view and translates the video additional element on the editable view to obtain the translation parameters; F5, the client records the translation parameters into the data structure corresponding to the third layer; F6, the client redraws the CALayer onto the AVSynchronizedLayer according to the data structure of the third layer. It should be understood that the example in fig. 9 is only for convenience of understanding the present solution and is not intended to limit it.
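A Swift sketch of steps F2 to F6 follows, assuming the editable view is a UIView in the operation editing area and that the layer's data structure is the illustrative `ElementLayerModel` below (the real PGMoment* structures are not disclosed):

```swift
import UIKit
import AVFoundation

// Hypothetical stand-in for the data structure of the third layer (step F5 records into it).
struct ElementLayerModel {
    var position: CGPoint            // position before the move
    var translation: CGPoint = .zero // recorded translation parameters
}

func handlePan(_ gesture: UIPanGestureRecognizer,
               elementLayer: CALayer,
               syncLayer: AVSynchronizedLayer,
               editableView: UIView,
               model: inout ElementLayerModel) {
    switch gesture.state {
    case .began:
        // F2/F3: remove the third layer from the second layer and host it in the editable view.
        elementLayer.removeFromSuperlayer()
        editableView.layer.addSublayer(elementLayer)
    case .changed:
        // F4: translate the element on the editable view.
        let t = gesture.translation(in: editableView)
        CATransaction.begin()
        CATransaction.setDisableActions(true)   // move immediately, without implicit animation
        elementLayer.position = CGPoint(x: model.position.x + t.x,
                                        y: model.position.y + t.y)
        CATransaction.commit()
    case .ended:
        // F5: record the translation parameters into the data structure.
        model.translation = gesture.translation(in: editableView)
        model.position = elementLayer.position
        // F6: redraw the third layer onto the second layer at the stored position.
        elementLayer.removeFromSuperlayer()
        syncLayer.addSublayer(elementLayer)
    default:
        break
    }
}
```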
In the embodiment of the application, the third layer is added to the editable view in the operation editing area; after the element movement instruction is obtained through the editable view, the video additional element is translated on the editable view to obtain the translation parameters, the translation parameters are recorded into the data structure corresponding to the third layer, and the third layer is redrawn onto the second layer according to them. In this way, a specific implementation of translation processing for the video additional element is provided, improving the realizability of the scheme; and because translating the video additional element involves no update of the video file data to be processed and operates only on the second layer, each frame of video does not need to be redrawn, no waiting time is consumed, and the user sees the video processing result in real time, improving the efficiency of video processing.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, the processing, by the client, of the video additional element in the third layer to obtain the processed video additional element includes:
The client adds the third layer to the editable view in the operation editing area;
the client performs zooming processing on the video additional element on the editable view to obtain zoom parameters, wherein the zoom parameters comprise an enlargement multiple or a reduction multiple;
the client records the scaling parameters to a data structure corresponding to the third layer;
and the client redraws the third layer on the second layer.
In this embodiment, if the client receives the second adjustment operation, the client may obtain the third layer corresponding to the video additional element, remove it from the second layer, and add it to the editable view in the operation editing area, through which an element zoom instruction can then be obtained. The client responds to the element zoom instruction by reducing or enlarging the video additional element on the editable view to obtain the zoom parameters, where the current size of the video additional element may be regarded as 1x and the zoom parameters include the multiple by which the element is enlarged or reduced. The zoom parameters are then recorded into the data structure corresponding to the third layer; after the user finishes the adjustment operation, the client determines the size of the third layer on the second layer anew according to the zoom parameters stored in its data structure, and redraws the third layer on the second layer.
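A sketch of the zoom handling under the same assumptions; the `ZoomState` record standing in for the third layer's data structure is illustrative:

```swift
import UIKit

// Hypothetical record of the zoom multiple (the current size counts as 1x).
final class ZoomState {
    var scale: CGFloat = 1
}

// Sketch: respond to the element zoom instruction on the editable view,
// record the multiple, and apply it to the third layer.
func handlePinch(_ gesture: UIPinchGestureRecognizer,
                 elementLayer: CALayer,
                 state: ZoomState) {
    guard gesture.state == .changed else { return }
    state.scale *= gesture.scale          // multiple of enlargement or reduction
    elementLayer.setAffineTransform(CGAffineTransform(scaleX: state.scale, y: state.scale))
    gesture.scale = 1                     // consume the increment
}
```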
In the embodiment of the application, through the above manner, a specific implementation of reducing or enlarging the video additional element is provided, improving the realizability of the scheme and expanding its application scenarios. In addition, reducing or enlarging the video additional element involves no update of the video file data to be processed and operates only on the second layer, that is, each frame of video does not need to be redrawn, so the user sees the video processing result in real time and the efficiency of video processing is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, the processing, by the client, of the video additional element in the third layer to obtain the processed video additional element includes:
The client adds the third layer to the editable view in the operation editing area;
The client rotates the video additional element on the editable view to obtain a rotation angle parameter;
The client records the rotation angle parameter to a data structure corresponding to the third layer;
And the client redraws the third layer on the second layer.
In this embodiment, if the client receives a third adjustment operation, it may obtain the third layer corresponding to the video additional element, remove it from the second layer, and add it to the editable view in the operation editing area, through which an element rotation instruction can then be obtained. The client responds to the element rotation instruction by rotating the video additional element on the editable view to obtain the rotation parameters, which may include information such as the rotation direction and rotation angle of the element. The rotation parameters are then recorded into the data structure corresponding to the third layer; after the user finishes the adjustment operation, the client determines the display orientation of the third layer anew according to the rotation parameters stored in its data structure, and redraws the third layer onto the second layer.
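A sketch of the rotation handling under the same assumptions; the `RotationState` record standing in for the third layer's data structure is illustrative:

```swift
import UIKit

// Hypothetical record of the rotation parameters (the sign of the angle gives the direction).
final class RotationState {
    var angle: CGFloat = 0                // radians
}

// Sketch: respond to the element rotation instruction on the editable view,
// record the angle, and apply it to the third layer.
func handleRotation(_ gesture: UIRotationGestureRecognizer,
                    elementLayer: CALayer,
                    state: RotationState) {
    guard gesture.state == .changed else { return }
    state.angle += gesture.rotation       // rotation direction and rotation angle
    elementLayer.setAffineTransform(CGAffineTransform(rotationAngle: state.angle))
    gesture.rotation = 0                  // consume the increment
}
```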
In the embodiment of the application, through the above manner, a specific implementation of rotating the video additional element is provided, improving the realizability of the scheme and expanding its application scenarios. In addition, rotating the video additional element involves no update of the video file data to be processed and operates only on the second layer, that is, each frame of video does not need to be redrawn, so the user sees the video processing result in real time and the efficiency of video processing is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, after the client generates, in response to a video add-on element adding and/or editing operation, a second layer above the first layer where the video file to be processed in the video file playing area is located, the method further includes:
the client receives, in the operation editing area, a video additional element deleting operation triggered at any time node within the corresponding duration information interval during the playing, dragging, and/or positioning of the video file to be processed;
and the client deletes, in response to the video additional element deleting operation, the video additional element on the second layer of the video file playing area.
In this embodiment, after the client acquires the video additional element, the client may further receive, based on the video additional element, a video additional element deleting operation at a second time node, where the deleting operation carries the identifier of the video additional element, the video additional element is a text element or an image element, and the second time node is triggered at any time node within the duration information interval. The client may then delete the video additional element from the video additional element display area in response to the deleting operation, that is, delete it from the second layer of the video file playing area, so that the video additional element no longer appears in the video file playing area after the deleting operation.
Specifically, an instruction receiving interface may be displayed before the video additional element is played for the first time, that is, when the video additional element is only displayed on the second layer but has not yet been played with the video file to be processed; a control for receiving an element deleting operation for the video additional element may be provided on that interface, so that the deleting operation entered by the user is received through the control. Alternatively, the instruction receiving interface may be displayed after the video additional element has been played with the video file to be processed and a selection instruction for the element has been entered, again with a control for receiving the element deleting operation, where the manner of entering a selection instruction for the video additional element has been described in the embodiment corresponding to fig. 8 and is not repeated here. The user may also enter the element deleting operation by long-pressing the video additional element and dragging it to a preset position, where the preset position may be the bottom, the top, the left side, the right side, or another position of the video additional element display area.
To further understand the present solution, please refer to fig. 10, which is a schematic interface diagram of acquiring a video additional element deleting operation in the video file processing method according to an embodiment of the present application. Fig. 10 is similar to fig. 8, and similar parts are described above with respect to fig. 8 and not repeated here. Different from fig. 8, fig. 10 further includes G1, which refers to a control for receiving an element deleting operation for the video additional element; when a user clicks G1, the client receives the video additional element deleting operation. It should be understood that the example in fig. 10 is only for convenience of understanding the present solution and is not intended to limit it; in an actual product the user may, for example, also input the deleting operation by voice, which is not limited herein.
In the embodiment of the application, after the video additional element is acquired, a video additional element deleting operation can be received, and then the video additional element is deleted in the video additional element display area in response to the video additional element deleting operation. By the method, the display flexibility of the video additional elements is further improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, in response to a video additional element deletion operation, a client deletes a video additional element on a second layer of a video file playing area, where the method includes:
And the client, in response to the video additional element deletion operation, deletes the third layer generated above the second layer where the video additional element is located, together with the video additional element.
In this embodiment, if the client receives a selection instruction for the video additional element, the third layer may be removed from the second layer and added to the editable view. Since a deletion module is displayed on the editable view, the client may obtain the video additional element deletion operation through the deletion module, and then, in response to the deletion operation, delete the third layer generated above the second layer where the video additional element is located together with the video additional element. Specifically, the client may delete the prestored third layer, clear the video additional element displayed on the third layer, and release the image sequence corresponding to the video additional element.
To further understand the present solution, please refer to fig. 11, which is a schematic flowchart of performing a deletion operation on a video additional element in the video file processing method according to an embodiment of the present application. H1, the client receives a selection instruction for the video additional element and controls the video file to be processed to enter a pause state; H2, the client acquires the CALayer (an example of the third layer) corresponding to the video additional element and removes the CALayer from the AVSynchronizedLayer (an example of the second layer); H3, the client adds the CALayer to the editable view; H4, the client obtains the video additional element deletion operation through the editable view; H5, the client deletes the data of the third layer and the video additional element. It should be understood that the example in fig. 11 is only for convenience of understanding the present solution and is not intended to limit it.
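As a minimal, non-limiting Swift sketch of the H1–H5 flow above (only CALayer, AVSynchronizedLayer, and AVPlayer are actual framework classes; the controller type and method names are assumptions made for illustration):

```swift
import AVFoundation
import UIKit

final class ElementDeletionController {
    // H1–H3: on a selection instruction, pause playback and move the element's
    // CALayer (third layer) out of the AVSynchronizedLayer (second layer) into
    // the editable view that hosts the deletion control.
    func select(elementLayer: CALayer, editableView: UIView, player: AVPlayer) {
        player.pause()                               // H1: pause the video file to be processed
        elementLayer.removeFromSuperlayer()          // H2: remove the CALayer from the AVSynchronizedLayer
        editableView.layer.addSublayer(elementLayer) // H3: add the CALayer to the editable view
    }

    // H4/H5: when the deletion control fires, clear the element and release its data.
    func delete(elementLayer: CALayer) {
        elementLayer.removeAllAnimations()           // release the image sequence, if any
        elementLayer.contents = nil                  // clear the displayed video additional element
        elementLayer.removeFromSuperlayer()          // delete the third layer itself
    }
}
```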
In the embodiment of the application, the foregoing provides a specific implementation of deleting the video additional element, which improves the feasibility of the solution and further expands its application scenarios. In addition, deleting the video additional element does not involve updating the data of the video file to be processed; only the third layer is operated on, that is, each video frame does not need to be redrawn, so the user can see the video processing result in real time and the video processing efficiency is improved.
optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, the displaying, by the client, a video file after synthesis processing and including a content of the video file to be processed in the first layer and a video additional element in the second layer includes:
and displaying the video file which is subjected to the synthesis processing and comprises the content of the video file to be processed of the first image layer.
In this embodiment, because the client may delete the third layer generated above the second layer together with the video additional element, only the video file that has been synthesized and includes the content of the video file to be processed in the first layer may be displayed in the video file playing area of the client. Specifically, this video file refers to the video file obtained after the synthesis operation of the first layer and the second layer is performed and the video additional element on the third layer generated above the second layer is deleted.
In the embodiment of the application, through the mode, the display interface after the video additional element deleting operation is executed is provided, and the completeness of the scheme is improved.
optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the method for processing a video file provided in the embodiment of the present application, the method further includes:
when the video file to be processed is played, the client acquires a second element to be processed at a third moment, wherein the third moment is any moment in the duration information;
And the client displays a third video processing result at a third moment according to the video additional element and the second element to be processed, wherein the third video processing result comprises the display effect of the video additional element and the second element to be processed on the video file to be processed, and the second element to be processed is displayed on the second layer.
In this embodiment, the playing process of the video file to be processed covers both the case where the video file is in a paused state and the case where it is actively playing. The client may further acquire a second element to be processed at a third moment, where the third moment is any moment in the duration information; the third moment and the first time node may be the same moment or different moments, and the second element to be processed and the video additional element may be the same element or different elements. The client may display a third video processing result at the third moment according to the video additional element and the second element to be processed, where the third video processing result includes the display effects of the video additional element and the second element to be processed on the video file to be processed, and the second element to be processed is displayed on the second layer. Specifically, if the third moment is the same as the first time node, the client may synthesize the video additional element and the second element to be processed on the second layer at the same time; if the third moment is different from the first time node, the client synthesizes the second element to be processed on the second layer on which the video additional element has already been synthesized. For the specific implementation of synthesizing the second element to be processed on the second layer, refer to the embodiment corresponding to fig. 5, which is not repeated here; a brief sketch also follows below.
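A minimal sketch of that synthesis, assuming the helper name addElement and an already-generated AVSynchronizedLayer (the second layer): each element to be processed becomes its own sublayer, and a timed opacity animation confines it to its own interval of the video timeline, which is why no video frame has to be redrawn.

```swift
import AVFoundation
import UIKit

func addElement(to syncLayer: AVSynchronizedLayer,
                image: CGImage,
                frame: CGRect,
                beginTime: CFTimeInterval,
                duration: CFTimeInterval) -> CALayer {
    let elementLayer = CALayer()
    elementLayer.frame = frame
    elementLayer.contents = image
    elementLayer.opacity = 0                             // hidden outside its own interval

    let visible = CABasicAnimation(keyPath: "opacity")
    visible.fromValue = 1.0
    visible.toValue = 1.0
    // Core Animation treats beginTime == 0 as "now"; on a synchronized layer,
    // AVCoreAnimationBeginTimeAtZero denotes the start of the video timeline.
    visible.beginTime = beginTime == 0 ? AVCoreAnimationBeginTimeAtZero : beginTime
    visible.duration = duration
    visible.isRemovedOnCompletion = false                // stay valid when the user seeks back
    elementLayer.add(visible, forKey: "visibility")

    syncLayer.addSublayer(elementLayer)
    return elementLayer
}
```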
To further understand the present solution, please refer to fig. 12, which is a schematic interface diagram showing a plurality of elements to be processed in the video file processing method according to an embodiment of the present application. I1, I2, and I3 respectively indicate three elements to be processed, all illustrated in fig. 12 as image elements. It should be understood that the example in fig. 12 is only for convenience of understanding the present solution and is not intended to limit it.
In the embodiment of the application, when the video file to be processed is played, the second element to be processed is acquired at the third moment, and the third video processing result is displayed at the third moment according to the video additional element and the second element to be processed, where the third video processing result includes the display effects of the video additional element and the second element to be processed on the video file to be processed, and the second element to be processed is displayed on the second layer. In this way, a specific implementation of adding multiple elements to be processed to the video file to be processed is provided, which expands the application scenarios of the solution. Because adding an element only edits the newly generated layer and does not require redrawing each video frame, no extra waiting time is consumed even when multiple elements to be processed are added, and the user can see the video processing result in real time, thereby improving the video processing efficiency.
referring to fig. 13, fig. 13 is a schematic view of an embodiment of a video processing apparatus according to the present application, and the video processing apparatus 20 includes:
The display module 201 is configured to provide a video file processing interface, where the video file processing interface includes a video file playing area and an operation editing area, and the operation editing area includes at least one video additional element operation control;
the processing module 202 is configured to obtain a video file to be processed and time length information of the video file to be processed, where the video file to be processed includes at least one frame of image to be processed, the video file to be processed and the time length information are displayed in a video file playing area, and the video file to be processed is located in a first layer;
the receiving module 203 is configured to receive a video additional element adding and/or editing operation on an operation editing area, which is triggered by any time node in a corresponding duration information interval during playing, dragging and/or positioning of a video file to be processed;
the processing module 202 is further configured to generate a second layer above the first layer where the to-be-processed video file in the video file playing region is located in response to the video additional element adding and/or editing operation received by the receiving module 203, where the added video additional element is located in the second layer, and a time node of the added video additional element on the second layer corresponds to a time node of the to-be-processed video file in the first layer, which is triggered to perform the video additional element adding and/or editing operation;
The displaying module 201 is further configured to display the video file that is synthesized by the processing module 202 and includes the content of the to-be-processed video file in the first layer and the video additional element in the second layer.
Optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application, the processing module 202 is specifically configured to:
Controlling the video file to be processed to pause playing in response to a video additional element adding operation, wherein the video additional element adding operation corresponds to the identification of the added video additional element, and the added video additional element is a dynamic text element or a dynamic image element;
generating a second layer above the first layer where the video file to be processed in the video file playing area is located;
The displaying module 201 is specifically configured to display content of a video file to be processed in the video file on a first layer, and display an added video additional element in the video file on a second layer.
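Since the flow in fig. 11 refers to CALayer and AVSynchronizedLayer, an iOS implementation is a natural illustration of the layer mechanism just described. The following Swift sketch is non-limiting: AVPlayerLayer and AVSynchronizedLayer are actual AVFoundation classes, while the VideoEditorView type and its member names are assumptions made for illustration only. The first layer is the player layer rendering the video file to be processed; the second layer is a synchronized layer whose timing is driven by the same AVPlayerItem, so the time nodes of added elements stay aligned with the video timeline.

```swift
import AVFoundation
import UIKit

final class VideoEditorView: UIView {
    private let player: AVPlayer
    private let playerLayer: AVPlayerLayer            // first layer: renders the video file to be processed
    private(set) var syncLayer: AVSynchronizedLayer?  // second layer: generated above the first layer

    init(frame: CGRect, videoURL: URL) {
        let item = AVPlayerItem(url: videoURL)
        player = AVPlayer(playerItem: item)
        playerLayer = AVPlayerLayer(player: player)
        super.init(frame: frame)
        playerLayer.frame = bounds
        layer.addSublayer(playerLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }

    // Called when a video additional element adding operation is received.
    func beginAddingElement() {
        player.pause()                                   // control the video file to pause playing
        guard syncLayer == nil, let item = player.currentItem else { return }
        let sync = AVSynchronizedLayer(playerItem: item) // timing driven by the video's own timeline
        sync.frame = playerLayer.frame
        layer.insertSublayer(sync, above: playerLayer)   // second layer above the first layer
        syncLayer = sync
    }
}
```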
optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application, the display module 201 is specifically configured to:
Acquiring an image sequence corresponding to the added video additional element, wherein the image sequence comprises at least one frame of image;
generating a third layer above the second layer where the added video additional elements are located, wherein the content attribute of the third layer is assigned as a first frame image in the image sequence;
and displaying the image sequence corresponding to the added video additional element on a third layer of the video file playing area.
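As a non-limiting sketch of this image-sequence mechanism (the helper name addImageSequenceLayer and its parameters are assumptions; CAKeyframeAnimation and AVCoreAnimationBeginTimeAtZero are actual framework symbols): the third layer's contents property is seeded with the first frame, and a discrete keyframe animation over contents steps through the sequence. Because the third layer is a sublayer of the synchronized second layer, the animation's begin time is interpreted on the video's own timeline.

```swift
import AVFoundation
import UIKit

func addImageSequenceLayer(to syncLayer: CALayer,
                           frames: [CGImage],
                           frame: CGRect,
                           beginTime: CFTimeInterval,
                           duration: CFTimeInterval) -> CALayer {
    let elementLayer = CALayer()             // third layer above the second layer
    elementLayer.frame = frame
    elementLayer.contents = frames.first     // content attribute assigned the first frame image

    let animation = CAKeyframeAnimation(keyPath: "contents")
    animation.values = frames                // the image sequence of the dynamic element
    animation.calculationMode = .discrete    // hold each frame; no cross-fade
    // Core Animation treats beginTime == 0 as "now"; on a synchronized layer,
    // AVCoreAnimationBeginTimeAtZero denotes the start of the video timeline.
    animation.beginTime = beginTime == 0 ? AVCoreAnimationBeginTimeAtZero : beginTime
    animation.duration = duration
    animation.repeatCount = .greatestFiniteMagnitude
    animation.isRemovedOnCompletion = false  // stay valid when the user seeks
    elementLayer.add(animation, forKey: "imageSequence")

    syncLayer.addSublayer(elementLayer)
    return elementLayer
}
```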
Optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application, the processing module 202 is specifically configured to:
controlling the video to be processed to pause playing in response to the video additional element adding operation, wherein the video additional element adding operation corresponds to the identifier of the video additional element, and the video additional element is a static text element or a static image element;
And generating a second layer above the first layer where the video file to be processed in the video file playing area is located.
Optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application,
The processing module 202 is further configured to obtain a third layer corresponding to the video additional element, where the third layer is located above the second layer;
the processing module 202 is further configured to process the video additional element in the third layer to obtain a processed video additional element, where the processed video additional element is displayed on the third layer.
optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application, the processing module 202 is specifically configured to:
Adding a third layer to the editable view in the operation editing area;
Performing translation processing on the video additional element on the editable view to obtain translation parameters;
recording the translation parameters to a data structure corresponding to the third layer;
and redrawing the third image layer on the second image layer.
Optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application, the processing module 202 is specifically configured to:
Adding a third layer to the editable view in the operation editing area;
zooming the video additional element on the editable view to obtain zooming parameters, wherein the zooming parameters comprise a magnification parameter or a reduction parameter;
Recording the scaling parameters to a data structure corresponding to the third layer;
And redrawing the third image layer on the second image layer.
Optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application, the processing module 202 is specifically configured to:
Adding a third layer to the editable view in the operation editing area;
performing rotation processing on the video additional element on the editable view to obtain a rotation angle parameter;
recording the rotation angle parameter to a data structure corresponding to the third layer;
And redrawing the third image layer on the second image layer.
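The three transform embodiments above share one pattern: lift the third layer into the editable view, record the gesture result into a data structure bound to that layer, then redraw the third layer on the second layer. A combined non-limiting Swift sketch, in which ElementTransform and ElementEditor are assumed names:

```swift
import UIKit

// Data structure recording the edit parameters of one video additional element.
struct ElementTransform {
    var translation: CGPoint = .zero   // translation parameters
    var scale: CGFloat = 1.0           // zoom parameter: > 1 magnifies, < 1 reduces
    var rotation: CGFloat = 0.0        // rotation angle parameter, in radians
}

final class ElementEditor {
    private(set) var transform = ElementTransform()

    // Record the results of translation, zoom, and rotation on the editable view.
    func record(translation: CGPoint) {
        transform.translation.x += translation.x
        transform.translation.y += translation.y
    }
    func record(scale: CGFloat) { transform.scale *= scale }
    func record(rotation: CGFloat) { transform.rotation += rotation }

    // Redraw the third layer on the second layer with the recorded parameters.
    func apply(to elementLayer: CALayer, on syncLayer: CALayer) {
        elementLayer.removeFromSuperlayer()     // leave the editable view
        let t = CGAffineTransform.identity
            .translatedBy(x: transform.translation.x, y: transform.translation.y)
            .rotated(by: transform.rotation)
            .scaledBy(x: transform.scale, y: transform.scale)
        elementLayer.setAffineTransform(t)
        syncLayer.addSublayer(elementLayer)     // redraw on the second layer
    }
}
```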
Optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application,
The receiving module 203 is further configured to receive a deletion operation of a video additional element in the operation editing area, which is triggered by any time node in the corresponding duration information interval during the playing, dragging, and/or positioning process of the video file to be processed;
the processing module 202 is further configured to delete the video additional element on the second layer of the video file playing area in response to the video additional element deletion operation.
optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application, the processing module 202 is specifically configured to delete the third layer and the video additional element generated above the second layer where the video additional element is located, in response to a video additional element deletion operation.
optionally, on the basis of the embodiment corresponding to fig. 13, in another embodiment of the video processing apparatus 20 provided in the embodiment of the present application, the displaying module 201 is specifically configured to display a video file after synthesis processing and including a content of a to-be-processed video file in a first image layer.
Next, an embodiment of the present application further provides a terminal device. The video processing apparatus provided in the embodiment corresponding to fig. 13 may be deployed on the terminal device and is configured to execute the steps executed by the client in the embodiments corresponding to fig. 3 to fig. 12. As shown in fig. 14, for convenience of explanation, only the parts related to the embodiments of the present application are shown; for technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The terminal device may be any terminal device, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point of Sales (POS) terminal, a vehicle-mounted computer, and the like. The following takes the terminal device being a mobile phone as an example:
Fig. 14 is a block diagram showing a partial structure of a mobile phone related to the terminal device according to the embodiment of the present application. Referring to fig. 14, the mobile phone includes: a Radio Frequency (RF) circuit 310, a memory 320, an input unit 330, a display unit 340, a sensor 350, an audio circuit 360, a wireless fidelity (WiFi) module 370, a processor 380, and a power supply 390. Those skilled in the art will appreciate that the mobile phone structure shown in fig. 14 is not limiting; it may include more or fewer components than those shown, combine some components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 14:
The RF circuit 310 may be used for receiving and transmitting signals during information transmission and reception or during a call. In particular, it receives downlink information from a base station and forwards it to the processor 380 for processing, and transmits uplink data to the base station. In general, the RF circuit 310 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 310 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 320 may be used to store software programs and modules, and the processor 380 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 320. The memory 320 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone. Further, the memory 320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 330 may include a touch panel 331 and other input devices 332. The touch panel 331, also referred to as a touch screen, can collect touch operations of a user on or near it (for example, operations performed on or near the touch panel 331 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 331 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch panel 331 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 330 may include other input devices 332 in addition to the touch panel 331. In particular, the other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 340 may include a display panel 341; optionally, the display panel 341 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 331 may cover the display panel 341; when the touch panel 331 detects a touch operation on or near it, the operation is transmitted to the processor 380 to determine the type of the touch event, and the processor 380 then provides a corresponding visual output on the display panel 341 according to the type of the touch event. Although in fig. 14 the touch panel 331 and the display panel 341 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 331 and the display panel 341 may be integrated to implement these functions.
The mobile phone may also include at least one sensor 350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 341 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as horizontal and vertical screen switching, related games, and magnetometer posture calibration) and for vibration recognition related functions (such as a pedometer and tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the mobile phone and are not further described here.
The audio circuit 360, a speaker 361, and a microphone 362 may provide an audio interface between the user and the mobile phone. The audio circuit 360 may transmit the electrical signal converted from received audio data to the speaker 361, which converts it into a sound signal for output; conversely, the microphone 362 converts a collected sound signal into an electrical signal, which the audio circuit 360 receives and converts into audio data; the audio data is then processed by the processor 380 and, for example, transmitted to another mobile phone via the RF circuit 310, or output to the memory 320 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 370, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 14 shows the WiFi module 370, it is understood that the module is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 380 is the control center of the mobile phone. It connects the various parts of the whole phone through various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 320 and calling the data stored in the memory 320, thereby monitoring the phone as a whole. Optionally, the processor 380 may include one or more processing units. Optionally, the processor 380 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 380.
The mobile phone also includes a power supply 390 (such as a battery) for powering the various components. Optionally, the power supply may be logically connected to the processor 380 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown, the mobile phone may further include a camera module, a Bluetooth module, and the like, which are not described here.
in this embodiment, the processor 380 included in the terminal device is further configured to perform the following steps:
Providing a video file processing interface, wherein the video file processing interface comprises a video file playing area and an operation editing area, and the operation editing area comprises at least one video additional element operation control;
Acquiring a video file to be processed and time length information of the video file to be processed, wherein the video file to be processed comprises at least one frame of image to be processed, the video file to be processed and the time length information are displayed in a video file playing area, and the video file to be processed is positioned in a first layer;
Receiving video additional element adding and/or editing operation of the operation editing area, which is triggered by any time node in the duration information interval in the playing, dragging and/or positioning process of the video file to be processed;
responding to the video additional element adding and/or editing operation, generating a second layer above a first layer where a to-be-processed video file of the video file playing area is located, wherein the added video additional element is located in the second layer, and a time node of the added video additional element on the second layer corresponds to a time node of the to-be-processed video file of the first layer, which is triggered to perform the video additional element adding and/or editing operation;
and displaying the video file which is subjected to synthesis processing and comprises the content of the video file to be processed of the first image layer and the video additional element of the second image layer.
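Tying the earlier sketches together, a hypothetical client flow on the terminal device might look like the following (all names are carried over from the assumed sketches above; screenBounds, localVideoURL, and stickerImage are placeholders, not part of the claimed method):

```swift
// Assumed placeholders: screenBounds (CGRect), localVideoURL (URL), stickerImage (CGImage).
let editor = VideoEditorView(frame: screenBounds, videoURL: localVideoURL)
editor.beginAddingElement()                  // pause playback, generate the second layer
if let sync = editor.syncLayer {
    // Add a static element at the 2 s time node, shown for 3 s of the video timeline.
    _ = addElement(to: sync,
                   image: stickerImage,
                   frame: CGRect(x: 40, y: 80, width: 120, height: 120),
                   beginTime: 2.0,
                   duration: 3.0)
}
```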
optionally, the processor 380 is further configured to perform other steps performed by the client in the embodiments shown in fig. 3 to fig. 12, which is not described herein again.
an embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to perform the steps performed by the client in the method described in the foregoing embodiments shown in fig. 3 to 12.
Embodiments of the present application also provide a computer program product including a program, which, when run on a computer, causes the computer to perform the steps performed by the client in the method described in the foregoing embodiments shown in fig. 3 to 12.
it is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
the integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (14)

1. A method of video file processing, comprising:
Providing a video file processing interface, wherein the video file processing interface comprises a video file playing area and an operation editing area, and the operation editing area comprises at least one video additional element operation control;
acquiring a video file to be processed and time length information of the video file to be processed, wherein the video file to be processed comprises at least one frame of image to be processed, the video file to be processed and the time length information are displayed in a video file playing area, and the video file to be processed is positioned in a first layer;
receiving video additional element adding and/or editing operation of the operation editing area, which is triggered by any time node in the duration information interval in the playing, dragging and/or positioning process of the video file to be processed;
responding to the video additional element adding and/or editing operation, generating a second layer above a first layer where a to-be-processed video file of the video file playing area is located, wherein the added video additional element is located in the second layer, and a time node of the added video additional element on the second layer corresponds to a time node of the to-be-processed video file of the first layer, which is triggered to perform the video additional element adding and/or editing operation;
And displaying the video file which is subjected to synthesis processing and comprises the content of the video file to be processed of the first image layer and the video additional element of the second image layer.
2. The method according to claim 1, wherein the generating a second layer above the first layer on which the video file to be processed in the video file playing area is located in response to the video add-on and/or edit operation comprises:
Controlling the video file to be processed to pause playing in response to the video additional element adding operation, wherein the video additional element adding operation corresponds to the identification of the added video additional element, and the added video additional element is a dynamic text element or a dynamic image element;
generating a second layer above the first layer where the video file to be processed in the video file playing area is located;
The displaying the video file which is subjected to the synthesis processing and comprises the content of the video file to be processed of the first image layer and the video additional element of the second image layer comprises the following steps:
displaying the content of the video file to be processed in the video file on the first layer, and displaying the added video additional element in the video file on the second layer.
3. the method according to claim 2, wherein said presenting said added video additional elements in said video file on said second layer comprises:
acquiring an image sequence corresponding to the added video additional element, wherein the image sequence comprises at least one frame of image;
generating a third layer above the second layer where the added video additional element is located, wherein the content attribute of the third layer is assigned as a first frame image in the image sequence;
and displaying an image sequence corresponding to the added video additional element on the third layer of the video file playing area.
4. The method according to claim 1, wherein the generating a second layer above the first layer on which the video file to be processed in the video file playing area is located in response to the video add-on and/or edit operation comprises:
controlling the video to be processed to pause playing in response to the video additional element adding operation, wherein the video additional element adding operation corresponds to the identification of a video additional element, and the video additional element is a static text element or a static image element;
And generating the second image layer above the first image layer where the video file to be processed in the video file playing area is located.
5. The method according to any one of claims 1 to 4, wherein after generating a second layer above a first layer on which a video file to be processed in the video file playing area is located in response to the video add-on and/or edit operation, the method further comprises:
acquiring a third layer corresponding to the video additional element, wherein the third layer is positioned above the second layer;
And processing the video additional element in the third layer to obtain a processed video additional element, wherein the processed video additional element is displayed on the third layer.
6. The method according to claim 5, wherein said processing the video add-on element in the third layer to obtain a processed video add-on element comprises:
adding the third layer to an editable view in the operation editing area;
Translating the video additional element on the editable view to obtain translation parameters;
recording the translation parameters to a data structure corresponding to the third layer;
And redrawing the third image layer on the second image layer.
7. The method according to claim 5, wherein said processing the video add-on element in the third layer to obtain a processed video add-on element comprises:
adding the third layer to an editable view in the operation editing area;
Zooming the video additional element on the editable view to obtain zooming parameters, wherein the zooming parameters comprise a magnification parameter or a reduction parameter;
Recording the scaling parameters to a data structure corresponding to the third layer;
and redrawing the third image layer on the second image layer.
8. The method according to claim 5, wherein said processing the video add-on element in the third layer to obtain a processed video add-on element comprises:
Adding the third layer to an editable view in the operation editing area;
Performing rotation processing on the video additional element on the editable view to obtain a rotation angle parameter;
recording the rotation angle parameter to a data structure corresponding to the third layer;
and redrawing the third image layer on the second image layer.
9. The method according to any one of claims 1 to 4, wherein after generating a second layer above a first layer on which a video file to be processed in the video file playing area is located in response to the video add-on and/or edit operation, the method further comprises:
Receiving a video additional element deleting operation of the operation editing area, which is triggered by any time node in the corresponding duration information interval in the playing, dragging and/or positioning process of the video file to be processed;
And in response to the video additional element deleting operation, deleting the video additional element on the second layer of the video file playing area.
10. The method according to claim 9, wherein said deleting the video add-on element on the second layer of the video file playing area in response to the video add-on element deleting operation comprises:
And in response to the video additional element deleting operation, deleting the third layer and the video additional element generated above the second layer where the video additional element is located.
11. The method of claim 9, wherein the presenting the video file after the compositing process that includes the pending video file content of the first layer and the video additional elements of the second layer comprises:
and displaying the video file which is subjected to the synthesis processing and comprises the content of the video file to be processed of the first image layer.
12. A video processing apparatus, comprising:
the display module is used for providing a video file processing interface, wherein the video file processing interface comprises a video file playing area and an operation editing area, and the operation editing area comprises at least one video additional element operation control;
The processing module is used for acquiring a video file to be processed and time length information of the video file to be processed, wherein the video file to be processed comprises at least one frame of image to be processed, the video file to be processed and the time length information are displayed in the video file playing area, and the video file to be processed is positioned in a first layer;
the receiving module is used for receiving video additional element adding and/or editing operation of the operation editing area, which is triggered by any time node in the corresponding duration information interval in the playing, dragging and/or positioning process of the video file to be processed;
the processing module is further configured to generate a second layer above a first layer where a to-be-processed video file in the video file playing region is located in response to the video additional element adding and/or editing operation received by the receiving module, where the added video additional element is located in the second layer, and a time node of the added video additional element on the second layer corresponds to a time node of the to-be-processed video file in the first layer, which is triggered to perform the video additional element adding and/or editing operation;
and the display module is also used for displaying the video file which is synthesized and processed by the processing module and comprises the content of the video file to be processed on the first layer and the video additional element on the second layer.
13. A terminal device, comprising: a memory, a transceiver, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor is used for executing the program in the memory and comprises the following steps:
Providing a video file processing interface, wherein the video file processing interface comprises a video file playing area and an operation editing area, and the operation editing area comprises at least one video additional element operation control;
acquiring a video file to be processed and time length information of the video file to be processed, wherein the video file to be processed comprises at least one frame of image to be processed, the video file to be processed and the time length information are displayed in a video file playing area, and the video file to be processed is positioned in a first layer;
receiving video additional element adding and/or editing operation of the operation editing area, which is triggered by any time node in the duration information interval in the playing, dragging and/or positioning process of the video file to be processed;
Responding to the video additional element adding and/or editing operation, generating a second layer above a first layer where a to-be-processed video file of the video file playing area is located, wherein the added video additional element is located in the second layer, and a time node of the added video additional element on the second layer corresponds to a time node of the to-be-processed video file of the first layer, which is triggered to perform the video additional element adding and/or editing operation;
Displaying the video file which is subjected to synthesis processing and comprises the content of the video file to be processed of the first image layer and the video additional element of the second image layer;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
14. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 11.
CN201910872350.XA 2019-09-16 2019-09-16 Video file processing method, related device and equipment Active CN110582018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910872350.XA CN110582018B (en) 2019-09-16 2019-09-16 Video file processing method, related device and equipment

Publications (2)

Publication Number Publication Date
CN110582018A (en) 2019-12-17
CN110582018B (en) 2022-06-10

Family

ID=68811962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910872350.XA Active CN110582018B (en) 2019-09-16 2019-09-16 Video file processing method, related device and equipment

Country Status (1)

Country Link
CN (1) CN110582018B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140359656A1 (en) * 2013-05-31 2014-12-04 Adobe Systems Incorporated Placing unobtrusive overlays in video content
CN104811787A (en) * 2014-10-27 2015-07-29 深圳市腾讯计算机系统有限公司 Game video recording method and game video recording device
CN107369197A (en) * 2017-07-05 2017-11-21 腾讯科技(深圳)有限公司 Image processing method, device and equipment
CN109495791A (en) * 2018-11-30 2019-03-19 北京字节跳动网络技术有限公司 A kind of adding method, device, electronic equipment and the readable medium of video paster
CN110198486A (en) * 2019-05-28 2019-09-03 上海哔哩哔哩科技有限公司 A kind of method, computer equipment and the readable storage medium storing program for executing of preview video material

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
百度经验: "Inshot视频编辑怎么添加贴纸", 《HTTPS://JINGYAN.BAIDU.COM/ARTICLE/2F9B480D78683901CB6CC2A3.HTML》 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111491206A (en) * 2020-04-17 2020-08-04 维沃移动通信有限公司 Video processing method, video processing device and electronic equipment
CN111629252A (en) * 2020-06-10 2020-09-04 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN111629252B (en) * 2020-06-10 2022-03-25 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN111899155A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN111726701A (en) * 2020-06-30 2020-09-29 腾讯科技(深圳)有限公司 Information implantation method, video playing method, device and computer equipment
CN112118397A (en) * 2020-09-23 2020-12-22 腾讯科技(深圳)有限公司 Video synthesis method, related device, equipment and storage medium
CN112118397B (en) * 2020-09-23 2021-06-22 腾讯科技(深圳)有限公司 Video synthesis method, related device, equipment and storage medium
CN112463017B (en) * 2020-12-17 2021-12-14 中国农业银行股份有限公司 Interactive element synthesis method and related device
CN112463017A (en) * 2020-12-17 2021-03-09 中国农业银行股份有限公司 Interactive element synthesis method and related device
CN112637520B (en) * 2020-12-23 2022-06-21 新华智云科技有限公司 Dynamic video editing method and system
CN112637520A (en) * 2020-12-23 2021-04-09 新华智云科技有限公司 Dynamic video editing method and system
CN113613067A (en) * 2021-08-03 2021-11-05 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN113613067B (en) * 2021-08-03 2023-08-22 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN113903297A (en) * 2021-12-07 2022-01-07 深圳金采科技有限公司 Display control method and system of LED display screen
CN115022697A (en) * 2022-04-28 2022-09-06 京东科技控股股份有限公司 Method for displaying video added with content element, electronic device and program product
CN116896672A (en) * 2023-09-11 2023-10-17 北京美摄网络科技有限公司 Video special effect processing method and device, electronic equipment and storage medium
CN116896672B (en) * 2023-09-11 2023-12-29 北京美摄网络科技有限公司 Video special effect processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110582018B (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN110582018B (en) Video file processing method, related device and equipment
JP6868659B2 (en) Image display method and electronic device
CN109819313B (en) Video processing method, device and storage medium
TWI592021B (en) Method, device, and terminal for generating video
WO2020187086A1 (en) Video editing method and apparatus, device, and storage medium
CN108022279B (en) Video special effect adding method and device and intelligent mobile terminal
JP7387891B2 (en) Video file generation method, device, terminal, and storage medium
TWI732240B (en) Video file generation method, device, and storage medium
KR102013331B1 (en) Terminal device and method for synthesizing a dual image in device having a dual camera
CN102754352B (en) Method and apparatus for providing information of multiple applications
WO2018184488A1 (en) Video dubbing method and device
WO2016177296A1 (en) Video generation method and apparatus
CN106803993B (en) Method and device for realizing video branch selection playing
CN110662090B (en) Video processing method and system
US11568899B2 (en) Method, apparatus and smart mobile terminal for editing video
CN111432265B (en) Method for processing video pictures, related device and storage medium
CN105359121A (en) Remote operation of applications using received data
CN109960504B (en) Object switching method based on visual programming, interface display method and device
CN108646961B (en) Management method and device for tasks to be handled and storage medium
CN108055567B (en) Video processing method and device, terminal equipment and storage medium
WO2015131767A1 (en) Video processing method and apparatus
CN108055587A (en) Sharing method, device, mobile terminal and the storage medium of image file
WO2019105446A1 (en) Video editing method and device, and smart mobile terminal
CN112672061B (en) Video shooting method and device, electronic equipment and medium
WO2023061414A1 (en) File generation method and apparatus, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40018893

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant