CN113470701A - Audio and video editing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113470701A
CN113470701A (application CN202110731896.0A)
Authority
CN
China
Prior art keywords
audio
video
interface
design layer
configuration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110731896.0A
Other languages
Chinese (zh)
Other versions
CN113470701B (en)
Inventor
王小艳
程亮
章健权
许东学
白霜雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wondershare Software Co Ltd
Original Assignee
Shenzhen Sibo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sibo Technology Co., Ltd.
Priority to CN202110731896.0A
Publication of CN113470701A
Application granted
Publication of CN113470701B
Legal status: Active
Anticipated expiration: pending

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02: Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers

Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

The method obtains audio and video configuration requests through a plurality of interface design layer interfaces; calls a basic service library and a non-linear editing adapter to perform basic configuration processing on each request, obtaining the corresponding operation object and operation information; calls an Undo/Redo manager to perform push/pop processing on the operation information, so that the operation object is pushed/popped to obtain target data; sends the target data to the interface design layer through a message queue; and, in the interface design layer, updates the audio and video controls based on the target data. The method and apparatus enable non-linear editing of audio and video across different interface design layers and non-linear editing versions, which helps improve audio and video editing efficiency.

Description

Audio and video editing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of audio and video processing technologies, and in particular, to an audio and video editing method and apparatus, a computer device, and a storage medium.
Background
With the development of the film and television industry, audio and video editing has received increasing attention. In the prior art, each platform completes its own audio and video editing tasks, and non-linear editing of audio and video is implemented by directly wrapping the underlying engine. However, this approach does not cleanly separate interface design from service logic and violates the low-coupling and reuse principles, so audio and video editing features must be developed repeatedly, resulting in low editing efficiency. A method capable of improving audio and video editing efficiency is therefore needed.
Disclosure of Invention
The embodiment of the application aims to provide an audio and video editing method and device, computer equipment and a storage medium so as to improve the audio and video editing efficiency.
In order to solve the above technical problem, an embodiment of the present application provides an audio and video editing method, including:
acquiring an audio and video configuration request through a plurality of interface design layer interfaces;
calling a basic service library and a non-linear editing adapter to perform basic configuration processing on the audio and video configuration request to obtain an operation object and operation information corresponding to the audio and video configuration request;
calling an Undo/Redo manager to perform push/pop processing on the operation information, so as to call the operation object to perform push/pop processing and obtain target data;
sending the target data to an interface design layer through a message queue;
and updating the audio and video control in the interface design layer based on the target data in the interface design layer.
Further, the step of calling the Undo/Redo manager to perform push/pop processing on the operation information, so as to call the operation object to perform push/pop processing and obtain target data, includes:
splitting the operation information to obtain a plurality of one-step operation steps;
merging the plurality of one-step operation steps by calling the Undo/Redo manager to obtain one-step Undo/Redo operation;
and calling the operation object to perform the stack pushing/popping processing based on the one-step Undo/Redo operation to obtain basic data, and taking the basic data as the target data, wherein the basic data comprises an execution message and data change caused by the execution message.
Further, the sending the target data to the interface design layer through the message queue includes:
when the generation of basic data is monitored, identifying an interface design layer interface corresponding to the basic data as a target interface design layer interface;
sending the basic data to the interface design layer through the message queue based on the target interface design layer interface;
and if all the basic data in the target data are sent to the corresponding interface design layer, completing the transmission of the target data.
Further, in the interface design layer, updating the audio/video control in the interface design layer based on the target data includes:
in the interface design layer, identifying an audio/video control corresponding to each basic data in the target data;
taking the data change caused by the execution message as the updating direction of the audio and video control, and updating the audio and video control;
and if the target data are executed completely, updating the audio and video control.
Further, the step of calling the basic service library and the non-linear editing adapter to perform basic configuration processing on the audio and video configuration request to obtain the operation object and operation information corresponding to the request includes:
analyzing the audio and video configuration request to acquire configuration operation corresponding to the audio and video;
acquiring a request interface corresponding to the configuration operation by calling the non-linear editing adapter;
acquiring configuration data corresponding to the configuration operation from a basic service library according to the request interface;
and generating the operation object and the operation information based on the configuration data and the configuration operation.
Further, before the audio/video configuration request is acquired through the plurality of interface design layer interfaces, the method further includes:
acquiring interface information corresponding to a plurality of interface design layers;
and configuring the interface information according to the unified configuration information to obtain a plurality of interface design layer interfaces.
Further, before calling the basic service library and the non-linear editing adapter to perform basic configuration processing on the audio/video configuration request to obtain the operation object and operation information corresponding to the request, the method further includes:
acquiring bottom layer interface information corresponding to a plurality of non-linear editing versions;
and acquiring a preset uniform interface configuration, and performing translation adaptation processing on the bottom layer interface information through the preset uniform interface configuration to obtain the non-linear editing adapter, wherein the non-linear editing adapter supports a plurality of non-linear editing versions.
In order to solve the above technical problem, an embodiment of the present application provides an audio and video editing apparatus, including:
the audio and video configuration request module is used for acquiring audio and video configuration requests through a plurality of interface design layer interfaces;
the basic configuration processing module is used for calling a basic service library and the non-linear editing adapter to perform basic configuration processing on the audio and video configuration request to obtain an operation object and operation information corresponding to the audio and video configuration request;
the target data acquisition module is used for calling an Undo/Redo manager to perform push/pop processing on the operation information, so as to call the operation object to perform push/pop processing and obtain target data;
the target data sending module is used for sending the target data to an interface design layer through a message queue;
and the audio and video control updating module is used for updating the audio and video control in the interface design layer based on the target data in the interface design layer.
In order to solve the above technical problem, an embodiment of the present application provides a computer device, including: one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the audio and video editing method of any one of the above.
In order to solve the above technical problem, an embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the audio and video editing method of any one of the above.
The embodiment of the invention provides an audio and video editing method and apparatus, a computer device, and a storage medium. The embodiment obtains audio and video configuration requests through a plurality of interface design layer interfaces; calls a basic service library and a non-linear editing adapter to perform basic configuration processing on each request, obtaining the corresponding operation object and operation information; calls an Undo/Redo manager to perform push/pop processing on the operation information, so that the operation object is pushed/popped to obtain target data; sends the target data to the interface design layer through a message queue; and, in the interface design layer, updates the audio and video controls based on the target data. Audio and video configuration requests from multiple interface design layers are thus answered, and by calling the non-linear editing adapter the editing is adapted to multiple non-linear editing versions, so audio and video can be edited non-linearly across different interface design layers and non-linear editing versions, which helps improve editing efficiency.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of an implementation of an audio/video editing method provided in an embodiment of the present application;
fig. 2 is a flowchart of an implementation of a sub-process in an audio and video editing method provided in an embodiment of the present application;
fig. 3 is a flowchart of another implementation of a sub-process in an audio and video editing method provided in an embodiment of the present application;
fig. 4 is a flowchart of another implementation of a sub-process in an audio and video editing method provided in an embodiment of the present application;
fig. 5 is a flowchart of another implementation of a sub-process in an audio and video editing method provided in an embodiment of the present application;
fig. 6 is a flowchart of another implementation of a sub-process in an audio and video editing method provided in an embodiment of the present application;
fig. 7 is a flowchart of another implementation of a sub-process in an audio and video editing method provided in an embodiment of the present application;
fig. 8 is a schematic diagram of an editing apparatus for audio and video provided by an embodiment of the present application;
fig. 9 is a schematic diagram of a computer device provided in an embodiment of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
Referring to fig. 1, fig. 1 shows a specific embodiment of an editing method for audio and video.
It should be noted that, if the result is substantially the same, the method of the present invention is not limited to the flow sequence shown in fig. 1, and the method includes the following steps:
s1: and acquiring an audio and video configuration request through a plurality of interface design layer interfaces.
In the embodiments of the present application, to make the technical solution clearer, the terminals involved in the present application are described in detail below. The technical solution is described from the perspective of the server.
The server can acquire an audio and video configuration request in an interface design layer interface of the front end, analyze the audio and video configuration request to acquire an operation object and operation information, and correspondingly process the operation object and the operation information to update an audio and video control.
The user side, in turn, sends an audio and video configuration request to the server in the interface design layer, and can also receive a message indicating that the audio and video control update is complete.
Specifically, the embodiment of the application updates the audio and video control by acquiring the audio and video configuration request in different interface design layers. Wherein, the interface design layer includes but is not limited to: a Mac platform Filmora UI, a Win platform Filmora UI, a Mac platform Filmii UI, and a Win platform Filmii UI, and so on.
Referring to fig. 2, fig. 2 shows an embodiment before step S1, which is described in detail as follows:
S1A: and acquiring interface information corresponding to the plurality of interface design layers.
Specifically, since the interface information of different interface design layers differs, the interface information corresponding to each of the plurality of interface design layers is obtained so that the interfaces of the plurality of interface design layers can be designed with unified interface information.
S1B: and configuring the plurality of interface information according to the unified configuration information to obtain a plurality of interface design layer interfaces.
Specifically, the preset unified configuration information is obtained first, and then used to configure the plurality of pieces of interface information, so that different interface design layer interfaces are realized and different operation characteristics can be configured for a unified request. For example, a unified interface design layer interface is designed for the different services of the same module (such as a timeline operation module) of a front-end product (for instance, some product timelines use a magnetic-timeline feature while others use auto-ripple operations, among other operation features), and different operation characteristics can be configured for the unified services, thereby supporting a variety of service requirements.
In this embodiment, the interface information corresponding to the plurality of interface design layers is acquired, and the plurality of interface information is configured according to the unified configuration information to obtain the plurality of interface design layer interfaces, so that the unified interface configuration is designed, and the method is favorable for adapting to different requests, thereby being favorable for improving the editing efficiency of the audio and video.
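The unified-interface configuration of steps S1A and S1B can be sketched as follows. This is a minimal illustration in Python; the patent gives no code, so every name here (`UILayerInterface`, `UNIFIED_CONFIG`, the layer names and characteristics) is an assumption for illustration only.

```python
# Hypothetical sketch: one shared, unified config applied to every
# interface design layer, plus per-layer operation characteristics.
UNIFIED_CONFIG = {
    "request_format": "json",
    "operations": ["add_clip", "trim", "split"],
}

class UILayerInterface:
    """One interface design layer interface: the unified configuration
    combined with layer-specific operation characteristics."""
    def __init__(self, layer_name, characteristics=None):
        self.layer_name = layer_name
        self.config = dict(UNIFIED_CONFIG)            # shared, unified part
        self.characteristics = characteristics or {}  # per-layer part

def build_interfaces(layer_infos):
    """S1A/S1B: gather interface info for several UI layers and
    configure each with the same unified config."""
    return [UILayerInterface(name, chars) for name, chars in layer_infos]

interfaces = build_interfaces([
    ("Win Filmora UI", {"timeline": "magnetic"}),
    ("Mac Filmii UI", {"timeline": "auto-ripple"}),
])
```

Every layer shares the same request format and operation set, while the timeline behaviour still differs per product, mirroring the "unified request, different operation characteristics" idea above.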
S2: and calling a basic service library and a non-linear editing adapter to perform basic configuration processing on the audio and video configuration request to obtain an operation object and operation information corresponding to the audio and video configuration request.
Specifically, the corresponding configuration operation is obtained by parsing the audio and video configuration request; the request interface for the configuration operation is obtained through the non-linear editing adapter; and the corresponding data is obtained from the basic service library according to the request interface, so that the corresponding operation object and operation information are generated.
Here, non-linear editing means processing digital video files mainly with a computer. In a non-linear editing mode, the editor does not need to work sequentially from beginning to end, but can cut into the material at any point, insert a shot, or remove some frames, without affecting the rest of the source footage. The basic service library is a database that stores operation data and business data. The non-linear editing adapter is an interface converter that adapts to the information interfaces of different non-linear editing versions.
Referring to fig. 3, fig. 3 shows an embodiment of step S2, which is described in detail as follows:
s21: and analyzing the audio and video configuration request to acquire configuration operation corresponding to the audio and video.
Specifically, the user side performs configuration operation on the audio and video in different interface design layers according to actual conditions, and the configuration operation can generate a corresponding audio and video configuration request, so that the server analyzes the audio and video configuration request to obtain the corresponding configuration operation. The configuration operation comprises an audio and video operation object and data required by the audio and video operation.
S22: and acquiring a request interface corresponding to the configuration operation by calling the non-linear editing adapter.
Specifically, since the non-linear editing adapter uniformly abstracts the non-linear-editing-related function interfaces, the request interface corresponding to the configuration operation is obtained by calling the non-linear editing adapter.
S23: and acquiring configuration data corresponding to the configuration operation from the basic service library according to the request interface.
S24: and generating an operation object and operation information based on the configuration data and the configuration operation.
Specifically, the corresponding basic service library is called through the request interface, and the configuration data required by the configuration operation is identified, so that the corresponding data is acquired in the corresponding basic service library, that is, the corresponding configuration data is acquired. And initializing the configuration data and the configuration operation to generate an operation object and operation information. The operation object refers to a corresponding audio/video control which needs to be operated. The operation information is an operation procedure corresponding to the operation object.
In this embodiment, the audio and video configuration request is parsed to obtain the configuration operation corresponding to the audio and video; the configuration data corresponding to the configuration operation is obtained from the basic service library through the request interface; and the operation object and operation information are then generated from the configuration data and configuration operation. In this way the request is parsed and the corresponding data is obtained from different basic service libraries, so that audio and video operations adapt to different version information and corresponding interfaces, which helps improve audio and video editing efficiency.
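Steps S21 to S24 can be sketched as a small parse-and-build pipeline. This is an illustrative assumption rather than the patent's implementation: the request format, the contents of `BASIC_SERVICE_LIB`, and all function names are hypothetical.

```python
def parse_config_request(request):
    """S21: extract the configuration operation from an audio/video
    configuration request (modelled as a dict; the real format is
    unspecified in the patent)."""
    return request["operation"], request.get("params", {})

# S23: stand-in for the basic service library holding configuration data.
BASIC_SERVICE_LIB = {
    "trim": {"min_len_ms": 100},
}

def build_operation(request):
    """S22-S24: resolve the configuration operation, fetch its
    configuration data, and produce (operation_object, operation_info)."""
    op, params = parse_config_request(request)
    config_data = BASIC_SERVICE_LIB.get(op, {})
    operation_object = {"control": params.get("control", "timeline")}
    operation_info = {"op": op, "config": config_data, "params": params}
    return operation_object, operation_info

obj, info = build_operation({"operation": "trim",
                             "params": {"control": "clip_3", "start": 0}})
```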
Referring to fig. 4, fig. 4 shows an embodiment before step S2, which is described in detail as follows:
S2A: and acquiring bottom layer interface information corresponding to the plurality of non-linear editing versions.
S2B: and acquiring preset uniform interface configuration, and performing translation adaptation processing on the bottom layer interface information through the preset uniform interface configuration to obtain the non-linear editing adapter.
Specifically, the non-linear editing adapter is obtained by acquiring the bottom-layer interface information of the different non-linear editing versions and a preset uniform interface configuration, and performing translation adaptation on the bottom-layer interface information through that configuration. In other words, a unified set of interfaces is defined, and the bottom-layer functional interfaces of the different non-linear editing versions are translated and adapted to this set, so that the interfaces on the adapter share a uniform configuration. The non-linear editing adapter uniformly abstracts the non-linear-editing-related functional interfaces, while different adaptation subclasses inherit the abstract interfaces to implement adaptations for different non-linear editing versions, which makes it easy to flexibly extend support to new versions.
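The abstract-interface-plus-adaptation-subclasses scheme described above maps naturally onto an abstract base class with one subclass per non-linear editing version. A minimal sketch, in which the engine versions and call strings are invented purely for illustration:

```python
from abc import ABC, abstractmethod

class NLEAdapter(ABC):
    """Uniform abstract interface over different non-linear editing
    engine versions (names are assumptions, not from the patent)."""
    @abstractmethod
    def cut(self, track, at_ms):
        ...

class NLEv1Adapter(NLEAdapter):
    def cut(self, track, at_ms):
        # Translate the uniform call into the hypothetical v1 engine call.
        return f"v1:cut:{track}@{at_ms}"

class NLEv2Adapter(NLEAdapter):
    def cut(self, track, at_ms):
        # The hypothetical v2 engine names the same function differently.
        return f"v2:split:{track}@{at_ms}"

def make_adapter(version):
    """Pick the adaptation subclass for a given engine version."""
    return {"v1": NLEv1Adapter, "v2": NLEv2Adapter}[version]()
```

Callers program against `NLEAdapter` only; adding support for a new engine version means adding one subclass, which matches the flexible-extension point made above.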
S3: and performing stack pushing/popping processing on the operation information by calling the Undo/Redo manager to call the operation object to perform stack pushing/popping processing to obtain target data.
Specifically, the Undo/Redo manager is called to split the operation information into a plurality of one-step operations; these one-step operations are then packaged and merged into a single Undo/Redo operation, which calls the operation objects to perform push/pop management, obtaining target data that satisfies the Undo/Redo operation.
Undo restores the program state changed by the user's previous operation to its original state, and Redo applies the change again. The Undo/Redo manager is the component that manages Undo/Redo. Push processing places data onto the top of the stack; pop processing removes data from the top of the stack. Push/pop processing therefore follows last-in, first-out (LIFO) order.
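The two-stack push/pop behaviour described above can be sketched as follows; `UndoRedoManager` and the `(do, undo)` callable pairs are illustrative assumptions, not the patent's actual data structures.

```python
class UndoRedoManager:
    """Minimal two-stack Undo/Redo manager. Each command is a
    (do, undo) pair of callables."""
    def __init__(self):
        self.undo_stack, self.redo_stack = [], []

    def execute(self, do, undo):
        do()
        self.undo_stack.append((do, undo))  # push onto the undo stack
        self.redo_stack.clear()             # a new edit invalidates redo history

    def undo(self):
        do, undo = self.undo_stack.pop()    # pop: last in, first out
        undo()
        self.redo_stack.append((do, undo))

    def redo(self):
        do, undo = self.redo_stack.pop()
        do()
        self.undo_stack.append((do, undo))

state = []
mgr = UndoRedoManager()
mgr.execute(lambda: state.append("clip"), lambda: state.pop())
mgr.undo()   # state is [] again
mgr.redo()   # state is ["clip"] again
```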
Referring to fig. 5, fig. 5 shows an embodiment of step S3, which is described in detail as follows:
s31: and splitting the operation information to obtain a plurality of one-step operation steps.
Specifically, the operation information includes a plurality of one-step operation steps, and each one-step operation step represents an operation instruction. And splitting the operation information to obtain a plurality of one-step operation steps, so that the subsequent one-step operation step supports Undo/Redo operation.
S32: and merging the plurality of one-step operation steps by calling the Undo/Redo manager to obtain one-step Undo/Redo operation.
S33: and calling the operation object to perform stack pushing/popping processing based on one-step Undo/Redo operation to obtain basic data, and taking the basic data as target data.
Specifically, the Undo/Redo manager packs a plurality of operation steps into one Undo/Redo operation, so that an operation object is called to perform stack pushing/popping processing, and target data meeting the Undo/Redo function is achieved. Meanwhile, when the packaging operation in each step is completed, the execution message and the data change caused by the execution message are notified to the interface design layer through the message communication module, and the audio and video control in the interface design layer can be updated point to point according to specific data change information.
In this embodiment, the operation information is split into a plurality of one-step operations; the Undo/Redo manager is called to merge them into a single Undo/Redo operation; the operation object is then called to perform push/pop processing based on that operation, obtaining basic data, and the pieces of basic data are taken as the target data. The operation information thus satisfies Undo/Redo operation, and the data change caused by the execution message serves as the update direction for the audio and video controls, which helps improve audio and video editing efficiency.
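Merging several one-step operations into one Undo/Redo unit (steps S31 to S33) is essentially a composite command. A minimal sketch under that assumption; note that the undo path replays the inverse steps in reverse order:

```python
class CompositeCommand:
    """Packs several one-step operations into a single Undo/Redo unit
    (illustrative names, not the patent's implementation)."""
    def __init__(self, steps):
        self.steps = steps  # list of (do, undo) callable pairs

    def do(self):
        for do, _ in self.steps:
            do()

    def undo(self):
        # Undo the one-step operations in reverse order.
        for _, undo in reversed(self.steps):
            undo()

log = []
cmd = CompositeCommand([
    (lambda: log.append("split"), lambda: log.remove("split")),
    (lambda: log.append("move"),  lambda: log.remove("move")),
])
cmd.do()
cmd.undo()
```

From the Undo/Redo manager's point of view, `cmd` behaves like one push/pop entry even though it carries multiple one-step operations.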
S4: and sending the target data to the interface design layer through the message queue.
Specifically, when the generation of target data is detected, the interface design layer interface corresponding to the target data is identified, and the target data is then sent to the corresponding interface design layer through the message queue. Because the message queue is first-in, first-out (data enqueued first is dequeued first), the generated target data is sent to the interface design layer in the order in which it was generated.
Referring to fig. 6, fig. 6 shows an embodiment of step S4, which is described in detail as follows:
s41: and when the generation of the basic data is monitored, identifying an interface design layer interface corresponding to the basic data as a target interface design layer interface.
Specifically, since the audio/video configuration request may come from different interface design layers, the basic data needs to be sent to the corresponding interface design layer, and when it is monitored that the basic data is generated, the interface of the interface design layer of the basic data is identified first.
S42: and sending the basic data to the interface design layer through a message queue based on the target interface design layer interface.
S43: and if all the basic data in the target data are sent to the corresponding interface design layer, completing the transmission of the target data.
Specifically, the basic data in the message queue is sent to the interface design layer through the interface of the target interface design layer. And when all the basic data in the target data are sent to the corresponding interface design layer, completing the transmission of the target data.
It should be noted that, when it is monitored that one piece of basic data is generated, the basic data is sent to the interface design layer, so that the basic data updates the audio/video control, and it is not necessary to wait for all pieces of basic data to be generated and sent. Therefore, the interface design layer audio and video control can be updated point to point by the basic data, and the audio and video editing efficiency is improved.
In this embodiment, the generation of basic data is monitored, the interface design layer interface for the basic data is identified, and the basic data is sent to that interface design layer. Data generation is thus monitored in real time, which facilitates point-to-point updating of the audio and video controls and helps improve audio and video editing efficiency.
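The monitor-and-dispatch behaviour of steps S41 to S43 can be sketched with a simple FIFO queue; the layer names and data shapes below are illustrative assumptions.

```python
from collections import deque

class MessageQueue:
    """FIFO queue that forwards each piece of basic data to its target
    interface design layer (a sketch, not the patent's implementation)."""
    def __init__(self):
        self.queue = deque()
        self.delivered = {}  # layer name -> list of received basic data

    def publish(self, target_layer, basic_data):
        # S41: basic data was generated; record its target layer interface.
        self.queue.append((target_layer, basic_data))

    def dispatch(self):
        # S42/S43: deliver in generation order until the queue is drained.
        while self.queue:
            layer, data = self.queue.popleft()  # first in, first out
            self.delivered.setdefault(layer, []).append(data)

mq = MessageQueue()
mq.publish("Win Filmora UI", {"msg": "trim_done", "change": {"clip_3": 500}})
mq.publish("Win Filmora UI", {"msg": "move_done", "change": {"clip_4": 200}})
mq.dispatch()
```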
S5: and in the interface design layer, updating the audio and video control in the interface design layer based on the target data.
Specifically, in the interface design layer, the audio/video control is updated by identifying the audio/video control corresponding to each basic data in the target data and then determining the updating direction of each audio/video control based on the basic data.
In this embodiment, an audio and video configuration request is obtained through a plurality of interface design layer interfaces; a basic service library and a non-linear editing adapter are called to perform basic configuration processing on the request, obtaining the corresponding operation object and operation information; an Undo/Redo manager is called to perform push/pop processing on the operation information, so that the operation object is pushed/popped to obtain target data; the target data is sent to the interface design layer through a message queue; and the audio and video controls in the interface design layer are then updated based on the target data. Audio and video configuration requests from multiple interface design layers are thus answered, and by calling the non-linear editing adapter the editing is adapted to multiple non-linear editing versions, so audio and video can be edited non-linearly across different interface design layers and non-linear editing versions, which helps improve editing efficiency.
Referring to fig. 7, fig. 7 shows an embodiment of step S5, which is described in detail as follows:
S51: and in the interface design layer, identifying the audio and video control corresponding to each basic data in the target data.
Specifically, different pieces of basic data update different audio/video controls, so the control corresponding to each piece of basic data must be identified.
S52: and taking the data change caused by the execution message as the updating direction of the audio and video control, and updating the audio and video control.
S53: and if the target data are executed, updating the audio and video control.
Specifically, each piece of basic data includes an execution message; when the execution message is executed, it produces a corresponding data change. That data change is the direction in which the audio/video control needs to be updated, so it is taken as the update direction when the basic data updates the control.
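As a hedged illustration of steps S52/S53 — applying the data change carried by each execution message as a control's update direction — consider the following sketch. The `Control` class and the dictionary layout of the basic data are assumptions made for this example only:

```python
class Control:
    """Toy stand-in for an audio/video control in the interface design layer."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def update(self, change):
        # The data change caused by the execution message is the update direction.
        self.state.update(change)

def apply_target_data(target_data, controls):
    for basic in target_data:
        # Identify the control corresponding to this piece of basic data (S51),
        # then apply its data change as the update direction (S52).
        controls[basic["control"]].update(basic["data_change"])
    # When all target data has been executed, the update is complete (S53).
    return controls

controls = {"timeline": Control("timeline"), "volume": Control("volume")}
target = [
    {"control": "timeline", "exec_message": "clip_trimmed",
     "data_change": {"clip_len": 120}},
    {"control": "volume", "exec_message": "gain_set",
     "data_change": {"gain_db": -3}},
]
apply_target_data(target, controls)
```

Each control only receives the changes addressed to it, which is the basic-data-by-basic-data updating the embodiment describes.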
In this embodiment, in the interface design layer, the audio/video control corresponding to each piece of basic data in the target data is identified, the data change caused by the execution message is taken as the control's update direction, and the control is updated. When all the target data has been executed, the update of the audio/video controls is complete. Updating the controls one piece of basic data at a time helps improve audio/video editing efficiency.
Referring to fig. 8, as an implementation of the method shown in fig. 1, the present application provides an embodiment of an audio/video editing apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 8, the audio/video editing apparatus of the present embodiment includes: the system comprises an audio and video configuration request module 61, a basic configuration processing module 62, a target data acquisition module 63, a target data sending module 64 and an audio and video control updating module 65, wherein:
the audio and video configuration request module 61 is used for acquiring audio and video configuration requests through a plurality of interface design layer interfaces;
the basic configuration processing module 62 is configured to invoke the basic service library and the non-linear editing adapter to perform basic configuration processing on the audio/video configuration request, so as to obtain the operation object and operation information corresponding to the request;
a target data obtaining module 63, configured to perform push/pop processing on the operation information by calling the Undo/Redo manager, so as to call an operation object to perform push/pop processing, and obtain target data;
a target data sending module 64, configured to send target data to the interface design layer through a message queue;
and the audio/video control updating module 65 is configured to update the audio/video control in the interface design layer based on the target data in the interface design layer.
Further, the target data obtaining module 63 includes:
the operation information splitting unit is used for splitting the operation information to obtain a plurality of one-step operation steps;
the operation step merging unit is used for merging the plurality of one-step operation steps by calling the Undo/Redo manager to obtain one-step Undo/Redo operation;
and the basic data acquisition unit is used for calling the operation object to perform stack pushing/popping processing based on one-step Undo/Redo operation to obtain basic data and taking the basic data as target data, wherein the basic data comprises an execution message and data change caused by the execution message.
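The split/merge behavior of the three units above can be sketched as follows. This is a simplified model, not the patent's implementation; the step/operation dictionaries and method names are invented for this example:

```python
class UndoRedoManager:
    """Toy Undo/Redo manager: merges one-step operations, then pushes/pops them."""

    def __init__(self):
        self.undo_stack, self.redo_stack = [], []

    @staticmethod
    def merge(steps):
        # Merge several one-step operations into a single one-step
        # Undo/Redo operation that executes (or reverts) them together.
        return {
            "do": lambda: [s["do"]() for s in steps],
            "undo": lambda: [s["undo"]() for s in reversed(steps)],
        }

    def push(self, op):
        # Pushing executes the operation; its results play the role of the
        # basic data (execution messages plus the data changes they cause).
        result = op["do"]()
        self.undo_stack.append(op)
        self.redo_stack.clear()
        return result

    def pop_undo(self):
        op = self.undo_stack.pop()
        self.redo_stack.append(op)
        return op["undo"]()

mgr = UndoRedoManager()
log = []
steps = [
    {"do": lambda: log.append("remove") or "removed",
     "undo": lambda: log.append("restore") or "restored"},
    {"do": lambda: log.append("shift") or "shifted",
     "undo": lambda: log.append("unshift") or "unshifted"},
]
merged = mgr.merge(steps)       # two one-step operations -> one operation
basic_data = mgr.push(merged)   # executing produces the "basic data"
```

Merging means a single undo reverts both steps in reverse order, which is what makes several one-step operations behave as one Undo/Redo operation.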
Further, the target data sending module 64 includes:
the target interface design layer interface determining unit is used for identifying an interface design layer interface corresponding to the basic data as a target interface design layer interface when the generation of the basic data is monitored;
the basic data sending unit is used for sending basic data to the interface design layer through the message queue based on the interface of the target interface design layer;
and the target data transmission completion unit is used for completing the transmission of the target data if the basic data in the target data are all sent to the corresponding interface design layer.
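A minimal sketch of the three units above — routing each piece of basic data through a message queue to its target interface design layer. The field names (`interface`, `payload`) are assumptions for this example:

```python
import queue

def dispatch(basic_items, interfaces):
    """Send each piece of basic data to its target interface design layer."""
    q = queue.Queue()
    for item in basic_items:
        q.put(item)
    delivered = {name: [] for name in interfaces}
    while not q.empty():
        item = q.get()
        # The interface identified for this basic data is the target
        # interface design layer interface.
        delivered[item["interface"]].append(item)
    # Once every piece of basic data reaches its layer, transmission is done.
    return delivered

result = dispatch(
    [{"interface": "timeline_ui", "payload": "clip moved"},
     {"interface": "mixer_ui", "payload": "gain changed"}],
    ["timeline_ui", "mixer_ui"],
)
```

Draining the queue until it is empty corresponds to the completion condition: transmission of the target data finishes only when every item has been delivered.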
Further, the audio/video control update module 65 includes:
the audio and video control identification unit is used for identifying the audio and video control corresponding to each basic data in the target data in the interface design layer;
the updating direction determining unit is used for taking the data change caused by the execution message as the updating direction of the audio and video control and updating the audio and video control;
and the audio and video control updating completion unit is used for completing the updating of the audio and video control if the target data is completely executed.
Further, the basic configuration processing module 62 includes:
the configuration operation acquisition unit is used for analyzing the audio and video configuration request to acquire configuration operation corresponding to the audio and video;
the request interface acquisition unit is used for acquiring a request interface corresponding to the configuration operation by calling the non-linear editing adapter;
the basic data generating unit is used for acquiring configuration data corresponding to configuration operation from the basic service library according to the request interface;
and the operation information generating unit is used for generating an operation object and operation information based on the configuration data and the configuration operation.
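The four units above can be sketched as one function: parse the request, ask the adapter for the matching request interface, look up configuration data in the basic service library, and assemble the operation object and operation information. All names and the shape of the request are hypothetical:

```python
class NleAdapter:
    """Toy non-linear editing adapter: maps operations to request interfaces."""
    INTERFACES = {"trim_clip": "nle.clip.trim"}

    def request_interface(self, operation):
        return self.INTERFACES[operation]

# Stand-in for the basic service library.
BASE_SERVICE_LIBRARY = {"nle.clip.trim": {"min_len": 1}}

def basic_configuration(request, adapter):
    operation = request["operation"]                # parse the request
    iface = adapter.request_interface(operation)    # adapter -> request interface
    config = BASE_SERVICE_LIBRARY[iface]            # fetch configuration data
    operation_object = {"interface": iface, "config": config}
    operation_info = {"operation": operation,
                      "params": request.get("params", {})}
    return operation_object, operation_info

obj, info = basic_configuration(
    {"operation": "trim_clip", "params": {"clip": 3}}, NleAdapter())
```

The operation object bundles the resolved interface with its configuration data, while the operation information keeps the caller's intent; both are then handed to the Undo/Redo manager.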
Further, before the audio/video configuration request module 61, the apparatus further includes:
the interface information acquisition module is used for acquiring interface information corresponding to the plurality of interface design layers;
and the configuration processing module is used for configuring the plurality of interface information according to the unified configuration information to obtain a plurality of interface design layer interfaces.
Further, before the basic configuration processing module 62, the apparatus further includes:
the bottom layer interface information acquisition unit is used for acquiring bottom layer interface information corresponding to a plurality of non-linear editing versions;
and the translation adaptation processing unit is used for acquiring the preset unified interface configuration and performing translation adaptation processing on the bottom layer interface information through the preset unified interface configuration to obtain the non-linear editing adapter, wherein the non-linear editing adapter supports a plurality of non-linear editing versions.
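The translation-adaptation step — exposing one preset unified interface configuration over several versions' bottom-layer interfaces — resembles the classic adapter pattern. The sketch below invents two toy version tables to show the idea; nothing here reflects an actual NLE API:

```python
# Unified interface configuration that every version must expose.
UNIFIED_CALLS = ["add_clip", "remove_clip"]

# Bottom-layer interface tables for two hypothetical NLE versions.
V1 = {"add_clip": lambda: "v1.insertClip", "remove_clip": lambda: "v1.deleteClip"}
V2 = {"add_clip": lambda: "v2.clip_add",   "remove_clip": lambda: "v2.clip_del"}

def build_adapter(unified_calls, version_tables):
    # Translation adaptation: for each version, map every unified call
    # onto that version's bottom-layer interface.
    return {
        version: {call: table[call] for call in unified_calls}
        for version, table in version_tables.items()
    }

adapter = build_adapter(UNIFIED_CALLS, {"v1": V1, "v2": V2})
```

Callers always use the unified call names; the adapter translates them into whichever bottom-layer interface the selected non-linear editing version actually provides.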
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 9, fig. 9 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 7 comprises a memory 71, a processor 72, and a network interface 73, which are communicatively connected to each other through a system bus. Note that only a computer device 7 with the three components memory 71, processor 72, and network interface 73 is shown; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 71 includes at least one type of readable storage medium, such as flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, or an optical disk. In some embodiments, the memory 71 may be an internal storage unit of the computer device 7, such as a hard disk or memory of the computer device 7. In other embodiments, the memory 71 may also be an external storage device of the computer device 7, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device 7. Of course, the memory 71 may also comprise both an internal storage unit of the computer device 7 and an external storage device thereof. In this embodiment, the memory 71 is generally used to store the operating system installed on the computer device 7 and various types of application software, such as the program code of the audio/video editing method. Further, the memory 71 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 72 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 72 is typically used to control the overall operation of the computer device 7. In this embodiment, the processor 72 is configured to execute the program code stored in the memory 71 or process data, for example, execute the program code of the above-mentioned audio/video editing method, so as to implement various embodiments of the audio/video editing method.
The network interface 73 may comprise a wireless network interface or a wired network interface, and the network interface 73 is typically used to establish a communication connection between the computer device 7 and other electronic devices.
The present application further provides another embodiment: a computer-readable storage medium storing a computer program executable by at least one processor, so that the at least one processor performs the steps of the audio/video editing method described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method of the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not all, embodiments of the present application, and the appended drawings illustrate preferred embodiments without limiting the scope of the application. This application may be embodied in many different forms; the embodiments are provided so that the disclosure of the application will be thorough. Although the present application has been described in detail with reference to the foregoing embodiments, one skilled in the art may still modify the technical solutions recorded in the foregoing embodiments, or replace some of their features with equivalents, without departing from the application. All equivalent structures made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. An audio/video editing method, comprising:
acquiring an audio and video configuration request through a plurality of interface design layer interfaces;
calling a basic service library and a non-linear editing adapter to perform basic configuration processing on the audio and video configuration request to obtain an operation object and operation information corresponding to the audio and video configuration request;
performing stack pushing/popping processing on the operation information by calling an Undo/Redo manager to call the operation object to perform the stack pushing/popping processing to obtain target data;
sending the target data to an interface design layer through a message queue;
and updating the audio and video control in the interface design layer based on the target data in the interface design layer.
2. The method for editing audio/video according to claim 1, wherein the step of performing push/pop processing on the operation information by calling an Undo/Redo manager to call the operation object to perform the push/pop processing to obtain target data includes:
splitting the operation information to obtain a plurality of one-step operation steps;
merging the plurality of one-step operation steps by calling the Undo/Redo manager to obtain one-step Undo/Redo operation;
and calling the operation object to perform the stack pushing/popping processing based on the one-step Undo/Redo operation to obtain basic data, and taking the basic data as the target data, wherein the basic data comprises an execution message and data change caused by the execution message.
3. The method for editing audio/video according to claim 2, wherein the sending the target data to the interface design layer through a message queue comprises:
when the generation of basic data is monitored, identifying an interface design layer interface corresponding to the basic data as a target interface design layer interface;
sending the basic data to the interface design layer through the message queue based on the target interface design layer interface;
and if all the basic data in the target data are sent to the corresponding interface design layer, completing the transmission of the target data.
4. The method for editing audio/video according to claim 2, wherein the updating, in the interface design layer, the audio/video control in the interface design layer based on the target data includes:
in the interface design layer, identifying an audio/video control corresponding to each basic data in the target data;
taking the data change caused by the execution message as the updating direction of the audio and video control, and updating the audio and video control;
and if the target data are executed completely, updating the audio and video control.
5. The method for editing audio and video according to claim 1, wherein the invoking of the basic service library and the non-linear editing adapter to perform basic configuration processing on the audio and video configuration request to obtain an operation object and operation information corresponding to the audio and video configuration request comprises:
analyzing the audio and video configuration request to acquire configuration operation corresponding to the audio and video;
acquiring a request interface corresponding to the configuration operation by calling the non-linear editing adapter;
acquiring configuration data corresponding to the configuration operation from a basic service library according to the request interface;
and generating the operation object and the operation information based on the configuration data and the configuration operation.
6. The method for editing audio/video according to claim 1, wherein before the obtaining the audio/video configuration request through the plurality of interface design layer interfaces, the method further comprises:
acquiring interface information corresponding to a plurality of interface design layers;
and configuring the interface information according to the unified configuration information to obtain a plurality of interface design layer interfaces.
7. The audio/video editing method according to any one of claims 1 to 6, wherein before the invoking of the basic service library and the non-linear editing adapter to perform basic configuration processing on the audio and video configuration request to obtain an operation object and operation information corresponding to the audio and video configuration request, the method further comprises:
acquiring bottom layer interface information corresponding to a plurality of non-linear editing versions;
and acquiring a preset unified interface configuration, and performing translation adaptation processing on the bottom layer interface information through the preset unified interface configuration to obtain the non-linear editing adapter, wherein the non-linear editing adapter supports a plurality of non-linear editing versions.
8. An audio/video editing apparatus, comprising:
the audio and video configuration request module is used for acquiring audio and video configuration requests through a plurality of interface design layer interfaces;
the basic configuration processing module is used for calling a basic service library and the non-linear editing adapter to perform basic configuration processing on the audio and video configuration request to obtain an operation object and operation information corresponding to the audio and video configuration request;
the target data acquisition module is used for performing stack pushing/popping processing on the operation information by calling an Undo/Redo manager so as to call the operation object to perform the stack pushing/popping processing to obtain target data;
the target data sending module is used for sending the target data to an interface design layer through a message queue;
and the audio and video control updating module is used for updating the audio and video control in the interface design layer based on the target data in the interface design layer.
9. A computer device, characterized by comprising a memory in which a computer program is stored and a processor that implements the method for editing audio and video according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which when executed by a processor implements the method of editing audio-video according to any one of claims 1 to 7.
CN202110731896.0A 2021-06-30 2021-06-30 Audio and video editing method and device, computer equipment and storage medium Active CN113470701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110731896.0A CN113470701B (en) 2021-06-30 2021-06-30 Audio and video editing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113470701A true CN113470701A (en) 2021-10-01
CN113470701B CN113470701B (en) 2022-07-01

Family

ID=77874026

Country Status (1)

Country Link
CN (1) CN113470701B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1747047A (en) * 2005-08-04 2006-03-15 北京华傲精创科技开发有限公司 Audio/video frequency editing unit and method thereof
CN102855908A (en) * 2012-08-28 2013-01-02 深圳市万兴软件有限公司 Method and device for processing audio and video editing track
CN103095994A (en) * 2012-12-28 2013-05-08 中山大学 High-definition medium non-linear editing method used in digital home and device thereof
CN111541914A (en) * 2020-05-14 2020-08-14 腾讯科技(深圳)有限公司 Video processing method and storage medium
CN112269527A (en) * 2020-11-16 2021-01-26 Oppo广东移动通信有限公司 Application interface generation method and related device
CN112331235A (en) * 2021-01-04 2021-02-05 腾讯科技(深圳)有限公司 Multimedia content editing control method and device, electronic equipment and storage medium
CN112947923A (en) * 2021-02-25 2021-06-11 维沃移动通信有限公司 Object editing method and device and electronic equipment


Also Published As

Publication number Publication date
CN113470701B (en) 2022-07-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211125

Address after: 518000 1001, block D, building 5, software industry base, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Wanxing Software Co.,Ltd.

Address before: 518000 1002, block D, building 5, software industry base, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN SIBO TECHNOLOGY Co.,Ltd.

GR01 Patent grant