CN115016871B - Multimedia editing method, electronic device and storage medium - Google Patents

Multimedia editing method, electronic device and storage medium

Info

Publication number
CN115016871B
Authority
CN
China
Prior art keywords
editing
application
interface
video
multimedia
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111615710.1A
Other languages
Chinese (zh)
Other versions
CN115016871A (en)
Inventor
朱登奎
周建东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202111615710.1A
Publication of CN115016871A
Application granted
Publication of CN115016871B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817: Interaction techniques using icons
    • G06F3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842: Selection of displayed objects or displayed text elements
    • G06F3/0487: Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device
    • G06F3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The application relates to the technical field of intelligent terminals, and in particular to a multimedia editing method, an electronic device and a storage medium. The method comprises the following steps: the electronic device displays a first interface of a first application; upon detecting that a user inputs a first editing operation on the first multimedia data, the electronic device displays a second interface of the first application, wherein the second interface comprises one or more editing controls; upon detecting that the user selects a first editing control for the first multimedia data, the electronic device performs first editing processing on the first multimedia data by calling the multimedia editing capability of a second application to obtain second multimedia data; the electronic device displays the second multimedia data in the second interface. According to the method and the device, the editing capability of the audio/video editing application can be used across processes without switching the display interface of the current audio/video application, so that the audio/video editing processing is completed, the user's visual experience is smoother, and the user experience is improved.

Description

Multimedia editing method, electronic device and storage medium
Technical Field
The invention relates to the technical field of intelligent terminals, in particular to a multimedia editing method, electronic equipment and a storage medium.
Background
With the rapid development of the short-video multimedia industry, people are increasingly used to adding favorite music or sound effects to captured videos or pictures to process them into short videos for sharing, saving, enjoying and the like. Therefore, some terminal electronic devices can implement audio/video editing to make it more convenient for people to edit short videos. Referring to fig. 1a to 1b, when a user browses a video in a gallery application of a mobile phone 100, the user may click an edit button 011 on a video browsing interface 101 shown in fig. 1a to jump to an editing interface 102 of a video editing application shown in fig. 1b. At this time, the mobile phone 100 correspondingly switches to running the video editing application, where the video editing application may be, for example, Jianying™, Douyin™, or another application with short video editing functions. After the user completes operations such as adding audio, changing speed, or adding animation on the interface shown in fig. 1b, the user can click the export button 021 in the upper right corner of the editing interface 102 to export a short video in which the editing is completed.
However, in the above-described process of performing audio/video editing across applications shown in fig. 1a to 1b, the mobile phone 100 needs to switch from running one application to running another, so a time delay is inevitably generated due to the time required for starting and jumping between applications. Meanwhile, if the user wants to return to the gallery application to browse other videos or photos after finishing editing, the user needs to first exit the video editing application currently running on the mobile phone 100 to return to the interface of the gallery application, or needs to click the gallery application icon on the desktop of the mobile phone 100 again to re-enter the interface of the gallery application. The operation is therefore cumbersome, and the interface jumps caused by application switching on the mobile phone 100 also make the user's visual experience less smooth, resulting in a poor user experience.
Disclosure of Invention
The embodiment of the application provides a multimedia editing method, an electronic device and a storage medium, which enable an audio/video application to use the editing capability of an audio/video editing application in a cross-process manner, complete the audio/video editing processing without switching the display interface of the current audio/video application, and directly display the result of the editing processing to the user on the interface of the audio/video application, so that the user's visual experience is smoother and the user experience is improved.
In a first aspect, an embodiment of the present application provides a multimedia editing method applied to an electronic device installed with a first application and a second application, where the method includes: the electronic equipment displays a first interface of a first application, wherein the first interface comprises first multimedia data; detecting that a user inputs first editing operation to the first multimedia data, and displaying a second interface of the first application by the electronic equipment, wherein the second interface comprises one or more editing controls, and the editing controls are used for responding to the user operation to carry out corresponding editing processing on the first multimedia data; detecting that a user selects a first editing control for first multimedia data, and the electronic equipment executes first editing processing on the first multimedia data by calling the multimedia editing capability of a second application to obtain second multimedia data; the electronic device displays the second multimedia data in the second interface.
That is, the user may browse the first multimedia data through a first application on the electronic device and may choose to edit the first multimedia data on a first interface that presents the first multimedia data. If the user chooses to edit the first multimedia data, the display interface of the electronic device can be switched to a second interface of the first application that displays the first multimedia data, and the user can further click an editing control on the second interface to edit the first multimedia data. The editing operation performed by the user on the second interface can be carried out as corresponding editing processing in a cross-process manner through the multimedia editing capability of the second application run by the electronic device, and the result of the editing processing can be presented on the second interface of the first application, that is, a preview effect of the editing processing can be displayed on the second interface.
As an example, the first application may be an audio/video application such as the gallery or the AOD described in the following embodiments, and the second application may be an audio/video editing application such as Petal Clip™ described in the following embodiments. Accordingly, the multimedia editing capability of the second application is the editing processing capability of the audio/video editing application for multimedia data, for example, the editing capabilities described in step 404 shown in fig. 4 in the following embodiments, including splitting, speed change (or speed), volume, animation, filters, and the like. The first interface of the first application may be, for example, the video browsing interface shown in fig. 2a and the like in the following embodiments, and the second interface of the first application may be, for example, the video editing interface shown in fig. 2b and the like in the following embodiments. The one or more editing controls included in the second interface may be, for example, the respective editing buttons in the editing function menu bar 230 shown in fig. 2b in the following embodiments. The user can click the respective editing buttons in the editing function menu bar 230 to perform the corresponding audio/video editing operations.
It can be understood that, when the user clicks any editing button on the video editing interface (i.e., the second interface) of the audio/video application, an electronic device such as the mobile phone 100 can edit the first multimedia data displayed on the current video editing interface through the editing capability, provided by the running audio/video editing application, that corresponds to the editing button clicked by the user, so as to obtain the second multimedia data. If the user clicks editing buttons multiple times to perform multiple editing operations, the audio/video editing application correspondingly completes each editing process corresponding to a user editing operation to obtain the second multimedia data, and the results of the multiple editing processes can be superimposed. That is, the second multimedia data obtained through the plurality of editing processes can be retained.
It can be understood that the user may undo an editing operation that has been performed, and the corresponding editing processing result can be canceled accordingly. Of course, the user's undo operation may also be understood as an editing operation that is performed by the audio/video editing application running on the electronic device.
In a possible implementation of the first aspect, the first interface includes a first editing button, and the user inputs a first editing operation on the first multimedia data as an operation acting on the first editing button.
The first interface is, for example, the video browsing interface shown in fig. 2a and the like in the following embodiments, on which an edit button for the currently browsed video is displayed, and the first editing operation input by the user on the first multimedia data is, for example, the operation in which the user clicks the edit button 211 on the video browsing interface shown in fig. 2a in the following embodiments.
In one possible implementation of the first aspect, the electronic device runs a first application on a first process and the electronic device runs a second application on a second process; detecting that a user inputs a first editing operation to the first multimedia data, the electronic device displaying a second interface of the first application, including: in response to the first editing operation, the electronic device sends a first request from the first process to the second process, the first request being for requesting a multimedia editing capability list of the second application from the second process; based on the multimedia editing capability list returned by the second process, the electronic device generates one or more editing controls displayed on the second interface; the electronic device displays a second interface.
That is, the first application is run on a first process running on the electronic device system, the second application is run on a second process running on the electronic device system, and the operation request (including the first request) generated corresponding to the operation of the user on the interface of the first application can be sent from the first process to the second process across the processes, and is processed by the second application running on the second process. The first request may be, for example, an "editing capability request" described in step 402 shown in fig. 4 in the following embodiments, and is used to obtain the editing capability of the video editing application, where the second application is the video editing application, and the multimedia editing capability is the editing capability of the video editing application. The first application is, for example, gallery 311 described in the following embodiment, and the list of multimedia editing capabilities returned by the second process to the first process running the first application may be, for example, a "list of editing capabilities" returned to gallery 311, where the type of editing control and the corresponding editing function displayed on the video editing interface (i.e., the second interface) of gallery 311 (i.e., the first application) correspond to each editing capability on the list of editing capabilities.
It can be appreciated that when the first application running on the first process is different, the list of multimedia editing capabilities returned by the second process is also different, and specific reference may be made to the description related to step 404 in the following embodiments, which is not described herein.
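As an illustrative sketch only (not part of this disclosure), on Android such a cross-process capability query could be performed by binding to a service of the second application through an assumed AIDL interface named IEditCapabilityService; all class, action, and package names below are hypothetical:

    // Hypothetical client-side sketch for the first application (e.g., a gallery).
    // IEditCapabilityService is an assumed AIDL interface; all names are illustrative.
    import android.content.ComponentName;
    import android.content.Context;
    import android.content.Intent;
    import android.content.ServiceConnection;
    import android.os.IBinder;
    import android.os.RemoteException;
    import java.util.List;

    public class EditCapabilityClient {
        private IEditCapabilityService service;  // stub generated from the assumed .aidl file

        private final ServiceConnection connection = new ServiceConnection() {
            @Override
            public void onServiceConnected(ComponentName name, IBinder binder) {
                service = IEditCapabilityService.Stub.asInterface(binder);
            }

            @Override
            public void onServiceDisconnected(ComponentName name) {
                service = null;
            }
        };

        public void bind(Context context) {
            Intent intent = new Intent("com.example.editor.BIND_EDIT_CAPABILITY");
            intent.setPackage("com.example.videoeditor");  // package of the second application (illustrative)
            context.bindService(intent, connection, Context.BIND_AUTO_CREATE);
        }

        // Returns the multimedia editing capability list, e.g. ["filter", "speed", "volume", ...].
        public List<String> requestCapabilities(String mediaPath) throws RemoteException {
            return service.getEditCapabilities(mediaPath);
        }
    }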
In a possible implementation of the first aspect, the method further includes: each editing control in the plurality of editing controls corresponds to each multimedia editing capability in the multimedia editing capability list one by one.
In a possible implementation of the first aspect, detecting that the user selects the first editing control for the first multimedia data includes: a second editing operation of the user on the second interface is detected that acts on the first editing control.
The second interface is, for example, a video editing interface described in the following embodiments, and the editing control is, for example, an editing function button described in the following embodiments. Namely, the user can click an editing function button on the video editing interface to perform corresponding editing operation. The first edit control described above may be any one of a plurality of edit controls displayed on the second interface, that is, any one of edit function buttons in an edit function menu bar described in the embodiment below. It will be appreciated that each edit function button (i.e., each edit control) in the edit function menu bar corresponds to a different edit capability, respectively, and also to a different edit process effect.
In a possible implementation of the first aspect, the electronic device performs a first editing process on the first multimedia data by calling a multimedia editing capability of the second application, including: responding to the second editing operation, the electronic equipment sends a first editing instruction from the first process to the second process, wherein the first editing instruction is used for instructing the second process to execute first editing processing on the first multimedia data; the second process running on the electronic device responds to the first editing instruction and executes first editing processing based on the multimedia editing capability of the second application.
That is, the second editing operation performed by the user on the second interface of the first application may trigger a cross-process editing process within the electronic device. That is, a first editing instruction corresponding to the second editing operation is handed across processes from the first process running the first application to the second process running the second application. After the second process receives the first editing instruction, the second process can perform corresponding editing processing, namely the first editing processing, on the first multimedia data through the editing capability of the second application running on the second process. The first editing instruction may be, for example, the editing operation instruction in the related descriptions of fig. 4 and fig. 7 in the following embodiments, where the editing operation instruction is associated with which editing function button is clicked by the user on the video editing interface. For example, when the user clicks the filter button, the editing operation instruction is used to instruct the process where the video editing application is located to perform filter processing on the first multimedia data.
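Building on the same assumed interface, a rough sketch of how the first process might forward the first editing instruction to the second process (capability names and the result convention are invented for illustration):

    // Hypothetical sketch: the first process forwards an editing instruction to the second process.
    import android.os.Bundle;
    import android.os.RemoteException;

    public final class EditInstructionSender {
        private final IEditCapabilityService service;  // bound as in the previous sketch

        public EditInstructionSender(IEditCapabilityService service) {
            this.service = service;
        }

        // e.g. the user clicked the filter button; sessionId identifies the editing session.
        public boolean applyFilter(int sessionId, String filterStyle) throws RemoteException {
            Bundle instruction = new Bundle();
            instruction.putString("capability", "filter");   // which editing capability to apply
            instruction.putString("style", filterStyle);     // illustrative filter style name
            // The second process performs the editing and returns 0 on success (assumed convention).
            return service.applyEdit(sessionId, instruction) == 0;
        }
    }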
In a possible implementation of the first aspect, the electronic device performs a first editing process on the first multimedia data by calling a multimedia editing capability of the second application, and further includes: and the second process running on the electronic equipment feeds back the execution result of the first editing process to the first process.
That is, after the second process run by the electronic device finishes the editing processing on the first multimedia data, it may notify the first process that the editing processing has been finished; for example, the second process may send a return value indicating completion of the editing processing to the first process, as described in step 408 or step 908 in the following embodiments, which is not described here.
In a possible implementation manner of the first aspect, the electronic device displays second multimedia data in a second interface, including: the electronic device plays the second multimedia data on the second interface to preview the picture effect and/or the electronic device plays the second multimedia data to preview the sound effect.
That is, the first multimedia data (e.g., a video) for which the editing processing is completed can be previewed on the second interface of the first application currently displayed by the electronic device. It can be understood that the editing processing of the first multimedia data may include processing of the picture and/or processing of the sound. The processing of the picture includes, for example, adding a filter, a sticker, or the like described in the following embodiments, and the processing of the sound includes, for example, sound-mixing effects and adding audio or sound effects described in the following embodiments. When an electronic device such as the mobile phone 100 displays the video preview effect of the editing processing, the picture with the filter, the sticker, and the like can be played, and the added sound effects, sound-mixing effects, and the like can be played synchronously.
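As a minimal, assumption-laden sketch of the preview step, the first application could simply play the edited output on its second interface with a standard VideoView, so picture and sound effects are previewed together:

    // Hypothetical preview sketch: play the edited result (second multimedia data) in place.
    import android.net.Uri;
    import android.widget.VideoView;

    public final class EditPreview {
        public static void show(VideoView videoView, Uri editedOutput) {
            videoView.setVideoURI(editedOutput);                        // frames already carry filter/sticker effects
            videoView.setOnPreparedListener(mp -> mp.setLooping(true)); // loop the preview
            videoView.start();                                          // added audio/sound effects play synchronously
        }
    }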
In a possible implementation of the first aspect, the second interface includes a save control, and after the electronic device displays the second multimedia data in the second interface, the method further includes: the electronic device detects a save operation acting on the save control; the electronic equipment responds to the save operation and sends a save instruction from the first process to the second process; a second process running by the electronic equipment responds to the storage instruction and stores second multimedia data under the first directory; the first catalog is a storage space with access rights for both the first application and the second application.
That is, when the user clicks the save control on the second interface of the first application, a corresponding save instruction may be sent from the first process running the first application to the second process. Based on the received save instruction, the second process can merge and package the encoded and cached video stream data of the edited first multimedia data (e.g., a video) and save it as a video file. The video file may be, for example, the short video file described in step 836 in the following embodiments.
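A hedged sketch of the save step under the assumption that the shared directory is a public media collection: the second process registers the muxed output with MediaStore so that the first application (the gallery) can also access it (the relative path and file name are illustrative):

    // Hypothetical sketch: save the edited video in a location both applications can access.
    import android.content.ContentResolver;
    import android.content.ContentValues;
    import android.net.Uri;
    import android.provider.MediaStore;

    public final class EditResultSaver {
        public static Uri createOutputEntry(ContentResolver resolver, String displayName) {
            ContentValues values = new ContentValues();
            values.put(MediaStore.Video.Media.DISPLAY_NAME, displayName);           // e.g. "edited_clip.mp4"
            values.put(MediaStore.Video.Media.MIME_TYPE, "video/mp4");
            values.put(MediaStore.Video.Media.RELATIVE_PATH, "Movies/EditedClips"); // illustrative shared directory
            // The caller then opens an OutputStream on the returned Uri and writes the merged,
            // packaged video stream data to it.
            return resolver.insert(MediaStore.Video.Media.EXTERNAL_CONTENT_URI, values);
        }
    }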
In one possible implementation of the first aspect, the interaction between the first process and the second process is implemented by a cross-process communication technology, and the cross-process communication technology includes any one of the following: pipeline communication; cross-process communication based on named pipes; cross-process communication based on memory mapping; cross-process communication based on message queues; cross-process communication based on shared memory; signal quantity or signal based cross-process communication; socket-based cross-process communication.
In a possible implementation of the first aspect, each editing control of the plurality of editing controls includes any one of the following: splitting; speed change; volume; animation; filter; single-frame export; cropping; masking; chroma keying; mirroring; reverse playback; freeze frame; picture-in-picture; special effects; stickers; adding music/sound effects; and adding subtitles.
In a possible implementation of the first aspect described above, the form of the editing control includes any one of the following: buttons, progress bars, date/time controls, upload controls, list boxes, tree controls, page views, input boxes, multi-format text boxes, tab controls, drop-down boxes.
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors; one or more memories; the one or more memories store one or more programs that, when executed by the one or more processors, cause the electronic device to perform the above-described multimedia editing method.
In a third aspect, embodiments of the present application provide a computer-readable storage medium having instructions stored thereon, which when executed on a computer, cause the computer to perform the above-described multimedia editing method.
In a fourth aspect, embodiments of the present application provide a computer program product comprising a computer program/instruction which, when executed by an electronic device, implements the above-described multimedia editing method.
Drawings
Fig. 1a to 1b are diagrams illustrating some User Interfaces (UIs).
Fig. 2a to 2d are schematic views of some UI interfaces provided in embodiments of the present application.
Fig. 3 is a schematic block diagram of a software system architecture of a mobile phone 100 according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of an implementation of a multimedia editing method according to embodiment 1 of the present application.
Fig. 5a to 5g are schematic views of some UI interfaces provided in embodiment 1 of the present application.
Fig. 6a to 6d are schematic views of other UI interfaces provided in embodiment 1 of the present application.
Fig. 7 is a schematic diagram of interaction timing between application processes according to embodiment 1 of the present application.
Fig. 8a to 8c are schematic views of a specific interaction flow between application processes provided in embodiment 1 of the present application.
Fig. 9 is a schematic flow chart of an implementation of a multimedia editing method according to embodiment 2 of the present application.
Fig. 10a to 10g are schematic views of some UI interfaces provided in embodiment 2 of the present application.
Fig. 11 is a schematic diagram of a composition structure of an editing capability service according to an embodiment of the present application.
Fig. 12 is a schematic hardware structure diagram of a mobile phone 100 according to embodiment 2 of the present application.
Detailed Description
Before describing the implementation process of the multimedia editing method provided in the embodiments of the present application in detail, some computer technical terms related to the embodiments of the present application are briefly explained below to better understand the schemes of the present application.
A process is the basic unit of resource (CPU, memory, etc.) allocation. The system creates a process when an application program runs, allocates resources for it, and then puts the process into the process ready queue; when the process scheduler selects it, CPU time is allocated to it and the application program actually starts to run. The process scheduler is used for scheduling and managing the running of processes. An important goal of the process scheduler is to allocate CPU time effectively and, when facing conflicting process tasks, to ensure that the response time of critical real-time tasks is minimized while maximizing the overall utilization of the CPU, so as to provide a better user experience.
A session, in computer terminology, refers to a time interval during which an end user communicates with an interactive system. It generally refers to the time elapsed between registering with the system and logging off the system, as well as the operating space required in the system to handle the relevant service. In the embodiments of the present application, a session refers to an editing session in which one process communicates with another process; thus, a series of interactions (for example, interactions during the editing process) between one process and another process may be referred to as a session in the embodiments of the present application. In the embodiments of the present application, the operating space included in one session may be, for example, an editing environment such as a time axis (Timelines) created by one process (for example, a video editing application process) for another process (for example, a browsing application process such as the gallery) within the editing session created between the two processes, where the operating space may be used to perform audio/video editing processing.
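To make the session notion concrete, below is a minimal, illustrative sketch of the per-session state that the editing process might keep for a calling process; the class and field names are assumptions, and a real timeline object would be far richer:

    // Hypothetical sketch of the editing session state kept by the video editing process.
    import java.util.ArrayList;
    import java.util.List;

    public final class EditSession {
        private final int sessionId;          // identifies this session toward the calling process
        private final String callerPackage;   // e.g. the gallery application's package name
        private final List<String> timeline = new ArrayList<>();  // simplified stand-in for a Timeline

        public EditSession(int sessionId, String callerPackage) {
            this.sessionId = sessionId;
            this.callerPackage = callerPackage;
        }

        public void addClip(String mediaPath) {
            timeline.add(mediaPath);  // the target object is placed on the timeline for editing
        }

        public int getSessionId() {
            return sessionId;
        }

        public String getCallerPackage() {
            return callerPackage;
        }
    }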
In order to solve the problem of poor user experience caused by switching across applications to the interface of the audio/video editing application, the embodiment of the application provides a multimedia editing method. In this method, an audio/video application, such as a gallery or album application, or a recording application, may display to the user an editing interface for editing the audio/video file being browsed. Then, through a cross-process communication technology, the editing operations of the user on the editing interface are transmitted to the audio/video editing application that actually performs the audio/video editing, to perform various editing operations. After the audio/video editing application executes the editing operation and completes the editing processing, a return value indicating completion of the editing processing is sent to the audio/video application, so that the edited audio/video is displayed to the user in the audio/video application. The audio/video editing application may be, for example, Jianying™, Douyin™, or Petal Clip™; the edited audio/video may be, for example, a short video obtained by clipping, and the user may view the short video obtained by clipping on the browsing interface of the gallery application.
Based on the multimedia editing scheme provided by the embodiment of the application, the audio/video application can use the editing capability of the audio/video editing application in a cross-process manner, so that the editing capability of the video editing application can be reused to the maximum extent and the occupation of the system ROM is greatly reduced; the result of the editing processing can also be directly displayed to the user on the interface of the audio/video application, and the audio/video editing processing is completed without switching the display interface of the current audio/video application, so that the user's visual experience is smoother and the user experience is improved.
As an example, fig. 2 a-2 d show some UI schematics according to embodiments of the present application. The multimedia editing scheme provided in the embodiment of the present application is applied to the mobile phone 100 shown in fig. 2a to 2 d.
Fig. 2a shows a video browsing interface 210 displayed after the user operates the mobile phone 100 to open the gallery application and clicks a video file, where the video browsing interface 210 shown in fig. 2a is the same as the video browsing interface 101 shown in fig. 1a.
The user may click edit button 211 on video browsing interface 210 shown in fig. 2a, at which time handset 100 remains running the gallery application and displays video editing interface 220 of the gallery application shown in fig. 2 b. In comparison with the video browsing interface 210 shown in fig. 2a, an edit function menu bar 230 for performing audio/video editing is displayed above the menu bar where the edit button 211 is located in the video editing interface 220 shown in fig. 2 b. The user may click on the respective editing buttons in the editing function menu bar 230 to perform the corresponding audio/video editing operations, and in addition, the save button 240 may be displayed in the upper right corner of the video editing interface 220 shown in fig. 2 b.
Referring to operation (1) shown in fig. 2b, the user performs a sliding return operation from the screen of the mobile phone 100 to exit the video editing interface 220 shown in fig. 2b, and then the screen of the mobile phone 100 may hide the editing function menu bar 230 and the respective editing buttons and save buttons 240 on the video editing interface 220, and display the video browsing interface 210 shown in fig. 2 a.
Here, "hiding" means that the editing function menu bar 230, the save button 240, and the like that were originally displayed are no longer displayed on the interface. "Hiding" may also be referred to as exiting or returning, and the like. Under operation (1) shown in fig. 2b, the change of the interface displayed on the screen of the mobile phone 100 from fig. 2b to fig. 2a only involves hiding or no longer displaying some function buttons. From the user's perspective, this interface change is therefore smoother than an interface change between two applications, which helps to improve the visual experience of the user.
Referring to operation (2) shown in fig. 2b, after the user clicks one or more editing buttons in the editing function menu bar 230 on the video editing interface 220 to perform an editing operation, for example, after the user clicks the filter button 231 in the editing function menu bar 230 to perform an editing operation of switching filter effects, the screen of the mobile phone 100 may display the effect preview 250 in the video editing interface 220 shown in fig. 2 c; after the user further clicks the save button 240 in the upper right corner of the video editing interface 220, the screen of the mobile phone 100 may display the video browsing interface 260 shown in fig. 2 d.
As shown in fig. 2d, the mobile phone 100 displays the video 213 after the filter effect is switched, and the video 212 before the filter effect is switched is displayed in the thumbnail box 270. It will be appreciated that in other embodiments, the video browsing interface 260 shown in fig. 2d may not include the thumbnail box 270, or the thumbnail box 270 may include controls corresponding to the video 212 and the video 213, which is not limited herein. The user can slide left and right on the video browsing interface 260 shown in fig. 2d to view the video 212, the video 213, or other videos or images, which is not described in detail herein.
Based on the above interface changes shown in fig. 2a to 2d, it can be understood that, on the mobile phone 100 to which the audio/video editing scheme of the present application is applied, an audio/video file may be browsed on the interface of the gallery application (refer to the video browsing interface 210 shown in fig. 2a), and the audio/video file being browsed may also be edited on the interface of the gallery application (refer to the video editing interface 220 shown in fig. 2b). Through a cross-process communication technology, the system of the mobile phone 100 can send, as a corresponding operation instruction, the editing operation performed by the user on the video editing interface 220 of the gallery application to the process of the video editing application that actually performs the audio/video editing processing. The audio/video file obtained by the editing processing may be, for example, the video 213 shown in fig. 2d.
Therefore, during the editing operations performed by the user, the mobile phone 100 always displays the interface of the gallery application, and the processing result of each editing operation can also be displayed on the editing interface of the gallery application in real time for the user's reference. In this way, the visual experience of the audio/video editing operation is smoother while the user browses the audio/video, which can improve the user experience.
It may be appreciated that the electronic devices to which the multimedia editing method provided in the embodiments of the present application is applicable may include, but are not limited to, mobile phones, tablet computers, desktop computers, laptop computers, handheld computers, netbooks, augmented reality (AR) devices, smart televisions, smart watches, monitoring devices, and other electronic devices having one or more processors, which is not limited herein. The implementation process of the multimedia editing method provided in the embodiments of the present application will be specifically described below by taking the mobile phone 100 as an example of the electronic device.
It may be understood that the audio/video applications described in the embodiments of the present application are application programs capable of displaying audio/video files, for example, the gallery application or album application illustrated above, the recording application, and the always-on display application illustrated below. In the embodiments of the present application, these audio/video applications can also satisfy the user's editing requirements for the displayed audio/video files.
The audio/video editing applications described in the embodiments of the present application are applications with editing capabilities, such as Jianying™, Douyin™, or Petal Clip™ exemplified above. In the embodiments of the present application, the audio/video editing application may provide an editing capability service for the audio/video application, and the editing capability service may be deployed in the system of an electronic device such as the mobile phone 100 when the corresponding audio/video editing application is installed.
In the multimedia editing method provided by the embodiments of the present application, the edited object is multimedia data, and the multimedia data includes audio data, video data, picture/image data, and the like. The audio data may be music, recordings, sound effects, audio clips, and the like; the video data may be a video recorded by a camera, a downloaded video file, a captured short video or video clip, and the like; and the picture/image data may be a photo taken by a camera, a downloaded picture, an image obtained by editing, and the like, which is not limited herein.
Fig. 3 shows a schematic block diagram of a software system architecture of a mobile phone 100 according to an embodiment of the present application.
The software system of the mobile phone 100 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the mobile phone 100 is illustrated.
The layered architecture divides the software into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, from top to bottom, an application layer 310, an application framework layer 320, an Android runtime 330 and a system library 340, and a kernel layer 350.
As shown in FIG. 3, the application layer 310 may include a series of application packages, including both system applications and third-party applications. These application packages may include gallery 311, always-on display (Always On Display, AOD) 312, camera 313, video editing 314, and applications such as map, WLAN, navigation, Bluetooth, music, calendar, and short message, which are not described in detail herein.
The application framework layer 320 provides an application programming interface (application programming interface, API) and a programming framework for the applications of the application layer 310. The application framework layer 320 includes some predefined functions. The application framework layer 320 may include an editing capability service 321, which is a service provided for the video editing 314 of the application layer 310 to call in order to provide multimedia editing capability; the application framework layer 320 may also include a window manager, a content provider, a telephony manager, a resource manager, a notification manager, a view system, and the like.
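As an illustrative skeleton only (the actual editing capability service 321 is not disclosed at code level), such a capability could be exposed as a bound Android service implementing the assumed IEditCapabilityService interface from the earlier sketches:

    // Hypothetical skeleton of an editing capability service reachable across processes.
    import android.app.Service;
    import android.content.Intent;
    import android.os.Bundle;
    import android.os.IBinder;
    import java.util.Arrays;
    import java.util.List;

    public class EditCapabilityService extends Service {

        // Binder implementing the assumed IEditCapabilityService AIDL interface.
        private final IEditCapabilityService.Stub binder = new IEditCapabilityService.Stub() {
            @Override
            public List<String> getEditCapabilities(String mediaPath) {
                // Illustrative fixed list; a real service would query the editing engine.
                return Arrays.asList("split", "speed", "volume", "animation", "filter");
            }

            @Override
            public int applyEdit(int sessionId, Bundle instruction) {
                // Delegate to the video editing engine; 0 is used here to mean success.
                return 0;
            }
        };

        @Override
        public IBinder onBind(Intent intent) {
            return binder;
        }
    }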
Wherein the window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like. The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc. The telephony manager is used to provide the communication functions of the handset 100. Such as the management of call status (including on, hung-up, etc.). The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The Android Runtime 330 includes a core library and virtual machines. The Android Runtime 330 is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer 310 and the application framework layer 320 run in virtual machines. The virtual machine executes the java files of the application layer 310 and the application framework layer 320 as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library 340 may include a plurality of functional modules. For example: surface manager (surface manager), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), media library (Media Libraries), etc. The surface manager is used for managing the display subsystem and providing fusion of 2D and 3D layers for a plurality of application programs. The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio/video encoding formats such as MPEG4, h.264, MP3, MP4, AVI, AAC, AVC, AMR, JPG, PNG, etc.
Kernel layer 350 is a layer between hardware and software. The kernel layer 350 contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
Based on the software system framework of the mobile phone 100 shown in fig. 3, the following describes in detail the implementation procedure of the multimedia editing scheme provided in the embodiment of the present application with reference to the specific embodiment.
It is understood that the cross-process communication manners applied by embodiments of the present application may include, but are not limited to: pipes (pipe), named pipes (FIFO), memory mapping, message queues (message queue), shared memory (shared memory), semaphores (semaphore), signals (signal), and sockets (Socket). In the Android™ system, the Android Interface Definition Language (AIDL) can implement the cross-process communication capability in the system of the mobile phone 100 by using the above-mentioned cross-process communication methods, which is not limited herein.
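For concreteness, the cross-process contract assumed in the sketches above could look roughly like the following; in practice it would be declared in an .aidl file so that AIDL generates the Stub and Proxy classes used for Binder-based communication (it is shown here as a plain Java interface, and all names are illustrative):

    // Hypothetical cross-process contract between the audio/video application and the
    // editing capability service; in practice declared in IEditCapabilityService.aidl.
    import android.os.Bundle;
    import android.os.RemoteException;
    import java.util.List;

    public interface IEditCapabilityService {
        // Returns the editing capability list for the given media file.
        List<String> getEditCapabilities(String mediaPath) throws RemoteException;

        // Applies one editing instruction within an editing session; 0 means success.
        int applyEdit(int sessionId, Bundle instruction) throws RemoteException;
    }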
In the following, with reference to embodiment 1, the specific process by which the mobile phone 100 implements the multimedia editing method of the present application in response to user operations is described, taking as an example the process in which the user operates the mobile phone 100 to open the gallery 311, performs editing operations on a video to be edited (i.e., a target object), and saves the result.
Example 1
In this embodiment of the present application, taking the gallery 311 as an example of the audio/video application, a specific implementation process of the multimedia editing method of the present application is described in detail in conjunction with the interaction process between the gallery 311 and the video editing 314.
Fig. 4 is a schematic flow chart of an implementation of a multimedia editing method according to an embodiment of the present application.
As shown in fig. 4, in the software system of the mobile phone 100, the multimedia editing method provided in the embodiment of the present application may be implemented through interaction between the gallery 311 and the video editing 314.
As shown in fig. 4, the flow includes the steps of:
401: gallery 311 detects that the user has initiated an edit to the target object.
Illustratively, after the user operates the mobile phone 100 to open the gallery 311, the user may select an image or video to browse. If the user wants to perform an editing operation on an image or video being browsed (i.e., a target object), clicking an editing button or the like may be operated on a corresponding browsing interface displayed on the mobile phone 100. At this time, the gallery 311 executed by the mobile phone 100 detects the editing operation of the user on the corresponding browsing interface on the target object.
It will be appreciated that, prior to performing this step 401, when the mobile phone 100 detects an operation to start running the gallery 311, the system creates a gallery application process and allocates resources for the created gallery application process. When the process scheduler selects the gallery application process, CPU time is allocated to it, and the gallery 311 starts to run. The specific interactions between the gallery application process and the video editing application process in implementing the multimedia editing method provided in the embodiments of the present application will be described in detail below and are not described here.
Fig. 5 a-5 g illustrate some UI interface diagrams according to embodiments of the present application.
As shown in fig. 5a, the user clicks gallery application icon 511 on desktop 510 of handset 100 to run gallery 311, and handset 100 runs gallery 311 and displays gallery interface 520 shown in fig. 5 b.
As shown in fig. 5b, the user clicks on the video type 521 in the gallery interface 520 and selects the video file to be browsed, and the mobile phone 100 may display the video browsing interface 530 shown in fig. 5 c.
As shown in fig. 5c, the video browsing interface 530 shown in fig. 5c is the same as the video browsing interface 210 shown in fig. 2a, and the user may click the edit button 531 on the video browsing interface 530 to make the gallery 311 operated by the mobile phone 100 enter a preparation interface for editing the target video 532 shown in fig. 5c, where the preparation interface is, for example, the interface shown in fig. 6a and illustrated in step 405 below, and will not be described herein.
In other embodiments, the video browsing interface displayed by the user operating the mobile phone 100 to open the videos in the gallery may be other types, such as the video browsing interface 540 shown in fig. 5d, the video browsing interface 550 shown in fig. 5e, and the video browsing interface 560 shown in fig. 5f, and the user may click an edit button on the interfaces shown in fig. 5d to 5f to initiate editing of the currently browsed video (i.e. the target object), which is not limited herein.
In other embodiments, the user may also operate the mobile phone 100 to open other applications with functions similar to the gallery 311 and browse images or videos; for example, the video browsing interface 570 shown in fig. 5g is an interface correspondingly displayed when a video to be browsed is opened in another application that can browse images and videos, edit videos, and the like. The user may click the style button 571 shown in fig. 5g to call up corresponding editing function buttons to edit the video style, etc., which is not limited herein.
402: gallery 311 sends an edit initiation request to video edit 314. The editing starting request at least comprises an editing session request, an editing capability request and a file path of a target object, wherein the editing session request is used for requesting to create an editing session (session) between a gallery application process and a video editing application process, and the editing capability request is used for acquiring the editing capability of the video editing application.
Illustratively, after the gallery 311 executed by the mobile phone 100 responds to the detected operation of starting editing on the target object, a data packet of the editing start request may be generated, where the data packet may include the request data of the editing session request, the request data of the editing capability request, and the file path information of the target object. The file path information of the target object is, for example, file path information of a video currently being browsed.
It will be appreciated that after the gallery 311 generates an edit initiation request in response to the user's operation of starting to edit the target object, the system of the mobile phone 100 may create an application process for the video editing 314 based on the edit initiation request of the gallery 311, thereby starting to run the video editing 314. Then, the gallery 311 may send the generated data packet of the edit initiation request to the process where the video editing 314 is located for processing through a cross-process communication technology, which will be described in detail below and is not described here.
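A rough sketch of how the edit initiation request of step 402 might be packaged before it is handed across processes (the Bundle keys are invented for illustration and do not appear in the patent):

    // Hypothetical sketch: the gallery process packages the edit initiation request.
    import android.os.Bundle;

    public final class EditStartRequestBuilder {
        public static Bundle build(String targetFilePath) {
            Bundle request = new Bundle();
            request.putBoolean("request_session", true);        // editing session request
            request.putBoolean("request_capabilities", true);   // editing capability request
            request.putString("target_path", targetFilePath);   // file path of the target object
            return request;
        }
    }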
403: the video editing 314 initializes the editing operating environment based on the received editing start request, and adds a target object to the ready editing operating environment.
Illustratively, after the application process of the video editing 314 runs, the data packet of the editing start request sent from the gallery 311 may be parsed, so as to obtain the request data of the editing session request, the request data of the editing capability request, the file path information of the target object, and so on in the data packet.
As described above, the editing session request is for requesting creation of an editing session (session) between the application process of the gallery 311 and the application process of the video editing 314. Thus, based on the request data of the editing session request, the video editing 314 can establish an editing session (session) with the process in which the gallery 311 is located.
As described above, the editing capability request is used to acquire the editing capability of the video editing application. Accordingly, based on the request data of the editing capability request, the video editing 314 can initialize an editing operating environment, for example, create a basic project environment such as a time axis (Timelines) as the editing environment, based on the editing capability service that the video editing 314 has. After the creation of the editing environment is completed, the process where the video editing 314 is located may return a return value indicating completion of the creation to the process where the gallery 311 is located; for details, reference may be made to the following detailed description, which is not repeated here.
It will be appreciated that in other embodiments, the above-described edit initiation request may also be a third request that includes the content requested by the above-described edit session request and edit capability request, i.e., the third request may be used to request creation of an edit session between an application process of gallery 311 and an application process of video editing 314, and acquisition of editing capabilities of the video editing application. There is no limitation in this regard.
After the editing environment is ready, the video editing 314 may load the target object into the ready editing environment based on the file path information of the target object. In other embodiments, the video editing 314 may also obtain the relevant parameters of the target object based on the file path of the target object and add the relevant parameters to the ready editing environment, which is not limited herein.
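On the editing side, handling that request could look roughly as follows, reusing the illustrative EditSession and request keys from the earlier sketches (again, an assumption-level sketch rather than the actual implementation):

    // Hypothetical sketch: the video editing process parses the edit initiation request,
    // creates an editing session with a timeline, and loads the target object into it.
    import android.os.Bundle;
    import java.util.concurrent.atomic.AtomicInteger;

    public final class EditStartHandler {
        private static final AtomicInteger NEXT_SESSION_ID = new AtomicInteger(1);

        public static EditSession handle(Bundle request, String callerPackage) {
            if (!request.getBoolean("request_session", false)) {
                return null;  // no editing session was requested
            }
            EditSession session = new EditSession(NEXT_SESSION_ID.getAndIncrement(), callerPackage);
            String targetPath = request.getString("target_path");
            if (targetPath != null) {
                session.addClip(targetPath);  // add the target object to the prepared editing environment
            }
            return session;
        }
    }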
404: the video editing 314 returns an edit capability list to the gallery 311.
Illustratively, after the video editing 314 completes the initialization of the editing operating environment, it returns to the gallery 311 a list of editing capabilities available for the gallery 311 to call. The list of editing capabilities may include, for example, one or more of the following editing capabilities: splitting, speed change (or speed), volume, animation, filters, single-frame export, cropping, masking, chroma keying, mirroring, reverse playback, freeze frame, picture-in-picture, special effects, stickers, music/sound effects, subtitles, and the like, which is not limited herein.
In some embodiments, when the encoding formats of the target objects to be edited differ, the list of editing capabilities returned by the video editing 314 to the gallery 311 may differ correspondingly. For example, when the target object for which the gallery 311 initiates editing is a video file in MP4 format, the video editing 314 may return a first editing capability list to the gallery 311; when the target object for which the gallery 311 initiates editing is a video file in AVI format, the video editing 314 may return a second editing capability list to the gallery 311. The first editing capability list and the second editing capability list may include different editing capability types and/or different style options within each editing capability type.
In other embodiments, different applications send edit initiation requests to the video edit 314, and the list of edit capabilities returned by the video edit 314 may be different. For example, in other embodiments, where AOD 312 performs step 402 described above to send an edit initiation request to video edit 314, video edit 314 may return a third list of editing capabilities to AOD 312. The third edit capability list may include fewer edit capability types than the first edit capability list or the second edit capability list.
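For illustration only, the following Kotlin sketch shows one possible way of selecting different editing capability lists according to the format of the target object and the requesting application, as described in the two preceding paragraphs. The capability names, package name, and selection rules are assumptions and do not describe the actual behavior of the video editing 314.

```kotlin
// Illustrative only: the editing process chooses a capability list based on the
// target's container format and on which application initiated editing.
enum class EditCapability {
    SPLIT, SPEED, VOLUME, ANIMATION, FILTER, SINGLE_FRAME_EXPORT,
    CROP, MASK, CHROMA_KEY, MIRROR, REVERSE, FREEZE_FRAME,
    PICTURE_IN_PICTURE, EFFECT, STICKER, MUSIC, SUBTITLE
}

fun capabilitiesFor(format: String, callerPackage: String): List<EditCapability> {
    val base = when (format.lowercase()) {
        "mp4" -> EditCapability.values().toList()                          // first capability list
        "avi" -> EditCapability.values().toList() - EditCapability.CHROMA_KEY  // second capability list
        else  -> listOf(EditCapability.CROP, EditCapability.FILTER)
    }
    // A lightweight caller such as an always-on-display application may receive
    // a reduced third capability list.
    return if (callerPackage == "com.example.aod") {
        base.intersect(setOf(EditCapability.CROP, EditCapability.FILTER, EditCapability.SPLIT)).toList()
    } else {
        base
    }
}
```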
405: gallery 311 displays editing functionality controls based on the returned list of editing capabilities.
Illustratively, the gallery 311 may display, on the original video browsing interface, editing function controls corresponding to the respective editing capabilities, for example the editing function buttons shown in fig. 6a to 6c, which may be presented in a submenu (also referred to as a secondary menu bar) of the menu bar (as a primary menu bar) in which the editing button 531 shown in fig. 5c is located. It will be appreciated that, when the editing capability list includes a plurality of editing capability items, multiple levels of menus may be displayed on the corresponding video browsing interface of the gallery 311; for example, the editing function buttons corresponding to the respective editing capabilities may be classified and displayed in a secondary menu bar, a tertiary menu bar, etc. called up by the editing button 531 in the primary menu bar, which is not limited herein.
Fig. 6 a-6 d show some UI interface diagrams of the mobile phone 100 displaying a video editing interface.
When the user clicks the edit button 531 on the video browsing interface 530 shown in fig. 5c, the screen of the mobile phone 100 may display the video editing interface 610 shown in fig. 6a, which is the same as the video editing interface 220 shown in fig. 2b. The user may click any editing function button in the editing function menu bar 611 on the video editing interface 610 to perform an editing operation, and after performing the editing operation, may click the save button 612 on the video editing interface 610 to save the completed editing operation.
In other embodiments, if the video browsing interface displayed by the mobile phone 100 is the video browsing interface 550 shown in fig. 5e, the user clicks the edit button on the video browsing interface 550, and the mobile phone 100 may display the video editing interface 620 shown in fig. 6b, as shown in fig. 6b, and if the user clicks the filter function button in the primary function menu bar 621, the video editing interface 620 may display the secondary function menu bar 622, and the user may further click the "add filter" or "add adjust" function option in the secondary function menu bar 622 to perform a corresponding editing operation.
In other embodiments, if the video browsing interface displayed by the mobile phone 100 is the video browsing interface 560 shown in fig. 5f, the user clicks the edit button on the video browsing interface 560, and the mobile phone 100 may display the video editing interface 630 shown in fig. 6 c; if the video browsing interface displayed by the mobile phone 100 is the video browsing interface 570 shown in fig. 5g, the user clicks the style button 571 on the video browsing interface 570, the mobile phone 100 may display the video editing interface 640 shown in fig. 6d, and the user may click a video style option on the video editing interface 640 to switch to the editing operation of the corresponding video style. There is no limitation in this regard.
406: the gallery 311 detects the editing operation of the user on the target object, and generates a corresponding editing operation instruction.
For example, after detecting an editing operation performed by the user on the interface shown in fig. 6a to 6d, the gallery 311 executed by the mobile phone 100 may generate a corresponding editing operation instruction, for example, an operation of clicking an editing function button on the interface shown in fig. 6a to 6c, or an operation of clicking a video style option on the interface shown in fig. 6d by the user. Gallery 311 may send the generated editing operation instructions to video editor 314. It can be appreciated that the editing operation instruction generated by the gallery 311 running on the mobile phone 100 may be sent from the process of the gallery 311 to the process of the video editing 314 for processing through cross-process communication, which will be described in detail below, and will not be repeated here.
407: the video editing 314 invokes the corresponding editing capability to perform the corresponding editing process on the target object based on the received editing operation instruction.
Illustratively, after receiving the editing operation instruction sent by the gallery 311, the video editing 314 executed by the mobile phone 100 invokes the editing capability corresponding to the editing operation instruction, and performs corresponding editing processing on the target object in the ready editing environment. The editing process is performed on the target object in the ready editing environment, and may include a time node or the like for controlling the editing process performed on the target object by the time axis (Timelines) created in step 403. For example, if the edit manipulation instruction received by the video edit 314 is a filter instruction, the video edit 314 may invoke filter processing capabilities to filter the video currently being edited. During this process, the video editing 314 may also cause the filter process to be applied to a video clip for a certain duration in the video being edited through time axis (Timelines) control. Reference may be made specifically to the following detailed description, which is not repeated here.
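For illustration only, the following Kotlin sketch shows how an editing process might bind a received filter instruction to a bounded portion of the time axis (Timelines), as described above. The Timeline and TimeRange types and the instruction format are hypothetical and are not an actual API of the video editing 314.

```kotlin
// Hypothetical sketch: a filter instruction is applied only to the clip
// interval it targets; the renderer consults the timeline per frame.
data class TimeRange(val startMs: Long, val durationMs: Long)

class Timeline {
    private val effects = mutableListOf<Pair<TimeRange, String>>()

    // Binds a filter style to a time range on the timeline.
    fun applyFilter(style: String, range: TimeRange) {
        effects += range to style
    }

    // Returns the filter active at a given presentation timestamp, if any.
    fun activeFilterAt(ptsMs: Long): String? =
        effects.lastOrNull { (r, _) -> ptsMs >= r.startMs && ptsMs < r.startMs + r.durationMs }?.second
}

fun onEditInstruction(timeline: Timeline, instruction: String, args: Map<String, Any>) {
    when (instruction) {
        "filter" -> timeline.applyFilter(
            style = args["style"] as String,
            range = TimeRange(args["startMs"] as Long, args["durationMs"] as Long)
        )
        // Other instructions (speed, volume, subtitle, ...) would dispatch to
        // the corresponding editing capability in the same way.
    }
}
```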
It will be appreciated that if the user performs an operation of adding music or sound effects on the video editing interface of the gallery 311, the video editing 314 may also call the capability of creating audio tracks in the current editing environment, and add the music or sound effect clip selected by the user to the created audio tracks, thereby completing the editing process of adding music or sound effects to the video data being processed. Reference may be made to the following detailed description, and details are not described here.
It will be appreciated that in other embodiments, the video editing 314 adds an option tag, such as a filter style, corresponding to the editing operation instruction to the target object data currently being edited based on the received editing operation instruction. For example, based on the received filter instructions, the video editing 314 may add an option tag corresponding to the user-selected filter style to the video data currently being edited to complete the filter processing of the video data. There is no limitation in this regard.
408: the video editing 314 transmits the return value for completing the editing process to the gallery 311.
Illustratively, the video editing 314 may return a return value for completing the editing process to the gallery 311 after completing the editing process on the target object based on the corresponding editing capability.
409: the preview effect corresponding to the editing processing result is displayed on the editing interface of the gallery 311.
Illustratively, after the gallery 311 receives the return value (i.e., the step 408) that the video editing 314 is successfully edited, the video data obtained by the editing process of the video editing 314 in the step 407 may be displayed in the editing interface of the current gallery 311, so as to present the preview effect of the video data after the editing process to the user. It will be appreciated that the editing interface of the gallery 311 currently displayed on the screen of the mobile phone 100 may be any of the video editing interfaces shown in fig. 6a to 6 d.
For example, when the editing process performed by the video editing 314 on the target video is to add a filter, add a sticker, a subtitle, or the like, the mobile phone 100 may play the processed video data on the interface of the currently displayed gallery 311 to display the preview effect of the editing process. Accordingly, the preview effect may include a preview effect of different filter styles, a preview effect of a sticker added to a video picture, a preview effect of a subtitle added to a video picture, and the like.
For example, if the editing process performed by the video editing 314 on the target video is to change speed, add audio/audio effects, etc., the mobile phone 100 may play the processed video data on the editing interface of the currently displayed gallery 311 to display the preview effect of the editing process. Accordingly, the preview effect may include a video sound playing effect after the speed change process, a sound playing effect after the audio/sound effect is added to the video data, and the like.
It will be appreciated that if the user operates the mobile phone 100 to perform multiple editing operations on the target object, the mobile phone 100 may repeatedly perform the steps 406 to 409 to complete the corresponding editing process, which is not limited herein.
410: the gallery 311 detects a save operation by the user and transmits a save instruction to the video edit 314.
Illustratively, when the gallery 311 running on the mobile phone 100 detects a save operation of the user, for example, the user clicks a save button on a video editing interface displayed on the mobile phone 100, the gallery 311 may generate a corresponding save instruction and send the corresponding save instruction to the video editing 314 for save processing. It can be appreciated that the save instruction generated by the gallery 311 running in the mobile phone 100 may be sent from the process in which the gallery 311 is located to the process in which the video editing 314 is located for processing through cross-process communication, which will be described in detail below, and will not be repeated here.
It will be appreciated that in other embodiments, the user's saving operation may be performed by other operation methods, for example, the user may click the share button on the video editing interface displayed by the mobile phone 100 to trigger saving of the target object that completes editing, and the gallery 311 may generate a corresponding save instruction to send to the video editing 314 when detecting that the user clicks the share button. There is no limitation in this regard.
411: the video editing 314 saves the target object for which editing is completed based on the received save instruction.
Illustratively, after receiving the save instruction, the video editing 314 run by the mobile phone 100 may save the edited target object to the public directory. In other embodiments, the video editing 314 may also save the edited target object to a directory common to it and the gallery 311, for the gallery 311 to acquire, which is not limited herein. It will be appreciated that the edited target object may be, for example, a short video obtained by processing images or video, which is not limited herein.
412: gallery 311 obtains and displays the edited target object.
Illustratively, the gallery 311 run by the mobile phone 100 may obtain the edited target object from the public directory and display it on an updated video browsing interface; for the updated video browsing interface, reference may be made to the video browsing interface 260 shown in fig. 2d.
In other embodiments, if the object of completing editing is stored in a common directory of gallery 311 and video editing 314, gallery 311 may obtain the object of completing editing from the common directory for display, which is not limited herein.
Through the steps 401 to 412 described above, the mobile phone 100 implements cross-process editing of the target object through the interaction between the running gallery 311 and the running video editing 314. Throughout the execution of steps 401 to 412, the mobile phone 100 always displays interfaces of the gallery 311, including the video browsing interface and the video editing interface. Visually, the interface switching displayed by the mobile phone 100 is therefore smoother, the mobile phone 100 does not need to switch between the interfaces of different applications, the response speed of the interface display is faster, and the user can operate the mobile phone 100 to perform audio/video editing more conveniently and quickly.
The following describes, with reference to the accompanying drawings, the underlying implementation logic of data and instruction transmission, and of data-processing instruction execution, between the two application processes in the system of the mobile phone 100 during implementation of the multimedia editing method provided in the embodiment of the present application.
Fig. 7 is a schematic diagram of interaction timing between two application processes running in the mobile phone 100 system in the implementation process of a multimedia editing method according to an embodiment of the present application.
As shown in fig. 7, the processes in which the mobile phone 100 runs application programs include an application process 01 and an application process 02. The application process 01 is, for example, the process in which the gallery 311 runs, or the process of another audio/video application program; the application process 02 is, for example, the process in which the video editing 314 runs, and the video editing 314 may be, for example, an application program with audio/video editing capabilities such as Jianying™ (CapCut), Douyin™, or Petal Clip™, which is not limited herein. The application process 01 provides an interface for displaying the target object and the editing effect, and the application process 02 provides an editing capability service for performing corresponding editing processing on the target object designated by the application process 01.
Referring to fig. 7, after detecting an operation of initiating editing of a target object, the application process 01 may call the related display resources to display an initialization interface 710 for initiating editing, for example one of the video editing interfaces shown in fig. 6a to 6d.
It may be appreciated that, in the process of displaying the initialization interface 710, the application process 01 may provide the application process 02 with the file path of the target object; the application process 01 may also obtain editing capability support from the application process 02; the application process 01 may further call the related display resources to create an interface canvas (Surface) and share the address (or handle) of the created interface canvas with the application process 02 through the shared memory capability of the system. In this way, the application process 01 can display, on the editing interface of the application it runs, the editing function controls generated based on the editing capabilities supported by the application process 02, the preview of the editing processing effect applied to the target object in the application process 02, the cached data related to the edited target object, and so on. The application process 02 may obtain the target object based on the file path provided by the application process 01; for example, if the target object is a video file, the application process 02 may decode the obtained video file into a format that can be edited, for example RGB thumbnails and a single-frame image format.
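For illustration only, the following Kotlin sketch shows one way the interface canvas (Surface) created by the application process 01 could be handed to the application process 02: on Android, a Surface is Parcelable and can therefore be passed through a Binder call. The receiver interface and its method names are assumptions, not a documented interface of the video editing 314.

```kotlin
import android.view.Surface
import android.view.SurfaceHolder
import android.view.SurfaceView

// Hypothetical Binder-backed interface on the editing-process side.
interface PreviewSurfaceReceiver {
    fun attachPreviewSurface(surface: Surface)
    fun detachPreviewSurface()
}

fun sharePreviewCanvas(surfaceView: SurfaceView, receiver: PreviewSurfaceReceiver) {
    surfaceView.holder.addCallback(object : SurfaceHolder.Callback {
        override fun surfaceCreated(holder: SurfaceHolder) {
            // The interface canvas is created in process 01 and sent to
            // process 02, which binds an EGL window surface to it and renders
            // the editing preview into it (see the EGL sketch further below).
            receiver.attachPreviewSurface(holder.surface)
        }
        override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
        override fun surfaceDestroyed(holder: SurfaceHolder) {
            receiver.detachPreviewSurface()
        }
    })
}
```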
It will be appreciated that, after detecting that the user performs the editing operation on the target object on the initialization interface 710, the application process 01 may jump from the initialization interface 710 to the editing interface 720. Editing operations of the user on the editing interface 720 may trigger the application process 01 to generate corresponding editing operation instructions and send the corresponding editing operation instructions to the application process 02. The generated editing operation instructions include, for example, a create track instruction, an edit track instruction, and the like, the tracks including a video track, an audio track, and a subtitle track, and the like, which will be described in detail below without limitation.
As further shown in fig. 7, after receiving the editing operation instruction sent by the application process 01, the application process 02 may execute the corresponding track creation instruction, track editing instruction, etc. to create a video track, an audio track and the like. The created video track, audio track and the like may adopt a multi-track framework; for example, the video track may include a plurality of editing sub-tracks such as stickers, decorative text, special effects, filters, beautification and reverse order, and these editing sub-tracks are used for synthesizing interface effects such as video special effects and subtitles; the created audio track may include editing sub-tracks such as audio and sound effects, which are used for synthesizing sound effects such as mixing effects. The application process 02 executes the track editing instructions to complete the editing processing on the corresponding tracks, and the interface effects and/or sound effects obtained by the processing can be synchronously displayed on the interface canvas created by the application process 01. At this time, the editing interface 720 may further include a preview interface 730 for the editing processing effect, and the preview interface 730 synchronously displays the video picture after the editing processing.
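For illustration only, the following Kotlin sketch shows a possible data model for the multi-track framework described above; it is an assumption made for readability and is not the actual internal structure of the video editing 314.

```kotlin
// Illustrative multi-track model: a video track with editing sub-tracks
// (stickers, decorative text, effects, filters, ...) and an audio track with
// sub-tracks for music and sound effects.
data class Segment(val sourcePath: String, val inPointMs: Long, val durationMs: Long)

sealed class EditTrack(val name: String) {
    val segments = mutableListOf<Segment>()
}

class EditVideoTrack : EditTrack("video") {
    val subTracks = mutableListOf<EditTrack>()   // sticker / text / effect / filter sub-tracks
}

class EditAudioTrack : EditTrack("audio") {
    val subTracks = mutableListOf<EditTrack>()   // music / sound-effect sub-tracks
}

class EditSubtitleTrack : EditTrack("subtitle")
```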
After the editing operation is completed, the user performs a save operation on the editing interface 720 or the preview interface 730 displayed on the mobile phone 100, and then the application process 01 may send a save instruction to the application process 02, and call the system related display resource to display the save interface 740 on the screen of the mobile phone 100. The save operation is, for example, an operation of clicking the save button 240 on the interface shown in fig. 2b by the user, and the save interface 740 is, for example, the video browsing interface 260 shown in fig. 2d, on which the saved completed editing is displayed.
After receiving the save instruction from the application process 01, the application process 02 may package the synthesized video special effects, subtitles, audio-mixing special effects and the like, and then perform encoding, merging and saving processing: the edited target object is encoded, the video special effects, subtitles and audio-mixing special effects are merged into it, and the edited target object is generated. Finally, the edited target object is stored under a public directory of the system or under a directory common to the application process 02 and the application process 01.
Fig. 8a to 8c are schematic views showing a specific interaction flow between two application processes running in the mobile phone 100 system in the implementation process of a multimedia editing method according to an embodiment of the present application.
Fig. 8a shows an interactive flow for implementing editing processing on a target object by an application process 01 through an editing capability service on an application process 02, where the flow includes the following steps:
801: the application process 01 requests creation of an editing session to the application process 02, requesting initialization of an editing service. The application process 02 may then create a basic engineering environment, i.e. an editing environment, such as a time axis (Timelines) based on the request. After completion of creation of the editing environment, the application process 02 may send a return value of creation completion to the application process 01.
Illustratively, after the user (i.e., the operator) initiates the operation of editing the target object on the browsing interface of the gallery 311 run by the application process 01, the application process 01 executes the corresponding operation instruction and requests the application process 02 to create an editing Session (Session) for establishing cross-process communication and interaction with the application process 02. Based on the cross-process communication, the application process 01 may provide a file path to the application process 02, for example a file path under a public directory; the application process 01 may further grant the application process 02 access rights to the private directory of the application program run by the application process 01, so as to provide the application process 02 with the files, materials, data, etc. required for editing.
In addition, the application process 01 also requests the application process 02 to initialize editing services, i.e., requests the application process 02 to create an editing environment, for example, to cause the video editing 314 run by the application process 02 to provide editing capability services.
802: the application process 01 requests the application process 02 for editing capability support. The application process 02 then returns a List (List) of available editing capabilities to the application process 01 based on the request.
Illustratively, application process 01 sends a request to application process 02 requesting that editing capability support of the video editing application run by application process 02 be obtained. The application process 02 returns a list of available editing capabilities to the application process 01 upon request. It will be appreciated that the list of editing capabilities may include all or part of the editing capabilities that the video editing 314 running on the application process 02 has, without limitation.
Based on the edit capability list returned by the application process 02, the application process 01 may invoke the relevant display resource of the mobile phone 100 to display the initialization interface 710, which is specifically described with reference to fig. 7 and will not be described herein.
803: the application process 01 sends an instruction to create a video Track (Track) to the application process 02. The application process 02 creates the video track based on the instruction, and after the creation is completed, the application process 02 may send a return value of the completion of the creation to the application process 01.
Illustratively, if the gallery 311 run by the application process 01 detects that the user operates to add a video material segment on the interface, the application process 01 sends an instruction to create a video track to the application process 02 accordingly. The application process 02 completes the creation of the video track based on the instruction.
It will be appreciated that the video track is the location where video material segments are added, simply referred to as a video track, and all video material segment formats that are supported for parsing by the video editor 314 may be loaded into the video track for editing, adding effects, etc.
804: the application process 01 sends an instruction to the application process 02 to add a video material Segment (Segment) to the video track. The application process 02 adds the video material segments to the video track based on the instruction, and after the addition is completed, the application process 02 may send a return value of the completion of the addition to the application process 01.
Illustratively, the application process 01 may edit the file path of the video material segment material selected by the user as an instruction to add the video material segment, and send the instruction to the application process 02.
The application process 02 then adds video material segments to the video track based on the instruction, and the added video material segments can be obtained based on the file path provided by the application process 01.
805: the application process 01 sends an instruction to create an audio Track (Track) to the application process 02. The application process 02 creates the audio track based on the instruction, and after the creation is completed, the application process 02 may send a return value of the completion of the creation to the application process 01.
Illustratively, if the gallery 311 running by the application process 01 detects that the user operates to add an audio material segment on the interface, where the audio material segment includes audio material, the application process 01 sends an instruction to create an audio track to the application process 02 accordingly, and the application process 02 completes the creation of the audio track based on the instruction.
It will be appreciated that an audio track is a location where a piece of audio material is added, simply referred to as an audio track.
806: the application process 01 sends an instruction to the application process 02 to add an audio material Segment (Segment) to the audio track. The application process 02 adds the audio material segments to the audio track based on the instruction, and after the addition is completed, the application process 02 may send a return value of the completion of the addition to the application process 01.
Illustratively, the application process 01 may edit the file path of the audio material segment selected by the user as an instruction to add the audio material segment, and send the instruction to the application process 02.
The application process 02 then adds audio material segments to the audio track based on the instructions, and the added audio material segments may be obtained based on the file path provided by the application process 01.
807: the application process 01 sends an instruction to create a subtitle Track (Track) to the application process 02. The application process 02 creates the subtitle track based on the instruction, and after the creation is completed, the application process 02 can send a return value of the completion of the creation to the application process 01.
Illustratively, if the gallery 311 operated by the application process 01 detects that the user operates to add the subtitle on the interface, the application process 01 sends an instruction to create the subtitle track to the application process 02 accordingly, and the application process 02 completes the creation of the subtitle track based on the instruction.
It can be understood that the subtitle track is a position where the subtitle is added, abbreviated as a subtitle track.
808: the application process 01 sends an instruction to the application process 02 to add a text Segment (Segment) to the subtitle track. The application process 02 adds text clips to the subtitle track based on the instruction, and after the addition is completed, the application process 02 may send a return value of the addition completion to the application process 01.
Illustratively, the application process 01 may edit the text clip file path selected by the user for addition into an instruction to add a text clip, and send the instruction to the application process 02.
The application process 02 adds text segments to the subtitle track based on instructions, where the added text segments are obtained based on a file path provided by the application process 01, and in other embodiments, the application process 01 may also provide a text input path to the application process 02 to obtain real-time text input by a user on an editing interface of the gallery 311 operated by the application process 01, which is not limited herein.
809: the application process 01 sends an instruction to add an Effect (Effect) to the application process 02. The application process 02 binds the created special effects to the video/audio/subtitle track or the added video/audio/subtitle fragments based on the instruction, and after the addition is completed, the application process 02 can send a return value of the completion of the addition to the application process 01.
Illustratively, the application process 01 sends an instruction to the application process 02 to add a video effect, an audio effect, or a caption effect, where the video effect includes, for example, a zoom effect, a transition effect, etc., the audio effect includes, for example, a sound effect, etc., and the caption effect includes, for example, setting a lighting font, etc., without limitation. The application process 02 creates the corresponding effect based on the instruction and binds the created effect to the video/audio/subtitle track or the added video/audio/subtitle clip.
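For illustration only, the following Kotlin sketch summarizes steps 803 to 809 from the perspective of the application process 01, assuming a Binder-style interface similar to the earlier sketches; all method names and file paths are placeholders and do not describe an actual interface of the video editing 314.

```kotlin
// Hypothetical client-side view of steps 803–809: create tracks, add segments
// by file path, and bind effects to tracks or segments.
interface TrackEditor {
    fun createVideoTrack(sessionId: Int): Int                 // step 803, returns a track id
    fun addVideoSegment(trackId: Int, filePath: String): Int  // step 804, returns a segment id
    fun createAudioTrack(sessionId: Int): Int                 // step 805
    fun addAudioSegment(trackId: Int, filePath: String): Int  // step 806
    fun createSubtitleTrack(sessionId: Int): Int              // step 807
    fun addTextSegment(trackId: Int, text: String): Int       // step 808
    fun addEffect(targetId: Int, effectType: String): Boolean // step 809
}

fun buildProject(editor: TrackEditor, sessionId: Int) {
    val videoTrack = editor.createVideoTrack(sessionId)
    val clip = editor.addVideoSegment(videoTrack, "/storage/emulated/0/DCIM/clip.mp4")
    editor.addEffect(clip, "transition")                      // effect bound to the added segment

    val audioTrack = editor.createAudioTrack(sessionId)
    editor.addAudioSegment(audioTrack, "/storage/emulated/0/Music/bgm.mp3")

    val subtitleTrack = editor.createSubtitleTrack(sessionId)
    editor.addTextSegment(subtitleTrack, "Hello")
}
```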
810: the application process 01 sends an instruction to export the editing engineering configuration file to the application process 02.
Illustratively, if the gallery 311 run by the application process 01 detects that the user exits the editing operation, the editing progress that has already been completed may be temporarily saved: for example, the application process 01 sends an instruction to the application process 02 to export an editing engineering configuration file, and the created time axis, tracks, added segments, special effects and other editing progress are saved through the editing engineering configuration file. Based on the instruction, the application process 02 saves the created time axis, tracks, added segments, special effects and other editing progress as an editing engineering configuration file.
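For illustration only, the following Kotlin sketch shows one possible way of exporting the editing progress as an editing engineering configuration file, here serialized as JSON with the org.json classes available on Android. The file layout is an assumption; the video editing 314 may use any other serialization format.

```kotlin
import org.json.JSONArray
import org.json.JSONObject
import java.io.File

// Minimal description of a track and its segments for the configuration file.
data class SegmentInfo(val path: String, val inPointMs: Long, val durationMs: Long)
data class TrackInfo(val type: String, val segments: List<SegmentInfo>)

fun exportProjectConfig(file: File, tracks: List<TrackInfo>) {
    val trackArray = JSONArray()
    for (track in tracks) {
        val segs = JSONArray()
        for (s in track.segments) {
            segs.put(
                JSONObject()
                    .put("path", s.path)
                    .put("inPointMs", s.inPointMs)
                    .put("durationMs", s.durationMs)
            )
        }
        trackArray.put(JSONObject().put("type", track.type).put("segments", segs))
    }
    // Step 814 later restores the time axis, tracks and segments from this file.
    file.writeText(JSONObject().put("tracks", trackArray).toString())
}
```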
It will be appreciated that, when the application process 01 is the process in which the gallery 311 runs and the application process 02 is the process in which the video editing 314 runs, steps 801 to 810 shown in fig. 8a describe the specific cross-process interaction involved in the execution of steps 401 to 404 shown in fig. 4. The execution of steps 801 to 810 shown in fig. 8a may be triggered when the gallery 311 sends the edit initiation request to the video editing 314 in step 402 described above, and is not described again herein.
It can be understood that, in the implementation process of the multimedia editing method provided in the embodiment of the present application, after the interaction step shown in fig. 8a is performed between two application processes, if the user clicks each edit function control on the edit interface of the application process 01 (for example, the application process of the gallery 311), when performing a specific editing operation on the target object, a preview effect corresponding to each editing operation may be displayed on the edit interface of the gallery 311. The specific interaction process between the two processes involved in displaying the corresponding preview effect on the editing interface of the gallery 311 will be described in detail with reference to fig. 8 b.
Fig. 8b shows an interactive flow of displaying a preview effect in an editing interface of an application process 01 based on editing processing in the application process 02, the flow comprising the following steps:
811: the application process 01 creates an interface canvas (Surface).
Illustratively, the application process 01 running on the handset 100, when executing steps 801 through 802 described above, may invoke the associated display resource creation interface canvas (Surface) of the handset 100. Reference is made in particular to the above description and to the illustration of fig. 7.
812: the application process 01 shares the address of the interface canvas to the application process 02.
Illustratively, the application process 01 shares the address (or handle) of the created interface canvas to the application process 02 through the shared memory capabilities of the system. The sharing of the canvas address between the application process 01 and the application process 02 through the shared memory capability may be described with reference to fig. 7, and will not be described herein.
813: the application process 02 creates a cross-process rendering environment based on the address of the interface canvas and binds with the canvas.
Illustratively, the application process 02 creates a cross-process rendering environment based on the interface canvas address shared by the application process 01 and completes binding the created cross-process rendering environment with the corresponding interface canvas to synchronously display the canvas content edited by the application process 02 on the corresponding canvas to the interface of the gallery 311 operated by the application process 01.
The creation of the cross-process rendering environment may be implemented, for example, through EGL (the Khronos native platform graphics interface), which is an interface between a rendering API such as OpenGL ES (an embedded variant of OpenGL, the Open Graphics Library) and the native window system.

It will be appreciated that any OpenGL ES application must use EGL to perform queries and initialize the displays available on the device, create rendering surfaces, create rendering contexts, etc. before starting rendering, which is not described in detail herein.
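For illustration only, the following Kotlin sketch shows how the application process 02 could create such a cross-process rendering environment with the Android EGL14 API and bind it to the Surface shared by the application process 01; error handling is omitted, and this is a sketch rather than the actual implementation of the video editing 314.

```kotlin
import android.opengl.EGL14
import android.opengl.EGLConfig
import android.opengl.EGLContext
import android.opengl.EGLSurface
import android.view.Surface

// Step 813 sketch: bind an EGL window surface to the Surface received from
// process 01 so that frames rendered here appear on process 01's canvas.
fun createRenderTarget(sharedSurface: Surface): Pair<EGLSurface, EGLContext> {
    val display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
    val version = IntArray(2)
    EGL14.eglInitialize(display, version, 0, version, 1)

    val configAttribs = intArrayOf(
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_SURFACE_TYPE, EGL14.EGL_WINDOW_BIT,
        EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
        EGL14.EGL_NONE
    )
    val configs = arrayOfNulls<EGLConfig>(1)
    val numConfigs = IntArray(1)
    EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0)

    val context = EGL14.eglCreateContext(
        display, configs[0], EGL14.EGL_NO_CONTEXT,
        intArrayOf(EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE), 0
    )
    // The window surface is backed by the interface canvas created in the
    // other process; rendering into it updates process 01's preview.
    val eglSurface = EGL14.eglCreateWindowSurface(
        display, configs[0], sharedSurface, intArrayOf(EGL14.EGL_NONE), 0
    )
    EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context)
    return eglSurface to context
}
```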
814: the application process 02 restores the editing process progress of the time axis, track, added clips, special effects, and the like from the editing engineering configuration file. After the recovery is completed, the application process 02 may send a return value of the recovery completion to the application process 01.
For example, if the target object of the editing operation started by the user on the browsing interface of the gallery 311 executed by the application process 01 is the video file processed through the steps 801 to 810 (may include some or all of the steps 803 to 809), after the application process 01 executes the steps 801 to 802 and the steps 812 to 813 again, the application process 02 may be triggered to restore the created time axis, track, and the editing process progress of the added clips, special effects, and the like from the saved editing engineering configuration file. If the target object of the editing operation started on the browsing interface of the gallery 311 operated by the application process 01 is the video file which is edited for the first time, the creation of the time axis, the track, the added segment, the special effect and other editing process progress between the application process 01 and the application process 02 can be completed through the steps 801 to 810. There is no limitation in this regard.
815: the application process 01 sends an operation instruction for synthesizing the mixing special effect to the application process 02.
For example, the user may perform an editing operation of making a special effect of mixing sound on an editing interface of the gallery 311 operated by the application process 01, and then the application process 01 sends an operation instruction of synthesizing the special effect of mixing sound, which is generated correspondingly, to the application process 02.
816: the application process 02 decodes the audio material segments into an editable audio format.
Illustratively, the application process 02 decodes, based on the instruction, the audio material segments added to the audio track into an editable audio format, including, for example, the pulse code modulation (Pulse Code Modulation, PCM) format, which is not limited herein.
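For illustration only, the following Kotlin sketch shows one way of decoding an audio material segment into PCM using the platform MediaExtractor and MediaCodec APIs, under the assumption that these are the codec interfaces used; the buffer-draining loop is omitted.

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaFormat

// Step 816 sketch: demultiplex the audio material segment and open a decoder
// whose output buffers contain raw PCM samples for the mixing step (817).
fun openAudioDecoder(path: String): Pair<MediaExtractor, MediaCodec> {
    val extractor = MediaExtractor()
    extractor.setDataSource(path)
    for (i in 0 until extractor.trackCount) {
        val format = extractor.getTrackFormat(i)
        val mime = format.getString(MediaFormat.KEY_MIME) ?: continue
        if (mime.startsWith("audio/")) {
            extractor.selectTrack(i)
            val decoder = MediaCodec.createDecoderByType(mime)
            decoder.configure(format, null, null, 0)
            decoder.start()
            return extractor to decoder
        }
    }
    error("no audio track found in $path")
}
```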
817: the application process 02 synthesizes the mixed special effect audio based on the editable audio format obtained by decoding.
Illustratively, the application process 02 synthesizes the audio decoded on the audio track into corresponding mixed special effects audio by the corresponding editing capabilities of the video editing 314 run based on the resulting editable audio format.
818: the application process 02 synchronizes the mixed special effect audio to show a mixed special effect audio listening preview.
For example, the mixed special effect audio processed by the application process 02 can be synchronized to the system of the mobile phone 100 through the system shared memory capability, at this time, the mobile phone 100 can play the mixed special effect audio processed by the application process 02 as a listening preview, and at the same time, control buttons of the listening preview of the mixed special effect audio, such as a play progress control button, a pause/play button, etc., can be displayed on an editing interface of the gallery 311 operated by the application process 01, that is, the mixed special effect audio synchronized by the application process 02 to the system of the mobile phone 100 can be displayed to the user through a sound channel and an interface canvas of the application process 01.
819: the application process 01 sends an operation instruction for synthesizing the video effect and/or the subtitle effect to the application process 02.
For example, the user may perform an editing operation such as making a video special effect on the editing interface of the gallery 311 run by the application process 01, and the application process 01 sends the correspondingly generated operation instruction for synthesizing the video special effect to the application process 02. It can be understood that the user may also perform an operation of adding a subtitle effect on the editing interface of the gallery 311 run by the application process 01, and the application process 01 may likewise send the correspondingly generated operation instruction for synthesizing the subtitle effect to the application process 02.
820: the application process 02 decodes the video material segments and special effect material into single frame image data.
Illustratively, the application process 02 decodes, based on the instructions, the video material segments added to the video track and the special effects material or the like correspondingly added to the video track into single frame image data, which may be in RGB format or in YUV format or the like, without limitation.
821: the application process 02 synthesizes the video special effects based on the decoded single-frame image data.
Illustratively, the application process 02 edits the effect material and the video material on the video track based on the decoded single frame image data to synthesize a video effect.
The editing effect synthesis related to the video special effect synthesis can include, but is not limited to, the following categories:
the first category, texture mapping: for example, stickers, decorative text, special effects, subtitles and the like are synthesized into texture maps;

the second category, filters, beautification, etc.: for example, video pictures are processed by image algorithms to achieve filter-style switching, beautification effects and the like;

the third category, raising or lowering the playback rate: for example, the playback rate is raised by frame extraction (dropping frames) or lowered by frame insertion (interpolating frames);

the fourth category, reverse playback: for example, the frame order of the decoded image data on the video track is reversed; a small sketch of the third and fourth categories is given after this list.
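For illustration only, the following Kotlin sketch shows the frame-level idea behind the third and fourth categories above (speed change by dropping or repeating frames, and reverse playback by inverting frame order); frames are represented only by their indices, and the mapping rule is an assumption rather than the actual algorithm of the video editing 314.

```kotlin
// Speed change: speed > 1.0 drops frames (faster playback);
// speed < 1.0 repeats frames (slower playback).
fun retimeForSpeed(frameIndices: List<Int>, speed: Double): List<Int> {
    require(speed > 0.0)
    val outputCount = (frameIndices.size / speed).toInt().coerceAtLeast(1)
    return List(outputCount) { i ->
        frameIndices[(i * speed).toInt().coerceAtMost(frameIndices.lastIndex)]
    }
}

// Reverse playback: simply invert the frame order on the video track.
fun reverseFrames(frameIndices: List<Int>): List<Int> = frameIndices.reversed()
```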
It will be appreciated that, if the instruction for adding a subtitle is included in the instruction received by the application process 02, the application process 02 may also add a corresponding text segment to a corresponding position on the subtitle track.
822: the application process 02 renders the single frame image data after the special effects are synthesized to a display device interface.
Illustratively, the application process 02 obtains single-frame image data after the video effect and/or the subtitle effect are synthesized, and at this time, the application process 02 may render the single-frame image data after the effect is synthesized to a display device interface so as to be conveniently rendered to a bound interface canvas.
It is understood that the display device interface invoked by the application process 02 is, for example, an EGLDisplay, where the Display may be understood, in the embodiments of the present application, as the address or handle of the interface canvas.
823: the application process 02 synchronizes the video effect and/or the subtitle effect to the application process 01 to display the video effect and/or the subtitle effect interface preview.
For example, the application process 02 may synchronize the video effect and/or the subtitle effect to the canvas of the interface of the application process 01 through the above-mentioned shared memory capability, and thus the interface of the gallery 311 operated by the application process 01 may display the corresponding video effect and/or the subtitle effect preview.
It will be appreciated that, when the application process 01 is the process in which the gallery 311 runs and the application process 02 is the process in which the video editing 314 runs, steps 815 to 823 shown in fig. 8b describe the specific cross-process interaction involved in the execution of steps 405 to 409 shown in fig. 4. The execution of steps 815 and 819 shown in fig. 8b may be triggered when the gallery 311 detects, in step 406 described above, the user's editing operation on the target object, which is not described again herein.
In addition, steps 811 to 814 shown in fig. 8b also describe a specific interaction implementation procedure between two application processes involved in the execution of steps 401 to 404 shown in fig. 4, which is not described herein.
It can be understood that, in the implementation process of the multimedia editing method provided in the embodiment of the present application, after the interaction step shown in fig. 8b is performed between two application processes, if the user clicks the save button on the editing interface of the gallery 311 operated by the application process 01, when performing the save operation on the object after the editing process, each step shown in fig. 8c below can be continuously performed between two processes operated by the mobile phone 100, so as to complete the save process on the object after the editing process.
Fig. 8c shows an interaction flow of the application process 01 by the application process 02 to save the target object after completing the editing process. It will be understood that the steps in the interaction flow shown in fig. 8c may be performed before and after the partial steps shown in fig. 8a or fig. 8b, and the steps in the method for editing multimedia provided in the embodiment of the present application are not limited to the steps shown in fig. 8a to fig. 8c and the execution sequence of the steps, which are not limited herein.
As shown in fig. 8c, the process includes the steps of:
830: the application process 02 calls the multimedia codec interface, encodes the mixed special effect audio, forms an audio stream file and caches it.
For example, after steps 801 to 809, or step 814, have been performed, and if steps 815 to 818 have further been performed, the mobile phone 100 may trigger the application process 02 to perform step 830, that is, to call the multimedia codec interface to re-encode the mixed-special-effect audio obtained by the editing process, obtain an audio stream file and cache it. The cached audio stream file may be synchronized to the application process 01 based on the shared memory capability of the system; in other embodiments, the cache address at which the application process 02 caches the encoded audio stream file may also be a public directory, and the application process 01 may obtain the cached audio stream file through the cache directory provided by the application process 02. The audio stream file format formed by encoding may be, for example, the advanced audio coding (Advanced Audio Coding, AAC) format, which is not limited herein.
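For illustration only, the following Kotlin sketch shows one way of configuring an AAC encoder through the Android MediaCodec API, under the assumption that this API is the multimedia codec interface referred to above; feeding PCM input and draining the encoded output are omitted.

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Step 830 sketch: PCM produced by the mixing step is re-encoded to AAC so it
// can later be packaged into the output container (see step 836).
fun createAacEncoder(sampleRate: Int, channelCount: Int): MediaCodec {
    val format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, sampleRate, channelCount)
    format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC)
    format.setInteger(MediaFormat.KEY_BIT_RATE, 128_000)

    val encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC)
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    encoder.start()
    // PCM buffers are queued into the encoder's input buffers; the AAC frames
    // drained from its output buffers form the cached audio stream file.
    return encoder
}
```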
It will be appreciated that, if the mobile phone 100 performs steps 815 to 818 multiple times in response to user operations to produce mixed-special-effect audio, the application process 02 run by the mobile phone 100 may repeatedly perform step 830, which is not limited herein.
832: the application process 02 calls the multimedia codec interface, encodes the image data after synthesizing the video special effects, forms a video stream file and caches the video stream file.
For example, if the mobile phone 100 performs the steps 819 to 821, the application process 02 may be triggered to perform the step 832, that is, call the multimedia codec interface, re-encode the image data after the synthesized video special effects, form a video stream file, and cache. The video stream file format formed by encoding may be, for example, an advanced video coding (Advanced Video Coding, AVC) format, etc., and is not limited thereto.
It will be appreciated that, if the mobile phone 100 performs steps 819 to 821 multiple times in response to user operations to synthesize video special effects, the application process 02 run by the mobile phone 100 may repeatedly perform step 832, which is not limited herein.
834: the application process 01 sends a save instruction to the application process 02.
Illustratively, the user may perform a save operation on the editing interface of the gallery 311 executed by the application process 01, for example, the operation of clicking the save button 612 on the video editing interface 610 shown in fig. 6a, and the application process 01 sends the corresponding generated save instruction to the application process 02 for execution.
836: the application process 02 combines and packages the audio stream file and the video stream file obtained by encoding to form a short video file, and stores the generated short video file.
Illustratively, the application process 02 combines and packages the audio stream file cached in step 830 and the video stream file cached in step 832 based on the received save instruction, thereby obtaining a short video file in which the audio stream file and the video stream file are synthesized. The application process 02 may, for example, call a multimedia wrapper (Media Muxer) to package the audio stream file and the video stream file; the short video file format formed by the package may be, for example, a media file format such as MP4 or audio video interleave format (Audio Video Interleaved, AVI), which is not limited herein.
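For illustration only, the following Kotlin sketch shows how the encoded audio and video streams could be combined and packaged into an MP4 file with android.media.MediaMuxer, under the assumption that this class is the multimedia wrapper (Media Muxer) referred to above; sample buffers and timestamps come from the encoders of steps 830 and 832.

```kotlin
import android.media.MediaCodec
import android.media.MediaFormat
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Step 836 sketch: merge the cached AVC video stream and AAC audio stream
// into a single short-video (MP4) file.
class ShortVideoPackager(outputPath: String) {
    private val muxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    private var videoTrack = -1
    private var audioTrack = -1

    fun start(videoFormat: MediaFormat, audioFormat: MediaFormat) {
        videoTrack = muxer.addTrack(videoFormat)
        audioTrack = muxer.addTrack(audioFormat)
        muxer.start()
    }

    fun writeVideoSample(buffer: ByteBuffer, info: MediaCodec.BufferInfo) =
        muxer.writeSampleData(videoTrack, buffer, info)

    fun writeAudioSample(buffer: ByteBuffer, info: MediaCodec.BufferInfo) =
        muxer.writeSampleData(audioTrack, buffer, info)

    fun finish() {
        muxer.stop()
        muxer.release()
    }
}
```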
It will be appreciated that, if the editing processing of adding a subtitle effect was included in the execution of steps 819 to 821, the application process 02 in step 836 may also merge the subtitle track to which the text clip has been added while packaging the audio stream file and the video stream file, so that the subtitle effect is included in the short video file that is formed.
It can be understood that, in the multimedia editing method provided in the embodiment of the present application, the audio/video editing task received on the application process 01 is handed to the editing capability service on the application process 02 for processing in a cross-process communication manner, and the result of the editing processing is synchronized to the application process 01 in a cross-process communication manner. The shared memory capability described in steps 812, 818, 823 and 830 above is also one of the implementations on which the system of the mobile phone 100 implements cross-process communication.
Example 2
In the embodiment of the present application, taking the example that the audio/video application is the AOD 312, a specific implementation procedure of the multimedia editing method of the present application is described.
Fig. 9 is a schematic flow chart of another embodiment of a multimedia editing method according to the present application.
As shown in fig. 9, the steps of the process for implementing the multimedia editing method in the embodiment of the present application are different from the steps of the process shown in fig. 4 only in that the interaction entities involved in the steps of the process shown in fig. 9 are AOD 312 and video editing 314. Specifically, the flow shown in fig. 9 includes the following steps:
901: AOD 312 detects that the user initiates an operation for editing the target object.
902: AOD 312 sends an edit initiation request to video edit 314. The editing starting request at least comprises request data for acquiring editing capability support and a file path of a target object.
903: the video editing 314 initializes the editing operating environment based on the received editing start request, and adds a target object to the ready editing operating environment.
904: video editing 314 returns an edit capability list to AOD 312.
905: AOD 312 displays an edit functionality control based on the returned list of edit capabilities.
906: AOD 312 detects an editing operation of the user on the target object and generates a corresponding editing operation instruction.

907: the video editing 314 invokes the corresponding editing capability to perform the corresponding editing processing on the target object based on the received editing operation instruction.

908: the video editing 314 sends a return value to the AOD 312 indicating that the editing processing is completed.
909: the preview effect corresponding to the editing processing result is displayed on the editing interface of AOD 312.
910: AOD 312 detects a save operation by a user and generates a save instruction.
911: the video editing 314 saves the target object for which editing is completed based on the received save instruction.
912: AOD 312 obtains and displays the edited target object.
Fig. 10 a-10 g illustrate some UI interface diagrams according to embodiments of the present application.
As shown in fig. 10a, the user may click the more button 1011 on the picture/video browsing interface 1010 of the gallery 311 and select the "set super wallpaper" option to enter the super wallpaper editing interface 1030 of AOD 312 shown in fig. 10c.
In other embodiments, the user may also find the super wallpaper option through the setup application of the system of the handset 100 to open the super wallpaper setup interface 1020 shown in fig. 10b, and then click on the "select from gallery" option on the super wallpaper setup interface 1020 to enter the editing interface 1030 of AOD 312 shown in fig. 10 c.
As shown in fig. 10c, the user may click a clip button 1031 on the editing interface 1030 to clip the video material 1032 selected by the user, for example, a 3s long video clip may be selected from the video material 1032 intelligently, or the current edited video material 1032 may be replaced by clicking a replace material button 1032.
As shown in fig. 10d, the user may also click on the filter button 1033 on the editing interface 1030 to switch the filter to the clip-processed video material 1032, for example, the user clicks on to switch to the filter style 1034, so that the clip of the video material 1032 in the filter style 1034 may be displayed on the editing interface 1030.
As shown in fig. 10e, the user may click the screen-off trimming button 1035 on the editing interface 1030 to select a trimming shape 1036, and a preview wallpaper 1037 corresponding to the trimming shape 1036 selected by the user may be displayed on the editing interface 1030.
After the user completes the editing operation on the editing interface 1030, the user may click the save button 1038 in the upper right corner of the editing interface 1030 to save the result of the editing operation. Thereafter, the mobile phone 100 may display the wallpaper application interface 1040 shown in fig. 10f.

As shown in fig. 10f, the wallpaper application interface 1040 displays a preview effect interface after AOD 312 completes its settings. The user may click the edit button 1041 on the wallpaper application interface 1040 shown in fig. 10f to return to the editing interface 1030 shown in fig. 10c and perform the editing process again. The user may also click the apply button 1042 on the wallpaper application interface 1040 shown in fig. 10f to determine to apply the preview effect displayed by the wallpaper application interface 1040 to the off-screen display interface of AOD 312. The off-screen display interface of AOD 312 may be as shown in fig. 10g, in which the mobile phone 100 is in the screen-off state and displays the dynamic video wallpaper edited through the interfaces shown in fig. 10c to 10e.
It can be appreciated that the various editing operations performed by the user by clicking the clip button 1031, the filter button 1033 and the screen-off trimming button 1035 on the editing interface 1030 may invoke the editing capabilities of the video editing 314 for editing processing by executing the interaction flow shown in fig. 9; the user may also instruct the video editing 314, through the interaction flow shown in fig. 9, to save the result of the corresponding editing processing by clicking the save button 1038 on the editing interface 1030, which is not described in detail herein.
The following embodiments provide a brief description of the composition of an editorial capability service (e.g., editorial capability service 321 in FIG. 3) in the application framework layer, by way of example with gallery 311 and video editing 314.
Fig. 11 is a schematic diagram showing a composition structure of an editing capability service according to an embodiment of the present application.
As shown in FIG. 11, the editing capability service 1100 providing editing capabilities may be based on the functional modules provided by the media presentation integration layer 1110, the media editing processing platform 1120, and the system platform 1130.
Wherein the media presentation integration layer 1110 includes a UI interface, a capability call interface, and a media editing capability software development kit. The UI interface is used to provide editing capability presentation resources for the video editing 314 that invokes the editing capability service 1100, and may provide presentation resources such as a material list display and an effect preview interface; in the embodiment of the present application, the editing capability presentation resources that the editing capability service 1100 invoked by the video editing 314 can provide may be synchronized across processes to the gallery 311, the AOD 312, the camera 313, etc. that request editing capability support. The capability call interface is used for calling the media framework capability, rendering capability, audio/video editing capability and media editing processing capability provided by the media editing processing platform 1120, as well as system capabilities such as media file parsing, media file encoding and decoding, and media file playing.
The media editing processing platform 1120 may comprise a media control layer 1121 and a capability layer 1122. The media control layer 1121 may include editing project management, which implements management functions such as project configuration, rendering management, track management, material management and clip management based on the created time axis (Timelines); input data such as video streams, audio streams and pictures; and a demultiplexer (splitter), a decoder and the like for performing format parsing on the input data. The media control layer 1121 is further used for managing audio/video editing types, including capability-class types such as reverse playback and speed change, and material-class types such as special effects, stickers, beautification, decorative text and filters. The media control layer 1121 further includes an encoder, a packager and the like for encoding and packaging the editable-format audio, single-frame image data and the like obtained by the editing process.
The capabilities layer 1122 may include open source media framework capabilities, openGL rendering capabilities, audio/video editing capabilities, and the like.
The system platform 1130 may provide system function modules, including media file parsing, media file encoding and decoding, a media file player, a rendering engine, and the like, for the capability call interface of the media presentation integration layer 1110 described above to call.
It will be appreciated that the structure shown in fig. 11 does not constitute a specific limitation on the editing capability service 1100. When the editing capabilities of the video clip application 314 installed on the mobile phone 100 are different, the structure of the editing capability service 1100 that the mobile phone 100 can provide may also be different. In other embodiments, the editing capability service 1100 may include more or fewer components than those shown in fig. 11, which is not limited herein.
Fig. 12 is a schematic diagram of a hardware structure of a mobile phone 100 according to an embodiment of the present application.
The handset 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may include more or fewer components than shown, or some components may be combined, or some components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate an operation control signal according to an instruction operation code and a timing signal, to complete control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or that are used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves system efficiency. In this embodiment of the present application, the instructions and data for implementing the multimedia editing method provided in the embodiments of the present application may be stored in this memory, and the processor 110 may generate an operation control signal through the controller according to the instruction operation code and the timing signal, to complete the execution of the instructions and implement the multimedia editing method provided in the embodiments of the present application, which will not be described in detail herein.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
In the embodiments of the present application, the audio material that the user selects to edit or add to the audio track may be sampled, quantized, and encoded through the PCM interface and then transmitted, which will not be described in detail herein. The PCM interface may also be used for audio communication, to sample, quantize, and encode an analog signal. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit an audio signal to the wireless communication module 160 through the PCM interface, to implement the function of answering a call through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
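As a hedged illustration of the kind of sampled and quantized PCM data involved, the following Kotlin sketch captures roughly one second of 16-bit mono PCM with Android's AudioRecord API. The sample rate and buffer handling are arbitrary example values, the RECORD_AUDIO permission is assumed to be granted, and a real implementation would loop until enough samples are read.

```kotlin
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

fun capturePcm(sampleRate: Int = 44100): ShortArray {
    // Minimum internal buffer size for this configuration, in bytes.
    val minBuf = AudioRecord.getMinBufferSize(
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT
    )
    val recorder = AudioRecord(
        MediaRecorder.AudioSource.MIC,
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        minBuf
    )
    val samples = ShortArray(sampleRate)        // ~1 second of mono 16-bit samples
    recorder.startRecording()
    recorder.read(samples, 0, samples.size)     // blocking read of raw PCM samples
    recorder.stop()
    recorder.release()
    return samples
}
```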
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is merely an illustrative description and does not constitute a structural limitation on the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may also use an interfacing manner different from that in the foregoing embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the mobile phone 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the handset 100. The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied to the handset 100.
The mobile phone 100 implements display functions through a GPU, a display 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some embodiments, the cell phone 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The mobile phone 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display 194, an application processor, and the like. The ISP is used to process data fed back by the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, and the optical image is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the mobile phone 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
Video codecs are used to compress or decompress digital video. The mobile phone 100 may support one or more video codecs, so that the mobile phone 100 can play or record videos in multiple coding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like. In the embodiments of the present application, the video material segments that the user selects to edit or add to the video track may be compression-encoded or decoded by a video codec, which will not be described in detail herein.
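For example, on Android the compression and decompression just described is commonly handled through the MediaExtractor and MediaCodec APIs. The sketch below sets up a decoder for the first video track of a file; it is only an illustrative outline (error handling omitted, file path assumed), not the implementation used by the mobile phone 100.

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaFormat

fun createVideoDecoder(path: String): Pair<MediaExtractor, MediaCodec>? {
    val extractor = MediaExtractor().apply { setDataSource(path) }
    for (i in 0 until extractor.trackCount) {
        val format = extractor.getTrackFormat(i)
        val mime = format.getString(MediaFormat.KEY_MIME) ?: continue
        if (mime.startsWith("video/")) {
            extractor.selectTrack(i)
            // Create and start a decoder matching the track's MIME type (e.g. video/avc).
            val decoder = MediaCodec.createDecoderByType(mime)
            decoder.configure(format, /* surface = */ null, null, 0)
            decoder.start()
            return extractor to decoder
        }
    }
    extractor.release()
    return null
}
```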
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capabilities of the handset 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code, and the executable program code includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the mobile phone 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 performs various functional applications and data processing of the mobile phone 100 by running the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor.
The handset 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The mobile phone 100 can be used to listen to music or answer a hands-free call through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the mobile phone 100 is used to answer a call or listen to a voice message, the receiver 170B can be placed close to the human ear to receive the voice.
The microphone 170C, also referred to as a "mic" or a "mike", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user may speak close to the microphone 170C to input a sound signal into the microphone 170C. The mobile phone 100 may be provided with at least one microphone 170C. In other embodiments, the mobile phone 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the mobile phone 100 may alternatively be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may include at least two parallel plates having conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the mobile phone 100 determines the strength of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the mobile phone 100 detects the intensity of the touch operation based on the pressure sensor 180A. The mobile phone 100 may also calculate the touch position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also referred to as a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may transfer the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may alternatively be disposed on the surface of the mobile phone 100 at a position different from that of the display screen 194.
In this embodiment of the present application, the mobile phone 100 may detect, through the pressure sensor 180A and the touch sensor 180K, the editing operation, the save operation, and the like performed by the user on a target object (for example, a video object) on the editing interface displayed by an application such as the gallery of the mobile phone 100, which will not be described in detail herein.
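To make this concrete, the following Kotlin sketch shows how an editing interface might translate such detected touches into editing and save actions. The activity, layout, and view identifiers are hypothetical, and the cross-process save request is only a placeholder, not the method actually used by the mobile phone 100.

```kotlin
import android.app.Activity
import android.os.Bundle
import android.widget.Button

class EditingInterfaceActivity : Activity() {

    private var lastSavedUri: String? = null   // illustrative state only

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.editing_interface)          // hypothetical layout resource

        // A touch on the save control is reported by the touch sensor, delivered to
        // this activity as a click event, and then forwarded as a save request.
        findViewById<Button>(R.id.save_button).setOnClickListener {
            lastSavedUri = requestSaveFromEditingService()
        }
    }

    // Placeholder for the cross-process save request described in the method.
    private fun requestSaveFromEditingService(): String =
        "content://media/external/video/media/1"             // illustrative return value
}
```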
Illustratively, the mobile phone 100 may also include one or more of the keys 190, the motor 191, the indicator 192, the SIM card interface 195 (or an eSIM card), and the like. Of course, the electronic device for executing the multimedia editing method in the embodiments of the present application may alternatively be a tablet computer, a PC, a wearable device, or the like, which is not limited herein.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one example implementation or technique disclosed in accordance with embodiments of the present application. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
The disclosure of the embodiments of the present application also relates to an apparatus for performing the operations herein. The apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable medium, such as, but not limited to, any type of disk, including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each may be coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processors for increased computing power.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform one or more method steps. The structure for a variety of these systems is discussed in the following description. In addition, any particular programming language sufficient to implement the techniques and embodiments disclosed in the examples herein may be used. Various programming languages may be used to implement the present disclosure, as discussed herein.
Additionally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the disclosed subject matter. Accordingly, the present application example disclosure is intended to be illustrative, but not limiting, of the scope of the concepts discussed herein.

Claims (14)

1. A multimedia editing method applied to an electronic device installed with a first application and a second application, the method comprising:
the electronic equipment displays a first interface of the first application, wherein the first interface comprises first multimedia data;
detecting that a user inputs a first editing operation to the first multimedia data, and displaying a second interface of the first application by the electronic equipment, wherein the second interface comprises one or more editing controls, and the editing controls are used for responding to the user operation to carry out corresponding editing processing on the first multimedia data;
detecting that a user selects a first editing control for the first multimedia data, and the electronic device performs a first editing process on the first multimedia data by invoking the multimedia editing capability of the second application, to obtain second multimedia data;
the electronic device displays the second multimedia data in the second interface.
2. The method of claim 1, wherein the first interface comprises a first edit button, and wherein
the first editing operation input by the user to the first multimedia data comprises an operation acting on the first edit button.
3. The method of claim 1, wherein the electronic device runs the first application on a first process and the electronic device runs the second application on a second process;
the detecting that the user inputs a first editing operation to the first multimedia data, the electronic device displaying a second interface of the first application, includes:
in response to the first editing operation, the electronic device sends a first request from the first process to the second process, the first request being for requesting a multimedia editing capability list of the second application from the second process;
based on the multimedia editing capability list returned by the second process, the electronic device generates the one or more editing controls displayed on the second interface;
the electronic device displays the second interface.
4. The method of claim 3, wherein each of the plurality of editing controls corresponds one-to-one with each of the multimedia editing capabilities in the list of multimedia editing capabilities.
5. The method of claim 4, wherein the detecting that the user selected the first editing control for the first multimedia data comprises:
a second editing operation of the user acting on the first editing control on the second interface is detected.
6. The method of claim 5, wherein the electronic device performs a first editing process on the first multimedia data by invoking a multimedia editing capability of the second application, comprising:
in response to the second editing operation, the electronic device sends a first editing instruction from the first process to the second process, wherein the first editing instruction is used for instructing the second process to execute a first editing process on the first multimedia data;
and the second process run by the electronic device, in response to the first editing instruction, performs the first editing process based on the multimedia editing capability of the second application.
7. The method of claim 6, wherein the electronic device performs a first editing process on the first multimedia data by invoking a multimedia editing capability of the second application, further comprising:
and the second process run by the electronic device feeds back the execution result of the first editing process to the first process.
8. The method of claim 7, wherein the electronic device displaying the second multimedia data in the second interface comprises:
the electronic device plays the second multimedia data on the second interface to preview the picture effect, and/or
The electronic device plays the second multimedia data to preview the sound effect.
9. The method of claim 7, wherein the second interface comprises a save control, and
after the electronic device displays the second multimedia data in the second interface, the method further comprises:
the electronic device detects a save operation acting on the save control;
the electronic device, in response to the save operation, sends a save instruction from the first process to the second process;
the second process run by the electronic device, in response to the save instruction, stores the second multimedia data under a first directory; wherein the first directory is a storage space to which both the first application and the second application have access rights.
10. The method according to any of claims 3 to 9, wherein interaction between the first process and the second process is achieved by a cross-process communication technique, and wherein the cross-process communication technique comprises any of the following:
pipeline communication;
cross-process communication based on named pipes;
cross-process communication based on memory mapping;
cross-process communication based on message queues;
cross-process communication based on shared memory;
signal quantity or signal based cross-process communication;
socket-based cross-process communication.
11. The method of any one of claims 1 to 9, wherein each of the plurality of editing controls comprises any one of:
splitting; speed change; volume; animation; filter; single-frame export; cropping; mask; chroma keying; mirroring; reverse playback; freeze frame; picture-in-picture; special effects; stickers; adding music/sound effects; and adding subtitles.
12. The method of any one of claims 1 to 9, wherein the form of the editing control comprises any one of the following:
buttons, progress bars, date/time controls, upload controls, list boxes, tree controls, page views, input boxes, multi-format text boxes, tab controls, drop-down boxes.
13. An electronic device, comprising: one or more processors; and one or more memories, wherein the one or more memories store one or more programs that, when executed by the one or more processors, cause the electronic device to perform the multimedia editing method of any one of claims 1 to 12.
14. A computer-readable storage medium having stored thereon instructions which, when executed on a computer, cause the computer to perform the multimedia editing method of any one of claims 1 to 12.
CN202111615710.1A 2021-12-27 2021-12-27 Multimedia editing method, electronic device and storage medium Active CN115016871B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111615710.1A CN115016871B (en) 2021-12-27 2021-12-27 Multimedia editing method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111615710.1A CN115016871B (en) 2021-12-27 2021-12-27 Multimedia editing method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN115016871A CN115016871A (en) 2022-09-06
CN115016871B true CN115016871B (en) 2023-05-16

Family

ID=83064327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111615710.1A Active CN115016871B (en) 2021-12-27 2021-12-27 Multimedia editing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115016871B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113391739A (en) * 2021-05-27 2021-09-14 北京达佳互联信息技术有限公司 Function menu display method and device, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004090900A1 (en) * 2003-04-07 2004-10-21 Internet Pro Video Limited Method of enabling an application program running on an electronic device to provide media manipulation capabilities
WO2013023063A1 (en) * 2011-08-09 2013-02-14 Path 36 Llc Digital media editing
CN110309006B (en) * 2019-06-28 2021-06-04 百度在线网络技术(北京)有限公司 Function calling method and device, terminal equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113391739A (en) * 2021-05-27 2021-09-14 北京达佳互联信息技术有限公司 Function menu display method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115016871A (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN110109636B (en) Screen projection method, electronic device and system
WO2020253719A1 (en) Screen recording method and electronic device
CN110231905B (en) Screen capturing method and electronic equipment
CN112558825A (en) Information processing method and electronic equipment
CN113726950B (en) Image processing method and electronic equipment
CN113838490B (en) Video synthesis method and device, electronic equipment and storage medium
CN112527174B (en) Information processing method and electronic equipment
WO2021249318A1 (en) Screen projection method and terminal
CN109819306B (en) Media file clipping method, electronic device and server
CN114040242B (en) Screen projection method, electronic equipment and storage medium
CN116055773A (en) Multi-screen collaboration method, system and electronic equipment
CN109981885B (en) Method for presenting video by electronic equipment in incoming call and electronic equipment
CN112527222A (en) Information processing method and electronic equipment
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
WO2023030099A1 (en) Cross-device interaction method and apparatus, and screen projection system and terminal
CN116450251A (en) Method for adapting page layout of multiple devices and electronic device
CN115550597A (en) Shooting method, system and electronic equipment
CN114756785A (en) Page display method and device, electronic equipment and readable storage medium
CN116028149B (en) Window rendering method, system, device, storage medium and computer program product
CN115016871B (en) Multimedia editing method, electronic device and storage medium
WO2023005711A1 (en) Service recommendation method and electronic device
CN111131019B (en) Multiplexing method and terminal for multiple HTTP channels
CN115550559A (en) Video picture display method, device, equipment and storage medium
CN113835802A (en) Device interaction method, system, device and computer readable storage medium
WO2023030057A1 (en) Screen recording method, electronic device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant