CN111770288A - Video editing method, device, terminal and storage medium

Info

Publication number
CN111770288A
Authority
CN
China
Prior art keywords
screen
video
target
instruction
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010577961.4A
Other languages
Chinese (zh)
Other versions
CN111770288B (en)
Inventor
胡焱华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010577961.4A
Publication of CN111770288A
Priority to PCT/CN2021/087257 (WO2021258821A1)
Application granted
Publication of CN111770288B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data

Abstract

The application belongs to the technical field of terminals, and particularly relates to a video editing method, apparatus, terminal and storage medium. The video editing method comprises the following steps: receiving a first editing instruction for an initial video displayed on a first screen, and displaying a material set on a second screen; receiving a moving instruction for a target material in the material set, and after the target material has been moved to the first screen, editing the initial video based on the target material to generate a target video. Because the video editing interface and the material display interface are shown on two separate screens, and the initial video is edited on the first screen only after the target material on the second screen has been moved there, the convenience of video editing and the user experience can be improved.

Description

Video editing method, device, terminal and storage medium
Technical Field
The application belongs to the technical field of terminals, and particularly relates to a video editing method, apparatus, terminal and storage medium.
Background
With the development of terminal technology, terminals support more and more functions that continually enrich users' lives. For example, a user may use the terminal to listen to music, watch videos, receive voice messages, and so on.
When a user needs to edit a video with the terminal, the user can open the terminal gallery, select a target picture from it, and then add the target picture to a video template to generate the video. If the target picture does not meet the preset requirement, the user has to open the terminal gallery again and reselect a target picture.
Disclosure of Invention
The embodiments of the application provide a video editing method, apparatus, terminal and storage medium, which can improve the convenience of video editing. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a video editing method, where the method includes:
receiving a first editing instruction for an initial video displayed on a first screen, and displaying a material set on a second screen;
receiving a moving instruction for a target material in the material set, and after the target material has been moved to the first screen, editing the initial video based on the target material to generate a target video.
In a second aspect, an embodiment of the present application provides a video editing apparatus, where the apparatus includes:
an instruction receiving unit, configured to receive a first editing instruction for an initial video displayed on a first screen, and to display a material set on a second screen;
and a video editing unit, configured to receive a moving instruction for a target material in the material set, and to edit the initial video based on the target material after the target material has been moved to the first screen, generating a target video.
In a third aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method of any one of the above first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program is used for implementing any one of the methods described above when executed by a processor.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
The embodiments of the application provide a video editing method in which, upon receiving an editing instruction for an initial video displayed on a first screen, a material set is displayed on a second screen; after a moving instruction for a target material in the material set is received and the target material has been moved to the first screen, the initial video is edited based on the target material to generate a target video. The video editing interface and the material display interface are shown on separate screens, and the initial video is edited on the first screen only after the target material on the second screen has been moved there. The user can therefore preview materials and edit the video at the same time without repeatedly opening the terminal gallery to select a target material, which reduces the video editing steps, improves the convenience of video editing, and improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of a video editing method or video editing apparatus according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a video editing method according to an embodiment of the present application;
Fig. 3 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 4 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 5 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 6 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of a video editing method according to an embodiment of the present application;
Fig. 8 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 9 is a schematic flowchart of a video editing method according to an embodiment of the present application;
Fig. 10 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 11 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 12 is an example diagram of rotating the terminal according to an embodiment of the present application;
Fig. 13 is a schematic flowchart of a video editing method according to an embodiment of the present application;
Fig. 14 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 15 is an example diagram of a terminal interface according to an embodiment of the present application;
Fig. 16 is a schematic structural diagram of a video editing apparatus according to an embodiment of the present application;
Fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are obviously only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
With the development of terminal technology, terminals support more and more functions that continually enrich users' lives. For example, a user may use the terminal to listen to music, watch videos, receive voice messages, and so on.
According to some embodiments, fig. 1 shows a schematic diagram of an application scenario of the video editing method or video editing apparatus of an embodiment of the present application. As shown in fig. 1, when a user edits a video using a terminal, the user may click a gallery control on the terminal's display interface. When the terminal detects the click on the gallery control, it displays the pictures stored in the terminal. After selecting a target picture, the user may click a completion control on the display interface, whereupon the terminal displays the target picture on the video editing interface and can generate a video based on it. However, during video editing the terminal must open the terminal gallery in response to a gallery-opening instruction input by the user, which makes the video editing operation cumbersome.
It is easy to understand that when the target picture does not meet the preset requirement, the user needs to click the gallery control again and reselect a target picture: only after the selection is confirmed does the terminal display the new target picture on the video editing interface. Reopening the terminal gallery in this way makes the video editing operation cumbersome. The embodiments of the present application provide a video editing method that can improve the convenience of video editing.
The video editing method provided by the embodiment of the present application will be described in detail below with reference to fig. 2 to fig. 15. The execution bodies of the embodiments shown in fig. 2-15 may be terminals, for example.
Referring to fig. 2, a schematic flow chart of a video editing method according to an embodiment of the present application is provided. As shown in fig. 2, the method of the embodiment of the present application may include the following steps S101 to S102.
S101, receiving a first editing instruction for the initial video displayed on the first screen, and displaying the material set on the second screen.
According to some embodiments, the execution subject of the embodiments of the present application is a terminal including at least two screens, including but not limited to a smartphone with a folding screen, a smartphone with left and right display screens, a smartphone with upper and lower display screens, and the like. The first screen may be any one of the terminal's screens; the term merely distinguishes one screen from the other and does not refer to a fixed physical screen. For example, the two screens of the terminal may be an A screen and a B screen. When the A screen is the first screen, the B screen is the second screen; when the A screen is the second screen, the B screen is the first screen.
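For illustration only, the role-based naming above can be modeled in a few lines of Kotlin; the names PhysicalScreen and ScreenRoles are hypothetical and not part of the original disclosure:

    // "First" and "second" are roles assigned to physical screens,
    // not fixed hardware identifiers.
    enum class PhysicalScreen { A, B }

    data class ScreenRoles(val first: PhysicalScreen, val second: PhysicalScreen) {
        // Swapping realizes "when the A screen is the second screen,
        // the B screen is the first screen".
        fun swapped() = ScreenRoles(first = second, second = first)
    }

    fun main() {
        var roles = ScreenRoles(first = PhysicalScreen.A, second = PhysicalScreen.B)
        println(roles)          // A edits the video, B shows the material set
        roles = roles.swapped() // e.g. after the shake instruction described later
        println(roles)
    }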
It is readily understood that the initial video may be a video that does not yet contain the target material, and it may or may not contain original material. Depending on its duration, the initial video may be a short video or a long video. A material set is a set containing at least one material; materials include but are not limited to animation material, text material, picture material, audio material, video material, and the like. The embodiments of the present application use a picture material set as an example. The picture material set may display multiple pictures in list form or in icon form. The target material is one material in the material set.
Optionally, the first editing instruction refers to an instruction input by a user on a first screen of the terminal. The first editing instruction includes, but is not limited to, a text editing instruction, a voice editing instruction, a click editing instruction, and the like. When the terminal does not receive the first editing instruction, the display interface of the terminal may only display the first screen, and when the terminal displays the first screen, the terminal may display the first screen in a single screen mode or may display the first screen in a full screen mode. An example schematic diagram of a terminal interface when the terminal displays the first screen in a single screen may be as shown in fig. 3. When the terminal displays the first screen in full screen, an exemplary diagram of the terminal interface may be as shown in fig. 4.
According to some embodiments, when the terminal receives a first editing instruction for the initial video displayed on the first screen, the terminal may turn on the second screen and display the material set on it. When the terminal is displaying the first screen on a single screen and receives the first editing instruction, the terminal can open the second screen and display the material set there. When the terminal is displaying the first screen in full-screen mode and receives an editing instruction for the initial video, the terminal can display the second screen based on a preset display rule and show the material set on it. The preset display rule may be, for example, to reduce the display area of the first screen and display the first screen and the second screen on the full screen at the same time.
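As a minimal sketch of the preset display rule, assuming, purely for illustration, an even left/right split (the patent states only that the first screen's display area is reduced):

    data class Pane(val widthPx: Int, val heightPx: Int)

    // Shrink the first screen's pane and give the remainder to the second
    // screen so that both are shown on the full screen at the same time.
    fun applyPresetDisplayRule(full: Pane): Pair<Pane, Pane> {
        val first = Pane(full.widthPx / 2, full.heightPx)              // video editing pane
        val second = Pane(full.widthPx - first.widthPx, full.heightPx) // material set pane
        return first to second
    }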
It will be readily appreciated that the terminal may display the collection of picture material on the second screen, for example, when the terminal receives a click edit instruction for the initial video displayed on the first screen. At this time, an exemplary diagram of the terminal interface may be as shown in fig. 5.
S102, receiving a moving instruction for a target material in the material set, and after the target material has been moved to the first screen, editing the initial video based on the target material to generate a target video.
According to some embodiments, the movement instruction refers to a movement instruction received by the terminal for a target material in the material set displayed on the second screen. The movement instruction includes, but is not limited to, a drag movement instruction, a click movement instruction, a voice movement instruction, and the like. In the embodiments of the present application the movement instruction may be, for example, a drag movement instruction. When the terminal detects that the user's finger touches the second screen, the terminal can acquire the click position and determine the material at that position as the target material. When the terminal then detects that the user's finger moves on the display screen, the terminal receives a drag movement instruction for the target material displayed on the second screen.
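A minimal Kotlin sketch of this drag-detection flow follows; MaterialPanel and its one-dimensional hit test are illustrative assumptions, not the patent's implementation:

    data class Point(val x: Float, val y: Float)

    class MaterialPanel(private val rows: Map<String, ClosedFloatingPointRange<Float>>) {
        // Returns the material whose vertical extent contains the click
        // position (a simple one-dimensional hit test for illustration).
        fun materialAt(p: Point): String? =
            rows.entries.firstOrNull { p.y in it.value }?.key
    }

    class DragTracker(private val panel: MaterialPanel) {
        var target: String? = null
            private set
        val track = mutableListOf<Point>() // the movement track of step S203 below

        fun onTouchDown(p: Point) {        // click position selects the target material
            target = panel.materialAt(p)
            track.clear()
            track.add(p)
        }

        fun onTouchMove(p: Point) {        // finger movement becomes a drag-move instruction
            if (target != null) track.add(p)
        }
    }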
According to some embodiments, the movement instruction received by the terminal may also be a voice movement instruction, for example. The voice movement instruction may be, for example, "move the Q target material from the second screen to the first screen". When the terminal detects that the user inputs the voice movement instruction, the terminal can receive a movement instruction for a target material in the material set on the second screen.
It is easy to understand that, when the terminal receives a movement instruction for the target material in the material set displayed on the second screen, the terminal may obtain the movement end point corresponding to the movement instruction. Having obtained the movement end point, the terminal can judge whether it is located on the first screen. When the terminal judges that the movement end point corresponding to the movement instruction is located on the first screen, the terminal moves the target material to the first screen and displays it there.
Optionally, to judge whether the movement end point corresponding to the movement instruction is located on the first screen, the terminal may, for example, obtain the position coordinates of the end point. When the terminal detects that those coordinates fall on the first screen, it judges that the movement end point is located on the first screen.
It is easily understood that, after the target material has been moved to the first screen, the terminal may edit the initial video based on the target material to generate the target video. For example, the movement instruction acquired by the terminal may be a drag movement instruction, whose drag trajectory may be as shown in fig. 6. When the terminal acquires the drag movement instruction, the terminal can acquire the drag end point corresponding to it, which may be, for example, the B position. When the terminal detects that the B position is on the first screen, the terminal can move the W target material onto the first screen. After the terminal moves the W target material to the first screen, the terminal may generate the target video based on the W target material.
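Continuing the sketch above (and reusing its Point type), the drop-acceptance test might look as follows; the screen rectangle is an illustrative assumption:

    data class ScreenRect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
        operator fun contains(p: Point) = p.x in left..right && p.y in top..bottom
    }

    // Accept the drop only when the movement end point lies on the first screen.
    fun onTouchUp(firstScreen: ScreenRect, track: List<Point>, target: String?): Boolean {
        val end = track.lastOrNull() ?: return false
        val accepted = target != null && end in firstScreen
        if (accepted) println("move $target to the first screen and edit the video there")
        return accepted
    }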
Optionally, if, after the terminal has moved the W target material to the first screen, the terminal receives a movement instruction for a C target material and detects that the movement end point corresponding to that instruction is on the first screen, the terminal may replace the W target material with the C target material.
According to some embodiments, when the terminal receives a deletion instruction for the target material after the target material has been moved to the first screen, the terminal may delete the target material based on that instruction. After deleting the target material, the terminal can display a blank interface on the first screen, or it can restore the material that was displayed before the target material was moved to the first screen. The deletion instruction includes, but is not limited to, a click deletion instruction, a drag deletion instruction, a voice deletion instruction, and the like.
The embodiments of the application provide a video editing method in which a material set is displayed on a second screen upon receiving a first editing instruction for an initial video displayed on a first screen; after a moving instruction for a target material in the material set is received and the target material has been moved to the first screen, the initial video is edited based on the target material to generate a target video. When editing a video, the user therefore only needs to move the target material from the second screen to the first screen for the initial video to be edited there, which reduces the switching between material selection and video editing, reduces the video editing steps, improves the convenience of video editing, and improves the user experience.
Referring to fig. 7, a flowchart of a video editing method is provided according to an embodiment of the present application. As shown in fig. 7, the method of the embodiment of the present application may include the following steps S201 to S207.
S201, a first editing instruction for an initial video displayed on a first screen is received.
The specific process is as described above, and is not described herein again.
S202, receiving a second screen opening instruction, and displaying the material set on the second screen.
According to some embodiments, the opening instruction for the second screen includes, but is not limited to, a voice opening instruction, a click opening instruction, a touch opening instruction, a press opening instruction, and the like. When the terminal receives an editing instruction for the initial video displayed on the first screen, the terminal may by default treat the editing instruction as an opening instruction for the second screen, on which the terminal can then display the material set. The material set in the embodiments of the present application is again introduced using a picture material set as an example.
It is easy to understand that the opening instruction received by the terminal may be, for example, a press opening instruction. When the terminal receives an editing instruction for the initial video displayed on the first screen, the terminal may measure the pressing pressure on a press control corresponding to the second screen. When the terminal detects that the pressing pressure is larger than a preset pressure threshold, the terminal can open the second screen and display the material set on it.
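A sketch of this pressure check follows; the threshold value is an assumption, as the patent says only that a preset threshold must be exceeded:

    // Hypothetical normalized pressure threshold (0..1).
    const val PRESSURE_THRESHOLD = 0.6f

    fun onPressSecondScreenControl(pressure: Float, openSecondScreen: () -> Unit) {
        // Open the second screen only when the pressing pressure exceeds the preset threshold.
        if (pressure > PRESSURE_THRESHOLD) openSecondScreen()
    }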
S203, receiving a moving instruction for the target material in the material set, and acquiring the moving track corresponding to the moving instruction.
According to some embodiments, when the terminal receives a movement instruction for the target material in the material set displayed on the second screen, the terminal may acquire the movement track corresponding to the movement instruction and control the target material to move synchronously along that track. Alternatively, the terminal may merely record the movement track without moving the target material synchronously along it.
It is easy to understand that the movement track acquired by the terminal may, for example, be "S"-shaped. When the terminal receives a moving instruction for a target material D in the material set displayed on the second screen, the terminal can acquire the movement track corresponding to the moving instruction and control the D material to move synchronously along it. An example schematic diagram of the terminal interface at this time may be as shown in fig. 8.
According to some embodiments, please refer to fig. 9, which provides a schematic flowchart of a video editing method according to an embodiment of the present application. As shown in fig. 9, before the moving instruction for the target material in the material set is received, the method of the embodiment of the present application may further include the following steps S301 to S302. S301, receiving a browsing instruction for each material in the material set, and marking the selected target material; and S302, receiving a zooming instruction for the target material, and displaying the zoomed target material on the second screen.
It is easy to understand that, before the terminal receives the moving instruction for the target material, the user can browse the materials in the material set. To do so, the user can input a browsing instruction for each material; browsing instructions include, but are not limited to, a voice browsing instruction, a click browsing instruction, and a touch browsing instruction. For example, when the second screen displays the material set, the terminal may place a slider bar on the second screen so that the user can operate on the material set by moving the slider. An example schematic diagram of the terminal interface at this time may be as shown in fig. 10. The slider bar may be determined based on the amount of material in the material set and the size of the second screen, as sketched below.
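One plausible reading of "determined based on the amount of material and the size of the second screen" is the usual scrollbar proportion; the formula below is an assumption made for illustration:

    // The slider thumb shrinks as the material list outgrows the screen.
    fun sliderThumbHeightPx(screenHeightPx: Int, itemHeightPx: Int, itemCount: Int): Int {
        val contentHeight = itemHeightPx * itemCount
        if (contentHeight <= screenHeightPx) return screenHeightPx // nothing to scroll
        val ratio = screenHeightPx.toFloat() / contentHeight
        return (screenHeightPx * ratio).toInt()
    }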
According to some embodiments, when the terminal receives a browsing instruction for each material in the set of picture materials, the terminal may move each material based on the browsing instruction. When a user determines a target material in a browsing process, a marking instruction for the target material can be input, and the terminal can mark the selected target material based on the marking instruction, wherein the target material can be a target picture in a picture material set. When the terminal selects the target material, the terminal can receive a zoom instruction for the target material and display the zoomed target material on the second screen. The zoom command includes, but is not limited to, a voice zoom command, a click zoom command, and a touch zoom command.
It is easy to understand that the materials in the material set displayed on the second screen by the terminal may be T materials, Y materials, U materials, I materials, and D materials. When the terminal selects the D material as the target material, the zoom instruction received by the terminal for the D material may be, for example, a click zoom instruction. When the terminal receives the zoom instruction, the terminal may zoom in the target material and display the zoomed-in D material on the second screen. At this time, an example schematic diagram of the terminal interface may be as shown in fig. 11.
S204, when the moving end point of the moving track is located on the first screen, determining that the target material has moved to the first screen, editing the initial video based on the target material, and generating the target video.
According to some embodiments, when the terminal receives a movement instruction for a target material in the material set displayed on the second screen, the terminal may acquire a movement track corresponding to the movement instruction. When the terminal acquires the moving end point of the moving track, the terminal can detect whether the moving end point is located on the first screen. When the terminal determines that the moving destination is located on the first screen, the terminal may move the target material onto the first screen. After the terminal determines that the target material moves to the first screen, the terminal can edit the video based on the target material to generate the target video.
It is easy to understand that when the terminal receives a movement instruction for the target material D material in the material set displayed on the second screen, the movement track corresponding to the movement instruction acquired by the terminal may be, for example, in an "S" shape. When the terminal acquires that the moving end point of the moving track is the position H, the terminal can detect whether the position H of the moving end point is located on the first screen. When the terminal determines that the position of the moving destination H is on the first screen, the terminal may move the target material D onto the first screen. After the terminal determines that the target material D material moves to the first screen, the terminal can edit the initial video based on the target material D material to generate the target video.
According to some embodiments, when the terminal edits the initial video based on the target material, the terminal may insert the target material at the position corresponding to the movement end point in the initial video, or replace the original material displayed at the movement end point with the target material; this reduces the separate steps of inserting or replacing material and improves the convenience of video editing. For example, when the terminal determines that the movement end point H is located on the first screen, the terminal may insert the target material D at the position H in the initial video. Before the target material has been moved to the first screen, the material displayed in the initial video on the first screen is the original material. When editing the initial video based on the target material, the terminal can replace the original material displayed at the movement end point with the target material: for example, if the original material displayed in the initial video is the M material, the terminal can replace the M material displayed at the movement end point with the target material D.
Optionally, when editing the initial video, the terminal may divide the first screen into at least one region and edit the initial video separately within each region, which reduces the influence on the initial video displayed in the other regions. When editing the initial video based on the target material, the terminal can also acquire the movement end point of the movement instruction. The movement instruction may be, for example, a drag movement instruction: upon receiving it, the terminal can acquire the corresponding movement track and the end point of that track, and then detect whether the end point is on the first screen. When the terminal determines that the movement end point is located on the first screen, the terminal may obtain the position of the end point on the first screen, for example as a coordinate position, and the region corresponding to that position, and replace the original material displayed in that region with the target material. The replacement is performed directly, without the user carrying out multiple steps, which reduces the operation steps of video editing.
According to some embodiments, when the terminal divides the first screen into G1, G2, G3, G4, G5 and G6 areas of the same size, determines that the position of the movement end point on the first screen is (G211, G221), and determines that the area corresponding to that position is the G2 area, the terminal may replace the original material displayed in the G2 area with the target material.
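A sketch of this region lookup follows; the 2-column by 3-row grid is assumed so that six equal regions G1 to G6 result, since the patent does not fix the grid shape:

    data class Grid(val cols: Int, val rows: Int, val widthPx: Int, val heightPx: Int)

    // Map a drop coordinate to a region index: 0 -> G1, 1 -> G2, ...
    fun regionIndex(grid: Grid, x: Float, y: Float): Int {
        val col = (x * grid.cols / grid.widthPx).toInt().coerceIn(0, grid.cols - 1)
        val row = (y * grid.rows / grid.heightPx).toInt().coerceIn(0, grid.rows - 1)
        return row * grid.cols + col
    }

    fun main() {
        val grid = Grid(cols = 2, rows = 3, widthPx = 1080, heightPx = 2340)
        val g = regionIndex(grid, x = 700f, y = 300f) // lands in the top-right region
        println("replace the original material in region G${g + 1} with the target material")
    }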
It is easy to understand that the terminal may also receive a shake instruction, which includes but is not limited to a voice shake instruction and a manual shake instruction. The shake instruction is an instruction for the terminal to exchange the display contents of the first screen and the second screen. When the terminal receives the shake instruction, the terminal can display the material set on the first screen and the target video on the second screen. The shake instruction received by the terminal may be as shown in fig. 12.
S205, receiving a play instruction for the target video.
According to some embodiments, after the terminal edits the initial video based on the target material to generate the target video, the terminal may receive a play instruction for the target video. Playing the target video lets the user view the editing result, so that the target video can be edited again if the result does not meet the user's requirements.
Please refer to fig. 13, which is a schematic flowchart of a video editing method according to an embodiment of the present application. As shown in fig. 13, after the play instruction for the target video is received, the method of the embodiment of the present application may further include the following steps S401 to S403. S401, acquiring a first screen size of the first screen and a second screen size of the second screen based on the play instruction; S402, when the first screen size is larger than the second screen size, playing the target video on the first screen; and S403, when the second screen size is larger than the first screen size, playing the target video on the second screen and displaying the material set on the first screen.
It is easy to understand that the terminal can provide a single-screen target video playing mode. When the terminal receives a play instruction for the target video, the terminal may acquire the first screen size of the first screen and the second screen size of the second screen, and detect whether the first screen size is larger than the second screen size. When the first screen size is larger, the terminal plays the target video on the first screen; when the second screen size is larger, the terminal plays the target video on the second screen and displays the material set on the first screen. By playing the target video on the larger screen in this mode, the terminal can improve the user's viewing experience.
Optionally, when the terminal receives a play instruction for the target video and the acquired first screen size is, for example, 5.5 inches while the second screen size is, for example, 5.2 inches, the terminal plays the target video on the first screen. When the acquired first screen size is, for example, 5.0 inches and the second screen size is, for example, 5.2 inches, the terminal plays the target video on the second screen and displays the material set on the first screen.
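A sketch of this size-based routing follows; the behavior when both screens are the same size is an assumption, since the patent covers only the strict comparisons:

    // Play on the larger screen; when the second screen wins, the material
    // set moves to the first screen, matching steps S401 to S403.
    fun routePlayback(firstInches: Double, secondInches: Double): String =
        if (firstInches >= secondInches)
            "play the target video on the first screen"
        else
            "play the target video on the second screen; show the material set on the first screen"

    fun main() {
        println(routePlayback(5.5, 5.2)) // first screen
        println(routePlayback(5.0, 5.2)) // second screen
    }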
S206, playing the target video on a full screen based on the play instruction.
According to some embodiments, the terminal may also support full-screen playing of the target video. When the terminal receives a play instruction for the target video, the terminal can display the target video on a full screen formed by the first screen and the second screen. The play instruction includes, but is not limited to, a rotation instruction, a click instruction, a voice instruction, and the like. The play instruction received by the terminal may be, for example, a rotation instruction: when the terminal receives it, the terminal acquires its rotation parameter, and when the rotation parameter is larger than a preset rotation parameter, the terminal displays the target video on the full screen formed by the two screens. An example schematic diagram of the terminal interface at this time may be as shown in fig. 14. Checking the rotation parameter in this way reduces misoperation of the terminal.
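A sketch of the rotation gate follows; the 45-degree figure is an illustrative assumption, as the patent says only that the rotation parameter must exceed a preset value:

    // Hypothetical preset rotation parameter.
    const val ROTATION_THRESHOLD_DEG = 45.0

    // Entering full-screen playback only above the threshold suppresses
    // accidental triggers from small movements of the terminal.
    fun onRotation(rotationDeg: Double, enterFullScreen: () -> Unit) {
        if (rotationDeg > ROTATION_THRESHOLD_DEG) enterFullScreen()
    }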
S207, receiving a zooming instruction for the target video on the full screen, displaying the zoomed target video in a first area of the full screen, and displaying a reference video pushed for the target video in a second area of the full screen.
According to some embodiments, after the terminal displays the target video on the full screen formed by the first screen and the second screen, the terminal may further receive a zooming instruction for the target video. The terminal can then display the zoomed target video in a first area of the full screen and display a reference video pushed for the target video in a second area of the full screen. The zooming instruction includes, but is not limited to, a voice zoom instruction, a text zoom instruction, and a click zoom instruction. The zoom instruction received by the terminal may be, for example, a click zoom instruction: the user can click the target video, the terminal displays a frame around the target video, and on receiving a moving instruction for that frame the terminal displays the zoomed target video in the first area of the full screen. The first area is simply some part of the full screen and does not refer to a fixed area on it.
It is easy to understand that, when the terminal receives a zoom instruction for the target video on the full screen, the terminal may identify a key image in the target video using an image recognition algorithm and obtain a reference video corresponding to the target video based on that key image. While displaying the zoomed target video in the first area of the full screen, the terminal can thus display the reference video pushed for the target video in the second area. For example, the terminal identifies the key image in a Z target video using an image recognition algorithm; when the reference video obtained based on the key image is an X reference video, the terminal can display the zoomed Z target video in the first area of the full screen and the X reference video in the second area. An example schematic diagram of the terminal interface at this time may be as shown in fig. 15.
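The recognizer and lookup service in the sketch below are hypothetical stand-ins; the patent states only that a key image is identified with an image recognition algorithm and that reference videos are obtained based on it:

    interface KeyImageRecognizer {
        fun keyImage(videoId: String): String // identifier of the recognized key frame
    }

    interface ReferenceVideoService {
        fun lookup(keyImage: String, tag: String?): List<String>
    }

    // Fetch the reference videos to show in the second area of the full
    // screen; the optional tag category (e.g. "travel") refines the push,
    // as described in the next paragraph.
    fun pushReferenceVideos(
        videoId: String,
        tag: String?,
        recognizer: KeyImageRecognizer,
        service: ReferenceVideoService,
    ): List<String> = service.lookup(recognizer.keyImage(videoId), tag)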
It is easy to understand that, before displaying the reference video in the second area of the full screen, the terminal may further obtain the tag category of the target video and push the reference video for the target video based on that tag category. For example, when the terminal finds that the video tag of the Z target video is a travel tag, the terminal can push a reference video corresponding to the travel tag for the target video.
Optionally, after the terminal displays the reference video pushed for the target video in the second area of the full screen, the terminal may receive a second editing instruction for the target video and edit the target video based on the reference video and that instruction. After the initial video has been edited based on the target material to generate the target video, the terminal can thus edit the target video again based on the reference video, so that the target video better meets the user's requirements and the user experience is improved.
According to some embodiments, after the terminal receives the zoom instruction for the target video on the full screen, displays the zoomed target video in the first area, and displays the reference video pushed for the target video in the second area, the terminal can also receive a browsing instruction for the reference video and update the reference video displayed in the second area. The browsing instruction includes, but is not limited to, a click browsing instruction, a voice browsing instruction, and a touch browsing instruction.
It is easy to understand that the terminal may also update the reference video displayed in the second area according to a preset update duration, which may be, for example, 15 seconds. When the reference video currently pushed for the target video in the second area of the full screen is the X reference video, the terminal can, after 15 seconds, display the V reference video pushed for the target video in the second area instead.
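A sketch of this periodic refresh using a plain JDK scheduler follows; the 15-second period comes from the example above, and the refresh hook is a hypothetical callback:

    import java.util.concurrent.Executors
    import java.util.concurrent.ScheduledFuture
    import java.util.concurrent.TimeUnit

    // Re-fetch and redraw the reference videos in the second area on a
    // fixed schedule; cancel the returned future to stop updating.
    fun startReferenceVideoRefresh(periodSeconds: Long = 15, refresh: () -> Unit): ScheduledFuture<*> =
        Executors.newSingleThreadScheduledExecutor()
            .scheduleAtFixedRate({ refresh() }, periodSeconds, periodSeconds, TimeUnit.SECONDS)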
The embodiments of the application provide a video editing method in which the terminal displays the material set on the second screen based on a received second screen opening instruction, which reduces the misoperation of displaying the material set when no video editing is taking place. The terminal then moves the target material to the first screen based on the movement track corresponding to the movement instruction and edits the initial video based on the target material to generate the target video, which reduces misoperation during editing; moving the target material directly to the first screen reduces the switching between the material set interface and the video editing interface and improves the convenience of video editing. Further, when the terminal receives a play instruction, the target video can be displayed on the full screen formed by the first screen and the second screen, which enlarges the video editing area. In addition, the terminal can display the zoomed target video in the first area of the full screen and the reference video pushed for the target video in the second area, so that the user can edit the video with reference to it, which further improves the convenience of video editing and the user experience.
The video editing apparatus provided in the embodiments of the present application is described in detail below with reference to fig. 16. It should be noted that the video editing apparatus shown in fig. 16 executes the methods of the embodiments shown in fig. 2 to fig. 15 of the present application. For convenience of description, only the portions related to the embodiments of the present application are shown; for undisclosed technical details, please refer to the embodiments shown in fig. 2 to fig. 15.
Please refer to fig. 16, which illustrates a schematic structural diagram of a video editing apparatus according to an embodiment of the present application. The video editing apparatus 1600 may be implemented by software, hardware, or a combination of both as all or a part of a user terminal. According to some embodiments, the video editing apparatus 1600 includes an instruction receiving unit 1601 and a video editing unit 1602, specifically configured to:
an instruction receiving unit 1601 configured to receive a first editing instruction for an initial video displayed on a first screen, and display a material set on a second screen;
a video editing unit 1602, configured to receive a moving instruction for a target material in the material set, and edit the initial video based on the target material after the target material is moved to the first screen, so as to generate a target video.
According to some embodiments, when receiving a moving instruction for a target material in the material set and editing the initial video based on the target material after the target material has been moved to the first screen to generate the target video, the video editing unit 1602 is specifically configured for:
receiving a moving instruction for a target material in the material set, and acquiring the moving track corresponding to the moving instruction;
and when the moving end point of the moving track is located on the first screen, determining that the target material has moved to the first screen, editing the initial video based on the target material, and generating the target video.
According to some embodiments, when editing the initial video based on the target material, the video editing unit 1602 is specifically configured for:
inserting the target material at the position corresponding to the moving end point in the initial video; or replacing the original material displayed at the moving end point in the initial video with the target material; or determining the position of the moving end point on the first screen and the area corresponding to that position, and replacing the original material displayed in that area with the target material.
According to some embodiments, the video editing apparatus 1600 further includes a material marking unit 1603 for receiving a browsing instruction for each material in the material set before receiving a moving instruction for a target material in the material set, and marking the selected target material;
and receiving a zooming instruction aiming at the target material, and displaying the zoomed target material on the second screen.
According to some embodiments, the video editing apparatus 1600 further includes a video playing unit 1604, configured to receive a play instruction for the target video after the initial video has been edited based on the target material to generate the target video;
based on the playing instruction, playing the target video on a full screen, wherein the full screen consists of a first screen and a second screen; or
Acquiring a first screen size of a first screen and a second screen size of a second screen based on a playing instruction;
when the first screen size is larger than the second screen size, playing the target video on the first screen;
and when the second screen size is larger than the first screen size, playing the target video on the second screen, and displaying the material set on the first screen.
According to some embodiments, the video editing apparatus 1600 further comprises a video pushing unit 1605, configured to receive a zoom instruction for the target video on the full screen after the target video is played on the full screen based on the play instruction, display the zoomed target video in a first area of the full screen, and display a reference video pushed for the target video in a second area of the full screen.
According to some embodiments, the video pushing unit 1605 is further configured to, before the reference video pushed for the target video is displayed in the second area of the full screen, obtain the tag category of the target video and push the reference video for the target video based on the tag category;
after the reference video pushed for the target video is displayed in the second area of the full screen, the video pushing unit is further configured for:
receiving a second editing instruction for the target video;
and editing the target video based on the reference video and the second editing instruction.
According to some embodiments, the video editing apparatus 1600 includes a video updating unit 1606 configured to receive a browsing instruction for the reference video, and update the reference video displayed in the second area.
According to some embodiments, the video editing unit 1602, when displaying the material set on the second screen, is specifically configured to:
and receiving a second screen opening instruction, and displaying the material set on the second screen.
The embodiments of the application provide a video editing apparatus in which the instruction receiving unit receives a first editing instruction for an initial video displayed on a first screen and displays a material set on a second screen, and the video editing unit receives a moving instruction for a target material in the material set and, after the target material has been moved to the first screen, edits the initial video based on the target material to generate a target video. When editing a video, the user therefore only needs to move the target material from the second screen of the video editing apparatus to the first screen for the initial video to be edited there, which reduces the switching between material selection and video editing, reduces the video editing steps, improves the convenience of video editing, and improves the user experience.
Please refer to fig. 17, which is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 17, the terminal 1700 may include: at least one processor 1701, at least one network interface 1704, a user interface 1703, memory 1705, at least one communication bus 1702.
The communication bus 1702 is used to enable, among other things, connectivity communication between these components.
The user interface 1703 may include a Display screen (Display) and a GPS, and the optional user interface 1703 may include a standard wired interface and a wireless interface.
The network interface 1704 may optionally include a standard wired interface or a wireless interface (e.g., WI-FI interface).
The processor 1701 may include one or more processing cores. The processor 1701 connects various parts of the terminal 1700 using various interfaces and lines, and performs the various functions of the terminal 1700 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1705 and by calling data stored in the memory 1705. Optionally, the processor 1701 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1701 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and so on; the GPU renders and draws the content to be displayed by the display screen; and the modem handles wireless communication. It is to be understood that the modem may also be implemented by a separate chip rather than being integrated into the processor 1701.
The memory 1705 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1705 includes a non-transitory computer-readable medium. The memory 1705 may be used to store instructions, programs, code sets, or instruction sets. The memory 1705 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; and the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 1705 may also be at least one storage device located remotely from the processor 1701. As shown in fig. 17, the memory 1705, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program for video editing.
In the terminal 1700 shown in fig. 17, the user interface 1703 is mainly used as an interface for providing input for a user, and acquiring data input by the user; and the processor 1701 may be configured to invoke an application for video editing stored in the memory 1705 and specifically perform the following operations:
receiving a first editing instruction for an initial video displayed on a first screen, and displaying a material set on a second screen;
and receiving a moving instruction for a target material in the material set, and after the target material has been moved to the first screen, editing the initial video based on the target material to generate the target video.
According to some embodiments, when receiving a moving instruction for a target material in the material set and, after the target material has been moved to the first screen, editing the initial video based on the target material to generate the target video, the processor 1701 specifically performs the following steps:
receiving a moving instruction for a target material in the material set, and acquiring the moving track corresponding to the moving instruction;
and when the moving end point of the moving track is located on the first screen, determining that the target material has moved to the first screen, editing the initial video based on the target material, and generating the target video.
According to some embodiments, the processor 1701 is configured to perform the following steps when editing the initial video based on the target material:
inserting the target material at the position corresponding to the moving end point in the initial video; or replacing the original material displayed at the moving end point in the initial video with the target material; or determining the position of the moving end point on the first screen and the area corresponding to that position, and replacing the original material displayed in that area with the target material.
According to some embodiments, before receiving the moving instruction for the target material in the material set, the processor 1701 is further specifically configured to perform the following steps:
receiving a browsing instruction for each material in the material set, and marking the selected target material;
and receiving a zooming instruction for the target material, and displaying the zoomed target material on the second screen.
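A small sketch of this browse-mark-zoom preparation; the clamped zoom range is an assumption:

class MaterialBrowser {
    private val markedTargets = mutableSetOf<String>()
    var previewScale = 1.0f
        private set

    // Browsing instruction: mark the material the user selects as a target.
    fun onBrowseSelect(materialId: String) { markedTargets += materialId }

    // Zooming instruction: rescale the target's preview shown on the second screen.
    fun onZoom(factor: Float) {
        previewScale = (previewScale * factor).coerceIn(0.5f, 4.0f)
    }

    fun isMarked(materialId: String) = materialId in markedTargets
}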
According to some embodiments, after editing the initial video based on the target material to generate the target video, the processor 1701 is further configured to perform the following steps:
receiving a playing instruction for the target video;
based on the playing instruction, playing the target video on a full screen, wherein the full screen consists of the first screen and the second screen; or
acquiring a first screen size of the first screen and a second screen size of the second screen based on the playing instruction;
when the first screen size is larger than the second screen size, playing the target video on the first screen;
and when the second screen size is larger than the first screen size, playing the target video on the second screen, and displaying the material set on the first screen.
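Taken together, these branches amount to a playback-routing decision. A sketch under two stated assumptions: "screen size" is compared as pixel area (the patent does not say whether size means area, diagonal, or resolution), and an exact tie falls back to the first screen:

enum class PlayTarget { FULL_SCREEN, FIRST_SCREEN, SECOND_SCREEN }

fun choosePlayTarget(
    firstWidth: Int, firstHeight: Int,
    secondWidth: Int, secondHeight: Int,
    playOnFullScreen: Boolean  // the playing instruction may request the joined full screen
): PlayTarget {
    if (playOnFullScreen) return PlayTarget.FULL_SCREEN  // first + second screens as one surface
    val firstArea = firstWidth.toLong() * firstHeight
    val secondArea = secondWidth.toLong() * secondHeight
    // When routing to the second screen, the caller would also move the
    // material set onto the first screen, as described above.
    return if (firstArea >= secondArea) PlayTarget.FIRST_SCREEN else PlayTarget.SECOND_SCREEN
}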
According to some embodiments, after playing the target video on the full screen based on the playing instruction, the processor 1701 is further specifically configured to perform the following steps:
receiving a zooming instruction for the target video on the full screen, displaying the zoomed target video in a first area of the full screen, and displaying a reference video pushed for the target video in a second area of the full screen.
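One plausible realization of the two areas is a side-by-side split whose boundary tracks the zoom factor; the horizontal layout is an illustrative assumption, not mandated by the patent:

data class Area(val x: Int, val y: Int, val width: Int, val height: Int)

// Shrink the playing video into a first area proportional to the zoom factor
// and hand the remaining strip to the reference-video panel.
fun splitFullScreen(fullWidth: Int, fullHeight: Int, zoom: Float): Pair<Area, Area> {
    val videoWidth = (fullWidth * zoom.coerceIn(0.1f, 1.0f)).toInt()
    val videoArea = Area(0, 0, videoWidth, fullHeight)                           // zoomed target video
    val referenceArea = Area(videoWidth, 0, fullWidth - videoWidth, fullHeight)  // pushed reference videos
    return videoArea to referenceArea
}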
According to some embodiments, before displaying the reference video pushed for the target video in the second area of the full screen, the processor 1701 is further specifically configured to perform the following steps:
acquiring the tag category of the target video, and pushing a reference video for the target video based on the tag category;
and after the reference video pushed for the target video is displayed in the second area of the full screen, the processor 1701 is further configured to perform the following steps:
receiving a second editing instruction for the target video;
and editing the target video based on the reference video and the second editing instruction.
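The push step can be read as tag-based retrieval: select library videos sharing the target's tag category, ranked by overlap. A sketch under that assumption (the ranking rule is illustrative, not specified by the patent); a browsing instruction, as in the next embodiment, would then page through this list to refresh the second area:

data class VideoMeta(val id: String, val tagCategories: Set<String>)

// Recommend reference videos whose tag categories overlap the target's,
// most-overlapping first.
fun pushReferenceVideos(target: VideoMeta, library: List<VideoMeta>, limit: Int = 5): List<VideoMeta> =
    library.asSequence()
        .filter { it.id != target.id }
        .map { it to it.tagCategories.intersect(target.tagCategories).size }
        .filter { (_, overlap) -> overlap > 0 }
        .sortedByDescending { (_, overlap) -> overlap }
        .take(limit)
        .map { (video, _) -> video }
        .toList()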
According to some embodiments, the processor 1701 is further specifically configured to perform the steps of:
receiving a browsing instruction for the reference video, and updating the reference video displayed in the second area.
According to some embodiments, when displaying the material set on the second screen, the processor 1701 is configured to specifically perform the following steps:
receiving a second screen opening instruction, and displaying the material set on the second screen.
The embodiment of the application provides a terminal. By receiving a first editing instruction for an initial video displayed on a first screen, a material set can be displayed on a second screen; after a moving instruction for a target material in the material set is received and the target material is moved to the first screen, the initial video can be edited based on the target material to generate a target video. Thus, when editing a video, the user only needs to move the target material on the second screen to the first screen in order to edit the initial video on the first screen and generate the target video. This reduces the switching operations between selecting the target material and editing the video, reduces the number of video editing steps, improves the convenience of video editing, and improves the user experience.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above method. The computer-readable storage medium may include, but is not limited to, any type of disk, including floppy disks, optical disks, DVDs, CD-ROMs, microdrives, and magneto-optical disks; ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, and nanosystems (including molecular memory ICs); or any type of media or device suitable for storing instructions and/or data.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the video editing methods as set forth in the above method embodiments.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA) or an Integrated Circuit (IC).
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through service interfaces, devices, or units, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, the program being stored in a computer-readable memory, which may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (12)

1. A method of video editing, the method comprising:
receiving a first editing instruction for an initial video displayed on a first screen, and displaying a material set on a second screen;
and receiving a moving instruction for a target material in the material set, and editing the initial video based on the target material after the target material is moved to the first screen to generate a target video.
2. The method of claim 1, wherein the receiving a moving instruction for a target material in the material set, editing the initial video based on the target material after the target material is moved to the first screen, and generating a target video comprises:
receiving a moving instruction for a target material in the material set, and acquiring a movement track corresponding to the moving instruction;
and when the moving end point of the movement track is located on the first screen, determining that the target material has moved to the first screen, editing the initial video based on the target material, and generating the target video.
3. The method of claim 2, wherein the editing the initial video based on the target material comprises:
inserting the target material into the position corresponding to the moving end point in the initial video; or replacing, with the target material, the original material displayed at the moving end point in the initial video; or determining the position of the moving end point on the first screen and the area corresponding to that position, and replacing the original material displayed in that area with the target material.
4. The method of claim 1, wherein before receiving the moving instruction for the target material in the material set, the method further comprises:
receiving a browsing instruction for each material in the material set, and marking the selected target material;
and receiving a zooming instruction for the target material, and displaying the zoomed target material on the second screen.
5. The method of claim 1, wherein after editing the initial video based on the target material to generate a target video, the method further comprises:
receiving a playing instruction for the target video;
based on the playing instruction, playing the target video on a full screen, wherein the full screen consists of the first screen and the second screen; or
acquiring a first screen size of the first screen and a second screen size of the second screen based on the playing instruction;
when the first screen size is larger than the second screen size, playing the target video on the first screen;
and when the second screen size is larger than the first screen size, playing the target video on the second screen, and displaying the material set on the first screen.
6. The method of claim 5, wherein after the playing of the target video on the full screen based on the playing instruction, the method further comprises:
receiving a zooming instruction for the target video on the full screen, displaying the zoomed target video in a first area of the full screen, and displaying a reference video pushed for the target video in a second area of the full screen.
7. The method of claim 6, wherein before the displaying of the reference video pushed for the target video in the second area of the full screen, the method further comprises:
acquiring the tag category of the target video, and pushing a reference video for the target video based on the tag category;
wherein, after the displaying of the reference video pushed for the target video in the second area of the full screen, the method further comprises:
receiving a second editing instruction for the target video;
and editing the target video based on the reference video and the second editing instruction.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
receiving a browsing instruction for the reference video, and updating the reference video displayed in the second area.
9. The method of claim 1, wherein the displaying a material set on a second screen comprises:
receiving a second screen opening instruction, and displaying the material set on the second screen.
10. A video editing apparatus, the apparatus comprising:
an instruction receiving unit, configured to receive a first editing instruction for an initial video displayed on a first screen, and display a material set on a second screen;
and a video editing unit, configured to receive a moving instruction for a target material in the material set, and edit the initial video based on the target material after the target material is moved to the first screen to generate a target video.
11. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-9 when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of the preceding claims 1 to 9.
CN202010577961.4A 2020-06-23 2020-06-23 Video editing method, device, terminal and storage medium Active CN111770288B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010577961.4A CN111770288B (en) 2020-06-23 2020-06-23 Video editing method, device, terminal and storage medium
PCT/CN2021/087257 WO2021258821A1 (en) 2020-06-23 2021-04-14 Video editing method and device, terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010577961.4A CN111770288B (en) 2020-06-23 2020-06-23 Video editing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111770288A true CN111770288A (en) 2020-10-13
CN111770288B CN111770288B (en) 2022-12-09

Family

ID=72721709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010577961.4A Active CN111770288B (en) 2020-06-23 2020-06-23 Video editing method, device, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN111770288B (en)
WO (1) WO2021258821A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697710B (en) * 2022-04-22 2023-08-18 卡莱特云科技股份有限公司 Material preview method, device, system, equipment and medium based on server
CN115334361B (en) * 2022-08-08 2024-03-01 北京达佳互联信息技术有限公司 Material editing method, device, terminal and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090284619A1 (en) * 2008-05-14 2009-11-19 Sony Corporation Image processing apparatus, image processing method, and program
CN103336686A (en) * 2013-06-05 2013-10-02 福建星网视易信息系统有限公司 Editing device and editing method for terminal playing template of digital signage system
CN104811629A (en) * 2015-04-21 2015-07-29 上海极食信息科技有限公司 Method and system for acquiring video materials on same interface and conducting production on video materials
CN107909634A (en) * 2017-11-30 2018-04-13 努比亚技术有限公司 Image display method, mobile terminal and computer-readable recording medium
CN108628976A (en) * 2018-04-25 2018-10-09 咪咕动漫有限公司 A kind of material methods of exhibiting, terminal and computer storage media
CN110494833A (en) * 2018-05-28 2019-11-22 深圳市大疆创新科技有限公司 A kind of multimedia editing method and intelligent terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770288B (en) * 2020-06-23 2022-12-09 Oppo广东移动通信有限公司 Video editing method, device, terminal and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021258821A1 (en) * 2020-06-23 2021-12-30 Oppo广东移动通信有限公司 Video editing method and device, terminal, and storage medium
CN112565871A (en) * 2020-11-06 2021-03-26 深圳市易平方网络科技有限公司 Video preloading method, intelligent terminal and storage medium
WO2022142750A1 (en) * 2020-12-29 2022-07-07 北京字跳网络技术有限公司 Tutorial-based multimedia resource editing method and apparatus, device, and storage medium
CN114222076A (en) * 2021-12-10 2022-03-22 北京百度网讯科技有限公司 Face changing video generation method, device, equipment and storage medium
CN116095250A (en) * 2022-05-30 2023-05-09 荣耀终端有限公司 Method and device for video cropping
CN116095412A (en) * 2022-05-30 2023-05-09 荣耀终端有限公司 Video processing method and electronic equipment
CN116095250B (en) * 2022-05-30 2023-10-31 荣耀终端有限公司 Method and device for video cropping
CN116095412B (en) * 2022-05-30 2023-11-14 荣耀终端有限公司 Video processing method and electronic equipment

Also Published As

Publication number Publication date
WO2021258821A1 (en) 2021-12-30
CN111770288B (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN111770288B (en) Video editing method, device, terminal and storage medium
US11726645B2 (en) Display apparatus for classifying and searching content, and method thereof
CN109525885B (en) Information processing method, information processing device, electronic equipment and computer readable storage medium
CN108924622B (en) Video processing method and device, storage medium and electronic device
CN108334371B (en) Method and device for editing object
US11604580B2 (en) Configuration of application execution spaces and sub-spaces for sharing data on a mobile touch screen device
KR20150095536A (en) User terminal device and method for displaying thereof
JP2013546081A (en) Method, apparatus, and computer program product for overwriting input
US20140164993A1 (en) Method and electronic device for enlarging and displaying contents
CN113918522A (en) File generation method and device and electronic equipment
KR20150142347A (en) User terminal device, and Method for controlling for User terminal device, and multimedia system thereof
JP2008067354A (en) Image display device, image data providing device, image display system, image display system control method, control program, and recording medium
CN110377220A (en) A kind of instruction response method, device, storage medium and electronic equipment
CN111679772B (en) Screen recording method and system, multi-screen device and readable storage medium
WO2024022473A1 (en) Method for sending comment in live-streaming room, method for receiving comment in live-streaming room, and related device
US20130298005A1 (en) Drawing HTML Elements
KR20140142071A (en) Electronic apparatus and Method for making document thereof
KR20170043944A (en) Display apparatus and method of controlling thereof
CN115460448A (en) Media resource editing method and device, electronic equipment and storage medium
CN115344159A (en) File processing method and device, electronic equipment and readable storage medium
CN114882915A (en) Information recording method, device and electronic equipment
JP2008067356A (en) Image display device, image data providing device, image display system, image display system control method, control program, and recording medium
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
KR20140127131A (en) Method for displaying image and an electronic device thereof
CN113436297A (en) Picture processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant