WO2021258821A1 - Video editing method, apparatus, terminal and storage medium - Google Patents

Video editing method, apparatus, terminal and storage medium

Info

Publication number
WO2021258821A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
screen
target
instruction
terminal
Prior art date
Application number
PCT/CN2021/087257
Other languages
English (en)
French (fr)
Inventor
胡焱华
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2021258821A1 publication Critical patent/WO2021258821A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data

Definitions

  • This application belongs to the field of terminal technology, and specifically relates to a video editing method, device, terminal, and storage medium.
  • With the development of terminal technology, terminals can support more and more functions, continuously enriching users' lives. For example, users can use a terminal to listen to music, watch videos, receive voice messages, and so on.
  • when a user needs to edit a video on the terminal, the user can open the terminal gallery, select a target picture, and then add the target picture to a video template to generate a video. If the target picture does not meet the preset requirements, the user can open the gallery again and reselect a target picture.
  • the embodiments of the present application provide a video editing method, device, terminal, and storage medium, which can improve the convenience of video editing.
  • This technical solution includes:
  • an embodiment of the present application provides a video editing method, and the method includes:
  • an embodiment of the present application provides a video editing method and device, the device including:
  • the instruction receiving unit is configured to receive the first editing instruction for the initial video displayed on the first screen, and display the material set on the second screen;
  • the video editing unit is configured to receive a movement instruction for a target material in the material set, and after the target material is moved to the first screen, edit the initial video based on the target material to generate a target video.
  • an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor.
  • when the processor executes the computer program, the method of any one of the first aspect is implemented.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method described in any one of the above is implemented.
  • an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the above aspects.
  • the computer program product may be a software installation package.
  • the embodiment of the present application provides a video editing method.
  • a material set can be displayed on the second screen, and after a movement instruction for a target material in the material set is received and the target material is moved to the first screen, the initial video can be edited based on the target material to generate the target video.
  • FIG. 1 shows a schematic diagram of an application scenario of a video editing method or video editing device applied to an embodiment of the present application
  • FIG. 2 shows a schematic flowchart of a video editing method according to an embodiment of the present application
  • FIG. 3 shows an example schematic diagram of a terminal interface according to an embodiment of the present application
  • FIG. 4 shows an example schematic diagram of a terminal interface according to an embodiment of the present application
  • FIG. 5 shows an example schematic diagram of a terminal interface according to an embodiment of the present application
  • FIG. 6 shows an example schematic diagram of a terminal interface according to an embodiment of the present application
  • FIG. 7 shows a schematic flowchart of a video editing method according to an embodiment of the present application.
  • FIG. 8 shows an example schematic diagram of a terminal interface according to an embodiment of the present application.
  • FIG. 9 shows a schematic flowchart of a video editing method according to an embodiment of the present application.
  • FIG. 10 shows an example schematic diagram of a terminal interface according to an embodiment of the present application.
  • FIG. 11 shows an example schematic diagram of a terminal interface according to an embodiment of the present application.
  • FIG. 12 shows a schematic diagram of an example of a rotating terminal according to an embodiment of the present application.
  • FIG. 13 shows a schematic flowchart of a video editing method according to an embodiment of the present application
  • FIG. 14 shows an example schematic diagram of a terminal interface according to an embodiment of the present application.
  • FIG. 15 shows an example schematic diagram of a terminal interface according to an embodiment of the present application.
  • FIG. 16 shows a schematic structural diagram of a video editing device according to an embodiment of the present application.
  • FIG. 17 shows a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 1 shows a schematic diagram of an application scenario of a video editing method or a video editing device applied to an embodiment of the present application.
  • the user can click on the gallery control on the display interface of the terminal.
  • when the terminal detects that the user has clicked the gallery control, the terminal can display the pictures stored in the terminal.
  • the user can click the completion control on the terminal display interface.
  • when the terminal detects that the user has clicked the completion control, the terminal can display the target picture on the video editing interface.
  • the terminal can generate a video based on the target picture.
  • however, the terminal needs to open the gallery based on the user's input, which makes the video editing operation complicated.
  • the terminal can display the target picture on the video editing interface.
  • if the user determines that the style of the target picture does not meet the preset requirements, the user must open the gallery again and reselect the target picture, which also makes the video editing operation complicated.
  • the embodiments of the present application provide a video editing method, which can improve the convenience of video editing.
  • the execution subject of the embodiments shown in FIG. 2 to FIG. 15 may be, for example, a terminal.
  • FIG. 2 provides a schematic flowchart of a video editing method according to an embodiment of this application.
  • the method of the embodiment of the present application may include the following steps S101 to S102.
  • S101 Receive a first editing instruction for an initial video displayed on a first screen, and display a material set on a second screen.
  • the execution subject of the embodiments of the present application is a terminal including at least two screens.
  • the terminal includes, but is not limited to, a smart phone with a folding screen, a smart phone with a left-and-right dual display, a smart phone with a top-and-bottom dual display, and so on.
  • the first screen refers to any screen of the terminal; "first" merely distinguishes one of the terminal's screens and does not refer to a fixed screen.
  • the two screens included in the terminal may be A screen and B screen respectively. When the A screen is the first screen, the B screen is the second screen. When the A screen is the second screen, the B screen is the first screen.
  • the initial video may refer to a video that does not contain the target material, and the initial video may or may not contain the original material. According to the video duration, the initial video can be a short video or a long video.
  • a material collection refers to a collection that contains at least one material, where the material collection includes, but is not limited to, animation material, text material, picture material, audio material, and video material. The embodiment of this application is introduced by taking a collection of picture materials as an example.
  • the picture material collection can display multiple pictures in the form of a list, or display multiple pictures in the form of icons.
  • the target material refers to one of the materials in the material collection.
  • the first editing instruction refers to an instruction input by the user on the first screen of the terminal.
  • the first editing instruction includes, but is not limited to, a text editing instruction, a voice editing instruction, a click editing instruction, and so on.
  • the display interface of the terminal may display only the first screen.
  • when the terminal displays the first screen, the first screen may be displayed on a single screen or in full screen.
  • an example schematic diagram of the terminal interface may be as shown in FIG. 3.
  • an example schematic diagram of the terminal interface may be as shown in FIG. 4.
  • when the terminal receives the first editing instruction for the initial video displayed on the first screen, the terminal may display the second screen and display the material collection on the second screen.
  • when the terminal displays the first screen on a single screen and receives the first editing instruction for the initial video displayed on the first screen, the terminal can flip out the second screen and display the material collection on the second screen.
  • when the terminal displays the first screen in full screen and receives an editing instruction for the initial video displayed on the first screen, the terminal may display the second screen based on preset display rules and display the material collection on the second screen.
  • the preset display rule may be, for example, reducing the display area of the first screen, and simultaneously displaying the first screen and the second screen on the full screen.
  • when the terminal receives a click editing instruction for the initial video displayed on the first screen, the terminal can display a collection of picture materials on the second screen.
  • an example schematic diagram of the terminal interface may be as shown in FIG. 5.
  • S102 Receive a movement instruction for the target material in the material set, and after the target material is moved to the first screen, edit the initial video based on the target material to generate the target video.
  • the movement instruction refers to a movement instruction for a target material in the set of materials displayed on the second screen received by the terminal.
  • the movement instruction includes, but is not limited to, a drag movement instruction, a click movement instruction, a voice movement instruction, and so on.
  • the movement instruction in the embodiment of the present application may be, for example, a drag movement instruction.
  • the movement instruction received by the terminal may also be a voice movement instruction, for example.
  • the voice movement instruction may be, for example, “move the Q target material from the second screen to the first screen”.
  • the terminal may receive the movement instruction for the target material in the material collection on the second screen.
  • the terminal can obtain the movement destination corresponding to the movement instruction.
  • the terminal can determine whether the movement destination corresponding to the movement instruction is located on the first screen.
  • when the terminal determines that the moving destination corresponding to the movement instruction is located on the first screen, the terminal moves the target material to the first screen and displays it there.
  • the terminal may, for example, obtain the position coordinates of the moving end point corresponding to the movement instruction.
  • the terminal can determine that the moving destination corresponding to the movement instruction is located on the first screen.
  • the terminal can edit the initial video based on the target material to generate the target video.
  • the movement instruction acquired by the terminal may be a drag movement instruction, and the drag trajectory of the drag movement instruction may be, for example, as shown in FIG. 6.
  • the terminal may obtain the drag end point corresponding to the drag movement instruction.
  • the drag end point of the drag movement instruction acquired by the terminal may be, for example, position B.
  • when the terminal detects that position B is located on the first screen, the terminal can move the W target material to the first screen. After the W target material has been moved to the first screen, the terminal can generate a target video based on it.
  • if, after the W target material has been moved to the first screen, the terminal receives a movement instruction for a C target material and detects that the corresponding movement destination is on the first screen, the terminal may replace the W target material with the C target material.
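The drop-handling behaviour described above can be sketched as follows. This is an illustrative sketch, not part of the patent's disclosure: the screen rectangle, the coordinates, and the material names (W, C, position B) are assumptions used only for the example.

```python
def on_first_screen(point, first_screen_rect):
    """Return True if the drag end point falls inside the first screen's bounds."""
    x, y = point
    left, top, right, bottom = first_screen_rect
    return left <= x < right and top <= y < bottom

def handle_drop(end_point, target_material, current_material, first_screen_rect):
    """If the drop lands on the first screen, the target material replaces
    whatever material (if any) is currently shown there; otherwise nothing changes."""
    if on_first_screen(end_point, first_screen_rect):
        return target_material   # e.g. "C" replaces "W"
    return current_material      # drop outside the first screen: no change

# Assumed layout: first screen occupies x in [0, 1080), y in [0, 1920).
FIRST = (0, 0, 1080, 1920)
shown = handle_drop((500, 700), "W", None, FIRST)   # drop at "position B"
shown = handle_drop((500, 700), "C", shown, FIRST)  # a later drop replaces W with C
```

The same check works for any movement instruction type, since only the movement end point matters.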
  • the terminal may delete the target material based on the delete instruction.
  • after the target material is deleted, the first screen may display a blank interface, or it may display the material that was shown before the target material was moved to the first screen.
  • the delete instruction for the target material received by the terminal includes, but is not limited to, a click delete instruction, a drag delete instruction, a voice delete instruction, and so on.
  • the embodiment of the present application provides a video editing method.
  • a material collection can be displayed on a second screen, and after a movement instruction for a target material in the collection is received and the target material is moved to the first screen, the initial video can be edited based on the target material to generate the target video. Therefore, when editing a video, the user only needs to move the target material from the second screen to the first screen, and the initial video on the first screen can then be edited based on that material. This reduces the switching operations between target-material selection and video editing, reduces the number of editing steps, improves the convenience of video editing, and enhances the user experience.
  • FIG. 7 provides a schematic flowchart of a video editing method according to an embodiment of the present application.
  • the method of the embodiment of the present application may include the following steps S201 to S207.
  • S201 Receive a first editing instruction for the initial video displayed on the first screen.
  • S202 Receive a second screen start instruction, and display the material collection on the second screen.
  • the opening instruction of the second screen includes, but is not limited to, a voice opening instruction, a tap opening instruction, a touch opening instruction, a pressing opening instruction, and so on.
  • the terminal may treat the editing instruction as the second screen's start instruction by default and display the material collection on the second screen.
  • the material collection of the embodiment of the present application is introduced by taking a picture material collection as an example.
  • the opening instruction of the second screen received by the terminal may be, for example, a press instruction. When the terminal receives the pressing pressure on the control corresponding to the second screen, the terminal may open the second screen and display the material collection on it.
  • S203 Receive a movement instruction for the target material in the material set, and obtain a movement track corresponding to the movement instruction.
  • when the terminal receives a movement instruction for the target material in the material set displayed on the second screen, the terminal may obtain the movement track corresponding to the movement instruction.
  • the terminal can control the target material to move synchronously according to the movement trajectory.
  • the terminal may also only acquire the movement track corresponding to the movement instruction without controlling the target material to move synchronously along the track.
  • the movement track corresponding to the movement instruction acquired by the terminal may be in an "S" shape, for example.
  • the terminal can obtain the movement track corresponding to the movement instruction, and control the D material to move synchronously according to the movement track.
  • an example schematic diagram of the terminal interface may be as shown in FIG. 8.
  • FIG. 9 provides a schematic flowchart of a video editing method according to an embodiment of the present application.
  • the method of the embodiment of the present application may further include the following steps S301 to S302 before receiving the movement instruction for the target material in the material set.
  • S301 receiving a browsing instruction for each material in the material set, and marking the selected target material;
  • S302 receiving a zooming instruction for the target material, and displaying the zoomed target material on the second screen.
  • before the terminal receives a movement instruction for the target material in the material collection, the user can browse each material in the collection.
  • the browsing instructions include, but are not limited to, voice browsing instructions, click browsing instructions, and touch browsing instructions.
  • the terminal may set a sliding bar on the second screen, so that the user can operate the material collection by moving the sliding bar.
  • an example schematic diagram of the terminal interface may be as shown in FIG. 10. The size of the sliding bar can be determined based on the number of materials in the material collection and the size of the second screen.
  • when the terminal receives a browsing instruction for each material in the picture material collection, the terminal may move each material based on the browsing instruction.
  • the user may input a marking instruction for the target material, and the terminal may mark the selected target material based on the marking instruction.
  • the target material may be, for example, a target picture in a collection of picture materials.
  • the terminal may receive a zoom instruction for the target material, and display the zoomed target material on the second screen.
  • the zoom command includes, but is not limited to, a voice zoom command, a click zoom command, and a touch zoom command.
  • the materials in the material set displayed on the second screen of the terminal may be T materials, Y materials, U materials, I materials, and D materials.
  • the zoom instruction for the D material received by the terminal may be, for example, a click to zoom instruction.
  • when the terminal receives the zoom instruction, the terminal can zoom the target material and display the zoomed D material on the second screen.
  • an example schematic diagram of the terminal interface may be as shown in FIG. 11.
  • when the terminal receives a movement instruction for the target material in the material set displayed on the second screen, the terminal may obtain the movement track corresponding to the movement instruction.
  • after the terminal acquires the moving end point of the movement track, the terminal can detect whether the moving end point is located on the first screen.
  • when the terminal determines that the moving end point is located on the first screen, the terminal can move the target material to the first screen.
  • after the terminal determines that the target material has moved to the first screen, the terminal can edit the initial video based on the target material to generate the target video.
  • the movement track corresponding to the movement instruction acquired by the terminal may be in an "S" shape, for example.
  • the terminal can detect whether the position of moving end point H is located on the first screen.
  • the terminal can move the target material D to the first screen.
  • the terminal can edit the initial video based on the target material D to generate the target video.
  • when the terminal edits the initial video based on the target material, the terminal can insert the target material at the position in the initial video corresponding to the moving end point, or replace the original material displayed at that position with the target material, which can reduce the steps of inserting or replacing material and improve the convenience of video editing. For example, when the terminal determines that moving end point H is located on the first screen, the terminal may insert the target material D at position H in the initial video. Before the target material is moved to the first screen, the material displayed in the initial video on the first screen is the original material.
  • the terminal may use the target material to replace the original material displayed at the moving end point in the initial video.
  • the original material displayed by the terminal in the initial video of the first screen is the M material.
  • the terminal may use the target material D material to replace the original material M material displayed at the moving end point in the initial video.
  • the terminal may divide the first screen into at least one area and edit the initial video displayed in each area independently, reducing the impact on the initial video displayed in the other areas.
  • when editing the initial video based on the target material, the terminal can also obtain the moving destination of the movement instruction.
  • the movement instruction may be a drag movement instruction, for example.
  • when the terminal receives the drag movement instruction, the terminal can obtain the movement trajectory corresponding to the drag movement instruction and the movement end point of the trajectory.
  • after the terminal acquires the moving destination, the terminal can detect whether the moving destination is on the first screen.
  • if so, the terminal can obtain the location of the moving destination on the first screen and the area corresponding to that location, and replace the original material displayed in that area with the target material.
  • the position may be a coordinate position, for example.
  • replacing the original material displayed in the area with the target material is done directly, without requiring the user to perform a multi-step replacement, which can reduce the operation steps of video editing.
  • when the terminal divides the first screen into G1, G2, G3, G4, G5, and G6 areas of the same size and determines that the position of the moving end point on the first screen is (G211, G221), the terminal determines that the area corresponding to that position is the G2 area, and the terminal can replace the original material displayed in the G2 area with the target material.
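The area lookup in the G1–G6 example can be sketched as below. The 3×2 grid arrangement, labelling order, and pixel sizes are assumptions for illustration, since the patent only states that the areas are of equal size.

```python
def area_for_point(point, screen_w, screen_h, cols=3, rows=2):
    """Map a drop coordinate to one of cols*rows equal areas,
    labelled G1..G6 left-to-right, top-to-bottom (assumed layout)."""
    x, y = point
    col = min(x * cols // screen_w, cols - 1)
    row = min(y * rows // screen_h, rows - 1)
    return f"G{row * cols + col + 1}"

def replace_in_area(layout, point, material, screen_w, screen_h):
    """Replace only the material shown in the area the drop lands in,
    leaving the other areas of the initial video untouched."""
    updated = dict(layout)
    updated[area_for_point(point, screen_w, screen_h)] = material
    return updated
```

For an assumed 1080×1920 first screen, a drop at (400, 100) maps to G2, so only the original material in G2 is replaced.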
  • the terminal can also receive shaking instructions, which include but are not limited to voice shaking instructions and manual shaking instructions.
  • the shaking instruction refers to an instruction for the terminal to exchange the content displayed on the first screen and the content displayed on the second screen.
  • after receiving a shaking instruction, the terminal can display the material collection on the first screen and display the target video on the second screen.
  • the shaking instruction received by the terminal may be as shown in FIG. 12.
  • when the terminal has edited the initial video based on the target material and generated the target video, the terminal can receive a playback instruction for the target video. Playing the target video allows the user to watch the editing effect and edit the target video again when the effect does not meet the user's requirements.
  • FIG. 13 provides a schematic flowchart of a video editing method according to an embodiment of this application.
  • the method may further include the following steps S401 to S403.
  • the terminal can be set to a single-screen mode for playing the target video.
  • when the terminal receives the playback instruction for the target video, the terminal can obtain the first screen size of the first screen and the second screen size of the second screen.
  • the terminal can detect whether the first screen size is larger than the second screen size.
  • when the terminal detects that the first screen size is larger than the second screen size, the terminal plays the target video on the first screen.
  • when the terminal detects that the second screen size is larger than the first screen size, the terminal plays the target video on the second screen and displays the material collection on the first screen.
  • the terminal can play the target video on a larger screen in the single-screen mode of playing the target video, which can improve the user's viewing experience.
  • when the first screen size acquired by the terminal is, for example, 5.5 inches and the second screen size is, for example, 5.2 inches, the terminal plays the target video on the first screen.
  • when the first screen size acquired by the terminal is, for example, 5.0 inches and the second screen size is, for example, 5.2 inches, the terminal plays the target video on the second screen and displays the material collection on the first screen.
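The single-screen playback rule above can be sketched as follows. This is only a sketch: the tie-breaking choice when both screens are the same size is an assumption, since the patent does not specify it.

```python
def choose_playback_layout(first_size, second_size):
    """Play the target video on the larger screen; when the second screen
    is chosen, the first screen keeps showing the material collection.
    Equal sizes default to the first screen (assumption)."""
    if first_size >= second_size:
        return {"play_on": "first"}
    return {"play_on": "second", "first_screen_shows": "material collection"}
```

With the example sizes above, choose_playback_layout(5.5, 5.2) selects the first screen, and choose_playback_layout(5.0, 5.2) selects the second while the first shows the material collection.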
  • the terminal may also set the target video to be played in full screen.
  • the terminal may display the target video on the full screen composed of the first screen and the second screen based on the playback instruction.
  • the playback instructions include but are not limited to rotation instructions, click instructions, voice instructions, etc.
  • the playback instruction received by the terminal may be, for example, a rotation instruction.
  • when the terminal receives the rotation instruction, the terminal can obtain the rotation parameters of the terminal.
  • when the terminal detects that the rotation parameter is greater than the preset rotation parameter, the terminal may display the target video on the full screen composed of the first screen and the second screen.
  • an example schematic diagram of the terminal interface may be as shown in FIG. 14. Detecting the rotation parameter can reduce terminal misoperation.
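The rotation check can be sketched as follows; the 45° threshold is an assumed value for illustration, since the patent leaves the preset rotation parameter unspecified.

```python
PRESET_ROTATION_DEG = 45.0  # assumed threshold; the patent does not give a value

def display_mode(rotation_deg, current_mode="split"):
    """Switch to full-screen playback only when the measured rotation
    exceeds the preset parameter; smaller rotations are ignored,
    which reduces misoperation from accidental movement."""
    if abs(rotation_deg) > PRESET_ROTATION_DEG:
        return "full_screen"
    return current_mode
```

Filtering on the threshold is what prevents small, unintentional rotations from toggling the playback mode.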
  • S207 Receive a zoom instruction for the target video on the full screen, display the zoomed target video in the first area of the full screen, and display the reference video pushed for the target video in the second area of the full screen.
  • the terminal may also receive a zoom instruction for the target video on the full screen.
  • the terminal may display the zoomed target video in the first area of the full screen and display the reference video pushed for the target video in the second area of the full screen.
  • the zoom instruction includes, but is not limited to, a voice zoom instruction, a text zoom instruction, a click zoom instruction, and so on.
  • the zoom instruction received by the terminal may be, for example, a click zoom instruction.
  • the user can click on the target video, and the terminal can display the frame of the target video.
  • the terminal can display the zoomed target video in the first area of the full screen.
  • the first area only refers to a part of the full screen, and this area does not refer to a certain fixed area on the full screen.
  • the terminal can use an image recognition algorithm to identify the key image in the target video and obtain a reference video corresponding to the target video based on the key image. Therefore, when the terminal displays the zoomed target video in the first area of the full screen, the terminal may also display the reference video pushed for the target video in the second area of the full screen. For example, the terminal uses an image recognition algorithm to recognize the key image in the Z target video. When the reference video obtained based on the key image is the X reference video, the terminal can display the zoomed Z target video in the first area of the full screen and display the X reference video in the second area. At this time, an example schematic diagram of the terminal interface may be as shown in FIG. 15.
  • the terminal may also obtain the tag category of the target video and push a reference video for the target video based on the tag category. For example, before the terminal displays the pushed reference video in the second area of the full screen, when the terminal determines that the video tag of the Z target video is a travel tag, the terminal may push a reference video corresponding to the travel tag for the target video.
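The two push strategies above (by key image, by tag) can be sketched as follows. The patent names neither a concrete image recognition algorithm nor a data model, so the coarse colour-histogram matching and the dictionary structures here are illustrative assumptions only.

```python
def histogram(frame, bins=4):
    """Coarse per-channel histogram of an RGB frame given as (r, g, b) pixels."""
    h = [0] * (bins * 3)
    for r, g, b in frame:
        for c, v in enumerate((r, g, b)):
            h[c * bins + min(v * bins // 256, bins - 1)] += 1
    return h

def push_by_key_image(key_frame, library):
    """library: {name: key_frame}. Return the reference video whose key frame's
    histogram is closest (L1 distance) to the target video's key frame."""
    target = histogram(key_frame)
    def dist(name):
        return sum(abs(a - b) for a, b in zip(target, histogram(library[name])))
    return min(library, key=dist)

def push_by_tag(tag, tagged_library):
    """tagged_library: {tag: [names]}. Return videos sharing the target's tag."""
    return tagged_library.get(tag, [])
```

In practice the two strategies can be combined: tag lookup narrows the candidate set, and key-image similarity ranks the remaining candidates.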
  • the terminal may receive a second editing instruction for the target video.
  • the terminal may edit the target video based on the reference video and the second editing instruction.
  • the terminal can edit the target video again based on the reference video to make the target video more in line with the user's requirements and improve the user's experience.
  • when the terminal receives a zoom instruction for the target video on the full screen, the terminal may display the zoomed target video in the first area of the full screen. After the reference video pushed for the target video is displayed in the second area of the full screen, the terminal may also receive a browsing instruction for the reference video to update the reference video displayed in the second area.
  • the browsing instructions include, but are not limited to, click browsing instructions, voice browsing instructions, and touch browsing instructions.
  • when the terminal receives a zoom instruction for the target video on the full screen, it displays the zoomed target video in the first area of the full screen and displays the reference video pushed for the target video in the second area of the full screen.
  • the reference video displayed in the second area can be updated according to the preset update duration.
  • the update duration may be 15 seconds, for example.
  • when the reference video displayed in the second area of the full screen for the target video is reference video X, the terminal may, 15 seconds later, display reference video V pushed for the target video in the second area of the full screen.
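The periodic update described above (X replaced by V after one 15-second interval) can be modeled as cycling through the pushed candidates on a fixed schedule. A sketch of the schedule only; the timer wiring on the terminal is left out:

```python
import itertools

def reference_schedule(candidates, updates):
    """Return the sequence of reference videos shown in the second area
    across `updates` display slots, where a new slot starts at each
    preset update interval (e.g. every 15 seconds), cycling through the
    pushed candidates."""
    return list(itertools.islice(itertools.cycle(candidates), updates))

# With X pushed first, V replaces X at the first 15-second update tick.
print(reference_schedule(["X", "V"], 3))  # ['X', 'V', 'X']
```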
  • the embodiment of the present application provides a video editing method.
  • the terminal can display the material set on the second screen, which can reduce accidental display of the material set when the terminal is not performing video editing.
  • the terminal moves the target material to the first screen based on the movement trajectory corresponding to the movement instruction, and edits the initial video based on the target material to generate the target video, which can reduce the user's errors during video editing; moving the target material directly to the first screen reduces the switching operations between the material set interface and the video editing interface, and improves the convenience of video editing.
  • when the terminal receives the playback instruction, it can display the target video on the full screen composed of the first screen and the second screen, which enlarges the video editing area and makes it more convenient for the user to edit the video.
  • the terminal can also display the zoomed target video in the first area of the full screen and display the reference video pushed for the target video in the second area of the full screen, so that the user can edit the video based on the reference video; this improves the convenience of video editing and, in turn, the user experience.
  • the video editing device provided by the embodiment of the present application will be described in detail below with reference to FIG. 16. It should be noted that the video editing device shown in FIG. 16 is used to execute the method of the embodiment shown in FIG. 2 to FIG. 15 of the present application. For ease of description, only the parts related to the embodiment of the present application are shown. For technical details that are not disclosed, please refer to the embodiments shown in Figures 2 to 15 of this application.
  • FIG. 16 shows a schematic structural diagram of a video editing device according to an embodiment of the present application.
  • the video editing apparatus 1600 can be implemented as all or a part of the user terminal through software, hardware or a combination of the two.
  • the video editing device 1600 includes an instruction receiving unit 1601 and a video editing unit 1602, which are specifically configured to:
  • the instruction receiving unit 1601 is configured to receive a first editing instruction for the initial video displayed on the first screen, and display the material set on the second screen;
  • the video editing unit 1602 is configured to receive a movement instruction for the target material in the material set, and after the target material is moved to the first screen, edit the initial video based on the target material to generate the target video.
  • when the video editing unit 1602 is configured to receive a movement instruction for the target material in the material set and, after the target material is moved to the first screen, edit the initial video based on the target material to generate the target video, it is specifically configured to:
  • receive a movement instruction for the target material in the material set, and obtain the movement trajectory corresponding to the movement instruction;
  • when the movement end point of the movement trajectory is located on the first screen, determine that the target material has been moved to the first screen, and edit the initial video based on the target material to generate the target video.
  • when the video editing unit 1602 is configured to edit the initial video based on the target material, it is specifically configured to: insert the target material into the initial video at the position corresponding to the movement end point; or replace the original material displayed at the movement end point in the initial video with the target material; or determine the position of the movement end point on the first screen and the region corresponding to the position, and replace the original material displayed in the region with the target material.
  • the video editing device 1600 further includes a material marking unit 1603, configured to: before a movement instruction for a target material in the material set is received, receive browsing instructions for each material in the material set, and mark the selected target material; and receive a zoom instruction for the target material, and display the zoomed target material on the second screen.
  • the video editing device 1600 further includes a video playback unit 1604, configured to: after the initial video is edited based on the target material and the target video is generated, receive a playback instruction for the target video; and, based on the playback instruction, play the target video on the full screen composed of the first screen and the second screen; or, based on the playback instruction, obtain the first screen size of the first screen and the second screen size of the second screen; when the first screen size is greater than the second screen size, play the target video on the first screen; and when the second screen size is greater than the first screen size, play the target video on the second screen and display the material set on the first screen.
  • the video editing apparatus 1600 further includes a video pushing unit 1605, configured to: after the target video is played on the full screen based on the playback instruction, receive a zoom instruction for the target video on the full screen, display the zoomed target video in the first area of the full screen, and display the reference video pushed for the target video in the second area of the full screen.
  • the video pushing unit 1605 is further configured to: before the reference video pushed for the target video is displayed in the second area of the full screen, obtain the tag category of the target video, and push a reference video for the target video based on the tag category; and, after the reference video pushed for the target video is displayed in the second area of the full screen, receive a second editing instruction for the target video, and edit the target video based on the reference video and the second editing instruction.
  • the video editing apparatus 1600 includes a video update unit 1606, configured to receive a browsing instruction for a reference video, and update the reference video displayed in the second area.
  • when the video editing unit 1602 is configured to display the material set on the second screen, it is specifically configured to:
  • receive a second-screen opening instruction, and display the material set on the second screen.
  • An embodiment of the present application provides a video editing device.
  • a first editing instruction for an initial video displayed on a first screen is received through the instruction receiving unit, and a material set is displayed on a second screen; the video editing unit can receive a movement instruction for a target material in the material set and, after the target material is moved to the first screen, edit the initial video based on the target material to generate a target video. Therefore, when the user edits a video, he only needs to move the target material on the second screen of the video editing device to the first screen, after which the initial video can be edited on the first screen based on the target material to generate the target video. Reducing the switching operations between selecting target materials and video editing can reduce video editing steps, improve the convenience of video editing, and enhance the user experience.
  • please refer to FIG. 17, which is a schematic structural diagram of a terminal provided by an embodiment of the present application. As shown in FIG. 17, the terminal 1700 may include: at least one processor 1701, at least one network interface 1704, a user interface 1703, a memory 1705, and at least one communication bus 1702.
  • the communication bus 1702 is used to implement connection and communication between these components.
  • the user interface 1703 may include a display screen (Display) and GPS; optionally, the user interface 1703 may also include a standard wired interface and a wireless interface.
  • the network interface 1704 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the processor 1701 may include one or more processing cores.
  • the processor 1701 uses various interfaces and lines to connect the various parts of the entire terminal 1700, and performs the various functions of the terminal 1700 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1705 and calling the data stored in the memory 1705.
  • the processor 1701 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
  • the processor 1701 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), and a modem.
  • the CPU mainly processes the operating system, user interface, and application programs; the GPU is used to render and draw the content that needs to be displayed on the display; the modem is used to process wireless communication. It is understandable that the above-mentioned modem may not be integrated into the processor 1701, but may be implemented by a chip alone.
  • the memory 1705 may include random access memory (Random Access Memory, RAM), and may also include read-only memory (Read-Only Memory, ROM).
  • the memory 1705 includes a non-transitory computer-readable storage medium.
  • the memory 1705 may be used to store instructions, programs, codes, code sets or instruction sets.
  • the memory 1705 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playback function, an image playback function, etc.), instructions for implementing the foregoing method embodiments, and so on; the data storage area may store the data involved in the foregoing method embodiments, and so on.
  • the memory 1705 may also optionally be at least one storage device located remotely from the foregoing processor 1701.
  • the memory 1705 as a computer storage medium may include an operating system, a network communication module, a user interface module, and an application program for video editing.
  • in the terminal 1700 shown in FIG. 17, the user interface 1703 is mainly used to provide an input interface for the user and obtain the data input by the user; the processor 1701 may be used to call the video editing application stored in the memory 1705, and specifically perform the following operations: receive a first editing instruction for the initial video displayed on the first screen, and display the material set on the second screen; and receive a movement instruction for the target material in the material set and, after the target material is moved to the first screen, edit the initial video based on the target material to generate the target video.
  • when the processor 1701 is configured to receive a movement instruction for the target material in the material set and, after the target material is moved to the first screen, edit the initial video based on the target material to generate the target video, it is specifically configured to perform the following steps:
  • receive a movement instruction for the target material in the material set, and obtain the movement trajectory corresponding to the movement instruction;
  • when the movement end point of the movement trajectory is located on the first screen, determine that the target material has been moved to the first screen, and edit the video based on the target material to generate the target video.
  • when the processor 1701 is configured to edit the initial video based on the target material, it is specifically configured to perform the following step: insert the target material into the initial video at the position corresponding to the movement end point; or replace the original material displayed at the movement end point in the initial video with the target material; or determine the position of the movement end point on the first screen and the region corresponding to the position, and replace the original material displayed in the region with the target material.
  • before receiving a movement instruction for the target material in the material set, the processor 1701 is further specifically configured to perform the following steps: receive browsing instructions for each material in the material set, and mark the selected target material; and receive a zoom instruction for the target material, and display the zoomed target material on the second screen.
  • after the processor 1701 edits the initial video based on the target material and generates the target video, it is further specifically configured to perform the following steps: receive a playback instruction for the target video; based on the playback instruction, play the target video on the full screen composed of the first screen and the second screen; or, based on the playback instruction, obtain the first screen size of the first screen and the second screen size of the second screen; when the first screen size is greater than the second screen size, play the target video on the first screen; and when the second screen size is greater than the first screen size, play the target video on the second screen, and display the material set on the first screen.
  • after playing the target video on the full screen based on the playback instruction, the processor 1701 is further specifically configured to perform the following step: receive a zoom instruction for the target video on the full screen, display the zoomed target video in the first area of the full screen, and display the reference video pushed for the target video in the second area of the full screen.
  • before the reference video pushed for the target video is displayed in the second area of the full screen, the processor 1701 is further specifically configured to perform the following steps: obtain the tag category of the target video, and push a reference video for the target video based on the tag category; wherein, after the reference video pushed for the target video is displayed in the second area of the full screen, the processor is further configured to receive a second editing instruction for the target video, and edit the target video based on the reference video and the second editing instruction.
  • the processor 1701 is further specifically configured to perform the following step: receive a browsing instruction for the reference video, and update the reference video displayed in the second area.
  • when the processor 1701 is configured to display the material set on the second screen, it is specifically configured to perform the following step: receive a second-screen opening instruction, and display the material set on the second screen.
  • the embodiment of the present application provides a terminal. By receiving a first editing instruction for an initial video displayed on a first screen, the terminal can display a material set on a second screen; upon receiving a movement instruction for a target material in the material set, after the target material is moved to the first screen, the terminal can edit the initial video based on the target material to generate a target video. Therefore, when the user edits a video, he only needs to move the target material on the second screen to the first screen, after which the initial video can be edited on the first screen based on the target material to generate the target video. This can reduce the switching operations between selecting target materials and video editing, reduce video editing steps, improve the convenience of video editing, and enhance the user experience.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of the above method are realized.
  • the computer-readable storage medium may include, but is not limited to, any type of disk, including floppy disks, optical disks, DVDs, CD-ROMs, micro drives, and magneto-optical disks; ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs); or any type of media or equipment suitable for storing instructions and/or data.
  • the embodiments of the present application also provide a computer program product.
  • the computer program product includes a non-transitory computer-readable storage medium storing a computer program.
  • the computer program is operable to cause a computer to execute part or all of the steps of any video editing method described in the above method embodiments.
  • the technical solution of the present application can be implemented by means of software and/or hardware.
  • the "unit” and “module” in this specification refer to software and/or hardware that can independently complete or cooperate with other components to complete specific functions.
  • the hardware may be a Field-Programmable Gate Array (FPGA), for example. , Integrated Circuit (IC), etc.
  • it should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
  • in the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in a certain embodiment, reference may be made to the related descriptions of other embodiments.
  • the disclosed device may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some service interfaces, devices or units, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable memory.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a memory and includes a number of instructions that enable a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned memory includes various media that can store program code, such as a USB flash drive, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk.
  • the program can be stored in a computer-readable memory, and the memory may include: a flash disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A video editing method, apparatus, terminal, and storage medium. The video editing method includes: receiving a first editing instruction for an initial video displayed on a first screen, and displaying a material set on a second screen; receiving a movement instruction for a target material in the material set, and after the target material is moved to the first screen, editing the initial video based on the target material to generate a target video. By displaying the video editing interface and the picture material display interface on two screens respectively, the user only needs to move the target material on the second screen to the first screen, after which the initial video can be edited on the first screen based on the target material to generate the target video. This can improve the convenience of video editing and thereby improve the user experience.

Description

Video editing method, apparatus, terminal, and storage medium
Technical Field
This application belongs to the field of terminal technology, and specifically relates to a video editing method, apparatus, terminal, and storage medium.
Background
With the development of terminal technology, terminals can support more and more functions, which continuously enrich users' lives. For example, a user can use a terminal to listen to music, watch videos, receive voice messages, and so on.
When a user needs to edit a video with a terminal, the user can open the terminal gallery, select a target picture in the gallery, and then add the target picture to a video template to generate a video. If the target picture does not meet the preset requirements, the user can open the terminal gallery again and reselect a target picture.
Summary
Embodiments of the present application provide a video editing method, apparatus, terminal, and storage medium, which can improve the convenience of video editing. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides a video editing method, the method including:
receiving a first editing instruction for an initial video displayed on a first screen, and displaying a material set on a second screen;
receiving a movement instruction for a target material in the material set, and after the target material is moved to the first screen, editing the initial video based on the target material to generate a target video.
In a second aspect, an embodiment of the present application provides a video editing apparatus, the apparatus including:
an instruction receiving unit, configured to receive a first editing instruction for an initial video displayed on a first screen, and display a material set on a second screen;
a video editing unit, configured to receive a movement instruction for a target material in the material set, and after the target material is moved to the first screen, edit the initial video based on the target material to generate a target video.
In a third aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method described in any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any one of the above.
In a fifth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
An embodiment of the present application provides a video editing method. By receiving an editing instruction for an initial video displayed on a first screen, a material set can be displayed on a second screen; upon receiving a movement instruction for a target material in the material set, after the target material is moved to the first screen, the initial video can be edited based on the target material to generate a target video.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings based on these drawings without creative work.
FIG. 1 shows a schematic diagram of an application scenario of the video editing method or video editing apparatus applied to an embodiment of the present application;
FIG. 2 shows a schematic flowchart of a video editing method according to an embodiment of the present application;
FIG. 3 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 4 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 5 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 6 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 7 shows a schematic flowchart of a video editing method according to an embodiment of the present application;
FIG. 8 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 9 shows a schematic flowchart of a video editing method according to an embodiment of the present application;
FIG. 10 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 11 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 12 shows an example schematic diagram of rotating a terminal according to an embodiment of the present application;
FIG. 13 shows a schematic flowchart of a video editing method according to an embodiment of the present application;
FIG. 14 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 15 shows an example schematic diagram of a terminal interface according to an embodiment of the present application;
FIG. 16 shows a schematic structural diagram of a video editing apparatus according to an embodiment of the present application;
FIG. 17 shows a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
With the development of terminal technology, terminals can support more and more functions, which continuously enrich users' lives. For example, a user can use a terminal to listen to music, watch videos, receive voice messages, and so on.
According to some embodiments, FIG. 1 shows a schematic diagram of an application scenario of the video editing method or video editing apparatus applied to an embodiment of the present application. As shown in FIG. 1, when a user edits a video with a terminal, the user can tap a gallery control on the terminal's display interface. When the terminal detects that the user taps the gallery, the terminal can display the pictures stored in the terminal. When the user selects a target picture, the user can tap a finish control on the terminal's display interface. When the terminal detects that the user taps the finish control, the terminal can display the target picture on the video editing interface. The terminal can generate a video based on the target picture. However, during video editing, the terminal needs to open the terminal gallery based on a gallery-opening instruction input by the user, which makes the video editing operation complicated.
It is easy to understand that when the target picture does not meet the preset requirements, the user needs to tap the gallery control again and reselect a target picture. When the user finishes selecting the target picture, the terminal can display the target picture on the video editing interface. When the user determines that the style of the target picture does not meet the preset requirements, the user can open the terminal gallery again and reselect a target picture, which makes the video editing operation complicated. An embodiment of the present application provides a video editing method that can improve the convenience of video editing.
The video editing method provided by the embodiments of the present application will be described in detail below in conjunction with FIG. 2 to FIG. 15. The execution subject of the embodiments shown in FIG. 2 to FIG. 15 may be, for example, a terminal.
Please refer to FIG. 2, which provides a schematic flowchart of a video editing method according to an embodiment of the present application. As shown in FIG. 2, the method of this embodiment may include the following steps S101 to S102.
S101: Receive a first editing instruction for an initial video displayed on a first screen, and display a material set on a second screen.
According to some embodiments, the execution subject of this embodiment is a terminal including at least two screens, including but not limited to a smartphone with a foldable screen, a smartphone with left and right display screens, a smartphone with upper and lower display screens, and the like. The first screen refers to any screen of the terminal; "first" merely denotes one of the terminal's screens and does not refer to a particular fixed screen. For example, the two screens of the terminal may be screen A and screen B. When screen A is the first screen, screen B is the second screen; when screen A is the second screen, screen B is the first screen.
It is easy to understand that the initial video may refer to a video that does not contain the target material; the initial video may or may not contain original material. By video duration, the initial video may be a short video or a long video. A material set refers to a set containing at least one material, including but not limited to animation materials, text materials, picture materials, audio materials, and video materials. The embodiments of the present application are described by taking a picture material set as an example. The picture material set may display multiple pictures in a list form or in an icon form. The target material refers to one material in the material set.
Optionally, the first editing instruction refers to an instruction input by the user on the first screen of the terminal, including but not limited to a text editing instruction, a voice editing instruction, a tap editing instruction, and so on. When the terminal has not received the first editing instruction, the terminal's display interface may show only the first screen; the terminal may display the first screen on a single screen or in full screen. When the terminal displays the first screen on a single screen, an example schematic diagram of the terminal interface may be as shown in FIG. 3. When the terminal displays the first screen in full screen, an example schematic diagram of the terminal interface may be as shown in FIG. 4.
According to some embodiments, when the terminal receives the first editing instruction for the initial video displayed on the first screen, the terminal can display the second screen and display the material set on the second screen. When the terminal displays the first screen on a single screen and receives the first editing instruction for the initial video displayed on the first screen, the terminal can flip out the second screen and display the material set on the second screen. When the terminal displays the first screen in full screen and receives the editing instruction for the initial video displayed on the first screen, the terminal can display the second screen based on a preset display rule and display the material set on the second screen. The preset display rule may be, for example, to shrink the display area of the first screen and display the first screen and the second screen simultaneously in the full screen.
It is easy to understand that, for example, when the terminal receives a tap editing instruction for the initial video displayed on the first screen, the terminal can display a picture material set on the second screen. At this time, an example schematic diagram of the terminal interface may be as shown in FIG. 5.
S102: Receive a movement instruction for a target material in the material set, and after the target material is moved to the first screen, edit the initial video based on the target material to generate a target video.
According to some embodiments, the movement instruction refers to a movement instruction, received by the terminal, for a target material in the material set displayed on the second screen, including but not limited to a drag movement instruction, a tap movement instruction, a voice movement instruction, and so on. The movement instruction in this embodiment may be, for example, a drag movement instruction. When the terminal detects that the user's finger taps the second screen, the terminal can obtain the tap position and determine the material corresponding to the tap position as the target material. When the terminal detects that the user's finger moves on the display screen, the terminal receives a drag movement instruction for the target material in the material set displayed on the second screen.
According to some embodiments, the movement instruction received by the terminal may also be, for example, a voice movement instruction, such as "move target material Q from the second screen to the first screen". When the terminal detects that the user inputs this voice movement instruction, the terminal receives the movement instruction for the target material in the material set on the second screen.
It is easy to understand that when the terminal receives the movement instruction for the target material, the terminal can obtain the movement end point corresponding to the movement instruction. When the terminal obtains the movement end point corresponding to the movement instruction, the terminal can determine whether the movement end point is located on the first screen. When the terminal determines that the movement end point corresponding to the movement instruction is located on the first screen, the terminal moves the target material to the first screen and displays the target material on the first screen.
Optionally, when the terminal determines whether the movement end point corresponding to the movement instruction is located on the first screen, the terminal may, for example, obtain the position coordinates of the movement end point corresponding to the movement instruction. When the terminal detects that the position coordinates are located on the first screen, the terminal can determine that the movement end point corresponding to the movement instruction is located on the first screen.
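The end-point check described above reduces to a point-in-rectangle test against the first screen's bounds. The sketch below is a minimal illustration; the coordinate system, screen layout, and pixel dimensions are assumptions for the example and are not specified by the application.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x, y):
        """True if point (x, y) falls inside this rectangle."""
        return self.left <= x < self.right and self.top <= y < self.bottom

# Assumed layout: first screen on the left half of a dual-screen device.
FIRST_SCREEN = Rect(0, 0, 1080, 2340)

def ends_on_first_screen(trajectory):
    """Check whether the last point of a movement trajectory lands on the
    first screen, i.e. whether the target material should be dropped there."""
    x, y = trajectory[-1]
    return FIRST_SCREEN.contains(x, y)

print(ends_on_first_screen([(1500, 200), (900, 400)]))  # True
```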
It is easy to understand that after the target material is moved to the first screen, the terminal can edit the initial video based on the target material to generate the target video. For example, the movement instruction obtained by the terminal may be a drag movement instruction, whose drag trajectory may be, for example, as shown in FIG. 6. When the terminal obtains the drag movement instruction, the terminal can obtain the drag end point corresponding to the drag movement instruction, which may be, for example, position B. When the terminal detects that position B is located on the first screen, the terminal can move target material W to the first screen. After the terminal moves target material W to the first screen, the terminal can generate the target video based on target material W.
Optionally, after the terminal moves target material W to the first screen, if the terminal receives a movement instruction for target material C and detects that the movement end point corresponding to that instruction is on the first screen, the terminal can replace target material W with target material C.
According to some embodiments, after the terminal moves the target material to the first screen, if the terminal receives a deletion instruction for the target material, the terminal can delete the target material based on the deletion instruction. When deleting the target material, the terminal may display a blank interface on the first screen, or display the material that was shown before the target material was moved to the first screen. The deletion instruction received by the terminal for the target material includes but is not limited to a tap deletion instruction, a drag deletion instruction, a voice deletion instruction, and so on.
An embodiment of the present application provides a video editing method. By receiving a first editing instruction for an initial video displayed on a first screen, a material set can be displayed on a second screen; upon receiving a movement instruction for a target material in the material set, after the target material is moved to the first screen, the initial video can be edited based on the target material to generate a target video. Therefore, when editing a video, the user only needs to move the target material on the second screen to the first screen, after which the initial video can be edited on the first screen based on the target material to generate the target video. This can reduce the switching operations between selecting target materials and video editing, reduce video editing steps, improve the convenience of video editing, and enhance the user experience.
Please refer to FIG. 7, which provides a schematic flowchart of a video editing method according to an embodiment of the present application. As shown in FIG. 7, the method of this embodiment may include the following steps S201 to S207.
S201: Receive a first editing instruction for an initial video displayed on a first screen.
The specific process is as described above and will not be repeated here.
S202: Receive a second-screen opening instruction, and display a material set on the second screen.
According to some embodiments, the second-screen opening instruction includes but is not limited to a voice opening instruction, a tap opening instruction, a touch opening instruction, a press opening instruction, and so on. When the terminal receives the editing instruction for the initial video displayed on the first screen, the terminal may by default treat the editing instruction as the opening instruction for the second screen, and the terminal can display the material set on the second screen. The material set in this embodiment is described by taking a picture material set as an example.
It is easy to understand that the second-screen opening instruction received by the terminal may be, for example, a press opening instruction. When the terminal receives the editing instruction for the initial video displayed on the first screen, the terminal can receive the pressing force on the press control corresponding to the second screen. When the terminal detects that the pressing force is greater than a preset pressure threshold, the terminal can open the second screen and display the material set on the second screen.
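The press-to-open decision is a simple threshold comparison. A sketch, where the normalized pressure scale and the threshold value are assumptions for illustration:

```python
PRESSURE_THRESHOLD = 0.6  # assumed normalized threshold, not from the application

def should_open_second_screen(pressure):
    """Open the second screen (and show the material set) only when the
    pressing force exceeds the preset threshold, which helps filter out
    light accidental touches."""
    return pressure > PRESSURE_THRESHOLD

print(should_open_second_screen(0.8))  # True
print(should_open_second_screen(0.3))  # False
```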
S203: Receive a movement instruction for a target material in the material set, and obtain the movement trajectory corresponding to the movement instruction.
According to some embodiments, when the terminal receives the movement instruction for the target material in the material set displayed on the second screen, the terminal can obtain the movement trajectory corresponding to the movement instruction. When the terminal obtains the movement trajectory, the terminal can control the target material to move synchronously along the trajectory; alternatively, the terminal may only obtain the movement trajectory without controlling the target material to move along it.
It is easy to understand that the movement trajectory corresponding to the movement instruction may, for example, be S-shaped. When the terminal receives the movement instruction for target material D in the material set, the terminal can obtain the movement trajectory corresponding to the movement instruction and control material D to move synchronously along it. At this time, an example schematic diagram of the terminal interface may be as shown in FIG. 8.
According to some embodiments, please refer to FIG. 9, which provides a schematic flowchart of a video editing method according to an embodiment of the present application. As shown in FIG. 9, before receiving the movement instruction for the target material in the material set, the method may further include the following steps S301 to S302. S301: Receive browsing instructions for each material in the material set, and mark the selected target material. S302: Receive a zoom instruction for the target material, and display the zoomed target material on the second screen.
It is easy to understand that before the terminal receives the movement instruction for the target material in the material set, the user can browse the materials in the material set. While browsing the materials, the user can input browsing instructions for each material, including but not limited to voice browsing instructions, tap browsing instructions, and touch browsing instructions. For example, when the second screen of the terminal displays the material set, the terminal can place a slider on the second screen so that the user can operate the material set by moving the slider. At this time, an example schematic diagram of the terminal interface may be as shown in FIG. 10. The slider can be determined based on the number of materials in the material set and the size of the second screen.
According to some embodiments, when the terminal receives browsing instructions for the materials in the picture material set, the terminal can move the materials based on the browsing instructions. When the user determines the target material while browsing, the user can input a marking instruction for the target material, and the terminal can mark the selected target material based on the marking instruction; the target material may be, for example, a target picture in the picture material set. When the target material is selected, the terminal can receive a zoom instruction for the target material and display the zoomed target material on the second screen. The zoom instruction includes but is not limited to a voice zoom instruction, a tap zoom instruction, and a touch zoom instruction.
It is easy to understand that the materials in the material set displayed on the second screen may be materials T, Y, U, I, and D. When the terminal selects material D as the target material, the zoom instruction received for material D may be, for example, a tap zoom instruction. When the terminal receives the zoom instruction, the terminal can enlarge the target material and display the enlarged material D on the second screen. At this time, an example schematic diagram of the terminal interface may be as shown in FIG. 11.
S204: When the movement end point of the movement trajectory is located on the first screen, determine that the target material has been moved to the first screen, and edit the initial video based on the target material to generate the target video.
According to some embodiments, when the terminal receives the movement instruction for the target material in the material set displayed on the second screen, the terminal can obtain the movement trajectory corresponding to the movement instruction. When the terminal obtains the movement end point of the trajectory, the terminal can detect whether the end point is located on the first screen. When the terminal determines that the end point is on the first screen, the terminal can move the target material to the first screen. After the terminal determines that the target material has been moved to the first screen, the terminal can edit the video based on the target material to generate the target video.
It is easy to understand that when the terminal receives the movement instruction for target material D in the material set displayed on the second screen, the movement trajectory corresponding to the movement instruction may, for example, be S-shaped. When the terminal obtains that the movement end point of the trajectory is position H, the terminal can detect whether position H is located on the first screen. When the terminal determines that position H is on the first screen, the terminal can move target material D to the first screen. After the terminal determines that target material D has been moved to the first screen, the terminal can edit the initial video based on target material D to generate the target video.
According to some embodiments, when the terminal edits the initial video based on the target material, the terminal can insert the target material into the initial video at the position corresponding to the movement end point, or replace the original material displayed at the movement end point in the initial video with the target material. This reduces the steps for inserting or replacing the target material and improves the convenience of video editing. For example, when the terminal determines that movement end point H is on the first screen, the terminal can insert target material D into the initial video at position H corresponding to the movement end point. Before the terminal moves the target material to the first screen, the material displayed in the initial video on the first screen is the original material. When the terminal edits the initial video based on the target material, the terminal can replace the original material displayed at the movement end point with the target material. For example, before the target material is moved to the first screen, the original material displayed in the initial video on the first screen is material M. When editing the initial video based on target material D, the terminal can replace original material M, displayed at the movement end point, with target material D.
Optionally, when editing the initial video, the terminal can divide the first screen into at least one region; editing the initial video displayed in each region can reduce the impact on the initial video displayed in other regions. When the terminal edits the initial video based on the target material, the terminal can also obtain the movement end point of the movement instruction. The movement instruction may be, for example, a drag movement instruction. When the terminal receives the drag movement instruction, the terminal can obtain the movement trajectory corresponding to the drag movement instruction and the movement end point of the trajectory. When the terminal obtains the end point, the terminal can detect whether it is on the first screen. When the terminal determines that the end point is on the first screen, the terminal can obtain the position of the end point on the first screen and the region corresponding to that position, and can replace the original material displayed in that region with the target material. The position may be, for example, a coordinate position. Replacing the original material displayed in the region with the target material replaces it directly, without requiring the user to perform a multi-step replacement, which reduces the number of video editing steps.
According to some embodiments, when the terminal divides the first screen into regions G1, G2, G3, G4, G5, and G6 of equal size, and determines that the position of the movement end point on the first screen is (G211, G221), the terminal determines that the region corresponding to this position is region G2, and the terminal can replace the original material displayed in region G2 with the target material.
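Mapping an end-point coordinate to one of the equal-sized regions G1 to G6 can be done with integer division over a grid. The grid shape (2 columns by 3 rows) and the screen dimensions below are illustrative assumptions, not values from the application:

```python
def region_for_point(x, y, screen_w, screen_h, cols=2, rows=3):
    """Map a point on the first screen to one of cols*rows equal-sized
    regions, numbered G1..G6 row by row (assumed numbering scheme)."""
    col = min(int(x / (screen_w / cols)), cols - 1)
    row = min(int(y / (screen_h / rows)), rows - 1)
    return f"G{row * cols + col + 1}"

# A drop near the top-right of an assumed 1080x2340 first screen falls in
# region G2, so the original material shown in G2 would be replaced by the
# target material.
print(region_for_point(900, 300, 1080, 2340))  # G2
```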
It is easy to understand that the terminal can also receive a shake instruction, including but not limited to a voice shake instruction and a manual shake instruction. The shake instruction refers to an instruction for the terminal to swap the display contents of the first screen and the second screen. When the terminal receives the shake instruction, the terminal can display the material set on the first screen and display the target video on the second screen. The shake instruction received by the terminal may be as shown in FIG. 12.
S205: Receive a playback instruction for the target video.
According to some embodiments, after the terminal edits the initial video based on the target material and generates the target video, the terminal can receive a playback instruction for the target video. Playing the target video lets the user view the editing result, so that the target video can be edited again if the result does not meet the user's requirements.
Please refer to FIG. 13, which provides a schematic flowchart of a video editing method according to an embodiment of the present application. As shown in FIG. 13, after receiving the playback instruction for the target video, the method may further include the following steps S401 to S403. S401: Based on the playback instruction, obtain the first screen size of the first screen and the second screen size of the second screen. S402: When the first screen size is greater than the second screen size, play the target video on the first screen. S403: When the second screen size is greater than the first screen size, play the target video on the second screen, and display the material set on the first screen.
It is easy to understand that the terminal can be set to a single-screen playback mode for the target video. When the terminal receives the playback instruction for the target video, the terminal can obtain the first screen size of the first screen and the second screen size of the second screen, and detect whether the first screen size is greater than the second screen size. When the terminal detects that the first screen size is greater than the second screen size, the terminal plays the target video on the first screen. When the terminal detects that the second screen size is greater than the first screen size, the terminal plays the target video on the second screen and displays the material set on the first screen. In single-screen playback mode, the terminal can play the target video on the larger screen, which improves the user's viewing experience.
Optionally, when the terminal receives the playback instruction for the target video, the first screen size obtained by the terminal may be, for example, 5.5 inches and the second screen size 5.2 inches, in which case the terminal plays the target video on the first screen. When the first screen size obtained by the terminal is, for example, 5.0 inches and the second screen size is 5.2 inches, the terminal plays the target video on the second screen and displays the material set on the first screen.
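The single-screen playback rule of S401 to S403 is a size comparison; a minimal sketch (screen sizes in inches, values illustrative):

```python
def choose_playback_layout(first_size, second_size):
    """Decide where the target video plays in single-screen mode: the
    larger screen plays the video; if the second screen wins, the material
    set moves to the first screen. Ties are resolved in favor of the
    second screen here, since the application leaves the equal-size case
    unspecified."""
    if first_size > second_size:
        return {"video": "first", "materials": None}
    return {"video": "second", "materials": "first"}

print(choose_playback_layout(5.5, 5.2))  # {'video': 'first', 'materials': None}
print(choose_playback_layout(5.0, 5.2))  # {'video': 'second', 'materials': 'first'}
```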
S206: Based on the playback instruction, play the target video in full screen.
According to some embodiments, the terminal can also be set to play the target video in full screen. When the terminal receives the playback instruction for the target video, the terminal can display the target video on the full screen composed of the first screen and the second screen based on the playback instruction. The playback instruction includes but is not limited to a rotation instruction, a tap instruction, a voice instruction, and so on. The playback instruction received by the terminal may be, for example, a rotation instruction. When the terminal receives the rotation instruction, it can obtain the rotation parameter of the terminal. When the terminal detects that the rotation parameter is greater than a preset rotation parameter, the terminal can display the target video on the full screen composed of the first screen and the second screen. At this time, an example schematic diagram of the terminal interface may be as shown in FIG. 14. Checking the rotation parameter can reduce accidental operations of the terminal.
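Gating full-screen playback on the rotation parameter is again a threshold test. The angle unit and the preset value below are assumptions for illustration; the application does not specify them:

```python
PRESET_ROTATION_DEG = 45.0  # assumed threshold, not specified by the application

def enter_fullscreen_on_rotation(rotation_deg):
    """Switch the target video to the full screen formed by both screens
    only when the rotation exceeds the preset parameter, which helps avoid
    accidental triggers from small movements."""
    return rotation_deg > PRESET_ROTATION_DEG

print(enter_fullscreen_on_rotation(90.0))  # True
print(enter_fullscreen_on_rotation(10.0))  # False
```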
S207,接收针对全屏上目标视频的缩放指令,在全屏的第一区域显示缩放后的目标视频,并在全屏的第二区域显示为目标视频推送的参考视频。
根据一些实施例,当终端在第一屏幕和第二屏幕组成的全屏上显示目标视频之后,终端还可以接收针对全屏上目标视频的缩放指令。终端可以在全屏的第一区域显示缩放后的目标视频,并在全屏的第二区域显示为目标视频推送的参考视频。其中该缩放指令包括但不限于语音缩放指令、文字缩放指令以及点击缩放指令等等。终端接收到的缩放指令例如可以是点击缩放指令。用户可以点击目标视频,终端可以显示目标视频的边框,通过接收针对该边框的移动指令,终端可以在全屏的第一区域显示缩放后的目标视频。其中第一区域仅仅是指全屏的其中一部分区域,该区域并不指全屏上的某一固定区域。
易于理解的是,终端接收到针对全屏上目标视频的缩放指令时,终端可以采用图像识别算法识别目标视频中的关键图像,并基于关键图像获取与目标视频对应的参考视频。因此终端在全屏的第一区域显示缩放后的目标视频时,终端还可以在全屏的第二区域显示为目标视频推送的参考视频。例如终端采用图像识别算法识别到识别Z目标视频中的关键图像。终端基于关键图像获取与目标视频对应的参考视频为X参考视频时,终端可以在全屏的第一区域显示缩放后的Z目标视频时,终端还可以在全屏的第二区域显示为目标视频推送的X参考视频。此时,终端界面的举例示意图可以如图15所示。
易于理解的是,当终端在全屏的第二区域显示为目标视频推送的参考视频之前,终端还可以获取目标视频的标签类别,基于标签类别为目标视频推送参考视频。例如终端在全屏的第二区域显示为目标视频推送的参考视频之前,终端获取到Z目标视频的视频标签为旅游标签时,终端可以为目标视频推送与旅游标签对应的参考视频。
可选的,当终端在全屏的第二区域显示为目标视频推送的参考视频之后,终端可以接收针对目标视频的第二编辑指令。终端可以基于参考视频和第二编辑指令,对目标视频进行编辑。当终端基于目标素材对初始视频编辑完成生成目标视频之后,终端还可以基于参考视频对目标视频进行再次编辑,以使目标视频更符合用户的要求,可以提高用户的使用体验。
根据一些实施例,当终端接收到针对全屏上目标视频的缩放指令,终端可以在全屏的第一区域显示缩放后的目标视频。在全屏的第二区域显示为目标视频推送的参考视频之后,终端还可以接收针对参考视频的浏览指令,更新第二区域显示的参考视频。该浏览指令包括但不限于点击浏览指令、语音浏览指令、触控浏览指令。
易于理解的是,当终端接收到针对全屏上目标视频的缩放指令,在全屏的第一区域显示缩放后的目标视频,并在全屏的第二区域显示为目标视频推送的参考视频之后,终端还可以按照预设更新时长,更新第二区域显示的参考视频。该更新时长例如可以是15秒。当终端在全屏的第二区域显示为目标视频推送的参考视频为X参考视频时,终端可以在15秒后在全屏的第二区域显示为目标视频推送V参考视频。
本申请实施例提供一种视频编辑方法,终端基于接收到的第二屏幕开启指令,可以在第二屏幕上 显示素材集合,可以减少终端未进行视频编辑时显示素材集合的误操作。然后,终端基于移动指令对应的移动轨迹将目标素材移动至第一屏幕上,并基于目标素材对初始视频进行编辑,生成目标视频,可以减少用户进行视频编辑的误操作,并且直接将目标素材移动至第一屏幕上,可以减少素材集合界面和视频编辑界面的切换操作,提高视频编辑的便利性。其次,终端接收到播放指令时,可以在第一屏幕和第二屏幕组成的全屏上显示目标视频,可以增加视频编辑的区域,提高用户编辑视频的方便性。另外,终端还可以在全屏的第一区域显示缩放后的目标视频,并在全屏的第二区域显示为目标视频推送的参考视频,以便用户可以基于参考视频进行视频编辑,可以提高视频编辑的便利性,进而可以提高用户体验。
下面将结合附图16,对本申请实施例提供的视频编辑装置进行详细介绍。需要说明的是,附图16所示的视频编辑装置,用于执行本申请图2-图15所示实施例的方法,为了便于说明,仅示出了与本申请实施例相关的部分,具体技术细节未揭示的,请参照本申请图2-图15所示的实施例。
请参见图16,其示出本申请实施例的视频编辑装置的结构示意图。该视频编辑装置1600可以通过软件、硬件或者两者的结合实现成为用户终端的全部或一部分。根据一些实施例,该视频编辑装置1600包括指令接收单元1601和视频编辑单元1602,具体用于:
指令接收单元1601,用于接收针对第一屏幕上所显示的初始视频的第一编辑指令,在第二屏幕上显示素材集合;
视频编辑单元1602,用于接收针对素材集合中目标素材的移动指令,在目标素材移动至第一屏幕之后,基于目标素材对初始视频进行编辑,生成目标视频。
根据一些实施例,视频编辑单元1602,用于接收针对素材集合中目标素材的移动指令,在目标素材移动至第一屏幕之后,基于目标素材对初始视频进行编辑,生成目标视频时,具体用于:
接收针对素材集合中目标素材的移动指令,获取移动指令对应的移动轨迹;
当移动轨迹的移动终点位于第一屏幕之后,确定目标素材移动至第一屏幕上,基于目标素材对初始视频进行编辑,生成目标视频。
根据一些实施例,视频编辑单元1602,用于基于目标素材对初始视频进行编辑时,具体用于:
将目标素材插入至初始视频中移动终点对应的位置,或采用目标素材替换初始视频中移动终点所显示的原始素材,或确定移动终点在第一屏幕上的位置以及位置对应的区域,采用目标素材替换区域所显示的原始素材。
根据一些实施例,该视频编辑装置1600还包括素材标记单元1603,用于接收针对素材集合中目标素材的移动指令之前,接收针对素材集合中各素材的浏览指令,标记选中的目标素材;
接收针对目标素材的缩放指令,在第二屏幕上显示缩放后的目标素材。
根据一些实施例,该视频编辑装置1600还包括视频播放单元1604,用于基于目标素材对初始视频进行编辑,生成目标视频之后,接收针对目标视频的播放指令;
基于播放指令,在全屏上播放目标视频,全屏由第一屏幕和第二屏幕组成;或
基于播放指令,获取第一屏幕的第一屏幕尺寸和第二屏幕的第二屏幕尺寸;
在第一屏幕尺寸大于第二屏幕尺寸时,在第一屏幕上播放目标视频;
在第二屏幕尺寸大于第一屏幕尺寸时,在第二屏幕上播放目标视频,并在第一屏幕上显示素材集合。
根据一些实施例,该视频编辑装置1600还包括视频推送单元1605,用于基于播放指令,在全屏上播放目标视频之后,接收针对全屏上目标视频的缩放指令,在全屏的第一区域显示缩放后的目标视频,并在全屏的第二区域显示为目标视频推送的参考视频。
根据一些实施例,视频推送单元1605,还用于在全屏的第二区域显示为目标视频推送的参考视频之前,获取目标视频的标签类别,基于标签类别为目标视频推送参考视频;
其中,在全屏的第二区域显示为目标视频推送的参考视频之后,还包括:
接收针对目标视频的第二编辑指令;
基于参考视频和第二编辑指令,对目标视频进行编辑。
根据一些实施例,该视频编辑装置1600包括视频更新单元1606,用于接收针对参考视频的浏览 指令,更新第二区域显示的参考视频。
根据一些实施例,视频编辑单元1602,用于在第二屏幕上显示素材集合时,具体用于:
接收第二屏幕开启指令,在第二屏幕上显示素材集合。
本申请实施例提供一种视频编辑装置,通过指令接收单元接收针对第一屏幕上所显示的初始视频的第一编辑指令,在第二屏幕上显示素材集合,视频编辑单元可以接收针对素材集合中目标素材的移动指令,在目标素材移动至第一屏幕之后,基于目标素材对初始视频进行编辑,生成目标视频。因此用户对视频进行编辑时,只需要将视频编辑装置的第二屏幕上的目标素材移动到第一屏幕上,即可在第一屏幕上基于目标素材对初始视频进行编辑,生成目标视频,可以减少选择目标素材和视频编辑的切换操作,可以减少视频编辑操作步骤,可以提高视频编辑的便利性,提升用户体验。
请参见图17,为本申请实施例提供的一种终端的结构示意图。如图17所示,所述终端1700可以包括:至少一个处理器1701,至少一个网络接口1704,用户接口1703,存储器1705,至少一个通信总线1702。
其中,通信总线1702用于实现这些组件之间的连接通信。
其中,用户接口1703可以包括显示屏(Display)和GPS,可选用户接口1703还可以包括标准的有线接口、无线接口。
其中,网络接口1704可选的可以包括标准的有线接口、无线接口(如WI-FI接口)。
其中,处理器1701可以包括一个或者多个处理核心。处理器1701利用各种借口和线路连接整个终端1700内的各个部分,通过运行或执行存储在存储器1705内的指令、程序、代码集或指令集,以及调用存储在存储器1705内的数据,执行终端1700的各种功能和处理数据。可选的,处理器1701可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器1701可集成中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示屏所需要显示的内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器1701中,单独通过一块芯片进行实现。
其中,存储器1705可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory)。可选的,该存储器1705包括非瞬时性计算机可读介质(non-transitory computer-readable storage medium)。存储器1705可用于存储指令、程序、代码、代码集或指令集。存储器1705可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现上述各个方法实施例的指令等;存储数据区可存储上面各个方法实施例中涉及到的数据等。存储器1705可选的还可以是至少一个位于远离前述处理器1701的存储装置。如图17所示,作为一种计算机存储介质的存储器1705中可以包括操作系统、网络通信模块、用户接口模块以及用于视频编辑的应用程序。
在图17所示的终端1700中,用户接口1703主要用于为用户提供输入的接口,获取用户输入的数据;而处理器1701可以用于调用存储器1705中存储的视频编辑的应用程序,并具体执行以下操作:
receiving a first editing instruction for the initial video displayed on the first screen, and displaying a material set on the second screen;
receiving a move instruction for a target material in the material set, and after the target material is moved to the first screen, editing the initial video based on the target material to generate a target video.
According to some embodiments, when receiving the move instruction for the target material in the material set and, after the target material is moved to the first screen, editing the initial video based on the target material to generate the target video, the processor 1701 is specifically configured to perform the following steps:
receiving a move instruction for the target material in the material set, and obtaining a movement trajectory corresponding to the move instruction;
when the endpoint of the movement trajectory is located on the first screen, determining that the target material has been moved to the first screen, and editing the initial video based on the target material to generate the target video.
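As an illustrative sketch of the trajectory-endpoint check described above (the function name and the side-by-side two-screen coordinate model are assumptions for illustration, not part of the disclosure), the decision might look like:

```python
# Assumed layout: the first screen spans x in [0, first_width) and the
# second screen spans x in [first_width, first_width + second_width).
def endpoint_on_first_screen(trajectory, first_width):
    """Return True when the move gesture's endpoint lands on the first screen.

    trajectory: ordered (x, y) touch points of the move instruction;
    the last point is the movement endpoint.
    """
    end_x, _end_y = trajectory[-1]
    return 0 <= end_x < first_width

# A drag that starts on the second screen and ends on the first screen:
drag = [(900, 120), (700, 130), (350, 140)]
print(endpoint_on_first_screen(drag, first_width=540))  # True
```

Only the endpoint matters for the determination; the intermediate trajectory points are used elsewhere (e.g., for animating the drag).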
According to some embodiments, when editing the initial video based on the target material, the processor 1701 is specifically configured to perform the following steps:
inserting the target material at the position in the initial video corresponding to the movement endpoint; or replacing, with the target material, the original material displayed at the movement endpoint in the initial video; or determining the position of the movement endpoint on the first screen and the region corresponding to that position, and replacing, with the target material, the original material displayed in that region.
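The insert-or-replace behaviors above could be modeled on a simple clip list as follows (a toy sketch; the clip-list representation, function name, and mode strings are illustrative assumptions rather than the claimed implementation):

```python
def apply_material(clips, target, mode, index):
    """Edit a list of clips (a toy stand-in for the initial video).

    mode "insert"  -> insert target at the position mapped from the endpoint;
    mode "replace" -> replace the original material shown at that position.
    The input list is left untouched; an edited copy is returned.
    """
    edited = list(clips)
    if mode == "insert":
        edited.insert(index, target)
    elif mode == "replace":
        edited[index] = target
    else:
        raise ValueError(f"unknown edit mode: {mode}")
    return edited

initial = ["intro", "scene-1", "outro"]
print(apply_material(initial, "sticker", "insert", 1))    # inserted before "scene-1"
print(apply_material(initial, "new-scene", "replace", 1))
```

The region-replacement case would differ only in how `index` is derived, mapping the endpoint's on-screen region to a position rather than the raw endpoint.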
According to some embodiments, before receiving the move instruction for the target material in the material set, the processor 1701 is further specifically configured to perform the following steps:
receiving a browsing instruction for each material in the material set, and marking the selected target material;
receiving a zoom instruction for the target material, and displaying the zoomed target material on the second screen.
According to some embodiments, after editing the initial video based on the target material to generate the target video, the processor 1701 is further specifically configured to perform the following steps:
receiving a playback instruction for the target video;
based on the playback instruction, playing the target video on the full screen, where the full screen is composed of the first screen and the second screen; or
based on the playback instruction, obtaining a first screen size of the first screen and a second screen size of the second screen;
when the first screen size is larger than the second screen size, playing the target video on the first screen;
when the second screen size is larger than the first screen size, playing the target video on the second screen, and displaying the material set on the first screen.
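The screen-selection logic above can be sketched as a size comparison (a minimal illustration: comparing by pixel area and falling back to full-screen playback for equal sizes are assumptions, since the embodiments only specify the "larger than" cases):

```python
def choose_playback_screen(first_size, second_size):
    """Pick where to play the target video, mirroring the comparison above.

    Sizes are (width, height) pairs; this sketch compares areas.
    Returns (screen, show_material_set_on_first).
    """
    first_area = first_size[0] * first_size[1]
    second_area = second_size[0] * second_size[1]
    if first_area > second_area:
        return ("first", False)
    if second_area > first_area:
        # The material set stays visible on the (smaller) first screen.
        return ("second", True)
    # Equal sizes are not specified; fall back to full-screen playback.
    return ("full", False)

print(choose_playback_screen((1080, 2400), (720, 1600)))  # ('first', False)
```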
According to some embodiments, after playing the target video on the full screen based on the playback instruction, the processor 1701 is further specifically configured to perform the following steps:
receiving a zoom instruction for the target video on the full screen, displaying the zoomed target video in a first region of the full screen, and displaying, in a second region of the full screen, reference videos pushed for the target video.
According to some embodiments, before the reference videos pushed for the target video are displayed in the second region of the full screen, the processor 1701 is further specifically configured to perform the following steps:
obtaining a tag category of the target video, and pushing reference videos for the target video based on the tag category;
where, after the reference videos pushed for the target video are displayed in the second region of the full screen, the operations further include:
receiving a second editing instruction for the target video;
editing the target video based on the reference videos and the second editing instruction.
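Pushing reference videos by tag category might be sketched like this (the library structure, tag strings, and `limit` parameter are illustrative assumptions; a real system would likely query a recommendation service):

```python
def push_reference_videos(target_tag, library, limit=3):
    """Return up to `limit` videos whose tag category matches the target's."""
    return [video for video, tag in library if tag == target_tag][:limit]

# A tiny in-memory "library" of (video_id, tag_category) pairs:
library = [("v1", "travel"), ("v2", "food"), ("v3", "travel"), ("v4", "travel")]
references = push_reference_videos("travel", library, limit=2)
print(references)  # ['v1', 'v3']
```

The pushed references are then shown in the second region, and a second editing pass can borrow from them (e.g., applying a reference video's transitions or filters to the target video).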
According to some embodiments, the processor 1701 is further specifically configured to perform the following steps:
receiving a browsing instruction for the reference videos, and updating the reference videos displayed in the second region.
According to some embodiments, when displaying the material set on the second screen, the processor 1701 is specifically configured to perform the following steps:
receiving a second-screen enabling instruction, and displaying the material set on the second screen.
An embodiment of this application provides a terminal. By receiving a first editing instruction for the initial video displayed on the first screen, the terminal can display a material set on the second screen; upon receiving a move instruction for a target material in the material set, after the target material is moved to the first screen, the terminal can edit the initial video based on the target material to generate a target video. Therefore, when editing a video, the user only needs to move the target material from the second screen to the first screen to edit the initial video based on the target material on the first screen and generate the target video. This reduces the switching operations between selecting the target material and editing the video, reduces the number of video editing steps, improves the convenience of video editing, and improves the user experience.
This application further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the foregoing method are implemented. The computer-readable storage medium may include, but is not limited to, any type of disk, including floppy disks, optical discs, DVDs, CD-ROMs, microdrives, and magneto-optical disks, as well as ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of medium or device suitable for storing instructions and/or data.
An embodiment of this application further provides a computer program product. The computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any one of the video editing methods described in the foregoing method embodiments.
Those skilled in the art can clearly understand that the technical solutions of this application can be implemented by means of software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a field-programmable gate array (FPGA), an integrated circuit (IC), or the like.
It should be noted that, for the sake of brevity, each of the foregoing method embodiments is described as a series of action combinations. However, those skilled in the art should be aware that this application is not limited by the described order of actions, because according to this application, some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also be aware that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a division by logical function, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some service interfaces, apparatuses, or units, and may be electrical or take other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solutions of this application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of this application. The foregoing memory includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the foregoing embodiments can be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable memory, and the memory may include a flash drive, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, or the like.
The above are merely exemplary embodiments of the present disclosure and do not limit its scope; any equivalent variations and modifications made in accordance with the teachings of the present disclosure still fall within the scope of the present disclosure. Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not described in the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the scope and spirit of the present disclosure being defined by the claims.

Claims (20)

  1. A video editing method, characterized in that the method comprises:
    receiving a first editing instruction for an initial video displayed on a first screen, and displaying a material set on a second screen;
    receiving a move instruction for a target material in the material set, and after the target material is moved to the first screen, editing the initial video based on the target material to generate a target video.
  2. The method according to claim 1, characterized in that the receiving a move instruction for a target material in the material set, and after the target material is moved to the first screen, editing the initial video based on the target material to generate a target video comprises:
    receiving a move instruction for the target material in the material set, and obtaining a movement trajectory corresponding to the move instruction;
    when an endpoint of the movement trajectory is located on the first screen, determining that the target material has been moved to the first screen, and editing the initial video based on the target material to generate the target video.
  3. The method according to claim 2, characterized in that the editing the initial video based on the target material comprises:
    inserting the target material at a position in the initial video corresponding to the movement endpoint; or replacing, with the target material, the original material displayed at the movement endpoint in the initial video; or determining the position of the movement endpoint on the first screen and a region corresponding to the position, and replacing, with the target material, the original material displayed in the region.
  4. The method according to claim 1, characterized in that before the receiving a move instruction for a target material in the material set, the method further comprises:
    receiving a browsing instruction for each material in the material set, and marking the selected target material;
    receiving a zoom instruction for the target material, and displaying the zoomed target material on the second screen.
  5. The method according to claim 1, characterized in that after the editing the initial video based on the target material to generate a target video, the method further comprises:
    receiving a playback instruction for the target video;
    based on the playback instruction, playing the target video on a full screen, the full screen being composed of the first screen and the second screen; or
    based on the playback instruction, obtaining a first screen size of the first screen and a second screen size of the second screen;
    when the first screen size is larger than the second screen size, playing the target video on the first screen;
    when the second screen size is larger than the first screen size, playing the target video on the second screen, and displaying the material set on the first screen.
  6. The method according to claim 5, characterized in that after the playing the target video on the full screen based on the playback instruction, the method further comprises:
    receiving a zoom instruction for the target video on the full screen, displaying the zoomed target video in a first region of the full screen, and displaying, in a second region of the full screen, reference videos pushed for the target video.
  7. The method according to claim 6, characterized in that before the displaying, in the second region of the full screen, reference videos pushed for the target video, the method further comprises:
    obtaining a tag category of the target video, and pushing reference videos for the target video based on the tag category;
    wherein, after the displaying, in the second region of the full screen, reference videos pushed for the target video, the method further comprises:
    receiving a second editing instruction for the target video;
    editing the target video based on the reference videos and the second editing instruction.
  8. The method according to claim 6 or 7, characterized in that the method further comprises:
    receiving a browsing instruction for the reference videos, and updating the reference videos displayed in the second region.
  9. The method according to claim 1, characterized in that the displaying a material set on a second screen comprises:
    receiving a second-screen enabling instruction, and displaying the material set on the second screen.
  10. A video editing apparatus, characterized in that the apparatus comprises:
    an instruction receiving unit, configured to receive a first editing instruction for an initial video displayed on a first screen, and display a material set on a second screen;
    a video editing unit, configured to receive a move instruction for a target material in the material set, and after the target material is moved to the first screen, edit the initial video based on the target material to generate a target video.
  11. The apparatus according to claim 10, characterized in that, when receiving the move instruction for the target material in the material set and, after the target material is moved to the first screen, editing the initial video based on the target material to generate the target video, the video editing unit is specifically configured to:
    receive a move instruction for the target material in the material set, and obtain a movement trajectory corresponding to the move instruction;
    when an endpoint of the movement trajectory is located on the first screen, determine that the target material has been moved to the first screen, and edit the initial video based on the target material to generate the target video.
  12. The apparatus according to claim 10, characterized in that, when editing the initial video based on the target material, the video editing unit is specifically configured to:
    insert the target material at a position in the initial video corresponding to the movement endpoint; or replace, with the target material, the original material displayed at the movement endpoint in the initial video; or determine the position of the movement endpoint on the first screen and a region corresponding to the position, and replace, with the target material, the original material displayed in the region.
  13. The apparatus according to claim 10, characterized in that the video editing apparatus further comprises a material marking unit, configured to: before the move instruction for the target material in the material set is received, receive a browsing instruction for each material in the material set, and mark the selected target material;
    and receive a zoom instruction for the target material, and display the zoomed target material on the second screen.
  14. The apparatus according to claim 10, characterized in that the video editing apparatus further comprises a video playback unit, configured to: after the initial video is edited based on the target material to generate the target video, receive a playback instruction for the target video;
    based on the playback instruction, play the target video on a full screen, the full screen being composed of the first screen and the second screen; or
    based on the playback instruction, obtain a first screen size of the first screen and a second screen size of the second screen;
    when the first screen size is larger than the second screen size, play the target video on the first screen;
    when the second screen size is larger than the first screen size, play the target video on the second screen, and display the material set on the first screen.
  15. The apparatus according to claim 10, characterized in that the video editing apparatus further comprises a video pushing unit, configured to: after the target video is played on the full screen based on the playback instruction, receive a zoom instruction for the target video on the full screen, display the zoomed target video in a first region of the full screen, and display, in a second region of the full screen, reference videos pushed for the target video.
  16. The apparatus according to claim 15, characterized in that the video pushing unit is further configured to: before the reference videos pushed for the target video are displayed in the second region of the full screen, obtain a tag category of the target video, and push reference videos for the target video based on the tag category;
    wherein the video editing unit is further configured to: after the reference videos pushed for the target video are displayed in the second region of the full screen, receive a second editing instruction for the target video;
    and edit the target video based on the reference videos and the second editing instruction.
  17. The apparatus according to claim 15 or 16, characterized in that the video editing apparatus further comprises a video updating unit, configured to receive a browsing instruction for the reference videos, and update the reference videos displayed in the second region.
  18. The apparatus according to claim 10, characterized in that, when displaying the material set on the second screen, the video editing unit is specifically configured to:
    receive a second-screen enabling instruction, and display the material set on the second screen.
  19. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 9 when executing the computer program.
  20. A computer-readable storage medium on which a computer program is stored, characterized in that the program implements the method of any one of claims 1 to 9 when executed by a processor.
PCT/CN2021/087257 2020-06-23 2021-04-14 Video editing method, apparatus, terminal, and storage medium WO2021258821A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010577961.4 2020-06-23
CN202010577961.4A CN111770288B (zh) 2020-06-23 2020-06-23 Video editing method, apparatus, terminal, and storage medium

Publications (1)

Publication Number Publication Date
WO2021258821A1 true WO2021258821A1 (zh) 2021-12-30

Family

ID=72721709

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087257 WO2021258821A1 (zh) 2020-06-23 2021-04-14 视频编辑方法、装置、终端及存储介质

Country Status (2)

Country Link
CN (1) CN111770288B (zh)
WO (1) WO2021258821A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697710A (zh) * 2022-04-22 2022-07-01 卡莱特云科技股份有限公司 Server-based material preview method, apparatus, system, device, and medium
CN115334361A (zh) * 2022-08-08 2022-11-11 北京达佳互联信息技术有限公司 Material editing method, apparatus, terminal, and storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770288B (zh) * 2020-06-23 2022-12-09 Oppo广东移动通信有限公司 Video editing method, apparatus, terminal, and storage medium
CN112565871A (zh) * 2020-11-06 2021-03-26 深圳市易平方网络科技有限公司 Video preloading method, smart terminal, and storage medium
CN114692033A (zh) * 2020-12-29 2022-07-01 北京字跳网络技术有限公司 Tutorial-based multimedia resource editing method, apparatus, device, and storage medium
CN114222076B (zh) * 2021-12-10 2022-11-18 北京百度网讯科技有限公司 Face-swapping video generation method, apparatus, device, and storage medium
CN114564921A (zh) * 2022-02-18 2022-05-31 维沃移动通信有限公司 Document editing method and apparatus
CN116095412B (zh) * 2022-05-30 2023-11-14 荣耀终端有限公司 Video processing method and electronic device
CN117915020A (zh) * 2022-05-30 2024-04-19 荣耀终端有限公司 Method and apparatus for video cropping

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090284619A1 (en) * 2008-05-14 2009-11-19 Sony Corporation Image processing apparatus, image processing method, and program
CN103336686A (zh) * 2013-06-05 2013-10-02 福建星网视易信息系统有限公司 Editing apparatus and editing method for terminal playback templates of a digital signage system
CN104811629A (zh) * 2015-04-21 2015-07-29 上海极食信息科技有限公司 Method and system for obtaining video material and producing content from it within the same interface
CN107909634A (zh) * 2017-11-30 2018-04-13 努比亚技术有限公司 Picture display method, mobile terminal, and computer-readable storage medium
CN108628976A (zh) * 2018-04-25 2018-10-09 咪咕动漫有限公司 Material display method, terminal, and computer storage medium
CN110494833A (zh) * 2018-05-28 2019-11-22 深圳市大疆创新科技有限公司 Multimedia editing method and smart terminal
CN111770288A (zh) * 2020-06-23 2020-10-13 Oppo广东移动通信有限公司 Video editing method, apparatus, terminal, and storage medium



Also Published As

Publication number Publication date
CN111770288B (zh) 2022-12-09
CN111770288A (zh) 2020-10-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 21829890; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase — Ref country code: DE
122 Ep: pct application non-entry in european phase — Ref document number: 21829890; Country of ref document: EP; Kind code of ref document: A1