WO2023179539A1 - Video editing method, apparatus and electronic device - Google Patents

Video editing method, apparatus and electronic device

Info

Publication number
WO2023179539A1
WO2023179539A1 PCT/CN2023/082504 CN2023082504W WO2023179539A1 WO 2023179539 A1 WO2023179539 A1 WO 2023179539A1 CN 2023082504 W CN2023082504 W CN 2023082504W WO 2023179539 A1 WO2023179539 A1 WO 2023179539A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
input
user
identification
target
Prior art date
Application number
PCT/CN2023/082504
Other languages
English (en)
French (fr)
Inventor
张彦雯
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Publication of WO2023179539A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455: Structuring of content involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments

Definitions

  • This application belongs to the field of video technology, and specifically relates to a video editing method, device and electronic equipment.
  • In the related art, the user can trigger the electronic device to run a video clipping application, so that the electronic device can cut a video clip from the complete video through that application. Afterwards, the user can trigger the electronic device to save the video clip in its memory space.
  • The purpose of the embodiments of the present application is to provide a video editing method, apparatus and electronic device, which can solve the problem that an electronic device occupies a large memory space to store edited video clips.
  • Embodiments of the present application provide a video editing method, which includes: receiving a first input from a user to a video playback interface of a first video; in response to the first input, displaying a first identification, where the first identification indicates a first video segment in the first video; receiving a second input from the user; and, in response to the second input, storing the first identification.
  • Embodiments of the present application provide a video editing device.
  • the video editing device includes: a receiving module, a display module, and a storage module.
  • the receiving module is configured to receive the user's first input to the video playback interface of the first video.
  • the display module is configured to display a first identification in response to the first input received by the receiving module, where the first identification indicates the first video segment in the first video.
  • the receiving module is also used to receive the user's second input.
  • the storage module is configured to store the first identification in response to the second input received by the receiving module.
  • Embodiments of the present application provide an electronic device.
  • The electronic device includes a processor and a memory. The memory stores programs or instructions that can be run on the processor. When the programs or instructions are executed by the processor, the steps of the method described in the first aspect are implemented.
  • Embodiments of the present application provide a readable storage medium. Programs or instructions are stored on the readable storage medium, and when the programs or instructions are executed by a processor, the steps of the method described in the first aspect are implemented.
  • Embodiments of the present application provide a chip. The chip includes a processor and a communication interface; the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the method described in the first aspect.
  • Embodiments of the present application provide a computer program product; the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the method described in the first aspect.
  • a first input from the user to the video playback interface of the first video is received; in response to the first input, a first identification is displayed, and the first identification indicates the first video clip in the first video;
  • a second input from the user is received; in response to the second input, the first identification is stored.
  • Figure 1 is a schematic diagram of a video editing method provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of a video editing interface provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of an interface for generating identification provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of an interface for logo editing provided by an embodiment of the present application.
  • FIG. 5 is a schematic interface diagram of video parameter details provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of an interface for viewing logos provided by an embodiment of the present application.
  • Figure 7 is a schematic structural diagram of a video editing device provided by an embodiment of the present application.
  • Figure 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 9 is a hardware schematic diagram of an electronic device provided by an embodiment of the present application.
  • The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects and are not used to describe a specific order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one.
  • The term "and/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
  • In the related art, the electronic device needs to use a large amount of storage space to store the edited video clip. In this way, a large portion of the electronic device's memory space is occupied, resulting in wasted resources.
  • Embodiments of the present application provide a video editing method. After the user triggers the display of an identification through an input on the video playback interface of a video, the user can trigger, through another input, the storage of the identification that indicates a video clip in the video. Compared with the electronic device saving the clipped video clip in memory, this greatly saves storage space.
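The idea above, storing an identification that references a clip rather than the clip itself, can be sketched as follows. This is a minimal illustration under assumed names (`ClipIdentifier` and its fields are not from the patent): the identification is a few bytes of metadata pointing into the source video.

```python
from dataclasses import dataclass

@dataclass
class ClipIdentifier:
    """Hypothetical identification for a clip: references the source
    video by path and timestamps instead of copying the frames."""
    source_path: str
    start_s: float  # start timestamp in seconds
    end_s: float    # end timestamp in seconds

    def duration(self) -> float:
        return self.end_s - self.start_s

# A 10-second clip is stored as metadata, not as re-encoded frames.
ident = ClipIdentifier("xxx.mp4", 5.0, 15.0)
print(ident.duration())  # 10.0
```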
  • an embodiment of the present application provides a video editing method, which may include the following S101 to S104.
  • S101. The video editing device receives the user's first input to the video playback interface of the first video.
  • the above-mentioned first video may be a video played online or a video stored in an electronic device.
  • the first input may be the user's touch input, voice input or gesture input on the video playback interface.
  • For example, the touch input is the user's click input on the video playback interface.
  • the first input may also be other possible inputs, which are not limited in the embodiments of the present application.
  • S102. The video editing device displays the first identification in response to the first input.
  • the above-mentioned first identification indicates the first video segment in the first video.
  • The identifications in this application can be text, symbols, images, etc. used to indicate information, and controls or other containers can be used as carriers to display the information, including but not limited to text identifications, symbol identifications, and image identifications.
  • S103. The video editing device receives the user's second input.
  • the above-mentioned second input may be the user's touch input, voice input or gesture input.
  • For example, the touch input is the user's double-click input on the first identification; for another example, it is the user's click input on a save control.
  • S104. The video editing device responds to the second input and stores the first identification.
  • the video editing method provided by the embodiment of the present application may also include: receiving a third input from the user to the identification control; and, in response to the third input, placing the first video in an editable state.
  • Take the video editing device being a mobile phone as an example. Assume the mobile phone is playing video 1. When the mobile phone displays the video playback interface of video 1, the user can click on an image frame of video 1; after receiving the click input, the mobile phone can display identification 1 in response. Next, if the user wants to save the identification, the user can click the save control; when the mobile phone receives the click input on the save control, it can store identification 1 in response.
  • The embodiment of the present application provides a video editing method. After the user triggers the display of an identification through one input on the video playback interface of a video, the user can trigger, through another input, the storage of the identification indicating a video clip in the video; compared with the electronic device saving the clipped video clip in memory space, this greatly saves storage space.
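The S101 to S104 flow described above can be sketched as a minimal event handler. All class and method names here are hypothetical, not the patent's implementation; the point is that only the identification (metadata) is stored.

```python
class VideoEditor:
    """Minimal sketch of the S101-S104 flow: two user inputs produce a
    displayed identification, which is then stored as metadata."""
    def __init__(self):
        self.displayed = None
        self.stored = []

    def on_first_input(self, video: str, start_s: float, end_s: float):
        # S101/S102: the first input makes the device display an
        # identification for a segment of the first video.
        self.displayed = (video, start_s, end_s)

    def on_second_input(self):
        # S103/S104: the second input stores the identification only,
        # not the video frames themselves.
        if self.displayed is not None:
            self.stored.append(self.displayed)

ed = VideoEditor()
ed.on_first_input("first.mp4", 5.0, 15.0)
ed.on_second_input()
print(len(ed.stored))  # 1
```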
  • the above-mentioned first input includes a first sub-input and a second sub-input; accordingly, the above-mentioned S101 may be specifically implemented through the following S101A, and the above-mentioned S102 may specifically include S102A and S102B.
  • S101A. The video editing device receives the user's first sub-input to the first image frame in the first video, and the user's second sub-input to the second image frame in the first video.
  • the first sub-input is the input of a first playback node on the playback progress bar, and the first playback node corresponds to the first image frame.
  • the second sub-input is the input of the second play node on the play progress bar, and the second play node corresponds to the second image frame.
  • the first input includes a first sub-input and a second sub-input.
  • the first sub-input may be executed first, and then the second sub-input may be executed; or the second sub-input may be executed first, and then the first sub-input may be executed; or the first sub-input and the second sub-input may be executed simultaneously.
  • the specific determination is based on actual usage conditions, and the embodiments of the present application do not limit this.
  • S102A. The video editing device responds to the first sub-input and displays the starting mark point.
  • In one possible case, the first image frame corresponds to the starting marker point; in another possible case, the second image frame corresponds to the starting marker point.
  • S102B. The video editing device responds to the second sub-input and displays the ending mark point and the first identification.
  • the first video segment is a video segment between the first image frame and the second image frame, and the first identification is determined based on the starting mark point and the ending mark point.
  • In one possible case, the second image frame corresponds to the ending marker point; if instead the second image frame corresponds to the starting marker point, then the first image frame corresponds to the ending marker point.
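Since either image frame may end up as the starting or the ending marker point, the segment boundaries can be normalized regardless of input order. A small sketch (the helper name is an assumption):

```python
def segment_from_marks(mark_a: float, mark_b: float) -> tuple[float, float]:
    """Hypothetical helper: the two marker points may be placed in
    either order, so normalize the boundaries with min/max."""
    return (min(mark_a, mark_b), max(mark_a, mark_b))

# The second sub-input may land before the first on the progress bar.
print(segment_from_marks(15.0, 5.0))  # (5.0, 15.0)
```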
  • Example 1 takes the video editing device as a mobile phone.
  • the mobile phone is playing a cartoon (that is, the first video).
  • the mobile phone displays image frame 01 of the first video (that is, the first image frame).
  • the user can click on the image frame 01; after the mobile phone receives the click input, the mobile phone can display the starting mark point 02.
  • When the mobile phone plays image frame 03 of the first video (that is, the second image frame), the user can click on it; the mobile phone may then display the ending mark point 04 and identification a (i.e., the first identification) in response to the click input.
  • In this way, the user can trigger the display of the starting mark point, the ending mark point, and the first identification through inputs on the first image frame and the second image frame. The electronic device can then store the first identification in the storage space, thereby saving the larger memory space that would be required to save an edited video clip.
  • the video editing method provided by the embodiment of the present application may also include the following: S105 and S106.
  • S105. The video editing device receives the user's fourth input to the target mark point.
  • the above-mentioned target mark point includes at least one of the following: a starting mark point and an ending mark point.
  • the above-mentioned fourth input may be the user's touch input, voice input or gesture input on the target mark point.
  • the touch input is the user's drag input on the target marker point.
  • the fourth input may also be other possible inputs, which are not limited in the embodiments of the present application.
  • S106. The video editing device updates the position of the target mark point and the first video clip indicated by the first identification.
  • After the position is updated, the image frame corresponding to the target mark point changes accordingly, and thus the image frames included in the first video segment indicated by the first identification also change.
  • In this way, the user can input on the target mark point to trigger updating of the position of the target mark point and of the first video clip indicated by the first identification, so that the user can update the video clip indicated by the identification according to actual needs. In the video production process, the editing operation on video clips is thus simplified to an operation on the identification.
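A minimal sketch of how dragging a marker point could update the stored segment; the class and method names are assumptions, and the clamping rule (start never passes end) is one plausible policy, not stated in the source.

```python
class MarkedSegment:
    """Hypothetical segment defined by draggable start/end marker points."""
    def __init__(self, start_s: float, end_s: float):
        self.start_s = start_s
        self.end_s = end_s

    def move_marker(self, which: str, new_pos: float) -> None:
        # Dragging a marker updates that boundary; keep start <= end.
        if which == "start":
            self.start_s = min(new_pos, self.end_s)
        else:
            self.end_s = max(new_pos, self.start_s)

seg = MarkedSegment(5.0, 15.0)
seg.move_marker("end", 20.0)  # the identified clip now covers 5s-20s
print(seg.start_s, seg.end_s)  # 5.0 20.0
```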
  • the video editing method provided by the embodiment of the present application may also include the following S107 and S108.
  • S107. The video editing device receives the user's fifth input to the first identification.
  • the fifth input may be the user's touch input, voice input or gesture input to the first identification.
  • the touch input is the user's double-click input on the first identifier.
  • the fifth input may also be other possible inputs, which are not limited in the embodiments of the present application.
  • S108. The video editing device displays the first editing window of the first video clip.
  • the above-mentioned first editing window is used to update the video parameter information of the first video clip.
  • the above-mentioned first editing window may include at least one of the following controls: filter control, text control, and background music control.
  • the filter control is used to update the filter information of the first video clip
  • the text control is used to update the text information of the first video clip
  • the background music control is used to update the background music of the first video clip.
  • the above video parameter information may include at least one of the following: filter information, background music, text information, text display and disappearing animation effect information, etc.
  • the first editing window includes a filter control.
  • When the user operates the filter control, a filter selection list can pop up, so that the user can select a filter parameter from the list, and the video editing device can update the filter information of the first identification to the filter information corresponding to the selected filter parameter.
  • the video editing device can generate the first identification based on the first image frame, the second image frame and the video parameter information.
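Generating the identification from the image frames plus video parameter information can be sketched with a plain dictionary; the structure and key names below are assumptions for illustration only.

```python
# Hypothetical sketch: the identification bundles the segment boundaries
# with editable video parameter information (filter, text, music).
ident = {
    "source": "xxx.mp4",
    "start_s": 5.0,
    "end_s": 15.0,
    "params": {"filter": None, "text": None, "music": None},
}

def update_param(ident: dict, key: str, value: str) -> None:
    """Update one piece of video parameter information on the identification."""
    if key not in ident["params"]:
        raise KeyError(key)
    ident["params"][key] = value

update_param(ident, "filter", "black-and-white")
print(ident["params"]["filter"])  # black-and-white
```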
  • Example 2 takes the video editing device as a mobile phone. With reference to Figure 2 above, the user clicks on identification a; after the mobile phone receives the click input on identification a, in response to the input, as shown in Figure 3, the mobile phone displays an editing window.
  • The editing window includes control 05, control 06 and control 07. Since control 05 is used to update the filter information of the video clip indicated by identification a, the user can click control 05; after the mobile phone receives the click input on control 05, the filter of the first video clip is set in response.
  • The video editing method provided by the embodiment of the present application may also include: the video editing device receives an input to the first identification; in response to the input, the first video clip is played with the updated video parameter information.
  • the image effect of a certain video clip can be quickly changed according to the user's wishes.
  • If the first identification is operated, the first video segment is played with the updated video parameter information; conversely, if the first identification is not operated, the first video clip is played with the original video parameter information.
  • Since the user can trigger the display of the first editing window through an input on the first identification, the user can trigger the update of the video parameter information of the first video clip as needed, and in the video production process the editing operation on the video is simplified to an operation on the identification.
  • the above S104 can be specifically implemented through the following S104A.
  • S104A. The video editing device stores the first identification thumbnail corresponding to the first identification into the identification folder in the album.
  • For example, an identification folder may be created in the album.
  • The identification folder is used to store identification thumbnails, and each identification thumbnail corresponds to a video clip in a video.
  • The above identification folder may also include a second identification thumbnail corresponding to a second identification, the second identification being used to indicate a second video segment in the first video; wherein the video image frames in the second video segment are completely different from the video image frames in the first video segment, or the video image frames in the second video segment are partially the same as the video image frames in the first video segment.
  • In this way, the first identification thumbnail corresponding to the first identification can be added to the identification folder of the album, thereby avoiding repeatedly saving reusable video clips and occupying excess memory space.
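Whether two identified segments of the same video share image frames (the "partially the same" case above) reduces to a time-range intersection test. A sketch, with assumed (start, end) tuples in seconds:

```python
def segments_overlap(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """Hypothetical check: two (start_s, end_s) segments of the same
    video share image frames iff their time ranges intersect."""
    return max(a[0], b[0]) < min(a[1], b[1])

print(segments_overlap((5.0, 15.0), (10.0, 20.0)))  # True
print(segments_overlap((5.0, 15.0), (15.0, 20.0)))  # False
```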
  • the video editing method provided by the embodiment of the present application may also include the following S109 to S112.
  • S109. The video editing device receives the user's sixth input on the identification folder.
  • the sixth input may be the user's touch input, voice input or gesture input for identifying the folder.
  • the touch input is the user's click input on the identification folder.
  • the sixth input may also be other possible inputs, which are not limited in the embodiments of the present application.
  • S110. The video editing device displays P identification thumbnails and a video editing control.
  • the above-mentioned P logo thumbnails include the first logo thumbnail, and P is a positive integer.
  • The shape of the above video editing control can be circular, rectangular or another possible shape; the size of the video editing control is a preset size; and the video editing control can be displayed in any blank area of the folder interface corresponding to the identification folder.
  • the embodiment of the present application does not limit the display form of the editing control.
  • S111. The video editing device receives the user's seventh input to the video editing control.
  • the seventh input may be the user's touch input, voice input or gesture input on the video editing control.
  • the touch input is the user's click input on the video editing control.
  • the seventh input can also be other possible inputs, which are not limited in the embodiments of the present application.
  • S112. The video editing device displays a video editing interface in response to the seventh input.
  • Since the display of the P identification thumbnails and the video editing control can be triggered by the user's input on the identification folder, the user can view the identification thumbnails; and since another input on the video editing control triggers the display of the video editing interface, it is convenient for the user to trigger video splicing in the video editing interface.
  • the above video editing interface includes a first display area and a second display area.
  • The first display area includes at least one video thumbnail, and each video thumbnail corresponds to a video clip; the second display area includes at least one identification thumbnail. Accordingly, after the above S112, the video editing method provided by the embodiment of the present application may also include the following S113 to S115.
  • S113. The video editing device receives the user's eighth input to the target video thumbnail in the at least one video thumbnail, and receives the user's ninth input to the target identification thumbnail in the at least one identification thumbnail.
  • the eighth input may be the user's touch input, voice input or gesture input on the target video thumbnail.
  • the touch input is the user's click input on the target video thumbnail.
  • the eighth input may also be other possible inputs, which are not limited in the embodiments of the present application.
  • the ninth input may be the user's touch input, voice input or gesture input on the target identification thumbnail.
  • the touch input is the user's click input on the thumbnail of the target logo.
  • the ninth input may also be other possible inputs, which are not limited in the embodiments of the present application.
  • the number of the above target video thumbnail and target identification thumbnail is at least one.
  • S114. The video editing device displays the target video thumbnail and the target identification thumbnail in a third display area of the video editing interface.
  • the arrangement order of the target video thumbnails and target logo thumbnails has a correlation relationship with the eighth input and the ninth input.
  • the arrangement order of the target video thumbnails and the target identification thumbnails is determined based on the input order of the eighth input and the ninth input.
  • For example, if the eighth input is performed first and then the ninth input, the order of the thumbnails is: target video thumbnail, target identification thumbnail; if the ninth input is performed first and then the eighth input, the order is: target identification thumbnail, target video thumbnail.
  • S115. The video editing device generates a target video based on the third video segment corresponding to the target video thumbnail and the fourth video segment corresponding to the target identification thumbnail.
  • The above S115 may specifically include: the video editing device splices the third video clip and the fourth video clip end to end, joining the last video frame of one to the first video frame of the other, to obtain the target video.
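Splicing referenced clips in the chosen order can be sketched as building an edit list; no frames need to be copied until export. The dictionary layout and names below are assumptions for illustration.

```python
def splice(clips: list[dict]) -> list[dict]:
    """Hypothetical sketch: lay clips end to end in the given
    (selection) order; each entry records where the clip lands
    on the output timeline."""
    timeline, t = [], 0.0
    for clip in clips:
        dur = clip["end_s"] - clip["start_s"]
        timeline.append({**clip, "out_start_s": t})
        t += dur
    return timeline

clips = [
    {"source": "video1.mp4", "start_s": 0.0, "end_s": 4.0},
    {"source": "xxx.mp4", "start_s": 5.0, "end_s": 15.0},
]
tl = splice(clips)
print(tl[1]["out_start_s"])  # 4.0
```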
  • the mobile phone displays a video editing interface.
  • the display area 08 of the video editing interface includes video thumbnail 1, video thumbnail 2 and video thumbnail 3.
  • The display area 09 of the video editing interface includes identification thumbnail a, identification thumbnail b and identification thumbnail c. If the user wants to trigger video splicing on the mobile phone, the user can click video thumbnail 1, identification thumbnail a and identification thumbnail c; after the mobile phone receives the click inputs on these three thumbnails (i.e., the eighth input and the ninth input), the mobile phone can display the three thumbnails in display area 10 in response. Afterwards, the mobile phone can generate the target video based on these three video clips.
  • the video editing device can splice the third video clip and the fourth video clip according to the target order to obtain the target video.
  • the above-mentioned target sequence includes any of the following: the selection sequence of the third video clip and the fourth video clip, and the arrangement order of the third video clip and the fourth video clip.
  • the target sequence is the order in which the user clicks on logo thumbnail a, video thumbnail 1 and logo thumbnail c, that is, the selection order of these three thumbnails.
  • Alternatively, video thumbnail 1, identification thumbnail a and identification thumbnail c can be displayed in the third display area 10 of the video editing interface; that is, the target order is the order in which the three thumbnails are arranged in the third display area 10.
  • Since the N video segments can be spliced according to the target order to obtain the target video, after the user adjusts the target order according to actual needs, the N video segments can be spliced according to the adjusted order to obtain different target videos. In this way, the user experience is improved.
  • In the video editing method, since at least one video thumbnail and at least one identification thumbnail are displayed in the video editing interface, when the user wants to trigger the electronic device to perform video splicing, the user only needs to select the target video thumbnail and the target identification thumbnail from the at least one video thumbnail and the at least one identification thumbnail to trigger obtaining the target video based on at least two video segments, without frequently switching interfaces to add the videos to be spliced. This simplifies the operation process of video splicing on electronic devices.
  • the video editing method provided by the embodiment of the present application may also include the following S116 and S117.
  • S116. The video editing device receives the user's tenth input to the third display area.
  • the tenth input may be the user's touch input, voice input or gesture input to the third display area.
  • the touch input is the user's movement input in the third display area.
  • S117. The video editing device updates the display information of the third display area.
  • The above display information includes at least one of the following: the number of target video thumbnails, the positions of the target video thumbnails, the number of target identification thumbnails, and the positions of the target identification thumbnails.
  • the above display information may also include: the order in which the target video thumbnails and target identification thumbnails are arranged in the third display area.
  • Since the user can trigger the update of the display information of the third display area through an input on that area, the user can adjust the video clips to be spliced to obtain a target video that meets the user's needs.
  • the video editing method provided by the embodiment of the present application may also include the following S118 and S119.
  • S118. The video editing device receives the user's eleventh input on the first identification thumbnail.
  • the eleventh input may be the user's touch input, voice input or gesture input on the first logo thumbnail.
  • the touch input is the user's click input on the first logo thumbnail.
  • the eleventh input may also be other possible inputs, which are not limited in the embodiments of the present application.
  • S119. The video editing device displays the target interface in response to the eleventh input.
  • the target interface includes video parameter information of the first video clip.
  • the above target interface may also include at least one of the following: information about the original video of the first video clip corresponding to the first identification thumbnail, and timestamp information of the first video clip corresponding to the first identification thumbnail.
  • Take the video editing device being a mobile phone as an example. If the user clicks on identification thumbnail a, then after the mobile phone receives the click input, as shown in Figure 5, the mobile phone can display the video parameter information 11 of the first video clip corresponding to identification thumbnail a.
  • The video parameter information 11 includes information such as "Original video: xxx.mp4; timestamp: 00:05-00:15; filter: black and white".
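The "MM:SS-MM:SS" timestamp span shown in the details interface above can be parsed back into segment boundaries. A small sketch (the function name and the exact string format handled are assumptions based on the example):

```python
def parse_timestamp(span: str) -> tuple[int, int]:
    """Hypothetical parser for a 'MM:SS-MM:SS' span as shown in the
    example details interface; returns (start, end) in seconds."""
    def to_s(t: str) -> int:
        m, s = t.split(":")
        return int(m) * 60 + int(s)
    start, end = span.split("-")
    return to_s(start), to_s(end)

print(parse_timestamp("00:05-00:15"))  # (5, 15)
```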
  • the user can trigger the display of the video parameter information of the video clip corresponding to the logo thumbnail by inputting a certain logo thumbnail, so that the user can know the filter, text, background music, etc. of the video clip. information.
  • the video editing method provided by the embodiment of the present application may also include the following S120 and S121.
  • S120. The video editing device receives the user's twelfth input.
  • the above-mentioned twelfth input may be the user's touch input, voice input or gesture input on the first logo thumbnail.
  • the touch input is the user's click input on the first logo thumbnail.
  • S121. The video editing device updates the target information in response to the twelfth input.
  • The above target information includes at least one of the following: the video parameter information of the first video clip, and the first video clip itself.
  • when the target interface of the first identification thumbnail is displayed, the user can trigger an update of the target information through an input, so that when the video clip indicated by an identification, or the video parameter information of that video clip, does not meet the user's needs, the user can trigger an update of the video clip or of its video parameter information through operations on the target interface.
  • the video editing method provided by the embodiment of this application may also include the following S122 and S123.
  • the video editing device receives the user's thirteenth input to the video playback interface of the first video.
  • the above-mentioned thirteenth input may be the user's touch input, voice input or gesture input on the video playback interface.
  • the touch input is the user's click input on the label control.
  • the thirteenth input can also be other possible inputs, which are not limited in the embodiments of the present application.
  • the video editing device displays S identifiers of the first video.
  • each identifier indicates a video segment in the first video, and S is a positive integer.
  • For example, take the video editing device as a mobile phone.
  • the playback interface of the first video is displayed, and the playback interface includes an identification control 12. If the user wants to view the identifications of the first video, the user can click the identification control 12. After the mobile phone receives the click input, it can respond to it: as shown in Figure 6, the mobile phone displays video identification a, video identification b, and other identifications.
  • the user can trigger the display of all the identifications included in a video through an input on that video's playback interface. In this way, users can view the identifications while the video is playing.
  • the video editing method provided by the embodiment of the present application may also include the following S124 and S125.
  • the video editing device receives the user's fourteenth input of the target identifier among the S identifiers.
  • the above-mentioned fourteenth input may be the user's touch input, voice input or gesture input on the target identification.
  • the touch input is the user's click input on the target identification.
  • the fourteenth input may also be other possible inputs, which are not limited in the embodiments of the present application.
  • the above target identifier is any one of the S identifiers.
  • the target identifier may be the first identifier, or other identifiers among the S identifiers other than the first identifier.
  • the video editing device displays the second editing window of the third video segment indicated by the target identification.
  • the above-mentioned second editing window is used to update the video parameter information of the third video clip.
  • the above-mentioned fourteenth input may include a first sub-input and a second sub-input.
  • the above-mentioned S124 may specifically include: the video editing device receiving the first sub-input on the target identification; and, in response to the first sub-input, displaying an identification interface for the target identification, where the identification interface includes an identification editing control.
  • the above-mentioned S125 may specifically include: the video editing device receiving the second sub-input on the identification editing control; and, in response to the second sub-input, displaying the second editing window of the third video segment indicated by the target identification.
  • after the mobile phone receives the click input, in response to the click input, as shown in Figure 3, the editing window of the video clip indicated by video identification a is displayed.
  • in the video editing method provided by the embodiment of the present application, an input on one identification among multiple identifications triggers the display of the editing window of the video clip indicated by that identification, so that the video parameter information of the video clip can be edited.
  • another way to enter the editing window is provided, and it is also convenient for users to enter the editing window from different entrances to edit and update the video parameter information.
  • the execution subject may be a video editing device.
  • the method of performing video editing by a video editing device is taken as an example to describe the video editing device provided by the embodiment of the present application.
  • this embodiment of the present application provides a video editing device 200 , which may include a receiving module 201 , a display module 202 and a storage module 203 .
  • the receiving module 201 may be configured to receive the user's first input to the video playback interface of the first video.
  • the display module 202 may be configured to display a first logo in response to the first input received by the receiving module 201, where the first logo indicates the first video segment in the first video.
  • the receiving module 201 may also be used to receive the user's second input.
  • the storage module 203 may be configured to store the first identification in response to the second input received by the receiving module 201 .
  • the video editing device may also include a processing module.
  • the receiving module 201 may also be used to receive the user's third input to the identification control.
  • the processing module may be configured to put the first video in an editable state in response to the third input received by the receiving module.
  • the first input may include a first sub-input and a second sub-input; the receiving module 201 may be specifically configured to receive the user's first sub-input on the first image frame in the first video, and to receive the user's second sub-input on the second image frame in the first video.
  • the display module 202 may be specifically configured to display the starting mark point in response to the first sub-input, and to display the ending mark point and the first identification in response to the second sub-input; wherein the first video segment is the video segment from the first image frame to the second image frame, and the first identification is determined based on the starting mark point and the ending mark point.
  • the video editing device may also include a processing module.
  • the receiving module 201 may also be configured to receive a fourth input from the user to a target mark point, where the target mark point includes at least one of the following: a start mark point and an end mark point.
  • the processing module may be configured to, in response to the fourth input received by the receiving module, update the position of the target marker point and the first video segment indicated by the first identification.
  • the receiving module 201 may also be configured to receive the user's fifth input of the first identification.
  • the display module 202 may be configured to display a first editing window of the first video clip in response to the fifth input, and the first editing window is used to update the video parameter information of the first video clip.
  • the first editing window includes at least one of the following controls: a filter control, a text control, and a background music control.
  • the filter control is used to update the filter information of the first video clip, the text control is used to update the text information of the first video clip, and the background music control is used to update the background music of the first video clip.
  • the storage module 203 may be specifically configured to store the first logo thumbnail corresponding to the first logo into a logo folder in the album.
  • the identification folder also includes a second identification thumbnail corresponding to the second identification, and the second identification is used to indicate the second video segment in the first video; wherein the video image frames in the second video segment are completely different from the video image frames in the first video segment, or the video image frames in the second video segment are partially the same as the video image frames in the first video segment.
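The relationship between two identified segments ("completely different" versus "partially the same" frames) reduces to interval arithmetic over their timestamps. A hedged sketch; the function name is hypothetical:

```python
def segments_overlap(a_start: float, a_end: float,
                     b_start: float, b_end: float) -> bool:
    """True if the half-open intervals [a_start, a_end) and
    [b_start, b_end) share any frames."""
    return max(a_start, b_start) < min(a_end, b_end)

# Video image frames completely different:
assert not segments_overlap(0, 5, 5, 10)
# Video image frames partially the same:
assert segments_overlap(0, 5, 3, 10)
```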
  • the receiving module 201 may also be configured to receive the user's sixth input for identifying the folder.
  • the display module 202 may also be configured to display P logo thumbnails and video editing controls in response to the sixth input received by the receiving module, where the P logo thumbnails include the first logo thumbnail, and P is a positive integer.
  • the receiving module can also be used to receive the user's seventh input to the video editing control.
  • the display module 202 may also be configured to display a video editing interface in response to the seventh input received by the receiving module.
  • the video editing interface may include a first display area and a second display area.
  • the first display area includes at least one video thumbnail, each video thumbnail corresponds to a video segment, and the second display area includes at least one identification thumbnail.
  • the video editing device may also include a processing module.
  • the receiving module 201 may also be configured to receive an eighth input from a user to a target video thumbnail in at least one video thumbnail, and to receive a ninth input from a user to a target identification thumbnail in at least one identification thumbnail.
  • the display module 202 may also be configured to display the target video thumbnail and the target logo thumbnail in the third display area of the video editing interface in response to the eighth input and the ninth input received by the receiving module.
  • the display order of the target video thumbnail and the target identification thumbnail is related to the eighth input and the ninth input.
  • the processing module can be used to generate a target video based on the target video thumbnail and the video clip corresponding to the target identification thumbnail.
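Generating the target video from the selected thumbnails amounts to laying the referenced segments on a timeline in the order the eighth and ninth inputs selected them. A minimal ordered-composition sketch; all names are hypothetical:

```python
def compose_target_video(selected: list) -> tuple:
    """Place each selected segment on a timeline, in selection order.

    Returns (timeline, total_duration); each timeline entry records where
    the segment starts in the composed video ("at").
    """
    timeline, t = [], 0.0
    for seg in selected:
        timeline.append({**seg, "at": t})
        t += seg["end"] - seg["start"]
    return timeline, t

segs = [{"source": "a.mp4", "start": 0, "end": 5},      # 5 s clip
        {"source": "xxx.mp4", "start": 5, "end": 15}]   # 10 s clip
timeline, total = compose_target_video(segs)
assert total == 15.0
assert timeline[1]["at"] == 5.0   # second selection starts where the first ends
```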
  • the receiving module 201 may also be used to receive the user's tenth input to the third display area.
  • the processing module may also be configured to update the display information of the third display area in response to the tenth input received by the receiving module, the display information including at least one of the following: the number of target video thumbnails, the position of the target video thumbnail, the target The number of identification thumbnails and the position of the target identification thumbnail.
  • the receiving module 201 may also be configured to receive the user's eleventh input for the first identification thumbnail.
  • the display module 202 may also be configured to display a target interface in response to the eleventh input received by the receiving module; wherein the target interface includes video parameter information of the first video clip.
  • the receiving module 201 may also be used to receive a twelfth input from the user.
  • the processing module may also be configured to update target information in response to the twelfth input, where the target information includes at least one of the following: video parameter information of the first video segment, and the first video segment.
  • the receiving module 201 may also be used to receive the user's thirteenth input to the video playback interface of the first video.
  • the display module may also be configured to display S identifiers of the first video in response to the thirteenth input received by the receiving module. Each identifier corresponds to a video segment in the first video, and S is a positive integer.
  • the receiving module 201 may also be configured to receive the user's fourteenth input to the target identification among the S identifications.
  • the display module 202 may also be configured to display a second editing window of the third video clip indicated by the target identification in response to the fourteenth input received by the receiving module, and the second editing window is used to update the video parameter information of the third video clip.
  • Embodiments of the present application provide a video editing device. After the user triggers the display of an identification through an input on the video playback interface of a video, the user can trigger, through another input, the storage of that identification, which indicates a video clip in the video. Compared with electronic devices that save edited video clips in memory space, this greatly saves storage space.
  • the video editing device in the embodiment of the present application may be an electronic device or a component in the electronic device, such as an integrated circuit or chip.
  • the electronic device may be a terminal or other devices other than the terminal.
  • the electronic device can be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), or an augmented reality (AR)/virtual reality (VR) device, etc.
  • the video editing device in the embodiment of the present application may be a device with an operating system.
  • the operating system can be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of this application.
  • the video editing device provided by the embodiments of the present application can implement each process implemented by the method embodiments of Figures 1 to 6 and achieve the same technical effect. To avoid duplication, the details will not be described here.
  • this embodiment of the present application also provides an electronic device 300, including a processor 301 and a memory 302.
  • the memory 302 stores programs or instructions that can be run on the processor 301.
  • the program or instruction is executed by the processor 301, each step of the above video editing method embodiment is implemented, and the same technical effect can be achieved. To avoid duplication, the details will not be described here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 9 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 400 includes but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, a processor 410, and other components.
  • the electronic device 400 may also include a power supply (such as a battery) that supplies power to various components.
  • the power supply may be logically connected to the processor 410 through a power management system, thereby implementing functions such as charging management, discharging management, and power consumption management through the power management system.
  • the structure of the electronic device shown in Figure 9 does not constitute a limitation on the electronic device.
  • the electronic device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently, which will not be described again here.
  • the user input unit 407 may be used to receive the user's first input to the video playback interface of the first video.
  • the display unit 406 may be configured to display a first identification in response to the first input received by the user input unit 407, where the first identification indicates the first video segment in the first video.
  • the user input unit 407 can also be used to receive the second input from the user.
  • the memory 409 may be used to store the first identification in response to the second input received by the user input unit 407.
  • the user input unit 407 can also be used to receive the user's third input to the identification control.
  • the processor 410 may be configured to put the first video in an editable state in response to the third input received by the user input unit 407.
  • the first input may include a first sub-input and a second sub-input;
  • the user input unit 407 may be specifically configured to receive the user's first sub-input on the first image frame in the first video, and to receive the user's second sub-input on the second image frame in the first video.
  • the display unit 406 may be specifically configured to display the starting mark point in response to the first sub-input, and to display the ending mark point and the first identification in response to the second sub-input; wherein the first video segment is the video segment from the first image frame to the second image frame, and the first identification is determined based on the starting mark point and the ending mark point.
  • the user input unit 407 may also be used to receive a fourth input from the user to a target marker point, where the target marker point includes at least one of the following: a start marker point and an end marker point.
  • the processor 410 may be configured to, in response to the fourth input received by the user input unit 407, update the position of the target marker point and the first video segment indicated by the first identification.
  • the user input unit 407 may also be used to receive the user's fifth input to the first identification.
  • the display unit 406 may be configured to display a first editing window of the first video clip in response to the fifth input, and the first editing window is used to update the video parameter information of the first video clip.
  • the memory 409 may be specifically configured to store the first logo thumbnail corresponding to the first logo into a logo folder in the album.
  • the user input unit 407 may also be used to receive the user's sixth input for identifying the folder.
  • the display unit 406 may also be configured to display P logo thumbnails and video editing controls in response to the sixth input received by the user input unit 407, where the P logo thumbnails include the first logo thumbnail, and P is a positive integer.
  • the user input unit 407 may also be used to receive a seventh user input to the video editing control.
  • the display unit 406 may also be configured to display a video editing interface in response to the seventh input received by the user input unit 407.
  • the video editing interface may include a first display area and a second display area.
  • the first display area includes at least one video thumbnail, each video thumbnail corresponds to a video segment, and the second display area includes at least one identification thumbnail.
  • the user input unit 407 may also be configured to receive an eighth input from the user to the target video thumbnail in at least one video thumbnail, and to receive a ninth input from the user to the target identification thumbnail in at least one identification thumbnail.
  • the display unit 406 may also be configured to display the target video thumbnail and the target identification thumbnail in the third display area of the video editing interface in response to the eighth input and the ninth input received by the user input unit 407.
  • the display order of the target video thumbnail and the target identification thumbnail is associated with the eighth input and the ninth input.
  • the processor 410 may be configured to generate a target video based on the target video thumbnail and the video clip corresponding to the target identification thumbnail.
  • the user input unit 407 may also be used to receive the user's tenth input to the third display area.
  • the processor 410 may also be configured to, in response to the tenth input received by the user input unit 407, update the display information of the third display area, where the display information includes at least one of the following: the number of target video thumbnails, the position of the target video thumbnail, the number of target identification thumbnails, and the position of the target identification thumbnail.
  • the user input unit 407 may also be used to receive an eleventh input from the user on the first identification thumbnail.
  • the display unit 406 may also be configured to display a target interface in response to the eleventh input received by the user input unit 407; wherein the target interface includes video parameter information of the first video clip.
  • the user input unit 407 can also be used to receive a twelfth input from the user.
  • the processor 410 may also be configured to update target information in response to the twelfth input, where the target information includes at least one of the following: video parameter information of the first video segment, and the first video segment.
  • the user input unit 407 may also be used to receive the user's thirteenth input to the video playback interface of the first video.
  • the display unit 406 may also be configured to display S identifiers of the first video in response to the thirteenth input received by the user input unit 407. Each identifier corresponds to a video segment in the first video, and S is a positive integer.
  • the user input unit 407 may also be used to receive the user's fourteenth input to the target identification among the S identifications.
  • the display unit 406 may also be configured to display a second editing window of the third video segment indicated by the target identification in response to the fourteenth input received by the user input unit 407, and the second editing window is used to update the video parameters of the third video segment. information.
  • Embodiments of the present application provide an electronic device. After the user triggers the display of an identification through an input on the video playback interface of a video, the user can trigger, through another input, the storage of that identification, which indicates a video clip in the video. Therefore, compared with electronic devices that save edited video clips in memory space, this greatly saves storage space.
  • the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042.
  • the graphics processing unit 4041 processes the image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 407 includes a touch panel 4071 and at least one of other input devices 4072. The touch panel 4071 is also called a touch screen.
  • the touch panel 4071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 4072 may include but are not limited to physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be described again here.
  • Memory 409 may be used to store software programs as well as various data.
  • the memory 409 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instructions required for at least one function (such as a sound playback function, Image playback function, etc.) etc.
  • memory 409 may include volatile memory or nonvolatile memory, or memory 409 may include both volatile and nonvolatile memory.
  • the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
  • Volatile memory can be random access memory (Random Access Memory, RAM), static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous connection dynamic random access memory (Synch link DRAM, SLDRAM) and direct memory bus random access memory (Direct Rambus RAM, DRRAM).
  • Memory 409 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
  • the processor 410 may include one or more processing units; optionally, the processor 410 integrates an application processor and a modem processor, where the application processor mainly handles operations related to the operating system, user interface, application programs, etc., Modem processors mainly process wireless communication signals, such as baseband processors. It can be understood that the above modem processor may not be integrated into the processor 410.
  • Embodiments of the present application also provide a readable storage medium.
  • Programs or instructions are stored on the readable storage medium.
  • the program or instructions, when executed by a processor, implement each process of the above video editing method embodiment and can achieve the same technical effect, which will not be repeated here to avoid repetition.
  • the processor is the processor in the electronic device described in the above embodiment.
  • the readable storage medium includes computer readable storage media, such as computer read-only memory ROM, random access memory RAM, magnetic disk or optical disk, etc.
  • An embodiment of the present application further provides a chip.
  • the chip includes a processor and a communication interface.
  • the communication interface is coupled to the processor.
  • the processor is used to run programs or instructions to implement each process of the above video editing method embodiment, and the same technical effect can be achieved. To avoid duplication, it will not be described again here.
  • chips mentioned in the embodiments of this application may also be called system-on-chip, system-on-a-chip, system-on-a-chip or system-on-chip, etc.
  • Embodiments of the present application provide a computer program product.
  • the program product is stored in a storage medium.
  • the program product is executed by at least one processor to implement each process of the above video editing method embodiment and can achieve the same technical effect, which will not be repeated here to avoid repetition.
  • the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a computer software product.
  • the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and includes a number of instructions to cause a terminal (which can be a mobile phone, computer, server, or network device, etc.) to execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

This application discloses a video editing method, device and electronic device, belonging to the field of communication technology. The method includes: receiving a user's first input on a video playback interface of a first video; in response to the first input, displaying a first identification, the first identification indicating a first video segment in the first video; receiving a second input from the user; and, in response to the second input, storing the first identification.

Description

Video editing method, device and electronic device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202210282085.1 filed in China on March 21, 2022, the entire contents of which are incorporated herein by reference.
Technical field
This application belongs to the field of video technology, and specifically relates to a video editing method, device and electronic device.
Background
With the continuous development of communication technology, self-media is used more and more widely. In daily life, people can shoot videos with electronic devices to record their lives, and then splice the videos together to share online.
Usually, if a user triggers an electronic device to clip a video segment from a complete video, the user can trigger the electronic device to run a video clipping application, so that the electronic device can clip a video segment from the complete video through that application. Afterwards, the user can trigger the electronic device to save the video segment in memory space.
Summary
The purpose of the embodiments of this application is to provide a video editing method, device and electronic device, which can solve the problem that an electronic device occupies a large amount of memory space by storing clipped video segments.
In a first aspect, an embodiment of this application provides a video editing method, the method including: receiving a user's first input on a video playback interface of a first video; in response to the first input, displaying a first identification, the first identification indicating a first video segment in the first video; receiving a second input from the user; and, in response to the second input, storing the first identification.
In a second aspect, an embodiment of this application provides a video editing device, including a receiving module, a display module and a storage module. The receiving module is used to receive a user's first input on a video playback interface of a first video. The display module is used to display, in response to the first input received by the receiving module, a first identification indicating a first video segment in the first video. The receiving module is also used to receive a second input from the user. The storage module is used to store the first identification in response to the second input received by the receiving module.
In a third aspect, an embodiment of this application provides an electronic device, including a processor and a memory, the memory storing programs or instructions that can be run on the processor, the programs or instructions, when executed by the processor, implementing the steps of the method described in the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium on which programs or instructions are stored, the programs or instructions, when executed by a processor, implementing the steps of the method described in the first aspect.
In a fifth aspect, an embodiment of this application provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being used to run programs or instructions to implement the method described in the first aspect.
In a sixth aspect, an embodiment of this application provides a computer program product, the program product being stored in a storage medium and executed by at least one processor to implement the method described in the first aspect.
In the embodiments of this application, a user's first input on a video playback interface of a first video is received; in response to the first input, a first identification is displayed, the first identification indicating a first video segment in the first video; a second input from the user is received; and, in response to the second input, the first identification is stored. With this method, after the user triggers the display of an identification through one input on a video's playback interface, the user can, through another input, trigger the storage of that identification, which indicates a video segment in the video. Therefore, compared with an electronic device saving clipped video segments in memory space, the embodiments of this application greatly save storage space by storing identifications.
Brief description of the drawings
Figure 1 is a schematic diagram of a video editing method provided by an embodiment of this application;
Figure 2 is a schematic diagram of a video editing interface provided by an embodiment of this application;
Figure 3 is a schematic diagram of an interface for generating an identification provided by an embodiment of this application;
Figure 4 is a schematic diagram of an identification editing interface provided by an embodiment of this application;
Figure 5 is a schematic diagram of a video parameter details interface provided by an embodiment of this application;
Figure 6 is a schematic diagram of an identification viewing interface provided by an embodiment of this application;
Figure 7 is a schematic structural diagram of a video editing device provided by an embodiment of this application;
Figure 8 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
Figure 9 is a schematic hardware diagram of an electronic device provided by an embodiment of this application.
Detailed description
The technical solutions in the embodiments of this application will be clearly described below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are part of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art fall within the protection scope of this application.
The terms "first", "second", etc. in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of this application can be implemented in orders other than those illustrated or described here, and the objects distinguished by "first", "second", etc. are usually of one class, with the number of objects not limited; for example, there may be one first object or multiple first objects. In addition, "and/or" in the specification and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
According to the video clipping method mentioned above, if a user triggers an electronic device to clip at least one video segment, the electronic device needs to spend a large amount of storage space storing the clipped video segments. This occupies a large amount of the electronic device's memory space and thus wastes resources.
Based on the above problem, embodiments of this application provide a video editing method. After the user triggers the display of an identification through one input on a video's playback interface, the user can, through another input, trigger the storage of that identification, which indicates a video segment in the video. Therefore, compared with an electronic device saving clipped video segments in memory space, storage space is greatly saved.
The video editing method provided by the embodiments of this application is described in detail below through specific embodiments and their application scenarios, with reference to the drawings.
As shown in Figure 1, an embodiment of this application provides a video editing method, which may include the following S101 to S104.
S101. The video editing device receives a user's first input on a video playback interface of a first video.
Optionally, the first video may be a video played online, or a video stored in the electronic device.
Optionally, the first input may be the user's touch input, voice input or gesture input on the video playback interface. For example, the touch input is the user's click input on the video playback interface. Of course, the first input may also be other possible inputs, which are not limited in the embodiments of this application.
S102. The video editing device displays a first identification in response to the first input.
The first identification indicates a first video segment in the first video.
Optionally, an identification in this application may be text, a symbol, an image, etc. used to indicate information, and may use a control or another container as the carrier for displaying the information, including but not limited to a text identification, a symbol identification and an image identification.
S103. The video editing device receives a second input from the user.
Optionally, the second input may be the user's touch input, voice input or gesture input. For example, the touch input is the user's double-click input on the first identification; as another example, the touch input is the user's click input on a save control.
S104. The video editing device stores the first identification in response to the second input.
Optionally, before S101, the video editing method provided by the embodiments of this application may also include: receiving a user's third input on an identification control; and, in response to the third input, putting the first video in an editable state.
It can be understood that, after triggering the first video into an editable state through the input on the identification control, the user can trigger the setting of identifications according to actual needs.
For example, take the video editing device as a mobile phone, and suppose the mobile phone is playing video 1. When the mobile phone displays the video playback interface of video 1, the user can click an image frame of video 1. After receiving the click input, the mobile phone can display identification 1 in response to the click input. Then, if the user wants to store the identification, the user can click the save control. When the mobile phone receives the click input on the save control, the mobile phone can respond to it and store identification 1.
An embodiment of this application provides a video editing method. After the user triggers the display of an identification through one input on a video's playback interface, the user can, through another input, trigger the storage of that identification, which indicates a video segment in the video. Therefore, compared with an electronic device saving clipped video segments in memory space, storage space is greatly saved.
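Steps S101 to S104 above can be sketched as an event-driven controller in which the second input persists only the identification's metadata, never the clip's frames. All names below are hypothetical:

```python
class VideoEditController:
    """Hypothetical sketch of the S101-S104 flow."""

    def __init__(self, storage: list):
        self.storage = storage              # stands in for persistent storage
        self.current_identification = None

    def on_first_input(self, video_path: str, start_s: float, end_s: float):
        # S102: display an identification for the chosen segment
        self.current_identification = {"source": video_path,
                                       "start": start_s, "end": end_s}
        return self.current_identification

    def on_second_input(self):
        # S104: store the identification; the source video is untouched
        self.storage.append(self.current_identification)

store = []
controller = VideoEditController(store)
controller.on_first_input("video1.mp4", 5, 15)
controller.on_second_input()
assert store == [{"source": "video1.mp4", "start": 5, "end": 15}]
```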
Optionally, the first input includes a first sub-input and a second sub-input; correspondingly, S101 may be implemented specifically through the following S101A, and S102 may include S102A and S102B.
S101A. The video editing device receives the user's first sub-input on a first image frame in the first video, and a second sub-input on a second image frame in the first video.
Optionally, when the video playback interface of the first video includes a playback progress bar, the first sub-input is an input on a first playback node on the progress bar, the first playback node corresponding to the first image frame; the second sub-input is an input on a second playback node on the progress bar, the second playback node corresponding to the second image frame.
It should be noted that the first input includes the first sub-input and the second sub-input. Specifically, the first sub-input may be performed first and then the second sub-input; or the second sub-input may be performed first and then the first sub-input; or the two sub-inputs may be performed simultaneously. This is determined by actual use and is not limited in the embodiments of this application.
S102A. The video editing device displays a starting mark point in response to the first sub-input.
Optionally, in one possible case the first image frame corresponds to the starting mark point; in another possible case the second image frame corresponds to the starting mark point.
S102B. The video editing device displays an ending mark point and the first identification in response to the second sub-input.
The first video segment is the video segment from the first image frame to the second image frame, and the first identification is determined based on the starting mark point and the ending mark point.
Optionally, if the first image frame corresponds to the starting mark point, the second image frame corresponds to the ending mark point; if the second image frame corresponds to the starting mark point, the first image frame corresponds to the ending mark point.
Example 1: take the video editing device as a mobile phone. The mobile phone is playing a cartoon (i.e. the first video). As shown in (a) of Figure 2, the mobile phone displays image frame 01 (i.e. the first image frame) of the first video. When the first video is in the editing state, the user can click image frame 01; after receiving the click input, the mobile phone can display starting mark point 02. Then, as shown in (b) of Figure 2, when the mobile phone plays to image frame 03 (i.e. the second image frame) of the first video, the user clicks image frame 03. After the mobile phone receives the click input on image frame 03, it can display ending mark point 04 and identification a (i.e. the first identification) in response to the click input.
With the video editing method provided by the embodiments of this application, the user can trigger the display of the starting mark point, and then of the ending mark point and the first identification, through inputs on the first image frame and the second image frame. The electronic device can therefore store the first identification in storage space, saving the large memory space that would otherwise be needed to save a clipped video segment.
Optionally, after S102B, the video editing method provided in the embodiments of this application may further include S105 and S106 below.
S105: The video editing apparatus receives a fourth input from the user on a target mark point.
The target mark point includes at least one of the following: the start mark point and the end mark point.
Optionally, the fourth input may be a touch input, a voice input, or a gesture input performed by the user on the target mark point. For example, the touch input is a drag input on the target mark point. Certainly, the fourth input may also be another possible input, which is not limited in the embodiments of this application.
S106: In response to the fourth input, the video editing apparatus updates the position of the target mark point and the first video clip indicated by the first identification.
It should be noted that, after the fourth input on the target mark point is received, the image frame corresponding to the repositioned target mark point changes accordingly, and therefore the image frames included in the first video clip indicated by the first identification also change.
In the embodiments of this application, the user can, through an input on a target mark point, trigger updating of the position of that mark point and of the first video clip indicated by the first identification. The user can thus update the video clip an identification indicates as actually needed, and the editing operations on video clips during video production are simplified into operations on identifications.
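The drag behavior of S105/S106 can be sketched as updating one bound of the clip while keeping it valid. All names below are illustrative assumptions, not from the patent:

```python
def update_mark_point(clip: tuple[float, float], which: str, new_t: float,
                      video_len: float) -> tuple[float, float]:
    """Move the start or end mark point and return the updated clip bounds.

    Dragging a target mark point changes the image frame it corresponds
    to, and hence the clip the identification indicates. The new position
    is clamped to the video and may not cross the other mark point.
    """
    start, end = clip
    new_t = max(0.0, min(new_t, video_len))  # keep the point inside the video
    if which == "start":
        start = min(new_t, end)
    else:
        end = max(new_t, start)
    return (start, end)

print(update_mark_point((5.0, 15.0), "start", 8.0, 60.0))
```

The clamping rules here (no crossing, stay in range) are one plausible policy; the patent itself does not specify them.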
Optionally, after S104, the video editing method provided in the embodiments of this application may further include S107 and S108 below.
S107: The video editing apparatus receives a fifth input from the user on the first identification.
Optionally, the fifth input may be a touch input, a voice input, or a gesture input performed by the user on the first identification. For example, the touch input is a double-tap input on the first identification. Certainly, the fifth input may also be another possible input, which is not limited in the embodiments of this application.
S108: In response to the fifth input, the video editing apparatus displays a first editing window of the first video clip.
The first editing window is used to update the video parameter information of the first video clip.
Optionally, the first editing window may include at least one of the following controls: a filter control, a text control, and a background music control. The filter control is used to update the filter information of the first video clip, the text control is used to update the text information of the first video clip, and the background music control is used to update the background music of the first video clip.
Optionally, the video parameter information may include at least one of the following: filter information, background music, text information, animation effect information for text appearing and disappearing, and the like.
For example, assume the first editing window includes a filter control. When the user taps the filter control, a filter selection list may pop up, from which the user can select a filter parameter, so that the video editing apparatus can update the filter information of the first identification to the filter information corresponding to that parameter.
It can be understood that the video editing apparatus may generate the first identification based on the first image frame, the second image frame, and the video parameter information.
Example 2: take a mobile phone as the video editing apparatus. With reference to FIG. 2 above, the user taps identification a. After the phone receives the tap input on identification a, in response to it, as shown in FIG. 3, the phone displays an editing window that includes control 05, control 06, and control 07. Since control 05 is used to update the filter information of the video clip indicated by identification a, the user may tap control 05; after the phone receives the tap input on control 05, it sets the filter of the first video clip in response to that input.
Optionally, after S108, the video editing method provided in the embodiments of this application may further include: the video editing apparatus receives an input on the first identification; and, in response to that input, plays the first video clip with the updated video parameter information. In this way, the image effect of a video clip can be changed quickly according to the user's wishes.
It should be noted that, during playback of the first video, if the first identification is operated on, the first video clip is played with the updated video parameter information; otherwise, if the first identification is not operated on, the first video clip is played with the original video parameter information.
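The playback rule above (updated parameters only when the identification is operated on, original parameters otherwise) can be sketched as a lazy overlay of the identification's parameters onto the source video's defaults. Names are hypothetical:

```python
def playback_params(ident: dict, operated: bool) -> dict:
    """Return the video parameter information to use when playing the clip.

    If the identification was operated on, its updated parameters override
    the source video's original parameters; otherwise the original
    parameters are used unchanged.
    """
    original = {"filter": "none", "bgm": None}  # source video's defaults
    return {**original, **ident["params"]} if operated else original

ident = {"params": {"filter": "black_white"}}
print(playback_params(ident, True)["filter"], playback_params(ident, False)["filter"])
```

Because the override is applied at playback time, the source video file itself is never modified, consistent with the identification being a lightweight record.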
With the video editing method provided in the embodiments of this application, the user can trigger display of the first editing window through an input on the first identification. The user can therefore trigger updating of the video parameter information of the first video clip as needed, and the editing operations on the video during video production are simplified into operations on the identification.
Optionally, S104 may be implemented through S104A below.
S104A: The video editing apparatus stores a first identification thumbnail corresponding to the first identification into an identification folder in the album.
Optionally, after the user triggers, with reference to the foregoing embodiments, generation of the first identification indicating a video clip, an identification folder may be created in the album. It should be noted that the identification folder is used to store identification thumbnails, and each identification thumbnail corresponds to a video clip in a video.
Optionally, the identification folder may further include a second identification thumbnail corresponding to a second identification, and the second identification is used to indicate a second video clip in the first video; the video image frames in the second video clip are completely different from the video image frames in the first video clip, or the video image frames in the second video clip are partially the same as the video image frames in the first video clip.
It can be understood that, after the first identification is generated, the first identification thumbnail corresponding to it can be added to the identification folder in the album, which avoids reusable video clips being saved repeatedly and occupying redundant memory space.
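The de-duplication benefit of S104A can be sketched as a keyed folder: each identification contributes one small thumbnail record, and reusing a clip adds nothing new. All names are illustrative assumptions:

```python
def add_identification(folder: dict, ident_key: str, thumb: bytes) -> dict:
    """Add an identification thumbnail to the album's identification folder.

    The folder keeps one small thumbnail per identification; adding the
    same identification again is a no-op, so a reusable clip is never
    saved twice.
    """
    folder.setdefault(ident_key, thumb)
    return folder

folder = {}
add_identification(folder, "ident_a", b"thumb-a")
add_identification(folder, "ident_a", b"thumb-a")  # reused, not duplicated
print(len(folder))
```

The thumbnail is only a preview; the clip itself remains defined by the identification's reference into the source video.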
Optionally, after S104A, the video editing method provided in the embodiments of this application may further include S109 to S112 below.
S109: The video editing apparatus receives a sixth input from the user on the identification folder.
Optionally, the sixth input may be a touch input, a voice input, or a gesture input performed by the user on the identification folder. For example, the touch input is a tap input on the identification folder. Certainly, the sixth input may also be another possible input, which is not limited in the embodiments of this application.
S110: In response to the sixth input, the video editing apparatus displays P identification thumbnails and a video editing control.
The P identification thumbnails include the first identification thumbnail, and P is a positive integer.
Optionally, the video editing control may be circular, rectangular, or another possible shape; its size is a preset size; and it may be displayed in any blank area of the folder interface corresponding to the identification folder. The embodiments of this application do not limit the display form of the editing control.
S111: The video editing apparatus receives a seventh input from the user on the video editing control.
Optionally, the seventh input may be a touch input, a voice input, or a gesture input performed by the user on the video editing control. For example, the touch input is a tap input on the video editing control. Certainly, the seventh input may also be another possible input, which is not limited in the embodiments of this application.
S112: In response to the seventh input, the video editing apparatus displays a video editing interface.
In the embodiments of this application, one input by the user on the identification folder can trigger display of the P identification thumbnails and the video editing control, so the user can view the identification thumbnails; then, another input on the video editing control can trigger display of the video editing interface, making it convenient for the user to trigger video splicing on that interface.
Optionally, the video editing interface includes a first display area and a second display area; the first display area includes at least one video thumbnail, each video thumbnail corresponding to one video clip, and the second display area includes at least one identification thumbnail. Correspondingly, after S112, the video editing method provided in the embodiments of this application may further include S113 to S115 below.
S113: The video editing apparatus receives an eighth input from the user on a target video thumbnail among the at least one video thumbnail, and a ninth input from the user on a target identification thumbnail among the at least one identification thumbnail.
Optionally, the eighth input may be a touch input, a voice input, or a gesture input performed by the user on the target video thumbnail. For example, the touch input is a tap input on the target video thumbnail. Certainly, the eighth input may also be another possible input, which is not limited in the embodiments of this application.
Optionally, the ninth input may be a touch input, a voice input, or a gesture input performed by the user on the target identification thumbnail. For example, the touch input is a tap input on the target identification thumbnail. Certainly, the ninth input may also be another possible input, which is not limited in the embodiments of this application.
Optionally, there is at least one target video thumbnail and at least one target identification thumbnail.
S114: In response to the eighth input and the ninth input, the video editing apparatus displays the target video thumbnail and the target identification thumbnail in a third display area of the video editing interface.
The arrangement order of the target video thumbnail and the target identification thumbnail is associated with the eighth input and the ninth input.
Optionally, the arrangement order of the target video thumbnail and the target identification thumbnail is determined by the input order of the eighth input and the ninth input.
For example, if the eighth input is performed before the ninth input, the arrangement order is: target video thumbnail, target identification thumbnail; if the ninth input is performed before the eighth input, the arrangement order is: target identification thumbnail, target video thumbnail.
S115: The video editing apparatus generates a target video based on a third video clip corresponding to the target video thumbnail and a fourth video clip corresponding to the target identification thumbnail.
Optionally, S115 may specifically include: the video editing apparatus splices the first/last frames of the third video clip and the fourth video clip to obtain the target video.
For example, take a mobile phone as the video editing apparatus. As shown in FIG. 4, the phone displays a video editing interface; display area 08 of the interface includes video thumbnail 1, video thumbnail 2, and video thumbnail 3, and display area 09 includes identification thumbnail a, identification thumbnail b, and identification thumbnail c. If the user wants to trigger the phone to splice videos, the user may tap video thumbnail 1, identification thumbnail a, and identification thumbnail c; after the phone receives the tap inputs on these three thumbnails (i.e., the eighth input and the ninth input), it may display the three thumbnails in display area 10 in response. Afterwards, the phone may generate the target video based on the three corresponding video clips.
Further, the video editing apparatus may splice the third video clip and the fourth video clip in a target order to obtain the target video.
The target order includes either of the following: the selection order of the third video clip and the fourth video clip, or the arrangement order of the third video clip and the fourth video clip.
For example, take the selection order as the target order. As shown in FIG. 4, the target order is the order in which the user taps identification thumbnail a, video thumbnail 1, and identification thumbnail c, i.e., the selection order of the three thumbnails.
For example, take the arrangement order as the target order. As shown in FIG. 4, after video thumbnail 1, identification thumbnail a, and identification thumbnail c are selected, they may be displayed in third display area 10 of the video editing interface; the target order is then the arrangement order of the three thumbnails in third display area 10.
It can be understood that, since N video segments can be spliced in the target order to obtain the target video, after the user adjusts the target order as actually needed, splicing the N video segments in the adjusted order yields a different target video. This improves the user experience.
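The splicing in S115 and the two possible target orders can be sketched as a simple ordered concatenation; different orders produce different target videos. All names here are hypothetical:

```python
def splice(clips: dict, order: list) -> list:
    """Concatenate the selected clips in the given target order.

    `order` may be the selection order of the thumbnails or their
    arrangement order in the third display area; changing the order
    changes the resulting target video.
    """
    return [frame for key in order for frame in clips[key]]

clips = {"video1": ["v1a", "v1b"], "ident_a": ["a1"], "ident_c": ["c1", "c2"]}
by_selection = splice(clips, ["ident_a", "video1", "ident_c"])
by_arrangement = splice(clips, ["video1", "ident_a", "ident_c"])
print(by_selection != by_arrangement)  # different orders, different target videos
```

In a real implementation the clips named by identifications would first be resolved from their source videos; the list-of-frames model is only a stand-in for that rendering step.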
With the video editing method provided in the embodiments of this application, at least one video thumbnail and at least one identification thumbnail are displayed in the video editing interface. When the user wants to trigger the electronic device to splice videos, the user only needs to select the target video thumbnail and the target identification thumbnail from them as needed to trigger generation of the target video from at least two video segments, without frequently triggering the electronic device to switch interfaces to add the videos to be spliced. This simplifies the operation process of splicing videos on the electronic device.
Optionally, after S114, the video editing method provided in the embodiments of this application may further include S116 and S117 below.
S116: The video editing apparatus receives a tenth input from the user on the third display area.
Optionally, the tenth input may be a touch input, a voice input, or a gesture input performed by the user on the third display area. For example, the touch input is a move input in the third display area.
S117: In response to the tenth input, the video editing apparatus updates the display information of the third display area.
The display information includes at least one of the following: the number of target video thumbnails, the positions of target video thumbnails, the number of target identification thumbnails, and the positions of target identification thumbnails.
Optionally, the display information may further include the arrangement order of the target video thumbnails and the target identification thumbnails in the third display area.
In the embodiments of this application, the user can trigger updating of the display information of the third display area through an input on that area, so the user can adjust the video clips to be spliced to obtain a target video that meets the user's needs.
Optionally, after S104A, the video editing method provided in the embodiments of this application may further include S118 and S119 below.
S118: The video editing apparatus receives an eleventh input from the user on the first identification thumbnail.
Optionally, the eleventh input may be a touch input, a voice input, or a gesture input performed by the user on the first identification thumbnail. For example, the touch input is a tap input on the first identification thumbnail. Certainly, the eleventh input may also be another possible input, which is not limited in the embodiments of this application.
S119: In response to the eleventh input, the video editing apparatus displays a target interface.
The target interface includes the video parameter information of the first video clip.
Optionally, the target interface may further include at least one of the following: information on the original video of the first video clip corresponding to the first identification thumbnail, and timestamp information of the first video clip corresponding to the first identification thumbnail.
For example, take a mobile phone as the video editing apparatus. If the user taps identification thumbnail a, then after the phone receives the tap input, as shown in FIG. 5, the phone may display video parameter information 11 of the first video clip corresponding to identification thumbnail a; video parameter information 11 includes information such as "original video: xxx.mp4, timestamp: 00:05-00:15, filter: black and white".
In the embodiments of this application, the user can, through an input on an identification thumbnail, trigger display of the video parameter information of the video clip corresponding to that thumbnail, so the user can learn the filter, text, background music, and other information of that clip.
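The target interface of S119 essentially renders the fields of the stored identification record. A minimal formatting sketch, with hypothetical field names (the "xxx.mp4" value mirrors the example above):

```python
def describe_identification(ident: dict) -> str:
    """Format the target-interface text for an identification thumbnail.

    The target interface shows the clip's original video, its timestamp
    range, and its video parameter information, as in the example above.
    """
    return ("original video: {src}, timestamp: {start}-{end}, filter: {filt}"
            .format(src=ident["source"], start=ident["start"], end=ident["end"],
                    filt=ident["params"].get("filter", "none")))

info = describe_identification({"source": "xxx.mp4", "start": "00:05",
                                "end": "00:15", "params": {"filter": "black_white"}})
print(info)
```

Because everything shown is read from the record, no decoding of the source video is needed to populate the target interface.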
Optionally, after S119, the video editing method provided in the embodiments of this application may further include S120 and S121 below.
S120: The video editing apparatus receives a twelfth input from the user.
Optionally, the twelfth input may be a touch input, a voice input, or a gesture input performed by the user on the first identification thumbnail. For example, the touch input is a tap input on the first identification thumbnail.
S121: In response to the twelfth input, the video editing apparatus updates target information.
The target information includes at least one of the following: the video parameter information of the first video clip, and the first video clip.
In the embodiments of this application, while the target interface of the first identification thumbnail is displayed, the user can trigger updating of the target information through an input. Thus, when the video clip an identification indicates, or the video parameter information of that clip, does not meet the user's needs, the user can trigger an update of the clip or of its video parameter information through an operation on the target interface.
Optionally, the video editing method provided in the embodiments of this application may further include S122 and S123 below.
S122: The video editing apparatus receives a thirteenth input from the user on the video playing interface of the first video.
Optionally, the thirteenth input may be a touch input, a voice input, or a gesture input performed by the user on the video playing interface. For example, the touch input is a tap input on an identification control. Certainly, the thirteenth input may also be another possible input, which is not limited in the embodiments of this application.
S123: In response to the thirteenth input, the video editing apparatus displays S identifications of the first video.
Each identification indicates a video clip in the first video, and S is a positive integer.
For example, take a mobile phone as the video editing apparatus. As shown in FIG. 6, the playing interface of the first video is displayed, and the playing interface includes identification control 12. If the user wants to view the identifications of the first video, the user may tap identification control 12. After the phone receives the tap input, as shown in FIG. 6, the phone displays identifications such as video identification a, video identification b, and so on, in response to the tap input.
With the video editing method provided in the embodiments of this application, the user can trigger display of all the identifications a video includes through an input on that video's playing interface. The user can thus view the identifications while the video is playing.
Optionally, after S123, the video editing method provided in the embodiments of this application may further include S124 and S125 below.
S124: The video editing apparatus receives a fourteenth input from the user on a target identification among the S identifications.
Optionally, the fourteenth input may be a touch input, a voice input, or a gesture input performed by the user on the target identification. For example, the touch input is a tap input on the target identification. Certainly, the fourteenth input may also be another possible input, which is not limited in the embodiments of this application.
Optionally, the target identification is any one of the S identifications. The target identification may be the first identification, or an identification among the S identifications other than the first identification.
S125: In response to the fourteenth input, the video editing apparatus displays a second editing window of a third video clip indicated by the target identification.
The second editing window is used to update the video parameter information of the third video clip.
Optionally, the fourteenth input may include a first sub-input and a second sub-input. Correspondingly, S124 may specifically include: the video editing apparatus receives the first sub-input on the target identification; and, in response to it, displays an identification interface of the target identification, the identification interface including an identification editing control.
Correspondingly, S125 may specifically include: the video editing apparatus receives the second sub-input on the identification editing control; and, in response to it, displays the second editing window of the third video segment indicated by the target identification.
For example, with reference to FIG. 6, while multiple identifications such as video identification a and video identification b are displayed, the user taps video identification a. After the phone receives the tap input, as shown in FIG. 3, it displays the editing window of the video clip indicated by video identification a in response.
With the video editing method provided in the embodiments of this application, an input on one of multiple identifications triggers display of the editing window of the video clip that identification indicates, so the video parameter information of that clip can be edited. This provides another way of entering the editing window, and makes it convenient for the user to enter the editing window from different entrances to edit and update video parameter information.
The video editing method provided in the embodiments of this application may be executed by a video editing apparatus. In the embodiments of this application, the video editing apparatus provided herein is described by taking the video editing apparatus executing the video editing method as an example.
As shown in FIG. 7, an embodiment of this application provides a video editing apparatus 200, which may include a receiving module 201, a display module 202, and a storage module 203. The receiving module 201 may be configured to receive a first input from a user on a video playing interface of a first video. The display module 202 may be configured to display, in response to the first input received by the receiving module 201, a first identification indicating a first video clip in the first video. The receiving module 201 may further be configured to receive a second input from the user. The storage module 203 may be configured to store the first identification in response to the second input received by the receiving module 201.
Optionally, the video editing apparatus may further include a processing module. The receiving module 201 may further be configured to receive a third input from the user on an identification control. The processing module may be configured to place the first video in an editable state in response to the third input received by the receiving module.
Optionally, the first input may include a first sub-input and a second sub-input. The receiving module 201 may be specifically configured to receive the user's first sub-input on a first image frame in the first video and second sub-input on a second image frame in the first video. The display module 202 may be specifically configured to display a start mark point in response to the first sub-input, and display an end mark point and the first identification in response to the second sub-input; the first video clip is the video clip between the first image frame and the second image frame, and the first identification is determined based on the start mark point and the end mark point.
Optionally, the video editing apparatus may further include a processing module. The receiving module 201 may further be configured to receive a fourth input from the user on a target mark point, the target mark point including at least one of the start mark point and the end mark point. The processing module may be configured to update, in response to the fourth input received by the receiving module, the position of the target mark point and the first video clip indicated by the first identification.
Optionally, the receiving module 201 may further be configured to receive a fifth input from the user on the first identification. The display module 202 may be configured to display, in response to the fifth input, a first editing window of the first video clip, the first editing window being used to update the video parameter information of the first video clip.
Optionally, the first editing window includes at least one of the following controls: a filter control, a text control, and a background music control; the filter control is used to update the filter information of the first video clip, the text control is used to update the text information of the first video clip, and the background music control is used to update the background music of the first video clip.
Optionally, the storage module 203 may be specifically configured to store a first identification thumbnail corresponding to the first identification into an identification folder in the album.
Optionally, the identification folder further includes a second identification thumbnail corresponding to a second identification, the second identification being used to indicate a second video clip in the first video; the video image frames in the second video clip are completely different from the video image frames in the first video clip, or the video image frames in the second video clip are partially the same as the video image frames in the first video clip.
Optionally, the receiving module 201 may further be configured to receive a sixth input from the user on the identification folder. The display module 202 may further be configured to display, in response to the sixth input received by the receiving module, P identification thumbnails and a video editing control, the P identification thumbnails including the first identification thumbnail, P being a positive integer. The receiving module may further be configured to receive a seventh input from the user on the video editing control. The display module 202 may further be configured to display a video editing interface in response to the seventh input received by the receiving module.
Optionally, the video editing interface may include a first display area and a second display area, the first display area including at least one video thumbnail, each video thumbnail corresponding to one video clip, and the second display area including at least one identification thumbnail; the video editing apparatus may further include a processing module. The receiving module 201 may further be configured to receive an eighth input from the user on a target video thumbnail among the at least one video thumbnail, and a ninth input from the user on a target identification thumbnail among the at least one identification thumbnail. The display module 202 may further be configured to display, in response to the eighth input and the ninth input received by the receiving module, the target video thumbnail and the target identification thumbnail in a third display area of the video editing interface, the display order of the target video thumbnail and the target identification thumbnail being associated with the eighth input and the ninth input. The processing module may be configured to generate a target video based on the video clips corresponding to the target video thumbnail and the target identification thumbnail.
Optionally, the receiving module 201 may further be configured to receive a tenth input from the user on the third display area. The processing module may further be configured to update, in response to the tenth input received by the receiving module, the display information of the third display area, the display information including at least one of the following: the number of target video thumbnails, the positions of target video thumbnails, the number of target identification thumbnails, and the positions of target identification thumbnails.
Optionally, the receiving module 201 may further be configured to receive an eleventh input from the user on the first identification thumbnail. The display module 202 may further be configured to display a target interface in response to the eleventh input received by the receiving module; the target interface includes the video parameter information of the first video clip.
Optionally, the receiving module 201 may further be configured to receive a twelfth input from the user. The processing module may further be configured to update target information in response to the twelfth input, the target information including at least one of the following: the video parameter information of the first video clip, and the first video clip.
Optionally, the receiving module 201 may further be configured to receive a thirteenth input from the user on the video playing interface of the first video. The display module may further be configured to display, in response to the thirteenth input received by the receiving module, S identifications of the first video, each identification corresponding to one video clip in the first video, S being a positive integer.
Optionally, the receiving module 201 may further be configured to receive a fourteenth input from the user on a target identification among the S identifications. The display module 202 may further be configured to display, in response to the fourteenth input received by the receiving module, a second editing window of a third video clip indicated by the target identification, the second editing window being used to update the video parameter information of the third video clip.
An embodiment of this application provides a video editing apparatus. After the user triggers display of an identification through one input on the video playing interface of a video, the user can trigger, through another input, storage of that identification, which indicates a video clip in the video. Compared with the electronic device saving a clipped-out video segment in memory space, this greatly saves storage space.
The video editing apparatus in the embodiments of this application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; this is not specifically limited in the embodiments of this application.
The video editing apparatus in the embodiments of this application may be an apparatus with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
The video editing apparatus provided in the embodiments of this application can implement each process implemented by the method embodiments of FIG. 1 to FIG. 6 and achieve the same technical effects; to avoid repetition, details are not repeated here.
Optionally, as shown in FIG. 8, an embodiment of this application further provides an electronic device 300, including a processor 301 and a memory 302. The memory 302 stores a program or instructions runnable on the processor 301; when executed by the processor 301, the program or instructions implement each step of the above video editing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiments of this application includes the mobile electronic devices and non-mobile electronic devices described above.
FIG. 9 is a schematic diagram of the hardware structure of an electronic device implementing the embodiments of this application.
The electronic device 400 includes but is not limited to components such as a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, and a processor 410.
Those skilled in the art can understand that the electronic device 400 may further include a power supply (such as a battery) supplying power to each component; the power supply may be logically connected to the processor 410 through a power management system, thereby implementing functions such as charge management, discharge management, and power consumption management through the power management system. The electronic device structure shown in FIG. 9 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not repeated here.
The user input unit 407 may be configured to receive a first input from a user on a video playing interface of a first video. The display unit 406 may be configured to display, in response to the first input received by the user input unit 407, a first identification indicating a first video clip in the first video. The user input unit 407 may further be configured to receive a second input from the user. The memory 409 may be configured to store the first identification in response to the second input received by the user input unit 407.
Optionally, the user input unit 407 may further be configured to receive a third input from the user on an identification control. The processor 410 may be configured to place the first video in an editable state in response to the third input received by the user input unit 407.
Optionally, the first input may include a first sub-input and a second sub-input; the user input unit 407 may be specifically configured to receive the user's first sub-input on a first image frame in the first video and second sub-input on a second image frame in the first video. The display unit 406 may be specifically configured to display a start mark point in response to the first sub-input, and display an end mark point and the first identification in response to the second sub-input; the first video clip is the video clip between the first image frame and the second image frame, and the first identification is determined based on the start mark point and the end mark point.
Optionally, the user input unit 407 may further be configured to receive a fourth input from the user on a target mark point, the target mark point including at least one of the start mark point and the end mark point. The processor 410 may be configured to update, in response to the fourth input received by the user input unit 407, the position of the target mark point and the first video clip indicated by the first identification.
Optionally, the user input unit 407 may further be configured to receive a fifth input from the user on the first identification. The display unit 406 may be configured to display, in response to the fifth input, a first editing window of the first video clip, the first editing window being used to update the video parameter information of the first video clip.
Optionally, the memory 409 may be specifically configured to store a first identification thumbnail corresponding to the first identification into an identification folder in the album.
Optionally, the user input unit 407 may further be configured to receive a sixth input from the user on the identification folder. The display unit 406 may further be configured to display, in response to the sixth input received by the user input unit 407, P identification thumbnails and a video editing control, the P identification thumbnails including the first identification thumbnail, P being a positive integer. The user input unit 407 may further be configured to receive a seventh input from the user on the video editing control. The display unit 406 may further be configured to display a video editing interface in response to the seventh input received by the user input unit 407.
Optionally, the video editing interface may include a first display area and a second display area, the first display area including at least one video thumbnail, each video thumbnail corresponding to one video clip, and the second display area including at least one identification thumbnail. The user input unit 407 may further be configured to receive an eighth input from the user on a target video thumbnail among the at least one video thumbnail, and a ninth input from the user on a target identification thumbnail among the at least one identification thumbnail. The display unit 406 may further be configured to display, in response to the eighth input and the ninth input received by the user input unit 407, the target video thumbnail and the target identification thumbnail in a third display area of the video editing interface, the display order of the target video thumbnail and the target identification thumbnail being associated with the eighth input and the ninth input. The processor 410 may be configured to generate a target video based on the video clips corresponding to the target video thumbnail and the target identification thumbnail.
Optionally, the user input unit 407 may further be configured to receive a tenth input from the user on the third display area. The processor 410 may further be configured to update, in response to the tenth input received by the user input unit 407, the display information of the third display area, the display information including at least one of the following: the number of target video thumbnails, the positions of target video thumbnails, the number of target identification thumbnails, and the positions of target identification thumbnails.
Optionally, the user input unit 407 may further be configured to receive an eleventh input from the user on the first identification thumbnail. The display unit 406 may further be configured to display a target interface in response to the eleventh input received by the user input unit 407; the target interface includes the video parameter information of the first video clip.
Optionally, the user input unit 407 may further be configured to receive a twelfth input from the user. The processor 410 may further be configured to update target information in response to the twelfth input, the target information including at least one of the following: the video parameter information of the first video clip, and the first video clip.
Optionally, the user input unit 407 may further be configured to receive a thirteenth input from the user on the video playing interface of the first video. The display unit 406 may further be configured to display, in response to the thirteenth input received by the user input unit 407, S identifications of the first video, each identification corresponding to one video clip in the first video, S being a positive integer.
Optionally, the user input unit 407 may further be configured to receive a fourteenth input from the user on a target identification among the S identifications. The display unit 406 may further be configured to display, in response to the fourteenth input received by the user input unit 407, a second editing window of a third video clip indicated by the target identification, the second editing window being used to update the video parameter information of the third video clip.
An embodiment of this application provides an electronic device. After the user triggers display of an identification through one input on the video playing interface of a video, the user can trigger, through another input, storage of that identification, which indicates a video clip in the video. Compared with the electronic device saving a clipped-out video segment in memory space, this greatly saves storage space.
It should be understood that, in the embodiments of this application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes at least one of a touch panel 4071 and other input devices 4072. The touch panel 4071, also called a touch screen, may include two parts: a touch detection apparatus and a touch controller. The other input devices 4072 may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here.
The memory 409 may be used to store software programs and various data. The memory 409 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, an application program or instructions required by at least one function (such as a sound playing function and an image playing function), and the like. In addition, the memory 409 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synch-link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 409 in the embodiments of this application includes but is not limited to these and any other suitable types of memory.
The processor 410 may include one or more processing units. Optionally, the processor 410 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs, and the like, and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It can be understood that the modem processor may alternatively not be integrated into the processor 410.
An embodiment of this application further provides a readable storage medium storing a program or instructions; when executed by a processor, the program or instructions implement each process of the above video editing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, including a processor and a communication interface coupled to the processor; the processor is configured to run a program or instructions to implement each process of the above video editing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of this application may also be called a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
An embodiment of this application provides a computer program product stored in a storage medium; the program product is executed by at least one processor to implement each process of the above video editing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or apparatus that includes the element. In addition, it should be noted that the scope of the methods and apparatuses in the implementations of this application is not limited to performing functions in the order shown or discussed; functions may also be performed in a substantially simultaneous manner or in reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Furthermore, features described with reference to certain examples may be combined in other examples.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although the former is the better implementation in many cases. Based on this understanding, the technical solution of this application, in essence or for the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including several instructions to enable a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific implementations described above; those implementations are merely illustrative rather than restrictive. Under the inspiration of this application, those of ordinary skill in the art can devise many further forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (20)

  1. A video editing method, the method comprising:
    receiving a first input from a user on a video playing interface of a first video;
    in response to the first input, displaying a first identification, the first identification indicating a first video clip in the first video;
    receiving a second input from the user; and
    in response to the second input, storing the first identification.
  2. The method according to claim 1, wherein before the receiving a first input from a user on a video playing interface of a first video, the method further comprises:
    receiving a third input from the user on an identification control; and
    in response to the third input, placing the first video in an editable state.
  3. The method according to claim 1, wherein the first input comprises a first sub-input and a second sub-input;
    the receiving a first input from a user on a video playing interface of a first video comprises:
    receiving the user's first sub-input on a first image frame in the first video and the user's second sub-input on a second image frame in the first video;
    the displaying a first identification in response to the first input comprises:
    in response to the first sub-input, displaying a start mark point; and
    in response to the second sub-input, displaying an end mark point and the first identification;
    wherein the first video clip is the video clip between the first image frame and the second image frame, and the first identification is determined based on the start mark point and the end mark point.
  4. The method according to claim 3, wherein after the displaying an end mark point and the first identification, the method further comprises:
    receiving a fourth input from the user on a target mark point, the target mark point comprising at least one of the following: the start mark point and the end mark point; and
    in response to the fourth input, updating the position of the target mark point and the first video clip indicated by the first identification.
  5. The method according to claim 1, wherein after the displaying a first identification, the method further comprises:
    receiving a fifth input from the user on the first identification; and
    in response to the fifth input, displaying a first editing window of the first video clip, the first editing window being used to update video parameter information of the first video clip.
  6. The method according to claim 5, wherein the first editing window comprises at least one of the following controls: a filter control, a text control, and a background music control, the filter control being used to update filter information of the first video clip, the text control being used to update text information of the first video clip, and the background music control being used to update background music of the first video clip.
  7. The method according to claim 1, wherein the storing the first identification comprises:
    storing a first identification thumbnail corresponding to the first identification into an identification folder in an album.
  8. The method according to claim 7, wherein the identification folder further comprises a second identification thumbnail corresponding to a second identification, the second identification being used to indicate a second video clip in the first video; wherein video image frames in the second video clip are completely different from video image frames in the first video clip, or video image frames in the second video clip are partially the same as video image frames in the first video clip.
  9. The method according to claim 7, wherein after the storing a first identification thumbnail corresponding to the first identification into an identification folder in an album, the method further comprises:
    receiving a sixth input from the user on the identification folder;
    in response to the sixth input, displaying P identification thumbnails and a video editing control, the P identification thumbnails comprising the first identification thumbnail, P being a positive integer;
    receiving a seventh input from the user on the video editing control; and
    in response to the seventh input, displaying a video editing interface.
  10. The method according to claim 9, wherein the video editing interface comprises a first display area and a second display area, the first display area comprising at least one video thumbnail, each video thumbnail corresponding to one video clip, and the second display area comprising at least one identification thumbnail;
    after the displaying a video editing interface, the method further comprises:
    receiving an eighth input from the user on a target video thumbnail among the at least one video thumbnail, and receiving a ninth input from the user on a target identification thumbnail among the at least one identification thumbnail;
    in response to the eighth input and the ninth input, displaying the target video thumbnail and the target identification thumbnail in a third display area of the video editing interface, the arrangement order of the target video thumbnail and the target identification thumbnail being associated with the eighth input and the ninth input; and
    generating a target video based on a third video clip corresponding to the target video thumbnail and a fourth video clip corresponding to the target identification thumbnail.
  11. The method according to claim 10, wherein after the displaying the target video thumbnail and the target identification thumbnail in a third display area of the video editing interface, the method further comprises:
    receiving a tenth input from the user on the third display area; and
    in response to the tenth input, updating display information of the third display area, the display information comprising at least one of the following: the number of target video thumbnails, the positions of target video thumbnails, the number of target identification thumbnails, and the positions of target identification thumbnails.
  12. The method according to claim 7, wherein after the storing a first identification thumbnail corresponding to the first identification into an identification folder in an album, the method further comprises:
    receiving an eleventh input from the user on the first identification thumbnail; and
    in response to the eleventh input, displaying a target interface;
    wherein the target interface comprises the video parameter information of the first video clip.
  13. The method according to claim 12, wherein after the displaying a target interface, the method further comprises:
    receiving a twelfth input from the user; and
    in response to the twelfth input, updating target information, the target information comprising at least one of the following: the video parameter information of the first video clip, and the first video clip.
  14. The method according to claim 1, wherein the method further comprises:
    receiving a thirteenth input from the user on the video playing interface of the first video; and
    in response to the thirteenth input, displaying S identifications of the first video, each identification indicating one video clip in the first video, S being a positive integer.
  15. The method according to claim 14, wherein after the displaying S identifications of the first video, the method further comprises:
    receiving a fourteenth input from the user on a target identification among the S identifications; and
    in response to the fourteenth input, displaying a second editing window of a fifth video clip indicated by the target identification, the second editing window being used to update video parameter information of the fifth video clip.
  16. A video editing apparatus, comprising a receiving module, a display module, and a storage module;
    the receiving module being configured to receive a first input from a user on a video playing interface of a first video;
    the display module being configured to display, in response to the first input received by the receiving module, a first identification, the first identification indicating a first video clip in the first video;
    the receiving module being further configured to receive a second input from the user; and
    the storage module being configured to store the first identification in response to the second input received by the receiving module.
  17. The apparatus according to claim 16, wherein the video editing apparatus further comprises a processing module;
    the receiving module being further configured to receive a third input from the user on an identification control; and
    the processing module being configured to place the first video in an editable state in response to the third input received by the receiving module.
  18. The apparatus according to claim 16, wherein the first input comprises a first sub-input and a second sub-input;
    the receiving module being specifically configured to receive the user's first sub-input on a first image frame in the first video and the user's second sub-input on a second image frame in the first video;
    the display module being specifically configured to display a start mark point in response to the first sub-input, and display an end mark point and the first identification in response to the second sub-input;
    wherein the first video clip is the video clip between the first image frame and the second image frame, and the first identification is determined based on the start mark point and the end mark point.
  19. An electronic device, comprising a processor and a memory, the memory storing a program or instructions runnable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video editing method according to any one of claims 1 to 15.
  20. A readable storage medium storing a program or instructions, wherein the program or instructions, when executed by a processor, implement the steps of the video editing method according to any one of claims 1 to 15.
PCT/CN2023/082504 2022-03-21 2023-03-20 Video editing method and apparatus, and electronic device WO2023179539A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210282085.1 2022-03-21
CN202210282085.1A CN114845171A 2022-03-21 2022-03-21 Video editing method and apparatus, and electronic device

Publications (1)

Publication Number
WO2023179539A1

Family

ID=82561840



Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102099860A * 2008-05-15 2011-06-15 Apple Inc. User interface for editing video clips
CN104506937A * 2015-01-06 2015-04-08 Samsung Electronics (China) R&D Center Audio and video sharing processing method and system
US20180295396A1 * 2017-04-06 2018-10-11 Burst, Inc. Techniques for creation of auto-montages for media content
CN110647500A * 2019-09-23 2020-01-03 Oppo Guangdong Mobile Communication Co., Ltd. File storage method and apparatus, terminal, and computer-readable storage medium
CN110737435A * 2019-10-18 2020-01-31 NetEase (Hangzhou) Network Co., Ltd. In-game multimedia editing method and apparatus, terminal device, and storage medium
CN113242464A * 2021-01-28 2021-08-10 Vivo Mobile Communication Co., Ltd. Video editing method and apparatus
CN114845171A * 2022-03-21 2022-08-02 Vivo Mobile Communication Co., Ltd. Video editing method and apparatus, and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110868633A * 2019-11-27 2020-03-06 Vivo Mobile Communication Co., Ltd. Video processing method and electronic device
CN111464761A * 2020-04-07 2020-07-28 Beijing Bytedance Network Technology Co., Ltd. Video processing method and apparatus, electronic device, and computer-readable storage medium
CN112997506A * 2020-05-28 2021-06-18 SZ DJI Technology Co., Ltd. Video file editing method, device, and system, and computer-readable storage medium
CN113891127A * 2021-08-31 2022-01-04 Vivo Mobile Communication Co., Ltd. Video editing method and apparatus, and electronic device


Also Published As

Publication number Publication date
CN114845171A 2022-08-02


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 23773787
Country of ref document: EP
Kind code of ref document: A1