CN114390356A - Video processing method, video processing device and electronic equipment


Info

Publication number: CN114390356A
Application number: CN202210059891.2A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: video, progress, target, input, user
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Inventors: 戴卓伟, 程万里
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202210059891.2A
Publication of CN114390356A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 - End-user interface for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 - Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 - Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The application discloses a video processing method, a video processing apparatus, and an electronic device, which belong to the field of communication technology. The video processing method includes the following steps: displaying a first progress control of a first video, where the first progress control includes at least one first identifier, the at least one first identifier divides the first progress control into at least two first progress bars, and each first progress bar indicates a video clip of the first video; and generating a target video according to the video clip indicated by a target progress bar among the at least two first progress bars.

Description

Video processing method, video processing device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video processing method, a video processing device and electronic equipment.
Background
At present, after a user records a video, if only a single segment of that video is needed, the user must clip the video with video editing software to obtain the segment; and if the user needs to combine several segments of the video, the user must clip the video multiple times to obtain the segments and then merge them. The operation steps in either case are cumbersome, and the user cannot obtain the desired video file in a simple way.
Disclosure of Invention
An embodiment of the present application provides a video processing method, a video processing apparatus, an electronic device, and a readable storage medium, which can address the problem in the related art that the operations required to obtain a desired video file are cumbersome.
In a first aspect, an embodiment of the present application provides a video processing method, where the video processing method includes:
displaying a first progress control of a first video, where the first progress control includes at least one first identifier, the at least one first identifier divides the first progress control into at least two first progress bars, and each first progress bar indicates a video clip of the first video;
and generating a target video according to the video clip indicated by a target progress bar among the at least two first progress bars.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
a display module, configured to display a first progress control of a first video, where the first progress control includes at least one first identifier, the at least one first identifier divides the first progress control into at least two first progress bars, and each first progress bar indicates a video clip of the first video;
and a processing module, configured to generate a target video according to the video clip indicated by a target progress bar among the at least two first progress bars.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement a method as in the first aspect.
In the embodiments of the present application, the first video has a corresponding first progress control for indicating its playing progress. The first progress control includes at least one first identifier, which divides the first progress control into at least two first progress bars, and each first progress bar indicates one video clip of the first video. A target video can then be generated according to the video clip indicated by a target progress bar among the at least two first progress bars. Specifically, the video clip indicated by the target progress bar may be used directly as the target video, or it may be composited with other video clips to generate the target video. Because the required video clip is obtained directly from a progress bar on the first progress control of the first video, this approach simplifies the user's video-processing operations and saves the time the user spends processing the video.
Drawings
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present application;
Fig. 2 is a first display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a second display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 4 is a third display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a fourth display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 6 is a fifth display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 7 is a sixth display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 8 is a seventh display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 9 is an eighth display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 10 is a ninth display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 11 is a tenth display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 12 is an eleventh display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 13 is a twelfth display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 14 is a thirteenth display schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 15 is a schematic block diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 16 is a first schematic block diagram of an electronic device according to an embodiment of the present application;
Fig. 17 is a second schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The video processing method, the video processing apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
An embodiment of the present application provides a video processing method, as shown in fig. 1, the video processing method includes:
Step 102: display a first progress control of a first video, where the first progress control includes at least one first identifier, the at least one first identifier divides the first progress control into at least two first progress bars, and each first progress bar indicates a video clip of the first video;
Step 104: generate a target video according to the video clip indicated by a target progress bar among the at least two first progress bars.
In this embodiment, the first video has a corresponding first progress control for indicating its playing progress. The first progress control includes at least one first identifier, which divides the first progress control into at least two first progress bars, and each first progress bar indicates one video clip of the first video.
Optionally, the target video may be generated according to a video segment indicated by the target progress bar of the at least two first progress bars. Specifically, the video segment indicated by the target progress bar may be used as the target video, or the video segment indicated by the target progress bar may be video-synthesized with other video segments to generate the target video.
In the embodiments of the present application, the corresponding video clip is obtained based on a progress bar on the first progress control of the first video, so that the user obtains the required target video. This approach simplifies the user's video-processing operations and saves the time the user spends processing the video.
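The two steps above can be sketched as a minimal pure-Python model. The patent specifies no API; `ProgressBar` and `generate_target_video` below are hypothetical names used only to illustrate how identifiers divide the control into clip-indicating progress bars:

```python
from dataclasses import dataclass

# Illustrative model of a first progress control: the identifiers divide
# it into progress bars, each indicating one clip of the first video.
@dataclass
class ProgressBar:
    start_s: float  # clip start within the first video, in seconds
    end_s: float    # clip end, in seconds

def generate_target_video(bars, target_indices):
    """Return the (start, end) spans of the clips indicated by the
    selected target progress bars, in the order they were selected."""
    return [(bars[i].start_s, bars[i].end_s) for i in target_indices]

bars = [ProgressBar(0, 180), ProgressBar(180, 300), ProgressBar(300, 600)]
# Selecting only the middle progress bar yields the 180 s - 300 s clip.
print(generate_target_video(bars, [1]))  # [(180, 300)]
```

Selecting several indices would return several spans in selection order, matching the compositing behavior described later.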
Further, in an embodiment of the present application, generating a target video according to a video clip indicated by a target progress bar of at least two first progress bars includes: receiving a first input of a user to a target progress bar in at least two first progress bars; and generating a target video based on the video clip indicated by the target progress bar in response to the first input.
The first input includes, but is not limited to, a single-click input, a double-click input, a slide input, a press input, and the like. The embodiment of the present application does not specifically limit the manner of the first input, which may be any realizable manner.
In this embodiment, the user may manually select the target progress bar, so as to generate the target video according to the video segment corresponding to the target progress bar. Specifically, a first input of a user selecting a target progress bar from at least two first progress bars is received, the target progress bar is determined based on the first input, and a video clip corresponding to the target progress bar is generated into a target video.
In the embodiment of the application, the corresponding video clip can be selected to generate the video through the input of the user to the at least two first progress bars, so that the user operation is facilitated, and the time for the user to process the video is saved.
Further, in an embodiment of the present application, before displaying the first progress control of the first video, the method further includes: receiving a second input of the user on the video recording interface of the first video; and determining at least one first video division point in response to the second input, where the display position of the first identifier is associated with the first video division point.
In this embodiment, while the first video is being recorded, a second input of the user on the video recording interface is received, and at least one first video division point is determined. The first video division point is associated with the display position of the first identifier; that is, the first video division points automatically divide the first video into a plurality of video segments.
It should be noted that the second input applied by the user on the video recording interface includes, but is not limited to, a single-click input, a double-click input, a slide input, a press input, a preset graphic input, a voice input, and the like. The embodiment of the present application does not specifically limit the manner of the second input, which may be any realizable manner. For example, while recording a first video, the user clicks the video recording interface at the 3rd minute of recording and again at the 5th minute; the 3rd minute and the 5th minute are then both determined to be first video division points, and these two division points divide the first video into 3 video segments.
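The worked example (taps at the 3rd and 5th minute yielding three segments) can be expressed with a small hypothetical helper; the function name and signature are illustrative, not from the patent:

```python
def split_by_division_points(duration_s, points_s):
    """Divide a recording of duration_s seconds at the marked first
    video division points; n points always yield n + 1 segments."""
    bounds = [0.0] + sorted(points_s) + [float(duration_s)]
    return list(zip(bounds[:-1], bounds[1:]))

# Taps at minute 3 and minute 5 of a 10-minute recording -> 3 segments.
print(split_by_division_points(600, [180.0, 300.0]))
# [(0.0, 180.0), (180.0, 300.0), (300.0, 600.0)]
```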
Illustratively, the user opens a camera application in the electronic device, selects the video recording function, and enters the video recording interface. As shown in fig. 2, a recording control 202 and a prompt control 204 are displayed on the video recording interface. The recording control 202 is used to start or end video recording; the prompt control 204 is used to display first prompt information, which prompts the user to start recording by clicking the recording control and to click the screen during recording to mark a first video division point.
After the user clicks the recording control 202, video recording starts. When recording reaches the start or end time point of a video segment the user requires, as shown in fig. 3, the user clicks the screen to mark a first video division point. When recording reaches the start or end time point of another required video segment, the user clicks the screen again to mark a second such division point, as shown in fig. 4. After recording a video, the user clicks the recording control 202 to end recording of the first video. The user can record the next video in the same way until all required videos are recorded.
After the user has recorded all the videos, as shown in fig. 5, the user clicks the first control 206 on the video recording interface and, as shown in fig. 6, enters the video editing interface. A first progress control for each recorded video is displayed in the video editing interface, and the first identifiers corresponding to the first video division points set by the user are displayed on the first progress control. For example, for a first video, a first progress control 604 of the first video is displayed, and at least one first identifier 606 corresponding to a first video division point is displayed on the first progress control 604. The first identifiers 606 divide the first progress control 604 into at least two first progress bars; for example, n first identifiers 606 divide the first progress control 604 into n+1 first progress bars, that is, divide the first video into n+1 video segments.
The display form (e.g., shape, color, transparency, etc.) of the first identifier 606 is not limited.
As shown in fig. 6, the first progress control 604 of the first video includes 4 first identifiers 606, indicating that the user determined 4 first video division points through input operations on the video recording interface while recording the first video. These identifiers divide the first progress control 604 into 5 first progress bars, so the first video is divided into 5 video segments.
The user double-clicks to select a target progress bar among the 5 first progress bars and then clicks the generation control 608, and the target video is generated according to the video clip corresponding to the target progress bar. It should be noted that the number of target progress bars the user selects from the 5 first progress bars is not limited; it may be 1 or more. When multiple target progress bars are selected, after the user clicks the generation control 608, the video clips corresponding to the target progress bars are composited in the order in which the user selected them, generating the target video.
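Compositing clips "in the order in which the user selected them" amounts to concatenating the selected spans into one output timeline. A hypothetical sketch (names and structure are illustrative assumptions, not the patent's implementation):

```python
def plan_composition(clips):
    """clips: (start, end) spans in the user's selection order. Returns
    where each source span lands in the synthesized target video."""
    timeline, offset = [], 0.0
    for start, end in clips:
        length = end - start
        timeline.append({"src": (start, end), "dst": (offset, offset + length)})
        offset += length
    return timeline

# Two selected clips: the 300-600 s clip first, then the 0-180 s clip.
for entry in plan_composition([(300.0, 600.0), (0.0, 180.0)]):
    print(entry)
```

The first clip occupies 0-300 s of the target video and the second occupies 300-480 s, so swapping the selection order swaps their placement.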
In addition, after selecting the target progress bar and before synthesizing the video, the user may click the preview control 610 to preview the effect of the synthesized video.
In the embodiment of the application, the first progress control of the first video is divided into a plurality of first progress bars by the first video division points set by the user during video recording, so that the corresponding video clips can be selected for compositing through input on the progress bars. In this way, the user sets the video division points while recording, and those division points are used to edit the video after recording, which improves the user's video-processing efficiency.
Further, in an embodiment of the present application, before displaying the first progress control of the first video, the method further includes: receiving a third input of the user on a first recording area in the video recording interface of the first video; determining a second video division point in response to the third input; receiving a fourth input of the user on a second recording area in the video recording interface of the first video; and determining a third video division point in response to the fourth input. The at least one first identifier includes a start identifier and an end identifier, where the start identifier is associated with the second video division point and the end identifier is associated with the third video division point.
In this embodiment, the video recording interface of the first video includes a first recording area and a second recording area, for example, the upper half area of the video recording interface is the first recording area, the lower half area is the second recording area, or the left half area of the video recording interface is the first recording area, and the right half area is the second recording area.
A third input of the user on the first recording area of the video recording interface is received, and a second video division point of the first video is marked according to the third input. A fourth input of the user on the second recording area of the video recording interface is received, and a third video division point of the first video is marked according to the fourth input. The at least one first identifier on the first progress control of the first video includes a start identifier and an end identifier: the second video division point corresponds to the start identifier, and the third video division point corresponds to the end identifier. The first progress control can therefore be divided according to the second and third video division points, that is, the first video is divided into video segments; the first progress bar between the second video division point and the third video division point is determined to be the target progress bar, and the target video is then generated according to the video segment corresponding to the target progress bar.
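Pairing start and end identifiers into target progress bars can be sketched as follows; the helper and its treatment of unmatched marks are assumptions for illustration, since the patent does not define this logic in code:

```python
def target_spans(marks):
    """marks: time-ordered ("start", t) / ("end", t) tuples recorded via
    taps on the first and second recording areas. Each start identifier
    is paired with the next end identifier; unmatched marks are ignored
    in this sketch."""
    spans, open_start = [], None
    for kind, t in marks:
        if kind == "start" and open_start is None:
            open_start = t
        elif kind == "end" and open_start is not None:
            spans.append((open_start, t))
            open_start = None
    return spans

# Two start/end pairs yield two target progress bars.
print(target_spans([("start", 10), ("end", 40), ("start", 90), ("end", 120)]))
# [(10, 40), (90, 120)]
```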
Illustratively, the user opens a camera application in the electronic device, selects the video recording function, and enters the video recording interface. As shown in fig. 2, a recording control 202 and a prompt control 204 are displayed on the video recording interface. The recording control 202 is used to start or end video recording; the prompt control 204 is used to display second prompt information, which prompts the user to start recording by clicking the recording control and, during recording, to click the first recording area of the interface to mark a second video division point and the second recording area to mark a third video division point.
After the user clicks the recording control 202, video recording starts. As shown in fig. 7, when recording reaches the start time point of a video segment the user requires, the user clicks the first recording area 702 to mark the start identifier corresponding to the second video division point. As shown in fig. 8, when recording reaches the end time point of the required video segment, the user clicks the second recording area 704 to mark the end identifier corresponding to the third video division point. After recording a video, the user clicks the recording control 202 to end the recording. The user can record the next video in the same way until all required videos are recorded.
In the embodiment of the application, the user can set the start identifier and the end identifier of a video clip of the first video through input on different areas of the video recording interface. After recording ends, the required video clip can then be determined automatically according to the start identifier and the end identifier, completing the video processing and improving the user's video-processing efficiency.
In addition, in some embodiments, the user may mark the second video division point through input to a start control displayed on the recording interface (which indicates marking of the second video division point), and mark the third video division point through input to an end control displayed on the recording interface (which indicates marking of the third video division point). Moreover, the marking of the second and third video division points may also be realized by different operations on the recording control 202; for example, sliding the recording control 202 to the left marks the second video division point, and sliding it to the right marks the third video division point.
Further, in an embodiment of the present application, generating a target video according to a video clip indicated by a target progress bar of at least two first progress bars includes: and generating a target video according to the video segment indicated by the target progress bar between the starting mark and the ending mark.
In this embodiment, according to the video division points set by the user while recording the video, a start identifier and an end identifier are correspondingly determined on the first progress control of the first video. The first progress bar between the start identifier and the end identifier is determined to be the target progress bar, the required video segment is thereby determined automatically, and the target video is generated.
Illustratively, after the user has recorded the video, as shown in fig. 5, the user clicks the first control 206 on the video recording interface and, as shown in fig. 9, enters the video editing interface. A first progress control for each recorded video is displayed in the video editing interface, and the start identifier corresponding to the second video division point and the end identifier corresponding to the third video division point set by the user are displayed on the first progress control; the start identifier and the end identifier differ in display form (such as shape, color, or transparency).
For example, for a first video, 4 marks are displayed on the first progress control 604 of the first video, which are a solid point a, an open point b, a solid point c, and an open point d, respectively, where the solid point represents a start mark marked by a user clicking a first recording area in the video recording interface of the first video, and the open point represents an end mark marked by the user clicking a second recording area in the video recording interface of the first video. And automatically taking the first progress bar between the solid point a and the hollow point b and the first progress bar between the solid point c and the hollow point d as target progress bars. After the user clicks on the generate control 608, a target video is generated that includes the video clip indicated by the target progress bar.
In addition, after selecting the target progress bar and before synthesizing the video, the user can click the preview control 610 to preview the effect of the synthesized video.
In the embodiment of the application, the user sets the start identifier and the end identifier through input on different areas of the video recording interface, and the required video segments are then automatically determined according to the start identifier and the end identifier, completing the video processing without the user manually selecting video segments and further reducing the user's operation steps.
In some embodiments, after the user sets the start identifier and the end identifier and the video is recorded, the target video may be generated after the user selects a target progress bar located between the start identifier and the end identifier.
Further, in an embodiment of the present application, the video processing method further includes: receiving a fifth input of the user on the first identifier; and in response to the fifth input, adjusting the display position of the first identifier on the first progress control.
In this embodiment, the user may manually adjust the display position of the first identifier on the first progress control, thereby adjusting the first progress bars delimited by the first identifier and, in turn, the video clips that those progress bars correspond to.
Through the mode, the user can adjust the video clip of the first video based on the display position of the first identification on the first progress control, so that the flexibility of adjusting the video clip is improved, and the video processing efficiency is improved.
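Adjusting an identifier's display position amounts to moving one division point. The clamping between neighbouring points below is an assumption of this sketch (keeping the progress bars well-formed), not something the patent states:

```python
def move_identifier(points, index, new_t, duration):
    """Move the division point at `index` to time new_t, clamped to stay
    between its neighbouring points and inside the video."""
    lo = points[index - 1] if index > 0 else 0.0
    hi = points[index + 1] if index + 1 < len(points) else duration
    moved = list(points)
    moved[index] = min(max(new_t, lo), hi)
    return moved

# Dragging the second identifier of [180, 300] towards 700 s in a
# 600 s video clamps it to the video's end.
print(move_identifier([180.0, 300.0], 1, 700.0, 600.0))  # [180.0, 600.0]
```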
Further, in an embodiment of the present application, generating a target video according to a video clip indicated by a target progress bar of at least two first progress bars includes: and generating a target video according to the video clip indicated by the target progress bar in the at least two first progress bars and the video clip of the second video.
In this embodiment, a video clip of the first video may be composited not only with other clips of the first video, but also with video clips of other videos (i.e., a second video).
Additionally, the video clip of the second video may be operated on in the same manner as the video clip of the first video. For example, as shown in fig. 6, a first identifier is displayed on the progress control 616 of the second video, corresponding to a first video division point marked by the user clicking the screen on the video recording interface of the second video. The first identifier divides the progress control of the second video into a plurality of progress bars; after the user selects a target progress bar, the video clip corresponding to that target progress bar is composited with the video clip corresponding to the target progress bar of the first video to generate the target video.
Alternatively, as shown in fig. 9, a solid point e (i.e., a start identifier) and an open point f (i.e., an end identifier) are displayed on the progress control 616 of the second video. These identifiers correspond to a second video division point and a third video division point marked by the user clicking the first recording area and the second recording area on the video recording interface of the second video. The progress bar between the solid point e and the open point f is automatically taken as a target progress bar.
In this way, a video clip of the first video can be synthesized with video clips of other videos, improving the flexibility of video synthesis.
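A minimal sketch of the cross-video synthesis step above: the clip selected via each video's target progress bar is appended, in order, to one output list. The `Segment` type and the file names are hypothetical, introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    source: str     # which source video the clip comes from (hypothetical field)
    start_s: float  # clip start time within that video, seconds
    end_s: float    # clip end time, seconds

def compose(clips):
    """Return the clips in synthesis order, rejecting empty ranges."""
    for c in clips:
        if c.end_s <= c.start_s:
            raise ValueError(f"empty or inverted clip: {c}")
    return list(clips)

# The target progress bar of the first video, then of the second video.
target = compose([
    Segment("first_video.mp4", 2.0, 5.0),
    Segment("second_video.mp4", 1.0, 4.5),
])
total_s = sum(c.end_s - c.start_s for c in target)
```

An actual implementation would hand this ordered clip list to a media framework for decoding and re-encoding; the sketch covers only the selection and ordering logic the embodiment describes.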
Further, in an embodiment of the present application, the video processing method further includes: displaying a first video thumbnail, where the first video thumbnail is a thumbnail of the video clip indicated by the first progress bar.
In this embodiment, the first video thumbnail of the video segment indicated by the first progress bar is displayed, so that the user can accurately and quickly identify the video content corresponding to that progress bar through the prompt of the first video thumbnail.
It should be noted that the thumbnail may include a frame at the start time of the video segment, a frame at the end time of the video segment, or an image at one or more times between the start time and the end time of the video segment. The number of first video thumbnails corresponding to one first progress bar is not limited.
Illustratively, a video thumbnail is displayed at a position corresponding to each progress bar. For example, as shown in fig. 6, a thumbnail 612 corresponding to each first progress bar is displayed above the first progress control 604 of the first video to represent the video content of that progress bar.
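The frame-sampling choices noted above (start frame, end frame, or frames in between) can be modelled as picking evenly spaced timestamps inside the segment. `thumbnail_times` is an assumed helper name, not part of the patent; a renderer would then extract one frame per returned time.

```python
def thumbnail_times(start_s, end_s, count):
    """Evenly spaced sample times in [start_s, end_s], inclusive of both
    endpoints; with count == 1 only the start frame is sampled."""
    if count < 1:
        raise ValueError("count must be >= 1")
    if count == 1:
        return [start_s]
    step = (end_s - start_s) / (count - 1)
    return [start_s + i * step for i in range(count)]

# Start, midpoint, and end frames of a segment spanning 2 s to 6 s.
times = thumbnail_times(2.0, 6.0, 3)
```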
Further, in one embodiment of the present application, the video editing interface of the first video includes a first editing area and a second editing area; at least one first progress bar is displayed in the first editing area, and the target progress bar is displayed in the second editing area. Generating the target video includes: receiving a sixth input of the user; and in response to the sixth input, updating the display order of the target progress bars, and generating the target video according to the display order of the target progress bars.
In this embodiment, the video editing interface of the first video includes a first editing region and a second editing region, and illustratively, as shown in fig. 6 and 9, the video editing interface includes a first editing region 602 and a second editing region 614.
At least one first progress bar is displayed in the first editing area of the video editing interface of the first video, and the target progress bar is displayed in the second editing area. That is, after the user selects a target progress bar from the at least one first progress bar in the first editing area, that target progress bar is displayed in the second editing area. Displaying the first progress bars and the target progress bars in different areas allows the user to identify the selected target progress bar more intuitively.
It should be noted that, in some embodiments, a second video thumbnail of the target progress bar may also be displayed in the second editing area, and the second video thumbnail is a thumbnail of the video segment indicated by the target progress bar.
Further, the user can make a sixth input in the second editing region to update the display order of the target progress bar, thereby generating the target video in the updated display order.
For example, as shown in fig. 10, the target progress bars in the second editing region 614 are arranged in the order target progress bar 1, target progress bar 2, target progress bar 3. As shown in fig. 11, the user adjusts the order to target progress bar 1, target progress bar 3, target progress bar 2 and clicks the generation control 608; video synthesis is then performed in the order of the video clips corresponding to target progress bar 1, target progress bar 3, and target progress bar 2 to generate the target video.
In the embodiment of the application, adjusting the display order of the target progress bars in the second editing area adjusts the synthesis order of the video clips, improving the flexibility of video processing.
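The reordering shown in figs. 10 and 11 amounts to indexing the selected-clip table by the user's display order. The dictionary layout below is assumed purely for illustration.

```python
# Clips selected into the second editing area, keyed by target progress bar
# number (tuple fields: label, start time, end time; layout is assumed).
clips = {
    1: ("clip-1", 0.0, 3.0),
    2: ("clip-2", 3.0, 7.0),
    3: ("clip-3", 7.0, 9.0),
}

# The sixth input: the user drags bar 3 ahead of bar 2 (fig. 11).
display_order = [1, 3, 2]

# Synthesis then follows the display order, as the embodiment describes.
composition = [clips[i] for i in display_order]
```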
Further, in an embodiment of the present application, the video processing method further includes: receiving a seventh input of the target progress bar in the second editing area by the user; responding to a seventh input, displaying a second progress control of the video clip corresponding to the target progress bar, wherein the second progress control comprises at least one second identifier, and the at least one second identifier marks the second progress control as at least two second progress bars; receiving an eighth input of the second identifier by the user; in response to an eighth input, adjusting a position of the second marker on the second progress control.
In this embodiment, the user makes a seventh input to the target progress bar in the second editing region, thereby displaying the second progress control of its corresponding video clip. The user can further move the second identifier on the second progress control, so as to finely adjust the length of the video clip.
Illustratively, as shown in fig. 12, the user clicks target progress bar 2 displayed in the second editing region 614. As shown in fig. 13, a second progress control 618 of the video clip corresponding to target progress bar 2 is then displayed, with a second identifier 620 and a second identifier 622 on it. As shown in fig. 14, the user adjusts the positions of the second identifiers 620 and 622, thereby adjusting the length of the video segment. Finally, the target video is generated from the video segment between the second identifier 620 and the second identifier 622.
In this way, the length of a video clip can be adjusted, improving the flexibility of video editing.
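Moving the two second identifiers (the eighth input) can be modelled as mapping their positions along the second progress control back into times within the clip. The fractional-position representation and the function name are assumptions for this sketch.

```python
def trim(clip_start_s, clip_end_s, marker_a, marker_b):
    """Map two second-identifier positions (fractions 0..1 along the second
    progress control) to an absolute (start, end) pair inside the clip.
    Order-insensitive, so the markers may be dragged past each other."""
    lo, hi = sorted((marker_a, marker_b))
    dur = clip_end_s - clip_start_s
    return clip_start_s + lo * dur, clip_start_s + hi * dur

# A clip originally spanning 10 s to 20 s; identifiers at 20% and 80%
# of the second progress control keep the 12 s to 18 s portion.
new_start, new_end = trim(10.0, 20.0, 0.8, 0.2)
```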
In the video processing method provided by the embodiments of the present application, the execution subject may be a video processing apparatus. In the embodiments of the present application, a video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided herein.
An embodiment of the present application provides a video processing apparatus, as shown in fig. 15, the video processing apparatus 1500 includes:
the display module 1502 is configured to display a first progress control of a first video, where the first progress control includes at least one first identifier, and the at least one first identifier marks the first progress control as at least two first progress bars, where the first progress bars are used to indicate video clips of the first video;
the processing module 1504 is configured to generate a target video according to the video clip indicated by the target progress bar in the at least two first progress bars.
In this embodiment, the first video has a corresponding first progress control for indicating the playing progress of the first video. The first progress control comprises at least one first identifier, the at least one first identifier can mark the first progress control into at least two first progress bars, and each first progress bar is used for indicating one video clip of the first video. Furthermore, the target video can be generated according to the video clip indicated by the target progress bar in the at least two first progress bars. Specifically, the video segment indicated by the target progress bar may be used as the target video, or the video segment indicated by the target progress bar may be synthesized with other video segments to generate the target video. According to the method and the device, the corresponding video clip is obtained based on the progress bar on the first progress control of the first video, so that the target video required by the user is obtained; this approach simplifies the user's video processing operations and saves the user time.
Further, in an embodiment of the present application, the video processing apparatus 1500 further includes: the receiving module is used for receiving first input of a user to a target progress bar in at least two first progress bars; the processing module 1504 is specifically configured to generate a target video based on the video clip indicated by the target progress bar in response to the first input.
Further, in an embodiment of the application, the receiving module is further configured to receive a second input of the user to the video recording interface of the first video; the video processing apparatus 1500 further includes: a determination module for determining at least one first video segmentation point in response to a second input; and the display position of the first identifier has an association relation with the first video division point.
Further, in an embodiment of the application, the receiving module is further configured to receive a third input of the user to the first recording area in the video recording interface of the first video; a determining module, further configured to determine a second video segmentation point in response to a third input; the receiving module is further used for receiving a fourth input of the user to a second recording area in the video recording interface of the first video; a determination module, further configured to determine a third video segmentation point in response to a fourth input; the at least one first mark comprises a starting mark and an ending mark, the starting mark and the second video segmentation point have an association relation, and the ending mark and the third video segmentation point have an association relation.
Further, in an embodiment of the present application, the processing module 1504 is specifically configured to generate a target video according to a video segment indicated by the target progress bar between the start identifier and the end identifier.
Further, in an embodiment of the present application, the receiving module is further configured to receive a fifth input of the first identifier by the user; the processing module 1504 is further configured to adjust a display position of the first identifier on the first progress control in response to a fifth input.
Further, in an embodiment of the present application, the processing module 1504 is specifically configured to generate the target video according to the video segment indicated by the target progress bar in the at least two first progress bars and the video segment of the second video.
Further, in an embodiment of the present application, the display module 1502 is further configured to display a first video thumbnail, where the first video thumbnail is a thumbnail of a video clip indicated by the first progress bar.
Further, in one embodiment of the present application, the video editing interface of the first video includes a first editing area and a second editing area; the display module 1502 is further configured to display at least one first progress bar in the first editing region, and display the target progress bar in the second editing region; the receiving module is further configured to receive a sixth input of the user; the processing module 1504 is specifically configured to update the display order of the target progress bars in response to the sixth input, and generate the target video according to the display order of the target progress bars.
The video processing apparatus 1500 in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like; the embodiments of the present application are not specifically limited.
The video processing apparatus 1500 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The video processing apparatus 1500 provided in this embodiment of the application can implement each process implemented in the video processing method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 16, an electronic device 1600 is further provided in this embodiment of the present application, and includes a processor 1602, a memory 1604, and a program or an instruction stored in the memory 1604 and executable on the processor 1602, where the program or the instruction is executed by the processor 1602 to implement each process of the video processing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 17 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1700 includes, but is not limited to: radio frequency unit 1702, network module 1704, audio output unit 1706, input unit 1708, sensor 1710, display unit 1712, user input unit 1714, interface unit 1716, memory 1718, and processor 1720.
Those skilled in the art will appreciate that the electronic device 1700 may also include a power supply (e.g., a battery) for powering the various components, and that the power supply may be logically coupled to the processor 1720 via a power management system configured to manage charging, discharging, and power consumption. The electronic device structure shown in fig. 17 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described further here.
The display unit 1712 is configured to display a first progress control of a first video, where the first progress control includes at least one first identifier, the at least one first identifier marks the first progress control as at least two first progress bars, and the first progress bars are used to indicate video clips of the first video; a processor 1720 for generating a target video according to the video segment indicated by the target progress bar of the at least two first progress bars.
In this embodiment, the first video has a corresponding first progress control for indicating the playing progress of the first video. The first progress control comprises at least one first identifier, the at least one first identifier can mark the first progress control into at least two first progress bars, and each first progress bar is used for indicating one video clip of the first video. Furthermore, the target video can be generated according to the video clip indicated by the target progress bar in the at least two first progress bars. Specifically, the video segment indicated by the target progress bar may be used as the target video, or the video segment indicated by the target progress bar may be synthesized with other video segments to generate the target video. According to the method and the device, the corresponding video clip is obtained based on the progress bar on the first progress control of the first video, so that the target video required by the user is obtained; this approach simplifies the user's video processing operations and saves the user time.
Further, in an embodiment of the present application, the user input unit 1714 is configured to receive a first input of a user to a target progress bar of the at least two first progress bars; the processor 1720 is specifically configured to generate a target video based on the video segment indicated by the target progress bar in response to the first input.
Further, in an embodiment of the present application, the user input unit 1714 is further configured to receive a second input of the video recording interface of the first video from the user; processor 1720, further configured to determine at least one first video segmentation point in response to a second input; and the display position of the first identifier has an association relation with the first video division point.
Further, in an embodiment of the present application, the user input unit 1714 is further configured to receive a third input of the user to the first recording area in the video recording interface of the first video; processor 1720, further configured to determine a second video segmentation point in response to a third input; the user input unit 1714 is further configured to receive a fourth input of the user to the second recording area in the video recording interface of the first video; processor 1720, further configured to determine a third video segmentation point in response to a fourth input; the at least one first mark comprises a starting mark and an ending mark, the starting mark and the second video segmentation point have an association relation, and the ending mark and the third video segmentation point have an association relation.
Further, in an embodiment of the present application, the processor 1720 is specifically configured to generate a target video according to a video segment indicated by the target progress bar between the start identifier and the end identifier.
Further, in an embodiment of the present application, the user input unit 1714 is further configured to receive a fifth input of the first identifier from the user; processor 1720, further configured to adjust a display position of the first indicator on the first progress control in response to a fifth input.
Further, in an embodiment of the present application, the processor 1720 is specifically configured to generate the target video according to a video segment indicated by the target progress bar in the at least two first progress bars and a video segment of the second video.
Further, in an embodiment of the present application, the display unit 1712 is configured to display a first video thumbnail, where the first video thumbnail is a thumbnail of a video clip indicated by the first progress bar.
Further, in one embodiment of the present application, the video editing interface of the first video includes a first editing area and a second editing area; the display unit 1712 is further configured to display at least one first progress bar in the first editing region, and display the target progress bar in the second editing region; the user input unit 1714 is further configured to receive a sixth input from the user; the processor 1720 is specifically configured to update the display order of the target progress bars in response to the sixth input, and generate the target video according to the display order of the target progress bars.
It should be understood that in the embodiment of the present application, the input Unit 1708 may include a Graphics Processing Unit (GPU) 17082 and a microphone 17084, and the Graphics Processing Unit 17082 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1712 may include a display panel 17122, and the display panel 17122 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1714 includes at least one of a touch panel 17142 and other input devices 17144. A touch panel 17142, also referred to as a touch screen. The touch panel 17142 may include two portions of a touch detection device and a touch controller. Other input devices 17144 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1718 may be used to store software programs as well as various data. The memory 1718 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, an application program or instructions required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 1718 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1718 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1720 may include one or more processing units; optionally, processor 1720 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It is to be appreciated that the modem processor described above may not be integrated into processor 1720.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. Readable storage media, including computer-readable storage media, such as Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, etc.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the video processing method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing video processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A video processing method, comprising:
displaying a first progress control of a first video, wherein the first progress control comprises at least one first identifier, the at least one first identifier marks the first progress control into at least two first progress bars, and the first progress bars are used for indicating video clips of the first video;
and generating a target video according to the video clip indicated by the target progress bar in the at least two first progress bars.
2. The video processing method according to claim 1, wherein the generating a target video according to the video segment indicated by the target progress bar in the at least two first progress bars comprises:
receiving a first input of a user to a target progress bar in the at least two first progress bars;
and generating a target video based on the video clip indicated by the target progress bar in response to the first input.
3. The video processing method of claim 1, wherein before displaying the first progress control of the first video, further comprising:
receiving a second input of the user to the video recording interface of the first video;
determining at least one first video segmentation point in response to the second input;
wherein the display position of the first identifier has an association relation with the first video division point.
4. The video processing method of claim 1, wherein prior to the displaying the first progress control of the first video, further comprising:
receiving a third input of a user to a first recording area in a video recording interface of the first video;
determining a second video segmentation point in response to the third input;
receiving a fourth input of a user to a second recording area in the video recording interface of the first video;
determining a third video segmentation point in response to the fourth input;
wherein the at least one first identifier comprises a start identifier and an end identifier, the start identifier has an association relationship with the second video segmentation point, and the end identifier has an association relationship with the third video segmentation point.
5. The video processing method according to claim 4, wherein the generating a target video according to the video segment indicated by the target progress bar of the at least two first progress bars comprises:
and generating a target video according to the video segment indicated by the target progress bar between the starting mark and the ending mark.
6. The video processing method according to any one of claims 1 to 5, further comprising:
receiving a fifth input of the first identifier by the user;
in response to the fifth input, adjusting a display position of the first indicia on the first progress control.
7. The video processing method according to any one of claims 1 to 5, wherein the generating a target video according to the video segment indicated by the target progress bar of the at least two first progress bars comprises:
and generating a target video according to the video clip indicated by the target progress bar in the at least two first progress bars and the video clip of the second video.
8. The video processing method according to any one of claims 1 to 5, further comprising:
and displaying a first video thumbnail which is a thumbnail of the video clip indicated by the first progress bar.
9. The video processing method according to any one of claims 1 to 5, wherein the video editing interface of the first video includes a first editing area and a second editing area; the video processing method further comprises:
displaying the at least one first progress bar in the first editing area, and displaying the target progress bar in the second editing area;
the generating the target video comprises:
receiving a sixth input of the user;
and in response to the sixth input, updating the display order of the target progress bars, and generating a target video according to the display order of the target progress bars.
10. A video processing apparatus, comprising:
the display module is used for displaying a first progress control of a first video, the first progress control comprises at least one first identifier, the at least one first identifier marks the first progress control into at least two first progress bars, and the first progress bars are used for indicating video clips of the first video;
and the processing module is used for generating a target video according to the video clip indicated by the target progress bar in the at least two first progress bars.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the video processing method according to any one of claims 1 to 9.
CN202210059891.2A 2022-01-19 2022-01-19 Video processing method, video processing device and electronic equipment Pending CN114390356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210059891.2A CN114390356A (en) 2022-01-19 2022-01-19 Video processing method, video processing device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210059891.2A CN114390356A (en) 2022-01-19 2022-01-19 Video processing method, video processing device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114390356A true CN114390356A (en) 2022-04-22

Family

ID=81204126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210059891.2A Pending CN114390356A (en) 2022-01-19 2022-01-19 Video processing method, video processing device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114390356A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150293675A1 (en) * 2014-04-10 2015-10-15 JBF Interlude 2009 LTD - ISRAEL Dynamic timeline for branched video
CN105430508A (en) * 2015-11-27 2016-03-23 华为技术有限公司 Video play method and device
CN106412706A (en) * 2016-09-28 2017-02-15 北京小米移动软件有限公司 Video playing control, apparatus and device
CN107426583A (en) * 2017-06-16 2017-12-01 广州视源电子科技股份有限公司 Video editing method, server and audio/video player system based on focus
CN109151553A (en) * 2018-09-29 2019-01-04 传线网络科技(上海)有限公司 Display control method and device, electronic equipment and storage medium
CN111770386A (en) * 2020-05-29 2020-10-13 维沃移动通信有限公司 Video processing method, video processing device and electronic equipment
CN112004136A (en) * 2020-08-25 2020-11-27 广州市百果园信息技术有限公司 Method, device, equipment and storage medium for video clipping
CN112040280A (en) * 2020-08-20 2020-12-04 连尚(新昌)网络科技有限公司 Method and equipment for providing video information
CN112188307A (en) * 2019-07-03 2021-01-05 腾讯科技(深圳)有限公司 Video resource synthesis method and device, storage medium and electronic device
WO2021008055A1 (en) * 2019-07-17 2021-01-21 广州酷狗计算机科技有限公司 Video synthesis method and apparatus, and terminal and storage medium
CN112287165A (en) * 2020-10-29 2021-01-29 深圳市艾酷通信软件有限公司 File processing method and device


Similar Documents

Publication Publication Date Title
CN109525884A (en) Video paster adding method, device, equipment and storage medium based on split screen
CN112269522A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN113918522A (en) File generation method and device and electronic equipment
CN114756154A (en) File editing method and device
CN112887794B (en) Video editing method and device
CN111885298B (en) Image processing method and device
CN111638839A (en) Screen capturing method and device and electronic equipment
CN114302009A (en) Video processing method, video processing device, electronic equipment and medium
CN115344159A (en) File processing method and device, electronic equipment and readable storage medium
CN114679546A (en) Display method and device, electronic equipment and readable storage medium
CN114390356A (en) Video processing method, video processing device and electronic equipment
CN114845171A (en) Video editing method and device and electronic equipment
CN112162805B (en) Screenshot method and device and electronic equipment
CN115097979A (en) Icon management method and icon management device
CN115543137A (en) Video playing method and device
CN115016686A (en) File selection method and device, electronic equipment and readable storage medium
CN114564921A (en) Document editing method and device
CN114584704A (en) Shooting method and device and electronic equipment
CN114025237A (en) Video generation method and device and electronic equipment
CN114063854A (en) File editing processing method and device and electronic equipment
CN114500844A (en) Shooting method and device and electronic equipment
CN113096686A (en) Audio processing method and device, electronic equipment and storage medium
CN112015310A (en) Cover for acquiring electronic icon, cover setting method and device and electronic equipment
CN114390205A (en) Shooting method and device and electronic equipment
CN115334242A (en) Video recording method, video recording device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination