CN117097945A - Video processing method and terminal - Google Patents

Video processing method and terminal

Info

Publication number
CN117097945A
Authority
CN
China
Prior art keywords
input
video
editing
identifier
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311051404.9A
Other languages
Chinese (zh)
Inventor
严琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311051404.9A
Publication of CN117097945A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47217: End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video processing method and a terminal, belonging to the technical field of images. The method comprises the following steps: receiving a first input of a user on an editing progress bar when the editing progress bar is displayed on a playing interface of a first video; in response to the first input, displaying a first identifier on the editing progress bar, the first identifier being used to determine a first image frame or a first video clip in the first video; receiving a second input from the user; and in response to the second input, editing the first image frame or the first video clip in the first video to obtain a second video, wherein a playing interface of the second video comprises a second identifier used to indicate that the first image frame or the first video clip has been edited.

Description

Video processing method and terminal
Technical Field
The application belongs to the technical field of images, and particularly relates to a video processing method and a terminal.
Background
Currently, with the development of video technology, watching video has become a main form of online leisure for users. Accordingly, users' interest in video editing is also increasing. Common video editing operations include cutting and splicing a video file, adding text, adding pictures, and the like.
With the continued rise of video editing among users of all ages, many users want to take part in it. However, current video editing techniques are too cumbersome for the average user to operate.
Disclosure of Invention
The embodiment of the application aims to provide a video processing method and a terminal, which can solve the problem of complex operation when editing video.
In a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a first input of a user on an editing progress bar when the editing progress bar is displayed on a playing interface of a first video;
in response to the first input, displaying a first identifier on the editing progress bar, the first identifier being used to determine a first image frame or a first video clip in the first video;
receiving a second input from the user; and
in response to the second input, editing the first image frame or the first video clip in the first video to obtain a second video, wherein a playing interface of the second video comprises a second identifier used to indicate that the first image frame or the first video clip has been edited.
In a second aspect, an embodiment of the present application provides a terminal, including:
a receiving module, configured to receive a first input of a user on an editing progress bar when the editing progress bar is displayed on a playing interface of a first video;
a display module, configured to display, in response to the first input, a first identifier on the editing progress bar, the first identifier being used to determine a first image frame or a first video clip in the first video;
the receiving module being further configured to receive a second input from the user; and
an editing module, configured to edit, in response to the second input, the first image frame or the first video clip in the first video to obtain a second video, wherein a playing interface of the second video comprises a second identifier used to indicate that the first image frame or the first video clip has been edited.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, an editing progress bar is displayed to facilitate editing by the user. When the editing progress bar is displayed on the playing interface of the first video, a first input of the user on the editing progress bar is received; the first input indicates the first image frame or the first video clip that the user wants to edit. In response to the first input, a first identifier used to determine the first image frame or the first video clip in the first video is displayed on the editing progress bar. Through the first identifier, the user can view and confirm the first image frame or the first video clip to be edited, so that it can be determined intuitively and clearly, which facilitates the subsequent editing. Then, in response to a second input of the user, the first image frame or the first video clip in the first video is edited to obtain the second video; in this way, the editing can be done quickly and conveniently, reducing the complexity and difficulty of video editing. The playing interface of the second video comprises a second identifier indicating that the first image frame or the first video clip has been edited, so that a viewer can quickly and intuitively learn that the second video has been edited.
Drawings
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an editing progress bar according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a folding screen display interface according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a first identifier provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a first video clip according to an embodiment of the present application;
FIG. 6 is a schematic diagram of adjusting a first identifier according to an embodiment of the present application;
FIG. 7 is a schematic diagram of editing an image frame according to an embodiment of the present application;
FIG. 8 is a schematic diagram of editing a first image frame according to an embodiment of the present application;
FIG. 9 is a schematic diagram of editing video clips according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a copy operation provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a cancel copy operation provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a delete operation provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of an editing progress bar corresponding to a delete operation according to an embodiment of the present application;
fig. 14 is a block diagram of a terminal according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a hardware architecture of an electronic device according to an embodiment of the present application;
fig. 16 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the accompanying drawings of the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description of the present application, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The video processing method provided by the embodiment of the application can be at least applied to the following application scenes, and is explained below.
At present, with the continuous development of video technology, the elements that can be inserted during video playing are increasingly rich; in particular, during online playing, the viewing experience can be shared with users watching the same video through comment bullet screens (danmaku) and other forms.
However, the video playing mode is still monotonous and cannot meet users' personalized viewing needs. To address these problems in the related art, the embodiments of the application provide a video processing method and a terminal, which can solve the problem of cumbersome operation when editing video in the related art.
The video processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application.
As shown in fig. 1, the video processing method may include steps 110 to 140, where the method is applied to a terminal, and specifically shown as follows:
step 110, receiving a first input of a user to the editing progress bar under the condition that the editing progress bar is displayed on a playing interface of the first video.
An editing progress bar is displayed so that the user can conveniently edit the video through the editing progress bar.
Wherein the editing progress bar may be displayed in response to user input to the first video. The editing progress bar may be displayed at a preset position of the playing interface of the first video. The preset position may be a bottom position of a playing interface of the first video. The editing progress bar may be hidden by default and then displayed based on a user's touch input to the video playback interface.
In one possible embodiment, the method may further comprise the steps of:
receiving eighth input of a user on the editing progress bar; in response to the eighth input, the display position of the editing progress bar is updated or the editing progress bar is hidden.
As shown in fig. 2 (a), the video playing interface of the first video may display a playing progress bar 100 and an editing progress bar 200. The user can check the playing progress through the playing progress bar and edit the video based on the editing progress bar. As shown in fig. 2 (b), the editing progress bar may be displayed as a dashed line in response to a long-press input on the editing progress bar 200.
As shown in fig. 2 (b), in response to a drag input to the editing progress bar, the display position of the editing progress bar may be updated, and along with the drag input by the user, the editing progress bar displayed at the bottom position of the playback interface of the first video may be moved to positions corresponding to the other three directions. The positions corresponding to the other three directions may include: an upper position, a left position, and a right position of a playback interface of the first video.
When the display position of the editing progress bar is dragged close to an edge area of the screen, the editing progress bar automatically snaps to the edge, i.e., automatically moves into the corresponding area 210, as shown in fig. 2 (c).
On a non-folding electronic device, or on the outer screen of a folding-screen electronic device, playing of the first video is paused in response to the user's input on the editing progress bar. On a folding-screen electronic device, as shown in fig. 3, the operation interface 310 of the first video and the playing interface 320 of the first video are displayed in split screen, and playing of the first video does not need to be paused.
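The snap-to-edge behavior just described can be sketched as a simple threshold test on the drag position. This is only an illustrative sketch; the names and the 48-pixel threshold are assumptions, not taken from the patent:

```kotlin
// Hypothetical sketch of the edge-snapping logic for the editing progress bar.
enum class Dock { TOP, BOTTOM, LEFT, RIGHT, NONE }

fun dockFor(x: Float, y: Float, screenW: Float, screenH: Float, threshold: Float = 48f): Dock = when {
    y <= threshold           -> Dock.TOP     // dragged near the upper edge
    y >= screenH - threshold -> Dock.BOTTOM  // near the lower edge (default position)
    x <= threshold           -> Dock.LEFT
    x >= screenW - threshold -> Dock.RIGHT
    else                     -> Dock.NONE    // keep following the finger
}
```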
In response to the first input, a first identification is displayed on the editing progress bar, the first identification being used to determine a first image frame or a first video clip in the first video, step 120.
Wherein, when the first identifier is used to determine the first image frame in the first video, as shown in fig. 4, a thumbnail of the first image frame may be displayed above the first identifier 410 of the editing progress bar.
When the first identifier is used to determine the first video segment in the first video, a third identifier (introduced below) and the first identifier are used together to indicate the first video segment in the first video.
In one possible embodiment, after step 120, the following steps may be further included:
Receiving a fifth input of a user on the editing progress bar; in response to the fifth input, a third identifier is displayed on the editing progress bar, the display position of the third identifier being different from the display position of the first identifier, the third identifier and the first identifier indicating a first video segment in the first video.
Wherein the first identifier is used for determining a start frame of the first video segment, and the third identifier is used for determining an end frame of the first video segment; alternatively, the first identifier is used to determine an end frame of the first video segment and the third identifier is used to determine a start frame of the first video segment. The first and third identifications are used to indicate a first video segment in the first video.
As shown in fig. 5, the user performs a sliding operation over the editing progress bar, and in response to a fifth input of the editing progress bar by the user, a third identifier 520 is displayed on the editing progress bar, and a time point a corresponds to the first identifier 510; time point B corresponds to third identification 520.
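How one marker selects a frame while a pair of markers selects a segment can be pictured with a small data model. The following is only a sketch under assumed names, not the patent's implementation; taking the minimum and maximum of the two marker times covers both cases, i.e. the first identifier marking either the start frame or the end frame:

```kotlin
// Illustrative model: a single marker picks one frame; two markers pick the
// segment between them, whichever of the two was placed first.
data class Marker(val timeMs: Long)  // the first or the third identifier

data class SegmentSelection(val a: Marker, val b: Marker) {
    val startMs get() = minOf(a.timeMs, b.timeMs)  // start frame of the segment
    val endMs   get() = maxOf(a.timeMs, b.timeMs)  // end frame of the segment
}
```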
In one possible embodiment, the first identifier comprises a first sub-identifier and a second sub-identifier; after step 120, the following steps may also be included:
receiving a fourth input of the first identifier by the user; in response to the fourth input, the display positions of the first sub-identifier and the second sub-identifier are updated.
As shown in fig. 6, the upper and lower endpoints of the first identifier may be the first sub-identifier and the second sub-identifier, respectively.
It will be appreciated that the fourth input to the first identifier is an input to both the first sub-identifier 610 and the second sub-identifier 620, for example the user pressing and holding the first sub-identifier 610 and the second sub-identifier 620 with two fingers simultaneously and rotating them; in response to the fourth input, the display positions of the first sub-identifier 610 and the second sub-identifier 620 are updated.
The first sub-identifier is used for determining an image frame before an image frame corresponding to the current position of the first identifier; the second sub-identifier is used for determining an image frame after the image frame corresponding to the current position of the first identifier.
When the display positions of the first sub-identifier and the second sub-identifier are updated, the functions corresponding to the first sub-identifier and the second sub-identifier are exchanged: the first sub-identifier is then used to determine an image frame after the image frame corresponding to the current position of the first identifier, and the second sub-identifier is used to determine an image frame before the image frame corresponding to the current position of the first identifier.
Step 130, a second input is received from the user.
In one possible embodiment, the first identifier comprises a first sub-identifier and a second sub-identifier; prior to step 130, the method may further include the following steps:
Receiving a third input of a user to the first sub-identifier or the second sub-identifier;
in response to the third input, updating a location of the first identifier on the editing progress bar and updating an image frame in the first video indicated by the first identifier;
wherein the functions corresponding to the first sub-identifier and the second sub-identifier are different.
The functions corresponding to the first sub-identifier and the second sub-identifier are different, that is, the first sub-identifier is used for determining an image frame before an image frame corresponding to the current position of the first identifier, and the second sub-identifier is used for determining an image frame after the image frame corresponding to the current position of the first identifier, or vice versa.
As shown in fig. 6, the upper and lower endpoints of the first identifier may be a first sub-identifier 610 and a second sub-identifier 620, respectively.
In response to a third input by the user of the first sub-identifier or the second sub-identifier, a position of the first identifier on the editing progress bar is updated.
Upon receiving a third input of the user on the first sub-identifier, the position of the first identifier on the editing progress bar is updated from a first position to a second position, the time point corresponding to the second position being before the time point corresponding to the first position, and the image frame in the first video indicated by the first identifier is updated accordingly.
The response procedure for the second sub-identifier is the same, except in the opposite direction.
Therefore, compared with the frame-picking line commonly used in current video editing, the display areas corresponding to the first sub-identifier and the second sub-identifier are easier to control accurately, making it convenient for the user to adjust the first image frame corresponding to the first identifier. (A frame-picking line is usually displayed on a progress bar as a line segment used to determine a video frame.) By updating the position of the first identifier on the editing progress bar and the image frame in the first video indicated by the first identifier in response to the third input on the first sub-identifier or the second sub-identifier, the image frame in the first video indicated by the first identifier can be determined quickly and accurately.
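In effect, each input on a sub-identifier steps the marker by one frame duration backward or forward. A minimal sketch, assuming a constant frame rate (the 30 fps default is an assumption):

```kotlin
// Step the first identifier one frame earlier (first sub-identifier) or one
// frame later (second sub-identifier), clamped to the video bounds.
fun stepMarker(currentMs: Long, videoLengthMs: Long, backward: Boolean, fps: Int = 30): Long {
    val frameDurationMs = 1000L / fps  // ~33 ms per frame at 30 fps
    val next = if (backward) currentMs - frameDurationMs else currentMs + frameDurationMs
    return next.coerceIn(0L, videoLengthMs)
}
```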
In step 140, in response to the second input, the first image frame or the first video segment in the first video is edited to obtain a second video, and the playing interface of the second video includes a second identifier, where the second identifier is used to indicate that the first image frame or the first video segment is edited.
The second identifier is used for indicating that the first image frame or the first video segment in the first video is edited, for example, the second identifier may be identification information of an editor, or a title edited by the editor for the second video. The display parameters of the second identifiers corresponding to the different second videos are also different, and the display parameters of the second identifiers comprise: color, shape, pattern, etc.
In one possible embodiment, where the first identification is used to determine a first image frame in the first video, the second input includes a first sub-input and a second sub-input; step 140 may specifically include the following steps:
displaying at least one image editing function in a corresponding region of the first image frame in response to the first sub-input;
in response to a second sub-input by the user to a first editing function of the at least one image editing function, editing processing is performed on the first image frame based on the first editing function.
As shown in fig. 7, the at least one image editing function displayed in the corresponding area of the first image frame includes: a setting option, a graffiti option, a rotation option, and the like.
In response to a second sub-input by the user on a first editing function among the at least one image editing function, the first image frame in the first video is edited according to the selected first editing function. For example, if the first editing function selected by the user is the rotation option, the first image frame is rotated to obtain a rotated first image frame.
Specifically, as shown in fig. 8 (a), in response to a selection input to the first editing function, a picture editing page is entered, and after editing is completed, the edited first image frame is saved.
As shown in fig. 8 (b), when the second video is played later and the time 810 corresponding to the first image frame is played, the edited first image frame is displayed.
Thus, by responding to the second sub-input of the first editing function in the at least one image editing function by the user, the first image frame can be edited quickly and efficiently based on the first editing function.
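One plausible way to realize "the edited first image frame is displayed at its moment during later playback" is to keep the edited frames keyed by timestamp and substitute them at render time, leaving the original stream untouched. A minimal sketch with assumed names, not the patent's implementation:

```kotlin
// Edited frames are stored by timestamp; at playback, the edited frame
// replaces the original at its moment and all other frames pass through.
class EditedFrameStore<FrameT> {
    private val edited = mutableMapOf<Long, FrameT>()  // timestampMs -> edited frame

    fun put(timestampMs: Long, frame: FrameT) {
        edited[timestampMs] = frame
    }

    // Returns the edited frame for this moment, or the original if none exists.
    fun resolve(timestampMs: Long, original: FrameT): FrameT =
        edited[timestampMs] ?: original
}
```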
In one possible embodiment, the second input comprises a third sub-input and a fourth sub-input; step 140 may specifically include the following steps:
displaying at least one video editing function in response to a third sub-input by the user;
responsive to a fourth sub-input by the user of a second editing function of the at least one video editing function, editing the first video clip based on the second editing function;
wherein the video editing function includes at least one of: volume adjustment function, play speed adjustment function, image parameter adjustment function, copy function, and delete function.
At least one video editing function is displayed for selection by the user in response to a third sub-input by the user.
The volume adjusting function is used for carrying out volume adjusting processing on the first video clip;
A play speed adjusting function for performing play speed adjustment processing on the first video clip;
an image parameter adjusting function for performing image parameter adjustment processing on the first video clip;
a copying function for copying the first video clip;
and the deleting function is used for deleting at least part of image frames of the first video clip.
The image parameters involved in the image parameter adjusting function may include: contrast, brightness, etc.
And in response to a fourth sub-input of a second editing function of the at least one video editing function by the user, editing the first video clip based on at least one second editing function of the volume adjustment function, the play speed adjustment function, the image parameter adjustment function, the copy function, and the delete function to obtain a second video.
Illustratively, the second editing function selected by the fourth sub-input of the user is an image parameter adjustment function, and the editing process is performed on the first video clip based on the image parameter adjustment function in response to the fourth sub-input of the image parameter adjustment function by the user.
Therefore, the first video clip can be rapidly and conveniently edited based on the second editing function by responding to the fourth sub-input of the second editing function in the at least one video editing function, and the video editing efficiency is improved.
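The choice among these functions is a straightforward dispatch on the function selected by the fourth sub-input. The sketch below mirrors the list above; all names are assumptions:

```kotlin
// Illustrative dispatch of the second editing function over the first video clip.
enum class VideoEditFunction { VOLUME, SPEED, IMAGE_PARAMS, COPY, DELETE }

fun applyEdit(fn: VideoEditFunction, startMs: Long, endMs: Long) {
    when (fn) {
        VideoEditFunction.VOLUME       -> println("adjust volume of $startMs..$endMs")
        VideoEditFunction.SPEED        -> println("adjust play speed of $startMs..$endMs")
        VideoEditFunction.IMAGE_PARAMS -> println("adjust contrast/brightness of $startMs..$endMs")
        VideoEditFunction.COPY         -> println("duplicate the clip $startMs..$endMs")
        VideoEditFunction.DELETE       -> println("delete at least part of $startMs..$endMs")
    }
}
```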
In a possible embodiment, the second input includes a fifth sub-input, and the step of editing the first video clip based on the second editing function may specifically include the following steps:
responding to a fifth sub-input of the editing progress bar by a user, and editing the first video clip according to input parameters of the fifth sub-input; wherein the input parameters include at least one of: input direction, input distance, input pressure.
When the video editing function is the volume adjustment function, the play speed adjustment function, or the image parameter adjustment function, an input in a first direction may be used to indicate increasing the volume, speeding up the play speed, or increasing the value of an image parameter; an input in a second direction may be used to indicate decreasing the volume, slowing down the play speed, or decreasing the value of the image parameter. The image parameters may include contrast, brightness, and the like; correspondingly, the image parameter values may include a contrast value, a brightness value, and the like. The first direction and the second direction are opposite.
For example, the input in the first direction is sliding to the right and the input in the second direction is sliding to the left.
The input distance is used to indicate the magnitude of the adjustment to the volume, the play speed, or the image parameter value, and the input distance is proportional to that magnitude.
The longer the input distance, the larger the adjusted volume, the faster the adjusted playing speed, and the larger the adjusted image parameter value. For example, when the input distance is 2 cm, the playing speed is adjusted to 2 times the original playing speed; when the input distance is 3 cm, the playing speed is adjusted to 3 times the original playing speed. The input pressure likewise indicates the magnitude of the adjustment to the volume, the play speed, or the image parameter, and is proportional to it.
Similarly, the larger the input pressure, the larger the adjusted volume, the faster the adjusted play speed, and the larger the adjusted image parameter value.
For example, suppose the fifth sub-input is an input that slides a first distance in a first direction, the parameter value corresponding to the first distance is 10, and the first direction is the direction along the playing progress. Then, according to the input parameters of the fifth sub-input, the volume of the first video clip is increased by 10 units.
Illustratively, as shown in fig. 9, the input direction of the first input may be a direction from the time point a to the time point B; the volume corresponding to the time point A is 0, the volume corresponding to the time point B is 100, when the user slides from the time point A to the time point B, the input distance of the first input is increased, correspondingly, the adjusted volume is increased, and similarly, when the user slides from the time point B to the time point A, the adjusted volume is decreased from 100 to 0.
If the sliding distance from time point A to time point B is taken as a preset distance, then when the sliding distance of the first input is 45% of the preset distance, the volume of the first video clip is adjusted to 45.
Similarly, for the play speed: if the first input slides from time point A towards time point B, a sliding distance of 50% of the preset distance corresponds to a 2× play speed, and continuing to slide to 100% of the preset distance corresponds to a 4× play speed.
If the first input slides from time point B towards time point A, a sliding distance of 50% of the preset distance corresponds to a 0.5× play speed, and sliding on to 100% of the preset distance corresponds to a 0.25× play speed.
Therefore, in response to the fifth sub-input of the editing progress bar by the user, the first video clip is edited according to the input parameters of the fifth sub-input, the first video clip can be edited rapidly and conveniently according to the input parameters of the fifth sub-input, and video editing efficiency is improved.
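The numeric examples above can be checked with a small worked sketch. The linear volume mapping (45% of the preset distance gives volume 45) follows directly from the text; the closed form 4^(±fraction) for the play speed is our interpolation of the four sample points (50% gives 2×, 100% gives 4× forward; 50% gives 0.5×, 100% gives 0.25× backward), so treat it as an assumption rather than the patent's formula:

```kotlin
import kotlin.math.pow

// fraction = slide distance / preset distance, in 0.0..1.0.
fun volumeFor(fraction: Double): Int =
    (fraction * 100).toInt().coerceIn(0, 100)  // 0.45 -> volume 45

// A forward slide (A to B) speeds playback up; a reverse slide slows it down.
fun speedFor(fraction: Double, forward: Boolean): Double =
    4.0.pow(if (forward) fraction else -fraction)

fun main() {
    println(volumeFor(0.45))       // 45
    println(speedFor(0.5, true))   // 2.0
    println(speedFor(1.0, true))   // 4.0
    println(speedFor(0.5, false))  // 0.5
    println(speedFor(1.0, false))  // 0.25
}
```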
In a possible embodiment, where the second editing function comprises a copy function, the second input comprises a sixth sub-input, and step 140 may specifically comprise the following steps:
In response to the sixth sub-input, the first video segment is copied to obtain the second video; the second identifier comprises a copy identifier, and the copy identifier indicates that at least part of the image frames in the first video segment are played repeatedly.
For the copy operation, the sixth sub-input may be a long press on the progress bar area at time point A followed by a slide to time point B. As shown in fig. 10, a prompt message "repeat play once" is displayed, indicating that the first video clip will be played again once.
Multiple repeated plays can also be supported. In response to the sixth sub-input, after the first video segment is copied, copy identifiers 1010 parallel to the progress bar appear between time point A and time point B; if the segment is repeated several times, the same number of copy identifiers appears. The copy identifier indicates that a repeat-play edit exists.
In response to the sixth sub-input, the first video segment is copied, and the user may tap a blank area to save the content modification and obtain the second video.
Therefore, a user can input the editing progress bar in the video playing process, edit the first video clips in the first video to obtain the second video, and save the edited second video as an independent video, and meanwhile, the original first video is still reserved.
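This non-destructive behavior can be pictured as an edit list that the player expands at playback time: the original first video is never rewritten, and the second video is the original plus its recorded edits. A sketch under assumed names, for non-overlapping repeat edits:

```kotlin
// A copy edit repeats the segment [startMs, endMs) extraPlays more times.
data class RepeatEdit(val startMs: Long, val endMs: Long, val extraPlays: Int = 1)

// Expand the playback timeline: each repeated segment plays again right
// after its first pass; everything else plays once, in order.
fun playbackSegments(videoLengthMs: Long, edits: List<RepeatEdit>): List<LongRange> {
    val out = mutableListOf<LongRange>()
    var cursor = 0L
    for (e in edits.sortedBy { it.startMs }) {
        out += cursor until e.startMs                          // untouched part
        repeat(1 + e.extraPlays) { out += e.startMs until e.endMs }
        cursor = e.endMs
    }
    out += cursor until videoLengthMs
    return out.filterNot { it.isEmpty() }
}
```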
In a possible embodiment, after the step of obtaining the second video, the method may further include the following steps:
receiving a sixth input of the user on the copy identifier;
in response to the sixth input, canceling the display of the copy identifier and canceling the copying process of the first video segment, to obtain a third video.
After the first video segment is copied, if the copy operation of the first video segment is to be canceled, the display of the copy identifier may be canceled and the copy process of the first video segment may be canceled in response to the sixth input of the copy identifier by the user, to obtain the third video.
As shown in fig. 11, the copying process of the first video clip may be canceled by a sixth input on the copy identifier 1110 in a second direction, the second direction being the direction from time point B to time point A; that is, the input direction of the sixth input is opposite to the input direction of the sixth sub-input.
Therefore, in response to the sixth input of the copy identifier, the copy processing of the first video segment can be quickly canceled, the third video can be obtained, and the video editing efficiency is improved.
In a possible embodiment, where the second editing function comprises a delete function, the second input comprises a seventh sub-input, and step 140 may specifically comprise the following steps:
Deleting at least a portion of the image frames of the first video segment in response to the seventh sub-input;
the second identifier includes a deletion identifier indicating that editing of at least a portion of the image frames in the first video segment is not possible.
As shown in fig. 12, the seventh sub-input may be a long press at time point A followed by moving along the editing progress bar to an arbitrary position within the selected range, and at least part of the image frames of the first video clip are deleted in response to the seventh sub-input. Before the deletion, a prompt may be displayed asking the user to confirm deleting at least part of the image frames of the first video clip.
When the seventh sub-input is a long press at time point A followed by movement along the editing progress bar, the image frames deleted in response to the seventh sub-input are at least part of the image frames between time point A and time point B.
Wherein the second identifier includes a deletion identifier indicating that at least a portion of the image frames in the first video segment have been deleted and that editing of at least a portion of the image frames in the first video segment is not possible.
As shown in fig. 13, the portion of the editing progress bar corresponding to the deleted image frames changes its display mode, for example being displayed as a dotted line, to indicate that at least part of the image frames in the first video clip have been deleted and can no longer be edited.
When the second video is played again, at least part of the image frames in the deleted first video segment are skipped when the corresponding moment of the first video segment is played, that is, the image frames except at least part of the image frames in the deleted first video segment are played. Therefore, at least part of image frames of the first video clip can be deleted rapidly in response to the seventh sub-input, the operation is simple, and the video editing efficiency is improved.
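The skip can be sketched as a lookup that pushes the nominal playback position past any deleted range while the underlying file stays intact; the names below are assumptions:

```kotlin
// A deleted segment stays in the file but is jumped over at play time.
data class DeletedRange(val startMs: Long, val endMs: Long)

// Map the nominal position to the position the player should actually show.
fun effectivePosition(positionMs: Long, deleted: List<DeletedRange>): Long {
    var pos = positionMs
    for (d in deleted.sortedBy { it.startMs }) {
        if (pos in d.startMs until d.endMs) pos = d.endMs  // jump past the deletion
    }
    return pos
}
```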
In a possible embodiment, after step 140, the following steps may be further included:
receiving a seventh input of the user;
in response to the seventh input, a configuration file is generated, the configuration file being used to configure playback parameters of the first video.
In response to the seventh input, a configuration file is generated for configuring the playing parameters of the first video, so that the video playing parameters can subsequently be determined from the configuration file.
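Concretely, such a configuration file could record the edit operations as playback parameters stored alongside the unmodified first video, to be applied whenever the second video is played. The field names below are purely illustrative assumptions:

```kotlin
// Hypothetical record of the playback parameters produced by the edits.
// Serialized (e.g. to JSON) next to the original file; the first video
// itself is never modified.
data class PlaybackConfig(
    val sourceVideo: String,              // path of the untouched first video
    val title: String,                    // e.g. the editor's title (second identifier)
    val repeats: List<Pair<Long, Long>>,  // copied segments as (startMs, endMs)
    val deleted: List<Pair<Long, Long>>,  // deleted segments as (startMs, endMs)
    val speed: Double = 1.0,              // play speed multiplier
    val volume: Int = 100,                // volume in 0..100
)
```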
In a possible embodiment, after the step of generating the configuration file in response to the seventh input, the method may further comprise the following steps:
displaying at least one video identification;
receiving a ninth input of a target video identity of the at least one video identity;
and in response to the ninth input, playing the video corresponding to the target video identifier.
After the editing, the video identifier corresponding to the first video and at least one video identifier corresponding to a second video obtained by editing the first video can be displayed.
A video identifier may be a video title, for example a title composed by the editor for the second video.
The at least one video identifier is displayed to make it convenient for the user to select the edited video he or she wants to view. In response to an input on a target video identifier among the at least one video identifier, the video corresponding to the target video identifier is played.
Here, one first video corresponds to at least one second video, and each second video corresponds to different editing effects, so that a user can watch the second videos subjected to different editing operations conveniently, and video playing interestingness can be increased.
In the embodiment of the application, an editing progress bar is displayed to facilitate editing by the user. When the editing progress bar is displayed on the playing interface of the first video, a first input of the user on the editing progress bar is received; the first input indicates the first image frame or the first video clip that the user wants to edit. In response to the first input, a first identifier used to determine the first image frame or the first video clip in the first video is displayed on the editing progress bar. Through the first identifier, the user can view and confirm the first image frame or the first video clip to be edited, so that it can be determined intuitively and clearly, which facilitates the subsequent editing. Then, in response to a second input of the user, the first image frame or the first video clip in the first video is edited to obtain the second video; in this way, the part of the first video that the user wants to edit can be edited quickly and conveniently, reducing the complexity and difficulty of video editing. The playing interface of the second video comprises a second identifier indicating that the first image frame or the first video clip has been edited, so that a viewer can quickly and intuitively learn that the second video has been edited.
In the video processing method provided by the embodiment of the application, the executing entity may be a terminal. In the embodiment of the application, a terminal executing the video processing method is taken as an example to describe the terminal provided by the embodiment of the application.
Fig. 14 is a block diagram of a terminal provided in an embodiment of the present application, and the terminal 1400 includes:
the receiving module 1410 is configured to receive, when the playing interface of the first video displays an editing progress bar, a first input of a user to the editing progress bar;
a display module 1420 for displaying, in response to the first input, a first identification on the editing progress bar, the first identification for determining a first image frame or a first video clip in the first video;
the receiving module 1410 is further configured to receive a second input from a user;
and the editing module 1430 is configured to perform editing processing on the first image frame or the first video segment in the first video in response to the second input, so as to obtain a second video, where a playing interface of the second video includes a second identifier, and the second identifier is used to indicate that editing processing is performed on the first image frame or the first video segment.
In one possible embodiment, the first identifier comprises a first sub-identifier and a second sub-identifier;
The receiving module 1410 is further configured to receive a third input of the first sub-identifier or the second sub-identifier from a user;
the terminal 1400 further includes:
a first updating module, configured to respond to the third input, update a position of the first identifier on the editing progress bar, and update an image frame in the first video indicated by the first identifier;
wherein the functions corresponding to the first sub-identifier and the second sub-identifier are different.
In a possible embodiment, the receiving module 1410 is further configured to receive a fourth input of the first identification by a user;
the terminal 1400 further includes:
and a second updating module, configured to update a display position of the first sub-identifier or the second sub-identifier in response to the fourth input.
In one possible embodiment, where the first identification is used to determine a first image frame in the first video, the second input includes a first sub-input and a second sub-input; the editing module 1430 is specifically configured to:
displaying at least one image editing function in a corresponding region of the first image frame in response to the first sub-input;
in response to a second sub-input by the user to a first editing function of the at least one image editing function, editing processing is performed on the first image frame based on the first editing function.
In one possible embodiment, the receiving module 1410 is further configured to receive a fifth input of the editing progress bar from the user;
the display module 1420 is further configured to display a third identifier on the editing progress bar in response to the fifth input, where a display position of the third identifier is different from a display position of the first identifier, and the third identifier and the first identifier indicate the first video clip in the first video.
In one possible embodiment, the second input comprises a third sub-input and a fourth sub-input; the editing module 1430 is specifically configured to:
displaying at least one video editing function in response to a third sub-input by the user;
responsive to a fourth sub-input by the user of a second editing function of the at least one video editing function, editing the first video clip based on the second editing function;
wherein the video editing function includes at least one of: volume adjustment function, play speed adjustment function, image parameter adjustment function, copy function, and delete function.
In one possible embodiment, the second input includes a fifth sub-input, the editing module 1430, specifically configured to:
responding to a fifth sub-input of the editing progress bar by a user, and editing the first video clip according to input parameters of the fifth sub-input; wherein the input parameters include at least one of: input direction, input distance, input pressure.
In a possible embodiment, where the second editing function includes a copy function, the second input includes a sixth sub-input, and the editing module 1430 is specifically configured to:
and responding to the sixth sub-input, carrying out copying processing on the first video segment to obtain a second video, wherein the second identifier comprises a copying identifier which indicates that at least part of image frames in the first video segment are repeatedly played.
In one possible embodiment, the receiving module 1410 is further configured to receive a sixth input of the copy identifier from the user;
the terminal 1400 may further include:
and the cancellation module is used for canceling the display of the duplication identification and canceling the duplication processing of the first video segment in response to the sixth input to obtain the third video.
In a possible embodiment, where the second editing function includes a delete function, the second input includes a seventh sub-input, and the editing module 1430 is specifically configured to:
deleting at least a portion of the image frames of the first video segment in response to the seventh sub-input;
the second identifier includes a deletion identifier indicating that editing of at least a portion of the image frames in the first video segment is not possible.
In one possible embodiment, the receiving module 1410 is further configured to receive a seventh input from the user;
The terminal 1400 may further include:
and the generating module is used for responding to the seventh input and generating a configuration file, wherein the configuration file is used for configuring the playing parameters of the first video.
In one possible embodiment, the receiving module 1410 is further configured to receive an eighth input of the editing progress bar from the user;
the terminal 1400 may further include:
and a third updating module for updating the display position of the editing progress bar or hiding the editing progress bar in response to the eighth input.
In the embodiment of the application, an editing progress bar is displayed to facilitate editing by the user. When the editing progress bar is displayed on the playing interface of the first video, a first input of the user on the editing progress bar is received; the first input indicates the first image frame or the first video clip that the user wants to edit. In response to the first input, a first identifier used to determine the first image frame or the first video clip in the first video is displayed on the editing progress bar. Through the first identifier, the user can view and confirm the first image frame or the first video clip to be edited, so that it can be determined intuitively and clearly, which facilitates the subsequent editing. Then, in response to a second input of the user, the first image frame or the first video clip in the first video is edited to obtain the second video; in this way, the part of the first video that the user wants to edit can be edited quickly and conveniently, reducing the complexity of video editing. The playing interface of the second video comprises a second identifier indicating that the first image frame or the first video clip has been edited, so that a viewer can quickly and intuitively learn that the second video has been edited.
The terminal in the embodiment of the application may be an electronic device or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The terminal of the embodiment of the application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiment of the application is not specifically limited in this respect.
The terminal provided by the embodiment of the application can realize the processes realized by the embodiment of the method and the same technical effects, and is not repeated here.
Optionally, as shown in fig. 15, an electronic device 1510 is further provided in the embodiment of the present application, including a processor 1511, a memory 1512, and a program or an instruction stored in the memory 1512 and capable of being executed on the processor 1511, where the program or the instruction implements the steps of any one of the embodiments of the video processing method when executed by the processor 1511, and the steps achieve the same technical effects, and are not repeated herein.
The electronic device of the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 16 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1600 includes, but is not limited to: radio frequency unit 1601, network module 1602, audio output unit 1603, input unit 1604, sensor 1605, display unit 1606, user input unit 1607, interface unit 1608, memory 1609, and processor 1610.
Those skilled in the art will appreciate that the electronic device 1600 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1610 by a power management system that performs the functions of managing charge, discharge, and power consumption. The electronic device structure shown in fig. 16 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
The user input unit 1607 is configured to receive a first input of the editing progress bar from a user when the editing progress bar is displayed on the playing interface of the first video;
a display unit 1606 for displaying a first identification on the editing progress bar in response to the first input, the first identification being used to determine a first image frame or a first video clip in the first video;
a user input unit 1607 for receiving a second input of a user;
the processor 1610 is configured to perform editing processing on a first image frame or a first video segment in the first video in response to the second input, so as to obtain a second video, where a playing interface of the second video includes a second identifier, and the second identifier is used to indicate that editing processing is performed on the first image frame or the first video segment.
In some implementations, the first identifier includes a first sub-identifier and a second sub-identifier;
a user input unit 1607, further for receiving a third input of the first sub-identifier or the second sub-identifier by the user;
processor 1610 is further configured to update, in response to a third input, a position of the first identifier on the editing progress bar and update an image frame in the first video indicated by the first identifier;
wherein the functions corresponding to the first sub-identifier and the second sub-identifier are different.
In some implementations, the user input unit 1607 is further for receiving a fourth input of the first identification by the user;
processor 1610 is further configured to update the display positions of the first sub-identifier and the second sub-identifier in response to the fourth input.
In some implementations, where the first identification is used to determine a first image frame in the first video, the second input includes a first sub-input and a second sub-input; a display unit 1606 for displaying at least one image editing function in a corresponding region of the first image frame in response to the first sub-input;
the processor 1610 is further configured to perform editing processing on the first image frame based on the first editing function in response to a second sub-input of the first editing function of the at least one image editing function by the user.
In some embodiments, the user input unit 1607 is further for receiving a fifth input by the user of the editing progress bar;
the display unit 1606 is further configured to display a third identifier on the editing progress bar in response to the fifth input, where a display position of the third identifier is different from a display position of the first identifier, and the third identifier and the first identifier indicate the first video clip in the first video.
In some implementations, the second input includes a third sub-input and a fourth sub-input; the display unit 1606 is configured to display at least one video editing function in response to the third sub-input from the user;
the processor 1610 is further configured to, in response to a fourth sub-input from the user on a second editing function of the at least one video editing function, perform editing processing on the first video clip based on the second editing function;
wherein the video editing function includes at least one of: a volume adjustment function, a play speed adjustment function, an image parameter adjustment function, a copy function, and a delete function.
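The five listed functions suggest a simple dispatch over the selected clip. The sketch below is illustrative; the print statements stand in for real editing operations:

    // Dispatch over the listed video editing functions; signatures invented.
    enum class VideoEdit { VOLUME, SPEED, IMAGE_PARAM, COPY, DELETE }

    fun applyToClip(edit: VideoEdit, startMs: Long, endMs: Long) = when (edit) {
        VideoEdit.VOLUME      -> println("adjust volume of $startMs..$endMs")
        VideoEdit.SPEED       -> println("change playback speed of $startMs..$endMs")
        VideoEdit.IMAGE_PARAM -> println("adjust brightness/contrast of $startMs..$endMs")
        VideoEdit.COPY        -> println("duplicate clip $startMs..$endMs")
        VideoEdit.DELETE      -> println("remove clip $startMs..$endMs")
    }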
In some embodiments, the second input includes a fifth sub-input, and the processor 1610 is further configured to, in response to a fifth sub-input from the user on the editing progress bar, perform editing processing on the first video clip according to input parameters of the fifth sub-input, where the input parameters include at least one of: input direction, input distance, and input pressure.
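How the input direction, distance, and pressure map to an edit amount is left open; the scale factors below are invented for illustration, here mapping a vertical swipe to a volume change:

    import kotlin.math.abs

    // Hypothetical gesture-to-edit mapping; all constants are assumptions.
    data class Gesture(val dxPx: Float, val dyPx: Float, val pressure: Float)

    // Swipe up raises volume; distance sets the step; pressure scales it.
    fun volumeDeltaFrom(g: Gesture): Float {
        val direction = if (g.dyPx < 0) 1 else -1
        val magnitude = abs(g.dyPx) / 100f
        return direction * magnitude * (0.5f + g.pressure / 2f)
    }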
In some embodiments, where the second editing function includes a copy function, the second input includes a sixth sub-input, and the processor 1610 is further configured to copy the first video segment in response to the sixth sub-input to obtain a second video, the second identifier includes a copy identifier, and the copy identifier indicates that at least a portion of the image frames in the first video segment are to be played repeatedly.
In some implementations, the user input unit 1607 is further for receiving a sixth input of the user for the copy identification;
The processor 1610 is further configured to cancel displaying the copy identifier and cancel the copying process of the first video segment in response to the sixth input, and obtain the third video.
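One way to realize a cancellable copy, assuming the timeline is a list of segments and the copy identifier acts as an undo affordance, is:

    // Copy identifier as reversible state: a later tap on the identifier
    // undoes the duplication (names and structure are assumptions).
    class SegmentTimeline(private val segments: MutableList<LongRange>) {
        private var copiedIndex: Int? = null

        fun copy(index: Int) {                         // sixth sub-input
            segments.add(index + 1, segments[index])   // repeat-play region
            copiedIndex = index + 1
        }

        fun onCopyIdentifierTapped() {                 // sixth input: cancel copy
            copiedIndex?.let { segments.removeAt(it) }
            copiedIndex = null                         // third video = original order
        }
    }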
In some implementations, in a case that the second editing function includes a delete function, the second input includes a seventh sub-input, and the processor 1610 is further configured to delete at least a portion of the image frames of the first video clip in response to the seventh sub-input;
the second identifier includes a deletion identifier, and the deletion identifier indicates that the at least a portion of the image frames in the first video clip cannot be edited.
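A deletion identifier that marks frames as no longer editable can be modeled as a tombstone list the player consults; the structure below is an assumption:

    // Deleted spans kept as tombstones the player skips; names invented.
    data class Tombstone(val startMs: Long, val endMs: Long)

    class DeleteTracker {
        val tombstones = mutableListOf<Tombstone>()

        fun delete(startMs: Long, endMs: Long) {       // seventh sub-input
            tombstones += Tombstone(startMs, endMs)    // shown as deletion identifier
        }

        fun isPlayable(positionMs: Long) =
            tombstones.none { positionMs in it.startMs until it.endMs }
    }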
In some implementations, the user input unit 1607 is further configured to receive a seventh input from the user;
the processor 1610 is further configured to generate a configuration file in response to the seventh input, where the configuration file is used to configure the playing parameters of the first video.
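The patent does not specify the configuration file format. One assumption consistent with the described behavior, where the first video file is left untouched and edits are applied at play time, is a small non-destructive edit list:

    // Hypothetical play-time configuration; the format and field names are
    // assumptions, not the patent's format.
    data class PlayConfig(
        val skips: List<LongRange>,      // deleted spans the player jumps over
        val repeats: List<LongRange>,    // copied spans played twice
        val speed: Float = 1.0f
    )

    fun PlayConfig.toIni(): String = buildString {
        appendLine("speed=$speed")
        skips.forEach   { appendLine("skip=${it.first}-${it.last}") }
        repeats.forEach { appendLine("repeat=${it.first}-${it.last}") }
    }

Storing edits this way also matches the cancel-copy behavior above: removing the corresponding entry restores the original playback without touching the source file.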
In some embodiments, the user input unit 1607 is further configured to receive an eighth input from the user on the editing progress bar;
the processor 1610 is further configured to update the display position of the editing progress bar or hide the editing progress bar in response to the eighth input.
In the embodiment of the application, displaying the editing progress bar facilitates editing by the user. When the editing progress bar is displayed on the playing interface of the first video, a first input from the user on the editing progress bar is received; the first input indicates the first image frame or the first video clip that the user wants to edit. In response to the first input, a first identifier used to determine the first image frame or the first video clip in the first video is displayed on the editing progress bar, so that the user can intuitively and clearly view and confirm the content to be edited, which facilitates the subsequent editing. Then, in response to a second input from the user, the first image frame or the first video clip in the first video is edited to obtain the second video; in this way, the content that the user wants to edit can be processed quickly and conveniently, reducing the complexity of video editing. Finally, the playing interface of the second video includes a second identifier indicating that the first image frame or the first video clip has been edited, so that a viewer can quickly and intuitively learn that the second video has been edited.
It should be appreciated that in embodiments of the present application, the input unit 1604 may include a graphics processing unit (Graphics Processing Unit, GPU) 16041 and a microphone 16042; the graphics processor 16041 processes image data of still pictures or video images obtained by an image capturing device (e.g., a camera) in a video image capturing mode or an image capturing mode. The display unit 1606 may include a display panel 16061, and the display panel 16061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1607 includes at least one of a touch panel 16071 and other input devices 16072. The touch panel 16071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 16072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1609 may be used to store software programs as well as various data. The memory 1609 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function. Further, the memory 1609 may include volatile memory or nonvolatile memory, or the memory 1609 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM), or a direct Rambus RAM (Direct Rambus RAM, DRRAM). The memory 1609 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
Processor 1610 may include one or more processing units; optionally, processor 1610 integrates an application processor that primarily processes operations involving an operating system, user interface, and applications, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1610.
The embodiment of the application also provides a readable storage medium having stored thereon a program or an instruction which, when executed by a processor, implements each process of the video processing method embodiment described above and achieves the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip including a processor and a communication interface coupled to the processor, where the processor is configured to run programs or instructions to implement the processes of the video processing method embodiment described above and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the processes of the video processing method embodiment described above and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, functions may also be performed substantially simultaneously or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (15)

1. A method of video processing, the method comprising:
receiving a first input of a user on an editing progress bar under the condition that the editing progress bar is displayed on a playing interface of a first video;
in response to the first input, displaying a first identifier on the editing progress bar, wherein the first identifier is used to determine a first image frame or a first video clip in the first video;
receiving a second input from the user;
and in response to the second input, performing editing processing on the first image frame or the first video clip in the first video to obtain a second video, wherein a playing interface of the second video comprises a second identifier, and the second identifier is used to indicate that the first image frame or the first video clip has been edited.
2. The method of claim 1, wherein the first identifier comprises a first sub-identifier and a second sub-identifier;
before the receiving the second input from the user, the method further comprises:
receiving a third input from a user on the first sub-identifier or the second sub-identifier;
in response to the third input, updating a location of the first identifier on the editing progress bar and updating an image frame in the first video indicated by the first identifier;
wherein the functions corresponding to the first sub-identifier and the second sub-identifier are different.
3. The method of claim 2, wherein after displaying the first identifier on the editing progress bar, the method further comprises:
receiving a fourth input from a user on the first identifier;
and in response to the fourth input, updating the display positions of the first sub-identifier and the second sub-identifier.
4. The method of claim 1, wherein, in a case that the first identifier is used to determine a first image frame in the first video, the second input comprises a first sub-input and a second sub-input; and the performing editing processing on the first image frame or the first video clip in the first video in response to the second input comprises:
displaying at least one image editing function in a corresponding region of the first image frame in response to the first sub-input;
in response to a second sub-input from a user on a first editing function of the at least one image editing function, performing editing processing on the first image frame based on the first editing function.
5. The method of claim 1, wherein after displaying the first identifier on the editing progress bar, the method further comprises:
receiving a fifth input from a user on the editing progress bar;
and in response to the fifth input, displaying a third identifier on the editing progress bar, wherein the display position of the third identifier is different from the display position of the first identifier, and the third identifier and the first identifier indicate a first video segment in the first video.
6. The method of claim 5, wherein the second input comprises a third sub-input and a fourth sub-input; and the performing editing processing on the first image frame or the first video clip in the first video in response to the second input comprises:
displaying at least one video editing function in response to a third sub-input by the user;
in response to a fourth sub-input from a user on a second editing function of the at least one video editing function, performing editing processing on the first video clip based on the second editing function;
wherein the video editing function includes at least one of: volume adjustment function, play speed adjustment function, image parameter adjustment function, copy function, and delete function.
7. The method of claim 6, wherein the second input comprises a fifth sub-input, and the performing editing processing on the first video clip based on the second editing function comprises:
in response to a fifth sub-input from a user on the editing progress bar, performing editing processing on the first video clip according to input parameters of the fifth sub-input, wherein the input parameters comprise at least one of: input direction, input distance, and input pressure.
8. The method of claim 6, wherein, in a case that the second editing function comprises a copy function, the second input comprises a sixth sub-input, and the performing editing processing on the first image frame or the first video clip in the first video in response to the second input to obtain a second video comprises:
in response to the sixth sub-input, performing copy processing on the first video clip to obtain the second video, wherein the second identifier comprises a copy identifier, and the copy identifier indicates that at least a portion of the image frames in the first video clip are repeatedly played.
9. The method of claim 8, wherein after the second video is obtained, the method further comprises:
receiving a sixth input from the user on the copy identifier;
and in response to the sixth input, canceling display of the copy identifier and canceling the copy processing of the first video clip to obtain a third video.
10. The method of claim 6, wherein, in a case that the second editing function comprises a delete function, the second input comprises a seventh sub-input, and the performing editing processing on the first image frame or the first video clip in the first video in response to the second input comprises:
deleting at least a portion of the image frames of the first video clip in response to the seventh sub-input;
wherein the second identifier comprises a deletion identifier, and the deletion identifier indicates that the at least a portion of the image frames in the first video clip cannot be edited.
11. The method of claim 1, wherein after the second video is obtained, the method further comprises:
receiving a seventh input of the user;
and generating a configuration file in response to the seventh input, wherein the configuration file is used for configuring the playing parameters of the first video.
12. The method according to claim 1, wherein the method further comprises:
receiving eighth input of a user to the editing progress bar;
and in response to the eighth input, updating a display position of the editing progress bar or hiding the editing progress bar.
13. A terminal, the terminal comprising:
a receiving module, configured to receive a first input from a user on an editing progress bar in a case that the editing progress bar is displayed on a playing interface of a first video;
a display module, configured to display a first identifier on the editing progress bar in response to the first input, wherein the first identifier is used to determine a first image frame or a first video clip in the first video;
wherein the receiving module is further configured to receive a second input from the user;
and an editing module, configured to perform editing processing on the first image frame or the first video clip in the first video in response to the second input to obtain a second video, wherein a playing interface of the second video comprises a second identifier, and the second identifier is used to indicate that the first image frame or the first video clip has been edited.
14. The terminal of claim 13, wherein the first identifier comprises a first sub-identifier and a second sub-identifier;
the receiving module is further configured to receive a third input from the user on the first sub-identifier or the second sub-identifier;
the terminal further comprises:
a first updating module, configured to, in response to the third input, update a position of the first identifier on the editing progress bar and update the image frame in the first video indicated by the first identifier;
wherein the functions corresponding to the first sub-identifier and the second sub-identifier are different.
15. The terminal of claim 14, wherein the receiving module is further configured to receive a fourth input from a user on the first identifier;
the terminal further comprises:
and a second updating module, configured to update a display position of the first sub-identifier or the second sub-identifier in response to the fourth input.
CN202311051404.9A 2023-08-18 2023-08-18 Video processing method and terminal Pending CN117097945A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311051404.9A CN117097945A (en) 2023-08-18 2023-08-18 Video processing method and terminal


Publications (1)

Publication Number Publication Date
CN117097945A true CN117097945A (en) 2023-11-21

Family

ID=88770895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311051404.9A Pending CN117097945A (en) 2023-08-18 2023-08-18 Video processing method and terminal

Country Status (1)

Country Link
CN (1) CN117097945A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination