CN112929748B - Video processing method, video processing device, electronic equipment and medium

Video processing method, video processing device, electronic equipment and medium

Info

Publication number
CN112929748B
Authority
CN
China
Prior art keywords
input
image frames
video
target
user
Prior art date
Legal status
Active
Application number
CN202110087983.7A
Other languages
Chinese (zh)
Other versions
CN112929748A (en)
Inventor
周桓宇
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110087983.7A priority Critical patent/CN112929748B/en
Publication of CN112929748A publication Critical patent/CN112929748A/en
Application granted granted Critical
Publication of CN112929748B publication Critical patent/CN112929748B/en

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
                                • H04N 21/44213 Monitoring of end-user related data
                                    • H04N 21/44222 Analytics of user selections, e.g. selection of programs or purchase activity
                        • H04N 21/47 End-user applications
                            • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
                                • H04N 21/47217 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
                    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
                        • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
                            • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
                                • H04N 21/8455 Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
                                • H04N 21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The application discloses a video processing method, a video processing apparatus, an electronic device and a medium, and belongs to the technical field of video playing. The video processing method comprises the following steps: receiving a first input of a user, wherein the first input is used for adjusting the playing progress of a video; in response to the first input, displaying, in a target area, M image frames in the video, wherein the M image frames are associated with a playing progress point in the video, the playing progress point is a playing position corresponding to the first input, and the target area is a display area associated with a playing interface of the video; receiving a second input of the user on N target image frames among the M image frames; in response to the second input, updating the playing progress of the video to a target playing position corresponding to the N target image frames; wherein M and N are positive integers, and N is less than or equal to M. This can simplify the operation of adjusting the playing progress.

Description

Video processing method, video processing device, electronic equipment and medium
Technical Field
The application belongs to the technical field of video playing, and particularly relates to a video processing method and device, an electronic device and a medium.
Background
With the continuous development of internet technology and video technology, users watch videos more and more. In many cases, a user does not need to view the entire video, but only certain video segments.
Currently, to locate the playing position of a video clip of interest, the user generally drags the progress bar of the video playing interface. However, the user often has to drag the progress bar back and forth to locate the exact position of the video clip of interest.
Therefore, in the related art, locating the playing progress of a video clip that interests the user is cumbersome and time-consuming.
Disclosure of Invention
Embodiments of the present application provide a video processing method, an apparatus, an electronic device, and a medium, which can solve the problem in the related art that locating the playing progress of a video clip of interest to the user is cumbersome and time-consuming.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a first input of a user, wherein the first input is used for adjusting the playing progress of a video;
in response to the first input, displaying, in a target area, M image frames in the video, wherein the M image frames are associated with a playing progress point in the video, the playing progress point is a playing position corresponding to the first input, and the target area is a display area associated with a playing interface of the video;
receiving a second input of the user on N target image frames among the M image frames;
in response to the second input, updating the playing progress of the video to target playing positions corresponding to the N target image frames;
wherein M and N are positive integers, and N is less than or equal to M.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the first receiving module is used for receiving a first input of a user, wherein the first input is used for adjusting the playing progress of a video;
a first display module, configured to display, in a target area, M image frames in the video in response to the first input, where the M image frames are associated with a play progress point in the video, the play progress point is a play position corresponding to the first input, and the target area is a display area associated with a play interface of the video;
a second receiving module, configured to receive a second input of the user to N target image frames in the M image frames;
a first updating module, configured to update the playing progress of the video to target playing positions corresponding to the N target image frames in response to the second input;
wherein M and N are positive integers, and N is less than or equal to M.
In a third aspect, an embodiment of the present application provides an electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the video processing method according to the first aspect.
In the embodiments of the application, the playing progress of the video is updated to the target playing position corresponding to the N selected target image frames, so that the playing progress is located and adjusted accurately and quickly; the user does not need to drag the progress back and forth, and the adjustment operation of the playing progress is effectively simplified.
Drawings
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 2 is a first schematic operation diagram of a video processing method according to an embodiment of the present application;
fig. 3 is a second schematic operation diagram of a video processing method according to an embodiment of the present application;
fig. 4a is a third schematic operation diagram of a video processing method according to an embodiment of the present application;
fig. 4b is a fourth schematic operation diagram of a video processing method according to an embodiment of the present application;
fig. 5a is a fifth schematic operation diagram of a video processing method according to an embodiment of the present application;
fig. 5b is a sixth schematic operation diagram of a video processing method according to an embodiment of the present application;
fig. 6 is a block diagram of a video processing apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a block diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein; moreover, objects distinguished by "first", "second", etc. are generally of one type, and their number is not limited, e.g., the first object may be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the former and latter related objects are in an "or" relationship. An identifier in the present application is used to indicate information such as words, symbols and images; a control or other container can be used as a carrier for displaying the information, including but not limited to a word identifier, a symbol identifier, and an image identifier.
The following describes the video processing method provided by the embodiments of the present application in detail through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of the video processing method provided in an embodiment of the present application. As shown in fig. 1, the method includes the following steps:
step 101, receiving a first input of a user, wherein the first input is used for adjusting the playing progress of a video.
In this step, the first input may be a dragging operation acting on a playing progress bar of the video, or a sliding operation acting on a playing interface of the video, and is used to adjust the playing progress of the video. For example, by receiving a dragging operation on the playing progress bar, the playing progress of the video can be adjusted from a first playing progress point to a second playing progress point.
For example, the first input may be a dragging operation of a user's finger directly acting on the playing progress bar, or may be a dragging operation of a user acting on the playing progress bar by manipulating a touch device such as a stylus.
In addition, when the first input is a sliding operation acting on the video playing interface, the first input comprises a first operation and a second operation: the first operation is used to trigger adjustment of the video playing progress, and the second operation is used to adjust the video playing progress. For example, the first operation may be a click operation or a long-press operation that triggers adjustment of the video playing progress; after adjustment is triggered, the video playing progress is adjusted in response to the received second operation, such as a sliding operation.
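The trigger-then-slide interaction described above can be sketched as follows. This is a minimal illustrative model in Python; the class and method names (e.g. `ProgressAdjuster`) are assumptions for illustration, not part of the patent.

```python
class ProgressAdjuster:
    """Illustrative model: a first operation (e.g. long press) arms
    progress adjustment, and a second operation (a slide) performs it."""

    def __init__(self, video_duration_s: float):
        self.duration = video_duration_s
        self.progress_s = 0.0      # current playing position (seconds)
        self.adjusting = False     # becomes True after the first operation

    def on_first_operation(self):
        """Trigger adjustment mode (e.g. long press on the play interface)."""
        self.adjusting = True

    def on_second_operation(self, slide_fraction: float) -> float:
        """Slide across the interface; slide_fraction is the signed fraction
        of the full duration to move. Ignored if adjustment is not armed."""
        if not self.adjusting:
            return self.progress_s
        new_pos = self.progress_s + slide_fraction * self.duration
        self.progress_s = min(max(new_pos, 0.0), self.duration)  # clamp
        return self.progress_s
```

A slide received before the trigger operation leaves the progress unchanged, mirroring the two-stage first input described above.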
And 102, in response to the first input, displaying, in a target area, M image frames in the video, wherein the M image frames are associated with a playing progress point in the video, the playing progress point is a playing position corresponding to the first input, and the target area is a display area associated with a playing interface of the video.
In this step, the M image frames associated with the playing progress point are displayed in the target area, so that the user can conveniently view the image frames of interest among the M image frames; by receiving a selection operation on an image frame of interest, playback can be switched to the playing position corresponding to that image frame, thereby locating and adjusting the playing progress accurately and quickly. In addition, when M is greater than 1, more video image frames associated with the playing progress point are available, so that the user can judge, on the basis of more image information, whether the desired playing progress point has been located, further speeding up locating and adjusting the playing progress.
The playing progress point corresponding to the first input can be a target playing progress point of the first input action; for example, if the first input is used to adjust the playing progress of the video from the first playing progress point to the second playing progress point, the second playing progress point may be used as the playing progress point corresponding to the first input.
Alternatively, the M image frames associated with the playing progress point may be all the image frames included in the video.
In addition, for an electronic device with a single screen, the target area may be a floating window displayed over the playing interface of the video; for an electronic device with two or more screens, such as a folding-screen device, the playing interface of the video may be displayed on a first screen, and the target area may be the display interface of a second screen.
As shown in fig. 2, the electronic device includes a first screen 21 and a second screen 22, the first screen 21 is used for displaying a playing interface of a video, and the second screen 22 can be used as a target area and is used for displaying M image frames associated with a playing progress point.
And 103, receiving a second input of the user to the N target image frames in the M image frames.
In the step, M, N are positive integers, and N is less than or equal to M; the second input may be a selection operation on a single target image frame or a sliding selection operation on a plurality of target image frames.
In the process of selecting the N target image frames from the M image frames, the target image frame may be selected by receiving a click input or a long press input of a user on the target image frame of the M image frames.
In addition, when the second input is a sliding input on the M image frames displayed in the target area, the first of the N target image frames is the image frame corresponding to the sliding start point of the sliding input, and the Nth of the N target image frames is the image frame corresponding to the sliding end point of the sliding input.
And step 104, responding to the second input, and updating the playing progress of the video to the target playing positions corresponding to the N target image frames.
In this step, by responding to the second input, the playing progress of the video is updated to the target playing position corresponding to the N selected target image frames, so that the playing progress can be located and adjusted accurately and quickly without the user dragging back and forth, effectively simplifying the adjustment operation.
Moreover, by displaying the M image frames associated with the playing progress point corresponding to the first input, the user can also quickly identify whether a video clip he or she wants to watch exists near the playing progress point, further improving the efficiency of adjusting to the target playing position.
When N is 1, that is, only one target image frame is selected, the playing progress of the video may be updated to the playing position corresponding to the target image frame, so as to achieve accurate positioning and adjustment of the playing progress.
In the case where N is greater than 1, that is, in the case where two or more target image frames are selected, the target playback position may be determined based on the selection order of the target image frames. For example, when a target image frame selected first is set as an initial playing frame, a playing position corresponding to a first selected target image frame in the N target image frames is determined as a target playing position; and under the condition that the last selected target image frame is set as the initial playing frame, determining the playing position corresponding to the last selected target image frame in the N target image frames as the target playing position.
In the case that N is greater than 1, how to determine the starting playing frame in the N target image frames may be set based on user preference or habit.
When N is greater than 1, that is, when the number of target image frames selected by the user is greater than 1, the second input may be a sliding selection operation over N of the M image frames. Alternatively, the N target image frames may be determined by selecting a first image frame and a second image frame among the M image frames: the first image frame serves as the initial image frame, the second image frame serves as the terminating image frame, and the first image frame, the second image frame, and the image frames between them are taken as the selected target image frames, that is, the N selected target image frames.
As shown in fig. 3, the second input includes a long-press operation or a double-click operation on the first image frame 31, which makes the first image frame 31 the starting image frame among the N target image frames; the second input further comprises a long-press operation or a double-click operation on the second image frame 32, which makes the second image frame 32 the ending image frame, thereby determining the starting and ending image frames. The first image frame 31, the second image frame 32, and the image frames between them are then taken as the selected target image frames, that is, the N selected target image frames.
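The start/end selection just described can be sketched as a simple range pick; a minimal Python sketch, where the function name and index-based interface are illustrative assumptions:

```python
def select_target_frames(frames, start_idx, end_idx):
    """Return the N target frames between a chosen start frame and end
    frame (inclusive), per the long-press/double-click selection above.
    Order of the two indices does not matter."""
    lo, hi = sorted((start_idx, end_idx))
    return frames[lo:hi + 1]
```

Selecting the same frame twice yields N = 1, the single-frame case described earlier.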
In addition, when the playing progress point corresponding to the first input changes, the M image frames displayed in the target area change accordingly. The number of image frames, that is, the value of M, is also associated with the dragging speed or sliding distance of the first input; for example, the faster the dragging speed, or the longer the sliding distance per unit time, the larger the value of M, that is, the more associated image frames are displayed.
The number of image frames associated with the playing progress point may also be adjusted by setting the association condition. For example, the time period associated with the playing progress point may be set to cover the image frames within 30 seconds before and after the progress point, within 1 minute before the progress point, or within 1 minute after the progress point. The longer the time period associated with the playing progress point, the greater the number of associated image frames. Therefore, in a specific setting process, the user can adjust the length of the associated time period and the dragging speed or sliding distance of the first input according to actual requirements, so as to display a reasonable number of image frames in the target area.
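The association window described above might be sketched as follows; this is an illustrative Python helper, and its name, parameters and sampling interval are assumptions rather than the patent's implementation:

```python
def frames_for_window(progress_s, window_before_s, window_after_s,
                      frame_interval_s=1.0, duration_s=None):
    """Return the timestamps of the M image frames associated with a
    playing progress point: one frame every frame_interval_s seconds
    inside [progress - before, progress + after], clipped to the video."""
    start = max(0.0, progress_s - window_before_s)
    end = progress_s + window_after_s
    if duration_s is not None:
        end = min(end, duration_s)
    ts, t = [], start
    while t <= end + 1e-9:          # small epsilon for float accumulation
        ts.append(round(t, 3))
        t += frame_interval_s
    return ts
```

A longer window yields a larger M, matching the observation that the longer the associated time period, the more image frames are associated with the progress point.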
Alternatively, in case N is greater than 1;
after the receiving of the second input of the user to the N target image frames of the M image frames, the method further includes:
generating a video clip associated with the N target image frames in response to the second input.
In this embodiment, for the case that N is greater than 1, a video clip associated with the N target image frames may also be generated, which is equivalent to extracting the video content the user wants to watch from the video into a new video clip; the user does not need to adjust the playing progress back and forth, which further simplifies the adjustment operation.
Optionally, the play progress point is a real-time display position of a slider on a play progress control in response to the first input, where the slider is used to indicate a play progress of the video, and the first input is used to update a display position of the slider on the play progress control.
In this embodiment, the playing progress control may be a playing progress bar, and the slider is used to indicate the playing progress of the video; moreover, the playing progress of the video is correspondingly changed by changing the position of the sliding block.
Optionally, after displaying, in the target area, the M image frames associated with the play progress point in the video in response to the first input, and before receiving the second input of the user on the N target image frames among the M image frames, the method further comprises:
receiving a third input of a user to a filter control displayed in the target area;
in response to the third input, displaying at least one filter identifier in the target area;
receiving a fourth input of a user to a target filtering identifier in the at least one filtering identifier;
in response to the fourth input, displaying the M image frames as Q image frames;
the receiving a second input of the user to N target image frames of the M image frames includes:
receiving a second input of the user to N target image frames in the Q image frames;
wherein Q is a positive integer, N is less than or equal to Q, and Q is less than or equal to M.
In this embodiment, a filtering operation may be performed on the M image frames to filter out unneeded image frames, so as to narrow the selection range of the target image frames.
The filter control and the M image frames may be displayed in separate partitions of the target area. When a third input on the filter control is received, such as a pressing or clicking operation, at least one filter identifier corresponding to the filter control is displayed in the target area. A setting operation on the at least one filter identifier then masks the image frames among the M image frames that are associated with the selected filter identifier, and only the Q unmasked image frames are displayed, so that the N target image frames are selected from the Q image frames, thereby narrowing the selection range of the target image frames.
As shown in fig. 4a, the filter control 41 and the M image frames 42 may be displayed in the target area, that is, in the second screen 22.
As shown in fig. 4b, at least one filter identifier is displayed on the second screen 22; the at least one filter identifier comprises at least one of a time period identifier, a similarity identifier, a person identifier, a background identifier, and an article identifier.
For example, if the selected filter identifier is a person identifier and the selected person is person A, the image frames including person A among the M image frames are masked, and only the Q image frames not including person A are displayed, so as to narrow the selection range of the target image frames.
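The masking step in the person-A example might look like the following sketch; the tag-based frame representation is an assumption made for illustration, not the patent's data model:

```python
def filter_frames(frames, selected_identifier):
    """Mask the frames whose tags match the selected filter identifier
    and return the Q frames that remain visible.
    Each frame is a dict with a 'tags' set (illustrative structure)."""
    return [f for f in frames if selected_identifier not in f["tags"]]
```

Applying it with identifier `"personA"` keeps only frames that do not contain person A, matching the masking behaviour described above.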
For the masked image frames, the display is updated on the playing progress bar of the video. For example, the progress area corresponding to the masked image frames may be displayed in a first color (e.g., blue) on the playing progress bar, and the progress area corresponding to the unmasked image frames in a second color (e.g., gray), so as to narrow the selection range of the playing progress point. That is, when the image frames associated with the current playing progress point include the target image frame and the progress point needs to be adjusted, the selection range of the playing progress point can be narrowed and unnecessary dragging avoided, thereby simplifying the adjustment operation of the playing progress.
The filter identifiers include, but are not limited to, the above types and may be of other types; the user may also add or delete filter identifiers according to actual needs.
The M image frames may be displayed in the target area in a grid style, such as a nine-grid (3×3) layout, or in a list. When the number of image frames exceeds what the target area can display, the remaining image frames can be shown by sliding down, left, or right, so that the user can select the target image frames among them.
Furthermore, in the process of displaying the M image frames, the types of the image frames may also be identified, and image frames of different types displayed in separate partitions or segments. For example, if the M image frames include person image frames, landscape image frames, and food image frames, these may be displayed in separate partitions or segments so that the user can quickly select the image frames of interest.
Optionally, the displaying, in a target area, M image frames associated with a play progress point in the video includes:
and displaying M image frames related to the playing progress point in the video on an annular control in the target area.
In this embodiment, M image frames in the video may be displayed in a ring display manner, that is, the M image frames may be distributed and displayed on the ring control in the target area, so as to simplify the display manner of the M image frames.
The ring control can be arranged in segments, that is, the M image frames can be distributed over the ring control in segments. Moreover, the number of segments may be dynamic: the longer the video time corresponding to the M image frames, the fewer the segments; the shorter the video time, the more the segments.
For example, if the duration of the video is 1 hour, the number of segments of the ring control may be set to 6, that is, each segment displays the image frames within a 10-minute span, as shown in fig. 5a. After a certain segment is selected, the number of segments of the ring control can be updated to 10, that is, each segment displays the image frames within a 1-minute span, as shown in fig. 5b. In this way, the user can quickly locate the target image frame among the image frames of the video, and the playing progress can be adjusted accurately and quickly, achieving the aim of simplifying the adjustment operation.
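The dynamic segmentation in the 1-hour example might be sketched as below; the function names, the 10-minute threshold and the 6/10 segment counts are illustrative assumptions drawn from the example above:

```python
def segment_count(span_s, base=6, fine=10, fine_threshold_s=600):
    """Map the time span shown on the ring control to a segment count:
    long spans use the coarse count (e.g. 6 segments for a 1-hour video),
    spans of 10 minutes or less use the fine count (e.g. 10 segments)."""
    return base if span_s > fine_threshold_s else fine

def assign_to_segments(frame_ts, span_start_s, span_s):
    """Distribute frame timestamps over the ring control's segments."""
    n = segment_count(span_s)
    seg_len = span_s / n
    segments = [[] for _ in range(n)]
    for t in frame_ts:
        i = min(int((t - span_start_s) / seg_len), n - 1)  # clamp last edge
        segments[i].append(t)
    return segments
```

Zooming into one 10-minute segment re-runs the mapping with the smaller span, yielding 10 one-minute segments as in fig. 5b.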
Optionally, after the displaying, on the ring control in the target region, M image frames in the video associated with the play progress point, the method further includes:
receiving a fifth input of the user to the annular control, wherein the fifth input is used for controlling the annular control to rotate;
updating the M image frames to S image frames in response to the fifth input;
wherein S is a positive integer.
In this embodiment, the content and the number of the image frames displayed on the ring control can be updated by receiving an input operation on the ring control, so that the user can quickly locate the target image frame. Moreover, the image frames are displayed through the annular control, so that not only can the display effect of the image frames be improved, but also the image frames can be changed by controlling the rotation of the annular control, and the controllability of the change operation of the image frames can be enhanced.
The ring control can rotate clockwise or counterclockwise, and the clockwise and counterclockwise rotations can correspond to forward and backward movement of the video playing progress, respectively.
In addition, the rotation speed of the ring control can also adjust the number of displayed image frames. For example, the faster the ring control rotates, the greater the number of image frames displayed; the slower it rotates, the smaller the number of image frames displayed. The number of image frames displayed on the ring control can thus be adjusted simply by adjusting the rotation speed, which makes the operation convenient and fast.
For example, when the rotation speed of the fifth input is greater than the preset rotation speed, M is less than S, that is, the number of frames of the image displayed on the ring control is increased; and when the rotating speed of the fifth input is less than or equal to the preset rotating speed, M is greater than or equal to S, namely the number of image frames displayed on the annular control is reduced.
The preset rotation speed may be set based on the preference or habit of the user, that is, the value of M may be set based on the preference or habit of the user.
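A minimal sketch of the speed-dependent update of the frame count described above, assuming a fixed increment `step` that the patent does not specify:

```python
def updated_frame_count(m, rotation_speed, preset_speed, step=4):
    """Return S, the new number of frames shown on the ring control.

    Rotation faster than the preset speed shows more frames (S > M);
    otherwise the count shrinks (S <= M), never dropping below one
    frame. The `step` increment is an illustrative choice, not
    specified by the patent.
    """
    if rotation_speed > preset_speed:
        return m + step
    return max(1, m - step)
```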
Furthermore, the image frame corresponding to the current playing progress point can be displayed in the middle of the ring, so that the user can view the image frame corresponding to the current playing progress point at a glance.
Furthermore, the ring control may also have a sliding boundary mark, which may be a line drawn on the ring control and displayed differently from the segment dividing lines. As shown in fig. 5a and 5b, the segment dividing lines are thin dashed lines on the ring control, and the sliding boundary mark is a thick dashed line.
When the received fifth input acts on the sliding boundary mark, the update of the image frames displayed on the ring control can be triggered. The working principle is similar to that of a rotary-dial telephone.
According to the video processing method of the embodiment of the present application, a first input of a user is received, the first input being used to adjust the playing progress of a video; in response to the first input, M image frames associated with a playing progress point in the video are displayed in a target area, wherein the playing progress point is a playing position corresponding to the first input, and the target area is a display area associated with a playing interface of the video; a second input of the user to N target image frames in the M image frames is received; in response to the second input, the playing progress of the video is updated to the target playing position corresponding to the N target image frames; wherein M and N are positive integers, and N is less than or equal to M. This simplifies the adjustment operation of the playing progress.
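The summarized flow can be sketched as follows; the centred, evenly spaced sampling of candidate frames is an illustrative assumption, since the patent leaves the sampling strategy open:

```python
def frames_around(progress_s, duration_s, m, spacing_s=2.0):
    """First input: sample M candidate timestamps centred on the drag
    position, clamped to the video bounds. The spacing is an
    illustrative constant, not specified by the patent."""
    half = (m - 1) / 2.0
    return [min(max(progress_s + (i - half) * spacing_s, 0.0), duration_s)
            for i in range(m)]

def seek_to_selected(candidates, index):
    """Second input: the selected candidate becomes the new progress."""
    return candidates[index]

cands = frames_around(30.0, 120.0, 5)        # [26.0, 28.0, 30.0, 32.0, 34.0]
new_progress = seek_to_selected(cands, 4)    # 34.0
```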
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution body may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiment of the present application, a video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided in the embodiment of the present application.
Referring to fig. 6, fig. 6 is a block diagram of a video processing apparatus according to an embodiment of the present application, and as shown in fig. 6, the apparatus 600 includes:
a first receiving module 601, configured to receive a first input of a user, where the first input is used to adjust a playing progress of a video;
a first display module 602, configured to display, in response to the first input, M image frames in the video that are associated with a play progress point in a target area, where the play progress point is a play position corresponding to the first input, and the target area is a display area associated with a play interface of the video;
a second receiving module 603, configured to receive a second input from a user to N target image frames in the M image frames;
a first updating module 604, configured to update a playing progress of the video to target playing positions corresponding to the N target image frames in response to the second input;
wherein M, N are positive integers, and N is less than or equal to M.
Optionally, the playing progress point is a real-time display position of a slider on a playing progress control in response to the first input, the slider is used for indicating the playing progress of the video, and the first input is used for updating the display position of the slider on the playing progress control;
and under the condition that N is greater than 1, the target playing position is a playing position corresponding to a first target image frame, and the first target image frame is a target image frame determined based on the selection sequence of the N target image frames.
Optionally, the apparatus 600 further comprises:
a third receiving module, configured to receive a third input by the user to a filter control displayed in the target area;
a second display module, configured to display at least one filtering identifier in the target area in response to the third input;
a fourth receiving module, configured to receive a fourth input by the user to a target filtering identifier in the at least one filtering identifier;
a third display module, configured to update and display the M image frames as Q image frames in response to the fourth input;
the second receiving module 603 is specifically configured to receive a second input of the user to N target image frames in the Q image frames;
wherein Q is a positive integer, and N is not less than Q and not more than M.
Optionally, the at least one filtering identifier includes at least one of a similarity identifier, a time period identifier, a character identifier, a background identifier, and an item identifier.
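A minimal sketch of the filtering step that reduces the M displayed frames to Q frames, assuming each frame carries metadata tags matching the filter identifiers; the tag schema used here is hypothetical:

```python
def filter_frames(frames, key, value):
    """Reduce the M displayed frames to the Q frames whose metadata
    matches the selected filter identifier (e.g. key="person").

    Frame metadata tags are assumed to exist; the patent does not
    specify how they are produced."""
    return [f for f in frames if f.get(key) == value]

frames = [
    {"t": 10, "person": "A", "background": "beach"},
    {"t": 25, "person": "B", "background": "beach"},
    {"t": 40, "person": "A", "background": "city"},
]
q_frames = filter_frames(frames, "person", "A")  # 2 frames at t=10, t=40
```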
Optionally, in a case where N is greater than 1,
the apparatus 600 further comprises:
a generating module to generate a video clip associated with the N target image frames in response to the second input.
Optionally, the second input is a sliding input by the user on the M image frames displayed in the target area;
the first target image frame in the N target image frames is an image frame corresponding to a sliding start point of the sliding input, and the nth target image frame in the N target image frames is an image frame corresponding to a sliding end point of the sliding input.
Optionally, the first display module 602 is specifically configured to display, on a ring control in the target area, M image frames associated with the play progress point in the video.
Optionally, the apparatus 600 further comprises:
a fifth receiving module, configured to receive a fifth input to the ring control by a user, where the fifth input is used to control the ring control to rotate;
a second updating module for updating the M image frames to S image frames in response to the fifth input;
wherein S is a positive integer.
Optionally, in a case that the rotation speed of the fifth input is greater than a preset rotation speed, M is less than S;
and M is greater than or equal to S under the condition that the rotating speed of the fifth input is less than or equal to the preset rotating speed.
The video processing apparatus in the embodiment of the present application may be an apparatus, and may also be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not limited in particular.
The video processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 7, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the video processing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Referring to fig. 8, fig. 8 is a block diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 8, the electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further comprise a power supply (e.g., a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 810 via a power management system, so as to manage charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, or combine some components, or arrange the components differently, which is not described in detail herein.
A user input unit 807 for receiving a first input of a user, the first input being used to adjust a playing progress of the video;
a display unit 806, configured to display, in response to the first input, M image frames in the video that are associated with a play progress point in a target area, where the play progress point is a play position corresponding to the first input, and the target area is a display area associated with a play interface of the video;
a user input unit 807 for receiving a second input of the user to N target image frames among the M image frames;
a processor 810, configured to update a playing progress of the video to target playing positions corresponding to the N target image frames in response to the second input;
wherein M, N are all positive integers, and N is less than or equal to M.
Optionally, the playing progress point is a real-time display position of a slider on a playing progress control in response to the first input, the slider is used for indicating the playing progress of the video, and the first input is used for updating the display position of the slider on the playing progress control;
and under the condition that N is greater than 1, the target playing position is a playing position corresponding to a first target image frame, and the first target image frame is a target image frame determined based on the selection sequence of the N target image frames.
Optionally, a user input unit 807, configured to receive a third input by the user to the filter control displayed in the target area;
a display unit 806, configured to display at least one filtering identifier in the target area in response to the third input;
a user input unit 807 for receiving a fourth input of the target filtering identifier of the at least one filtering identifier from the user;
a display unit 806 for updating and displaying the M image frames as Q image frames in response to the fourth input;
a user input unit 807 for receiving a second input of the user to N target image frames among the Q image frames;
wherein Q is a positive integer, and N is not less than Q and not more than M.
Optionally, the at least one filtering identifier includes at least one of a similarity identifier, a time period identifier, a character identifier, a background identifier, and an item identifier.
Optionally, in a case where N is greater than 1,
a processor 810 for generating a video clip associated with the N target image frames in response to the second input.
Optionally, the second input is a sliding input by the user on the M image frames displayed in the target area;
the first target image frame in the N target image frames is an image frame corresponding to a sliding start point of the sliding input, and the nth target image frame in the N target image frames is an image frame corresponding to a sliding end point of the sliding input.
Optionally, the display unit 806 is configured to display, on the ring control in the target area, M image frames in the video that are associated with the play progress point.
Optionally, a user input unit 807 for receiving a fifth input of the ring control from the user, wherein the fifth input is used for controlling the ring control to rotate;
a processor 810 for updating the M image frames to S image frames in response to the fifth input;
wherein S is a positive integer.
Optionally, in a case that the rotation speed of the fifth input is greater than a preset rotation speed, M is less than S;
and when the rotating speed of the fifth input is less than or equal to the preset rotating speed, M is greater than or equal to S.
It should be understood that, in the embodiment of the present application, the input Unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042, and the Graphics Processing Unit 8041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. A touch panel 8071, also referred to as a touch screen. The touch panel 8071 may include two portions of a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 809 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems. The processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (20)

1. A video processing method, comprising:
receiving a first input of a user, wherein the first input is used for adjusting the playing progress of a video;
in response to the first input, displaying, in a target area, M image frames associated with a playing progress point in the video, wherein the playing progress point is a playing position corresponding to the first input, and the target area is a display area associated with a playing interface of the video;
receiving a second input of the N target image frames in the M image frames from the user;
in response to the second input, updating the playing progress of the video to target playing positions corresponding to the N target image frames;
wherein M, N are positive integers, M is greater than 1, N is less than or equal to M, and the value of M is associated with the dragging speed or the sliding distance of the first input.
2. The method of claim 1, wherein the playing progress point is a real-time display position of a slider on a playing progress control in response to the first input, the slider being used to indicate the playing progress of the video, and the first input being used to update the display position of the slider on the playing progress control;
and under the condition that N is greater than 1, the target playing position is a playing position corresponding to a first target image frame, and the first target image frame is a target image frame determined based on the selection sequence of the N target image frames.
3. The method of claim 1, wherein in response to the first input, after a target area displays M image frames in the video associated with a playback progress point and before the receiving a second input of a user to N target image frames in the M image frames, the method further comprises:
receiving a third input of a user to a filter control displayed in the target area;
displaying at least one filter identifier in the target area in response to the third input;
receiving a fourth input of a user to a target filtering identifier in the at least one filtering identifier;
in response to the fourth input, displaying the M image frames as Q image frames;
the receiving a second input of the user to N target image frames of the M image frames includes:
receiving a second input of the user to N target image frames in the Q image frames;
wherein Q is a positive integer, and N is not less than Q and not more than M.
4. The method of claim 3, wherein the at least one filtering indicator comprises at least one of a similarity indicator, a time period indicator, a character indicator, a background indicator, and an item indicator.
5. The method according to claim 1, characterized in that in case N is greater than 1;
after the receiving of the second input of the user to the N target image frames of the M image frames, the method further includes:
generating a video clip associated with the N target image frames in response to the second input.
6. The method of claim 1, wherein the second input is a sliding input by the user on the M image frames displayed in the target area;
the first target image frame in the N target image frames is an image frame corresponding to a sliding start point of the sliding input, and the nth target image frame in the N target image frames is an image frame corresponding to a sliding end point of the sliding input.
7. The method according to claim 1, wherein the displaying M image frames associated with a playing progress point in the video in a target area comprises:
and displaying M image frames related to the playing progress point in the video on an annular control in the target area.
8. The method according to claim 7, wherein after the displaying, on the ring control in the target area, the M image frames of the video associated with the play progress point, the method further comprises:
receiving a fifth input of the user to the annular control, wherein the fifth input is used for controlling the rotation of the annular control;
updating the M image frames to S image frames in response to the fifth input;
wherein S is a positive integer.
9. The method of claim 8,
when the rotating speed of the fifth input is higher than the preset rotating speed, M is lower than S;
and M is greater than or equal to S under the condition that the rotating speed of the fifth input is less than or equal to the preset rotating speed.
10. A video processing apparatus, comprising:
the first receiving module is used for receiving a first input of a user, and the first input is used for adjusting the playing progress of the video;
a first display module, configured to, in response to the first input, display, in a target area, M image frames associated with a play progress point in the video, wherein the play progress point is a play position corresponding to the first input, and the target area is a display area associated with a play interface of the video;
a second receiving module, configured to receive a second input of the N target image frames in the M image frames from the user;
a first updating module, configured to update a playing progress of the video to target playing positions corresponding to the N target image frames in response to the second input;
wherein M, N are positive integers, M is greater than 1, N is less than or equal to M, and the value of M is associated with the dragging speed or the sliding distance of the first input.
11. The apparatus of claim 10, wherein the play progress point is a real-time display position of a slider on a play progress control in response to the first input, the slider being used to indicate the play progress of the video, and the first input being used to update the display position of the slider on the play progress control;
and under the condition that N is greater than 1, the target playing position is a playing position corresponding to a first target image frame, and the first target image frame is a target image frame determined based on the selection sequence of the N target image frames.
12. The apparatus of claim 10, further comprising:
the third receiving module is used for receiving a third input of the filter control displayed in the target area by the user;
the second display module is used for responding to the third input and displaying at least one filtering identifier in the target area;
the fourth receiving module is used for receiving fourth input of a user on a target filtering identifier in the at least one filtering identifier;
a third display module for updating and displaying the M image frames as Q image frames in response to the fourth input;
the second receiving module is specifically configured to receive a second input of the user to N target image frames in the Q image frames;
wherein Q is a positive integer, and N is not less than Q and not more than M.
13. The apparatus of claim 12, wherein the at least one filtering indicator comprises at least one of a similarity indicator, a time period indicator, a character indicator, a background indicator, and an item indicator.
14. The apparatus of claim 10, wherein, in a case where N is greater than 1, the apparatus further comprises:
a generating module to generate a video clip associated with the N target image frames in response to the second input.
15. The apparatus of claim 10, wherein the second input is a sliding input by the user on the M image frames displayed in the target area;
the first target image frame in the N target image frames is an image frame corresponding to a sliding start point of the sliding input, and the nth target image frame in the N target image frames is an image frame corresponding to a sliding end point of the sliding input.
16. The apparatus according to claim 10, wherein the first display module is specifically configured to display, on a ring control in the target area, M image frames associated with the playback progress point in the video.
17. The apparatus of claim 16, further comprising:
a fifth receiving module, configured to receive a fifth input to the ring control by a user, where the fifth input is used to control the ring control to rotate;
a second updating module for updating the M image frames to S image frames in response to the fifth input;
wherein S is a positive integer.
18. The apparatus of claim 17,
when the rotating speed of the fifth input is higher than the preset rotating speed, M is smaller than S;
and when the rotating speed of the fifth input is less than or equal to the preset rotating speed, M is greater than or equal to S.
19. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the video processing method according to any one of claims 1 to 9.
20. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the video processing method according to any one of claims 1 to 9.
CN202110087983.7A 2021-01-22 2021-01-22 Video processing method, video processing device, electronic equipment and medium Active CN112929748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110087983.7A CN112929748B (en) 2021-01-22 2021-01-22 Video processing method, video processing device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110087983.7A CN112929748B (en) 2021-01-22 2021-01-22 Video processing method, video processing device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN112929748A CN112929748A (en) 2021-06-08
CN112929748B true CN112929748B (en) 2022-07-15

Family

ID=76164762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110087983.7A Active CN112929748B (en) 2021-01-22 2021-01-22 Video processing method, video processing device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112929748B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113507614A (en) * 2021-06-23 2021-10-15 青岛海信移动通信技术股份有限公司 Video playing progress adjusting method and display equipment
CN113905125B (en) * 2021-09-08 2023-02-21 维沃移动通信有限公司 Video display method and device, electronic equipment and storage medium
CN115878844A (en) * 2021-09-27 2023-03-31 北京有竹居网络技术有限公司 Video-based information display method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577508A (en) * 2012-07-27 2014-02-12 纬创资通股份有限公司 Movie preview method, movie preview system and computer program product
CN109121008A (en) * 2018-08-03 2019-01-01 腾讯科技(深圳)有限公司 A kind of video previewing method, device, terminal and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040128317A1 (en) * 2000-07-24 2004-07-01 Sanghoon Sull Methods and apparatuses for viewing, browsing, navigating and bookmarking videos and displaying images
CN101976169B (en) * 2010-10-22 2012-11-21 鸿富锦精密工业(深圳)有限公司 Electronic reading device and page turning method thereof
US8732579B2 (en) * 2011-09-23 2014-05-20 Klip, Inc. Rapid preview of remote video content
CN106231343A (en) * 2016-10-11 2016-12-14 青岛海信电器股份有限公司 Video playback processing method, device and TV
US20190149885A1 (en) * 2017-11-13 2019-05-16 Philo, Inc. Thumbnail preview after a seek request within a video
CN110324717B (en) * 2019-07-17 2021-11-02 咪咕文化科技有限公司 Video playing method and device and computer readable storage medium
CN111050214A (en) * 2019-12-26 2020-04-21 维沃移动通信有限公司 Video playing method and electronic equipment

Also Published As

Publication number Publication date
CN112929748A (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN112929748B (en) Video processing method, video processing device, electronic equipment and medium
CN112135181B (en) Video preview method and device and electronic equipment
CN112954199B (en) Video recording method and device
CN111669507A (en) Photographing method and device and electronic equipment
CN112162685B (en) Attribute adjusting method and device and electronic equipment
CN112672061B (en) Video shooting method and device, electronic equipment and medium
CN112911147B (en) Display control method, display control device and electronic equipment
CN111857510B (en) Parameter adjusting method and device and electronic equipment
CN112114734A (en) Online document display method and device, terminal and storage medium
CN112698762B (en) Icon display method and device and electronic equipment
CN114237801A (en) Desktop display method and device, electronic equipment and medium
CN114116098A (en) Application icon management method and device, electronic equipment and storage medium
CN113747080A (en) Shooting preview method, shooting preview device, electronic equipment and medium
CN113891127A (en) Video editing method and device and electronic equipment
CN113703623A (en) Program icon display method, device, electronic equipment and medium
CN112394806A (en) User interface display method and device, electronic equipment and storage medium
CN112199552A (en) Video image display method and device, electronic equipment and storage medium
CN112181252A (en) Screen capturing method and device and electronic equipment
CN112214774A (en) Permission setting method, file playing method and device and electronic equipment
CN111796746A (en) Volume adjusting method, volume adjusting device and electronic equipment
CN111638828A (en) Interface display method and device
CN115268816A (en) Split screen control method and device, electronic equipment and readable storage medium
CN113923392A (en) Video recording method, video recording device and electronic equipment
CN115460448A (en) Media resource editing method and device, electronic equipment and storage medium
CN112698771B (en) Display control method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant