CN112437353A - Video processing method, video processing apparatus, electronic device, and readable storage medium

Video processing method, video processing apparatus, electronic device, and readable storage medium

Info

Publication number
CN112437353A
Authority
CN
China
Prior art keywords
target
input
video
target video
user
Prior art date
Legal status
Granted
Application number
CN202011479953.2A
Other languages
Chinese (zh)
Other versions
CN112437353B (en)
Inventor
王梦莹 (Wang Mengying)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011479953.2A priority Critical patent/CN112437353B/en
Publication of CN112437353A publication Critical patent/CN112437353A/en
Application granted granted Critical
Publication of CN112437353B publication Critical patent/CN112437353B/en
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The application discloses a video processing method, a video processing apparatus, an electronic device, and a readable storage medium, and belongs to the technical field of video processing. The video processing method comprises the following steps: receiving a first input of a user in a case of outputting a target video; in response to the first input, playing a target video clip in a target display area, wherein the target video clip comprises a video clip in the target video that is intercepted according to the first input. With the method and the device, the user can learn directly from the target video clip, which simplifies the learning process and shortens the time it consumes.

Description

Video processing method, video processing apparatus, electronic device, and readable storage medium
Technical Field
The present application belongs to the field of video processing technologies, and in particular, to a video processing method, a video processing apparatus, an electronic device, and a readable storage medium.
Background
In this era of knowledge explosion, people are increasingly keen on learning. Terminal devices provide users with a learning platform that is not limited by time or place. In particular, more and more users learn autonomously through videos.
Generally, when a user learns through a video, there are difficult parts that require repeated viewing. The user can only drag the slider on the playback progress bar with a finger to the point from which playback should restart, watch from that point, drag the slider again, and repeat these steps until the content is learned. The whole process is not only cumbersome but also consumes a great deal of the user's learning time, resulting in a relatively poor learning experience.
Disclosure of Invention
An embodiment of the present application provides a video processing method, a video processing apparatus, an electronic device, and a readable storage medium, which can solve the problems in the prior art that the video-based learning process is cumbersome and time-consuming.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video processing method, where the video processing method includes:
receiving a first input of a user in a case of outputting a target video;
in response to the first input, playing a target video segment in a target display area, wherein the target video segment comprises a video segment in the target video that was intercepted according to the first input.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the first receiving module is used for receiving a first input of a user under the condition of outputting the target video;
a first response module, configured to play a target video clip in a target display area in response to the first input, where the target video clip includes a video clip in the target video that is captured according to the first input.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the video processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the video processing method according to the first aspect.
In the embodiment of the application, when the target video is output, the part of the target video that the user is interested in or needs to study intensively, namely the target video clip, is intercepted through the first input, and the target video clip is then played in the target display area, so that the user can conveniently learn directly from the target video clip, which simplifies the learning process and shortens the time it consumes.
Drawings
Fig. 1 is a flowchart illustrating steps of a video processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic illustration showing a play mode control provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a preview image display provided by an embodiment of the present application;
fig. 4 is a schematic illustration showing a first preset control and a second preset control provided in the embodiment of the present application;
FIG. 5 is a schematic illustration of a window control provided in an embodiment of the present application;
FIG. 6 is a second illustration of a window control according to an embodiment of the present application;
FIG. 7 is a schematic illustration of a markup control presentation provided by an embodiment of the present application;
fig. 8 is a block diagram of a video processing apparatus according to an embodiment of the present application;
fig. 9 is one of the hardware structure diagrams of the electronic device provided in the embodiment of the present application;
fig. 10 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding related objects are in an "or" relationship.
The video processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, a video processing method provided in an embodiment of the present application includes:
step 101: in a case where the target video is output, a first input of a user is received.
In this step, outputting the target video may be understood as the electronic device playing the target video, where the target video may be in a playing state or a paused state. The target video may be a local video played locally by the electronic device, or a network streaming media video played over a network. The video content of the target video may be any content, for example, video showing actions and skills, or course teaching video in the form of lecture explanation. The first input may be a click input, a swipe input, a long-press input, or the like.
Step 102: in response to the first input, the target video clip is played in the target display area.
In this step, the target video clip includes a video clip in the target video captured according to the first input. Different first inputs intercept video clips of different durations at different positions in the target video. Here, the target video clip is saved after being intercepted, and a user input may be received to name the saved target video clip. If the user does not name the target video clip, a name is automatically generated according to a preset rule, for example, "Clip N", where N is a positive integer.
Of course, by repeating step 101, multiple target video clips can be obtained and stored. When playing, the user may operate a displayed selection control to select one of the target video clips to play, or the most recently intercepted target video clip may be played directly. The target display area is a fixed or non-fixed display area on the screen of the electronic device.
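As an illustration of the clip bookkeeping described above, the following minimal Kotlin sketch models saving intercepted clips and the "Clip N" default naming rule; the VideoClip and ClipStore types and all field names are assumptions for illustration, not part of the patent.

```kotlin
// Hypothetical model of a captured clip; field names are illustrative, not from the patent.
data class VideoClip(
    val sourceVideoId: String,
    val startSec: Double,
    val endSec: Double,
    var name: String
)

class ClipStore {
    private val clips = mutableListOf<VideoClip>()

    // Save a newly intercepted clip; if the user supplies no name,
    // auto-generate one per the preset rule, e.g. "Clip N" (N is a positive integer).
    fun save(sourceVideoId: String, startSec: Double, endSec: Double, userName: String? = null): VideoClip {
        val name = userName ?: "Clip ${clips.size + 1}"
        val clip = VideoClip(sourceVideoId, startSec, endSec, name)
        clips += clip
        return clip
    }

    // The most recently intercepted clip can be played directly.
    fun latest(): VideoClip? = clips.lastOrNull()
    fun all(): List<VideoClip> = clips.toList()
}

fun main() {
    val store = ClipStore()
    store.save("target-video", startSec = 120.0, endSec = 185.5)          // auto-named "Clip 1"
    store.save("target-video", 300.0, 340.0, userName = "Fixed points")   // user-named
    println(store.all().map { it.name })  // [Clip 1, Fixed points]
}
```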
In the embodiment of the application, when the target video is output, the part of the target video that the user is interested in or needs to study intensively, namely the target video clip, is intercepted through the first input, and the target video clip is then played in the target display area, so that the user can learn directly from the target video clip, which simplifies the learning process and shortens the time it consumes.
Optionally, after playing the target video segment in the target display area in response to the first input, the video processing method further includes:
and displaying a play mode control.
In this step, the play mode control is used to control the play mode of the target video clip. The play modes include at least one of a loop play mode and a double-speed play mode. The loop play mode includes a sequential play mode and a repeat play mode. When there are multiple target video clips, in the sequential play mode the target video clips are played in the order in which they were stored. In the repeat play mode, the current target video clip is played repeatedly. The speed range of the double-speed play mode includes 0.5 times to 3.0 times, but is not limited thereto.
Receiving a second input of the user to the play mode control;
in this step, the second input may be a click input, a slide input, a long press input, or the like.
And responding to the second input, and controlling the target video clip to be played in the target display area according to the target playing mode.
In this step, the target play mode is the play mode indicated by the second input. For example, referring to fig. 2, the play mode control includes a double speed control 21, and the double speed at which the target video segment is played can be selected by inputting to the double speed control 21. Here, the play mode control may also include other controls, and a dialog box 23 including all play mode control information may be triggered and displayed by clicking the displayed target control 22, and a corresponding play mode control may be displayed by inputting to the dialog box 23.
In the embodiment of the application, the input of the user can be received, and the target video clip is played according to the play mode indicated by the input of the user, so that various play requirements of the user are met.
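The play-mode logic above can be illustrated with a small Kotlin sketch; the LoopMode enum, the PlaybackSettings type, and the 0.5x–3.0x clamp mirror the modes described, but the API itself is an assumption, not the patent's implementation.

```kotlin
// Illustrative play-mode settings for the target video clips.
enum class LoopMode { SEQUENTIAL, REPEAT_CURRENT }

data class PlaybackSettings(
    val loopMode: LoopMode = LoopMode.SEQUENTIAL,
    val speed: Double = 1.0
) {
    // Double-speed range is 0.5x to 3.0x (but not limited thereto in the patent).
    fun withSpeed(requested: Double): PlaybackSettings =
        copy(speed = requested.coerceIn(0.5, 3.0))
}

// Decide which clip index plays next according to the loop mode.
fun nextClipIndex(current: Int, clipCount: Int, settings: PlaybackSettings): Int =
    when (settings.loopMode) {
        LoopMode.REPEAT_CURRENT -> current                 // keep replaying the current clip
        LoopMode.SEQUENTIAL -> (current + 1) % clipCount   // play clips in stored order
    }

fun main() {
    var settings = PlaybackSettings()
    settings = settings.withSpeed(2.0)
    println(settings)                                   // speed = 2.0
    println(nextClipIndex(1, 3, settings))              // 2
    println(nextClipIndex(1, 3, settings.copy(loopMode = LoopMode.REPEAT_CURRENT)))  // 1
}
```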
Optionally, before receiving the first input of the user, the video processing method further includes:
a selection control is displayed that includes at least one intercept mode option.
In this step, each interception mode option corresponds to an interception mode for intercepting a video segment.
And receiving a third input of the target option in the at least one interception mode option from the user.
And responding to the third input, and adjusting the intercepting mode of the intercepted video clip to the intercepting mode corresponding to the target option.
In this step, when the electronic device is in different capture modes, different controls are displayed for capturing video clips from the target video. For example, a control for inputting times may be displayed, and a video clip is intercepted from the target video according to the start time and end time input by the user. Alternatively, a window control for inputting a duration may be displayed; the length of the window control is adjusted according to the target duration input by the user, and a video clip of the target duration is then intercepted from the target video according to the relative position between the window control and a progress bar, where the progress bar indicates the current playing progress of the target video. Alternatively, two draggable controls may be displayed on the progress bar; the user drags each control to a position on the progress bar, and the video clip between the two positions is then intercepted from the target video. Of course, when the electronic device is in different capture modes, the clip-capturing function of a displayed control may also be enabled. For example, a progress bar indicating the current playing progress of the target video is displayed on the playing screen during playback of the target video, with a slider on the progress bar. When the capture mode is not enabled, sliding the slider adjusts the current playing progress of the target video. When the capture mode is enabled, the slider does not affect the current playing progress; instead, a picture of the video frame at the current position is displayed above the slider, the user may provide input on the displayed picture, and the video clip between the two positions corresponding to two consecutively input pictures is intercepted from the target video.
In the embodiment of the application, the user can input the intercepting mode selection control according to the own requirement or the video content of the target video, so that the target video clip is intercepted according to the intercepting mode indicated by the user input, the requirement of the user is met, and the flexibility of the intercepting video clip is improved.
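A minimal Kotlin sketch of how the selected capture mode might be tracked and dispatched; the CaptureMode enum values correspond to the three modes discussed below, but this enum-and-dispatch structure is an illustrative assumption.

```kotlin
// Sketch of switching capture modes in response to the third input.
enum class CaptureMode { PREVIEW_IMAGE, SUBTITLE_KEYWORD, DURATION_WINDOW }

class CaptureController(var mode: CaptureMode = CaptureMode.PREVIEW_IMAGE) {

    // Third input: adjust the capture mode to the one corresponding to the target option.
    fun selectMode(targetOption: CaptureMode) {
        mode = targetOption
    }

    // Different modes drive different controls; here we just report which flow applies.
    fun describeFlow(): String = when (mode) {
        CaptureMode.PREVIEW_IMAGE    -> "Select start/end via preview images above the slider"
        CaptureMode.SUBTITLE_KEYWORD -> "Enter a keyword and pick a matching subtitle position"
        CaptureMode.DURATION_WINDOW  -> "Enter a duration and drag a window over the progress bar"
    }
}

fun main() {
    val controller = CaptureController()
    controller.selectMode(CaptureMode.SUBTITLE_KEYWORD)
    println(controller.describeFlow())
}
```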
Optionally, the capture mode includes a first capture mode, and when the electronic device is in the first capture mode and the user slides the slider on the progress bar of the target video, a preview image is displayed above the slider, where the preview image includes a video frame and/or a subtitle of the target video corresponding to a position of the slider on the progress bar. Here, the progress bar and the slider are used to indicate or adjust the playing progress of the target video, and are similar to the progress bar and the slider when the video player plays the video file, and are not described herein again. The preview image may be a thumbnail of a video frame or a subtitle. When the target video includes a subtitle and a video frame, the subtitle has the same time axis as the video frame. Referring to fig. 3, when the user slides the slider to the first position 31, the displayed preview image is the first preview image 32. The first preview image 32 may include only the video frame corresponding to the first position 31, or only the subtitle corresponding to the first position 31, or include both the video frame corresponding to the first position 31 and the subtitle corresponding to the first position 31. When the user slides the slider to the second position 33, the displayed preview image is the second preview image 34. The second preview image 34 includes content similar to that included in the first preview image 32 and will not be described in detail herein.
Under the condition that the interception mode corresponding to the target option is the first interception mode, receiving a first input of a user, wherein the first input comprises the following steps:
first input of a user to the two preview images is received.
In this step, the two preview images are displayed when the user slides the slider at two different times, respectively. For example, with reference to fig. 3, when the user slides the slider to the first position 31 at the first time, the displayed preview image is the first preview image 32, and the user can perform a first input on the first preview image 32 to determine the start position of the video segment, i.e. the first position 31. And the user continues to slide the slider, and when the slider is slid to the second position 33 at the second moment, the displayed preview image is the second preview image 34, and then the user can perform the first input on the second preview image 34 to determine the end position of the video clip, namely the second position 33.
In response to a first input, playing a target video clip in a target display area, comprising:
in response to a first input, a target video frame or target subtitle is determined from each of the two preview images, respectively.
In this step, the target video frame includes a video frame in the target video corresponding to a position where the slider is located on the progress bar when the preview image is displayed, and the target subtitle includes a subtitle in the target video corresponding to a position where the slider is located on the progress bar when the preview image is displayed. Here, the progress bar, the video frame, and the subtitle have the same time axis, and the position of the slider on the progress bar corresponds to a specific time, and the target video frame and the target subtitle are determined according to the time.
And intercepting the video frames between two target video frames or two target subtitles in the target video to obtain a target video segment.
In this step, when intercepting the video frames between two target video frames or between two target subtitles, the interception may be performed according to the moments corresponding to the target video frames or the target subtitles. The moments corresponding to the intercepted video frames all lie between the moments corresponding to the two target video frames or between the moments corresponding to the two target subtitles.
And playing the target video clip in the target display area.
In the embodiment of the application, the content of the beginning and the ending of the video clip is displayed through the preview image, so that a user can conveniently intercept the target video clip according to the video content.
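A minimal Kotlin sketch of the first capture mode, assuming each preview image carries the slider's timestamp at the moment it was shown; because the progress bar, video frames, and subtitles share one time axis, the clip is simply the span between the two selected timestamps. All type and function names are illustrative, not the patent's.

```kotlin
// The slider position (a timestamp) when a preview image was displayed and tapped.
data class PreviewSelection(val sliderPositionSec: Double)

data class ClipRange(val startSec: Double, val endSec: Double)

// Two first inputs on two preview images determine the start and end of the clip.
fun clipFromPreviews(first: PreviewSelection, second: PreviewSelection): ClipRange {
    val start = minOf(first.sliderPositionSec, second.sliderPositionSec)
    val end = maxOf(first.sliderPositionSec, second.sliderPositionSec)
    return ClipRange(start, end)
}

fun main() {
    val startPick = PreviewSelection(sliderPositionSec = 95.0)   // first preview image tapped
    val endPick = PreviewSelection(sliderPositionSec = 172.5)    // second preview image tapped
    println(clipFromPreviews(startPick, endPick))                // ClipRange(startSec=95.0, endSec=172.5)
}
```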
Optionally, the capture modes include a second capture mode. When the electronic device is in the second capture mode, a first preset control is displayed, and at least one second preset control is displayed according to a keyword input by the user in the first preset control, where each second preset control corresponds to a position where the keyword appears in the subtitles of the target video. When the target video includes subtitles and video frames, the subtitles and the video frames share the same time axis, and the subtitles are displayed correspondingly while the video frames are displayed. Because users' behavior habits differ, some users are sensitive to subtitles and can roughly remember some key subtitles of a video clip. The user inputs a subtitle from the start position of the video clip to be intercepted. For example, if that subtitle is "The following explains the key knowledge points of this term: fixed points and their applications", the user may only clearly remember the words "fixed points"; the user inputs "fixed points", and all subtitles matching "fixed points" are listed for the user to select. Referring to fig. 4, the user inputs "fixed points" in the first preset control 41, the subtitles of the target video are matched in a fuzzy matching manner to determine all positions at which "fixed points" appears in the subtitles, and each second preset control 42 displays the subtitle containing "fixed points" corresponding to one position.
Under the condition that the interception mode corresponding to the target option is the second interception mode, receiving a first input of a user, wherein the first input comprises the following steps:
receiving a first input to a target second preset control, wherein the target second preset control is one of at least one second preset control;
in response to a first input, playing a target video clip in a target display area, comprising:
and responding to the first input, intercepting a video clip with preset duration from the target position in the target video to obtain the target video clip.
In this step, the target position is a position where a keyword appears in a subtitle of the target video corresponding to the target second preset control. With continued reference to fig. 4, if the target second preset control is the uppermost second preset control 42 of the three second preset controls 42 displayed, the target position 43 may be determined. The preset duration may be any duration that is preset and smaller than the total duration of the target video, and may also be duration input by the user when the user intercepts the target video clip. Certainly, another target position may also be determined in a similar manner, and the video segment between the two target positions is intercepted to obtain the target video segment, which is not described herein again.
And playing the target video clip in the target display area.
In the embodiment of the application, the subtitle factors in the target video are considered, the keywords input by the user are matched in the subtitles of the target video, and the target video segment related to the keywords is obtained, so that the user can conveniently intercept the required target video segment through the subtitles.
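A minimal Kotlin sketch of the second capture mode, under the assumption that fuzzy matching can be approximated by a case-insensitive substring test; SubtitleEntry, KeywordMatch, and the preset-duration cut are illustrative names, not the patent's implementation.

```kotlin
// A subtitle line and the time at which it appears on the shared time axis.
data class SubtitleEntry(val startSec: Double, val text: String)

data class KeywordMatch(val positionSec: Double, val subtitleText: String)

// Each match corresponds to one "second preset control": a position where the keyword appears.
fun findKeywordPositions(subtitles: List<SubtitleEntry>, keyword: String): List<KeywordMatch> =
    subtitles.filter { it.text.contains(keyword, ignoreCase = true) }
        .map { KeywordMatch(it.startSec, it.text) }

// Intercept a clip of presetDurationSec starting at the chosen target position,
// clamped to the total duration of the target video.
fun clipFromKeyword(match: KeywordMatch, presetDurationSec: Double, totalDurationSec: Double): Pair<Double, Double> {
    val end = minOf(match.positionSec + presetDurationSec, totalDurationSec)
    return match.positionSec to end
}

fun main() {
    val subtitles = listOf(
        SubtitleEntry(10.0, "The following explains the key points of this term"),
        SubtitleEntry(42.0, "Fixed points and their applications"),
        SubtitleEntry(300.0, "Review of fixed points")
    )
    val matches = findKeywordPositions(subtitles, "fixed points")
    println(matches)                                        // candidate positions to choose from
    println(clipFromKeyword(matches.first(), 60.0, 600.0))  // (42.0, 102.0)
}
```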
Optionally, the intercepting modes include a third intercepting mode. When the electronic device is in the third intercepting mode, a window control is displayed, and after a user inputs a duration in the window control, the length of the window control is adjusted according to the input duration. The window control and the progress bar have the same time axis, that is, equal lengths of the window control and the progress bar represent equal durations. Referring to fig. 5, the duration of the video clip to be intercepted may be input through the window control 51. The window control 51 includes two input windows, the left in minutes and the right in seconds, but is not limited thereto. While the user slides the window control, a preview image is displayed on the window control, which is the same as the preview image displayed when the slider on the progress bar is slid in the above embodiment and is not described again here. Referring to fig. 6, the window control 61 includes a first window 62 and a second window 63, wherein the first window 62 displays a preview image at a first target position 64 corresponding to the leftmost edge of the window control 61, and the second window 63 displays a preview image at a second target position 65 corresponding to the rightmost edge of the window control 61.
Under the condition that the interception mode corresponding to the target option is the third interception mode, receiving a first input of a user, wherein the first input comprises the following steps:
receiving a first input of a window control by a user;
in response to a first input, playing a target video clip in a target display area, comprising:
and responding to the first input, and intercepting a video frame corresponding to two positions on the progress bar in the target video according to the positions on the progress bar corresponding to the two ends of the window control respectively to obtain a target video clip.
And playing the target video clip in the target display area.
In the embodiment of the application, the content of the beginning and the ending of the video clip can be displayed through the preview image, and meanwhile, a user can conveniently intercept the target video clip with fixed duration according to the requirement of the user.
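A minimal Kotlin sketch of the third capture mode, assuming a linear pixel-to-time mapping between the progress bar and the video's time axis; the ProgressBar and DurationWindow types and their fields are illustrative assumptions.

```kotlin
// The progress bar and the window control share a time axis; equal lengths mean equal durations.
data class ProgressBar(val widthPx: Double, val totalDurationSec: Double) {
    fun pxToSec(px: Double): Double = (px / widthPx) * totalDurationSec
}

// A window whose length was set from the user-entered duration, dragged along the bar.
data class DurationWindow(val leftEdgePx: Double, val durationSec: Double)

// The window's two ends correspond to two positions on the progress bar;
// the video frames between those positions form the target clip.
fun clipFromWindow(window: DurationWindow, bar: ProgressBar): Pair<Double, Double> {
    val start = bar.pxToSec(window.leftEdgePx)
    val end = (start + window.durationSec).coerceAtMost(bar.totalDurationSec)
    return start to end
}

fun main() {
    val bar = ProgressBar(widthPx = 1000.0, totalDurationSec = 600.0)
    // User enters 1 minute 30 seconds, then drags the window so its left edge sits at 250 px.
    val window = DurationWindow(leftEdgePx = 250.0, durationSec = 90.0)
    println(clipFromWindow(window, bar))  // (150.0, 240.0)
}
```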
Optionally, after playing the target video segment in the target display area in response to the first input, the video processing method further includes:
a markup control is displayed.
A fourth input to the tagging control by the user is received.
In response to a fourth input, a text label or a voice label is added at the target position in the target video segment.
In this step, text or voice can be added as a mark at any position of the target video clip, so that when the marked target video clip is played, the text or voice mark is output at the moment at which it was added. Two markup controls may be displayed, one for adding a text mark and the other for adding a voice mark. Referring to fig. 7, a first markup control 71 is used to add a text mark, which may be the target text mark 72. The second markup control 73 is used to add a voice mark, which may be the target voice 74. The target voice 74 may be a sound recorded after the second markup control 73 is triggered, or may be an audio file local to the electronic device. A preset edit control 75 may be displayed, and when the preset edit control 75 is triggered, the first markup control 71 and the second markup control 73 are displayed. When the preset edit control 75 is triggered, the playing of the target video clip is paused, and the added text mark or voice mark is bound to the current playing time.
In the embodiment of the application, the marks are added to the target video clips, so that a user can conveniently learn and memorize the target video clips subsequently.
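A minimal Kotlin sketch of marks bound to playback times, as described above; the sealed Mark class and the time-range query are assumptions for illustration rather than the device's actual implementation.

```kotlin
// A mark is bound to the playback time at which the preset edit control was triggered.
sealed class Mark(val atSec: Double) {
    class Text(atSec: Double, val text: String) : Mark(atSec)
    class Voice(atSec: Double, val audioFile: String) : Mark(atSec)
}

class MarkedClip(val name: String) {
    private val marks = mutableListOf<Mark>()

    // Playback is paused when the edit control is triggered; the mark is bound to that time.
    fun addMark(mark: Mark) { marks += mark }

    // Marks whose timestamps fall inside the interval just played, so they can be output.
    fun marksBetween(fromSec: Double, toSec: Double): List<Mark> =
        marks.filter { it.atSec in fromSec..toSec }
}

fun main() {
    val clip = MarkedClip("Clip 1")
    clip.addMark(Mark.Text(12.0, "Key definition here"))
    clip.addMark(Mark.Voice(30.5, "note-30s.m4a"))
    println(clip.marksBetween(10.0, 20.0).size)  // 1: the text mark at 12.0 s
}
```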
Optionally, after playing the target video segment in the target display area in response to the first input, the video processing method further includes:
and displaying a subtitle control.
A fifth input to the subtitle control by the user is received.
And responding to the fifth input, extracting all or part of subtitles in the target video segment, and generating subtitle pages.
In this step, all subtitles in the target video clip can be extracted, and the subtitle size is adjusted according to the size of a preset display page so that all the subtitles are displayed within the preset display page, the preset display page being the subtitle page. Of course, to avoid the subtitle font being too small for the user to read clearly, only part of the subtitles in the target video clip may be extracted and displayed on the preset display page at a preset font size. The extracted subtitles may be arranged into paragraphs; specifically, all the subtitles are combined into one continuous block of subtitle text.
And displaying the subtitle page.
In this step, the subtitle page may be displayed in the form of a floating frame. That is, the subtitle page may float over the playback picture of the target video clip. The subtitle page can be independent of the target video clip and is not affected during playback of the target video clip. Of course, the subtitle page may also be associated with the target video clip, and during playback of the target video clip, the subtitle corresponding to the currently played video frame is highlighted. For example, during playback of the target video clip, if the subtitle corresponding to the currently played video frame is "fixed points and their applications", the corresponding "fixed points and their applications" is displayed in bold on the subtitle page. To improve the flexibility of user operation, the display of subtitles can be switched between the subtitle page and the original subtitle display mode of the target video clip.
In the embodiment of the application, the subtitles corresponding to the unplayed and played parts in the target video clip are continuously displayed on the subtitle page, so that a user can learn conveniently.
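A minimal Kotlin sketch of building the subtitle page and emphasizing the line that corresponds to the currently played frame; the Subtitle type and the plain-text bolding are illustrative assumptions about how such a page could be assembled.

```kotlin
// A subtitle line with its display interval on the clip's time axis.
data class Subtitle(val startSec: Double, val endSec: Double, val text: String)

// Extract all (or a slice of) subtitles overlapping the clip range and merge them into one page.
fun buildSubtitlePage(subtitles: List<Subtitle>, clipStartSec: Double, clipEndSec: Double): List<Subtitle> =
    subtitles.filter { it.startSec < clipEndSec && it.endSec > clipStartSec }

// Render the page, emphasizing the subtitle that matches the currently played frame.
fun renderPage(page: List<Subtitle>, currentSec: Double): String =
    page.joinToString("\n") { s ->
        if (currentSec in s.startSec..s.endSec) "**${s.text}**" else s.text
    }

fun main() {
    val subs = listOf(
        Subtitle(0.0, 5.0, "The following explains the key points"),
        Subtitle(5.0, 12.0, "Fixed points and their applications"),
        Subtitle(12.0, 20.0, "First, the definition of a fixed point")
    )
    val page = buildSubtitlePage(subs, clipStartSec = 0.0, clipEndSec = 20.0)
    println(renderPage(page, currentSec = 7.0))  // middle line emphasized
}
```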
Optionally, in response to the first input, playing the target video clip in the target display area comprises:
in response to the first input, the electronic device is controlled to be in a split screen mode.
In this step, in the split screen mode, the screen of the electronic device includes at least two display areas.
And playing the target video in the first display area, and playing the target video clip in the second display area.
In this step, the first display area and the second display area are two of the at least two display areas. A preset control may be provided for adjusting which areas display the target video and the target video clip. In the split-screen mode, when the screen of the electronic device includes a left display area and a right display area, users are typically accustomed to playing the target video clip in the right display area, but some users may prefer to play it in the left display area. The preset control can be used to exchange the playing areas of the target video and the target video clip. A return control may also be provided; triggering the return control exits the split-screen mode, the target video is played in full screen, and the playing progress may be adjusted to the end position of the most recently intercepted target video clip.
In the embodiment of the application, the target video and the target video clip are respectively played by utilizing the split screen mode of the electronic equipment, so that the technical difficulty of development is reduced.
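A minimal Kotlin sketch of the split-screen assignment, the swap control, and the return control described above; the SplitScreenState model is an assumed simplification, not how the electronic device actually implements split screen.

```kotlin
// Which display area holds the full target video and which holds the intercepted clip.
enum class Area { LEFT, RIGHT }

data class SplitScreenState(
    val videoArea: Area = Area.LEFT,   // full target video
    val clipArea: Area = Area.RIGHT    // intercepted target clip
) {
    // Preset control: exchange the playing areas of the target video and the clip.
    fun swapped(): SplitScreenState = SplitScreenState(videoArea = clipArea, clipArea = videoArea)
}

// Return control: exit split screen, resume full-screen playback of the target video,
// optionally jumping to the end position of the most recently intercepted clip.
fun exitSplitScreen(lastClipEndSec: Double?): String =
    "Full screen; resume at ${lastClipEndSec ?: 0.0} s"

fun main() {
    var state = SplitScreenState()
    println(state)               // video LEFT, clip RIGHT
    state = state.swapped()
    println(state)               // video RIGHT, clip LEFT
    println(exitSplitScreen(185.5))
}
```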
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution subject may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiment of the present application, a video processing apparatus executing a video processing method is taken as an example, and the video processing apparatus provided in the embodiment of the present application is described.
As shown in fig. 8, an embodiment of the present application further provides a video processing apparatus, including:
a first receiving module 81, configured to receive a first input of a user in a case where the target video is output;
a first response module 82, configured to play the target video segment in the target display area in response to the first input, where the target video segment includes a video segment in the target video captured according to the first input.
Optionally, the video processing apparatus further includes:
the first display module is used for displaying the play mode control;
the second receiving module is used for receiving a second input of the user for controlling the playing mode;
and the second response module is used for responding to a second input and controlling the target video clip to be played in the target display area according to the target playing mode, wherein the target playing mode is the playing mode indicated by the second input.
Optionally, the play mode includes: at least one of a loop play mode and a double speed play mode.
Optionally, the video processing apparatus further comprises:
the second display module is used for displaying a selection control comprising at least one interception mode option, wherein each interception mode option corresponds to an interception mode for intercepting the video clip;
the third receiving module is used for receiving a third input of a target option in the at least one interception mode option from a user;
and the third response module is used for responding to the third input and adjusting the intercepting mode of the intercepted video clip to be the intercepting mode corresponding to the target option.
Optionally, the capture mode includes a first capture mode, and when the electronic device is in the first capture mode and a user slides a slider on a progress bar of the target video, a preview image is displayed above the slider, where the preview image includes a video frame and/or a subtitle of the target video corresponding to a position of the slider on the progress bar;
the first receiving module 81 is specifically configured to receive a first input of the two preview images by the user when the capture mode corresponding to the target option is the first capture mode; the two preview images are displayed when the user slides the slider at two different moments respectively.
A first response module 82, configured to respond to a first input, and determine a target video frame or a target subtitle according to each of two preview images, respectively, where the target video frame includes a video frame in a target video corresponding to a position of a slider on a progress bar when the preview image is displayed, and the target subtitle includes a subtitle in the target video corresponding to a position of the slider on the progress bar when the preview image is displayed; intercepting video frames between two target video frames or two target subtitles in a target video to obtain a target video segment; and playing the target video clip in the target display area.
Optionally, the capturing mode includes a second capturing mode, the first preset control is displayed when the electronic device is in the second capturing mode, and at least one second preset control is displayed according to a keyword input by a user in the first preset control, wherein each second preset control corresponds to a position where the keyword appears in a subtitle of the target video;
the first receiving module 81 is specifically configured to receive a first input to a target second preset control when the interception mode corresponding to the target option is a second interception mode, where the target second preset control is one of at least one second preset control;
the first response module 82 is specifically configured to, in response to the first input, intercept a video segment of a preset duration starting from a target position in the target video to obtain a target video segment; the target position is a position where a keyword appears in a subtitle of the target video corresponding to the target second preset control; and playing the target video clip in the target display area.
Optionally, the video processing apparatus further comprises:
the third display module is used for displaying the marking control;
the fourth receiving module is used for receiving fourth input of the marking control by the user;
a fourth response module for adding a text label or a voice label at the target position in the target video segment in response to a fourth input.
Optionally, the video processing apparatus further comprises:
the fourth display module is used for displaying the subtitle control;
the fifth receiving module is used for receiving fifth input of the subtitle control by the user;
the fifth response module is used for responding to the fifth input, extracting all or part of subtitles in the target video clip and generating subtitle pages;
and the fifth display module is used for displaying the subtitle page.
Optionally, the first response module 82 is specifically configured to, in response to the first input, control the electronic device to be in a split-screen mode, where in the split-screen mode, a screen of the electronic device includes at least two display areas; playing the target video in the first display area and playing the target video clip in the second display area; wherein the first display area and the second display area are two of the at least two display areas.
In the embodiment of the application, when the target video is output, the part of the target video that the user is interested in or needs to study intensively, namely the target video clip, is intercepted through the first input, and the target video clip is then played in the target display area, so that the user can learn directly from the target video clip, which simplifies the learning process and shortens the time it consumes.
The video processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The video processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 7, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901, a memory 902, and a program or an instruction stored in the memory 902 and executable on the processor 901, where the program or the instruction is executed by the processor 901 to implement each process of the above-mentioned video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
A user input unit 1007 for receiving a first input of a user in a case where the target video is output.
A display unit 1006, configured to play a target video segment in the target display area in response to the first input, where the target video segment includes a video segment in the target video captured according to the first input.
In the embodiment of the application, when the target video is output, the part of the target video that the user is interested in or needs to study intensively, namely the target video clip, is intercepted through the first input, and the target video clip is then played in the target display area, so that the user can learn directly from the target video clip, which simplifies the learning process and shortens the time it consumes.
Optionally, the display unit 1006 is further configured to display a play mode control.
The user input unit 1007 is further configured to receive a second input from the user to the play mode control.
The display unit 1006 is further configured to, in response to a second input, control the target video segment to be played in the target display area according to a target play mode, where the target play mode is a play mode indicated by the second input.
In the embodiment of the application, the input of the user can be received, and the target video clip is played according to the play mode indicated by the input of the user, so that various play requirements of the user are met.
Optionally, the display unit 1006 is further configured to display a selection control including at least one capture mode option, where each capture mode option corresponds to a capture mode for capturing a video segment;
the user input unit 1007 is further configured to receive a third input from the user on a target option in the at least one interception mode option.
And the processor 1010 is configured to, in response to a third input, adjust an intercepting mode of the intercepted video segment to an intercepting mode corresponding to the target option.
In the embodiment of the application, the user can input the intercepting mode selection control according to the own requirement or the video content of the target video, so that the target video clip is intercepted according to the intercepting mode indicated by the user input, the requirement of the user is met, and the flexibility of the intercepting video clip is improved.
Optionally, the display unit 1006 is further configured to display a mark control.
The user input unit 1007 is further configured to receive a fourth input from the user to the mark-up control.
The processor 1010 is further configured to add a text label or a voice label at the target position in the target video segment in response to a fourth input.
In the embodiment of the application, the marks are added to the target video clips, so that a user can conveniently learn and memorize the target video clips subsequently.
Optionally, the display unit 1006 is further configured to display a subtitle control.
The user input unit 1007 is further configured to receive a fifth input from the user to the subtitle control.
The processor 1010 is further configured to extract all or part of subtitles in the target video segment in response to a fifth input, and generate a subtitle page.
The display unit 1006 is further configured to display a subtitle page.
In the embodiment of the application, the subtitles corresponding to the unplayed and played parts in the target video clip are continuously displayed on the subtitle page, so that a user can learn conveniently.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A video processing method, characterized in that the video processing method comprises:
receiving a first input of a user in a case of outputting a target video;
in response to the first input, playing a target video segment in a target display area, wherein the target video segment comprises a video segment in the target video that was intercepted according to the first input.
2. The video processing method of claim 1, wherein prior to said receiving a first input from a user, the video processing method further comprises:
displaying a selection control comprising at least one interception mode option, wherein each interception mode option corresponds to an interception mode for intercepting a video clip;
receiving a third input of a target option in the at least one interception mode option from a user;
and responding to the third input, and adjusting the intercepting mode of the intercepted video clip to the intercepting mode corresponding to the target option.
3. The video processing method according to claim 2, wherein the capture mode includes a first capture mode, and when the electronic device is in the first capture mode and a user slides a slider on a progress bar of the target video, a preview image is displayed above the slider, wherein the preview image includes video frames and/or subtitles of the target video corresponding to a position of the slider on the progress bar;
under the condition that the interception mode corresponding to the target option is the first interception mode, the receiving a first input of a user includes:
receiving first input of a user to the two preview images; the two preview images are displayed when the user slides the slider at two different moments respectively;
the playing a target video clip in a target display area in response to the first input, comprising:
responding to the first input, and respectively determining a target video frame or a target subtitle according to each preview image in the two preview images, wherein the target video frame comprises a video frame in the target video corresponding to the position of the slider on the progress bar when the preview images are displayed, and the target subtitle comprises a subtitle in the target video corresponding to the position of the slider on the progress bar when the preview images are displayed;
intercepting video frames positioned between two target video frames or two target subtitles in the target video to obtain a target video segment;
and playing the target video clip in a target display area.
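A possible reading of the first capture mode in claim 3, sketched in Python with assumed names (slider positions normalized to [0, 1], seconds-based timestamps); neither the normalization nor the tuple return type comes from the claim.

```python
# Hypothetical sketch: two preview images, shown while the slider is dragged at
# two different moments, each map to a timestamp; the frames between the two
# resulting target frames (or target subtitles) form the target video segment.

def slider_position_to_time(position: float, duration_s: float) -> float:
    """Map a normalized slider position on the progress bar to a timestamp."""
    return max(0.0, min(position, 1.0)) * duration_s

def capture_between_previews(duration_s: float, pos_a: float, pos_b: float):
    t_a = slider_position_to_time(pos_a, duration_s)
    t_b = slider_position_to_time(pos_b, duration_s)
    start_s, end_s = sorted((t_a, t_b))
    # Frames between the two target frames become the captured segment.
    return start_s, end_s

print(capture_between_previews(duration_s=3600.0, pos_a=0.25, pos_b=0.10))
# -> (360.0, 900.0)
```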
4. The video processing method according to claim 2, wherein the capture mode comprises a second capture mode, and when the electronic device is in the second capture mode, a first preset control is displayed, and at least one second preset control is displayed according to a keyword entered by a user in the first preset control, wherein each second preset control corresponds to a position where the keyword appears in the subtitles of the target video;
in a case where the capture mode corresponding to the target option is the second capture mode, the receiving a first input from a user comprises:
receiving a first input from the user on a target second preset control, wherein the target second preset control is one of the at least one second preset control;
and the playing a target video segment in a target display area in response to the first input comprises:
in response to the first input, capturing a video segment of a preset duration starting from a target position in the target video to obtain the target video segment, wherein the target position is the position where the keyword appears in the subtitles of the target video corresponding to the target second preset control;
and playing the target video segment in the target display area.
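The second capture mode in claim 4 can be pictured roughly as a subtitle keyword search; in the sketch below the subtitle list, the 30-second default duration, and the function names are all assumptions made for illustration.

```python
# Hypothetical sketch: the keyword typed into a first preset control is looked
# up in the subtitles; each occurrence yields a second preset control, and
# selecting one captures a segment of preset duration from that position.

SUBTITLES = [  # (start time in seconds, subtitle text) - illustrative data
    (12.0, "welcome to the lecture"),
    (95.0, "the key theorem is stated here"),
    (410.0, "we now prove the theorem"),
]

def find_keyword_positions(keyword, subtitles):
    """One entry per position where the keyword appears in the subtitles."""
    return [start for start, text in subtitles if keyword.lower() in text.lower()]

def on_target_second_preset_control(target_position, preset_duration_s=30.0):
    """Capture a segment of preset duration starting at the keyword position."""
    return (target_position, target_position + preset_duration_s)

positions = find_keyword_positions("theorem", SUBTITLES)        # -> [95.0, 410.0]
print([on_target_second_preset_control(p) for p in positions])  # -> segments
```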
5. The video processing method of claim 1, wherein after said playing a target video segment in a target display area in response to said first input, said video processing method further comprises:
displaying a marking control;
receiving a fourth input from the user on the marking control;
in response to the fourth input, adding a text or voice tag at a target location in the target video segment.
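Claim 5 only recites attaching a text or voice tag at a target location; a minimal, assumed data representation might look like the following (the dict schema is hypothetical).

```python
# Hypothetical sketch: a fourth input on the marking control attaches a text or
# voice tag anchored at a chosen position within the target video segment.

def add_tag(tags, position_s, kind, payload):
    """Append a text or voice tag anchored at position_s within the segment."""
    if kind not in ("text", "voice"):
        raise ValueError("tag kind must be 'text' or 'voice'")
    tags.append({"position_s": position_s, "kind": kind, "payload": payload})
    return tags

segment_tags = []
add_tag(segment_tags, position_s=42.5, kind="text", payload="key step of the proof")
print(segment_tags)
```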
6. The video processing method of claim 1, wherein after said playing a target video segment in a target display area in response to said first input, said video processing method further comprises:
displaying a subtitle control;
receiving a fifth input from the user on the subtitle control;
in response to the fifth input, extracting all or some of the subtitles in the target video segment, and generating a subtitle page;
and displaying the subtitle page.
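Claim 6's subtitle page can be imagined as a filtered join over the segment's subtitle entries; the tuple-based subtitle format below is an assumption carried over from the earlier sketch, not part of the claim.

```python
# Hypothetical sketch: extract the subtitles that fall inside the target video
# segment and assemble them into a single subtitle page for display.

SUBTITLES = [  # (start time in seconds, subtitle text) - illustrative data
    (95.0, "the key theorem is stated here"),
    (110.0, "first, an auxiliary lemma"),
    (410.0, "we now prove the theorem"),
]

def build_subtitle_page(subtitles, segment_start_s, segment_end_s):
    """Collect subtitle lines within the segment and join them into one page."""
    lines = [text for start, text in subtitles
             if segment_start_s <= start <= segment_end_s]
    return "\n".join(lines)

print(build_subtitle_page(SUBTITLES, segment_start_s=90.0, segment_end_s=120.0))
```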
7. The video processing method of claim 1, wherein said playing a target video segment in a target display area in response to the first input comprises:
in response to the first input, controlling the electronic device to enter a split-screen mode, wherein in the split-screen mode, a screen of the electronic device comprises at least two display areas;
and playing the target video in a first display area and playing the target video segment in a second display area, wherein the first display area and the second display area are two of the at least two display areas.
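The split-screen playback of claim 7 can be sketched as two display areas driven from one input handler; the SplitScreen class and area indices below are invented for illustration.

```python
# Hypothetical sketch: on the first input, enter a split-screen mode with at
# least two display areas; the target video keeps playing in the first area
# while the captured target video segment plays in the second.

class SplitScreen:
    def __init__(self, area_count=2):
        self.areas = {index: None for index in range(area_count)}

    def play(self, area, content):
        self.areas[area] = content

def on_first_input_split(target_video, segment):
    screen = SplitScreen(area_count=2)         # enter split-screen mode
    screen.play(area=0, content=target_video)  # first display area: full video
    screen.play(area=1, content=segment)       # second display area: segment
    return screen

print(on_first_input_split("lecture.mp4", "lecture.mp4[120s-180s]").areas)
```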
8. A video processing apparatus, characterized in that the video processing apparatus comprises:
a first receiving module, configured to receive a first input from a user while a target video is being output;
a first response module, configured to play a target video segment in a target display area in response to the first input, wherein the target video segment comprises a video segment captured from the target video according to the first input.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to any one of claims 1 to 7.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video processing method according to any one of claims 1 to 7.
CN202011479953.2A 2020-12-15 2020-12-15 Video processing method, video processing device, electronic apparatus, and readable storage medium Active CN112437353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011479953.2A CN112437353B (en) 2020-12-15 2020-12-15 Video processing method, video processing device, electronic apparatus, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011479953.2A CN112437353B (en) 2020-12-15 2020-12-15 Video processing method, video processing device, electronic apparatus, and readable storage medium

Publications (2)

Publication Number Publication Date
CN112437353A true CN112437353A (en) 2021-03-02
CN112437353B CN112437353B (en) 2023-05-02

Family

ID=74691260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011479953.2A Active CN112437353B (en) 2020-12-15 2020-12-15 Video processing method, video processing device, electronic apparatus, and readable storage medium

Country Status (1)

Country Link
CN (1) CN112437353B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113568551A (en) * 2021-07-26 2021-10-29 北京达佳互联信息技术有限公司 Picture saving method and device
CN113806570A (en) * 2021-09-22 2021-12-17 维沃移动通信有限公司 Image generation method and generation device, electronic device and storage medium
CN113986083A (en) * 2021-10-29 2022-01-28 维沃移动通信有限公司 File processing method and electronic equipment
CN114125137A (en) * 2021-11-08 2022-03-01 维沃移动通信有限公司 Video display method and device, electronic equipment and readable storage medium
CN114339375A (en) * 2021-08-17 2022-04-12 腾讯科技(深圳)有限公司 Video playing method, method for generating video directory and related product
CN115103225A (en) * 2022-06-15 2022-09-23 北京爱奇艺科技有限公司 Video clip extraction method, device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635837A (en) * 2015-12-30 2016-06-01 努比亚技术有限公司 Video playing method and device
CN105933538A (en) * 2016-06-15 2016-09-07 维沃移动通信有限公司 Video finding method for mobile terminal and mobile terminal
US20180103296A1 (en) * 2016-10-11 2018-04-12 Hisense Electric Co., Ltd. Method and apparatus for video playing processing and television
CN106959816A (en) * 2017-03-31 2017-07-18 努比亚技术有限公司 Video intercepting method and mobile terminal
CN108900918A (en) * 2018-08-17 2018-11-27 深圳市茁壮网络股份有限公司 A kind of VOD method, client and electronic equipment
CN110324717A (en) * 2019-07-17 2019-10-11 咪咕文化科技有限公司 A kind of video broadcasting method, equipment and computer readable storage medium
CN111818393A (en) * 2020-08-07 2020-10-23 联想(北京)有限公司 Video progress adjusting method and device and electronic equipment
CN111988663A (en) * 2020-08-28 2020-11-24 北京百度网讯科技有限公司 Method, device and equipment for positioning video playing node and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113568551A (en) * 2021-07-26 2021-10-29 北京达佳互联信息技术有限公司 Picture saving method and device
CN114339375A (en) * 2021-08-17 2022-04-12 腾讯科技(深圳)有限公司 Video playing method, method for generating video directory and related product
CN114339375B (en) * 2021-08-17 2024-04-02 腾讯科技(深圳)有限公司 Video playing method, method for generating video catalogue and related products
CN113806570A (en) * 2021-09-22 2021-12-17 维沃移动通信有限公司 Image generation method and generation device, electronic device and storage medium
CN113986083A (en) * 2021-10-29 2022-01-28 维沃移动通信有限公司 File processing method and electronic equipment
CN114125137A (en) * 2021-11-08 2022-03-01 维沃移动通信有限公司 Video display method and device, electronic equipment and readable storage medium
CN115103225A (en) * 2022-06-15 2022-09-23 北京爱奇艺科技有限公司 Video clip extraction method, device, electronic equipment and storage medium
CN115103225B (en) * 2022-06-15 2023-12-26 北京爱奇艺科技有限公司 Video clip extraction method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112437353B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN112437353B (en) Video processing method, video processing device, electronic apparatus, and readable storage medium
CN106303723B (en) Video processing method and device
CN111970577B (en) Subtitle editing method and device and electronic equipment
US20230244363A1 (en) Screen capture method and apparatus, and electronic device
EP4206912A1 (en) Interface displaying method, device, and electronic device
CN111770386A (en) Video processing method, video processing device and electronic equipment
CN103886777B (en) Moving-image playback device and method, animation broadcast control device and method
CN108614872A (en) Course content methods of exhibiting and device
CN115065874A (en) Video playing method and device, electronic equipment and readable storage medium
CN112286617B (en) Operation guidance method and device and electronic equipment
CN112887794B (en) Video editing method and device
CN112711368B (en) Operation guidance method and device and electronic equipment
CN113806570A (en) Image generation method and generation device, electronic device and storage medium
CN115437736A (en) Method and device for recording notes
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN113283220A (en) Note recording method, device and equipment and readable storage medium
CN114518824A (en) Note recording method and device and electronic equipment
CN116137662A (en) Page display method and device, electronic equipment, storage medium and program product
CN113268961A (en) Travel note generation method and device
CN113593614A (en) Image processing method and device
US20140178045A1 (en) Video playback device, video playback method, non-transitory storage medium having stored thereon video playback program, video playback control device, video playback control method and non-transitory storage medium having stored thereon video playback control program
CN113835589A (en) Information storage method and device
CN117724648A (en) Note generation method, device, electronic equipment and readable storage medium
CN116744077A (en) Video note generation method and device
CN117224942A (en) Game interaction method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant