CN108471550B - Video intercepting method and terminal - Google Patents


Info

Publication number
CN108471550B
CN108471550B (application CN201810219543.0A)
Authority
CN
China
Prior art keywords
target
input
sub
video
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810219543.0A
Other languages
Chinese (zh)
Other versions
CN108471550A (en)
Inventor
杨其豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201810219543.0A priority Critical patent/CN108471550B/en
Publication of CN108471550A publication Critical patent/CN108471550A/en
Application granted granted Critical
Publication of CN108471550B publication Critical patent/CN108471550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/433: Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/8456: Structuring of content by decomposing it in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a video intercepting method and a terminal. The method comprises the following steps: receiving a first input of a user in a state where a playing interface of a video is displayed on the current interface; displaying a target control in response to the first input; receiving a second input of the user on the target control; and displaying, in response to the second input, N sub-videos intercepted from the video, where N is a positive integer. Thus, in the embodiment of the invention, after the user calls up the target control by performing the first input, the user can, by performing the second input on the target control, trigger the terminal to intercept N sub-videos from the video and display them, which simplifies the process of video interception and makes it simple and easy to operate.

Description

Video intercepting method and terminal
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a video intercepting method and a terminal.
Background
With the popularization of wireless networks and the explosive spread of online video, people increasingly rely on watching videos to obtain information. Meanwhile, when a user wants to store or share some information in a video, this can be achieved through video editing, for example by acquiring a screenshot, an animated image, or a short video segment from the video.
However, existing video editing requires the assistance of other applications: when a user needs a short segment of a video, the user has to open another application and then intercept the required segment from the video. The existing video intercepting method therefore has a cumbersome process and complex operation.
Disclosure of Invention
The embodiment of the invention provides a video intercepting method and a terminal, aiming to solve the problem that the existing video intercepting method has a cumbersome process and complex operation.
In order to solve the above problems, the present invention is realized by:
in a first aspect, an embodiment of the present invention provides a video capture method, which is applied to a terminal, and the method includes:
receiving a first input of a user in a state that a playing interface of a video is displayed on a current interface;
displaying a target control in response to the first input;
receiving a second input of the user on the target control;
displaying N sub-videos intercepted from the video in response to the second input;
wherein N is a positive integer.
In a second aspect, an embodiment of the present invention further provides a terminal, where the terminal includes:
a first receiving module, configured to receive a first input of a user in a state where a video playing interface is displayed on the current interface;
a first response module, configured to display a target control in response to the first input;
a second receiving module, configured to receive a second input of the user on the target control;
a second response module, configured to display N sub-videos intercepted from the video in response to the second input;
wherein N is a positive integer.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a processor, a memory, and a computer program stored on the memory and operable on the processor, and when the computer program is executed by the processor, the steps of the video capture method described above are implemented.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the video capturing method as described above.
In the embodiment of the invention, a first input of a user is received in a state where the current interface displays a playing interface of a video; a target control is displayed in response to the first input; a second input of the user on the target control is received; and N sub-videos intercepted from the video are displayed in response to the second input, where N is a positive integer. Thus, after the user calls up the target control by performing the first input, the user can, by performing the second input on the target control, trigger the terminal to intercept N sub-videos from the video and display them, which simplifies the process of video interception and makes it simple and easy to operate.
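The four steps above can be sketched in code. The following is a minimal, hypothetical illustration only: the class and method names, and the representation of cut positions as fractions of the video length, are assumptions, not the patented implementation; it follows the first manner of the detailed description, in which the video is divided at the cut positions.

```python
# Hypothetical sketch of the claimed flow: the first input summons the
# target control; the second input cuts it, and the video is divided at
# the same relative positions.  All names and units are assumptions.
class VideoCaptureController:
    def __init__(self, video_duration_s):
        self.video_duration_s = video_duration_s
        self.control_visible = False

    def on_first_input(self):
        # Steps 101-102: display the target control.
        self.control_visible = True

    def on_second_input(self, cut_fractions):
        # Steps 103-104: cut positions on the control, expressed as
        # fractions of its length, divide the video into N sub-videos.
        if not self.control_visible:
            return []
        bounds = [0.0] + sorted(cut_fractions) + [1.0]
        return [(a * self.video_duration_s, b * self.video_duration_s)
                for a, b in zip(bounds, bounds[1:])]
```

One cut at the midpoint of the control would yield two sub-videos covering the first and second halves of the video.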
Drawings
Fig. 1 is a flowchart of a video capture method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an interface provided by an embodiment of the present invention;
FIG. 3a is a second schematic view of an interface provided by an embodiment of the present invention;
FIG. 3b is a third schematic diagram of an interface provided by the embodiment of the present invention;
FIG. 4a is a fourth schematic view of an interface provided by an embodiment of the present invention;
FIG. 4b is a fifth schematic view of an interface provided by an embodiment of the present invention;
FIG. 5a is a sixth schematic view of an interface provided by an embodiment of the present invention;
FIG. 5b is a seventh schematic view of an interface provided by an embodiment of the present invention;
FIG. 6a is an eighth schematic view of an interface provided by an embodiment of the present invention;
FIG. 6b is a ninth schematic view of an interface provided by an embodiment of the present invention;
FIG. 7a is a tenth schematic view of an interface provided by an embodiment of the present invention;
FIG. 7b is an eleventh schematic view of an interface provided by an embodiment of the present invention;
FIG. 8a is a twelfth schematic interface diagram provided by an embodiment of the present invention;
FIG. 8b is a thirteenth schematic interface diagram provided by an embodiment of the present invention;
FIG. 9 is a fourteenth schematic interface diagram provided by an embodiment of the present invention;
FIG. 10 is a fifteenth schematic interface diagram provided by an embodiment of the present invention;
fig. 11 is one of the structural diagrams of a terminal provided in the embodiment of the present invention;
fig. 12 is a second structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The video intercepting method is mainly applied to a terminal and used for intercepting N sub-videos from a video, where N is a positive integer. The terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
The following describes a video capture method according to an embodiment of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a video capture method according to an embodiment of the present invention, and as shown in fig. 1, the video capture method according to the embodiment includes the following steps:
step 101, receiving a first input of a user in a state that a video playing interface is displayed on a current interface.
In this step, the first input is used to trigger the terminal to display the target control. Wherein the first input may be expressed in at least one of the following ways.
First, the first input may be represented as a touch input, such as a click input, a slide input, or the like.
In this embodiment, the receiving the first input of the user may be represented by: a first input of a user in a display area of a display screen of a terminal is received.
Further, in order to reduce the rate of erroneous operations, the active area of the first input can be limited to a specific region. In this application scenario, receiving the first input of the user may specifically be: receiving a first input of the user in a first display area and/or a second display area of the terminal display screen. Compared with a scenario in which the first input may be performed anywhere in the display area of the display screen, the user can trigger the terminal to display the target control only by performing the first input in the first display area and/or the second display area, so the rate of erroneous operations can be reduced.
In particular, the first input may include, but is not limited to, at least one of:
clicking operation of the user on the first display area and/or the second display area;
double-click operation of the user on the first display area and/or the second display area;
long pressing operation of the user aiming at the first display area and/or the second display area;
and a connecting operation of the user across the first display area and the second display area, that is, a first input whose track connects the first display area and the second display area.
It should be understood that the first display area and the second display area may be preset according to actual situations. Optionally, when the terminal display screen is a shaped screen, the first display area and the second display area may be two areas separated from each other at the top of the screen. For ease of understanding, referring to fig. 2, the shaped screen of the terminal (here a mobile phone) is a screen with a notch: the top area 201 is a non-display area recessed into the top of the screen, used for components such as a camera, and the first display area 202 and the second display area 203 are the two areas separated by the top area 201, such as the areas defined by the dashed lines and the side edges of the screen in fig. 2, commonly referred to as "ear" areas.
For the application scenario of fig. 2, the first input may be represented as an operation of the user sliding down from the first display area 202 and the second display area 203 to below the top area 201 by two fingers. In this way, when the terminal detects that the user slides down below the top area 201 from the first display area 202 and the second display area 203 by two fingers, the target control 204 may be displayed.
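As a rough illustration of the gesture check in fig. 2, the sketch below tests whether two touch tracks start in the two ear areas and end below the top area. The coordinates, region sizes, and function names are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch: decide whether a two-finger gesture matches the
# first input of fig. 2 (both fingers start in an ear area beside the
# notch and slide down past it).  Region coordinates are assumptions.
FIRST_AREA = (0, 0, 200, 80)      # (x0, y0, x1, y1), left ear area
SECOND_AREA = (880, 0, 1080, 80)  # right ear area
TOP_AREA_BOTTOM = 80              # y coordinate of the notch's lower edge

def in_rect(point, rect):
    x, y = point
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def is_first_input(track_a, track_b):
    """Each track is a list of (x, y) points, with y growing downwards."""
    starts_ok = (in_rect(track_a[0], FIRST_AREA) and
                 in_rect(track_b[0], SECOND_AREA))
    ends_ok = (track_a[-1][1] > TOP_AREA_BOTTOM and
               track_b[-1][1] > TOP_AREA_BOTTOM)
    return starts_ok and ends_ok
```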
In addition, it should be understood that the first input may be input by a finger of a user, or may be input by other means, such as a stylus, and may be determined according to actual needs, which is not limited in the embodiment of the present invention.
Second, the first input may be represented as a voice input.
In this embodiment, the terminal may display the target control upon receiving a voice such as "display target control".
Of course, in some embodiments, the first input may also be represented in other forms, such as character input, and the like, which may be determined according to actual needs, and is not limited in this embodiment of the present invention.
And 102, responding to the first input, and displaying a target control.
In this step, after receiving the first input, the terminal may display a preset target control in response to the first input; alternatively, it may display a target control generated based on the input trajectory of the first input.
Optionally, in the embodiment of the present invention, the target control may be a linear control, such as a touch line (Touchline); it is understood that the touch line may also be referred to as a manipulation line, and the like. Specifically, the target control 204 may be represented as an arc-shaped touch line (as shown in fig. 2), a straight touch line, and so on. Further, the line width of the target control may range from 1 to 5 cm, for example, but is not limited thereto.
It should be understood that the representation form and the display position of the target control in fig. 2 are only examples, in other embodiments, the target control 204 may also be represented in other shapes, such as a rectangle, a circle, etc., and the display position of the target control 204 may also be set according to actual needs, which is not limited herein.
And 103, receiving a second input of the user on the target control.
In this step, the second input is used to trigger the terminal to intercept N sub-videos from the video corresponding to the current video playing interface and display the N sub-videos, where N is a positive integer.
Therefore, it can be understood that, in this embodiment, the mobile terminal may pre-establish an association relationship between the target control and the video corresponding to the play interface.
In this way, the user can trigger the terminal to intercept N sub-videos from the video corresponding to the playing interface by executing the second input on the target control, and further, the N sub-videos are displayed.
Specifically, the second input may be an operation of truncating the target control into N sub-controls by the user, or an operation of truncating the N sub-controls from the target control by the user, but is not limited thereto.
And 104, responding to the second input, and displaying the N sub videos intercepted from the video.
In this step, after receiving a second input of the user on the target control, the terminal may intercept N sub-videos from the video in response to the second input, and further display the N sub-videos intercepted from the video on the display screen.
In the video capture method of the embodiment, a first input of a user is received in a state that a playing interface of a video is displayed on a current interface; displaying a target control in response to the first input; receiving a second input of the user on the target control; displaying N sub-videos intercepted from the video in response to the second input; wherein N is a positive integer. Therefore, in the embodiment of the invention, after the user calls the target control by executing the first input, the terminal can be triggered to intercept N sub-videos from the videos by executing the second input to the target control, and the intercepted N sub-videos are displayed, so that the process of video interception is simplified, and the video interception is simple and easy to operate.
In addition, the terminal displays the N sub-videos intercepted from the videos on the display screen, and therefore the user can conveniently edit the N sub-videos.
In the embodiment of the present invention, the N sub-videos captured from the video may be obtained by segmenting the video, or may be obtained by capturing a part of video segments in the video. Specifically, the obtaining manner of the N sub videos may be determined based on the representation manner of the second input.
In a first manner, in a case that the second input is an operation of truncating the target control into N sub-controls by the user, step 104 may include:
truncating the target control into N sub-controls and truncating the video into N sub-videos in response to the second input;
and displaying the N sub videos and N adjusting controls corresponding to the N sub videos.
In this embodiment, a user may trigger the terminal to truncate the target control into N sub-controls, that is, to segment the target control into N sub-controls, through a plurality of operation modes.
In particular, the second input may include, but is not limited to, at least one of:
N-1 sliding operations by the user within a preset duration, the track of each sliding operation crossing the target control exactly once;
one sliding operation by the user whose track crosses the target control N-1 times.
In this way, the user can trigger the terminal to truncate the target control into N sub-controls either by performing, simultaneously or successively within the preset duration, N-1 sliding operations whose tracks each cross the target control once, or by performing one sliding operation whose track crosses the target control N-1 times. The preset duration can be determined according to actual needs, for example 5 seconds or 10 seconds; it should be understood that the preset duration starts at the moment the terminal first detects a sliding operation of the user on the target control.
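Both gestures reduce to counting how many times sliding tracks cross the target control within the preset duration: N-1 crossings yield N sub-controls. A minimal sketch, assuming the control is a horizontal line at a fixed screen height (all names and values are hypothetical):

```python
# Hypothetical sketch: count how many times the slide tracks cross a
# horizontal target control at height CONTROL_Y; N-1 crossings within
# the preset duration cut the control into N sub-controls.
CONTROL_Y = 500
PRESET_DURATION_S = 5.0  # tracks are assumed collected within this window

def crossing_count(tracks):
    """tracks: list of polylines, each a list of (x, y) points."""
    crossings = 0
    for track in tracks:
        for (x0, y0), (x1, y1) in zip(track, track[1:]):
            # A segment crosses the control line when its endpoints
            # lie on opposite sides of CONTROL_Y.
            if (y0 - CONTROL_Y) * (y1 - CONTROL_Y) < 0:
                crossings += 1
    return crossings

def sub_control_count(tracks):
    return crossing_count(tracks) + 1  # N-1 crossings -> N pieces
```

Two single-crossing tracks, as in fig. 3a, give 2 crossings and therefore 3 sub-controls.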
It should be noted that the present invention does not limit the manner in which the user truncates the target control into N sub-controls, and any manner that can truncate the target control into N sub-controls may be applied to the present invention.
After detecting that the user cuts the target control into N sub-controls, the terminal may cut the target control into N sub-controls and cut the video into N sub-videos.
Further, the terminal may display the N sub videos and N adjustment controls corresponding to the N sub videos on a display screen. Each of the N adjustment controls uniquely corresponds to one of the N sub-videos, that is, the adjustment controls correspond to the sub-videos one by one.
For ease of understanding, please refer to fig. 3a. In fig. 3a, the user uses two fingers to perform, within the preset duration, two sliding operations whose tracks each cross the target control 204 exactly once, thereby triggering the terminal to truncate the target control 204 into 3 sub-controls and the video into 3 sub-videos.
Further, as shown in fig. 3b, 3 sub-videos, namely a first sub-video 2051, a second sub-video 2052 and a third sub-video 2053, and 3 adjustment controls of a first adjustment control 2061 corresponding to the first sub-video 2051, a second adjustment control 2062 corresponding to the second sub-video 2052 and a third adjustment control 2063 corresponding to the third sub-video 2053 are displayed on the display screen.
In this way, after the user triggers the terminal to display the target control, the terminal can be triggered to cut the target control into N sub-controls and cut the video into N sub-videos by executing the operation of cutting the target control into N sub-controls; and displaying the N sub-videos and N adjusting controls corresponding to the N sub-videos, thereby simplifying the process of video interception and enabling the video interception to be simple and easy to operate.
In a second manner, where the second input is an operation by which the user intercepts N sub-controls from the target control, step 104 may include:
in response to the second input, intercepting N sub-controls from the target control and N sub-videos from the video;
and displaying the N sub videos and N adjusting controls corresponding to the N sub videos.
In this embodiment, the user may trigger the terminal, through a plurality of operation modes, to intercept N sub-controls from the target control.
In particular, the second input may include, but is not limited to, N sliding operations by the user, the track of each sliding operation crossing the target control twice.
In this way, the user can perform, within a preset duration, N sliding operations whose tracks each cross the target control twice, triggering the terminal to intercept N sub-controls from the target control. In this application scenario, the track of each sliding operation may be a "V", a "U", or the like. The preset duration can be determined according to actual needs, for example 5 seconds or 10 seconds; it should be understood that the preset duration starts at the moment the terminal first detects a sliding operation of the user on the target control.
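The two points where such a "V"-shaped track crosses the control delimit the intercepted sub-control. A minimal sketch, again assuming a horizontal control line (names and coordinates are hypothetical):

```python
# Hypothetical sketch: a track whose polyline crosses the horizontal
# control line twice (e.g. an inverted "V") selects the control span
# between the two crossing x positions.
CONTROL_Y = 500

def crossing_xs(track):
    """Return the x coordinates where the polyline crosses CONTROL_Y."""
    xs = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        if (y0 - CONTROL_Y) * (y1 - CONTROL_Y) < 0:
            # Linear interpolation gives the crossing point.
            t = (CONTROL_Y - y0) / (y1 - y0)
            xs.append(x0 + t * (x1 - x0))
    return xs

def intercepted_span(track):
    xs = crossing_xs(track)
    if len(xs) != 2:
        return None  # not a valid interception gesture
    return min(xs), max(xs)
```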
It should be noted that the present invention does not limit the manner in which the user intercepts the N sub-controls from the target control; any operation that can intercept N sub-controls from the target control can be applied to the present invention.
After detecting that the user intercepts the N sub-controls from the target control, the terminal may intercept the N sub-controls from the target control and intercept the N sub-videos from the video.
Further, the terminal may display the N sub videos and N adjustment controls corresponding to the N sub videos on a display screen. Each of the N adjustment controls uniquely corresponds to one of the N sub-videos, that is, the adjustment controls correspond to the sub-videos one by one.
For ease of understanding, please refer to fig. 4a. In fig. 4a, the user performs a sliding operation whose track is an inverted "V" on the target control 204, triggering the terminal to intercept 1 sub-control from the target control 204 and 1 sub-video from the video.
Further, as shown in fig. 4b, 1 sub-video, a fourth sub-video 2071 in fig. 4b, and a fourth adjustment control 2081 corresponding to the fourth sub-video 2071 are displayed on the display screen.
In this way, after the user triggers the terminal to display the target control, the user can, by performing the operation of intercepting N sub-controls from the target control, trigger the terminal to intercept N sub-controls from the target control and N sub-videos from the video, and to display the N sub-videos and the N adjustment controls corresponding to them, thereby simplifying the process of video interception and making it simple and easy to operate.
In addition, in an application scenario where the user only needs part of the video, compared with manner one, in which the terminal divides the entire video, the present embodiment allows the user to quickly cut the required segments from the video, further reducing the user's editing operations on the sub-videos, such as deletions, and the user's operation time.
Further, the terminal may arrange and display the N sub-videos according to a playing order of the N sub-videos in the intercepted video.
For ease of understanding, please refer to fig. 3b. In fig. 3b, assume that the playing order of the 3 sub-videos in the video is the first sub-video 2051, the second sub-video 2052, and the third sub-video 2053; the terminal may then display the first sub-video 2051, the second sub-video 2052, and the third sub-video 2053 in order from top to bottom and/or from left to right.
Therefore, the user can conveniently and definitely determine the specific content contained in each sub-video, and the user can conveniently edit each sub-video.
In the embodiment of the invention, in the association relationship between the target control and the video, the characteristic parameters of the target control, such as its length, size, or area, are associated with the length of the video, so that the terminal can, in response to the second input, determine the position of each intercepted sub-video in the video according to the position of the truncated or intercepted part of the target control.
For ease of understanding, fig. 4a is taken as an example. Assume that in fig. 4a the length of the target control 204 is associated with the length of the video: the target control 204 is 10 cm long and the total playing time of the video is 100 minutes. When the user performs a sliding operation with an inverted-"V" track on the target control 204 and the track intersects the target control 204 at the 3rd and 4th centimetres, the terminal is triggered to intercept the part of the control between the 3rd and 4th centimetres as a sub-control, and to intercept the video segment between the 30th and 40th minutes as the fourth sub-video 2071.
Therefore, the user can execute the second input on the target control according to the requirement of the user for intercepting the video clip, so that the terminal is triggered to intercept N sub-videos from the videos, and the accuracy of video interception can be improved.
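The proportional mapping in the example above can be written out directly. A minimal sketch with the example's numbers (a 10 cm control associated with a 100-minute video); the function names are hypothetical:

```python
# Hypothetical sketch of the fig. 4a arithmetic: the control's length is
# mapped proportionally onto the video's timeline, so crossing positions
# on the control become start/end times of the intercepted sub-video.
CONTROL_LENGTH_CM = 10.0
VIDEO_LENGTH_MIN = 100.0

def control_pos_to_minutes(pos_cm):
    return pos_cm / CONTROL_LENGTH_CM * VIDEO_LENGTH_MIN

def intercept_segment(start_cm, end_cm):
    return (control_pos_to_minutes(start_cm),
            control_pos_to_minutes(end_cm))
```

With crossings at the 3rd and 4th centimetres, `intercept_segment(3, 4)` yields the segment from the 30th to the 40th minute, matching the example.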
Of course, in some embodiments, as in manner one, when the terminal detects that the user truncates the target control into N sub-controls, it may divide the video into N equal sub-videos, i.e., sub-videos of equal length, or divide the video into N sub-videos according to a preset ratio, for example into 3 sub-videos in a ratio of 1:2:3.
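The equal and ratio-based divisions can be sketched as follows (hypothetical helper names; durations in arbitrary time units):

```python
# Hypothetical sketch of the alternative divisions: equal division into
# N sub-videos, or division by a preset ratio such as 1:2:3.
def split_equal(duration, n):
    step = duration / n
    return [(i * step, (i + 1) * step) for i in range(n)]

def split_by_ratio(duration, ratio):
    total = sum(ratio)
    bounds, acc = [0.0], 0.0
    for r in ratio:
        acc += duration * r / total
        bounds.append(acc)
    return list(zip(bounds, bounds[1:]))
```

A 60-unit video split in the ratio 1:2:3 gives segments of 10, 20, and 30 units.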
In this embodiment of the present invention, further, the lengths of the N sub videos are associated with the characteristic parameters of the N adjustment controls.
The sub-videos correspond to the adjustment controls one to one, and therefore, it can be understood that the length of each sub-video is associated with the characteristic parameter of the adjustment control corresponding to the sub-video. In particular, the characteristic parameter includes, but is not limited to, at least one of a length, a size, and an area of the adjustment control.
Therefore, a user can edit a sub-video by operating its adjustment control, for example adjusting the length of the sub-video or the arrangement order of the sub-videos, or deleting a sub-video, which further simplifies the user's editing operations while also simplifying the intercepting operations.
Optionally, after step 104, the video capture method may further include:
receiving a third input of a user for a target adjustment control of the N adjustment controls;
and responding to the third input, and adjusting the target sub video corresponding to the target adjusting control.
In this embodiment, the third input is used to trigger the terminal to adjust the target sub-video corresponding to the target adjustment control. Therefore, after receiving the third input, the terminal may adjust the target sub-video corresponding to the target adjustment control according to the specific type of the third input, such as adjusting the length and the arrangement order of the target sub-video, or deleting the target sub-video.
Scene one: the third input is a first sliding operation on the first end or the second end of the target adjustment control.
In this application scenario, in response to the third input, adjusting the target sub-video corresponding to the target adjustment control may specifically be represented as: and responding to the third input, and adjusting the length of the target sub video corresponding to the target adjusting control.
It should be understood that, in the embodiment of the present invention, the length of the video may be understood as the playing time of the video, and the frame rate of the video is kept unchanged during the process of intercepting the video.
Therefore, adjusting the length of the target sub-video corresponding to the target adjustment control can be understood as: adjusting the video frames of the target sub-video corresponding to the target adjustment control, thereby changing the playing length of the target sub-video.
In practical application, the terminal may adjust the length of the target sub-video corresponding to the target adjustment control according to a preset ratio in response to the third input.
For ease of understanding, assume the preset ratio is 0.1, the playing time of the target sub-video before the terminal receives the third input is 10 minutes, and the target sub-video includes the video frames of the 10th to 20th minutes of the intercepted video. Upon receiving the third input, the terminal may extend or shorten the playing time of the target sub-video by 1 minute; for example, extending the target sub-video so that it contains the video frames of the 9th to 20th minutes of the intercepted video.
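The preset-ratio adjustment in this example can be sketched as follows (illustrative Python; the convention of moving the start time follows the example above and is an assumption):

```python
def adjust_by_preset_ratio(start_min, end_min, preset_ratio, extend=True):
    """Extend or shorten a sub-video by preset_ratio of its current length,
    moving the start time (the convention used in the example above)."""
    delta = (end_min - start_min) * preset_ratio
    new_start = start_min - delta if extend else start_min + delta
    return new_start, end_min

# A 10-minute sub-video (minutes 10-20) with a preset ratio of 0.1 is
# extended by 1 minute, giving minutes 9-20.
print(adjust_by_preset_ratio(10, 20, 0.1, extend=True))  # -> (9.0, 20)
```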
Of course, the terminal may also adjust the length of the target sub-video corresponding to the target adjustment control according to a target proportion in response to the third input. The target proportion is determined according to the characteristic parameter of the target adjustment control before the third input is received and the characteristic parameter of the target adjustment control after the third input is received.
Specifically, the adjusting the length of the target sub-video includes:
acquiring a first characteristic parameter and a second characteristic parameter of the target adjusting control, wherein the first characteristic parameter is the characteristic parameter of the target adjusting control before the third input is received, and the second characteristic parameter is the characteristic parameter of the target adjusting control after the third input is received;
calculating the ratio of the first characteristic parameter to the second characteristic parameter to obtain a target ratio;
according to the target proportion, adjusting the length of a target sub video corresponding to the target adjusting control;
wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
For convenience of understanding, assume the characteristic parameter is the length of the adjustment control. It should be understood, however, that the embodiment of the present invention does not limit the specific form of the characteristic parameter.
Assuming that the user performs a third input on the target adjustment control, the length of the target adjustment control before the third input is received is 1 cm, and the length after the third input is received is 0.8 cm, the terminal may calculate the ratio of 1 cm to 0.8 cm to obtain a target ratio of 5:4, and adjust the length of the target sub-video corresponding to the target adjustment control according to the target ratio of 5:4.
Assuming that, before adjustment, the target sub-video has a length of 10 minutes and includes the video frames of the 10th to 20th minutes of the intercepted video, the adjusted target sub-video has a length of 8 minutes and may include the video frames of the 12th to 20th minutes, or of the 10th to 18th minutes, of the intercepted video.
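The proportional adjustment in this example amounts to scaling the sub-video's length by the inverse of the target ratio (before : after). A minimal sketch (illustrative Python; the function name is an assumption):

```python
def adjusted_length(old_minutes, before_cm, after_cm):
    """Scale a sub-video's length by the inverse of the target ratio
    (before_cm : after_cm), i.e. multiply by after_cm / before_cm."""
    return old_minutes * after_cm / before_cm

# The example above: 1 cm -> 0.8 cm gives a target ratio of 5:4,
# so a 10-minute sub-video becomes 8 minutes.
print(adjusted_length(10, 1.0, 0.8))  # -> 8.0
```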
In this way, compared with the method of adjusting the length of the target sub-video corresponding to the target adjustment control according to the preset proportion, the method and the device for adjusting the length of the target sub-video can improve the flexibility of adjusting the length of the target sub-video, and the adjustment result better meets the expected value of the user.
When the terminal adjusts the length of the target sub-video according to the target proportion, which boundary of the sub-video is adjusted may be determined according to the operation object and the operation direction of the third input.
Optionally, the adjusting the length of the target sub-video according to the target proportion includes:
when the operation object of the third input is the first end and the operation direction of the third input is the first direction, adjusting the start time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the start time is advanced;
when the operation object of the third input is the second end and the operation direction of the third input is the second direction, adjusting the end time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the end time is delayed;
when the operation object of the third input is the first end and the operation direction of the third input is the second direction, adjusting the start time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the start time is delayed;
when the operation object of the third input is the second end and the operation direction of the third input is the first direction, adjusting the end time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the end time is advanced;
wherein the first direction is different from the second direction.
In this embodiment, adjusting the start time of the target sub-video corresponding to the target adjustment control so that the start time is advanced/delayed can be understood as: adjusting the starting video frame of the target sub-video corresponding to the target adjustment control, so that the playing start time of the target sub-video is advanced/delayed.
Correspondingly, adjusting the end time of the target sub-video corresponding to the target adjustment control so that the end time is advanced/delayed can be understood as: adjusting the ending video frame of the target sub-video corresponding to the target adjustment control, so that the playing end time of the target sub-video is advanced/delayed.
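The four cases above reduce to one rule: the sub-video's length scales by the inverse of the target ratio, the operation object selects which boundary moves, and the operation direction determines whether the control (and thus the sub-video) grows or shrinks. A hedged Python sketch (function and parameter names are illustrative, not from the patent):

```python
def adjust_boundary(start_min, end_min, operation_end, before_cm, after_cm):
    """Move the start time (first end) or the end time (second end) of a
    sub-video so that its length scales by after_cm / before_cm, the inverse
    of the target ratio (before_cm : after_cm)."""
    new_length = (end_min - start_min) * after_cm / before_cm
    if operation_end == "first":
        # First end operated: the end time stays fixed, the start time moves.
        return end_min - new_length, end_min
    # Second end operated: the start time stays fixed, the end time moves.
    return start_min, start_min + new_length

# Fig. 5a example: first end, control shrinks from 1 cm to 0.8 cm (ratio 5:4),
# so minutes 10-20 become minutes 12-20 (start time delayed).
print(adjust_boundary(10, 20, "first", 1.0, 0.8))  # -> (12.0, 20)

# Fig. 6a example: second end, control grows from 0.4 cm to 0.6 cm (ratio 2:3),
# so minutes 10-20 become minutes 10-25 (end time delayed).
print(adjust_boundary(10, 20, "second", 0.4, 0.6))
```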
For ease of understanding, please refer to fig. 5a, 5b, 6a and 6b together. In fig. 5a, 5b, 6a and 6b, the target control 204 is represented as a touch line, the first end is the left end point of the touch line, the second end is the right end point of the touch line, the first direction is horizontally leftward, and the second direction is horizontally rightward.
In fig. 5a, the user performs a third input with respect to the second adjustment control 2062, the operation object of the third input is the first end of the target control 204, and the operation direction is the second direction.
Further, assuming that the length of the second adjustment control 2062 before receiving the third input is 1 cm and the length of the second adjustment control 2062 after receiving the third input is 0.8 cm, the terminal may calculate a ratio of 1 cm to 0.8 cm to obtain a target ratio of 5:4, and adjust the start time of the second sub-video 2052 corresponding to the second adjustment control 2062 according to the target ratio of 5:4, so that the start time is delayed.
Assume that, before the third input is received, the length of the second sub-video 2052 is 10 minutes and it includes the video frames of the 10th to 20th minutes of the intercepted video. After adjustment, the length of the second sub-video 2052 is 8 minutes: the starting video frame of the second sub-video 2052 is adjusted and its first two minutes are cut out, so that the playing start time of the second sub-video 2052 is delayed by 2 minutes, and the adjusted second sub-video 2052 includes the video frames of the 12th to 20th minutes of the intercepted video. Fig. 5b shows the display interface of fig. 5a after the terminal responds to the third input.
In fig. 6a, the user performs a third input with respect to the fourth adjustment control 2081, an operation object of the third input is the second end of the target control 204, and an operation direction is a second direction.
Further, assuming that the length of the fourth adjustment control 2081 before receiving the third input is 0.4 cm, and the length of the fourth adjustment control 2081 after receiving the third input is 0.6 cm, the terminal may calculate the ratio of 0.4 cm to 0.6 cm to obtain a target ratio of 2:3, and adjust the end time of the target sub-video corresponding to the fourth adjustment control 2081 according to the target ratio of 2:3, so that the end time is delayed.
Assume that, before the third input is received, the length of the second sub-video 2052 is 10 minutes and it includes the video frames of the 10th to 20th minutes of the intercepted video. After adjustment, the length of the second sub-video 2052 is 15 minutes: the ending video frame of the second sub-video 2052 is adjusted so that the playing end time of the second sub-video 2052 is delayed by 5 minutes, and the adjusted second sub-video 2052 includes the video frames of the 10th to 25th minutes of the intercepted video. Fig. 6b shows the display interface of fig. 6a after the terminal responds to the third input.
It should be understood that the length of the adjusted target sub-video cannot exceed the length of the intercepted video.
Further, as shown in fig. 5a, 5b, 6a and 6b, the playing start time and the playing end time of the target sub-video may be displayed on the target sub-video, thereby effectively prompting the user of the current state of the target sub-video.
In this way, whether the start time or the end time of the target sub-video corresponding to the target adjustment control is adjusted can be determined according to the operation object and the operation direction of the third input on the target adjustment control, so that the accuracy of adjusting the target sub-video can be improved.
Scene two: the third input is an operation of exchanging the display positions of a first target adjustment control and a second target adjustment control, where N is greater than 1.
In this scenario, in response to the third input, adjusting the target sub-video corresponding to the target adjustment control may specifically be represented as:
responding to the third input, and adjusting the arrangement sequence of the first target sub-video corresponding to the first target adjustment control and the second target sub-video corresponding to the second target adjustment control in the first target video;
the first target video is generated according to the first target sub video and the second target sub video, and the arrangement sequence comprises a playing arrangement sequence and/or a storage position arrangement sequence.
For convenience of understanding, please refer to fig. 7a: the user may drag the first adjustment control 2061 and/or the second adjustment control 2062 to trigger the terminal to adjust the playing order and/or the storage position order of the first sub-video 2051 and the second sub-video 2052 in the first target video, where the first target video may be a video formed by splicing sub-videos including the first sub-video 2051 and the second sub-video 2052.
For example: in the first target video, before the third input is received, the first sub-video 2051 is played before the second sub-video 2052; after the third input is received, the first sub-video 2051 is played after the second sub-video 2052.
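The reordering itself is a simple exchange in the playlist that backs the first target video; a minimal sketch (illustrative Python; the names are assumptions):

```python
def swap_order(playlist, i, j):
    """Exchange the arrangement order of two sub-videos in the target video."""
    reordered = list(playlist)  # leave the original list untouched
    reordered[i], reordered[j] = reordered[j], reordered[i]
    return reordered

# Before the third input, sub-video 1 plays before sub-video 2; afterwards
# their order is exchanged.
print(swap_order(["sub_video_1", "sub_video_2", "sub_video_3"], 0, 1))
# -> ['sub_video_2', 'sub_video_1', 'sub_video_3']
```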
Further, as shown in fig. 7b, the terminal may be triggered to exchange the display positions of the first adjustment control 2061 and the second adjustment control 2062, and to exchange the display positions of the first sub-video 2051 and the second sub-video 2052, thereby achieving a good prompting effect.
In this way, by dragging the first target adjustment control and the second target adjustment control, the user can trigger the terminal to adjust the arrangement order, in the first target video, of the first target sub-video corresponding to the first target adjustment control and the second target sub-video corresponding to the second target adjustment control, thereby simplifying the user operation.
Scene three: the third input is an operation of deleting the target adjustment control.
in this scenario, in response to the third input, adjusting the target sub-video corresponding to the target adjustment control may specifically be represented as: and in response to the third input, deleting the target sub-video corresponding to the target adjustment control.
In this application scenario, the third input may be presented in a variety of ways.
Optionally, the third input may be a second sliding operation directed at both the first end and the second end of the target adjustment control. As shown in fig. 8a, the user may press the first end and the second end of the third adjustment control 2063 and slide both toward the middle of the third adjustment control 2063, triggering the terminal to delete the third sub-video 2053 corresponding to the third adjustment control 2063; fig. 8b shows the display interface after the terminal responds to the third input.
Of course, in other embodiments, the user may also trigger the terminal to delete the target sub-video corresponding to the target adjustment control through operations such as double-clicking or long-pressing the target adjustment control. The specific manner may be determined according to actual needs, and is not limited in the embodiment of the present invention.
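Deleting the target sub-video simply removes it, together with its adjustment control, from the list of intercepted sub-videos; a minimal sketch (illustrative Python):

```python
def delete_sub_video(sub_videos, index):
    """Remove the target sub-video; its adjustment control is removed with it."""
    return sub_videos[:index] + sub_videos[index + 1:]

# Deleting the third sub-video from a list of three leaves the other two.
print(delete_sub_video(["sub_video_1", "sub_video_2", "sub_video_3"], 2))
# -> ['sub_video_1', 'sub_video_2']
```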
Optionally, the video capture method further includes:
receiving a fourth input of the user, wherein the fourth input is an operation of pointing from the first target area to the second target area;
in response to the fourth input, canceling the response operation of the historical input with the shortest interval time with the fourth input;
alternatively,
receiving a fifth input of a user, wherein the fifth input is an operation of pointing from the second target area to the first target area;
and responding to the fifth input, and recovering the response operation of the historical input with the shortest interval time with the fifth input.
In this embodiment, when the terminal display screen is a special-shaped screen, the first target area may be an area of the special-shaped screen other than the top area, and the second target area may be the top area 201 of the special-shaped screen. It should be understood, however, that the embodiment of the present invention does not limit the specific locations of the first target area and the second target area in the display screen.
For convenience of understanding, it is assumed that the historical input with the shortest interval from the fourth input is the third input in the embodiment of the present invention: fig. 3a is the display interface before the terminal receives the third input, and fig. 3b is the display interface after the terminal receives the third input and performs the response operation for the third input. In this application scenario, as shown in fig. 9, the user may draw an arrow pointing from the first target area to the second target area, i.e., the top area 201, to trigger the terminal to cancel the response operation of the third input, so that the display interface of the terminal returns to that of fig. 3a.
If the terminal receives the fifth input, the response operation of the historical input with the shortest interval from the fifth input may be restored, that is, the most recently cancelled response operation is restored.
In this way, the user can perform the fourth input to trigger the terminal to cancel the response operation of the historical input with the shortest interval from the fourth input, or perform the fifth input to trigger the terminal to restore the response operation of the historical input with the shortest interval from the fifth input, thereby reducing the influence of misoperation by the user.
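The fourth and fifth inputs behave like an undo/redo pair. One common way to implement this is with two stacks of snapshots (an assumption; the patent does not specify the mechanism, and the class and names below are illustrative):

```python
class EditHistory:
    """Undo/redo over snapshots of the sub-video list (illustrative)."""

    def __init__(self, initial_state):
        self.undo_stack = [initial_state]
        self.redo_stack = []

    def apply(self, new_state):
        """Record the state produced by responding to an input."""
        self.undo_stack.append(new_state)
        self.redo_stack.clear()  # a fresh edit invalidates any pending redo

    def undo(self):
        """Fourth input: cancel the most recent response operation."""
        if len(self.undo_stack) > 1:
            self.redo_stack.append(self.undo_stack.pop())
        return self.undo_stack[-1]

    def redo(self):
        """Fifth input: restore the most recently cancelled response operation."""
        if self.redo_stack:
            self.undo_stack.append(self.redo_stack.pop())
        return self.undo_stack[-1]

history = EditHistory(["sub_video_1", "sub_video_2"])
history.apply(["sub_video_1"])  # e.g. a third input deleted a sub-video
print(history.undo())           # -> ['sub_video_1', 'sub_video_2']
print(history.redo())           # -> ['sub_video_1']
```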
In the embodiment of the invention, after the terminal acquires N sub-videos from the video, the sub-videos can be spliced to obtain a new video.
Optionally, the video capturing method may include:
receiving a sixth input of the user;
and responding to the sixth input, splicing the N sub-videos and generating a second target video.
As shown in fig. 10, the user may use two fingers to draw arcs 209 from below the bang area of the special-shaped screen to the ear areas on both sides of the bang, triggering the terminal to splice the N sub-videos to generate a second target video. Of course, the user may also trigger the terminal to splice the N sub-videos in other manners to generate the second target video, which is not limited in the embodiment of the present invention.
Therefore, the user can execute the sixth input to trigger the terminal to splice the N sub-videos to generate the second target video required by the user, and the user experience can be improved.
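Splicing the N sub-videos into the second target video is, at its simplest, concatenation of their frames in arrangement order (illustrative Python; real splicing would also re-mux audio and timestamps):

```python
def splice(sub_videos):
    """Concatenate N sub-videos (modeled as lists of frames) in order."""
    target_video = []
    for sub in sub_videos:
        target_video.extend(sub)
    return target_video

# Two sub-videos spliced into one second target video.
print(splice([["frame_1", "frame_2"], ["frame_3"]]))
# -> ['frame_1', 'frame_2', 'frame_3']
```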
It should be noted that, various optional implementations described in the embodiments of the present invention may be implemented in combination with each other or implemented separately, and the embodiments of the present invention are not limited thereto.
In addition, in order to avoid confusion of response operations of the terminal in response to different inputs, in a specific application, the expression forms of the inputs of the embodiment of the invention are different.
Referring to fig. 11, fig. 11 is a structural diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 11, a terminal 1100 includes: a first receiving module 1101, a first responding module 1102, a second receiving module 1103 and a second responding module 1104.
The first receiving module 1101 is configured to receive a first input of a user in a state that a playing interface of a video is displayed on a current interface;
a first response module 1102 for displaying a target control in response to the first input;
a second receiving module 1103, configured to receive a second input of the user on the target control;
a second response module 1104 for displaying N sub-videos intercepted from the video in response to the second input;
wherein N is a positive integer.
On the basis of fig. 11, a description is given below of modules further included in the terminal 1100, sub-modules included in each module, and/or units included in the sub-modules.
Optionally, the second input is an operation of truncating the target control into N sub-controls by the user;
the second response module 1104 includes:
a first response sub-module, configured to, in response to the second input, truncate the target control into N sub-controls, and truncate the video into N sub-videos;
and the first display sub-module is used for displaying the N sub-videos and N adjusting controls corresponding to the N sub-videos.
Optionally, the second input is an operation of intercepting N sub-controls from the target control by the user;
the second response module 1104 includes:
the second response submodule is used for responding to the second input, intercepting N sub-controls from the target control and intercepting N sub-videos from the video;
and the second display sub-module is used for displaying the N sub-videos and N adjusting controls corresponding to the N sub-videos.
Optionally, the lengths of the N sub-videos are associated with the feature parameters of the N adjustment controls;
wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
Optionally, the target control is a linear control.
Optionally, the first receiving module 1101 is specifically configured to:
and receiving a first input of a user in a first display area and/or a second display area of a display screen of the terminal in a state that a playing interface of the video is displayed on the current interface.
Optionally, the terminal display screen is a special-shaped screen, and the first display area and the second display area are two areas separated from each other in a top end area of the special-shaped screen.
Optionally, the terminal 1100 further includes:
a third receiving module, configured to receive, after displaying N sub-videos intercepted from the video, a third input of a user for a target adjustment control in the N adjustment controls;
and the third response module is used for responding to the third input and adjusting the target sub-video corresponding to the target adjustment control.
Optionally, the third input is a first sliding operation for the first end or the second end of the target adjustment control;
the third response module is specifically configured to:
and responding to the third input, and adjusting the length of the target sub video corresponding to the target adjusting control.
Optionally, the third response module includes:
an obtaining sub-module, configured to obtain a first feature parameter and a second feature parameter of the target adjustment control, where the first feature parameter is the feature parameter of the target adjustment control before the third input is received, and the second feature parameter is the feature parameter of the target adjustment control after the third input is received;
the calculation submodule is used for calculating the ratio of the first characteristic parameter to the second characteristic parameter to obtain a target proportion;
the adjusting sub-module is used for adjusting the length of the target sub-video corresponding to the target adjusting control according to the target proportion;
wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
Optionally, the adjusting sub-module includes:
a first adjusting unit, configured to, when the operation object of the third input is the first end and the operation direction of the third input is the first direction, adjust, according to the target proportion, the start time of the target sub-video corresponding to the target adjustment control, so that the start time is advanced;
a second adjusting unit, configured to, when the operation object of the third input is the second end and the operation direction of the third input is the second direction, adjust, according to the target proportion, the end time of the target sub-video corresponding to the target adjustment control, so that the end time is delayed;
a third adjusting unit, configured to, when the operation object of the third input is the first end and the operation direction of the third input is the second direction, adjust, according to the target proportion, the start time of the target sub-video corresponding to the target adjustment control, so that the start time is delayed;
a fourth adjusting unit, configured to, when the operation object of the third input is the second end and the operation direction of the third input is the first direction, adjust, according to the target proportion, the end time of the target sub-video corresponding to the target adjustment control, so that the end time is advanced;
wherein the first direction is different from the second direction.
Optionally, the third input is an operation of exchanging display positions of the first target adjustment control and the second target adjustment control, and N is greater than 1;
the third response module is specifically configured to:
responding to the third input, and adjusting the arrangement sequence of the first target sub-video corresponding to the first target adjustment control and the second target sub-video corresponding to the second target adjustment control in the first target video;
the first target video is generated according to the first target sub video and the second target sub video, and the arrangement sequence comprises a playing arrangement sequence and/or a storage position arrangement sequence.
Optionally, the third input is an operation of deleting the target adjustment control;
the third response module is specifically configured to:
and in response to the third input, deleting the target sub-video corresponding to the target adjustment control.
Optionally, the third input is a second sliding operation for the first end and the second end of the target adjustment control at the same time.
Optionally, the terminal 1100 further includes:
the fourth receiving module is used for receiving a fourth input of the user, wherein the fourth input is an operation of pointing to the second target area from the first target area;
the fourth response module is used for responding to the fourth input and canceling the response operation of the historical input with the shortest interval time with the fourth input;
alternatively,
a fifth receiving module, configured to receive a fifth input from a user, where the fifth input is an operation directed from the second target area to the first target area;
and the fifth response module is used for responding to the fifth input and recovering the response operation of the historical input with the shortest interval time with the fifth input.
Optionally, the terminal 1100 further includes:
a sixth receiving module, configured to receive a sixth input of the user after displaying the N sub-videos intercepted from the video;
and a sixth response module, configured to splice the N sub-videos in response to the sixth input, and generate a second target video.
The terminal 1100 can implement each process in the method embodiment of the present invention and achieve the same beneficial effects, and is not described herein again to avoid repetition.
Referring to fig. 12, fig. 12 is a block diagram of a terminal according to another embodiment of the present invention, which may serve as a hardware structure diagram of a terminal for implementing various embodiments of the present invention. As shown in fig. 12, terminal 1200 includes, but is not limited to: a radio frequency unit 1201, a network module 1202, an audio output unit 1203, an input unit 1204, a sensor 1205, a display unit 1206, a user input unit 1207, an interface unit 1208, a memory 1209, a processor 1210, and a power source 1211. Those skilled in the art will appreciate that the terminal configuration shown in fig. 12 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be used. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the radio frequency unit 1201 is configured to: receiving a first input of a user in a state that a playing interface of a video is displayed on a current interface; receiving a second input of the user on the target control;
a processor 1210 configured to: displaying a target control in response to the first input; displaying N sub-videos intercepted from the video in response to the second input; wherein N is a positive integer.
Optionally, the second input is an operation of truncating the target control into N sub-controls by the user; a processor 1210 further configured to: truncating the target control into N sub-controls and truncating the video into N sub-videos in response to the second input; and displaying the N sub videos and N adjusting controls corresponding to the N sub videos.
Optionally, the second input is an operation of intercepting N sub-controls from the target control by the user; a processor 1210, further configured to: in response to the second input, intercepting N sub-controls from the target control, and intercepting N sub-videos from the video; and displaying the N sub-videos and N adjusting controls corresponding to the N sub-videos.
Optionally, the lengths of the N sub-videos are associated with the feature parameters of the N adjustment controls; wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
Optionally, the target control is a linear control.
Optionally, the processor 1210 is further configured to: a first input of a user in a first display area and/or a second display area of a terminal display screen is received.
Optionally, the terminal display screen is a special-shaped screen, and the first display area and the second display area are two areas separated from each other in a top end area of the special-shaped screen.
Optionally, the radio frequency unit 1201 is further configured to: receiving a third input of a user for a target adjustment control of the N adjustment controls; a processor 1210 further configured to: and responding to the third input, and adjusting the target sub video corresponding to the target adjusting control.
Optionally, the third input is a first sliding operation for the first end or the second end of the target adjustment control; a processor 1210 further configured to: and responding to the third input, and adjusting the length of the target sub video corresponding to the target adjusting control.
Optionally, the processor 1210 is further configured to: acquiring a first characteristic parameter and a second characteristic parameter of the target adjusting control, wherein the first characteristic parameter is the characteristic parameter of the target adjusting control before the third input is received, and the second characteristic parameter is the characteristic parameter of the target adjusting control after the third input is received; calculating the ratio of the first characteristic parameter to the second characteristic parameter to obtain a target ratio; according to the target proportion, adjusting the length of a target sub video corresponding to the target adjusting control; wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
Optionally, the processor 1210 is further configured to:
when the operation object of the third input is the first end and the operation direction of the third input is the first direction, adjusting the start time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the start time is advanced;
when the operation object of the third input is the second end and the operation direction of the third input is the second direction, adjusting the end time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the end time is delayed;
when the operation object of the third input is the first end and the operation direction of the third input is the second direction, adjusting the start time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the start time is delayed;
when the operation object of the third input is the second end and the operation direction of the third input is the first direction, adjusting the end time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the end time is advanced;
wherein the first direction is different from the second direction.
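The four direction rules above can be sketched as follows. This is illustrative only: the magnitude of the time shift (`delta`) is an assumption, since the embodiment leaves the exact mapping from the target proportion to a time offset open.

```python
def adjust_clip(start: float, end: float, target_proportion: float,
                operated_end: str, direction: str) -> tuple:
    """Apply the four direction rules to a clip (times in seconds).

    operated_end: "first" or "second" end of the adjustment control;
    direction:    "first" or "second" sliding direction.
    delta is assumed to be the length change implied by the proportion.
    """
    length = end - start
    delta = abs(length / target_proportion - length)  # assumed magnitude
    if operated_end == "first" and direction == "first":
        start -= delta        # starting time is advanced
    elif operated_end == "second" and direction == "second":
        end += delta          # end time is delayed
    elif operated_end == "first" and direction == "second":
        start += delta        # starting time is delayed
    elif operated_end == "second" and direction == "first":
        end -= delta          # end time is advanced
    return start, end
```

With a target proportion of 0.5 (control doubled) on a 10-second clip, sliding the first end in the first direction moves the start 10 seconds earlier.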
Optionally, the third input is an operation of exchanging the display positions of a first target adjustment control and a second target adjustment control, and N is greater than 1; the processor 1210 is further configured to: in response to the third input, adjust the arrangement sequence, in the first target video, of the first target sub-video corresponding to the first target adjustment control and the second target sub-video corresponding to the second target adjustment control; the first target video is generated according to the first target sub-video and the second target sub-video, and the arrangement sequence includes a playing arrangement sequence and/or a storage-position arrangement sequence.
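A minimal sketch of reordering the sub-videos when two adjustment controls swap display positions (identifiers are hypothetical; a real terminal would reorder clip descriptors, not strings):

```python
def swap_order(sub_videos: list, i: int, j: int) -> list:
    """Return the play/storage order after the controls at i and j swap."""
    reordered = list(sub_videos)   # copy, leaving the original order intact
    reordered[i], reordered[j] = reordered[j], reordered[i]
    return reordered

# The first target video is then assembled in the new arrangement sequence.
order = swap_order(["clip_a", "clip_b", "clip_c"], 0, 2)
```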
Optionally, the third input is an operation of deleting the target adjustment control; the processor 1210 is further configured to: in response to the third input, delete the target sub-video corresponding to the target adjustment control.
Optionally, the third input is a second sliding operation performed on the first end and the second end of the target adjustment control simultaneously.
Optionally, the radio frequency unit 1201 is further configured to: receive a fourth input of the user, where the fourth input is an operation of pointing from the first target area to the second target area; the processor 1210 is further configured to: in response to the fourth input, cancel the response operation of the historical input whose interval from the fourth input is the shortest;
alternatively,
the radio frequency unit 1201 is further configured to: receive a fifth input of the user, where the fifth input is an operation of pointing from the second target area to the first target area; the processor 1210 is further configured to: in response to the fifth input, restore the response operation of the historical input whose interval from the fifth input is the shortest.
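The fourth and fifth inputs describe a gesture-driven undo/redo. A sketch with two plain stacks, under the assumption that "the historical input with the shortest interval" means the most recent operation (class and method names are hypothetical):

```python
class InputHistory:
    """Undo/redo of response operations via the two pointing gestures.

    The fourth input (first target area -> second target area) cancels the
    most recent input's response; the fifth input (second -> first)
    restores it. Operations are opaque objects kept on two stacks.
    """

    def __init__(self) -> None:
        self._done: list = []
        self._undone: list = []

    def record(self, op) -> None:
        self._done.append(op)
        self._undone.clear()       # a fresh input invalidates redo history

    def undo(self) -> None:        # fourth input
        if self._done:
            self._undone.append(self._done.pop())

    def redo(self) -> None:        # fifth input
        if self._undone:
            self._done.append(self._undone.pop())
```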
Optionally, the radio frequency unit 1201 is further configured to: receive a sixth input of the user; the processor 1210 is further configured to: in response to the sixth input, splice the N sub-videos to generate a second target video.
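Splicing the N sub-videos into the second target video can be sketched as simple concatenation; this models each sub-video as a list of frames, whereas a real terminal would remux or re-encode the clips (e.g. with FFmpeg's concat demuxer):

```python
def splice(sub_videos: list) -> list:
    """Concatenate the N sub-videos, in order, into the second target video."""
    second_target_video: list = []
    for clip in sub_videos:
        second_target_video.extend(clip)
    return second_target_video
```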
It should be noted that, in this embodiment, the terminal 1200 may implement each process in the method embodiments of the present invention and achieve the same beneficial effects; to avoid repetition, details are not described here again.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1201 may be used for receiving and sending signals during information transmission and reception or during a call. Specifically, it receives downlink data from a base station and sends the data to the processor 1210 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 1201 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1201 can also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 1202, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 1203 may convert audio data received by the radio frequency unit 1201 or the network module 1202, or stored in the memory 1209, into an audio signal and output it as sound. The audio output unit 1203 may also provide audio output related to a specific function performed by the terminal 1200 (e.g., a call signal reception sound or a message reception sound). The audio output unit 1203 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1204 is used to receive audio or video signals. The input unit 1204 may include a Graphics Processing Unit (GPU) 12041 and a microphone 12042; the graphics processor 12041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 1206. The image frames processed by the graphics processor 12041 may be stored in the memory 1209 (or another storage medium) or transmitted via the radio frequency unit 1201 or the network module 1202. The microphone 12042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 1201.
The terminal 1200 also includes at least one sensor 1205, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 12061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 12061 and/or backlight when the terminal 1200 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 1205 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., and will not be described further herein.
The display unit 1206 is used to display information input by the user or information provided to the user. The Display unit 1206 may include a Display panel 12061, and the Display panel 12061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1207 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 1207 includes a touch panel 12071 and other input devices 12072. The touch panel 12071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 12071 using a finger, a stylus, or any suitable object or attachment). The touch panel 12071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1210, and receives and executes commands from the processor 1210. In addition, the touch panel 12071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 12071, the user input unit 1207 may include other input devices 12072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described further here.
Further, the touch panel 12071 may be overlaid on the display panel 12061, and when the touch panel 12071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1210 to determine the type of the touch event, and then the processor 1210 provides a corresponding visual output on the display panel 12061 according to the type of the touch event. Although the touch panel 12071 and the display panel 12061 are shown as two separate components in fig. 12 to implement the input and output functions of the terminal, in some embodiments, the touch panel 12071 and the display panel 12061 may be integrated to implement the input and output functions of the terminal, and this is not limited herein.
An interface unit 1208 is an interface for connecting an external device to the terminal 1200. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1208 may be used to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more elements within the terminal 1200 or may be used to transmit data between the terminal 1200 and the external device.
The memory 1209 may be used to store software programs as well as various data. The memory 1209 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 1209 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1210 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing software programs and/or modules stored in the memory 1209 and calling data stored in the memory 1209, thereby monitoring the entire terminal. Processor 1210 may include one or more processing units; preferably, the processor 1210 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The terminal 1200 may also include a power source 1211 (e.g., a battery) for powering the various components, and preferably, the power source 1211 is logically connected to the processor 1210 via a power management system such that the functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the terminal 1200 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal, including a processor 1210, a memory 1209, and a computer program stored in the memory 1209 and executable on the processor 1210. When executed by the processor 1210, the computer program implements each process of the above video interception method embodiment and achieves the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the video interception method embodiment and achieves the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (30)

1. A video interception method is applied to a terminal and is characterized by comprising the following steps:
receiving a first input of a user in a state that a playing interface of a video is displayed on a current interface;
displaying a target control in response to the first input;
receiving a second input of the user on the target control;
displaying N sub-videos intercepted from the video in response to the second input;
wherein N is a positive integer;
the second input is the operation of truncating the target control into N sub-controls by the user;
the displaying, in response to the second input, N sub-videos intercepted from the video, including:
truncating the target control into N sub-controls and truncating the video into N sub-videos in response to the second input;
displaying the N sub-videos and N adjustment controls corresponding to the N sub-videos.
2. The method of claim 1, wherein the lengths of the N sub-videos are associated with feature parameters of the N adjustment controls;
wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
3. The method of claim 1, wherein the target control is a line type control.
4. The method of claim 1, wherein receiving a first input from a user comprises:
a first input of a user in a first display area and/or a second display area of a terminal display screen is received.
5. The method according to claim 4, wherein the terminal display screen is a shaped screen, and the first display area and the second display area are two areas separated from each other in a top end area of the shaped screen.
6. The method of claim 1, wherein after displaying the N sub-videos intercepted from the video, further comprising:
receiving a third input of a user for a target adjustment control of the N adjustment controls;
in response to the third input, adjusting the target sub-video corresponding to the target adjustment control.
7. The method of claim 6, wherein the third input is a first sliding operation for a first end or a second end of the target adjustment control;
the adjusting the target sub-video corresponding to the target adjustment control in response to the third input includes:
in response to the third input, adjusting the length of the target sub-video corresponding to the target adjustment control.
8. The method of claim 7, wherein the adjusting the length of the target sub-video comprises:
acquiring a first characteristic parameter and a second characteristic parameter of the target adjusting control, wherein the first characteristic parameter is the characteristic parameter of the target adjusting control before the third input is received, and the second characteristic parameter is the characteristic parameter of the target adjusting control after the third input is received;
calculating the ratio of the first characteristic parameter to the second characteristic parameter to obtain a target proportion;
adjusting the length of the target sub-video corresponding to the target adjustment control according to the target proportion;
wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
9. The method of claim 8, wherein said adjusting the length of the target sub-video according to the target scale comprises:
when the operation object of the third input is the first end and the operation direction of the third input is the first direction, adjusting the starting time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the starting time is advanced;
when the operation object of the third input is the second end and the operation direction of the third input is the second direction, adjusting the end time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the end time is delayed;
when the operation object of the third input is the first end and the operation direction of the third input is the second direction, adjusting the starting time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the starting time is delayed;
when the operation object of the third input is the second end and the operation direction of the third input is the first direction, adjusting the end time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the end time is advanced;
wherein the first direction is different from the second direction.
10. The method of claim 6, wherein the third input is an operation to swap display positions of a first target adjustment control and a second target adjustment control, and N is greater than 1;
the adjusting the target sub-video corresponding to the target adjustment control in response to the third input includes:
in response to the third input, adjusting the arrangement sequence, in the first target video, of the first target sub-video corresponding to the first target adjustment control and the second target sub-video corresponding to the second target adjustment control;
the first target video is generated according to the first target sub video and the second target sub video, and the arrangement sequence comprises a playing arrangement sequence and/or a storage position arrangement sequence.
11. The method of claim 6, wherein the third input is an operation to delete the target adjustment control;
the adjusting the target sub-video corresponding to the target adjustment control in response to the third input includes:
in response to the third input, deleting the target sub-video corresponding to the target adjustment control.
12. The method of claim 11, wherein the third input is a second sliding operation directed to both the first end and the second end of the target adjustment control.
13. The method of claim 1, further comprising:
receiving a fourth input of the user, wherein the fourth input is an operation of pointing from the first target area to the second target area;
in response to the fourth input, canceling the response operation of the historical input whose interval from the fourth input is the shortest;
alternatively,
receiving a fifth input of a user, wherein the fifth input is an operation of pointing from the second target area to the first target area;
in response to the fifth input, restoring the response operation of the historical input whose interval from the fifth input is the shortest.
14. The method of claim 1, wherein after displaying the N sub-videos intercepted from the video, further comprising:
receiving a sixth input of the user;
in response to the sixth input, splicing the N sub-videos to generate a second target video.
15. A terminal, comprising:
the first receiving module is used for receiving a first input of a user in a state that a video playing interface is displayed on a current interface;
a first response module for displaying a target control in response to the first input;
the second receiving module is used for receiving a second input of the user on the target control;
a second response module for displaying N sub-videos intercepted from the video in response to the second input;
wherein N is a positive integer;
the second input is the operation of truncating the target control into N sub-controls by the user;
the second response module includes:
a first response sub-module, configured to, in response to the second input, truncate the target control into N sub-controls, and truncate the video into N sub-videos;
and the first display sub-module is used for displaying the N sub-videos and N adjusting controls corresponding to the N sub-videos.
16. The terminal of claim 15, wherein the lengths of the N sub-videos are associated with feature parameters of the N adjustment controls;
wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
17. The terminal of claim 15, wherein the target control is a line type control.
18. The terminal according to claim 15, wherein the first receiving module is specifically configured to:
and receiving a first input of a user in a first display area and/or a second display area of a display screen of the terminal in a state that a playing interface of the video is displayed on the current interface.
19. The terminal of claim 18, wherein the terminal display is a shaped screen, and wherein the first display area and the second display area are two separate areas of a topmost area of the shaped screen.
20. The terminal of claim 15, further comprising:
a third receiving module, configured to receive, after displaying N sub-videos intercepted from the video, a third input of a user for a target adjustment control in the N adjustment controls;
and the third response module is used for responding to the third input and adjusting the target sub-video corresponding to the target adjustment control.
21. The terminal of claim 20, wherein the third input is a first sliding operation for a first end or a second end of the target adjustment control;
the third response module is specifically configured to:
in response to the third input, adjust the length of the target sub-video corresponding to the target adjustment control.
22. The terminal of claim 21, wherein the third response module comprises:
an obtaining sub-module, configured to obtain a first feature parameter and a second feature parameter of the target adjustment control, where the first feature parameter is the feature parameter of the target adjustment control before the third input is received, and the second feature parameter is the feature parameter of the target adjustment control after the third input is received;
the calculation submodule is used for calculating the ratio of the first characteristic parameter to the second characteristic parameter to obtain a target proportion;
the adjusting sub-module is used for adjusting the length of the target sub-video corresponding to the target adjusting control according to the target proportion;
wherein the characteristic parameter comprises at least one of a length, a size, and an area of the adjustment control.
23. The terminal of claim 22, wherein the adjusting submodule comprises:
a first adjusting unit, configured to, when the operation object of the third input is the first end and the operation direction of the third input is the first direction, adjust the starting time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the starting time is advanced;
a second adjusting unit, configured to, when the operation object of the third input is the second end and the operation direction of the third input is the second direction, adjust the end time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the end time is delayed;
a third adjusting unit, configured to, when the operation object of the third input is the first end and the operation direction of the third input is the second direction, adjust the starting time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the starting time is delayed;
a fourth adjusting unit, configured to, when the operation object of the third input is the second end and the operation direction of the third input is the first direction, adjust the end time of the target sub-video corresponding to the target adjustment control according to the target proportion, so that the end time is advanced;
wherein the first direction is different from the second direction.
24. The terminal of claim 20, wherein the third input is an operation to swap display positions of a first target adjustment control and a second target adjustment control, and N is greater than 1;
the third response module is specifically configured to:
in response to the third input, adjust the arrangement sequence, in the first target video, of the first target sub-video corresponding to the first target adjustment control and the second target sub-video corresponding to the second target adjustment control;
the first target video is generated according to the first target sub video and the second target sub video, and the arrangement sequence comprises a playing arrangement sequence and/or a storage position arrangement sequence.
25. The terminal of claim 20, wherein the third input is an operation to delete the target adjustment control;
the third response module is specifically configured to:
in response to the third input, delete the target sub-video corresponding to the target adjustment control.
26. The terminal of claim 25, wherein the third input is a second sliding operation directed to both the first end and the second end of the target adjustment control.
27. The terminal of claim 15, further comprising:
the fourth receiving module is used for receiving a fourth input of the user, wherein the fourth input is an operation of pointing to the second target area from the first target area;
the fourth response module is used for canceling, in response to the fourth input, the response operation of the historical input whose interval from the fourth input is the shortest;
alternatively,
a fifth receiving module, configured to receive a fifth input from a user, where the fifth input is an operation directed from the second target area to the first target area;
and the fifth response module is used for restoring, in response to the fifth input, the response operation of the historical input whose interval from the fifth input is the shortest.
28. The terminal of claim 15, further comprising:
a sixth receiving module, configured to receive a sixth input of the user after displaying the N sub-videos intercepted from the video;
and a sixth response module, configured to splice the N sub-videos in response to the sixth input, and generate a second target video.
29. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the video interception method according to any one of claims 1 to 14.
30. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the video interception method according to any one of claims 1 to 14.
CN201810219543.0A 2018-03-16 2018-03-16 Video intercepting method and terminal Active CN108471550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810219543.0A CN108471550B (en) 2018-03-16 2018-03-16 Video intercepting method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810219543.0A CN108471550B (en) 2018-03-16 2018-03-16 Video intercepting method and terminal

Publications (2)

Publication Number Publication Date
CN108471550A CN108471550A (en) 2018-08-31
CN108471550B true CN108471550B (en) 2020-10-09

Family

ID=63265453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810219543.0A Active CN108471550B (en) 2018-03-16 2018-03-16 Video intercepting method and terminal

Country Status (1)

Country Link
CN (1) CN108471550B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109710779A (en) * 2018-12-24 2019-05-03 北京金山安全软件有限公司 Multimedia file intercepting method, device, equipment and storage medium
CN110798727A (en) * 2019-10-28 2020-02-14 维沃移动通信有限公司 Video processing method and electronic equipment
CN111324261B (en) * 2020-01-20 2021-01-19 北京无限光场科技有限公司 Intercepting method and device of target object, electronic equipment and storage medium
CN113810751B (en) * 2020-06-12 2022-10-28 阿里巴巴集团控股有限公司 Video processing method and device, electronic device and server
CN113038218B (en) * 2021-03-19 2022-06-10 厦门理工学院 Video screenshot method, device, equipment and readable storage medium
CN113079415B (en) * 2021-03-31 2023-07-28 维沃移动通信有限公司 Video processing method and device and electronic equipment
CN113596555B (en) * 2021-06-21 2024-01-19 维沃移动通信(杭州)有限公司 Video playing method and device and electronic equipment
CN113986083A (en) * 2021-10-29 2022-01-28 维沃移动通信有限公司 File processing method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104093083A (en) * 2014-07-23 2014-10-08 上海天脉聚源文化传媒有限公司 Video intercepting method and device
CN105472469A (en) * 2015-12-08 2016-04-06 小米科技有限责任公司 Video playing progress adjusting method and apparatus
CN107087137A (en) * 2017-06-01 2017-08-22 腾讯科技(深圳)有限公司 The method and apparatus and terminal device of video are presented
CN107368585A (en) * 2017-07-21 2017-11-21 杭州学天教育科技有限公司 A kind of storage method and system based on video of giving lessons
CN107729522A (en) * 2017-10-27 2018-02-23 优酷网络技术(北京)有限公司 Multimedia resource fragment intercept method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160155475A1 (en) * 2012-12-12 2016-06-02 Crowdflik, Inc. Method And System For Capturing Video From A Plurality Of Devices And Organizing Them For Editing, Viewing, And Dissemination Based On One Or More Criteria
CN104602126B (en) * 2013-10-31 2017-12-26 联想(北京)有限公司 Information processing method and electronic device
CN104506937B (en) * 2015-01-06 2017-11-17 三星电子(中国)研发中心 Audio and video sharing processing method and system
CN105516828A (en) * 2015-12-14 2016-04-20 成都易瞳科技有限公司 Method and device for downloading video
CN106231439A (en) * 2016-07-21 2016-12-14 乐视控股(北京)有限公司 Video segment intercepting method and device

Also Published As

Publication number Publication date
CN108471550A (en) 2018-08-31

Similar Documents

Publication Publication Date Title
CN108471550B (en) Video intercepting method and terminal
CN108668083B (en) Photographing method and terminal
CN108762954B (en) Object sharing method and mobile terminal
CN108536365B (en) Image sharing method and terminal
CN109495711B (en) Video call processing method, sending terminal, receiving terminal and electronic equipment
CN108495029B (en) Photographing method and mobile terminal
CN108491133B (en) Application program control method and terminal
CN108920239B (en) Long screen capture method and mobile terminal
CN108646958B (en) Application program starting method and terminal
CN108132752B (en) Text editing method and mobile terminal
WO2019196929A1 (en) Video data processing method and mobile terminal
CN109710349B (en) Screen capturing method and mobile terminal
CN108898555B (en) Image processing method and terminal equipment
CN108228902B (en) File display method and mobile terminal
CN110531915B (en) Screen operation method and terminal equipment
CN110196668B (en) Information processing method and terminal equipment
CN109407948B (en) Interface display method and mobile terminal
CN107728923B (en) Operation processing method and mobile terminal
CN109388324B (en) Display control method and terminal
WO2019120190A1 (en) Dialing method and mobile terminal
CN108132749B (en) Image editing method and mobile terminal
CN108469940B (en) Screenshot method and terminal
CN110768804A (en) Group creation method and terminal device
CN110413363B (en) Screenshot method and terminal equipment
CN109669656B (en) Information display method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant