CN111818390A - Video capturing method and device and electronic equipment - Google Patents

Video capturing method and device and electronic equipment

Info

Publication number
CN111818390A
CN111818390A (application CN202010615641.3A)
Authority
CN
China
Prior art keywords
video
time point
input
video time
user
Prior art date
Legal status
Pending
Application number
CN202010615641.3A
Other languages
Chinese (zh)
Inventor
李文文 (Li Wenwen)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010615641.3A
Publication of CN111818390A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video capture method, a video capture apparatus, and an electronic device, belonging to the technical field of communication. The method comprises the following steps: receiving a first touch input of a user in a preset control area while a first video is playing; determining a first video time point in response to the first touch input; receiving a second touch input of the user in the preset control area; determining a second video time point in response to the second touch input, where the time precision of the first video time point is lower than that of the second video time point; determining a target key frame at a target video time point based on the first video time point and the second video time point; and performing video capture on the first video based on the target key frame to obtain a second video. With the method and apparatus, the target key frame for video capture can be located quickly, the user does not need to start a screen-recording function, and user operations are reduced.

Description

Video capturing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video capturing method and device and electronic equipment.
Background
With the continuous development of science and technology, video has become a mainstream way of transmitting information. Alongside the rise of short videos, video has also become a new mode of social interaction, and capturing clips from videos is an increasingly common demand.
The video capture approach commonly used at present is to record a short video or capture a clip with screen-recording software. In existing schemes, starting the screen-recording function is cumbersome: it must be selected and invoked from nested check boxes in a settings interface or a drop-down box, and a user may need three or four attempts to obtain an accurate start point and end point for a single captured video segment.
Disclosure of Invention
Embodiments of the present application aim to provide a video capture method, a video capture apparatus, and an electronic device, which can solve the prior-art problem that the manner of starting a screen-recording function is complex.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video capture method, where the method includes:
receiving a first touch input of a user in a preset control area in the process of playing a first video;
determining a first video time point in response to the first touch input;
receiving a second touch input of the user in the preset control area;
determining a second video time point in response to the second touch input, the time precision of the first video time point being lower than the time precision of the second video time point;
determining a target key frame of a target video time point based on the first video time point and the second video time point;
and performing video interception on the first video based on the target key frame to obtain a second video.
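The steps of the first aspect lend themselves to a short sketch. The following Python is a minimal illustration only: the function names, and the choice to snap the combined time point to the nearest key frame, are assumptions made for clarity rather than the patent's exact implementation.

```python
def locate_target_keyframe(coarse_s, fine_s, keyframe_times):
    """Combine a low-precision and a high-precision video time point,
    then pick the key frame nearest the resulting target time point."""
    target_time = coarse_s + fine_s  # e.g. 20 s + 0.5 s -> 20.5 s
    return min(keyframe_times, key=lambda t: abs(t - target_time))

def capture_between(keyframe_times, start_kf, end_kf):
    """Form the second video from the key frames lying between the
    start and end key frames (inclusive)."""
    return [t for t in keyframe_times if start_kf <= t <= end_kf]
```

For instance, a coarse point of 20 s refined by 0.5 s yields a target of 20.5 s, which snaps to whichever key frame lies nearest that time.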
In a second aspect, an embodiment of the present application provides a video capture apparatus, including:
the first touch input receiving module is used for receiving a first touch input of a user in a preset control area in the process of playing a first video;
a first time point determination module for determining a first video time point in response to the first touch input;
the second touch input receiving module is used for receiving second touch input of a user in the preset control area;
a second time point determining module, configured to determine a second video time point in response to the second touch input, where a time precision of the first video time point is lower than a time precision of the second video time point;
a target key frame determination module, configured to determine a target key frame of a target video time point based on the first video time point and the second video time point;
and the second video acquisition module is used for carrying out video interception on the first video based on the target key frame to obtain a second video.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the video capture method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, and when executed by a processor, the program or instructions implement the video capture method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the video capture method according to the first aspect.
In the embodiments of the application, during playing of a first video, a first touch input of a user in a preset control area is received, and a first video time point is determined in response to it. A second touch input of the user in the preset control area is then received, and a second video time point is determined in response to it, where the time precision of the first video time point is lower than that of the second video time point. A target key frame at a target video time point is determined based on the first and second video time points, and video capture is performed on the first video based on the target key frame to obtain a second video. Because the video time precision is controlled through user input in the preset control area, the target key frame for video capture can be located quickly and the positioning accuracy of the video time point is improved; moreover, the user does not need to start a screen-recording function, which reduces user operations.
Drawings
Fig. 1 is a flowchart illustrating steps of a video capture method according to an embodiment of the present application;
fig. 2 is a flowchart illustrating steps of a video capture method according to a second embodiment of the present application;
fig. 3 is a flowchart illustrating steps of a video capture method according to a third embodiment of the present application;
fig. 3a is a schematic diagram of entering a video capture mode according to an embodiment of the present application;
fig. 3b is a schematic diagram of determining a video capture start point according to an embodiment of the present application;
fig. 3c is a schematic diagram of another method for determining a video capture start point according to an embodiment of the present application;
fig. 3d is a schematic diagram of another method for determining a video capture start point according to an embodiment of the present application;
fig. 3e is a schematic diagram of a selected video capture start point and end point according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video capture apparatus according to a fourth embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The following describes in detail a video capture scheme provided in the embodiments of the present application with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart illustrating steps of a video capture method according to a first embodiment of the present application is shown, and as shown in fig. 1, the video capture method may specifically include the following steps:
step 101: in the process of playing the first video, receiving a first touch input of a user in a preset control area.
The first video refers to a video from which the user needs to capture one or more video frames. In this embodiment, the first video may be a movie video, a variety-show video, or the like, and may be determined according to service requirements, which this embodiment does not limit.
The preset control area refers to an area for positioning and selecting a key frame captured from a played video, and in this embodiment, the preset control area may be a certain touch-controllable area on a screen of an electronic device (such as a mobile phone, a tablet computer, and the like), such as an upper bar area and/or a lower bar area of the mobile phone. The preset control area may also be an external device, such as a touch pad, capable of controlling the video time played by the electronic device. The preset control area can also be a certain area on the back surface of the electronic equipment, which can be touched. Specifically, the setting may be performed according to a service requirement, which is not limited in this embodiment.
While the first video is playing in the foreground of the electronic device, the video capture mode can be entered through input in the preset control area. For example, as shown in fig. 3a, the upper and lower banners of the electronic device can serve as the preset control area, and sliding left or right on these banners can trigger the fast-forward and rewind functions of the video. When the playing first video needs to be captured, the user can touch the two banners with two fingers (1 and 2 shown in fig. 3a) and double-click them at similar heights to enter the video capture mode; playback of the first video is then paused, and the key frame selection mode is entered.
The first touch input refers to a touch input executed by a user in a preset control area for controlling a playing time point of a first video, and in this embodiment, the first touch input may be an input formed by a click operation, an input formed by a slide operation, or an input formed by both the click operation and the slide operation, and specifically, may be determined according to a service requirement, which is not limited in this embodiment.
During the process of playing the first video by the electronic device, a first touch input may be performed in the preset control area by the user, and after the first touch input of the user in the preset control area is received, step 102 is performed.
Step 102: in response to the first touch input, a first video time point is determined.
The first video time point is a video time point of the first video located according to the first touch input, i.e., a playing time of the first video. For example, when the user performs a sliding input in the preset control area to move the playing progress bar of the first video to the 6 s mark, 6 s is taken as the first video time point; that is, the video frame at 6 s is the one to be captured.
After receiving the first touch input of the user in the preset control area, the first video time point may be determined from it. For example, as shown in fig. 3b, after entering the video capture mode the user's fingers can slide on two precision control partitions (the upper bar serving as the first precision control partition and the lower bar as the second precision control partition). When the user's finger 2 performs sliding operation 7 in the first precision control partition, the video time positioning cursor 5 is slid to the position shown in fig. 3b, and the first video time point is determined from the cursor's position; for example, if the video time located by cursor 5 is 6 s, the first video time point is 6 s.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After determining the first video time point in response to the first touch input, step 103 is performed.
Step 103: and receiving a second touch input of the user in the preset control area.
The second touch input refers to a touch input executed by a user in a preset control area for controlling a playing time point of the first video, and in this embodiment, the second touch input may be an input formed by a click operation, an input formed by a slide operation, or an input formed by both the click operation and the slide operation, and specifically, may be determined according to a service requirement, which is not limited in this embodiment.
After the first video time point is located, a second touch input of the user in the preset control area is received. As shown in fig. 3b, the user's finger 1 performs sliding operation 8 in the second precision control partition, and sliding operation 8 may be regarded as the second touch input.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After receiving a second touch input of the user in the preset control area, step 104 is executed.
Step 104: in response to the second touch input, determining a second video time point, the time precision of the first video time point being lower than the time precision of the second video time point.
The second video time point is a video time point of the first video located according to the second touch input. For example, when the user performs a slide input in the preset control area to control the play progress bar of the first video to 0.2s, 0.2s is taken as the second video time point.
The time precision of the second video time point is higher than that of the first video time point. For example, when the first video time point has second-level precision, the second video time point may have hundredth-of-a-second-level precision. Alternatively, when the first video time point has minute-level precision, the second video time point may have second-level or hundredth-of-a-second-level precision, and so on.
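As a numeric sketch of the two precision levels, each partition can be modeled as applying a different step size to the same time value; the step sizes and the function name here are illustrative assumptions.

```python
def adjust_time(time_s, ticks, step_s):
    """Move a video time point by `ticks` steps of the partition's
    precision: e.g. 1.0 s per step for the coarse partition,
    0.01 s per step for the fine partition."""
    return round(time_s + ticks * step_s, 2)

# Coarse (second-level) input locates 6 s; fine (hundredth-of-a-second
# level) input then refines it by 0.20 s.
t = adjust_time(0.0, 6, 1.0)
t = adjust_time(t, 20, 0.01)
```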
After receiving the second touch input of the user in the preset control area, the second video time point may be determined from it. As shown in fig. 3b, after entering the video capture mode the user's fingers slide in the two precision control partitions (the upper bar as the first precision control partition, the lower bar as the second). The user's finger 1 performs sliding operation 8 in the second precision control partition, which may be regarded as the second touch input; the video time positioning cursor 6 is slid to the position shown in fig. 3b according to sliding operation 8, and the second video time point is determined from the cursor's position. For example, when the video time located by cursor 6 is 0.2 s, the second video time point is 0.2 s.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
Step 105: determining a target key frame of a target video time point based on the first video time point and the second video time point.
The target video time point is a video time point of the first video determined by combining the first video time point and the second video time point, for example, when the first video time point is 20s and the second video time point is 0.5s, the target video time point is 20.5 s.
The target key frame refers to a video key frame corresponding to a target time point in the first video. In this embodiment, the target key frame may include at least one key frame of video key frames such as a start key frame and an end key frame, and of course, the target key frame may also include other video key frames, and specifically, may be determined according to a service requirement, and this embodiment is not limited thereto.
In the case that the target key frame includes only the start key frame, the method may be applied to a scenario in which the user selects the video capture duration. For example, the user presets the duration of the captured video, such as 5 min or 8 min; the user then only needs to select one start key frame, which corresponds to the video capture start time of the first video, and all key frames of the first video within the preset duration after that start time are taken as the frames to be captured. The method may also be applied to a scenario in which an end key frame is preset: after the user selects the start key frame, the start key frame, the end key frame, and the key frames between them in the first video serve as the frames to be captured. Of course, the last frame of the first video may default to the end frame, in which case the user only needs to select the start frame.
In the case that the target key frame includes only the end key frame, the method may likewise be applied to a scenario in which the user selects the video capture duration. For example, the user presets the duration of the captured video, such as 6 min or 7 min; the user then only needs to select one end key frame, which corresponds to the video capture end time of the first video, and all key frames of the first video within the preset duration before that end time are taken as the frames to be captured. The method may also be applied to a scenario in which a start key frame is preset: after the user selects the end key frame, the start key frame, the end key frame, and the key frames between them in the first video serve as the frames to be captured. Of course, the first frame of the first video may default to the start key frame, in which case the user only needs to select the end key frame.
In the case that the target key frame includes both a start key frame and an end key frame, the method may be applied to a scenario in which the key frames of the first video to be captured are determined from the start and end key frames selected by the user. The user selects both key frames, and the start key frame, the end key frame, and the other key frames between them in the first video are together taken as the video key frames to be captured.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
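The three key-frame scenarios above can be condensed into one sketch; the function name, argument names, and the clamping behavior at the video boundaries are illustrative assumptions, not the patent's wording.

```python
def clip_range(video_len_s, start_kf=None, end_kf=None, preset_len_s=None):
    """Return the (start, end) time range to capture, depending on
    which key frames the user selected."""
    if start_kf is not None and end_kf is not None:
        # Both selected: capture from start key frame to end key frame.
        return start_kf, end_kf
    if start_kf is not None:
        # Only a start key frame: run for the preset duration
        # (clamped to the end of the first video).
        return start_kf, min(start_kf + preset_len_s, video_len_s)
    # Only an end key frame: cover the preset duration before it
    # (clamped to the start of the first video).
    return max(end_kf - preset_len_s, 0.0), end_kf
```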
After the first video time point and the second video time point are obtained, the target video time point of the first video at the first video time point and the second video time point can be combined, and the target key frame of the first video is determined according to the target video time point.
After determining the target key frame of the target video time point based on the first video time point and the second video time point, step 106 is performed.
Step 106: and performing video interception on the first video based on the target key frame to obtain a second video.
The second video is the video obtained after video capture is performed on the first video according to the target key frame. For example, when the target key frame includes a start key frame and an end key frame, the second video is formed by the two video frames obtained by capturing the frames of the first video corresponding to the start and end key frames. Alternatively, when the target key frame comprises a start key frame, an end key frame, and three video key frames between them, the second video is formed by the five video frames obtained by capturing the frames corresponding to all five key frames.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
According to the embodiment of the application, the video time point is positioned by combining the preset control area, the positioning of the video time point can be realized, the screen recording function does not need to be started, and the operation steps of recording videos by a user can be reduced.
The video capture method provided by this embodiment receives a first touch input of a user in a preset control area during playing of a first video and determines a first video time point in response to it; receives a second touch input of the user in the preset control area and determines a second video time point in response to it, the time precision of the first video time point being lower than that of the second; determines a target key frame at a target video time point based on the two time points; and captures the first video based on the target key frame to obtain a second video. Because the video time precision is controlled through user input in the preset control area, the target key frame for video capture can be located quickly and the positioning accuracy of the video time point is improved; moreover, the user does not need to start a screen-recording function, which reduces user operations.
Referring to fig. 2, a flowchart illustrating steps of a video capture method according to a second embodiment of the present application is shown, and as shown in fig. 2, the video capture method may specifically include the following steps:
step 201: during playing of the first video, a first sliding input of a first finger of a user in the first precision control partition is received.
This embodiment can be applied to a scenario in which the user controls the two precision control partitions with one finger each to locate video time points.
The first video refers to the video that needs to be captured. In this embodiment, the first video may be a movie video or a variety-show video, and may be determined according to service requirements, which this embodiment does not limit.
The preset control area refers to an area for positioning and selecting a key frame captured from a played video, and in this embodiment, the preset control area may be a certain touch-controllable area on an electronic device (such as a mobile phone, a tablet computer, and the like), such as an upper bar area and/or a lower bar area of the mobile phone. The preset control area may also be an external device, such as a touch pad, capable of controlling the video time played by the electronic device.
In this embodiment, the preset control area may include a first precision control partition and a second precision control partition, where the first precision control partition may be a low-precision control partition and the second a high-precision control partition; for example, the first precision control partition operates at second-level precision and the second at hundredth-of-a-second-level precision.
The first sliding input refers to a sliding input performed by the user's first finger in the first precision control partition. For example, as shown in fig. 3b, the upper bar of the electronic device may serve as the first precision control partition and the lower bar as the second; the user presses the first precision control partition with first finger 2 and performs sliding input 7, i.e., an input formed by sliding first finger 2 to the right.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After receiving the first sliding input of the user's first finger in the first precision control partition, step 202 is performed.
Step 202: in response to the first swipe input, the first video point in time is determined.
The first video time point is the video time point of the first video located according to the first sliding input. For example, as shown in fig. 3b, when the user's first finger performs a sliding input in the first precision control partition, the video time positioning cursor 5 can be controlled to slide left and right on the video progress bar 3; when the user's first finger 2 performs sliding input 7 to move cursor 5 to the position shown in fig. 3b, the first video time point can be determined from the cursor's position.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
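The mapping from cursor position to video time implied here can be sketched as a simple proportion, quantized to the partition's precision; the linear mapping and all names are assumptions made for illustration.

```python
def cursor_to_time(cursor_x, bar_width, video_len_s, step_s):
    """Map the positioning cursor's horizontal position on the progress
    bar to a video time point at the partition's precision (step_s),
    e.g. step_s = 1.0 for the coarse bar, 0.01 for the fine bar."""
    raw = (cursor_x / bar_width) * video_len_s
    return round(raw / step_s) * step_s
```

For example, a cursor halfway along the progress bar of a 12 s video, read at second-level precision, locates the 6 s mark.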
After receiving a first sliding input of a first finger of a user at a first precision control section, a first video time point of the located first video may be determined according to the first sliding input.
Step 203: receiving a second sliding input of a second finger of the user at the second precision control section.
The second sliding input is a sliding input performed by a second finger of the user in the second precision control partition. As shown in fig. 3b, the lower bar of the electronic device is the second precision control partition, and a sliding operation performed by the user's second finger 1 in the lower bar forms the second sliding input. Alternatively, as shown in fig. 3c, when the user presses the lower bar area with the second finger 1, the sliding input 9 executed there is the second sliding input.
It is understood that, during the input of the second sliding input, the first finger does not leave the first precision control partition; in this case, the second sliding input performed by the second finger is a valid sliding input.
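This validity rule can be sketched as a small state tracker (the class and method names are illustrative, not from the application):

```python
class PrecisionController:
    """Track which fingers are held in each partition. Per the rule above,
    a slide in one partition is valid only while a finger is still pressed
    in the other partition."""

    def __init__(self):
        self.down = {"first": False, "second": False}

    def press(self, zone):
        self.down[zone] = True

    def release(self, zone):
        self.down[zone] = False

    def is_valid_slide(self, zone):
        # A slide in `zone` counts only while the other partition's
        # finger has not left its partition.
        other = "second" if zone == "first" else "first"
        return self.down[zone] and self.down[other]

ctrl = PrecisionController()
ctrl.press("first")
ctrl.press("second")
print(ctrl.is_valid_slide("second"))  # True: the first finger is still held
ctrl.release("first")
print(ctrl.is_valid_slide("second"))  # False: the first finger has left
```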
When the second finger of the user presses the second precision control partition, a second sliding input may be performed by the second finger within the second precision control partition, and then step 204 is performed.
Step 204: determining the second video time point in response to the second sliding input.
The second video time point is a video time point of the first video located according to the second sliding input. For example, as shown in fig. 3b, when the second finger 1 of the user performs a sliding input in the second precision control partition, the video time positioning cursor 6 can be controlled to slide left and right on the video progress bar 4; when the sliding input 8 of the second finger 1 positions the video time positioning cursor 6 at the position shown in fig. 3b, the second video time point can be determined from the position of the video time positioning cursor 6. Alternatively, as shown in fig. 3c, when the sliding input 9 of the second finger 1 in the lower bar area positions the video time positioning cursor 6 at the position shown in fig. 3b, the second video time point can likewise be determined from the position of the video time positioning cursor 6.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After receiving a second sliding input of a second finger of the user in the second precision control partition, the second video time point of the first video may be determined according to the second sliding input.
Step 205: determining the target key frame based on the first video time point and the second video time point when both the first finger and the second finger have left the preset control area.
The target key frame refers to the video key frame corresponding to the target time point in the first video. In this embodiment, the target key frame may include at least one of video key frames such as a start key frame and an end key frame; of course, the target key frame may also include other video key frames, which may be determined according to service requirements, and this embodiment is not limited thereto.
When both the first finger and the second finger of the user leave the preset control area, it indicates that the user has finished locating the video time point. As shown in fig. 3e, after locating the target video time point, the first finger 2 and the second finger 1 of the user may leave the upper bar area and the lower bar area, respectively, to indicate that video time point positioning is complete. At this time, the target video time point of the first video may be obtained by combining the first video time point and the second video time point, and the target key frame of the first video may be determined according to the target video time point.
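Once a target video time point has been combined from the two located time points, determining the corresponding target key frame can be sketched as a nearest-timestamp lookup (an illustrative reading; the application does not prescribe this particular selection rule):

```python
import bisect

def nearest_key_frame(key_frame_times, target_time):
    """Pick the key frame timestamp closest to the located target time.
    `key_frame_times` must be sorted ascending."""
    i = bisect.bisect_left(key_frame_times, target_time)
    candidates = key_frame_times[max(0, i - 1):i + 1]
    return min(candidates, key=lambda t: abs(t - target_time))

key_frames = [0.0, 2.0, 4.0, 6.0, 8.0]
print(nearest_key_frame(key_frames, 2.2))  # 2.0
print(nearest_key_frame(key_frames, 5.1))  # 6.0
```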
After determining the target key frame for the target video time point based on the first video time point and the second video time point, step 206 is performed.
Step 206: performing video capture on the first video based on the target key frame to obtain a second video.
The second video refers to the video obtained after video capture is performed on the first video according to the target key frame. For example, when the target key frame includes a start key frame and an end key frame, the second video is formed from the video frames obtained by cutting the first video at the frames corresponding to the start key frame and the end key frame. Alternatively, when the target key frame includes a start key frame, an end key frame, and three video key frames located between them, the second video is formed from the video frames obtained by cutting at the frames corresponding to these five key frames.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
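One common reading of the start/end example is that the two key frames bound the segment to keep. A minimal sketch under that assumption (frame indices and names are illustrative):

```python
def capture_clip(frames, start_key, end_key):
    """Cut out the segment of the first video bounded by the start and end
    key frames (inclusive); indices are frame positions in `frames`."""
    if start_key > end_key:  # tolerate the two cursors crossing
        start_key, end_key = end_key, start_key
    return frames[start_key:end_key + 1]

first_video = ["f0", "f1", "f2", "f3", "f4", "f5"]
second_video = capture_clip(first_video, start_key=1, end_key=4)
print(second_video)  # ['f1', 'f2', 'f3', 'f4']
```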
According to the embodiment of the application, the user positions video time points with one finger in each of the first precision control partition and the second precision control partition, which simplifies the user's video time positioning operation and increases the speed of video capture.
In addition to the beneficial effects of the video capture method provided in the first embodiment of the application, the video capture method provided in this embodiment allows the user to control the first precision control partition and the second precision control partition each with a single finger to position the video time point, simplifying the user's operation steps and improving the positioning precision of the video time point.
Referring to fig. 3, a flowchart illustrating steps of a video capture method provided in a third embodiment of the present application is shown, and as shown in fig. 3, the video capture method may specifically include the following steps:
Step 301: receiving, during playing of the first video, N first sub-inputs of at least one first finger of a user in a first precision control partition of the preset control area.
The embodiment of the application can be applied to a scene in which a user controls two precision control partitions with one or more fingers to position the video time point.
The first video refers to a video that needs to be captured, and in this embodiment, the first video may be a video of a movie or a video of a variety of arts, and specifically, the first video may be determined according to a service requirement, which is not limited in this embodiment.
The preset control area refers to an area for positioning and selecting a key frame captured from a played video, and in this embodiment, the preset control area may be a certain touch-controllable area on an electronic device (such as a mobile phone, a tablet computer, and the like), such as an upper bar area and/or a lower bar area of the mobile phone. The preset control area may also be an external device, such as a touch pad, capable of controlling the video time played by the electronic device.
In this embodiment, the preset control area may include a first precision control partition and a second precision control partition, where the first precision control partition may be a low-precision control partition and the second precision control partition may be a high-precision control partition; for example, the first precision control partition is a second-level control partition and the second precision control partition is a hundred-millisecond-level control partition, and so on.
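Under the example granularities above, composing a target time from the two partitions can be sketched as follows (the 1 s and 0.1 s steps are the assumed second-level and hundred-millisecond-level granularities; the function name is illustrative):

```python
COARSE_STEP = 1.0  # assumed second-level granularity of the first partition
FINE_STEP = 0.1    # assumed hundred-millisecond granularity of the second partition

def combine(coarse_units, fine_units):
    """Compose a target time from a coarse selection plus a fine refinement."""
    return round(coarse_units * COARSE_STEP + fine_units * FINE_STEP, 3)

# The coarse partition lands on 2 s and the fine partition adds 0.2 s.
print(combine(2, 2))  # 2.2
```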
The N first sub-inputs refer to inputs performed by the user in the first precision control partition with at least one first finger; a click input among them may include one or more of a single click, a double click, or multiple clicks. N is a positive integer greater than or equal to 1.
When the first video time point of the first video needs to be located, the first precision control partition may be pressed by at least one first finger, and the first sub-inputs are executed in the first precision control partition by the at least one first finger. Of course, whether a first sub-input is valid may depend on whether the second finger has left the second precision control partition: when the second finger has not left the second precision control partition, the input executed by the first finger is a valid input. This is described in detail in the following specific implementation.
In a specific implementation manner of the present application, the step 301 may include:
substep S1: sequentially receiving N times of first sub-inputs of the at least one first finger in the first precision control partition under the condition that the at least one second finger does not leave the second precision control partition.
In this embodiment, the N first sub-inputs may include at least one click input and at least one slide input.
It can be understood that the click input and the slide input have no fixed execution order: the user may execute the click input first and then the slide input, or the slide input first and then the click input; the specific order may be determined according to service requirements, and this embodiment is not limited thereto.
The N first sub-inputs of the at least one first finger within the first precision control section may be sequentially received while the at least one second finger is not leaving the second precision control section.
Step 302 is performed after receiving N first sub-inputs of at least one first finger of the user within the first precision control section of the preset control section.
Step 302: determining the first video time point in response to the N first sub-inputs.
After receiving the first sub-inputs of at least one first finger of the user in the first precision control partition of the preset control area, the first video time point may be determined from the N first sub-inputs; specifically, the video time point input by the Nth first sub-input may be determined as the first video time point. For example, take the top bar and the bottom bar of a mobile phone as the preset control area: as long as any finger that performed an effective slide in either side bar has not left that bar, the user remains in the current point-selection mode. Suppose the user's effective slide on the upper bar first lands at 10 seconds, and it is inconvenient to reach 2.2 seconds directly with an effective slide on the lower bar in high-precision mode; as long as the finger that performed the effective slide on the upper bar is not released, after the upper bar has been slid to around 2 seconds, the lower bar can continue to slide effectively to 2.2 seconds.
Of course, in the point-selection mode, rough point selection can also be performed by click input; specifically, the user can tap anywhere on the video progress bar with a finger to perform rough and rapid point selection over a wider range. For example, if the user wants to select the 2.2-second point while the cursor is 30 seconds away from the target point, the user only needs to, in the point-selection mode, tap the progress bar on the screen to jump to around the 10-second mark, and then select the point precisely by sliding.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
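The coarse-tap-then-fine-slide sequence above can be sketched as a fold over sub-inputs (input encoding and names are illustrative assumptions):

```python
def apply_inputs(sub_inputs, duration=60.0):
    """Fold a sequence of ('click', t) and ('slide', dt) sub-inputs into a
    time point: a click jumps the cursor to an absolute position on the
    progress bar, a slide nudges it by an offset; the cursor is clamped
    to the video's duration."""
    t = 0.0
    for kind, value in sub_inputs:
        if kind == "click":
            t = value
        elif kind == "slide":
            t += value
        t = min(max(t, 0.0), duration)
    return round(t, 3)

# Cursor far from the target: one tap jumps close, then slides refine.
print(apply_inputs([("click", 10.0), ("slide", -8.0), ("slide", 0.2)]))  # 2.2
```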
Step 303: receiving M second sub-inputs of at least one second finger of the user in a second precision control subarea of the preset control area.
The M times of second sub-input refers to input executed by a user in the second precision control partition by using at least one second finger, wherein M is a positive integer greater than or equal to 1.
When the second video time point of the first video needs to be located, the second precision control partition may be pressed by at least one second finger, and the second sub-inputs are executed in the second precision control partition by the at least one second finger. Of course, whether a second sub-input is valid may depend on whether the first finger has left the first precision control partition: when the first finger has not left the first precision control partition, the input executed by the second finger is a valid input. This is described in detail in the following specific implementation.
In a specific implementation manner of the present application, the step 303 may include:
sub-step M1: sequentially receiving M times of second sub-inputs of the at least one second finger within the second precision control zone without the at least one first finger leaving the first precision control zone.
In this embodiment, the M second sub-inputs may include at least one click input and at least one slide input. It can be understood that the click input and the slide input have no fixed execution order: the user may execute the click input first and then the slide input, or the slide input first and then the click input; the specific order may be determined according to service requirements, and this embodiment is not limited thereto.
The M second sub-inputs of the at least one second finger within the second precision control section may be sequentially received while the at least one first finger is not leaving the first precision control section.
After receiving M second sub-inputs of at least one second finger of the user in the second precision control partition of the preset control area, step 304 is performed.
Step 304: determining the second video time point in response to the M second sub-inputs.
After receiving the second sub-inputs of at least one second finger of the user in the second precision control partition of the preset control area, the second video time point may be determined from the M second sub-inputs; specifically, the video time point input by the Mth second sub-input may be determined as the second video time point. For example, the second sub-inputs may include a click input and a slide input: the click input may place the video time point at 6.5 s, and the slide input then moves the video time point from 6.5 s to 6.8 s, so the second video time point is the time point corresponding to 6.8 s. As shown in fig. 3d, the user taps a position on the video progress bar 4 to perform video time point positioning, as indicated by the video time positioning cursor 10 shown in fig. 3d.
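The rule that the Mth (last) sub-input determines the second video time point can be sketched as follows (the function name is illustrative):

```python
def second_time_point(sub_input_times):
    """The second video time point is the one produced by the Mth (last)
    sub-input; earlier sub-inputs only move the cursor along the way."""
    if not sub_input_times:
        raise ValueError("at least one sub-input is required (M >= 1)")
    return sub_input_times[-1]

# A click places the cursor at 6.5 s, a slide then carries it to 6.8 s.
print(second_time_point([6.5, 6.8]))  # 6.8
```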
After determining the second video time point in response to the M second sub inputs, step 305 is performed.
Step 305: determining the target key frame based on the first video time point and the second video time point under the condition that the at least one first finger and the at least one second finger both leave the preset control area.
The target key frame refers to the video key frame corresponding to the target time point in the first video. In this embodiment, the target key frame may include at least one of video key frames such as a start key frame and an end key frame; of course, the target key frame may also include other video key frames, which may be determined according to service requirements, and this embodiment is not limited thereto.
When both the at least one first finger and the at least one second finger of the user leave the preset control area, it indicates that the user has finished locating the video time point. As shown in fig. 3e, after locating the target video time point, the first finger 2 and the second finger 1 of the user may leave the upper bar area and the lower bar area, respectively, to indicate that video time point positioning is complete. At this time, the target video time point of the first video may be obtained by combining the first video time point and the second video time point, and the target key frame of the first video may be determined according to the target video time point.
After determining the target key frame of the target video time point based on the first video time point and the second video time point, step 306 is performed.
Step 306: performing video capture on the first video based on the target key frame to obtain a second video.
The second video refers to the video obtained after video capture is performed on the first video according to the target key frame. For example, when the target key frame includes a start key frame and an end key frame, the second video is formed from the video frames obtained by cutting the first video at the frames corresponding to the start key frame and the end key frame. Alternatively, when the target key frame includes a start key frame, an end key frame, and three video key frames located between them, the second video is formed from the video frames obtained by cutting at the frames corresponding to these five key frames.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
According to the embodiment of the application, the user positions video time points with at least one finger in each of the first precision control partition and the second precision control partition, which ensures the flexibility of video time point positioning: even if low-precision point selection produces a large error, the position can be adjusted back and forth, and the high-precision and low-precision point-selection modes can alternate, making the user's operation smoother.
When the upper and lower side bars of the electronic device are used as the preset control area, sliding perception is based on detecting effective slides in the upper and lower side bars, so interference from fingers merely touching the side bars of the electronic device can be eliminated, while multiple fingers can still contact the electronic device to hold it.
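Distinguishing an effective slide from a static gripping touch can be sketched with a minimum-travel threshold (the data layout and the 20-pixel threshold are illustrative assumptions):

```python
def effective_slides(touches, min_travel=20.0):
    """Keep only touches that actually travel along a side bar; static
    contacts from fingers merely gripping the device are filtered out.
    `min_travel` (in pixels) is an assumed threshold."""
    return [t for t in touches if abs(t["end_x"] - t["start_x"]) >= min_travel]

touches = [
    {"id": 0, "start_x": 100, "end_x": 400},  # a real slide along the bar
    {"id": 1, "start_x": 250, "end_x": 252},  # a finger just holding the phone
]
print([t["id"] for t in effective_slides(touches)])  # [0]
```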
In addition to the beneficial effects of the video capture method provided in the first embodiment of the application, the video capture method provided in this embodiment allows the user to position the video time point in the first precision control partition and the second precision control partition with multiple fingers, so that during video time point positioning, interference from multiple fingers touching the side bars of the electronic device can be eliminated while the fingers in contact with the electronic device can still hold it.
It should be noted that, in the video capture method provided in the embodiment of the present application, the execution subject may be a video capture device, or a control module in the video capture device for executing the video capture method. In the embodiment of the present application, a video capture device executing the video capture method is taken as an example to describe the video capture method provided in the embodiment of the present application.
Referring to fig. 4, a schematic structural diagram of a video capture apparatus provided in the fourth embodiment of the present application is shown, and as shown in fig. 4, the video capture apparatus may specifically include the following modules:
a first touch input receiving module 410, configured to receive a first touch input of a user in a preset control area during a process of playing a first video;
a first time point determining module 420, configured to determine a first video time point in response to the first touch input;
a second touch input receiving module 430, configured to receive a second touch input of the user in the preset control area;
a second time point determining module 440, configured to determine a second video time point in response to the second touch input, where a time precision of the first video time point is lower than a time precision of the second video time point;
a target key frame determining module 450, configured to determine a target key frame of a target video time point based on the first video time point and the second video time point;
the second video obtaining module 460 is configured to perform video capturing on the first video based on the target key frame to obtain a second video.
Optionally, the preset control area comprises a first precision control partition and a second precision control partition;
the first touch input receiving module 410 includes:
a first sliding input receiving unit, configured to receive a first sliding input of a first finger of a user in the first precision control section;
the first time point determining module 420 includes:
a first time point determining unit for determining the first video time point in response to the first slide input;
the second touch input receiving module 430 includes:
a second sliding input receiving unit, configured to receive a second sliding input of a second finger of the user in the second precision control partition;
the second time point determining module 440 includes:
a second time point determination unit for determining the second video time point in response to the second slide input.
Optionally, the target key frame determining module 450 includes:
a target key frame determining unit, configured to determine the target key frame based on the first video time point and the second video time point when both the first finger and the second finger leave the preset control area;
wherein the first finger does not leave the first precision control section during the input of the second slide input.
Optionally, the first touch input receiving module 410 includes:
the first sub-input receiving unit is used for receiving N times of first sub-inputs of at least one first finger of a user in a first precision control subarea of the preset control area;
the first time point determining module 420 includes:
a first time point obtaining unit, configured to determine the first video time point in response to the N times of first sub-inputs;
the second touch input receiving module 430 includes:
the second sub-input receiving unit is used for receiving M times of second sub-inputs of at least one second finger of the user in a second precision control subarea of the preset control area;
the second time point determining module 440 includes:
a second time point obtaining unit, configured to determine the second video time point in response to the M second sub-inputs;
the target key frame determination module 450 includes:
a target key frame obtaining unit, configured to determine the target key frame based on the first video time point and the second video time point when both the at least one first finger and the at least one second finger leave the preset control area;
wherein N and M are both positive integers greater than or equal to 1.
Optionally, the first sub-input receiving unit includes:
a first sub-input receiving subunit, configured to sequentially receive, when the at least one second finger does not leave the second precision control partition, N times of first sub-inputs of the at least one first finger in the first precision control partition, where the N times of first sub-inputs include at least one click input and at least one slide input;
the first time point acquisition unit includes:
a first time point obtaining subunit, configured to determine a video time point input by the nth first sub-input as a first video time point;
the second sub-input receiving unit includes:
a second sub-input receiving subunit, configured to sequentially receive, when the at least one first finger does not leave the first precision control partition, M times of second sub-inputs of the at least one second finger in the second precision control partition, where the M times of second sub-inputs include at least one click input and at least one slide input;
the second time point acquisition unit includes:
and the second time point acquisition subunit is used for determining the video time point input by the Mth second sub input as a second video time point.
Optionally, the target key frame comprises at least one of: a start key frame and an end key frame.
The video capture device provided by the embodiment of the application receives a first touch input of a user in a preset control area during playing of a first video, determines a first video time point in response to the first touch input, receives a second touch input of the user in the preset control area, and determines a second video time point in response to the second touch input, where the time precision of the first video time point is lower than that of the second video time point; it then determines a target key frame of the target video time point based on the first video time point and the second video time point, and captures the first video based on the target key frame to obtain the second video. In the embodiment of the application, the video time precision is controlled through user input in the preset control area, so the target key frame for video capture can be positioned quickly, improving the positioning accuracy of the video time point; moreover, the user does not need to start a screen recording function to capture the clip, reducing user operations.
The video capture device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a kiosk, and the like, and the embodiments of the present application are not particularly limited.
The video capture device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The video capture device provided in the embodiment of the present application can implement each process implemented by the video capture method in the method embodiments of fig. 1 to fig. 3, and is not described here again to avoid repetition.
Optionally, an electronic device is further provided in this embodiment of the present application, as shown in fig. 5, the electronic device 500 may include a processor 502, a memory 501, and a program or an instruction stored in the memory 501 and executable on the processor 502, where the program or the instruction is executed by the processor 502 to implement each process of the above-mentioned video capture method embodiment, and can achieve the same technical effect, and in order to avoid repetition, it is not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Referring to fig. 6, a schematic structural diagram of another electronic device provided in the embodiment of the present application is shown.
As shown in fig. 6, the electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 610 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described here again.
The processor 610 is configured to: receive a first touch input of a user in a preset control area during playing of a first video; determine a first video time point in response to the first touch input; receive a second touch input of the user in the preset control area; determine a second video time point in response to the second touch input, where the time precision of the first video time point is lower than that of the second video time point; determine a target key frame of a target video time point based on the first video time point and the second video time point; and perform video capture on the first video based on the target key frame to obtain a second video.
According to the embodiment of the application, during playing of the first video, a first touch input of the user in a preset control area is received and a first video time point is determined in response to it; a second touch input of the user in the preset control area is received and a second video time point is determined in response to it, the time precision of the first video time point being lower than that of the second video time point; a target key frame of the target video time point is determined based on the first video time point and the second video time point, and the first video is captured based on the target key frame to obtain the second video. Because the video time precision is controlled through user input in the preset control area, the target key frame for video capture can be positioned quickly, improving the positioning accuracy of the video time point; moreover, the user does not need to start a screen recording function to capture the clip, reducing user operations.
Optionally, the user input unit 607, configured to receive a first touch input of a user in a preset control area, includes:
receiving a first sliding input of a first finger of a user in the first precision control zone;
in the process that the processor 610 determines a first video time point in response to the first touch input, the processor 610 is specifically configured to determine the first video time point in response to the first sliding input;
in the process that the user input unit 607 receives a second touch input of the user in the preset control area, the user input unit 607 is specifically configured to receive a second sliding input of a second finger of the user in the second precision control area;
in the process that the processor 610 determines a second video time point in response to the second touch input, the processor 610 is specifically configured to determine the second video time point in response to the second sliding input.
Optionally, in the process that the processor 610 determines a target key frame of a target video time point based on the first video time point and the second video time point, the processor 610 is specifically configured to determine the target key frame based on the first video time point and the second video time point under the condition that both the first finger and the second finger leave the preset control area;
wherein the first finger does not leave the first precision control partition during the input of the second sliding input.
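For illustration only (a hypothetical sketch, not the claimed implementation), the rule that the selection is committed only once both fingers have lifted can be modeled with a small gesture tracker; the zone names and fractional values are assumptions:

```python
class TwoZoneGesture:
    """Tracks one finger per precision control partition; the selection is
    committed only once both fingers have left their zones, mirroring the
    two-slide flow described above."""
    def __init__(self):
        self.down = {"coarse": False, "fine": False}
        self.value = {"coarse": 0.0, "fine": 0.0}
        self.committed = None  # (coarse_frac, fine_frac) once finalized

    def touch(self, zone, frac):
        # A finger is sliding in the given zone at fractional position frac.
        self.down[zone] = True
        self.value[zone] = frac

    def lift(self, zone):
        self.down[zone] = False
        # Commit only when neither finger remains in its control zone.
        if not any(self.down.values()):
            self.committed = (self.value["coarse"], self.value["fine"])
```

While either finger remains down, `committed` stays `None`, so the fine adjustment can continue without prematurely triggering the interception.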
Optionally, in the process of receiving a first touch input of the user in the preset control area, the user input unit 607 is specifically configured to receive N first sub-inputs of at least one first finger of the user in a first precision control partition of the preset control area;
in the process that the processor 610 determines a first video time point in response to the first touch input, the processor 610 is specifically configured to determine the first video time point in response to the N first sub-inputs;
in the process of receiving a second touch input of the user in the preset control area, the user input unit 607 is specifically configured to receive M second sub-inputs of at least one second finger of the user in a second precision control partition of the preset control area;
in the process that the processor 610 determines a second video time point in response to the second touch input, the processor 610 is specifically configured to determine the second video time point in response to the M second sub-inputs;
in the process that the processor 610 determines a target key frame of a target video time point based on the first video time point and the second video time point, the processor 610 is specifically configured to determine the target key frame based on the first video time point and the second video time point under the condition that both the at least one first finger and the at least one second finger leave the preset control area;
wherein N and M are both positive integers greater than 1.
Optionally, in the process of receiving the N first sub-inputs of at least one first finger of the user in the first precision control partition of the preset control area, the user input unit 607 is specifically configured to sequentially receive the N first sub-inputs of the at least one first finger in the first precision control partition under the condition that the at least one second finger does not leave the second precision control partition, wherein the N first sub-inputs include at least one click input and at least one sliding input;
in the process that the processor 610 determines the first video time point in response to the N first sub-inputs, the processor 610 is specifically configured to determine the video time point input by the Nth first sub-input as the first video time point;
in the process of receiving the M second sub-inputs of at least one second finger of the user in the second precision control partition of the preset control area, the user input unit 607 is specifically configured to sequentially receive the M second sub-inputs of the at least one second finger in the second precision control partition under the condition that the at least one first finger does not leave the first precision control partition, wherein the M second sub-inputs include at least one click input and at least one sliding input;
in the process that the processor 610 determines the second video time point in response to the M second sub-inputs, the processor 610 is specifically configured to determine the video time point input by the Mth second sub-input as the second video time point.
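The "last sub-input wins" rule above (the Nth or Mth sub-input in each partition determines that partition's video time point) can be sketched, purely as an illustration with a hypothetical data representation, as folding a sequence of sub-inputs down to the most recent value per partition:

```python
def resolve_sub_inputs(sub_inputs):
    """Each sub-input is a (partition, time_point_s) pair; within a
    partition, only the last (Nth or Mth) sub-input determines that
    partition's video time point."""
    resolved = {}
    for partition, time_point_s in sub_inputs:
        resolved[partition] = time_point_s  # later sub-inputs overwrite earlier ones
    return resolved
```

So a sequence of clicks and slides in the first partition ending at 12.0 s would yield 12.0 s as the first video time point, regardless of the earlier sub-inputs.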
Optionally, the target key frame comprises at least one of: a start key frame and an end key frame.
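As an illustrative sketch only (the key-frame selection strategy and timestamps below are assumptions, not taken from the disclosure), snapping the two target video time points to the nearest key frames and intercepting the video between the start and end key frames might look like this:

```python
import bisect

def nearest_key_frame(key_frame_times, target_s):
    """Snap a target time point to the closest key frame timestamp.
    key_frame_times must be sorted ascending."""
    i = bisect.bisect_left(key_frame_times, target_s)
    candidates = key_frame_times[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda t: abs(t - target_s))

def intercept(frames, key_frame_times, start_target_s, end_target_s):
    """Cut the first video between the key frames nearest the two
    target time points, yielding the frames of the second video."""
    start = nearest_key_frame(key_frame_times, start_target_s)
    end = nearest_key_frame(key_frame_times, end_target_s)
    return [(t, f) for t, f in frames if start <= t <= end]
```

Cutting on key frames keeps the intercepted clip decodable without re-encoding the leading frames, which is why the embodiments position a key frame rather than an arbitrary frame.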
In the embodiment of the present application, sliding can also be sensed by detecting valid slides only within the first precision control partition and the second precision control partition of the preset control area. This eliminates interference from the sidebar of the electronic device during multi-finger touch, and ensures that fingers merely gripping the electronic device to support it are not mistaken for control inputs.
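A minimal sketch of this zone-based filtering, assuming hypothetical rectangular partition coordinates (nothing here is specified by the disclosure):

```python
def classify_touch(x, y, partitions):
    """Treat a touch as a control input only if it lands inside a precision
    control partition; touches elsewhere (e.g. fingers gripping the edge of
    the device, or sidebar contact) are ignored as support touches."""
    for name, (x0, y0, x1, y1) in partitions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # support touch / sidebar noise: not a valid slide input
```

Only touches that return a partition name would feed the time-point logic; every other contact is discarded.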
It should be understood that, in the embodiment of the present application, the input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042; the graphics processing unit 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 607 includes a touch panel 6071, also referred to as a touch screen, and other input devices 6072. The touch panel 6071 may include two parts: a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 609 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 610 may integrate an application processor, which primarily handles the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 610.
The embodiments of the present application further provide a readable storage medium on which a program or instructions are stored; when executed by a processor, the program or instructions implement the processes of the above video capture method embodiment and can achieve the same technical effects. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video capture method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for video capture, comprising:
receiving a first touch input of a user in a preset control area in the process of playing a first video;
determining a first video time point in response to the first touch input;
receiving a second touch input of the user in the preset control area;
determining a second video time point in response to the second touch input, the time precision of the first video time point being lower than the time precision of the second video time point;
determining a target key frame of a target video time point based on the first video time point and the second video time point;
and performing video interception on the first video based on the target key frame to obtain a second video.
2. The method of claim 1, wherein the preset control area comprises a first precision control partition and a second precision control partition;
the receiving of the first touch input of the user in the preset control area includes:
receiving a first sliding input of a first finger of the user in the first precision control partition;
the determining a first video time point in response to the first touch input comprises:
determining the first video time point in response to the first slide input;
the receiving of the second touch input of the user in the preset control area includes:
receiving a second sliding input of a second finger of the user in the second precision control partition;
the determining a second video time point in response to the second touch input comprises:
determining the second video time point in response to the second slide input.
3. The method of claim 2, wherein determining the target keyframe for the target video point in time based on the first video point in time and the second video point in time comprises:
determining the target key frame based on the first video time point and the second video time point under the condition that the first finger and the second finger both leave the preset control area;
wherein the first finger does not leave the first precision control partition during the input of the second sliding input.
4. The method of claim 1, wherein the receiving a first touch input of a user in a preset control area comprises:
receiving N first sub-inputs of at least one first finger of the user in a first precision control partition of the preset control area;
the determining a first video time point in response to the first touch input comprises:
determining the first video time point in response to the N first sub-inputs;
the receiving of the second touch input of the user in the preset control area includes:
receiving M second sub-inputs of at least one second finger of the user in a second precision control partition of the preset control area;
the determining a second video time point in response to the second touch input comprises:
determining the second video time point in response to the M second sub-inputs;
the determining a target key frame of a target video time point based on the first video time point and the second video time point comprises:
determining the target key frame based on the first video time point and the second video time point under the condition that the at least one first finger and the at least one second finger both leave the preset control area;
wherein N and M are both positive integers greater than 1.
5. The method of claim 4, wherein the receiving of the N first sub-inputs of the at least one first finger of the user in the first precision control partition of the preset control area includes:
sequentially receiving the N first sub-inputs of the at least one first finger in the first precision control partition under the condition that the at least one second finger does not leave the second precision control partition, wherein the N first sub-inputs include at least one click input and at least one sliding input;
the determining the first video time point in response to the N first sub-inputs includes:
determining the video time point input by the Nth first sub-input as the first video time point;
the receiving of the M second sub-inputs of the at least one second finger of the user in the second precision control partition of the preset control area includes:
sequentially receiving the M second sub-inputs of the at least one second finger in the second precision control partition under the condition that the at least one first finger does not leave the first precision control partition, wherein the M second sub-inputs include at least one click input and at least one sliding input;
the determining the second video time point in response to the M second sub-inputs includes:
determining the video time point input by the Mth second sub-input as the second video time point.
6. The method of claim 1, wherein the target keyframe comprises at least one of: a start key frame and an end key frame.
7. A video capture device, comprising:
the first touch input receiving module is used for receiving a first touch input of a user in a preset control area in the process of playing a first video;
a first time point determination module for determining a first video time point in response to the first touch input;
the second touch input receiving module is used for receiving second touch input of a user in the preset control area;
a second time point determining module, configured to determine a second video time point in response to the second touch input, where a time precision of the first video time point is lower than a time precision of the second video time point;
a target key frame determination module, configured to determine a target key frame of a target video time point based on the first video time point and the second video time point;
and the second video acquisition module is used for carrying out video interception on the first video based on the target key frame to obtain a second video.
8. The apparatus of claim 7, wherein the preset control area comprises a first precision control partition and a second precision control partition;
the first touch input receiving module includes:
a first sliding input receiving unit, configured to receive a first sliding input of a first finger of the user in the first precision control partition;
the first time point determination module comprises:
a first time point determining unit for determining the first video time point in response to the first slide input;
the second touch input receiving module includes:
a second sliding input receiving unit, configured to receive a second sliding input of a second finger of the user in the second precision control partition;
the second time point determination module includes:
a second time point determination unit for determining the second video time point in response to the second slide input.
9. The apparatus of claim 8, wherein the target keyframe determination module comprises:
a target key frame determining unit, configured to determine the target key frame based on the first video time point and the second video time point when both the first finger and the second finger leave the preset control area;
wherein the first finger does not leave the first precision control partition during the input of the second sliding input.
10. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the video capture method according to any one of claims 1 to 6.
CN202010615641.3A 2020-06-30 2020-06-30 Video capturing method and device and electronic equipment Pending CN111818390A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010615641.3A CN111818390A (en) 2020-06-30 2020-06-30 Video capturing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111818390A (en) 2020-10-23

Family

ID=72856514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010615641.3A Pending CN111818390A (en) 2020-06-30 2020-06-30 Video capturing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111818390A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113242464A (en) * 2021-01-28 2021-08-10 维沃移动通信有限公司 Video editing method and device
CN114915851A (en) * 2022-05-31 2022-08-16 展讯通信(天津)有限公司 Video recording and playing method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5388197A (en) * 1991-08-02 1995-02-07 The Grass Valley Group, Inc. Video editing system operator inter-face for visualization and interactive control of video material
CN104822091A (en) * 2015-04-29 2015-08-05 努比亚技术有限公司 Video playing progress control method and device and mobile terminal
CN106231439A (en) * 2016-07-21 2016-12-14 乐视控股(北京)有限公司 A kind of video segment intercept method and device
CN107846631A (en) * 2017-10-31 2018-03-27 珠海市魅族科技有限公司 Video clipping method and device, computer equipment and storage medium
CN108062196A (en) * 2017-12-14 2018-05-22 维沃移动通信有限公司 The adjusting method and mobile terminal of a kind of playing progress rate
CN110446096A (en) * 2019-08-15 2019-11-12 天脉聚源(杭州)传媒科技有限公司 Video broadcasting method, device and storage medium a kind of while recorded
CN110677720A (en) * 2019-09-26 2020-01-10 腾讯科技(深圳)有限公司 Method, device and equipment for positioning video image frame and computer storage medium
CN110933505A (en) * 2019-10-31 2020-03-27 维沃移动通信有限公司 Progress adjusting method and electronic equipment

Similar Documents

Publication Publication Date Title
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN112887802A (en) Video access method and device
CN112887618B (en) Video shooting method and device
CN112887794B (en) Video editing method and device
CN111818390A (en) Video capturing method and device and electronic equipment
CN112954199A (en) Video recording method and device
CN112836086A (en) Video processing method and device and electronic equipment
CN112911401A (en) Video playing method and device
CN113596555A (en) Video playing method and device and electronic equipment
CN110750743B (en) Animation playing method, device, equipment and storage medium
WO2024153191A1 (en) Video generation method and apparatus, electronic device, and medium
CN113852756B (en) Image acquisition method, device, equipment and storage medium
CN104219578A (en) Video processing method and video processing device
CN112783406B (en) Operation execution method and device and electronic equipment
CN112328829A (en) Video content retrieval method and device
CN112711368A (en) Operation guidance method and device and electronic equipment
CN115756275A (en) Screen capture method, screen capture device, electronic equipment and readable storage medium
CN113132778B (en) Method and device for playing video, electronic equipment and readable storage medium
CN115002551A (en) Video playing method and device, electronic equipment and medium
CN114286160A (en) Video playing method and device and electronic equipment
CN111857496A (en) Operation execution method and device and electronic equipment
CN112764648A (en) Screen capturing method and device, electronic equipment and storage medium
CN113923392A (en) Video recording method, video recording device and electronic equipment
CN113473012A (en) Virtualization processing method and device and electronic equipment
CN112732392A (en) Operation control method and device for application program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201023