CN110012337B - Video interception method and device and electronic equipment - Google Patents

Video interception method and device and electronic equipment

Info

Publication number
CN110012337B
CN110012337B (application CN201910244123.2A)
Authority
CN
China
Prior art keywords
video
point
mobile terminal
cut point
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910244123.2A
Other languages
Chinese (zh)
Other versions
CN110012337A (en)
Inventor
许枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201910244123.2A
Publication of CN110012337A
Application granted
Publication of CN110012337B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/433: Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The application discloses a video interception method, a video interception apparatus and an electronic device. The method comprises: obtaining a video to be processed and a motion state data set corresponding to the video, wherein the video is acquired by a mobile terminal and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video; determining, from the video according to the motion state data set corresponding to the video, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition; and, if the video cut point meets a second condition, intercepting at least one video segment from the video by using the video cut point. The scheme of the application makes it possible to more reliably intercept higher-quality video segments from a video.

Description

Video interception method and device and electronic equipment
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video capture method and apparatus, and an electronic device.
Background
Video analysis has been applied in many fields. For example, when a user uploads a video to a network platform, the platform needs to analyze the type of the video in order to store or present it by category.
Video analysis often involves video interception; for example, a plurality of video segments are intercepted from a video so that the segments can be analyzed. However, if the quality of a video segment intercepted from the video is poor, the result of the video analysis may be strongly affected and the analysis may be biased. How to reliably intercept video segments from a video is therefore a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a video interception method, a video interception apparatus and an electronic device, so as to more reliably intercept higher-quality video segments from a video.
In order to achieve the purpose, the application provides the following technical scheme:
a video capture method, comprising:
obtaining a video to be processed and a motion state data set corresponding to the video, wherein the video is acquired by a mobile terminal, and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video;
determining, from the video according to the motion state data set corresponding to the video, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition;
and if the video cut point meets a second condition, intercepting at least one video segment from the video by using the video cut point.
Optionally, the motion state data set includes: a set of sensor data for characterizing a motion state of the mobile terminal; the sensor data set comprises sensor data acquired by at least one sensor of the mobile terminal at different moments in the process of acquiring the video;
the determining, from the video, at least one video cut point at which the stability of the mobile terminal corresponding to the acquisition time meets a first condition according to the motion state data set corresponding to the video includes:
and determining at least one video cut point, of which the variation amplitude of the sensor data corresponding to the acquisition time is greater than the amplitude threshold value, from the video according to at least one sensor data acquired by the mobile terminal at different acquisition times.
Optionally, if the video cut point satisfies a second condition, intercepting at least one video segment from a video by using the video cut point, including:
if the candidate video segment between the video cut point and the nearest video cut point before the video cut point meets a second condition, at least one video segment is intercepted from the candidate video segment.
Optionally, the candidate video segment between the video cut point and the nearest video cut point before the video cut point satisfies a second condition, including:
and the video quality of the candidate video segment between the video cut point and the nearest video cut point before the video cut point meets the quality requirement.
Optionally, the candidate video segment between the video cut point and the nearest video cut point before the video cut point satisfies a second condition, including:
the video duration of the candidate video segment between the video cut point and the nearest video cut point before the video cut point exceeds a duration threshold.
Optionally, before the intercepting at least one video segment from the video by using the video cut point if the video cut point satisfies the second condition, the method further includes:
determining at least one video interception point from the video according to the video interception duration;
the intercepting at least one video segment from the video by using the video cut point comprises:
and intercepting at least one video segment from the video by using the video cutting point and the at least one video interception point.
In another aspect, the present application further provides a video capture apparatus, including:
the video acquisition unit is used for acquiring a video to be processed and a motion state data set corresponding to the video, wherein the video is acquired by a mobile terminal and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video;
the cut point determining unit is used for determining, from the video according to the motion state data set corresponding to the video, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition;
and the video interception unit is used for intercepting at least one video segment from the video by using the video cut point if the video cut point meets a second condition.
Preferably, the motion state data set acquired by the video acquiring unit includes: a set of sensor data for characterizing a motion state of the mobile terminal; the sensor data set comprises sensor data acquired by at least one sensor of the mobile terminal at different moments in the process of acquiring the video;
the cut point determining unit is specifically configured to determine, from the video, at least one video cut point at which a change amplitude of sensor data corresponding to the acquisition time is greater than an amplitude threshold value, according to at least one type of sensor data acquired by the mobile terminal at different acquisition times.
Preferably, the video capture unit is specifically configured to capture at least one video segment from the candidate video segments if the candidate video segment between the video cut point and the nearest video cut point before the video cut point satisfies a second condition.
In another aspect, the present application further provides an electronic device, including:
the communication interface is used for acquiring a video to be processed and a motion state data set corresponding to the video, wherein the video is acquired by a mobile terminal and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video;
the processor is used for determining, from the video according to the motion state data set corresponding to the video, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition, and for intercepting at least one video segment from the video by using the video cut point if the video cut point meets a second condition.
According to the above scheme, the motion state data of the mobile terminal at the different acquisition moments of the video is obtained together with the video to be processed. By setting the first condition and the second condition, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal is low can be determined from the video based on that motion state data, and at least one video segment of good quality can then be obtained based on the video cut point, so that the obtained video segments are better suited to video analysis.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a video capture method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another video capture method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of each video segment divided by a video capture point and a video cut point in the present application;
fig. 4 is a schematic flowchart of a video capture method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video capture apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic composition diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The video interception method of the present application is suitable for electronic devices such as desktop computers and notebook computers, and helps to intercept video segments whose pictures are highly stable, thereby improving the quality of the intercepted video segments.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present disclosure.
Please refer to fig. 1, which is a schematic flow chart of a video capture method according to an embodiment of the present application, where the method according to the embodiment may include:
s101, acquiring a video to be processed and a motion state data set corresponding to the video.
The video is acquired by a mobile terminal, and the motion state data set includes motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video.
For example, while collecting the video, the mobile terminal may continuously collect its own motion state data, so that the video frame collected at each acquisition moment corresponds to the motion state data of the mobile terminal at that moment.
For another example, while collecting the video, the mobile terminal may collect its motion state data at a set interval.
It can be understood that the motion state data of the mobile terminal can reflect the stability of the mobile terminal while the user is shooting the video. For example, when the mobile terminal is shaking or its position changes abruptly and repeatedly, its stability is poor, and accordingly its motion state data also changes greatly.
The stability of the mobile terminal affects the quality and steadiness of the pictures it shoots. For example, when the mobile terminal shakes, the scene in the shot pictures changes abruptly and the picture quality is poor.
It is understood that the motion state data of the mobile terminal may have many different possibilities, for example, the motion state data of the mobile terminal may be location change data, etc.
Optionally, the motion state data of the mobile terminal may be sensor data acquired by at least one sensor in the mobile terminal. Accordingly, the motion state data set may include a sensor data set used for representing the motion state of the mobile terminal, the sensor data set comprising sensor data acquired by at least one sensor of the mobile terminal at different moments during acquisition of the video.
For example, the sensor data may include one or more of the following (a minimal data-structure sketch is given after this list):
gyroscope data collected by a gyroscope in the mobile terminal, for example the angular velocity of the mobile terminal, which may comprise three angular velocities about mutually perpendicular axes;
acceleration collected by an acceleration sensor in the mobile terminal;
and gravity data collected by a gravity sensor in the mobile terminal.
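As a purely illustrative sketch (not part of the disclosure), such a motion state data set can be represented as a time-ordered list of timestamped sensor samples. The field names and types below are assumptions chosen for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class MotionSample:
    """One motion state record captured while the video is being recorded."""
    timestamp: float                      # acquisition moment, seconds from video start
    gyro: Tuple[float, float, float]      # angular velocity about three perpendicular axes (rad/s)
    accel: Optional[Tuple[float, float, float]] = None    # accelerometer reading, if available
    gravity: Optional[Tuple[float, float, float]] = None  # gravity sensor reading, if available

# The motion state data set of a video is then simply a time-ordered list of samples.
MotionStateDataSet = List[MotionSample]
```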
S102, according to the motion state data set corresponding to the video, determine from the video at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets the first condition.
A video cut point is the position, in the video, of a video frame acquired at an acquisition moment at which the stability of the mobile terminal meets the first condition.
It can be understood that the motion state data set corresponding to the video may reflect the motion state data of the mobile terminal at different acquisition moments during the process of shooting the video by the mobile terminal, and therefore, the stability of the mobile terminal at different acquisition moments may be determined based on the motion state data set. Correspondingly, the video position points acquired at each acquisition moment when the stability of the mobile terminal meets the first condition can be determined in the video, and then the video cutting points are obtained.
In the embodiment of the present application, the first condition may be set as needed.
Considering that video collected while the stability of the mobile terminal is poor has low quality and is not suitable for video analysis, the video positions corresponding to acquisition moments at which the stability of the mobile terminal is poor should not be used as content of the video segments to be analyzed. Accordingly, the first condition is a condition characterizing that the stability of the mobile terminal is unsuitable for image acquisition. For example, the first condition may be that the variation amplitude of the stability exceeds a set amplitude.
Optionally, when the motion state data set includes sensor data acquired by at least one sensor of the mobile terminal at different moments during acquisition of the video, at least one video cut point at which the variation amplitude of the sensor data corresponding to the acquisition moment is greater than the amplitude threshold may be determined from the video according to the at least one type of sensor data acquired by the mobile terminal at different acquisition moments.
For example, taking the sensor data as gyroscope data, if the variation amplitude of the gyroscope data of the mobile terminal in any axial direction at a certain acquisition moment exceeds the set threshold, the stability of the mobile terminal at that moment is determined to meet the first condition, and the video position corresponding to that acquisition moment is determined as a video cut point.
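A minimal sketch of this cut point detection is given below. It assumes the MotionSample structure from the earlier sketch, compares consecutive gyroscope readings per axis, and reports the acquisition moments whose change exceeds the amplitude threshold; the function name and threshold handling are illustrative assumptions, not the patented implementation itself.

```python
def find_cut_points(samples, amplitude_threshold):
    """Return acquisition times at which the change in any gyroscope axis
    between consecutive samples exceeds the threshold (first condition)."""
    cut_points = []
    for prev, cur in zip(samples, samples[1:]):
        # Largest per-axis change of the angular velocity between two samples.
        change = max(abs(c - p) for p, c in zip(prev.gyro, cur.gyro))
        if change > amplitude_threshold:
            cut_points.append(cur.timestamp)  # video position of this acquisition moment
    return cut_points
```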
S103, if the video cut point meets a second condition, at least one video segment is cut from the video by using the video cut point.
The second condition is different from the first condition, and the second condition is a condition for further determining whether the video segment is suitable for being captured based on the video cut point, and may be specifically set as required.
In one possible case, whether the video cut point is suitable as the cut point for cutting the video segment can be analyzed by combining the characteristics of the video segment which can be cut based on the video cut point. If the candidate video segment between the video cut point and the nearest video cut point before the video cut point meets the second condition, at least one video segment is cut out of the candidate video segment.
If the candidate video segment between the video cut point and the nearest video cut point before it satisfies the second condition, the candidate video segment is suitable for being intercepted as a video segment for video analysis.
For example, in one implementation, a video cut point is considered to satisfy the second condition if the video quality of the candidate video segment between the video cut point and the nearest video cut point before the video cut point meets the quality requirement. The video quality of the candidate video segment can be reflected in terms of the image quality, resolution, and the like of the candidate video segment.
For another example, in yet another implementation, if the video duration of the candidate video segment between the video cut point and the nearest video cut point before it exceeds a duration threshold, the video cut point is considered to satisfy the second condition. It can be understood that if the candidate video segment between two adjacent video cut points is too short, there is no need to analyze it: any video segment intercepted on the basis of those two cut points would also necessarily be short and would not meet the video analysis requirement. Conversely, if the candidate video segment between two adjacent video cut points is long enough, video segment interception can be performed on the basis of those two cut points.
It is understood that, in the case that it is determined that the video cut points satisfy the second condition, the video cut points that do not satisfy the second condition may be discarded, and only the video cut points that satisfy the second condition may be used to cut out at least one video segment with higher image quality from the video.
In specific implementation, various approaches are possible. For example, among the video start point, the video end point and the video cut points satisfying the second condition, any two adjacent points delimit a video segment; each such segment that does not contain a video cut point failing the second condition can be determined from the video and then intercepted.
For example, assume that 3 video cut points, video cut point 1, video cut point 2 and video cut point 3, are determined in the video, and that video cut point 2 does not meet the second condition. The segments delimited by the start point of the video, its end point, video cut point 1 and video cut point 3 that contain no failing cut point are: video segment 1, from the start point of the video to video cut point 1, and video segment 2, from video cut point 3 to the end point of the video. Video cut point 2 is therefore not used as a position for determining and intercepting video segments.
Specifically, for each video cut point, if the candidate video segment between the video cut point and the video cut point immediately before it satisfies the second condition, the video cut point is determined to satisfy the second condition. In that case, since the quality of the candidate video segment meets the requirement, at least one video segment may be intercepted from it: the candidate video segment may be intercepted as a single video segment, or a plurality of video segments may be intercepted from it.
For example, assume again that video cut point 1, video cut point 2 and video cut point 3 are determined in the video in order, and that the candidate video segment between video cut point 2 and the nearest previous cut point, i.e. video cut point 1, does not meet the second condition. In this case, the candidate video segments in the video that meet the second condition are: video segment S1 between the start point of the video and video cut point 1, video segment S2 between video cut point 2 and video cut point 3, and video segment S3 between video cut point 3 and the end point of the video, and at least one video segment can be intercepted from each of S1, S2 and S3. Compared with the previous example, more candidate video segments are available for interception, and each candidate video segment was acquired while the mobile terminal was stable, so the stability of the pictures in the finally intercepted video segments is ensured, which is beneficial to video analysis.
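A minimal sketch of this selection logic, using the duration form of the second condition, is shown below. It splits the video at all cut points and keeps only the candidate segments whose duration exceeds the threshold, matching the S1/S2/S3 example above; the function and parameter names are assumptions for illustration.

```python
def candidate_segments(cut_points, video_duration, min_duration):
    """Split the video at the cut points and keep only candidate segments
    that satisfy the duration form of the second condition."""
    boundaries = [0.0] + sorted(cut_points) + [video_duration]
    segments = []
    for start, end in zip(boundaries, boundaries[1:]):
        if end - start > min_duration:   # second condition: duration threshold
            segments.append((start, end))
    return segments

# Example: cut points at 10 s, 12 s and 40 s in a 60 s video with a 5 s threshold
# yield [(0.0, 10), (12, 40), (40, 60)]; the segment (10, 12) is discarded,
# mirroring candidate segments S1, S2 and S3 in the example above.
print(candidate_segments([10, 12, 40], 60, 5))
```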
According to the above scheme, the motion state data of the mobile terminal at the different acquisition moments of the video is obtained together with the video to be processed. By setting the first condition and the second condition, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal is low can be determined from the video based on that motion state data, and at least one video segment of good quality can then be obtained based on the video cut point, so that the obtained video segments are better suited to video analysis.
It can be understood that, in practical applications, a video interception duration is generally set and video segments are intercepted from the video according to that duration. The present application can therefore intercept at least one video segment from the video by combining the video cut points with the video interception points determined according to the video interception duration.
For example, referring to fig. 2, which shows a schematic flow chart of another embodiment of the video capture method of the present application, the method of the present embodiment may include:
s201, a video to be processed and a motion state data set corresponding to the video are obtained.
The video is acquired by a mobile terminal, and the motion state data set includes motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video.
S202, according to the motion state data set corresponding to the video, determine from the video at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets the first condition.
The above steps S201 and S202 can refer to the related descriptions of the previous embodiments, and are not described herein again.
S203, determining at least one video interception point from the video according to the video interception duration.
The video intercepting time length can be preset according to needs or input by a user.
It is understood that there may be various ways to determine the video capture point based on the video capture duration. For example, in one possible implementation, at least one video capture point may be determined from the video according to the video capture duration, so that the video duration between any two adjacent video capture points is the video capture duration.
For example, if video segments with a duration of 100 seconds are needed, the video interception duration may be set to 100 seconds, so that a video interception point is placed every 100 seconds of the video.
In the above manner, all positions in the video that can serve as video interception points are determined at once according to the video interception duration.
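A short sketch of this first manner, placing interception points at a fixed spacing, is given below; it is an assumption-level illustration, with clip_duration standing in for the video interception duration.

```python
def interception_points(video_duration, clip_duration):
    """Place a video interception point every clip_duration seconds,
    determining all candidate positions up front."""
    points, t = [], clip_duration
    while t < video_duration:
        points.append(t)
        t += clip_duration
    return points

# Example: a 250 s video with a 100 s interception duration gives points [100, 200].
print(interception_points(250, 100))
```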
In practical applications, it is also possible to determine only the video interception point currently needed, following the order of the frames in the video, and to determine the next video interception point after a video segment has been intercepted based on the current one. In this case, step S203 may be to determine the current video interception point from the video according to the video interception duration: starting from the current video interception start point, the position that lies the video interception duration after that start point is taken as the current video interception point. The video segment to be intercepted currently can then be determined based on this video interception point.
It should be noted that the sequence of steps S202 and S203 is not limited to that shown in fig. 2, and in practical applications, the steps S202 and S203 may be executed simultaneously, or the step S203 is executed first and then the step S202 is executed.
S204, if the video cut point meets the second condition, at least one video segment is intercepted from the video by utilizing the video cut point and the at least one video interception point.
For the second condition that the video cut point needs to satisfy, reference may be made to the related description of the foregoing embodiments.
In this embodiment, the video cut point and the at least one video interception point may be used in various ways to intercept at least one video segment from the video. For example, among the video start point, the video end point, the video interception points and the video cut points meeting the second condition, any two adjacent points delimit a video segment; each such segment that does not contain a video cut point failing the second condition is determined from the video and then intercepted.
For another example, when it is necessary to determine that candidate video segments satisfy the second condition, at least one video segment can be intercepted from each candidate video segment that satisfies it: if the candidate video segment contains no video interception point, it is intercepted as one video segment; if it contains at least one video interception point, it is intercepted using the at least one video interception point.
For ease of understanding, the following are illustrated:
as shown in fig. 3, a schematic diagram of determining a plurality of video capture points and a plurality of video cut points from a video is shown.
As can be seen from fig. 3, two video interception points and three video cutting points are located between the start point and the end point of the video, where the two video interception points are the interception point 1 and the interception point 2 in fig. 3 in turn, and the three video cutting points are the cutting point 1, the cutting point 2, and the cutting point 3 in turn.
Wherein, the candidate video segment between the cut point 2 and the previous cut point 1 does not satisfy the second condition, such as the duration of the candidate video segment is too short or the video effect is poor. And the candidate video segments between the other cut point and the nearest previous cut point satisfy the second condition. In this case, the video segment between the starting point of the video and the cut point 1 of the video can be cut, and the video segment between the cut point 1 and the cut point 2 can be directly discarded without being cut.
Correspondingly, the nearest video cut point before cut point 3 is cut point 2, and there are two video interception points inside the candidate video segment formed between cut point 2 and cut point 3; these two interception points divide that candidate segment into 3 parts, and accordingly these 3 video segments can be intercepted. In this way, a video segment between cut point 2 and interception point 1, a video segment between interception point 1 and interception point 2, and a video segment between interception point 2 and cut point 3 can be intercepted.
Finally, the segment between cut point 3 and the end point of the video can also be intercepted as a video segment. Of course, in practical applications, for a segment formed by a cut point and the start point or end point of the video, it may first be judged whether the segment meets the second condition; if so, the corresponding video segment is intercepted, and otherwise it is not.
It can be understood that if video segments were intercepted from the video in fig. 3 only according to the video interception duration, three video segments would be obtained: the segment between the start point of the video and interception point 1, the segment between interception point 1 and interception point 2, and the segment between interception point 2 and the end point of the video.
However, as can be seen from fig. 3, the segment between the start point of the video and interception point 1 contains a video position acquired at a moment when the mobile terminal was unstable, e.g. shaking, namely cut point 1 (fig. 3 shows a single cut point only as an example; in practice, once the mobile terminal shakes, every position acquired within a continuous period of time may be a cut point with unstable pictures). Such a segment may therefore contain unstable pictures such as abrupt scene changes, and an analysis result obtained from it would deviate considerably. The segment between interception point 2 and the end point of the video has the same problem.
By contrast, as can be seen from the above description, the embodiment of the present application fully considers the interference caused by video shot while the mobile terminal was unstable: video segments are not only intercepted according to the interception points determined from the video interception duration, but the positions in the video acquired while the mobile terminal was unstable are also taken into account. The video is thus intercepted more finely, no position in any intercepted video segment was acquired while the mobile terminal was unstable, and the stability of the picture quality of each intercepted video segment is guaranteed.
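The fig. 3 example can be summarized with the following sketch, which reuses the candidate_segments helper from the earlier sketch and then subdivides each surviving candidate segment at any interception points that fall inside it; the names and the duration form of the second condition are illustrative assumptions.

```python
def intercept_segments(cut_points, interception_points, video_duration, min_duration):
    """Combine cut points (instability) and interception points (fixed duration):
    drop candidate segments between cut points that fail the second condition,
    then split the surviving candidates at the interception points inside them."""
    segments = []
    for start, end in candidate_segments(cut_points, video_duration, min_duration):
        inner = [p for p in sorted(interception_points) if start < p < end]
        bounds = [start] + inner + [end]
        segments.extend(zip(bounds, bounds[1:]))
    return segments

# With cut points at 20 s, 25 s and 90 s, interception points at 40 s and 70 s,
# a 120 s video and a 10 s threshold, this yields the five segments of fig. 3:
# start-cut1, cut2-intercept1, intercept1-intercept2, intercept2-cut3, cut3-end.
print(intercept_segments([20, 25, 90], [40, 70], 120, 10))
```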
It is understood that, since video interception points exist in this embodiment, the second condition can take further forms: the video cut point satisfying the second condition may also mean that the relationship between the video cut point and a video interception point satisfies the second condition. For example, when at least one video interception point lies between the video cut point and the nearest video cut point before it, the interception point that precedes and is closest to the video cut point is determined from the at least one interception point, and it is detected whether the candidate video segment between the video cut point and that preceding interception point satisfies the second condition.
For another example, when at least one video interception point lies between the video cut point and the video cut point nearest to it, the interception point that follows and is closest to the video cut point is determined from the at least one interception point, and it is detected whether the candidate video segment between the video cut point and that following interception point satisfies the second condition.
Whether the candidate video segment meets the second condition may be judged by whether its video quality meets the quality requirement or whether its duration exceeds the duration threshold.
It can be understood that, by setting the above two forms of the second condition, video cut points that are too close to a video interception point, or that form a video segment of poor quality with a video interception point, can be discarded, so that the video segments that would otherwise be determined based on those cut points are not intercepted.
It is understood that, in the present embodiment, the video cut point satisfying the second condition may include one or more of the above two cases and the above-mentioned cases of the previous embodiments.
As can be understood from the above description, when actually intercepting video segments, the present application may either intercept a segment each time an interceptable segment is determined, or intercept the segments after all video cut points and video interception points have been determined. For ease of understanding, the scheme of the present application is described below for one of these cases.
For example, referring to fig. 4, which shows a schematic flow chart of a video capture method in an application scenario, the method of this embodiment may include:
s401, a video to be processed and a motion state data set corresponding to the video are obtained.
The video is acquired by the mobile terminal, and the motion state data set is a sensor data set used for representing the motion state of the mobile terminal. The sensor data set comprises sensor data acquired by at least one sensor of the mobile terminal at different moments in time during the process of acquiring the video.
S402, according to at least one type of sensor data acquired by the mobile terminal at different acquisition moments, at least one video cutting point of which the variation amplitude of the sensor data corresponding to the acquisition moment is larger than an amplitude threshold value is determined from the video.
Steps S401 and S402 above are described by taking one case as an example; the other forms of the motion state data set and the other manners of determining video cut points described earlier are also applicable to this embodiment, and reference may be made to the related descriptions of the foregoing embodiments, which are not repeated here.
And S403, determining a current video interception starting point in the video, and determining a video interception end point corresponding to the video interception starting point from the video according to the set video interception duration.
The position in the video that lies the video interception duration after the video interception start point is the current video interception end point.
It can be understood that the video interception end point corresponds to the video interception point mentioned in the foregoing embodiments; since one interception end point is determined for each interception start point in turn, it is called the video interception end point here for ease of understanding. If the time between the end of the video and the video interception start point is less than the video interception duration, the end of the video is determined as the video interception end point.
The video interception start point differs on each iteration: the first time the segments to be intercepted are determined, the interception start point is the start point of the video, and the corresponding interception end point is then determined; afterwards, the previous interception end point is taken as the next interception start point, and the cycle continues in this way until the required video segments have been intercepted from the whole video.
S404, if there is no video cut point between the video interception start point and the video interception end point, intercept the video segment between the video interception start point and the video interception end point, and execute step S407.
Optionally, before the video segment between the video capture start point and the video capture end point is captured in step S404, it may also be determined whether the video segment meets the second condition, for example, whether the quality of the video segment meets the quality requirement is determined, and if so, the video segment is captured.
S405, if at least one video cut point exists between the video interception start point and the video interception end point, determine the at least two candidate video segments delimited by any two adjacent points among the video interception start point, the video interception end point and the at least one video cut point.
This step S405 essentially determines the at least two candidate video segments, between the video interception start point and the video interception end point, into which the at least one video cut point divides that interval.
If a video cut point exists between the video capture starting point and the video capture ending point, a video segment between the video capture starting point and the video cut point is a candidate video segment, and a video segment between the video cut point and the video capture ending point is a candidate video segment.
S406, for each candidate video segment, in case that the candidate video segment satisfies the second condition, the candidate video segment is intercepted, and step S407 is executed.
For example, if the duration of the candidate video segment exceeds the set duration threshold, the candidate video segment is intercepted; otherwise it is not. Alternatively, if the quality of the candidate video segment meets the requirement, it is intercepted; otherwise it is discarded.
S407, detect whether the video interception end point is the end of the video; if so, the process ends; if not, the video interception end point is determined as the current video interception start point, and the process returns to step S403.
That is, after the video segments to be intercepted within one interception-duration interval have been determined, the interval corresponding to the next interception duration is analyzed, until all intervals of the video have been processed.
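The iterative flow of steps S403 to S407 can be sketched as follows. It is a self-contained assumption-level illustration, using the duration form of the second condition and treating cut points and boundaries as timestamps in seconds.

```python
def iterative_interception(cut_points, video_duration, clip_duration, min_duration):
    """Walk through the video one interception window at a time (S403-S407)
    and split each window at the cut points that fall inside it."""
    clips = []
    start = 0.0
    while start < video_duration:
        end = min(start + clip_duration, video_duration)      # S403: interception end point
        inner = sorted(p for p in cut_points if start < p < end)
        if not inner:                                          # S404: no cut point in the window
            clips.append((start, end))
        else:                                                  # S405/S406: candidate sub-segments
            bounds = [start] + inner + [end]
            for s, e in zip(bounds, bounds[1:]):
                if e - s > min_duration:                       # second condition (duration form)
                    clips.append((s, e))
        start = end                                            # S407: next interception start point
    return clips

# Example: cut points at 130 s and 135 s in a 300 s video, 100 s windows, 10 s threshold.
print(iterative_interception([130, 135], 300, 100, 10))
```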
The application also provides a video capturing device corresponding to the video capturing method.
For example, referring to fig. 5, which shows a schematic structural diagram of a video capture apparatus of the present application, the apparatus of this embodiment may include:
a video obtaining unit 501, configured to obtain a video to be processed and a motion state data set corresponding to the video, where the video is acquired by a mobile terminal and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video;
a cut point determining unit 502, configured to determine, from the video according to the motion state data set corresponding to the video, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition;
the video capture unit 503 is configured to capture at least one video segment from the video by using the video cut point if the video cut point satisfies the second condition.
In a possible implementation manner, the motion state data set acquired by the video acquisition unit includes: a set of sensor data for characterizing a motion state of the mobile terminal; the sensor data set comprises sensor data acquired by at least one sensor of the mobile terminal at different moments in the process of acquiring the video;
correspondingly, the cut point determining unit is specifically configured to determine, from the video, at least one video cut point at which the variation amplitude of the sensor data corresponding to the acquisition time is greater than the amplitude threshold value, according to at least one type of sensor data acquired by the mobile terminal at different acquisition times.
In another possible implementation manner, the video capture unit is specifically configured to capture at least one video segment from the candidate video segments if the candidate video segment between the video cut point and the nearest video cut point before the video cut point satisfies a second condition.
Optionally, the video capture unit may include:
the first video intercepting subunit is configured to intercept at least one video segment from the candidate video segments if the video quality of the candidate video segment between the video cut point and a nearest video cut point before the video cut point meets a quality requirement.
Optionally, the video capture unit may include:
and the second video intercepting subunit is configured to intercept at least one video segment from the candidate video segments if the video duration of the candidate video segment between the video cut point and the nearest video cut point before the video cut point exceeds a duration threshold.
Optionally, in an embodiment of the above apparatus, the apparatus further includes:
an interception point determining unit, configured to determine at least one video interception point from the video according to a video interception duration before the video interception unit intercepts at least one video segment from the video by using the video cut point;
the video capture unit is specifically configured to capture at least one video segment from the video by using the video cut point and the at least one video capture point when capturing the at least one video segment from the video by using the video cut point.
On the other hand, the application also provides an electronic device. Referring to fig. 6, which shows a schematic structural diagram of an electronic device according to the present application, the electronic device according to the present application may include:
a communication interface 601, configured to obtain a video to be processed and a motion state data set corresponding to the video, where the video is acquired by a mobile terminal and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video;
a processor 602, configured to determine, from the video according to the motion state data set corresponding to the video, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition, and to intercept at least one video segment from the video by using the video cut point if the video cut point meets a second condition.
The video and the motion state data acquired by the communication interface may be referred to in the related description of the foregoing method embodiment, and correspondingly, the operation performed by the processor may also be referred to in the related description of the foregoing video capturing method, which is not described herein again.
Optionally, the electronic device of the present application may further include a memory 603 for storing data required by the processor to execute the above program.
Of course, the electronic device of the present application may further include a display screen, an input unit, and the like, which is not limited herein.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A video capture method, comprising:
obtaining a video to be processed and a motion state data set of a mobile terminal corresponding to the video when the video is acquired, wherein the video is acquired by the mobile terminal and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video;
determining, from the video according to the motion state data set of the mobile terminal corresponding to the video when the video is acquired, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition, the first condition being a condition characterizing that the stability of the mobile terminal is unsuitable for image acquisition;
if the video cut point meets a second condition, intercepting at least one video segment from the video by using the video cut point, so that the at least one intercepted video segment at least does not include the video picture corresponding to the video cut point.
2. The video intercepting method of claim 1, the set of motion state data comprising: a set of sensor data for characterizing a motion state of the mobile terminal; the sensor data set comprises sensor data acquired by at least one sensor of the mobile terminal at different moments in the process of acquiring the video;
the determining, from the video, at least one video cut point at which the stability of the mobile terminal corresponding to the acquisition time meets a first condition according to the motion state data set corresponding to the video includes:
and determining at least one video cut point, of which the variation amplitude of the sensor data corresponding to the acquisition time is greater than the amplitude threshold value, from the video according to at least one sensor data acquired by the mobile terminal at different acquisition times.
3. The video capturing method according to claim 1, wherein if the video cut point satisfies a second condition, capturing at least one video segment from the video using the video cut point includes:
if the candidate video segment between the video cut point and the nearest video cut point before the video cut point meets a second condition, at least one video segment is intercepted from the candidate video segment.
4. The video intercepting method according to claim 3, wherein the candidate video segment between the video cut point and the nearest video cut point before the video cut point satisfies a second condition, comprising:
and the video quality of the candidate video segment between the video cut point and the nearest video cut point before the video cut point meets the quality requirement, and the video quality is reflected from the aspects of the image quality and the resolution of the candidate video segment.
5. The video intercepting method according to claim 3, wherein the candidate video segment between the video cut point and the nearest video cut point before the video cut point satisfies a second condition, comprising:
the video duration of the candidate video segment between the video cut point and the nearest video cut point before the video cut point exceeds a duration threshold.
6. The video capturing method according to claim 1, before the capturing at least one video segment from the video using the video cut point if the video cut point satisfies the second condition, further comprising:
determining at least one video interception point from the video according to the video interception duration;
the intercepting at least one video segment from the video by using the video cut point comprises:
and intercepting at least one video segment from the video by using the video cutting point and the at least one video interception point.
7. A video capture device comprising:
a video obtaining unit, configured to obtain a video to be processed and a motion state data set of a mobile terminal corresponding to the video when the video is acquired, wherein the video is acquired by the mobile terminal and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video;
a cut point determining unit, configured to determine, from the video according to the motion state data set of the mobile terminal corresponding to the video when the video is acquired, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition, the first condition being a condition representing poor stability of the mobile terminal that acquires the video;
and a video interception unit, configured to intercept at least one video segment from the video by using the video cut point if the video cut point meets a second condition, so that the at least one intercepted video segment does not include the video picture corresponding to the video cut point.
8. The video capture device of claim 7, wherein the motion state data set obtained by the video obtaining unit comprises: a set of sensor data for characterizing a motion state of the mobile terminal; the sensor data set comprises sensor data acquired by at least one sensor of the mobile terminal at different moments in the process of acquiring the video;
the cut point determining unit is specifically configured to determine, from the video, at least one video cut point at which a change amplitude of sensor data corresponding to the acquisition time is greater than an amplitude threshold value, according to at least one type of sensor data acquired by the mobile terminal at different acquisition times.
9. The video capture device of claim 7, wherein the video capture unit is specifically configured to capture at least one video segment from the candidate video segments if the candidate video segment between the video cut point and a video cut point immediately before the video cut point satisfies a second condition.
10. An electronic device, comprising:
a communication interface, configured to obtain a video to be processed and a motion state data set of a mobile terminal corresponding to the video when the video is acquired, wherein the video is acquired by the mobile terminal and the motion state data set comprises motion state data acquired by the mobile terminal at different acquisition moments during acquisition of the video;
a processor, configured to determine, from the video according to the motion state data set of the mobile terminal corresponding to the video when the video is acquired, at least one video cut point whose acquisition moment is one at which the stability of the mobile terminal meets a first condition, the first condition being a condition characterizing that the stability of the mobile terminal is unsuitable for image acquisition; and, if the video cut point meets a second condition, to intercept at least one video segment from the video by using the video cut point, so that the at least one intercepted video segment at least does not include the video picture corresponding to the video cut point.
CN201910244123.2A 2019-03-28 2019-03-28 Video interception method and device and electronic equipment Active CN110012337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910244123.2A CN110012337B (en) 2019-03-28 2019-03-28 Video interception method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910244123.2A CN110012337B (en) 2019-03-28 2019-03-28 Video interception method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110012337A CN110012337A (en) 2019-07-12
CN110012337B (en) 2021-02-19

Family

ID=67168638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910244123.2A Active CN110012337B (en) 2019-03-28 2019-03-28 Video interception method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110012337B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111541939B (en) * 2020-04-30 2022-04-22 北京奇艺世纪科技有限公司 Video splitting method and device, electronic equipment and storage medium
CN112911149B (en) * 2021-01-28 2022-08-16 维沃移动通信有限公司 Image output method, image output device, electronic equipment and readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003244628A (en) * 2002-02-20 2003-08-29 Kazumi Komiya Specified scene extracting apparatus
TW200633539A (en) * 2005-03-09 2006-09-16 Pixart Imaging Inc Estimation method of motion vector based on distance weighted searching sequence
CN102609723B (en) * 2012-02-08 2014-02-19 清华大学 Image classification based method and device for automatically segmenting videos
US9787986B2 (en) * 2014-06-30 2017-10-10 Intel Corporation Techniques for parallel video transcoding
CN104104969B (en) * 2014-07-23 2017-04-12 天脉聚源(北京)科技有限公司 Video interception method and device
CN105828146A (en) * 2016-03-21 2016-08-03 乐视网信息技术(北京)股份有限公司 Video image interception method, terminal and server
CN107037954A (en) * 2016-09-30 2017-08-11 乐视控股(北京)有限公司 Screenshotss method and device
CN106507203A (en) * 2016-11-30 2017-03-15 北京小米移动软件有限公司 The intercept method of video, device and terminal
CN107801106B (en) * 2017-10-24 2019-10-15 维沃移动通信有限公司 A kind of video clip intercept method and electronic equipment
CN107770625B (en) * 2017-10-30 2019-12-31 广东小天才科技有限公司 Video intercepting method based on mobile terminal and mobile terminal

Also Published As

Publication number Publication date
CN110012337A (en) 2019-07-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant