CN113923340A - Video processing method, terminal and storage medium - Google Patents


Info

Publication number
CN113923340A
Authority
CN
China
Prior art keywords
video frame
key point
gyroscope
parameter vector
video
Prior art date
Legal status
Granted
Application number
CN202010655275.4A
Other languages
Chinese (zh)
Other versions
CN113923340B (en)
Inventor
胡振邦
李博群
刘阳兴
Current Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN202010655275.4A
Publication of CN113923340A
Application granted
Publication of CN113923340B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors

Abstract

The invention discloses a video processing method, a terminal and a storage medium. The video processing method extracts key points in video frames, obtains gyroscope data, and determines a target motion parameter matrix between video frames according to the gyroscope data and the key points; video stabilization is performed using only the data collected by the gyroscope carried by the terminal, without additional hardware-assisted processing.

Description

Video processing method, terminal and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a video processing method, a terminal, and a storage medium.
Background
Video stabilization refers to technology that processes an original video sequence collected by a camera to remove the jitter in it. In the prior art, extra hardware is often needed, for example an optical component that adaptively adjusts the optical path. This raises cost, so the low- and mid-range mobile phones that people more commonly use lack a video stabilization function.
Thus, there is a need for improvements and enhancements in the art.
Disclosure of Invention
The invention provides a video processing method, a terminal and a storage medium, and aims to solve the problem of high cost caused by adopting an additional optical component for video image stabilization in the prior art.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a video processing method, wherein the video processing method comprises:
acquiring a video frame to be processed and a first video frame, wherein the first video frame is a previous frame of the video frame to be processed, processing the first video frame, extracting each angular point in the first video frame, and acquiring a key point pair set according to each angular point;
acquiring gyroscope data and a calibration parameter vector, and acquiring a target motion parameter matrix of the video frame to be processed relative to the first video frame according to the gyroscope data, the calibration parameter vector and the key point pair set;
and processing the video frame to be processed according to the target motion parameter matrix to obtain a target video frame corresponding to the video frame to be processed.
The video processing method described above, wherein the set of key point pairs includes a plurality of key point pairs, each key point pair includes a first key point in the first video frame and a second key point corresponding to the first key point in the video frame to be processed, and the obtaining a set of key point pairs according to the corner points includes:
filtering each corner point to obtain each first key point in the first video frame;
acquiring optical flow from the first video frame to the video frame to be processed;
and acquiring corresponding second key points of the first key points in the video frame to be processed according to the optical flow.
The video processing method, wherein the video processing method further comprises:
and acquiring the target motion parameter matrix according to the key point pair set.
The video processing method, wherein the obtaining the target motion parameter matrix according to the key point pair set includes:
respectively acquiring a first coordinate of each first key point in the key point pair set and a second coordinate of each corresponding second key point;
and acquiring the target motion parameter matrix through a preset first optimization function according to the first coordinate and the second coordinate.
The video processing method, wherein each component of the calibration parameter vector is respectively a preset camera parameter, a timestamp difference between the camera and the gyroscope, a single-frame imaging time of the camera, and a rotation error of the gyroscope, and the obtaining a target motion parameter matrix of the video frame to be processed relative to the first video frame according to the gyroscope data, the calibration parameter vector, and the key point pair set includes:
acquiring a first calibration parameter vector corresponding to the first video frame, and acquiring a second calibration parameter vector corresponding to the video frame to be processed according to the first calibration parameter vector;
and acquiring the target motion parameter matrix according to the second calibration parameter vector.
The video processing method, wherein the obtaining a second calibration parameter vector corresponding to the video frame to be processed according to the first calibration parameter vector comprises:
filtering the set of key point pairs according to the first calibration parameter vector;
when the number of the filtered key point pairs is larger than or equal to a preset threshold value, processing the first calibration parameter vector to obtain a second calibration parameter vector;
and when the number of the filtered key point pairs is smaller than the preset threshold value, taking the first calibration parameter vector as the second calibration parameter vector.
The video processing method described above, wherein said filtering the set of keypoints according to the first calibration parameter vector includes:
obtaining each gyroscope motion parameter matrix corresponding to each key point pair according to the first calibration parameter vector;
and filtering the key point pair set according to the motion parameter matrixes of the gyroscopes.
The video processing method, wherein the obtaining of each gyroscope motion parameter matrix corresponding to each key point pair according to the first calibration parameter vector includes:
converting the camera time into a gyroscope time according to the timestamp difference in the first calibration parameter vector;
and acquiring gyroscope data corresponding to the camera moment according to the gyroscope moment, and acquiring gyroscope motion parameters respectively corresponding to the key point pairs according to the gyroscope data and the first calibration parameter vector.
The video processing method, wherein the processing the first calibration parameter vector comprises:
obtaining each loss function mean value corresponding to each parameter searching sub-interval according to a preset loss function, and determining a target parameter searching sub-interval according to each loss function mean value, wherein each parameter searching sub-interval is obtained by dividing a parameter searching total interval corresponding to the calibration parameter vector;
and acquiring the optimal solution of the target parameter searching subinterval as the second calibration parameter vector.
The video processing method, wherein the processing the video frame to be processed according to the target motion parameter matrix includes:
converting the target motion parameter matrix into an affine matrix;
smoothing the affine matrix by using a preset filter to obtain an intermediate parameter matrix;
and processing the video frame to be processed according to the target parameter matrix and the intermediate parameter matrix.
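As an illustration of the smoothing step above, the sketch below runs a sliding mean over a sequence of per-frame affine parameter matrices. The "preset filter" is not specified by this text, so the moving average and its window size are assumptions for the sketch:

```python
def smooth(matrices, window=3):
    """Moving-average filter over a sequence of 3x3 affine parameter
    matrices: each output matrix averages the current matrix with up to
    window-1 preceding ones, element by element."""
    out = []
    for i in range(len(matrices)):
        lo = max(0, i - window + 1)
        group = matrices[lo:i + 1]
        out.append([[sum(m[r][c] for m in group) / len(group)
                     for c in range(3)] for r in range(3)])
    return out

# jittery horizontal translations (0 -> 3 -> 0 px) get averaged out
mats = [[[1, 0, t], [0, 1, 0], [0, 0, 1]] for t in (0.0, 3.0, 0.0)]
smoothed = smooth(mats)
```

Any low-pass filter (Gaussian, Kalman, etc.) could play the same role; only the averaging idea is meant to carry over.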
A terminal, wherein the terminal comprises a processor and a storage medium communicatively connected to the processor, the storage medium being adapted to store a plurality of instructions, and the processor being adapted to call the instructions in the storage medium to perform the steps implementing the video processing method described above.
A storage medium, wherein the storage medium stores one or more programs, which are executable by one or more processors to implement the steps of the video processing method described above.
Advantageous effects: compared with the prior art, the video processing method, terminal and storage medium provided herein extract key points in video frames, acquire gyroscope data, and determine a target motion parameter matrix between video frames according to the gyroscope data and the key points. No additional hardware-assisted processing is needed: video stabilization is achieved using only the data collected by the gyroscope carried by the terminal. Since most current mobile phone terminals are equipped with gyroscopes, the video processing method provided by the invention can be applied directly to mobile phones without extra cost.
Drawings
FIG. 1 is a schematic block diagram of a video processing method according to the present invention;
FIG. 2 is a flow chart of an embodiment of a video processing method provided by the present invention;
FIG. 3 is a flow chart of the first sub-steps of an embodiment of a video processing method provided by the present invention;
FIG. 4 is a flow chart of sub-steps in an embodiment of a video processing method provided by the present invention;
FIG. 5 is a flow chart of sub-steps in an embodiment of a video processing method provided by the present invention;
FIG. 6 is a first graph of the variation of loss function values in an embodiment of a video processing method provided by the present invention;
FIG. 7 is a second graph of the variation of loss function values in an embodiment of a video processing method provided by the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of a terminal provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The video processing method provided by the invention can be applied to terminals, and the terminals can be but are not limited to various personal computers, notebook computers, mobile phones, tablet computers, vehicle-mounted computers and portable wearable equipment. After the terminal acquires the video, the video can be processed according to the video processing method.
Example one
Referring to fig. 2, fig. 2 is a flowchart illustrating a video processing method according to an embodiment of the present invention. The video processing method comprises the following steps:
S100, obtaining a video frame to be processed and a first video frame, wherein the first video frame is the previous frame of the video frame to be processed; processing the first video frame, extracting each corner point in the first video frame, and obtaining a key point pair set corresponding to the pair of video frames according to the corner points.
Video image stabilization eliminates the shake of the camera during shooting, which requires obtaining the specific motion of the camera while it shoots. The principle framework of the video processing method provided by this embodiment is shown in fig. 1, where F_i denotes the i-th frame of the original video, S_i denotes the i-th frame of the stabilized video, M_i denotes the motion parameter matrix by which the (i-1)-th frame of the original video changes into the i-th frame (this matrix reflects the motion produced, between the (i-1)-th frame and the i-th frame, by the camera shooting the original video), K_i denotes the motion matrix obtained by smoothing M_i, and R_i denotes the center-crop matrix, a preset empirical parameter that can be derived according to an iteration rule:
[iteration formula given only as an image in the source]
Thus the stabilized image can be calculated by the formula S_i = g(F_i, R_i), where the function g denotes a projective transformation. As can be seen from the foregoing description, the motion parameter matrix between two adjacent frames is the key to video image stabilization. The gyroscope in the terminal can collect the motion data of the terminal, and the camera in the terminal is fixed to the terminal, i.e. the motion data collected by the gyroscope coincides with the motion data of the camera. In this embodiment, the motion parameter matrix between two adjacent video frames is therefore obtained from the gyroscope data of the terminal together with the key points.
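As a rough, illustrative sketch of the projective transformation g in S_i = g(F_i, R_i): the function below warps a frame by a 3×3 matrix using inverse mapping with nearest-neighbour sampling. The frame and the translation matrix are toy values, not from the source:

```python
import numpy as np

def g(frame, R):
    """Projective transform g: for each pixel of the output S_i, map back
    through R^-1 and sample the original frame F_i (nearest neighbour)."""
    h, w = frame.shape[:2]
    R_inv = np.linalg.inv(R)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = R_inv @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(frame)
    out[ys.reshape(-1)[valid], xs.reshape(-1)[valid]] = frame[sy[valid], sx[valid]]
    return out

frame = np.arange(1, 13, dtype=np.uint8).reshape(3, 4)   # toy 3x4 "frame"
shift = np.array([[1.0, 0, 1], [0, 1, 0], [0, 0, 1]])    # 1px rightward shift
stabilized = g(frame, shift)
```

In practice a library warp (e.g. an OpenCV perspective warp) would replace this loop-free but naive resampler; only the role of g is being illustrated.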
Specifically, in this embodiment, when a video frame to be processed needs to be processed, a previous frame (hereinafter referred to as a first video frame) of the video frame to be processed is obtained, that is, the first video frame is adjacent to the video frame to be processed and has an imaging time earlier than that of the video frame to be processed. After the first video frame is obtained, processing the first video frame, and extracting each corner point in the first video frame.
Specifically, corner points are a commonly used concept in the field of image processing: a corner point is a local extreme point and can be detected by a corner detection algorithm. The key point pair set comprises a plurality of key point pairs, and each key point pair comprises a first key point in the first video frame and the corresponding second key point in the video frame to be processed. Specifically, the first key points are extracted from the corner points, and obtaining the key point pair set according to the corner points comprises:
S110, filtering each corner point to obtain each first key point in the first video frame.
After the corner points are obtained, they are filtered to avoid too many corner points in texture-dense regions, which would make the subsequent calculation inaccurate. In this embodiment, a part of the corner points in densely distributed regions is filtered out so that the corner points are distributed evenly over the first video frame, and the filtered corner points serve as the first key points. Specifically, the first video frame may be divided into a plurality of grids of equal size, and the same number of corner points may be retained in each grid as first key points.
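The grid-based thinning just described can be sketched as follows; the grid size and per-cell quota are illustrative choices, not values from the source:

```python
from collections import defaultdict

def filter_corners(corners, frame_w, frame_h, grid=4, per_cell=2):
    """Split the frame into grid x grid equal cells and keep at most
    per_cell strongest corners in each cell, so that the surviving
    first key points are spread evenly over the frame."""
    cells = defaultdict(list)
    for x, y, score in corners:
        cx = min(int(x * grid / frame_w), grid - 1)
        cy = min(int(y * grid / frame_h), grid - 1)
        cells[(cx, cy)].append((score, x, y))
    kept = []
    for bucket in cells.values():
        bucket.sort(reverse=True)                  # strongest corners first
        kept.extend((x, y) for _, x, y in bucket[:per_cell])
    return kept

# three corners crowd one cell; only the two strongest survive there
corners = [(1, 1, 0.9), (2, 2, 0.8), (3, 3, 0.7), (60, 60, 0.5)]
first_keypoints = filter_corners(corners, frame_w=64, frame_h=64)
```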
S120, acquiring an optical flow from the first video frame to the video frame to be processed.
The optical flow from the first video frame to the video frame to be processed reflects the motion situation of points in the image from the first video frame to the video frame to be processed, and the optical flow can be obtained by an existing optical flow calculation method.
S130, acquiring corresponding second key points of the first key points in the video frame to be processed according to the optical flow.
After the optical flow is obtained, according to the optical flow values respectively corresponding to the first key points, the corresponding second key points of the first key points in the video frame to be processed can be determined. Each first keypoint and the corresponding second keypoint constitute a keypoint pair, and a plurality of keypoint pairs constitute the keypoint pair set.
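Steps S120–S130 can be sketched as below, assuming a dense optical flow field flow[y, x] = (dx, dy) has already been computed by some standard optical flow method; the helper name keypoint_pairs is hypothetical:

```python
import numpy as np

def keypoint_pairs(first_keypoints, flow):
    """Pair each first key point (x, y) in the first video frame with
    its displaced position in the video frame to be processed, read off
    a dense flow field flow[y, x] = (dx, dy)."""
    pairs = []
    for x, y in first_keypoints:
        dx, dy = flow[int(y), int(x)]
        pairs.append(((x, y), (x + dx, y + dy)))
    return pairs

flow = np.zeros((8, 8, 2))
flow[..., 0] = 1.0   # toy field: uniform 1px rightward motion
pairs = keypoint_pairs([(2, 3), (5, 5)], flow)
```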
S200, acquiring gyroscope data and a calibration parameter vector, and acquiring a target motion parameter matrix of the video frame to be processed relative to the first video frame according to the gyroscope data, the calibration parameter vector and the key point pair set.
As already described above, since the gyroscope and the camera for shooting the video are on the same terminal, the data collected by the gyroscope may reflect the motion data of the camera.
Specifically, the gyroscope data includes the instantaneous rotation speeds in three directions collected by the gyroscope, which may be expressed as ω = [u_x, u_y, u_z]. The calibration parameter vector is a vector used for obtaining the motion parameters corresponding to the gyroscope from the gyroscope data; its components are preset camera parameters, the timestamp difference between the camera and the gyroscope, the time consumed by single-frame imaging of the camera, and the rotation error of the gyroscope. The preset camera parameters comprise the horizontal-direction imaging focal length of the camera, the vertical-direction imaging focal length of the camera, the horizontal-direction imaging optical center position of the camera, the vertical-direction imaging optical center position of the camera, and the lens tilt distortion coefficient of the camera. The timestamp difference between the camera and the gyroscope arises because they are not the same device and adopt different timestamps: for the camera the current moment may be denoted t1 while for the gyroscope it is denoted t2, and the difference between t1 and t2 is the timestamp difference. The gyroscope data collected by the gyroscope are three-dimensional coordinate-axis rotation speeds sent out at fixed time intervals, and the rotation amplitude is obtained by integrating these rotation speeds; because the rotation speed is a discrete value, an error exists when the rotation amplitude is obtained from the gyroscope data. In this embodiment, the error arising when the specific rotation amplitude of the gyroscope is obtained from the gyroscope data is called the rotation error of the gyroscope.
The calibration parameter vector may be expressed as v = [f_x, f_y, o_x, o_y, τ, t_d, t_s, q_d], where f_x and f_y respectively denote the imaging focal lengths of the camera in the horizontal and vertical directions, o_x and o_y respectively denote the horizontal and vertical imaging optical center positions of the camera, τ denotes the lens tilt distortion coefficient of the camera, t_d denotes the timestamp difference between the camera and the gyroscope, t_s denotes the time taken by single-frame imaging of the camera, and q_d denotes the rotation error. The parameters f_x, f_y, o_x, o_y and τ combine to construct the parameter matrix of the camera:

    | f_x   τ   o_x |
    |  0   f_y  o_y |
    |  0    0    1  |
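A sketch of assembling the camera parameter matrix from the calibration vector. The standard intrinsic-matrix layout, with τ in the skew position, is an assumption here (the source shows the matrix only as an image):

```python
import numpy as np

def camera_matrix(v):
    """Build the camera parameter matrix from the first five components
    of the calibration vector v = [fx, fy, ox, oy, tau, td, ts, qd],
    using the conventional intrinsic layout (tau as the skew term)."""
    fx, fy, ox, oy, tau = v[:5]
    return np.array([[fx, tau, ox],
                     [0.0, fy, oy],
                     [0.0, 0.0, 1.0]])

# illustrative values only; not calibration data from the source
v = [800.0, 810.0, 320.0, 240.0, 0.0, 0.01, 1 / 30, 0.0]
K = camera_matrix(v)
```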
In practical application, the data collected by the gyroscope is sent to a processor, and the processor executes the video processing method provided in this embodiment. The gyroscope data is sent to the processor only after a gyroscope data queue has been filled, so there are moments at which the gyroscope data has not yet been obtained and the target motion parameter matrix cannot be obtained from it. For this case, in this embodiment, the video processing method further includes:
and S01, acquiring the target motion parameter matrix according to the key point pair set.
Specifically, the obtaining the target motion parameter matrix according to the key point pair set includes:
S011, respectively acquiring the first coordinate of each first key point in the key point pair set and the second coordinate of each corresponding second key point;
and S012, acquiring the target motion parameter matrix through a preset first optimization function according to the first coordinate and the second coordinate.
Specifically, after the first key points and the second key points are obtained, their first and second coordinates can be acquired. For convenience of calculation, write p_k^{i-1} = (x_k^{i-1}, y_k^{i-1}) for the coordinates of the first key point of the key point pair with index k in the key point pair set, where i-1 is the index of the first video frame, i is the index of the video frame to be processed (a positive integer greater than 1), and x_k^{i-1} and y_k^{i-1} are the abscissa and ordinate of p_k^{i-1}; and write p_k^i = (x_k^i, y_k^i) for the coordinates of the corresponding second key point, with abscissa x_k^i and ordinate y_k^i.
Construct the target motion parameter matrix as

    M_i = | a  b  c |
          | e  f  g |
          | h  j  1 |

Then the values of a, b, c, e, f, g, h, j must be obtained in order to determine the target motion parameter matrix M_i. In this embodiment, the target motion parameter matrix is obtained through a preset first optimization function. Specifically, the first optimization function maximizes the sum Σ_{k=1}^{K} z_k, where z_k = 1 if the first key point p_k^{i-1}, mapped through M_i in homogeneous coordinates, falls within μ pixels of the second key point p_k^i, and z_k = 0 otherwise. Here K is the number of key point pairs in the key point pair set, and μ is a preset acceptable effective error in pixels; μ may be set according to the actual situation, for example to 1 pixel, 2 pixels, and so on.
That is, the M_i composed of the a, b, c, e, f, g, h, j that makes Σ_{k=1}^{K} z_k largest is the optimal solution.
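The first optimization objective (the count of pairs with z_k = 1) can be evaluated as follows for a candidate matrix; the function name, toy matrix, and toy pairs are illustrative:

```python
import numpy as np

def inlier_count(M, pairs, mu=1.0):
    """Count key point pairs whose first key point, mapped through the
    candidate matrix M in homogeneous coordinates, lands within mu
    pixels of the matched second key point (i.e. pairs with z_k = 1)."""
    count = 0
    for (x1, y1), (x2, y2) in pairs:
        p = M @ np.array([x1, y1, 1.0])
        px, py = p[0] / p[2], p[1] / p[2]
        if (px - x2) ** 2 + (py - y2) ** 2 <= mu ** 2:
            count += 1
    return count

# a pure 1px-rightward translation explains two of the three pairs exactly
M = np.array([[1.0, 0, 1], [0, 1, 0], [0, 0, 1]])
pairs = [((0, 0), (1, 0)), ((5, 5), (6, 5)), ((2, 2), (9, 9))]
score = inlier_count(M, pairs, mu=1.0)
```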
To simplify the calculation, in this embodiment a linear equation system is constructed by requiring M_i to map, in homogeneous coordinates, each sampled first key point onto its corresponding second key point. 3 optical flow point pairs are randomly extracted from the set of point pairs, with indexes k_1, k_2, k_3; several groups of a, b, c, e, f, g, h, j can be obtained from the equation system, and the group that makes Σ_{k=1}^{K} z_k largest composes M_i.
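The random-sampling search over 3 point pairs can be sketched in a RANSAC-like style. For a 3-pair sample to determine the matrix uniquely, this sketch assumes the affine simplification h = j = 0 (6 unknowns, 6 equations); that simplification, and all names and toy data, are assumptions rather than the source's exact system:

```python
import random
import numpy as np

def estimate_affine(three_pairs):
    """Solve a, b, c, e, f, g from three key point pairs via the linear
    system x2 = a*x1 + b*y1 + c, y2 = e*x1 + f*y1 + g (affine
    simplification of M_i assumed for this sketch)."""
    A, bx, by = [], [], []
    for (x1, y1), (x2, y2) in three_pairs:
        A.append([x1, y1, 1.0])
        bx.append(x2)
        by.append(y2)
    A = np.array(A)
    a, b, c = np.linalg.solve(A, bx)
    e, f, g = np.linalg.solve(A, by)
    return np.array([[a, b, c], [e, f, g], [0.0, 0.0, 1.0]])

def best_motion_matrix(pairs, trials=20, mu=1.0, seed=0):
    """Repeatedly fit a candidate M_i from 3 random pairs and keep the
    candidate that agrees with the most pairs within mu pixels."""
    rng = random.Random(seed)
    best, best_score = np.eye(3), -1
    for _ in range(trials):
        M = estimate_affine(rng.sample(pairs, 3))
        score = sum(
            1 for (x1, y1), (x2, y2) in pairs
            if (M[0] @ [x1, y1, 1] - x2) ** 2
             + (M[1] @ [x1, y1, 1] - y2) ** 2 <= mu ** 2)
        if score > best_score:
            best, best_score = M, score
    return best

# toy pairs generated by a pure translation of (+2, -1) pixels
pairs = [((x, y), (x + 2.0, y - 1.0)) for x, y in
         [(0, 0), (10, 0), (0, 10), (10, 10), (5, 3)]]
M_i = best_motion_matrix(pairs)
```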
In one possible implementation, the calculation of the target motion parameter matrix for the video frame to be processed may fail, for example when the picture of the video frame to be processed is too blurred or its content is largely blocked. In that case, the target motion parameter matrix corresponding to the video frame to be processed may be obtained from the motion parameter matrix used when the first video frame was processed, specifically according to the formula M_i = fusion(M_{i-1}, I), where fusion denotes a weighting function and I is a preset identity matrix whose size, 3×3, is consistent with that of the motion parameter matrix.
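A sketch of the fallback M_i = fusion(M_{i-1}, I); the source does not define the weighting function, so a fixed 50/50 blend is assumed here purely for illustration:

```python
import numpy as np

def fusion(prev_M, I, w=0.5):
    """Fallback for a blurred/occluded frame: blend the previous frame's
    motion matrix with the identity (the weight w is an illustrative
    choice, not specified by the source)."""
    return w * prev_M + (1.0 - w) * I

prev_M = np.array([[1.0, 0, 4], [0, 1, -2], [0, 0, 1]])
M_i = fusion(prev_M, np.eye(3))
```

Pulling the matrix toward the identity damps the estimated motion when the evidence for it is unreliable.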
And after the gyroscope data is acquired, acquiring the target motion parameter matrix by combining the gyroscope data.
Specifically, as shown in fig. 3, the obtaining a target motion parameter matrix of the to-be-processed video frame relative to the first video frame according to the gyroscope data, the calibration parameter vector, and the key point pair set includes:
s210, obtaining a first calibration parameter vector corresponding to the first video frame, and obtaining a second calibration parameter vector corresponding to the video frame to be processed according to the first calibration parameter vector.
As explained above, the components of the calibration parameter vector are the horizontal-direction imaging focal length f_x, the vertical-direction imaging focal length f_y, the horizontal imaging optical center position o_x, the vertical imaging optical center position o_y, and the lens tilt distortion coefficient τ of the camera, the timestamp difference t_d between the camera and the gyroscope, the time t_s consumed by single-frame imaging of the camera, and the rotation error q_d of the gyroscope. The accurate values of these components are not fixed: for example, although the camera has a certain frame rate, the actual single-frame imaging time is not necessarily exactly the reciprocal of the frame rate. The accurate value of each component nevertheless necessarily lies within an interval; for example, by the equipment characteristics of a normal camera, f_x, f_y, o_x, o_y and τ cannot exceed certain ranges, and exceeding them would indicate a camera hardware problem. In this embodiment, the range of the accurate value of each component of the calibration parameter vector is recorded as the parameter search total interval, a loss function is set, and the total interval is searched according to the preset loss function to obtain a calibration parameter vector with a smaller loss, from which the target motion parameter matrix is calculated.
Since the calibration parameter vector has several components, the parameter search total interval comprises a search interval for each component. Specifically, the image optical center (o_x, o_y) lies near the image center point; the distortion coefficient τ is a very small number; t_d ∈ [0, 0.5 s]; the single-frame imaging time t_s is bounded by the reciprocal of the video frame rate, where fps represents the video frame rate; and q_d is proportional to the amplitude of the motion estimated by the gyroscope, but is limited to a vector with a small modulus.
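The interval search (split the total search interval into subintervals, rank them by mean loss as described in the Disclosure section, then take the best solution inside the winning subinterval) can be sketched for a single scalar component such as t_d. The subinterval count, sample count, and toy loss are illustrative assumptions:

```python
def search_parameter(loss, lo, hi, n_sub=5, samples=4):
    """Coarse-to-fine search over one calibration component: split
    [lo, hi] into n_sub subintervals, score each by the mean loss over
    a few evenly spaced samples, and return the best sampled value
    inside the subinterval with the smallest mean loss."""
    width = (hi - lo) / n_sub
    best_pts, best_mean = None, float("inf")
    for i in range(n_sub):
        a = lo + i * width
        pts = [a + width * (j + 0.5) / samples for j in range(samples)]
        mean = sum(loss(p) for p in pts) / samples
        if mean < best_mean:
            best_pts, best_mean = pts, mean
    return min(best_pts, key=loss)

# toy loss minimized at t_d = 0.28, searched over the interval [0, 0.5]
loss = lambda t: (t - 0.28) ** 2
t_d = search_parameter(loss, 0.0, 0.5)
```

A real implementation would evaluate the reprojection loss over the keypoint pairs and search all components jointly; only the subinterval ranking idea is shown.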
When the video frame to be processed is obtained, in order to reduce the computation caused by searching for the calibration parameter vector every time, in this embodiment a first calibration parameter vector corresponding to the first video frame is obtained first; the first calibration parameter vector is the calibration parameter vector that was used to obtain the motion parameter matrix corresponding to the first video frame when the first video frame was processed. It is then determined whether the first calibration parameter vector needs to be further optimized before being used to process the video frame to be processed.
Specifically, as shown in fig. 4, the obtaining a second calibration parameter vector corresponding to the video frame to be processed according to the first calibration parameter vector includes:
s211, filtering the key point pairs according to the first calibration parameter vector.
Specifically, filtering the key point pairs keeps the key points that move because of the camera's motion and filters out points that move on their own. For example, when the video content includes a street, vehicles driving on the street also cause motion of points between the first video frame and the video frame to be processed, but this motion is not caused by the camera itself moving and needs to be removed. As shown in fig. 5, the filtering of the key point pairs according to the first calibration parameter vector includes:
S2111, obtaining each gyroscope motion parameter matrix corresponding to each key point pair according to the first calibration parameter vector.
Specifically, the motion parameter matrix of the gyroscope corresponding to the key point pair is a motion parameter matrix generated by the gyroscope between the imaging time of a first key point in the key point pair and the imaging time of a second key point, and the obtaining, according to the first calibration parameter vector, each motion parameter matrix of the gyroscope corresponding to each key point pair respectively includes:
and S2111a, converting the camera time into the gyroscope time according to the timestamp difference in the first calibration parameter vector.
Specifically, in this embodiment a motion parameter matrix generated between the first video frame and the video frame to be processed is obtained from the gyroscope data; this requires the rotation amplitude produced by the gyroscope between the moments corresponding to the two frames. Since the timestamps of the gyroscope and the camera differ, the same moment is represented differently by the two devices, so the camera moment must be converted into the gyroscope moment. Specifically, the gyroscope moment corresponding to a camera moment is the sum of the camera moment and the timestamp difference: for example, if the camera moment at which the first video frame starts imaging is denoted t_{i-1} and the timestamp difference between the camera and the gyroscope is t_d, then the gyroscope moment at which the first video frame starts imaging is t_{i-1} + t_d.
And S2111b, acquiring the gyroscope data corresponding to the camera moment according to the gyroscope moment, and acquiring the gyroscope motion parameters corresponding to the key point pairs respectively according to the gyroscope data and the first calibration parameter vector.
After the camera moment is converted into the gyroscope moment, the gyroscope data corresponding to the camera moment can be acquired according to the gyroscope moment. For example, if the camera moment when the first video frame starts imaging is t_{i-1}, the corresponding gyroscope moment is t_{i-1} + t_d, and the gyroscope data corresponding to the camera moment t_{i-1} is ω_{t_{i-1}+t_d}, i.e., the gyroscope data at the moment t_{i-1} + t_d on the gyroscope timestamp. It should be noted that, because the gyroscope acquires gyroscope data at a certain interval, there may be no sample exactly at that gyroscope moment, that is, the gyroscope data corresponding to the camera moment cannot always be acquired directly.
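The timestamp conversion and sample lookup described above can be sketched as follows (a minimal illustration; the linear interpolation between the two nearest gyroscope samples, and all function names, are assumptions not taken from the original):

```python
import bisect

def gyro_time(camera_time, t_d):
    # Gyroscope time = camera time + timestamp difference t_d.
    return camera_time + t_d

def sample_gyro(times, omegas, t):
    # times: sorted gyroscope sample timestamps; omegas: [ux, uy, uz] per sample.
    # Linearly interpolate the instantaneous rotation rate at gyroscope time t,
    # since no sample may fall exactly on the converted time.
    j = bisect.bisect_left(times, t)
    if j == 0:
        return omegas[0]
    if j >= len(times):
        return omegas[-1]
    t0, t1 = times[j - 1], times[j]
    a = (t - t0) / (t1 - t0)
    return [(1 - a) * w0 + a * w1 for w0, w1 in zip(omegas[j - 1], omegas[j])]
```

For instance, a camera moment of 0.004 s with t_d = 0.001 s maps to gyroscope time 0.005 s, which is then looked up between the two surrounding gyroscope samples.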
In this embodiment, the gyroscope motion parameters respectively corresponding to each key point pair are obtained according to a first preset formula, specifically, the first preset formula is as follows:

R_k = K · R( q_d ⊗ q( t_k^{i-1} → t_k^{i} ) ) · K^{-1}

t_k^{i-1} = t_{i-1} + t_d + t_s · y_k^{i-1} / h,   t_k^{i} = t_i + t_d + t_s · y_k^{i} / h

wherein (p_k^{i-1}, p_k^{i}) is the key point pair with index k in the key point pair set, R_k is the gyroscope motion parameter corresponding to the key point pair with index k in the key point pair set, v_{i-1} is the first calibration parameter vector, ω_t is the instantaneous rotation rate of the gyroscope at moment t, q(ω_t) is the four-tuple matrix representation of ω_t, q(t_1 → t_2) is the four-tuple matrix representation of the rotation amplitude accumulated by the gyroscope from moment t_1 to moment t_2, K is the camera matrix constructed from the components f_x, f_y, o_x, o_y and τ of the camera in the calibration parameter vector, q_d is the rotation error of the gyroscope, t_d represents the timestamp difference between the camera and the gyroscope, t_s is the single-frame imaging time of the camera, h is the height of the video frame, y_k^{i-1} is the ordinate of p_k^{i-1}, y_k^{i} is the ordinate of p_k^{i}, t_k^{i-1} approximates the gyroscope moment at which the point p_k^{i-1} is imaged, and t_k^{i} approximates the gyroscope moment at which the point p_k^{i} is imaged. It should be noted that the components of the calibration parameter vector used in the first preset formula are the components of the first calibration parameter vector. R(·) converts the content in the parentheses into a 3×3 rotation matrix.
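The per-key-point gyroscope moment used above can be sketched as follows (an illustrative helper assuming the row-proportional rolling-shutter model t = t_frame + t_d + t_s·y/h described in the text; the function name is hypothetical):

```python
def keypoint_gyro_time(t_frame, t_d, t_s, y, h):
    # Row y of a frame starting at camera time t_frame is exposed roughly
    # t_s * y / h after the frame start; adding the timestamp difference t_d
    # moves the result onto the gyroscope clock.
    return t_frame + t_d + t_s * (y / h)
```

For example, a key point on the middle row (y = h/2) is imaged about half the single-frame imaging time after the frame start.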
Specifically, the gyroscope data indicates the instantaneous rotation speed of the gyroscope at a specific moment and is represented as ω = [u_x, u_y, u_z]. The gyroscope collects gyroscope data at a certain interval; the sampling interval of the gyroscope is recorded as Δt. The accumulated rotation amplitude of the gyroscope is obtained by integrating the instantaneous rotation speed; for convenience of calculation, this embodiment carries out the calculation in matrix form. The method for obtaining each matrix in the first preset formula is described below. Take the modulus N = ‖ω‖ = sqrt(u_x² + u_y² + u_z²); the modulus N represents the rotation speed in radians/second, while ω/N indicates the direction of rotation. Define θ = N · Δt, and convert the gyroscope data into the 4-tuple matrix q(ω) = [cos(θ/2), (u_x/N)·sin(θ/2), (u_y/N)·sin(θ/2), (u_z/N)·sin(θ/2)].
In this way, a 4-tuple matrix representation of each gyroscope data acquired by the gyroscope at its acquisition moment can be obtained. Meanwhile, for two 4-tuple matrices q_1 = [l_1, m_1, n_1, o_1] and q_2 = [l_2, m_2, n_2, o_2], the cumulative rotational movement between the corresponding gyroscope data may be calculated as their product:

q_1 ⊗ q_2 = [ l_1·l_2 − m_1·m_2 − n_1·n_2 − o_1·o_2,
              l_1·m_2 + m_1·l_2 + n_1·o_2 − o_1·n_2,
              l_1·n_2 − m_1·o_2 + n_1·l_2 + o_1·m_2,
              l_1·o_2 + m_1·n_2 − n_1·m_2 + o_1·l_2 ]

and a 4-tuple matrix [l, m, n, o] is transformed into a 3×3 rotation matrix by:

R([l, m, n, o]) = [ 1 − 2(n² + o²)   2(m·n − l·o)     2(m·o + l·n)
                    2(m·n + l·o)     1 − 2(m² + o²)   2(n·o − l·m)
                    2(m·o − l·n)     2(n·o + l·m)     1 − 2(m² + n²) ]
According to the above mode, the 4-tuple matrix of the rotation accumulated between the two imaging moments of a key point pair can be calculated, and from it the corresponding gyroscope motion parameter matrix is obtained. By this method, the gyroscope motion parameters respectively corresponding to each key point pair in the key point pair set can be acquired.
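The 4-tuple construction, accumulation, and conversion to a rotation matrix described above can be sketched as follows (a standard unit-quaternion implementation consistent with the formulas in the text; the function names are illustrative):

```python
import math

def omega_to_quat(omega, dt):
    # Convert an instantaneous rotation rate [ux, uy, uz] (rad/s) held for dt
    # seconds into a unit 4-tuple [l, m, n, o].
    n = math.sqrt(sum(u * u for u in omega))
    if n == 0.0:
        return [1.0, 0.0, 0.0, 0.0]  # no rotation -> identity 4-tuple
    half = 0.5 * n * dt
    s = math.sin(half) / n
    return [math.cos(half), omega[0] * s, omega[1] * s, omega[2] * s]

def quat_mul(q1, q2):
    # Product of two 4-tuples; composes the corresponding rotations.
    l1, m1, n1, o1 = q1
    l2, m2, n2, o2 = q2
    return [l1*l2 - m1*m2 - n1*n2 - o1*o2,
            l1*m2 + m1*l2 + n1*o2 - o1*n2,
            l1*n2 - m1*o2 + n1*l2 + o1*m2,
            l1*o2 + m1*n2 - n1*m2 + o1*l2]

def quat_to_rot(q):
    # 4-tuple [l, m, n, o] -> 3x3 rotation matrix.
    l, m, n, o = q
    return [[1 - 2*(n*n + o*o), 2*(m*n - l*o),     2*(m*o + l*n)],
            [2*(m*n + l*o),     1 - 2*(m*m + o*o), 2*(n*o - l*m)],
            [2*(m*o - l*n),     2*(n*o + l*m),     1 - 2*(m*m + n*n)]]
```

For example, accumulating two identical quarter-turn steps about the z-axis yields the 90° rotation matrix.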
Referring again to fig. 5, the filtering the key point pair set according to the first calibration parameter vector further includes:
s2112, filtering the key point pair set according to the gyroscope motion parameter matrixes.
It has been explained above that the gyroscope motion parameter matrix corresponding to a key point pair is the parameter matrix of the motion generated by the gyroscope between the imaging moment of the first key point in the pair and the imaging moment of the second key point. Since the coordinates of the first key point and the second key point in the pair are known, the actual motion parameter corresponding to the pair is also known. If the gyroscope motion parameter corresponding to the pair is inconsistent with the actual motion parameter, the motion of the first key point and the second key point is likely not caused by the movement of the camera, and the pair needs to be eliminated.
Filtering the key point pair set may be implemented by a second preset formula, where the second preset formula is:

e_k = ‖ p_k^{i} − R_k · p_k^{i-1} ‖

wherein p_k^{i-1} is the coordinate of the first key point of the key point pair with index k in the key point pair set, p_k^{i} is the coordinate of the second key point of the key point pair with index k in the key point pair set, R_k is the motion variation parameter matrix of the gyroscope from the imaging moment of the first key point to the imaging moment of the second key point of the key point pair with index k, and α ∈ [0, 100] is a ratio parameter: the key point pairs whose error e_k lies within the smallest α percent are retained. The loss function of each video frame is the mean of e_k over the key point pairs of that frame; that is, the loss in the second preset formula evaluated for frame i−1 is the loss function corresponding to the first video frame.
It is easy to see that, because R_k, the motion variation parameter matrix of the gyroscope from the imaging moment of the first key point to the imaging moment of the second key point of the key point pair with index k, represents the motion caused only by the motion of the gyroscope, the product R_k · p_k^{i-1} is the second key point as it would be formed if the first key point underwent only the motion of the gyroscope. The norm of the difference between the actual second key point p_k^{i} and R_k · p_k^{i-1} therefore reflects the difference between the gyroscope motion parameter and the actual motion parameter corresponding to the key point pair with index k; when the difference is too large, the motion of the key point pair is considered inconsistent with the motion of the gyroscope, and the key point pair needs to be removed.
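The filtering step can be sketched as follows (a minimal illustration; treating α as the percentage of lowest-error pairs to retain is an assumption, as are the function names):

```python
import math

def project(R, p):
    # Apply a 3x3 matrix to [x, y] in homogeneous form and dehomogenize.
    x, y = p
    u = R[0][0]*x + R[0][1]*y + R[0][2]
    v = R[1][0]*x + R[1][1]*y + R[1][2]
    w = R[2][0]*x + R[2][1]*y + R[2][2]
    return (u / w, v / w)

def filter_pairs(pairs, gyro_mats, alpha):
    # pairs: [((x1, y1), (x2, y2)), ...]; gyro_mats: one 3x3 matrix per pair.
    # Keep the pairs whose reprojection error under the gyroscope motion
    # falls within the smallest alpha percent.
    errors = []
    for (p1, p2), R in zip(pairs, gyro_mats):
        u, v = project(R, p1)
        errors.append(math.hypot(u - p2[0], v - p2[1]))
    order = sorted(range(len(pairs)), key=lambda k: errors[k])
    keep = max(1, int(len(pairs) * alpha / 100.0))
    kept = sorted(order[:keep])
    return [pairs[k] for k in kept]
```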
Referring to fig. 4 again, the obtaining the second calibration parameter vector corresponding to the to-be-processed video frame according to the first calibration parameter vector further includes:
s212, when the number of the filtered key point pairs is larger than or equal to a preset threshold value, processing the first calibration parameter vector to obtain a second calibration parameter vector;
and S213, when the number of the filtered key point pairs is smaller than the preset threshold value, taking the first calibration parameter vector as the second calibration parameter vector.
When the number of the filtered key point pairs is smaller than the preset threshold, it indicates that few key points are consistent with the motion of the camera; at this time, the first calibration parameter vector is directly used as the second calibration parameter vector, that is, no search for the calibration parameter vector is needed, which reduces the amount of calculation. When the number of the filtered key point pairs is larger than or equal to the preset threshold, it indicates that many key points are consistent with the motion of the camera; at this time, the first calibration parameter vector needs to be updated to obtain a more accurate calibration parameter vector and improve the video processing effect.
The processing the first calibration parameter vector comprises:
s2121, obtaining each loss function mean value corresponding to each parameter searching sub-interval according to a preset loss function, and determining a target parameter searching sub-interval according to each loss function mean value.
Each parameter search sub-interval is obtained by dividing the parameter search total interval corresponding to the calibration parameter vector.
In order to meet the speed requirement of video processing, in this embodiment the parameter vector is not searched over the parameter search total interval each time; instead, the parameter search total interval is divided into H parameter search sub-intervals. When the calibration parameter vector needs to be processed for the video frame to be processed, the target parameter search sub-interval is determined, and the optimal solution is searched within the target parameter search sub-interval to obtain the second calibration parameter vector.
Specifically, the parameter search total interval is divided into H parameter search sub-intervals. When i = 0, the accumulated loss function value of each parameter search sub-interval is set to 0, its number of hits is set to 0, its loss function mean is set to ∞, and the parameter update step vector λ_i is initialized; an initial solution vector is randomly generated within each parameter search sub-interval, and the search upper bound and lower bound of the parameter search sub-interval with index i% H are recorded as v_max^{i%H} and v_min^{i%H} respectively. A parameter update index queue F_vec[M][len(v)] is defined, where M identifies the number of parameter search cycles and len(v) represents the number of parameters to be optimized; according to empirical parameters, entries of F_vec[M][len(v)] are assigned true with a certain probability and false otherwise. When the second calibration parameter vector corresponding to the video frame to be processed needs to be obtained by searching, the initial search sub-interval corresponding to the video frame to be processed is first determined. Specifically, the index of the initial search sub-interval is determined by i% H, where i is the video sequence index of the video frame to be processed and i% H is the remainder of i/H; for example, when i = 101 and H = 100, the parameter search sub-interval with index 1 is determined as the initial search sub-interval. After the initial search sub-interval is determined, its accumulated loss function value is acquired; the accumulated loss function value of a parameter search sub-interval is the sum of the loss function values corresponding to the solution vectors obtained by each search of that sub-interval, the loss function being evaluated at v_{i%H}, the solution vector of the parameter search sub-interval with index i% H. When the video frame to be processed is processed, a search is first carried out in the initial search sub-interval to obtain a solution vector, the current loss function value is obtained from that solution vector, and the accumulated loss function value of the initial search sub-interval is updated accordingly.
After the accumulated loss function value of the initial search sub-interval is updated, the loss function mean corresponding to each parameter search sub-interval is obtained, namely the accumulated loss function value divided by the number of searches, and the index i_best of the parameter search sub-interval with the minimum loss function mean is obtained. If i_best satisfies i_best ≠ i% H and the number of hits of i_best is far greater than the number of hits of i% H, i_best is directly taken as the target parameter search sub-interval. Otherwise, i.e., if i_best = i% H or the number of hits of i_best is not far greater than that of i% H, a search is carried out in the initial search sub-interval to obtain the optimal solution vector within the initial search sub-interval.
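The per-sub-interval bookkeeping (accumulated loss, number of hits, loss mean, and selection of i_best) might look like this (an illustrative sketch; the class and method names are hypothetical):

```python
class SubintervalStats:
    # Bookkeeping per parameter search sub-interval: accumulated loss value,
    # hit count, and the running loss mean used to pick i_best.
    def __init__(self, H):
        self.cum = [0.0] * H   # accumulated loss function values
        self.hits = [0] * H    # number of searches of each sub-interval

    def update(self, idx, loss_value):
        self.cum[idx] += loss_value
        self.hits[idx] += 1

    def mean(self, idx):
        # Unsearched sub-intervals start with a loss mean of infinity.
        return float("inf") if self.hits[idx] == 0 else self.cum[idx] / self.hits[idx]

    def i_best(self):
        # Index of the sub-interval with the minimum loss function mean.
        return min(range(len(self.cum)), key=self.mean)
```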
Specifically, the optimal solution vector is searched within the parameter search sub-interval as follows: the current parameter search step length λ_i is initialized; then, for each of the M search cycles and for each parameter index marked true in F_vec, the corresponding component of the solution vector is tentatively increased and decreased by the current step length (clamped to the upper and lower bounds of the sub-interval), the loss function is evaluated for both candidates, and the change is kept when it reduces the loss; the step length is attenuated after each cycle.
Each time the optimal solution vector of a parameter search sub-interval is obtained, the number of hits of that parameter search sub-interval is increased by 1, and its accumulated loss function value is updated.
The initial search sub-interval is searched in the above manner, its optimal solution vector is obtained, and its accumulated loss function value is updated. The loss function mean of each parameter search sub-interval is then obtained again, and i_best is determined anew, i.e., the parameter search sub-interval with the minimum loss function mean at this moment is taken as i_best. If i_best is still not equal to i% H, namely i_best remains unchanged, the parameter search sub-interval i_best is taken as the target parameter search sub-interval.
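The in-interval search can be sketched as a coordinate search with step attenuation (an illustrative reconstruction consistent with the 2·M·len(v) loss-evaluation budget stated in the text; the concrete update rule, cycle count, and decay factor are assumptions):

```python
def search_subinterval(loss, v, lo, hi, step, cycles=5, decay=0.5):
    # Coordinate search: in each cycle try +/- step on every component,
    # keep a move when it lowers the loss, then attenuate the step length.
    v = list(v)
    best = loss(v)
    for _ in range(cycles):
        for j in range(len(v)):
            for delta in (step, -step):
                cand = list(v)
                # Clamp the perturbed component to the sub-interval bounds.
                cand[j] = min(hi[j], max(lo[j], cand[j] + delta))
                c = loss(cand)
                if c < best:
                    v, best = cand, c
        step *= decay
    return v, best
```

With a simple quadratic loss the search converges toward the minimizer within the given bounds.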
The processing the first calibration parameter vector further comprises:
s2122, obtaining the optimal solution of the target parameter search sub-interval as the second calibration parameter vector.
After the target parameter search sub-interval is obtained, it is searched, the optimal solution vector of the target parameter search sub-interval is obtained as the second calibration parameter vector, and the number of hits and the accumulated loss function value of the target parameter search sub-interval are updated.
As shown in fig. 6-7, fig. 6 is a graph of the single-frame calibration loss value of the relevant parameters of the gyroscope (the abscissa is the video frame index, and the ordinate is the single-frame calibration loss value), and fig. 7 is a graph of the accumulated calibration loss mean of the relevant parameters of the gyroscope (the abscissa is the video frame index, and the ordinate is the accumulated calibration loss value). Each optimization comprises at most 2·M·len(v) loss function calculations; the total amount of calculation can be limited by setting F_vec, meeting both the precision and speed requirements of video processing.
Referring to fig. 3 again, after the second calibration parameter vector is obtained, the method includes:
and S230, acquiring the target motion parameter matrix according to the second calibration parameter vector.
Specifically, the target motion parameter matrix is obtained according to the second calibration parameter vector by a third preset formula, where the third preset formula is:

M_i = K · R( q_d ⊗ q( t_{i-1} + t_d + t_s/2 → t_i + t_d + t_s/2 ) ) · K^{-1}

wherein M_i is the target motion parameter matrix, v_i is the second calibration parameter vector, ω_t is the instantaneous rotation rate of the gyroscope at moment t, q(ω_t) is the four-tuple matrix representation of ω_t, q(t_1 → t_2) is the four-tuple matrix representation of the rotation amplitude accumulated by the gyroscope from moment t_1 to moment t_2, K is the camera matrix constructed from the components f_x, f_y, o_x, o_y and τ of the camera in the calibration parameter vector, q_d is the rotation error of the gyroscope, t_d represents the timestamp difference between the camera and the gyroscope, and t_s is the single-frame imaging time of the camera. It should be noted that the components of the calibration parameter vector used in the third preset formula are the components of the second calibration parameter vector. R(·) converts the content in the parentheses into a 3×3 rotation matrix. The conversion method of each matrix in the third preset formula has been described in detail above and is not repeated here.
As is apparent from the foregoing description, the third preset formula uses the gyroscope data between the middle imaging moment of the first video frame, t_{i-1} + t_d + t_s/2, and the middle imaging moment of the video frame to be processed; that is, the target motion parameter matrix reflects the motion between the middle imaging moments of the first video frame and the video frame to be processed, so that the image stabilization processing performed on the video frame to be processed according to the target motion parameter matrix is more accurate.
Referring to fig. 2 again, the video processing method further includes:
s300, processing the video frame to be processed according to the target motion parameter matrix, and acquiring a target video frame corresponding to the video frame to be processed.
As already explained above with reference to fig. 1, the stabilized image can be calculated by the formula S_i = g(F_i, R_i), where the function g represents a projective transformation and K_i represents the result of smoothing M_i. In this embodiment, because of the real-time requirement of video processing, the target motion parameter matrix needs to be converted into affine-matrix form and then smoothed. Specifically, the processing the video frame to be processed according to the target motion parameter matrix includes:
and S310, converting the target motion parameter matrix into an affine matrix.
Specifically, the target motion parameter matrix is represented as:

M_i = [ a  b  c
        d  e  f
        g  h  l ]
Because the time interval between the video frame to be processed and the first video frame is short and the motion between them is very small, the target motion parameter matrix approximates an affine matrix, namely a ≈ 1, e ≈ 1, l ≈ 1, b ≈ 0, d ≈ 0, g ≈ 0, h ≈ 0. If this property is not satisfied, it is determined that the picture is very likely distorted, and at this moment M_i is set to the identity matrix.
When M_i approximates the affine form, the image center point [x_cen, y_cen] of the video frame to be processed is used as the reference point to convert M_i into an affine matrix R_i. Let A = g·x_cen + h·y_cen + l; R_i is then defined as:

R_i = [ a/A  b/A  c/A
        d/A  e/A  f/A
        0    0    1  ]
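The center-point conversion from the target motion parameter matrix to an affine matrix might be sketched as follows (an illustrative reconstruction that normalizes by the perspective denominator A evaluated at the image center; the function name is hypothetical):

```python
def homography_to_affine(M, center):
    # Approximate a 3x3 homography M by an affine matrix, normalizing the
    # perspective denominator A = g*x_cen + h*y_cen + l at the image center.
    xc, yc = center
    A = M[2][0] * xc + M[2][1] * yc + M[2][2]
    return [[M[0][0] / A, M[0][1] / A, M[0][2] / A],
            [M[1][0] / A, M[1][1] / A, M[1][2] / A],
            [0.0, 0.0, 1.0]]
```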
and S320, smoothing the affine matrix by using a preset filter to obtain an intermediate parameter matrix.
In this embodiment, the preset filter is set in advance. After the affine matrix is obtained, the preset filter is used to smooth the affine matrix to obtain the intermediate parameter matrix, where the preset filter may be a one-way mean smoothing filter, a Gaussian weighted smoothing filter, or a Kalman-filter dynamic platform filter.
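A Gaussian weighted smoothing filter over a temporal window of affine matrices — one of the preset-filter choices mentioned — might look like this (the window radius and σ are illustrative assumptions):

```python
import math

def gaussian_smooth_affine(mats, radius=2, sigma=1.0):
    # mats: list of 3x3 affine matrices, one per frame. Each entry of the
    # output matrix is a Gaussian-weighted average over a temporal window.
    out = []
    for i in range(len(mats)):
        acc = [[0.0] * 3 for _ in range(3)]
        wsum = 0.0
        for k in range(max(0, i - radius), min(len(mats), i + radius + 1)):
            w = math.exp(-((k - i) ** 2) / (2 * sigma * sigma))
            wsum += w
            for r in range(3):
                for c in range(3):
                    acc[r][c] += w * mats[k][r][c]
        out.append([[acc[r][c] / wsum for c in range(3)] for r in range(3)])
    return out
```

Constant input is left unchanged, while jittery translation components are averaged toward their neighbors.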
S330, processing the video frame to be processed according to the target parameter matrix and the intermediate parameter matrix.
The video frame to be processed is processed by the formula S_i = g(F_i, R_i) to obtain the target video frame, where S_i is the target video frame, F_i is the video frame to be processed, R_i is the correction matrix derived from the affine form of the target motion parameter matrix, and K_i is the intermediate parameter matrix. When the target video frame cannot be completely filled by the video frame to be processed, R_i can be forcibly modified so that the target video frame is complete: if the left side of the target video frame is not completely filled, the translation component of R_i is correspondingly modified to shift the picture to the left; if both the left and the right of the target video frame are not completely filled, the scale component of R_i is correspondingly modified to enlarge the picture content and fill the frame.
In summary, the present invention provides a video processing method in which a target motion parameter matrix between video frames is determined according to the key points in the video frames and the acquired gyroscope data. The video processing method requires no additional hardware-assisted processing: the video is stabilized using only data acquired by the gyroscope carried by the terminal. Since most mobile phone terminals are currently equipped with gyroscopes, the video processing method provided by the present invention can be applied directly on a mobile phone without additional cost.
It should be understood that, although the steps in the flowcharts shown in the figures of the present specification are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times, and whose execution order is not necessarily sequential; they may be performed in turns or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
Example two
Based on the above embodiments, the present invention further provides a terminal, and a schematic block diagram thereof may be as shown in fig. 8. The terminal comprises a processor, a memory, a network interface, a display screen and a temperature sensor which are connected through a system bus. Wherein the processor of the terminal is configured to provide computing and control capabilities. The memory of the terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the terminal is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a video processing method. The display screen of the terminal can be a liquid crystal display screen or an electronic ink display screen, and the temperature sensor of the terminal is arranged in the terminal in advance and used for detecting the current operating temperature of internal equipment.
It will be understood by those skilled in the art that the block diagram of fig. 8 is a block diagram of only a portion of the structure associated with the inventive arrangements and is not intended to limit the terminals to which the inventive arrangements may be applied, and that a particular terminal may include more or less components than those shown, or may have some components combined, or may have a different arrangement of components.
In one embodiment, a terminal is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor when executing the computer program implementing at least the following steps:
acquiring a video frame to be processed and a first video frame, wherein the first video frame is a previous frame of the video frame to be processed, processing the first video frame, extracting each corner point in the first video frame, and acquiring a key point pair set according to each corner point;
acquiring gyroscope data and a calibration parameter vector, and acquiring a target motion parameter matrix of the video frame to be processed relative to the first video frame according to the gyroscope data, the calibration parameter vector and the key point pair set;
and processing the video frame to be processed according to the target motion parameter matrix to obtain a target video frame corresponding to the video frame to be processed.
Wherein the key point pair set includes a plurality of key point pairs, each key point pair includes a first key point in the first video frame and a second key point corresponding to the first key point in the video frame to be processed, and the obtaining a key point pair set according to each corner point includes:
filtering each corner point to obtain each first key point in the first video frame;
acquiring optical flow from the first video frame to the video frame to be processed;
and acquiring corresponding second key points of the first key points in the video frame to be processed according to the optical flow.
Wherein the video processing method further comprises:
and acquiring the target motion parameter matrix according to the key point pair set.
Wherein the obtaining the target motion parameter matrix according to the key point pair set includes:
respectively acquiring a first coordinate of each first key point in the key point pair set and a second coordinate of each corresponding second key point;
and acquiring the target motion parameter matrix through a preset first optimization function according to the first coordinate and the second coordinate.
Wherein, each component of the calibration parameter vector is respectively a preset camera parameter, a timestamp difference between the camera and the gyroscope, a single-frame imaging time consumption of the camera, and a rotation error of the gyroscope, and the acquiring a target motion parameter matrix of the to-be-processed video frame relative to the first video frame according to the gyroscope data, the calibration parameter vector, and the key point pair set includes:
acquiring a first calibration parameter vector corresponding to the first video frame, and acquiring a second calibration parameter vector corresponding to the video frame to be processed according to the first calibration parameter vector;
and acquiring the target motion parameter matrix according to the second calibration parameter vector.
Wherein the obtaining a second calibration parameter vector corresponding to the video frame to be processed according to the first calibration parameter vector comprises:
filtering the set of key point pairs according to the first calibration parameter vector;
when the number of the filtered key point pairs is larger than or equal to a preset threshold value, processing the first calibration parameter vector to obtain a second calibration parameter vector;
and when the number of the filtered key point pairs is smaller than the preset threshold value, taking the first calibration parameter vector as the second calibration parameter vector.
Wherein said filtering said set of keypoint pairs according to said first calibration parameter vector comprises:
obtaining each gyroscope motion parameter matrix corresponding to each key point pair according to the first calibration parameter vector;
and filtering the key point pair set according to the motion parameter matrixes of the gyroscopes.
Wherein, the obtaining of each gyroscope motion parameter matrix corresponding to each key point pair according to the first calibration parameter vector comprises:
converting the camera time into a gyroscope time according to the timestamp difference in the first calibration parameter vector;
and acquiring gyroscope data corresponding to the camera moment according to the gyroscope moment, and acquiring gyroscope motion parameters respectively corresponding to the key point pairs according to the gyroscope data and the first calibration parameter vector.
Wherein the processing the first calibration parameter vector comprises:
obtaining each loss function mean value corresponding to each parameter searching sub-interval according to a preset loss function, and determining a target parameter searching sub-interval according to each loss function mean value, wherein each parameter searching sub-interval is obtained by dividing a parameter searching total interval corresponding to the calibration parameter vector;
and acquiring the optimal solution of the target parameter searching subinterval as the second calibration parameter vector.
Wherein the processing the video frame to be processed according to the target motion parameter matrix comprises:
converting the target motion parameter matrix into an affine matrix;
smoothing the affine matrix by using a preset filter to obtain an intermediate parameter matrix;
and processing the video frame to be processed according to the target parameter matrix and the intermediate parameter matrix.
EXAMPLE III
The present invention also provides a storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the video processing method described in the first embodiment.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A video processing method, characterized in that the video processing method comprises:
acquiring a video frame to be processed and a first video frame, wherein the first video frame is a previous frame of the video frame to be processed, processing the first video frame, extracting each corner point in the first video frame, and acquiring a key point pair set according to each corner point;
acquiring gyroscope data and a calibration parameter vector, and acquiring a target motion parameter matrix of the video frame to be processed relative to the first video frame according to the gyroscope data, the calibration parameter vector and the key point pair set;
and processing the video frame to be processed according to the target motion parameter matrix to obtain a target video frame corresponding to the video frame to be processed.
2. The video processing method according to claim 1, wherein the key point pair set comprises a plurality of key point pairs, each key point pair comprising a first key point in the first video frame and a corresponding second key point of the first key point in the video frame to be processed, and wherein said acquiring a key point pair set according to the corner points comprises:
filtering each corner point to obtain each first key point in the first video frame;
acquiring optical flow from the first video frame to the video frame to be processed;
and acquiring corresponding second key points of the first key points in the video frame to be processed according to the optical flow.
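The keypoint-pairing step of claim 2 can be sketched as follows. The patent relies on optical flow; as a self-contained illustration, exhaustive patch matching stands in for the flow computation, and the function name, patch size, and search radius are assumptions.

```python
import numpy as np

def track_keypoint(prev, curr, pt, patch=4, search=6):
    """Find a first key point's corresponding second key point in the
    next frame by exhaustive patch matching (sum of squared differences),
    a crude stand-in for the optical flow the method actually uses."""
    y, x = pt
    ref = prev[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_err = pt, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = curr[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]
            err = np.sum((cand - ref) ** 2)   # patch dissimilarity
            if err < best_err:
                best, best_err = (cy, cx), err
    return best

# synthetic pair: the second frame is the first shifted right by 3 pixels
rng = np.random.default_rng(1)
f0 = rng.random((64, 64))
f1 = np.roll(f0, 3, axis=1)
second = track_keypoint(f0, f1, (32, 30))   # second key point for (32, 30)
```

A production implementation would instead use a pyramidal sparse optical flow over all filtered corner points at once.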
3. The video processing method of claim 1, wherein the video processing method further comprises:
and acquiring the target motion parameter matrix according to the key point pair set.
4. The video processing method according to claim 3, wherein said obtaining the target motion parameter matrix according to the set of key point pairs comprises:
respectively acquiring a first coordinate of each first key point in the key point pair set and a second coordinate of each corresponding second key point;
and acquiring the target motion parameter matrix through a preset first optimization function according to the first coordinate and the second coordinate.
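The estimation step of claim 4 can be sketched as follows. The patent's preset first optimization function is not disclosed, so an ordinary least-squares fit of a 2x3 affine motion matrix stands in; the function name and the synthetic data are assumptions.

```python
import numpy as np

def estimate_affine(first_pts, second_pts):
    """Least-squares fit of a 2x3 motion matrix mapping first-frame key
    point coordinates onto second-frame ones; a stand-in for the
    patent's unspecified preset first optimization function."""
    first_pts = np.asarray(first_pts, dtype=float)
    second_pts = np.asarray(second_pts, dtype=float)
    A = np.hstack([first_pts, np.ones((len(first_pts), 1))])  # [x, y, 1] rows
    M, *_ = np.linalg.lstsq(A, second_pts, rcond=None)        # shape (3, 2)
    return M.T                                                # shape (2, 3)

# synthetic correspondences generated by a known affine motion
rng = np.random.default_rng(2)
pts = rng.random((20, 2)) * 100
true_M = np.array([[1.01, 0.02, 3.0], [-0.02, 0.99, -1.5]])
pts2 = pts @ true_M[:, :2].T + true_M[:, 2]
M_hat = estimate_affine(pts, pts2)
```

With noisy real correspondences a robust variant (e.g. iterative reweighting or RANSAC) would replace the plain least-squares solve.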
5. The video processing method according to claim 1, wherein the components of the calibration parameter vector are respectively preset camera parameters, a timestamp difference between the camera and the gyroscope, a single-frame imaging time of the camera, and a rotation error of the gyroscope, and wherein the acquiring the target motion parameter matrix of the video frame to be processed relative to the first video frame according to the gyroscope data, the calibration parameter vector, and the key point pair set comprises:
acquiring a first calibration parameter vector corresponding to the first video frame, and acquiring a second calibration parameter vector corresponding to the video frame to be processed according to the first calibration parameter vector;
and acquiring the target motion parameter matrix according to the second calibration parameter vector.
6. The video processing method according to claim 5, wherein said acquiring a second calibration parameter vector corresponding to the video frame to be processed according to the first calibration parameter vector comprises:
filtering the set of key point pairs according to the first calibration parameter vector;
when the number of the filtered key point pairs is larger than or equal to a preset threshold value, processing the first calibration parameter vector to obtain a second calibration parameter vector;
and when the number of the filtered key point pairs is smaller than the preset threshold value, taking the first calibration parameter vector as the second calibration parameter vector.
7. The video processing method according to claim 6, wherein said filtering the key point pair set according to the first calibration parameter vector comprises:
obtaining each gyroscope motion parameter matrix corresponding to each key point pair according to the first calibration parameter vector;
and filtering the key point pair set according to the motion parameter matrixes of the gyroscopes.
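The filtering step of claim 7 can be sketched as follows. The idea, as the claims describe it, is to keep only key point pairs consistent with the gyroscope-predicted motion; the pixel tolerance, the function name, and the identity-motion test data are assumptions for illustration.

```python
import numpy as np

def filter_pairs(pairs, motion_mats, tol=2.0):
    """Keep only the key point pairs whose second key point lies within
    `tol` pixels of where the per-pair gyroscope motion parameter matrix
    predicts it; the threshold value is an assumption."""
    kept = []
    for (p1, p2), M in zip(pairs, motion_mats):
        pred = M[:, :2] @ np.asarray(p1, dtype=float) + M[:, 2]  # predicted p2
        if np.linalg.norm(pred - p2) <= tol:
            kept.append((p1, p2))
    return kept

# with identity motion, the half-pixel pair survives and the outlier is dropped
identity = np.eye(2, 3)
pairs = [((10.0, 10.0), (10.5, 10.0)),   # consistent with no motion
         ((40.0, 40.0), (55.0, 40.0))]   # 15-pixel outlier
kept = filter_pairs(pairs, [identity, identity])
```

The count of surviving pairs then drives the branch in claim 6: refine the calibration if enough pairs remain, otherwise reuse the first calibration parameter vector.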
8. The video processing method according to claim 7, wherein the obtaining, according to the first calibration parameter vector, each gyroscope motion parameter matrix corresponding to each key point pair respectively comprises:
converting a camera time into a gyroscope time according to the timestamp difference in the first calibration parameter vector;
and acquiring the gyroscope data corresponding to the camera time according to the gyroscope time, and acquiring the gyroscope motion parameters respectively corresponding to the key point pairs according to the gyroscope data and the first calibration parameter vector.
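The clock conversion and gyroscope lookup of claim 8 can be sketched as follows. The small-angle integration, the 100 Hz sample rate, and the 20 ms clock offset are assumptions; a real implementation would use exact rotation composition (e.g. quaternions) and the calibrated rotation error.

```python
import numpy as np

def camera_to_gyro_time(t_cam, timestamp_diff):
    """Map a camera timestamp onto the gyroscope clock using the
    timestamp difference from the calibration parameter vector."""
    return t_cam + timestamp_diff

def rotation_between(gyro_t, gyro_w, t0, t1):
    """Integrate sampled angular velocities (rad/s, small-angle
    approximation) between two gyroscope times into a rotation matrix."""
    mask = (gyro_t >= t0) & (gyro_t < t1)
    dt = np.diff(gyro_t, append=gyro_t[-1] + (gyro_t[-1] - gyro_t[-2]))
    theta = (gyro_w[mask] * dt[mask, None]).sum(axis=0)  # total rotation vector
    wx, wy, wz = theta
    # first-order rotation matrix: identity plus the skew-symmetric part
    return np.eye(3) + np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])

gyro_t = np.arange(0.0, 1.0, 0.01)            # 100 Hz gyroscope samples
gyro_w = np.tile([0.0, 0.0, 0.1], (100, 1))   # constant yaw rate, 0.1 rad/s
t0 = camera_to_gyro_time(0.0, 0.02)           # assumed 20 ms clock offset
R = rotation_between(gyro_t, gyro_w, t0, t0 + 0.5)
```

Under a rolling-shutter model, each key point pair would get its own integration window (hence per-pair gyroscope motion parameter matrices), offset within the frame by the single-frame imaging time.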
9. The video processing method according to claim 6, wherein said processing the first calibration parameter vector comprises:
obtaining a loss function mean value for each parameter search subinterval according to a preset loss function, and determining a target parameter search subinterval according to the loss function mean values, wherein the parameter search subintervals are obtained by dividing a total parameter search interval corresponding to the calibration parameter vector;
and acquiring the optimal solution within the target parameter search subinterval as the second calibration parameter vector.
10. The video processing method according to claim 1, wherein said processing the video frame to be processed according to the target motion parameter matrix comprises:
converting the target motion parameter matrix into an affine matrix;
smoothing the affine matrix by using a preset filter to obtain an intermediate parameter matrix;
and processing the video frame to be processed according to the target motion parameter matrix and the intermediate parameter matrix.
11. A terminal, characterized in that the terminal comprises: a processor and a storage medium communicatively coupled to the processor, the storage medium being adapted to store a plurality of instructions, and the processor being adapted to invoke the instructions in the storage medium to perform the steps of the video processing method according to any one of claims 1-10.
12. A storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the video processing method according to any one of claims 1-10.
CN202010655275.4A 2020-07-09 2020-07-09 Video processing method, terminal and storage medium Active CN113923340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010655275.4A CN113923340B (en) 2020-07-09 2020-07-09 Video processing method, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN113923340A true CN113923340A (en) 2022-01-11
CN113923340B CN113923340B (en) 2023-12-29

Family

ID=79231806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010655275.4A Active CN113923340B (en) 2020-07-09 2020-07-09 Video processing method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113923340B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008005109A (en) * 2006-06-21 2008-01-10 Sony Corp Hand shake correcting method, program for hand shake correcting method, recording medium where program for hand shake correcting method is recorded, and hand shake correcting device
US20120069203A1 (en) * 2010-09-21 2012-03-22 Voss Shane D Generating a stabilized video sequence based on motion sensor data
WO2015143892A1 (en) * 2014-03-25 2015-10-01 Tencent Technology (Shenzhen) Company Limited Video processing method, device and system
CN106878612A * 2017-01-05 2017-06-20 中国电子科技集团公司第五十四研究所 Video stabilization method based on online total variation optimization
CN107241544A (en) * 2016-03-28 2017-10-10 展讯通信(天津)有限公司 Video image stabilization method, device and camera shooting terminal
WO2018095262A1 (en) * 2016-11-24 2018-05-31 腾讯科技(深圳)有限公司 Video stabilization method and device
CN108366201A * 2018-02-12 2018-08-03 天津天地伟业信息系统集成有限公司 Gyroscope-based electronic image stabilization method
CN109089015A (en) * 2018-09-19 2018-12-25 厦门美图之家科技有限公司 Video stabilization display methods and device
CN109618103A * 2019-01-28 2019-04-12 深圳慧源创新科技有限公司 Anti-shake method for unmanned aerial vehicle image-transmission video, and unmanned aerial vehicle
CN109977775A * 2019-02-25 2019-07-05 腾讯科技(深圳)有限公司 Key point detection method, apparatus, device, and readable storage medium
CN110519507A * 2019-07-23 2019-11-29 深圳岚锋创视网络科技有限公司 Camera lens smoothing processing method, device, and portable terminal


Non-Patent Citations (1)

Title
ZHAO Sai et al.: "Electronic image stabilization algorithm based on a MEMS gyroscope", vol. 48, no. 3 *

Also Published As

Publication number Publication date
CN113923340B (en) 2023-12-29

Similar Documents

Publication Publication Date Title
CN110430365B (en) Anti-shake method, anti-shake device, computer equipment and storage medium
CN111246089B (en) Jitter compensation method and apparatus, electronic device, computer-readable storage medium
KR102509466B1 (en) Optical image stabilization movement to create a super-resolution image of a scene
CN112017216B (en) Image processing method, device, computer readable storage medium and computer equipment
CN110800282B (en) Holder adjusting method, holder adjusting device, mobile platform and medium
CN107566688B (en) Convolutional neural network-based video anti-shake method and device and image alignment device
US11770613B2 (en) Anti-shake image processing method, apparatus, electronic device and storage medium
WO2013151873A1 (en) Joint video stabilization and rolling shutter correction on a generic platform
CN113556464B (en) Shooting method and device and electronic equipment
CN110866486A (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN113875219A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113286084B (en) Terminal image acquisition method and device, storage medium and terminal
CN114390188B (en) Image processing method and electronic equipment
CN115705651A (en) Video motion estimation method, device, equipment and computer readable storage medium
JP6282133B2 (en) Imaging device, control method thereof, and control program
CN113923340B (en) Video processing method, terminal and storage medium
CN114449130B (en) Multi-camera video fusion method and system
CN112804444B (en) Video processing method and device, computing equipment and storage medium
CN113438409B (en) Delay calibration method, delay calibration device, computer equipment and storage medium
CN116095484B (en) Video anti-shake method and device and electronic equipment
CN114979456B (en) Anti-shake processing method and device for video data, computer equipment and storage medium
CN113807124B (en) Image processing method, device, storage medium and electronic equipment
JP2018072941A (en) Image processing device, image processing method, program, and storage medium
CN115150549A (en) Imaging anti-shake method, imaging anti-shake apparatus, photographing device, and readable storage medium
CN116934654A (en) Image ambiguity determining method and related equipment thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant