CN115396597A - Video anti-shake splicing method and related equipment

Info

Publication number
CN115396597A
CN115396597A (application CN202210979037.8A)
Authority
CN
China
Prior art keywords
camera
offset
shake
video frame
shaking
Prior art date
Legal status
Pending
Application number
CN202210979037.8A
Other languages
Chinese (zh)
Inventor
余家忠
李飞
靳志娟
刘昱含
郭晓伟
陈欣
孙宁波
Current Assignee
China Tower Co Ltd
Original Assignee
China Tower Co Ltd
Priority date
Filing date
Publication date
Application filed by China Tower Co Ltd filed Critical China Tower Co Ltd
Priority to CN202210979037.8A
Publication of CN115396597A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)

Abstract

The application provides a video anti-shake splicing method and related equipment. During shooting, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera are acquired; the video frames shot by the first camera and the second camera at the same moment are spliced according to a splicing offset to obtain an intermediate spliced video frame; and the intermediate spliced video frame is adjusted according to the first shake offset and the second shake offset to obtain a target video frame. Because an inertial measurement unit is mounted on each camera, the shake information of the cameras is acquired directly and the images do not have to be processed by a software anti-shake algorithm, which saves anti-shake computation time and improves the real-time performance of the video anti-shake splicing method.

Description

Video anti-shake splicing method and related equipment
Technical Field
The application relates to the technical field of image processing, in particular to a video anti-shake splicing method and related equipment.
Background
In existing video anti-shake splicing methods, an anti-shake algorithm must first be applied to every frame of the two videos to be spliced, and only then are the frames spliced. Because this per-frame anti-shake processing requires a large amount of computation, existing video anti-shake splicing methods also require a large amount of computation time; in other words, they suffer from poor real-time performance.
Disclosure of Invention
The embodiments of the present application provide a video anti-shake splicing method and related equipment, to solve the problem that existing video anti-shake splicing methods require a large amount of computation and therefore a large amount of computation time, that is, the problem of poor real-time performance.
In a first aspect, an embodiment of the present application provides a video anti-shake splicing method, where the video anti-shake splicing method includes:
acquiring, in a shooting process, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera, wherein the first shake offset information comprises a first shake offset corresponding to each video frame shot by the first camera, and the second shake offset information comprises a second shake offset corresponding to each video frame shot by the second camera;
splicing the video frames shot by the first camera and the second camera at the same moment according to a splicing offset to obtain an intermediate spliced video frame, wherein the splicing offset represents the offset between a video frame shot by the first camera and a video frame shot by the second camera in the absence of shake;
and performing splicing adjustment on the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain a target video frame.
In a second aspect, an embodiment of the present application provides a video anti-shake splicing apparatus, which includes:
an acquisition module, configured to acquire, in a shooting process, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera, wherein the first shake offset information comprises a first shake offset corresponding to each video frame shot by the first camera, and the second shake offset information comprises a second shake offset corresponding to each video frame shot by the second camera;
a splicing module, configured to splice the video frames shot by the first camera and the second camera at the same moment according to a splicing offset to obtain an intermediate spliced video frame, wherein the splicing offset represents the offset between a video frame shot by the first camera and a video frame shot by the second camera in the absence of shake;
and an adjusting module, configured to perform splicing adjustment on the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain a target video frame.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, implements the steps of the above video anti-shake splicing method.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the video anti-shake splicing method.
In a fifth aspect, an embodiment of the present application provides a chip, which includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement the steps of the video anti-shake splicing method.
According to the embodiments of the application, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera are acquired during shooting, the first shake offset information comprising a first shake offset corresponding to each video frame shot by the first camera and the second shake offset information comprising a second shake offset corresponding to each video frame shot by the second camera; the video frames shot by the first camera and the second camera at the same moment are spliced according to a splicing offset to obtain an intermediate spliced video frame, the splicing offset representing the offset between a video frame shot by the first camera and a video frame shot by the second camera in the absence of shake; and the intermediate spliced video frame is adjusted according to the first shake offset and the second shake offset to obtain a target video frame. Because an inertial measurement unit is mounted on each camera and the shake information of the camera is acquired directly, applying a software anti-shake algorithm to the images is avoided, which saves anti-shake computation time and improves the real-time performance of the video anti-shake splicing method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the description of the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the description below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings may be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart of a video anti-shake stitching method provided in an embodiment of the present application;
fig. 2 is a structural diagram of a video anti-shake stitching apparatus provided in an embodiment of the present application;
fig. 3 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a block diagram of another electronic device provided in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. As used in this application, the terms "first," "second," and the like, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. "upper", "lower", "left", "right", and the like are used only to indicate relative positional relationships, and when the absolute position of the object to be described is changed, the relative positional relationships are changed accordingly.
As shown in fig. 1, an embodiment of the present application provides a flowchart of a video anti-shake splicing method, where the video anti-shake splicing method includes:
Step 101, acquiring, in a shooting process, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera, wherein the first shake offset information comprises a first shake offset corresponding to each video frame shot by the first camera, and the second shake offset information comprises a second shake offset corresponding to each video frame shot by the second camera;
optionally, in some embodiments, the number of the cameras may be two or three, and is not limited here. When the number of the cameras is three, the images shot by two of the cameras can be spliced, and then the image shot by the other camera is spliced with the image obtained by splicing the two cameras to obtain a final image; when the number of the cameras is four, the four cameras are divided into two groups, one group of two cameras are used for splicing images shot by the cameras in the same group, and then the spliced images are spliced. And by analogy, no matter the number of the cameras, the two cameras are used as a group to splice the shot images, if the number of the cameras is odd, the rest one camera is used as a group separately, and finally, the images among the groups are spliced according to the method.
Furthermore, although the number of cameras is not limited, the cameras should be installed at the same horizontal height as far as possible, the distances between adjacent cameras should be set to be the same, and if a plurality of cameras are arranged around the same tower, the angles between adjacent cameras should also be the same.
Furthermore, when the plurality of cameras are grouped according to the method, the cameras in the same group are two adjacent cameras.
Optionally, in some embodiments, the number of inertial measurement units installed on each camera may be one or more, which is not further limited herein. When several inertial measurement units are installed on one camera, one of them can work while the others remain in a standby state and take over when the working unit fails; alternatively, several inertial measurement units can work simultaneously, and the data they obtain are merged and calibrated before the merged and calibrated data are used for anti-shake processing of the images shot by that camera.
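For illustration only, the merging and calibration of readings from two redundant inertial measurement units could be as simple as an inverse-variance weighted average; the following minimal Python sketch assumes per-unit noise levels (sigma_a, sigma_b) and example readings that are not specified in this application:

import numpy as np

def fuse_imu_readings(rate_a, rate_b, sigma_a, sigma_b):
    """Combine two noisy angular-rate vectors (rad/s) into one calibrated estimate."""
    w_a = 1.0 / sigma_a**2          # weight of IMU A: inverse of its noise variance
    w_b = 1.0 / sigma_b**2          # weight of IMU B
    return (w_a * np.asarray(rate_a) + w_b * np.asarray(rate_b)) / (w_a + w_b)

# Example: the fused value leans toward the quieter unit (assumed values).
print(fuse_imu_readings([0.021, -0.003, 0.010], [0.019, -0.002, 0.012],
                        sigma_a=0.01, sigma_b=0.02))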
It should be understood that camera shake is a process, so the shake offsets of the camera detected by the inertial measurement unit at different moments may be different or the same; the anti-shake processing of each image is therefore performed according to the camera's shake offset at the corresponding moment. Further, the shake offset is a space vector.
Note that the shake offset detected for the camera is the deviation from the camera's position when it is not shaking.
Step 102, splicing the video frames shot by the first camera and the second camera at the same moment according to a splicing offset to obtain an intermediate spliced video frame, wherein the splicing offset represents the offset between the video frame shot by the first camera and the video frame shot by the second camera in the absence of shake;
It should be further noted that the splicing offset indicates the offset between the two video frames to be spliced when there is no shake, that is, the offset between video frames shot by two adjacent cameras at the same moment. The splicing offset may arise from errors generated when the cameras are installed, from positional shifts of a camera caused by aging of its locking device after installation, and from positional shifts of a camera caused by severe weather.
It should be understood that the intermediate spliced video frame is not a single frame image but is composed of two frame images, which may be completely or only partially aligned.
Step 103, splicing and adjusting the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain a target video frame.
It should be noted that performing splicing adjustment on the intermediate spliced video frame according to the first shake offset and the second shake offset means that, after the splicing offset has been applied, the two completely or partially aligned frame images are adjusted again according to the detected shake offsets.
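As an illustration of steps 102 and 103 (not the claimed implementation), the following Python sketch places two equally sized frames on a shared canvas according to an assumed non-negative splicing offset and then shifts each frame back by its own shake offset; the margin value and the simple overwriting of the overlap region are assumptions:

import numpy as np

MARGIN = 20  # padding so that small shake corrections cannot push a frame off the canvas

def stitch_with_shake_adjust(frame1, frame2, stitch_dx, stitch_dy,
                             shake1=(0, 0), shake2=(0, 0)):
    """Assumes stitch_dx, stitch_dy >= 0, |shake| < MARGIN, and frames of equal size."""
    h, w = frame1.shape[:2]
    canvas = np.zeros((h + stitch_dy + 2 * MARGIN, w + stitch_dx + 2 * MARGIN, 3),
                      dtype=frame1.dtype)

    def paste(frame, x, y):
        canvas[y:y + h, x:x + w] = frame   # the later paste simply overwrites the overlap

    # base positions come from the splicing offset; each frame is then shifted back
    # by its own detected shake offset so it lands where an unshaken camera would image it
    paste(frame1, MARGIN - shake1[0], MARGIN - shake1[1])
    paste(frame2, MARGIN + stitch_dx - shake2[0], MARGIN + stitch_dy - shake2[1])
    return canvas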
In the embodiment of the application, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera are acquired in the shooting process, the first shake offset information comprising a first shake offset corresponding to each video frame shot by the first camera and the second shake offset information comprising a second shake offset corresponding to each video frame shot by the second camera; the video frames shot by the first camera and the second camera at the same moment are spliced according to a splicing offset to obtain an intermediate spliced video frame, the splicing offset representing the offset between a video frame shot by the first camera and a video frame shot by the second camera in the absence of shake; and the intermediate spliced video frame is adjusted according to the first shake offset and the second shake offset to obtain a target video frame. Because an inertial measurement unit is mounted on each camera and the shake information of the camera is acquired directly, applying a software anti-shake algorithm to the images is avoided, which saves anti-shake computation time and improves the real-time performance of the video anti-shake splicing method.
Optionally, in some embodiments, the obtaining first shake deviation information obtained based on shake detection of the first camera by the first inertial measurement unit and second shake deviation information obtained based on shake detection of the second camera by the second inertial measurement unit in the shooting process includes:
acquiring first initial shake offset information obtained based on shake detection of a first inertial measurement unit on a first camera and second initial shake offset information obtained based on shake detection of a second inertial measurement unit on a second camera in a shooting process;
and sequentially performing Kalman filtering, Euler angle generation, quaternion conversion, Rodrigues transformation and vector extraction on the first initial shake offset information and the second initial shake offset information to obtain the first shake offset information and the second shake offset information.
It should be noted that, because the inertial measurement unit contains a gyroscope and an accelerometer, and a relatively high noise error exists when the gyroscope or the accelerometer is used for measurement alone, Kalman filtering is performed on the angular offsets measured by the two inertial components so that the shake offset can be relatively accurate.
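Purely for illustration, a minimal one-dimensional Python sketch of this kind of gyroscope/accelerometer Kalman fusion (corresponding to the state and measurement update derived below with A = 1, B = T_s and H = 1; all noise values are assumptions rather than parameters given in this application) is:

class AngleKalman:
    """Fuses a gyroscope rate and an accelerometer angle into one angle estimate."""
    def __init__(self, q_gyro=1e-4, r_acce=1e-2):
        self.x = 0.0      # estimated angle
        self.p = 1.0      # estimate covariance
        self.q = q_gyro   # process-noise covariance (gyroscope)
        self.r = r_acce   # measurement-noise covariance (accelerometer)

    def update(self, gyro_rate, acce_angle, dt):
        # prediction: integrate the gyro rate over the sampling period
        x_pred = self.x + gyro_rate * dt
        p_pred = self.p + self.q
        # measurement update: correct the prediction with the accelerometer angle
        gain = p_pred / (p_pred + self.r)
        self.x = x_pred + gain * (acce_angle - x_pred)
        self.p = (1.0 - gain) * p_pred
        return self.x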
the Kalman filtering needs to construct a corresponding state measurement equation and update the measurement equation as follows:
Figure BDA0003799592630000061
Figure BDA0003799592630000062
in the formula (1-1) and the formula (1-2), ω gyro Is the output angular velocity, omega, of the gyroscope g Is the noise of the gyroscope and is,
Figure BDA0003799592630000063
is the angular value, omega, of the accelerometer a Is the accelerometer measurement noise, n is the gyroscope drift error, ω g And omega a Independent of each other, set T s For the sampling period of the filter, then the state equation and the measurement equation are expressed as:
Figure BDA0003799592630000064
V i (k)=X(k)+ω a (k) Formulas (1-4);
the Kalman filtering provided by the invention adopts autoregressive circular filtering, and in order to estimate the true angle of the k moment, the angle value of the k-1 moment is used as a reference, and the optimal angle value is finally estimated through the predicted angle value of the k moment and the Gaussian noise covariance matrix of the k moment. Let Q be the noise covariance matrix, R be the covariance matrix of the measurement error, and the form of Q and R matrix is as follows:
Figure BDA0003799592630000071
in the formula (1-5), q acce And q is gyro Representing the covariance of the accelerometer and gyroscope measurements, respectively; the autoregressive state equation of equations (1-3) can be re-expressed as:
x (k | k-1) = AX (k-1 non-conducting component k-1) + BU (k) formula (1-6);
in the formula (1-6), A and B are parameters set by the system, and U (k) is a control quantity of the system at the time k. Let X (k-1 purple k-1) be the optimal result at time k-1. The formula of the covariance P is:
P(k|k-1)=AP(k-1|k-1)A T + Q formula (1-7);
in the formula, P (k-1 non-smoke k-1 is the covariance corresponding to X (k-1), and Q is the covariance in the system process, and the state of the system is updated by combining the two formulas to obtain the optimized estimated value X (k | k):
x (K | K) = X (K | K-1) + K (K) (Z (K) -HX (K | K-1) formula (1-8);
where H is the system parameter, g (k) is the Kalman gain, and Z (k) is the system measurement. The kalman gain is then expressed as:
g(k)=P(k|k-1)H t /(HP(k|k-1)H t + R) formula (1-9);
thus, the system calculates the optimal estimated value X (k | k) and continuously performs kalman iteration, and updates the covariance P:
p (k | k) = (I-g (k) P (k | k-1) formula (1-10);
errors caused by hardware of inertia components in the inertia measurement unit due to noise are offset according to Kalman gain, and the accuracy of detecting the instantaneous jitter offset of the camera is improved.
In the embodiment of the present application, the vector extraction from the inertial measurement is mainly divided into four parts. (1) The combined gyroscope, accelerometer and magnetometer measure the displacement angle of the object as it moves, and Euler angles are used to represent the transformation of the coordinate system. (2) The Euler angles are converted into a rotation matrix, and the rotation matrix is converted into a quaternion. (3) A Rodrigues transformation is performed on the converted quaternion to decompose it into a vector and a rotation angle. (4) The vector is extracted. Because Euler angles suffer from the serious drawbacks of gimbal lock and a large computational load, and to facilitate the coordinate-system conversion, the Euler angles are first converted into a rotation matrix; and because a quaternion is used as the data type for extracting the vector with the Rodrigues transformation formula, the rotation matrix then needs to be converted once more. Let an arbitrary quaternion be
q = q_0 + q_1·i + q_2·j + q_3·k, with vector part v = (q_1, q_2, q_3).
The Rodrigues transformation decomposes the quaternion q into a rotation axis and a rotation angle:
θ = 2·arccos(q_0/||q||),  u = v/||v||    formula (1-11);
In formula (1-11), ||q|| = sqrt(q_0² + q_1² + q_2² + q_3²) and ||v|| = sqrt(q_1² + q_2² + q_3²).
Thus, when q·q ≠ 0, for any quaternion there is:
q = ||q||·(cos(θ/2) + u·sin(θ/2))    formula (1-12);
wherein the vector θ·u is the final shake offset vector.
It should be noted that, in the shooting process, first initial shake offset information obtained by the first inertial measurement unit detecting shake of the first camera and second initial shake offset information obtained by the second inertial measurement unit detecting shake of the second camera are acquired; Kalman filtering, Euler angle generation, quaternion conversion, Rodrigues transformation and vector extraction are then performed in turn on the first initial shake offset information and the second initial shake offset information to obtain the first shake offset information and the second shake offset information. In this way, relatively accurate data can be obtained when the inertial measurement units detect the shake offset information, which improves the splicing precision of the video anti-shake splicing method.
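The chain described above can be illustrated with the following Python sketch; the ZYX Euler convention and the function names are assumptions, since this application does not fix them:

import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert ZYX Euler angles (rad) to a quaternion (q0, q1, q2, q3)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    q0 = cr * cp * cy + sr * sp * sy
    q1 = sr * cp * cy - cr * sp * sy
    q2 = cr * sp * cy + sr * cp * sy
    q3 = cr * cp * sy - sr * sp * cy
    return q0, q1, q2, q3

def quaternion_to_rotation_vector(q0, q1, q2, q3):
    """Rodrigues-style decomposition into rotation angle times unit axis."""
    norm_v = math.sqrt(q1 * q1 + q2 * q2 + q3 * q3)
    if norm_v < 1e-12:
        return (0.0, 0.0, 0.0)                # no rotation, zero shake vector
    angle = 2.0 * math.atan2(norm_v, q0)      # rotation angle
    return tuple(angle * c / norm_v for c in (q1, q2, q3))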
Optionally, in some embodiments, before the video frame shot by the first camera and the video frame shot by the second camera at the same time are spliced according to the splicing offset to obtain an intermediate spliced video frame, the video anti-shake splicing method further includes:
acquiring corner points of the video frames shot by the first camera and the second camera at the same moment;
it should be noted that, for better obtaining corner information, for example, in some embodiments, the corners of the video frames may be obtained through a FAST algorithm; further, the steps of obtaining the corner of the video frame by the FAST algorithm are as follows: and (1) setting a threshold value. For comparing whether the preliminary points coincide with candidate points for the corner point. And (2) constructing a moving window. Selecting a window center point, wherein the detection window with the radius of 3 is a region composed of 16 pixels, and comparing the 16 pixels with the center point pixel comparison value. (3) The pixels of the candidate feature points are compared to the constructed surrounding area. The algorithm flow is that the pixels at four positions 1,5,9 and 13 in the detection area are checked by using a position method in the figure, the pixels at the position 1 and the position 9 are compared by using a diagonal detection method, and when the pixels exceed a threshold value, the other two diagonal pixels are compared. For example, if Q is a corner point, the value of at least 3 of the four surrounding pixels is greater than the specified threshold. If so, all points within the circle are detected. If not satisfied to discard directly. (4) And carrying out non-maximum suppression on the extracted corner points, and finally outputting corner point information.
Optionally, in some embodiments, the corner points may also be detected by other corner point detection algorithms, such as harris corner point detection algorithm or Shi-Tomasi corner point detection algorithm.
Alternatively, the number of detected corner points may be five hundred, or may be another number, and is not further limited herein.
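As a hedged illustration of this corner-acquisition step, OpenCV's FAST detector can be used roughly as follows; the threshold value and the strategy of keeping the strongest five hundred responses are assumptions:

import cv2

def detect_corners(gray_frame, threshold=20, max_corners=500):
    fast = cv2.FastFeatureDetector_create(threshold=threshold, nonmaxSuppression=True)
    keypoints = fast.detect(gray_frame, None)
    # keep the strongest responses so both frames yield a comparable number of corners
    keypoints = sorted(keypoints, key=lambda kp: kp.response, reverse=True)[:max_corners]
    return keypoints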
Determining identification information of the corner according to the relation between the corner and pixel blocks around the corner, wherein the identification information comprises pixel coordinates of the corner;
it should be noted that, in order to better distinguish the detected corner points, for example, in some embodiments, a BRIEF descriptor algorithm is used to describe the detected corner points, and the specific steps may be: and computing brief description of the corner point to generate 512-bit descriptor. In order to keep rigid invariance of image information, a brief descriptor selects a feature point as a center, and under a set window, pixel sums in the window are randomly compared between a sub-window and a neighborhood window to generate a binary descriptor.
Matching the identification information of the corner points of the video frame shot by the first camera and the video frame shot by the second camera at the same moment;
it will be appreciated that if there are overlapping portions of two video frames, then there are corners with the same descriptors, which are matched.
And acquiring the splicing offset between the video frames based on the identification information of the corner points successfully matched.
Optionally, in some embodiments, matching the identification information of the corner points of the video frame shot by the first camera and the video frame shot by the second camera at the same time includes:
video frames shot by the first camera and the second camera at the same moment are respectively a first video frame and a second video frame;
and sequentially traversing and matching the identification information of all the corner points of the first video frame with the identification information of the corner points in the second video frame, if the matching is successful, storing the identification information of the corner points, and if the matching is failed, deleting the identification information of the corner points.
It should be noted that the descriptors of the corner points in the first video frame are sorted, then the descriptors in the first video frame are selected in sequence, the selected descriptors are matched with the descriptors in the second video frame in a traversal manner, the descriptors which are successfully matched are stored, and if all the descriptors in the second video frame are matched in a traversal manner, or if the descriptors which are successfully matched cannot be found, the descriptors are deleted from the sequence.
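For illustration, this traversal matching of binary descriptors can be sketched with OpenCV as follows; ORB is used here as a stand-in for the FAST-plus-BRIEF combination in the text (its descriptors are BRIEF-like binary strings), which is an assumption rather than the exact implementation of this application:

import cv2

def match_corners(gray1, gray2, max_features=500):
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    # Hamming distance suits binary descriptors; crossCheck keeps only mutual best
    # matches and drops corners with no counterpart, mirroring the keep/delete rule above
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches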
Optionally, in some embodiments, obtaining a stitching offset between video frames based on the identification information of the corner points successfully matched includes:
calculating a first distance between any two corner points in the first video frame and a second distance between two corner points in the second video frame which is matched successfully;
determining the imaging ratio of the first video frame and the second video frame according to the first distance and the second distance;
resizing the first video frame and the second video frame to be consistent based on the imaging ratio;
when the sizes of the first video frame and the second video frame are consistent, calculating the splicing offset of the first video frame and the second video frame according to the Euclidean distance between the corner points successfully matched between the first video frame and the second video frame.
It should be noted that, by the above method, the splicing offset generated when the cameras are installed, the splicing offset caused by positional shifts of a camera due to aging of its locking device after installation, and the splicing offset caused by positional shifts of a camera due to severe weather can all be obtained, so that the video frames shot by two adjacent cameras are spliced according to this splicing offset, which improves the video splicing precision.
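A minimal sketch of this scale-and-offset estimation, under the assumption that the matched corner coordinates are already paired in the same order and that at least two pairs are available, might look like:

import numpy as np

def estimate_stitch_offset(pts1, pts2):
    """pts1, pts2: (N, 2) arrays of matched corner coordinates, N >= 2, same order."""
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    # first distance (in frame 1) and second distance (in frame 2) of the same matched pair
    d1 = np.linalg.norm(pts1[0] - pts1[1])
    d2 = np.linalg.norm(pts2[0] - pts2[1])
    ratio = d1 / d2                              # imaging ratio between the two frames
    pts2_scaled = pts2 * ratio                   # bring frame 2 coordinates to frame 1's scale
    offset = (pts1 - pts2_scaled).mean(axis=0)   # average displacement of matched corners
    return ratio, offset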
Optionally, in some embodiments, performing splicing adjustment on the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain the target video frame includes:
performing Kalman filtering processing on the first shaking offset and the second shaking offset to obtain a prediction offset;
and splicing and adjusting the middle spliced video frame according to the predicted offset.
In the embodiment of the application, because the imaging rate of the camera is 25 fps while the detection rate of the inertial measurement unit is about 900 fps, roughly 40 times higher, offset prediction with a Kalman filter is performed on the shake offsets in order to better fuse the two kinds of data. The Kalman filtering algorithm involves two phases: prediction and measurement update.
The Kalman filter equations of the prediction stage are:
X(t|t-1) = F_t·X(t-1|t-1) + B_t·U_t    formula (2-1);
P(t|t-1) = F_t·P(t-1|t-1)·F_t^T + Q_t    formula (2-2);
wherein F_t is the amount of image detection shift at time t, B_t is the historical detection parameter, and Q_t is the process-noise covariance matrix associated with the noise control input.
To facilitate the derivation and understanding of the formulas, the meaning represented by each symbol is explained as follows: υ_fused → X(t|t) is the state vector after data fusion, υ_1 → X(t|t-1) is the data update vector before fusion, σ_fused² → P(t|t) is the covariance matrix after data fusion, σ_1² → P(t|t-1) is the covariance matrix before data fusion, υ_2 → Z_t is the measurement vector, σ_2² → R_t is the sum of the uncertainty matrices associated with the noise measurement, and H → H_t is the transformation matrix used to map the state vector parameters into the measurement domain.
In each measurement stage, the information input sources are the splicing offset prediction for the first video frame and the second video frame and the inertial measurement unit arranged in the camera, and the optimal image splicing offset is estimated by combining these two kinds of information. Owing to the noise uncertainty of the acceleration from time t = 0 to time t = 1, the accuracy of the estimate increases correspondingly compared with t = 0; at t = 1, the still-image splicing measurement value at the same moment is used. The measurement update equation of the Kalman filter algorithm is given by the predicted Gaussian equation, represented as a probability density function:
y_1(r; υ_1, σ_1) = (1/√(2π·σ_1²))·exp(−(r − υ_1)²/(2σ_1²))    formula (2-6);
y_2(r; υ_2, σ_2) = (1/√(2π·σ_2²))·exp(−(r − υ_2)²/(2σ_2²))    formula (2-7);
Multiplying the probability density functions of formulas (2-6) and (2-7), the new probability density function is:
y_fused(r; υ_1, σ_1, υ_2, σ_2) = y_1(r; υ_1, σ_1)·y_2(r; υ_2, σ_2)    formula (2-8);
The quadratic term of the exponent is expanded and the whole expression is rewritten in Gaussian form:
y_fused(r; υ_fused, σ_fused) = (1/√(2π·σ_fused²))·exp(−(r − υ_fused)²/(2σ_fused²))    formula (2-9);
wherein
υ_fused = υ_1 + σ_1²·(υ_2 − υ_1)/(σ_1² + σ_2²)    formula (2-10);
σ_fused² = σ_1² − σ_1⁴/(σ_1² + σ_2²)    formula (2-11).
in the embodiment of the application, two equations of prediction and measurement are adopted to represent the measurement updating step of the Kalman filtering algorithm. And mapping the predicted and measured values into the same domain, taking the splicing offset value successfully spliced for the first time as an initialization offset value, directly taking the initialization offset value as an initial value of an inertial measurement unit in inertial measurement, and measuring the shaking offset value at the frequency of 900 FPS/s. In order to multiply the prediction and the measured probability density function, the prediction needs to be mapped to the measurement domain using the transformation matrix Ht. Assuming y2 represents the probability distribution from the inertial measurement unit, the spatial prediction probability density function y1 is obtained by scaling the function to c. Thus, equations (2-6) and (2-7) are rewritten as:
y_1(r; H·υ_1, H·σ_1) = (1/√(2π·H²·σ_1²))·exp(−(r − H·υ_1)²/(2·H²·σ_1²))    formula (2-12);
y_2(r; υ_2, σ_2) = (1/√(2π·σ_2²))·exp(−(r − υ_2)²/(2σ_2²))    formula (2-13);
According to the previous derivation, the probability density distribution defines the following equation in the measurement domain:
H·υ_fused = H·υ_1 + H²·σ_1²·(υ_2 − H·υ_1)/(H²·σ_1² + σ_2²)    formula (2-14);
Substituting H·υ_1 and H·σ_1 and simplifying, we obtain:
υ_fused = υ_1 + K·(υ_2 − H·υ_1)    formula (2-15);
Similarly, the fused variance estimate becomes:
σ_fused² = σ_1² − K·H·σ_1²    formula (2-16);
Comparing this scalar form with the standard vector form used in the Kalman filter algorithm, the Kalman gain K is given as:
K = H·σ_1²/(H²·σ_1² + σ_2²)    formula (2-17);
the offset fused kalman filter equation can be expressed as:
Figure BDA0003799592630000133
Figure BDA0003799592630000134
Figure BDA0003799592630000135
Figure BDA0003799592630000136
P t|t =P t|t-1 -K t H t P t|t-1 formulas (2-18);
thereby, the prediction offset P at the time t is obtained t|t And calibrating the jitter of the camera caused by the wind resistance of the high tower according to the fused predicted offset.
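To make the scalar fusion above concrete, a minimal Python sketch is given below; H, the variance values and the example numbers are illustrative assumptions, not parameters of this application:

def fuse_offsets(v1, p1, v2, p2, h=1.0):
    """v1/p1: predicted offset and its variance; v2/p2: image measurement and its variance."""
    k = h * p1 / (h * h * p1 + p2)       # Kalman gain K, cf. formula (2-17)
    v_fused = v1 + k * (v2 - h * v1)     # fused offset, cf. formula (2-15)
    p_fused = p1 - k * h * p1            # fused variance, cf. formula (2-16)
    return v_fused, p_fused

# Example: the IMU predicts a 6-pixel drift, the image match measures 5.2 pixels.
print(fuse_offsets(v1=6.0, p1=4.0, v2=5.2, p2=1.0))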
As shown in fig. 2, an embodiment of the present application provides a structure diagram of a video anti-shake splicing apparatus, where the video anti-shake splicing apparatus includes:
an acquisition module 201, configured to acquire, in a shooting process, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera, wherein the first shake offset information comprises a first shake offset corresponding to each video frame shot by the first camera, and the second shake offset information comprises a second shake offset corresponding to each video frame shot by the second camera;
a splicing module 202, configured to splice the video frames shot by the first camera and the second camera at the same moment according to a splicing offset to obtain an intermediate spliced video frame, wherein the splicing offset represents the offset between a video frame shot by the first camera and a video frame shot by the second camera in the absence of shake;
and an adjusting module 203, configured to perform splicing adjustment on the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain a target video frame.
The video anti-shake splicing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, and the embodiments of the present application are not particularly limited.
As shown in fig. 3, fig. 3 is a structural diagram of an electronic device according to an embodiment of the present disclosure, and includes a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and executable on the processor 301, where the program or the instruction is executed by the processor 301 to implement each process of the embodiment of the video anti-shake splicing method, and can achieve the same technical effect, and is not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
As shown in fig. 4, fig. 4 is a structural diagram of another electronic device provided in the embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or combine some components, or arrange different components, and thus, the description is omitted here.
A processor 410, configured to perform the following operations: acquiring, in a shooting process, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera, wherein the first shake offset information comprises a first shake offset corresponding to each video frame shot by the first camera, and the second shake offset information comprises a second shake offset corresponding to each video frame shot by the second camera; splicing the video frames shot by the first camera and the second camera at the same moment according to a splicing offset to obtain an intermediate spliced video frame, wherein the splicing offset represents the offset between the video frame shot by the first camera and the video frame shot by the second camera in the absence of shake; and performing splicing adjustment on the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain a target video frame.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the video anti-shake splicing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the embodiment of the video anti-shake splicing method and achieve the same technical effect; to avoid repetition, details are not described here again.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application also provide a computer program product, stored in a non-volatile storage medium, configured to be executed by at least one processor to implement the steps of the above method.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A video anti-shake splicing method is characterized by comprising the following steps:
acquiring first shake offset information obtained based on shake detection of a first camera by a first inertia measurement unit and second shake offset information obtained based on shake detection of a second camera by a second inertia measurement unit in a shooting process, wherein the first shake offset information comprises a first shake offset corresponding to each frame of video shot by the first camera, and the second shake offset information comprises a second shake offset corresponding to each frame of video shot by the second camera;
splicing the video frames shot by the first camera and the second camera at the same moment according to a splicing offset to obtain an intermediate spliced video frame, wherein the splicing offset represents the offset between the video frame shot by the first camera and the video frame shot by the second camera under the condition of no shake;
and splicing and adjusting the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain a target video frame.
2. The video anti-shake splicing method according to claim 1, wherein the obtaining of the first shake offset information obtained based on shake detection of the first camera by the first inertial measurement unit and the second shake offset information obtained based on shake detection of the second camera by the second inertial measurement unit in the shooting process comprises:
acquiring first initial shake offset information obtained based on shake detection of a first inertial measurement unit on a first camera and second initial shake offset information obtained based on shake detection of a second inertial measurement unit on a second camera in a shooting process;
and sequentially performing Kalman filtering, Euler angle generation, quaternion conversion, Rodrigues transformation and vector extraction on the first initial shake offset information and the second initial shake offset information to obtain the first shake offset information and the second shake offset information.
3. The video anti-shake splicing method according to claim 1, wherein before the video frames shot by the first camera and the second camera at the same time are spliced according to the splicing offset to obtain the intermediate spliced video frame, the method further comprises:
acquiring corner points of the video frames shot by the first camera and the second camera at the same moment;
determining identification information of the corner according to the relation between the corner and pixel blocks around the corner, wherein the identification information comprises pixel coordinates of the corner;
matching the identification information of the corner points of the video frame shot by the first camera and the video frame shot by the second camera at the same moment;
and acquiring the splicing offset between the video frames based on the identification information of the corner points successfully matched.
4. The video anti-shake splicing method according to claim 3, wherein the matching the identification information of the corner points of the video frames captured by the first camera and the second camera at the same time comprises:
video frames shot by the first camera and the second camera at the same moment are respectively a first video frame and a second video frame;
and sequentially traversing and matching the identification information of all the corners of the first video frame with the identification information of the corners in the second video frame, if the matching is successful, storing the identification information of the corners, and if the matching is failed, deleting the identification information of the corners.
5. The video anti-shake splicing method according to claim 4, wherein the obtaining the splicing offset between the video frames based on the identification information of the corner points successfully matched comprises:
calculating a first distance between any two corner points in the first video frame and a second distance between two corner points in the second video frame which are matched successfully;
determining an imaging ratio of the first video frame and the second video frame according to the first distance and the second distance;
resizing the first video frame and the second video frame to be consistent based on the imaging ratio;
and when the sizes of the first video frame and the second video frame are consistent, calculating the splicing offset of the first video frame and the second video frame according to the Euclidean distance between the corner points successfully matched between the first video frame and the second video frame.
6. The video anti-shake splicing method according to claim 1, wherein the splicing adjustment of the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain a target video frame comprises:
performing Kalman filtering processing on the first shaking offset and the second shaking offset to obtain a prediction offset;
and splicing and adjusting the middle spliced video frame according to the predicted offset.
7. A video anti-shake splicing apparatus, characterized in that the video anti-shake splicing apparatus comprises:
an acquisition module, configured to acquire, in a shooting process, first shake offset information obtained by a first inertial measurement unit detecting shake of a first camera and second shake offset information obtained by a second inertial measurement unit detecting shake of a second camera, wherein the first shake offset information comprises a first shake offset corresponding to each video frame shot by the first camera, and the second shake offset information comprises a second shake offset corresponding to each video frame shot by the second camera;
a splicing module, configured to splice the video frames shot by the first camera and the second camera at the same moment according to a splicing offset to obtain an intermediate spliced video frame, wherein the splicing offset represents the offset between a video frame shot by the first camera and a video frame shot by the second camera under the condition of no shake;
and an adjusting module, configured to perform splicing adjustment on the intermediate spliced video frame according to the first shake offset and the second shake offset to obtain a target video frame.
8. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 6.
9. A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
10. A chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being arranged to execute a program or instructions to carry out the steps of the method of any of claims 1 to 6.
CN202210979037.8A 2022-08-16 2022-08-16 Video anti-shake splicing method and related equipment Pending CN115396597A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210979037.8A CN115396597A (en) 2022-08-16 2022-08-16 Video anti-shake splicing method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210979037.8A CN115396597A (en) 2022-08-16 2022-08-16 Video anti-shake splicing method and related equipment

Publications (1)

Publication Number Publication Date
CN115396597A true CN115396597A (en) 2022-11-25

Family

ID=84120081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210979037.8A Pending CN115396597A (en) 2022-08-16 2022-08-16 Video anti-shake splicing method and related equipment

Country Status (1)

Country Link
CN (1) CN115396597A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118102099A (en) * 2024-04-17 2024-05-28 江西省自然资源事业发展中心 Natural resource supervision video shooting anti-shake method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination