CN113256679A - Electronic image stabilization algorithm based on vehicle-mounted rearview mirror system - Google Patents
Electronic image stabilization algorithm based on vehicle-mounted rearview mirror system
- Publication number
- CN113256679A (application CN202110520501.2A)
- Authority
- CN
- China
- Prior art keywords
- points
- frame
- point
- matrix
- vehicle body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
Abstract
The invention discloses an electronic image stabilization algorithm based on a vehicle-mounted rearview mirror system, comprising the following specific steps: extracting consecutive adjacent frames of the original video and preprocessing them; detecting and tracking the corresponding corner points of the vehicle-body and non-vehicle-body regions in the two frames with the SURF algorithm; eliminating mismatched corner points with an optimized RANSAC algorithm; computing the corresponding transformation model from the matched points in the two frames; smoothing the motion parameters estimated by the affine transformation model; and compensating the video sequence frame by frame with the smoothed parameters to obtain a stable video sequence. Because the motion parameters are computed by a weighting method, the compensated video frames are more stable, the image stabilization effect is better, and the algorithm is faster and less time-consuming.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to an electronic image stabilization algorithm based on a vehicle-mounted rearview mirror system.
Background
With the development of science and technology, automobiles have become indispensable vehicles, and to address their field-of-view problems, electronic rearview mirrors are gradually replacing physical rearview mirrors. In practice, however, natural factors such as uneven road surfaces and wind cause the captured video to be unclear and the displayed picture to be unstable, which interferes with the driver's observation, so solving this problem is very important.
At present, video image stabilization methods fall into three main categories: mechanical image stabilization, optical image stabilization, and electronic image stabilization. Mechanical image stabilization modifies the structure of the camera and the vehicle itself, sensing the motion and compensating for it in the opposite direction; it is easily affected by the sensors and is costly. Optical image stabilization compensates for frame jitter by changing the pose of the optical elements before the image is converted into a digital signal; it is generally used in the medical field and is unsuitable for machines with large vibration. Electronic image stabilization processes the frame sequence acquired by the camera and computes the relative displacement between frames to restore a picture degraded by irregular motion.
Electronic image stabilization generally comprises three steps: motion estimation, motion smoothing, and motion compensation; a minimal end-to-end sketch follows this paragraph. Motion estimation predicts the relative change across a sequence of consecutive frames; the better-known algorithms include the block matching method, the gray projection method, and the feature matching method. Block matching takes long to compute and has a large computational load; gray projection is fast and cheap but is strongly affected by gray values and can only estimate translational motion; feature matching has a small computational load and high speed and can estimate rotation, translation, and so on, so it is the method generally adopted. Motion smoothing separates the intentional displacement in the global motion vector from the motion generated by the random jitter that external interference introduces during acquisition; common algorithms are mean filtering, curve fitting, and Kalman filtering. Motion compensation restores the current frame from the obtained relative-displacement parameters.
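The three steps can be illustrated with a minimal end-to-end sketch. For brevity it tracks Shi-Tomasi corners with LK optical flow and smooths only translation with a moving average; the detector settings and window length are illustrative assumptions, and the patent's own method instead uses SURF features, partitioned regions, and hybrid filtering as described below.

```python
import cv2
import numpy as np

def stabilize(frames, window=15):
    """frames: list of grayscale images; returns the compensated sequence."""
    # 1) Motion estimation: mean corner displacement between adjacent frames.
    steps = []
    for prev, curr in zip(frames, frames[1:]):
        p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                     qualityLevel=0.01, minDistance=30)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
        ok = st.ravel() == 1
        steps.append(np.mean(p1[ok] - p0[ok], axis=0).ravel())  # (dx, dy)

    # 2) Motion smoothing: moving average of the cumulative camera path.
    path = np.cumsum(steps, axis=0)
    kernel = np.ones(window) / window
    smooth = np.column_stack(
        [np.convolve(path[:, i], kernel, mode='same') for i in (0, 1)])

    # 3) Motion compensation: warp each frame onto the smoothed path.
    h, w = frames[0].shape[:2]
    out = [frames[0]]
    for frame, p, s in zip(frames[1:], path, smooth):
        M = np.float32([[1, 0, s[0] - p[0]], [0, 1, s[1] - p[1]]])
        out.append(cv2.warpAffine(frame, M, (w, h)))
    return out
```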
Invention patent 201711432341.6 discloses an electronic image stabilization method and system. The method obtains the rotation center and angle by motion estimation between the current frame and a reference frame and compensates the current frame accordingly; however, the obtained parameters are not smoothed before the image is compensated, and the scaling of the image is not considered.
Patent 201710563620.X discloses an electronic image stabilization method based on improved KLT and Kalman filtering. It detects and tracks matching points with the Shi-Tomasi algorithm and the LK optical flow method, and separates the intentional motion of the camera from the unintentional jitter. However, because the noise contains both low-frequency and high-frequency components, motion smoothing with the Kalman filter alone is not fine enough; mixed filtering should be used for the processing instead.
To address these problems, the present method adopts an electronic image stabilization algorithm based on a vehicle-mounted rearview mirror system. SURF corner detection is applied to image partitions, which takes less time; the vehicle-body region yields more corner points, so motion estimation remains accurate under irregular jitter, moving objects do not need to be detected separately in the vehicle-body region, and the method suits scenes in which the image rotates and translates. The hybrid filtering method smooths the motion better, and the motion parameters obtained by weighting the two regions differently are more accurate and less time-consuming to compute.
Disclosure of Invention
The invention aims to provide an electronic image stabilization method that solves the video-frame jitter caused by the high-frequency unintentional motion generated by wind excitation and road bumps while a vehicle is running.
In order to achieve the purpose, the invention adopts the following technical scheme:
An electronic image stabilization algorithm based on a vehicle-mounted rearview mirror system, characterized by comprising the following specific steps:
step 1, sampling the original video frames and reducing the volume of data to be processed by varying the sampling interval;
step 2, partitioning the sampled high-information frame sequence into a vehicle-body region and a non-vehicle-body region, where the corner points of the vehicle-body region follow the global motion and those of the non-vehicle-body region undergo local motion, the numbers of corner points being X and Y respectively, and then detecting and tracking the corner points with the SURF algorithm;
step 3, eliminating mismatches among the feature points extracted in step 2 with an optimized RANSAC algorithm, after which the vehicle-body and non-vehicle-body regions retain U and V feature points respectively over a video composed of Mi frames;
step 4, from the Mi frames obtained after step 3, calculating the affine transformation parameters M1 from the vehicle-body-region feature points of frames k and k+1, and the affine transformation parameters M2 from the non-vehicle-body-region feature points of frames k and k+1;
step 5, motion-filtering the motion estimation parameters with a hybrid filter: mean filtering serves as coarse filtering to remove some high-frequency noise, pre-filtering the motion parameter matrix M1 of the vehicle-body region and the motion parameter matrix M2 of the non-vehicle-body region to obtain M1′ and M2′; the fine filtering stage applies Kalman filtering to obtain M1″ and M2″;
step 6, applying different weights to the motion parameter matrices of the vehicle-body and non-vehicle-body regions to obtain a practical, more accurate affine transformation parameter M;
and step 7, compensating the video image to be processed with the affine transformation parameter M and outputting stable video frames.
In the above electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system, in step 1, because the acquired video sequence frames are numerous and mostly similar, a sequence sampled with a varying sampling interval changes very little. The preprocessing applies Gaussian filtering to the extracted frame sequence to suppress noise, using a two-dimensional zero-mean discrete Gaussian function as the smoothing filter, namely:

G(x, y) = 1/(2πσ²) · exp(−((x − μ)² + (y − μ)²)/(2σ²)), with μ = 0

where G(x, y) is the Gaussian function, μ is the mathematical expectation, σ² is the variance, and (x, y) are the pixel coordinates.
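A sketch of this preprocessing stage follows; the sampling step, kernel size, and σ are illustrative assumptions. The explicit kernel matches G(x, y) above with μ = 0.

```python
import cv2
import numpy as np

def preprocess(frames, step=2, ksize=5, sigma=1.0):
    """Sample every step-th frame and suppress noise with Gaussian filtering."""
    sampled = frames[::step]                      # variable sampling interval
    return [cv2.GaussianBlur(f, (ksize, ksize), sigma) for f in sampled]

def gaussian_kernel(ksize=5, sigma=1.0):
    """Explicit zero-mean kernel matching G(x, y) above with mu = 0."""
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()                            # normalize the discrete kernel
```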
In the above electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system, in step 2, the SURF algorithm is used to extract and track feature points, and the basic steps are as follows:
step 2.1, constructing Hessian matrix
The core of SURF is the Hessian matrix. The Hessian matrix H is built from the image function and its partial derivatives, and a Hessian matrix can be computed at every pixel for feature extraction, namely:

H(f(x, y)) = | ∂²f/∂x²   ∂²f/∂x∂y |
             | ∂²f/∂x∂y  ∂²f/∂y²  |

where (x, y) are the pixel coordinates, f(x, y) is the image function at the pixel, ∂²f/∂x² denotes the second derivative with respect to x, ∂²f/∂x∂y the derivative with respect to x and then y, and ∂²f/∂y² the second derivative with respect to y;
the discriminant of the Hessian matrix is:

det(H) = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²

where H denotes the matrix and x, y are the pixel coordinates;
the value of the discriminant is the determinant of the H matrix (the product of its eigenvalues); all points can be classified by the sign of this result, and whether a point is an extreme point is judged from whether the discriminant is positive or negative;
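The Hessian response can be sketched as follows. Real SURF approximates the second derivatives with box filters over an integral image; plain Sobel derivatives are used here purely to illustrate the discriminant, and the kernel size is an assumption.

```python
import cv2

def hessian_response(gray):
    """det(H) at every pixel from second derivatives (illustrative only)."""
    f = gray.astype('float64')
    fxx = cv2.Sobel(f, cv2.CV_64F, 2, 0, ksize=5)   # d2f/dx2
    fyy = cv2.Sobel(f, cv2.CV_64F, 0, 2, ksize=5)   # d2f/dy2
    fxy = cv2.Sobel(f, cv2.CV_64F, 1, 1, ksize=5)   # d2f/dxdy
    return fxx * fyy - fxy ** 2                     # sign classifies the points
```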
step 2.2, constructing a scale space
In a pyramid built in the traditional way, the image size is changed and a Gaussian function is repeatedly applied to smooth each sub-layer; the SURF algorithm instead keeps the original image unchanged and only changes the size of the filter;
step 2.3, feature point positioning
Each pixel processed by the Hessian matrix is compared with the 26 pixels in its three-dimensional neighborhood (its own scale and the two adjacent scales); if it is the maximum or minimum among those 26 pixels, it is retained as a preliminary feature point, as sketched below; key points with weak energy and wrongly located key points are then filtered out to leave the final stable feature points;
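The 26-neighbour comparison can be sketched as below, assuming a (scales, H, W) array of Hessian responses; the response threshold is an illustrative assumption standing in for the energy filtering described above.

```python
import numpy as np

def local_extrema(stack, threshold=0.01):
    """stack: (scales, H, W) Hessian responses; keep 3x3x3 extrema."""
    s, h, w = stack.shape
    kept = []
    for k in range(1, s - 1):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                cube = stack[k-1:k+2, i-1:i+2, j-1:j+2]   # 26 neighbours + self
                v = stack[k, i, j]
                if abs(v) > threshold and (v == cube.max() or v == cube.min()):
                    kept.append((k, i, j))                # preliminary feature
    return kept
```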
step 2.4, feature point principal direction distribution
To express the feature points in the image accurately, the gradient modulus m and the direction angle θ should first be computed for every key point L(x, y), as follows:

m(x, y) = √[ (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² ]
θ(x, y) = arctan[ (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) ]

where the scale of L(x, y) is the scale space to which each key point belongs, and (x, y) are the coordinates of the point;
when determining the principal direction, the SURF algorithm accumulates Haar wavelet responses in the neighborhood of the feature point: within a 60-degree sector the sums of the horizontal and vertical Haar wavelet responses of all points are computed, the sector is rotated at fixed intervals, and the direction of the sector with the largest sum is taken as the principal direction of the feature point;
step 2.5, generating feature point descriptors
A square box is taken around the feature point, with side length 20s (s is the scale at which the feature point was detected) and oriented along the principal direction detected above; the box is then divided into 16 sub-regions, and each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, both taken relative to the principal direction; the Haar wavelet features are the sum of the horizontal values, the sum of the horizontal absolute values, the sum of the vertical values, and the sum of the vertical absolute values;
step 2.6, feature point matching
After corner points above a certain threshold count are extracted from the reference frame and the current frame of the video sequence with the SURF corner detection algorithm, the distance between descriptors is measured by the Euclidean distance to find the best matching point. Let the current frame have n1 floating-point local feature points, one of which is represented as A = {x1, x2, …, xm}, and the reference frame have n2 floating-point local feature points, one of which is represented as B = {y1, y2, …, ym}. One feature point is selected from the n1 points of the current frame and compared with each of the n2 points of the reference frame by computing the Euclidean distance:

d(A, B) = √( Σ_{i=1..m} (xi − yi)² )

where m is the dimension of the feature point descriptor, giving n2 Euclidean distances. From these n2 results the smallest distance dmin and the second-smallest distance d′min are selected and their ratio is computed:

r = dmin / d′min

where r is the Euclidean distance ratio and rt is a threshold; when r < rt the feature points to be matched are considered matched.
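A sketch of the detection and ratio-test matching with OpenCV follows. SURF is provided by the opencv-contrib xfeatures2d module, which may be absent from stock builds; the Hessian threshold and the ratio rt = 0.7 are illustrative assumptions.

```python
import cv2

def match_surf(ref, cur, ratio=0.7):
    """Detect SURF points in both frames and keep ratio-test matches."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(ref, None)
    kp2, des2 = surf.detectAndCompute(cur, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    # r = d_min / d'_min < r_t accepts the match, as in the text above.
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return ([kp1[m.queryIdx].pt for m in good],
            [kp2[m.trainIdx].pt for m in good])
```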
In the above electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system, in step 3 the RANSAC algorithm finds an optimal homography matrix H_P such that the number of data points satisfying the matrix is maximized. The matrix H_P describes the transformation between the coordinates of points in the two images, namely:

p′i = H_P · pi, with

H_P = | h1 h2 h3 |
      | h4 h5 h6 |
      | h7 h8 1  |

where pi and p′i are the corresponding matching points in the two images, h1, h2, h4, h5 encode the image rotation and scale, h3 denotes the horizontal displacement, h6 the vertical displacement, and h7, h8 the deformation in the horizontal and vertical directions respectively;
the homography matrix contains 8 unknown parameters, so at least 4 pairs of matching points, no three of which are collinear, are required to solve it; the basic steps of the algorithm are:
step 3.1, setting the number of iterations K;
step 3.2, randomly selecting 4 non-collinear pairs from the feature point set collected in step 2 and computing the current parameter model H_P;
step 3.3, substituting all feature points of the set into the model and computing, for each point, the projection error between the data and the parameter model H_P, i.e. the distance di = ‖p′i − H_P·pi‖; a threshold T_dist is preset, and if di < T_dist the matching pair is an inlier, the inlier count M is incremented, and the pair is added to the inlier set A; otherwise the feature point is regarded as an outlier;
step 3.4, repeating steps 3.2 and 3.3 K times, selecting the model with the largest number of inliers, eliminating the outliers, and recomputing the transformation matrix H_P from all inliers of that model; this matrix is the optimal transformation matrix.
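The sample-score-repeat loop of steps 3.1-3.4 is what cv2.findHomography implements under the RANSAC flag, as sketched below; the reprojection threshold T_dist = 3.0 px and the iteration count are illustrative assumptions.

```python
import cv2
import numpy as np

def filter_matches(pts_ref, pts_cur, t_dist=3.0, iters=2000):
    """Fit H_P with RANSAC and return it with the surviving inlier pairs."""
    src = np.float32(pts_ref).reshape(-1, 1, 2)
    dst = np.float32(pts_cur).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                 ransacReprojThreshold=t_dist, maxIters=iters)
    inl = mask.ravel().astype(bool)         # inlier set A of steps 3.3-3.4
    return H, src[inl], dst[inl]
```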
In the above electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system, in step 4 an affine transformation function model is used for motion estimation; the affine transformation is a 6-parameter motion estimation model, namely (writing the six parameters as a1, a2, a3, a4, tx, ty):

| x′ |   | a1 a2 |   | x |   | tx |
| y′ | = | a3 a4 | · | y | + | ty |

where the 2×2 matrix embodies the rotation and scaling between the images, the vector (tx, ty) represents the translational motion between the images, T denotes the transformation relation between the images, and (x, y) are the pixel coordinates; substituting three non-collinear pairs of matching point coordinates yields the motion parameter matrix

M = | a1 a2 tx |
    | a3 a4 ty |

After the outlier elimination of step 3, the vehicle-body region of frame k+1 contains U feature points corresponding to frame k and the non-vehicle-body region contains V such points. Extracting three non-collinear pairs of vehicle-body-region feature points (x0, y0)↔(x′0, y′0), (x1, y1)↔(x′1, y′1), (x2, y2)↔(x′2, y′2) gives the system of equations

x′i = a1·xi + a2·yi + tx,  y′i = a3·xi + a4·yi + ty,  i = 0, 1, 2;

solving this system yields M1, and the motion parameter matrix M2 of the non-vehicle-body region is obtained in the same way.
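A sketch of solving for the six parameters follows. Three non-collinear pairs determine the model exactly; with more inliers, the same equations solved in the least-squares sense are more robust, which is what np.linalg.lstsq provides here.

```python
import numpy as np

def estimate_affine(src, dst):
    """src, dst: (N, 2) matched points, N >= 3; returns the 2x3 matrix M."""
    A = np.zeros((2 * len(src), 6))
    b = np.asarray(dst, dtype=float).ravel()      # x'0, y'0, x'1, y'1, ...
    for i, (x, y) in enumerate(src):
        A[2 * i]     = [x, y, 0, 0, 1, 0]         # x' = a1*x + a2*y + tx
        A[2 * i + 1] = [0, 0, x, y, 0, 1]         # y' = a3*x + a4*y + ty
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    a1, a2, a3, a4, tx, ty = p
    return np.array([[a1, a2, tx], [a3, a4, ty]])
```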
In the above electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system, in step 5 the mean filtering is the coarse filtering: the global motion vectors Vk of the consecutive video frame images are averaged, namely:

Vk = V̄k + ΔVk

where Vk is the original global motion vector, V̄k is the intentional scanning motion of the camera, and ΔVk is the high-frequency random jitter; the compensation component Mk corresponding to the current frame K accumulates from frame 1 through frame K according to Mk = Mk−1 + ΔVk, and Mk is the motion parameter;
because the vehicle-body and non-vehicle-body regions have different motion vectors, two different mean-filtered global motion vectors M1′ and M2′ are obtained from this formula;
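A sketch of this coarse filtering stage follows, applied to a (K, d) array of global motion vectors; the window length is an illustrative assumption to be tuned.

```python
import numpy as np

def mean_filter(vectors, window=9):
    """vectors: (K, d) global motion vectors Vk; returns (intended, jitter)."""
    kernel = np.ones(window) / window
    intended = np.column_stack(
        [np.convolve(vectors[:, i], kernel, mode='same')
         for i in range(vectors.shape[1])])
    jitter = vectors - intended               # the high-frequency term dVk
    return intended, jitter
```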
The fine filtering stage adopts the Kalman filter, a linear discrete-time estimation process in which the estimate of the current state variable is computed from the prediction made at the previous time step and the actual measurement at the current time step. The state transition and measurement equations of the Kalman filter are:

x(k) = φ·x(k−1) + T·u(k−1) + q(k−1)
y(k) = H·x(k) + r(k)

where φ is the state transition matrix, T is the control input matrix of the system, H is the state-to-measurement transformation matrix, x and y are the state value and measured value of the system at time k, u is the control input of the system, and q and r are the process noise and measurement noise of the system at time k, which can be regarded as Gaussian noise with covariances Q(k) and R(k) respectively;
in essence the Kalman filter uses the system's prediction to obtain the estimate of the next state, and is divided into a prediction stage and a correction stage, namely:

prediction:  x(k|k−1) = φ·x(k−1|k−1) + T·u(k−1)
             P(k|k−1) = φ·P(k−1|k−1)·φᵀ + Q(k−1)
correction:  K(k) = P(k|k−1)·Hᵀ·[H·P(k|k−1)·Hᵀ + R(k)]⁻¹
             x(k|k) = x(k|k−1) + K(k)·[y(k) − H·x(k|k−1)]
             P(k|k) = [I − K(k)·H]·P(k|k−1)

where x(k|k−1) is the prediction of the system state made at time k−1, P(k|k−1) is the predicted state covariance matrix, x(k|k) is the corrected estimate of the system state at time k, P(k|k) is the corrected estimate of the state covariance matrix, and K(k) is the Kalman gain; x(k|k) and P(k|k) then yield x(k+1|k+1) and P(k+1|k+1) at time k+1, and so on iteratively. After the Kalman filtering, the points with changed coordinates are obtained, giving the new M1″ and M2″.
In the above electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system, in step 6 the actual camera motion parameter matrix is M = k1·M1″ + k2·M2″, where k1 + k2 = 1. Since the corner points of the vehicle-body region best represent the global camera motion, k1 should be taken larger than k2: k1 is set to 0.6, 0.7, 0.8, … with k2 correspondingly 0.4, 0.3, 0.2, …; each pair k1, k2 is substituted into the formula, the resulting motion parameter matrices are compared, and the value that best represents the global motion vector is selected as the motion parameter sought. Step 7 is the video frame compensation, namely:

F_compen = M·F_src

where F_src is the original video frame and F_compen is the compensated frame sequence.
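Steps 6 and 7 can be sketched as follows. The candidate weights follow the 0.6/0.4, 0.7/0.3, 0.8/0.2 scheme above; the selection criterion shown (smallest residual against the previous stabilized frame) is an assumed stand-in, since the text does not specify how the best value is compared.

```python
import cv2
import numpy as np

def compensate(frame, m1, m2, prev_stable):
    """Fuse body/non-body matrices m1, m2 (2x3) and warp the frame."""
    h, w = frame.shape[:2]
    best, best_err = None, np.inf
    for k1 in (0.6, 0.7, 0.8):
        M = k1 * m1 + (1 - k1) * m2           # k1 + k2 = 1, body weighted more
        cand = cv2.warpAffine(frame, M, (w, h))
        err = np.mean((cand.astype(float) - prev_stable.astype(float)) ** 2)
        if err < best_err:                    # keep the most consistent warp
            best, best_err = cand, err
    return best
```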
Compared with the prior art, the invention has the following advantages. The motion parameters are computed by a weighting method: the vehicle-body region is rigidly connected to the video capture device (the camera), so its motion parameters are similar to the camera's and receive the heavier weight, which improves accuracy. Because the damping along the long vehicle body differs, the body and the camera move somewhat differently when the vehicle bumps on the road or is excited by wind, so the body motion parameters alone cannot fully estimate the camera motion; feature points of the non-vehicle-body region are therefore also used, and since there are fewer of them, less time is consumed. Finally, hybrid filtering smooths the motion parameters, so the obtained parameters are more accurate.
Drawings
FIG. 1 is a detailed flow chart of the present invention.
Fig. 2 is a magnitude response diagram of the mean filtering process in the present invention.
FIG. 3 is a diagram of the motion smoothing effect of the Kalman filter and the hybrid filter in the present invention.
FIG. 4 is a distribution diagram of the characteristic points of the partitions in the embodiment of the invention.
Detailed Description
The invention is further described below with reference to the figures and examples, without being limited to them.
Embodiment 1: a specific embodiment of the invention is described below with reference to FIG. 1; the specific steps are as follows:
(1) sampling the original video frames and reducing the volume of data to be processed by varying the sampling interval;
(2) partitioning the high-information frame sequence sampled in step (1) into a vehicle-body region and a non-vehicle-body region, where the corner points of the vehicle-body region follow the global motion and those of the non-vehicle-body region undergo local motion; the number of corner points of the vehicle-body region is X and that of the non-vehicle-body region Y; the corner points are then detected and tracked with the SURF algorithm;
(3) eliminating mismatches among the feature points extracted in step (2) with the optimized RANSAC algorithm; after elimination the vehicle-body and non-vehicle-body regions retain U and V feature points respectively, over a video image composed of Mi frames;
(4) from the Mi frames obtained after step (3), which contain sequence frames of the vehicle-body and non-vehicle-body regions, calculating the affine transformation parameters M1 from the vehicle-body-region feature points of frames k and k+1, and the affine transformation parameters M2 from the non-vehicle-body-region feature points of frames k and k+1;
(5) motion-filtering the motion estimation parameters with a hybrid filter: mean filtering serves as coarse filtering to remove some high-frequency noise, pre-filtering the motion parameter matrix M1 of the vehicle-body region and the motion parameter matrix M2 of the non-vehicle-body region to obtain M1′ and M2′; the fine filtering stage applies Kalman filtering to obtain M1″ and M2″;
(6) applying different weights to the motion parameter matrices of the vehicle-body and non-vehicle-body regions to obtain a practical, more accurate affine transformation parameter M;
(7) compensating the video image to be processed with the affine transformation parameter M and outputting stable video frames.
Embodiment 2 (further describing Embodiment 1): in step (1), because the acquired video sequence frames are numerous and mostly similar, a sequence sampled with a varying sampling interval changes very little. The preprocessing applies Gaussian filtering to the extracted frame sequence to suppress noise, using a two-dimensional zero-mean discrete Gaussian function as the smoothing filter, namely:

G(x, y) = 1/(2πσ²) · exp(−((x − μ)² + (y − μ)²)/(2σ²)), with μ = 0

where G(x, y) is the Gaussian function, μ is the mathematical expectation, σ² is the variance, and (x, y) are the pixel coordinates.
Embodiment 3 (further describing Embodiment 1): in step (2), the SURF algorithm is used to extract and track the feature points; the basic steps are as follows:
(1) construction of Hessian matrix
The core of SURF is the Hessian matrix. The Hessian matrix H is built from the image function and its partial derivatives, and a Hessian matrix can be computed at every pixel for feature extraction, namely:

H(f(x, y)) = | ∂²f/∂x²   ∂²f/∂x∂y |
             | ∂²f/∂x∂y  ∂²f/∂y²  |

where (x, y) are the pixel coordinates, f(x, y) is the image function at the pixel, ∂²f/∂x² denotes the second derivative with respect to x, ∂²f/∂x∂y the derivative with respect to x and then y, and ∂²f/∂y² the second derivative with respect to y.
The discriminant of the Hessian matrix is:

det(H) = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²

where H denotes the matrix and x, y are the pixel coordinates.
The value of the discriminant is the determinant of the H matrix (the product of its eigenvalues); all points can be classified by the sign of this result, and whether a point is an extreme point is judged from whether the discriminant is positive or negative.
(2) Constructing a scale space
In a pyramid built in the traditional way, the image size is changed and a Gaussian function is repeatedly applied to smooth each sub-layer; the SURF algorithm instead keeps the original image unchanged and only changes the size of the filter.
(3) Feature point localization
Each pixel processed by the Hessian matrix is compared with the 26 pixels in its three-dimensional neighborhood (its own scale and the two adjacent scales); if it is the maximum or minimum among those 26 pixels, it is retained as a preliminary feature point. Key points with weak energy and wrongly located key points are then filtered out to leave the final stable feature points.
(4) Feature point principal direction assignment
To express the feature points in the image accurately, the gradient modulus m and the direction angle θ should first be computed for every key point L(x, y), as follows:

m(x, y) = √[ (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² ]
θ(x, y) = arctan[ (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) ]

where the scale of L(x, y) is the scale space to which each key point belongs, and (x, y) are the coordinates of the point.
When determining the principal direction, the SURF algorithm accumulates Haar wavelet responses in the neighborhood of the feature point: within a 60-degree sector the sums of the horizontal and vertical Haar wavelet responses of all points are computed, the sector is rotated at fixed intervals, and finally the direction of the sector with the largest sum is taken as the principal direction of the feature point.
(5) Generating feature point descriptors
A square box is taken around the feature point, with side length 20s (s is the scale at which the feature point was detected) and oriented along the principal direction detected above. The box is then divided into 16 sub-regions, and each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, both taken relative to the principal direction. The Haar wavelet features are the sum of the horizontal values, the sum of the horizontal absolute values, the sum of the vertical values, and the sum of the vertical absolute values.
(6) Feature point matching
After corner points above a certain threshold count are extracted from the reference frame and the current frame of the video sequence with the SURF corner detection algorithm, the distance between descriptors is measured by the Euclidean distance to find the best matching point. Let the current frame have n1 floating-point local feature points, one of which is represented as A = {x1, x2, …, xm}, and the reference frame have n2 floating-point local feature points, one of which is represented as B = {y1, y2, …, ym}. One feature point is selected from the n1 points of the current frame and compared with each of the n2 points of the reference frame by computing the Euclidean distance:

d(A, B) = √( Σ_{i=1..m} (xi − yi)² )

where m is the dimension of the feature point descriptor, giving n2 Euclidean distances. From these n2 results the smallest distance dmin and the second-smallest distance d′min are selected and their ratio is computed:

r = dmin / d′min

where r is the Euclidean distance ratio and rt is a threshold. When r < rt the feature points to be matched are considered matched.
Embodiment 4 (further describing Embodiment 1): in step (3), the RANSAC algorithm finds an optimal homography matrix H_P such that the number of data points satisfying the matrix is maximized. The matrix H_P describes the transformation between the coordinates of points in the two images, namely:

p′i = H_P · pi, with

H_P = | h1 h2 h3 |
      | h4 h5 h6 |
      | h7 h8 1  |

where pi and p′i are the corresponding matching points in the two images, h1, h2, h4, h5 encode the image rotation and scale, h3 denotes the horizontal displacement, h6 the vertical displacement, and h7, h8 the deformation in the horizontal and vertical directions respectively.
The homography matrix contains 8 unknown parameters, so at least 4 pairs of matching points, no three of which are collinear, are required to solve it; the basic steps of the algorithm are:
(a) setting the number of iterations K;
(b) randomly selecting 4 non-collinear pairs from the feature point set acquired in step 2 and computing the current parameter model H_P;
(c) substituting all feature points of the set into the model and computing, for each point, the projection error between the data and the parameter model H_P, i.e. the distance di = ‖p′i − H_P·pi‖; a threshold T_dist is preset, and if di < T_dist the matching pair is an inlier, the inlier count M is incremented, and the pair is added to the inlier set A; otherwise the feature point is regarded as an outlier;
(d) repeating steps (b) and (c) K times, selecting the model with the largest number of inliers, eliminating the outliers, and recomputing the transformation matrix H_P from all inliers of that model; this matrix is the optimal transformation matrix.
Embodiment 5 (further describing Embodiment 1): in step (4), an affine transformation function model is adopted for motion estimation; the affine transformation is a 6-parameter motion estimation model, namely (writing the six parameters as a1, a2, a3, a4, tx, ty):

| x′ |   | a1 a2 |   | x |   | tx |
| y′ | = | a3 a4 | · | y | + | ty |

where the 2×2 matrix embodies the rotation and scaling between the images, the vector (tx, ty) represents the translational motion between the images, T denotes the transformation relation between the images, and (x, y) are the pixel coordinates. Substituting three non-collinear pairs of matching point coordinates yields the motion parameter matrix

M = | a1 a2 tx |
    | a3 a4 ty |

After the outlier elimination of step 3, the vehicle-body region of frame k+1 contains U feature points corresponding to frame k and the non-vehicle-body region contains V such points. Extracting three non-collinear pairs of vehicle-body-region feature points (x0, y0)↔(x′0, y′0), (x1, y1)↔(x′1, y′1), (x2, y2)↔(x′2, y′2) gives the system of equations

x′i = a1·xi + a2·yi + tx,  y′i = a3·xi + a4·yi + ty,  i = 0, 1, 2;

solving this system yields M1, and the motion parameter matrix M2 of the non-vehicle-body region is obtained in the same way.
Embodiment 6 (described with reference to FIG. 3 and FIG. 4, further describing Embodiment 1): in step (5), the mean filtering is the coarse filtering: the global motion vectors Vk of the consecutive video frame images are averaged, namely:

Vk = V̄k + ΔVk

where Vk is the original global motion vector, V̄k is the intentional scanning motion of the camera, and ΔVk is the high-frequency random jitter; the compensation component Mk corresponding to the current frame K accumulates from frame 1 through frame K according to Mk = Mk−1 + ΔVk, and Mk is the motion parameter.
Because the vehicle-body and non-vehicle-body regions have different motion vectors, two different mean-filtered global motion vectors M1′ and M2′ are obtained from this formula.
The fine filtering stage adopts the Kalman filter, a linear discrete-time estimation process in which the estimate of the current state variable is computed from the prediction made at the previous time step and the actual measurement at the current time step. The state transition and measurement equations of the Kalman filter are:

x(k) = φ·x(k−1) + T·u(k−1) + q(k−1)
y(k) = H·x(k) + r(k)

where φ is the state transition matrix, T is the control input matrix of the system, H is the state-to-measurement transformation matrix, x and y are the state value and measured value of the system at time k, u is the control input of the system, and q and r are the process noise and measurement noise of the system at time k, which can be regarded as Gaussian noise with covariances Q(k) and R(k) respectively.
In essence the Kalman filter uses the system's prediction to obtain the estimate of the next state, and is divided into a prediction stage and a correction stage, namely:

prediction:  x(k|k−1) = φ·x(k−1|k−1) + T·u(k−1)
             P(k|k−1) = φ·P(k−1|k−1)·φᵀ + Q(k−1)
correction:  K(k) = P(k|k−1)·Hᵀ·[H·P(k|k−1)·Hᵀ + R(k)]⁻¹
             x(k|k) = x(k|k−1) + K(k)·[y(k) − H·x(k|k−1)]
             P(k|k) = [I − K(k)·H]·P(k|k−1)

where x(k|k−1) is the prediction of the system state made at time k−1, P(k|k−1) is the predicted state covariance matrix, x(k|k) is the corrected estimate of the system state at time k, P(k|k) is the corrected estimate of the state covariance matrix, and K(k) is the Kalman gain; x(k|k) and P(k|k) then yield x(k+1|k+1) and P(k+1|k+1) at time k+1, and so on iteratively. After the Kalman filtering, the points with changed coordinates are obtained, giving the new M1″ and M2″.
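A minimal sketch of this fine-filtering stage follows: a scalar Kalman filter applied to one motion-parameter sequence (one instance per entry of M1′ and M2′), with φ = H = 1 and no control input; the noise values q and r are illustrative assumptions to be tuned.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=0.25):
    """z: 1-D sequence of one motion parameter; returns filtered estimates."""
    x, p = z[0], 1.0                          # initial state and covariance
    out = [x]
    for zk in z[1:]:
        x_pred, p_pred = x, p + q             # prediction stage (phi = 1)
        k = p_pred / (p_pred + r)             # Kalman gain K(k)
        x = x_pred + k * (zk - x_pred)        # corrected estimate x(k|k)
        p = (1 - k) * p_pred                  # corrected covariance P(k|k)
        out.append(x)
    return np.array(out)
```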
Embodiment 7 (further describing Embodiment 1): in step (6), the actual camera motion parameter matrix is M = k1·M1″ + k2·M2″, where k1 + k2 = 1. Since the corner points of the vehicle-body region best represent the global camera motion, k1 is taken larger than k2: k1 is set to 0.6, 0.7, 0.8, … with k2 correspondingly 0.4, 0.3, 0.2, …; each pair k1, k2 is substituted into the formula, the resulting motion parameter matrices are compared, and the value that best represents the global motion vector is selected as the motion parameter sought. Step (7) is the video frame compensation, namely:

F_compen = M·F_src

where F_src is the original video frame and F_compen is the compensated frame sequence.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute alternatives, without departing from the spirit of the invention or the scope defined by the appended claims.
Claims (7)
1. An electronic image stabilization algorithm based on a vehicle-mounted rearview mirror system is characterized by comprising the following specific steps:
step 1, sampling the original video frames and reducing the volume of data to be processed by varying the sampling interval;
step 2, partitioning the high-information frame sequence sampled in step 1 into a vehicle-body region and a non-vehicle-body region, wherein the corner points of the vehicle-body region follow the global motion and those of the non-vehicle-body region undergo local motion; the number of corner points of the vehicle-body region is X and that of the non-vehicle-body region Y; the corner points are then detected and tracked with the SURF algorithm;
step 3, eliminating mismatches among the feature points extracted in step 2 with an optimized RANSAC algorithm, wherein after elimination the vehicle-body and non-vehicle-body regions retain U and V feature points respectively, over the video image composed of Mi frames;
step 4, from the Mi frames obtained after step 3, which contain sequence frames of the vehicle-body and non-vehicle-body regions, calculating the affine transformation parameters M1 from the vehicle-body-region feature points of frames k and k+1 and the affine transformation parameters M2 from the non-vehicle-body-region feature points of frames k and k+1;
step 5, motion-filtering the motion estimation parameters with a hybrid filter: mean filtering serves as coarse filtering to remove some high-frequency noise, pre-filtering the motion parameter matrix M1 of the vehicle-body region and the motion parameter matrix M2 of the non-vehicle-body region to obtain M1′ and M2′; the fine filtering stage applies Kalman filtering to obtain M1″ and M2″;
step 6, applying different weights to the motion parameter matrices of the vehicle-body and non-vehicle-body regions to obtain a practical, more accurate affine transformation parameter M;
and step 7, compensating the video image to be processed with the affine transformation parameter M and outputting stable video frames.
2. The electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system according to claim 1, wherein in step 1, because the acquired video sequence frames are numerous and mostly similar, a sequence sampled with a varying sampling interval changes very little; the preprocessing applies Gaussian filtering to the extracted frame sequence to suppress noise, using a two-dimensional zero-mean discrete Gaussian function as the smoothing filter, namely:

G(x, y) = 1/(2πσ²) · exp(−((x − μ)² + (y − μ)²)/(2σ²)), with μ = 0

where G(x, y) is the Gaussian function, μ is the mathematical expectation, σ² is the variance, and (x, y) are the pixel coordinates.
3. The electronic image stabilization algorithm based on the vehicle rearview mirror system as claimed in claim 1, wherein in the step 2, the SURF algorithm is used for extracting and tracking feature points, and the basic steps are as follows:
step 2.1, constructing Hessian matrix
The core of SURF is the Hessian matrix. The Hessian matrix H is built from the image function and its partial derivatives, and a Hessian matrix can be computed at every pixel for feature extraction, namely:

H(f(x, y)) = | ∂²f/∂x²   ∂²f/∂x∂y |
             | ∂²f/∂x∂y  ∂²f/∂y²  |

where (x, y) are the pixel coordinates, f(x, y) is the image function at the pixel, ∂²f/∂x² denotes the second derivative with respect to x, ∂²f/∂x∂y the derivative with respect to x and then y, and ∂²f/∂y² the second derivative with respect to y;
the discriminant of the Hessian matrix is:

det(H) = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²

where H denotes the matrix and x, y are the pixel coordinates;
the value of the discriminant is the determinant of the H matrix (the product of its eigenvalues); all points can be classified by the sign of this result, and whether a point is an extreme point is judged from whether the discriminant is positive or negative;
step 2.2, constructing a scale space
In a pyramid built in the traditional way, the image size is changed and a Gaussian function is repeatedly applied to smooth each sub-layer; the SURF algorithm instead keeps the original image unchanged and only changes the size of the filter;
step 2.3, feature point positioning
Each pixel processed by the Hessian matrix is compared with the 26 pixels in its three-dimensional neighborhood (its own scale and the two adjacent scales); if it is the maximum or minimum among those 26 pixels, it is retained as a preliminary feature point; key points with weak energy and wrongly located key points are then filtered out to leave the final stable feature points;
step 2.4, feature point principal direction distribution
To express the feature points in the image accurately, the gradient modulus m and the direction angle θ should first be computed for every key point L(x, y), as follows:

m(x, y) = √[ (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² ]
θ(x, y) = arctan[ (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) ]

where the scale of L(x, y) is the scale space to which each key point belongs, and (x, y) are the coordinates of the point;
when determining the principal direction, the SURF algorithm accumulates Haar wavelet responses in the neighborhood of the feature point: within a 60-degree sector the sums of the horizontal and vertical Haar wavelet responses of all points are computed, the sector is rotated at fixed intervals, and the direction of the sector with the largest sum is taken as the principal direction of the feature point;
step 2.5, generating feature point descriptors
A square box is taken around the feature point, with side length 20s (s is the scale at which the feature point was detected) and oriented along the principal direction detected above; the box is then divided into 16 sub-regions, and each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, both taken relative to the principal direction; the Haar wavelet features are the sum of the horizontal values, the sum of the horizontal absolute values, the sum of the vertical values, and the sum of the vertical absolute values;
step 2.6, feature point matching
After corner points above a certain threshold count are extracted from the reference frame and the current frame of the video sequence with the SURF corner detection algorithm, the distance between descriptors is measured by the Euclidean distance to find the best matching point; let the current frame have n1 floating-point local feature points, one of which is represented as A = {x1, x2, …, xm}, and the reference frame have n2 floating-point local feature points, one of which is represented as B = {y1, y2, …, ym}; one feature point is selected from the n1 points of the current frame and compared with each of the n2 points of the reference frame by computing the Euclidean distance:

d(A, B) = √( Σ_{i=1..m} (xi − yi)² )

where m is the dimension of the feature point descriptor, giving n2 Euclidean distances; from these n2 results the smallest distance dmin and the second-smallest distance d′min are selected and their ratio is computed:

r = dmin / d′min

where r is the Euclidean distance ratio and rt is a threshold; when r < rt the feature points to be matched are considered matched.
4. The electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system as claimed in claim 1, wherein in step 3 the RANSAC algorithm finds an optimal homography matrix H_P such that the number of data points satisfying the matrix is maximized; the matrix H_P describes the transformation between the coordinates of points in the two images, namely:

p′i = H_P · pi, with

H_P = | h1 h2 h3 |
      | h4 h5 h6 |
      | h7 h8 1  |

where pi and p′i are the corresponding matching points in the two images, h1, h2, h4, h5 encode the image rotation and scale, h3 denotes the horizontal displacement, h6 the vertical displacement, and h7, h8 the deformation in the horizontal and vertical directions respectively;
the homography matrix contains 8 unknown parameters, so at least 4 pairs of matching points, no three of which are collinear, are required to solve it; the basic steps of the algorithm are:
step 3.1, setting the number of iterations K;
step 3.2, randomly selecting 4 non-collinear pairs from the feature point set collected in step 2 and computing the current parameter model H_P;
step 3.3, substituting all feature points of the set into the model and computing, for each point, the projection error between the data and the parameter model H_P, i.e. the distance di = ‖p′i − H_P·pi‖; a threshold T_dist is preset, and if di < T_dist the matching pair is an inlier, the inlier count M is incremented, and the pair is added to the inlier set A; otherwise the feature point is regarded as an outlier;
step 3.4, repeating steps 3.2 and 3.3 K times, selecting the model with the largest number of inliers, eliminating the outliers, and recomputing the transformation matrix H_P from all inliers of that model; this matrix is the optimal transformation matrix.
5. The electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system according to claim 1, wherein in step 4 an affine transformation function model is adopted for motion estimation, the affine transformation being a 6-parameter motion estimation model, namely (writing the six parameters as a1, a2, a3, a4, tx, ty):

| x′ |   | a1 a2 |   | x |   | tx |
| y′ | = | a3 a4 | · | y | + | ty |

where the 2×2 matrix embodies the rotation and scaling between the images, the vector (tx, ty) represents the translational motion between the images, T denotes the transformation relation between the images, and (x, y) are the pixel coordinates; substituting three non-collinear pairs of matching point coordinates yields the motion parameter matrix

M = | a1 a2 tx |
    | a3 a4 ty |

after the outlier elimination of step 3, the vehicle-body region of frame k+1 contains U feature points corresponding to frame k and the non-vehicle-body region contains V such points; extracting three non-collinear pairs of vehicle-body-region feature points (x0, y0)↔(x′0, y′0), (x1, y1)↔(x′1, y′1), (x2, y2)↔(x′2, y′2) gives the system of equations

x′i = a1·xi + a2·yi + tx,  y′i = a3·xi + a4·yi + ty,  i = 0, 1, 2;

solving this system yields M1, and the motion parameter matrix M2 of the non-vehicle-body region is obtained in the same way.
6. The electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system according to claim 1, wherein in step 5 the mean filtering is the coarse filtering: the global motion vectors Vk of the consecutive video frame images are averaged, namely:

Vk = V̄k + ΔVk

where Vk is the original global motion vector, V̄k is the intentional scanning motion of the camera, and ΔVk is the high-frequency random jitter; the compensation component Mk corresponding to the current frame K accumulates from frame 1 through frame K according to Mk = Mk−1 + ΔVk, and Mk is the motion parameter;
because the vehicle-body and non-vehicle-body regions have different motion vectors, two different mean-filtered global motion vectors M1′ and M2′ are obtained from this formula;
the fine filtering stage adopts the Kalman filter, a linear discrete-time estimation process in which the estimate of the current state variable is computed from the prediction made at the previous time step and the actual measurement at the current time step; the state transition and measurement equations of the Kalman filter are:

x(k) = φ·x(k−1) + T·u(k−1) + q(k−1)
y(k) = H·x(k) + r(k)

where φ is the state transition matrix, T is the control input matrix of the system, H is the state-to-measurement transformation matrix, x and y are the state value and measured value of the system at time k, u is the control input of the system, and q and r are the process noise and measurement noise of the system at time k, which can be regarded as Gaussian noise with covariances Q(k) and R(k) respectively;
in essence the Kalman filter uses the system's prediction to obtain the estimate of the next state, and is divided into a prediction stage and a correction stage, namely:

prediction:  x(k|k−1) = φ·x(k−1|k−1) + T·u(k−1)
             P(k|k−1) = φ·P(k−1|k−1)·φᵀ + Q(k−1)
correction:  K(k) = P(k|k−1)·Hᵀ·[H·P(k|k−1)·Hᵀ + R(k)]⁻¹
             x(k|k) = x(k|k−1) + K(k)·[y(k) − H·x(k|k−1)]
             P(k|k) = [I − K(k)·H]·P(k|k−1)

where x(k|k−1) is the prediction of the system state made at time k−1, P(k|k−1) is the predicted state covariance matrix, x(k|k) is the corrected estimate of the system state at time k, P(k|k) is the corrected estimate of the state covariance matrix, and K(k) is the Kalman gain; x(k|k) and P(k|k) then yield x(k+1|k+1) and P(k+1|k+1) at time k+1, and so on iteratively; after the Kalman filtering, the points with changed coordinates are obtained, giving the new M1″ and M2″.
7. The electronic image stabilization algorithm based on the vehicle-mounted rearview mirror system according to claim 1, wherein in step 6 the actual camera motion parameter matrix is M = k1·M1″ + k2·M2″ with k1 + k2 = 1; since the corner points of the vehicle-body region best represent the global camera motion, k1 is taken larger than k2: k1 is set to 0.6, 0.7, 0.8, … with k2 correspondingly 0.4, 0.3, 0.2, …; each pair k1, k2 is substituted into the formula and the resulting motion parameter matrices are compared to select the value that best represents the global motion vector, which is the motion parameter sought; step 7 is the video frame compensation, namely:

F_compen = M·F_src

where F_src is the original video frame and F_compen is the compensated frame sequence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110520501.2A CN113256679A (en) | 2021-05-13 | 2021-05-13 | Electronic image stabilization algorithm based on vehicle-mounted rearview mirror system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110520501.2A CN113256679A (en) | 2021-05-13 | 2021-05-13 | Electronic image stabilization algorithm based on vehicle-mounted rearview mirror system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113256679A true CN113256679A (en) | 2021-08-13 |
Family
ID=77181605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110520501.2A Pending CN113256679A (en) | 2021-05-13 | 2021-05-13 | Electronic image stabilization algorithm based on vehicle-mounted rearview mirror system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113256679A (en) |
- 2021-05-13: application CN202110520501.2A filed; published as CN113256679A (status: Pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102231792A (en) * | 2011-06-29 | 2011-11-02 | 南京大学 | Electronic image stabilization method based on characteristic coupling |
US20140362240A1 (en) * | 2013-06-07 | 2014-12-11 | Apple Inc. | Robust Image Feature Based Video Stabilization and Smoothing |
CN103426182A (en) * | 2013-07-09 | 2013-12-04 | 西安电子科技大学 | Electronic image stabilization method based on visual attention mechanism |
CN103428408A (en) * | 2013-07-18 | 2013-12-04 | 北京理工大学 | Inter-frame image stabilizing method |
CN110493488A (en) * | 2018-05-15 | 2019-11-22 | 株式会社理光 | Video image stabilization method, Video Stabilization device and computer readable storage medium |
CN110796010A (en) * | 2019-09-29 | 2020-02-14 | 湖北工业大学 | Video image stabilization method combining optical flow method and Kalman filtering |
Non-Patent Citations (1)
Title |
---|
尹丽华 (Yin Lihua): "Research on Key Technologies of Panoramic Image Stabilization Based on a Vehicle-Mounted Motion Platform", China Master's Theses Full-text Database – Engineering Science & Technology II; Information Science & Technology *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113947608A (en) * | 2021-09-30 | 2022-01-18 | 西安交通大学 | High-precision measurement method for irregular structure movement based on geometric matching method control |
CN113947608B (en) * | 2021-09-30 | 2023-10-20 | 西安交通大学 | High-precision measurement method for irregular movement of structure based on geometric matching control |
CN113949812A (en) * | 2021-10-21 | 2022-01-18 | 浙江大立科技股份有限公司 | Electronic image stabilization method based on partitioned Kalman motion prediction |
CN114216485A (en) * | 2022-02-23 | 2022-03-22 | 广州骏天科技有限公司 | Image calibration method for aerial surveying and mapping of unmanned aerial vehicle |
CN114216485B (en) * | 2022-02-23 | 2022-04-29 | 广州骏天科技有限公司 | Image calibration method for aerial surveying and mapping of unmanned aerial vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113256679A (en) | Electronic image stabilization algorithm based on vehicle-mounted rearview mirror system | |
CN109584282B (en) | Non-rigid image registration method based on SIFT (scale invariant feature transform) features and optical flow model | |
CN111667506B (en) | Motion estimation method based on ORB feature points | |
CN104574347B (en) | Satellite in-orbit image geometric positioning accuracy evaluation method based on multi-source remote sensing data | |
CN111080529A (en) | Unmanned aerial vehicle aerial image splicing method for enhancing robustness | |
CN107796391A (en) | A kind of strapdown inertial navigation system/visual odometry Combinated navigation method | |
CN107169972B (en) | Non-cooperative target rapid contour tracking method | |
CN104820996A (en) | Target tracking method based on self-adaptive blocks of video | |
CN109376641B (en) | Moving vehicle detection method based on unmanned aerial vehicle aerial video | |
CN109215053A (en) | Moving vehicle detection method containing halted state in a kind of unmanned plane video | |
CN112801141B (en) | Heterogeneous image matching method based on template matching and twin neural network optimization | |
CN110390338B (en) | SAR high-precision matching method based on nonlinear guided filtering and ratio gradient | |
CN107197121A (en) | A kind of electronic image stabilization method based on on-board equipment | |
CN109087333A (en) | Target scale estimation method and its device based on correlation filter tracking algorithm | |
CN108921170A (en) | A kind of effective picture noise detection and denoising method and system | |
CN107993193B (en) | Tunnel lining image splicing method based on illumination equalization and surf algorithm improvement | |
CN110910425B (en) | Target tracking method for approaching flight process | |
CN113639782A (en) | External parameter calibration method and device for vehicle-mounted sensor, equipment and medium | |
CN105741255A (en) | Image fusion method and device | |
CN110322476B (en) | Target tracking method for improving STC and SURF feature joint optimization | |
CN101976446B (en) | Tracking method of multiple feature points of microscopic sequence image | |
WO2023130842A1 (en) | Camera pose determining method and apparatus | |
CN111951178A (en) | Image processing method and device for remarkably improving image quality and electronic equipment | |
CN106296671B (en) | A kind of image partition method based on Gabor entropy of histogram | |
CN112652000B (en) | Method for judging small-scale movement direction of image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210813 |