CN113296139A - Self-adaptive image optical flow and RTK fusion attitude determination method - Google Patents

Self-adaptive image optical flow and RTK fusion attitude determination method

Info

Publication number
CN113296139A
Authority
CN
China
Prior art keywords
optical flow
moment
carrier
angular velocity
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110583081.2A
Other languages
Chinese (zh)
Other versions
CN113296139B (en)
Inventor
王守华
黄涛
孙希延
纪元法
付文涛
邓洪高
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology
Priority to CN202110583081.2A
Publication of CN113296139A
Application granted
Publication of CN113296139B
Legal status: Active
Anticipated expiration: (date not listed)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/53 Determining attitude
    • G01S19/54 Determining attitude using carrier phase measurements; using long or short baseline interferometry
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS, GLONASS or GALILEO
    • G01S19/393 Trajectory determination or predictive tracking, e.g. Kalman filtering

Abstract

The invention discloses a self-adaptive image optical flow and RTK fusion attitude measurement method. Image optical flow attitude measurement data and RTK attitude measurement data are fused by self-adaptive Kalman filtering, which effectively suppresses the error accumulation of image optical flow attitude measurement, while the image optical flow attitude measurement system assists the RTK attitude measurement, enhancing the stability and attitude measurement accuracy of the system. Contrast-limited histogram equalization, a cost-function-based outlier elimination method and an improved optical flow motion equation further improve the accuracy of the system's attitude measurement.

Description

Self-adaptive image optical flow and RTK fusion attitude determination method
Technical Field
The invention relates to the technical field of attitude and direction measurement, in particular to a self-adaptive image optical flow and RTK fusion attitude measurement method.
Background
RTK attitude measurement can provide attitude information of a carrier in real time, solving for the carrier's heading angle, roll angle and pitch angle. Although the errors produced by an RTK attitude determination system do not accumulate over time, the method is limited by the actual observation environment and its measurement accuracy is poor. An image optical flow attitude measurement system, on the other hand, has strong autonomy and is free from outside interference during operation, but it adopts a relative measurement method and therefore suffers from error accumulation, which limits its application.
Disclosure of Invention
The invention aims to solve two problems: a traditional RTK attitude measurement system falls short of the precision of a precision inertial navigation system and is unsuitable when satellite signals are blocked, while an image optical flow attitude measurement system accumulates errors during operation and requires real-time correction. To this end, the invention provides a self-adaptive image optical flow and RTK fusion attitude measurement method.
In order to solve the problems, the invention is realized by the following technical scheme:
a self-adaptive image optical flow and RTK fusion attitude determination method comprises the following steps:
step 1, using an RTK attitude determination system with three or more GNSS antennas to perform carrier attitude calculation and acquire the attitude angle of the carrier at each moment; the attitude angle comprises the pitch angle p, roll angle r and heading angle y of the carrier;
step 2, acquiring images at each moment from a camera fixed on the carrier; preprocessing the images and obtaining the feature points of the images at each moment by the SURF method; then differencing the coordinates of matched feature points in the images at consecutive moments to obtain the displacement, and dividing the displacement by the system sampling time to obtain the optical flow value of each feature point of the image at each moment;
step 3, introducing a translational optical flow expansion point into the traditional optical flow motion equation to obtain an improved optical flow motion equation, namely:
$$\left[\dot u_n - \frac{u_n v_n}{f}\,\omega_p + \left(f + \frac{u_n^2}{f}\right)\omega_y - v_n\,\omega_r\right](v_n - v_E) - \left[\dot v_n - \left(f + \frac{v_n^2}{f}\right)\omega_p + \frac{u_n v_n}{f}\,\omega_y + u_n\,\omega_r\right](u_n - u_E) = 0$$
in the formula, ω_r is the roll angular velocity of the carrier, ω_p is the pitch angular velocity of the carrier, ω_y is the heading angular velocity of the carrier, (u_E, v_E) are the coordinates of the optical flow expansion point in the image, (u_n, v_n) are the coordinates of the nth feature point of the image, (u̇_n, v̇_n) is the optical flow value of the nth feature point of the image, f is the focal length of the camera, and n = 1, 2, …, N;
step 4, applying the improved optical flow motion equation to the optical flow values of the feature points of the images at each moment, and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method; the attitude angular velocity comprises the pitch angular velocity ω_p, roll angular velocity ω_r and heading angular velocity ω_y of the carrier;
step 5, using adaptive Kalman filtering to perform fusion estimation on the attitude angle of the carrier at each moment obtained in step 1 and the attitude angular velocity of the carrier at each moment obtained in step 4, to obtain the attitude angle estimate of the carrier at each moment.
The specific process of the step 5 is as follows:
step 5.1, constructing the filtering model of the adaptive Kalman filter; the observation vector Y of the filtering model at each moment is composed of the pitch angle p, roll angle r, heading angle y, pitch angular velocity ω_p, roll angular velocity ω_r and heading angular velocity ω_y of the carrier at that moment; the state vector X of the filtering model at each moment is composed of the pitch angle p, roll angle r, heading angle y, pitch angular velocity ω_p, roll angular velocity ω_r, heading angular velocity ω_y, pitch angular acceleration a_p, roll angular acceleration a_r and heading angular acceleration a_y of the carrier at that moment;
step 5.2, setting the initial pitch angular acceleration a_p, initial roll angular acceleration a_r, initial heading angular acceleration a_y, initial observation vector Y_0, initial state vector X_0, initial state vector error covariance matrix P_0, initial observation error covariance matrix R_0, Kalman filtering state transition matrix A, state process covariance matrix Q and measurement matrix C; and letting i = 1;
step 5.3, according to the state vector X_{i-1} at the (i-1)th moment and the Kalman filtering state transition matrix A, calculating the predicted state vector X̂_i^- at the ith moment:
$$\hat X_i^- = A X_{i-1}$$
step 5.4, according to the state vector error covariance matrix P_{i-1} at the (i-1)th moment, the state process covariance matrix Q and the Kalman filtering state transition matrix A, calculating the predicted state vector error covariance matrix P_i^- at the ith moment:
$$P_i^- = A P_{i-1} A^{T} + Q$$
step 5.5, judging whether the current moment i has reached the set number of moments M:
if the current moment i has not reached the set number of moments M, the observation error covariance matrix R_i at the ith moment takes the value of the observation error covariance matrix R_{i-1} at the (i-1)th moment, i.e. R_i = R_{i-1};
if the current moment i has reached the set number of moments M, calculating the observation error covariance matrix R_i at the ith moment from the predicted state vectors of the M moments preceding the ith moment, the observation vectors of the M moments preceding the ith moment, the state vector error covariance matrix P_{i-1} at the (i-1)th moment and the measurement matrix C:
$$R_i = \frac{1}{M}\sum_{j=1}^{M}\left(Y_{i-j} - C\hat X_{i-j}^-\right)\left(Y_{i-j} - C\hat X_{i-j}^-\right)^{T} - C P_{i-1} C^{T}$$
where M is a set value, X̂_{i-j}^- is the predicted state vector at the (i-j)th moment, and Y_{i-j} is the observation vector at the (i-j)th moment;
step 5.6, according to the predicted state vector error covariance matrix P_i^- at the ith moment, the observation error covariance matrix R_i at the ith moment and the measurement matrix C, calculating the Kalman filtering gain matrix K_i at the ith moment:
$$K_i = P_i^- C^{T}\left(C P_i^- C^{T} + R_i\right)^{-1}$$
step 5.7, according to the predicted state vector X̂_i^- at the ith moment, the observation vector Y_i at the ith moment, the Kalman filtering gain matrix K_i at the ith moment and the measurement matrix C, calculating the state vector X_i at the ith moment and outputting it as the attitude angle estimate:
$$X_i = \hat X_i^- + K_i\left(Y_i - C\,\hat X_i^-\right)$$
step 5.8, using the predicted state vector error covariance matrix P_i^- at the ith moment, the Kalman filtering gain matrix K_i at the ith moment, the measurement matrix C and the 9×9 identity matrix I_{9×9}, updating the state vector error covariance matrix P_i at the ith moment:
$$P_i = \left(I_{9\times 9} - K_i C\right) P_i^-$$
step 5.9, letting i = i + 1 and returning to step 5.3.
In the above step 5.1, the initial pitch angular acceleration a_p, initial roll angular acceleration a_r and initial heading angular acceleration a_y are all 0.
In the above-mentioned step 5.2,
initial state vector error covariance matrix P0Comprises the following steps:
P0=I9×9
initial observation error covariance matrix R0Comprises the following steps:
Figure BDA0003086346100000036
the Kalman filtering state transition matrix A is:
Figure BDA0003086346100000041
the state process covariance matrix Q is:
Figure BDA0003086346100000042
the measurement matrix C is:
C=[I6×6 06×3]
where Δ t is the system sampling time, I9×9Is a 9 × 9 identity matrix, I6×6Is a 6 × 6 identity matrix, I3×3Is a 3 × 3 identity matrix, 06×3A 6 x 3 matrix of all 0's.
As an improvement, after step 4 and before step 5, the method further comprises the following steps:
step ①, based on the optical flow expansion point coordinates and the attitude angular velocity of the carrier obtained in step 4, calculating the residual value J_n of the feature points of the images at each moment by using a cost function;
step ②, preliminarily eliminating those feature points of the image whose residual value J_n obtained in step ① exceeds a set residual threshold;
step ③, for the feature points of the images at each moment remaining after the preliminary elimination, using the improved optical flow motion equation and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method;
step ④, based on the optical flow expansion point coordinates and the attitude angular velocity of the carrier obtained in step ③, recalculating with the cost function the residual values J_n of the feature points remaining after the preliminary elimination;
step ⑤, eliminating again the set proportion of the remaining feature points whose residual values J_n obtained in step ④ are largest;
step ⑥, for the feature points of the images at each moment remaining after the second elimination, using the improved optical flow motion equation and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method;
step ⑦, replacing the attitude angular velocity of the carrier obtained in step 4 with the attitude angular velocity of the carrier obtained in step ⑥.
The cost function is:
$$J_n = \left\{\left[\dot u_n - \frac{u_n v_n}{f}\,\omega_p + \left(f + \frac{u_n^2}{f}\right)\omega_y - v_n\,\omega_r\right](v_n - v_E) - \left[\dot v_n - \left(f + \frac{v_n^2}{f}\right)\omega_p + \frac{u_n v_n}{f}\,\omega_y + u_n\,\omega_r\right](u_n - u_E)\right\}^{2}$$
where J_n is the residual value corresponding to the nth feature point of the image, ω_r is the roll angular velocity of the carrier, ω_p is the pitch angular velocity of the carrier, ω_y is the heading angular velocity of the carrier, (u_E, v_E) are the coordinates of the optical flow expansion point in the image, (u_n, v_n) are the coordinates of the nth feature point of the image, (u̇_n, v̇_n) is the optical flow value of the nth feature point of the image, f is the focal length of the camera, and n = 1, 2, 3, …, N.
In step 2, the images are preprocessed by Gaussian filtering and contrast-limited histogram equalization.
Compared with the prior art, the invention fuses the image optical flow attitude measurement data and the RTK attitude measurement data by self-adaptive Kalman filtering, which effectively suppresses the error accumulation of image optical flow attitude measurement, and uses the image optical flow attitude measurement system to assist the RTK attitude measurement, which enhances the stability and attitude measurement accuracy of the system. The contrast-limited histogram equalization processing, the cost-function-based outlier elimination method and the improved optical flow motion equation further improve the accuracy of the system's attitude measurement.
Drawings
FIG. 1 is a flow chart of a self-adaptive image optical flow and RTK fusion attitude determination method.
FIG. 2 is a schematic view of the points of optical flow expansion.
FIG. 3 is a flow chart of cost function based outlier culling.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to specific examples.
Referring to fig. 1, a self-adaptive image optical flow and RTK fusion attitude determination method specifically includes the following steps:
step 1: and carrying out carrier attitude calculation by using an RTK attitude measurement system with more than three GNSS antennas to obtain the attitude angle of the carrier at each moment, wherein the attitude angle comprises a roll angle r, a pitch angle p and a course angle y of the carrier.
Step 2: first, acquire images at each moment from a camera fixed on the carrier; preprocess the images at each moment and then obtain the feature points of the images at each moment by the Speeded-Up Robust Features (SURF) method; then difference the coordinates of matched feature points in the images at consecutive moments to obtain the displacement, and divide the displacement by the system sampling time to obtain the optical flow values of the feature points of the images at each moment.
The camera is fixedly mounted on the carrier and collects images in front of the carrier in real time. The image preprocessing comprises Gaussian filtering and contrast-limited histogram equalization. Gaussian filtering removes noise in the image, and contrast-limited histogram equalization adjusts the image contrast so that brightness is better distributed over the histogram; this enhances local contrast, brightens shadowed regions and brings out image texture, which effectively improves the accuracy of the optical flow solution.
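Purely as an illustrative sketch (not part of the patent text), the preprocessing and optical flow extraction of step 2 could look roughly as follows in Python with OpenCV; the kernel size, CLAHE parameters, Hessian threshold and sampling time DT are assumed values, and SURF requires an opencv-contrib build with the nonfree modules enabled:

```python
import cv2
import numpy as np

DT = 0.1  # assumed system sampling time (s) between consecutive frames

def preprocess(gray):
    """Step 2 preprocessing: Gaussian filtering followed by
    contrast-limited histogram equalization (CLAHE)."""
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)  # suppress noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(blurred)                           # boost local contrast

def feature_optical_flow(img_prev, img_curr):
    """Match SURF feature points between consecutive frames and return
    their coordinates and optical flow values (displacement / DT)."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
    kp1, des1 = surf.detectAndCompute(preprocess(img_prev), None)
    kp2, des2 = surf.detectAndCompute(preprocess(img_curr), None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts_prev = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts_curr = np.float32([kp2[m.trainIdx].pt for m in matches])
    flow = (pts_curr - pts_prev) / DT
    return pts_curr, flow
```

The matched feature coordinates and their flow values are then passed to the motion solver of step 4.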
Step 3: a translational optical flow expansion point is introduced into the traditional optical flow motion equation to obtain an improved optical flow motion equation.
A conventional optical flow motion equation that can be established by camera imaging principles to represent the relationship between the captured image and the camera motion is:
$$\begin{cases} \dot u_n = \dfrac{u_n v_z - f v_x}{Z_n} + \dfrac{u_n v_n}{f}\,\omega_p - \left(f + \dfrac{u_n^2}{f}\right)\omega_y + v_n\,\omega_r \\[2mm] \dot v_n = \dfrac{v_n v_z - f v_y}{Z_n} + \left(f + \dfrac{v_n^2}{f}\right)\omega_p - \dfrac{u_n v_n}{f}\,\omega_y - u_n\,\omega_r \end{cases} \tag{1}$$
where ω_r is the roll angular velocity of the carrier, ω_p is the pitch angular velocity of the carrier, ω_y is the heading angular velocity of the carrier, (v_x, v_y, v_z) is the linear velocity of the carrier along its three axes, Z_n is the distance from the camera optical center to the nth feature point, (u_n, v_n) are the coordinates of the nth feature point of the image, (u̇_n, v̇_n) is the optical flow value of the nth feature point of the image, f is the focal length of the camera, and n = 1, 2, …, N.
To reduce the difficulty of attitude estimation and improve its accuracy, the invention introduces a translational optical flow expansion point into the traditional optical flow motion equation to simplify the translational component of the optical flow and thereby reduce the total number of unknowns. If the carrier undergoes only translational motion and no rotation, the image from the camera appears to expand outward around a single point whose optical flow value is 0; its coordinates in the image are denoted (u_E, v_E), and it is referred to as the optical flow expansion point, shown in FIG. 2.
The optical flow expansion point coordinates at this time have the following relationship:
$$u_E = \frac{f\,v_x}{v_z}, \qquad v_E = \frac{f\,v_y}{v_z} \tag{2}$$
Substituting equation (2) into the traditional optical flow motion equation (1) rewrites the translational component of the flow as (v_z/Z_n)(u_n − u_E) and (v_z/Z_n)(v_n − v_E); eliminating the unknown scale factor v_z/Z_n between the two component equations then yields the improved optical flow motion equation:
$$\left[\dot u_n - \frac{u_n v_n}{f}\,\omega_p + \left(f + \frac{u_n^2}{f}\right)\omega_y - v_n\,\omega_r\right](v_n - v_E) - \left[\dot v_n - \left(f + \frac{v_n^2}{f}\right)\omega_p + \frac{u_n v_n}{f}\,\omega_y + u_n\,\omega_r\right](u_n - u_E) = 0 \tag{3}$$
where ω_r is the roll angular velocity of the carrier, ω_p is the pitch angular velocity of the carrier, ω_y is the heading angular velocity of the carrier, (u_E, v_E) are the coordinates of the optical flow expansion point in the image, (u_n, v_n) are the coordinates of the nth feature point of the image, (u̇_n, v̇_n) is the optical flow value of the nth feature point of the image, f is the focal length of the camera, and n = 1, 2, 3, …, N.
Step 4: using the improved optical flow motion equation for the optical flow values of the feature points of the images at each moment, and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method; the attitude angular velocity comprises the pitch angular velocity ω_p, roll angular velocity ω_r and heading angular velocity ω_y of the carrier.
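For illustration, a minimal sketch of the step 4 solve using scipy's nonlinear least squares. The rotational flow terms and their axis convention (pitch about the camera's horizontal axis, heading about its vertical axis, roll about the optical axis) are assumptions of this sketch, matching the equation (3) reconstructed above; the focal length F and the helper names residuals/solve_motion are placeholders introduced here:

```python
import numpy as np
from scipy.optimize import least_squares

F = 800.0  # placeholder camera focal length in pixels

def residuals(params, pts, flow, f=F):
    """Per-feature residual of the improved optical flow motion equation (3).

    params = [uE, vE, w_p, w_r, w_y]; pts[n] = (u_n, v_n) are image
    coordinates relative to the principal point; flow[n] = (du_n, dv_n).
    """
    uE, vE, wp, wr, wy = params
    u, v = pts[:, 0], pts[:, 1]
    du, dv = flow[:, 0], flow[:, 1]
    # De-rotate the measured flow (the rotational model is an assumed convention).
    tu = du - (u * v / f) * wp + (f + u ** 2 / f) * wy - v * wr
    tv = dv - (f + v ** 2 / f) * wp + (u * v / f) * wy + u * wr
    # The translational remainder must point radially away from the expansion
    # point, so its cross product with (u - uE, v - vE) vanishes; the unknown
    # depth Z_n drops out of this constraint.
    return tu * (v - vE) - tv * (u - uE)

def solve_motion(pts, flow, x0=None):
    """Nonlinear least squares estimate of [uE, vE, w_p, w_r, w_y] (step 4)."""
    if x0 is None:
        x0 = np.zeros(5)
    result = least_squares(residuals, x0, args=(pts, flow), method="lm")
    return result.x
```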
Step 5: eliminating outliers from the feature points of the images at each moment by the cost-function-based outlier elimination method, as shown in FIG. 3.
Because the optical flow values of the image feature points inevitably contain errors that affect the estimation of the motion parameters, the residual values J_n obtained after the preliminary estimation of the carrier attitude information differ from 0, and optical flow values with large errors yield larger residual values J_n than those with small errors; eliminating the optical flow values with larger residual values J_n therefore improves the estimation accuracy.
Step 5.1: based on the optical flow expansion point coordinates and the attitude angular velocity of the carrier obtained in step 4, calculating the residual value J_n of the feature points of the images at each moment by using the cost function.
The cost function, derived from the improved optical flow motion equation, serves as the criterion for judging the image feature points:
$$J_n = \left\{\left[\dot u_n - \frac{u_n v_n}{f}\,\omega_p + \left(f + \frac{u_n^2}{f}\right)\omega_y - v_n\,\omega_r\right](v_n - v_E) - \left[\dot v_n - \left(f + \frac{v_n^2}{f}\right)\omega_p + \frac{u_n v_n}{f}\,\omega_y + u_n\,\omega_r\right](u_n - u_E)\right\}^{2} \tag{4}$$
where J_n is the residual value corresponding to the nth feature point of the image, ω_r is the roll angular velocity of the carrier, ω_p is the pitch angular velocity of the carrier, ω_y is the heading angular velocity of the carrier, (u_E, v_E) are the coordinates of the optical flow expansion point in the image, (u_n, v_n) are the coordinates of the nth feature point of the image, (u̇_n, v̇_n) is the optical flow value of the nth feature point of the image, f is the focal length of the camera, and n = 1, 2, 3, …, N.
Step 5.2: preliminarily eliminating those feature points of the image whose residual value J_n obtained in step 5.1 exceeds the set residual threshold (the residual threshold is set to 5 in this example).
Step 5.3: for the feature points of the images at each moment remaining after the preliminary elimination, using the improved optical flow motion equation and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method.
Step 5.4: based on the optical flow expansion point coordinates and the attitude angular velocity of the carrier obtained in step 5.3, recalculating with the cost function the residual values J_n of the feature points remaining after the preliminary elimination.
Step 5.5: eliminating again the predetermined proportion of the remaining feature points whose residual values J_n obtained in step 5.4 are largest (the predetermined proportion is 10% in this example).
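A sketch of the two-stage elimination of steps 5.1 to 5.5, reusing residuals() and solve_motion() from the previous listing; treating J_n as the squared residual of the improved equation follows the reconstructed cost function (4) and is an assumption of this sketch:

```python
import numpy as np

RESIDUAL_THRESHOLD = 5.0  # threshold on J_n from the example
CULL_FRACTION = 0.10      # proportion eliminated in the second pass

def estimate_with_outlier_culling(pts, flow):
    """Two-stage elimination of steps 5.1-5.5 followed by the final
    re-solve of step 6."""
    params = solve_motion(pts, flow)               # preliminary estimate (step 4)
    J = residuals(params, pts, flow) ** 2          # step 5.1: cost of each point
    keep = J <= RESIDUAL_THRESHOLD                 # step 5.2: threshold culling
    pts, flow = pts[keep], flow[keep]
    params = solve_motion(pts, flow, x0=params)    # step 5.3: re-solve
    J = residuals(params, pts, flow) ** 2          # step 5.4: recompute costs
    order = np.argsort(J)                          # step 5.5: drop the worst 10%
    keep = order[: int(len(order) * (1.0 - CULL_FRACTION))]
    pts, flow = pts[keep], flow[keep]
    return solve_motion(pts, flow, x0=params)      # step 6: final estimate
```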
Step 6: applying the improved optical flow motion equation once more to the optical flow values of the feature points of the images at each moment remaining after the outliers were eliminated in step 5, and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method; the attitude angular velocity comprises the pitch angular velocity ω_p, roll angular velocity ω_r and heading angular velocity ω_y of the carrier.
Step 7: using adaptive Kalman filtering to perform fusion estimation on the attitude angle of the carrier at each moment obtained in step 1 and the attitude angular velocity of the carrier at each moment obtained in step 6, obtaining the attitude angle estimate of the carrier at each moment.
Adaptive Kalman filtering performs Kalman filtering with the system's real-time measurement data while continuously using the filtering results to judge whether the states of the system have changed, correcting the parameters and noise characteristics of its own filtering model in real time; this reduces the model error of the Kalman filter and improves the accuracy of the filtering results. In conventional Kalman filtering, the covariance matrix Q describing the error of the filter's motion model and the observation error covariance matrix R are fixed, so when the system's motion model deviates or the observation environment changes, the final filtering result is strongly affected. The attitude determination accuracy of the RTK system is easily degraded by buildings and trees blocking the GNSS satellite signals, and in the image optical flow attitude measurement system the measurement of the carrier's angular velocity is likewise easily affected by illumination changes. The invention therefore uses adaptive Kalman filtering, correcting the observation error covariance matrix R in real time from the system's real-time observations and the Kalman filter predictions.
Let the initial moment be the 0th moment, the current moment be the ith moment, and the moment before the current moment be the (i-1)th moment; the specific process of step 7 is as follows:
step 7.1: and constructing a filtering model of the adaptive Kalman filtering.
The observation vector Y of the filter model at each moment is obtained from the pitch angle p, the roll angle r, the course angle Y and the pitch angle speed omega of the carrier at each moment in the steps 1 and 6pRoll angular velocity ωrAnd course angular velocity ωyThe composition is as follows.
Y=[p r y ωp ωr ωy]T (5)
The state vector X of the filter model at each moment consists of the pitch angle p, the roll angle r, the course angle y and the pitch angle speed omega of the carrier at each momentpRoll angular velocity ωrCourse angular velocity omegayAngular acceleration a of pitchpAcceleration of roll angle arAnd course angular acceleration ayThe composition is as follows.
X=[p r y ωp ωr ωy ap ar ay]T (6)
Step 7.2: initialize the initial pitch angular acceleration a_p, initial roll angular acceleration a_r, initial heading angular acceleration a_y, initial observation vector Y_0, initial state vector X_0, initial state vector error covariance matrix P_0, initial observation error covariance matrix R_0, Kalman filtering state transition matrix A, state process covariance matrix Q and measurement matrix C.
The initial pitch angular acceleration a_p, initial roll angular acceleration a_r and initial heading angular acceleration a_y are all 0, so the initial observation vector Y_0 and the initial state vector X_0 can be determined from the attitude angle and attitude angular velocity at the 0th moment.
The initial state vector error covariance matrix P_0 is:
$$P_0 = I_{9\times 9} \tag{7}$$
The initial observation error covariance matrix R_0 is:
[equation (8): R_0 is a 6×6 matrix whose numerical entries appear only as an image in the original document]
The Kalman filtering state transition matrix A is:
$$A = \begin{bmatrix} I_{3\times 3} & \Delta t\, I_{3\times 3} & \frac{\Delta t^{2}}{2}\, I_{3\times 3} \\ 0 & I_{3\times 3} & \Delta t\, I_{3\times 3} \\ 0 & 0 & I_{3\times 3} \end{bmatrix} \tag{9}$$
The state process covariance matrix Q is:
[equation (10): Q is a 9×9 matrix whose numerical entries appear only as an image in the original document]
the measurement matrix C is:
$$C = \begin{bmatrix} I_{6\times 6} & 0_{6\times 3} \end{bmatrix} \tag{11}$$
where Δt is the system sampling time, I_{9×9} is a 9×9 identity matrix, I_{6×6} is a 6×6 identity matrix, I_{3×3} is a 3×3 identity matrix, and 0_{6×3} is a 6×3 all-zero matrix.
Step 7.3, according to the state vector X_{i-1} at the (i-1)th moment and the Kalman filtering state transition matrix A, calculating the predicted state vector X̂_i^- at the ith moment:
$$\hat X_i^- = A X_{i-1}$$
Step 7.4, according to the state vector error covariance matrix P_{i-1} at the (i-1)th moment, the state process covariance matrix Q and the Kalman filtering state transition matrix A, calculating the predicted state vector error covariance matrix P_i^- at the ith moment:
$$P_i^- = A P_{i-1} A^{T} + Q$$
Step 7.5, judging whether the current moment i has reached the set number of moments M:
if the current moment i has not reached the set number of moments M, the observation error covariance matrix R_i at the ith moment takes the value of the observation error covariance matrix R_{i-1} at the (i-1)th moment, i.e. R_i = R_{i-1};
if the current moment i has reached the set number of moments M, calculating the observation error covariance matrix R_i at the ith moment from the predicted state vectors of the M moments preceding the ith moment, the observation vectors of the M moments preceding the ith moment, the state vector error covariance matrix P_{i-1} at the (i-1)th moment and the measurement matrix C:
$$R_i = \frac{1}{M}\sum_{j=1}^{M}\left(Y_{i-j} - C\hat X_{i-j}^-\right)\left(Y_{i-j} - C\hat X_{i-j}^-\right)^{T} - C P_{i-1} C^{T}$$
where M is a set value (M = 10 in this example), X̂_{i-j}^- is the predicted state vector at the (i-j)th moment, and Y_{i-j} is the observation vector at the (i-j)th moment.
Step 7.6, according to the predicted state vector error covariance matrix P_i^- at the ith moment, the observation error covariance matrix R_i at the ith moment and the measurement matrix C, calculating the Kalman filtering gain matrix K_i at the ith moment:
$$K_i = P_i^- C^{T}\left(C P_i^- C^{T} + R_i\right)^{-1}$$
Step 7.7, according to the predicted state vector X̂_i^- at the ith moment, the observation vector Y_i at the ith moment, the Kalman filtering gain matrix K_i at the ith moment and the measurement matrix C, calculating the state vector X_i at the ith moment and outputting it as the attitude angle estimate:
$$X_i = \hat X_i^- + K_i\left(Y_i - C\,\hat X_i^-\right)$$
step 7.8, predicting the state vector error covariance matrix P by using the ith momenti -And the i-th moment Kalman filtering gain matrix KiMeasurement matrix C and 9X 9 identity matrix I9×9Updating the state vector error covariance matrix P at the ith timei
Pi=(I9×9-KiC)Pi -
Step 7.9, letting i = i + 1 and returning to step 7.3.
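A minimal sketch of the whole loop of steps 7.3 to 7.9, reusing the matrices from the previous listing. The exact form of the adaptive R_i update is an innovation-based estimate assumed here; a practical implementation would also guard against R_i losing positive definiteness:

```python
import numpy as np
from collections import deque

M = 10  # innovation window length (example value from the text)

class AdaptiveKalmanFilter:
    def __init__(self, x0, A, C, P0, Q, R0):
        self.x, self.P = x0, P0
        self.A, self.C, self.Q, self.R = A, C, Q, R0
        self.innovations = deque(maxlen=M)  # last M innovations Y_i - C*X_i^-

    def step(self, y):
        # Steps 7.3/7.4: predict the state and its covariance.
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.Q
        # Step 7.5: store the innovation; once M of them are available,
        # re-estimate R using the covariance of the previous moment (self.P
        # still holds P_{i-1} at this point), as in the text.
        d = y - self.C @ x_pred
        self.innovations.append(d)
        if len(self.innovations) == M:
            S = sum(np.outer(dj, dj) for dj in self.innovations) / M
            self.R = S - self.C @ self.P @ self.C.T
        # Steps 7.6-7.8: gain, state update, covariance update.
        K = P_pred @ self.C.T @ np.linalg.inv(self.C @ P_pred @ self.C.T + self.R)
        self.x = x_pred + K @ d
        self.P = (np.eye(self.x.size) - K @ self.C) @ P_pred
        return self.x
```

At each moment the observation y is assembled from the RTK attitude angles of step 1 and the optical flow angular velocities of step 6; the first three components of the returned state vector are the attitude angle estimates.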
It should be noted that although the above-described embodiments of the invention are illustrative, the invention is not limited to them. Other embodiments devised by those skilled in the art in light of the teachings of the invention, without departing from its principles, are considered to be within the scope of the invention.

Claims (7)

1. A self-adaptive image optical flow and RTK fusion attitude determination method is characterized by comprising the following steps:
step 1, using an RTK attitude determination system with three or more GNSS antennas to perform carrier attitude calculation and acquire the attitude angle of the carrier at each moment; the attitude angle comprises the pitch angle p, roll angle r and heading angle y of the carrier;
step 2, acquiring images at each moment from a camera fixed on the carrier; preprocessing the images and obtaining the feature points of the images at each moment by the SURF method; then differencing the coordinates of matched feature points in the images at consecutive moments to obtain the displacement, and dividing the displacement by the system sampling time to obtain the optical flow value of each feature point of the image at each moment;
step 3, introducing a translational optical flow expansion point into the traditional optical flow motion equation to obtain an improved optical flow motion equation, namely:
$$\left[\dot u_n - \frac{u_n v_n}{f}\,\omega_p + \left(f + \frac{u_n^2}{f}\right)\omega_y - v_n\,\omega_r\right](v_n - v_E) - \left[\dot v_n - \left(f + \frac{v_n^2}{f}\right)\omega_p + \frac{u_n v_n}{f}\,\omega_y + u_n\,\omega_r\right](u_n - u_E) = 0$$
in the formula, ω_r is the roll angular velocity of the carrier, ω_p is the pitch angular velocity of the carrier, ω_y is the heading angular velocity of the carrier, (u_E, v_E) are the coordinates of the optical flow expansion point in the image, (u_n, v_n) are the coordinates of the nth feature point of the image, (u̇_n, v̇_n) is the optical flow value of the nth feature point of the image, f is the focal length of the camera, and n = 1, 2, …, N;
step 4, applying the improved optical flow motion equation to the optical flow values of the feature points of the images at each moment, and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method; the attitude angular velocity comprises the pitch angular velocity ω_p, roll angular velocity ω_r and heading angular velocity ω_y of the carrier;
step 5, using adaptive Kalman filtering to perform fusion estimation on the attitude angle of the carrier at each moment obtained in step 1 and the attitude angular velocity of the carrier at each moment obtained in step 4, to obtain the attitude angle estimate of the carrier at each moment.
2. The adaptive image optical flow and RTK fusion attitude determination method according to claim 1, wherein the specific process of step 5 is as follows:
step 5.1, constructing the filtering model of the adaptive Kalman filter; the observation vector Y of the filtering model at each moment is composed of the pitch angle p, roll angle r, heading angle y, pitch angular velocity ω_p, roll angular velocity ω_r and heading angular velocity ω_y of the carrier at that moment; the state vector X of the filtering model at each moment is composed of the pitch angle p, roll angle r, heading angle y, pitch angular velocity ω_p, roll angular velocity ω_r, heading angular velocity ω_y, pitch angular acceleration a_p, roll angular acceleration a_r and heading angular acceleration a_y of the carrier at that moment;
step 5.2, setting the initial pitch angular acceleration a_p, initial roll angular acceleration a_r, initial heading angular acceleration a_y, initial observation vector Y_0, initial state vector X_0, initial state vector error covariance matrix P_0, initial observation error covariance matrix R_0, Kalman filtering state transition matrix A, state process covariance matrix Q and measurement matrix C; and letting i = 1;
step 5.3, according to the state vector X_{i-1} at the (i-1)th moment and the Kalman filtering state transition matrix A, calculating the predicted state vector X̂_i^- at the ith moment:
$$\hat X_i^- = A X_{i-1}$$
step 5.4, according to the state vector error covariance matrix P_{i-1} at the (i-1)th moment, the state process covariance matrix Q and the Kalman filtering state transition matrix A, calculating the predicted state vector error covariance matrix P_i^- at the ith moment:
$$P_i^- = A P_{i-1} A^{T} + Q$$
step 5.5, judging whether the current moment i has reached the set number of moments M:
if the current moment i has not reached the set number of moments M, the observation error covariance matrix R_i at the ith moment takes the value of the observation error covariance matrix R_{i-1} at the (i-1)th moment, i.e. R_i = R_{i-1};
if the current moment i has reached the set number of moments M, calculating the observation error covariance matrix R_i at the ith moment from the predicted state vectors of the M moments preceding the ith moment, the observation vectors of the M moments preceding the ith moment, the state vector error covariance matrix P_{i-1} at the (i-1)th moment and the measurement matrix C:
$$R_i = \frac{1}{M}\sum_{j=1}^{M}\left(Y_{i-j} - C\hat X_{i-j}^-\right)\left(Y_{i-j} - C\hat X_{i-j}^-\right)^{T} - C P_{i-1} C^{T}$$
wherein M is a set value, X̂_{i-j}^- is the predicted state vector at the (i-j)th moment, and Y_{i-j} is the observation vector at the (i-j)th moment;
step 5.6, according to the predicted state vector error covariance matrix P_i^- at the ith moment, the observation error covariance matrix R_i at the ith moment and the measurement matrix C, calculating the Kalman filtering gain matrix K_i at the ith moment:
$$K_i = P_i^- C^{T}\left(C P_i^- C^{T} + R_i\right)^{-1}$$
step 5.7, according to the predicted state vector X̂_i^- at the ith moment, the observation vector Y_i at the ith moment, the Kalman filtering gain matrix K_i at the ith moment and the measurement matrix C, calculating the state vector X_i at the ith moment and outputting it as the attitude angle estimate:
$$X_i = \hat X_i^- + K_i\left(Y_i - C\,\hat X_i^-\right)$$
step 5.8, using the predicted state vector error covariance matrix P_i^- at the ith moment, the Kalman filtering gain matrix K_i at the ith moment, the measurement matrix C and the 9×9 identity matrix I_{9×9}, updating the state vector error covariance matrix P_i at the ith moment:
$$P_i = \left(I_{9\times 9} - K_i C\right) P_i^-$$
step 5.9, letting i = i + 1 and returning to step 5.3.
3. The adaptive image optical flow and RTK fusion attitude determination method as claimed in claim 2, wherein in step 5.1, the initial pitch angular acceleration a_p, initial roll angular acceleration a_r and initial heading angular acceleration a_y are all 0.
4. The adaptive image optical flow and RTK fusion attitude determination method of claim 2, wherein in step 5.2,
the initial state vector error covariance matrix P_0 is:
$$P_0 = I_{9\times 9}$$
the initial observation error covariance matrix R_0 is:
[R_0 is a 6×6 matrix whose numerical entries appear only as an image in the original document]
the Kalman filtering state transition matrix A is:
$$A = \begin{bmatrix} I_{3\times 3} & \Delta t\, I_{3\times 3} & \frac{\Delta t^{2}}{2}\, I_{3\times 3} \\ 0 & I_{3\times 3} & \Delta t\, I_{3\times 3} \\ 0 & 0 & I_{3\times 3} \end{bmatrix}$$
the state process covariance matrix Q is:
[Q is a 9×9 matrix whose numerical entries appear only as an image in the original document]
the measurement matrix C is:
$$C = \begin{bmatrix} I_{6\times 6} & 0_{6\times 3} \end{bmatrix}$$
where Δt is the system sampling time, I_{9×9} is a 9×9 identity matrix, I_{6×6} is a 6×6 identity matrix, I_{3×3} is a 3×3 identity matrix, and 0_{6×3} is a 6×3 all-zero matrix.
5. The adaptive image optical flow and RTK fusion attitude determination method according to claim 1, wherein after step 4 and before step 5, the method further comprises the following steps:
step ①, based on the optical flow expansion point coordinates and the attitude angular velocity of the carrier obtained in step 4, calculating the residual value J_n of the feature points of the images at each moment by using a cost function;
step ②, preliminarily eliminating those feature points of the image whose residual value J_n obtained in step ① exceeds a set residual threshold;
step ③, for the feature points of the images at each moment remaining after the preliminary elimination, using the improved optical flow motion equation and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method;
step ④, based on the optical flow expansion point coordinates and the attitude angular velocity of the carrier obtained in step ③, recalculating with the cost function the residual values J_n of the feature points remaining after the preliminary elimination;
step ⑤, eliminating again the set proportion of the remaining feature points whose residual values J_n obtained in step ④ are largest;
step ⑥, for the feature points of the images at each moment remaining after the second elimination, using the improved optical flow motion equation and solving the optical flow expansion point coordinates of the images at each moment and the attitude angular velocity of the carrier by the nonlinear least squares method;
step ⑦, replacing the attitude angular velocity of the carrier obtained in step 4 with the attitude angular velocity of the carrier obtained in step ⑥.
6. The adaptive image optical flow and RTK fusion attitude determination method of claim 5, wherein the cost function is:
$$J_n = \left\{\left[\dot u_n - \frac{u_n v_n}{f}\,\omega_p + \left(f + \frac{u_n^2}{f}\right)\omega_y - v_n\,\omega_r\right](v_n - v_E) - \left[\dot v_n - \left(f + \frac{v_n^2}{f}\right)\omega_p + \frac{u_n v_n}{f}\,\omega_y + u_n\,\omega_r\right](u_n - u_E)\right\}^{2}$$
where J_n is the residual value corresponding to the nth feature point of the image, ω_r is the roll angular velocity of the carrier, ω_p is the pitch angular velocity of the carrier, ω_y is the heading angular velocity of the carrier, (u_E, v_E) are the coordinates of the optical flow expansion point in the image, (u_n, v_n) are the coordinates of the nth feature point of the image, (u̇_n, v̇_n) is the optical flow value of the nth feature point of the image, f is the focal length of the camera, and n = 1, 2, 3, …, N.
7. The adaptive image optical flow and RTK fusion attitude determination method according to claim 1, wherein in step 2, the images are preprocessed by Gaussian filtering and contrast-limited histogram equalization.
CN202110583081.2A 2021-05-27 2021-05-27 Self-adaptive image optical flow and RTK fusion attitude determination method Active CN113296139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110583081.2A CN113296139B (en) 2021-05-27 2021-05-27 Self-adaptive image optical flow and RTK fusion attitude determination method


Publications (2)

Publication Number Publication Date
CN113296139A 2021-08-24
CN113296139B 2022-05-03

Family

ID=77325435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110583081.2A Active CN113296139B (en) 2021-05-27 2021-05-27 Self-adaptive image optical flow and RTK fusion attitude determination method

Country Status (1)

Country Link
CN (1) CN113296139B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101033972A (en) * 2007-02-06 2007-09-12 华中科技大学 Method for obtaining three-dimensional information of space non-cooperative object
CN102538781A (en) * 2011-12-14 2012-07-04 浙江大学 Machine vision and inertial navigation fusion-based mobile robot motion attitude estimation method
CN107850673A (en) * 2015-07-27 2018-03-27 高通股份有限公司 Vision inertia ranging attitude drift is calibrated
EP3371619A1 (en) * 2015-11-06 2018-09-12 Squarehead Technology AS Uav detection
CN105807083A (en) * 2016-03-15 2016-07-27 深圳市高巨创新科技开发有限公司 Real-time speed measuring method and system for unmanned aerial vehicle
CN107478223A (en) * 2016-06-08 2017-12-15 南京理工大学 A kind of human body attitude calculation method based on quaternary number and Kalman filtering
CN109716060A (en) * 2016-07-19 2019-05-03 视觉机械有限公司 The vehicle location of earth's surface is utilized by event camera
CN109188487A (en) * 2018-09-27 2019-01-11 无锡比特信息科技有限公司 The high-precision redundancy unmanned plane integrated board of positioning
CN109696689A (en) * 2019-01-23 2019-04-30 桂林电子科技大学 A kind of tracking distance measuring method of light stream in conjunction with laser
CN110501736A (en) * 2019-08-28 2019-11-26 武汉大学 Utilize vision imaging and GNSS distance measuring signal close coupling positioning system and method
CN112254721A (en) * 2020-11-06 2021-01-22 南京大学 Attitude positioning method based on optical flow camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王峥羽 et al.: "UAV photogrammetry over complex mining-area terrain based on optical-flow terrain following", Metal Mine (《金属矿山》) *
陈凯 et al.: "High-precision positioning algorithm for deformation monitoring based on carrier phase differencing", Journal of Computer Applications (《计算机应用》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882733A (en) * 2022-03-15 2022-08-09 深圳市德驰微视技术有限公司 Parking space acquisition method based on domain controller, electronic device and storage medium
CN114882733B (en) * 2022-03-15 2023-12-01 深圳市德驰微视技术有限公司 Parking space acquisition method based on domain controller, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113296139B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN111795686B (en) Mobile robot positioning and mapping method
US9262828B2 (en) Method and device for online calibration of vehicle cameras
CN110081881B (en) Carrier landing guiding method based on unmanned aerial vehicle multi-sensor information fusion technology
JP2007256029A (en) Stereo image processing device
CN113223161B (en) Robust panoramic SLAM system and method based on IMU and wheel speed meter tight coupling
CN110827321B (en) Multi-camera collaborative active target tracking method based on three-dimensional information
CN113627473A (en) Water surface unmanned ship environment information fusion sensing method based on multi-mode sensor
CN113743385A (en) Unmanned ship water surface target detection method and device and unmanned ship
CN114419109B (en) Aircraft positioning method based on visual and barometric information fusion
CN113296139B (en) Self-adaptive image optical flow and RTK fusion attitude determination method
CN115639547A (en) Multi-line laser radar and GNSS-INS combined calibration method, system and medium
CN108900775B (en) Real-time electronic image stabilization method for underwater robot
CN112802195B (en) Underwater robot continuous occupying and mapping method based on sonar
CN112862818B (en) Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
CN117173215A (en) Inland navigation ship whole-course track identification method and system crossing cameras
CN112581610A (en) Robust optimization method and system for establishing map from multi-beam sonar data
CN112465712A (en) Motion blur star map restoration method and system
CN113532445B (en) High-dynamic rapid autonomous capturing method of roller shutter exposure star sensor
CN115239814A (en) 3D vehicle tracking method based on combination of deep learning and UKF algorithm
CN114690226A (en) Monocular vision distance measurement method and system based on carrier phase difference technology assistance
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation
CN115344033A (en) Monocular camera/IMU/DVL tight coupling-based unmanned ship navigation and positioning method
CN113554705A (en) Robust positioning method for laser radar in changing scene
CN114119885A (en) Image feature point matching method, device and system and map construction method and system
CN110864685A (en) Vehicle monocular vision wheeled type odometer positioning method based on loose coupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant