CN112762929A - Intelligent navigation method, device and equipment - Google Patents

Intelligent navigation method, device and equipment

Info

Publication number
CN112762929A
Authority
CN
China
Prior art keywords
wireless
measurement
imu
measurements
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011552550.6A
Other languages
Chinese (zh)
Other versions
CN112762929B (en)
Inventor
张胜凯
王巍
江涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202011552550.6A priority Critical patent/CN112762929B/en
Publication of CN112762929A publication Critical patent/CN112762929A/en
Application granted granted Critical
Publication of CN112762929B publication Critical patent/CN112762929B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an intelligent navigation method, device, and equipment, belonging to the intersection of wireless positioning technology and robotics. The method comprises the following steps: S1: acquiring an environment image and extracting original visual features from it; S2: screening the original visual features, using the distance and angle information obtained by wireless measurement as reference information, to obtain target visual features; S3: initializing the location of a wireless signal source in the environment and the bias of the wireless measurements, using the target visual features and the IMU measurements; S4: estimating motion state information from the wireless measurements, the IMU measurements, and the target visual features, the motion state information including state variables, the depths of visual feature points, and the bias of the wireless measurements; S5: navigating according to the motion state information and the three-dimensional information corresponding to the visual features in the environment. The invention improves the reliability of visual information in weak-texture environments, thereby realizing accurate navigation in such environments.

Description

Intelligent navigation method, device and equipment
Technical Field
The invention belongs to the intersection of wireless positioning technology and robotics, and particularly relates to an intelligent navigation method, device, and equipment.
Background
Navigation technology for intelligent navigation devices such as robots has developed rapidly in recent years. Such devices can perform a variety of tasks in place of humans, reducing risk to human life and improving work efficiency. For example, robots can help firefighters search for survivors in dangerous buildings, and aerial scanning can improve warehouse inventory efficiency by a factor of 30. The fundamental requirement of these applications is that the intelligent navigation device be able to execute tasks safely and efficiently, under precise control, in a wide variety of complex environments. Precise control in complex environments depends on robust state estimation, whose goal is to estimate the motion state of the device in real time, including at least position, velocity, and attitude. Robustness is a key requirement for current state estimation systems.
One of the key challenges for robust state estimation is the weak-texture environment. Such environments are quite common, for example rooms with plain white walls, solid-color floors, mirrors on the walls, or large floor-to-ceiling windows. The current mainstream approach to accurate state estimation is monocular vision, which is small, light, and low-cost. However, monocular vision requires good illumination and texture conditions to capture enough visual features, from which state estimation is realized through projective geometry; optical flow, for example, requires distinct visual texture for reliable feature matching. One workaround is to artificially add visual markers (two-dimensional codes, etc.) to the surfaces of weakly textured objects, but this is invasive, time-consuming, and laborious. Another approach decides whether to bypass weak-texture regions by evaluating the texture level of the current environment. More conventional approaches use loop-closure detection as a remedy to correct the accumulated error of weak-texture areas. While this may suffice for short flights through weak texture, it does not support long-term operation of the device in weak-texture areas. In addition, in practical applications the motion trajectory of the device may contain no loop closure with which to correct the accumulated error.
Recently, much effort has gone into using wireless signals, such as LoRa, UWB, and WiFi, as another sensing modality for state estimation. They are not limited by visual conditions and are increasingly widespread in everyday life; for example, UWB has been integrated into the iPhone 11. However, because of the long wavelength of wireless signals and environmental interference, wireless state estimation accuracy is at the decimeter level, an order of magnitude worse than a monocular vision scheme under good visual conditions. Therefore, current state estimation systems based on wireless signals alone cannot support high-precision navigation of devices.
Disclosure of Invention
In view of the above defects or improvement needs of the prior art, the present invention provides an intelligent navigation method, apparatus and device, which aims to solve the problem that high-precision navigation cannot be supported in a weak texture environment.
To achieve the above object, according to an aspect of the present invention, there is provided an intelligent navigation method, including:
S1: acquiring an environment image and extracting original visual features from the environment image;
S2: screening the original visual features, using the distance and angle information obtained by wireless measurement as reference information, to obtain target visual features;
S3: initializing the location of a wireless signal source in the environment and the bias of the wireless measurements, using the target visual features and the IMU measurements;
S4: estimating motion state information using the wireless measurements, the IMU measurements, and the target visual features, the motion state information comprising: state variables, depths of visual feature points, and the bias of the wireless measurements, the state variables comprising: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
S5: navigating according to the motion state information and the three-dimensional information corresponding to the visual features in the environment.
In one embodiment, the step S1 includes:
acquiring an environment image with a monocular camera, a stereo camera, a pinhole camera, or a fisheye camera, and extracting the original visual features from the environment image.
In one embodiment, the step S2 includes:
S201: measuring the distance and angle between the wireless signal source and the measuring device;
S202: acquiring the acceleration and angular velocity measured by the onboard IMU;
S203: aligning the IMU measurement data with the wireless measurements using an IMU pre-integration technique;
S204: denoising the wirelessly measured distance with the law of cosines so as to cancel the bias of the wireless measurement;
S205: fusing the wirelessly measured distance and angle with the short-time integral of the IMU using Kalman filtering, to obtain the relative displacement and rotation between adjacent wireless measurements;
S206: based on the relative displacement and rotation between adjacent wireless measurements, projecting the visual features in the k-th frame image to the (k+1)-th frame through epipolar geometry to form a group of wirelessly matched feature points, each associated with an optically matched feature point; finally, sorting by the distance between the wirelessly matched and optically matched feature points, the highest-ranked feature point corresponding to the smallest distance;
S207: arranging the original visual feature points in ascending order of feature distance, mapping each original visual feature point to a score corresponding to the normalized distance, defining a threshold, and selecting the original visual feature points whose scores are smaller than the threshold as target visual feature points to participate in subsequent state estimation.
In one embodiment, the step S203 includes: aligning IMU measurement data with wireless measurements using an IMU pre-integration technique;
the IMU pre-integration technique is as follows:
Figure BDA0002858418030000031
wherein the content of the first and second substances,
Figure BDA0002858418030000032
represents the acceleration measurement of the intelligent navigation device at the time t,
Figure BDA0002858418030000033
representing the angular velocity measurement of the smart navigation device at time t,
Figure BDA0002858418030000034
representing quaternion multiplicationOperator, (.)iRepresenting the measurements in the reference frame of the IMU,
Figure BDA0002858418030000035
is the IMU reference frame when the (k +1) th radio measurement is received relative to when the k-th radio measurement is received,
Figure BDA0002858418030000036
representing a quantity that the device measures directly or a quantity that can be estimated directly from the measured quantity.
In one embodiment, the step S3 includes:
S301: initializing a motion state and determining the position of the wireless signal source through wireless ranging; the initialization operation is performed only once, before the navigation task is executed; the initialization vector is defined as:

(Figure BDA0002858418030000041)

where (Figure BDA0002858418030000042), (Figure BDA0002858418030000043), and (Figure BDA0002858418030000044) denote the position and velocity at the j-th keyframe, (Figure BDA0002858418030000045) is the gravity in the camera reference frame at the initial time, s is the scale of the monocular vision measurement, and n is the number of keyframes in a group of data, n being a positive integer greater than or equal to 4;
S302: after initialization, estimating and updating the state of the MAV as new sensor measurements arrive; because the wireless channel to each wireless signal source changes with the environment during operation, the distance bias of the wireless measurement changes, and the distance measurements are corrected accordingly.
In one embodiment, the step S301 includes:
The monocular structure-from-motion (SfM) algorithm provides the scale-free position in image-plane coordinates (Figure BDA0002858418030000046) and the camera rotation (Figure BDA0002858418030000047), where the initial camera reference frame (Figure BDA0002858418030000048) is set as the reference frame of SfM, and (Figure BDA0002858418030000049) represents the camera pose at the j-th keyframe relative to the initial camera reference frame;

the manually measured extrinsic parameters between the IMU and the camera (Figure BDA00028584180300000410) are used to transform poses between the camera and the IMU reference frame; the two relations are as follows: (Figure BDA00028584180300000411) and (Figure BDA00028584180300000412) (Figure BDA00028584180300000413), where (Figure BDA00028584180300000414) is the rotation matrix of (Figure BDA00028584180300000415);

the relative displacement (Figure BDA00028584180300000416) and rotation (Figure BDA00028584180300000417) are replaced by the scale-free pose from monocular vision, and the measurement model obtained from the motion is:

(Figure BDA00028584180300000418)

where (Figure BDA00028584180300000419) is additive Gaussian noise; X0 is obtained by solving the following least-squares problem:

(Figure BDA00028584180300000420)

where F is the set of all frame images in the data group participating in the calculation;

given the resulting set of location data (Figure BDA0002858418030000051), the location of the wireless node is calculated by solving the following quadratic programming (QP) problem:

(Figure BDA0002858418030000052)

where (Figure BDA0002858418030000053), (Figure BDA0002858418030000054) is the distance between the device and the l-th wireless signal source at the time of the j-th frame image, and (Figure BDA0002858418030000055) is the relative displacement between the wireless measurement reference frame and the IMU reference frame.
In one embodiment, the step S4 includes:
establishing measurement models for the three sensors based on a graph-optimized sliding-window estimator and minimizing the measurement residuals of the three sensors; and calculating the motion state information in real time by initializing the state of the device and solving a position-driven optimization problem for the wireless signal source, the motion state information comprising the position, velocity, attitude, bias of the wireless measurement, and depths of the visual feature points.
According to another aspect of the present invention, there is provided an intelligent navigation device, comprising:
the image acquisition module is used for acquiring an environment image and acquiring original visual features in the environment image;
the wireless measurement module is used for screening the original visual features, using the distance and angle information obtained by wireless measurement as reference information, to obtain target visual features;
the IMU measurement module is used for initializing the location of a wireless signal source in the environment and the bias of the wireless measurements, using the target visual features and the IMU measurements;
a fusion module, configured to estimate motion state information using a measurement result of the wireless measurement, a measurement result of the IMU measurement, and the target visual feature, where the motion state information includes: state variables, depths of visual feature points, and biases of wireless measurements, the state variables including: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
and the navigation module is used for navigating according to the motion state information and the three-dimensional information corresponding to the visual characteristics in the environment.
According to another aspect of the present invention, there is provided an intelligent navigation device, comprising: the intelligent navigation system comprises a wireless measurement node, an image acquisition unit, an inertia measurement unit, a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor executes the steps of the intelligent navigation method.
Generally, compared with the prior art, the above technical solution conceived by the present invention has the following beneficial effects:
(1) The invention uses wireless signals to measure the azimuth angle and distance of the intelligent navigation device relative to the wireless signal source, and combines these with the visual images captured by the onboard camera and with inertial measurement data to estimate the motion state of the device (at least position, velocity, and attitude), while also estimating the depths of the visual features and building a sparse map. The radio-frequency signals employed are not limited by visual sensing conditions, so the method improves the reliability of visual information in weak-texture environments and thereby achieves accurate navigation in such environments.
(2) The wireless-measurement-based visual feature screening method can effectively select better visual features in a weak-texture environment, which in turn improves the reliability of the wireless measurements, and it provides an accurate, immediately deployable navigation scheme for intelligent navigation devices without requiring the positions of the wireless signal source nodes in the environment to be known in advance.
(3) The invention uses a factor-graph optimization model to fuse multiple heterogeneous sensors so that they complement one another; it maintains stable performance in a variety of complex environments, including weak-texture environments, and provides a more robust navigation scheme for intelligent navigation devices.
(4) The proposed initialization scheme combining wireless, visual, and inertial navigation enables the intelligent navigation device to start navigating from any attitude and gives it the ability to autonomously recover its navigation capability.
Drawings
Fig. 1a is a frame diagram of an intelligent navigation device with integrated wireless-visual-inertial navigation provided in an embodiment of the present invention;
fig. 1b is a schematic diagram of an intelligent navigation device, which is provided by an embodiment of the present invention and takes an unmanned aerial vehicle as an example;
FIG. 2 is a flowchart of an intelligent navigation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of visual feature screening based on wireless measurement according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the wireless measurement bias and its approximation provided by an embodiment of the present invention;
FIG. 5 is a diagram illustrating the factor-graph-based optimization model for state estimation and mapping according to an embodiment of the present invention;
FIG. 6(a) compares the motion trajectory of the intelligent navigation device with the ground-truth trajectory and the trajectory of a conventional visual navigation method;
FIG. 6(b) shows the positioning error of the intelligent navigation device in the X, Y, and Z directions;
FIG. 6(c) shows the attitude estimation error of the intelligent navigation device in the roll, pitch, and yaw directions.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1a, an embodiment of the present invention provides an intelligent navigation method; as shown in fig. 2, the method includes the following steps:
S1: acquiring an environment image and extracting original visual features from the environment image;
S2: screening the original visual features, using the distance and angle information obtained by wireless measurement as reference information, to obtain target visual features;
S3: initializing the location of a wireless signal source in the environment and the bias of the wireless measurements, using the target visual features and the IMU measurements;
S4: estimating motion state information using the wireless measurements, the IMU measurements, and the target visual features, the motion state information comprising: state variables, depths of visual feature points, and the bias of the wireless measurements, the state variables comprising: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
S5: navigating according to the motion state information and the three-dimensional information corresponding to the visual features in the environment.
The specific implementation process is as follows:
(1) The intelligent navigation device needs to be equipped with a wireless measurement node (such as a UWB or WiFi node), a camera, and an inertial measurement unit (IMU), and a wireless signal source (such as a UWB base station or a WiFi router) is deployed in the working environment. Specifically, as shown in fig. 1b, the device carries a wireless measurement node (e.g., a UWB tag), an IMU, and an optical camera, and a wireless signal source (e.g., a UWB base station, an iPhone 11 with a built-in UWB chip, etc.) exists in the environment. The wireless measurement node measures the distance and angle to the wireless signal source, the IMU measures the three-dimensional acceleration and three-dimensional angular velocity of the device, and the camera acquires image information from which the device computes visual feature points.
(2) The device screens visual features using the distance and angle measured via the wireless signal as reference information; specifically, the device combines the wirelessly measured distance and angle with the IMU to predict the relative displacement and attitude change, which specifically includes:
(2.1) The device aligns the IMU measurement data with the wireless measurements through IMU pre-integration:
(Figure BDA0002858418030000081)

where (Figure BDA0002858418030000082) denotes the acceleration measurement of the intelligent navigation device at time t, (Figure BDA0002858418030000083) denotes its angular velocity measurement at time t, (Figure BDA0002858418030000084) denotes the quaternion multiplication operator, (·)^i denotes a measurement expressed in the IMU reference frame, (Figure BDA0002858418030000085) is the IMU reference frame at the time the (k+1)-th wireless measurement is received, expressed relative to the frame at the k-th wireless measurement, and (Figure BDA0002858418030000086) denotes a quantity that the device measures directly or that can be estimated directly from measured quantities.
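For illustration, the following is a minimal sketch of IMU pre-integration between two consecutive wireless measurements, using simple Euler integration and a Hamilton [w, x, y, z] quaternion convention; the function names, the noise-free form, and the omission of gravity and bias handling are assumptions made for the sketch and are not taken from the patent's equation images.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def preintegrate(accel, gyro, dt):
    """Pre-integrate IMU samples taken between wireless measurements k and k+1.

    accel, gyro: arrays of shape (N, 3) in the IMU frame; dt: sample period.
    Returns (alpha, beta, gamma): position, velocity and rotation increments
    expressed in the IMU frame at measurement k (gravity and bias ignored here).
    """
    alpha = np.zeros(3)                      # integrated position increment
    beta = np.zeros(3)                       # integrated velocity increment
    gamma = np.array([1.0, 0.0, 0.0, 0.0])   # integrated rotation increment
    for a, w in zip(accel, gyro):
        R = quat_to_rot(gamma)
        alpha += beta * dt + 0.5 * R @ a * dt**2
        beta += R @ a * dt
        dq = np.concatenate([[1.0], 0.5 * w * dt])   # small-angle quaternion
        gamma = quat_mul(gamma, dq)
        gamma /= np.linalg.norm(gamma)
    return alpha, beta, gamma

# Example: 200 IMU samples at 200 Hz between two wireless measurements.
rng = np.random.default_rng(0)
acc = rng.normal(0.0, 0.1, size=(200, 3))
gyr = rng.normal(0.0, 0.01, size=(200, 3))
print(preintegrate(acc, gyr, dt=1.0 / 200.0))
```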
(2.2) As shown in fig. 4, the bias of the wireless measurement can be approximately cancelled by applying the following law-of-cosines operation to the wirelessly measured distance information:

(Figure BDA0002858418030000087)
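The patent gives the bias-cancellation formula only as an equation image, so the snippet below merely illustrates the law-of-cosines relation that such a step can exploit: two consecutive ranges to the same signal source and the IMU-derived baseline between the two measurement positions form a triangle, and the cosine rule recovers the geometry against which a slowly varying range bias can be checked. The triangle construction and the variable names are assumptions, not the patented formula.

```python
import numpy as np

def cosine_rule_angle(r_k, r_k1, baseline):
    """Angle at position p_k between the direction of motion and the line of
    sight to the signal source, from two ranges and the IMU-derived baseline.

    Law of cosines: r_k1**2 = r_k**2 + baseline**2 - 2*r_k*baseline*cos(angle).
    """
    c = (r_k**2 + baseline**2 - r_k1**2) / (2.0 * r_k * baseline)
    return np.arccos(np.clip(c, -1.0, 1.0))

# Synthetic check: anchor at the origin, device moves 0.5 m along +x.
anchor = np.array([0.0, 0.0, 0.0])
p_k = np.array([3.0, 4.0, 0.0])
p_k1 = p_k + np.array([0.5, 0.0, 0.0])
r_k, r_k1 = np.linalg.norm(p_k - anchor), np.linalg.norm(p_k1 - anchor)
angle = cosine_rule_angle(r_k, r_k1, baseline=0.5)

# The angle predicts the next range; a residual much larger than the expected
# noise indicates that the range bias has drifted and needs re-estimation.
r_k1_pred = np.sqrt(r_k**2 + 0.5**2 - 2*r_k*0.5*np.cos(angle))
print(angle, r_k1, r_k1_pred)
```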
(2.3) Kalman filtering is used to fuse the wirelessly measured distance and angle with the short-time integral of the IMU, so as to suppress the Gaussian noise of the wireless measurement and finally obtain the relative displacement and rotation between adjacent wireless measurements.
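Below is a minimal linear Kalman filter over the 3-D relative displacement only, where the prediction comes from the IMU short-time integral and the update from a wireless-derived displacement observation; the state choice, the noise values, and the conversion of range and angle into a displacement observation are illustrative assumptions rather than the patent's exact filter.

```python
import numpy as np

class DisplacementKF:
    """Kalman filter on the relative displacement between two wireless epochs."""

    def __init__(self, q=1e-3, r=5e-2):
        self.x = np.zeros(3)          # relative displacement estimate
        self.P = np.eye(3) * 1e-2     # state covariance
        self.Q = np.eye(3) * q        # IMU integration (process) noise
        self.R = np.eye(3) * r        # wireless observation noise

    def predict(self, imu_delta_p):
        """Propagate with the displacement increment from IMU short-time integration."""
        self.x = self.x + imu_delta_p
        self.P = self.P + self.Q

    def update(self, wireless_delta_p):
        """Correct with a displacement observation derived from range/angle changes."""
        H = np.eye(3)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (wireless_delta_p - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
        return self.x

kf = DisplacementKF()
kf.predict(np.array([0.10, 0.02, 0.00]))        # from IMU pre-integration
print(kf.update(np.array([0.12, 0.00, 0.01])))  # from wireless range/angle geometry
```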
(2.4) As shown in fig. 3, according to the obtained relative pose based on the wireless measurements, the visual features in the k-th frame image are re-projected into the (k+1)-th frame through epipolar geometry. This forms a set of wirelessly matched feature points, each associated with an optically matched feature point. Finally, the features are sorted by the distance between the wirelessly matched and optically matched feature points; the highest-ranked feature point corresponds to the smallest distance. If the camera is not synchronized with the other sensors, there may be multiple image frames between two wireless measurements; their disparity is too small for them to serve as keyframes, so such intermediate frames are ignored and only the last frame is processed.
(2.5) The visual feature points are arranged in ascending order of feature distance; each feature has a score, namely the normalized distance between its optical match and its wireless match. A threshold ε may be defined empirically, and only feature points with scores less than ε are allowed to participate in state estimation.
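One possible realization of steps (2.4)–(2.5) is sketched below: the wireless/IMU relative pose yields a fundamental matrix, each optical match is scored by its normalized distance to the predicted epipolar line, and only features scoring below the threshold ε are kept. Using the point-to-epipolar-line distance as the wireless-match distance, as well as the intrinsics K and the threshold value, are assumptions made for the sketch.

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def screen_features(pts_k, pts_k1, R, t, K, eps=0.5):
    """Keep optical matches consistent with the wireless/IMU relative pose (R, t).

    pts_k, pts_k1: (N, 2) pixel coordinates of matched features in frames k, k+1.
    Returns the indices of features whose normalized epipolar distance < eps.
    """
    E = skew(t) @ R                                # essential matrix
    F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)  # fundamental matrix
    ones = np.ones((len(pts_k), 1))
    x_k = np.hstack([pts_k, ones])                 # homogeneous coordinates
    x_k1 = np.hstack([pts_k1, ones])
    lines = (F @ x_k.T).T                          # epipolar lines in frame k+1
    num = np.abs(np.sum(lines * x_k1, axis=1))
    den = np.linalg.norm(lines[:, :2], axis=1)
    dist = num / den                               # point-to-line distance (pixels)
    score = dist / (dist.max() + 1e-9)             # normalized distance = score
    order = np.argsort(score)                      # ascending: best features first
    return order[score[order] < eps]

K = np.array([[400.0, 0, 320.0], [0, 400.0, 240.0], [0, 0, 1.0]])
R = np.eye(3)
t = np.array([0.2, 0.0, 0.0])
pts_k = np.array([[300.0, 200.0], [350.0, 260.0], [100.0, 50.0]])
pts_k1 = pts_k + np.array([5.0, 0.0])              # roughly consistent motion
print(screen_features(pts_k, pts_k1, R, t, K))
```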
(3) The device initializes the location of the wireless signal source in the environment (e.g., a UWB base station) and the bias of the wireless measurements using the screened visual features and the IMU measurements.
Specifically, the motion state of the intelligent navigation device is initialized first, and then the position of the wireless signal source is determined by solving a multilateration problem with the wireless ranging. The initialization operation only needs to be executed once before the navigation task, with the device held by hand and moved at varying speed within a small area in place. The method specifically comprises the following steps:
(3.1) The initialization vector may be defined as:

(Figure BDA0002858418030000091)

where (Figure BDA0002858418030000092), (Figure BDA0002858418030000093), and (Figure BDA0002858418030000094) are the position and velocity of the device at the time the j-th keyframe was obtained, (Figure BDA0002858418030000095) is the gravity in the camera reference frame at the initial time, s is the scale assigned to the monocular vision measurement, and n is the number of keyframes in a set of data. n must be at least 4 for the problem to be solvable, but it cannot be too large, or the real-time performance of the system suffers.
(3.1.1) The monocular structure-from-motion (SfM) algorithm provides the scale-free position in image-plane coordinates (Figure BDA0002858418030000096) and the camera rotation (Figure BDA0002858418030000097). The initial camera reference frame (Figure BDA0002858418030000098) is set as the reference frame of SfM, and (Figure BDA0002858418030000099) represents the camera pose at the j-th keyframe relative to the initial camera reference frame. To bring scale information into the monocular visual odometry, the short-time integration of the IMU is needed. The manually measured extrinsic parameters between the IMU and the camera (Figure BDA0002858418030000101) allow the system to translate poses between the camera and the IMU reference frame. In particular, (Figure BDA00028584180300001014) and (Figure BDA0002858418030000102), where (Figure BDA0002858418030000103) is the rotation matrix of (Figure BDA0002858418030000104).
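A generic sketch of the camera-to-IMU conversion used in step (3.1.1) follows; the quaternion convention, the extrinsic values, and the composition order are assumptions, since the patent's own relations are given only as equation images.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product, quaternions as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def camera_pose_to_imu(q_c0_cj, p_c0_cj, q_c_i, p_c_i):
    """Pose of the IMU (body) at keyframe j in the initial camera frame c0,
    given the SfM camera pose (q_c0_cj, p_c0_cj) and camera-to-IMU extrinsics."""
    q_c0_ij = quat_mul(q_c0_cj, q_c_i)
    p_c0_ij = p_c0_cj + quat_to_rot(q_c0_cj) @ p_c_i
    return q_c0_ij, p_c0_ij

# Assumed extrinsics: IMU 5 cm behind the camera, axes aligned.
q_extr = np.array([1.0, 0.0, 0.0, 0.0])
p_extr = np.array([0.0, 0.0, -0.05])
q_cam = np.array([0.9238795, 0.0, 0.3826834, 0.0])   # 45 deg about y
p_cam = np.array([1.0, 0.0, 0.5])
print(camera_pose_to_imu(q_cam, p_cam, q_extr, p_extr))
```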
(3.1.2) The relative displacement (Figure BDA0002858418030000105) and rotation (Figure BDA0002858418030000106) are replaced by the scale-free pose from monocular vision, and the measurement model obtained from kinematics is:

(Figure BDA0002858418030000107)

where (Figure BDA0002858418030000108) is additive Gaussian noise. X0 is obtained by solving the following least-squares problem:

(Figure BDA0002858418030000109)

where F denotes all the frame images in the data group participating in the calculation.
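Step (3.1.2) recovers the keyframe velocities, the gravity vector, and the monocular scale from a linear least-squares problem. The sketch below solves a deliberately simplified alignment in a single world frame, ignoring rotations and biases: for each consecutive keyframe pair, s·Δp_SfM = v_j·Δt + ½·g·Δt² + α_j and v_{j+1} = v_j + g·Δt + β_j, where α_j and β_j are the pre-integrated IMU position and velocity increments. This formulation is an illustrative assumption; the patent's measurement model is given only as an equation image.

```python
import numpy as np

def align(dp_sfm, alpha, beta, dt, n):
    """Solve for x = [v_0..v_{n-1} (3 each), g (3), s (1)] by linear least squares."""
    rows_a, rows_b, rhs_a, rhs_b = [], [], [], []
    dim = 3 * n + 4
    for j in range(n - 1):
        # Position constraint: s*dp_sfm[j] - v_j*dt - 0.5*g*dt^2 = alpha[j]
        A = np.zeros((3, dim))
        A[:, 3*j:3*j+3] = -np.eye(3) * dt
        A[:, 3*n:3*n+3] = -0.5 * np.eye(3) * dt**2
        A[:, -1] = dp_sfm[j]
        rows_a.append(A); rhs_a.append(alpha[j])
        # Velocity constraint: v_{j+1} - v_j - g*dt = beta[j]
        B = np.zeros((3, dim))
        B[:, 3*(j+1):3*(j+1)+3] = np.eye(3)
        B[:, 3*j:3*j+3] = -np.eye(3)
        B[:, 3*n:3*n+3] = -np.eye(3) * dt
        rows_b.append(B); rhs_b.append(beta[j])
    H = np.vstack(rows_a + rows_b)
    z = np.concatenate(rhs_a + rhs_b)
    x, *_ = np.linalg.lstsq(H, z, rcond=None)
    return x[:3*n].reshape(n, 3), x[3*n:3*n+3], x[-1]

# Synthetic data consistent with the simplified model (n = 4 keyframes, dt = 0.1 s).
rng = np.random.default_rng(1)
n, dt = 4, 0.1
g_true, s_true = np.array([0.0, 0.0, -9.81]), 2.0
v_true = rng.normal(0.0, 1.0, (n, 3))
alpha = rng.normal(0.0, 0.05, (n - 1, 3))   # pre-integrated position increments
beta = np.array([v_true[j+1] - v_true[j] - g_true*dt for j in range(n - 1)])
dp_sfm = np.array([(alpha[j] + v_true[j]*dt + 0.5*g_true*dt**2) / s_true
                   for j in range(n - 1)])

v_est, g_est, s_est = align(dp_sfm, alpha, beta, dt, n)
print(np.round(g_est, 3), round(float(s_est), 3))   # recovers g_true and s_true
```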
(3.1.3) The system now knows a set of device locations (Figure BDA00028584180300001015); the location of the wireless node is then calculated by solving the following quadratic programming (QP) problem:

(Figure BDA00028584180300001010)

where (Figure BDA00028584180300001011), (Figure BDA00028584180300001012) is the distance between the device and the l-th wireless signal source at the time of the j-th frame image, and (Figure BDA00028584180300001013) is the relative displacement between the wireless measurement reference frame and the IMU reference frame, which can be manually calibrated once the wireless receiver and the IMU are both installed.
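Step (3.1.3) locates the wireless node from the device positions and the corresponding ranges. The patent formulates this as a QP whose objective is given as an equation image; the sketch below instead solves the classic linearized multilateration least-squares problem, a common stand-in for this step.

```python
import numpy as np

def locate_anchor(positions, ranges):
    """Estimate the wireless node location from device positions p_j and ranges d_j.

    Subtracting the range equation at j=0 from the others removes the quadratic
    term in the unknown, leaving the linear system 2*(p_0 - p_j)^T x = c_j.
    """
    p0, d0 = positions[0], ranges[0]
    A = 2.0 * (p0 - positions[1:])
    b = (ranges[1:]**2 - d0**2
         - np.sum(positions[1:]**2, axis=1) + np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Synthetic check: anchor at (2, 3, 1), six device positions, noisy ranges.
rng = np.random.default_rng(2)
anchor = np.array([2.0, 3.0, 1.0])
P = rng.uniform(-5, 5, size=(6, 3))
d = np.linalg.norm(P - anchor, axis=1) + rng.normal(0, 0.01, 6)
print(np.round(locate_anchor(P, d), 2))
```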
(3.2) After initialization, the system continually estimates and updates the state of the MAV as new sensor measurements arrive. During operation, the wireless channel between each wireless signal source and the device changes with the environment, causing the distance bias of the wireless measurements to change. Given the location of the wireless node, the range measurements are corrected by tracking changes in this bias.
Specifically, the wireless measurement bias does not change within the channel coherence time, which is inversely proportional to the maximum Doppler spread; because of kinematic constraints, the coherence time does not vary drastically. Using the set of states within the coherence time for the l-th wireless node, the range bias b_l is estimated by solving the following QP problem:

(Figure BDA0002858418030000111)
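With the node location known, a constant range bias over one coherence time can be recovered by a one-dimensional least squares with a closed form: the mean discrepancy between the measured ranges and the geometric distances. The snippet below uses that closed form as a stand-in for the QP in the patent, whose exact expression is given only as an equation image.

```python
import numpy as np

def estimate_range_bias(device_positions, anchor, measured_ranges):
    """Least-squares estimate of a constant range bias b_l over one coherence time:
    b_l = argmin_b sum_j (measured_j - ||p_j - anchor|| - b)^2, i.e. the mean residual."""
    geometric = np.linalg.norm(device_positions - anchor, axis=1)
    return float(np.mean(measured_ranges - geometric))

anchor = np.array([2.0, 3.0, 1.0])
p = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.5], [2.0, 0.0, 1.0]])
true_bias = 0.35                                     # e.g. NLOS-induced offset
meas = np.linalg.norm(p - anchor, axis=1) + true_bias
print(estimate_range_bias(p, anchor, meas))          # ~0.35
```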
(4) The device fuses the wireless measurements, the IMU measurements, and the screened visual feature points to estimate the device's state variables of up to six degrees of freedom (comprising three-dimensional position, three-dimensional velocity, and three-dimensional attitude), the depths of the visual feature points in the environment, and the bias of the wireless measurements.
Specifically, the full state vector set of the system may be defined as:

(Figure BDA0002858418030000112)

where x_k ∈ R^10 represents the state within the group at the time the k-th keyframe is obtained, including the position (Figure BDA0002858418030000113), velocity (Figure BDA0002858418030000114), and attitude (Figure BDA0002858418030000115); b_l is the ranging bias for the l-th wireless signal source; m is the number of wireless signal sources connected to the intelligent navigation device; and λ_η is the depth of the η-th visual feature at its first observation. A total of o features are tracked in the full state vector set. The bias is constant over the coherence time and does not vary drastically; in the current model the wireless measurement bias is assumed to be the same within the full state vector set, and it is updated in sliding-window fashion as new measurements arrive. Fixing the size of the full state vector group keeps enough multi-view constraints while bounding the computational complexity and guaranteeing real-time performance.
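For concreteness, one possible in-memory layout of the full state vector set described above is sketched here: a fixed-size sliding window of per-keyframe states plus per-source range biases and per-feature depths. The field names and the window size are illustrative choices, not taken from the patent.

```python
from collections import deque
from dataclasses import dataclass, field
import numpy as np

@dataclass
class KeyframeState:
    p: np.ndarray      # 3-D position
    v: np.ndarray      # 3-D velocity
    q: np.ndarray      # attitude quaternion [w, x, y, z]  -> 10 numbers in total

@dataclass
class SlidingWindowState:
    window_size: int = 10
    keyframes: deque = field(default_factory=deque)    # x_k, k = 1..window_size
    range_bias: dict = field(default_factory=dict)     # b_l per wireless source
    feature_depth: dict = field(default_factory=dict)  # lambda_eta per tracked feature

    def add_keyframe(self, state: KeyframeState):
        """Append a new keyframe; drop (in a real system: marginalize) the oldest when full."""
        if len(self.keyframes) == self.window_size:
            self.keyframes.popleft()
        self.keyframes.append(state)

win = SlidingWindowState(window_size=4)
for k in range(6):
    win.add_keyframe(KeyframeState(p=np.zeros(3), v=np.zeros(3),
                                   q=np.array([1.0, 0.0, 0.0, 0.0])))
print(len(win.keyframes), list(win.range_bias), list(win.feature_depth))
```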
In particular, the method employs a factor-graph-based optimization model, as shown in FIG. 5. A maximum a posteriori estimate is obtained by minimizing the Mahalanobis norm of all measurement residuals, solving for the full set of state vectors:

(Figure BDA0002858418030000116)

where u is the set of wireless measurements, C is the set of observed visual features, and I is the set of IMU measurements; (Figure BDA0002858418030000117), (Figure BDA0002858418030000118), and (Figure BDA0002858418030000119) are the measurement residual functions of the wireless measurement, the camera, and the IMU, respectively. To solve the optimization problem, the measurement model of each sensor is needed; the steps are as follows:
(4.1) For the wireless measurement residual, the horizontal angle to the l-th wireless signal source (Figure BDA0002858418030000121) is combined with the roll and pitch angles provided by the IMU and converted into an attitude in the UWB coordinate system (Figure BDA0002858418030000122). The wireless measurement residual (Figure BDA0002858418030000123) (abbreviated as (Figure BDA0002858418030000124)) can be expressed as:

(Figure BDA0002858418030000125)

where (Figure BDA0002858418030000126) is the location of the l-th wireless signal source, already obtained in the initialization step, (Figure BDA0002858418030000127) is the relative displacement between the onboard radio receiver and the IMU, and q_xyz extracts the vector part of a quaternion.
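A hedged sketch of a wireless residual of the kind described in (4.1) follows: the range part compares the measured distance with the distance predicted from the current position estimate after shifting by the receiver–IMU lever arm, and the bearing part compares the measured direction with the predicted one. The two-part form, the weighting, and the frame handling are assumptions, since the patent's residual is given only as an equation image.

```python
import numpy as np

def wireless_residual(p_i, R_wi, anchor, t_radio_imu, d_meas, bias, dir_meas):
    """Residual for one wireless measurement to one signal source.

    p_i, R_wi   : IMU position and rotation (body-to-world) at the measurement time.
    anchor      : known wireless signal source position (from initialization).
    t_radio_imu : lever arm from the IMU to the onboard radio receiver (body frame).
    d_meas, bias: measured range and its current bias estimate.
    dir_meas    : measured unit direction to the source, expressed in the world frame.
    """
    p_radio = p_i + R_wi @ t_radio_imu           # receiver position in the world
    diff = anchor - p_radio
    d_pred = np.linalg.norm(diff)
    r_range = (d_meas - bias) - d_pred           # scalar range residual
    r_dir = dir_meas - diff / d_pred             # 3-D bearing residual
    return np.concatenate([[r_range], r_dir])

R = np.eye(3)
res = wireless_residual(p_i=np.array([1.0, 2.0, 0.5]), R_wi=R,
                        anchor=np.array([4.0, 6.0, 0.5]),
                        t_radio_imu=np.array([0.05, 0.0, 0.0]),
                        d_meas=5.0, bias=0.05,
                        dir_meas=np.array([0.6, 0.8, 0.0]))
print(res)
```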
(4.2) For the camera measurement residual, consider the η-th screened visual feature observed in the j-th frame image (Figure BDA0002858418030000128). Assuming this feature is observed for the first time in the h-th frame image, its residual at the j-th frame (Figure BDA0002858418030000129) (abbreviated as (Figure BDA00028584180300001210)) is its reprojection error. Its corresponding 3D point (Figure BDA00028584180300001211) is (Figure BDA00028584180300001212), where (Figure BDA00028584180300001213) is the homogeneous coordinate of the feature point, and its reprojection is (Figure BDA00028584180300001214). Thus, the visual residual can be defined as:

(Figure BDA00028584180300001215)
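A minimal sketch of the reprojection residual in (4.2) follows, for a feature first seen in frame h with an estimated depth and re-observed in frame j; the pinhole model, the depth parameterization, and the pose convention are assumptions standing in for the patent's image-only equations.

```python
import numpy as np

def reprojection_residual(uv_h, depth, uv_j, R_wh, p_wh, R_wj, p_wj, K):
    """Reprojection error of a feature first observed at pixel uv_h in frame h
    (with estimated depth along that ray) when re-observed at uv_j in frame j."""
    ray_h = np.linalg.inv(K) @ np.array([uv_h[0], uv_h[1], 1.0])   # normalized ray
    X_w = R_wh @ (depth * ray_h) + p_wh           # 3-D point in the world frame
    X_j = R_wj.T @ (X_w - p_wj)                   # point in camera frame j
    proj = K @ X_j
    uv_pred = proj[:2] / proj[2]
    return np.asarray(uv_j) - uv_pred             # 2-D pixel residual

K = np.array([[400.0, 0, 320.0], [0, 400.0, 240.0], [0, 0, 1.0]])
R_h, p_h = np.eye(3), np.zeros(3)
R_j, p_j = np.eye(3), np.array([0.2, 0.0, 0.0])   # camera moved 0.2 m along +x
print(reprojection_residual(uv_h=(340.0, 250.0), depth=4.0, uv_j=(320.5, 250.0),
                            R_wh=R_h, p_wh=p_h, R_wj=R_j, p_wj=p_j, K=K))
```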
(4.3) For the IMU measurement residual, given the IMU pre-integration terms (Figure BDA00028584180300001216), (Figure BDA00028584180300001217), and (Figure BDA00028584180300001218), the measurement residual of the IMU (Figure BDA00028584180300001219) (abbreviated as (Figure BDA00028584180300001220)) can be defined as:

(Figure BDA00028584180300001221)
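Finally, the three residual types are stacked and minimized jointly over the sliding window. The toy example below optimizes a single 3-D position against one range factor and one relative-displacement (IMU-like) factor with scipy.optimize.least_squares, to show the mechanics of the joint minimization; the estimator in the patent optimizes the full state vector set with wireless, visual, and IMU residuals on a factor graph.

```python
import numpy as np
from scipy.optimize import least_squares

anchor = np.array([3.0, 4.0, 0.0])
d_meas = 5.05                                   # wireless range factor
p_prev = np.array([0.0, 0.0, 0.0])
delta_imu = np.array([0.10, -0.05, 0.00])       # IMU/pre-integration factor

def residuals(p):
    r_wireless = np.array([d_meas - np.linalg.norm(anchor - p)])
    r_imu = (p - p_prev) - delta_imu
    # Simple whitening: weight each block by its (assumed) measurement std-dev.
    return np.concatenate([r_wireless / 0.1, r_imu / 0.02])

sol = least_squares(residuals, x0=np.zeros(3))
print(np.round(sol.x, 3))
```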
(5) The motion state of the unmanned aerial vehicle is adjusted according to the estimated motion state variables, realizing autonomous navigation of the unmanned aerial vehicle.
To verify the effectiveness of the method, a motion trajectory was preset in a dim, visually limited environment. Under the motion of the intelligent navigation device, using the wireless-visual-inertial integrated navigation method provided by the invention, the difference between the device's trajectory, the ground-truth trajectory, and the trajectory of a recent conventional visual method is shown in fig. 6(a), intuitively reflecting the navigation performance of the method; the positioning error, i.e., the difference between the device's actual three-dimensional position in the X, Y, and Z directions and the estimated position at any moment, is shown in fig. 6(b); and the attitude estimation error in the roll, pitch, and yaw directions at any moment is shown in fig. 6(c). The method can therefore accurately perform state estimation for the intelligent navigation device in visually limited, weak-texture environments, realizing navigation without calibrating the position of the wireless signal source node in the environment in advance, and solving the problem that navigation accuracy degrades when a conventional navigation system that relies on visual sensing is affected by weak-texture conditions.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. An intelligent navigation method, characterized in that the method comprises:
S1: acquiring an environment image and extracting original visual features from the environment image;
S2: screening the original visual features, using the distance and angle information obtained by wireless measurement as reference information, to obtain target visual features;
S3: initializing the location of a wireless signal source in the environment and the bias of the wireless measurements, using the target visual features and the IMU measurements;
S4: estimating motion state information using the wireless measurements, the IMU measurements, and the target visual features, the motion state information comprising: state variables, depths of visual feature points, and the bias of the wireless measurements, the state variables comprising: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
S5: navigating according to the motion state information and the three-dimensional information corresponding to the visual features in the environment.
2. The intelligent navigation method according to claim 1, wherein the step S2 includes:
S201: measuring the distance and angle between the wireless signal source and the measuring device;
S202: acquiring the acceleration and angular velocity measured by the onboard IMU;
S203: aligning the IMU measurement data with the wireless measurements using an IMU pre-integration technique;
S204: denoising the wirelessly measured distance with the law of cosines so as to cancel the bias of the wireless measurement;
S205: fusing the wirelessly measured distance and angle with the short-time integral of the IMU using Kalman filtering, to obtain the relative displacement and rotation between adjacent wireless measurements;
S206: based on the relative displacement and rotation between adjacent wireless measurements, projecting the visual features in the k-th frame image to the (k+1)-th frame through epipolar geometry to form a group of wirelessly matched feature points, each associated with an optically matched feature point; finally, sorting by the distance between the wirelessly matched and optically matched feature points, the highest-ranked feature point corresponding to the smallest distance;
S207: arranging the original visual feature points in ascending order of feature distance, mapping each original visual feature point to a score corresponding to the normalized distance, defining a threshold, and selecting the original visual feature points whose scores are smaller than the threshold as target visual feature points to participate in subsequent state estimation.
3. The intelligent navigation method according to claim 2, wherein the step S203 comprises: aligning IMU measurement data with wireless measurements using an IMU pre-integration technique;
the IMU pre-integration technique is as follows:

(Figure FDA0002858418020000021)

where (Figure FDA0002858418020000022) denotes the acceleration measurement of the intelligent navigation device at time t, (Figure FDA0002858418020000023) denotes its angular velocity measurement at time t, (Figure FDA0002858418020000024) denotes the quaternion multiplication operator, (·)^i denotes a measurement expressed in the IMU reference frame, (Figure FDA0002858418020000025) is the IMU reference frame at the time the (k+1)-th wireless measurement is received, expressed relative to the frame at the k-th wireless measurement, and (Figure FDA0002858418020000026) denotes a quantity that the device measures directly or that can be estimated directly from measured quantities.
4. The intelligent navigation method according to claim 1, wherein the step S3 includes:
S301: initializing a motion state and determining the position of the wireless signal source through wireless ranging; the initialization operation is performed only once, before the navigation task is executed; the initialization vector is defined as:

(Figure FDA0002858418020000027)

where (Figure FDA0002858418020000028), (Figure FDA0002858418020000029), and (Figure FDA00028584180200000210) denote the position and velocity at the j-th keyframe, (Figure FDA00028584180200000211) is the gravity in the camera reference frame at the initial time, s is the scale of the monocular vision measurement, and n is the number of keyframes in a group of data, n being a positive integer greater than or equal to 4;
S302: after initialization, estimating and updating the state of the MAV as new sensor measurements arrive; because the wireless channel to each wireless signal source changes with the environment during operation, the distance bias of the wireless measurement changes, and the distance measurements are corrected accordingly.
5. The intelligent navigation method according to claim 4, wherein the step S301 comprises:
the monocular structure-from-motion (SfM) algorithm provides the scale-free position in image-plane coordinates (Figure FDA0002858418020000031) and the camera rotation (Figure FDA0002858418020000032), where the initial camera reference frame (Figure FDA0002858418020000033) is set as the reference frame of SfM, and (Figure FDA0002858418020000034) represents the camera pose at the j-th keyframe relative to the initial camera reference frame;

the manually measured extrinsic parameters between the IMU and the camera (Figure FDA0002858418020000035) are used to transform poses between the camera and the IMU reference frame; the two relations are as follows: (Figure FDA0002858418020000036) and (Figure FDA0002858418020000037), where (Figure FDA0002858418020000038) is the rotation matrix of (Figure FDA0002858418020000039);

the relative displacement (Figure FDA00028584180200000310) and rotation (Figure FDA00028584180200000311) are replaced by the scale-free pose from monocular vision, and the measurement model obtained from the motion is:

(Figure FDA00028584180200000312)

where (Figure FDA00028584180200000313) is additive Gaussian noise; X0 is obtained by solving the following least-squares problem:

(Figure FDA00028584180200000314)

where F is the set of all frame images in the data group participating in the calculation;

given the resulting set of location data (Figure FDA00028584180200000315), the location of the wireless node is calculated by solving the following quadratic programming (QP) problem:

(Figure FDA00028584180200000316)

where (Figure FDA00028584180200000317), (Figure FDA00028584180200000318) is the distance between the device and the l-th wireless signal source at the time of the j-th frame image, and (Figure FDA00028584180200000319) is the relative displacement between the wireless measurement reference frame and the IMU reference frame.
6. The intelligent navigation method according to claim 5, wherein the step S4 includes:
establishing measurement models for the three sensors based on a graph-optimized sliding-window estimator and minimizing the measurement residuals of the three sensors; and calculating the motion state information in real time by initializing the state of the device and solving a position-driven optimization problem for the wireless signal source, the motion state information comprising the position, velocity, attitude, bias of the wireless measurement, and depths of the visual feature points.
7. The intelligent navigation method according to any one of claims 1 to 6, wherein the step S1 includes: acquiring an environment image with a monocular camera, a stereo camera, a pinhole camera, or a fisheye camera, and extracting the original visual features from the environment image.
8. An intelligent navigation device, comprising:
the image acquisition module is used for acquiring an environment image and acquiring original visual features in the environment image;
the wireless measurement module is used for screening the original visual features, using the distance and angle information obtained by wireless measurement as reference information, to obtain target visual features;
the IMU measurement module is used for initializing the location of a wireless signal source in the environment and the bias of the wireless measurements, using the target visual features and the IMU measurements;
a fusion module, configured to estimate motion state information using a measurement result of the wireless measurement, a measurement result of the IMU measurement, and the target visual feature, where the motion state information includes: state variables, depths of visual feature points, and biases of wireless measurements, the state variables including: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
and the navigation module is used for navigating according to the motion state information and the three-dimensional information corresponding to the visual characteristics in the environment.
9. An intelligent navigation device, comprising: wireless measurement node, image acquisition unit, inertial measurement unit, memory and processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the intelligent navigation method according to any one of claims 1 to 7.
CN202011552550.6A 2020-12-24 2020-12-24 Intelligent navigation method, device and equipment Active CN112762929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011552550.6A CN112762929B (en) 2020-12-24 2020-12-24 Intelligent navigation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011552550.6A CN112762929B (en) 2020-12-24 2020-12-24 Intelligent navigation method, device and equipment

Publications (2)

Publication Number Publication Date
CN112762929A true CN112762929A (en) 2021-05-07
CN112762929B CN112762929B (en) 2022-08-02

Family

ID=75694124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011552550.6A Active CN112762929B (en) 2020-12-24 2020-12-24 Intelligent navigation method, device and equipment

Country Status (1)

Country Link
CN (1) CN112762929B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113124856A (en) * 2021-05-21 2021-07-16 天津大学 Visual inertia tight coupling odometer based on UWB online anchor point and metering method
CN114942026A (en) * 2022-06-01 2022-08-26 四川大学 Multimode three-dimensional image navigation system based on intelligent data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9031809B1 (en) * 2010-07-14 2015-05-12 Sri International Method and apparatus for generating three-dimensional pose using multi-modal sensor fusion
CN109341679A (en) * 2018-09-30 2019-02-15 华中科技大学 A kind of smart machine air navigation aid and navigation system
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision
CN111649737A (en) * 2020-05-08 2020-09-11 中国航空工业集团公司西安航空计算技术研究所 Visual-inertial integrated navigation method for precise approach landing of airplane

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9031809B1 (en) * 2010-07-14 2015-05-12 Sri International Method and apparatus for generating three-dimensional pose using multi-modal sensor fusion
CN109341679A (en) * 2018-09-30 2019-02-15 华中科技大学 A kind of smart machine air navigation aid and navigation system
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision
CN111649737A (en) * 2020-05-08 2020-09-11 中国航空工业集团公司西安航空计算技术研究所 Visual-inertial integrated navigation method for precise approach landing of airplane

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHENGKAI ZHANG et al.: "Demo Abstract: WINS: WiFi-Inertial Indoor State Estimation for MAVs", 2018 Association for Computing Machinery *
卫文乐 et al.: "Real-time SLAM algorithm using an inertial measurement unit to determine keyframes", Journal of Computer Applications *
李康: "Research on composite positioning methods for UAVs in indoor environments", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113124856A (en) * 2021-05-21 2021-07-16 天津大学 Visual inertia tight coupling odometer based on UWB online anchor point and metering method
CN113124856B (en) * 2021-05-21 2023-03-14 天津大学 Visual inertia tight coupling odometer based on UWB (ultra wide band) online anchor point and metering method
CN114942026A (en) * 2022-06-01 2022-08-26 四川大学 Multimode three-dimensional image navigation system based on intelligent data

Also Published As

Publication number Publication date
CN112762929B (en) 2022-08-02


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant