CN112762929A - Intelligent navigation method, device and equipment - Google Patents
Intelligent navigation method, device and equipment
- Publication number
- CN112762929A (application number CN202011552550.6A)
- Authority
- CN
- China
- Prior art keywords
- wireless
- measurement
- imu
- measurements
- environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Navigation (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses an intelligent navigation method, device and equipment, belonging to the intersection of wireless positioning technology and robotics, wherein the method comprises the following steps: S1: acquiring an environment image and extracting original visual features from the environment image; S2: screening the original visual features, using the distance and angle information obtained by wireless measurement as reference information, to obtain target visual features; S3: initializing the location of a wireless signal source in the environment and the bias of the wireless measurements with the target visual features, based on the IMU measurements; S4: estimating motion state information using the results of the wireless measurements, the results of the IMU measurements, and the target visual features, the motion state information including: state variables, depths of visual feature points, and the bias of the wireless measurements; S5: navigating according to the motion state information and the three-dimensional information corresponding to the visual features in the environment. The invention improves the reliability of visual information in weak-texture environments, thereby realizing accurate navigation in such environments.
Description
Technical Field
The invention lies at the intersection of wireless positioning technology and robotics, and particularly relates to an intelligent navigation method, device and equipment.
Background
Navigation technology for intelligent navigation devices such as robots has developed rapidly in recent years; such devices can perform various tasks in place of humans, reducing risk to human life and improving work efficiency. For example, a robot can help firefighters search for survivors in dangerous buildings, and aerial scanning can improve warehouse inventory efficiency thirty-fold. The fundamental requirement of these applications is that the intelligent navigation device execute its tasks safely and efficiently, under precise control, in a wide variety of complex environments. That ability to exercise precise control in complex environments rests on robust state estimation. The goal of state estimation is to estimate the motion state of the device in real time, including at least position, velocity, and attitude. Robustness is a key requirement of current state estimation systems.
One of the key challenges for robust state estimation is the weak-texture environment. Such environments are quite common, for example: rooms surrounded by pure white walls, solid-color floors, mirrored walls, large French windows, and the like. The current mainstream approach to accurate state estimation is small, light, low-cost monocular vision. However, monocular vision requires an environment with good illumination and texture conditions to capture enough visual features for state estimation via projective geometry; optical flow, for instance, requires distinct visual texture for reliable feature matching. One solution is to artificially add visual markers (two-dimensional codes, etc.) to the surfaces of weakly textured objects, but this is invasive, time-consuming, and laborious. Another approach is to decide whether to bypass a weak-texture region by evaluating the texture level of the current environment. More conventional approaches use loop-closure detection as a remedy to correct the accumulated error of weak-texture areas. While this may suffice for short flights through weak texture, it does not support long-term operation of the device in weak-texture areas. Moreover, in practical applications, the motion trajectory of the device may contain no loop closure with which to correct the accumulated error.
Recently, many efforts have used wireless signals, such as LoRa, UWB, and WiFi, as another sensing modality for state estimation. These signals are not limited by visual conditions and are increasingly widespread in everyday life; UWB, for example, has been integrated into the iPhone 11. However, owing to the large wavelength of wireless signals and environmental interference, wireless state estimation accuracy is at the decimeter level, an order of magnitude worse than that of a monocular vision scheme under good visual conditions. Therefore, current state estimation systems based on wireless signals alone cannot support high-precision navigation of devices.
Disclosure of Invention
In view of the above defects or improvement needs of the prior art, the present invention provides an intelligent navigation method, apparatus and device, which aims to solve the problem that high-precision navigation cannot be supported in a weak texture environment.
To achieve the above object, according to an aspect of the present invention, there is provided an intelligent navigation method, including:
s1: acquiring an environment image and acquiring original visual features in the environment image;
s2: screening the original visual features by taking distance and angle information obtained by wireless measurement as reference information to obtain target visual features;
s3: initializing a location of a wireless signal source in an environment and a bias of wireless measurements with the target visual characteristic based on IMU measurements;
s4: estimating motion state information using the measurement results of the wireless measurements, the measurement results of the IMU measurements, and the target visual features, the motion state information comprising: state variables, depths of visual feature points, and biases of wireless measurements, the state variables including: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
s5: and navigating according to the motion state information and the three-dimensional information corresponding to the visual features in the environment.
In one embodiment, the step S1 includes:
and acquiring an environment image using a monocular camera, a stereo (binocular) optical camera, a pinhole camera, or a fisheye camera, and extracting original visual features from the environment image.
In one embodiment, the step S2 includes:
s201: measuring the distance and angle between the signal source and the measuring device;
s202: acquiring acceleration and angular velocity measured by the carried IMU;
s203: aligning IMU measurement data with wireless measurements using an IMU pre-integration technique;
s204: performing noise-reduction processing on the wirelessly measured distance via the law of cosines to cancel the bias of the wireless measurement;
s205: adopting Kalman filtering to fuse the distance and angle of wireless measurement and the short-time integral of the IMU so as to obtain the relative displacement and rotation between adjacent wireless measurement information;
s206: based on the relative displacement and rotation between adjacent wireless measurements, the visual features in the k-th frame image are projected into the (k+1)-th frame through epipolar geometry to form a group of wirelessly matched feature points, each associated with an optically matched feature point; finally, the feature points are sorted by the distance between the wireless match and the optical match, the top-ranked feature point corresponding to the smallest distance;
s207: the original visual feature points are arranged in ascending order of feature distance, each original visual feature point is assigned a score equal to its normalized distance, a threshold is defined, and the original visual feature points with scores smaller than the threshold are selected as target visual feature points to participate in subsequent state estimation.
In one embodiment, the step S203 includes: aligning IMU measurement data with wireless measurements using an IMU pre-integration technique;
the IMU pre-integration technique is as follows:
wherein â_t represents the acceleration measurement of the intelligent navigation device at time t, ω̂_t represents the angular velocity measurement of the intelligent navigation device at time t, ⊗ represents the quaternion multiplication operator, (·)^i denotes a measurement expressed in the IMU reference frame, b_{k+1}^{b_k} denotes the IMU reference frame when the (k+1)-th radio measurement is received relative to that when the k-th radio measurement is received, and the hat symbol denotes a quantity the device measures directly or that can be estimated directly from measured quantities.
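As a rough illustration of the pre-integration idea described above, the sketch below numerically integrates raw accelerometer and gyroscope samples between two wireless measurements into relative rotation, velocity, and position increments. The first-order quaternion update, Euler integration, and the absence of bias terms are simplifying assumptions for illustration only; this is not the patent's exact formulation.

```python
# A minimal numerical sketch of IMU pre-integration between two wireless
# measurements. Step sizes and the first-order updates are illustrative
# assumptions, not the patent's formulas.
import math

def quat_mult(q, r):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * q^-1."""
    qc = (q[0], -q[1], -q[2], -q[3])
    return quat_mult(quat_mult(q, (0.0,) + tuple(v)), qc)[1:]

def preintegrate(accels, gyros, dt):
    """Integrate raw IMU samples into relative rotation (quaternion),
    velocity and position increments expressed in the IMU frame at the
    k-th wireless measurement."""
    q = (1.0, 0.0, 0.0, 0.0)   # relative orientation
    v = [0.0, 0.0, 0.0]        # velocity increment
    p = [0.0, 0.0, 0.0]        # position increment
    for a, w in zip(accels, gyros):
        a_rot = quat_rotate(q, a)   # body accel into the k-th frame
        for i in range(3):
            p[i] += v[i]*dt + 0.5*a_rot[i]*dt*dt
            v[i] += a_rot[i]*dt
        # first-order quaternion update from the angular rate
        dq = (1.0, 0.5*w[0]*dt, 0.5*w[1]*dt, 0.5*w[2]*dt)
        nq = quat_mult(q, dq)
        n = math.sqrt(sum(c*c for c in nq))
        q = tuple(c / n for c in nq)
    return q, v, p
```

For constant 1 m/s² acceleration along x over one second with no rotation, the increments recover v ≈ 1 m/s and p ≈ 0.5 m, as expected from kinematics.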
In one embodiment, the step S3 includes:
s301: initializing a motion state and determining the position of a wireless signal source through wireless ranging; the initialization operation is performed only once before the navigation task is performed; the initialization vector is defined as:
wherein the entries are the position and velocity at the j-th key frame, the gravity vector in the camera reference frame at the initial moment, the scale information s of the monocular vision measurement, and the number n of key frames in a group of data, n being a positive integer greater than or equal to 4;
s302: after initialization, the state of the MAV is estimated and updated using new sensor measurement information; during operation, the wireless channel corresponding to a wireless signal source changes with the environment, so the distance bias of the wireless measurement changes, and the distance measurements are corrected accordingly.
In one embodiment, the step S301 includes:
the monocular structure-from-motion algorithm (SfM) provides the scale-free position in image-plane coordinates and the camera rotation, wherein the initial camera reference frame is set as the reference frame of SfM, and the camera pose at the j-th key frame is expressed relative to this initial camera reference frame;
poses are transformed between the camera and IMU reference frames via the manually measured extrinsic parameters between the IMU and the camera, consisting of a relative rotation (expressed as a rotation matrix) and a relative translation;
the relative displacement and rotation are replaced by the scale-free pose from monocular vision, and a measurement model with additive Gaussian noise is obtained from the motion; the initialization vector X0 is obtained by solving a least-squares problem over F, the set of all frame images in the data group participating in the calculation;
a set of location data is thereby obtained, and the location of the wireless node is calculated by solving a quadratic programming (QP) problem whose terms involve the distance between the equipment and the l-th wireless signal source at the time of the j-th frame image, and the relative displacement from the wireless measurement reference frame to the IMU reference frame.
In one embodiment, the step S4 includes:
establishing measurement models of the three sensors based on a graph-optimized sliding-window estimator, and minimizing the measurement residuals of the three sensors; the motion state information is computed in real time by initializing the device state and solving an optimization problem driven by the positions of the wireless signal sources, the motion state information further comprising the position, velocity, attitude, the bias of the wireless measurements, and the depths of the visual feature points.
According to another aspect of the present invention, there is provided an intelligent navigation device, comprising:
the image acquisition module is used for acquiring an environment image and acquiring original visual features in the environment image;
the wireless measurement module is used for screening the original visual features, using the distance and angle information obtained by wireless measurement as reference information, to obtain target visual features;
an IMU measurement module to initialize a location of a wireless signal source in an environment and a bias of wireless measurements with the target visual characteristic based on IMU measurements;
a fusion module, configured to estimate motion state information using a measurement result of the wireless measurement, a measurement result of the IMU measurement, and the target visual feature, where the motion state information includes: state variables, depths of visual feature points, and biases of wireless measurements, the state variables including: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
and the navigation module is used for navigating according to the motion state information and the three-dimensional information corresponding to the visual characteristics in the environment.
According to another aspect of the present invention, there is provided an intelligent navigation device, comprising: the intelligent navigation system comprises a wireless measurement node, an image acquisition unit, an inertia measurement unit, a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor executes the steps of the intelligent navigation method.
Generally, compared with the prior art, the above technical solution conceived by the present invention has the following beneficial effects:
(1) The invention uses the wireless signal to measure the azimuth angle and distance of the intelligent navigation equipment relative to the wireless signal source, and combines the visual images captured by the onboard camera with inertial measurement data to estimate the motion state of the equipment (at least position, velocity, and attitude), while simultaneously estimating the depths of the visual features and drawing a sparse map. Since the wireless radio-frequency signal is not limited by visual sensing, the method improves the reliability of visual information in weak-texture environments, thereby realizing accurate navigation in such environments.
(2) The visual characteristic screening method based on wireless measurement can effectively select better visual characteristics in a weak texture environment, further optimizes the reliability of wireless measurement in return, and provides an accurate and instantly deployable navigation scheme for intelligent navigation equipment without knowing the position of a wireless signal source node in the environment in advance.
(3) The invention uses a factor-graph optimization model to fuse multiple heterogeneous sensors so that each compensates for the others' weaknesses; it maintains stable performance in various complex environments, including weak-texture environments, providing a more robust navigation scheme for intelligent navigation equipment.
(4) The initialization scheme combining wireless-vision-inertial navigation provided by the invention enables the intelligent navigation equipment to start navigation in any posture and has the function of autonomously recovering the navigation capability.
Drawings
Fig. 1a is a frame diagram of an intelligent navigation device with integrated wireless-visual-inertial navigation provided in an embodiment of the present invention;
fig. 1b is a schematic diagram of an intelligent navigation device, which is provided by an embodiment of the present invention and takes an unmanned aerial vehicle as an example;
FIG. 2 is a flowchart of an intelligent navigation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of visual feature screening based on wireless measurement according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a wireless biasing and approximation method provided by an embodiment of the present invention;
FIG. 5 is a diagram illustrating an optimized model based on state estimation and mapping of a factor graph according to an embodiment of the present invention;
FIG. 6(a) is a comparison of the motion trajectory of the intelligent navigation device with the real trajectory and a conventional visual navigation trajectory;
FIG. 6(b) shows the positioning error of the intelligent navigation device in the X, Y, and Z directions;
FIG. 6(c) shows the attitude estimation error of the intelligent navigation device in the roll, pitch, and yaw directions.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1a, an embodiment of the present invention provides an intelligent navigation method; as shown in fig. 2, the method includes the following steps:
s1: acquiring an environment image and acquiring original visual features in the environment image;
s2: screening the original visual features by taking distance and angle information obtained by wireless measurement as reference information to obtain target visual features;
s3: initializing a location of a wireless signal source in an environment and a bias of wireless measurements with the target visual characteristic based on IMU measurements;
s4: estimating motion state information using the measurement results of the wireless measurements, the measurement results of the IMU measurements, and the target visual features, the motion state information comprising: state variables, depths of visual feature points, and biases of wireless measurements, the state variables including: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
s5: and navigating according to the motion state information and the three-dimensional information corresponding to the visual features in the environment.
The specific implementation process is as follows:
(1) the intelligent navigation equipment needs to be provided with wireless measurement nodes (such as UWB and WiFi nodes), a camera and an Inertial Measurement Unit (IMU), and a wireless signal source (such as a UWB base station and a WiFi router) is arranged in a working environment. Specifically, as shown in fig. 1b, the device needs to carry a wireless measurement node (e.g., UWB tag), an IMU, and an optical camera, and a wireless signal source (e.g., UWB base station, iPhone 11 with a built-in UWB chip, etc.) exists in the environment. The distance and the angle between the wireless measurement node and a wireless signal source are measured, the three-dimensional acceleration and the three-dimensional angular velocity of the equipment are measured by the IMU, and the camera acquires image information for the equipment to calculate the visual feature points.
(2) The device screens visual characteristics by using the distance and angle measured by the wireless signal as reference information; specifically, the device needs to combine the wirelessly measured distance and angle with the IMU to predict relative displacement and attitude change, which specifically includes:
(2.1) the device needs to align IMU measurement data with the wireless measurements by IMU pre-integration techniques:
wherein â_t represents the acceleration measurement of the intelligent navigation device at time t, ω̂_t represents the angular velocity measurement of the intelligent navigation device at time t, ⊗ represents the quaternion multiplication operator, (·)^i denotes a measurement expressed in the IMU reference frame, b_{k+1}^{b_k} denotes the IMU reference frame when the (k+1)-th radio measurement is received relative to that when the k-th radio measurement is received, and the hat symbol denotes a quantity the device measures directly or that can be estimated directly from measured quantities.
(2.2) as shown in fig. 4, the bias of the wireless measurement can be approximately cancelled by performing the following operations on the distance information of the wireless measurement by the cosine law:
and (2.3) adopting Kalman filtering to fuse the distance and angle of wireless measurement with the short-time integral of the IMU so as to resist Gaussian noise of wireless measurement and finally obtain the relative displacement and rotation between adjacent wireless measurement information.
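The fusion in step (2.3) can be sketched as a scalar predict/update cycle: the IMU short-time integral drives the prediction, and the wireless range measurement corrects it. The 1-D projection along the line to the signal source, the function name, and the noise parameters below are illustrative assumptions, not the patent's filter design.

```python
def kalman_fuse(x, P, imu_dx, q_noise, z_range, r_noise):
    """One Kalman predict/update cycle (1-D illustration): the IMU
    short-time displacement integral predicts the range to the source,
    and a wireless range measurement corrects the prediction."""
    x_pred = x + imu_dx               # predict with the IMU displacement
    P_pred = P + q_noise              # inflate covariance by process noise
    K = P_pred / (P_pred + r_noise)   # Kalman gain
    x_new = x_pred + K * (z_range - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Repeating this cycle across consecutive wireless measurements yields the relative displacement and rotation between adjacent wireless measurement epochs while suppressing the Gaussian noise of the ranging.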
(2.4) As shown in fig. 3, according to the obtained wireless-measurement-based relative pose, the visual features in the k-th frame image are re-projected into the (k+1)-th frame through epipolar geometry. This operation forms a set of wirelessly matched feature points, each associated with an optically matched feature point. Finally, the feature points are sorted by the distance between the wireless match and the optical match; the top-ranked feature point should correspond to the smallest distance. If the camera is not synchronized with the other sensors, there may be multiple frames of images between two wireless measurements; the disparity of such intermediate frames is too small for them to serve as key frames, so they can be ignored and only the last frame processed.
(2.5) The visual feature points are arranged in ascending order of feature distance, each feature having a score, namely the normalized distance between its optical match and its wireless match. A threshold ε may be defined empirically, and feature points with scores less than ε are allowed to participate in state estimation.
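The scoring and thresholding of steps (2.4)-(2.5) can be sketched as follows; the function name and the normalization by the largest distance are illustrative assumptions.

```python
import math

def screen_features(optical_matches, wireless_matches, eps):
    """Score each feature by the distance between its optical match and
    its wireless-predicted reprojection, normalize the distances, and
    keep the indices whose score falls below the threshold eps."""
    dists = [math.dist(o, w) for o, w in zip(optical_matches, wireless_matches)]
    max_d = max(dists) if dists and max(dists) > 0 else 1.0
    scores = [d / max_d for d in dists]
    return [i for i, s in enumerate(scores) if s < eps]
```

Features whose optical match strays far from the wireless prediction (e.g., spurious matches on a textureless wall) receive scores near 1 and are discarded, while consistent features pass through to the state estimator.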
(3) The device initializes the location of a wireless signal source (e.g., a UWB base station) and the bias of wireless measurements in the environment with the filtered visual features and measurements of the IMU;
specifically, the motion state of the intelligent navigation equipment is initialized, and then the position of the wireless signal source is determined by solving a multilateral positioning problem through wireless ranging. The initialization operation is only needed to be executed once before the navigation task is executed, and the handheld device performs variable speed movement in a small range in situ. The method specifically comprises the following steps:
(3.1) the initialization vector may be defined as:
wherein the entries are the position and velocity of the device at the time the j-th key frame was obtained, the gravity vector in the camera reference frame at the initial moment, the scale information s given to the monocular vision measurement, and the number n of key frames in a set of data. n must be at least 4 for the problem to be solvable, but n cannot be too large, or the real-time performance of the system would suffer.
(3.1.1) The monocular structure-from-motion algorithm (SfM) provides the scale-free position in image-plane coordinates and the camera rotation. The initial camera reference frame here is set as the reference frame of SfM, and the camera pose at the j-th key frame is expressed relative to this initial camera reference frame. To bring scale information into the monocular visual odometry, the short-time integration of the IMU is required. Manually measured extrinsic parameters between the IMU and the camera, consisting of a relative rotation (expressed as a rotation matrix) and a relative translation, enable the system to transform poses between the camera and IMU reference frames.
(3.1.2) Replacing the relative displacement and rotation with the scale-free pose based on monocular vision, a measurement model can be obtained according to kinematics, where F denotes all the frame images in the data group participating in the calculation.
(3.1.3) The system now knows a set of device locations; the location of the wireless node is then calculated by solving a quadratic programming (QP) problem whose terms involve the distance between the equipment and the l-th wireless signal source at the time of the j-th frame image, and the relative displacement from the wireless measurement reference frame to the IMU reference frame, which can be manually calibrated once both the wireless receiver and the IMU are installed.
(3.2) After initialization, the system continually estimates and updates the state of the MAV as new sensor measurement information arrives. During operation, the wireless channel between a wireless signal source and the device changes with the environment, causing the distance bias of the wireless measurement to change. Given the location of the wireless node, the range measurements are corrected by tracking changes in the bias.
Specifically, the radio measurement bias does not change during the channel coherence time, which is inversely proportional to the maximum Doppler spread; owing to kinematic constraints, the coherence time does not vary drastically. Then, using a set of states within the coherence time for the l-th wireless node, the ranging bias b_l is estimated by solving the following QP problem:
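For a bias that is constant over one coherence window, the least-squares estimate reduces to the mean offset between measured and geometric distances. The sketch below shows that reduction; the function name and interface are illustrative assumptions.

```python
import math

def estimate_range_bias(device_states, source_location, measured_ranges):
    """Least-squares estimate of a constant ranging bias over one
    coherence window: the mean offset between the measured ranges and
    the geometric distances to the known source location."""
    residuals = [m - math.dist(s, source_location)
                 for s, m in zip(device_states, measured_ranges)]
    return sum(residuals) / len(residuals)
```

As the window slides forward, re-running this estimate tracks slow changes in the channel and keeps the corrected ranges consistent with the geometry.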
(4) The device fuses the wireless measurements, the IMU measurements, and the screened visual features to estimate the device's state variables (three-dimensional position, three-dimensional velocity, and three-dimensional attitude, i.e., up to six degrees of freedom), the depths of the visual feature points in the environment, and the bias of the wireless measurements.
Specifically, the system all-state vector set may be defined as:
wherein x_k ∈ R^10 represents the in-group state when the k-th key frame is obtained, including position, velocity, and attitude; b_l is the ranging bias for the l-th wireless signal source; m represents the number of wireless signal sources connected to the intelligent navigation device; and λ_η is the depth of the η-th visual feature at its first observation. A total of o features are tracked in the full state vector set. The bias is constant over the coherence time and does not vary drastically; in the current model, the radio measurement bias is assumed to be the same within the full state vector set and is updated in sliding-window fashion as new measurements arrive. The method fixes the size of the full state vector set, keeping sufficient multi-view constraints while bounding the computational complexity and guaranteeing real-time performance.
In particular, the method employs a factor graph-based optimization model, as shown in FIG. 5. Obtaining a maximum a posteriori estimate by minimizing the mahalanobis norm of all measured residuals, solving the set of all state vectors:
where u represents the set of wireless measurement information, C is the set of observed visual features, and I is the set of IMU measurement information; the three residual terms are the measurement residual functions of the wireless measurement, the camera, and the IMU, respectively. To solve the optimization problem, the measurement model of each sensor is required, as follows:
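The objective being minimized is a sum of squared Mahalanobis norms, r^T Ω r, over all factors (wireless, visual, and IMU residual blocks alike). A minimal sketch of that cost, with residuals and information matrices as plain lists, is:

```python
def mahalanobis_cost(residuals, info_matrices):
    """Sum of squared Mahalanobis norms r^T * Omega * r over all
    factors; each residual vector r is paired with its information
    matrix Omega."""
    total = 0.0
    for r, omega in zip(residuals, info_matrices):
        n = len(r)
        total += sum(r[i] * omega[i][j] * r[j]
                     for i in range(n) for j in range(n))
    return total
```

In a full factor-graph solver this cost would be minimized over the state vector set with an iterative method such as Gauss-Newton; the sketch only evaluates it for fixed residuals.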
(4.1) For the wireless measurement residual, the horizontal angle to the l-th wireless signal source is combined with the roll and pitch angles provided by the IMU and converted into an attitude in the UWB coordinate system; the wireless measurement residual can then be expressed in terms of the location of the l-th wireless signal source (already obtained in the initialization step), the relative displacement between the onboard radio receiver and the IMU, and q_xyz, which extracts the vector part of a quaternion.
(4.2) For the camera measurement residual, consider the η-th screened visual feature as observed in the j-th frame image, and assume this feature was first observed in the h-th frame image; its residual at the j-th frame is its reprojection error. The corresponding 3-D point is recovered from the homogeneous coordinates of the feature point and its depth, and its reprojection into the j-th frame defines the visual residual as the difference between the reprojected point and the observation.
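The reprojection error at the heart of the camera residual can be sketched with a normalized pinhole model; intrinsics and distortion are omitted here as simplifying assumptions.

```python
def reprojection_residual(point_cam, observed):
    """Reprojection error of a 3-D point expressed in the camera frame
    against its observed normalized image coordinates (pinhole model)."""
    X, Y, Z = point_cam
    # project onto the normalized image plane and compare with observation
    return (X / Z - observed[0], Y / Z - observed[1])
```

A point that projects exactly onto its observation yields a zero residual; the optimizer drives these residuals down jointly with the wireless and IMU terms.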
(4.3) For the IMU measurement residual, given the IMU pre-integration terms, the measurement residual of the IMU can be defined accordingly.
(5) The motion state of the unmanned aerial vehicle is adjusted according to the estimated motion state variables, realizing autonomous navigation of the unmanned aerial vehicle.
To verify the effectiveness of the method, a motion trajectory was preset in a vision-limited, dim environment. Under the motion of the intelligent navigation equipment running the proposed wireless-visual-inertial integrated navigation method, the differences between the device's trajectory, the real trajectory, and that of a recent conventional visual method are shown in fig. 6(a), intuitively reflecting the method's navigation capability. The positioning error, i.e., the difference between the device's actual three-dimensional position in the X, Y, and Z directions and the estimated position at any moment, is shown in fig. 6(b); the attitude estimation error in the roll, pitch, and yaw directions at any moment is shown in fig. 6(c). The method can thus accurately estimate the state of the intelligent navigation equipment in a vision-limited, weak-texture environment and realize navigation without pre-calibrating the positions of the wireless signal source nodes in the environment, solving the problem that conventional navigation systems, which rely on visual sensing for positioning, lose precision under weak-texture conditions.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (9)
1. An intelligent navigation method, characterized in that the method comprises:
s1: acquiring an environment image and acquiring original visual features in the environment image;
s2: screening the original visual features by taking distance and angle information obtained by wireless measurement as reference information to obtain target visual features;
s3: initializing a location of a wireless signal source in an environment and a bias of wireless measurements with the target visual characteristic based on IMU measurements;
s4: estimating motion state information using the measurement results of the wireless measurements, the measurement results of the IMU measurements, and the target visual features, the motion state information comprising: state variables, depths of visual feature points, and biases of wireless measurements, the state variables including: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
s5: and navigating according to the motion state information and the three-dimensional information corresponding to the visual features in the environment.
2. The intelligent navigation method according to claim 1, wherein the step S2 includes:
s201: measuring the distance and angle between the signal source and the measuring device;
s202: acquiring acceleration and angular velocity measured by the carried IMU;
s203: aligning IMU measurement data with wireless measurements using an IMU pre-integration technique;
s204: carrying out noise-reduction processing on the wirelessly measured distance through the law of cosines so as to cancel the bias of the wireless measurement;
s205: adopting Kalman filtering to fuse the distance and angle of wireless measurement and the short-time integral of the IMU so as to obtain the relative displacement and rotation between adjacent wireless measurement information;
s206: based on the relative displacement and rotation between adjacent wireless measurement information, projecting the visual features in the image of the k-th frame to the (k+1)-th frame through epipolar geometry to form a group of wireless-matched feature points, each associated with its optically matched feature point; finally, sorting the feature points according to the distance between the wireless-matched and optically matched feature points, the top-ranked feature point corresponding to the minimum distance;
s207: arranging the original visual feature points in ascending order of feature distance, mapping each original visual feature point to a score corresponding to its normalized distance, defining a threshold, and selecting the original visual feature points with scores smaller than the threshold as target visual feature points to participate in subsequent state estimation.
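Steps S206–S207 rank features by the pixel distance between each optical-flow match and its wireless/IMU-predicted position, then keep the low-distance ones. A minimal sketch, assuming both match sets are already available as pixel coordinates; the function name and the default threshold are illustrative, not taken from the patent:

```python
import math

def screen_features(optical_matches, wireless_matches, score_threshold=0.5):
    """Keep features whose optical match lies close to its wireless-predicted
    position; returns indices of target features, best (smallest score) first."""
    dists = [math.dist(o, w) for o, w in zip(optical_matches, wireless_matches)]
    d_max = max(dists) or 1.0                # guard against all-zero distances
    scores = [d / d_max for d in dists]      # normalised distances in [0, 1]
    kept = [i for i, s in enumerate(scores) if s < score_threshold]
    kept.sort(key=lambda i: scores[i])       # ascending distance, best first
    return kept
```

For three features whose optical/wireless match distances are 1, 0 and 5 pixels, the normalised scores are 0.2, 0.0 and 1.0, so only the first two survive a 0.5 threshold, ordered best-first.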
3. The intelligent navigation method according to claim 2, wherein the step S203 comprises: aligning IMU measurement data with wireless measurements using an IMU pre-integration technique;
the IMU pre-integration technique is as follows:
wherein â_t represents the acceleration measurement of the intelligent navigation device at time t, ω̂_t represents the angular velocity measurement of the intelligent navigation device at time t, ⊗ represents the quaternion multiplication operator, (·)^i represents a measurement in the reference frame of the IMU, the pre-integrated terms are expressed in the IMU reference frame at the time the (k+1)-th wireless measurement is received relative to the time the k-th wireless measurement is received, and the hat symbol (^) denotes a quantity that the device measures directly or that can be estimated directly from the measured quantities.
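The pre-integration of claim 3 accumulates the relative displacement, velocity and rotation between two consecutive wireless measurements, expressed in the IMU frame at the first of them. The following is a simplified planar (2D) Euler-integration sketch; the restriction to 2D and the omission of biases, noise and gravity are assumptions for illustration, not the patent's quaternion formulation:

```python
import math

def preintegrate_2d(accels, gyro_z, dt):
    """Accumulate (d_pos, d_vel, d_yaw) between two wireless measurements.

    accels -- list of (ax, ay) body-frame accelerations
    gyro_z -- list of yaw rates, one per accelerometer sample
    dt     -- sample period in seconds
    """
    px = py = vx = vy = yaw = 0.0
    for (ax, ay), wz in zip(accels, gyro_z):
        c, s = math.cos(yaw), math.sin(yaw)
        # Rotate body-frame acceleration into the frame of the first sample.
        a0x, a0y = c * ax - s * ay, s * ax + c * ay
        # Euler integration of position, then velocity, then heading.
        px += vx * dt + 0.5 * a0x * dt * dt
        py += vy * dt + 0.5 * a0y * dt * dt
        vx += a0x * dt
        vy += a0y * dt
        yaw += wz * dt
    return (px, py), (vx, vy), yaw
```

Ten samples of 1 m/s² forward acceleration at 10 Hz yield a 0.5 m displacement and a 1 m/s velocity, matching the closed-form constant-acceleration result over one second.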
4. The intelligent navigation method according to claim 1, wherein the step S3 includes:
s301: initializing a motion state and determining the position of a wireless signal source through wireless ranging; the initialization operation is performed only once before the navigation task is performed; the initialization vector is defined as:
wherein p_j and v_j are the position and velocity at the j-th key frame, g^{c_0} is the gravity vector in the camera reference frame at the initial moment, s is the scale information of the monocular vision measurement, and n is the number of key frames in a group of data, n being a positive integer greater than or equal to 4;
s302: after initialization, the state of the MAV (micro aerial vehicle) is estimated and updated using the updated sensor measurement information; because the wireless channel corresponding to a wireless signal source changes with the environment during operation, the distance offset of the wireless measurement changes, and the distance measurement is corrected accordingly.
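The online correction of the wireless distance offset in step S302 can be approximated by tracking the measured-minus-predicted range with an exponential moving average; the function name and the smoothing factor alpha are illustrative assumptions, not the patent's formulation:

```python
def update_range_bias(measured_ranges, predicted_ranges, bias=0.0, alpha=0.2):
    """Track the slowly varying offset of wireless range measurements.

    measured_ranges  -- ranges reported by the wireless front end
    predicted_ranges -- ranges predicted from the current state estimate
    """
    for m, p in zip(measured_ranges, predicted_ranges):
        # Exponential moving average of the instantaneous offset (m - p).
        bias = (1.0 - alpha) * bias + alpha * (m - p)
    return bias
```

Corrected ranges are then measured minus bias; with a constant 0.3 m offset the estimate converges to 0.3 within a few dozen samples.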
5. The intelligent navigation method according to claim 4, wherein the step S301 comprises:
the monocular vision structure-from-motion (SfM) algorithm provides the scale-free position in image-plane coordinates and the camera rotation, wherein the initial camera reference frame c_0 is set as the reference frame of SfM, and the camera pose at the j-th key frame is expressed relative to this initial camera reference frame;
the pose is transformed between the camera reference frame and the IMU reference frame through the extrinsic parameters between the IMU and the camera, obtained by manual measurement, the rotation being applied through the corresponding rotation matrix;
the relative displacement and rotation are replaced by the scale-free pose based on monocular vision, and the measurement model is obtained from the motion, with additive Gaussian noise; the initialization vector X_0 is obtained by solving a least-squares problem over F, the set of all frame images in the data group participating in the calculation;
a set of position data is obtained, and the location of the wireless node is calculated by solving a quadratic programming (QP) problem over the ranging residuals.
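The wireless-node location at the end of claim 5 is found by minimizing range residuals; since the exact QP formulation is not reproduced above, the following is a generic gradient-descent least-squares stand-in in 2D, with the function name, learning rate and iteration count all assumed for illustration:

```python
import math

def locate_anchor(positions, ranges, iters=3000, lr=0.05):
    """Estimate a wireless node's 2D position from device positions and
    the ranges measured to the node, by gradient descent on the sum of
    squared range residuals."""
    # Start from the centroid of the device positions.
    x = sum(p[0] for p in positions) / len(positions)
    y = sum(p[1] for p in positions) / len(positions)
    for _ in range(iters):
        gx = gy = 0.0
        for (px, py), r in zip(positions, ranges):
            d = math.hypot(x - px, y - py) or 1e-9   # guard zero distance
            e = d - r                                # range residual
            gx += 2.0 * e * (x - px) / d             # gradient of e**2
            gy += 2.0 * e * (y - py) / d
        x -= lr * gx
        y -= lr * gy
    return x, y
```

With four device positions and noise-free ranges to an anchor at (2, 1), the estimate converges to the true location to well under a centimetre.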
6. The intelligent navigation method according to claim 5, wherein the step S4 includes:
establishing measurement models of the three sensors based on a graph-optimized sliding-window estimator, and minimizing the measurement residuals of the three sensors; and calculating the motion state information in real time by initializing the state of the device and solving a position-driven optimization problem for the wireless signal source, wherein the motion state information further comprises the position, the velocity, the attitude, the bias of the wireless measurement and the depths of the visual feature points.
7. The intelligent navigation method according to any one of claims 1 to 6, wherein the step S1 comprises: acquiring an environment image by using a monocular camera, a binocular optical camera, a pinhole camera or a fisheye camera, and acquiring original visual features in the environment image.
8. An intelligent navigation device, comprising:
the image acquisition module is used for acquiring an environment image and acquiring original visual features in the environment image;
the wireless measurement module is used for screening the original visual features by taking the distance and angle information obtained by wireless measurement as reference information to obtain target visual features;
an IMU measurement module to initialize a location of a wireless signal source in an environment and a bias of wireless measurements with the target visual characteristic based on IMU measurements;
a fusion module, configured to estimate motion state information using a measurement result of the wireless measurement, a measurement result of the IMU measurement, and the target visual feature, where the motion state information includes: state variables, depths of visual feature points, and biases of wireless measurements, the state variables including: three-dimensional position, three-dimensional velocity, and three-dimensional attitude;
and the navigation module is used for navigating according to the motion state information and the three-dimensional information corresponding to the visual characteristics in the environment.
9. An intelligent navigation device, comprising: wireless measurement node, image acquisition unit, inertial measurement unit, memory and processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the intelligent navigation method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011552550.6A CN112762929B (en) | 2020-12-24 | 2020-12-24 | Intelligent navigation method, device and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011552550.6A CN112762929B (en) | 2020-12-24 | 2020-12-24 | Intelligent navigation method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112762929A true CN112762929A (en) | 2021-05-07 |
CN112762929B CN112762929B (en) | 2022-08-02 |
Family
ID=75694124
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011552550.6A Active CN112762929B (en) | 2020-12-24 | 2020-12-24 | Intelligent navigation method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112762929B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9031809B1 (en) * | 2010-07-14 | 2015-05-12 | Sri International | Method and apparatus for generating three-dimensional pose using multi-modal sensor fusion |
CN109341679A (en) * | 2018-09-30 | 2019-02-15 | 华中科技大学 | A kind of smart machine air navigation aid and navigation system |
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map constructing method and system based on GPS, IMU and binocular vision |
CN111649737A (en) * | 2020-05-08 | 2020-09-11 | 中国航空工业集团公司西安航空计算技术研究所 | Visual-inertial integrated navigation method for precise approach landing of airplane |
Non-Patent Citations (3)
Title |
---|
SHENGKAI ZHANG et al.: "Demo Abstract: WINS: WiFi-Inertial Indoor State Estimation for MAVs", 2018 Association for Computing Machinery *
WEI WENLE et al.: "Real-time SLAM algorithm using an inertial measurement unit to determine key frames", Journal of Computer Applications *
LI KANG: "Research on composite positioning methods for UAVs in indoor environments", China Master's Theses Full-text Database, Engineering Science and Technology II *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113124856A (en) * | 2021-05-21 | 2021-07-16 | 天津大学 | Visual inertia tight coupling odometer based on UWB online anchor point and metering method |
CN113124856B (en) * | 2021-05-21 | 2023-03-14 | 天津大学 | Visual inertia tight coupling odometer based on UWB (ultra wide band) online anchor point and metering method |
CN114942026A (en) * | 2022-06-01 | 2022-08-26 | 四川大学 | Multimode three-dimensional image navigation system based on intelligent data |
Also Published As
Publication number | Publication date |
---|---|
CN112762929B (en) | 2022-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Qin et al. | A general optimization-based framework for global pose estimation with multiple sensors | |
CN108827315B (en) | Manifold pre-integration-based visual inertial odometer pose estimation method and device | |
CN111156998B (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion | |
CN106840148B (en) | Wearable positioning and path guiding method based on binocular camera under outdoor working environment | |
Achtelik et al. | Stereo vision and laser odometry for autonomous helicopters in GPS-denied indoor environments | |
Georgiev et al. | Localization methods for a mobile robot in urban environments | |
US9990726B2 (en) | Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image | |
CN111288989B (en) | Visual positioning method for small unmanned aerial vehicle | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
CN112740274A (en) | System and method for VSLAM scale estimation on robotic devices using optical flow sensors | |
CN111932674A (en) | Optimization method of line laser vision inertial system | |
CN112762929B (en) | Intelligent navigation method, device and equipment | |
CN112254729A (en) | Mobile robot positioning method based on multi-sensor fusion | |
Andert et al. | Lidar-aided camera feature tracking and visual slam for spacecraft low-orbit navigation and planetary landing | |
JP2017524932A (en) | Video-assisted landing guidance system and method | |
Li et al. | Aerial-triangulation aided boresight calibration for a low-cost UAV-LiDAR system | |
Zhao et al. | Subt-mrs: A subterranean, multi-robot, multi-spectral and multi-degraded dataset for robust slam | |
Wang et al. | UAV navigation in large-scale GPS-denied bridge environments using fiducial marker-corrected stereo visual-inertial localisation | |
Giubilato et al. | A comparison of monocular and stereo visual FastSLAM implementations | |
CN116380079A (en) | Underwater SLAM method for fusing front-view sonar and ORB-SLAM3 | |
Alliez et al. | Indoor localization and mapping: Towards tracking resilience through a multi-slam approach | |
CN115355904A (en) | Slam method for Lidar-IMU fusion of ground mobile robot | |
KR102406240B1 (en) | Robust stereo visual inertial navigation apparatus and method | |
Ross et al. | Uncertainty estimation for stereo visual odometry | |
CN114459467A (en) | Target positioning method based on VI-SLAM in unknown rescue environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||