CN113188557B - Visual inertial integrated navigation method integrating semantic features - Google Patents

Visual inertial integrated navigation method integrating semantic features

Info

Publication number
CN113188557B
CN113188557B
Authority
CN
China
Prior art keywords
semantic
plane
pose
visual
coordinate system
Prior art date
Legal status
Active
Application number
CN202110467584.3A
Other languages
Chinese (zh)
Other versions
CN113188557A (en)
Inventor
黄郑
王红星
雍成优
朱洁
刘斌
吕品
陈玉权
何容
吴媚
赖际舟
Current Assignee
Nanjing University of Aeronautics and Astronautics
Jiangsu Fangtian Power Technology Co Ltd
Original Assignee
Nanjing University of Aeronautics and Astronautics
Jiangsu Fangtian Power Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics, Jiangsu Fangtian Power Technology Co Ltd filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202110467584.3A priority Critical patent/CN113188557B/en
Publication of CN113188557A publication Critical patent/CN113188557A/en
Application granted granted Critical
Publication of CN113188557B publication Critical patent/CN113188557B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/3407 - Route searching; Route guidance specially adapted for specific applications
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00 - Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12 - Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a visual inertial integrated navigation method fusing semantic features. RGBD visual sensor data S(k), accelerometer data and gyroscope data are collected at time k; the current camera pose T(k) is computed from the visual sensor data S(k) by visual odometry; semantic plane features are extracted and matched between two adjacent image frames using the visual sensor data S(k); the inertial sensor data are pre-integrated between two adjacent image frames; the carrier navigation information is solved by jointly optimizing the semantic plane observation residual, the visual odometry relative pose observation residual and the inertial pre-integration residual; and the carrier navigation information and camera intrinsic parameters are output. The invention can effectively improve the positioning accuracy and robustness of the navigation system.

Description

Visual inertial integrated navigation method integrating semantic features
Technical Field
The invention belongs to the technical field of robot navigation, and particularly relates to a visual inertial integrated navigation method integrating semantic features.
Background
The visual SLAM algorithm has become a major research hotspot in the field of autonomous robot navigation because of the richness of its perception information. Traditional visual SLAM methods extract point and line features to describe the environment and compute the pose; these features are described and matched through low-level intensity relations, and the structural redundancy information in the scene is not fully utilized. A single visual sensor also has a limited sensing dimension, making it difficult to achieve robust positioning of an unmanned aerial vehicle in a complex indoor environment. Both issues affect the accuracy and reliability of the navigation system.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a visual inertial integrated navigation method integrating semantic features so as to improve the accuracy of autonomous positioning of an unmanned aerial vehicle.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
a visual inertial integrated navigation method integrating semantic features comprises the following steps:
step 1, collecting RGBD vision sensor data S (k) at k time and addingSpeedometer dataAnd gyroscope data
Step 2, calculating by utilizing a visual mileage according to the visual sensor data S (k) to obtain the current pose T (k) of the camera;
step 3, based on the visual sensor data between two adjacent image frames, constructing a semantic plane feature map, and matching the semantic plane of the current frame with semantic signposts in the map to obtain the observation relationship between the current key frame and the semantic signposts;
step 4, inertial pre-integration between two adjacent image frames is carried out based on inertial sensor data, wherein the inertial sensor data comprises accelerometer data and gyroscope data;
and 5, based on the sum of the semantic plane observation residual, the visual odometer relative pose observation residual and the inertial pre-integration residual as a combined optimization function, performing nonlinear optimization on the optimization function to realize pose solving.
Step 6, outputting carrier navigation information and camera internal parameters, and returning to the step 1
Preferably, the obtaining the current pose of the camera specifically includes: and firstly, extracting ORB features from two adjacent frames, then, calculating the relative pose between two key frames by utilizing the ORB feature matching relation between the two frames and pnp, and obtaining the pose T (k) of the camera at the moment k through accumulation of the relative poses.
Preferably, plane information is acquired from the visual sensor data between two adjacent image frames, and semantic plane features are constructed from the semantic category, centroid, normal and horizontal/vertical type of each plane:
· s_p = {p_x, p_y, p_z}
· s_n = {n_x, n_y, n_z, n_d}
· s_o ∈ {horizontal, vertical}
· s_c = corresponding detected semantic object class
where s_p is the plane centroid, s_n is the plane normal parameter, s_o is the plane type label (horizontal/vertical), and s_c is the plane semantic category label, which depends on the semantic object class corresponding to the plane;
constructing a semantic planar feature map based on the constructed semantic planar features;
defining initial semantic landmarks, matching the semantic plane S_k detected in each frame against the category, normal direction and centroid of the initial semantic landmarks, and obtaining the observation relation between the current key frame and the semantic landmarks.
Preferably, the inertial sensor data obtained at time k comprise the accelerometer data and gyroscope data from time k-1 to time k, which are used to construct an inertial sensor measurement model;
in this model, n_a and n_ω are the white noise of the accelerometer and the gyroscope respectively; the accelerometer and gyroscope random walks have derivatives that are white noise; the model further involves the ideal values of the accelerometer and gyroscope measurements, the gravity g^W in the navigation frame, and the rotation matrix from the navigation coordinate system to the body coordinate system at the sampling time;
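The measurement-model formula images are not reproduced in this text. A standard model consistent with the symbol descriptions above (here the subscript m marks the raw measurement and b_a, b_ω denote the random-walk biases; this notation is an assumption, not copied from the original figures) is:
a_m^b = a^b + b_a + R_W^b g^W + n_a          (measured specific force)
ω_m^b = ω^b + b_ω + n_ω                      (measured angular rate)
d(b_a)/dt = n_{b_a},  d(b_ω)/dt = n_{b_ω}    (the bias derivatives are white noise)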
A plurality of inertial sensor samples lie between two adjacent image frames, and all the inertial sensor data between the two frames are pre-integrated iteratively;
in this recursion, the position pre-integration, the velocity pre-integration and the rotation pre-integration (the rotation being represented by the quaternion γ) are accumulated; initially the position and velocity pre-integrations are 0 and γ is the unit quaternion; R(γ) denotes the conversion of the quaternion into a rotation matrix, and quaternion multiplication is used for the rotation update;
the frequency of the visual data is thereby kept consistent with the inertial pre-integration frequency, and the inertial pre-integration terms between two adjacent image frames are obtained.
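The iterative pre-integration formulas are likewise missing from this text; a commonly used VINS-style recursion matching the description above (α for the position pre-integration, β for the velocity pre-integration, γ for the rotation pre-integration, i indexing IMU samples; this notation is assumed) is:
α_{i+1} = α_i + β_i Δt + 0.5 · R(γ_i)(a_m,i^b − b_a) Δt^2
β_{i+1} = β_i + R(γ_i)(a_m,i^b − b_a) Δt
γ_{i+1} = γ_i ⊗ [1, 0.5 (ω_m,i^b − b_ω) Δt]^T
with α_0 = β_0 = 0 and γ_0 the unit quaternion.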
Preferably, the optimization function sums the semantic landmark observation residuals, the visual odometry relative pose observation residuals and the IMU pre-integration residuals;
the Semantic part represents the residual of the semantic landmark observations, i.e. the semantic landmark observation error observed in a certain camera frame; the VO part represents the observation residual of the relative pose in the visual odometry; the third term is the IMU pre-integration estimation residual; ρ(·) denotes a robust kernel function; Σ^{-1} is the information matrix corresponding to each error term, which is the inverse of its covariance and represents a preliminary estimate of the accuracy of each constraint; i and j refer to the i-th and j-th image frames respectively, and k refers to the k-th semantic landmark.
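The optimization-function image is not reproduced here; a reconstruction consistent with the three terms just described is given below. The residual labels e_L, e_VO and e_IMU, and the placement of the robust kernel on the two vision terms only, are assumptions:
E(X) = Σ_{i,k} ρ( e_L(i,k)^T Σ_L^{-1} e_L(i,k) ) + Σ_{i,j} ρ( e_VO(i,j)^T Σ_VO^{-1} e_VO(i,j) ) + Σ_{i,j} e_IMU(i,j)^T Σ_IMU^{-1} e_IMU(i,j)
and the state X minimizing E(X) gives the carrier navigation information.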
Preferably, the semantic landmark observation error takes as its measurement the position observation of the semantic landmark in the camera coordinate system; the variables to be optimized are the camera pose in the i-th frame and the position of the landmark L_k in the world frame;
the Jacobian matrices of the error e with respect to the camera pose and the landmark position are then computed;
the pose observation residual takes as its measurement the VO relative pose observation between the i-th frame and the j-th frame; the poses of the i-th and j-th frames are the variables to be optimized, and the Jacobian matrices of the error e with respect to the two poses are solved respectively;
the inertial pre-integration residual is obtained from the difference between the predicted value and the pre-integrated value between two adjacent image frames.
Preferably, in the semantic landmark observation error, the Jacobian matrices of the error e with respect to the camera pose and the landmark position involve ξ, the Lie algebra representation of the pose; [X'_i, Y'_i, Z'_i]^T, the semantic plane coordinates in the camera frame at time i; f_x and f_y, the focal lengths in the camera intrinsics; and the rotation component of the camera pose.
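The Jacobian images themselves are not reproduced in this text. Because the description names the camera-frame point [X'_i, Y'_i, Z'_i]^T, the focal lengths f_x, f_y and a Lie-algebra pose ξ, the Jacobians plausibly follow the standard pinhole reprojection form; the sketch below is that textbook form under this assumption and is not a verified copy of the patent's equations. Writing P' = [X', Y', Z']^T for the landmark expressed in the camera frame at time i and π(P') for its pixel projection:
∂e/∂P' = - [ f_x/Z'   0        -f_x X'/Z'^2
             0        f_y/Z'   -f_y Y'/Z'^2 ]
∂e/∂δξ  = (∂e/∂P') · [ I_3  -(P')^ ]    (chain rule through the pose perturbation; (P')^ is the skew-symmetric matrix; signs depend on the perturbation convention)
∂e/∂L_k = (∂e/∂P') · R_i                (R_i is the rotation component of the camera pose at frame i)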
Preferably, the Jacobian matrices of the error e in the pose observation residual with respect to the two poses are obtained from the Lie algebra right perturbation model.
Preferably, the inertial pre-integration residual involves: the rotation matrix from the navigation coordinate system to the body coordinate system at time j; the positions of the body coordinate system in the navigation coordinate system at times i and j; the velocities of the body coordinate system in the navigation coordinate system at times i and j; the time interval Δt_i between two adjacent image frames; the rotation quaternions of the body coordinate system in the navigation coordinate system at times i and j; the accelerometer random walks in the body coordinate system at times i and j; and the gyroscope random walks in the body coordinate system at times i and j; [γ]_xyz denotes taking the x, y, z components of the quaternion γ.
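The residual formula image is not reproduced; the widely used pre-integration residual consistent with the quantities listed above (α, β, γ as in the recursion given earlier; p, v, q, b_a, b_ω for the body position, velocity, quaternion and biases; this reconstruction is an assumption) is:
r_α = R_W^{b_i} ( p_{b_j}^W − p_{b_i}^W − v_{b_i}^W Δt_i + 0.5 g^W Δt_i^2 ) − α
r_β = R_W^{b_i} ( v_{b_j}^W − v_{b_i}^W + g^W Δt_i ) − β
r_γ = 2 [ γ^{-1} ⊗ (q_{b_i}^W)^{-1} ⊗ q_{b_j}^W ]_{xyz}
r_{b_a} = b_{a_j} − b_{a_i},   r_{b_ω} = b_{ω_j} − b_{ω_i}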
The invention discloses the following technical effects: when the camera moves with strong maneuvers, motion estimation relying only on visual features yields low navigation and positioning accuracy. The invention fuses semantic features and inertial information into the optimization objective function of the visual odometry, adding high-dimensional feature observations and additional sensor observation constraints; the relative motion over a short time is estimated and the pose is optimized, so that the pose estimation accuracy of the visual odometry under strong maneuvering conditions is improved, and the navigation positioning accuracy and robustness are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
As shown in fig. 1, the invention provides a visual inertial integrated navigation method for fusing semantic features, which comprises the following steps:
step 1: acquisition of k-time RGBD vision sensor data S (k) and accelerometer dataAnd gyroscope data
Step 2: the current pose T (k) of the camera is obtained by utilizing a visual mileage calculation according to the visual sensor data S (k):
and firstly, ORB feature extraction is carried out on two adjacent frames, then, the ORB feature matching relation between the two frames and the Perspotive-n-point (pnp) are utilized to calculate the relative pose between the two key frames, and the pose T (k) of the camera at the moment k can be obtained through accumulation of the relative poses.
Step 3: semantic planar feature extraction and matching between two adjacent image frames is performed based on the visual sensor data S (k):
the RGBD vision sensor at the moment k and the moment k-1 is utilized to collect data S (k) and S (k-1), and the corresponding semantic plane feature extraction and matching method is as follows:
(a) Extracting semantic information
The specified semantic objects are detected with the YOLOv3 object detection algorithm, yielding the two-dimensional bounding box (bbox) of each semantic object in the image frame.
(b) Semantic plane extraction
Within the bbox region, the depth image in the visual sensor data S(k) is converted into an organized (structured) point cloud; following a connected-domain-based organized point-cloud plane segmentation algorithm, point normals are extracted from the neighborhood covariance, and whether adjacent points belong to the same connected domain is judged from the point normals and the distance from the tangent plane to the origin.
For each connected domain whose number of points is larger than a set threshold N_min, a plane is fitted by least squares to obtain the plane equation of the connected domain. From this, the plane parameters of the semantic object surface (centroid {x, y, z}, normal n_p: {n_x, n_y, n_z}) and the object class it belongs to are obtained, realizing the extraction of semantic planes.
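As an illustration of the least-squares plane fit, a sketch is given below under the assumption that each connected domain is provided as an N x 3 array of 3D points; it is not taken from the patent.

import numpy as np

def fit_plane(points, n_min=200):
    """Least-squares plane fit: returns the centroid and the plane parameters (n_x, n_y, n_z, n_d)."""
    if len(points) < n_min:              # skip connected domains with too few points
        return None
    centroid = points.mean(axis=0)
    # The normal is the singular vector of the centered points with the smallest singular value
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal.dot(centroid)            # plane equation n . x + d = 0
    return centroid, np.append(normal, d)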
(c) Based on the extracted plane information, carrying out semantic plane feature construction
First, the plane normal n_p and the prior ground plane normal n_g are used to judge whether the plane is horizontal or vertical. The difference between the point cloud normal and the prior ground plane normal n_g is computed as d_hor = ||n_p - n_g||; if d_hor is smaller than a set horizontal normal difference threshold t_hor, the plane with normal n_p is marked as a horizontal plane. The difference between the plane normal and the vertical plane normal is computed by the dot product d_vert = n_p · n_g; if d_vert is less than a set threshold t_vert, the plane with normal n_p is marked as a vertical plane.
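A sketch of this horizontal/vertical test is shown below; the threshold values, the ground-normal direction and the use of the absolute value of the dot product are illustrative assumptions.

import numpy as np

def classify_plane(n_p, n_g=np.array([0.0, 0.0, 1.0]), t_hor=0.2, t_vert=0.2):
    """Label a plane as horizontal or vertical from its normal n_p and the prior ground normal n_g."""
    n_p = n_p / np.linalg.norm(n_p)
    if np.linalg.norm(n_p - n_g) < t_hor:       # d_hor = ||n_p - n_g||
        return "horizontal"
    if abs(np.dot(n_p, n_g)) < t_vert:          # d_vert = n_p . n_g (absolute value used for robustness)
        return "vertical"
    return "other"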
Then, semantic plane features are constructed from the semantic category, centroid, normal and horizontal/vertical type of the plane:
· s_p = {p_x, p_y, p_z}
· s_n = {n_x, n_y, n_z, n_d}
· s_o ∈ {horizontal, vertical}
· s_c = corresponding detected semantic object class
where s_p is the plane centroid, s_n is the plane normal parameter, s_o is the plane type label (horizontal/vertical), and s_c is the plane semantic category label, which depends on the semantic object class corresponding to the plane.
(d) A semantic plane feature map is constructed from the semantic plane features, and the semantic plane of the current frame is matched against the semantic landmarks in the map.
A semantic plane S_i obtained in step (b) consists of its centroid, its normal parameters, its plane type label (horizontal/vertical) and its plane semantic category label.
The first received semantic plane is directly mapped to the first semantic landmark, denoted L_i, which stores the centroid and normal of the semantic landmark in the world coordinate system; these are computed from the plane centroid and normal, where x_r denotes the current camera position in the world coordinate system and the rotation matrix from the camera coordinate system to the world coordinate system is used for the transformation.
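The initialization formula image is missing from this text; a rigid-transform reconstruction consistent with the symbols just named (R_wc for the camera-to-world rotation matrix; this notation is assumed) would be:
L_p = R_wc · s_p + x_r                  (landmark centroid mapped into the world frame)
L_n = R_wc · (n_x, n_y, n_z)^T          (landmark normal rotated into the world frame)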
After the initial semantic landmark setting is completed, data association between the semantic plane S_k detected in each frame and the semantic landmarks is realized through the following three steps:
First, the category and plane type of each semantic plane extracted from the frame are matched against the semantic landmarks; after a successful match, the number of points and the region area of the semantic plane are computed, and semantic planes whose point count is smaller than a minimum threshold t_p or whose plane area is smaller than a minimum threshold t_a are discarded.
Second, the normal of the semantic plane is converted from the camera coordinate system to the world coordinate system by a coordinate transformation and compared with the normal of the semantic landmark; if the deviation between the two normals is smaller than the set threshold t_n, the semantic plane is considered consistent with the landmark normal and the subsequent coordinate association is carried out, otherwise the matching fails.
Third, the centroid of the semantic plane is converted from the camera coordinate system to the world coordinate system, and the Mahalanobis distance between the centroid of the semantic plane and the matched landmark centroid is calculated; if the Mahalanobis distance is larger than a set threshold, the detected semantic object is mapped to a new semantic landmark l_j, otherwise the semantic object is matched with the current semantic landmark l_i.
Through the above steps, the observation relation between the current key frame and the semantic landmarks is obtained; an illustrative sketch of this matching logic follows.
Step 4: pre-integration between two adjacent image frames is performed based on inertial sensor data:
The inertial sensor data obtained at time k comprise the accelerometer data and gyroscope data from time k-1 to time k, indexed by i = 0, 1, 2, …, (t(k) - t(k-1))/Δt, where t(k) is the sampling time corresponding to time k, t(k-1) is the sampling time corresponding to time k-1, and Δt is the sampling period of the inertial sensor. These data enter the inertial sensor measurement model;
in this model, n_a and n_ω are the white noise of the accelerometer and the gyroscope respectively; the accelerometer and gyroscope random walks have derivatives that are white noise; the model further involves the ideal values of the accelerometer and gyroscope measurements, the gravity g^W in the navigation frame, and the rotation matrix from the navigation coordinate system to the body coordinate system at the sampling time.
The pre-integration between two adjacent inertial frames proceeds as follows: the position pre-integration, the velocity pre-integration and the rotation pre-integration (the rotation being represented by the quaternion γ) are accumulated sample by sample; initially the position and velocity pre-integrations are 0 and γ is the unit quaternion; R(γ) denotes the conversion of the quaternion into a rotation matrix, and quaternion multiplication is used for the rotation update. A plurality of inertial sensor samples lie between two adjacent image frames; all inertial sensor data between the two frames are pre-integrated iteratively by this recursion, so that the frequency of the visual data is consistent with the inertial pre-integration frequency, and the inertial pre-integration terms between the two adjacent image frames are obtained.
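A numerical sketch of the iterative pre-integration between two image frames is given below. The quaternion order [w, x, y, z], the Euler-style discretization and the helper names are assumptions for illustration.

import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    """Rotation matrix R(gamma) of a unit quaternion."""
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def preintegrate(acc, gyr, dt, b_a, b_w):
    """Accumulate position (alpha), velocity (beta) and rotation (gamma) pre-integration terms."""
    alpha, beta = np.zeros(3), np.zeros(3)
    gamma = np.array([1.0, 0.0, 0.0, 0.0])               # unit quaternion
    for a, w in zip(acc, gyr):                           # one IMU sample per iteration
        R = quat_to_rot(gamma)
        a_corr = a - b_a
        alpha += beta * dt + 0.5 * R @ a_corr * dt**2
        beta  += R @ a_corr * dt
        dq = np.concatenate(([1.0], 0.5 * (w - b_w) * dt))
        gamma = quat_mult(gamma, dq)
        gamma /= np.linalg.norm(gamma)                   # keep the quaternion normalized
    return alpha, beta, gamma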
Step 5: and (3) optimizing and solving carrier navigation information by combining the semantic plane observation residual, the visual odometer relative pose observation residual and the inertial pre-integration residual:
(a) Establishing an optimization variable X:
where n is the sequence number of the last frame; the state of frame k comprises the carrier position, velocity, quaternion, and the accelerometer and gyroscope random walks; the coordinates of the semantic plane features are also included, with m the sequence number of the last semantic plane feature;
(b) Establishing an optimization function e:
where the Semantic part represents the residual of the semantic landmark observations, i.e. the semantic landmark position error observed in a certain camera frame; the VO part represents the observation residual of the relative pose in the visual odometry; the third term is the IMU pre-integration estimation residual; ρ(·) denotes a robust kernel function, and Σ^{-1} denotes the information matrix corresponding to each error term, which is the inverse of its covariance and represents a preliminary estimate of the accuracy of each constraint; i and j refer to the i-th and j-th image frames respectively, and k refers to the k-th semantic landmark.
For the semantic landmark observation error:
its measurement is the position observation of the semantic landmark in the camera coordinate system, and the variables to be optimized are the camera pose in the i-th frame and the position of the landmark L_k in the world frame; the Jacobian matrices of the error e with respect to the camera pose and the landmark position are then formed;
in these Jacobians, ξ is the Lie algebra representation of the pose, [X'_i, Y'_i, Z'_i]^T are the semantic plane coordinates in the camera frame at time i, f_x and f_y are the focal lengths in the camera intrinsics, and the rotation component of the camera pose also appears.
For the VO pose observation residual:
its measurement is the VO relative pose observation between the i-th frame and the j-th frame, and the poses of the i-th and j-th frames are the variables to be optimized. Using the Lie algebra right perturbation model and its adjoint property, the Jacobian matrices of the error e with respect to the two poses are solved respectively.
for inertial pre-integration residuals, they are derived from the difference between the predicted value and the pre-integration value between two neighboring image frames:
In the residual, the following quantities appear: the rotation matrix from the navigation coordinate system to the body coordinate system at time j; the positions of the body coordinate system in the navigation coordinate system at times i and j; the velocities of the body coordinate system in the navigation coordinate system at times i and j; the time interval Δt_i between the two adjacent image frames; the rotation quaternions of the body coordinate system in the navigation coordinate system at times i and j; the accelerometer random walks in the body coordinate system at times i and j; and the gyroscope random walks in the body coordinate system at times i and j; [γ]_xyz denotes taking the x, y, z components of the quaternion γ.
(c) The optimization function is solved iteratively with the Levenberg-Marquardt algorithm; the iteration stops when the error converges or the set maximum number of iterations is reached, yielding the carrier navigation information.
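As an illustration of the nonlinear solve, the sketch below only shows the general pattern of stacking whitened residual blocks and calling a Levenberg-Marquardt solver; the residual helpers, the state layout and the observation structure are made-up placeholders, not the patent's implementation.

import numpy as np
from scipy.optimize import least_squares

def whiten(residual, info):
    """Weight a residual block by the square root of its information matrix (info = covariance inverse)."""
    return np.linalg.cholesky(info).T @ residual

def total_residual(x, observations):
    """Stack all whitened residual blocks; each observation supplies its own residual function."""
    blocks = [whiten(obs["residual_fn"](x), obs["information"]) for obs in observations]
    return np.concatenate(blocks)

# x0: stacked poses, velocities, biases and semantic landmark coordinates (assumed layout)
# result = least_squares(total_residual, x0, args=(observations,), method="lm", max_nfev=50)
# Note: a robust kernel (e.g. loss="huber") requires method="trf" in scipy, not method="lm".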
Step 6: outputting carrier navigation information and camera internal parameters, and returning to the step 1.
When the camera moves with strong maneuvers, motion estimation relying only on visual features yields low navigation and positioning accuracy. The invention fuses semantic features and inertial information into the optimization objective function of the visual odometry, adding high-dimensional feature observations and additional sensor observation constraints; the relative motion over a short time is estimated and the pose is optimized, so that the pose estimation accuracy of the visual odometry under strong maneuvering conditions is improved, and the navigation positioning accuracy and robustness are improved.
The above embodiments are only illustrative of the preferred embodiments of the present invention and are not intended to limit the scope of the present invention, and various modifications and improvements made by those skilled in the art to the technical solutions of the present invention should fall within the protection scope defined by the claims of the present invention without departing from the design spirit of the present invention.

Claims (3)

1. The visual inertial integrated navigation method integrating semantic features is characterized by comprising the following steps of:
step 1, collecting RGBD visual sensor data S(k), accelerometer data and gyroscope data at time k;
step 2, computing the current camera pose T(k) from the visual sensor data S(k) by visual odometry;
step 3, constructing a semantic plane feature map from the visual sensor data between two adjacent image frames, and matching the semantic plane of the current frame with the semantic landmarks in the map to obtain the observation relation between the current key frame and the semantic landmarks;
step 4, inertial pre-integration between two adjacent image frames is carried out based on inertial sensor data, wherein the inertial sensor data comprises accelerometer data and gyroscope data;
step 5, taking the sum of the semantic plane observation residual, the visual odometry relative pose observation residual and the inertial pre-integration residual as a combined optimization function, and performing nonlinear optimization on this function to solve the pose;
step 6, outputting carrier navigation information and camera internal parameters, and returning to the step 1;
the inertial sensor data obtained at time k includes accelerometer data from time k-1 to time kAnd gyroscope dataFor constructing an inertial sensor measurement model:
wherein ,na and nω White noise of the accelerometer and the gyroscope respectively; and />The derivative of the random walk of the accelerometer and the gyroscope is white noise; />Ideal value for accelerometer measurement, +.>Ideal values for gyroscope measurements; g W Is the gravity under the navigation system; />A rotation matrix from a coordinate system to a machine body coordinate system is navigated for sampling time;
a plurality of inertial sensor data are arranged between two adjacent image frames, and all the inertial sensor data between the two adjacent image frames are pre-integrated in an iterative mode:
wherein ,for position pre-integration, +.>For speed pre-integration, rotation is represented by the quaternion γ,>pre-integrating for rotation; initially, the _on> and />0->As unit quaternion, R (gamma) represents conversion of quaternion into rotation matrix, +.>Multiplication representing a quaternion;
the frequency of the visual data is kept consistent with the inertia pre-integration frequency, and the inertia pre-integration between two adjacent image frames is obtained and />
the optimization function sums the semantic landmark observation residuals, the visual odometry relative pose observation residuals and the IMU pre-integration residuals;
the Semantic part represents the residual of the semantic landmark observations, i.e. the semantic landmark observation error observed in a certain camera frame; the VO part represents the observation residual of the relative pose in the visual odometry; the third term is the IMU pre-integration estimation residual; ρ(·) denotes a robust kernel function; Σ^{-1} is the information matrix corresponding to each error term, which is the inverse of its covariance and represents a preliminary estimate of the accuracy of each constraint; i and j refer to the i-th and j-th image frames respectively, and k refers to the k-th semantic landmark;
the semantic landmark observation error takes as its measurement the position observation of the semantic landmark in the camera coordinate system; the variables to be optimized are the camera pose in the i-th frame and the position of the landmark L_k in the world frame;
the Jacobian matrices of the error e with respect to the camera pose and the landmark position are computed;
the pose observation residual takes as its measurement the VO relative pose observation between the i-th frame and the j-th frame; the poses of the i-th and j-th frames are the variables to be optimized;
the Jacobian matrices of the error e with respect to the two poses are solved respectively;
the inertial pre-integration residual is obtained from the difference between the predicted value and the pre-integrated value between two adjacent image frames;
in the semantic landmark observation error, the Jacobian matrices of the error e with respect to the camera pose and the landmark position involve ξ, the Lie algebra representation of the pose; [X'_i, Y'_i, Z'_i]^T, the semantic plane coordinates in the camera frame at time i; f_x and f_y, the focal lengths in the camera intrinsics; and the rotation component of the camera pose;
the Jacobian matrices of the error e in the pose observation residual with respect to the two poses are obtained from the Lie algebra right perturbation model;
the inertial pre-integration residual involves: the rotation matrix from the navigation coordinate system to the body coordinate system at time j; the positions of the body coordinate system in the navigation coordinate system at times i and j; the velocities of the body coordinate system in the navigation coordinate system at times i and j; the time interval Δt_i between two adjacent image frames; the rotation quaternions of the body coordinate system in the navigation coordinate system at times i and j; the accelerometer random walks in the body coordinate system at times i and j; and the gyroscope random walks in the body coordinate system at times i and j; [γ]_xyz denotes taking the x, y, z components of the quaternion γ.
2. The visual inertial integrated navigation method of claim 1, wherein,
the current pose of the camera is specifically obtained by: and firstly, extracting ORB features from two adjacent frames, then, calculating the relative pose between two key frames by utilizing the ORB feature matching relation between the two frames and pnp, and obtaining the pose T (k) of the camera at the moment k through accumulation of the relative poses.
3. The visual inertial integrated navigation method of claim 2, wherein,
plane information is acquired from the visual sensor data between two adjacent image frames, and semantic plane features are constructed from the semantic category, centroid, normal and horizontal/vertical type of each plane:
· s_p = {p_x, p_y, p_z}
· s_n = {n_x, n_y, n_z, n_d}
· s_o ∈ {horizontal, vertical}
· s_c = corresponding detected semantic object class
where s_p is the plane centroid, s_n is the plane normal parameter, s_o is the plane type label (horizontal/vertical), and s_c is the plane semantic category label, which depends on the semantic object class corresponding to the plane;
constructing a semantic planar feature map based on the constructed semantic planar features;
defining initial semantic landmarks, matching the semantic plane S_k detected in each frame against the category, normal direction and centroid of the initial semantic landmarks, and obtaining the observation relation between the current key frame and the semantic landmarks.
CN202110467584.3A 2021-04-28 2021-04-28 Visual inertial integrated navigation method integrating semantic features Active CN113188557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110467584.3A CN113188557B (en) 2021-04-28 2021-04-28 Visual inertial integrated navigation method integrating semantic features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110467584.3A CN113188557B (en) 2021-04-28 2021-04-28 Visual inertial integrated navigation method integrating semantic features

Publications (2)

Publication Number Publication Date
CN113188557A CN113188557A (en) 2021-07-30
CN113188557B true CN113188557B (en) 2023-10-20

Family

ID=76979977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110467584.3A Active CN113188557B (en) 2021-04-28 2021-04-28 Visual inertial integrated navigation method integrating semantic features

Country Status (1)

Country Link
CN (1) CN113188557B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113465598B (en) * 2021-08-04 2024-02-09 北京云恒科技研究院有限公司 Inertial integrated navigation system suitable for unmanned aerial vehicle
CN114152937B (en) * 2022-02-09 2022-05-17 西南科技大学 External parameter calibration method for rotary laser radar
CN115493612A (en) * 2022-10-12 2022-12-20 中国第一汽车股份有限公司 Vehicle positioning method and device based on visual SLAM


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10366508B1 (en) * 2016-08-29 2019-07-30 Perceptin Shenzhen Limited Visual-inertial positional awareness for autonomous and non-autonomous device
CN109945858A (en) * 2019-03-20 2019-06-28 浙江零跑科技有限公司 It parks the multi-sensor fusion localization method of Driving Scene for low speed
CN110794828A (en) * 2019-10-08 2020-02-14 福瑞泰克智能系统有限公司 Road sign positioning method fusing semantic information
CN111156997A (en) * 2020-03-02 2020-05-15 南京航空航天大学 Vision/inertia combined navigation method based on camera internal parameter online calibration
CN111325842A (en) * 2020-03-04 2020-06-23 Oppo广东移动通信有限公司 Map construction method, repositioning method and device, storage medium and electronic equipment
CN111693047A (en) * 2020-05-08 2020-09-22 中国航空工业集团公司西安航空计算技术研究所 Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
CN112348921A (en) * 2020-11-05 2021-02-09 上海汽车集团股份有限公司 Mapping method and system based on visual semantic point cloud
CN112484725A (en) * 2020-11-23 2021-03-12 吉林大学 Intelligent automobile high-precision positioning and space-time situation safety method based on multi-sensor fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Localization Based on Semantic Map and Visual Inertial Odometry; Jie Jin; 2018 24th International Conference on Pattern Recognition (ICPR); full text *
Research on Dense Visual Simultaneous Localization and Mapping in Dynamic Environments; Zhou Wugen; China Doctoral Dissertations Full-text Database (Information Science and Technology); full text *
Research on Multi-sensor Fusion Point Cloud Map Construction and Update Methods for Autonomous Driving; Xie Shichao; China Master's Theses Full-text Database (Engineering Science and Technology II); full text *

Also Published As

Publication number Publication date
CN113188557A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
CN113188557B (en) Visual inertial integrated navigation method integrating semantic features
CN112734852B (en) Robot mapping method and device and computing equipment
CN112083725B (en) Structure-shared multi-sensor fusion positioning system for automatic driving vehicle
CN112634451A (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
CN113781582A (en) Synchronous positioning and map creating method based on laser radar and inertial navigation combined calibration
CN111337943B (en) Mobile robot positioning method based on visual guidance laser repositioning
CN114018236B (en) Laser vision strong coupling SLAM method based on self-adaptive factor graph
CN112815939B (en) Pose estimation method of mobile robot and computer readable storage medium
CN103644904A (en) Visual navigation method based on SIFT (scale invariant feature transform) algorithm
CN114323033B (en) Positioning method and equipment based on lane lines and feature points and automatic driving vehicle
CN110412596A (en) A kind of robot localization method based on image information and laser point cloud
CN113503873B (en) Visual positioning method for multi-sensor fusion
CN114529576A (en) RGBD and IMU hybrid tracking registration method based on sliding window optimization
CN113763549A (en) Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU
Qimin et al. A methodology of vehicle speed estimation based on optical flow
CN113740864A (en) Self-pose estimation method for soft landing tail segment of detector based on laser three-dimensional point cloud
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
Panahandeh et al. Vision-aided inertial navigation using planar terrain features
Ye et al. Robust and efficient vehicles motion estimation with low-cost multi-camera and odometer-gyroscope
CN115560744A (en) Robot, multi-sensor-based three-dimensional mapping method and storage medium
CN113554705B (en) Laser radar robust positioning method under changing scene
CN110749327B (en) Vehicle navigation method in cooperative environment
Hu et al. Efficient Visual-Inertial navigation with point-plane map
CN113566827A (en) Transformer substation inspection robot indoor positioning method based on information fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant