CN110412635B - GNSS/SINS/visual tight combination method under environment beacon support - Google Patents

GNSS/SINS/visual tight combination method under environment beacon support

Info

Publication number
CN110412635B
CN110412635B
Authority
CN
China
Prior art keywords
feature
gnss
points
sins
observation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910659046.7A
Other languages
Chinese (zh)
Other versions
CN110412635A (en)
Inventor
张小红
沈燕芬
朱锋
王雅仪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910659046.7A priority Critical patent/CN110412635B/en
Publication of CN110412635A publication Critical patent/CN110412635A/en
Application granted granted Critical
Publication of CN110412635B publication Critical patent/CN110412635B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Automation & Control Theory (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a GNSS/SINS/visual tight combination method supported by environment beacons. The method uses a vision sensor and adopts, as a prior environment-beacon database, a visual feature point cloud map containing feature descriptors that is generated offline in a vision + crowdsourcing mode. When the carrier performs real-time navigation and positioning, a differential GNSS/SINS tight combination is taken as the basis; when GNSS signals are severely blocked, images are captured by a vision sensor rigidly fixed to the carrier, dynamic interference is removed and features are extracted from the images, and the extracted features are matched against the environment beacons in the visual feature point cloud map. When one or more landmark points are matched, the rear intersection (space resection) equations formed by the matched pixel points and beacon points are added to the tight combination as observation information to provide positioning assistance for the SINS. The invention can provide continuous, high-precision positioning results even when GNSS signals are completely interrupted in an extremely complex urban environment.

Description

GNSS/SINS/visual tight combination method under environment beacon support
Technical Field
The invention relates to the field of integrated navigation, in particular to a GNSS/SINS/visual tight combination method supported by an environmental beacon.
Background
Through decades of research and development, the basic theory and methods of GNSS (Global Navigation Satellite System)/SINS (strapdown inertial navigation system) integrated navigation have matured, and stable centimetre-level positioning can be achieved in open observation environments. The integrated hardware/software GNSS/SINS solutions offered by NovAtel (Canada) are widely adopted because of their superior performance. In recent years the autonomous-driving industry has also developed rapidly, and accurate positioning, as a basic module of autonomous driving, has attracted wide attention in the industry; autonomous-driving systems such as those of Tesla, General Motors and Nissan (ProPILOT) all adopt integrated navigation and positioning systems with GNSS/inertial sensors at their core. However, in complex environments, especially typical urban environments with dense high-rise buildings, elevated roads and tunnels, GNSS observation quality deteriorates sharply under signal blockage and interference, and the performance of the GNSS/SINS integrated system degrades markedly. With the construction and development of the US GPS, Russian GLONASS, European Galileo and Chinese BeiDou, GNSS is evolving towards multiple frequencies and multiple systems, and multi-frequency, multi-constellation GNSS/SINS integration improves the ambiguity fixing rate in complex environments to a certain extent; the high-precision short-term pose recursion capability of inertial navigation has been applied effectively to GNSS data preprocessing; and many types of robust adaptive filtering methods have been proposed in the academic community in an attempt to solve the problem of poor-quality state and observation information. These methods repair occasional short GNSS interruptions (around 30 s) well, but in a typical urban environment, once GNSS is interrupted for a long time, the SINS mechanization errors diverge exponentially. This remains a difficult problem for GNSS/SINS integration and a major technical bottleneck in real-time vehicle positioning.
A vision sensor estimates the carrier pose by sensing, identifying and tracking feature information in the surrounding environment. Vision sensors benefit from mature hardware, small size and low price, are commonly integrated in intelligent terminals, and support computer-vision capabilities such as semantic segmentation, scene understanding and large-scale mapping, so they have broad application prospects and research value. The Autopilot system represented by Tesla adopts a vision-dominated multi-sensor fusion scheme. Depending on whether prior environmental map information is available, visual positioning can be divided into relative positioning and absolute positioning. Relative positioning is comparatively mature at present; representative algorithms include the purely visual ORB-SLAM2, LSD-SLAM, DSO and SVO, and the visual/inertial MSCKF, VINS-Mono and OKVIS. Relative positioning already has many mature applications on mobile mapping equipment, unmanned aerial vehicles, sweeping robots and the like, with positioning accuracy of about 1% of the distance travelled. Without loop-closure correction, however, pose errors accumulate gradually, and the actual carrier driving environment often contains no loop closures. Map vendors such as AMap (Gaode), NavInfo, TomTom and HERE are therefore actively deploying high-precision maps to enable absolute positioning by visual matching with the support of a prior map. At present there is no unified standard for high-precision maps, so no mature production-ready product exists.
GNSS, SINS and vision sensors are strongly complementary. GNSS establishes a unique and unified spatio-temporal reference on a global scale and can provide global, all-weather, high-precision absolute positioning. SINS is independent and autonomous, unaffected by the environment, and has a high sampling rate and stable, continuous output, so it can serve as the main filter for multi-source fusion. A vision sensor can perform dead reckoning and, with the support of a prior map, achieve absolute positioning; its performance lies between GNSS and SINS, making it an effective supplement to a GNSS/SINS integrated system. GNSS/SINS/vision fusion is therefore an important direction for future multi-source integration.
In view of the above problems, the invention provides a GNSS/SINS/visual tight combination method supported by environment beacons. The whole positioning system is based on a differential GNSS/SINS tight combination; with the aid of a feature point cloud map, the acquired images undergo dynamic-interference removal, feature extraction and environment-beacon matching, and an observation constraint equation is constructed from the collinearity relation of each matched pixel/beacon point pair and added to the tight-combination solution, so as to achieve continuous, high-precision positioning.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a GNSS/SINS/visual tight combination method supported by an environmental beacon aiming at the defects in the prior art.
The technical scheme adopted for solving the technical problems is as follows:
the invention provides a GNSS/SINS/visual tight combination method supported by environment beacons, wherein in the method, images are acquired by a vision sensor, and a visual feature point cloud map containing feature descriptors is generated offline as a prior environment-beacon database; when the carrier performs real-time navigation and positioning, a differential GNSS/SINS tight combination is taken as the basis; when GNSS signals are blocked, images are captured by a vision sensor mounted on the carrier, dynamic-interference removal and feature extraction are performed on the images, and the extracted features are matched with the environment beacons in the visual feature point cloud map; when one or more landmark points are matched, the rear intersection (space resection) equations formed by the matched pixel points and beacon points are added to the tight combination as observation information to provide positioning assistance for the SINS.
Further, the method of the invention specifically comprises the following steps:
step 1, preprocessing an image acquired in real time to obtain an image after distortion correction, constructing a convolutional neural network by deep learning to perform semantic segmentation, and marking a pixel block corresponding to a dynamic object;
step 2, dividing the image into grids, extracting feature points in each grid with a GPU-accelerated SIFT feature operator, judging whether each feature point falls in a dynamic-object marked area and deleting it if so, sorting and screening the remaining feature points by response intensity, and retaining the two feature points with the highest response in each grid;
step 3, storing the feature point cloud map with the aid of a feature ten-way tree to accelerate feature matching, obtaining preliminary matches from the Euclidean distances of the feature descriptors, and performing outlier rejection with the RANSAC and EPnP algorithms when more than 4 environment beacon points are matched;
step 4, constructing a rear intersection equation for the matched pixel points and the matched beacon points, converting the equation into a relation between the inertial navigation pose error and the pixel observation value according to the external parameter relation between the inertial navigation and the camera, and adding the relation as the observation equation into a GNSS/INS tight combination for auxiliary positioning;
step 5, when the satellite observation quantity is larger than a certain threshold value, shielding the observation value information fed back by vision, and directly adopting GNSS/SINS tight combination positioning; otherwise, if the satellite observation quantity is smaller than the threshold value, GNSS/SINS/vision joint participation solution is adopted after the satellite observation quantity is screened; if no visible satellite exists or no satellite remains after screening, only visual observation and SINS are adopted for tightly combined positioning.
Further, the specific method of the step 1 of the invention is as follows:
before image acquisition, the camera intrinsics are calibrated with Zhang Zhengyou's camera calibration method using the MATLAB calibration toolbox to obtain the focal lengths, principal point offsets, radial distortion parameters and tangential distortion parameters of the camera; after an image is acquired in real time, distortion correction is carried out with the calibrated intrinsic and distortion parameters;
the corrected image is first convolved by a CNN to obtain the feature map of the last convolution layer; a pyramid pooling module is adopted to obtain feature extraction at 4 different scales; a 1×1 convolution is applied to each pyramid level to reduce dimensionality while maintaining the global feature weight; bilinear interpolation is used for up-sampling to form a feature representation containing both local and global information; finally the representation is fed back to the convolution layer to realize pixel-level semantic segmentation; after the semantic segmentation result of the image is obtained, the pixel blocks belonging to dynamic objects are marked.
Further, the specific method for extracting the characteristic points in the step 2 comprises the following steps:
step 2.1, SIFT feature extraction: extracting feature points by adopting a SIFT operator, and accelerating SIFT feature extraction by adopting a GPU;
step 2.2, eliminating the feature points of the dynamic region: sequentially marking and judging the feature points extracted by SIFT, and deleting the feature points if the feature points fall in the dynamic marking area;
step 2.3, feature point selection: sorting the cleaned feature points from large to small according to the feature response degree, and taking two feature points with the largest response degree and the second largest response degree as final candidate feature points; if the number of the feature points in the grid is less than two, selecting all the feature points as final candidate feature points; after the candidate feature points of each grid are screened, the feature points of each grid are summarized.
Further, the method for performing feature matching in the step 3 of the present invention specifically includes:
step 3.1, constructing a ten-way tree feature index: constructing a ten-way tree feature index for all data of the visual feature point cloud map, establishing a ten-way tree of an L layer, and searching for leaf nodes at most L times;
step 3.2, searching the nearest-neighbour beacon point: comparing the feature descriptor of each extracted feature point with the feature vectors of all nodes of the first layer of the ten-way-tree feature index, and selecting the node with the shortest Euclidean distance to continue searching downwards until layer L; computing the Euclidean distances to all feature descriptors of the leaf node at layer L to find the shortest distance d_min and the second-shortest distance d_smin; if d_min is smaller than the set minimum distance threshold and the ratio test d_min/d_smin is smaller than the set ratio threshold, adding the feature point and the beacon point to the candidate matching point pairs;
step 3.3, eliminating outliers of the RANSAC algorithm and the EPnP algorithm: if the candidate matching point pairs are not less than 8 pairs, removing outliers by adopting a RANSAC+EPnP algorithm, randomly selecting four groups of matching point pairs, calculating a back projection error by using the EPnP to calculate the pose of the camera, marking the matching point pairs with the back projection error less than 2 pixels as interior points, and recording the subset of the interior points; and taking out the most number of the point subsets in the K times of operation cycle as the optimal subset, taking the camera pose corresponding to the optimal subset as the final estimated camera pose, if the geometric distance difference between the estimated camera position and the inertial navigation forecast position is smaller than 10m, considering that the RANSAC+EPnP algorithm is successful, judging the matching point pairs outside the optimal subset as outer points, and eliminating the outer points.
Further, the specific method of the step 4 of the invention is as follows:
step 4.1, construction of the rear intersection equation: knowing the positions of the environment beacon points and the pixel coordinates of the matched pixel points, the position and attitude of the camera optical centre are to be solved; the camera position and attitude are expressed in error form, the partial derivatives of the observations with respect to the states are obtained by the chain rule, and a rear intersection observation equation can be constructed for each pair of matching points as follows:
where [u v] are the pixel coordinates, [x y z] are the coordinates of the beacon point in the camera coordinate system, P is the coordinate of the beacon point in the ECEF frame, the states to be estimated are the coordinates P_c of the camera optical centre in the ECEF frame and the rotation matrix from the ECEF frame to the camera coordinate system, whose error is expressed as a misalignment angle, δP_c is the position error, and ω_{u,v} is the noise of the pixel observations;
step 4.2, vision/inertial navigation fusion observation equation: according to the external reference relation between the inertial navigation and the camera, converting the relation between the pixel observation value and the camera pose error into the relation between the pixel observation value and the inertial navigation pose error, and obtaining an observation equation as follows:
where u_1, …, u_n are the pixel observations, the corresponding predicted values are the pixel values obtained by back-projecting the beacon points, ε_u is the pixel error, δr^e, δv^e, φ, a^b, ε^b and δN denote the inertial navigation position, velocity and attitude errors, the accelerometer zero bias, the gyro zero bias and the double-difference ambiguity parameters respectively, and the design matrix contains the partial derivatives of the pixel observations with respect to the camera pose errors.
Further, the specific method of the step 5 of the invention is as follows:
when the number of GNSS satellites is larger than the threshold and the PDOP value is smaller than the threshold, the GNSS positioning geometry meets the requirement; the visual observation information is then masked and only the GNSS/SINS tight combination is used, with the observation update equation:
where the station–satellite geometric double-difference distance is calculated from the SINS-recursed position, the station–satellite double-difference observations are formed from the pseudo-ranges and carrier phases output by the reference-station and rover receivers respectively, the GNSS lever-arm correction is expressed in the ECEF frame, the direction cosines are those of the double-difference line of sight to each satellite, and ε_P and ε_L are the pseudo-range and carrier-phase observation noise;
when the number of GNSS satellites is smaller than the threshold value and the visual sensor is successfully matched with a plurality of beacon points, adding a visual pixel constraint observation value to obtain an observation equation:
when the GNSS satellites are completely blocked but vision can still successfully extract and match beacon points, only the visual pixel-constraint observations are used, and the observation equation is the vision/inertial navigation fusion observation equation obtained in step 4.2.
The invention has the following beneficial effects. 1) Only a vision sensor is added; most vehicle carriers are already equipped with vision sensors, and the sensor cost is low, so the method is practical to deploy. 2) The invention makes full use of the complementary characteristics of GNSS, SINS and visual positioning: environments in which GNSS is blocked are usually rich in visual features, and absolute positioning by visual rear intersection can greatly suppress the error divergence of inertial navigation in all directions. 3) The position error drift of inertial navigation can be suppressed by matching even a single landmark point, so decimetre- or even centimetre-level positioning results can be provided continuously and stably during long GNSS outages. 4) Depending on the characteristics of the carrier's driving environment, whether to add the auxiliary constraint of visual information can be selected dynamically, making the positioning means flexible.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a general flowchart of the GNSS/SINS/visual tight combination positioning algorithm according to an embodiment of the present invention;
FIG. 2 is a flow chart of the SINS mechanization in the ECEF frame according to an embodiment of the present invention;
FIG. 3 is a flow chart of a visual constraint observation equation acquisition in accordance with an example of the present invention;
FIG. 4 is a schematic diagram illustrating image de-dynamic interference according to an embodiment of the present invention;
FIG. 5 is a flow chart of image feature extraction according to an embodiment of the invention;
FIG. 6 is a flow chart of image feature matching according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Since the GNSS/SINS combined system is nonlinear, the overall filtering framework of the invention adopts an extended Kalman filter; the general flow chart is shown in FIG. 1. The navigation coordinate system is the ECEF frame, and the corresponding SINS mechanization is also performed in the ECEF frame, as shown in FIG. 2. The invention uses inter-station, inter-satellite double differences to eliminate the receiver clock error, so the GNSS-related error state of the GNSS/SINS tight combination model is the double-difference ambiguity; in addition, the tight-combination error states include the navigation states (position, velocity and attitude errors) and the IMU-related error states (gyro zero bias and accelerometer zero bias). The filter uses closed-loop correction to feed back and correct the SINS system errors. The tight-combination filtering equations are presented as follows:
since the ambiguity parameter δN does not change over time (assuming no cycle slip occurs), its corresponding error state equation is:
according to the inertial navigation error model under ECEF system and the error model of IMU sensor, the continuous error state equation of GNSS/SINS tight combination can be obtained as follows:
in the above-mentioned method, the step of,and->Representing inertial navigation position, velocity and attitude error vector, δb, respectively g And δb a Zero offset error vectors of the gyroscope and the accelerometer are respectively. />For the rotation matrix of b-series relative to e-series, < >>Is the error term f of the gyroscope instrument b For the output value of the accelerometer, δf b For accelerometer instrument error term, δg e Stress vector error, < >>Is the rotational angular velocity of the earth. />Respectively, the correlation times, omega of the corresponding first-order Gaussian-Markov processes a And omega g Representing the driving white noise of the gyroscope and accelerometer, respectively. The error state vector of the GNSS/SINS tight combination is:
to obtain a discrete time system error state equation, the continuous error state equation may be discretized as follows:
δx_{k+1} = Φ_{k+1,k} δx_k + ω_k    (4)
where Φ_{k+1,k} is the state transition matrix and ω_k is the driving white noise. One-step state prediction can be performed according to the following equation:
Here X_{k,k-1} is the predicted state, Q is the process noise, Δt_k is the time interval between two epochs, and P_{k,k-1} is the predicted variance. The measurement equation of the tight-combination filter update can be expressed as follows:
δz = H δX + η    (6)
in equation (6), δz is an observation residual, H is a design matrix, and η is observation noise. When the GNSS observation value exists, after data preprocessing, the GNSS observation value is used as filtered observation information to carry out observation updating:
in the formula (7), K k As gain coefficient, R k To observe noise, X k And P k Is the filtering state and its variance.
The above tightly combined Kalman filtering model is the basis of the algorithm of the present invention, and the key technology of each module of the present invention and its implementation method will be described in detail below in conjunction with the technical route shown in fig. 1.
1. Image dynamic interference removal
The visual feature point cloud map adopted in the invention is an environment-beacon database containing only static objects such as buildings, public facilities and traffic signs; unreliable information such as pedestrians, vegetation and vehicles is removed. Only in this way can it be ensured that the environment beacon points do not change significantly with time, weather and so on. Accordingly, when the images are processed it must also be ensured that the extracted feature points do not lie on dynamic objects such as vehicles, pedestrians, or vegetation that changes easily with the seasons.
Before image acquisition, the camera intrinsics must be calibrated with Zhang Zhengyou's camera calibration method using the MATLAB calibration toolbox, obtaining the focal lengths f_x and f_y, the principal point offsets c_x and c_y, and the radial and tangential distortion parameters of the camera.
As shown in FIG. 4, after an image is acquired in real time, distortion correction is first performed with the calibrated intrinsic and distortion parameters. The corrected image is convolved by a CNN to obtain the feature map of the last convolution layer; a pyramid pooling module is adopted to obtain feature extraction at 4 different scales; a 1×1 convolution is applied to each pyramid level to reduce dimensionality while maintaining the global feature weight; bilinear interpolation is used for up-sampling to form a feature representation containing both local and global information; finally the representation is fed back to the convolution layer to realize pixel-level semantic segmentation. After the semantic segmentation result of the image is obtained, the pixel blocks belonging to dynamic objects (such as vehicles, pedestrians and vegetation) are marked, and subsequent extraction and matching of environment features ignore these pixels, thus effectively avoiding dynamic environmental interference.
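As a hedged illustration of this preprocessing step (not the patent's own code), the sketch below undistorts a frame with previously calibrated intrinsics and applies a static-pixel mask; segment_dynamic() is a placeholder for the pyramid-pooling segmentation network described above, and the intrinsic and distortion values are made-up examples:

```python
import cv2
import numpy as np

# calibrated intrinsics and distortion (placeholder values from a Zhang-style calibration)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.001, 0.001, 0.0])    # k1, k2, p1, p2, k3

def segment_dynamic(img):
    """Placeholder for the pixel-level semantic segmentation network.
    Returns a uint8 mask: 255 on static pixels, 0 on dynamic objects
    (vehicles, pedestrians, vegetation)."""
    return np.full(img.shape[:2], 255, dtype=np.uint8)

def preprocess(frame):
    undistorted = cv2.undistort(frame, K, dist)      # remove lens distortion
    static_mask = segment_dynamic(undistorted)       # mark dynamic pixel blocks as 0
    return undistorted, static_mask

frame = np.zeros((720, 1280, 3), dtype=np.uint8)     # dummy frame for illustration
image, mask = preprocess(frame)
```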
2. Image feature extraction
The geometric distribution of the feature points directly affects the accuracy of the image rear intersection: the better the geometric distribution, the higher the accuracy. Therefore, to ensure an even distribution of the image feature points, the corrected image is divided into grids and feature extraction is performed on each grid.
As shown in FIG. 5, the overall feature extraction flow is as follows: the corrected image is first divided into M×N grids, where the values of M and N depend on the size of the image itself and the size of each grid is controlled to about 10×10 pixels. After the grids are divided, the following operations are performed on each grid in parallel:
1) SIFT feature extraction
The invention adopts the SIFT operator to extract feature points. Since the SIFT (Scale Invariant Feature Transform) operator is invariant to rotation, scale and brightness changes and remains robust to a certain extent under viewing-angle changes, radiometric changes and noise, it enables more reliable feature matching in the subsequent visual positioning. To speed up processing, a GPU is used to accelerate SIFT feature extraction.
2) Dynamic region feature point culling
In the step 1, the image is subjected to semantic segmentation by adopting a deep learning method, and dynamic objects such as pedestrians, vehicles, vegetation and the like are identified and marked. And sequentially carrying out marking judgment on the feature points extracted by the SIFT, and deleting the feature points if the feature points fall in the dynamic marking area.
3) Feature point selection
And sorting the cleaned feature points from large to small according to the feature response degree, and taking the two feature points with the largest response degree and the second largest response degree as final candidate feature points. If the number of feature points in the grid is less than two, all the feature points are selected as final candidate feature points. After the candidate feature points of each grid are screened, the feature points of each grid are summarized.
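A minimal sketch of this grid-wise extraction and selection is shown below, assuming OpenCV's CPU SIFT (the GPU acceleration used in the patent is omitted) and the static-pixel mask produced in the previous step; the 10-pixel grid and top-2 selection follow the description above:

```python
import cv2
import numpy as np

def extract_grid_features(gray, static_mask, grid=10, per_cell=2):
    """Detect SIFT keypoints outside the dynamic mask and keep the strongest
    `per_cell` responses in each grid cell (about grid x grid pixels)."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, static_mask)         # mask already excludes dynamic areas
    cells = {}
    for kp in keypoints:
        u, v = kp.pt
        cells.setdefault((int(v) // grid, int(u) // grid), []).append(kp)
    selected = []
    for cell_kps in cells.values():
        cell_kps.sort(key=lambda k: k.response, reverse=True)
        selected.extend(cell_kps[:per_cell])            # top-2 responses per cell
    return sift.compute(gray, selected)                 # descriptors only for retained points

gray = np.random.default_rng(1).integers(0, 256, (720, 1280), dtype=np.uint8)
mask = np.full_like(gray, 255)                          # all pixels treated as static here
kps, desc = extract_grid_features(gray, mask)
```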
3. Image feature matching
Visual feature point cloud maps typically contain millions or even tens of millions of feature points and their descriptors; it would be far too time-consuming to compare every feature point against tens of millions of candidates during matching. The invention therefore uses a data structure based on clustering of the point cloud feature descriptors — bag-of-words assisted matching — which is in essence a ten-way tree clustered according to the feature descriptors. Each node of the ten-way tree is a 128-dimensional SIFT feature descriptor obtained by the K-means clustering algorithm. With the assistance of the ten-way tree, matching can be greatly accelerated. The key steps of feature matching are as follows:
1) Construction of ten-way tree feature index
A ten-way-tree feature index is constructed for all data of the visual feature point cloud map. The node of layer 0 has as its feature vector the mean of all point cloud feature vectors in the map and can be ignored. From layer 1 onwards, the K-means clustering algorithm divides the descriptors into 10 classes according to the Euclidean distance of the feature vectors; the centre (feature mean) of each class is the feature vector of the corresponding node, and each of the 10 classes is again divided into 10 classes by the K-means clustering algorithm, and so on until a node can no longer be subdivided. In total, a ten-way tree of L layers is built (ignoring layer 0), and a leaf node can be reached in at most L comparisons.
2) Nearest neighbor beacon point search
The feature descriptor of each extracted feature point is compared with the feature vectors of all nodes in the first layer of the ten-way-tree feature index, and the node with the shortest Euclidean distance is selected to continue searching downwards until layer L. Euclidean distances are then computed against all feature descriptors of the leaf node at layer L to find the shortest distance d_min and the second-shortest distance d_smin. If d_min is smaller than the set minimum distance threshold and the ratio test d_min/d_smin is smaller than the set ratio threshold, the feature point and the beacon point are added to the candidate matching point pairs.
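The following sketch, assuming scikit-learn's KMeans for the offline clustering, illustrates the descent through a ten-way tree and the distance/ratio tests; the thresholds d_max and ratio, and the storage of raw descriptors at the leaves, are illustrative assumptions rather than the patent's exact implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

class TenWayTree:
    """Ten-way (branching-factor-10) K-means tree over 128-D SIFT descriptors."""
    def __init__(self, descriptors, depth):
        self.descriptors = descriptors        # beacon descriptors reaching this node
        self.children = None
        if depth > 0 and len(descriptors) > 10:
            km = KMeans(n_clusters=10, n_init=4).fit(descriptors)
            self.centers = km.cluster_centers_
            self.children = [TenWayTree(descriptors[km.labels_ == i], depth - 1)
                             for i in range(10)]

    def leaf_descriptors(self, query):
        node = self
        while node.children is not None:      # at most L comparisons against 10 centres
            i = int(np.argmin(np.linalg.norm(node.centers - query, axis=1)))
            node = node.children[i]
        return node.descriptors

def match_beacon(query, tree, d_max=300.0, ratio=0.7):
    """Nearest-neighbour search in the reached leaf plus distance and ratio tests."""
    leaf = tree.leaf_descriptors(query)
    if len(leaf) < 2:
        return None
    d = np.sort(np.linalg.norm(leaf - query, axis=1))
    d_min, d_smin = d[0], d[1]
    if d_min < d_max and d_min / d_smin < ratio:
        return d_min                          # accepted match (index bookkeeping omitted)
    return None

# toy usage with random descriptors standing in for the beacon database
rng = np.random.default_rng(0)
db = rng.random((2000, 128)).astype(np.float32)
tree = TenWayTree(db, depth=3)                # an L = 3 layer tree
result = match_beacon(db[42] + 0.01, tree)
```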
3) RANSAC+EPnP outlier rejection
Given n three-dimensional point coordinates and the corresponding pixel coordinates, the camera pose can be obtained when the camera intrinsics are known; this is generally called the PnP (Perspective-n-Point) problem, and EPnP is currently one of the most effective PnP solutions. The RANSAC algorithm can extract valid sample data from a sample set contaminated by outliers. Combining the two can effectively reject the remaining mismatched point pairs.
Because the RANSAC algorithm requires a minimum number of samples, this step is skipped if there are fewer than 8 candidate matching point pairs; otherwise the RANSAC+EPnP algorithm is used to remove outliers. The specific operation, shown in FIG. 6, is as follows: four matching point pairs are randomly selected, the camera pose is computed with EPnP, the back-projection error is computed from that pose, the matching point pairs whose back-projection error is smaller than 2 pixels are marked as inliers, and the inlier subset is recorded. After K iterations, the largest inlier subset is taken as the optimal subset and the camera pose corresponding to it as the final estimated camera pose. If the geometric distance between the estimated camera position and the inertial navigation forecast position is smaller than 10 m, RANSAC+EPnP is considered successful, and the matching point pairs outside the optimal subset are judged to be outliers and eliminated.
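OpenCV bundles RANSAC with EPnP in a single call, which can serve as a hedged stand-in for the loop described above (2-pixel back-projection threshold, K iterations) together with the 10 m plausibility check against the inertial forecast; the function below assumes beacon coordinates expressed in a frame consistent with the forecast position, and the camera matrix/distortion arguments are placeholders:

```python
import cv2
import numpy as np

def ransac_epnp_check(beacons_xyz, pixels_uv, cam_matrix, dist_coeffs, sins_pos, k_iter=100):
    """RANSAC + EPnP: estimate the camera pose from candidate matches and accept it
    only if it lies within 10 m of the SINS-predicted position."""
    if len(beacons_xyz) < 8:
        return None                                  # too few candidate pairs for RANSAC
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(beacons_xyz, dtype=np.float64),
        np.asarray(pixels_uv, dtype=np.float64),
        cam_matrix, dist_coeffs,
        iterationsCount=k_iter,
        reprojectionError=2.0,                       # 2-pixel back-projection threshold
        flags=cv2.SOLVEPNP_EPNP)
    if not ok or inliers is None:
        return None
    rot, _ = cv2.Rodrigues(rvec)                     # world-to-camera rotation
    cam_pos = -rot.T @ tvec.reshape(3)               # camera optical centre in the beacon frame
    if np.linalg.norm(cam_pos - sins_pos) > 10.0:    # plausibility check vs. inertial forecast
        return None
    return rvec, tvec, inliers
```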
4. Vision/inertial fusion positioning under environmental beacon support
Through steps 1, 2 and 3, reliable matching pairs between image pixel points and environment beacon points have been obtained, and the position and attitude of the camera can in principle be computed by rear intersection. Because the visual collinearity equations are highly nonlinear, a reasonably accurate initial value is required for a least-squares solution. Without any external information, at least 4 pairs of valid matching points are needed, with the initial position provided by EPnP. With the assistance of the inertial navigation forecast, however, positioning assistance is possible with as little as 1 matching pair. The specific procedure is as follows:
1) Construction of the rear intersection equation
Knowing the position of the environmental beacon point and the pixel coordinates of the matched pixel points, the position and the posture of the optical center of the camera are calculated, and the basic equation is as follows:
where [u v] are the pixel coordinates, [x y z] are the coordinates of the beacon point in the camera coordinate system, P is the coordinate of the beacon point in the ECEF frame, and the states to be estimated are P_c, the coordinates of the camera optical centre in the ECEF frame, and the rotation matrix from the ECEF frame to the camera coordinate system. Since rotation matrices cannot be added or subtracted directly, the rotation matrix is expressed in terms of a misalignment angle in the form:
The camera position and attitude of equation (8) are then expressed in error form:
In the above, the misalignment angle is the camera attitude error and δP_c is the camera position error. Taking the partial derivatives of the observations with respect to the states by the chain rule gives the following formula:
for each pair of matching points, a rear intersection observation equation can be constructed, which is organized as follows:
where ω_{u,v} is the noise of the pixel observations.
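The chain rule used above ultimately needs the Jacobian of the pinhole projection with respect to the camera-frame point [x y z]. The following is a small numeric sketch under an assumed pinhole model (illustrative focal lengths and principal point, distortion ignored), not the patent's exact formulation:

```python
import numpy as np

FX, FY, CX, CY = 1000.0, 1000.0, 640.0, 360.0        # assumed pinhole intrinsics

def project(p_c):
    """Pinhole projection of a camera-frame point [x, y, z] to pixel [u, v]."""
    x, y, z = p_c
    return np.array([FX * x / z + CX, FY * y / z + CY])

def jacobian_uv_wrt_xyz(p_c):
    """d[u, v] / d[x, y, z]: the inner factor of the chain rule."""
    x, y, z = p_c
    return np.array([[FX / z, 0.0, -FX * x / z**2],
                     [0.0, FY / z, -FY * y / z**2]])

p_c = np.array([2.0, -1.0, 10.0])                     # beacon point expressed in the camera frame
u_v = project(p_c)                                    # predicted pixel
J = jacobian_uv_wrt_xyz(p_c)                          # chained with dP_c/d(pose) to build the design matrix
```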
2) Vision/inertial navigation fusion observation equation
Because the camera pose and the inertial navigation pose are related through the extrinsic parameters, adding vision to the GNSS/SINS tight combination introduces no extra state quantities; the filter states are still the navigation parameters (position, velocity and attitude) established by the inertial navigation error model, the SINS system errors and the GNSS-related parameters. The relation between the pixel observations and the camera pose errors expressed by equation (12) is converted into the relation between the pixel observations and the inertial navigation pose errors according to the extrinsic relation between the inertial navigation and the camera.
After the spatial calibration between the inertial navigation and the camera optical centre, the relation can be expressed as follows:
where P_SINS is the coordinate of the inertial navigation centre in the ECEF frame, the first rotation matrix is that from the inertial navigation (body) coordinate system to the ECEF frame, l^b is the lever-arm component of the camera optical centre in the inertial navigation coordinate system, and the remaining rotation matrices are those from the camera coordinate system to the ECEF frame and from the camera coordinate system to the inertial navigation coordinate system, respectively.
Equation (10) can then be converted according to equation (13) into:
Furthermore, the rotation matrix from the ECEF frame to the camera frame can be expressed as:
Thus, the partial derivatives of p with respect to the inertial navigation pose can be obtained:
thus, equation (12) can be converted into:
the final complete observation equation is:
where u_1, …, u_n are the pixel observations, the corresponding predicted values are the pixel values obtained by back-projecting the beacon points, i.e. those appearing in formula (17), and ε_u is the pixel error.
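A hedged sketch of equation (13) follows: given the SINS position, the body-to-ECEF rotation and the calibrated camera extrinsics, the camera optical-centre position and the ECEF-to-camera rotation follow directly. The matrix names and numerical values below are placeholders introduced for illustration:

```python
import numpy as np

def camera_pose_from_sins(p_sins, c_b_e, l_b, c_c_b):
    """Equation (13): camera centre P_c = P_SINS + C(b->e) * l^b;
    the ECEF-to-camera rotation is the transpose of C(b->e) * C(c->b)."""
    p_c = p_sins + c_b_e @ l_b          # camera optical centre in the ECEF frame
    c_c_e = c_b_e @ c_c_b               # camera frame -> ECEF rotation
    return p_c, c_c_e.T                 # (position, ECEF -> camera rotation)

# placeholder values: identity attitude, 0.5 m forward lever arm, camera aligned with the body
p_sins = np.array([-2267749.0, 5009154.0, 3221290.0])   # example ECEF position in metres
c_b_e = np.eye(3)
l_b = np.array([0.5, 0.0, 0.0])
c_c_b = np.eye(3)
p_cam, c_e_c = camera_pose_from_sins(p_sins, c_b_e, l_b, c_c_b)
```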
5. GNSS/SINS/visual tight combination
In the above steps, a positioning observation equation fusing vision and inertial navigation under the support of environment beacons has been obtained, as shown in equation (18).
When the number of GNSS satellites is large and the PDOP value is small, the GNSS positioning geometry meets the requirement; the visual observation information is then masked and only the GNSS/SINS tight combination is used, with the observation update equation:
In the above, the station–satellite geometric double-difference distance is calculated from the SINS-recursed position, the station–satellite double-difference observations are formed from the pseudo-ranges and carrier phases output by the reference-station and rover receivers respectively, the GNSS lever-arm correction is expressed in the ECEF frame, the direction cosines are those of the double-difference line of sight to each satellite, and ε_P and ε_L are the pseudo-range and phase observation noise.
When the number of GNSS satellites is small and the visual sensor is successfully matched with a plurality of beacon points, adding a visual pixel constraint observation value on the basis of the above observation updating equation to obtain an observation equation as follows:
when the GNSS satellite is completely blocked, but the vision can successfully extract and match the beacon point, only the pixel constraint observation value of the vision is adopted, and the observation equation is shown as the formula (18).
It will be understood that modifications and variations will be apparent to those skilled in the art from the foregoing description, and it is intended that all such modifications and variations be included within the scope of the following claims.

Claims (6)

1. A GNSS/SINS/visual tight combination method supported by environment beacons, characterized in that in the method, images are acquired by a vision sensor, and a visual feature point cloud map containing feature descriptors is generated offline as a prior environment-beacon database; when the carrier performs real-time navigation and positioning, a differential GNSS/SINS tight combination is taken as the basis; when GNSS signals are blocked, images are captured by a vision sensor mounted on the carrier, dynamic-interference removal and feature extraction are performed on the images, and the extracted features are matched with the environment beacons in the visual feature point cloud map; when one or more beacon points are matched, the rear intersection equations formed by the matched pixel points and beacon points are added to the tight combination as observation information to provide positioning assistance for the SINS; the vision sensor is a camera;
the method specifically comprises the following steps:
step 1, preprocessing an image acquired in real time to obtain an image after distortion correction, constructing a convolutional neural network by deep learning to perform semantic segmentation, and marking a pixel block corresponding to a dynamic object;
step 2, dividing the image into grids, extracting feature points in each grid with a GPU-accelerated SIFT feature operator, judging whether each feature point falls in a dynamic-object marked area and deleting it if so, sorting and screening the remaining feature points by response intensity, and retaining the two feature points with the highest response in each grid;
step 3, storing the feature point cloud map with the aid of a feature ten-way tree, accelerating feature matching, obtaining preliminary matching by adopting a Euclidean geometric distance method of feature descriptors, and performing outlier interference elimination by utilizing a RANSAC algorithm and an EPnP algorithm when the matched environment beacon points are more than 4 points;
step 4, constructing a rear intersection equation for the matched pixel points and the matched beacon points, converting the equation into a relation between the inertial navigation pose error and the pixel observation value according to the external parameter relation between the inertial navigation and the camera, and adding the relation as the observation equation into a GNSS/INS tight combination for auxiliary positioning;
step 5, when the satellite observation quantity is larger than a certain threshold value, shielding the observation value information fed back by vision, and directly adopting GNSS/SINS tight combination positioning; otherwise, if the satellite observation quantity is smaller than the threshold value, GNSS/SINS/vision joint participation solution is adopted after the satellite observation quantity is screened; if no visible satellite exists or no satellite remains after screening, only visual observation and SINS are adopted for tightly combined positioning.
2. The method of GNSS/SINS/visual tight combination under environmental beacon support according to claim 1, wherein the specific method of step 1 is:
before image acquisition, the camera intrinsics are calibrated with Zhang Zhengyou's camera calibration method using the MATLAB calibration toolbox to obtain the focal lengths, principal point offsets, radial distortion parameters and tangential distortion parameters of the camera; after an image is acquired in real time, distortion correction is carried out with the calibrated intrinsic and distortion parameters;
the corrected image is first convolved by a CNN to obtain the feature map of the last convolution layer; a pyramid pooling module is adopted to obtain feature extraction at 4 different scales; a 1×1 convolution is applied to each pyramid level to reduce dimensionality while maintaining the global feature weight; bilinear interpolation is used for up-sampling to form a feature representation containing both local and global information; finally the representation is fed back to the convolution layer to realize pixel-level semantic segmentation; after the semantic segmentation result of the image is obtained, the pixel blocks belonging to dynamic objects are marked.
3. The method of GNSS/SINS/visual tight combination under environmental beacon support according to claim 1, wherein the specific method of feature point extraction in step 2 is as follows:
step 2.1, SIFT feature extraction: extracting feature points by adopting a SIFT feature operator, and accelerating SIFT feature extraction by adopting a GPU;
step 2.2, eliminating characteristic points of the dynamic object marking area: sequentially marking and judging the feature points extracted by SIFT, and deleting the feature points if the feature points fall in a dynamic object marking area;
step 2.3, feature point selection: sorting the cleaned feature points from large to small according to the feature response degree, and taking two feature points with the largest response degree and the second largest response degree as final candidate feature points; if the number of the feature points in the grid is less than two, selecting all the feature points as final candidate feature points; after the candidate feature points of each grid are screened, the feature points of each grid are summarized.
4. The method of GNSS/SINS/visual tight combination under environmental beacon support according to claim 1, wherein the method of performing feature matching in step 3 specifically comprises:
step 3.1, constructing a ten-way tree feature index: constructing a ten-way tree feature index for all data of the visual feature point cloud map, establishing a ten-way tree of an L layer, and searching for leaf nodes at most L times;
step 3.2, searching the nearest-neighbour beacon point: comparing the feature descriptor of each extracted feature point with the feature vectors of all nodes of the first layer of the ten-way-tree feature index, and selecting the node with the shortest Euclidean distance to continue searching downwards until layer L; computing the Euclidean distances to all feature descriptors of the leaf node at layer L to find the shortest distance d_min and the second-shortest distance d_smin; if d_min is smaller than the set minimum distance threshold and the ratio test d_min/d_smin is smaller than the set ratio threshold, adding the feature point and the beacon point to the candidate matching point pairs;
step 3.3, eliminating outliers of the RANSAC algorithm and the EPnP algorithm: if the candidate matching point pairs are not less than 8 pairs, removing outliers by adopting a RANSAC+EPnP algorithm, randomly selecting four groups of matching point pairs, calculating a back projection error by using the EPnP to calculate the pose of the camera, marking the matching point pairs with the back projection error less than 2 pixels as interior points, and recording the subset of the interior points; and taking out the most number of the point subsets in the K times of operation cycle as the optimal subset, taking the camera pose corresponding to the optimal subset as the final estimated camera pose, if the geometric distance difference between the estimated camera position and the inertial navigation forecast position is smaller than 10m, considering that the RANSAC+EPnP algorithm is successful, judging the matching point pairs outside the optimal subset as outer points, and eliminating the outer points.
5. The method of GNSS/SINS/visual tight combination under environmental beacon support according to claim 1, wherein the specific method of step 4 is:
step 4.1, construction of the rear intersection equation: knowing the positions of the environment beacon points and the pixel coordinates of the matched pixel points, the position and attitude of the camera optical centre are to be solved; the camera position and attitude are expressed in error form, the partial derivatives of the observations with respect to the states are obtained by the chain rule, and a rear intersection observation equation can be constructed for each pair of matching points as follows:
where [u v] are the pixel coordinates, [x y z] are the coordinates of the beacon point in the camera coordinate system, P is the coordinate of the beacon point in the ECEF frame, the states to be estimated are the coordinates P_c of the camera optical centre in the ECEF frame and the rotation matrix from the ECEF frame to the camera coordinate system, whose error is expressed as a misalignment angle, δP_c is the position error, and ω_{u,v} is the noise of the pixel observations;
step 4.2, vision/inertial navigation fusion observation equation: according to the external reference relation between the inertial navigation and the camera, converting the relation between the pixel observation value and the camera pose error into the relation between the pixel observation value and the inertial navigation pose error, and obtaining an observation equation as follows:
where u_1, …, u_n are the pixel observations, the corresponding predicted values are the pixel values obtained by back-projecting the beacon points, ε_u is the pixel error, δr^e, δv^e, φ, a^b, ε^b and δN denote the inertial navigation position, velocity and attitude errors, the accelerometer zero bias, the gyro zero bias and the double-difference ambiguity parameters respectively, and the design matrix contains the partial derivatives of the pixel observations with respect to the camera pose errors.
6. The method of GNSS/SINS/visual compact combination under environmental beacon support according to claim 5, wherein the specific method of step 5 is:
when the number of GNSS satellite observations is larger than the threshold and the PDOP value is smaller than the threshold, the GNSS positioning geometry meets the requirement; the visual observation information is then masked and only the GNSS/SINS tight combination is used, with the observation update equation:
where the station–satellite geometric double-difference distance is calculated from the SINS-recursed position, the station–satellite double-difference observations are formed from the pseudo-ranges and carrier phases output by the reference-station and rover receivers respectively, the GNSS lever-arm correction is expressed in the ECEF frame, the direction cosines are those of the double-difference line of sight to each satellite, and ε_P and ε_L are the pseudo-range and phase observation noise;
when the number of observed GNSS satellites is smaller than the threshold and the vision sensor has successfully matched several beacon points, the visual pixel-constraint observations are added, and the observation equation is obtained as:
when the GNSS satellites are completely blocked but the beacon points can still be successfully extracted and matched by vision, only the visual pixel-constraint observations are used, and the observation equation is the vision/inertial-navigation fusion observation equation obtained in step 4.2.
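A minimal sketch of the step-5 switching logic follows; the function name, threshold values and return labels are illustrative assumptions rather than values fixed by the claim.

def select_observation_model(num_sats, pdop, num_matched_beacons,
                             sat_threshold=5, pdop_threshold=4.0):
    # Step 5 (illustrative): choose which observation-update equation to apply.
    if num_sats > sat_threshold and pdop < pdop_threshold:
        # GNSS geometry is adequate: mask the visual observations and run the
        # GNSS/SINS tight-combination update only.
        return "gnss_sins"
    if num_sats > 0 and num_matched_beacons > 0:
        # Degraded GNSS: augment the GNSS/SINS update with the visual
        # pixel-constraint observations.
        return "gnss_sins_vision"
    if num_matched_beacons > 0:
        # GNSS completely blocked: use the vision/inertial-navigation fusion
        # observation equation of step 4.2.
        return "vision_sins"
    # No usable aiding: propagate with SINS alone.
    return "sins_only"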
CN201910659046.7A 2019-07-22 2019-07-22 GNSS/SINS/visual tight combination method under environment beacon support Active CN110412635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910659046.7A CN110412635B (en) 2019-07-22 2019-07-22 GNSS/SINS/visual tight combination method under environment beacon support

Publications (2)

Publication Number Publication Date
CN110412635A CN110412635A (en) 2019-11-05
CN110412635B true CN110412635B (en) 2023-11-24

Family

ID=68362204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910659046.7A Active CN110412635B (en) 2019-07-22 2019-07-22 GNSS/SINS/visual tight combination method under environment beacon support

Country Status (1)

Country Link
CN (1) CN110412635B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110871824B (en) * 2019-11-22 2021-01-15 武汉纵横天地空间信息技术有限公司 Method and system for monitoring surrounding environment of track
CN111339483B (en) * 2020-01-19 2022-03-11 武汉大学 GNSS image generation method based on trend-removing cross-correlation analysis
CN111274343B (en) * 2020-01-20 2023-11-24 阿波罗智能技术(北京)有限公司 Vehicle positioning method and device, electronic equipment and storage medium
CN111735451B (en) * 2020-04-16 2022-06-07 中国北方车辆研究所 Point cloud matching high-precision positioning method based on multi-source prior information
CN111340952B (en) * 2020-05-19 2020-09-04 北京数字绿土科技有限公司 Method and device for mapping mobile measurement unlocking region
CN111854678B (en) * 2020-07-17 2022-02-15 浙江工业大学 Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision
CN112363193B (en) * 2020-11-17 2023-11-24 西安恒图智源信息科技有限责任公司 High-precision positioning system based on residual modeling and GNSS combined inertial navigation system
CN112946697B (en) * 2021-01-29 2023-05-30 合肥工业大学智能制造技术研究院 Satellite signal cycle slip detection and repair method based on deep learning
CN113111973A (en) * 2021-05-10 2021-07-13 北京华捷艾米科技有限公司 Depth camera-based dynamic scene processing method and device
CN113433576B (en) * 2021-06-28 2023-09-01 中国科学院国家授时中心 GNSS and V-SLAM fusion positioning method and system
CN113255600B (en) * 2021-06-29 2021-10-01 上海影创信息科技有限公司 Point cloud map updating optimization method, system, medium and equipment based on client
CN114396943A (en) * 2022-01-12 2022-04-26 国家电网有限公司 Fusion positioning method and terminal
CN114199259B (en) * 2022-02-21 2022-06-17 南京航空航天大学 Multi-source fusion navigation positioning method based on motion state and environment perception
CN115661453B (en) * 2022-10-25 2023-08-04 腾晖科技建筑智能(深圳)有限公司 Tower crane object detection and segmentation method and system based on downward view camera
CN117268373B (en) * 2023-11-21 2024-02-13 武汉大学 Autonomous navigation method and system for multi-sensor information fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103424114A (en) * 2012-05-22 2013-12-04 同济大学 Visual navigation/inertial navigation full combination method
CN103292804A (en) * 2013-05-27 2013-09-11 浙江大学 Monocular natural vision landmark assisted mobile robot positioning method
CN103644904A (en) * 2013-12-17 2014-03-19 上海电机学院 Visual navigation method based on SIFT (scale invariant feature transform) algorithm
CN106780699A (en) * 2017-01-09 2017-05-31 东南大学 A kind of vision SLAM methods aided in based on SINS/GPS and odometer
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GNSS/INS/On-Board Camera Integration for Vehicle Self-Localization in Urban Canyon; Shunsuke Kamijo et al.; 2015 IEEE 18th International Conference on Intelligent Transportation Systems; 2015-12-31; 2533-2538 *
New optimal smoothing scheme for improving relative and absolute accuracy of tightly coupled GNSS/SINS integration; Xiaohong Zhang et al.; Springer; 2017-01-25; 1-12 *
Tight Fusion of a Monocular Camera, MEMS-IMU, and Single-Frequency Multi-GNSS RTK for Precise Navigation in GNSS-Challenged Environments; Tuan Li et al.; Remote Sensing; 2019-03-13; 3-21 *

Also Published As

Publication number Publication date
CN110412635A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110412635B (en) GNSS/SINS/visual tight combination method under environment beacon support
CN109059906B (en) Vehicle positioning method and device, electronic equipment and storage medium
CN110411462B (en) GNSS/inertial navigation/lane line constraint/milemeter multi-source fusion method
CN112268559B (en) Mobile measurement method for fusing SLAM technology in complex environment
CN114199259B (en) Multi-source fusion navigation positioning method based on motion state and environment perception
CN113899375B (en) Vehicle positioning method and device, storage medium and electronic equipment
CN112987065B (en) Multi-sensor-integrated handheld SLAM device and control method thereof
CN113406682B (en) Positioning method, positioning device, electronic equipment and storage medium
CN113358112B (en) Map construction method and laser inertia odometer
CN114565674B (en) Method and device for purely visually positioning urban structured scene of automatic driving vehicle
CN111288984A (en) Multi-vehicle joint absolute positioning method based on Internet of vehicles
CN114019552A (en) Bayesian multi-sensor error constraint-based location reliability optimization method
CN108613675B (en) Low-cost unmanned aerial vehicle movement measurement method and system
CN111156986A (en) Spectrum red shift autonomous integrated navigation method based on robust adaptive UKF
CN115183762A (en) Airport warehouse inside and outside mapping method, system, electronic equipment and medium
Pan et al. Tightly-coupled multi-sensor fusion for localization with LiDAR feature maps
CN108921896B (en) Downward vision compass integrating dotted line characteristics
Mostafa et al. Optical flow based approach for vision aided inertial navigation using regression trees
CN110927765B (en) Laser radar and satellite navigation fused target online positioning method
CN116105729A (en) Multi-sensor fusion positioning method for reconnaissance of forest environment of field cave
CN115930948A (en) Orchard robot fusion positioning method
CN110849349A (en) Fusion positioning method based on magnetic sensor and wheel type odometer
CN112923934A (en) Laser SLAM technology suitable for combining inertial navigation in unstructured scene
CN116222541A (en) Intelligent multi-source integrated navigation method and device using factor graph
CN114088103B (en) Method and device for determining vehicle positioning information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant