CN113971438A - Multi-sensor fusion positioning and mapping method in desert environment

Multi-sensor fusion positioning and mapping method in desert environment

Info

Publication number
CN113971438A
Authority
CN
China
Prior art keywords
positioning
information
odometer
environment
desert
Prior art date
Legal status
Pending
Application number
CN202111180440.6A
Other languages
Chinese (zh)
Inventor
齐立哲
华中伟
陈骞
苏昊
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University
Priority to CN202111180440.6A
Publication of CN113971438A

Classifications

    • G06F18/251: Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Fusion techniques of input or preprocessed data
    • G06F18/22: Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06T17/05: Physics; Computing; Image data processing or generation, in general; Three-dimensional [3D] modelling; Geographic models
    • G06F2218/02: Physics; Computing; Electric digital data processing; Aspects of pattern recognition specially adapted for signal processing; Preprocessing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a multi-sensor fusion positioning and mapping method in a desert environment, which comprises the following steps: S1, when the GNSS signal strength exceeds a certain threshold, filtering and fusing the IMU signal and the GNSS signal to obtain an IMU-and-GNSS odometer; S2, selecting either the IMU-and-GNSS odometer or a visual odometer according to the richness and reliability of the target object information in the environment; S3, when the GNSS signal strength is below a certain threshold and the number of target objects detected in the environment is below a certain threshold, constructing an odometer with an optical flow method; S4, when the GNSS signal is weak and few target objects are detected, determining the final positioning information of the robot; S5, estimating the three-dimensional poses of the target objects; and S6, constructing a topological map of the desert three-dimensional environment. The method achieves centimeter-level positioning and mapping in the desert environment while reducing the computational burden on the computing unit.

Description

Multi-sensor fusion positioning and mapping method in desert environment
Technical Field
The invention belongs to the technical field of positioning and mapping in a desert environment, and particularly relates to a multi-sensor fusion positioning and mapping method in the desert environment.
Background
Simultaneous localization and mapping (SLAM) refers to a robot placed in an unfamiliar environment perceiving its surroundings with its sensors while determining its own position, so that it can replace or cooperate with people to complete specific tasks in a variety of environments.
The desert landscape is complex and changeable: huge stones, steep slopes, potholes and soft sand can be found everywhere. For perception and recognition, the desert environment shows high similarity and dense detail in shape, color and texture, and the terrain scenes are large, so a single sensor can hardly achieve accurate map construction and positioning for the robot. Most current SLAM combination schemes mainly realize the positioning function; the three-dimensional maps they build contain redundant, useless information that cannot support further navigation planning of the mobile robot, and they cannot carry out mapping and positioning under the special conditions of the desert.
Disclosure of Invention
In view of the above, and to overcome the above drawbacks, the present invention aims to provide a multi-sensor fusion positioning and mapping method for the desert environment.
To achieve this purpose, the technical solution of the invention is realized as follows:
The invention provides a multi-sensor fusion positioning and mapping method in a desert environment, which comprises the following steps:
S1, when the GNSS signal strength exceeds a certain threshold, filtering and fusing the IMU signal and the GNSS signal to obtain an IMU-and-GNSS odometer;
S2, acquiring RGB information and depth information of the environment with a camera, detecting target objects in the environment with a target detection algorithm, and selecting either the IMU-and-GNSS odometer or a visual odometer according to the richness and reliability of the target object information in the environment, wherein the visual odometer is likewise constructed from the information acquired by the camera and the target detection algorithm, followed by feature matching;
S3, when the GNSS signal strength is below a certain threshold and the number of target objects detected in the environment is below a certain threshold, constructing an odometer with an optical flow method;
S4, obtaining the pose transformation information of the robot with a corresponding algorithm from the feature-matching information extracted when constructing the visual odometer of step S2, or from the optical-flow information of the odometer of step S3, and performing nonlinear optimization to obtain the final positioning information of the robot;
S5, the depth information of step S2 comprising the depth information of the target object's center point, estimating the three-dimensional pose of the target object based on this information;
S6, constructing the topological map of the desert three-dimensional environment by combining the positioning information and the three-dimensional poses of the target objects.
Further, the specific method of step S1 is as follows:
S101, acquiring speed and acceleration data of the robot from the IMU signal, and obtaining pose information of the robot through pre-integration;
S102, based on the GNSS signal, adopting a real-time differential positioning (RTK) technique to obtain centimeter-level outdoor positioning accuracy;
S103, filtering and fusing the IMU signal and the GNSS signal with an extended Kalman filter;
S104, further optimizing the pose information by combining the advantages of the IMU signal and of the GNSS signal, to obtain the IMU-and-GNSS odometer.
Further, in step S2, the specific method for constructing the visual odometer is as follows:
a key frame is selected according to the degree of change of the inter-frame image information acquired by the camera; feature points of desert-specific vegetation or obstacles among the target objects are extracted according to the target detection algorithm and matched to construct the visual odometer.
Further, the algorithm for constructing the visual odometer is as follows:
let P and P' be a set of well-matched 3D feature points,
P = {p_1, …, p_n},  P' = {p'_1, …, p'_n};
find a Euclidean transformation R, t that minimizes
\min_{R,t} \tfrac{1}{2} \sum_{i=1}^{n} \| p_i - (R\,p'_i + t) \|^2 ,
from which the motion between the two frames is obtained.
Further, in step S3, before the odometer is constructed, the key frames of the image need to be selected according to the frequency change of the GNSS signal, and the selection method is as follows: a threshold is set; after the similarity of the input image frames is computed, a frame whose similarity exceeds the threshold is taken as a key frame, and if the camera stays on the same picture for a long time, the selected key frames are reduced or skipped.
Further, in step S3, the method of constructing the odometer with the optical flow method is as follows:
generating an image pyramid for the target detection area, where from layer 0 to layer n each image is 4 times the size of the image on the next layer, and smoothing each layer with a low-pass filter;
computing the optical flow and affine transformation matrix, using the result of each layer as the initial value for the next lower layer and computing the optical flow and affine transformation matrix of that layer on this basis, down to layer 0;
iterating the solution and constructing the odometer.
Further, in step S4, the pose transformation information of the robot is obtained by a bundle adjustment (BA) method or a sliding window method.
Further, in step S5, the method for calculating the depth information of the target object's center point is as follows:
a lightweight target detection model is trained on a vegetation data set collected and prepared locally; target objects in the desert environment are then recognized in real time, and the distance to each target's center point is obtained from the camera.
Further, in step S5, a bounding box algorithm is derived by combining the two-dimensional detection result of the target detection algorithm with the depth information of the target object's center point, and the three-dimensional pose of the target object is estimated.
Compared with the prior art, the multi-sensor fusion positioning and mapping method in the desert environment has the following beneficial effects:
(1) Target detection is carried out in real time by combining the RTK technique with multi-sensor information such as machine vision, and a desert topological map is constructed; different odometers can be selected according to the richness of features in the environment to estimate the route, so centimeter-level positioning and mapping in the desert environment can be achieved and the computational burden on the computing unit can be reduced.
(2) The method also enables real-time, accurate positioning of the robot and recording of the planting positions at any time, so that equidistant planting and precise watering and maintenance become possible, which greatly improves the rationality and survival rate of desert planting; moreover, the method can be migrated to robots working in other open environments, such as farming robots.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a method for positioning and mapping a multi-sensor fusion in a desert environment according to the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The present invention will be described in detail below with reference to the embodiments and the attached drawings.
As shown in Fig. 1, the multi-sensor fusion positioning and mapping method in a desert environment comprises the following steps:
1. Speed and acceleration observation data of the robot are acquired from the IMU signal, and the pose information of the robot is obtained by, but not limited to, pre-integration, specifically:
In general, a satellite positioning system updates at about 10 Hz, whereas an IMU sensor can update at up to 1 kHz. The pose of the robot can be obtained by integrating the speed, acceleration and other data from the IMU, but these data are referenced to the initial moment, so re-integrating the IMU measurements to recover the robot's velocity, acceleration and so on at every optimization iteration would increase the burden on the computing system. To avoid this, the integral term used in each pose computation is stripped out of the formula and turned into a pre-integration term, so that the pose information of the robot can be obtained directly from the sensor observations.
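As an illustration of this idea (not the patent's implementation), a minimal pre-integration sketch in Python might accumulate the relative rotation, velocity and position between two keyframes directly from the IMU samples; the Euler integration, the omission of gravity and bias terms, and all variable names are simplifying assumptions.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def preintegrate(gyro, accel, dt):
    """Accumulate relative rotation, velocity and position between two keyframes
    from raw IMU samples (simple Euler sketch; gravity and bias terms omitted).
    gyro, accel: (N, 3) arrays of angular rate [rad/s] and specific force [m/s^2]."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv = dv + (dR @ a) * dt
        dR = dR @ (np.eye(3) + skew(w) * dt)   # first-order rotation update
    return dR, dv, dp   # deltas expressed in the first keyframe's IMU frame
```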
2. From the GNSS signal, a real-time differential positioning (RTK) technique is adopted to obtain centimeter-level outdoor positioning accuracy, specifically:
On the basis of GNSS, a base station is erected on the ground. The ground base station obtains its satellite fix and compares it with its true position to compute the GNSS positioning error; this correction is sent to the mobile (rover) station on the vehicle, which then corrects its own positioning so that the error reaches the centimeter level.
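Conceptually, the differential correction can be pictured as the toy sketch below; real RTK works on carrier-phase observations, so the simple additive position correction and the variable names here are illustrative assumptions only.

```python
import numpy as np

# Base station: surveyed true position vs. its raw GNSS fix (local ENU, meters).
base_true = np.array([0.000, 0.000, 0.000])
base_gnss = np.array([0.412, -0.238, 0.955])

# Error common to the base and a nearby rover, estimated at the base and broadcast.
correction = base_true - base_gnss

rover_gnss = np.array([15.730, 8.122, 1.140])   # rover's raw GNSS fix
rover_corrected = rover_gnss + correction        # differentially corrected fix
print(rover_corrected)
```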
3. Filtering and fusing an IMU signal and a GNSS signal by using a method of Extended Kalman Filtering (EKF), which is specifically as follows:
and (3) assuming that the noise of the IMU and the GNSS signal obeys Gaussian noise, updating an observation equation of the IMU sensor to obtain a state quantity and a covariance matrix of the system, and performing state updating by taking the state quantity and the covariance matrix as a system prediction state quantity and a system prediction covariance matrix of the GNSS signal, wherein each period is a cycle, and then the filtering and fusion work of the sensor is completed.
4. The pose information is further optimized by combining the advantage of the IMU under high-speed, short-duration conditions with the advantage of the GNSS under large-scene, long-duration conditions, so as to obtain the odometer of the robot, specifically:
When the GNSS signal is strong, the motion trajectory of the robot relative to its initial position, i.e. the odometer, is obtained in the above manner by combining the advantage of the IMU under high-speed, short-duration conditions with the low-drift advantage of the GNSS under large-scene, long-duration conditions.
5. RGB information and depth information of the environment are acquired by, but not limited to, a passive binocular depth camera.
6. A target detection algorithm detects whether target objects such as trees or people are present in the environment, and either the IMU-and-GNSS odometer or a visual odometer is selected according to the richness and reliability of the target object information.
A lightweight target detection model is trained on a vegetation data set collected and prepared locally; target objects in the desert environment are then recognized in real time, and the distance to each target's center point is obtained from the depth camera.
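Reading the depth of a detection's center point from an aligned depth image might look like the sketch below; the patch size, unit scale and function name are illustrative assumptions.

```python
import numpy as np

def center_depth(bbox, depth_image, depth_scale=0.001):
    """Median depth (meters) in a small patch around the center of a 2D detection
    (u_min, v_min, u_max, v_max), read from a depth image aligned to the RGB image;
    depth_scale converts the raw units (millimeters assumed here)."""
    u = int(0.5 * (bbox[0] + bbox[2]))
    v = int(0.5 * (bbox[1] + bbox[3]))
    patch = depth_image[max(v - 2, 0):v + 3, max(u - 2, 0):u + 3].astype(float)
    patch = patch[patch > 0]                 # drop invalid (zero) depth readings
    return float(np.median(patch)) * depth_scale if patch.size else None
```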
7. A key frame is selected according to the degree of change of the information between image frames.
8. According to the target detection results, feature points of desert-specific vegetation or obstacles are extracted and matched to construct the visual odometer; the specific algorithm is as follows:
Suppose we have a set of well-matched 3D feature points P and P',
P = {p_1, …, p_n},  P' = {p'_1, …, p'_n}.
We seek a Euclidean transformation R, t that minimizes
\min_{R,t} \tfrac{1}{2} \sum_{i=1}^{n} \| p_i - (R\,p'_i + t) \|^2 ,
from which the motion between the two frames is obtained.
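A common closed-form solution of this least-squares problem is the SVD-based (Kabsch/Umeyama) alignment; the sketch below is one possible implementation under that assumption, not necessarily the exact algorithm used by the invention.

```python
import numpy as np

def estimate_rt(P, P_prime):
    """Closed-form least-squares estimate of R, t with p_i ≈ R p'_i + t
    for matched 3D point sets P (n, 3) and P_prime (n, 3)."""
    mu_p, mu_q = P.mean(axis=0), P_prime.mean(axis=0)
    W = (P_prime - mu_q).T @ (P - mu_p)      # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(W)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_p - R @ mu_q
    return R, t
```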
9. When the GNSS signal is weak and few target objects can be detected in the environment, key frames are selected according to the change of the GNSS signal:
A threshold is set; after the similarity of the input image frames is computed, a frame whose similarity exceeds the threshold is taken as a key frame, and if the camera stays on the same picture for a long time, the selected key frames are reduced or skipped.
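One possible reading of this rule, using gray-level histogram correlation as the similarity measure, is sketched below; the measure, the two thresholds and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def select_keyframes(frames, sim_thresh=0.7, redundant_thresh=0.98):
    """Keyframe selection sketch: compare each frame with the last keyframe using
    gray-level histogram correlation; near-identical frames (a view held for a
    long time) are skipped as redundant."""
    def hist(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [64], [0, 256])
        return cv2.normalize(h, h).flatten()

    keyframes, last = [0], hist(frames[0])
    for i in range(1, len(frames)):
        h = hist(frames[i])
        sim = cv2.compareHist(last, h, cv2.HISTCMP_CORREL)
        if sim_thresh < sim < redundant_thresh:   # similar enough, but not a frozen view
            keyframes.append(i)
            last = h
    return keyframes
```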
10. An odometer is then constructed with the Lucas-Kanade (L-K) optical flow method under the gray-scale invariance assumption, as follows:
An image pyramid is generated for the target detection area; from layer 0 to layer n, each image is 4 times the size of the image on the next layer, and each layer is smoothed with a low-pass filter.
Pyramid-based tracking: the optical flow and affine transformation matrix are computed, the result of each layer is used as the initial value for the next lower layer, and the optical flow and affine transformation matrix of that layer are computed on this basis, down to layer 0.
Iterative solution: at each layer, the optical flow and affine transformation matrix are computed so that the error is minimized.
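OpenCV's pyramidal Lucas-Kanade tracker performs essentially these coarse-to-fine steps; the sketch below uses it to track corners inside the detection region and fit an affine transform between two frames (the parameter values are illustrative assumptions).

```python
import cv2
import numpy as np

def lk_affine(prev_gray, curr_gray, roi):
    """Track corners inside the detection region with pyramidal L-K optical flow
    and fit an affine transform between the two frames."""
    x, y, w, h = roi
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=7, mask=mask)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None,
        winSize=(21, 21), maxLevel=3,            # coarse-to-fine over 4 pyramid levels
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good_old = pts[status.ravel() == 1]
    good_new = nxt[status.ravel() == 1]
    M, _ = cv2.estimateAffine2D(good_old, good_new, method=cv2.RANSAC)
    return good_old, good_new, M                  # M is the 2x3 affine matrix
```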
11. From the extracted feature-matching information or optical-flow information, the pose transformation of the robot is obtained by, but not limited to, a bundle adjustment (BA) method or a sliding window method.
12. When the number of key frames reaches a certain level, the robot poses are further refined by nonlinear optimization using, but not limited to, the g2o framework.
13. The final positioning information of the robot in the desert environment is obtained from the nonlinear optimization of the previous step.
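The description names g2o for this refinement; as a toy stand-in, the SciPy sketch below refines a chain of 2D poses so that their relative motions agree with the odometer constraints (the planar state, the residual definition and the solver are illustrative assumptions, not the g2o setup).

```python
import numpy as np
from scipy.optimize import least_squares

def refine_poses(poses, rel_constraints):
    """Toy nonlinear refinement of a chain of planar poses (x, y, yaw).
    poses: (N, 3) initial guesses; rel_constraints: list of (i, j, dx, dy, dyaw),
    the measured motion of pose j expressed in the frame of pose i."""
    def residuals(flat):
        p = flat.reshape(-1, 3)
        res = []
        for i, j, dx, dy, dyaw in rel_constraints:
            c, s = np.cos(p[i, 2]), np.sin(p[i, 2])
            # predicted relative translation expressed in frame i
            tx = c * (p[j, 0] - p[i, 0]) + s * (p[j, 1] - p[i, 1])
            ty = -s * (p[j, 0] - p[i, 0]) + c * (p[j, 1] - p[i, 1])
            # angle wrapping omitted for brevity
            res += [tx - dx, ty - dy, (p[j, 2] - p[i, 2]) - dyaw]
        res += list(p[0])          # anchor the first pose at the origin
        return np.array(res)

    sol = least_squares(residuals, np.asarray(poses, dtype=float).ravel())
    return sol.x.reshape(-1, 3)
```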
By integrating the above selection of sensors and algorithm routes, the motion path of the robot relative to its starting point is finally obtained, the position of the robot on that path is known, and the robot can move and operate autonomously over desert terrain.
14. A bounding box algorithm is derived by combining the two-dimensional target detection result with the depth information of the target's center point, and the three-dimensional pose of the target is estimated.
The complex detected desert target is approximated by a geometrically simple body, an axis-aligned bounding box (AABB).
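One way to realize this step is to back-project the center of the 2D detection with its depth through a pinhole camera model and wrap the result in an AABB; the intrinsics and the fixed object extent below are illustrative assumptions.

```python
import numpy as np

def bbox_to_aabb(bbox, depth_at_center, K, object_size=(0.6, 0.6, 1.2)):
    """Back-project a 2D detection (u_min, v_min, u_max, v_max) with the depth of
    its center into camera coordinates and wrap it in an axis-aligned bounding
    box (AABB) of an assumed physical size in meters."""
    u = 0.5 * (bbox[0] + bbox[2])
    v = 0.5 * (bbox[1] + bbox[3])
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    center = np.array([(u - cx) * depth_at_center / fx,   # pinhole back-projection
                       (v - cy) * depth_at_center / fy,
                       depth_at_center])
    half = 0.5 * np.asarray(object_size)
    return center - half, center + half      # AABB min and max corners

# usage sketch with assumed camera intrinsics
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
aabb_min, aabb_max = bbox_to_aabb((300, 180, 360, 300), depth_at_center=4.2, K=K)
```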
15. The positioning information and the three-dimensional information of the targets are combined to construct the topological map of the desert three-dimensional environment.
The topological map of the desert three-dimensional environment is constructed by combining the mobile robot's own positioning information, the three-dimensional coordinates of the detected targets relative to the robot, and the sizes and positions of their bounding boxes.
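One possible data structure for such a topological map is a graph whose nodes are keyframe poses and detected objects, connected by odometry and observation edges; the networkx-based sketch below is an assumption about the representation, not the patent's storage format.

```python
import networkx as nx

def build_topo_map(keyframe_poses, detections):
    """Topological map sketch: keyframe nodes linked by odometry edges, and object
    nodes linked to the keyframe that observed them.
    keyframe_poses: list of (x, y, z); detections: list of
    (keyframe_index, label, (x, y, z), (aabb_min, aabb_max))."""
    g = nx.Graph()
    for i, pose in enumerate(keyframe_poses):
        g.add_node(("kf", i), pose=pose)
        if i > 0:
            g.add_edge(("kf", i - 1), ("kf", i), kind="odometry")
    for k, (kf_idx, label, pos, aabb) in enumerate(detections):
        g.add_node(("obj", k), label=label, pos=pos, aabb=aabb)
        g.add_edge(("kf", kf_idx), ("obj", k), kind="observation")
    return g
```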
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functions. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed method and system may be implemented in other ways. For example, the above described division of elements is merely a logical division, and other divisions may be realized, for example, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not executed. The units may or may not be physically separate, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A multi-sensor fusion positioning and mapping method in a desert environment, characterized by comprising the following steps:
S1, when the GNSS signal strength exceeds a certain threshold, filtering and fusing the IMU signal and the GNSS signal to obtain an IMU-and-GNSS odometer;
S2, acquiring RGB information and depth information of the environment with a camera, detecting target objects in the environment with a target detection algorithm, and selecting either the IMU-and-GNSS odometer or a visual odometer according to the richness and reliability of the target object information in the environment, wherein the visual odometer is likewise constructed from the information acquired by the camera and the target detection algorithm, followed by feature matching;
S3, when the GNSS signal strength is below a certain threshold and the number of target objects detected in the environment is below a certain threshold, constructing an odometer with an optical flow method;
S4, obtaining the pose transformation information of the robot with a corresponding algorithm from the feature-matching information extracted when constructing the visual odometer of step S2, or from the optical-flow information of the odometer of step S3, and performing nonlinear optimization to obtain the final positioning information of the robot;
S5, the depth information of step S2 comprising the depth information of the target object's center point, estimating the three-dimensional pose of the target object based on this information;
S6, constructing the topological map of the desert three-dimensional environment by combining the positioning information and the three-dimensional poses of the target objects.
2. The method for positioning and mapping multi-sensor fusion in desert environment according to claim 1, wherein the specific method of step S1 is as follows:
S101, acquiring speed and acceleration data of the robot from the IMU signal, and obtaining pose information of the robot through pre-integration;
S102, based on the GNSS signal, adopting a real-time differential positioning technique to obtain centimeter-level outdoor positioning accuracy;
S103, filtering and fusing the IMU signal and the GNSS signal with an extended Kalman filter;
S104, further optimizing the pose information by combining the advantages of the IMU signal and of the GNSS signal, to obtain the IMU-and-GNSS odometer.
3. The method for positioning and mapping multi-sensor fusion in desert environment according to claim 1, wherein in step S2, the specific method for constructing the visual odometer is as follows:
a key frame is selected according to the degree of change of the inter-frame image information acquired by the camera; feature points of desert-specific vegetation or obstacles among the target objects are extracted according to the target detection algorithm and matched to construct the visual odometer.
4. The method for positioning and mapping multi-sensor fusion in the desert environment according to claim 3, wherein the algorithm for constructing the visual odometer is as follows:
let P and P' be a set of well-matched 3D feature points,
P = {p_1, …, p_n},  P' = {p'_1, …, p'_n};
find a Euclidean transformation R, t that minimizes
\min_{R,t} \tfrac{1}{2} \sum_{i=1}^{n} \| p_i - (R\,p'_i + t) \|^2 ,
from which the motion between the two frames is obtained.
5. The method for positioning and mapping with multi-sensor fusion in desert environment as claimed in claim 1, wherein in step S3, before the odometer is constructed, the key frames of the image are selected according to the frequency change of the GNSS signals, the selection method being: a threshold is set; after the similarity of the input image frames is computed, a frame whose similarity exceeds the threshold is taken as a key frame, and if the camera stays on the same picture for a long time, the selected key frames are reduced or skipped.
6. The method for positioning and mapping multi-sensor fusion in desert environment according to claim 1 or 5, wherein in step S3, the method for constructing the odometer with the optical flow method comprises the following steps:
generating an image pyramid for the target detection area, where from layer 0 to layer n each image is 4 times the size of the image on the next layer, and smoothing each layer with a low-pass filter;
computing the optical flow and affine transformation matrix, using the result of each layer as the initial value for the next lower layer and computing the optical flow and affine transformation matrix of that layer on this basis, down to layer 0;
iterating the solution and constructing the odometer.
7. The method for positioning and mapping the fusion of multiple sensors in the desert environment according to claim 1, wherein in step S4, the pose transformation information of the robot is obtained by a bundle adjustment (BA) method or a sliding window method.
8. The method for positioning and mapping multi-sensor fusion in the desert environment according to claim 1, wherein in step S5, the method for calculating the depth information of the target object's center point is as follows:
a lightweight target detection model is trained on a vegetation data set collected and prepared locally; target objects in the desert environment are then recognized in real time, and the distance to each target's center point is obtained from the camera.
9. The method for positioning and mapping by fusing multiple sensors in the desert environment according to claim 1, wherein in step S5, a bounding box algorithm is derived by combining the two-dimensional detection result of the target detection algorithm with the depth information of the target's center point, and the three-dimensional pose of the target is estimated.
CN202111180440.6A 2021-10-11 2021-10-11 Multi-sensor fusion positioning and mapping method in desert environment Pending CN113971438A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111180440.6A CN113971438A (en) 2021-10-11 2021-10-11 Multi-sensor fusion positioning and mapping method in desert environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111180440.6A CN113971438A (en) 2021-10-11 2021-10-11 Multi-sensor fusion positioning and mapping method in desert environment

Publications (1)

Publication Number Publication Date
CN113971438A true CN113971438A (en) 2022-01-25

Family

ID=79587304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111180440.6A Pending CN113971438A (en) 2021-10-11 2021-10-11 Multi-sensor fusion positioning and mapping method in desert environment

Country Status (1)

Country Link
CN (1) CN113971438A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114440892A (en) * 2022-01-27 2022-05-06 中国人民解放军军事科学院国防科技创新研究院 Self-positioning method based on topological map and odometer
CN114440892B (en) * 2022-01-27 2023-11-03 中国人民解放军军事科学院国防科技创新研究院 Self-positioning method based on topological map and odometer
CN116095114A (en) * 2023-01-11 2023-05-09 上海船舶运输科学研究所有限公司 Ship-shore data transmission method based on Internet of things mode
CN116095114B (en) * 2023-01-11 2023-11-03 上海船舶运输科学研究所有限公司 Ship-shore data transmission method based on Internet of things mode

Similar Documents

Publication Publication Date Title
CN113781582B (en) Synchronous positioning and map creation method based on laser radar and inertial navigation combined calibration
US11900536B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
Ramezani et al. The newer college dataset: Handheld lidar, inertial and vision with ground truth
EP2503510B1 (en) Wide baseline feature matching using collaborative navigation and digital terrain elevation data constraints
Sim et al. Integrated position estimation using aerial image sequences
Badino et al. Visual topometric localization
CN109341706A (en) A kind of production method of the multiple features fusion map towards pilotless automobile
WO2019092418A1 (en) Method of computer vision based localisation and navigation and system for performing the same
CN110298914B (en) Method for establishing fruit tree canopy feature map in orchard
JP5162849B2 (en) Fixed point position recorder
CN111288989B (en) Visual positioning method for small unmanned aerial vehicle
KR20180079428A (en) Apparatus and method for automatic localization
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
Engel et al. Deeplocalization: Landmark-based self-localization with deep neural networks
CN109154502A (en) System, method and apparatus for geo-location
JP4984659B2 (en) Own vehicle position estimation device
CN113971438A (en) Multi-sensor fusion positioning and mapping method in desert environment
CN111006655A (en) Multi-scene autonomous navigation positioning method for airport inspection robot
CN108426582B (en) Indoor three-dimensional map matching method for pedestrians
CN108549376A (en) A kind of navigation locating method and system based on beacon
CN108446710A (en) Indoor plane figure fast reconstructing method and reconstructing system
CN112729301A (en) Indoor positioning method based on multi-source data fusion
CN109978919A (en) A kind of vehicle positioning method and system based on monocular camera
CN115451948A (en) Agricultural unmanned vehicle positioning odometer method and system based on multi-sensor fusion
CN115183762A (en) Airport warehouse inside and outside mapping method, system, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination