CN110220517A - An indoor-robot robust SLAM method combining environmental semantics - Google Patents

An indoor-robot robust SLAM method combining environmental semantics Download PDF

Info

Publication number
CN110220517A
CN110220517A (application CN201910610069.9A)
Authority
CN
China
Prior art keywords
indoor robot
map
data
robot
indoor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910610069.9A
Other languages
Chinese (zh)
Inventor
常俊龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unicloud Technology Co Ltd
Original Assignee
Unicloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unicloud Technology Co Ltd filed Critical Unicloud Technology Co Ltd
Priority to CN201910610069.9A priority Critical patent/CN110220517A/en
Publication of CN110220517A publication Critical patent/CN110220517A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/027 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising inertial navigation means, e.g. azimuth detector
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0272 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention provides an indoor-robot robust SLAM method combining environmental semantics. It comprises: localization of the indoor robot by fusing GPS global signals, IMU (inertial measurement unit) signals and LiDAR signals; perception of the indoor robot by fusing radar and image data, the extrinsic parameters of the radar sensor calibration, the extrinsic and intrinsic parameters of the front-camera calibration, and the host's speed and angular velocity; a semantic map of the indoor robot built by fusing the NavigationInfo from the dreamview module, the LaneMarker from the perception module and the location information from the localization module; decision-making of the indoor robot by fusing obstacle information, vehicle state, traffic-light state and map information; and path planning of the indoor robot by fusing localization, perception, prediction, routing and the map. The invention makes the robot more robust to its environment while moving and making decisions.

Description

An indoor-robot robust SLAM method combining environmental semantics
Technical field
The invention belongs to the technical field of computer vision, and in particular relates to an indoor-robot robust SLAM method combining environmental semantics.
Background technique
With the rapid development of artificial intelligence, especially strong AI, robot application scenarios are gradually shifting from extreme environments and industrial settings to office and daily-life settings that interact closely with humans. Accordingly, the design and use of robots is gradually moving from functionality toward robustness. Existing indoor robots can meet the demand for high-precision indoor localization and navigation in simple environments, but once the environment changes or becomes noisy, the robot's vision accumulates errors at different scales, which is very unfavorable for navigation. In addition, an indoor robot entering human society cannot yet understand environmental information, and completing an assigned task still requires the support of subsequent path planning, so it cannot yet be called truly "strong" intelligence.
Commercial robots can perform intelligent warehousing and automated logistics indoors, but enclosed warehouse environments are often dominated by white backgrounds and severely lack texture features, which causes scale information to be lost during navigation and degrades the positioning accuracy of visual odometry. The present invention therefore integrates environmental semantics into the map and uses optimization means such as potential-field guidance to obtain a robust SLAM method that can understand the environment map.
Summary of the invention
In view of this, the present invention aims to propose an indoor-robot robust SLAM method combining environmental semantics, so as to solve the problems mentioned in the background above.
In order to achieve the above objectives, the technical solution of the present invention is realized as follows:
An indoor-robot robust SLAM method combining environmental semantics, comprising:
localization of the indoor robot by fusing GPS global signals, IMU inertial-measurement-unit signals and LiDAR signals;
perception of the indoor robot by fusing radar and image data, the extrinsic parameters of the radar sensor calibration, the extrinsic and intrinsic parameters of the front-camera calibration, and the host's speed and angular velocity;
a semantic map of the indoor robot built by fusing the NavigationInfo from the dreamview module, the LaneMarker from the perception module and the location information from the localization module;
decision-making of the indoor robot by fusing obstacle information, vehicle state, traffic-light state and map information;
and path planning of the indoor robot by fusing localization, perception, prediction, routing and the map.
The localization, perception, semantic map and decision outputs of the indoor robot are fed into the path planning of the indoor robot, which performs path planning and produces a motion trajectory; the low-level control end then issues control commands according to the planned path.
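As an illustration of the data flow described above, the following C++ sketch chains the five modules in one planning cycle; every type and function name here is a hypothetical placeholder introduced for illustration, not an interface defined by the invention.

#include <vector>

struct Pose        { double x = 0, y = 0, yaw = 0; };
struct Perception  { /* obstacles, lane markers, ... */ };
struct SemanticMap { /* occupancy grid plus semantic labels */ };
struct Decision    { /* overtake / follow / avoid, target speed, ... */ };
struct TrajectoryPoint { double x = 0, y = 0, yaw = 0, v = 0, t = 0; };
using Trajectory = std::vector<TrajectoryPoint>;

// Stub module interfaces; each would wrap the fusion described above.
Pose        Localize()                                        { return {}; }
Perception  Perceive()                                        { return {}; }
SemanticMap BuildSemanticMap(const Pose&, const Perception&)  { return {}; }
Decision    Decide(const Perception&, const SemanticMap&)     { return {}; }
Trajectory  PlanPath(const Pose&, const Perception&,
                     const Decision&, const SemanticMap&)     { return {}; }
void        IssueControlCommands(const Trajectory&)           {}

void RunOneCycle() {
  const Pose pose          = Localize();
  const Perception percept = Perceive();
  const SemanticMap map    = BuildSemanticMap(pose, percept);
  const Decision decision  = Decide(percept, map);
  const Trajectory traj    = PlanPath(pose, percept, decision, map);
  IssueControlCommands(traj);  // the low-level control end acts on the plan
}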
Further, in the localization of the indoor robot, the global GPS positioning signal is combined with the IMU inertial unit to serve as a prior for the LiDAR.
Further, the semantic map of the indoor robot includes:
(1) data acquisition: acquiring panoramic image data, centimeter-level high-precision GPS data and road-surface scene data of the scanned sections with LiDAR;
(2) data preprocessing: thinning out and partitioning the collected data to obtain a data format that can be processed further;
(3) high-precision map vectorization: extracting lane lines and road contour lines from the data obtained in the previous step;
(4) lane centerline generation: first extracting the lane centerlines, then merging the data, and finally converting the format of the lane centerlines;
(5) generation of an ordered topological structure: first performing manual interpretation, then performing process control, producing a release-version high-precision map;
(6) map quality control: using a computer program to verify the visual effect, logic and precision of the map, producing the final-version high-precision map.
Further, the decision-making of the indoor robot includes:
(1) using the amcl package to localize in the map: probabilistic localization for a robot moving in 2D; the overall filter, laser model and odometry model parameters are configured in ROS; the inputs are laser scan data, tf transforms and the mean and covariance of the particle filter; the output is the estimated pose of the indoor robot in the map, realizing the transform from the base frame to the odometry frame and then to the map frame;
(2) using the gmapping package to build maps with LiDAR and a depth camera: data are obtained through sensor_msgs/LaserScan to create a 2D occupancy-grid map; the inputs are laser scan data and the transforms between the laser frame, base frame and odometry frame; map metadata and map information are published periodically;
(3) using the move_base package to move to a target position: it performs path planning and intelligent obstacle avoidance for the indoor robot and can autonomously select the optimal path after repeated learning.
Further, the path planning of the indoor robot uses an improved lattice algorithm, specifically comprising:
(1) sampling points laterally and longitudinally; according to the real-time decision objective, different end points are taken in the vehicle's state space, and smooth curves connect the start point with the end points of the different states;
(2) assigning different costs to the lateral and longitudinal curves according to ride comfort, whether the terminal state is reached, and so on;
(3) finally outputting the optimal solution that satisfies vehicle dynamics, safety and ride-comfort conditions.
Compared with the prior art, the indoor-robot robust SLAM method combining environmental semantics of the present invention has the following advantages:
(1) the proposed localization method can still be used in indoor environments with little texture and low contrast, and using the potential-field visual-odometry method can effectively improve the robustness of the indoor robot in featureless environments;
(2) using the localization, perception, prediction, map, path-planning and low-level control modules lets the indoor robot perceive the meaning of its surroundings, providing richer prior information for the robot's efficient decision-making and path planning.
Detailed description of the invention
The accompanying drawings, which form a part of the invention, are provided to give a further understanding of the invention; the illustrative embodiments of the invention and their descriptions explain the invention and do not constitute an undue limitation of it. In the drawings:
Fig. 1 is an overall flow diagram of the indoor-robot robust SLAM method combining environmental semantics described in the embodiment of the present invention.
Specific embodiment
It should be noted that, in the absence of conflict, the embodiments of the present invention and the features in the embodiments may be combined with each other.
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, the present invention provides an indoor-robot robust SLAM method combining environmental semantics, comprising the following steps:
1. Localization of the indoor robot: GPS global signals, IMU inertial-measurement-unit signals and LiDAR signals are used as the raw input signals, and a method of prior input plus observation correction is used to make the localization of the indoor robot more accurate and robust.
Indoors, position errors arise whenever the robot takes a non-right-angle turn at a corner, and these errors accumulate, being amplified further on later sections of the route. To eliminate them, the robust localization technique of the present invention uses a strategy of multi-line-laser global localization plus IMU local correction: the global GPS positioning signal is combined with the IMU inertial unit as a prior for the LiDAR, and the prior input is corrected with observations to make the localization of the indoor robot more accurate and robust.
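A minimal C++ sketch of this "prior input plus observation correction" scheme is given below, assuming a scan matcher (here a stub called MatchScan) and a fixed blending weight; both are illustrative assumptions rather than the invention's actual implementation.

#include <cmath>

struct Pose2D { double x, y, yaw; };

// Predict the next pose from wheel speed v (m/s) and IMU yaw rate w (rad/s).
Pose2D PredictWithImu(const Pose2D& p, double v, double w, double dt) {
  Pose2D q = p;
  q.yaw += w * dt;
  q.x += v * dt * std::cos(q.yaw);
  q.y += v * dt * std::sin(q.yaw);
  return q;
}

// Stub standing in for a LiDAR scan matcher (e.g. ICP/NDT) that refines the
// predicted pose against the current scan; a real system would implement it.
Pose2D MatchScan(const Pose2D& initial_guess) { return initial_guess; }

// Correct the IMU/odometry prediction with the scan-match observation.
Pose2D CorrectWithScan(const Pose2D& predicted) {
  const Pose2D observed = MatchScan(predicted);
  const double k = 0.3;  // illustrative trust placed in the observation
  return {predicted.x   + k * (observed.x   - predicted.x),
          predicted.y   + k * (observed.y   - predicted.y),
          predicted.yaw + k * (observed.yaw - predicted.yaw)};
}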
2. Perception of the indoor robot: the present invention takes radar and image data, the extrinsic parameters of the radar sensor calibration, the extrinsic and intrinsic parameters of the front-camera calibration, and the host's speed and angular velocity as input data; it outputs 3D obstacle tracks with heading, speed and classification information, lane markers as polylines or polynomial curves, and lane type by position: L1 (lower-left lane line), L0 (left lane line), R0 (right lane line), R1 (lower-right lane line), giving the robot a perception of the external environment.
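The perception outputs listed above can be pictured with the following illustrative C++ data structures; the field names and the cubic-polynomial lane model are assumptions chosen for the sketch, not types defined by the invention.

#include <string>
#include <vector>

enum class LanePosition { L1, L0, R0, R1 };  // lower-left, left, right, lower-right

struct Obstacle3D {
  double x, y, z;        // position in the robot frame, metres
  double heading;        // track heading, radians
  double speed;          // metres per second
  std::string category;  // classification, e.g. "pedestrian" or "cart"
};

struct LaneMarker {
  LanePosition position;
  // Polynomial lane model: y(x) = c0 + c1*x + c2*x^2 + c3*x^3 on [x_start, x_end]
  double c0, c1, c2, c3;
  double x_start, x_end;
};

struct PerceptionOutput {
  std::vector<Obstacle3D> obstacles;
  std::vector<LaneMarker> lane_markers;
};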
3. Semantic map of the indoor robot: the inputs are the NavigationInfo from the dreamview module, the LaneMarker from the perception module, and the location information from the localization module; the map-based path-planning navigation information is obtained from the occupancy-grid map built by the robot. It specifically includes:
Data acquisition: acquire panoramic image data, centimeter-level high-precision GPS data and road-surface scene data of the scanned sections with LiDAR.
Data preprocessing: thin out and partition the collected data to obtain a data format that can be processed further (see the thinning sketch after this list).
High-precision map vectorization: mainly extract lane lines and road contour lines from the data obtained in the previous step.
Lane centerline generation: first extract the lane centerlines, then merge the data, and finally convert the format of the lane centerlines.
Generation of an ordered topological structure: first perform manual interpretation, then perform process control, producing a release-version high-precision map.
Map quality control: mainly use a computer program to verify the visual effect, logic and precision of the map, producing the final-version high-precision map.
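As an example of the data thinning performed in the preprocessing step, the short C++ function below keeps a point only when it is at least min_spacing metres from the previously kept point; the real pipeline may use a more elaborate decimation, so this is an assumed, simplified form.

#include <cmath>
#include <vector>

struct Point3D { double x, y, z; };

std::vector<Point3D> ThinPoints(const std::vector<Point3D>& raw,
                                double min_spacing) {
  std::vector<Point3D> kept;
  for (const Point3D& p : raw) {
    if (kept.empty()) { kept.push_back(p); continue; }
    const Point3D& last = kept.back();
    // Planar spacing is enough for road-surface scans in this sketch.
    if (std::hypot(p.x - last.x, p.y - last.y) >= min_spacing) kept.push_back(p);
  }
  return kept;
}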
4. Decision-making of the indoor robot: the inputs are obstacle information, vehicle state, traffic-light state and map information; the outputs are control information for the vehicle in states such as overtaking, following and avoiding, and control information issued for predicted obstacles. It specifically includes the following (a combined ROS usage sketch follows this list):
(a) Using the amcl (adaptive Monte Carlo localization) package to localize in the map: probabilistic localization for a robot moving in 2D; the overall filter, laser model and odometry model parameters are configured in ROS; the inputs are laser scan data, tf transforms and the mean and covariance of the particle filter; the output is the estimated pose of the indoor robot in the map, realizing the transform from the base frame to the odometry frame and then to the map frame.
(b) Using the gmapping package to build maps with LiDAR and a depth camera: data are obtained through sensor_msgs/LaserScan to create a 2D occupancy-grid map; the inputs are laser scan data and the transforms between the laser frame, base frame and odometry frame; map metadata and map information are published periodically.
(c) Using the move_base package to move to a target position: it performs path planning and intelligent obstacle avoidance for the indoor robot and can autonomously select the optimal path after repeated learning.
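The sketch below shows one way the three packages are typically used together from a ROS C++ node, assuming amcl, a map source (gmapping or a saved map) and move_base are already running; the topic names follow the standard defaults and the goal coordinates are purely illustrative.

#include <actionlib/client/simple_action_client.h>
#include <geometry_msgs/PoseWithCovarianceStamped.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <nav_msgs/OccupancyGrid.h>
#include <ros/ros.h>

// Pose estimate published by amcl.
void PoseCallback(const geometry_msgs::PoseWithCovarianceStamped& msg) {
  ROS_INFO("amcl pose: x=%.2f y=%.2f",
           msg.pose.pose.position.x, msg.pose.pose.position.y);
}

// Occupancy grid published by gmapping / map_server.
void MapCallback(const nav_msgs::OccupancyGrid& map) {
  ROS_INFO("map: %u x %u cells, resolution %.3f m",
           map.info.width, map.info.height, map.info.resolution);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "indoor_nav_demo");
  ros::NodeHandle nh;
  ros::Subscriber pose_sub = nh.subscribe("amcl_pose", 1, PoseCallback);
  ros::Subscriber map_sub  = nh.subscribe("map", 1, MapCallback);

  // Send one navigation goal through the move_base action interface.
  actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> client(
      "move_base", true);
  client.waitForServer();

  move_base_msgs::MoveBaseGoal goal;
  goal.target_pose.header.frame_id = "map";
  goal.target_pose.header.stamp = ros::Time::now();
  goal.target_pose.pose.position.x = 2.0;     // illustrative target
  goal.target_pose.pose.orientation.w = 1.0;  // face forward
  client.sendGoal(goal);

  ros::spin();
  return 0;
}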
5. Path planning of the indoor robot: the inputs are localization, perception, prediction, routing and the map; the output is a collision-free and comfortable trajectory. The indoor-robot path planning uses an improved lattice algorithm, whose key steps are as follows (a selection sketch appears after this list):
a) Sample points laterally and longitudinally (six parameters in total: three lateral and three longitudinal, with associated speeds and timestamps); according to the real-time decision objective, such as car following or stopping, take different end points in the vehicle's state space and connect the start point with the end points of the different states using smooth curves.
b) Assign different costs to the lateral and longitudinal curves according to ride comfort, whether the terminal state is reached, and so on.
c) Bundle the lateral and longitudinal curves into final candidate trajectories, sort them by the bundled cost from small to large, and check whether each bundled curve satisfies the various geometry constraints and passes the collision check.
d) Finally output the optimal solution that satisfies vehicle dynamics, safety and ride-comfort conditions.
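The following C++ sketch illustrates steps b) to d) under simplified assumptions: each candidate 1-D curve already carries a cost, pairs are ranked by the summed cost, and the cheapest pair that passes a placeholder feasibility check is returned. It is a schematic of the selection logic, not the lattice planner's actual code.

#include <algorithm>
#include <optional>
#include <vector>

struct Curve1D {
  std::vector<double> coeffs;  // polynomial coefficients of the 1-D trajectory
  double cost;                 // comfort / end-state cost assigned in step b)
};

struct TrajectoryPair {
  const Curve1D* lon;
  const Curve1D* lat;
  double cost;
};

// Placeholder standing in for the geometry-constraint and collision checks.
bool PassesChecks(const TrajectoryPair&) { return true; }

std::optional<TrajectoryPair> SelectBest(const std::vector<Curve1D>& lon_set,
                                         const std::vector<Curve1D>& lat_set) {
  std::vector<TrajectoryPair> pairs;
  for (const Curve1D& lon : lon_set)
    for (const Curve1D& lat : lat_set)
      pairs.push_back({&lon, &lat, lon.cost + lat.cost});

  // Step c): sort the bundled candidates by cost, from small to large.
  std::sort(pairs.begin(), pairs.end(),
            [](const TrajectoryPair& a, const TrajectoryPair& b) {
              return a.cost < b.cost;
            });

  // Step d): the first candidate passing the checks is the output.
  for (const TrajectoryPair& p : pairs)
    if (PassesChecks(p)) return p;
  return std::nullopt;
}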
The implementation of the indoor-robot path-planning algorithm mainly has the following steps:
1. Obtain a reference line and convert it into PathPoint format.
2. Compute, on the obtained reference line, the match point of the initial planning point.
3. According to the computed match point, compute the initial state information of the Frenet frame (mainly referring to ptr_prediction_querier).
4. Parse the strategy and obtain the planning target: obtain ptr_path_time_graph, and obtain the planning target through reference_line_info->planning_target().
5. Generate the one-dimensional longitudinal and lateral path bundles respectively.
6. Evaluate the feasibility of the one-dimensional paths according to dynamic constraints, evaluate the feasible longitudinal and lateral paths, and sort them by cost.
7. Obtain the optimal path-bundle combination and return the first completely collision-free path:
1) Generate trajectory_pair_cost separately for the two cases of whether auto_tuning is enabled;
2) Merge two one-dimensional paths into one two-dimensional path;
3) Compute the lateral and longitudinal accelerations separately, taking path curvature into account; call ConstraintChecker::ValidTrajectory to perform the constraint check, and count the failures of each type of vehicle physical index separately;
4) Check whether the vehicle and the obstacles collide; when the check fails, increment the collision failure count by 1;
5) Import the two-dimensional path into the debugging data: update the debugging data through reference_line_info->SetTrajectory, reference_line_info->SetCost and reference_line_info->SetDrivable;
6) Generate the path with auto_tuning:
i) Obtain the future path from Localization:
DiscretizedTrajectory future_trajectory = GetFutureTrajectory();
ii) Map the existing longitudinal and lateral paths to the future path pair (a simplified Frenet projection sketch appears after this list):
MapFutureTrajectoryToSL(future_trajectory, *ptr_reference_line, &lon_future_trajectory, &lat_future_trajectory);
iii) Compute the cost of the future path pair:
trajectory_evaluator.EvaluateDiscreteTrajectory(planning_target, lon_future_trajectory, lat_future_trajectory, &future_traj_component_cost);
iv) Copy auto_tuning_data to planning_internal::PlanningData;
v) Generate the routing cost and the two-dimensional path points: combined_trajectory_points[i] is generated through reference_line_info->PriorityCost();
vi) If num_lattice_traj is greater than 0, increment the count of successfully planned paths by 1; otherwise enter the backup planning path (FLAGS_enable_backup_trajectory = true) or report that there is no planned path.
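For the Frenet (SL) mapping used in step ii), the simplified C++ sketch below projects a Cartesian point onto a polyline reference line to obtain its arc length s and signed lateral offset l; it illustrates the idea only and is not the MapFutureTrajectoryToSL implementation.

#include <cmath>
#include <utility>
#include <vector>

struct Vec2 { double x, y; };

// Returns (s, l) of point p relative to the polyline reference line.
std::pair<double, double> CartesianToSL(const std::vector<Vec2>& ref_line,
                                        const Vec2& p) {
  double best_s = 0.0, best_l = 0.0, best_d = 1e18, s_acc = 0.0;
  for (size_t i = 0; i + 1 < ref_line.size(); ++i) {
    const Vec2 a = ref_line[i], b = ref_line[i + 1];
    const double dx = b.x - a.x, dy = b.y - a.y;
    const double len = std::hypot(dx, dy);
    if (len < 1e-9) continue;
    // Projection parameter of p onto segment [a, b], clamped to [0, 1].
    double t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / (len * len);
    t = std::max(0.0, std::min(1.0, t));
    const Vec2 q{a.x + t * dx, a.y + t * dy};
    const double d = std::hypot(p.x - q.x, p.y - q.y);
    if (d < best_d) {
      best_d = d;
      best_s = s_acc + t * len;
      // Points to the left of the segment direction get a positive offset.
      const double cross = dx * (p.y - a.y) - dy * (p.x - a.x);
      best_l = (cross >= 0.0) ? d : -d;
    }
    s_acc += len;
  }
  return {best_s, best_l};
}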
The present invention designs a refined new architecture of the indoor robot's localization, perception, prediction, map, path-planning and low-level control modules, combined with the algorithmic steps of multi-sensor fusion, prediction and path planning, so that the robot is more robust to its environment while moving and making decisions.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (5)

1. An indoor-robot robust SLAM method combining environmental semantics, characterized by comprising:
localization of the indoor robot by fusing GPS global signals, IMU inertial-measurement-unit signals and LiDAR signals;
perception of the indoor robot by fusing radar and image data, the extrinsic parameters of the radar sensor calibration, the extrinsic and intrinsic parameters of the front-camera calibration, and the host's speed and angular velocity;
a semantic map of the indoor robot built by fusing the NavigationInfo from the dreamview module, the LaneMarker from the perception module and the location information from the localization module;
decision-making of the indoor robot by fusing obstacle information, vehicle state, traffic-light state and map information;
and path planning of the indoor robot by fusing localization, perception, prediction, routing and the map;
wherein the localization, perception, semantic map and decision outputs of the indoor robot are fed into the path planning of the indoor robot, which performs path planning and produces a motion trajectory, and the low-level control end issues control commands according to the planned path.
2. The indoor-robot robust SLAM method combining environmental semantics according to claim 1, characterized in that: in the localization of the indoor robot, the global GPS positioning signal is combined with the IMU inertial unit to serve as a prior for the LiDAR.
3. The indoor-robot robust SLAM method combining environmental semantics according to claim 1, characterized in that the semantic map of the indoor robot includes:
(1) data acquisition: acquiring panoramic image data, centimeter-level high-precision GPS data and road-surface scene data of the scanned sections with LiDAR;
(2) data preprocessing: thinning out and partitioning the collected data to obtain a data format that can be processed further;
(3) high-precision map vectorization: extracting lane lines and road contour lines from the data obtained in the previous step;
(4) lane centerline generation: first extracting the lane centerlines, then merging the data, and finally converting the format of the lane centerlines;
(5) generation of an ordered topological structure: first performing manual interpretation, then performing process control, producing a release-version high-precision map;
(6) map quality control: using a computer program to verify the visual effect, logic and precision of the map, producing the final-version high-precision map.
4. The indoor-robot robust SLAM method combining environmental semantics according to claim 1, characterized in that the decision-making of the indoor robot includes:
(1) using the amcl package to localize in the map: probabilistic localization for a robot moving in 2D, wherein the overall filter, laser model and odometry model parameters are configured in ROS, the inputs are laser scan data, tf transforms and the mean and covariance of the particle filter, and the output is the estimated pose of the indoor robot in the map, realizing the transform from the base frame to the odometry frame and then to the map frame;
(2) using the gmapping package to build maps with LiDAR and a depth camera: obtaining data through sensor_msgs/LaserScan and creating a 2D occupancy-grid map, with laser scan data and the transforms between the laser frame, base frame and odometry frame as inputs, and periodically publishing map metadata and map information;
(3) using the move_base package to move to a target position: performing path planning and intelligent obstacle avoidance for the indoor robot, and autonomously selecting the optimal path after repeated learning.
5. The indoor-robot robust SLAM method combining environmental semantics according to claim 1, characterized in that the path planning of the indoor robot uses an improved lattice algorithm, specifically comprising:
(1) sampling points laterally and longitudinally, taking different end points in the vehicle's state space according to the real-time decision objective, and connecting the start point with the end points of the different states using smooth curves;
(2) assigning different costs to the lateral and longitudinal curves according to ride comfort, whether the terminal state is reached, and so on;
(3) finally outputting the optimal solution that satisfies vehicle dynamics, safety and ride-comfort conditions.
CN201910610069.9A 2019-07-08 2019-07-08 An indoor-robot robust SLAM method combining environmental semantics Pending CN110220517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910610069.9A CN110220517A (en) 2019-07-08 2019-07-08 An indoor-robot robust SLAM method combining environmental semantics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910610069.9A CN110220517A (en) 2019-07-08 2019-07-08 An indoor-robot robust SLAM method combining environmental semantics

Publications (1)

Publication Number Publication Date
CN110220517A true CN110220517A (en) 2019-09-10

Family

ID=67812831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910610069.9A Pending CN110220517A (en) 2019-07-08 2019-07-08 An indoor-robot robust SLAM method combining environmental semantics

Country Status (1)

Country Link
CN (1) CN110220517A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111258312A (en) * 2020-01-20 2020-06-09 深圳市商汤科技有限公司 Movable model, control method, device, system, equipment and storage medium thereof
CN111486855A (en) * 2020-04-28 2020-08-04 武汉科技大学 Indoor two-dimensional semantic grid map construction method with object navigation points
CN112711249A (en) * 2019-10-24 2021-04-27 科沃斯商用机器人有限公司 Robot positioning method and device, intelligent robot and storage medium
CN113256716A (en) * 2021-04-21 2021-08-13 中国科学院深圳先进技术研究院 Robot control method and robot

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106681330A (en) * 2017-01-25 2017-05-17 北京航空航天大学 Robot navigation method and device based on multi-sensor data fusion
CN108776474A (en) * 2018-05-24 2018-11-09 中山赛伯坦智能科技有限公司 Robot embedded computing terminal integrating high-precision navigation positioning and deep learning
CN108873908A (en) * 2018-07-12 2018-11-23 重庆大学 The robot city navigation system that view-based access control model SLAM and network map combine
CN108955702A (en) * 2018-05-07 2018-12-07 西安交通大学 Based on the lane of three-dimensional laser and GPS inertial navigation system grade map creation system
US20190004510A1 (en) * 2017-07-03 2019-01-03 Baidu Usa Llc Human-machine interface (hmi) architecture
CN109410301A (en) * 2018-10-16 2019-03-01 张亮 High-precision semanteme map production method towards pilotless automobile
CN109556617A (en) * 2018-11-09 2019-04-02 同济大学 A kind of map elements extracting method of automatic Jian Tu robot
CN109900280A (en) * 2019-03-27 2019-06-18 浙江大学 A kind of livestock and poultry information Perception robot and map constructing method based on independent navigation

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106681330A (en) * 2017-01-25 2017-05-17 北京航空航天大学 Robot navigation method and device based on multi-sensor data fusion
US20190004510A1 (en) * 2017-07-03 2019-01-03 Baidu Usa Llc Human-machine interface (hmi) architecture
CN108955702A (en) * 2018-05-07 2018-12-07 西安交通大学 Based on the lane of three-dimensional laser and GPS inertial navigation system grade map creation system
CN108776474A (en) * 2018-05-24 2018-11-09 中山赛伯坦智能科技有限公司 Robot embedded computing terminal integrating high-precision navigation positioning and deep learning
CN108873908A (en) * 2018-07-12 2018-11-23 重庆大学 The robot city navigation system that view-based access control model SLAM and network map combine
CN109410301A (en) * 2018-10-16 2019-03-01 张亮 High-precision semanteme map production method towards pilotless automobile
CN109556617A (en) * 2018-11-09 2019-04-02 同济大学 A kind of map elements extracting method of automatic Jian Tu robot
CN109900280A (en) * 2019-03-27 2019-06-18 浙江大学 A kind of livestock and poultry information Perception robot and map constructing method based on independent navigation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
天涯0508: "ROS Development Notes (7): Navigation with amcl and move_base, and writing patrol-robot navigation code in Python", 《HTTPS://BLOG.CSDN.NET/WSC820508/ARTICLE/DETAILS/81627858》 *
许珂诚: "Detailed explanation of lattice planner planning", 《HTTPS://WWW.CNBLOGS.COM/FUHANG/P/9563884.HTML》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112711249A (en) * 2019-10-24 2021-04-27 科沃斯商用机器人有限公司 Robot positioning method and device, intelligent robot and storage medium
WO2021077941A1 (en) * 2019-10-24 2021-04-29 科沃斯商用机器人有限公司 Method and device for robot positioning, smart robot, and storage medium
CN111258312A (en) * 2020-01-20 2020-06-09 深圳市商汤科技有限公司 Movable model, control method, device, system, equipment and storage medium thereof
CN111258312B (en) * 2020-01-20 2024-04-02 深圳市商汤科技有限公司 Movable model, control method, device, system, equipment and storage medium thereof
CN111486855A (en) * 2020-04-28 2020-08-04 武汉科技大学 Indoor two-dimensional semantic grid map construction method with object navigation points
CN111486855B (en) * 2020-04-28 2021-09-14 武汉科技大学 Indoor two-dimensional semantic grid map construction method with object navigation points
CN113256716A (en) * 2021-04-21 2021-08-13 中国科学院深圳先进技术研究院 Robot control method and robot
WO2022222490A1 (en) * 2021-04-21 2022-10-27 中国科学院深圳先进技术研究院 Robot control method and robot
CN113256716B (en) * 2021-04-21 2023-11-21 中国科学院深圳先进技术研究院 Control method of robot and robot

Similar Documents

Publication Publication Date Title
JP7020728B2 (en) System, method and program
JP7077520B2 (en) A system that determines lane assignments for autonomous vehicles, computer-implemented methods for determining lane assignments for autonomous vehicles, and computer programs.
JP7432285B2 (en) Lane mapping and navigation
KR102565533B1 (en) Framework of navigation information for autonomous navigation
CN107144285B (en) Pose information determination method and device and movable equipment
CN111912417B (en) Map construction method, map construction device, map construction equipment and storage medium
CN110220517A (en) An indoor-robot robust SLAM method combining environmental semantics
JP7082545B2 (en) Information processing methods, information processing equipment and programs
US9255989B2 (en) Tracking on-road vehicles with sensors of different modalities
CN110057373A (en) For generating the method, apparatus and computer storage medium of fine semanteme map
WO2018126228A1 (en) Sign and lane creation for high definition maps used for autonomous vehicles
CN109991984A (en) For generating the method, apparatus and computer storage medium of fine map
CN106980657A (en) A kind of track level electronic map construction method based on information fusion
JP2022542289A (en) Mapping method, mapping device, electronic device, storage medium and computer program product
US20200033155A1 (en) Method of navigating an unmaned vehicle and system thereof
CN109541535A (en) A method of AGV indoor positioning and navigation based on UWB and vision SLAM
CN110211228A (en) For building the data processing method and device of figure
CN105204510A (en) Generation method and device for probability map for accurate positioning
CN115639823B (en) Method and system for controlling sensing and movement of robot under rugged undulating terrain
US11035679B2 (en) Localization technique
CN109443368A (en) Air navigation aid, device, robot and the storage medium of unmanned machine people
JP2008065087A (en) Apparatus for creating stationary object map
CN112068152A (en) Method and system for simultaneous 2D localization and 2D map creation using a 3D scanner
Li et al. Hybrid filtering framework based robust localization for industrial vehicles
CN116047565A (en) Multi-sensor data fusion positioning system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190910