CN115342805A - High-precision robot positioning navigation system and navigation method - Google Patents


Info

Publication number
CN115342805A
CN115342805A (application CN202210733489.8A)
Authority
CN
China
Prior art keywords
robot
data
inertial
positioning
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210733489.8A
Other languages
Chinese (zh)
Other versions
CN115342805B (en)
Inventor
李鑫鑫
胡荣林
段媛媛
谢岩
张亚光
刘宬邑
蔡进恒
谢敏
董甜甜
张海艳
邵鹤帅
朱全银
肖绍章
王忆雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202210733489.8A priority Critical patent/CN115342805B/en
Publication of CN115342805A publication Critical patent/CN115342805A/en
Application granted granted Critical
Publication of CN115342805B publication Critical patent/CN115342805B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0014 Image feed-back for automatic industrial control, e.g. robot with camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a high-precision robot positioning and navigation method and system. The robot chassis comprises a robot control system, a mobile communication module, and a positioning and navigation system. The positioning and navigation system consists of a front camera, a rear camera, and an inertial sensor: the cameras acquire information from the ground auxiliary device, and the inertial sensor acquires the robot's position information. The front and rear cameras follow the tracking line simultaneously to correct the robot's left-right driving deviation, and at the position correction identification points of the ground auxiliary device the robot performs visual-inertial information fusion to calibrate its position. Correcting lateral deviation with two line-following cameras and fusing visual and inertial information at known marker points improves the positioning accuracy of the robot.

Description

High-precision robot positioning and navigation system and navigation method

Technical Field

The invention belongs to the field of robot positioning and navigation, and in particular relates to a high-precision robot positioning and navigation system and navigation method.

Background

With the progress and development of society, replacing manual handling of heavy goods with robots has become a trend in industrial production. Robots of this type on the market today mainly rely on magnetic navigation, laser navigation, two-dimensional-code navigation, inertial navigation, and similar methods.

Magnetic navigation uses magnetic strips and markers to position and guide the robot, with the strips guiding it along a fixed trajectory. In practice, laying the magnetic strips is difficult and construction is expensive, which raises production costs and reduces the enterprise's profit.

Laser navigation installs laser reflectors around the robot's driving path; the robot emits a laser beam and collects the beam reflected back to determine its current position and heading. Although laser navigation is highly accurate, it is costly and places strict demands on the environment, making it hard to apply in complex settings.

Two-dimensional-code navigation lays out QR codes discretely; the robot's camera scans and parses them to obtain real-time coordinates. Because the codes wear easily and need regular maintenance, later upkeep is expensive, and the method suits only well-kept warehouses, not complex environments.

Inertial navigation mounts a gyroscope on the robot to obtain its three-axis angular velocity and acceleration and positions the robot by integration. Because gyroscope error accumulates over time, the robot drifts and may even lose its position entirely, causing serious losses; this cannot meet the demands of high-precision positioning and navigation.

The navigation methods above suffer from expensive site preparation, low positioning and navigation accuracy, strict requirements on the operating scene, and poor suitability for complex environments. A low-cost, easily deployed, high-precision positioning and navigation method is therefore needed.

Summary of the Invention

Purpose of the invention: the present invention aims to provide a high-precision robot positioning and navigation method and system, effectively solving the problems that existing robot positioning and navigation is not accurate enough and that existing high-precision systems place strict demands on the operating scene.

Technical solution: the high-precision robot positioning and navigation system of the present invention comprises a robot chassis and a ground auxiliary device;

the robot chassis comprises a robot control system, a positioning and navigation system, and a mobile communication module;

the ground auxiliary device comprises a tracking line that the robot can easily recognize and position correction identification points in a color different from the tracking line, providing detection information the robot can acquire effectively.

Further, the positioning and navigation system comprises a front camera mounted at the middle of the front of the chassis, a rear camera mounted at the middle of the rear of the chassis, and an inertial sensor mounted at the center of the chassis.

Further, the mobile communication module handles data transmission between the control system and the background: the robot's driving data is sent to the background through the module, the background receives operation instructions and work tasks from the user terminal, and the background data is displayed synchronously on the user terminal.

Further, regarding the placement of the position correction identification points of the ground auxiliary device: along the straight part of the tracking line, one position correction identification point is calibrated at every interval of 0.5l1 to 3l1 (l1 is the robot length); along the curved part, one is calibrated at an interval determined by the robot turning radius l2 [spacing formula given only as an image in the source].

The invention also discloses a high-precision robot positioning and navigation method, comprising the following steps:

Step 1: The user terminal sends an instruction to start the robot, and the robot initializes;

Step 2: Acquire the robot's initial position data, which comprises visual initialization data, inertial initialization data, and visual-inertial fusion data; the visual data estimates the robot's position, the inertial data estimates its pose, and visual-inertial information fusion then refines the position estimate;

Step 3: Send the robot's visual initialization data, inertial sensor initialization data, and visual-inertial fusion data from step 2 to the background in real time through the mobile communication module;

Step 4: The robot enters standby and waits for the user to issue a work task to the background through the user terminal;

Step 5: The robot checks whether there is a work task; if not, it returns to step 4; if so, it enters the working state and performs the following steps;

Step 6: The visual and inertial sensors acquire the robot's driving data in real time, including key frame data of the ground tracking line from the front and rear cameras and key frame data of the position correction points from the rear camera;

Step 7: The rear camera detects whether the robot has reached a position correction identification point, deciding whether visual-inertial information fusion is needed to refine the positioning; if the robot has reached a position correction identification point, go to step 8; otherwise go directly to step 9;

Step 8: Visual-inertial information fusion;

Step 9: The front and rear cameras follow the line simultaneously and, based on the position of the theoretical tracking line, a control algorithm corrects the robot's driving error and controls its motion;

Step 10: Check whether the robot's work task is finished; if not, return to step 6; if it is, proceed to the next step;

Step 11: The robot's task ends; if the user performs no operation, the robot returns to step 2; if the user shuts the robot down, it powers off;

Step 12: End.
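Viewed as control flow, steps 1 through 12 form an initialize/standby/work loop. The Python sketch below is a minimal illustration of that loop; the robot interface (initialize, has_task, read_sensors, and so on) is hypothetical shorthand for the operations the steps name, not an API from the patent.

```python
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()   # steps 4-5: wait for a work task
    WORKING = auto()   # steps 6-10: drive, fuse, control
    OFF = auto()       # steps 11-12: user shut the robot down

def run(robot):
    robot.initialize()                      # steps 1-3: init, report to background
    state = State.STANDBY
    while state != State.OFF:
        if state == State.STANDBY:
            if robot.has_task():            # step 5
                state = State.WORKING
        else:  # WORKING
            frame = robot.read_sensors()    # step 6: cameras + inertial data
            if frame.at_correction_point:   # step 7
                robot.fuse_visual_inertial(frame)   # step 8
            robot.follow_line(frame)        # step 9: dual-camera line control
            if robot.task_finished():       # steps 10-11
                state = State.OFF if robot.shutdown_requested() else State.STANDBY
```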

Further, in step 6, the front and rear cameras acquire the key frame data of the ground tracking line:

The front camera computes the observed line-following data, including the abscissa xa0 of the first row of the tracking line and the abscissa xan of the row n lines below it, and from these computes the robot offset angle θa observed by the front camera [the formula is given only as an image in the source].

The rear camera computes the observed line-following data, including the abscissa xb0 of the first row of the tracking line and the abscissa xbn of the row n lines below it, and from these computes the robot offset angle θb observed by the rear camera [the formula is given only as an image in the source].

The rear camera acquires key frame data of the robot position correction points:

Step 1.1: If the rear camera detects a position correction point, the robot's visual mileage d2 is computed from the known ground layout of the position correction identification points, where

d2 = d′2 + l

where d′2 is the visual mileage at the previous correction point and l is the known distance between position correction identification points set on the ground;

Step 1.2: If the rear camera does not detect a position correction identification point, the visual mileage d2 remains unchanged;

Step 1.3: The inertial sensor computes the robot's inertial mileage d1, calculated as [standard dead-reckoning form; the source shows this formula only as an image]:

d1 = d0 + v·t + (1/2)·a·t²

where d0 is the inertial mileage of the previous cycle, v is the robot speed measured by the inertial sensor, a is the robot acceleration measured by the inertial sensor, and t is the inertial sensor's sampling period.

Further, in step 8, the visual-inertial information fusion method is as follows:

Step 2.1: If the visual mileage d2 and the inertial mileage d1 differ by no more than half the known distance l between two positioning identification points on the ground, i.e.

max{|d2-d1|, |d1-d2|} ≤ l/2

then the inertial mileage d1 is updated to the visual mileage d2, i.e. d1 = d2;

Step 2.2: If the visual mileage d2 and the inertial mileage d1 differ by more than the known distance l between two positioning identification points on the ground, i.e.

max{|d2-d1|, |d1-d2|} > l + l·k (0 ≤ k ≤ 3)

this indicates a position correction identification point is missing from the ground, handled as follows: when driving in a straight line, the inertial mileage d1 is updated to the identification point distance d2 = d2 + l·k; if k does not satisfy the constraint, the robot sends abnormal data to the background and waits for the user to re-calibrate; once the user has corrected the data, go to step 9.

Further, the specific steps of step 9 are:

Step 3.1: Control the robot to follow the line according to the front camera's data, keeping the tracking line at the middle of the front of the robot; the control system output is computed as:

u1 = Kp1·e1 + Ki1·∑e1 + Kd1·e1

e1=θa1 e 1 = θ a - θ 1

where Kp1, Ki1, Kd1 are control coefficients, e1 is the error between the front camera observation θa and the theoretical value θ1, and u1 is the control system output;

Step 3.2: Control the robot to follow the line according to the rear camera's data, keeping the tracking line at the middle of the rear of the robot; the control system output is computed as:

u2 = Kp2·e2 + Ki2·∑e2 + Kd2·e2

e2=θb2 e 2 = θ b - θ 2

where Kp2, Ki2, Kd2 are control coefficients, e2 is the error between the rear camera observation θb and the theoretical value θ2, and u2 is the control system output.

Beneficial effects: compared with the prior art, the present invention has the following significant advantages:

(1) The visual-inertial positioning and navigation system of the present invention is inexpensive and achieves high-precision positioning and navigation.

(2) The invention uses front and rear cameras lying on the same straight line, ensuring the robot as a whole has no left-right deviation; the algorithm is simple, greatly reducing the controller's computational load.

(3) The invention relies on a simple ground auxiliary device, making it easy to deploy in different scenes and easier to use in complex industrial settings.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the high-precision robot system of the present invention;

Fig. 2 is a device diagram of the positioning and navigation system of the present invention;

Fig. 3 is a schematic diagram of the ground auxiliary device of the present invention;

Fig. 4 is a structural block diagram of the robot system of the present invention;

Fig. 5 is a flow chart of the robot positioning and navigation method of the present invention.

Detailed Description of the Embodiments

The technical solution of the present invention is further described below with reference to the accompanying drawings.

Aiming at the problem of low positioning and navigation accuracy in the related art, an embodiment of the present invention proposes a high-precision positioning and navigation system. The system employs a high-precision handling robot that not only positions itself accurately but also keeps the robot as a whole free of left-right deviation, meeting the demands of high-precision robot positioning and navigation in practical applications.

A high-precision robot positioning and navigation system according to an embodiment of the present invention is described below with reference to the accompanying drawings.

As shown in Figure 1, the high-precision robot positioning and navigation system comprises a robot chassis and a ground auxiliary device.

The robot chassis comprises a robot control system, a mobile communication module, and a positioning and navigation system. The positioning and navigation system consists of a front camera, a rear camera, and an inertial sensor. The front and rear cameras identify the tracking line, and the rear camera additionally detects position correction identification points. The user terminal communicates with the background in real time, allowing the user to conveniently view the robot's running state and driving position and to control the robot. The ground auxiliary device comprises a tracking line that the robot can easily recognize and position correction identification points in a color different from the tracking line, providing detection information the robot can acquire effectively.

According to this embodiment, the user lays out the ground auxiliary device in advance, which provides the positioning and navigation information the robot needs while driving. After the user starts the robot through the user terminal, the robot initializes its positioning and navigation system; once initialization completes, the control system waits for work tasks from the background via the mobile communication module. While the robot executes a task, the inertial sensor computes the inertial mileage, the front and rear cameras follow the line simultaneously to keep the robot free of left-right deviation, and the rear camera watches for position correction identification points; when the robot reaches one, visual-inertial information fusion corrects the robot's positioning error.

As shown in Figure 2, the robot's left front wheel 101, left rear wheel 102, right front wheel 103, and right rear wheel 104 are placed as illustrated. The cameras are embedded under the robot chassis: the front camera 2111 is mounted midway between the left and right front wheels, and the rear camera 2112 midway between the left and right rear wheels. Since the chassis sits close to the ground, both cameras are low-profile (thin) cameras. The inertial sensor 212 is placed at the geometric center of the chassis.

During normal straight-line driving, the front camera 2111 and rear camera 2112 track the line simultaneously, keeping both cameras and the ground auxiliary device on the same straight line. During curved driving, both cameras track simultaneously, keeping them on the same tangent to the tracking line of the ground auxiliary device, and the robot drives normally. When the rear camera 2112 detects a ground position correction identification point, visual-inertial information fusion is performed to correct the positioning error.

As shown in Figure 3, regarding the placement of the position correction identification points of the ground auxiliary device: along the straight part of the tracking line, one position correction identification point is calibrated at every interval of 0.5l1 to 3l1 (l1 is the robot length); along the curved part, one is calibrated at an interval determined by the robot turning radius l2 [spacing formula given only as an image in the source].
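As a worked example of the straight-segment rule, the sketch below computes marker positions for a spacing chosen inside the stated 0.5l1-3l1 band; the function name and the sample numbers are illustrative, and the curved-segment spacing is omitted because its formula appears only as an image in the source.

```python
def straight_marker_positions(segment_length, l1, factor=2.0):
    """Marker positions along a straight tracking-line segment.

    factor * l1 is the chosen spacing; the text requires it to lie
    between 0.5 * l1 and 3 * l1 (l1 = robot length).
    """
    if not 0.5 <= factor <= 3.0:
        raise ValueError("spacing must be between 0.5*l1 and 3*l1")
    spacing = factor * l1
    count = int(segment_length // spacing)
    return [round(i * spacing, 3) for i in range(count + 1)]

# A 10 m straight segment and a 0.8 m robot -> one marker every 1.6 m.
print(straight_marker_positions(10.0, l1=0.8))
# [0.0, 1.6, 3.2, 4.8, 6.4, 8.0, 9.6]
```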

While the high-precision robot executes a task, the front and rear cameras track simultaneously, ensuring the whole robot stays directly above the tracking line and avoiding the situation where only the front of the robot is above the line while other parts have already drifted off it. The rear camera also checks whether the robot has reached a position correction identification point; when one is detected, visual-inertial information fusion corrects the robot's positioning deviation.

As shown in Figure 4, the robot's main control system in this embodiment consists of a power module, a control system, a positioning and navigation system, and a mobile communication module; the positioning and navigation system in turn consists of a front camera, a rear camera, and an inertial sensor.

The power module supplies the whole circuit. The mobile communication module receives the robot's work tasks from the background and forwards them to the main control system over a serial port; in turn, the main control system sends the robot's running state to the module over the serial port, and the module relays it to the background. The inertial sensor computes the inertial mileage, while the front and rear cameras track simultaneously and correct left-right deviation so the robot stays free of lateral error while driving. The rear camera also detects position correction identification points; when one is found, visual-inertial information fusion calibrates the robot's positioning. The front and rear cameras exchange data with the main control system over serial ports and are ultra-thin cameras. Specifically, in one embodiment the inertial sensor may be an MPU6050, exchanging data with the main control system over the I2C (IIC) bus protocol.
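For the MPU6050 mentioned above, a minimal sensor read over I2C might look like the following sketch. It assumes a Linux host, the smbus2 Python package, and the MPU6050's standard register map (device address 0x68, power register 0x6B, 14-byte data block starting at 0x3B); this wiring detail is background knowledge about the part, not something specified in the patent.

```python
from smbus2 import SMBus
import time

MPU6050_ADDR = 0x68          # default I2C address of the MPU6050
PWR_MGMT_1   = 0x6B          # power management register
ACCEL_XOUT_H = 0x3B          # start of the 14-byte accel/temp/gyro block

def to_int16(hi, lo):
    """Combine two register bytes into a signed 16-bit value."""
    v = (hi << 8) | lo
    return v - 0x10000 if v & 0x8000 else v

with SMBus(1) as bus:                                   # I2C bus 1
    bus.write_byte_data(MPU6050_ADDR, PWR_MGMT_1, 0)    # wake the sensor
    time.sleep(0.1)
    raw = bus.read_i2c_block_data(MPU6050_ADDR, ACCEL_XOUT_H, 14)
    ax = to_int16(raw[0], raw[1]) / 16384.0    # accel x in g (±2 g scale)
    gz = to_int16(raw[12], raw[13]) / 131.0    # gyro z in deg/s (±250 scale)
    print(f"forward accel: {ax:.3f} g, yaw rate: {gz:.2f} deg/s")
```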

Figure 5 is a flow chart of the high-precision positioning and navigation method and system according to a specific embodiment of the present invention. The specific steps are as follows:

Step 1: The user terminal sends an instruction to start the robot, and the robot initializes;

Step 2: Acquire the robot's initial position data, which comprises visual initialization data, inertial initialization data, and visual-inertial fusion data. The visual data estimates the robot's position, the inertial data estimates its pose, and visual-inertial information fusion then refines the position estimate;

Step 3: Send the robot's visual initialization data, inertial sensor initialization data, and visual-inertial fusion data from step 2 to the background in real time through the mobile communication module;

Step 4: The robot enters standby and waits for the user to issue a work task to the background through the user terminal;

Step 5: The robot checks whether there is a work task; if not, it returns to step 4; if so, it enters the working state and performs the following steps;

Step 6: The visual and inertial sensors acquire the robot's driving data in real time. The driving data includes:

(1) Key frame data of the ground tracking line from the front and rear cameras:

The front camera computes the observed line-following data, including the abscissa xa0 of the first row of the tracking line and the abscissa xan of the row n lines below it, and from these computes the robot offset angle θa observed by the front camera [the formula is given only as an image in the source].

The rear camera computes the observed line-following data, including the abscissa xb0 of the first row of the tracking line and the abscissa xbn of the row n lines below it, and from these computes the robot offset angle θb observed by the rear camera [the formula is given only as an image in the source].

(2) Key frame data of the robot position correction points from the rear camera:

1) If the rear camera detects a position correction point, the robot's visual mileage d2 is computed from the known ground layout of the position correction identification points, where

d2 = d′2 + l

where d′2 is the visual mileage at the previous correction point and l is the known distance between position correction identification points set on the ground;

2) If the rear camera does not detect a position correction identification point, the visual mileage d2 remains unchanged;

(3) The inertial sensor computes the robot's inertial mileage d1, calculated as [standard dead-reckoning form; the source shows this formula only as an image]:

d1 = d0 + v·t + (1/2)·a·t²

where d0 is the inertial mileage of the previous cycle, v is the robot speed measured by the inertial sensor, a is the robot acceleration measured by the inertial sensor, and t is the inertial sensor's sampling period;
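Putting the step 6 quantities together, one frame of driving data could be processed as in the sketch below. The arctangent form of the offset angle is an assumption, since the patent's θa and θb formulas appear only as images; the two mileage updates follow the stated rules d2 = d′2 + l and the dead-reckoning form of d1. All sample values are illustrative.

```python
import math

def offset_angle(x_first_row, x_nth_row, n):
    """Offset angle from two tracking-line abscissas n image rows apart.
    The slope-to-angle form is an assumption; the source formula is an image."""
    return math.atan2(x_nth_row - x_first_row, n)   # radians

def update_visual_mileage(d2_prev, marker_detected, l):
    """d2 advances by the known marker spacing l at each detected
    correction point and is otherwise unchanged (items 1) and 2))."""
    return d2_prev + l if marker_detected else d2_prev

def update_inertial_mileage(d0, v, a, t):
    """Dead-reckoned inertial mileage over one sampling period t (item (3))."""
    return d0 + v * t + 0.5 * a * t * t

# Illustrative values: pixel columns from both cameras, 0.02 s IMU period.
theta_a = offset_angle(x_first_row=160, x_nth_row=168, n=20)  # front camera
theta_b = offset_angle(x_first_row=161, x_nth_row=158, n=20)  # rear camera
d2 = update_visual_mileage(d2_prev=4.0, marker_detected=True, l=1.6)
d1 = update_inertial_mileage(d0=5.72, v=0.5, a=0.1, t=0.02)
```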

Step 7: The rear camera detects whether the robot has reached a position correction identification point, deciding whether visual-inertial information fusion is needed to refine the positioning. If so, go to step 8; otherwise go directly to step 9;

Step 8: The visual-inertial information fusion method is as follows:

(1) If the visual mileage d2 and the inertial mileage d1 differ by no more than half the known distance l between two positioning identification points on the ground, i.e.

max{|d2-d1|, |d1-d2|} ≤ l/2

then the inertial mileage d1 is updated to the visual mileage d2, i.e. d1 = d2;

(2) If the visual mileage d2 and the inertial mileage d1 differ by more than the known distance l between two positioning identification points on the ground, i.e.

max{|d2-d1|, |d1-d2|} > l + l·k (0 ≤ k ≤ 3)

this indicates a position correction identification point is missing from the ground, handled as follows: when driving in a straight line, the inertial mileage d1 is updated to the identification point distance d2 = d2 + l·k. If k does not satisfy the constraint, the robot sends abnormal data to the background and waits for the user to re-calibrate the data. Once the user has corrected the data, go to step 9;
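A compact reading of the two fusion cases in step 8 is sketched below, under the assumption that k counts whole missed markers; the rounding used to estimate k is illustrative, since the text only bounds k by 0 ≤ k ≤ 3.

```python
def fuse_mileage(d1, d2, l, k_max=3):
    """Step-8 fusion sketch. Case (1): if d1 and d2 agree to within l/2,
    snap the inertial mileage d1 to the visual mileage d2. Case (2): if they
    disagree by more than the marker spacing l, assume k correction markers
    were missed and credit them to d2; if no admissible k exists, report
    abnormal data to the background for manual re-calibration.
    Returns (d1, d2, status)."""
    if abs(d2 - d1) <= l / 2:
        return d2, d2, "fused"                 # case (1): d1 = d2
    if abs(d2 - d1) > l:
        k = round((d1 - d2) / l)               # estimated missed markers
        if 0 <= k <= k_max:
            d2 += l * k                        # case (2): d2 = d2 + l*k
            return d2, d2, "missed-markers"
    return d1, d2, "report-anomaly"            # wait for user correction

print(fuse_mileage(d1=7.9, d2=8.0, l=1.6))     # -> (8.0, 8.0, 'fused')
print(fuse_mileage(d1=9.6, d2=6.4, l=1.6))     # -> (9.6, 9.6, 'missed-markers')
```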

Step 9: The front and rear cameras follow the line simultaneously, and the robot's driving error is corrected against the position of the theoretical tracking line; a control algorithm controls the robot's driving.

(1) Control the robot to follow the line according to the front camera's data, keeping the tracking line at the middle of the front of the robot; the control system output is computed as:

u1 = Kp1·e1 + Ki1·∑e1 + Kd1·e1

e1=θa1 e 1 = θ a - θ 1

where Kp1, Ki1, Kd1 are control coefficients, e1 is the error between the front camera observation θa and the theoretical value θ1, and u1 is the control system output;

(2) Control the robot to follow the line according to the rear camera's data, keeping the tracking line at the middle of the rear of the robot; the control system output is computed as:

u2 = Kp2·e2 + Ki2·∑e2 + Kd2·e2

e2=θb2 e 2 = θ b - θ 2

where Kp2, Ki2, Kd2 are control coefficients, e2 is the error between the rear camera observation θb and the theoretical value θ2, and u2 is the control system output;
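Both controllers in step 9 are PID loops on a camera's angle error. The sketch below uses the conventional difference term Kd·(e - e_prev) for the derivative, whereas the expressions above print the last term as Kd·e, so treat the derivative form here as an assumption; the gains are illustrative, not values from the source.

```python
class LineFollowPID:
    """One PID loop per camera on e = theta_observed - theta_theoretical."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_sum = 0.0
        self.e_prev = 0.0

    def update(self, theta_obs, theta_ref):
        e = theta_obs - theta_ref        # e1 = θa - θ1 or e2 = θb - θ2
        self.e_sum += e                  # accumulated ∑e term
        u = (self.kp * e
             + self.ki * self.e_sum
             + self.kd * (e - self.e_prev))
        self.e_prev = e
        return u                         # u1 or u2, fed to the drive control

front = LineFollowPID(kp=1.2, ki=0.01, kd=0.3)    # illustrative gains
rear  = LineFollowPID(kp=1.2, ki=0.01, kd=0.3)
u1 = front.update(theta_obs=0.05, theta_ref=0.0)  # front camera correction
u2 = rear.update(theta_obs=-0.02, theta_ref=0.0)  # rear camera correction
```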

Step 10: Check whether the robot's work task is finished; if not, return to step 6; if it is, proceed to the next step.

Step 11: The robot's task ends:

(1) If the user performs no operation, the robot returns to step 2;

(2) If the user shuts the robot down, it powers off.

Step 12: End.

Claims (8)

1. A high-precision robot positioning and navigation system, characterized by comprising a robot chassis and a ground auxiliary device;
the robot chassis comprises a robot control system, a positioning and navigation system and a mobile communication module;
the ground auxiliary device comprises a tracking line that is easy for the robot to recognize and position correction identification points in a color different from the tracking line, providing detection information that the robot can acquire effectively.
2. The high-precision robot positioning and navigation system of claim 1, wherein the positioning and navigation system comprises a front camera mounted in the middle of the front of the chassis, a rear camera mounted in the middle of the rear of the chassis, and an inertial sensor mounted in the center of the chassis.
3. The high-precision robot positioning and navigation system according to claim 1, wherein the mobile communication module is used for data transmission between the control system and the background: the robot's driving data is sent to the background through the mobile communication module, the background receives operation instructions and work tasks from the user terminal, and the background data is displayed synchronously on the user terminal.
4. The high-precision robot positioning and navigation system according to claim 1, wherein, for the positions of the position correction identification points of the ground auxiliary device, one position correction identification point is calibrated along the straight part of the tracking line at every interval of 0.5l1~3l1 (l1 is the robot length), and one is calibrated along the curved part of the tracking line at an interval determined by the robot turning radius l2 [spacing formula given only as an image in the source].
5. A high-precision robot positioning and navigation method, characterized by comprising the following steps:
step 1: the robot is started by an instruction sent through a user terminal, and the robot initializes;
step 2: acquiring initial position data of the robot, wherein the initial position data comprises visual initialization data, inertial initialization data and visual-inertial information fusion data; the position of the robot is estimated from the visual data, the pose of the robot is estimated from the inertial data, and visual-inertial information fusion is then carried out to estimate the robot's position more accurately;
step 3: the visual initialization data, the inertial sensor initialization data and the visual-inertial information fusion data of the robot in step 2 are sent to the background in real time through the mobile communication module;
step 4: the robot enters a standby state and waits for a user to release a work task to the background through the user terminal;
step 5: the robot detects whether there is a work task; if not, it jumps to step 4; if so, the robot enters the working state and executes the following steps;
step 6: the visual sensor and the inertial sensor acquire the driving data of the robot in real time, the driving data comprising key frame data of the ground tracking line acquired by the front and rear cameras and key frame data of the robot position correction points acquired by the rear camera;
step 7: the rear camera detects whether the robot has driven to a position correction identification point and judges whether visual-inertial information fusion is needed to further refine the robot's positioning; if the robot has driven to a position correction identification point, go to step 8; otherwise go directly to step 9;
step 8: visual-inertial information fusion;
step 9: the front and rear cameras follow the line simultaneously, the driving error of the robot is corrected according to the position of the theoretical tracking line, and a control algorithm controls the driving of the robot;
step 10: whether the work task of the robot is finished is judged; if not, jump to step 6; if the task is completed, enter the next step;
step 11: the robot task ends; if the user performs no operation, the robot enters step 2; if the user shuts down the robot, the robot enters the shutdown state;
step 12: end.
6. The method according to claim 5, wherein in step 6 the front and rear cameras acquire the key frame data of the ground tracking line as follows:
the front camera calculates the observed line-following data, including the first-row abscissa xa0 of the tracking line and the abscissa xan of the tracking line n rows below, and calculates the robot offset angle θa observed by the front camera [formula given only as an image in the source];
the rear camera calculates the observed line-following data, including the first-row abscissa xb0 of the tracking line and the abscissa xbn of the tracking line n rows below, and calculates the robot offset angle θb observed by the rear camera [formula given only as an image in the source];
the rear camera acquires the key frame data of the robot position correction points as follows:
step 1.1: if the rear camera detects a position correction point, the robot visual mileage d2 is calculated according to the known ground layout of the position correction identification points, where
d2 = d′2 + l
in which d′2 is the visual mileage at the previous position correction point and l is the known distance between position correction identification points set on the ground;
step 1.2: if the rear camera does not detect a position correction identification point, the robot visual mileage d2 remains unchanged;
step 1.3: the inertial sensor calculates the robot inertial mileage d1 as
d1 = d0 + v·t + (1/2)·a·t²
[standard dead-reckoning form; the source shows this formula only as an image], where d0 is the inertial mileage of the previous cycle, v is the robot speed measured by the inertial sensor, a is the robot acceleration measured by the inertial sensor, and t is the sampling period of the inertial sensor.
7. The high-precision robot positioning and navigation method according to claim 5, wherein in step 8 the visual-inertial information fusion method comprises the following steps:
step 2.1: if the visual mileage d2 and the inertial mileage d1 differ by no more than half the known distance l between two positioning identification points on the ground, i.e.
max{|d2-d1|, |d1-d2|} ≤ l/2
the inertial mileage d1 is updated to the visual mileage d2, i.e. d1 = d2;
step 2.2: if the visual mileage d2 and the inertial mileage d1 differ by more than the known distance l between two positioning identification points on the ground, i.e.
max{|d2-d1|, |d1-d2|} > l + l·k (0 ≤ k ≤ 3)
a position correction identification point is missing from the ground, and the processing method is as follows: when driving in a straight line, the inertial mileage d1 is updated to the positioning identification point distance d2 = d2 + l·k; if k does not satisfy the constraint, the robot sends abnormal data to the background and waits for the user to correct the data again, and after the user finishes correcting the data, step 9 is entered.
8. The high-precision robot positioning and navigation method according to claim 5, wherein the specific steps of step 9 are as follows:
step 3.1: controlling the robot to follow the line according to the data of the front camera, keeping the tracking line at the middle of the front of the robot, with the control system output calculated as
u1 = Kp1·e1 + Ki1·∑e1 + Kd1·e1
e1 = θa - θ1
where Kp1, Ki1, Kd1 are control coefficients, e1 is the error between the front camera observation θa and the theoretical value θ1, and u1 is the output of the control system;
step 3.2: controlling the robot to follow the line according to the data of the rear camera, keeping the tracking line at the middle of the rear of the robot, with the control system output calculated as
u2 = Kp2·e2 + Ki2·∑e2 + Kd2·e2
e2 = θb - θ2
where Kp2, Ki2, Kd2 are control coefficients, e2 is the error between the rear camera observation θb and the theoretical value θ2, and u2 is the output of the control system.
CN202210733489.8A 2022-06-27 2022-06-27 High-precision robot positioning navigation system and navigation method Active CN115342805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210733489.8A CN115342805B (en) 2022-06-27 2022-06-27 High-precision robot positioning navigation system and navigation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210733489.8A CN115342805B (en) 2022-06-27 2022-06-27 High-precision robot positioning navigation system and navigation method

Publications (2)

Publication Number Publication Date
CN115342805A true CN115342805A (en) 2022-11-15
CN115342805B CN115342805B (en) 2024-11-19

Family

ID=83948391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210733489.8A Active CN115342805B (en) 2022-06-27 2022-06-27 High-precision robot positioning navigation system and navigation method

Country Status (1)

Country Link
CN (1) CN115342805B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116592888A (en) * 2023-05-08 2023-08-15 五八智能科技(杭州)有限公司 Global positioning method, system, device and medium for patrol robot

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101850727A (en) * 2010-02-26 2010-10-06 湖南山河智能机械股份有限公司 Remote control wheel type mobile robot platform
CN102794767A (en) * 2012-08-31 2012-11-28 江南大学 B spline track planning method of robot joint space guided by vision
CN203745904U (en) * 2014-02-27 2014-07-30 梁学坚 Restaurant service robot system
CN104848858A (en) * 2015-06-01 2015-08-19 北京极智嘉科技有限公司 Two-dimensional code and vision-inert combined navigation system and method for robot
CN106985142A (en) * 2017-04-28 2017-07-28 东南大学 A kind of double vision for omni-directional mobile robots feels tracking device and method
CN107063246A (en) * 2017-04-24 2017-08-18 齐鲁工业大学 A Loose Combination Navigation Method of Visual Navigation/Inertial Navigation
CN107562059A (en) * 2017-09-20 2018-01-09 浙江映美智能装备科技有限公司 A kind of intelligent carriage tracking system with Quick Response Code site location information
CN111123953A (en) * 2020-01-09 2020-05-08 哈尔滨工程大学 Particle-based mobile robot group under artificial intelligence big data and control method thereof
CN111624995A (en) * 2020-05-09 2020-09-04 太仓臻溢科技有限公司 High-precision navigation positioning method for mobile robot


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Xingsong: "Mecanum轮全方位移动机器人原理与应用" (Principles and Applications of Mecanum-Wheel Omnidirectional Mobile Robots), Southeast University Press (东南大学出版社), 30 June 2018, pages 1-181 *


Also Published As

Publication number Publication date
CN115342805B (en) 2024-11-19

Similar Documents

Publication Publication Date Title
CN106054878B (en) Inertial guidance vehicle navigation method based on two-dimensional code positioning and inertial guidance vehicle
JP4079792B2 (en) Robot teaching method and robot with teaching function
WO2022121459A1 (en) Method and device for calculating installation position deviation of laser scanner of agv forklift
KR101214143B1 (en) Method and apparatus for detecting position and orientation
CN110347160A (en) A kind of automatic guide vehicle and its air navigation aid based on dual camera barcode scanning
WO2017158973A1 (en) Automatic guided vehicle
CN108592906A (en) AGV complex navigation methods based on Quick Response Code and inertial sensor
CN111624995B (en) High-precision navigation and positioning method for mobile robot
CN109387194B (en) Mobile robot positioning method and positioning system
CN110837257B (en) AGV composite positioning navigation system based on iGPS and vision
CN109813305A (en) Unmanned fork lift based on laser SLAM
CN111474938A (en) Inertial navigation automatic guided vehicle and track determination method thereof
CN107272690A (en) Inertial guide car air navigation aid and inertial guide car based on binocular stereo vision
CN109703650A (en) A kind of automatic guided transport vehicle and guidance and tracking method
CN115342805B (en) High-precision robot positioning navigation system and navigation method
CN112462762B (en) A robot outdoor autonomous mobile system and method based on roadside two-dimensional code unit
CN106168803A (en) A kind of location aware method for moving robot
CN107943026B (en) Mecanum wheel patrol robot and its patrol method
CN109211260A (en) The driving path method and device for planning of intelligent vehicle, intelligent vehicle
JP2007156576A (en) Method and device for adjusting odometry(wheel range finder) parameter for traveling carrier
CN115903857A (en) RFID-based unmanned grain surface inspection device and positioning method
CN110989596A (en) Pile alignment control method and device, intelligent robot and storage medium
WO2022252220A1 (en) Precise stopping system and method for multi-axis flatbed vehicle
JP2013250795A (en) Movable body guiding device and movable body guiding method
CN206848817U (en) Inertial guided vehicle based on binocular stereo vision navigation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant