CN115342805A - High-precision robot positioning navigation system and navigation method
- Publication number: CN115342805A
- Application number: CN202210733489.8A
- Authority: CN (China)
- Prior art keywords: robot, data, inertial, positioning, rear camera
- Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Classifications
- G01C21/165: Navigation by integrating acceleration or speed (dead reckoning, inertial navigation) combined with non-inertial navigation instruments
- G01C21/20: Instruments for performing navigational calculations
- G05D1/0223: Control of position or course in two dimensions, specially adapted to land vehicles, with means for defining a desired trajectory involving speed control of the vehicle
- G05D1/0246: Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means
- G06T1/0014: Image feed-back for automatic industrial control, e.g. robot with camera
Abstract
The invention provides a high-precision robot positioning and navigation method and system. The system comprises a robot chassis and a ground auxiliary device. The robot chassis carries a robot control system, a mobile communication module and a positioning navigation system. The positioning navigation system consists of a front camera, a rear camera and an inertial sensor: the cameras acquire information from the ground auxiliary device, while the inertial sensor acquires the position information of the robot. The front and rear cameras track the line simultaneously to correct the left-right driving deviation of the robot, and at each position correction identification point of the ground auxiliary device the robot performs visual-inertial information fusion to calibrate its position. By correcting the lateral deviation with two cameras tracking the same line and fusing visual and inertial information at the correction points, the invention improves the positioning precision of the robot.
Description
Technical Field
The invention belongs to the field of robot positioning and navigation, and particularly relates to a high-precision robot positioning and navigation system and a navigation method.
Background
With the progress of society, replacing manual labor with robots for carrying heavy articles is the current trend in industrial production. Robots of this type on the market mainly adopt navigation modes such as magnetic navigation, laser navigation, two-dimensional code navigation and inertial navigation.
Magnetic navigation accomplishes positioning and navigation using a magnetic strip and markers; the magnetic strip guides the robot along a fixed track. In practice the strips are difficult to lay and construction cost is high, which increases production cost and reduces enterprise income.
Laser navigation installs laser reflecting plates around the robot's path; the robot determines its current position and heading by emitting laser beams and collecting the beams reflected by the plates. Although laser navigation offers high precision, it is costly, imposes strict environmental requirements, and is difficult to apply in complex environments.
Two-dimensional code navigation lays two-dimensional codes discretely on the ground; the robot camera scans and decodes them to obtain real-time coordinates. Because the codes wear easily, they need regular maintenance, which increases later maintenance cost; moreover, this mode is only applicable to well-kept warehouses, not to complex environments.
Inertial navigation mounts a gyroscope on the robot to acquire its three-axis angular velocity and acceleration, and navigates by integration. Because gyroscope error accumulates over time, the position estimate drifts and can even be lost entirely, causing serious losses; this mode cannot meet the requirement of high-precision positioning and navigation.
These navigation modes suffer from expensive scene layout, low positioning and navigation precision, or demanding usage scenarios, and are hard to apply in complex environments; a low-cost, simply laid, high-precision positioning and navigation mode is therefore needed.
Disclosure of Invention
The purpose of the invention is as follows: to provide a high-precision robot positioning and navigation method and system that effectively solve the low precision of existing robot positioning and navigation systems and the demanding usage scenarios of existing high-precision systems.
The technical scheme is as follows: the high-precision robot positioning navigation system comprises a robot chassis and a ground auxiliary device;
the robot chassis comprises a robot control system, a positioning navigation system and a mobile communication module;
the ground auxiliary device comprises a tracking line easy to recognize by the robot and a position correction identification point different from the color of the tracking line, and provides effective detection information convenient for the robot to obtain.
Further, the positioning and navigation system comprises a front camera installed in the middle of the front part of the chassis, a rear camera installed in the middle of the rear part of the chassis and an inertial sensor installed in the center of the chassis.
Further, the mobile communication module handles data transmission between the control system and the background: the robot's running data are sent to the background through the mobile communication module, the background receives operation instructions and work tasks from the user terminal, and background data are synchronously displayed on the user terminal.
Further, the ground auxiliary device carries position correction identification points: along the straight-line part of the tracking line, a position correction identification point is marked every 0.5·l_1 to 3·l_1 (where l_1 is the length of the robot); along the curved part of the tracking line, a position correction identification point is marked at an interval defined in terms of the turning radius l_2 of the robot.
The invention also discloses a high-precision robot positioning and navigation method, which comprises the following steps:
step 1: the robot is started by sending an instruction through the user terminal, and the robot is initialized;
and 2, step: acquiring initial position data of the robot, wherein the initial position data comprises: visual initialization data, inertial initialization data and visual-inertial information fusion data; the position of the robot is estimated through the visual data, the pose of the robot is estimated through the inertial data, then the visual-inertial information fusion is carried out, and the position of the robot is further accurately estimated;
and 3, step 3: the vision initialization data, the inertial sensor initialization data and the vision-inertial information fusion data of the robot in the step 2 are sent to a background in real time through a mobile communication module;
and 4, step 4: the robot enters a standby state, and waits for a user to release a work task of the robot to a background through a user terminal;
and 5: the robot detects whether a work task exists, and if not, the robot jumps to the step 4; if the robot has a work task, the robot enters a working state and executes the following steps;
and 6: the vision sensor and the inertial sensor acquire the running data of the robot in real time, and the running data comprises: the front camera and the rear camera acquire key frame data of ground tracking lines and the rear camera acquires key frame data of robot position correction points;
and 7: the rear camera detects whether the robot runs to the position correction identification point or not, judges whether visual-inertial information fusion is needed or not, and further accurately positions the robot; if the robot drives to the position correction identification point, entering step 8, otherwise, directly entering step 9;
and 8: visual-inertial information fusion;
and step 9: the front camera and the rear camera simultaneously run along the tracks, the running error of the robot is corrected according to the position of the theoretical tracking line, and the running of the robot is controlled by adopting a control algorithm;
step 10: whether the work task of the robot is finished or not is judged, and if the work task is not finished, the step 6 is skipped; if the task of the robot is completed, entering the next step;
step 11: the robot task is finished; if the user does not execute any operation, the robot enters the step 2; if the user closes the robot, the robot enters a shutdown state;
step 12: and (6) ending.
Further, in step 6, the front and rear cameras acquire key frame data of the ground tracking line:
The front camera computes the observed tracking data, including the first-row abscissa x_a0 of the tracking line and the abscissa x_an of the tracking line every n rows, from which the offset angle θ_a of the robot as observed by the front camera is calculated.
The rear camera computes the observed tracking data, including the first-row abscissa x_b0 of the tracking line and the abscissa x_bn of the tracking line every n rows, from which the offset angle θ_b of the robot as observed by the rear camera is calculated.
The rear camera acquires key frame data of the robot position correction points:
Step 1.1: if the rear camera detects a position correction point, the robot's visual mileage d_2 is calculated from the known ground layout of the position correction identification points as
d_2 = d'_2 + l
where d'_2 is the visual mileage at the previous position correction point and l is the known ground spacing of the position correction identification points;
Step 1.2: if the rear camera does not detect a position correction identification point, the visual mileage d_2 is kept unchanged;
Step 1.3: the inertial sensor is used to calculate the robot's inertial mileage d_1, computed as
d_1 = d_0 + v·t + a·t²/2
where d_0 is the inertial mileage of the previous period, v is the robot's running speed measured by the inertial sensor, a is the robot's acceleration measured by the inertial sensor, and t is the sampling period of the inertial sensor.
Further, in step 8, the visual-inertial information fusion method is as follows:
Step 2.1: if the deviation between the visual mileage d_2 and the inertial mileage d_1 does not exceed half of the known ground spacing l of two positioning identification points, i.e.
max{|d_2 - d_1|, |d_1 - d_2|} ≤ l/2
the inertial mileage d_1 is updated to the visual mileage d_2, i.e. d_1 = d_2;
Step 2.2: if the deviation between the visual mileage d_2 and the inertial mileage d_1 exceeds the known ground spacing l of the two positioning identification points, i.e.
max{|d_2 - d_1|, |d_1 - d_2|} > l + l·k (0 ≤ k ≤ 3)
the processing is as follows: during straight-line driving the inertial mileage d_1 is updated to the positioning identification point distance d_2 = d_2 + l·k; if k does not satisfy the constraint, the robot sends abnormal data to the background and waits for the user to correct the data, and after the user finishes correcting the data the method proceeds to step 9.
Further, the specific steps of step 9 are:
Step 3.1: the robot is controlled to track the line according to the front camera data, keeping the tracking line at the middle of the front of the robot; the control system output is computed as
u_1 = K_p1·e_1 + K_i1·Σe_1 + K_d1·Δe_1
e_1 = θ_a - θ_1
where K_p1, K_i1 and K_d1 are control coefficients, e_1 is the error between the front-camera observation θ_a and the theoretical value θ_1, and u_1 is the control system output;
Step 3.2: the robot is controlled to track the line according to the rear camera data, keeping the tracking line at the middle of the rear of the robot; the control system output is computed as
u_2 = K_p2·e_2 + K_i2·Σe_2 + K_d2·Δe_2
e_2 = θ_b - θ_2
where K_p2, K_i2 and K_d2 are control coefficients, e_2 is the error between the rear-camera observation θ_b and the theoretical value θ_2, and u_2 is the control system output.
Beneficial effects: compared with the prior art, the invention has the following remarkable advantages:
(1) The visual-inertial positioning and navigation system is inexpensive and still achieves high-precision positioning and navigation.
(2) The invention uses front and rear cameras located on the same straight line, ensuring that the robot as a whole has no left-right deviation; the algorithm is simple and greatly reduces the computational load on the controller.
(3) Relying only on a simple ground auxiliary device, the invention is easy to deploy in different scenes and easier to use in complex industrial scenes.
Drawings
FIG. 1 is a schematic diagram of a high precision robotic system of the present invention;
FIG. 2 is a diagram of a positioning navigation system according to the present invention;
FIG. 3 is a schematic view of a ground support apparatus according to the present invention;
FIG. 4 is a block diagram of the construction of the robotic system of the present invention;
FIG. 5 is a flowchart of a robot positioning and navigation method according to the present invention.
Detailed Description
The technical scheme of the invention is further explained below with reference to the drawings.
Aiming at the low positioning and navigation precision of the related art, an embodiment of the invention provides a high-precision positioning and navigation system for a carrying robot. The carrying robot can not only be accurately positioned but also be kept free of left-right deviation as a whole, meeting the requirements of high-precision robot positioning and navigation in practical applications.
A high-precision robot positioning navigation system according to an embodiment of the present invention will be described with reference to the accompanying drawings.
As shown in fig. 1, the high-precision robot positioning navigation system includes a robot chassis and a ground auxiliary device.
The robot chassis comprises a robot control system, a mobile communication module and a positioning navigation system. The positioning navigation system consists of a front camera, a rear camera and an inertial sensor. The front and rear cameras identify the tracking line, and the rear camera also detects whether a position correction identification point is present. The user terminal communicates with the background in real time, so that the user can conveniently check the robot's running state and position and operate the robot through the terminal. The ground auxiliary device comprises a tracking line that the robot can easily recognize and position correction identification points in a color different from that of the tracking line, providing effective detection information that the robot can readily acquire.
According to an embodiment of the high-precision positioning and navigation system, the user lays the ground auxiliary device in advance, which provides the positioning and navigation information the robot needs while running. After the user starts the robot through the user terminal, the robot initializes the positioning navigation system and then waits, via the mobile communication module, for a work task from the background. While executing a work task, the inertial sensor computes the robot's inertial mileage and the front and rear cameras track the line simultaneously, ensuring that the robot as a whole has no left-right deviation; meanwhile the rear camera detects whether the robot has reached a position correction identification point, and when it has, visual-inertial information fusion is performed to correct the robot's positioning error.
As shown in fig. 2, the robot has a left front wheel 101, a left rear wheel 102, a right front wheel 103 and a right rear wheel 104. The cameras are embedded under the robot chassis, with the front camera 2111 mounted midway between the left and right front wheels and the rear camera 2112 mounted midway between the left and right rear wheels. Because the robot chassis is close to the ground, the front and rear cameras are ultra-thin cameras. The inertial sensor 212 is placed at the geometric center of the robot chassis.
When the robot travels normally in a straight line, the front camera 2111 and the rear camera 2112 track the line simultaneously and are kept on the same straight line as the tracking line of the ground auxiliary device. When the robot travels on a curve, the two cameras track the line simultaneously and are kept on the same tangent to the tracking line of the ground auxiliary device, and the robot travels normally. When the rear camera 2112 detects a ground position correction identification point, visual-inertial information fusion is performed and the positioning error is corrected.
As shown in FIG. 3, the ground auxiliary device carries position correction identification points: along the straight-line part of the tracking line, a position correction identification point is marked every 0.5·l_1 to 3·l_1 (where l_1 is the length of the robot); along the curved part of the tracking line, a position correction identification point is marked at an interval defined in terms of the turning radius l_2 of the robot.
When the high-precision robot executes a task, the front and rear cameras track the line simultaneously, ensuring that the whole robot sits directly above the tracking line and avoiding the situation where only the front of the robot is above the line while the rest deviates from it. The rear camera meanwhile detects whether the robot has reached a position correction identification point; when one is detected, visual-inertial information fusion is performed to correct the robot's positioning deviation.
As shown in fig. 4, the robot master control system in this embodiment is composed of a power module, a control system, a positioning navigation system and a mobile communication module, and the positioning navigation system is composed of a front camera, a rear camera and an inertial sensor.
The power module supplies electric energy to the whole circuit. The mobile communication module receives robot work tasks from the background and forwards them to the master control system through a serial port; in turn, the master control system sends the robot's running state to the mobile communication module through the serial port, and the module sends it on to the background. The inertial sensor computes the inertial mileage; the front and rear cameras track the line simultaneously, correcting the robot's left-right deviation while driving and ensuring that the robot as a whole stays free of left-right deviation. The rear camera also detects position correction identification points; when one is detected, visual-inertial information is fused to calibrate the robot's positioning. The front camera, rear camera and master control system exchange data through serial ports, and both cameras are ultra-thin. Specifically, according to an embodiment of the invention, the inertial sensor may be an MPU6050, and the inertial sensor and the main control system may exchange data over the IIC (I2C) bus.
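By way of illustration only (not part of the patent), a minimal Python sketch of waking and reading such an MPU6050 over the I2C bus on a Linux single-board computer is shown below; the register addresses follow the MPU6050 datasheet, while the bus number and the default full-scale ranges are assumptions.

```python
# Minimal sketch (not from the patent): waking and reading an MPU6050 over
# I2C with the smbus2 package. Register addresses follow the MPU6050
# datasheet; the bus number and default full-scale ranges are assumptions.
import smbus2

MPU_ADDR = 0x68        # default I2C address (AD0 pin pulled low)
PWR_MGMT_1 = 0x6B      # power management register
ACCEL_XOUT_H = 0x3B    # first of 14 data registers: accel, temp, gyro

bus = smbus2.SMBus(1)                          # I2C bus 1 (board-dependent)
bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)   # clear sleep bit, wake device

def to_signed(hi: int, lo: int) -> int:
    """Combine two bytes into a signed 16-bit value."""
    v = (hi << 8) | lo
    return v - 65536 if v & 0x8000 else v

raw = bus.read_i2c_block_data(MPU_ADDR, ACCEL_XOUT_H, 14)
ax = to_signed(raw[0], raw[1]) / 16384.0    # g, default +/-2 g range
gz = to_signed(raw[12], raw[13]) / 131.0    # deg/s, default +/-250 deg/s range
print(f"forward accel ~ {ax:.3f} g, yaw rate ~ {gz:.1f} deg/s")
```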
Fig. 5 is a flowchart of a method and system for high-precision positioning and navigation according to an embodiment of the invention. The method comprises the following specific steps:
step 1: the robot is started by sending an instruction through the user terminal, and the robot is initialized;
step 2: acquiring initial position data of the robot, wherein the initial position data comprises: visual initialization data, inertial initialization data and visual-inertial information fusion data. The position of the robot is estimated through the visual data, the pose of the robot is estimated through the inertial data, then the visual-inertial information fusion is carried out, and the position of the robot is further accurately estimated;
and step 3: transmitting the vision initialization data, the inertial sensor initialization data and the vision-inertial information fusion data of the robot in the step 2 to a background in real time through a mobile communication module;
and 4, step 4: the robot enters a standby state, and waits for a user to release a work task of the robot to a background through a user terminal;
and 5: the robot detects whether a work task exists, and if not, the robot jumps to the step 4; if the robot has a work task, the robot enters a working state and executes the following steps;
step 6: and the vision sensor and the inertia sensor acquire the driving data of the robot in real time. The travel data includes the information on the traveling state,
(1) The front and rear cameras acquire key frame data of the ground tracking line:
The front camera computes the observed tracking data, including the first-row abscissa x_a0 of the tracking line and the abscissa x_an of the tracking line every n rows, from which the offset angle θ_a of the robot as observed by the front camera is calculated.
The rear camera computes the observed tracking data, including the first-row abscissa x_b0 of the tracking line and the abscissa x_bn of the tracking line every n rows, from which the offset angle θ_b of the robot as observed by the rear camera is calculated.
(2) The rear camera acquires key frame data of the robot position correction points:
1) If the rear camera detects a position correction point, the robot's visual mileage d_2 is calculated from the known ground layout of the position correction identification points as
d_2 = d'_2 + l
where d'_2 is the visual mileage at the previous position correction point and l is the known ground spacing of the position correction identification points;
2) If the rear camera does not detect a position correction identification point, the visual mileage d_2 is kept unchanged;
(3) The inertial sensor is used to calculate the robot's inertial mileage d_1, computed as
d_1 = d_0 + v·t + a·t²/2
where d_0 is the inertial mileage of the previous period, v is the robot's running speed measured by the inertial sensor, a is the robot's acceleration measured by the inertial sensor, and t is the sampling period of the inertial sensor;
and 7: and the rear camera detects whether the robot runs to the position correction identification point or not, judges whether visual-inertial information fusion is needed or not, and further accurately positions the robot. If the robot runs to the position correction identification point, the step 8 is carried out, otherwise, the step 9 is directly carried out;
and 8: the visual-inertial information fusion method comprises the following steps:
(1) If vision mileage d 2 And inertia mileage d 1 When the distance between two known positioning mark points on the ground is not more than half l, the requirement is met
Inertia mileage d 1 Updated to the visual mileage d 2 At this time d 1 =d 2 ;
(2) If vision mileage d 2 And inertia mileage d 1 When the distance l of the two known positioning mark points exceeds the ground, the requirement is met
max{|d 2 -d 1 |,|d 1 -d 2 |}>l+lk(3≥k≥0)
In this case, the processing method is as follows: inertia mileage d in straight line driving 1 Updated to the positioning identification point distance d 2 =d 2 + lk. And if k does not meet the constraint condition, the robot sends abnormal data to the background and waits for the user to correct the data again. After the user finishes correcting the data, entering step 9;
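A compact sketch of this fusion rule follows. Reading k as the whole number of marker spacings by which the two mileages disagree is an interpretation on our part; the patent states only the constraint 0 ≤ k ≤ 3.

```python
# Sketch of the step-8 fusion rule under stated assumptions.
def fuse(d1: float, d2: float, l: float):
    """d1: inertial mileage, d2: visual mileage, l: known spacing of the
    positioning identification points. Returns (new_d1, new_d2, ok)."""
    dev = abs(d2 - d1)            # max{|d2-d1|, |d1-d2|} reduces to |d2-d1|
    if dev <= l / 2:              # case 2.1: small deviation
        return d2, d2, True       # inertial mileage snapped to visual
    # case 2.2: large deviation; k is read here as the number of whole
    # marker spacings apparently skipped (an interpretation, not stated)
    k = round(dev / l)
    if 0 <= k <= 3:               # constraint 0 <= k <= 3 from the patent
        d2 = d2 + l * k           # d_2 = d_2 + l*k
        return d2, d2, True       # then d_1 updated to the corrected d_2
    return d1, d2, False          # abnormal: report to background and wait
```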
and step 9: and the front camera and the rear camera simultaneously run along the tracks and correct the running error of the robot according to the position of the theoretical tracking line. And controlling the running of the robot by adopting a control algorithm.
(1) The robot is controlled to run along the track according to the data of the front camera, the track line can be kept at the middle position of the front part of the robot, and the output quantity calculation formula of the control system is as follows:
u 1 =K p1 e 1 +K i1 ∑e 1 +K d1 e 1
e 1 =θ a -θ 1
in the formula, K p1 、K i1 、K d1 To control the coefficient, e 1 As a front camera observation value theta a To the theoretical value theta 1 Error between u 1 Is the output quantity of the control system;
(2) The robot is controlled to run along a track according to the data of the rear camera, the track can be kept at the middle position of the rear part of the robot, and the output quantity calculation formula of the control system is as follows:
u 2 =K p2 e 2 +K i2 Σe 2 +K d2 e 2
e 2 =θ b -θ 2
in the formula, K p2 、K i2 、K d2 To control the coefficient, e 2 As a rear camera observation value theta a And the theoretical value theta 2 ,u 2 Is the output quantity of the control system;
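The two controllers are the standard discrete PID form, one instance per camera channel. A minimal sketch is shown below; the gain values are illustrative placeholders, and the derivative term is implemented as the per-step error difference Δe, the conventional discrete form.

```python
# Minimal discrete PID sketch matching the step-9 formulas; one instance
# per camera channel. Gains are illustrative placeholders.
class TrackingPID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.sum_e = 0.0      # running sum, the Σe term
        self.prev_e = 0.0     # previous error, for the Δe term

    def update(self, theta_obs: float, theta_ref: float) -> float:
        e = theta_obs - theta_ref            # e = θ_observed - θ_theoretical
        self.sum_e += e
        de = e - self.prev_e
        self.prev_e = e
        return self.kp * e + self.ki * self.sum_e + self.kd * de

# One controller per camera; u1 keeps the line centered under the front of
# the chassis, u2 under the rear (gain values are assumptions).
front_pid = TrackingPID(kp=2.0, ki=0.05, kd=0.5)
rear_pid = TrackingPID(kp=2.0, ki=0.05, kd=0.5)
```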
Step 10: whether the robot's work task is finished is judged; if not, jump to step 6; if the task is completed, go to the next step.
Step 11: the robot task ends:
(1) If the user performs no operation, the robot returns to step 2;
(2) If the user shuts the robot down, the robot enters the shutdown state.
Step 12: end.
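Taken together, steps 1 to 12 amount to a simple standby/work loop. The sketch below is one way to arrange that control flow; every name in it is an illustrative placeholder rather than an API defined by the patent.

```python
# Illustrative arrangement of the step 1-12 flow; all callables are
# placeholders supplied by the caller, not patent-defined APIs.
from typing import Callable, Optional

def main_loop(poll_task: Callable[[], Optional[int]],
              rear_sees_marker: Callable[[], bool],
              fuse_visual_inertial: Callable[[], None],
              track_line: Callable[[], None],
              shutdown_requested: Callable[[], bool]) -> None:
    while not shutdown_requested():
        work_left = poll_task()          # steps 4-5: standby until a task
        if work_left is None:
            continue
        while work_left > 0:             # steps 6-10: working loop
            if rear_sees_marker():       # step 7: correction point reached?
                fuse_visual_inertial()   # step 8: visual-inertial fusion
            track_line()                 # step 9: dual-camera line tracking
            work_left -= 1               # step 10: task progress check
        # step 11: task ended; loop back to standby unless shut down
```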
Claims (8)
1. A high-precision robot positioning navigation system is characterized by comprising a robot chassis and a ground auxiliary device;
the robot chassis comprises a robot control system, a positioning navigation system and a mobile communication module;
the ground auxiliary device comprises a tracking line easy to recognize by the robot and a position correction identification point different from the color of the tracking line, and provides convenience for the robot to acquire effective detection information.
2. The high accuracy robot positioning navigation system of claim 1, wherein the positioning navigation system comprises a front camera mounted in the middle of the front of the chassis, a rear camera mounted in the middle of the rear of the chassis, and an inertial sensor mounted in the center of the chassis.
3. The high-precision robot positioning and navigation system according to claim 1, wherein the mobile communication module is used for controlling data transmission between the system and the background, the robot driving data is sent to the background through the mobile communication module, the background receives the operation instruction and the work task from the user terminal, and the background data is synchronously displayed on the user terminal.
4. The high-precision robot positioning and navigation system according to claim 1, wherein the ground auxiliary device carries position correction identification points: along the straight-line part of the tracking line, a position correction identification point is marked every 0.5·l_1 to 3·l_1 (where l_1 is the length of the robot); along the curved part of the tracking line, a position correction identification point is marked at an interval defined in terms of the turning radius l_2 of the robot.
5. A high-precision robot positioning and navigation method is characterized by comprising the following steps:
step 1: the robot is started by sending an instruction through a user terminal, and the robot is initialized;
step 2: acquiring initial position data of the robot, wherein the initial position data comprises: visual initialization data, inertial initialization data and visual-inertial information fusion data; the position of the robot is estimated through the visual data, the pose of the robot is estimated through the inertial data, then the visual-inertial information fusion is carried out, and the position of the robot is further accurately estimated;
and step 3: the vision initialization data, the inertial sensor initialization data and the vision-inertial information fusion data of the robot in the step 2 are sent to a background in real time through a mobile communication module;
and 4, step 4: the robot enters a standby state, and waits for a user to release a work task of the robot to a background through a user terminal;
and 5: the robot detects whether a work task exists, and if not, the robot jumps to the step 4; if the robot has the work task, the robot enters a working state and executes the following steps;
step 6: the vision sensor and the inertial sensor acquire the driving data of the robot in real time, and the driving data comprises: the method comprises the steps that a front camera and a rear camera acquire key frame data of ground tracking lines and key frame data of robot position correction points;
and 7: the rear camera detects whether the robot runs to the position correction identification point or not, judges whether visual-inertial information fusion is needed or not, and further accurately positions the robot; if the robot runs to the position correction identification point, the step 8 is carried out, otherwise, the step 9 is directly carried out;
and 8: visual-inertial information fusion;
and step 9: the front camera and the rear camera simultaneously run along the tracks, the running error of the robot is corrected according to the position of the theoretical tracking line, and the running of the robot is controlled by adopting a control algorithm;
step 10: whether the work task of the robot is finished or not is judged, and if not, the step 6 is skipped; if the task of the robot is completed, entering the next step;
step 11: the robot task is finished; if the user does not execute any operation, the robot enters the step 2; if the user closes the robot, the robot enters a shutdown state;
step 12: and (6) ending.
6. The high-precision robot positioning and navigation method according to claim 5, wherein in step 6 the front and rear cameras acquire key frame data of the ground tracking line:
The front camera computes the observed tracking data, including the first-row abscissa x_a0 of the tracking line and the abscissa x_an of the tracking line every n rows, from which the offset angle θ_a of the robot as observed by the front camera is calculated;
The rear camera computes the observed tracking data, including the first-row abscissa x_b0 of the tracking line and the abscissa x_bn of the tracking line every n rows, from which the offset angle θ_b of the robot as observed by the rear camera is calculated;
The rear camera acquires key frame data of the robot position correction points:
Step 1.1: if the rear camera detects a position correction point, the robot's visual mileage d_2 is calculated from the known ground layout of the position correction identification points as
d_2 = d'_2 + l
where d'_2 is the visual mileage at the previous position correction point and l is the known ground spacing of the position correction identification points;
Step 1.2: if the rear camera does not detect a position correction identification point, the visual mileage d_2 is kept unchanged;
Step 1.3: the inertial sensor is used to calculate the robot's inertial mileage d_1, computed as
d_1 = d_0 + v·t + a·t²/2
where d_0 is the inertial mileage of the previous period, v is the robot's running speed measured by the inertial sensor, a is the robot's acceleration measured by the inertial sensor, and t is the sampling period of the inertial sensor.
7. The high-precision robot positioning and navigation method according to claim 5, wherein in step 8 the visual-inertial information fusion method is as follows:
Step 2.1: if the deviation between the visual mileage d_2 and the inertial mileage d_1 does not exceed half of the known ground spacing l of two positioning identification points, i.e.
max{|d_2 - d_1|, |d_1 - d_2|} ≤ l/2
the inertial mileage d_1 is updated to the visual mileage d_2, i.e. d_1 = d_2;
Step 2.2: if the deviation between the visual mileage d_2 and the inertial mileage d_1 exceeds the known ground spacing l of the two positioning identification points, i.e.
max{|d_2 - d_1|, |d_1 - d_2|} > l + l·k (0 ≤ k ≤ 3)
the processing is as follows: during straight-line driving the inertial mileage d_1 is updated to the positioning identification point distance d_2 = d_2 + l·k; if k does not satisfy the constraint, the robot sends abnormal data to the background and waits for the user to correct the data, and after the user finishes correcting the data the method proceeds to step 9.
8. The high-precision robot positioning and navigation method according to claim 5, wherein the specific steps of step 9 are:
Step 3.1: the robot is controlled to track the line according to the front camera data, keeping the tracking line at the middle of the front of the robot; the control system output is computed as
u_1 = K_p1·e_1 + K_i1·Σe_1 + K_d1·Δe_1
e_1 = θ_a - θ_1
where K_p1, K_i1 and K_d1 are control coefficients, e_1 is the error between the front-camera observation θ_a and the theoretical value θ_1, and u_1 is the control system output;
Step 3.2: the robot is controlled to track the line according to the rear camera data, keeping the tracking line at the middle of the rear of the robot; the control system output is computed as
u_2 = K_p2·e_2 + K_i2·Σe_2 + K_d2·Δe_2
e_2 = θ_b - θ_2
where K_p2, K_i2 and K_d2 are control coefficients, e_2 is the error between the rear-camera observation θ_b and the theoretical value θ_2, and u_2 is the control system output.
Priority Application (1)
- CN202210733489.8A (CN), filed 2022-06-27: High-precision robot positioning navigation system and navigation method

Publication (1)
- CN115342805A, published 2022-11-15; status: pending

Family ID
- 83948391

Cited By (1)
- CN116592888A, published 2023-08-15, 五八智能科技(杭州)有限公司: Global positioning method, system, device and medium for patrol robot
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination