CN111971633B - Position estimation system, mobile body having the position estimation system, and recording medium


Info

Publication number
CN111971633B
CN111971633B (application CN201980022370.6A)
Authority
CN
China
Prior art keywords: reference map, map, scan data, position estimation, value
Prior art date
Legal status: Active
Application number
CN201980022370.6A
Other languages: Chinese (zh)
Other versions: CN111971633A
Inventor
Shinji Suzuki (铃木慎治)
Tetsuo Saeki (佐伯哲夫)
Current Assignee
Nideco Transmission Technology Co., Ltd.
Original Assignee
Nidec Corp
Priority date
Filing date
Publication date
Application filed by Nidec Corp filed Critical Nidec Corp
Publication of CN111971633A
Application granted
Publication of CN111971633B

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions


Abstract

The position estimation system of the present disclosure is a position estimation system for a mobile body, used by being connected to an external sensor that repeatedly scans an environment and outputs sensor data at each scan, and includes: a processor; a 1 st memory that stores an environment map prepared in advance; and a 2 nd memory that stores a computer program for causing the processor to operate. The processor performs the following: (A) a 1 st position estimation process of generating a 1 st estimated value of the position and posture of the mobile body from a result of comparing the environment map with the sensor data; (B) a 2 nd position estimation process of generating a 2 nd estimated value of the position and posture of the mobile body from a result of comparing a reference map with the sensor data while generating the reference map of the surroundings using the sensor data; and (C) a process of selecting one of the 1 st estimated value and the 2 nd estimated value and outputting it as the selected estimated value of the position and posture of the mobile body.

Description

Position estimation system, mobile body having the position estimation system, and recording medium
Technical Field
The present disclosure relates to a position estimation system and a mobile body having the position estimation system. In addition, the present disclosure also relates to a computer program for use in position estimation.
Background
Moving bodies capable of autonomous movement, such as automated guided vehicles and mobile robots, are being developed.
Japanese patent application laid-open No. 2008-250905 discloses a mobile robot that estimates its own position by matching sensor data acquired from a laser range finder against a map prepared in advance.
Prior art literature
Patent literature
Patent document 1: japanese patent application laid-open No. 2008-250905
Disclosure of Invention
After a map is created, the environment may change. In factories and logistics warehouses, for example, the layout is often changed temporarily. In addition, free space may be temporarily reduced when goods, equipment, or other moving objects are placed in areas of the map (free space) through which the moving body can pass. If matching is performed using an environment map that does not reflect the real environment, a situation may arise in which the self-position cannot be estimated.
Embodiments of the present disclosure provide a position estimation system and a mobile body capable of estimating the self-position even when a part of the map does not reflect the real environment, as well as a computer program used for such position estimation.
In a non-limiting exemplary embodiment, the position estimation system of the present disclosure is a position estimation system for a mobile body, used by being connected to an external sensor that repeatedly scans an environment and outputs sensor data at each scan, wherein the position estimation system has: at least one processor; a 1 st memory that stores an environment map prepared in advance; and a 2 nd memory that stores a computer program for causing the processor to operate. The at least one processor executes the following in accordance with instructions of the computer program: (A) a 1 st position estimation process of generating a 1 st estimated value of the position and posture of the mobile body from a result of comparing the environment map with the sensor data; (B) a 2 nd position estimation process of generating a 2 nd estimated value of the position and posture of the mobile body from a result of comparing a reference map with the sensor data while generating the reference map of the surroundings using the sensor data; and (C) a process of selecting one of the 1 st estimated value and the 2 nd estimated value and outputting it as the selected estimated value of the position and posture of the mobile body.
In a non-limiting exemplary embodiment, a mobile body of the present disclosure has: the above-described position estimation system; the external sensor; and a driving device for moving.
In a non-limiting exemplary embodiment, the computer program of the present disclosure is a computer program for use in any of the above-described position estimation systems.
According to the embodiments of the present disclosure, the self-position can be inferred even in the case where a part of the map does not reflect the real environment.
Drawings
Fig. 1 is a diagram showing a structure of an embodiment of a mobile body of the present disclosure.
Fig. 2 is a plan view schematically showing an example of an environment in which a moving body moves.
Fig. 3 is a diagram illustrating an environment map of the environment illustrated in fig. 2.
Fig. 4A is a diagram schematically showing an example of scan data SD (t) acquired by an external sensor at time t.
Fig. 4B is a diagram schematically showing an example of scan data SD (t+Δt) acquired by the external sensor at time t+Δt.
Fig. 4C is a diagram schematically showing a state after the scan data SD (t+Δt) and the scan data SD (t) are matched.
Fig. 5 is a diagram schematically showing a case where a point cloud constituting scan data is rotated and translated from an initial position to approach a point cloud of an environment map.
Fig. 6 is a diagram showing the position and posture after rigid body transformation of scan data.
Fig. 7A is a diagram schematically showing a state in which, after scanning data is acquired from an external sensor and a reference map is created from the scanning data, the newly acquired scanning data is matched with the reference map.
Fig. 7B is a diagram schematically showing a reference map updated by adding newly acquired scan data to the reference map of fig. 7A.
Fig. 7C is a diagram schematically showing a reference map updated by adding newly acquired scan data to the reference map of fig. 7B.
Fig. 8A is a diagram schematically showing an example of scan data SD (t) acquired by an external sensor at time t.
Fig. 8B is a diagram schematically showing a state when the matching of the scan data SD (t) and the environment map M is started.
Fig. 8C is a diagram schematically showing a state in which the matching of the scan data SD (t) and the environment map M is completed.
Fig. 9 is a diagram schematically showing a history of the position and posture of the mobile body obtained in the past and predicted values of the current position and posture.
Fig. 10 is a flowchart showing a part of the operation of the position estimation device according to the embodiment of the present disclosure.
Fig. 11A is a diagram showing an example of fluctuation in the amount of change of the 1 st estimated value based on the 1 st position estimation process (offline SLAM).
Fig. 11B is a diagram showing an example of fluctuation in the difference between the 1 st estimated value based on the 1 st position estimation process (offline SLAM) and the measured value from the sensor.
Fig. 11C is a diagram showing an example of fluctuation in the reliability of the 1 st estimated value based on the 1 st position estimation process (offline SLAM) and in the reliability of the 2 nd estimated value based on the 2 nd position estimation process (online SLAM).
Fig. 12 is a flowchart showing a part of the operation of the position estimation device according to the embodiment of the present disclosure.
Fig. 13 is a flowchart showing an example of the 2 nd position estimation process (online SLAM) of the position estimation device according to the embodiment of the present disclosure.
Fig. 14 is a diagram showing an outline of a control system for controlling travel of each AGV according to the present disclosure.
Fig. 15 is a perspective view showing an example of an environment in which an AGV is located.
FIG. 16 is a perspective view showing the AGV and traction cart prior to connection.
FIG. 17 is a perspective view showing the AGV and traction cart after connection.
Fig. 18 is an external view of an exemplary AGV according to the present embodiment.
Fig. 19A is a diagram showing a 1 st hardware configuration example of the AGV.
Fig. 19B is a diagram showing an example of the 2 nd hardware configuration of the AGV.
Fig. 20 is a diagram showing an example of a hardware configuration of the operation management device.
Detailed Description
< Terms >
"automated guided vehicle (AGC)" refers to a trackless vehicle that manually or automatically loads cargo into a body, automatically travels to an indicated location, and then manually or automatically unloads cargo. The "unmanned vehicle" includes unmanned tractors and unmanned forklifts.
The word "unmanned" refers to the fact that no human is required to maneuver the vehicle, and the case where an unmanned vehicle is not precluded from transporting "people (e.g., those handling cargo)".
An "unmanned tractor" is a trolley that automatically travels to an indicated location by pulling a trolley that manually or automatically loads and unloads cargo.
An "unmanned forklift" is a trackless vehicle having a mast for lifting and lowering a fork or the like for transferring a load, automatically transferring the load to the fork or the like, automatically traveling to a designated place, and performing an automatic load-and-unload operation.
A "trackless vehicle" is a moving body (vehicle) having wheels and an electric motor or engine that rotates the wheels.
The "moving body" is a device for carrying a person or a load to move, and includes a wheel, a bipedal or multipedal running device, a propeller, and other driving devices that generate a driving force (track) for movement. The term "mobile body" in the present disclosure includes not only an unmanned conveyance vehicle in a narrow sense but also a mobile robot, a service robot, and an unmanned aerial vehicle.
The "automatic travel" includes travel of the automated guided vehicle based on an instruction of an operation management system of a computer connected by communication and autonomous travel based on a control device included in the automated guided vehicle. Autonomous traveling includes not only traveling of the automated guided vehicle toward a destination along a predetermined path, but also traveling following a tracking target. Further, the automated guided vehicle may temporarily perform manual travel based on an instruction from an operator. Although "automatic running" generally includes both "guided" running and "unguided" running, in the present disclosure, "unguided" running is referred to.
The term "guide type" refers to a system in which guide bodies are provided continuously or intermittently, and the unmanned conveyance vehicle is guided by the guide bodies.
The "unguided" refers to a system in which guidance is performed without providing a guide body. The unmanned vehicle according to the embodiment of the present disclosure has a position estimating device and can perform unguided traveling.
The "position estimating device" is a device that estimates its own position on the environment map from sensor data acquired by an external sensor such as a laser range finder.
The "external sensor" is a sensor that senses a state of the outside of the moving body. Examples of external sensors are laser rangefinders (also known as ranging sensors), cameras (or image sensors), LIDAR (Light Detection and Ranging: light detection and ranging), millimeter wave radar, ultrasonic sensors, and magnetic sensors.
The "inner limit sensor" is a sensor that senses the state of the inside of the moving body. Examples of the inner sensor include a rotary encoder (hereinafter, sometimes simply referred to as an "encoder"), an acceleration sensor, and an angular acceleration sensor (for example, a gyro sensor).
The term "SLAM" is an abbreviation for Simultaneous Localization and Mapping (simultaneous localization and mapping), and means that self-location estimation and environment mapping are performed simultaneously.
Basic structure of the moving body of the present disclosure
Reference is made to fig. 1. In the embodiment illustrated in fig. 1, the mobile body 10 of the present disclosure has an external sensor 102 that scans the environment and periodically outputs scan data. A typical example of the external sensor 102 is a laser range finder (LRF). The LRF periodically emits a laser beam, such as infrared or visible light, toward the surroundings to scan the surrounding environment. The laser beam is reflected by the surface of a structure such as a wall or a column, or by an object placed on the ground, for example. The LRF receives the reflected light of the laser beam, calculates the distance to each reflection point, and outputs data indicating the measurement result of the position of each reflection point. The position of each reflection point reflects the arrival direction and the distance of the reflected light. The data of the measurement results (scan data) are sometimes referred to as "environmental measurement data" or "sensor data".
The external sensor 102 scans, for example, an environment within a range of 135 degrees to each side of the front of the external sensor 102 (270 degrees in total). Specifically, a pulsed laser beam is emitted while its direction is changed by a predetermined step angle in a horizontal plane, and the distance is measured by detecting the reflected light of each laser beam. If the step angle is 0.3 degrees, distance measurement data can be obtained for the directions determined by a total of 901 steps of angle. In this example, the scan of the surrounding space by the external sensor 102 is substantially parallel to the ground and planar (two-dimensional). However, the external sensor may perform a three-dimensional scan.
A typical example of the scan data can be expressed by the position coordinates of the points constituting the point cloud (point group) acquired in each scan. The position coordinates of the points are defined in a local coordinate system that moves together with the moving body 10. Such a local coordinate system may be referred to as a moving body coordinate system or a sensor coordinate system. In the present disclosure, the origin of the local coordinate system fixed to the mobile body 10 is defined as the "position" of the mobile body 10, and the orientation of the local coordinate system is defined as the "posture" of the mobile body 10. Hereinafter, the position and the posture may be collectively referred to as the "pose". For simplicity, the "pose" may be simply referred to as the "position".
When the scan data are represented in a polar coordinate system, the position of each point can be expressed as a pair of values indicating the "direction" and the "distance" with respect to the origin of the local coordinate system. The polar coordinate representation can be converted into an orthogonal coordinate representation. In the following description, for simplicity, it is assumed that the scan data output from the external sensor are represented in an orthogonal coordinate system.
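For illustration, this conversion can be sketched as follows in Python (this is not part of the disclosed embodiments); the 270-degree field of view and 0.3-degree step angle are the example values mentioned above.

```python
import numpy as np

def polar_scan_to_uv(distances_m, fov_deg=270.0):
    """Convert one LRF scan from polar form (direction, distance) to UV coordinates.

    distances_m: distance measured for each beam, ordered from -fov/2 to +fov/2
                 relative to the front (V axis) of the external sensor.
    Returns an (N, 2) array of (u, v) points in the sensor coordinate system.
    With a 270-degree field of view and a 0.3-degree step angle, N is 901.
    """
    d = np.asarray(distances_m, dtype=float)
    half_fov = np.deg2rad(fov_deg) / 2.0
    angles = np.linspace(-half_fov, half_fov, d.size)  # beam directions
    u = d * np.sin(angles)  # U axis: 90 degrees clockwise from the V axis
    v = d * np.cos(angles)  # V axis: front of the sensor
    return np.stack([u, v], axis=1)
```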
The mobile body 10 has a position estimation system 115 and a storage device (1 st memory) 104 that stores an environment map. The environment map is prepared by the mobile body 10 or another map making device, and stored in the storage device 104.
The position estimation system 115 is used by being connected to the external sensor 102, and includes a processor 106 and a memory (2 nd memory) 107, and the memory 107 stores a computer program for controlling the operation of the processor. Although one processor 106 and one memory 107 are shown in fig. 1, the number of processors 106 and memories 107 may each be two or more.
The position estimation system 115 performs matching between the scan data acquired from the external sensor 102 and the environment map read from the storage device 104, and thereby estimates the position and posture, that is, the pose of the mobile body 10. This matching, known as pattern matching or scan matching, can be performed according to various algorithms. A typical example of a matching algorithm is the Iterative Closest Point (ICP) algorithm.
As described later, the position estimation system 115 can create an environment map by aligning and linking a plurality of scan data output from the external sensor 102 by matching.
The position estimation system 115 of the embodiment of the present disclosure is implemented by the processor 106 and the memory 107 storing a computer program that causes the processor 106 to operate. The processor 106 performs the following actions in accordance with instructions of the computer program.
(A) 1 st position estimation process (offline SLAM):
based on the result of the comparison between the environment map and the sensor data, the 1 st estimated value of the position and posture of the external sensor 102 (i.e., the position and posture of the mobile body 10) is generated.
(B) 2 nd position estimation process (online SLAM):
the 2 nd estimated value of the position and orientation of the mobile body 10 is generated from the result of the comparison between the reference map and the sensor data while generating a map (reference map) of the surroundings from the sensor data.
(C) One of the 1 st estimated value and the 2 nd estimated value is selected, and the selected estimated value is output as the "selected estimated value" of the position and orientation of the mobile body 10.
In an embodiment of the present disclosure, the processor 106 performs the 1 st position estimation process and the 2 nd position estimation process simultaneously or in parallel. For example, the processor 106 may have a 1 st processor portion that performs the 1 st position estimation process and a 2 nd processor portion that performs the 2 nd position estimation process. Further, the single processor 106 may alternately execute the operations required for the 1 st position estimation process and the 2 nd position estimation process.
In the above-described process (C), while the 1 st estimated value is being output as the selected estimated value, if any of the following events occurs, the processor 106 outputs the 2 nd estimated value as the selected estimated value instead of the 1 st estimated value.
Event(s)
The reliability of the 1 st estimated value is calculated and is below a threshold value.
The reliability of the 1 st estimated value (1 st reliability) and the reliability of the 2 nd estimated value (2 nd reliability) are calculated, and the 1 st reliability is lower than the 2 nd reliability.
The change of the current 1 st estimated value from the past 1 st estimated values (time-series data) exceeds a predetermined range.
The difference between the measured value (odometry value) of the position and/or orientation obtained from the internal sensor and the 1 st estimated value exceeds a threshold value.
The calculation for determining the 1 st estimated value cannot be completed within a prescribed time (the matching error does not converge to a sufficiently small value within the prescribed time).
The reliability can be expressed quantitatively by, for example, the final error of the ICP matching described later (the "amount of positional deviation" or the "coincidence rate").
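One possible way to quantify such a coincidence rate is sketched below in Python; it counts the fraction of scan points that, after matching, lie within a small distance of the nearest map point. The 0.10 m inlier distance is an assumed parameter, and this is an illustration rather than the patent's specific implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def matching_reliability(scan_xy, map_xy, inlier_dist=0.10):
    """Return a reliability score in [0, 1] for an aligned scan.

    scan_xy: (N, 2) scan points already transformed into the map coordinate
             system using the pose produced by the matching.
    map_xy:  (M, 2) reference point cloud (environment map or reference map).
    inlier_dist: distance in meters under which a scan point is counted as
                 coinciding with the map (assumed value).
    """
    tree = cKDTree(map_xy)
    dists, _ = tree.query(scan_xy, k=1)          # nearest map point per scan point
    coincidence_rate = float(np.mean(dists < inlier_dist))
    return coincidence_rate
```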
Each of the events described above may occur when the environment map used by the offline SLAM does not accurately reflect the current environment. For example, when the layout is temporarily changed in factories, logistics warehouses, and the like, or when goods, devices, or other moving objects are placed in places (free spaces) where moving objects can pass on an environment map. The above-described event easily occurs when the previous environment map is used for matching despite such an environment change.
In the embodiment of the present disclosure, the position estimation is performed by the 1 st position estimation process (offline SLAM), and on the other hand, the position estimation by the 2 nd position estimation process (online SLAM) is also performed in the background. Therefore, when a deviation occurs between the environment map and the real environment, the estimated value (the 2 nd estimated value) based on the online SLAM can be rapidly outputted and used.
The 2 nd position estimation process (online SLAM) can be executed by the following process, for example.
(1) Scan data is acquired from the external sensor 102, and a reference map (local map) is created from the scan data.
(2) When the scan data is newly acquired from the external sensor 102, the newly acquired latest scan data is matched with the reference map, whereby the position and orientation of the mobile body 10 on the reference map are estimated, and the latest scan data is added to the reference map to update the reference map.
(3) The reference map is reset by deleting a part other than the part containing the latest scan data from the reference map updated a plurality of times.
The following processing may be additionally performed, if necessary.
(4) In resetting, the environment map is updated (online map update) based on the reference map that was updated a plurality of times before resetting.
By updating the environment map itself in this way, even when a changed state of the environment (for example, a layout change) persists, the above-described events are less likely to occur, and the 1 st estimated value obtained by the 1 st position estimation process (offline SLAM) can continue to be used as the selected estimated value.
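The update-and-reset cycle of steps (1) to (3) can be sketched, under simplifying assumptions, as follows (Python). The scan-matching routine is passed in as a callable, and the reset after 100 updates corresponds to the example value given below; everything else is illustrative.

```python
import numpy as np

def online_slam_step(reference_map, latest_scan, pose_guess, match_fn,
                     update_count, max_updates=100):
    """One cycle of the 2 nd position estimation process (online SLAM).

    reference_map: (M, 2) point cloud of the local map built so far.
    latest_scan:   (N, 2) newest scan in the sensor coordinate system.
    pose_guess:    (x, y, theta) initial value from odometry or pose history.
    match_fn:      scan-matching routine (e.g. ICP); assumed to return the
                   estimated pose (x, y, theta) of the sensor on the map.
    Returns (pose, updated reference map, new update count).
    """
    # (2) match the latest scan against the reference map -> 2 nd estimated value
    x, y, theta = match_fn(latest_scan, reference_map, pose_guess)

    # transform the latest scan into the reference-map coordinate system
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    scan_on_map = latest_scan @ rot.T + np.array([x, y])

    # add the latest scan to the reference map (map update)
    reference_map = np.vstack([reference_map, scan_on_map])
    update_count += 1

    # (3) reset: keep only the part containing the latest scan
    if update_count >= max_updates:
        reference_map = scan_on_map
        update_count = 0

    return (x, y, theta), reference_map, update_count
```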
In the illustrated example, the mobile body 10 further includes a driving device 108, an automatic travel control device 110, and a communication circuit 112. The driving device 108 is a device that generates a driving force for moving the moving body 10. Examples of the driving device 108 include wheels (driving wheels) rotated by an electric motor or an engine, and bipedal or multipedal walking devices operated by motors or other actuators. The wheels may be omnidirectional wheels such as Mecanum wheels. The mobile body 10 may be a mobile body that moves in the air or in the water, or a hovercraft, and the driving device 108 in that case may include a propeller rotated by a motor.
The automatic travel control device 110 controls the movement conditions (speed, acceleration, movement direction, etc.) of the mobile body 10 by operating the driving device 108. The automatic travel control device 110 may move the mobile body 10 along a predetermined travel path, or may move the mobile body 10 in accordance with an instruction provided from the outside. The position estimation system 115 calculates an estimated value of the position and posture of the mobile body 10 during the movement of the mobile body 10 or during the stop. The automatic travel control device 110 refers to the estimated value and controls the travel of the mobile body 10.
The position estimation system 115 and the automatic travel control device 110 may be referred to as a travel control device 120 as a whole. The automatic travel control device 110 may be configured by the above-described processor 106 and the memory 107 storing a computer program for controlling the operation of the processor 106 together with the position estimation system 115. Such a processor 106 and memory 107 can be implemented by one or more semiconductor integrated circuits.
The communication circuit 112 is a circuit that connects the mobile body 10 to a communication network in order to exchange data and/or commands with an external management device, other mobile bodies, an operator's mobile terminal device, and the like.
< Environment map >
Fig. 2 is a plan view schematically showing an example of an environment 200 in which the mobile body 10 moves. The environment 200 is part of a larger environment. In fig. 2, a thick straight line represents a fixed wall 202 of a building, for example.
Fig. 3 is a diagram showing a map (environment map M) constituting the environment 200 shown in fig. 2. Each point 204 in the figure corresponds to each point of the point cloud constituting the environment map M. In the present disclosure, the point cloud of the environment map M is sometimes referred to as "reference point cloud", and the point cloud of the scan data is referred to as "data point cloud" or "source point cloud". For example, matching refers to positioning the scan data (data point cloud) with respect to an environment map (reference point cloud) whose position is fixed. In the case of performing matching based on the ICP algorithm, specifically, a corresponding pair of points is selected between the reference point cloud and the data point cloud, and the position and orientation of the data point cloud are adjusted so as to minimize the distance (error) between the points constituting each pair.
In fig. 3, for simplicity, the points 204 are arranged at equal intervals on a plurality of line segments. The point cloud of a real environment map M may have a more complex arrangement pattern. The environment map M is not limited to a point cloud map; it may be a map whose constituent elements are straight lines or curves, or an occupancy grid map. That is, the environment map M may have any structure that allows matching between the scan data and the environment map M. In the case of an occupancy grid map, Monte Carlo localization can also be performed. Hereinafter, embodiments of the present disclosure will be described taking matching based on the ICP algorithm as an example, but embodiments of the present disclosure are not limited to this example.
When the moving body 10 is located at each of the positions PA, PB, and PC shown in fig. 3, the scan data acquired by the external sensor 102 of the moving body 10 have mutually different point cloud arrangements. When the time taken by the moving body 10 to move from the position PA to the position PC is much longer than the scanning period of the external sensor 102, that is, when the movement of the moving body 10 is slow, two pieces of scan data adjacent on the time axis are extremely similar. However, when the moving body 10 moves very fast, two pieces of scan data adjacent on the time axis may differ greatly.
In this way, when the latest scan data among the scan data sequentially output from the external sensor 102 is similar to the immediately preceding scan data, the matching is relatively easy, and it can be expected to perform the matching with high reliability in a short time. However, when the moving speed of the moving body 10 is relatively high, the latest scan data may not be similar to the immediately preceding scan data, and the time required for matching may be long or the matching may not be completed within a predetermined time.
< Matching at the time of map creation >
In embodiments of the present disclosure, the mapping may be online or offline. Therefore, the following description can be applied to any of the offline SLAM and the online SLAM with respect to the principle of map creation. However, in the case of the offline SLAM, the time for the calculation required for creating a map using the scan data is not particularly limited, and matching that takes time but has fewer errors can be performed.
Fig. 4A is a diagram schematically showing an example of the scan data SD (t) acquired by the external sensor 102 at time t. The scan data SD (t) are represented in a sensor coordinate system whose position and posture change with the moving body 10. The scan data SD (t) are expressed in a UV coordinate system in which the front direction of the external sensor 102 is set as the V axis and the direction rotated 90 ° clockwise from the V axis is set as the U axis. The mobile body 10, more precisely the external sensor 102, is located at the origin of the UV coordinate system. In the present disclosure, when the moving body 10 advances, it advances toward the front of the external sensor 102, i.e., in the direction of the V axis. For easy understanding, the points constituting the scan data SD (t) are depicted as black dots.
In the present specification, the period in which the position estimation system 115 acquires the scan data from the external sensor 102 is referred to as Δt. Δt is, for example, 200 milliseconds. When the mobile body 10 moves, the content of the scan data periodically acquired from the external sensor 102 may change.
Fig. 4B is a diagram schematically showing an example of scan data SD (t+Δt) acquired by the external sensor 102 at time t+Δt. For easy understanding, the dots constituting the scan data SD (t+Δt) are described with white dots.
In the case where Δt is, for example, 200 milliseconds, when the mobile body 10 moves at a speed of 1 meter per second, the mobile body 10 moves by about 20 cm during Δt. Generally, the environment of the moving body 10 does not change greatly due to a movement of about 20 cm, and therefore, a wide range of overlap is included between the environment scanned by the external sensor 102 at time t+Δt and the environment scanned at time t. Therefore, a plurality of corresponding points are included between the point cloud of the scan data SD (t) and the point cloud of the scan data SD (t+Δt).
Fig. 4C schematically shows a state in which the matching of the scan data SD (t) and the scan data SD (t+Δt) is completed. In this example, the alignment is performed so that the scan data SD (t+Δt) is aligned with the scan data SD (t). The moving body 10 at time t is located at the origin of the UV coordinate system in fig. 4C, and the moving body 10 at time t+Δt is located at a position shifted from the origin of the UV coordinate system. The arrangement relation of one local coordinate system with respect to the other local coordinate system is obtained by matching the two pieces of scan data.
In this way, by connecting a plurality of scan data SD (t), SD (t+Δt), ..., SD (t+N×Δt) acquired periodically, a local environment map can be created. Here, N is an integer of 1 or more. By combining a plurality of local environment maps, the entire environment map can be obtained. In the case of online SLAM, the entire environment map obtained by this combination is completed. In the present disclosure, a "local environment map" created during movement of the mobile body is sometimes referred to as a "reference map" to distinguish it from the "environment map" used in offline SLAM.
Fig. 5 is a diagram schematically showing how the point cloud constituting the scan data at time t is rotated and translated from its initial position so as to approach the point cloud of the map. Let $Z_{t,k}$ be the coordinate value of the $k$-th point ($k = 1, 2, \dots, K-1, K$) of the $K$ points constituting the point cloud of the scan data at time $t$, and let $m_k$ be the coordinate value of the corresponding point on the map. At this time, the sum of squared errors calculated over the $K$ corresponding points, $\sum_{k=1}^{K} (Z_{t,k} - m_k)^2$, is evaluated as a cost function of the error between the corresponding points of the two point clouds. A rigid transformation of rotation and translation is determined so as to reduce $\sum_{k=1}^{K} (Z_{t,k} - m_k)^2$. The rigid transformation is defined by a transformation matrix (homogeneous transformation matrix) whose parameters include the rotation angle and the translation vector.
Fig. 6 is a diagram showing the position and posture after rigid body transformation of scan data. In the example shown in fig. 6, the matching of the scan data with the map has not been completed yet, and there is still a large error (positional deviation) between the two point clouds. In order to reduce the positional deviation, a rigid body transformation is further performed. In this way, when the error becomes lower than the predetermined value, the matching is completed.
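A minimal sketch of one such iteration, assuming standard point-to-point ICP with the closed-form (SVD-based) solution for the rigid transformation, is shown below in Python; it is illustrative and not the patent's specific implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_iteration(scan_xy, map_xy):
    """One ICP iteration: correspondence search + optimal rigid transform.

    Returns (R, t, mean_sq_error) such that scan_xy @ R.T + t best matches
    its nearest neighbors in map_xy in the least-squares sense.
    """
    # 1) correspondence search: nearest map point for every scan point
    tree = cKDTree(map_xy)
    _, idx = tree.query(scan_xy, k=1)
    corr = map_xy[idx]

    # 2) closed-form rigid transform (rotation + translation) minimizing
    #    the sum of squared errors between corresponding points
    p_mean, q_mean = scan_xy.mean(axis=0), corr.mean(axis=0)
    H = (scan_xy - p_mean).T @ (corr - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean

    err = np.mean(np.sum((scan_xy @ R.T + t - corr) ** 2, axis=1))
    return R, t, err
```

The iteration is repeated, reapplying the resulting transform to the scan, until the error falls below the predetermined value as described above.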
< Preparation of the reference map >
The production of a reference map in online SLAM will be described below.
Fig. 7A is a diagram schematically showing a state in which matching between the latest scan data SD (b) newly acquired and the scan data SD (a) acquired last time is completed. In fig. 7A, a point cloud of black points represents the last scan data, and a point cloud of white points represents the latest scan data. Fig. 7A shows a position a of the mobile body 10 when the previous scan data is acquired and a position b of the mobile body 10 when the latest scan data is acquired.
In this example, the scan data SD (a) acquired last time constitutes "the reference map RM". The reference map RM is a part of the environment map being created. The matching is performed so that the position and orientation of the latest scan data SD (b) are aligned with the position and orientation of the scan data SD (a) acquired last time.
By performing such matching, the position and posture of the mobile body 10 at position b on the reference map RM can be determined. After the matching is completed, the scan data SD (b) are added to the reference map RM to update the reference map RM.
The coordinate system of the scan data SD (b) is connected to the coordinate system of the scan data SD (a). This connection is expressed as a transformation matrix (rigid transformation) that defines the rotation and translation between the two coordinate systems. Using such a transformation matrix, the coordinate values of each point of the scan data SD (b) can be converted into coordinate values in the coordinate system of the scan data SD (a).
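For illustration, such a connection can be written with 2D homogeneous transformation matrices as in the following Python sketch; the numerical pose values are purely illustrative assumptions.

```python
import numpy as np

def pose_to_matrix(x, y, theta):
    """3x3 homogeneous transform of a 2D pose (rotation theta, translation x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Pose of the SD(b) coordinate system expressed in the SD(a) coordinate system
# (values are illustrative only).
T_a_b = pose_to_matrix(0.20, 0.05, np.deg2rad(3.0))

# A point measured in the SD(b) coordinate system, in homogeneous form (u, v, 1) ...
p_b = np.array([1.5, 2.0, 1.0])
# ... converted into the SD(a) coordinate system:
p_a = T_a_b @ p_b
```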
Fig. 7B shows the reference map RM updated by adding the next acquired scan data to the reference map RM of fig. 7A. In fig. 7B, the point cloud of black points represents the reference map RM before update, and the point cloud of white points represents the latest scan data SD (c). Fig. 7B shows positions a, B, and c of the moving object 10 when the previous, last, and latest scan data are acquired. The entirety of the point cloud of white points and the point cloud of black points of fig. 7B constitutes an updated reference map RM.
Fig. 7C shows the reference map RM updated by adding the newly acquired scan data SD (d) to the reference map RM of fig. 7B. In fig. 7C, the point cloud of black points represents the reference map RM before update, and the point cloud of white points represents the latest scan data SD (d). Fig. 7C shows, in addition to positions a, b, and C of the moving body 10 located at the past estimated positions, a position d of the moving body 10 located at a position estimated by matching of the latest scan data SD (d). The entirety of the point cloud of white points and the point cloud of black points of fig. 7C constitutes the updated reference map RM.
In this way, the reference map RM is updated successively, so the number of points in the reference map RM increases every time the external sensor 102 performs a scan. This leads to an increase in the amount of computation when the latest scan data are matched with the reference map RM. For example, when one scan contains at most about 1,000 points, connecting and combining 2,000 scans into one reference map RM can result in up to about 2,000,000 points in the reference map RM. When corresponding points are searched for and the matching calculation is iterated, if the point cloud of the reference map RM is too large, the matching may not be completed within the scan period Δt.
In the position estimation system of the present disclosure, a part other than a part including the latest scan data is deleted from the reference map updated a plurality of times, and the reference map is reset. In addition, at the time of resetting, the environment map can be updated based on the reference map that has been updated a plurality of times before resetting. The environment map itself prepared in advance by the offline SLAM can be held without being updated.
The reference map can be reset, for example, (i) when the number of times the reference map has been updated reaches a predetermined number, (ii) when the data amount of the reference map reaches a predetermined amount, or (iii) when the elapsed time since the last reset reaches a predetermined length. The "predetermined number" in case (i) may be, for example, 100. The "predetermined amount" in case (ii) may be, for example, 10,000 points. The "predetermined length" in case (iii) may be, for example, 5 minutes.
In order to minimize the amount of data of the reference map after the reset, the latest scan data, that is, the data acquired by the latest scan at the time of the reset, may be left, and the other scan data may be deleted. In the case where the number of points included in the latest scan data is equal to or less than a predetermined value, in order to improve the matching accuracy after the resetting, a plurality of scan data close to the current may be included in the reference map after the resetting in addition to the latest scan data.
When a reference map is created from a plurality of scan data, the point density per unit area of the point cloud may increase beyond a prescribed value, which is wasteful for matching. For example, if many points (measurement points) exist in a portion of the environment corresponding to a rectangular region of 10 cm × 10 cm, the matching accuracy saturates and does not improve enough to justify the increase in the amount of computation required for matching. To suppress such waste, the following processing may be performed: when the density of the point cloud constituting the scan data and/or the reference map exceeds a prescribed density, some points are thinned out of the point cloud so that its density is reduced to the prescribed density or less. The "prescribed density" may be, for example, 1 point per 10 cm × 10 cm.
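One simple way to keep the density at or below such a bound is grid-based thinning, sketched below in Python; the 10 cm cell corresponds to the example density above, and the rest of the implementation is an assumption.

```python
import numpy as np

def thin_point_cloud(points_xy, cell_size=0.10):
    """Keep at most one point per cell_size x cell_size cell (10 cm x 10 cm here),
    so that the point density never exceeds 1 point per cell."""
    pts = np.asarray(points_xy)
    cells = np.floor(pts / cell_size).astype(np.int64)
    # index of the first point falling in each occupied cell
    _, keep_idx = np.unique(cells, axis=0, return_index=True)
    return pts[np.sort(keep_idx)]
```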
< position estimation Using Environment map >
Fig. 8A is a diagram schematically showing an example of scan data SD (t) acquired by an external sensor at time t. The scan data SD (t) is represented by a sensor coordinate system whose position and posture change with the moving body 10, and points constituting the scan data SD (t) are described by white points.
Fig. 8B is a diagram schematically showing a state when the matching of the scan data SD (t) and the environment map M is started. When the processor 106 in fig. 1 acquires the scan data SD (t) from the external sensor 102, the processor can estimate the position and orientation of the mobile body 10 on the environment map M by matching the scan data SD (t) with the environment map M read from the storage device 104. When starting such matching, it is necessary to determine initial values of the position and orientation of the mobile body 10 at time t (see fig. 5). The closer the initial value is to the actual position and posture of the mobile body 10, the shorter the time required for matching.
Fig. 8C is a diagram schematically showing a state in which the matching of the scan data SD (t) and the environment map M is completed.
In the embodiments of the present disclosure, two methods can be employed in determining the initial value.
In the 1 st method, odometry is used to measure the amount of change from the position and orientation estimated by the last matching. For example, when the moving body 10 moves using two driving wheels, the amount and direction of movement of the moving body 10 can be obtained from encoders attached to the driving wheels or to their motors. Methods using odometry are well known and therefore need not be described in further detail.
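For a moving body with two driving wheels, one common form of this odometry update is sketched below in Python; the wheel separation (tread) of 0.5 m is an assumed example value, not a value from the present disclosure.

```python
import numpy as np

def odometry_update(x, y, theta, d_left, d_right, tread=0.5):
    """Update the pose (x, y, theta) from the distances d_left and d_right (meters)
    travelled by the left and right driving wheels since the last update.
    tread is the distance between the two driving wheels (assumed 0.5 m)."""
    d_center = (d_right + d_left) / 2.0          # distance travelled by the body center
    d_theta = (d_right - d_left) / tread         # change of orientation
    x += d_center * np.cos(theta + d_theta / 2.0)
    y += d_center * np.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```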
The 2 nd method predicts the current position and orientation from the history of the inferred values of the position and orientation of the mobile body 10. This point will be explained below.
< prediction of initial value >
Fig. 9 is a diagram schematically showing a history of the position and orientation of the mobile body 10 obtained in the past by the position estimation system 115 of fig. 1, and predicted values of the current position and orientation. The history of positions and gestures is stored in a memory 107 internal to the position inference system 115. Part or all of such history may be stored in a storage device external to the position estimation device 105, for example, the storage device 104 of fig. 1.
Fig. 9 also shows the UV coordinate system, which is the local coordinate system (sensor coordinate system) of the moving body 10. The scan data are represented using the UV coordinate system. The position of the mobile body 10 on the environment map M is the coordinate value $(x_i, y_i)$ of the origin of the UV coordinate system in the coordinate system of the environment map M. The posture (orientation) of the mobile body 10 is the orientation $\theta_i$ of the UV coordinate system with respect to the coordinate system of the environment map M, where $\theta_i$ is measured counterclockwise.
In an embodiment of the present disclosure, the predicted value of the current position and orientation is calculated from the history of positions and orientations acquired in the past by the position estimating device.
Let $(x_{i-1}, y_{i-1}, \theta_{i-1})$ be the position and posture of the moving body obtained by the last matching, and $(x_{i-2}, y_{i-2}, \theta_{i-2})$ be the position and posture obtained by the matching before that. Further, let $(x_i, y_i, \theta_i)$ be the predicted value of the current position and posture of the moving body. At this time, the following assumptions are made.
Assumption 1: The time required for the movement from position $(x_{i-1}, y_{i-1})$ to position $(x_i, y_i)$ is equal to the time required for the movement from position $(x_{i-2}, y_{i-2})$ to position $(x_{i-1}, y_{i-1})$.
Assumption 2: The moving speed during the movement from position $(x_{i-1}, y_{i-1})$ to position $(x_i, y_i)$ is equal to the moving speed during the movement from position $(x_{i-2}, y_{i-2})$ to position $(x_{i-1}, y_{i-1})$.
Under these assumptions, the following equation 1 holds.
[Mathematics 1]
$$\begin{pmatrix} x_i \\ y_i \end{pmatrix} = \begin{pmatrix} x_{i-1} \\ y_{i-1} \end{pmatrix} + \begin{pmatrix} \cos\Delta\theta & -\sin\Delta\theta \\ \sin\Delta\theta & \cos\Delta\theta \end{pmatrix} \begin{pmatrix} x_{i-1} - x_{i-2} \\ y_{i-1} - y_{i-2} \end{pmatrix}$$
where $\Delta\theta = \theta_i - \theta_{i-1}$.
Regarding the posture (orientation) of the moving body, the relationship of the following equation 2 holds.
[Mathematics 2]
$$\theta_i = \theta_{i-1} + \Delta\theta$$
If $\Delta\theta$ is approximated as zero, the matrix in the second term on the right side of equation 1 becomes an identity matrix, and the calculation can be simplified.
If assumption 1 above is not satisfied, let $\Delta t$ be the time required for the movement from position $(x_{i-1}, y_{i-1})$ to position $(x_i, y_i)$, and let $\Delta s$ be the time required for the movement from position $(x_{i-2}, y_{i-2})$ to position $(x_{i-1}, y_{i-1})$. In that case, it suffices to correct the right side of equation 1 by multiplying $(x_{i-1} - x_{i-2})$ and $(y_{i-1} - y_{i-2})$ by $\Delta t / \Delta s$, and to multiply $\Delta\theta$ by $\Delta t / \Delta s$ in the matrix on the right side of equation 1.
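Under assumptions 1 and 2 (and with the $\Delta t / \Delta s$ correction when assumption 1 does not hold), the prediction can be sketched as follows in Python. Here $\Delta\theta$ is approximated by the change in orientation between the two previous estimates, which is an assumption consistent with constant angular velocity; poses are simplified to (x, y, theta) tuples.

```python
import numpy as np

def predict_pose(prev2, prev1, dt_ratio=1.0):
    """Predict the current pose from the two most recent estimated poses.

    prev2, prev1: (x, y, theta) estimated two matchings ago and at the last matching.
    dt_ratio: the correction factor Δt/Δs when the two intervals differ (1.0 if equal).
    """
    x2, y2, th2 = prev2
    x1, y1, th1 = prev1
    dth = (th1 - th2) * dt_ratio            # predicted change of orientation
    dx = (x1 - x2) * dt_ratio               # previous displacement, corrected
    dy = (y1 - y2) * dt_ratio
    c, s = np.cos(dth), np.sin(dth)
    # rotate the previous displacement by dth and add it to the last position
    x = x1 + c * dx - s * dy
    y = y1 + s * dx + c * dy
    theta = th1 + dth
    return x, y, theta
```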
< Operation flow of the position estimation system >
The operation flow of the position estimation system according to the embodiment of the present disclosure will be described with reference to fig. 1 and 10 to 13.
First, refer to fig. 10.
In step S10, the processor 106 of the position estimation system 115 acquires the latest (current) scan data from the external sensor 102.
In step S12, the processor 106 obtains the current position and orientation values by the odometer. In this case, as described with reference to fig. 9, the current position and orientation values may be predicted.
In step S14, the processor 106 performs initial alignment of the latest scan data with respect to the environment map using the current position and orientation values acquired from the odometer as initial values.
In step S16, the processor 106 performs positional deviation correction based on the ICP algorithm.
In step S18, the processor 106 generates the 1 st estimated value of the position and orientation by offline SLAM.
In step S20, it is determined whether an event has occurred in which the 2 nd estimated value of the position and orientation obtained by online SLAM should be output as the selected estimated value instead of the 1 st estimated value obtained by offline SLAM. If no, the flow proceeds to step S21, and the 1 st estimated value is output as the selected estimated value. The process then returns to step S10 to acquire the next scan data. If yes, the process proceeds to step S22.
An example of the case where the determination is yes will be described.
Fig. 11A is a diagram showing an example of fluctuation of the amount of change $\Delta P_t = P_t - P_{t-1}$ of the 1 st estimated value based on the 1 st position estimation process (offline SLAM). Here, $P_t$ is the 1 st estimated value at the current time t, and $P_{t-1}$ is the 1 st estimated value one time step earlier (for example, 200 ms earlier); an abnormality in the estimation can be detected by monitoring this difference. For example, when the amount of change $\Delta P_t$ exceeds the threshold value shown by the broken line in Fig. 11A, the 2 nd estimated value based on the 2 nd position estimation process (online SLAM) can be selected instead of the 1 st estimated value based on the 1 st position estimation process (offline SLAM) as the more accurate estimate. In this case, the 2 nd estimated value based on the 2 nd position estimation process (online SLAM) need not be selected immediately when the amount of change $\Delta P_t$ exceeds the threshold value only once, but may be selected when it exceeds the threshold value a predetermined number of times (for example, three times) in succession.
Fig. 11B is a diagram showing an example of fluctuation in the difference between the 1 st estimated value based on the 1 st position estimation process (offline SLAM) and the measured value from a sensor. The measured value is the value of the position and orientation of the mobile body measured by odometry, for example using a rotary encoder. When the difference between the 1 st estimated value and the measured value exceeds the predetermined range (Range) in Fig. 11B, the 2 nd estimated value based on the 2 nd position estimation process (online SLAM) can be selected instead of the 1 st estimated value based on the 1 st position estimation process (offline SLAM) as the more accurate estimate. In this case, the 2 nd estimated value by the 2 nd position estimation process (online SLAM) need not be selected immediately when the difference goes out of the predetermined range only once, but may be selected when the difference goes out of the predetermined range a predetermined number of times (for example, three times) in succession.
Fig. 11C is a diagram showing an example of fluctuation in the reliability of the 1 st estimated value based on the 1 st position estimation process (offline SLAM) and in the reliability of the 2 nd estimated value based on the 2 nd position estimation process (online SLAM). In the figure, when the reliability of the 1 st estimated value, represented by black dots, is lower than the reliability of the 2 nd estimated value, represented by white dots, the 2 nd estimated value based on the 2 nd position estimation process (online SLAM) can be selected instead of the 1 st estimated value based on the 1 st position estimation process (offline SLAM) as the more accurate estimate. In this case as well, the 2 nd estimated value may be selected when the number of times the reliability of the 1 st estimated value falls below the reliability of the 2 nd estimated value exceeds a predetermined number.
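The three criteria of Figs. 11A to 11C, combined with the consecutive-occurrence condition, could be implemented along the lines of the following Python sketch; the thresholds, the use of scalar pose values, and the class structure are all illustrative assumptions rather than the disclosed implementation.

```python
class EstimateSelector:
    """Select between the 1 st (offline SLAM) and 2 nd (online SLAM) estimates.

    Switches to the 2 nd estimate when any criterion of Figs. 11A-11C holds
    for `consecutive` scans in a row. Poses are reduced to scalars for brevity.
    """

    def __init__(self, delta_p_threshold=0.3, odom_range=0.3, consecutive=3):
        self.delta_p_threshold = delta_p_threshold   # Fig. 11A threshold (assumed)
        self.odom_range = odom_range                 # Fig. 11B range (assumed)
        self.consecutive = consecutive               # e.g. three times in succession
        self.counts = [0, 0, 0]

    def select(self, p1, p1_prev, odom, rel1, rel2):
        criteria = [
            abs(p1 - p1_prev) > self.delta_p_threshold,  # Fig. 11A: jump in 1 st estimate
            abs(p1 - odom) > self.odom_range,            # Fig. 11B: deviation from odometry
            rel1 < rel2,                                 # Fig. 11C: lower reliability
        ]
        for i, hit in enumerate(criteria):
            self.counts[i] = self.counts[i] + 1 if hit else 0
        use_second = any(c >= self.consecutive for c in self.counts)
        return "2nd (online SLAM)" if use_second else "1st (offline SLAM)"
```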
Referring again to fig. 10.
In step S22, the processor 106 estimates the position and orientation by online SLAM. Specifically, the process proceeds to step S40. The flow of online SLAM is described later.
Next, the positional deviation correction in step S16 will be described with reference to fig. 12.
First, in step S32, the processor 106 searches for a corresponding point from the two sets of point clouds. Specifically, the processor 106 selects points on the environment map corresponding to points constituting the point cloud included in the scan data.
In step S34, the processor 106 performs a rigid body transformation (coordinate transformation) of rotation and translation of the scan data to reduce the corresponding inter-point distance between the scan data and the environment map. This is to optimize the parameters of the coordinate transformation matrix to reduce the distance between the corresponding points, i.e. the sum of the errors of the corresponding points (sum of squares). The optimization is performed by iterative computation.
In step S36, the processor 106 determines whether the result of the iterative calculation converges. Specifically, the processor 106 determines that convergence is achieved when the sum of errors (square sum) of corresponding points is reduced below a predetermined value even if the parameters of the coordinate transformation matrix are changed. When not converging, the process returns to step S32, and the processor 106 repeats the process from searching for the corresponding point. In step S36, when it is determined to be convergent, the process proceeds to step S38.
In step S38, the processor 106 converts the coordinate values of the scan data from the values of the sensor coordinate system to the values of the coordinate system of the environment map using the coordinate conversion matrix. The coordinate values of the scan data thus acquired can be used for updating the environment map.
Next, the estimation of the position and posture by online SLAM will be described with reference to fig. 13.
In step S40, the processor 106 of the position estimation system 115 acquires the latest (current) scan data from the external sensor 102.
In step S42, the processor 106 obtains the current position and orientation values by means of an odometer.
In step S44, the processor 106 performs initial alignment of the latest scan data with respect to the reference map using the current position and orientation values acquired from the odometer as initial values.
In step S46, the processor 106 performs positional deviation correction based on the ICP algorithm.
In step S48, the processor 106 generates the estimated value (2 nd estimated value) of the position and orientation of the mobile body obtained as a result of the comparison with the reference map. When the determination in step S20 of fig. 10 is "yes", the 2 nd estimated value is output as the selected estimated value instead of the 1 st estimated value. In addition, regardless of the result of the determination in step S20, the estimation of the position and orientation by online SLAM continues to be performed, and the 2 nd estimated value continues to be generated. Whether or not the 2 nd estimated value is used as the selected estimated value depends on whether or not any of the events described with reference to fig. 11 has occurred.
In step S50, it is determined whether the update condition is satisfied with respect to the map. As described above, the update condition is a condition such as (i) when the number of times the reference map is updated reaches a predetermined number of times, (ii) when the data amount of the reference map reaches a predetermined amount, or (iii) when the elapsed time from the last reset reaches a predetermined length. If no, the process returns to step S40 to acquire the next scan data. In the case of yes, the process proceeds to step S52.
In step S52, the processor 106 deletes a part other than a part including the latest scan data from the reference map updated a plurality of times, and resets the reference map. In this way, the number and density of points in the point cloud constituting the reference map can be reduced.
In addition, the determination made in step S20 of fig. 10 is also performed at any time, or periodically, during online SLAM. Therefore, when the online SLAM estimate is no longer required, the selected estimate quickly returns to the offline SLAM estimate.
The position estimation system of the present disclosure can be applied to various moving bodies that are moved by a variety of driving devices. The position estimation system of the present disclosure may also be used without being mounted on a moving body that has a driving device. For example, it may be mounted on a cart pushed by a user.
< exemplary embodiment >
Embodiments of a mobile body having a position estimation system according to the present disclosure will be described in more detail below. In the present embodiment, an unmanned vehicle is given as an example of the mobile body. In the following description, the unmanned conveyance vehicle is expressed as "AGV (Automatic Guided Vehicle)" using abbreviations. Hereinafter, the "AGV" is denoted by the reference numeral "10" in the same manner as the mobile body 10.
(1) Basic structure of system
Fig. 14 shows a basic configuration example of an exemplary mobile management system 100 of the present disclosure. The mobile body management system 100 includes at least one AGV 10 and an operation management device 50 that manages the operation of the AGV 10. Fig. 14 also illustrates a terminal device 20 operated by the user 1.
The AGV 10 is an unmanned transport vehicle capable of performing "unguided" traveling without requiring a guide such as a magnetic tape during traveling. The AGV 10 can estimate its own position and transmit the estimation result to the terminal device 20 and the operation management device 50. The AGV 10 can automatically travel in the environment S in accordance with an instruction from the operation management device 50.
The operation management device 50 is a computer system that manages the travel of each AGV 10 by tracking the position of each AGV 10. The operation management device 50 may be a desktop PC, a notebook PC, and/or a server computer. The operation management device 50 communicates with each AGV 10 via a plurality of access points 2. For example, the operation management device 50 transmits to each AGV 10 the coordinates of the position to which that AGV 10 should head next. Each AGV 10 periodically, for example every 250 milliseconds, transmits data indicating its own position and posture (orientation) to the operation management device 50. When the AGV 10 reaches the indicated position, the operation management device 50 transmits the coordinates of the next position to head to. The AGV 10 can also travel in the environment S in accordance with an operation of the user 1 input to the terminal device 20. An example of the terminal device 20 is a tablet computer.
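As a purely illustrative sketch of such a periodic report (the message format and field names are assumptions, not defined by the present disclosure), the payload an AGV might send every 250 milliseconds could look like the following Python example.

```python
import json
import time

def make_position_report(agv_id, x, y, theta):
    """Build an example position/orientation report an AGV could send to the
    operation management device every 250 ms (field names are assumptions)."""
    return json.dumps({
        "agv_id": agv_id,
        "timestamp": time.time(),
        "position": {"x": x, "y": y},      # coordinates on the environment map
        "orientation_rad": theta,          # posture (orientation) in radians
    })
```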
Fig. 15 shows an example of the environment S in which three AGVs 10a, 10b, and 10c are located. It is assumed that each AGV is traveling in the depth direction of the drawing. The AGVs 10a and 10b are transporting loads placed on their top plates. The AGV 10c travels following the AGV 10b in front of it. For convenience of explanation, the reference numerals 10a, 10b, and 10c are given in fig. 15, but these are hereinafter referred to collectively as "AGV 10".
The AGV 10 can transport the load by using a traction carriage connected to itself, in addition to the method of transporting the load placed on the top plate. Fig. 16 shows the AGV 10 and the traction cart 5 prior to connection. Casters are provided on each leg of the traction carriage 5. The AGV 10 is mechanically coupled to the traction carriage 5. FIG. 17 shows the AGV 10 and traction cart 5 after connection. When the AGV 10 travels, the traction carriage 5 is towed by the AGV 10. By pulling the traction carriage 5, the AGV 10 can transport the load placed on the traction carriage 5.
The method of connecting the AGV 10 to the traction carriage 5 is arbitrary. Here, one example will be described. A plate 6 is fixed to the top plate of the AGV 10. A guide 7 having a slit is provided on the traction carriage 5. The AGV 10 approaches the traction carriage 5 and inserts the plate 6 into the slit of the guide 7. When the insertion is completed, the AGV 10 passes an electromagnetic lock pin (not shown) through the plate 6 and the guide 7 and applies an electromagnetic lock. Thereby, the AGV 10 and the traction carriage 5 are physically connected.
Referring again to fig. 14. Each AGV 10 can perform communication according to the Bluetooth (registered trademark) standard by, for example, one-to-one connection with the terminal device 20. Each AGV 10 and the terminal apparatus 20 can also perform Wi-Fi (registered trademark) communication by using one or a plurality of access points 2. The plurality of access points 2 are connected to each other via, for example, a switching hub 3. In fig. 14 two access points 2a, 2b are depicted. The AGV 10 is wirelessly connected to the access point 2 a. The terminal device 20 is connected to the access point 2b wirelessly. The data sent from the AGV 10 is received by the access point 2a, transferred to the access point 2b via the switching hub 3, and transmitted from the access point 2b to the terminal device 20. The data transmitted from the terminal device 20 is received by the access point 2b, transferred to the access point 2a via the switching hub 3, and transmitted from the access point 2a to the AGV 10. Thereby, two-way communication between the AGV 10 and the terminal device 20 is achieved. The plurality of access points 2 are also connected to the operation management device 50 via the switching hub 3. Thereby, two-way communication is also achieved between the operation management device 50 and each AGV 10.
(2) Creating an environment map
In order for the AGV 10 to travel while estimating its own position by offline SLAM, a map of the environment S is created in advance. The AGV 10 is equipped with a position estimation device and an LRF, and can create an environment map using the output of the LRF.
The AGV 10 transitions to the data acquisition mode by the user's operation. In the data acquisition mode, the AGV 10 starts to acquire sensor data (scan data) using the LRF. The processing thereafter is as described above.
The movement in the environment S for acquiring the sensor data can be achieved by the AGV 10 traveling in accordance with operations of the user. For example, the AGV 10 wirelessly receives, from the user via the terminal device 20, a travel command instructing travel in each of the forward, backward, left, and right directions. The AGV 10 travels forward, backward, leftward, and rightward in the environment S in accordance with the travel commands to create the map. When the AGV 10 is connected by wire to an operating device such as a joystick, it can likewise travel forward, backward, leftward, and rightward in the environment S in response to control signals from the operating device to create the map. The sensor data may also be acquired by a person walking while pushing a measurement carriage on which the LRF is mounted.
Although a plurality of AGVs 10 are shown in fig. 14 and 15, one AGV may be used. When there are a plurality of AGVs 10, the user 1 can use the terminal device 20 to select one AGV 10 from the plurality of AGVs registered to create a map of the environment S.
(3) AGV structure
Fig. 18 is an external view of the exemplary AGV 10 according to the present embodiment. The AGV 10 has two drive wheels 11a and 11b, four casters 11c, 11d, 11e, and 11f, a frame 12, a conveyance table 13, a travel control device 14, and an LRF 15. Two drive wheels 11a and 11b are provided on the right and left sides of the AGV 10, respectively. Four casters 11c, 11d, 11e, 11f are arranged at four corners of the AGV 10. In addition, although the AGV 10 also has a plurality of motors connected to the two drive wheels 11a and 11b, the plurality of motors are not shown in fig. 18. In fig. 18, one drive wheel 11a and two casters 11c, 11e on the right side of the AGV 10, and a caster 11f on the left rear side are shown, but the left drive wheel 11b and the left front caster 11d are not shown explicitly because they are hidden by the frame 12. The four casters 11c, 11d, 11e, and 11f can freely turn. In the following description, the driving wheels 11a and 11b are also referred to as wheels 11a and 11b, respectively.
The travel control device 14 is a device that controls the operation of the AGV 10, and mainly includes an integrated circuit including a microcomputer (described below), electronic components, and a board on which these are mounted. The travel control device 14 performs the data transmission/reception with the terminal device 20 and the preprocessing calculations described above.
The LRF 15 is, for example, an optical device that measures the distance to a reflection point by emitting an infrared laser beam 15a and detecting the reflected light of the laser beam 15a. In the present embodiment, the LRF 15 of the AGV 10 emits pulsed laser beams 15a while changing the direction by 0.25 degrees at a time over a range of, for example, 135 degrees to each of the left and right of the front of the AGV 10 (270 degrees in total), and detects the reflected light of each laser beam 15a. This makes it possible to acquire distance data to reflection points in a total of 1081 directions determined in 0.25-degree steps. In the present embodiment, the scan of the surrounding space by the LRF 15 is substantially parallel to the ground and is planar (two-dimensional). However, the LRF 15 may also perform scanning in the height direction.
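As an illustrative sketch only (the function itself is not part of this disclosure; only the angular range, step, and number of directions come from the description above), one scan of 1081 range readings can be converted into (x, y) points in the sensor coordinate system as follows.

import math

FOV_DEG = 270.0                            # 135 degrees to each side of the front
STEP_DEG = 0.25                            # angular resolution of the scan
NUM_STEPS = int(FOV_DEG / STEP_DEG) + 1    # 1081 directions per scan

def scan_to_points(ranges_m):
    """Convert one scan (a list of 1081 ranges in meters) to points in the sensor frame."""
    points = []
    for i, r in enumerate(ranges_m):
        if r is None:                      # no reflection detected in this direction
            continue
        angle = math.radians(-FOV_DEG / 2 + i * STEP_DEG)   # front of the AGV = 0 rad
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points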
Based on the position and posture (orientation) of the AGV 10 and the scan results of the LRF 15, the AGV 10 can create a map of the environment S. The map can reflect the arrangement of structures such as walls and columns around the AGV and of objects placed on the floor. The map data is stored in a storage device provided in the AGV 10.
Hereinafter, the position and posture of the AGV 10, that is, the pose (x, y, θ), may be simply referred to as the "position".
As described above, the travel control device 14 compares the measurement result of the LRF 15 with the map data held by itself to estimate the current position of itself. The map data may be map data created by another AGV 10.
Fig. 19A shows a 1 st hardware configuration example of the AGV 10. Fig. 19A also shows a specific structure of the travel control device 14.
The AGV 10 has a travel control device 14, an LRF 15, two motors 16a and 16b, a drive device 17, and wheels 11a and 11b.
The travel control device 14 includes a microcomputer 14a, a memory 14b, a storage device 14c, a communication circuit 14d, and a position estimation device 14e. The microcomputer 14a, the memory 14b, the storage device 14c, the communication circuit 14d, and the position estimation device 14e are connected via a communication bus 14f, and can transmit and receive data to and from each other. The LRF 15 is also connected to the communication bus 14f via a communication interface (not shown), and transmits measurement data as a measurement result to the microcomputer 14a, the position estimation device 14e, and/or the memory 14b.
The microcomputer 14a is a processor or a control circuit (computer) that executes operations for controlling the entire AGV 10 including the travel control device 14. Typically, the microcomputer 14a is a semiconductor integrated circuit. The microcomputer 14a transmits a PWM (Pulse Width Modulation) signal as a control signal to the driving device 17, and thereby controls the driving device 17 so as to adjust the voltage applied to each motor. Thereby, the motors 16a and 16b each rotate at the desired rotational speed.
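As an illustrative sketch only (the wheel radius, tread, and duty-cycle mapping below are assumed values, not taken from this disclosure), the relationship between a commanded body motion and the left/right wheel speeds that are ultimately realized through PWM could be expressed as follows.

# Hypothetical differential-drive conversion; all geometry values are assumptions.
WHEEL_RADIUS_M = 0.1     # radius of drive wheels 11a and 11b
TREAD_M = 0.4            # distance between the two drive wheels
MAX_WHEEL_RAD_S = 20.0   # wheel speed corresponding to 100 % PWM duty

def body_to_wheel_speeds(v_m_s, omega_rad_s):
    """Convert linear/angular velocity of the body into left/right wheel speeds [rad/s]."""
    v_left = (v_m_s - omega_rad_s * TREAD_M / 2.0) / WHEEL_RADIUS_M
    v_right = (v_m_s + omega_rad_s * TREAD_M / 2.0) / WHEEL_RADIUS_M
    return v_left, v_right

def wheel_speed_to_duty(wheel_rad_s):
    """Map a wheel speed onto a PWM duty ratio clamped to [-1.0, 1.0]."""
    return max(-1.0, min(1.0, wheel_rad_s / MAX_WHEEL_RAD_S))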
One or more control circuits (for example, microcomputers) for controlling the driving of the left and right motors 16a and 16b may be provided independently of the microcomputer 14a. For example, the driving device 17 may have two microcomputers that control the driving of the motors 16a and 16b, respectively.
The memory 14b is a volatile storage device that stores a computer program executed by the microcomputer 14 a. The memory 14b may be used as a work memory for the operation of the microcomputer 14a and the position estimating device 14 e.
The storage device 14c is a nonvolatile semiconductor storage device. However, the storage device 14c may be a magnetic recording medium typified by a hard disk or an optical recording medium typified by an optical disk. The storage device 14c may include a head device for writing and/or reading data to/from any recording medium, and a control device for the head device.
The storage device 14c stores an environment map M of the environment S to be traveled, and data (travel path data) R of one or more travel paths. The environment map M is created by the AGV 10 operating in the map creation mode, and is stored in the storage device 14c. The travel path data R is transmitted from the outside after the map M is produced. In the present embodiment, the environment map M and the travel path data R are stored in the same storage device 14c, but may be stored in different storage devices.
An example of the travel path data R will be described.
In the case where the terminal device 20 is a tablet computer, the AGV 10 receives the travel path data R indicating a travel path from the tablet computer. The travel path data R in this case includes mark data indicating the positions of a plurality of marks. A "mark" indicates a passing position (via point) of the AGV 10 to be driven. The travel path data R includes at least position information of a start mark indicating the travel start position and of an end mark indicating the travel end position. The travel path data R may also contain position information of marks of one or more intermediate via points. When the travel path includes one or more intermediate via points, a path from the start mark through those via points in order to the end mark is defined as the travel path. The data of each mark may include, in addition to the coordinate data of the mark, data of the direction (angle) and the travel speed of the AGV 10 until it moves to the next mark. When the AGV 10 temporarily stops at the position of each mark to perform self-position estimation, notification to the terminal device 20, and the like, the data of each mark may include data of an acceleration time required to accelerate to the travel speed and/or a deceleration time required to decelerate from the travel speed and stop at the position of the next mark.
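As an illustrative sketch only (the field and class names are assumptions; only the kinds of information listed — coordinates, direction, travel speed, and optional acceleration/deceleration times — follow the description above), the travel path data R could be represented by a structure along the following lines.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Mark:
    x: float                              # coordinates of the via point
    y: float
    theta: float                          # direction (angle) until moving to the next mark
    speed: float                          # travel speed toward the next mark
    accel_time: Optional[float] = None    # time to accelerate to the travel speed
    decel_time: Optional[float] = None    # time to decelerate and stop at the next mark

@dataclass
class TravelPath:
    marks: List[Mark]                     # start mark, intermediate via points, end mark

    @property
    def start(self):
        return self.marks[0]

    @property
    def goal(self):
        return self.marks[-1]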
The movement of the AGV 10 may be controlled not by the terminal device 20 but by the operation management device 50 (e.g., a PC and/or a server computer). In this case, the operation management device 50 may instruct the AGV 10 to move to the next mark every time the AGV 10 reaches a mark. For example, the AGV 10 receives, from the operation management device 50, coordinate data of the next target position, or data of the distance to the target position and the angle in which to advance, as the travel path data R indicating the travel path.
The AGV 10 can travel along the stored travel route while estimating its own position using the created map and the sensor data output from the LRF 15 acquired during travel. Details of the operation in this regard are described above. According to the embodiment of the present disclosure, even in a case where a part of the environment map prepared in advance does not reflect the real environment, it is possible to continue to estimate the own position by rapidly switching to the online SLAM.
The communication circuit 14d is, for example, a wireless communication circuit that performs wireless communication according to the Bluetooth (registered trademark) and/or Wi-Fi (registered trademark) standards. Both standards include wireless communication using frequencies in the 2.4 GHz band. For example, in the mode in which the AGV 10 is driven to create a map, the communication circuit 14d performs wireless communication according to the Bluetooth (registered trademark) standard and communicates with the terminal device 20 one-to-one.
The position estimation device 14e performs the process of creating a map and the process of estimating the own position during traveling. The position estimation device 14e creates a map of the environment S based on the position and posture of the AGV 10 and the scan results of the LRF 15. During traveling, the position estimation device 14e receives sensor data from the LRF 15 and reads out the environment map M stored in the storage device 14c. The own position (x, y, θ) on the environment map M is identified by matching the local map data (sensor data) created from the scan results of the LRF 15 against the wider-range environment map M. The position estimation device 14e generates data indicating the "reliability", i.e., the degree to which the local map data matches the environment map M. The data of the own position (x, y, θ) and the reliability can be transmitted from the AGV 10 to the terminal device 20 or the operation management device 50. The terminal device 20 or the operation management device 50 can receive the data of the own position (x, y, θ) and the reliability and display them on a built-in or connected display device.
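As an illustrative sketch only (this is not the matching algorithm of the position estimation device 14e; the tolerance and the brute-force search are assumptions), one simple way to express a "reliability" for a candidate pose is the fraction of scan points that, after being transformed by the pose (x, y, θ), land close to some point of the environment map M.

import math

def match_reliability(scan_points, map_points, pose, tol=0.05):
    """Fraction of transformed scan points lying within `tol` meters of some map point."""
    x, y, theta = pose
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    hits = 0
    for sx, sy in scan_points:
        # transform the scan point from the sensor frame into the map frame
        wx = x + cos_t * sx - sin_t * sy
        wy = y + sin_t * sx + cos_t * sy
        if any(math.hypot(wx - mx, wy - my) <= tol for mx, my in map_points):
            hits += 1
    return hits / len(scan_points) if scan_points else 0.0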
In the present embodiment, the microcomputer 14a and the position estimation device 14e are different components, but this is only an example. The microcomputer 14a and the position estimation device 14e may be a single chip circuit or semiconductor integrated circuit capable of independently performing the respective operations. Fig. 19A shows a chip circuit 14g including the microcomputer 14a and the position estimation device 14e. Hereinafter, an example in which the microcomputer 14a and the position estimation device 14e are provided independently of each other will be described.
Two motors 16a and 16b are mounted to the two wheels 11a and 11b, respectively, to rotate the wheels. That is, the two wheels 11a and 11b are driving wheels, respectively. In this specification, a case will be described in which the motors 16a and 16b are motors that drive the right and left wheels of the AGV 10, respectively.
The mobile body 10 may further have a rotary encoder for measuring the rotational position or rotational speed of the wheels 11a and 11b. The microcomputer 14a may estimate the position and posture of the mobile body 10 using not only the signal received from the position estimation device 14e but also the signal received from the rotary encoder.
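As an illustrative sketch only (the encoder resolution and wheel geometry are assumed values, it is assumed here that both wheels carry encoders, and the actual estimation performed by the microcomputer 14a is not specified in this document), the measured wheel rotations can be integrated into a pose prediction by dead reckoning as follows.

import math

WHEEL_RADIUS_M = 0.1     # assumed radius of wheels 11a and 11b
TREAD_M = 0.4            # assumed distance between the two wheels
TICKS_PER_REV = 4096     # assumed rotary-encoder resolution

def integrate_odometry(x, y, theta, d_ticks_left, d_ticks_right):
    """Update the pose (x, y, theta) from the encoder tick increments of both wheels."""
    d_left = 2.0 * math.pi * WHEEL_RADIUS_M * d_ticks_left / TICKS_PER_REV
    d_right = 2.0 * math.pi * WHEEL_RADIUS_M * d_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / TREAD_M
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta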
The driving device 17 has motor driving circuits 17a and 17b, and the motor driving circuits 17a and 17b are used for adjusting voltages applied to the two motors 16a and 16b, respectively. The motor drive circuits 17a and 17b each include a so-called inverter circuit. The motor driving circuits 17a and 17b turn on or off the current flowing in each motor according to the PWM signal transmitted from the microcomputer 14a or the microcomputer within the motor driving circuit 17a, thereby adjusting the voltage applied to the motor.
Fig. 19B shows a 2 nd hardware configuration example of the AGV 10. The 2 nd hardware configuration example differs from the 1 st hardware configuration example (fig. 19A) in that it has a laser positioning system 14h and in that the microcomputer 14a is connected to each component individually.
The laser positioning system 14h has a position estimation device 14e and an LRF 15. The position estimation device 14e and the LRF 15 are connected by, for example, an ethernet (registered trademark) cable. The respective operations of the position estimation device 14e and the LRF 15 are as described above. The laser positioning system 14h outputs information indicating the posture (x, y, θ) of the AGV 10 to the microcomputer 14a.
The microcomputer 14a has various general purpose I/O interfaces or general purpose input/output ports (not shown). The microcomputer 14a is directly connected to other components in the travel control device 14 such as the communication circuit 14d and the laser positioning system 14h via the general-purpose input/output port.
With respect to fig. 19B, the configuration is the same as that of fig. 19A except for the above-described configuration. Therefore, the description of the same structure is omitted.
The AGV 10 in the embodiment of the present disclosure may have a safety sensor such as an obstacle detection sensor and a bumper switch, which are not shown.
(4) Structural example of operation management device
Fig. 20 shows an example of a hardware configuration of the operation management device 50. The operation management device 50 has a CPU 51, a memory 52, a position database (position DB) 53, a communication circuit 54, a map database (map DB) 55, and an image processing circuit 56.
The CPU 51, the memory 52, the position DB 53, the communication circuit 54, the map DB 55, and the image processing circuit 56 are connected via a communication bus 57, and can transmit and receive data to and from each other.
The CPU 51 is a signal processing circuit (computer) that controls the operation of the operation management device 50. Typically, the CPU 51 is a semiconductor integrated circuit.
The memory 52 is a volatile storage device that stores a computer program executed by the CPU 51. The memory 52 may also be used as a work memory when the CPU 51 performs an operation.
The position DB 53 stores position data indicating each position that may become a destination of each AGV 10. The position data may be expressed, for example, by coordinates virtually set within the factory by a manager. The position data is determined by the manager.
The communication circuit 54 performs wired communication according to the ethernet (registered trademark) standard, for example. The communication circuit 54 is connected to the access point 2 (fig. 14) in a wired manner and is capable of communicating with the AGV 10 via the access point 2. The communication circuit 54 receives data to be transmitted to the AGV 10 from the CPU 51 via the bus 57. In addition, the communication circuit 54 transmits data (notification) received from the AGV 10 to the CPU 51 and/or the memory 52 via the bus 57.
The map DB 55 stores data of maps of the interiors of factories and the like in which the AGVs 10 travel. The form of the data is not limited as long as the position of each AGV 10 can be placed in one-to-one correspondence with the map. For example, the maps stored in the map DB 55 may be maps created by CAD.
The position DB 53 and the map DB 55 may be built on a nonvolatile semiconductor memory, or may be built on a magnetic recording medium typified by a hard disk or an optical recording medium typified by an optical disk.
The image processing circuit 56 is a circuit that generates data of images to be displayed on the monitor 58. The image processing circuit 56 operates chiefly when the manager operates the operation management device 50; further detailed description is omitted in this embodiment. The monitor 58 may be integrated with the operation management device 50. The CPU 51 may perform the processing of the image processing circuit 56.
In the description of the above embodiments, an AGV traveling in two-dimensional space (on a floor surface) was given as an example. However, the present disclosure can also be applied to a moving body that moves in three-dimensional space, such as a flying body (unmanned aerial vehicle). When an unmanned aerial vehicle creates a map of three-dimensional space while flying, the two-dimensional space described above can be expanded to a three-dimensional space.
The techniques described in the above summary or detailed description may also be implemented by a system, a method, an integrated circuit, a computer program, or a recording medium. Alternatively, they may be implemented by any combination of systems, apparatuses, methods, integrated circuits, computer programs, and recording media.
Industrial applicability
The mobile object of the present disclosure can be suitably used for moving and transporting goods, components, finished products, and the like in factories, warehouses, construction sites, logistics, hospitals, and the like.
Description of the reference numerals
1: a user; 2a, 2b: an access point; 10: AGVs (moving bodies); 11a, 11b: a drive wheel (wheel); 11c, 11d, 11e, 11f: casters; 12: a frame; 13: a conveying table; 14: a travel control device; 14a: a microcomputer; 14b: a memory; 14c: a storage device; 14d: a communication circuit; 14e: a position estimating device; 16a, 16b: a motor; 15: a laser range finder; 17a, 17b: a motor driving circuit; 20: terminal devices (mobile computers such as tablet computers); 50: an operation management device; 51: a CPU;52: a memory; 53: a location database (location DB); 54: a communication circuit; 55: map database (map DB); 56: an image processing circuit; 100: a mobile management system.

Claims (16)

1. A position estimation system for a mobile body, which is used by being connected to an external sensor that repeatedly scans the environment and outputs scan data at each scan,
the position estimation system has:
At least one processor;
a 1 st memory that stores an environment map prepared in advance; and
a 2 nd memory storing a computer program for causing the processor to operate,
the at least one processor executes the following in accordance with instructions of the computer program:
(A) A 1 st position estimation process of generating a 1 st estimated value of the position and orientation of the mobile body from a result of comparing the environment map with the scan data;
(B) A 2 nd position estimation process of generating a 2 nd estimated value of the position and orientation of the mobile body based on a result of comparing the reference map with the scan data while generating a reference map of the surroundings using the scan data; and
(C) One of the 1 st estimated value and the 2 nd estimated value is selected, and the estimated value of the selected one is outputted as the estimated value selected for the position and orientation of the mobile body.
2. The position inference system of claim 1, wherein,
in performing the 2 nd position estimation process, the at least one processor performs the following in accordance with instructions of the computer program:
acquiring the scanning data from the external sensor, and creating a reference map based on the scanning data;
When the scan data is newly acquired from the external sensor, matching the newly acquired latest scan data with the reference map, thereby deducing the position and posture of the external sensor on the reference map, and adding the latest scan data to the reference map to update the reference map; and
and deleting a part except a part containing the latest scanning data from the reference map subjected to the multiple updating, and resetting the reference map.
3. The position estimation system according to claim 1 or 2, wherein,
the processor calculates the reliability of the 1 st estimated value, and outputs the 2 nd estimated value as the selected estimated value when the reliability is lower than a threshold.
4. The position estimation system according to claim 1 or 2, wherein,
the processor calculates a 1 st reliability of the 1 st estimated value and a 2 nd reliability of the 2 nd estimated value, and outputs the 2 nd estimated value as the selected estimated value when the 1 st reliability is lower than the 2 nd reliability.
5. The position estimation system according to claim 1 or 2, wherein,
the processor outputs the 2 nd estimated value as the selected estimated value when a change in the current value of the 1 st estimated value from a past value of the 1 st estimated value exceeds a prescribed range.
6. The position estimation system according to claim 1 or 2, wherein,
the processor calculates a difference between a measured value of the position and/or orientation obtained from the internal sensor and the 1 st estimated value, and outputs the 2 nd estimated value as the selected estimated value when the difference exceeds a threshold value.
7. The position estimation system according to claim 1 or 2, wherein,
when the operation of determining the 1 st estimated value cannot be completed within a prescribed time, the processor outputs the 2 nd estimated value as the selected estimated value.
8. The position inference system of claim 1, wherein,
in performing the 2 nd position estimation process, the at least one processor performs the following in accordance with instructions of the computer program:
acquiring the scanning data from the external sensor, and creating a reference map based on the scanning data;
when the scan data is newly acquired from the external sensor, matching the newly acquired latest scan data with the reference map, thereby deducing the position and posture of the external sensor on the reference map, and adding the latest scan data to the reference map to update the reference map;
Deleting a part other than a part containing the latest scan data from the reference map subjected to the plurality of updates, and resetting the reference map; and
when the resetting is performed, the environment map is updated according to the reference map which is updated for a plurality of times before the resetting.
9. The position inference system of claim 8, wherein,
the processor resets the reference map when the number of times the reference map is updated reaches a prescribed number of times.
10. The position inference system of claim 8, wherein,
the processor resets the reference map when the data amount of the reference map reaches a prescribed amount.
11. The position inference system of claim 8, wherein,
when the elapsed time from the last reset reaches a predetermined length, the processor resets the reference map.
12. The position estimation system according to claim 2 or 8, wherein,
the processor determines the amount of movement of the external sensor based on the output of the internal sensor,
the processor determines initial values of the position and the posture of the external sensor used in the matching according to the movement amount of the external sensor.
13. The position estimation system according to claim 2 or 8, wherein,
the processor calculates a predicted value of the current position and orientation of the external sensor from a history of the position and orientation of the external sensor,
the processor uses the predicted value as an initial value of the position and the posture of the external sensor used in the matching.
14. A mobile body, comprising:
the position inference system of any one of claims 1 to 13;
the external sensor; and
a driving device for moving.
15. The mobile unit according to claim 14, wherein,
the mobile body also has an inner sensor.
16. A recording medium storing a computer program for the position estimation system according to any one of claims 1 to 13.
CN201980022370.6A 2018-04-02 2019-03-28 Position estimation system, mobile body having the position estimation system, and recording medium Active CN111971633B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018070527 2018-04-02
JP2018-070527 2018-04-02
PCT/JP2019/013741 WO2019194079A1 (en) 2018-04-02 2019-03-28 Position estimation system, moving body comprising said position estimation system, and computer program

Publications (2)

Publication Number Publication Date
CN111971633A CN111971633A (en) 2020-11-20
CN111971633B true CN111971633B (en) 2023-10-20

Family

ID=68100670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980022370.6A Active CN111971633B (en) 2018-04-02 2019-03-28 Position estimation system, mobile body having the position estimation system, and recording medium

Country Status (2)

Country Link
CN (1) CN111971633B (en)
WO (1) WO2019194079A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3882649B1 (en) * 2020-03-20 2023-10-25 ABB Schweiz AG Position estimation for vehicles based on virtual sensor response
CN114993314A (en) * 2022-05-20 2022-09-02 上海景吾酷租科技发展有限公司 Fusion positioning method and system for hotel cleaning robot
JP7424438B1 (en) 2022-09-22 2024-01-30 いすゞ自動車株式会社 Vehicle position estimation device
CN116466382B (en) * 2023-04-24 2024-07-02 北京中软政通信息技术有限公司 GPS-based high-precision real-time positioning system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010086038A (en) * 2008-09-29 2010-04-15 Kajima Corp Moving body guidance system and guidance method
JP2010277548A (en) * 2009-06-01 2010-12-09 Hitachi Ltd Robot management system, robot management terminal, method for managing robot, and program
CN102057251A (en) * 2008-06-04 2011-05-11 株式会社日立制作所 Navigation device, navigation method and navigation system
CN105953798A (en) * 2016-04-19 2016-09-21 深圳市神州云海智能科技有限公司 Determination method and apparatus for poses of mobile robot
JP2017045447A (en) * 2015-08-28 2017-03-02 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Map generation method, own position estimation method, robot system and robot
JP2017146893A (en) * 2016-02-19 2017-08-24 トヨタ自動車株式会社 Self-position estimation method
CN107167148A (en) * 2017-05-24 2017-09-15 安科机器人有限公司 Synchronous superposition method and apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102057251A (en) * 2008-06-04 2011-05-11 株式会社日立制作所 Navigation device, navigation method and navigation system
JP2010086038A (en) * 2008-09-29 2010-04-15 Kajima Corp Moving body guidance system and guidance method
JP2010277548A (en) * 2009-06-01 2010-12-09 Hitachi Ltd Robot management system, robot management terminal, method for managing robot, and program
JP2017045447A (en) * 2015-08-28 2017-03-02 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Map generation method, own position estimation method, robot system and robot
CN106796434A (en) * 2015-08-28 2017-05-31 松下电器(美国)知识产权公司 Ground drawing generating method, self-position presumption method, robot system and robot
JP2017146893A (en) * 2016-02-19 2017-08-24 トヨタ自動車株式会社 Self-position estimation method
CN105953798A (en) * 2016-04-19 2016-09-21 深圳市神州云海智能科技有限公司 Determination method and apparatus for poses of mobile robot
CN107167148A (en) * 2017-05-24 2017-09-15 安科机器人有限公司 Synchronous superposition method and apparatus

Also Published As

Publication number Publication date
WO2019194079A1 (en) 2019-10-10
CN111971633A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
JP6816830B2 (en) A position estimation system and a mobile body equipped with the position estimation system.
US20200110410A1 (en) Device and method for processing map data used for self-position estimation, mobile body, and control system for mobile body
JP6825712B2 (en) Mobiles, position estimators, and computer programs
TWI665538B (en) A vehicle performing obstacle avoidance operation and recording medium storing computer program thereof
CN111971633B (en) Position estimation system, mobile body having the position estimation system, and recording medium
JP2019168942A (en) Moving body, management device, and moving body system
JP7111424B2 (en) Mobile object, position estimation device, and computer program
JP7081881B2 (en) Mobiles and mobile systems
JPWO2019026761A1 (en) Mobile and computer programs
JP7136426B2 (en) Management device and mobile system
JP2019053391A (en) Mobile body
WO2019054209A1 (en) Map creation system and map creation device
JP2019175137A (en) Mobile body and mobile body system
JP2020166702A (en) Mobile body system, map creation system, route creation program and map creation program
CN112578789A (en) Moving body
CN113711153B (en) Map creation system, signal processing circuit, mobile object, and map creation method
JP2019148871A (en) Movable body and movable body system
JPWO2019059299A1 (en) Operation management device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240223

Address after: Kyoto Japan

Patentee after: NIDECO Transmission Technology Co.,Ltd.

Country or region after: Japan

Address before: Kyoto City, Kyoto, Japan

Patentee before: NIDEC Corp.

Country or region before: Japan
