WO2019194079A1 - Position estimation system, moving body comprising said position estimation system, and computer program - Google Patents

Position estimation system, moving body comprising said position estimation system, and computer program

Info

Publication number
WO2019194079A1
Authority
WO
WIPO (PCT)
Prior art keywords
estimated value
position estimation
map
scan data
reference map
Prior art date
Application number
PCT/JP2019/013741
Other languages
French (fr)
Japanese (ja)
Inventor
Shinji SUZUKI
Tetsuo SAEKI
Original Assignee
NIDEC CORPORATION
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NIDEC CORPORATION
Priority to CN201980022370.6A priority Critical patent/CN111971633B/en
Publication of WO2019194079A1 publication Critical patent/WO2019194079A1/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions

Definitions

  • The present disclosure relates to a position estimation system and a moving body including the position estimation system.
  • The present disclosure also relates to a computer program used for position estimation.
  • Japanese Patent Application Laid-Open No. 2008-250905 discloses a mobile robot that performs self-position estimation by matching sensor data acquired from a laser range finder with a map prepared in advance.
  • However, the environment may change after the map is created. For example, layouts are often changed temporarily in factories and distribution warehouses.
  • Free space may also be reduced temporarily when a load, a device, or another moving body is placed in a place (free space) through which the moving body can pass on the map. If matching is performed using an environment map that does not reflect the actual environment, a situation can occur in which the self-position cannot be estimated.
  • Embodiments of the present disclosure provide a position estimation system and a moving body that can estimate the self-position even when a part of the map does not reflect the actual environment, and a computer program used for the position estimation.
  • In a non-limiting, exemplary embodiment, the position estimation system of the present disclosure is a position estimation system for a moving body, used in connection with an external sensor that repeatedly scans an environment and outputs sensor data for each scan. The system includes at least one processor, a first memory that stores an environmental map prepared in advance, and a second memory that stores a computer program that operates the processor.
  • In accordance with the commands of the computer program, the at least one processor generates a first estimated value of the position and orientation of the moving body based on the result of collation between the environmental map and the sensor data.
  • A moving body of the present disclosure includes the position estimation system described above, the external sensor, and a driving device for movement.
  • In a non-limiting, exemplary embodiment, the computer program of the present disclosure is used in any one of the above-described position estimation systems.
  • FIG. 1 is a diagram illustrating a configuration of an embodiment of a moving object according to the present disclosure.
  • FIG. 2 is a plan layout diagram schematically showing an example of an environment in which a moving body moves.
  • FIG. 3 is a diagram showing an environment map of the environment shown in FIG. 2.
  • FIG. 4A is a diagram schematically illustrating an example of scan data SD (t) acquired by the external sensor at time t.
  • FIG. 4B is a diagram schematically illustrating an example of scan data SD (t + ⁇ t) acquired by the external sensor at time t + ⁇ t.
  • FIG. 4C is a diagram schematically illustrating a state in which the scan data SD (t + ⁇ t) is matched with the scan data SD (t).
  • FIG. 5 is a diagram schematically illustrating how the point group constituting the scan data rotates and translates from the initial position and approaches the point group of the environment map.
  • FIG. 6 is a diagram illustrating the position and orientation of the scan data after the rigid-body transformation.
  • FIG. 7A is a diagram schematically illustrating a state in which scan data is acquired from an external sensor, a reference map is created from the scan data, and then newly acquired scan data is matched with the reference map.
  • FIG. 7B is a diagram schematically illustrating a reference map that is updated by adding newly acquired scan data to the reference map of FIG. 7A.
  • FIG. 7C is a diagram schematically illustrating a reference map that is updated by adding newly acquired scan data to the reference map of FIG. 7B.
  • FIG. 8A is a diagram schematically illustrating an example of scan data SD (t) acquired by the external sensor at time t.
  • FIG. 8B is a diagram schematically illustrating a state when matching of the scan data SD (t) with the environment map M is started.
  • FIG. 8C is a diagram schematically illustrating a state where the matching of the scan data SD (t) with the environment map M is completed.
  • FIG. 9 is a diagram schematically illustrating the history of the position and orientation of the moving body obtained in the past and the predicted values of the current position and orientation.
  • FIG. 10 is a flowchart illustrating a part of the operation of the position estimation device according to the embodiment of the present disclosure.
  • FIG. 11A is a diagram illustrating an example in which the amount of change in the first estimated value due to the first position estimation process (offline SLAM) varies.
  • FIG. 11B is a diagram illustrating an example in which the difference between the first estimated value obtained by the first position estimating process (offline SLAM) and the measured value obtained by the sensor varies.
  • FIG. 11C is a diagram illustrating an example in which the reliability of the first estimated value by the first position estimation process (offline SLAM) and the reliability of the second estimated value by the second position estimation process (online SLAM) vary.
  • FIG. 12 is a flowchart illustrating a part of the operation of the position estimation device according to the embodiment of the present disclosure.
  • FIG. 13 is a flowchart illustrating an example of second position estimation processing (online SLAM) of the position estimation device according to the embodiment of the present disclosure.
  • FIG. 14 is a diagram illustrating an overview of a control system that controls traveling of each AGV according to the present disclosure.
  • FIG. 15 is a perspective view showing an example of an environment where AGVs exist.
  • FIG. 16 is a perspective view showing the AGV and the towing cart before being connected.
  • FIG. 17 is a perspective view showing the AGV and the traction cart connected to each other.
  • FIG. 18 is an external view of an exemplary AGV according to the present embodiment.
  • FIG. 19A is a diagram illustrating a first hardware configuration example of AGV.
  • FIG. 19B is a diagram illustrating a second hardware configuration example of AGV.
  • FIG. 20 is a diagram illustrating a hardware configuration example of the operation management apparatus.
  • AGV: automated guided vehicle
  • An “automated guided vehicle” means a trackless vehicle onto which a load is loaded manually or automatically, which automatically travels to a designated place, and which is unloaded manually or automatically.
  • The term “automated guided vehicle” includes unmanned towing vehicles and unmanned forklifts.
  • “Unmanned” means that no person is required to steer the vehicle; it does not exclude the automated guided vehicle transporting a “person” (for example, a person who loads and unloads luggage).
  • An “unmanned towing vehicle” is a trackless vehicle that automatically pulls a cart onto which a load is loaded and unloaded manually or automatically, and that automatically travels to a designated place.
  • An “unmanned forklift” is a trackless vehicle that includes a mast for raising and lowering a load-transfer fork or the like, automatically transfers a load onto the fork, automatically travels to a designated place, and performs automatic cargo-handling work.
  • A “trackless vehicle” is a vehicle that includes wheels and an electric motor or engine that rotates the wheels.
  • A “moving body” is a device that moves while carrying a person or a load, and includes a driving device that generates a driving force (traction) for movement, such as wheels, a biped or multi-legged walking device, or a propeller.
  • The term “moving body” in the present disclosure includes not only automated guided vehicles in the narrow sense but also mobile robots, service robots, and drones.
  • “Automatic travel” includes travel based on commands from a computer-based operation management system to which the automated guided vehicle is connected by communication, and autonomous travel by a control device included in the automated guided vehicle. Autonomous travel includes not only travel in which the automated guided vehicle travels to a destination along a predetermined route, but also travel following a tracking target. The automated guided vehicle may also temporarily perform manual travel based on instructions from a worker. “Automatic travel” generally includes both “guided” travel and “guideless” travel, but in the present disclosure it means “guideless” travel.
  • The “guided type” is a system in which guides are installed continuously or intermittently and the automated guided vehicle is guided using the guides.
  • The “guideless type” is a system that performs guidance without installing guides.
  • An automated guided vehicle according to an embodiment of the present disclosure includes a position estimation device and can travel in a guideless manner.
  • The “position estimation device” is a device that estimates the self-position on an environmental map based on sensor data acquired by an external sensor such as a laser range finder.
  • An “external sensor” is a sensor that senses an external state of a moving body.
  • Examples of the external sensor include a laser range finder (also referred to as a range sensor), a camera (or image sensor), LiDAR (Light Detection and Ranging), a millimeter-wave radar, an ultrasonic sensor, and a magnetic sensor.
  • An “internal sensor” is a sensor that senses the state inside the moving body.
  • Examples of the internal sensor include a rotary encoder (hereinafter sometimes simply referred to as an “encoder”), an acceleration sensor, and an angular acceleration sensor (for example, a gyro sensor).
  • SLAM is an abbreviation of “Simultaneous Localization and Mapping”, and means that self-position estimation and environmental map creation are performed simultaneously.
  • In the exemplary embodiment shown in FIG. 1, the moving body 10 of the present disclosure includes an external sensor 102 that scans the environment and periodically outputs scan data.
  • A typical example of the external sensor 102 is a laser range finder (LRF).
  • The LRF periodically emits, for example, an infrared or visible laser beam to the surroundings to scan the surrounding environment.
  • The laser beam is reflected by surfaces such as walls, pillars, and other structures, and by objects placed on the floor.
  • The LRF receives the reflected light of the laser beam, calculates the distance to each reflection point, and outputs measurement result data indicating the position of each reflection point.
  • The position of each reflection point reflects the direction of arrival and the distance of the reflected light.
  • The measurement result data (scan data) may be referred to as “environmental measurement data” or “sensor data”.
  • The environment scan by the external sensor 102 covers, for example, a range of 135 degrees to each of the left and right (270 degrees in total) with respect to the front of the external sensor 102. Specifically, a pulsed laser beam is emitted while the direction is changed by a predetermined step angle in a horizontal plane, and the reflected light of each laser beam is detected to measure a distance. With a step angle of 0.3 degrees, distance measurements to the reflection points are obtained in the directions corresponding to a total of 901 steps (270 / 0.3 + 1).
  • The scanning of the surrounding space performed by the external sensor 102 is substantially parallel to the floor surface and is planar (two-dimensional). However, the external sensor may perform a three-dimensional scan.
  • A typical example of scan data is expressed by the position coordinates of the points constituting the point cloud acquired in each scan.
  • The position coordinates of the points are defined by a local coordinate system that moves together with the moving body 10.
  • Such a local coordinate system may be referred to as a moving-body coordinate system or a sensor coordinate system.
  • The origin of the local coordinate system fixed to the moving body 10 is defined as the “position” of the moving body 10, and the orientation of the local coordinate system is defined as the “posture” of the moving body 10.
  • The position and posture may be collectively referred to as the “pose”.
  • The “pose” may also be referred to simply as the “position”.
  • When the scan data is expressed in a polar coordinate system, it may consist of a set of numerical values indicating the position of each point by the “direction” and “distance” from the origin of the local coordinate system.
  • The polar coordinate representation may be converted into an orthogonal (Cartesian) coordinate representation, as sketched below.
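As an illustration of this conversion (a minimal sketch, not taken from the patent; the 270-degree field of view and 0.3-degree step angle are the example values given above), one scan in polar form can be converted to points in the sensor (UV) coordinate system as follows:

```python
import math

def scan_to_uv(distances, fov_deg=270.0, step_deg=0.3):
    """Convert one LRF scan from polar form (one distance per step angle)
    to (u, v) points in the sensor coordinate system described above:
    the V-axis points to the sensor front, the U-axis 90 degrees clockwise.
    Angles are measured from the V-axis toward the U-axis."""
    points = []
    for k, r in enumerate(distances):
        # Sweep from -135 degrees to +135 degrees relative to the front.
        angle = math.radians(-fov_deg / 2.0 + k * step_deg)
        u = r * math.sin(angle)  # lateral component
        v = r * math.cos(angle)  # forward component
        points.append((u, v))
    return points

# 270 / 0.3 + 1 = 901 measurement directions, as noted above.
cloud = scan_to_uv([3.0] * 901)  # hypothetical scan: every reflection at 3 m
```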
  • The moving body 10 includes a storage device (first memory) 104 that stores an environment map, and a position estimation system 115.
  • The environment map is prepared by the moving body 10 or another map creation device and stored in the storage device 104.
  • The position estimation system 115 is used by being connected to the external sensor 102, and includes a processor 106 and a memory (second memory) 107 that stores a computer program for controlling the operation of the processor.
  • Although FIG. 1 shows one processor 106 and one memory 107, the number of processors 106 and memories 107 may each be two or more.
  • The position estimation system 115 performs matching between the scan data acquired from the external sensor 102 and the environment map read from the storage device 104, and estimates the position and posture, that is, the pose, of the moving body 10. This matching is called pattern matching or scan matching and can be performed according to various algorithms.
  • A typical example of a matching algorithm is the Iterative Closest Point (ICP) algorithm.
  • The position estimation system 115 can also create an environmental map by matching and connecting a plurality of scan data sets output from the external sensor 102.
  • The position estimation system 115 in the embodiment of the present disclosure is realized by the processor 106 and the memory 107 that stores a computer program that operates the processor 106.
  • The processor 106 performs the following operations in accordance with the commands of the computer program.
  • First position estimation process (offline SLAM): generates a first estimated value of the position and posture of the external sensor 102 (that is, the position and posture of the moving body 10) based on the result of collation between the environmental map and the sensor data.
  • Second position estimation process (online SLAM): generates a second estimated value of the position and posture of the moving body 10 based on the result of matching the sensor data against a reference map created from previously acquired scan data, as described later.
  • The processor 106 executes the first position estimation process and the second position estimation process simultaneously or in parallel.
  • For example, the processor 106 may include a first processor part that executes the first position estimation process and a second processor part that executes the second position estimation process. Alternatively, one processor 106 may alternately perform the operations necessary for the first position estimation process and the second position estimation process.
  • When the processor 106 is outputting the first estimated value as the selected estimated value and any of the following events occurs, the processor 106 outputs the second estimated value as the selected estimated value instead of the first estimated value.
  • The reliability of the first estimated value (first reliability) and the reliability of the second estimated value (second reliability) are calculated, and the first reliability falls below the second reliability.
  • The current first estimated value has changed from the past first estimated values (time-series data) beyond a predetermined range.
  • The calculation for determining the first estimated value could not be completed within a predetermined time (the matching error did not converge to a sufficiently small level within the predetermined time).
  • The reliability can be expressed quantitatively by, for example, the final error (“positional shift amount” or “matching rate”) of the ICP matching described later. A code sketch of this selection rule follows.
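The sketch below is an illustrative reading of the three events above; the function and threshold names are hypothetical assumptions, and the patent does not prescribe specific values.

```python
import math

def pose_distance(p, q):
    # Euclidean distance between the (x, y) parts of two poses (x, y, theta).
    return math.hypot(p[0] - q[0], p[1] - q[1])

def select_estimate(first, second, prev_first_pose, finished_in_time,
                    max_change=0.5):
    """Return the selected estimate: the offline-SLAM first estimate by
    default, or the online-SLAM second estimate if any event occurs.

    `first` and `second` are (pose, reliability) pairs, where reliability
    is derived, for example, from the final ICP matching error. All names
    and the 0.5 m threshold are illustrative assumptions."""
    first_pose, first_rel = first
    second_pose, second_rel = second
    if first_rel < second_rel:                      # event: lower reliability
        return second_pose
    if prev_first_pose is not None and \
       pose_distance(prev_first_pose, first_pose) > max_change:
        return second_pose                          # event: abnormal jump
    if not finished_in_time:                        # event: no convergence
        return second_pose
    return first_pose
```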
  • Each of the above events occurs when the environment map used in offline SLAM does not accurately reflect the current environment, for example when a layout is temporarily changed in a factory or a distribution warehouse, or when a load, a device, or another moving body is placed in a place (free space) through which the moving body can pass on the environmental map. If matching is performed using the old environmental map as it is despite such an environmental change, the above events are likely to occur.
  • In the embodiment of the present disclosure, position estimation is performed by the first position estimation process (offline SLAM), while position estimation by the second position estimation process (online SLAM) is also performed in the background. For this reason, when a switch from the first estimated value to the second estimated value becomes necessary, it can be made quickly.
  • The second position estimation process (online SLAM) can be executed, for example, by the following process.
  • Scan data is acquired from the external sensor 102, and a reference map (local map) is created from the scan data.
  • The reference map is reset by deleting the part other than the part including the latest scan data from the reference map that has been updated a plurality of times.
  • The environment map may be updated based on the reference map that was updated a plurality of times before the reset (online map update).
  • When the environment map is updated in this way, even if the changed state of the environment (for example, a layout change) continues, the first estimated value obtained by the first position estimation process (offline SLAM) can again be used as the selected estimated value without the above-described events occurring.
  • The moving body 10 further includes a drive device 108, an automatic travel control device 110, and a communication circuit 112.
  • The drive device 108 is a device that generates the driving force for the moving body 10 to move.
  • Examples of the drive device 108 include a wheel (drive wheel) rotated by an electric motor or an engine, and a biped or multi-legged walking device operated by a motor or another actuator.
  • The wheel may be an omnidirectional wheel such as a Mecanum wheel.
  • The moving body 10 may also be a moving body that moves in the air or underwater, or a hovercraft.
  • In that case, the drive device 108 includes, for example, a propeller rotated by a motor.
  • The automatic travel control device 110 operates the drive device 108 to control the movement conditions (speed, acceleration, moving direction, and the like) of the moving body 10.
  • The automatic travel control device 110 may move the moving body 10 along a predetermined travel route, or may move it according to commands given from outside.
  • The position estimation system 115 calculates estimated values of the position and posture of the moving body 10 while the moving body 10 is moving or stopped.
  • The automatic travel control device 110 controls the travel of the moving body 10 with reference to these estimated values.
  • The position estimation system 115 and the automatic travel control device 110 may be collectively referred to as a travel control device 120.
  • The automatic travel control device 110 may be configured by the above-described processor 106 and the memory 107 storing a computer program for controlling the operation of the processor 106, shared with the position estimation system 115.
  • Such a processor 106 and memory 107 can be realized by one or a plurality of semiconductor integrated circuits.
  • The communication circuit 112 is a circuit through which the moving body 10 connects to a communication network and exchanges data and/or commands with an external management device, another moving body, or an operator's mobile terminal device.
  • FIG. 2 is a plan layout diagram schematically showing an example of an environment 200 in which the moving body 10 moves.
  • The environment 200 is part of a wider environment.
  • A thick straight line indicates, for example, a fixed wall 202 of a building.
  • FIG. 3 is a diagram showing a map (environment map M) of the environment 200 shown in FIG. 2.
  • Each dot 204 in the figure corresponds to a point of the point cloud constituting the environment map M.
  • The point cloud of the environment map M may be referred to as the “reference point cloud”, and the point cloud of the scan data may be referred to as the “data point cloud” or the “source point cloud”.
  • Matching is, for example, alignment of the scan data (data point cloud) with respect to the environmental map (reference point cloud), whose position is fixed.
  • In the matching, pairs of corresponding points are selected between the reference point cloud and the data point cloud, and the position and orientation of the data point cloud are adjusted so that the distances (errors) between the points of each pair are minimized.
  • In FIG. 3, the dots 204 are arranged at equal intervals on a plurality of line segments for simplicity.
  • The point cloud of an actual environment map M may have a more complicated arrangement pattern.
  • The environment map M is not limited to a point cloud map, and may be a map having straight lines or curves as constituent elements, or an occupancy grid map. That is, it is only necessary that the environment map M has a structure that allows matching between the scan data and the environment map M. In the case of an occupancy grid map, Monte Carlo position estimation can be performed.
  • Although an embodiment of the present disclosure is described taking matching by the ICP algorithm as an example, embodiments of the present disclosure are not limited to this example.
  • As the moving body 10 moves, the scan data acquired by the external sensor 102 has a different point cloud arrangement at each position. If the time taken to move from position PA through position PB to position PC is sufficiently long compared with the scanning period of the external sensor 102, that is, if the moving body 10 moves slowly, two scan data sets adjacent on the time axis are very similar. However, when the moving body 10 moves remarkably fast, two scan data sets adjacent on the time axis may differ greatly.
  • Map creation can be performed online or offline. The following description of the principle of map creation therefore applies to both offline SLAM and online SLAM. However, in the case of offline SLAM, there is no particular restriction on the time required to create a map from the scan data, so matching with a smaller error can be performed by taking more time.
  • FIG. 4A is a diagram schematically illustrating an example of scan data SD (t) acquired by the external sensor 102 at time t.
  • The scan data SD(t) is displayed in a sensor coordinate system whose position and orientation change together with the moving body 10.
  • The scan data SD(t) is represented in a UV coordinate system in which the front of the external sensor 102 is the V-axis and the direction rotated 90° clockwise from the V-axis is the U-axis.
  • The moving body 10 (more precisely, the external sensor 102) is located at the origin of the UV coordinate system. In the present disclosure, when the moving body 10 moves forward, it advances toward the front of the external sensor 102, that is, in the direction of the V-axis.
  • In FIG. 4A, the points constituting the scan data SD(t) are indicated by black circles.
  • The period at which the position estimation system 115 acquires scan data from the external sensor 102 is denoted by Δt.
  • Δt is, for example, 200 milliseconds.
  • FIG. 4B is a diagram schematically illustrating an example of scan data SD (t + ⁇ t) acquired by the external sensor 102 at time t + ⁇ t.
  • In FIG. 4B, the points constituting the scan data SD(t + Δt) are indicated by white circles.
  • If Δt is, for example, 200 milliseconds and the moving body 10 moves at a speed of 1 meter per second, the moving body 10 moves about 20 centimeters during Δt.
  • The environment around the moving body 10 does not change greatly over a movement of about 20 centimeters. Therefore, there is a wide overlap between the environment scanned by the external sensor 102 at time t + Δt and the environment scanned at time t, and many corresponding points exist between the point cloud of the scan data SD(t) and the point cloud of the scan data SD(t + Δt).
  • FIG. 4C schematically shows a state in which matching between the scan data SD(t) and the scan data SD(t + Δt) is completed.
  • In this example, alignment is performed so that the scan data SD(t + Δt) is aligned with the scan data SD(t).
  • The moving body 10 at time t is located at the origin of the UV coordinate system in FIG. 4C, and the moving body 10 at time t + Δt is at a position moved from that origin.
  • Through this matching, the arrangement relationship of one local coordinate system with respect to the other is obtained.
  • A local environment map can thus be created by connecting a plurality of periodically acquired scan data sets SD(t), SD(t + Δt), ..., SD(t + N·Δt).
  • Here, N is an integer of 1 or more.
  • FIG. 5 is a diagram schematically showing how the point cloud constituting the scan data at time t rotates and translates from its initial position and approaches the point cloud of the map.
  • Let $z_{t,k}$ denote the coordinates of the $k$-th point ($k = 1, 2, \ldots, K$) of the scan data acquired at time t, and let $m_k$ denote the coordinates of the corresponding point on the map.
  • The errors of the corresponding points in the two point clouds can then be evaluated using the cost function $\sum_{k=1}^{K} \| z_{t,k} - m_k \|^2$, the sum of the squared errors calculated over the $K$ corresponding point pairs.
  • A rigid-body transformation of rotation and translation is determined so as to reduce $\sum_{k=1}^{K} \| z_{t,k} - m_k \|^2$.
  • The rigid transformation is defined by a transformation matrix (homogeneous transformation matrix) that includes a rotation angle and a translation vector as parameters.
  • FIG. 6 is a diagram illustrating the position and orientation of the scan data after the rigid-body transformation.
  • In FIG. 6, matching between the scan data and the map is not yet complete, and a large error (positional deviation) still remains between the two point clouds.
  • Therefore, a rigid-body transformation is performed again.
  • When the error becomes smaller than a predetermined value, the matching is completed.
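For a fixed set of corresponding point pairs, the rotation and translation that minimize the squared-error cost above have a closed-form solution via the SVD (the Kabsch method), which is commonly used inside ICP implementations. The following NumPy sketch for 2-D points is illustrative and is not quoted from the patent:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Return R (2x2) and t (2,) minimizing sum ||R @ src_k + t - dst_k||^2
    for already-paired point arrays `src` and `dst` of shape (K, 2)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def matching_cost(src, dst, R, t):
    """Sum of squared errors between the transformed scan and map points."""
    return float(((src @ R.T + t - dst) ** 2).sum())
```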
  • FIG. 7A is a diagram schematically illustrating a state in which matching between the newly acquired latest scan data SD(b) and the previously acquired scan data SD(a) is completed.
  • The point cloud of black circles represents the previous scan data, and the point cloud of white circles represents the latest scan data.
  • FIG. 7A shows the position a of the moving body 10 when the previous scan data is acquired and the position b of the moving body 10 when the latest scan data is acquired.
  • The previously acquired scan data SD(a) constitutes the “reference map RM”.
  • The reference map RM is a part of the environmental map being created. Matching is executed so that the position and orientation of the latest scan data SD(b) align with the position and orientation of the previously acquired scan data SD(a).
  • In this way, the position and orientation of the moving body 10 at position b on the reference map RM can be known.
  • The scan data SD(b) is then added to the reference map RM to update the reference map RM.
  • By the matching, the coordinate system of the scan data SD(b) is connected to the coordinate system of the scan data SD(a).
  • This connection is represented by a matrix that defines the rotational and translational transformation (rigid transformation) between the two coordinate systems. Using such a transformation matrix, the coordinates of each point of the scan data SD(b) can be converted into coordinates in the coordinate system of the scan data SD(a).
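In two dimensions, this connection can be pictured as a 3 × 3 homogeneous transformation matrix. The sketch below (illustrative values and names, not the patent's notation) converts points of SD(b) into the coordinate system of SD(a):

```python
import numpy as np

def homogeneous_2d(theta, tx, ty):
    """3x3 homogeneous matrix for a 2-D rigid transform: rotation by
    `theta`, then translation by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Hypothetical matching result: pose of SD(b)'s frame seen from SD(a)'s frame.
T_ab = homogeneous_2d(np.radians(10.0), 0.20, 0.05)

# Points of SD(b) in SD(b)'s own coordinate system (K x 2).
points_b = np.array([[1.0, 2.0],
                     [0.5, 3.0]])

# Append 1 to each point and apply T_ab: coordinates in SD(a)'s system.
ones = np.ones((len(points_b), 1))
points_in_a = (np.hstack([points_b, ones]) @ T_ab.T)[:, :2]
```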
  • FIG. 7B shows the reference map RM updated by adding the next acquired scan data to the reference map RM of FIG. 7A.
  • The point cloud of black circles represents the reference map RM before the update, and the point cloud of white circles represents the latest scan data SD(c).
  • FIG. 7B also shows the positions a, b, and c of the moving body 10 at which the scan data before last, the previous scan data, and the latest scan data were acquired, respectively.
  • Together, the white-circle point cloud and the black-circle point cloud in FIG. 7B constitute the updated reference map RM.
  • FIG. 7C shows the reference map RM updated by adding the newly acquired scan data SD(d) to the reference map RM of FIG. 7B.
  • The point cloud of black circles represents the reference map RM before the update, and the point cloud of white circles represents the latest scan data SD(d).
  • Also shown is the position d of the moving body 10 estimated by the matching of the latest scan data SD(d).
  • Together, the white-circle point cloud and the black-circle point cloud in FIG. 7C constitute the updated reference map RM.
  • The number of points in the reference map RM increases every time the external sensor 102 performs a scan. This increases the amount of calculation required to match the latest scan data with the reference map RM. For example, if one scan data set contains a maximum of about 1,000 points, then when 2,000 scan data sets are connected to create one reference map RM, the number of points in the reference map RM reaches a maximum of about 2,000,000. Since the matching repeats corresponding-point search and calculation, if the point cloud of the reference map RM is too large, the matching may not be completed within Δt, the scan period.
  • Therefore, in the embodiment of the present disclosure, the reference map is reset by deleting the part other than the part including the latest scan data from the reference map that has been updated a plurality of times. At the time of the reset, the environmental map can also be updated based on the reference map that was updated a plurality of times before the reset.
  • Alternatively, the environment map itself prepared in advance for the offline SLAM can be maintained without being updated.
  • The “predetermined number” in the case of (i) may be, for example, 100.
  • The “predetermined amount” in the case of (ii) may be, for example, 10,000.
  • The “predetermined length” in the case of (iii) can be, for example, 5 minutes.
  • This is because, as the reference map grows, the matching accuracy saturates and stops improving, while the amount of calculation required for the matching keeps increasing.
  • When the density of the point cloud constituting the scan data and/or the reference map exceeds a predetermined density, a process may be performed that thins out points from the point cloud so as to reduce its density to the predetermined density or less, as sketched below.
  • The “predetermined density” may be, for example, 1 point / (10 cm)².
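One simple way to enforce such a density limit is grid downsampling: keep at most one point per square cell whose area matches the predetermined density. A minimal sketch, assuming the 1 point / (10 cm)² example above:

```python
def thin_point_cloud(points, cell=0.10):
    """Keep at most one point per (cell x cell) square, capping the density
    at 1 point per cell^2; cell=0.10 m corresponds to 1 point / (10 cm)^2.

    `points` is an iterable of (x, y) coordinates in meters."""
    kept = []
    occupied = set()
    for x, y in points:
        key = (int(x // cell), int(y // cell))  # index of the grid cell
        if key not in occupied:
            occupied.add(key)
            kept.append((x, y))
    return kept
```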
  • FIG. 8A is a diagram schematically illustrating an example of scan data SD (t) acquired by an external sensor at time t.
  • The scan data SD(t) is displayed in a sensor coordinate system whose position and orientation change together with the moving body 10, and the points constituting the scan data SD(t) are indicated by white circles.
  • FIG. 8B is a diagram schematically illustrating a state when matching of the scan data SD (t) with the environment map M is started.
  • When the processor 106 in FIG. 1 acquires the scan data SD(t) from the external sensor 102, it performs matching between the scan data SD(t) and the environment map M read from the storage device 104, thereby estimating the position and orientation of the moving body 10 on the environment map M.
  • FIG. 8C is a diagram schematically illustrating a state where the matching of the scan data SD(t) with the environment map M is completed.
  • In the first method of obtaining initial values for this matching, the amount of change from the position and orientation estimated by the previous matching is measured by odometry.
  • Specifically, the movement amount and movement direction of the moving body 10 can be obtained from encoders attached to the respective drive wheels or motors, as sketched below. Since the method using odometry is well known, no further detailed explanation is necessary.
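For a vehicle with two drive wheels such as the AGV described later, the odometry update from encoder increments can be sketched with the standard differential-drive dead-reckoning equations (parameter names are illustrative; the patent does not spell out these formulas):

```python
import math

def odometry_update(pose, d_left, d_right, wheel_base):
    """Advance a pose (x, y, theta) using the distances d_left and d_right
    traveled by the left and right drive wheels since the last update,
    for drive wheels separated by `wheel_base` (all in meters)."""
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # travel of the vehicle center
    d_theta = (d_right - d_left) / wheel_base  # change of heading
    # Evaluate the heading at the middle of the step for a better arc estimate.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```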
  • The second method is to predict the current position and posture based on the history of the estimated values of the position and posture of the moving body 10.
  • FIG. 9 is a diagram schematically showing the history of the positions and orientations of the moving body 10 obtained in the past by the position estimation system 115 of FIG. 1, and the predicted value of the current position and orientation.
  • The position and orientation history is stored in the memory 107 inside the position estimation system 115. Part or all of this history may instead be stored in a storage device outside the position estimation system, for example the storage device 104 in FIG. 1.
  • FIG. 9 also shows a UV coordinate system that is a local coordinate system (sensor coordinate system) of the moving body 10.
  • Scan data is expressed in the UV coordinate system.
  • The position of the moving body 10 on the environment map M is the coordinate value (x_i, y_i) of the origin of the UV coordinate system in the coordinate system of the environment map M, and the posture (orientation) of the moving body 10 is the orientation (θ_i) of the UV coordinate system with respect to the coordinate system of the environment map M. θ_i is “positive” in the counterclockwise direction.
  • The predicted value of the current position and orientation is calculated from the history of the positions and orientations obtained in the past by the position estimation device.
  • Let the position and orientation of the moving body obtained by the previous matching be (x_{i-1}, y_{i-1}, θ_{i-1}), and let the position and orientation obtained by the matching before that be (x_{i-2}, y_{i-2}, θ_{i-2}). Further, let the predicted value of the current position and orientation of the moving body be (x_i, y_i, θ_i). At this time, the following assumption is made.
  • The moving speed during the movement from the position (x_{i-1}, y_{i-1}) to the position (x_i, y_i) is equal to the moving speed during the movement from the position (x_{i-2}, y_{i-2}) to the position (x_{i-1}, y_{i-1}).
  • When the approximation that Δθ is zero is applied, the matrix in the second term on the right-hand side of Equation 2 can be simplified to the identity matrix.
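Equation 2 itself is not reproduced in this text. Under the constant-speed assumption above, a plausible form consistent with the description (a reconstruction, not a quotation of the patent's equation) is:

$$
\begin{pmatrix} x_i \\ y_i \end{pmatrix}
=
\begin{pmatrix} x_{i-1} \\ y_{i-1} \end{pmatrix}
+
\begin{pmatrix}
\cos\Delta\theta & -\sin\Delta\theta \\
\sin\Delta\theta & \cos\Delta\theta
\end{pmatrix}
\begin{pmatrix} x_{i-1} - x_{i-2} \\ y_{i-1} - y_{i-2} \end{pmatrix},
\qquad
\Delta\theta = \theta_{i-1} - \theta_{i-2},
$$

with $\theta_i = \theta_{i-1} + \Delta\theta$. Setting $\Delta\theta = 0$ turns the rotation matrix (the matrix of the second term on the right-hand side) into the identity, which matches the simplification described above: the prediction becomes a simple linear extrapolation of the previous displacement.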
  • In step S10, the processor 106 of the position estimation system 115 acquires the latest scan data from the external sensor 102.
  • In step S12, the processor 106 acquires the current position and orientation values by odometry.
  • The current position and orientation values may instead be predicted as described with reference to FIG. 9.
  • In step S14, the processor 106 performs initial alignment of the latest scan data with respect to the environment map, using the current position and orientation values obtained from odometry as initial values.
  • In step S16, the processor 106 performs positional-deviation correction by the ICP algorithm.
  • In step S18, the processor 106 generates a first estimated value of the position and orientation by offline SLAM.
  • In step S20, it is determined whether an event has occurred that requires outputting the second estimated value of the position and orientation by online SLAM as the selected estimated value instead of the first estimated value of the position and orientation by offline SLAM. If No, the process proceeds to step S21 and outputs the first estimated value as the selected estimated value; thereafter, the process returns to step S10 and the next scan data is acquired. If Yes, the process proceeds to step S22.
  • FIG. 11A is a diagram illustrating an example in which the change amount ΔP_t of the first estimated value by the first position estimation process (offline SLAM), that is, P_t − P_{t−1}, fluctuates. Here, P_t is the first estimated value at the current time t and P_{t−1} is the first estimated value one time step earlier (for example, 200 milliseconds earlier); by monitoring their difference, an estimation abnormality can be detected. For example, when the change amount ΔP_t exceeds the threshold indicated by the broken line in FIG. 11A, the second estimated value by the second position estimation process (online SLAM) can be selected as a more accurate estimated value instead of the first estimated value by the first position estimation process (offline SLAM).
  • Alternatively, the second estimated value by the second position estimation process (online SLAM) may be selected not immediately but only when the change amount exceeds the threshold a predetermined number of times in succession (for example, three times).
  • FIG. 11B is a diagram illustrating an example in which the difference between the first estimated value obtained by the first position estimation process (offline SLAM) and a measured value obtained by a sensor fluctuates.
  • Here, the measured value is the position and orientation of the moving body measured by odometry, for example using rotary encoders.
  • When the difference deviates from a predetermined range, the second estimated value by the second position estimation process (online SLAM) can be selected as a more accurate estimated value instead of the first estimated value by the first position estimation process (offline SLAM).
  • Also in this case, the second estimated value may be selected not immediately but only when the difference deviates from the predetermined range a predetermined number of times in succession (for example, three times).
  • FIG. 11C is a diagram illustrating an example in which the reliability of the first estimated value by the first position estimation process (offline SLAM) and the reliability of the second estimated value by the second position estimation process (online SLAM) fluctuate.
  • When the reliability of the first estimated value by the first position estimation process (offline SLAM) falls below the reliability of the second estimated value, the second estimated value by the second position estimation process (online SLAM) can be selected as the more accurate estimated value.
  • Also in this case, the second estimated value may be selected when the number of successive times the reliability of the first estimated value falls below the reliability of the second estimated value exceeds a predetermined number.
  • In step S22, the processor 106 performs position and orientation estimation by online SLAM. Specifically, the process proceeds to step S40.
  • The online SLAM flow is described later with reference to FIG. 13.
  • In step S32, the processor 106 searches for corresponding points between the two point clouds. Specifically, the processor 106 selects, for each point of the point cloud included in the scan data, a corresponding point on the environment map.
  • In step S34, the processor 106 applies a rigid transformation (coordinate transformation) of rotation and translation to the scan data so as to reduce the distances between the corresponding points of the scan data and the environment map. That is, the parameters of the coordinate transformation matrix are optimized so as to reduce the total sum of the (squared) errors of the corresponding points. This optimization is performed by iterative calculation.
  • In step S36, the processor 106 determines whether the iterative calculation has converged. Specifically, the processor 106 determines that the calculation has converged when the reduction in the total sum of the (squared) errors of the corresponding points falls below a predetermined value no matter how the parameters of the coordinate transformation matrix are changed. If it has not converged, the process returns to step S32, and the processor 106 repeats the processing from the corresponding-point search. If it is determined in step S36 that the calculation has converged, the process proceeds to step S38.
  • In step S38, the processor 106 converts the coordinate values of the scan data from values in the sensor coordinate system to values in the coordinate system of the environment map, using the coordinate transformation matrix.
  • The coordinate values of the scan data obtained in this way can be used to update the environment map. A sketch of this loop follows.
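Steps S32 to S38 amount to the classic ICP loop: alternate the corresponding-point search with the closed-form rigid-transform update (`best_rigid_transform` from the earlier sketch), until the squared-error sum stops decreasing. The following sketch is illustrative, not the patent's exact procedure:

```python
import numpy as np

def icp(scan, map_points, R0, t0, max_iters=50, tol=1e-6):
    """Align `scan` (N x 2) to `map_points` (M x 2), starting from the
    initial pose (R0, t0) given by odometry or prediction. Relies on
    best_rigid_transform() from the earlier sketch."""
    R, t = R0, t0
    prev_cost = np.inf
    for _ in range(max_iters):
        moved = scan @ R.T + t
        # S32: for each scan point, pick the nearest map point (brute force).
        d2 = ((moved[:, None, :] - map_points[None, :, :]) ** 2).sum(axis=2)
        matched = map_points[d2.argmin(axis=1)]
        # S34: closed-form rotation/translation for these correspondences.
        R, t = best_rigid_transform(scan, matched)
        # S36: stop when the error sum no longer decreases appreciably.
        cost = ((scan @ R.T + t - matched) ** 2).sum()
        if prev_cost - cost < tol:
            break
        prev_cost = cost
    # S38: (R, t) now converts scan coordinates into map coordinates.
    return R, t
```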
  • In step S40, the processor 106 of the position estimation system 115 acquires the latest scan data from the external sensor 102.
  • In step S42, the processor 106 acquires the current position and orientation values by odometry.
  • In step S44, the processor 106 performs initial alignment of the latest scan data with respect to the reference map, using the current position and orientation values obtained from odometry as initial values.
  • In step S46, the processor 106 performs positional-deviation correction by the ICP algorithm.
  • In step S48, the processor 106 generates an estimated value (second estimated value) of the position and orientation of the moving body obtained as a result of the matching with the reference map.
  • This second estimated value is output as the selected estimated value instead of the first estimated value when a “Yes” determination is made in step S20 of FIG. 10.
  • The estimation of the position and orientation by online SLAM is continuously executed, and the second estimated value is generated. Whether the second estimated value is adopted as the selected estimated value depends on the presence or absence of the events described with reference to FIGS. 11A to 11C.
  • In step S50, it is determined whether the reference map satisfies an update condition.
  • The update conditions are, for example: (i) the number of times the reference map has been updated reaches a predetermined number; (ii) the data amount of the reference map reaches a predetermined amount; or (iii) a predetermined length of time has elapsed since the previous reset. If No, the process returns to step S40 and the next scan data is acquired. If Yes, the process proceeds to step S52.
  • In step S52, the processor 106 deletes the part other than the part including the latest scan data from the reference map that has been updated a plurality of times, and resets the reference map. In this way, the number and density of the points of the point cloud constituting the reference map can be reduced.
  • After the reset, step S20 of FIG. 10 determines whether online SLAM is still needed. For this reason, when online SLAM is no longer needed, the process quickly returns from online SLAM to offline SLAM.
  • The position estimation system according to the present disclosure can be applied to various moving bodies that are moved by various drive devices.
  • The position estimation system according to the present disclosure need not be used mounted on a moving body that includes a drive device. For example, it may be placed on a handcart pushed by a user and used for map creation.
  • In the following description, an automated guided vehicle is taken as an example of the moving body.
  • The abbreviation “AGV” (Automatic Guided Vehicle) is used for the automated guided vehicle.
  • Like the moving body 10, the AGV is also denoted by the reference numeral “10”.
  • FIG. 14 shows a basic configuration example of an exemplary mobile management system 100 according to the present disclosure.
  • The mobile management system 100 includes at least one AGV 10 and an operation management device 50 that manages the operation of each AGV 10.
  • FIG. 14 also shows a terminal device 20 operated by a user 1.
  • The AGV 10 is an automated guided vehicle capable of “guideless” travel that does not require a guide such as magnetic tape for traveling.
  • The AGV 10 can perform self-position estimation and transmit the estimation result to the terminal device 20 and the operation management device 50.
  • The AGV 10 can travel automatically in the environment S according to commands from the operation management device 50.
  • The operation management device 50 is a computer system that tracks the position of each AGV 10 and manages the travel of each AGV 10.
  • The operation management device 50 may be a desktop PC, a notebook PC, and/or a server computer.
  • The operation management device 50 communicates with each AGV 10 via a plurality of access points 2. For example, the operation management device 50 transmits to each AGV 10 the coordinates of the position to which that AGV 10 should go next.
  • Each AGV 10 periodically, for example every 250 milliseconds, transmits data indicating its position and orientation to the operation management device 50.
  • The operation management device 50 then transmits the coordinates of the next position to which the AGV 10 should head.
  • The AGV 10 can also travel in the environment S according to operations of the user 1 input to the terminal device 20.
  • An example of the terminal device 20 is a tablet computer.
  • FIG. 15 shows an example of an environment S in which three AGVs 10a, 10b, and 10c exist. All the AGVs are assumed to be traveling in the depth direction in the figure. The AGVs 10a and 10b are transporting loads placed on their top boards. The AGV 10c travels following the AGV 10b ahead of it. For convenience of explanation, the reference numerals 10a, 10b, and 10c are used in FIG. 15, but below they are described as “AGV 10”.
  • The AGV 10 can also transport a load using a towing cart connected to itself.
  • FIG. 16 shows the AGV 10 and the towing cart 5 before being connected.
  • Casters are provided on each leg of the towing cart 5.
  • The AGV 10 is mechanically connected to the towing cart 5.
  • FIG. 17 shows the AGV 10 and the towing cart 5 connected to each other.
  • The method of connecting the AGV 10 and the towing cart 5 is arbitrary.
  • In this example, a plate 6 is fixed to the top board of the AGV 10,
  • and the towing cart 5 is provided with a guide 7 having a slit.
  • The AGV 10 approaches the towing cart 5 and inserts the plate 6 into the slit of the guide 7.
  • The AGV 10 then passes an electromagnetic lock pin (not shown) through the plate 6 and the guide 7 and engages the electromagnetic lock. Thereby, the AGV 10 and the towing cart 5 are physically connected.
  • Each AGV 10 and the terminal device 20 can be connected, for example, on a one-to-one basis, and can perform communication based on the Bluetooth (registered trademark) standard.
  • Each AGV 10 and the terminal device 20 can perform communication based on Wi-Fi (registered trademark) using one or a plurality of access points 2.
  • The plurality of access points 2 are connected to each other via, for example, a switching hub 3.
  • FIG. 14 shows two access points 2a and 2b.
  • In FIG. 14, the AGV 10 is wirelessly connected to the access point 2a,
  • and the terminal device 20 is wirelessly connected to the access point 2b.
  • Data transmitted by the AGV 10 is received by the access point 2a, transferred to the access point 2b via the switching hub 3, and transmitted from the access point 2b to the terminal device 20.
  • Data transmitted by the terminal device 20 is received by the access point 2b, transferred to the access point 2a via the switching hub 3, and transmitted from the access point 2a to the AGV 10.
  • In this way, bidirectional communication between the AGV 10 and the terminal device 20 is realized.
  • The plurality of access points 2 are also connected to the operation management device 50 via the switching hub 3. Thereby, bidirectional communication is also realized between the operation management device 50 and each AGV 10.
  • A map of the environment S is created in advance so that the AGV 10 can travel while estimating its own position by offline SLAM.
  • The AGV 10 is equipped with a position estimation device and an LRF, and an environmental map can be created using the output of the LRF.
  • The AGV 10 transitions to a data acquisition mode by a user operation.
  • In the data acquisition mode, the AGV 10 starts acquiring sensor data (scan data) using the LRF. The subsequent processing is as described above.
  • The movement in the environment S for acquiring the sensor data can be realized by the AGV 10 traveling according to the user's operation.
  • For example, the AGV 10 wirelessly receives from the user, via the terminal device 20, travel commands instructing movement in the forward, backward, left, and right directions.
  • The AGV 10 travels in the environment S according to the travel commands and creates a map.
  • Alternatively, a map may be created by traveling in the environment S in the forward/backward and left/right directions according to control signals from a steering device.
  • Sensor data may also be acquired by a person walking while pushing a measurement cart equipped with an LRF.
  • The number of AGVs may be one.
  • When there are a plurality of AGVs, the user 1 can use the terminal device 20 to select one AGV 10 from among the registered AGVs and have it create a map of the environment S.
  • FIG. 18 is an external view of an exemplary AGV 10 according to the present embodiment.
  • The AGV 10 includes two drive wheels 11a and 11b, four casters 11c, 11d, 11e, and 11f, a frame 12, a transport table 13, a travel control device 14, and an LRF 15.
  • The two drive wheels 11a and 11b are provided on the right side and the left side of the AGV 10, respectively.
  • The four casters 11c, 11d, 11e, and 11f are arranged at the four corners of the AGV 10.
  • The AGV 10 also has a plurality of motors connected to the two drive wheels 11a and 11b, but the motors are not shown in FIG. 18.
  • The travel control device 14 is a device that controls the operation of the AGV 10, and mainly includes an integrated circuit including a microcomputer (described later), electronic components, and a board on which they are mounted.
  • The travel control device 14 performs the above-described data transmission to and reception from the terminal device 20, and the preprocessing calculations.
  • The LRF 15 is an optical device that measures the distance to a reflection point by, for example, emitting an infrared laser beam 15a and detecting the reflected light of the laser beam 15a.
  • In this embodiment, the LRF 15 of the AGV 10 emits a pulsed laser beam 15a while changing its direction by 0.25 degrees at a time within a space of 135 degrees to each of the left and right (270 degrees in total) with respect to the front of the AGV 10, and detects the reflected light of each laser beam 15a. Thereby, data of the distance to the reflection point is obtained in the directions corresponding to a total of 1081 steps at 0.25-degree intervals.
  • The scanning of the surrounding space performed by the LRF 15 is substantially parallel to the floor surface and is planar (two-dimensional). However, the LRF 15 may scan in the height direction as well.
  • The AGV 10 can create a map of the environment S based on the position and posture (orientation) of the AGV 10 and the scan results of the LRF 15.
  • The map may reflect the arrangement of walls, pillars, and other structures around the AGV, and of objects placed on the floor.
  • The map data is stored in a storage device provided in the AGV 10.
  • The position and posture of the AGV 10, that is, the pose (x, y, θ), may be referred to simply as the “position” below.
  • The travel control device 14 compares the measurement results of the LRF 15 with the map data held by itself and estimates its current position.
  • The map data may also be map data created by another AGV 10.
  • FIG. 19A shows a first hardware configuration example of the AGV 10.
  • FIG. 19A also shows a specific configuration of the travel control device 14.
  • The AGV 10 includes the travel control device 14, the LRF 15, two motors 16a and 16b, a drive device 17, and the wheels 11a and 11b.
  • The travel control device 14 includes a microcomputer 14a, a memory 14b, a storage device 14c, a communication circuit 14d, and a position estimation device 14e.
  • The microcomputer 14a, the memory 14b, the storage device 14c, the communication circuit 14d, and the position estimation device 14e are connected by a communication bus 14f and can exchange data with one another.
  • The LRF 15 is also connected to the communication bus 14f via a communication interface (not shown), and transmits measurement data, as measurement results, to the microcomputer 14a, the position estimation device 14e, and/or the memory 14b.
  • The microcomputer 14a is a processor or control circuit (computer) that performs calculations for controlling the entire AGV 10, including the travel control device 14.
  • Typically, the microcomputer 14a is a semiconductor integrated circuit.
  • The microcomputer 14a transmits a PWM (Pulse Width Modulation) signal, which is a control signal, to the drive device 17 to control the drive device 17 and adjust the voltage applied to the motors.
  • One or more control circuits (for example, microcomputers) for controlling the driving of the motors 16a and 16b may be provided separately from the microcomputer 14a. For example, the drive device 17 may include two microcomputers that control the driving of the motors 16a and 16b, respectively.
  • The memory 14b is a volatile storage device that stores the computer program executed by the microcomputer 14a.
  • The memory 14b can also be used as a working memory when the microcomputer 14a and the position estimation device 14e perform calculations.
  • The storage device 14c is a nonvolatile semiconductor memory device.
  • However, the storage device 14c may instead be a magnetic recording medium typified by a hard disk, or an optical recording medium typified by an optical disc.
  • The storage device 14c may include a head device for writing and/or reading data on the recording medium and a control device for the head device.
  • The storage device 14c stores an environment map M of the environment S in which the AGV travels, and data (travel route data) R of one or more travel routes.
  • The environment map M is created by the AGV 10 operating in a map creation mode and stored in the storage device 14c.
  • The travel route data R is transmitted from outside after the environment map M is created.
  • In this example, the environment map M and the travel route data R are stored in the same storage device 14c, but they may be stored in different storage devices.
  • the AGV 10 receives travel route data R indicating a travel route from the tablet computer.
  • the travel route data R at this time includes marker data indicating the positions of a plurality of markers. “Marker” indicates the passing position (route point) of the traveling AGV 10.
  • the travel route data R includes at least position information of a start marker indicating a travel start position and an end marker indicating a travel end position.
  • the travel route data R may further include position information of one or more intermediate waypoint markers. When the travel route includes one or more intermediate waypoints, the travel route is defined as the route that runs from the start marker through those waypoints in order to the end marker.
  • the data of each marker may include, in addition to the coordinate data of the marker, data on the orientation (angle) and travel speed of the AGV 10 until it moves to the next marker.
  • the data of each marker may further include the acceleration time required to reach that travel speed and/or the deceleration time required to decelerate from that travel speed and stop at the position of the next marker. One way such route data could be structured is sketched below.
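As a concrete illustration, the travel route data R could be structured as follows; the field names and types are assumptions made for this sketch, since the text only lists the kinds of data each marker may carry.

```python
# Hedged sketch of a possible representation of travel route data R.
# All field names are illustrative, not the patent's data format.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Marker:
    x: float                            # marker coordinates on the map
    y: float
    theta: Optional[float] = None       # heading until the next marker [rad]
    speed: Optional[float] = None       # travel speed to the next marker [m/s]
    accel_time: Optional[float] = None  # time to reach `speed` [s]
    decel_time: Optional[float] = None  # time to stop at the next marker [s]

@dataclass
class TravelRoute:
    markers: List[Marker]  # start marker, intermediate waypoints, end marker

route = TravelRoute(markers=[
    Marker(0.0, 0.0, theta=0.0, speed=1.0, accel_time=0.5),  # start marker
    Marker(5.0, 0.0, theta=1.57, speed=0.5),                 # waypoint
    Marker(5.0, 3.0, decel_time=0.8),                        # end marker
])
```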
  • the operation management device 50 may control the movement of the AGV 10 instead of the terminal device 20. In that case, the operation management device 50 may instruct the AGV 10 to move to the next marker every time the AGV 10 reaches a marker. For example, the AGV 10 receives from the operation management device 50, as travel route data R indicating the travel route, the coordinate data of the next target position, or the distance to the target position and the angle at which to travel.
  • the AGV 10 can travel along the stored travel route while estimating its own position using the created map and the sensor data output by the LRF 15 acquired during travel.
  • the details of the operation at this time are as described above. According to the embodiments of the present disclosure, even when a part of the environmental map prepared in advance does not reflect the actual environment, the self-position can be estimated continuously by quickly switching to online SLAM.
  • the communication circuit 14d is a wireless communication circuit that performs wireless communication conforming to, for example, the Bluetooth (registered trademark) and/or Wi-Fi (registered trademark) standards. Both standards include wireless communication using the 2.4 GHz frequency band. For example, in the mode in which the AGV 10 is run to create a map, the communication circuit 14d performs wireless communication based on the Bluetooth (registered trademark) standard and communicates with the terminal device 20 one-to-one.
  • the position estimation device 14e performs map creation processing and self-position estimation processing during traveling.
  • the position estimation device 14e creates a map of the environment S based on the position and orientation of the AGV 10 and the LRF scan result.
  • the position estimation device 14e receives sensor data from the LRF 15 and reads the environmental map M stored in the storage device 14c.
  • the local map data (sensor data) created from the scan result of the LRF 15 is matched against the wider-area environmental map M to identify the self-position (x, y, θ) on the environmental map M.
  • the position estimation device 14e generates “reliability” data representing the degree to which the local map data matches the environmental map M; a rough sketch of one such measure follows.
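As a rough illustration of one way such a reliability value could be computed, the sketch below assumes it is the fraction of scan points that land within a small tolerance of their nearest map point (the “match rate” mentioned later in this document); the brute-force nearest-neighbour search and all names are illustrative.

```python
# Hedged sketch of a reliability ("match rate") measure: the fraction of scan
# points within `tol` metres of some map point. O(N*M) search, clarity only.

import math

def match_rate(scan_pts, map_pts, tol=0.05):
    """scan_pts, map_pts: iterables of (x, y). Returns a value in [0, 1]."""
    if not scan_pts or not map_pts:
        return 0.0
    matched = 0
    for sx, sy in scan_pts:
        d = min(math.hypot(sx - mx, sy - my) for mx, my in map_pts)
        if d <= tol:
            matched += 1
    return matched / len(scan_pts)
```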
  • the self-position (x, y, θ) and the reliability data can each be transmitted from the AGV 10 to the terminal device 20 or the operation management device 50.
  • the terminal device 20 or the operation management device 50 can receive the self-position (x, y, θ) and reliability data and display them on a built-in or connected display device.
  • the microcomputer 14a and the position estimation device 14e are separate components here, but this is only an example; they may instead be a single chip circuit or a semiconductor integrated circuit capable of performing the operations of both the microcomputer 14a and the position estimation device 14e.
  • FIG. 19A shows a chip circuit 14g that includes the microcomputer 14a and the position estimation device 14e. Below, the example in which the microcomputer 14a and the position estimation device 14e are provided separately is described.
  • the two motors 16a and 16b are attached to the two wheels 11a and 11b, respectively, and rotate the respective wheels. That is, the two wheels 11a and 11b are both drive wheels.
  • in the following description, the motor 16a and the motor 16b drive the right wheel and the left wheel of the AGV 10, respectively.
  • the moving body 10 may further include a rotary encoder that measures the rotational positions or rotational speeds of the wheels 11a and 11b.
  • the microcomputer 14a may estimate the position and orientation of the moving body 10 using not only the signal received from the position estimation device 14e but also a signal received from the rotary encoder.
  • the drive device 17 has motor drive circuits 17a and 17b for adjusting the voltage applied to each of the two motors 16a and 16b.
  • Each of motor drive circuits 17a and 17b includes a so-called inverter circuit.
  • the motor drive circuits 17a and 17b turn on or off the current flowing through each motor by a PWM signal transmitted from the microcomputer 14a or the microcomputer in the motor drive circuit 17a, thereby adjusting the voltage applied to the motor.
  • FIG. 19B shows a second hardware configuration example of the AGV 10.
  • the second hardware configuration example differs from the first hardware configuration example (FIG. 19A) in that it includes a laser positioning system 14h and in that the microcomputer 14a is connected to each component in a one-to-one relationship.
  • the laser positioning system 14h includes a position estimation device 14e and an LRF 15.
  • the position estimation device 14e and the LRF 15 are connected by, for example, an Ethernet (registered trademark) cable. The operations of the position estimation device 14e and the LRF 15 are as described above.
  • the laser positioning system 14h outputs information indicating the pose (x, y, θ) of the AGV 10 to the microcomputer 14a.
  • the microcomputer 14a has various general-purpose I/O interfaces or general-purpose input/output ports (not shown).
  • the microcomputer 14a is directly connected to other components in the travel control device 14, such as the communication circuit 14d and the laser positioning system 14h, via those general-purpose input/output ports.
  • the AGV 10 in the embodiments of the present disclosure may include safety sensors, such as an obstacle detection sensor and a bumper switch, which are not illustrated.
  • FIG. 20 shows a hardware configuration example of the operation management device 50.
  • the operation management device 50 includes a CPU 51, a memory 52, a position database (position DB) 53, a communication circuit 54, a map database (map DB) 55, and an image processing circuit 56.
  • the CPU 51, the memory 52, the position DB 53, the communication circuit 54, the map DB 55, and the image processing circuit 56 are connected by a communication bus 57, and can exchange data with each other.
  • the CPU 51 is a signal processing circuit (computer) that controls the operation of the operation management device 50.
  • the CPU 51 is a semiconductor integrated circuit.
  • the memory 52 is a volatile storage device that stores a computer program executed by the CPU 51.
  • the memory 52 can also be used as a work memory when the CPU 51 performs computations.
  • the position DB 53 stores position data indicating each position that can be a destination of each AGV 10.
  • the position data can be represented by coordinates virtually set in the factory by an administrator, for example.
  • the position data is determined by the administrator.
  • the communication circuit 54 performs wired communication based on, for example, the Ethernet (registered trademark) standard.
  • the communication circuit 54 is connected to the access point 2 (FIG. 14) by wire, and can communicate with the AGV 10 via the access point 2.
  • the communication circuit 54 receives data to be transmitted to the AGV 10 from the CPU 51 via the bus 57.
  • the communication circuit 54 transmits data (notifications) received from the AGV 10 to the CPU 51 and/or the memory 52 via the bus 57.
  • the map DB 55 stores internal map data of a factory or the like where the AGV 10 travels. As long as the map has a one-to-one correspondence with the position of each AGV 10, the format of the data is not limited.
  • the map stored in the map DB 55 may be a map created by CAD.
  • the position DB 53 and the map DB 55 may be constructed on a nonvolatile semiconductor memory, or may be constructed on a magnetic recording medium represented by a hard disk or an optical recording medium represented by an optical disk.
  • the image processing circuit 56 is a circuit that generates video data to be displayed on the monitor 58.
  • the image processing circuit 56 is used only when the administrator operates the operation management device 50; further detailed explanation is omitted in the present embodiment.
  • the monitor 58 may be integrated with the operation management device 50. Further, the CPU 51 may perform the processing of the image processing circuit 56.
  • in the embodiments described above, an AGV that travels in a two-dimensional space is taken as an example.
  • the present disclosure can also be applied to a moving body that moves in a three-dimensional space, such as a flying object (drone).
  • in that case, the two-dimensional space can be expanded to a three-dimensional space.
  • the comprehensive or specific aspect described above may be realized by a system, a method, an integrated circuit, a computer program, or a recording medium.
  • the present invention may be realized by any combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium.
  • the mobile body of the present disclosure can be suitably used for moving and transporting goods such as luggage, parts, and finished products in factories, warehouses, construction sites, logistics, hospitals, and the like.


Abstract

[Problem] To provide a self-estimation system capable of self-position estimation even if a portion of a map does not represent the actual environment. [Solution] A position estimation system (115) according to this disclosure is for a moving body and is used while being connected to an external sensor for repeatedly scanning an environment and outputting sensor data for each scan. The position estimation system (115) comprises a processor (106), a first memory (104) for storing an environment map that has been prepared beforehand, and a second memory (107) for storing a computer program for operating the processor. The processor carries out (A) first position estimation processing for generating first estimated values for the position and orientation of the moving body on the basis of the results of comparing the environment map and the sensor data, (B) second position estimation processing for using the sensor data to generate a reference map for the surrounding area while generating second estimated values for the position and orientation of the moving body on the basis of the results of comparing the reference map and the sensor data, and (C) processing in which the first estimated values or second estimated values are selected and the selected estimated values are output as the selected estimated values for the position and orientation of the moving body.

Description

POSITION ESTIMATION SYSTEM, MOBILE BODY COMPRISING THE POSITION ESTIMATION SYSTEM, AND COMPUTER PROGRAM

Development of moving bodies capable of autonomous movement, such as automated guided vehicles (automated transport carts) and mobile robots, is in progress.

JP 2008-250905 A
According to the embodiments of the present disclosure, it is possible to estimate the self-position even when a part of the map does not reflect the actual environment.
FIG. 1 is a diagram illustrating the configuration of an embodiment of a moving body according to the present disclosure.
FIG. 2 is a plan layout diagram schematically showing an example of an environment in which the moving body moves.
FIG. 3 is a diagram showing an environmental map of the environment shown in FIG. 2.
FIG. 4A is a diagram schematically illustrating an example of scan data SD(t) acquired by the external sensor at time t.
FIG. 4B is a diagram schematically illustrating an example of scan data SD(t+Δt) acquired by the external sensor at time t+Δt.
FIG. 4C is a diagram schematically illustrating a state in which the scan data SD(t+Δt) is matched with the scan data SD(t).
FIG. 5 is a diagram schematically illustrating how the point cloud constituting the scan data rotates and translates from its initial position and approaches the point cloud of the environmental map.
FIG. 6 is a diagram illustrating the position and orientation of the scan data after the rigid transformation.
FIG. 7A is a diagram schematically illustrating a state in which scan data is acquired from the external sensor, a reference map is created from the scan data, and newly acquired scan data is then matched with the reference map.
FIG. 7B is a diagram schematically illustrating a reference map updated by adding newly acquired scan data to the reference map of FIG. 7A.
FIG. 7C is a diagram schematically illustrating a reference map updated by adding newly acquired scan data to the reference map of FIG. 7B.
FIG. 8A is a diagram schematically illustrating an example of scan data SD(t) acquired by the external sensor at time t.
FIG. 8B is a diagram schematically illustrating the state when matching of the scan data SD(t) with the environmental map M is started.
FIG. 8C is a diagram schematically illustrating the state when matching of the scan data SD(t) with the environmental map M is completed.
FIG. 9 is a diagram schematically illustrating the history of positions and orientations of the moving body obtained in the past, and predicted values of the current position and orientation.
FIG. 10 is a flowchart illustrating a part of the operation of the position estimation device according to an embodiment of the present disclosure.
FIG. 11A is a diagram illustrating an example in which the amount of change of the first estimated value from the first position estimation processing (offline SLAM) fluctuates.
FIG. 11B is a diagram illustrating an example in which the difference between the first estimated value from the first position estimation processing (offline SLAM) and the measured value from a sensor fluctuates.
FIG. 11C is a diagram illustrating an example in which the reliability of the first estimated value from the first position estimation processing (offline SLAM) and the reliability of the second estimated value from the second position estimation processing (online SLAM) fluctuate.
FIG. 12 is a flowchart illustrating a part of the operation of the position estimation device according to an embodiment of the present disclosure.
FIG. 13 is a flowchart illustrating an example of the second position estimation processing (online SLAM) of the position estimation device according to an embodiment of the present disclosure.
FIG. 14 is a diagram illustrating an overview of a control system that controls the travel of each AGV according to the present disclosure.
FIG. 15 is a perspective view showing an example of an environment in which AGVs exist.
FIG. 16 is a perspective view showing an AGV and a towing cart before being connected.
FIG. 17 is a perspective view showing the connected AGV and towing cart.
FIG. 18 is an external view of an exemplary AGV according to the present embodiment.
FIG. 19A is a diagram illustrating a first hardware configuration example of the AGV.
FIG. 19B is a diagram illustrating a second hardware configuration example of the AGV.
FIG. 20 is a diagram illustrating a hardware configuration example of the operation management device.
<Terminology> An “automated guided vehicle” (AGV) means a trackless vehicle onto which cargo is loaded manually or automatically, which automatically travels to a designated place, and from which cargo is unloaded manually or automatically. “Automated guided vehicle” includes unmanned towing vehicles and unmanned forklifts.

The term “unmanned” means that no person is required to steer the vehicle; it does not exclude the automated guided vehicle transporting a “person” (for example, a person who loads and unloads cargo).

An “unmanned towing vehicle” is a trackless vehicle that tows a cart, onto which cargo is loaded and unloaded manually or automatically, and automatically travels to a designated place.

An “unmanned forklift” is a trackless vehicle that includes a mast for raising and lowering a cargo-transfer fork or the like, automatically transfers cargo onto the fork or the like, automatically travels to a designated place, and performs automatic cargo-handling work.

A “trackless vehicle” is a vehicle that includes wheels and an electric motor or engine that rotates the wheels.

A “moving body” is a device that moves while carrying a person or cargo, and includes a driving device, such as wheels, a biped or multi-legged walking device, or a propeller, that generates traction for the movement. The term “moving body” in the present disclosure includes not only automated guided vehicles in the narrow sense but also mobile robots, service robots, and drones.

“Automatic travel” includes travel based on commands from the operation management system of a computer to which the automated guided vehicle is connected by communication, and autonomous travel by a control device included in the automated guided vehicle. Autonomous travel includes not only travel by which the automated guided vehicle heads to a destination along a predetermined route but also travel that follows a tracking target. The automated guided vehicle may also temporarily perform manual travel based on an instruction from a worker. “Automatic travel” generally includes both “guided” travel and “guideless” travel, but in the present disclosure it means “guideless” travel.

The “guided” type is a system in which guiding objects are installed continuously or intermittently and the automated guided vehicle is guided using those guiding objects.

The “guideless” type is a system of guidance without installing guiding objects. The automated guided vehicle in the embodiments of the present disclosure includes a position estimation device and can travel in a guideless manner.

A “position estimation device” is a device that estimates the self-position on an environmental map based on sensor data acquired by an external sensor such as a laser range finder.

An “external sensor” is a sensor that senses the state outside the moving body. Examples of the external sensor include a laser range finder (also referred to as a range sensor), a camera (or image sensor), LIDAR (Light Detection and Ranging), a millimeter-wave radar, an ultrasonic sensor, and a magnetic sensor.

An “internal sensor” is a sensor that senses the state inside the moving body. Examples of the internal sensor include a rotary encoder (hereinafter sometimes simply referred to as an “encoder”), an acceleration sensor, and an angular acceleration sensor (for example, a gyro sensor).

“SLAM” is an abbreviation of Simultaneous Localization and Mapping, and means performing self-position estimation and environmental map creation simultaneously.
<Basic Configuration of the Moving Body in the Present Disclosure> Reference is made to FIG. 1. In the exemplary embodiment shown in FIG. 1, the moving body 10 of the present disclosure includes an external sensor 102 that scans the environment and periodically outputs scan data. A typical example of the external sensor 102 is a laser range finder (LRF). The LRF periodically emits, for example, an infrared or visible laser beam to the surroundings to scan the surrounding environment. The laser beam is reflected by surfaces such as structures (for example, walls and pillars) and objects placed on the floor. The LRF receives the reflected light of the laser beam, calculates the distance to each reflection point, and outputs measurement data indicating the position of each reflection point. The position of each reflection point reflects the arrival direction and the distance of the reflected light. The measurement data (scan data) may be referred to as “environment measurement data” or “sensor data”.

The environment scan by the external sensor 102 is performed, for example, over a range of 135 degrees to each of the left and right (270 degrees in total) with respect to the front of the external sensor 102. Specifically, a pulsed laser beam is emitted while its direction is changed by a predetermined step angle in the horizontal plane, and the reflected light of each laser beam is detected to measure the distance. If the step angle is 0.3 degrees, distance measurements to reflection points can be obtained for the directions determined by a total of 901 steps. In this example, the scan of the surrounding space performed by the external sensor 102 is substantially parallel to the floor surface and is planar (two-dimensional); however, the external sensor may perform a three-dimensional scan.

A typical example of the scan data can be expressed by the position coordinates of each point constituting a point cloud acquired for each scan. The position coordinates of the points are defined by a local coordinate system that moves together with the moving body 10. Such a local coordinate system may be called a moving-body coordinate system or a sensor coordinate system. In the present disclosure, the origin of the local coordinate system fixed to the moving body 10 is defined as the “position” of the moving body 10, and the orientation of the local coordinate system is defined as the “orientation” of the moving body 10. Hereinafter, the position and orientation may be collectively referred to as the “pose”; for simplicity, the “pose” may also be simply referred to as the “position”.
When expressed in a polar coordinate system, the scan data can consist of sets of numerical values indicating the position of each point by a “direction” and a “distance” from the origin of the local coordinate system. The polar coordinate representation can be converted into an orthogonal (Cartesian) coordinate representation. In the following description, for simplicity, it is assumed that the scan data output from the external sensor is expressed in an orthogonal coordinate system. A minimal sketch of this conversion follows.
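Below is a minimal sketch of this polar-to-Cartesian conversion, using the 270-degree field of view and 0.3-degree step angle from the example above; the function name and the angle convention (angle 0 along the V axis, clockwise toward the U axis) are illustrative assumptions.

```python
# Hedged sketch: convert per-step LRF range readings (polar form) into
# (u, v) points in the sensor's local UV coordinate system.

import math

def polar_scan_to_points(distances, fov_deg=270.0, step_deg=0.3):
    """distances: range reading [m] per step, swept from -fov/2 to +fov/2.

    Angle 0 is the sensor's straight-ahead direction (the V axis);
    positive angles rotate clockwise toward the U axis.
    """
    points = []
    start = -fov_deg / 2.0
    for i, r in enumerate(distances):
        ang = math.radians(start + i * step_deg)
        # u: rightward component, v: forward component
        points.append((r * math.sin(ang), r * math.cos(ang)))
    return points

# Example: 901 readings cover the 270-degree field of view at 0.3-degree steps.
pts = polar_scan_to_points([1.0] * 901)
```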
The moving body 10 includes a storage device (first memory) 104 that stores an environmental map, and a position estimation system 115. The environmental map is prepared by the moving body 10 or another map creation device and stored in the storage device 104.

The position estimation system 115 is used while connected to the external sensor 102, and includes a processor 106 and a memory (second memory) 107 that stores a computer program for controlling the operation of the processor. Although one processor 106 and one memory 107 are shown in FIG. 1, the number of processors 106 and the number of memories 107 may each be two or more.

The position estimation system 115 matches the scan data acquired from the external sensor 102 against the environmental map read from the storage device 104, and estimates the position and orientation, that is, the pose, of the moving body 10. This matching is called pattern matching or scan matching and can be executed according to various algorithms. A typical example of a matching algorithm is the Iterative Closest Point (ICP) algorithm.

As will be described later, the position estimation system 115 can also create an environmental map by aligning, through matching, a plurality of scan data output from the external sensor 102 and connecting them.

The position estimation system 115 in the embodiments of the present disclosure is realized by the processor 106 and the memory 107 that stores a computer program for operating the processor 106. The processor 106 executes the following operations in accordance with the commands of the computer program.

(A) First position estimation processing (offline SLAM): generating a first estimated value of the position and orientation of the external sensor 102 (that is, the position and orientation of the moving body 10) based on the result of matching the environmental map against the sensor data.

(B) Second position estimation processing (online SLAM): while generating a map of the surroundings (reference map) using the sensor data, generating a second estimated value of the position and orientation of the moving body 10 based on the result of matching the reference map against the sensor data.

(C) Selecting one of the first estimated value and the second estimated value, and outputting the selected one as the “selected estimated value” of the position and orientation of the moving body 10.

In the embodiments of the present disclosure, the processor 106 executes the first position estimation processing and the second position estimation processing simultaneously or in parallel. For example, the processor 106 may include a first processor portion that executes the first position estimation processing and a second processor portion that executes the second position estimation processing; alternatively, a single processor 106 may alternately execute the computations required for the first position estimation processing and the second position estimation processing.
In the above processing (C), when the processor 106 is outputting the first estimated value as the selected estimated value and any of the following events occurs, the processor 106 outputs the second estimated value, instead of the first estimated value, as the selected estimated value.

Events:
- The calculated reliability of the first estimated value falls below a threshold.
- The reliability of the first estimated value (first reliability) and the reliability of the second estimated value (second reliability) are calculated, and the first reliability falls below the second reliability.
- The current first estimated value has changed from the past first estimated values (time-series data) beyond a predetermined range.
- The difference between the position and/or orientation measurement (odometry value) acquired from the internal sensor and the first estimated value exceeds a threshold.
- The computation for determining the first estimated value could not be completed within a predetermined time (the matching error did not converge to a sufficiently small level within the predetermined time).

The reliability can be expressed quantitatively by, for example, the final error of the ICP matching described later (the “positional deviation amount” or the “match rate”). A hedged sketch of this selection logic is shown after this paragraph.
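The following is a hedged sketch of the selection step (C) built on these events: the first (offline SLAM) estimate is output while it stays healthy, and the second (online SLAM) estimate is output when any switch event fires. The Estimate structure, the thresholds, and all names are illustrative assumptions, not the patent's interface.

```python
# Hedged sketch of processing (C): choose between the first and second
# pose estimates based on the switch events listed above.

from dataclasses import dataclass

@dataclass
class Estimate:
    x: float
    y: float
    theta: float
    reliability: float  # e.g. the ICP match rate in [0, 1]
    converged: bool     # False if matching did not finish in time

def select_estimate(first: Estimate, second: Estimate,
                    odometry_gap: float,   # |first - odometry| [m]
                    jump: float,           # |first - past first values| [m]
                    rel_threshold: float = 0.6,
                    gap_threshold: float = 0.3,
                    jump_threshold: float = 0.5) -> Estimate:
    """Return the first estimate unless a switch event fires."""
    switch = (
        first.reliability < rel_threshold          # reliability dropped
        or first.reliability < second.reliability  # second estimate is better
        or jump > jump_threshold                   # implausible jump vs. history
        or odometry_gap > gap_threshold            # disagrees with odometry
        or not first.converged                     # matching timed out
    )
    return second if switch else first
```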
Each of the above events occurs when the environmental map used in offline SLAM does not accurately reflect the current environment, for example when the layout of a factory or distribution warehouse is temporarily changed, or when cargo, equipment, or another moving body is placed in an area (free space) that is shown as passable on the environmental map. If matching is performed using the old environmental map as it is despite such an environmental change, the above events are likely to occur.

In the embodiments of the present disclosure, while position estimation is performed by the first position estimation processing (offline SLAM), position estimation by the second position estimation processing (online SLAM) is also executed in the background. Therefore, when a discrepancy arises between the environmental map and the actual environment, the estimated value from online SLAM (the second estimated value) can be output and made available promptly.

The second position estimation processing (online SLAM) can be executed, for example, by the following processing.

(1) Scan data is acquired from the external sensor 102, and a reference map (local map) is created from the scan data.

(2) When scan data is newly acquired from the external sensor 102, the newly acquired latest scan data is matched against the reference map to estimate the position and orientation of the moving body 10 on the reference map, and the latest scan data is added to the reference map to update the reference map.

(3) The reference map that has been updated a plurality of times is reset by deleting the portions other than a portion including the latest scan data.

In addition, the following processing may be executed as necessary.

(4) When the reset is performed, the environmental map is updated based on the reference map that was updated a plurality of times before the reset (online map update).

By updating the environmental map itself in this way, even when a changed state of the environment (for example, a layout change) persists, the first estimated value from the first position estimation processing (offline SLAM) can continue to be used as the selected estimated value without the above events occurring. A simplified sketch of the update-and-reset loop of steps (2) and (3) follows.
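Below is a simplified sketch of steps (2) and (3), assuming the reference map is a plain list of points and that an `icp_match` routine (sketched in a later section) returns the estimated pose together with the latest scan transformed into the reference-map frame; everything here is illustrative.

```python
# Hedged sketch of one online-SLAM update: match the latest scan against the
# reference map, estimate the pose, append the aligned scan, and reset the
# reference map after a fixed number of updates (step (3)).

def online_slam_step(reference_map, latest_scan, icp_match,
                     update_count, max_updates=100):
    """Returns (pose, new_reference_map, new_update_count)."""
    pose, aligned_scan = icp_match(source=latest_scan, target=reference_map)
    reference_map = reference_map + aligned_scan   # update the reference map
    update_count += 1
    if update_count >= max_updates:                # reset condition
        reference_map = list(aligned_scan)         # keep only the latest scan
        update_count = 0
    return pose, reference_map, update_count
```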
In the illustrated example, the moving body 10 further includes a driving device 108, an automatic travel control device 110, and a communication circuit 112. The driving device 108 is a device that generates the driving force for the moving body 10 to move. Examples of the driving device 108 include wheels (drive wheels) rotated by an electric motor or an engine, and biped or multi-legged walking devices operated by motors or other actuators. The wheels may be omnidirectional wheels such as Mecanum wheels. The moving body 10 may also be a moving body that moves in the air or underwater, or a hovercraft; in that case, the driving device 108 includes a propeller rotated by a motor.

The automatic travel control device 110 operates the driving device 108 to control the movement conditions (speed, acceleration, movement direction, and the like) of the moving body 10. The automatic travel control device 110 may move the moving body 10 along a predetermined travel route, or may move it in accordance with commands given from outside. The position estimation system 115 calculates estimated values of the position and orientation of the moving body 10 while the moving body 10 is moving or stopped. The automatic travel control device 110 controls the travel of the moving body 10 with reference to these estimated values.

The position estimation system 115 and the automatic travel control device 110 may collectively be called a travel control device 120. Like the position estimation system 115, the automatic travel control device 110 may be configured by the processor 106 described above and the memory 107 storing a computer program that controls the operation of the processor 106. Such a processor 106 and memory 107 may be realized by one or more semiconductor integrated circuits.

The communication circuit 112 is a circuit through which the moving body 10 connects to a communication network, including an external management device, other moving bodies, or an operator's mobile terminal device, and exchanges data and/or commands.

<Environmental Map> FIG. 2 is a plan layout diagram schematically showing an example of an environment 200 in which the moving body 10 moves. The environment 200 is a part of a wider environment. In FIG. 2, the thick straight lines indicate, for example, fixed walls 202 of a building.

FIG. 3 is a diagram showing a map (environmental map M) of the environment 200 shown in FIG. 2. Each dot 204 in the figure corresponds to a point of the point cloud constituting the environmental map M. In the present disclosure, the point cloud of the environmental map M may be called the “reference point cloud”, and the point cloud of the scan data may be called the “data point cloud” or “source point cloud”. Matching is, for example, aligning the scan data (data point cloud) with the environmental map (reference point cloud), whose position is fixed. Specifically, when matching is performed by the ICP algorithm, pairs of corresponding points are selected between the reference point cloud and the data point cloud, and the position and orientation of the data point cloud are adjusted so as to minimize the distances (errors) between the points constituting each pair.

In FIG. 3, the dots 204 are arranged at equal intervals on a plurality of line segments for simplicity. The point cloud in an actual environmental map M may have a more complicated arrangement pattern. The environmental map M is not limited to a point cloud map; it may be a map whose constituent elements are straight lines or curves, or an occupancy grid map. That is, it is sufficient that the environmental map M has a structure that allows matching between the scan data and the environmental map M. In the case of an occupancy grid map, Monte Carlo localization can be performed. Hereinafter, embodiments of the present disclosure are described taking matching by the ICP algorithm as an example, but the embodiments of the present disclosure are not limited to this example.

When the moving body 10 is at each of the positions PA, PB, and PC shown in FIG. 3, the scan data acquired by the external sensor 102 of the moving body 10 has a different point cloud arrangement. If the travel time for the moving body 10 to move from the position PA through the position PB to the position PC is sufficiently long compared with the scan cycle of the external sensor 102, that is, if the movement of the moving body 10 is slow, two scan data adjacent on the time axis are very similar. However, if the movement of the moving body 10 is extremely fast, two scan data adjacent on the time axis may differ greatly.

Among the scan data sequentially output from the external sensor 102 in this way, when the latest scan data is similar to the immediately preceding scan data, matching is relatively easy, and highly reliable matching can be expected in a short time. However, when the moving speed of the moving body 10 is relatively high, the latest scan data may not be similar to the immediately preceding scan data, so matching may take a long time or may not be completed within a predetermined time.

<Matching at Map Creation> In the embodiments of the present disclosure, map creation can be performed either online or offline. Therefore, as far as the principle of map creation is concerned, the following description applies to both offline SLAM and online SLAM. In the case of offline SLAM, however, there is no particular constraint on the computation time needed to create a map from the scan data, so more time can be spent to perform matching with smaller errors.
FIG. 4A is a diagram schematically illustrating an example of scan data SD(t) acquired by the external sensor 102 at time t. The scan data SD(t) is displayed in a sensor coordinate system whose position and orientation change together with the moving body 10. The scan data SD(t) is expressed in a UV coordinate system in which the V axis points straight ahead of the external sensor 102 and the U axis points in the direction rotated 90 degrees clockwise from the V axis. The moving body 10, or more precisely the external sensor 102, is located at the origin of the UV coordinate system. In the present disclosure, when the moving body 10 moves forward, it advances straight ahead of the external sensor 102, that is, in the direction of the V axis. For clarity, the points constituting the scan data SD(t) are drawn as black circles.
In this specification, the cycle at which the position estimation system 115 acquires scan data from the external sensor 102 is denoted by Δt. Δt is, for example, 200 milliseconds. While the moving body 10 is moving, the content of the scan data periodically acquired from the external sensor 102 can change.

FIG. 4B is a diagram schematically illustrating an example of scan data SD(t+Δt) acquired by the external sensor 102 at time t+Δt. For clarity, the points constituting the scan data SD(t+Δt) are drawn as white circles.

When Δt is, for example, 200 milliseconds and the moving body 10 is moving at a speed of 1 meter per second, the moving body 10 moves about 20 centimeters during Δt. A movement of about 20 centimeters usually does not change the environment of the moving body 10 greatly, so the environment scanned by the external sensor 102 at time t+Δt and the environment scanned at time t overlap over a wide range. Therefore, many corresponding points are included between the point cloud of the scan data SD(t) and the point cloud of the scan data SD(t+Δt).

FIG. 4C schematically shows a state in which matching between the scan data SD(t) and the scan data SD(t+Δt) has been completed. In this example, alignment is performed so that the scan data SD(t+Δt) is consistent with the scan data SD(t). The moving body 10 at time t is located at the origin of the UV coordinate system in FIG. 4C, and the moving body 10 at time t+Δt is at a position moved from that origin. By matching the two scan data, the arrangement relationship of one local coordinate system with respect to the other is obtained.

In this way, a local environmental map can be created by connecting a plurality of periodically acquired scan data SD(t), SD(t+Δt), ..., SD(t+N×Δt), where N is an integer of 1 or more. By combining a plurality of local environmental maps, the entire environmental map is obtained; in the case of online SLAM, this combination completes the entire environmental map. In the present disclosure, a “local environmental map” created by the moving body while moving is called a “reference map” and may be distinguished from the “environmental map” used in offline SLAM.

FIG. 5 is a diagram schematically showing how the point cloud constituting the scan data at time t rotates and translates from its initial position and approaches the point cloud of the map. Let Z_{t,k} be the coordinate value of the k-th point (k = 1, 2, ..., K) among the K points constituting the point cloud of the scan data at time t, and let m_k be the coordinate value of the corresponding point on the map. Then the error between corresponding points of the two point clouds can be evaluated using the sum of squared errors over the K corresponding pairs, Σ_k (Z_{t,k} − m_k)², as a cost function. A rigid transformation of rotation and translation is determined so as to reduce Σ_k (Z_{t,k} − m_k)². The rigid transformation is defined by a transformation matrix (a homogeneous transformation matrix) that includes the rotation angle and the translation vector as parameters.

FIG. 6 is a diagram showing the position and orientation of the scan data after the rigid transformation. In the example shown in FIG. 6, the matching between the scan data and the map is not yet complete, and a large error (positional deviation) still exists between the two point clouds. To reduce this deviation, the rigid transformation is applied again. The matching is complete when the error becomes smaller than a predetermined value. A compact sketch of this iterative procedure follows.
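Below is a compact, illustrative sketch of such an ICP iteration for 2-D point clouds: each pass pairs every scan point Z_{t,k} with its nearest map point m_k, solves the least-squares rigid transformation in closed form, applies it, and repeats until the error stops shrinking. It is a generic textbook implementation under those assumptions, not the patent's own code.

```python
# Hedged sketch of 2-D point-to-point ICP minimizing sum_k (Z_{t,k} - m_k)^2.

import numpy as np

def icp_2d(scan, mmap, iters=30, tol=1e-4):
    """scan: (N, 2) array, mmap: (M, 2) array. Returns (R, t, mean_error)."""
    src = scan.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(iters):
        # pair every scan point with its nearest map point (brute force)
        d2 = ((src[:, None, :] - mmap[None, :, :]) ** 2).sum(axis=2)
        pairs = mmap[d2.argmin(axis=1)]
        # closed-form least-squares rigid transform (Kabsch method in 2-D)
        mu_s, mu_m = src.mean(axis=0), pairs.mean(axis=0)
        H = (src - mu_s).T @ (pairs - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[1, :] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t               # apply the transform to the scan
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.sqrt(((src - pairs) ** 2).sum(axis=1)).mean()
        if abs(prev_err - err) < tol:     # error stopped shrinking: converged
            break
        prev_err = err
    return R_total, t_total, err
```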
<Creation of the Reference Map> The creation of the reference map performed during online SLAM is described below.

FIG. 7A is a diagram schematically showing a state in which matching between the newly acquired latest scan data SD(b) and the previously acquired scan data SD(a) has been completed. In FIG. 7A, the black-circle point cloud represents the previous scan data, and the white-circle point cloud represents the latest scan data. FIG. 7A shows the position a of the moving body 10 when the previous scan data was acquired and the position b of the moving body 10 when the latest scan data was acquired.

In this example, the previously acquired scan data SD(a) constitutes the “reference map RM”. The reference map RM is a part of the environmental map that is being created. Matching is executed so that the position and orientation of the latest scan data SD(b) become consistent with the position and orientation of the previously acquired scan data SD(a).

By performing such matching, the position and orientation of the moving body 10 at the position b on the reference map RM can be known. After the matching is completed, the scan data SD(b) is added to the reference map RM to update the reference map RM.

The coordinate system of the scan data SD(b) is connected to the coordinate system of the scan data SD(a). This connection is expressed by a matrix that defines the rotational and translational transformation (rigid transformation) between the two coordinate systems. With such a transformation matrix, the coordinate value of each point in the scan data SD(b) can be converted into a coordinate value in the coordinate system of the scan data SD(a). A small sketch of chaining such homogeneous transformation matrices follows.
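The sketch below shows, under the same illustrative assumptions, how such a rigid transformation can be written as a 3×3 homogeneous matrix and how transformations between successive scan coordinate systems can be chained, so that any point can be mapped back into the first scan's frame; the numeric values are placeholders.

```python
# Hedged sketch: 2-D rigid transforms as 3x3 homogeneous matrices, chained
# across successive scans SD(a) <- SD(b) <- SD(c).

import numpy as np

def homogeneous(theta, tx, ty):
    """Rotation by theta [rad] plus translation (tx, ty) as a 3x3 matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# SD(b) -> SD(a), then SD(c) -> SD(b); composing gives SD(c) -> SD(a).
T_ab = homogeneous(0.05, 0.20, 0.00)   # placeholder pose increments
T_bc = homogeneous(0.03, 0.18, 0.02)
T_ac = T_ab @ T_bc

p_c = np.array([1.0, 2.0, 1.0])        # a point in SD(c)'s frame (homogeneous)
p_a = T_ac @ p_c                       # the same point in SD(a)'s frame
```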
FIG. 7B shows the reference map RM after it has been updated by adding the next acquired scan data to the reference map RM of FIG. 7A. In FIG. 7B, the black dots represent the reference map RM before the update, and the white dots represent the latest scan data SD(c). FIG. 7B shows the positions a, b, and c of the moving body 10 when the scan data before last, the previous scan data, and the latest scan data were acquired, respectively. The white dots and black dots in FIG. 7B together constitute the updated reference map RM.

FIG. 7C shows the reference map RM after it has been updated by adding the newly acquired scan data SD(d) to the reference map RM of FIG. 7B. In FIG. 7C, the black dots represent the reference map RM before the update, and the white dots represent the latest scan data SD(d). In addition to the positions a, b, and c, which are past estimated positions of the moving body 10, FIG. 7C shows the position d of the moving body 10 estimated by matching the latest scan data SD(d). The white dots and black dots in FIG. 7C together constitute the updated reference map RM.

Because the reference map RM is updated one scan after another in this way, the number of points in the reference map RM grows with every scan of the external sensor 102. This increases the amount of computation required to match the latest scan data against the reference map RM. For example, if a single frame of scan data contains up to about 1000 points, a reference map RM stitched together from 2000 frames of scan data can contain up to about 2 million points. When corresponding points are searched for and the matching computation is iterated, a reference map RM whose point cloud is too large may prevent the matching from completing within the scan period Δt.
In the position estimation system of the present disclosure, the reference map is therefore reset by deleting from the reference map, after it has been updated multiple times, everything except a portion that includes the latest scan data. When the reset is performed, the environmental map may also be updated based on the reference map as it stood, after multiple updates, immediately before the reset. The environmental map itself, prepared in advance by offline SLAM, can be retained without being updated.

The reference map can be reset, for example, (i) when the number of updates to the reference map reaches a predetermined number, (ii) when the data volume of the reference map reaches a predetermined amount, or (iii) when the time elapsed since the previous reset reaches a predetermined length. The "predetermined number" in case (i) may be, for example, 100. The "predetermined amount" in case (ii) may be, for example, 10000. The "predetermined length" in case (iii) may be, for example, 5 minutes.

To minimize the data volume of the reference map after a reset, it suffices to keep only the latest scan data, that is, only the data acquired by the single most recent scan at the time of the reset, and to delete the other scan data. When the number of points contained in the latest scan data is less than or equal to a predetermined value, several recent frames of scan data may be included in the post-reset reference map in addition to the latest scan data, in order to improve matching accuracy after the reset.
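A minimal sketch of such a reset policy follows, assuming the reference map is held as a sequence of scan frames; the class, the threshold values, and the helper names are illustrative assumptions rather than elements of the claimed system.

```python
from collections import deque
import time

class ReferenceMap:
    """Reference map kept as a sequence of scan frames (lists of points)."""

    def __init__(self, max_updates=100, max_points=10000, max_age_s=300.0):
        self.frames = deque()            # each frame: list of (x, y) points
        self.update_count = 0
        self.last_reset = time.monotonic()
        self.max_updates = max_updates
        self.max_points = max_points
        self.max_age_s = max_age_s

    def add_scan(self, frame):
        self.frames.append(frame)
        self.update_count += 1
        if self._needs_reset():
            self._reset(keep=1)          # keep only the most recent scan

    def _needs_reset(self):
        total_points = sum(len(f) for f in self.frames)
        return (self.update_count >= self.max_updates              # condition (i)
                or total_points >= self.max_points                 # condition (ii)
                or time.monotonic() - self.last_reset >= self.max_age_s)  # (iii)

    def _reset(self, keep=1):
        # Delete everything except the `keep` most recent scan frames.
        while len(self.frames) > keep:
            self.frames.popleft()
        self.update_count = 0
        self.last_reset = time.monotonic()
```

Passing `keep` greater than 1 to `_reset` corresponds to retaining several recent frames when the latest scan alone contains too few points.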
When a reference map is created from multiple frames of scan data, increasing the density of points per unit area of the point cloud beyond a certain value can be wasteful for matching. For example, when a large number of points (measurement points) fall within a portion corresponding to a rectangular region of 10 cm × 10 cm in the environment, the matching accuracy may saturate without improving sufficiently relative to the growth in the amount of computation required for matching. To suppress such waste, when the density of the point cloud constituting the scan data and/or the reference map exceeds a predetermined density, some points may be thinned out of the point cloud to reduce its density to the predetermined density or less. The "predetermined density" may be, for example, one point per (10 cm)².
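One way to realize such thinning is grid-based decimation, sketched below assuming a 10 cm grid cell and keeping at most one point per cell; this particular policy is an illustrative assumption, not the only possible implementation.

```python
def thin_point_cloud(points, cell_size=0.10):
    """Keep at most one point per cell_size x cell_size grid cell,
    capping the density at one point per (cell_size)^2."""
    kept = {}
    for x, y in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in kept:             # first point seen in this cell wins
            kept[cell] = (x, y)
    return list(kept.values())
```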
<Position Estimation Using an Environmental Map> FIG. 8A is a diagram schematically showing an example of scan data SD(t) acquired by the external sensor at time t. The scan data SD(t) is expressed in a sensor coordinate system whose position and orientation change together with the moving body 10, and the points constituting the scan data SD(t) are drawn as white dots.

FIG. 8B is a diagram schematically showing the state at the start of matching the scan data SD(t) against the environmental map M. Upon acquiring the scan data SD(t) from the external sensor 102, the processor 106 of FIG. 1 can estimate the position and orientation of the moving body 10 on the environmental map M by matching the scan data SD(t) against the environmental map M read from the storage device 104. When starting such matching, initial values of the position and orientation of the moving body 10 at time t must be determined (see FIG. 5). The closer the initial values are to the actual position and orientation of the moving body 10, the shorter the time required for matching can be.

FIG. 8C is a diagram schematically showing the state in which the matching of the scan data SD(t) against the environmental map M has been completed.

In the embodiments of the present disclosure, two methods can be employed to determine these initial values.

The first method is to measure, by odometry, the amount of change from the position and orientation estimated by the previous matching. For example, when the moving body 10 moves on two drive wheels, the amount and direction of movement of the moving body 10 can be obtained from encoders attached to the respective drive wheels or their motors. Since methods using odometry are well known, no further detailed description is necessary.
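For illustration, a minimal differential-drive odometry update is sketched below; the track width and the per-wheel travel distances (derived, for example, from encoder ticks) are assumed inputs, and the function name is illustrative.

```python
import math

def odometry_update(x, y, theta, d_left, d_right, track_width):
    """Advance a 2D pose (x, y, theta) given the distances traveled by
    the left and right drive wheels since the previous update."""
    d_center = (d_left + d_right) / 2.0          # forward displacement
    d_theta = (d_right - d_left) / track_width   # heading change
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```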
The second method is to predict the current position and orientation based on the history of estimated values of the position and orientation of the moving body 10. This point is described below.

<Prediction of Initial Values> FIG. 9 is a diagram schematically showing the history of positions and orientations of the moving body 10 obtained in the past by the position estimation system 115 of FIG. 1, together with predicted values of the current position and orientation. The history of positions and orientations is stored in the memory 107 inside the position estimation system 115. Part or all of such a history may instead be stored in a storage device outside the position estimation device 105, for example the storage device 104 of FIG. 1.
FIG. 9 also shows a UV coordinate system, which is a local coordinate system (sensor coordinate system) of the moving body 10. Scan data are expressed in the UV coordinate system. The position of the moving body 10 on the environmental map M is the coordinate value (x_i, y_i) of the origin of the UV coordinate system in the coordinate system of the environmental map M. The orientation of the moving body 10 is the direction (θ_i) of the UV coordinate system with respect to the coordinate system of the environmental map M, with counterclockwise taken as positive.
In the embodiments of the present disclosure, predicted values of the current position and orientation are calculated from the history of positions and orientations obtained in the past by the position estimation device.

Let (x_{i-1}, y_{i-1}, θ_{i-1}) be the position and orientation of the moving body obtained by the previous matching, and let (x_{i-2}, y_{i-2}, θ_{i-2}) be the position and orientation of the moving body obtained by the matching before that. Let (x_i, y_i, θ_i) be the predicted values of the current position and orientation of the moving body. Assume that the following assumptions hold.

Assumption 1: The time required to move from position (x_{i-1}, y_{i-1}) to position (x_i, y_i) is equal to the time that was required to move from position (x_{i-2}, y_{i-2}) to position (x_{i-1}, y_{i-1}).

Assumption 2: The moving speed during the movement from position (x_{i-1}, y_{i-1}) to position (x_i, y_i) is equal to the moving speed during the movement from position (x_{i-2}, y_{i-2}) to position (x_{i-1}, y_{i-1}).
Under the above assumptions, the following Equation 1 holds:

$$\begin{pmatrix} x_i \\ y_i \end{pmatrix} = \begin{pmatrix} x_{i-1} \\ y_{i-1} \end{pmatrix} + \begin{pmatrix} \cos\Delta\theta & -\sin\Delta\theta \\ \sin\Delta\theta & \cos\Delta\theta \end{pmatrix} \begin{pmatrix} x_{i-1} - x_{i-2} \\ y_{i-1} - y_{i-2} \end{pmatrix} \quad \text{(Equation 1)}$$

Here, Δθ is θ_i − θ_{i-1}.
Regarding the orientation of the moving body, the following Equation 2 holds: (Equation 2) θ_i = θ_{i-1} + Δθ

Note that if Δθ is approximated as zero, the matrix in the second term on the right side of Equation 1 reduces to the identity matrix, which can simplify the calculation.

When Assumption 1 above does not hold, let Δt be the time required to move from position (x_{i-1}, y_{i-1}) to position (x_i, y_i), and let Δs be the time that was required to move from position (x_{i-2}, y_{i-2}) to position (x_{i-1}, y_{i-1}). In this case, it suffices to apply a correction that multiplies (x_{i-1} − x_{i-2}) and (y_{i-1} − y_{i-2}) on the right side of Equation 1 by Δt/Δs, and a correction that multiplies Δθ in the matrix on the right side of Equation 1 by Δt/Δs.
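Equations 1 and 2 can be turned into a small prediction routine such as the following sketch; here Δθ is taken to be the previous heading change, scaled by Δt/Δs, which is an assumption consistent with the constant-speed reading of Assumptions 1 and 2, and the function name is illustrative.

```python
import math

def predict_pose(prev2, prev1, dt_ratio=1.0):
    """Predict the current pose (x_i, y_i, theta_i) from the two most
    recent pose estimates, per Equations 1 and 2 (dt_ratio = Δt/Δs)."""
    x2, y2, t2 = prev2                   # (x_{i-2}, y_{i-2}, θ_{i-2})
    x1, y1, t1 = prev1                   # (x_{i-1}, y_{i-1}, θ_{i-1})
    d_theta = (t1 - t2) * dt_ratio       # assumed constant angular velocity
    dx = (x1 - x2) * dt_ratio            # previous displacement, scaled
    dy = (y1 - y2) * dt_ratio
    # Rotate the previous displacement by d_theta (Equation 1).
    x = x1 + math.cos(d_theta) * dx - math.sin(d_theta) * dy
    y = y1 + math.sin(d_theta) * dx + math.cos(d_theta) * dy
    theta = t1 + d_theta                 # Equation 2
    return x, y, theta
```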
<Operation Flow of the Position Estimation System> The operation flow of the position estimation system in the embodiments of the present disclosure is described with reference to FIG. 1 and FIGS. 10 to 13.

First, refer to FIG. 10.

In step S10, the processor 106 of the position estimation system 115 acquires the latest (current) scan data from the external sensor 102.

In step S12, the processor 106 acquires the current position and orientation values by odometry. At this time, the current position and orientation values may instead be predicted as described with reference to FIG. 9.

In step S14, the processor 106 performs initial alignment of the latest scan data with respect to the environmental map, using the current position and orientation values acquired from odometry as initial values.

In step S16, the processor 106 performs positional deviation correction using the ICP algorithm.

In step S18, the processor 106 generates the first estimated values of the position and orientation by offline SLAM.

In step S20, the processor 106 determines whether an event has occurred that requires outputting the second estimated values of the position and orientation obtained by online SLAM as the selected estimated values, in place of the first estimated values obtained by offline SLAM. If No, the flow proceeds to step S21, and the first estimated values are output as the selected estimated values. Thereafter, the flow returns to step S10, and the next scan data is acquired. If Yes, the flow proceeds to step S22.

Examples of cases in which a Yes determination is made are described below.
FIG. 11A is a diagram showing an example in which the amount of change ΔP_t of the first estimated value obtained by the first position estimation process (offline SLAM), that is, P_t − P_{t-1}, fluctuates. Letting P_t be the first estimated value at the current time t and P_{t-1} be the previous first estimated value one time step earlier (for example, 200 milliseconds earlier), an estimation abnormality can be detected by monitoring the difference between them. For example, when the amount of change ΔP_t exceeds the threshold indicated by the broken line in FIG. 11A, the second estimated value obtained by the second position estimation process (online SLAM) can be selected as the more accurate estimate, instead of the first estimated value obtained by the first position estimation process (offline SLAM). In this case, rather than selecting the second estimated value immediately when ΔP_t exceeds the threshold only once, the second estimated value may be selected when the threshold is exceeded a predetermined number of times in succession (for example, three times).

FIG. 11B is a diagram showing an example in which the difference between the first estimated value obtained by the first position estimation process (offline SLAM) and a value measured by a sensor fluctuates. The measured values are the position and orientation of the moving body measured by odometry, such as from rotary encoders. When the difference between the first estimated value and the measured value exceeds a predetermined range (Range), the second estimated value obtained by the second position estimation process (online SLAM) can be selected as the more accurate estimate, instead of the first estimated value obtained by the first position estimation process (offline SLAM). In this case as well, rather than selecting the second estimated value immediately when the difference falls outside the predetermined range only once, the second estimated value may be selected when the difference falls outside the predetermined range a predetermined number of times in succession (for example, three times).

FIG. 11C is a diagram showing an example in which the reliability of the first estimated value obtained by the first position estimation process (offline SLAM) and the reliability of the second estimated value obtained by the second position estimation process (online SLAM) fluctuate. In the figure, when the reliability of the first estimated value, indicated by black dots, falls below the reliability of the second estimated value, indicated by white dots, the second estimated value obtained by the second position estimation process (online SLAM) can be selected as the more accurate estimate, instead of the first estimated value obtained by the first position estimation process (offline SLAM). In this case as well, the second estimated value may be selected when the number of times the reliability of the first estimated value falls below the reliability of the second estimated value exceeds a predetermined number.
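The three switching triggers of FIGS. 11A to 11C can be summarized in a sketch such as the following; the record type, the threshold values, and the consecutive-count policy are illustrative assumptions, not values specified by the embodiment.

```python
from dataclasses import dataclass

@dataclass
class ScanRecord:
    jump: float          # |P_t - P_{t-1}| for the offline (first) estimate
    residual: float      # |offline estimate - odometry measurement|
    rel_offline: float   # reliability of the offline (first) estimate
    rel_online: float    # reliability of the online (second) estimate

def should_switch_to_online(history, jump_threshold=0.5,
                            residual_range=0.3, consecutive=3):
    """Decide whether to output the online-SLAM (second) estimate instead
    of the offline-SLAM (first) estimate, given per-scan records."""
    recent = history[-consecutive:]
    if len(recent) < consecutive:
        return False
    # Trigger of FIG. 11A: estimate jump above threshold N times in a row.
    if all(r.jump > jump_threshold for r in recent):
        return True
    # Trigger of FIG. 11B: deviation from odometry outside range N times in a row.
    if all(r.residual > residual_range for r in recent):
        return True
    # Trigger of FIG. 11C: offline reliability below online reliability N times.
    if all(r.rel_offline < r.rel_online for r in recent):
        return True
    return False
```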
Refer again to FIG. 10.

In step S22, the processor 106 performs position and orientation estimation by online SLAM. Specifically, the flow proceeds to step S40. The online SLAM flow is described later.

Next, the positional deviation correction in step S16 is described with reference to FIG. 12.

First, in step S32, the processor 106 searches for corresponding points between the two point clouds. Specifically, the processor 106 selects, for each point constituting the point cloud included in the scan data, a corresponding point on the environmental map.

In step S34, the processor 106 performs a rigid transformation (coordinate transformation) of rotation and translation on the scan data so as to reduce the distances between corresponding points in the scan data and the environmental map. This amounts to optimizing the parameters of the coordinate transformation matrix so as to reduce the distances between corresponding points, that is, the sum of the corresponding-point errors (sum of squares). This optimization is performed by iterative calculation.

In step S36, the processor 106 determines whether the result of the iterative calculation has converged. Specifically, the processor 106 determines that the calculation has converged when the decrease in the sum of the corresponding-point errors (sum of squares) falls below a predetermined value even when the parameters of the coordinate transformation matrix are varied. If the calculation has not converged, the flow returns to step S32, and the processor 106 repeats the processing from the corresponding-point search. If it is determined in step S36 that the calculation has converged, the flow proceeds to step S38.

In step S38, the processor 106 uses the coordinate transformation matrix to convert the coordinate values of the scan data from values in the sensor coordinate system to values in the coordinate system of the environmental map. The coordinate values of the scan data obtained in this way can be used to update the environmental map.
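A compact sketch of this ICP loop (steps S32 to S38) follows; it uses a nearest-neighbor correspondence search and a closed-form SVD alignment step, which are common ICP ingredients assumed here for illustration rather than details taken from the embodiment. Inputs are assumed to be NumPy arrays of shape (N, 2) and (M, 2).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(scan, map_points, max_iter=50, tol=1e-6):
    """Align `scan` to `map_points`; returns R, t, and the aligned scan."""
    tree = cKDTree(map_points)         # for corresponding-point search (S32)
    R_total, t_total = np.eye(2), np.zeros(2)
    prev_err = np.inf
    src = scan.copy()
    for _ in range(max_iter):
        dist, idx = tree.query(src)    # S32: nearest map point for each scan point
        tgt = map_points[idx]
        # S34: closed-form rigid transform minimizing the squared error.
        sc, tc = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - sc).T @ (tgt - tc)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tc - R @ sc
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = float(np.sum(dist ** 2))
        if prev_err - err < tol:       # S36: convergence check
            break
        prev_err = err
    return R_total, t_total, src       # S38: scan in the map coordinate system
```

In practice, the initial alignment of step S14 (or S44) would be applied to `scan` before calling such a routine, so that the iteration starts near the optimum.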
Next, position and orientation estimation by online SLAM is described with reference to FIG. 13.

In step S40, the processor 106 of the position estimation system 115 acquires the latest (current) scan data from the external sensor 102.

In step S42, the processor 106 acquires the current position and orientation values by odometry.

In step S44, the processor 106 performs initial alignment of the latest scan data with respect to the reference map, using the current position and orientation values acquired from odometry as initial values.

In step S46, the processor 106 performs positional deviation correction using the ICP algorithm.

In step S48, the processor 106 generates estimated values (second estimated values) of the position and orientation of the moving body obtained as a result of matching against the reference map. These second estimated values are output as the selected estimated values, in place of the first estimated values, when a "Yes" determination is made in step S20 of FIG. 10. Note that, regardless of the result of the determination in step S20, position and orientation estimation by online SLAM is executed continuously and the second estimated values are generated. Whether the second estimated values are adopted as the selected estimated values depends on the presence or absence of the events described with reference to FIG. 11.

In step S50, it is determined whether the reference map satisfies an update condition. As described above, the update conditions are, for example: (i) the number of updates to the reference map has reached a predetermined number; (ii) the data volume of the reference map has reached a predetermined amount; or (iii) the time elapsed since the previous reset has reached a predetermined length. If No, the flow returns to step S40, and the next scan data is acquired. If Yes, the flow proceeds to step S52.

In step S52, the processor 106 deletes from the reference map, which has been updated multiple times, everything except a portion that includes the latest scan data, thereby resetting the reference map. In this way, the number and density of points in the point cloud constituting the reference map can be reduced.

Note that the determination made in step S20 of FIG. 10 is also performed, as needed or periodically, during online SLAM. Therefore, when online SLAM is no longer necessary, the system promptly returns from online SLAM to offline SLAM.
The position estimation system according to the present disclosure is applicable to various moving bodies that are moved by a wide variety of drive devices. The position estimation system of the present disclosure need not be used mounted on a moving body equipped with a drive device. For example, it may be placed on a handcart pushed by a user and used for map creation.
<Exemplary Embodiment> An embodiment of a moving body equipped with the position estimation system according to the present disclosure is described below in more detail. In this embodiment, an automated guided vehicle is taken as an example of the moving body. In the following description, the abbreviation "AGV" (Automatic Guided Vehicle) is used for the automated guided vehicle. As with the moving body 10, the reference sign "10" is attached to the AGV.
(1) Basic Configuration of the System FIG. 14 shows an example of the basic configuration of an exemplary moving body management system 100 according to the present disclosure. The moving body management system 100 includes at least one AGV 10 and an operation management device 50 that manages the operation of the AGV 10. FIG. 14 also shows a terminal device 20 operated by a user 1.

The AGV 10 is an automated guided vehicle capable of "guideless" travel, which requires no guide such as magnetic tape for traveling. The AGV 10 can perform self-position estimation and transmit the estimation results to the terminal device 20 and the operation management device 50. The AGV 10 can travel automatically within an environment S in accordance with commands from the operation management device 50.

The operation management device 50 is a computer system that tracks the position of each AGV 10 and manages the travel of each AGV 10. The operation management device 50 may be a desktop PC, a notebook PC, and/or a server computer. The operation management device 50 communicates with each AGV 10 via a plurality of access points 2. For example, the operation management device 50 transmits to each AGV 10 the coordinate data of the position to which that AGV 10 should travel next. Each AGV 10 periodically, for example every 250 milliseconds, transmits data indicating its own position and orientation to the operation management device 50. When the AGV 10 reaches the designated position, the operation management device 50 transmits the coordinate data of the next position to which the AGV should travel. The AGV 10 can also travel within the environment S in response to operations by the user 1 entered on the terminal device 20. An example of the terminal device 20 is a tablet computer.

FIG. 15 shows an example of an environment S in which three AGVs 10a, 10b, and 10c are present. All of the AGVs are assumed to be traveling in the depth direction of the figure. The AGVs 10a and 10b are transporting loads placed on their top plates. The AGV 10c is traveling so as to follow the AGV 10b ahead of it. For convenience of description, the reference signs 10a, 10b, and 10c are used in FIG. 15, but the AGVs are referred to as "AGV 10" below.

Besides transporting a load placed on its top plate, the AGV 10 can also transport a load using a towed cart connected to it. FIG. 16 shows the AGV 10 and the towed cart 5 before they are connected. Each leg of the towed cart 5 is provided with a caster. The AGV 10 is mechanically connected to the towed cart 5. FIG. 17 shows the AGV 10 and the towed cart 5 after they are connected. When the AGV 10 travels, the towed cart 5 is towed by the AGV 10. By towing the towed cart 5, the AGV 10 can transport a load placed on the towed cart 5.

The method of connecting the AGV 10 and the towed cart 5 is arbitrary. One example is described here. A plate 6 is fixed to the top plate of the AGV 10. The towed cart 5 is provided with a guide 7 having a slit. The AGV 10 approaches the towed cart 5 and inserts the plate 6 into the slit of the guide 7. When the insertion is complete, the AGV 10 passes an electromagnetically locking pin (not shown) through the plate 6 and the guide 7 and engages the electromagnetic lock. The AGV 10 and the towed cart 5 are thereby physically connected.

Refer again to FIG. 14. Each AGV 10 and the terminal device 20 can be connected, for example, one-to-one and communicate in conformity with the Bluetooth (registered trademark) standard. Each AGV 10 and the terminal device 20 can also communicate in conformity with Wi-Fi (registered trademark) using one or more access points 2. The plurality of access points 2 are connected to each other via, for example, a switching hub 3. FIG. 14 shows two access points 2a and 2b. The AGV 10 is wirelessly connected to the access point 2a. The terminal device 20 is wirelessly connected to the access point 2b. Data transmitted by the AGV 10 is received by the access point 2a, transferred to the access point 2b via the switching hub 3, and transmitted from the access point 2b to the terminal device 20. Data transmitted by the terminal device 20 is received by the access point 2b, transferred to the access point 2a via the switching hub 3, and transmitted from the access point 2a to the AGV 10. Bidirectional communication between the AGV 10 and the terminal device 20 is thereby realized. The plurality of access points 2 are also connected to the operation management device 50 via the switching hub 3. Bidirectional communication between the operation management device 50 and each AGV 10 is thereby realized as well.
(2) Creation of an Environmental Map A map of the environment S is created in advance so that the AGV 10 can travel while estimating its own position by offline SLAM. The AGV 10 is equipped with a position estimation device and an LRF, and an environmental map can be created using the output of the LRF.

The AGV 10 transitions to a data acquisition mode in response to a user operation. In the data acquisition mode, the AGV 10 starts acquiring sensor data (scan data) using the LRF. Subsequent processing is as described above.

Movement within the environment S for acquiring sensor data can be realized by the AGV 10 traveling in accordance with user operations. For example, the AGV 10 wirelessly receives, via the terminal device 20, travel commands from the user instructing movement in each of the forward, backward, left, and right directions. The AGV 10 travels forward, backward, left, and right within the environment S in accordance with the travel commands and creates the map. When the AGV 10 is connected by wire to a control device such as a joystick, it may travel forward, backward, left, and right within the environment S in accordance with control signals from that control device to create the map. Sensor data may also be acquired by a person pushing a measurement cart equipped with an LRF.

Although FIGS. 14 and 15 show a plurality of AGVs 10, there may be only one AGV. When a plurality of AGVs 10 are present, the user 1 can use the terminal device 20 to select one AGV 10 from among the plurality of registered AGVs and have it create a map of the environment S.
(3) Configuration of the AGV FIG. 18 is an external view of an exemplary AGV 10 according to the present embodiment. The AGV 10 has two drive wheels 11a and 11b, four casters 11c, 11d, 11e, and 11f, a frame 12, a transport table 13, a travel control device 14, and an LRF 15. The two drive wheels 11a and 11b are provided on the right and left sides of the AGV 10, respectively. The four casters 11c, 11d, 11e, and 11f are arranged at the four corners of the AGV 10. The AGV 10 also has a plurality of motors connected to the two drive wheels 11a and 11b, but the motors are not shown in FIG. 18. FIG. 18 shows one drive wheel 11a and two casters 11c and 11e located on the right side of the AGV 10, as well as the caster 11f located at the left rear, but the left drive wheel 11b and the left front caster 11d are hidden behind the frame 12 and therefore not explicitly shown. The four casters 11c, 11d, 11e, and 11f can swivel freely. In the following description, the drive wheels 11a and 11b are also referred to as wheels 11a and 11b, respectively.

The travel control device 14 is a device that controls the operation of the AGV 10 and mainly includes an integrated circuit including a microcomputer (described later), electronic components, and a board on which these are mounted. The travel control device 14 performs the above-described transmission and reception of data to and from the terminal device 20 and the preprocessing calculations.

The LRF 15 is an optical device that measures the distance to a reflection point by, for example, emitting an infrared laser beam 15a and detecting the reflected light of the laser beam 15a. In the present embodiment, the LRF 15 of the AGV 10 emits a pulsed laser beam 15a into a space spanning, for example, 135 degrees to each of the left and right (270 degrees in total) with respect to the front of the AGV 10, changing its direction in steps of 0.25 degrees, and detects the reflected light of each laser beam 15a. This yields distance data to the reflection point in each direction determined by an angle in steps of 0.25 degrees, for a total of 1081 steps. In the present embodiment, the scan of the surrounding space performed by the LRF 15 is substantially parallel to the floor surface and planar (two-dimensional). However, the LRF 15 may also scan in the height direction.
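For illustration, converting such a scan into points in the sensor coordinate system can be sketched as follows, assuming the 270-degree, 0.25-degree-step geometry described above; the array layout and angle convention are assumptions for this example.

```python
import numpy as np

def scan_to_points(ranges, fov_deg=270.0, step_deg=0.25):
    """Convert 1081 range readings (meters) into (N, 2) Cartesian points
    in the sensor frame, angle 0 at the front, counterclockwise positive."""
    n = int(round(fov_deg / step_deg)) + 1        # 1081 readings
    assert len(ranges) == n
    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n))
    r = np.asarray(ranges)
    return np.column_stack((r * np.cos(angles), r * np.sin(angles)))
```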
From the position and orientation of the AGV 10 and the scan results of the LRF 15, the AGV 10 can create a map of the environment S. The map can reflect the arrangement of walls, pillars, and other structures around the AGV, as well as objects placed on the floor. The map data is stored in a storage device provided within the AGV 10.

The position and orientation of the AGV 10, that is, its pose (x, y, θ), may hereinafter be referred to simply as the "position".

As described above, the travel control device 14 compares the measurement results of the LRF 15 with the map data it holds and estimates its own current position. The map data may be map data created by another AGV 10.

FIG. 19A shows a first hardware configuration example of the AGV 10. FIG. 19A also shows a specific configuration of the travel control device 14.

The AGV 10 includes the travel control device 14, the LRF 15, two motors 16a and 16b, a drive device 17, and the wheels 11a and 11b.

The travel control device 14 has a microcomputer 14a, a memory 14b, a storage device 14c, a communication circuit 14d, and a position estimation device 14e. The microcomputer 14a, the memory 14b, the storage device 14c, the communication circuit 14d, and the position estimation device 14e are connected by a communication bus 14f and can exchange data with one another. The LRF 15 is also connected to the communication bus 14f via a communication interface (not shown) and transmits measurement data, which are the measurement results, to the microcomputer 14a, the position estimation device 14e, and/or the memory 14b.

The microcomputer 14a is a processor or control circuit (computer) that performs calculations for controlling the entire AGV 10, including the travel control device 14. Typically, the microcomputer 14a is a semiconductor integrated circuit. The microcomputer 14a transmits a PWM (Pulse Width Modulation) signal, which is a control signal, to the drive device 17 to control the drive device 17 and cause it to adjust the voltage applied to the motors. Each of the motors 16a and 16b thereby rotates at the desired rotational speed.

One or more control circuits (for example, microcomputers) that control the driving of the left and right motors 16a and 16b may be provided independently of the microcomputer 14a. For example, the motor drive device 17 may include two microcomputers that control the driving of the motors 16a and 16b, respectively.

The memory 14b is a volatile storage device that stores the computer program executed by the microcomputer 14a. The memory 14b can also be used as a work memory when the microcomputer 14a and the position estimation device 14e perform calculations.

The storage device 14c is a nonvolatile semiconductor memory device. However, the storage device 14c may be a magnetic recording medium, typified by a hard disk, or an optical recording medium, typified by an optical disc. Furthermore, the storage device 14c may include a head device for writing and/or reading data on any of these recording media and a control device for the head device.

The storage device 14c stores the environmental map M of the environment S in which the AGV travels and the data of one or more travel routes (travel route data) R. The environmental map M is created by the AGV 10 operating in a map creation mode and is stored in the storage device 14c. The travel route data R is transmitted from outside after the environmental map M has been created. In the present embodiment, the environmental map M and the travel route data R are stored in the same storage device 14c, but they may be stored in different storage devices.
An example of the travel route data R is described below.

When the terminal device 20 is a tablet computer, the AGV 10 receives from the tablet computer travel route data R indicating a travel route. The travel route data R in this case includes marker data indicating the positions of a plurality of markers. A "marker" indicates a passing position (waypoint) of the traveling AGV 10. The travel route data R includes at least the position information of a start marker indicating the travel start position and an end marker indicating the travel end position. The travel route data R may further include position information of markers for one or more intermediate waypoints. When the travel route includes one or more intermediate waypoints, the route from the start marker through those waypoints in order to the end marker is defined as the travel route. In addition to the coordinate data of each marker, the data of each marker may include data on the direction (angle) and travel speed of the AGV 10 until it moves to the next marker. When the AGV 10 stops temporarily at the position of each marker to perform self-position estimation, notification to the terminal device 20, and so on, the data of each marker may include data on the acceleration time required to reach the travel speed and/or the deceleration time required to decelerate from the travel speed to a stop at the position of the next marker.
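A possible in-memory representation of such marker data is sketched below; the field names and types are illustrative assumptions, since the embodiment does not specify a concrete format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Marker:
    x: float                            # marker coordinates on the map
    y: float
    angle: Optional[float] = None       # heading toward the next marker (deg)
    speed: Optional[float] = None       # travel speed toward the next marker (m/s)
    accel_time: Optional[float] = None  # time to reach the travel speed (s)
    decel_time: Optional[float] = None  # time to stop at the next marker (s)

@dataclass
class TravelRoute:
    markers: List[Marker]               # start marker, waypoints..., end marker
```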
The operation management device 50 (for example, a PC and/or a server computer), rather than the terminal device 20, may control the movement of the AGV 10. In that case, the operation management device 50 may instruct the AGV 10 to move to the next marker each time the AGV 10 reaches a marker. For example, the AGV 10 receives from the operation management device 50, as travel route data R indicating the travel route, the coordinate data of the target position to which it should travel next, or data on the distance to that target position and the angle at which it should travel.

The AGV 10 can travel along the stored travel route while estimating its own position using the created map and the sensor data output by the LRF 15 and acquired during travel. The details of this operation are as described above. According to the embodiments of the present disclosure, even when part of the environmental map prepared in advance does not reflect the actual environment, the self-position can be estimated continuously by promptly switching to online SLAM.

The communication circuit 14d is, for example, a wireless communication circuit that performs wireless communication in conformity with the Bluetooth (registered trademark) and/or Wi-Fi (registered trademark) standards. Both standards include wireless communication standards that use frequencies in the 2.4 GHz band. For example, in the mode in which the AGV 10 is driven to create a map, the communication circuit 14d performs wireless communication in conformity with the Bluetooth (registered trademark) standard and communicates one-to-one with the terminal device 20.

The position estimation device 14e performs the map creation process and, during travel, the self-position estimation process. The position estimation device 14e creates a map of the environment S from the position and orientation of the AGV 10 and the scan results of the LRF. During travel, the position estimation device 14e receives sensor data from the LRF 15 and reads the environmental map M stored in the storage device 14c. It identifies the self-position (x, y, θ) on the environmental map M by matching local map data (sensor data) created from the scan results of the LRF 15 against the wider-ranging environmental map M. The position estimation device 14e generates "reliability" data representing the degree to which the local map data matches the environmental map M. The data of the self-position (x, y, θ) and of the reliability can be transmitted from the AGV 10 to the terminal device 20 or the operation management device 50. The terminal device 20 or the operation management device 50 can receive the data of the self-position (x, y, θ) and of the reliability and display them on a built-in or connected display device.
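Purely as an illustration, one simple reliability measure, assumed here and not specified by the embodiment, is the fraction of aligned scan points that lie within a small tolerance of some environmental-map point:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_reliability(aligned_scan, map_points, tol=0.05):
    """Reliability in [0, 1]: fraction of aligned scan points that lie
    within `tol` meters of their nearest environmental-map point."""
    dist, _ = cKDTree(map_points).query(aligned_scan)
    return float(np.mean(dist <= tol))
```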
In the present embodiment, the microcomputer 14a and the position estimation device 14e are treated as separate components, but this is only an example. They may instead be a single chip circuit or semiconductor integrated circuit capable of independently performing the operations of the microcomputer 14a and the position estimation device 14e. FIG. 19A shows a chip circuit 14g encompassing the microcomputer 14a and the position estimation device 14e. An example in which the microcomputer 14a and the position estimation device 14e are provided separately and independently is described below.

The two motors 16a and 16b are attached to the two wheels 11a and 11b, respectively, and rotate the wheels. That is, the two wheels 11a and 11b are both drive wheels. In this specification, the motor 16a and the motor 16b are described as motors that drive the right wheel and the left wheel of the AGV 10, respectively.

The moving body 10 may further include rotary encoders that measure the rotational positions or rotational speeds of the wheels 11a and 11b. The microcomputer 14a may estimate the position and orientation of the moving body 10 using not only the signals received from the position estimation device 14e but also signals received from the rotary encoders.

The drive device 17 has motor drive circuits 17a and 17b for adjusting the voltages applied to the two motors 16a and 16b, respectively. Each of the motor drive circuits 17a and 17b includes a so-called inverter circuit. The motor drive circuits 17a and 17b switch the current flowing through each motor on or off according to PWM signals transmitted from the microcomputer 14a or from a microcomputer within the motor drive circuit 17a, thereby adjusting the voltage applied to the motor.

FIG. 19B shows a second hardware configuration example of the AGV 10. The second hardware configuration example differs from the first hardware configuration example (FIG. 19A) in that it has a laser positioning system 14h and in that the microcomputer 14a is connected to each component on a one-to-one basis.

The laser positioning system 14h has the position estimation device 14e and the LRF 15. The position estimation device 14e and the LRF 15 are connected by, for example, an Ethernet (registered trademark) cable. The operations of the position estimation device 14e and the LRF 15 are as described above. The laser positioning system 14h outputs information indicating the pose (x, y, θ) of the AGV 10 to the microcomputer 14a.

The microcomputer 14a has various general-purpose I/O interfaces or general-purpose input/output ports (not shown). The microcomputer 14a is directly connected to the other components within the travel control device 14, such as the communication circuit 14d and the laser positioning system 14h, via these general-purpose input/output ports.

Other than the configuration described above with reference to FIG. 19B, the configuration is the same as that of FIG. 19A. Description of the common configuration is therefore omitted.

The AGV 10 in the embodiments of the present disclosure may include safety sensors, such as an obstacle detection sensor and a bumper switch, which are not shown.
(4) Configuration Example of the Operation Management Device FIG. 20 shows a hardware configuration example of the operation management device 50. The operation management device 50 has a CPU 51, a memory 52, a position database (position DB) 53, a communication circuit 54, a map database (map DB) 55, and an image processing circuit 56.

The CPU 51, the memory 52, the position DB 53, the communication circuit 54, the map DB 55, and the image processing circuit 56 are connected by a communication bus 57 and can exchange data with one another.

The CPU 51 is a signal processing circuit (computer) that controls the operation of the operation management device 50. Typically, the CPU 51 is a semiconductor integrated circuit.

The memory 52 is a volatile storage device that stores the computer program executed by the CPU 51. The memory 52 can also be used as a work memory when the CPU 51 performs calculations.

The position DB 53 stores position data indicating each position that can be a destination of each AGV 10. The position data can be represented, for example, by coordinates virtually set within the factory by an administrator. The position data is determined by the administrator.

The communication circuit 54 performs wired communication in conformity with, for example, the Ethernet (registered trademark) standard. The communication circuit 54 is connected to the access points 2 (FIG. 14) by wire and can communicate with the AGVs 10 via the access points 2. The communication circuit 54 receives data to be transmitted to the AGVs 10 from the CPU 51 via the bus 57. The communication circuit 54 also transmits data (notifications) received from the AGVs 10 to the CPU 51 and/or the memory 52 via the bus 57.

The map DB 55 stores data of internal maps of the factory or other facility in which the AGVs 10 travel. The format of the data is not limited, provided the map has a one-to-one correspondence with the position of each AGV 10. For example, the map stored in the map DB 55 may be a map created by CAD.

The position DB 53 and the map DB 55 may be constructed on a nonvolatile semiconductor memory, or on a magnetic recording medium typified by a hard disk or an optical recording medium typified by an optical disc.

The image processing circuit 56 is a circuit that generates the data of the video displayed on the monitor 58. The image processing circuit 56 operates exclusively when the administrator operates the operation management device 50. Further detailed description is omitted in the present embodiment. The monitor 58 may be integrated with the operation management device 50. The CPU 51 may also perform the processing of the image processing circuit 56.
上述の実施形態の説明では、一例として二次元空間(床面)を走行するAGVを挙げた。しかしながら本開示は三次元空間を移動する移動体、例えば飛行体(ドローン)、にも適用され得る。ドローンが飛行しながら三次元空間地図を作成する場合には、二次元空間を三次元空間に拡張することができる。  In the description of the above-described embodiment, an AGV that travels in a two-dimensional space (floor surface) is taken as an example. However, the present disclosure can also be applied to a moving object that moves in a three-dimensional space, such as a flying object (drone). When a 3D space map is created while a drone is flying, the 2D space can be expanded to a 3D space. *
上記の包括的または具体的な態様は、システム、方法、集積回路、コンピュータプログラム、または記録媒体によって実現されてもよい。あるいは、システム、装置、方法、集積回路、コンピュータプログラム、および記録媒体の任意な組み合わせによって実現されてもよい。 The comprehensive or specific aspect described above may be realized by a system, a method, an integrated circuit, a computer program, or a recording medium. Alternatively, the present invention may be realized by any combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium.
The moving body of the present disclosure can be suitably used to move and transport items such as packages, parts, and finished products in factories, warehouses, construction sites, logistics facilities, hospitals, and the like.
DESCRIPTION OF REFERENCE NUMERALS: 1: user; 2a, 2b: access point; 10: AGV (moving body); 11a, 11b: drive wheel (wheel); 11c, 11d, 11e, 11f: caster; 12: frame; 13: transport table; 14: travel control device; 14a: microcontroller; 14b: memory; 14c: storage device; 14d: communication circuit; 14e: position estimation device; 15: laser range finder; 16a, 16b: motor; 17a, 17b: motor drive circuit; 20: terminal device (mobile computer such as a tablet computer); 50: operation management device; 51: CPU; 52: memory; 53: position database (position DB); 54: communication circuit; 55: map database (map DB); 56: image processing circuit; 100: moving body management system

Claims (16)

  1. A position estimation system for a moving body, used connected to an external sensor that repeatedly scans an environment and outputs sensor data for each scan, the system comprising: at least one processor; a first memory that stores an environment map prepared in advance; and a second memory that stores a computer program that operates the processor, wherein the at least one processor, in accordance with instructions of the computer program, executes: (A) a first position estimation process of generating a first estimated value of a position and orientation of the moving body based on a result of matching between the environment map and the sensor data; (B) a second position estimation process of generating a second estimated value of the position and orientation of the moving body based on a result of matching between a reference map and the sensor data, while generating the reference map of the surroundings using the sensor data; and (C) selecting one of the first estimated value and the second estimated value, and outputting the selected one as a selected estimated value of the position and orientation of the moving body.
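To make the structure of claim 1 concrete, the following minimal sketch shows the two estimation processes and the selection step. It is an assumption-laden illustration, not the claimed implementation: match_to_map() is a stand-in for any scan matcher (ICP is a common choice for laser range finder data), and the selection criterion here is a simple score comparison, whereas the dependent claims define several specific criteria.

    # Illustrative sketch of claim 1 (all names assumed): two estimators, one selector.
    def match_to_map(grid_map, scan):
        """Stand-in for a real scan matcher such as ICP; returns (pose, score)."""
        raise NotImplementedError  # a real matcher would align `scan` to `grid_map`

    def estimate_pose(env_map, reference_map, sensor_data):
        # (A) first estimate: match the sensor data against the prepared environment map
        pose1, score1 = match_to_map(env_map, sensor_data)
        # (B) second estimate: match against the reference map being built from the
        #     sensor data itself, then grow the reference map (SLAM-style);
        #     reference_map is assumed to expose an add() method
        pose2, score2 = match_to_map(reference_map, sensor_data)
        reference_map.add(sensor_data, pose2)
        # (C) select one of the two estimates and output it
        return pose1 if score1 >= score2 else pose2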
  2. The position estimation system according to claim 1, wherein, while executing the second position estimation process, the at least one processor, in accordance with instructions of the computer program, executes: acquiring the scan data from the external sensor and creating the reference map from the scan data; when scan data is newly acquired from the external sensor, estimating the position and orientation of the external sensor on the reference map by matching the newly acquired latest scan data against the reference map, and updating the reference map by adding the latest scan data to the reference map; and resetting the reference map by deleting, from the reference map that has been updated a plurality of times, the portions other than a portion containing the latest scan data.
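The acquire-match-update-reset cycle of claim 2 can be sketched as below. The ReferenceMap class and its methods are assumptions for illustration; in particular, this reset keeps only the single latest scan, whereas the claim only requires keeping a portion that contains the latest scan data.

    # Illustrative sketch of the reference-map lifecycle in claim 2 (names assumed).
    class ReferenceMap:
        def __init__(self):
            self.entries = []               # accumulated (scan, pose) pairs

        def match(self, scan):
            # stand-in for matching the latest scan against the accumulated map
            return self.entries[-1][1] if self.entries else (0.0, 0.0, 0.0)

        def update(self, scan, pose):
            self.entries.append((scan, pose))   # add the latest scan to the map

        def reset(self):
            self.entries = self.entries[-1:]    # delete all but the latest portion

    def second_position_estimation(sensor, ref_map):
        scan = sensor.get_scan()            # newly acquired latest scan data (assumed API)
        pose = ref_map.match(scan)          # pose of the external sensor on the map
        ref_map.update(scan, pose)
        if len(ref_map.entries) >= 100:     # reset trigger; see claims 9-11
            ref_map.reset()
        return pose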
  3. The position estimation system according to claim 1 or 2, wherein the processor calculates a reliability of the first estimated value and, when the reliability falls below a threshold, outputs the second estimated value as the selected estimated value.
  4. The position estimation system according to any one of claims 1 to 3, wherein the processor calculates a first reliability of the first estimated value and a second reliability of the second estimated value and, when the first reliability falls below the second reliability, outputs the second estimated value as the selected estimated value.
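Claims 3 and 4 both switch the output to the second estimated value, one against an absolute threshold and one against the second estimate's own reliability. A compact sketch, with an assumed threshold value and an assumed reliability measure (for example, the fraction of scan points matched within tolerance):

    # Illustrative sketch of claims 3 and 4; the threshold value is assumed.
    RELIABILITY_THRESHOLD = 0.6

    def select_estimate(pose1, rel1, pose2, rel2):
        if rel1 < RELIABILITY_THRESHOLD:   # claim 3: reliability fell below a threshold
            return pose2
        if rel1 < rel2:                    # claim 4: first reliability below the second
            return pose2
        return pose1                       # otherwise keep the map-based estimate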
  5. The position estimation system according to any one of claims 1 to 4, wherein the processor outputs the second estimated value as the selected estimated value when a current value of the first estimated value has changed from a past value of the first estimated value beyond a predetermined range.
  6. The position estimation system according to any one of claims 1 to 5, wherein the processor calculates a difference between the first estimated value and a measured value of position and/or orientation acquired from an internal sensor and, when the difference exceeds a threshold, outputs the second estimated value as the selected estimated value.
  7. The position estimation system according to any one of claims 1 to 6, wherein the processor outputs the second estimated value as the selected estimated value when the computation that determines the first estimated value cannot be completed within a predetermined time.
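Claims 5 to 7 add three further fallback triggers: an implausible jump in the first estimate, a large gap to the internal-sensor (odometry) measurement, and a timeout of the matching computation. One way to sketch them together; all limit values are assumptions, and a real system would typically abort the matcher at the deadline rather than check afterwards as done here:

    # Illustrative sketch of the fallback triggers in claims 5-7 (limits assumed).
    import math
    import time

    JUMP_LIMIT_M = 0.5       # claim 5: allowed change from the previous first estimate
    ODOMETRY_GAP_M = 0.3     # claim 6: allowed difference to the internal-sensor value
    DEADLINE_S = 0.1         # claim 7: time budget for computing the first estimate

    def first_estimate_usable(pose1, prev_pose1, odom_pose, started_at):
        # poses are (x, y, theta) tuples; [:2] compares the position part only
        if time.monotonic() - started_at > DEADLINE_S:
            return False     # claim 7: computation not completed in time
        if math.dist(pose1[:2], prev_pose1[:2]) > JUMP_LIMIT_M:
            return False     # claim 5: changed beyond the predetermined range
        if math.dist(pose1[:2], odom_pose[:2]) > ODOMETRY_GAP_M:
            return False     # claim 6: difference to odometry exceeds the threshold
        return True          # otherwise the first estimate may be output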
  8. The position estimation system according to any one of claims 1 to 7, wherein the processor executes: acquiring the scan data from the external sensor and creating the reference map from the scan data; when scan data is newly acquired from the external sensor, estimating the position and orientation of the external sensor on the reference map by matching the newly acquired latest scan data against the reference map, and updating the reference map by adding the latest scan data to the reference map; resetting the reference map by deleting, from the reference map that has been updated a plurality of times, the portions other than a portion containing the latest scan data; and, when performing the reset, updating the environment map based on the reference map as updated the plurality of times before the reset.
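Claim 8 reuses the pre-reset reference map: just before the reference map is cleared, its accumulated content reflects the environment as actually observed, so it can be merged back into the environment map. A minimal sketch, reusing the ReferenceMap sketch above and assuming the environment map exposes a merge() method:

    # Illustrative sketch of claim 8 (method names assumed).
    def reset_with_environment_update(ref_map, env_map):
        # the multiply-updated reference map, captured before the reset
        env_map.merge(ref_map.entries)
        ref_map.reset()     # then keep only the portion with the latest scan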
  9. The position estimation system according to claim 8, wherein the processor resets the reference map when the number of times the reference map has been updated reaches a predetermined number.
  10. The position estimation system according to claim 8, wherein the processor resets the reference map when a data amount of the reference map reaches a predetermined amount.
  11. The position estimation system according to claim 8, wherein the processor resets the reference map when the time elapsed since the previous reset reaches a predetermined length.
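Claims 9 to 11 name three concrete reset triggers: an update count, a data amount, and an elapsed time. They can be checked together; the constants below are assumptions, not values given in the disclosure:

    # Illustrative sketch of the reset triggers in claims 9-11 (constants assumed).
    MAX_UPDATES = 100            # claim 9: number of reference-map updates
    MAX_BYTES = 8_000_000        # claim 10: reference-map data amount
    MAX_AGE_SECONDS = 60.0       # claim 11: time elapsed since the previous reset

    def needs_reset(update_count, data_bytes, seconds_since_reset):
        return (update_count >= MAX_UPDATES
                or data_bytes >= MAX_BYTES
                or seconds_since_reset >= MAX_AGE_SECONDS)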
  12. The position estimation system according to any one of claims 1 to 11, wherein the processor measures an amount of movement of the external sensor based on an output of an internal sensor, and determines, based on the amount of movement of the external sensor, initial values of the position and the orientation of the external sensor used when performing the matching.
  13. The position estimation system according to any one of claims 1 to 11, wherein the processor calculates a predicted value of the current position and orientation of the external sensor based on a history of the position and orientation of the external sensor, and uses the predicted value as initial values of the position and the orientation of the external sensor used when performing the matching.
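Claims 12 and 13 concern how the matcher is seeded: either with the movement amount measured by internal sensors (claim 12) or with a prediction extrapolated from the pose history (claim 13). A sketch of both, with poses represented as (x, y, theta) tuples and all names assumed:

    # Illustrative sketch of claims 12 and 13 (names and representation assumed).
    def initial_pose_from_odometry(last_pose, odometry_delta):
        # claim 12: seed matching with the movement measured by internal sensors
        x, y, th = last_pose
        dx, dy, dth = odometry_delta
        return (x + dx, y + dy, th + dth)

    def initial_pose_from_history(pose_history):
        # claim 13: predict the current pose from the history, here by linear
        # extrapolation of the last two poses over one scan interval
        (x0, y0, th0), (x1, y1, th1) = pose_history[-2], pose_history[-1]
        return (2 * x1 - x0, 2 * y1 - y0, 2 * th1 - th0)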
  14. A moving body comprising: the position estimation system according to any one of claims 1 to 13; the external sensor; and a driving device for movement.
  15. The moving body according to claim 14, further comprising an internal sensor.
  16. A computer program used in the position estimation system according to any one of claims 1 to 13.
PCT/JP2019/013741 2018-04-02 2019-03-28 Position estimation system, moving body comprising said position estimation system, and computer program WO2019194079A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980022370.6A CN111971633B (en) 2018-04-02 2019-03-28 Position estimation system, mobile body having the position estimation system, and recording medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018070527 2018-04-02
JP2018-070527 2018-04-02

Publications (1)

Publication Number Publication Date
WO2019194079A1 true WO2019194079A1 (en) 2019-10-10

Family

ID=68100670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/013741 WO2019194079A1 (en) 2018-04-02 2019-03-28 Position estimation system, moving body comprising said position estimation system, and computer program

Country Status (2)

Country Link
CN (1) CN111971633B (en)
WO (1) WO2019194079A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102057251B (en) * 2008-06-04 2012-11-14 株式会社日立制作所 Navigation device, navigation method and navigation system
JP5245139B2 (en) * 2008-09-29 2013-07-24 鹿島建設株式会社 Mobile object guidance system and guidance method
CN105953798B (en) * 2016-04-19 2018-09-18 深圳市神州云海智能科技有限公司 Method and apparatus for determining the pose of a mobile robot
CN107167148A (en) * 2017-05-24 2017-09-15 安科机器人有限公司 Simultaneous localization and mapping method and apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010277548A (en) * 2009-06-01 2010-12-09 Hitachi Ltd Robot management system, robot management terminal, method for managing robot, and program
JP2017045447A (en) * 2015-08-28 2017-03-02 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Map generation method, own position estimation method, robot system and robot
JP2017146893A (en) * 2016-02-19 2017-08-24 トヨタ自動車株式会社 Self-position estimation method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210293973A1 (en) * 2020-03-20 2021-09-23 Abb Schweiz Ag Position estimation for vehicles based on virtual sensor response
CN113494912A (en) * 2020-03-20 2021-10-12 Abb瑞士股份有限公司 Position estimation of a vehicle based on virtual sensor responses
US11953613B2 (en) * 2020-03-20 2024-04-09 Abb Schweiz Ag Position estimation for vehicles based on virtual sensor response
JP7424438B1 (en) 2022-09-22 2024-01-30 いすゞ自動車株式会社 Vehicle position estimation device
CN116466382A (en) * 2023-04-24 2023-07-21 贵州一招信息技术有限公司 GPS-based high-precision real-time positioning system

Also Published As

Publication number Publication date
CN111971633B (en) 2023-10-20
CN111971633A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
JP6816830B2 (en) A position estimation system and a mobile body equipped with the position estimation system.
JP6825712B2 (en) Mobiles, position estimators, and computer programs
TWI665538B (en) A vehicle performing obstacle avoidance operation and recording medium storing computer program thereof
US20200110410A1 (en) Device and method for processing map data used for self-position estimation, mobile body, and control system for mobile body
JP2019168942A (en) Moving body, management device, and moving body system
JPWO2019026761A1 (en) Mobile and computer programs
JP7111424B2 (en) Mobile object, position estimation device, and computer program
JP7136426B2 (en) Management device and mobile system
WO2019054208A1 (en) Mobile body and mobile body system
WO2019054209A1 (en) Map creation system and map creation device
JP2019053391A (en) Mobile body
WO2019194079A1 (en) Position estimation system, moving body comprising said position estimation system, and computer program
JP2019175137A (en) Mobile body and mobile body system
JP2019175136A (en) Mobile body
JP2019179497A (en) Moving body and moving body system
JP2019067001A (en) Moving body
CN112578789A (en) Moving body
JP2020166702A (en) Mobile body system, map creation system, route creation program and map creation program
JP2019148871A (en) Movable body and movable body system
JPWO2019059299A1 (en) Operation management device
JP2020166701A (en) Mobile object and computer program
JP2019175138A (en) Mobile body and management device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19782084

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19782084

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP