WO2022024563A1 - Own-position estimation device and program


Info

Publication number
WO2022024563A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
score
information
cloud data
unit
Application number
PCT/JP2021/022270
Other languages
French (fr)
Japanese (ja)
Inventor
アレックス益男 金子
久洋 腰塚
隆 筒井
道彦 池田
Original Assignee
日立Astemo株式会社 (Hitachi Astemo, Ltd.)
Application filed by 日立Astemo株式会社 (Hitachi Astemo, Ltd.)
Publication of WO2022024563A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 - Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures
    • G01C11/06 - Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C15/00 - Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation specially adapted for navigation in a road network
    • G01C21/28 - Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 - Map- or contour-matching
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions

Definitions

  • the present invention relates to a self-position estimation device and a program, and particularly to a technique for estimating the position of a moving object such as a robot or an automobile.
  • Autonomous driving technology and driving support technology have been developed in which moving objects such as robots and automobiles collect information around them, estimate the current position and traveling state of the moving objects, and control the traveling of the moving objects.
  • the reliability of the current position estimation depends on the accuracy and amount of the collected surrounding information. On the other hand, when the amount of collected information is large, the processing load becomes high and the reliability of position estimation becomes low.
  • Patent Document 1 discloses a mobile body position estimation device and a mobile body position estimation method that can reduce the processing load required for the position estimation of the mobile body.
  • In Patent Document 1, by securing a predetermined number of feature points that satisfy a predetermined criterion and tracking only those feature points, the position of the moving body is estimated with a relatively small processing load.
  • In Patent Document 1, however, when the number of feature points satisfying the predetermined criterion is large, the processing load cannot be reduced. Conversely, when there are few feature points satisfying the predetermined criterion, highly accurate position estimation cannot be performed.
  • The self-position estimation device of one aspect of the present invention estimates its own position by comparing information on the traveling environment collected by a sensor with map information, and has the following configuration.
  • The device includes: a current position estimation unit that tentatively estimates its own current position; a point cloud data selection unit that calculates, based on the current temporary position estimated by the current position estimation unit, the range within which the sensor can acquire information, and selects, from among the point cloud data included in the map information, the point cloud data within that acquirable range; a data score adjustment unit that compares the score of the point cloud data selected from the map information with the maximum processable score that can be processed within the maximum allowable processing time for position estimation, and adjusts the score of the selected point cloud data accordingly; a matching unit that matches the adjusted point cloud data with the point cloud data of the information acquired by the sensor; and a current position correction unit that corrects the current temporary position estimated by the current position estimation unit based on the matching result of the matching unit.
  • the "three-dimensional point” represents the coordinates in the space located on the surface and inside of the object having a shape, and refers to the points obtained by the sensor, the points included in the map, and the like. It shall include. Further, a plurality of three-dimensional points are called a "point cloud”.
  • the three-dimensional point data may include color information.
  • FIG. 1 is a block diagram of a position estimation device according to an embodiment of the present invention.
  • the position estimation device 1 (an example of the self-position estimation device) is mounted on a moving body 100 such as an automobile or a robot.
  • the position estimation device 1 can communicate with the external storage device 19 existing outside the position estimation device 1. Wireless is desirable as this communication means.
  • the position estimation device 1 has a signal reception unit 2, one or more sensors 12a, 12b, ..., 12n, and an information processing device 13. These components are interconnected by a bus 18.
  • the sensors 12a, 12b, ..., 12n are referred to as sensors 12 when it is not necessary to distinguish them.
  • The information processing device 13 is, for example, a general computer (electronic computer), and includes a sensor processing unit 14 that processes the information acquired by the sensor 12, a control unit 15 (for example, a CPU) that performs processing based on the sensor processing results, a memory 16, and a display unit 17 such as a display.
  • each function according to the present embodiment is realized by the sensor processing unit 14 and the control unit 15 reading and executing a computer program recorded in the memory 16 or a storage medium (not shown).
  • the signal reception unit 2 receives a signal from the outside.
  • the signal receiving unit 2 is a receiver of a Global Positioning System (GPS) that estimates the current position in absolute coordinates of the world.
  • The signal receiving unit 2 may be a receiver of RTK-GPS (Real Time Kinematic GPS), which estimates the current position more accurately than GPS.
  • the signal receiving unit 2 may be a receiver of the quasi-zenith satellite system.
  • the signal receiving unit 2 may receive a signal from a beacon fixed at a known position.
  • the signal receiving unit 2 may receive a signal from a sensor that estimates the position in relative coordinates, such as a wheel encoder, an inertial measurement unit (IMU), and a gyro.
  • the signal receiving unit 2 may receive information such as a lane, a sign, a traffic condition, a shape, a size, and a height of a three-dimensional object in the traveling environment.
  • any method may be used as long as it can be used for the current position estimation, control, and recognition of the mobile body 100 on which the position estimation device 1 is mounted.
  • the sensor 12 is, for example, a still camera or a video camera. Further, the sensor 12 may be a monocular camera or a compound eye camera. Further, the sensor 12 may be a laser sensor. Finally, any sensor may be used as long as the shape information of the traveling environment (around the moving body 100) can be extracted.
  • the information processing device 13 processes the information acquired by the sensor 12 to calculate the position or the amount of movement of the moving body 100.
  • the information processing apparatus 13 may display according to the calculated position or movement amount. Further, the information processing apparatus 13 may output a signal related to the control of the mobile body 100.
  • the sensor 12a is installed in front of the moving body 100, for example.
  • the sensor 12a has a lens and acquires distant view information in front of the moving body 100.
  • the information acquired from the distant view may be such that features such as a three-dimensional object or a landmark (predetermined stationary object) for position estimation are extracted.
  • the other sensors 12b, ..., 12n are installed at positions different from the sensor 12a, and image a direction or region different from the sensor 12a.
  • the sensor 12b may be installed downward, for example, behind the moving body 100.
  • the sensor 12b acquires the near view information behind the moving body 100.
  • the near view information may be the road surface around the moving body 100, or the white line around the moving body 100, the road surface paint, or the like may be detected.
  • The relationship between a pixel position on the image and the actual ground position (x, y) is constant, so the distance from the sensor 12 to a feature point can be calculated geometrically.
  • The distance from the moving body 100 to the object corresponding to a feature point can also be estimated from the time-series movement of the feature points on the image and the movement amount of the moving body 100 received from the signal receiving unit 2. When the sensor 12 is a stereo camera, the distance to a feature point on the image can be measured more accurately. When the sensor 12 is a laser sensor, distant information can be acquired more accurately.
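  • As an illustration of the geometric calculation mentioned above, the sketch below maps a pixel of a downward-facing camera to a ground position through a fixed pixel-to-ground mapping. The homography H, the function name, and all numeric values are assumptions made for this example and are not taken from the embodiment.

```python
import numpy as np

# Hypothetical 3x3 homography mapping a pixel (u, v) of the downward-facing camera to a
# ground position (x, y) in the vehicle frame; in practice it would come from calibration.
H = np.array([[0.01, 0.00, -1.6],
              [0.00, 0.01, -0.4],
              [0.00, 0.00,  1.0]])

def pixel_to_ground_distance(u, v):
    """Map an image feature point to its ground position and its distance from the sensor."""
    x, y, w = H @ np.array([u, v, 1.0])
    x, y = x / w, y / w                       # normalize homogeneous coordinates
    return (x, y), float(np.hypot(x, y))      # ground point and planar distance

print(pixel_to_ground_distance(320, 240))
```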
  • The sensors 12a, 12b, ..., 12n are arranged so that they are not all affected by environmental disturbances such as rain and sunlight at the same time.
  • The sensors 12a to 12n may acquire information under different acquisition conditions (aperture value, white balance, period, etc.). For example, by installing a sensor whose parameters are adjusted for bright places and a sensor whose parameters are adjusted for dark places, it may be possible to acquire images regardless of the brightness of the environment.
  • Sensors 12a to 12n acquire information when receiving a command to start acquisition from the control unit 15 or at regular time intervals.
  • the acquired information data is stored in the memory 16 together with the acquisition time.
  • the memory 16 includes a main storage device (main memory) of the information processing device 13 and an auxiliary storage device such as storage. Information such as a computer program, a table, and a file that realizes each function according to the present embodiment is recorded in the memory 16.
  • As the memory 16, a semiconductor memory, a recording device such as a hard disk or an SSD (Solid State Drive), or a recording medium such as an IC card or an optical disk can be used.
  • the sensor processing unit 14 performs various information processing based on the information data stored in the memory 16 and the acquisition time. In this information processing, for example, intermediate information is created and stored in the memory 16. The intermediate information may be used not only for processing by the sensor processing unit 14 but also for determination and processing by the control unit 15 and the like.
  • the bus 18 can be configured by IEBUS (Inter Equipment Bus), LIN (Local Interconnect Network), CAN (Controller Area Network), or the like.
  • the external storage device 19 stores information on the environment in which the mobile body 100 travels, including map information.
  • the information stored in the external storage device 19 is, for example, the shape and position of a stationary object (tree, building, road, lane, signal, sign, road surface paint, roadside, etc.) in the traveling environment.
  • Each information of the external storage device 19 may be expressed by a mathematical formula.
  • For example, line information may consist not of a plurality of points but only of the slope and intercept of the line.
  • the information of the external storage device 19 (for example, the type of a stationary object) may be represented by a point cloud without distinction.
  • the point cloud may be represented by 3D (x, y, z), 4D (x, y, z, color) or the like.
  • As long as the traveling environment (surrounding information) can be detected from the current position of the moving body 100 and map matching can be performed, the information of the external storage device 19 may be stored in any format.
  • When receiving a command to start acquisition from the control unit 15, the external storage device 19 sends information to the memory 16. When the external storage device 19 is installed in the mobile body 100, it transmits and receives information to and from the memory 16 via the bus 18. When the external storage device 19 is not installed in the mobile body 100, the signal receiving unit 2 transmits and receives information between the position estimation device 1 and the external storage device 19.
  • This communication can be performed by, for example, LAN (Local Area Network), WAN (Wide Area Network), or the like.
  • the sensor processing unit 14 processes the information acquired by the sensor 12. For example, the sensor processing unit 14 processes the information acquired by the sensor 12 while the moving body 100 is traveling to detect an obstacle. Further, the sensor processing unit 14 processes, for example, the information acquired by the sensor 12 while the moving body 100 is traveling, and recognizes a predetermined landmark.
  • The sensor processing unit 14 acquires the information of the external storage device 19 (for example, point cloud data) that corresponds to the information acquired by the sensor 12, based on the current position of the moving body 100 stored in the external storage device 19 and the memory 16 and on the internal parameters of the sensor 12.
  • The sensor processing unit 14 identifies a plurality of position candidates of the moving body based on the information acquired by the sensor 12, and estimates the position of the moving body 100 based on the plurality of position candidates and the moving speed of the moving body 100.
  • the sensor processing unit 14 may process the information acquired by the sensor 12 while the moving body 100 is traveling to estimate the position of the moving body 100. For example, the sensor processing unit 14 calculates the movement amount of the moving body 100 from the information acquired by the sensor 12 in time series, and adds the movement amount to the past position to estimate the current position. The sensor processing unit 14 may extract features from each of the information acquired in time series. The sensor processing unit 14 further extracts the same features from the next and subsequent information. Then, the sensor processing unit 14 calculates the movement amount of the moving body 100 by tracking the feature.
  • the sensor processing unit 14 may perform different tasks by using the information acquired by the sensors 12a to 12n, respectively.
  • For example, the position of the moving body 100 is estimated based on the information acquired by the sensors 12a and 12b, and obstacle detection is performed by two other sensors (not shown).
  • the results based on the information obtained by the respective sensors 12a to 12n are fused and the moving body 100 can be controlled by the control unit 15.
  • When the CPU of the control unit 15 can process only a single thread, the information obtained by the sensors 12a to 12n is processed in the order of the sensors 12a to 12n.
  • When the CPU of the control unit 15 can process multiple threads, the information obtained by the sensors 12a to 12n is processed at the same time.
  • The control unit 15 outputs a command regarding the moving speed to the moving body 100 based on the result of the information processing by the sensor processing unit 14. For example, a command to decrease or a command to maintain the moving speed of the moving body 100 may be output according to the resolution of three-dimensional objects in the information, the number of outliers among the features in the information, the type of information processing, and the like.
  • FIG. 2 is a block diagram showing an example of the internal configuration of the sensor processing unit 14.
  • the sensor processing unit 14 includes an input / output unit 141, a current position estimation unit 142, a point cloud data acquisition unit 143, a filter unit 144, a data score adjustment unit 145, a matching unit 146, an obstacle detection unit 147, and a current position correction unit 148.
  • the input / output unit 141 inputs / outputs necessary information for correcting the position of the moving body 100 in the sensor processing unit 14. For example, the input / output unit 141 reads out the information on the traveling environment acquired by the sensors 12a, 12b, ..., 12n stored in the memory 16. Further, the input / output unit 141 acquires information from the external storage device 19 by the information extraction process (external storage device) (S5 in FIG. 3) registered in the memory 16. When the amount of information acquired from the external storage device 19 is too large to be stored in the memory 16, only the information close to the current position of the moving body 100 estimated by the current position estimation process (S3 in FIG. 3) may be acquired. Further, only the information contained in a certain area when viewed from the current position of the moving body 100 estimated by the current position estimation process may be acquired.
  • the current position estimation unit 142 tentatively estimates its own current position.
  • the position estimated by the current position estimation unit 142 is not limited to the position in absolute coordinates (latitude, longitude), but may be the position in relative coordinates.
  • the point cloud data acquisition unit 143 acquires a three-dimensional point based on the information acquired by the input / output unit 141 and temporarily records it.
  • When the sensors 12a, 12b, ..., 12n are cameras and the information acquired by the input / output unit 141 is an image, the two-dimensional image is converted into three-dimensional points.
  • the conversion method may be, for example, to make each pixel of the image a three-dimensional point based on the parallax image.
  • technologies such as SFM (Structure From Motion), Flat surface model, ORB-SLAM, and LSD-SLAM may be used.
  • the point cloud data acquisition unit 143 calculates the shape of the traveling environment of the moving body 100 by using these technologies such as SFM and SLAM.
  • SFM: Structure From Motion
  • SLAM: Simultaneous Localization and Mapping
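  • As an illustration of the per-pixel conversion based on the parallax image mentioned above, the sketch below uses the standard stereo back-projection. The function name, the camera parameters (f, baseline, cx, cy), and the sample values are assumptions for illustration; the embodiment does not prescribe a particular formula.

```python
import numpy as np

def parallax_image_to_points(disparity, f, baseline, cx, cy):
    """Back-project each valid pixel of a parallax (disparity) image to a 3-D point."""
    v, u = np.indices(disparity.shape)        # pixel grid
    valid = disparity > 0                     # skip pixels without a disparity value
    Z = f * baseline / disparity[valid]       # depth from disparity
    X = (u[valid] - cx) * Z / f               # lateral offset
    Y = (v[valid] - cy) * Z / f               # vertical offset
    return np.column_stack([X, Y, Z])         # N x 3 point cloud

# Example: a tiny 2x2 disparity image, f = 700 px, 0.12 m baseline, principal point (1, 1).
pts = parallax_image_to_points(np.array([[10.0, 0.0], [8.0, 12.0]]), 700.0, 0.12, 1.0, 1.0)
print(pts)
```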
  • the filter unit 144 filters the three-dimensional points acquired by the point cloud data acquisition unit 143.
  • For example, the filter unit 144 may calculate the average of the three-dimensional points within a certain region (voxel) and narrow them down to one three-dimensional point. Further, three-dimensional points in regions of the driving environment that have few features such as corners and edges may be deleted.
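  • A minimal sketch of the voxel averaging described in the previous item is shown below; the function name, voxel size, and sample points are illustrative assumptions rather than values from the embodiment.

```python
import numpy as np

def voxel_filter(points, voxel_size):
    """Average all 3-D points that fall into the same voxel, keeping one point per voxel."""
    cells = {}
    for key, p in zip(map(tuple, np.floor(points / voxel_size).astype(int)), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])

# Example: the two nearby points collapse into one voxel, the distant point stays separate.
print(voxel_filter(np.array([[0.1, 0.1, 0.0], [0.2, 0.1, 0.0], [5.0, 5.0, 5.0]]), 0.5))
```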
  • In the filter unit 144, the three-dimensional points of the external storage device 19 that are visible from the temporary position of the moving body 100 estimated by the current position estimation process (step S3 in FIG. 3) are extracted by filtering.
  • That is, the range in which the sensor 12 can acquire information on the traveling environment is calculated based on the current temporary position estimated by the current position estimation unit 142, and the point cloud data within that acquirable range is selected from among the information (point cloud data) included in the map information stored in the external storage device 19.
  • the filter unit 144 is an example of a point cloud data selection unit.
  • In the matching process (step S10 in FIG. 3), if the number and shape of the information acquired by the sensors 12a, 12b, ..., 12n differ significantly from the information acquired from the external storage device 19, the matching is likely to fail. Therefore, by extracting the three-dimensional points of the external storage device 19 that are visible from the temporary position of the moving body 100, and therefore have a high matching probability, accurate matching can be performed in the matching process. When the matching can be performed with a predetermined accuracy in this way, it is said that "matching is successful".
  • The data score adjustment unit 145 compares the score of the point cloud data selected from the map information with the maximum processable score Nmax (see FIG. 4), and based on the comparison result, adjusts the score of the selected point cloud data to the maximum processable score or less.
  • the maximum processable score is the maximum score that can be processed in the maximum allowable processing time when estimating the position.
  • the matching unit 146 matches the three-dimensional points acquired by the point cloud data acquisition unit 143 with the visible three-dimensional points acquired by the filter unit 144.
  • the matching unit 146 matches the point cloud data after the score is adjusted by the data score adjusting unit 145 with the point cloud data of the information acquired by the sensor 12.
  • the matching unit 146 uses ICP (Iterative Closest Point) technology to perform map matching in which the information acquired by the sensor 12 is compared with the shape information of the driving environment created in advance.
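  • For reference, the sketch below is a minimal point-to-point ICP in the spirit of the map matching described above. It is only an illustration (no outlier rejection, convergence test, or unmatchable-area handling), and the function and variable names are chosen for the example, not taken from the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Minimal point-to-point ICP: rigidly align `source` (Nx3) to `target` (Mx3)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)                              # nearest-neighbor search structure
    for _ in range(iterations):
        _, idx = tree.query(src)                        # closest target point for each source point
        matched = target[idx]
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)         # cross-covariance of centered pairs
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                                  # best rotation (Kabsch / SVD)
        if np.linalg.det(R) < 0:                        # repair an improper rotation (reflection)
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t                             # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                             # accumulated rotation and translation

# Example: the recovered translation approximates the deviation between the two clouds.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))
R, t = icp(cloud, cloud + np.array([0.3, -0.1, 0.0]))
print(t)
```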
  • the obstacle detection unit 147 processes the information acquired by the sensor 12 while the moving body 100 is traveling, and detects obstacles around the sensor 12.
  • the filter unit 144 described above selects point cloud data from the external storage device 19 in consideration of the obstacle detection result in the obstacle detection unit 147.
  • the current position correction unit 148 corrects the current temporary position of the moving body 100.
  • The current position correction unit 148 corrects the current temporary position estimated by the current position estimation unit 142 based on the matching result of the matching unit 146 (the amount of deviation between the point cloud data obtained by the sensor 12 and the point cloud data of the external storage device 19). Then, the current position correction unit 148 outputs the corrected current position information to the control unit 15 and the like.
  • the current position correction unit 148 transmits the corrected current position information to the input / output unit 141, and the input / output unit 141 transmits the corrected current position information according to the command of the control unit 15 to the memory 16. Further, the input / output unit 141 may directly transmit the current position information corrected by the current position correction unit 148 to the control unit 15 by a command of the control unit 15.
  • the matching unit 146 described above may also serve as the current position estimation unit 142 and / or the obstacle detection unit 147.
  • In the present embodiment, the input / output unit 141, the current position estimation unit 142, the point cloud data acquisition unit 143, the filter unit 144, the data score adjustment unit 145, the matching unit 146, the obstacle detection unit 147, and the current position correction unit 148 are included in the sensor processing unit 14, but the present invention is not limited thereto.
  • The input / output unit 141, the current position estimation unit 142, the point cloud data acquisition unit 143, the filter unit 144, the data score adjustment unit 145, the matching unit 146, the obstacle detection unit 147, and the current position correction unit 148 may each be independent components that are not included in the sensor processing unit 14.
  • FIG. 3 is a flowchart showing an example of an information processing procedure executed by the sensor processing unit 14.
  • the sensor processing unit 14 executes a sensor acquisition range recording process for recording the acquisition range of the sensor 12 in the memory 16 (S1). For example, the maximum angle (angle of view) that can be acquired in the horizontal and vertical directions from the center of the sensor 12 is recorded in the memory 16. Further, since the accuracy of the information acquired by the sensor 12 depends on the distance to the object, the maximum distance (range) in which the accuracy does not deteriorate from the predetermined value is recorded in the memory 16. This process may be performed in advance before the sensor processing unit 14 starts this flowchart.
  • the sensor processing unit 14 executes a maximum score recording process for recording the maximum score calculated based on the calculation speed of the information processing apparatus 13 in the memory 16 (S2). The details of this maximum score recording process will be described later with reference to FIG.
  • the current position estimation unit 142 executes the current position estimation process for estimating the current temporary position based on the information received from the signal reception unit 2 (S3). Further, the current position estimation process may estimate the current temporary position based on the information acquired by the sensor 12 such as odometry and landmark matching. In the landmark matching, the information acquired by the sensor 12 and the landmark information acquired from the external storage device 19 are collated. Further, the current position estimation process may be a fusion result of the various position estimation methods described above (odometry, landmark matching, information received from the signal receiving unit 2, etc.). For example, the current position may be predicted based on the result of fusing various position estimation methods with a Kalman filter. Finally, any sensor or method may be used as long as the current temporary position of the moving body 100 can be estimated.
  • the point cloud data acquisition unit 143 executes an information extraction process (sensor) for extracting information (surrounding information) of the traveling environment of the moving body 100 by the sensor 12 (S4).
  • the information extracted by this information extraction process (sensor) is used as a point cloud.
  • a method of using a mathematical formula (function, curve) or a color (image matching) as information can be considered.
  • Next, the point cloud data acquisition unit 143 executes an information extraction process (external storage device) for extracting information around the moving body 100 from the external storage device 19 based on the current temporary position obtained in the current position estimation process (S3) (S5).
  • The information extracted by this information extraction process is treated as a point cloud, and its score is taken as N.
  • The sensor processing unit 14 executes a memory reference process for extracting from the memory 16 the maximum score (maximum processable score) recorded in the maximum score recording process (S2) and the sensor acquisition range recorded in the sensor acquisition range recording process (S1) (S6).
  • the maximum number of points that can be processed is Nmax.
  • the data score adjustment unit 145 executes a score confirmation process for comparing the score N of the point cloud with the maximum processable score Nmax (S7).
  • The data score adjustment unit 145 proceeds to the score increasing process of step S8 when N < Nmax, and proceeds to the score reduction process of step S9 when N > Nmax.
  • The data score adjustment unit 145 increases the score N to the maximum processable score Nmax when N < Nmax (S8). The details of this score increasing process will be described later with reference to FIG. 8.
  • The data score adjustment unit 145 reduces the score N to the maximum processable score Nmax when N > Nmax (S9). The details of this score reduction process will be described later with reference to FIG. 7.
  • The matching unit 146 matches the information (point cloud data) extracted by the sensor 12 in the information extraction process (sensor) of step S4 with the information (point cloud data) extracted from the external storage device 19 in the information extraction process (external storage device) of step S5 (S10).
  • The obstacle detection unit 147 executes an obstacle detection process for detecting the three-dimensional shape and position of obstacles (other vehicles, pedestrians, and other obstacles to movement) around the moving body 100 (S11).
  • an obstacle is detected based on the information of the driving environment acquired by the sensor 12.
  • the obstacle detection unit 147 detects an obstacle by using an image processing technique or a deep learning technique.
  • detection may be performed based on the information received by the signal receiving unit 2.
  • an obstacle is detected by a surveillance camera installed in the traveling environment of the mobile body 100, and the detection result and the position of the surveillance camera with respect to the mobile body 100 are transmitted to the signal receiving unit 2.
  • the matching unit 146 executes the matching process in step S10, and then executes the unmatchable area calculation process for calculating the information and the area that could not be matched due to an obstacle or the like (S12). The details of this unmatchable region calculation process will be described later with reference to FIGS. 5 and 6.
  • The matching unit 146 executes an unmatchable area recording process for recording in the memory 16 the unmatchable area calculated in the unmatchable area calculation process of step S12 (S13).
  • the current position correction unit 148 executes the current position correction process for correcting the current position tentatively estimated by the current position estimation process in step S3 using the result of the matching process in step S10 (S14).
  • the sensor processing unit 14 determines whether or not to end a series of processes in this flowchart (S15).
  • the determination criteria are, for example, the number of predetermined position estimations, the mileage of the moving body 100, the current position of the moving body 100, and the end command received by the signal receiving unit 2.
  • When the end determination is made (YES in S15), the sensor processing unit 14 ends the series of processes in this flowchart. When the end determination is not made (NO in S15), the sensor processing unit 14 returns to the current position estimation process of step S3.
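  • The skeleton below roughly mirrors one pass through steps S3 to S14 under heavy simplification: every unit is replaced by a stub, all names and numeric values are invented for the example, the obstacle and unmatchable-area steps are omitted, and the matching step is a crude centroid alignment rather than the map matching of the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MAX = 600                                        # stand-in for the recorded maximum score (S2/S6)

def estimate_current_position():                   # S3: temporary position (stub for GPS/odometry)
    return np.array([10.0, 0.0, 0.0])

def extract_sensor_points(pose):                   # S4: stub for camera / laser point extraction
    return rng.normal(scale=0.05, size=(500, 3)) + pose + np.array([0.2, 0.0, 0.0])

def extract_map_points(pose):                      # S5: stub for the external storage query
    return rng.normal(scale=0.05, size=(900, 3)) + pose

def thin_to_nmax(points, n_max):                   # S7-S9: keep at most n_max map points
    if len(points) > n_max:
        points = points[rng.choice(len(points), n_max, replace=False)]
    return points

def match(sensor_pts, map_pts):                    # S10: crude centroid alignment in place of ICP
    return map_pts.mean(axis=0) - sensor_pts.mean(axis=0)

def one_cycle():                                   # one pass of S3 to S14
    pose = estimate_current_position()
    sensor_pts = extract_sensor_points(pose)
    map_pts = thin_to_nmax(extract_map_points(pose), N_MAX)
    deviation = match(sensor_pts, map_pts)         # S10 matching result (amount of deviation)
    return pose + deviation                        # S14: corrected current position

print(one_cycle())
```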
  • FIG. 4 is a graph showing a method for calculating the maximum score in the maximum score recording process in step S2.
  • the function F400 is a function representing the relationship between the score of the point cloud and the processing time (ms) at the score.
  • the calculation speed of the information processing apparatus 13 depends on the score extracted by the information extraction process (sensor) in step S4 and the score extracted by the information extraction process (external storage device) in step S5.
  • To obtain the function F400, the matching process of step S10 is first executed with a plurality of different scores, the time required for each score (hereinafter referred to as the "processing time") is measured, and the combinations 401 (plot data) of each score and its processing time are obtained.
  • a polynomial or a spline is applied to the plurality of obtained combinations 401 to perform interpolation processing.
  • the maximum processable score Nmax403 is a score obtained when the maximum processing time Tmax402 is substituted into the function F400. In FIG. 4, the function F is obtained and stored in advance, but the present invention is not limited to this example. For example, a reference table in which the relationship between the score of the point cloud and the processing time at that score is registered may be created, and the processing time may be obtained by interpolation for the score not in the reference table.
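  • A sketch of how the function F400 and the maximum processable score Nmax403 might be computed from trial measurements is shown below. The measured pairs, the quadratic fit, and the Tmax value are all invented for illustration; the embodiment only requires some interpolation (polynomial, spline, or reference table).

```python
import numpy as np

# Illustrative (score, processing time) pairs 401 measured from trial runs of the matching process.
scores   = np.array([1000, 2000, 5000, 10000, 20000])
times_ms = np.array([3.0, 6.5, 17.0, 36.0, 80.0])

# Interpolate the pairs with a polynomial to obtain the function F400 (score -> processing time).
F = np.poly1d(np.polyfit(scores, times_ms, deg=2))

# Nmax403 is the largest score whose predicted processing time stays within Tmax402.
Tmax_ms = 50.0
candidates = np.arange(scores.min(), scores.max() + 1)
Nmax = int(candidates[F(candidates) <= Tmax_ms].max())
print(Nmax, F(Nmax))
```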
  • FIG. 5 is a diagram showing an example (without obstacles) of the non-matching area calculation process and the non-matchable area recording process.
  • the alternate long and short dash line indicates the range in which the sensor 12 installed in the moving body 100 can acquire information (for example, the shooting range (angle of view) of the camera).
  • the information acquisition range of the sensor 12 is shown in a state where the moving body 100, the landmark 500a, and the landmark 500b are overlooked.
  • the landmark 500a and the landmark 500b are in the traveling environment (surroundings) of the moving body 100.
  • the fact that the object is in the traveling environment can be rephrased as the object is included in the information acquisition range of the sensor 12.
  • the information 501 is information (for example, a point cloud) extracted from the landmarks 500a and 500b by the information extraction process (sensor) (S4 in FIG. 3) at the current position (provisional) of the moving body 100.
  • the information 502 is information on the traveling environment (for example, a point cloud) extracted from the external storage device 19 by the information extraction process (external storage device) (S5) based on the current position (provisional) of the moving body 100.
  • the matching result 503 is a result obtained by executing a matching process (S10) using the information 501 acquired by the sensor 12 and the information 502 extracted from the external storage device 19.
  • the matching of the information 501 and the information 502 can be performed well. Therefore, there is no unmatchable area.
  • FIG. 6 is a diagram showing an example (with obstacles) of the non-matching area calculation process and the non-matchable area recording process.
  • In FIG. 6, the information of the landmark 500b cannot be extracted due to the influence of an obstacle 610 (for example, a passerby) existing within the information acquisition range of the sensor 12 indicated by the alternate long and short dash line, and the information extracted by the information extraction process (sensor) (S4) is the information 501b.
  • the information 501b includes information obtained from the landmark 500a and information on the obstacle 610.
  • When the matching process (S10) is performed using the information 501b acquired by the sensor 12 and the information 502 extracted from the external storage device 19, the matching result 503b is output.
  • the matching unit 146 represents a plurality of three-dimensional positions of the information that could not be matched in the non-matching region calculation process (S12) by the three-dimensional region 504b (non-matching region) shown by the alternate long and short dash line.
  • The matching unit 146 records in the memory 16, in the unmatchable area recording process (S13), information on the position (for example, the distance from the moving body 100 or the absolute position), length (depth), height (H), and width (W) of the three-dimensional region 504b that could not be matched.
  • the three-dimensional region 504b has been described as a cube, but the three-dimensional region 504b may be represented by a sphere or a cylinder.
  • the three-dimensional region 504b may have any three-dimensional shape as long as it contains information that could not be matched.
  • the three-dimensional region 504b may be obtained by using the obstacle detection process (S11) by the obstacle detection unit 147.
  • the three-dimensional shape and position of the three-dimensional region 504b are set based on the shape and position of the obstacle 610 output from the obstacle detection process. Specific examples of the unmatchable region calculation process (S12) by the obstacle detection process will be described later with reference to FIGS. 10 and 11.
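  • A minimal way to derive such a region from the detected obstacle points is an axis-aligned bounding box, as sketched below; the axis-to-dimension assignment, the function name, and the sample points are assumptions made for the example.

```python
import numpy as np

def unmatchable_region(obstacle_points):
    """Axis-aligned box (center, depth L, height H, width W) enclosing the obstacle's points."""
    pts = np.asarray(obstacle_points)          # N x 3 points (x, y, z) in the sensor frame
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    center = (mins + maxs) / 2.0
    W, H, L = maxs - mins                      # x -> width, y -> height, z -> depth (assumed axes)
    return center, L, H, W

# Example with three points belonging to a detected obstacle.
print(unmatchable_region([[1.0, 0.0, 4.0], [1.5, 1.7, 4.2], [0.8, 0.2, 4.5]]))
```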
  • The unmatchable area calculation process (S12) was described above using a moving object (the obstacle 610), but the matching process (S10) may also fail due to causes other than obstacles.
  • For example, the environment may change due to construction work or the season, so the environment may differ between the time when the information was registered in the external storage device 19 and the time of matching. In that case, when the information extracted by the information extraction process (sensor) (S4) and the information extracted by the information extraction process (external storage device) (S5) are matched in the matching process (S10), a region that cannot be matched appears. As described above, such a region that could not be matched can also be detected by the present invention.
  • FIG. 7 is a diagram showing details of the score reduction process by the data score adjusting unit 145.
  • the landmarks 701a, 701b, and 701c are landmarks in the traveling environment (surroundings) of the moving body 100, and the information (point cloud data) of each landmark is registered in the external storage device 19.
  • Information 702 is information extracted from the external storage device 19 based on the current position of the mobile body 100.
  • the information 702 is used as a point cloud, and the score N of the point cloud included in the information 702 is assumed to be larger than the maximum processable score Nmax403 (see FIG. 4) (N> Nmax).
  • Since the score N of the information 702 is larger than the maximum processable score Nmax403, the processing cannot be completed within the maximum processing time Tmax402. Therefore, it is necessary to reduce the score N of the information 702 to the maximum processable score Nmax403.
  • the score is randomly reduced, or the score is reduced by using voxels. Further, only the information that can be matched at the current position of the moving body 100 may be left, and other information may be reduced. Finally, any method may be used as long as the score N becomes smaller than the maximum processable score Nmax403.
  • the score of the point cloud of the landmarks 701a, 701b, 701c included in the information 702 is reduced.
  • FIG. 8 is a diagram showing details of the score increasing process by the data score adjusting unit 145.
  • the information 702b is information extracted from the external storage device 19 based on the current position of the mobile body 100.
  • the information 702b is used as a point cloud, and the score Nb of the point cloud included in the information 702b is assumed to be larger than the maximum processable score Nmax403 (see FIG. 4) (Nb> Nmax).
  • The area 704 is an unmatchable area calculated by the unmatchable area calculation process of step S12.
  • the information 702b includes the area 704 calculated by the unmatchable area calculation process in step S12.
  • The information contained in this area 704 is not used when the matching process (S10) is performed. Therefore, the information (point cloud) in the area 704 included in the information 703b is deleted, and the score Nb' is reduced to the score Nc (Nc < Nb'). Since Nc < Nmax, the processing finishes within the maximum processing time Tmax402 when the matching process (S10) is performed.
  • In the score increasing process (S8), points outside the area 704 that have not yet been selected are selected from the information 702b, and the score of the information 703b is increased.
  • Information 705 is the information after the score Nc is increased to the maximum processable score Nmax403. When increasing the point cloud in the score increasing process (S8), the data score adjustment unit 145 adds points based on the same criteria as in the score reduction process (S9), that is, randomly, by voxels, or by searching for matchable points. The shaded point cloud in the information 705 is the point cloud added to the information 703b.
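  • The sketch below illustrates the S8/S9 adjustment around FIG. 7 and FIG. 8: points inside unmatchable areas are dropped first, the remainder is thinned at random when it exceeds Nmax, and topped up from not-yet-selected points when it falls below Nmax. The function name, the purely random criterion, and the sample arrays are illustrative assumptions.

```python
import numpy as np

def adjust_score(selected, pool, n_max, rng=np.random.default_rng(0)):
    """Bring the selected map point cloud to at most n_max points (S8/S9 sketch)."""
    n = len(selected)
    if n > n_max:                                            # S9: thin out, here at random
        return selected[rng.choice(n, n_max, replace=False)]
    if n < n_max and len(pool) > 0:                          # S8: add previously unselected points
        extra = pool[rng.choice(len(pool), min(n_max - n, len(pool)), replace=False)]
        return np.vstack([selected, extra])
    return selected

# Example: 3 remaining points are topped up from a pool of unselected points outside area 704.
selected = np.zeros((3, 3))
pool = np.ones((10, 3))
print(len(adjust_score(selected, pool, n_max=6)))            # -> 6
```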
  • FIG. 9 is a diagram showing a first example of self-position estimation processing by the sensor processing unit 14. For the sake of simplicity, it is assumed that the moving body 100 is not moving.
  • the landmarks 901a and the landmarks 901b are landmarks in the traveling environment (surroundings) of the moving body 100.
  • Information (point cloud data) of each landmark is registered in the external storage device 19.
  • the information 902a is information on the landmarks 901a and the landmarks 901b extracted by the information extraction process (sensor) (S4) at the current temporary position of the moving body 100.
  • information 902a is used as a point cloud.
  • Information 903a is information on the traveling environment of the mobile body 100 extracted from the external storage device 19 by the information extraction process (external storage device) (S5) based on the current temporary position of the mobile body 100.
  • the score of the point cloud included in the information 903a is defined as the score Na.
  • the matching unit 146 refers to the maximum processable score Nmax403 registered in the memory 16 in advance in the memory reference process (S6), and compares the score Na with the maximum processable score Nmax403 in the score confirmation process (S7).
  • In the example of FIG. 9, Na < Nmax.
  • the matching result 904a is a result of matching the information 902a acquired by the sensor 12 in the matching process (S10) with the information 903a extracted from the external storage device 19.
  • the current position correction unit 148 corrects the current position tentatively estimated in the current position correction process (S14) based on the matching result 904a. Further, in the example of FIG. 9, it is assumed that there is no region that could not be matched (matching OK).
  • FIG. 10 is a diagram showing a second example of self-position estimation processing by the sensor processing unit 14, and shows a state at the timing following FIG. 9.
  • the timing interval is, for example, a cycle in which the sensor processing unit 14 executes information processing.
  • the obstacle 1005 is an obstacle within the acquisition range of the sensor 12 installed in the moving body 100.
  • the information 902b is information on the landmark 901a and the obstacle 1005 extracted by the information extraction process (sensor) (S4) at the current temporary position of the moving body 100.
  • Information 903b is information on the traveling environment of the mobile body 100 extracted from the external storage device 19 by the information extraction process (external storage device) (S5) based on the current temporary position of the mobile body 100. It is assumed that the point cloud score Nb included in the information 903b is smaller than the maximum processable score Nmax403 (Nb < Nmax).
  • the matching result 904b is a result of matching the information 902b acquired by the sensor in the matching process (S10) with the information 903b extracted from the external storage device 19.
  • In this example, there is a region that could not be matched due to the influence of the obstacle 1005, and the corresponding region 906b is calculated by the unmatchable region calculation process (S12). Therefore, information such as the shape and position of the region 906b is registered in the memory 16 in the unmatchable area recording process (S13) (matching NG).
  • FIG. 11 is a diagram showing a third example of self-position estimation processing by the sensor processing unit 14, and shows a state at the timing following FIG. 10.
  • the information 902c is information on the landmark 901a and the obstacle 1005 extracted by the information extraction process (sensor) (S4) at the current temporary position of the moving body 100. That is, the information 902c is the same as the information 902b.
  • Information 903c is information on the traveling environment of the mobile body 100 extracted from the external storage device 19 by the information extraction process (external storage device) (S5) based on the current temporary position of the mobile body 100. It is assumed that the score Nc of the point cloud included in the information 903c is smaller than the maximum processable score Nmax403 (Nc < Nmax).
  • the data score adjustment unit 145 refers to the area 906b (non-matching area) registered in the memory 16 in the memory reference process (S6).
  • the matching unit 146 performs the matching process (S10) at the previous timing, and then outputs the area 906b in the matching non-matching area calculation process (S12). Therefore, the data score adjustment unit 145 deletes the information in the area 906b in the score reduction process (S9) before performing the matching process this time.
  • As a result, the score Nc of the information 903c is reduced to the score Nc'. Since Nc' < Nmax, the point cloud outside the region 906b is increased toward the maximum processable score Nmax403, and the matching process is performed. Therefore, as shown by the matching result 904c, the influence of the obstacle 1005 is reduced, and the matching process can be completed accurately within the maximum processing time Tmax402.
  • FIG. 12 is a diagram showing a fourth example of self-position estimation processing by the sensor processing unit 14, and shows a state at the timing following FIG. 11. At this timing, the passerby as the obstacle 1005 is moving from the inside to the outside of the information acquisition range of the sensor 12.
  • The information 902d is the information of the landmarks 901a and 901b extracted by the information extraction process (sensor) (S4) at the current temporary position of the moving body 100. That is, the information 902d is the same as the information 902a in FIG. 9.
  • Information 903d is information on the traveling environment of the mobile body 100 extracted from the external storage device 19 by the information extraction process (external storage device) (S5) based on the current temporary position of the mobile body 100. It is assumed that the score Nd of the point cloud included in the information 903d is smaller than the maximum processable score Nmax403 (Nd < Nmax).
  • Also at this timing, the data score adjustment unit 145 deletes the information in the area 906b (unmatchable area) in the score reduction process (S9) before performing the matching process (S10), and the score Nd of the information 903d is reduced to the score Nd'. Since Nd' < Nmax, the point cloud outside the region 906b is increased toward the maximum processable score Nmax403, and the matching process is performed.
  • However, the information of the landmark 901b in the information 902d does not match the information obtained by deleting the area 906b from the information 903d.
  • FIG. 13 is a diagram showing a fifth example of self-position estimation processing by the sensor processing unit 14, and shows a state at the timing following FIG. 12. The passerby as the obstacle 1005 has moved out of the information acquisition range of the sensor 12.
  • The information 902e is the information of the landmarks 901a and 901b extracted by the information extraction process (sensor) (S4) at the current temporary position of the moving body 100. That is, the information 902e is the same as the information 902d in FIG. 12.
  • The information 903e is information on the traveling environment of the mobile body 100 extracted from the external storage device 19 by the information extraction process (external storage device) (S5) based on the current temporary position of the mobile body 100. It is assumed that the point cloud score Ne included in the information 903e is smaller than the maximum processable score Nmax403 (Ne < Nmax).
  • the area 906e represents an area after the area 906b is reduced.
  • In the score reduction process (S9), the information in the area 906e (unmatchable area) is deleted, and the score Ne of the information 903e is reduced to the score Ne'.
  • The data score adjustment unit 145 takes the area 906e (the score of the point cloud it contains) into account, increases the score of the information 903e to the maximum processable score Nmax403 in the score increasing process (S8), and then the matching unit 146 performs the matching process (S10). After that, the matching result 904e is obtained. As a result, outside the region 906e, matching is possible even at the location where the region 906b used to be.
  • When the area is reduced in the unmatchable area calculation process (S12), it is made smaller in time series by using a constant parameter K (0 < K < 1).
  • The size of the region 906e is set to Lb * K, Hb * K, and Wb * K in the unmatchable region calculation process (S12), where Lb, Hb, and Wb are the dimensions of the region 906b.
  • When the area 906e becomes smaller in time series and no longer contains any point of the information 903e, the area 906e is deleted.
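  • A sketch of this time-series shrinking with the parameter K is shown below; the box representation of the region, the value of K, and the sample data are assumptions for illustration.

```python
import numpy as np

K = 0.8   # shrink factor per cycle, 0 < K < 1 (value chosen only for illustration)

def shrink_region(center, size, map_points):
    """One cycle of region shrinking: the box size is multiplied by K; the region is
    deleted (None is returned) once it no longer contains any map point."""
    size = size * K
    inside = np.all(np.abs(np.asarray(map_points) - center) <= size / 2.0, axis=1)
    return None if not inside.any() else (center, size)

# Example: after enough cycles the region empties out and is removed.
region = (np.array([2.0, 0.0, 5.0]), np.array([1.0, 2.0, 1.0]))
while region is not None:
    region = shrink_region(*region, map_points=[[2.3, 0.1, 5.1]])
```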
  • the shape of the region 906b is not limited to a rectangle.
  • FIG. 13 shows an example in which the obstacle 1005 moves out of the information acquisition range of the sensor 12 as expected, but next, the case where the obstacle 1005 remains within the information acquisition range of the sensor 12 will be described.
  • In that case, the region that could not be matched is detected again by the unmatchable region calculation process (S12), so the region 906e is enlarged again by that process.
  • For example, the area 906e may be increased back to about the size of the area 906b.
  • FIG. 14 is a diagram showing a sixth example of self-position estimation processing by the sensor processing unit 14, and shows a state at the timing following FIG. 13.
  • The information 902f is the information of the landmarks 901a and 901b extracted by the information extraction process (sensor) (S4) at the current temporary position of the moving body 100. That is, the information 902f is the same as the information 902e in FIG. 13.
  • Information 903f is information on the traveling environment of the mobile body 100 extracted from the external storage device 19 by the information extraction process (external storage device) (S5) based on the current temporary position of the mobile body 100. It is assumed that the score Nf of the point cloud included in the information 903f is smaller than the maximum processable score Nmax403 (Nf < Nmax).
  • the area 906e (FIG. 13) excluded from the matching process becomes smaller in time series and completely disappears. Therefore, the information 903f including the landmark 901a and the point cloud corresponding to the landmark 901b and the information 902f acquired by the sensor 12 are matched in the matching process of step S10, and the matching result 904f is output.
  • The result obtained by the matching process may be displayed on the display unit 17 for debugging or user reference. In addition to the result of the matching process, the score at the time of matching, the maximum processable score Nmax403 registered in the memory 16, the result of the unmatchable area calculation process, and the like may also be displayed.
  • FIG. 15 is a diagram illustrating a selection criterion of point cloud data when the data score adjusting unit 145 performs the score reduction process (S9).
  • the upper side of FIG. 15 shows an example of the positional relationship between the sensor 12 and the landmark, and the lower side of FIG. 15 shows a point cloud extracted from the external storage device 19. For the sake of simplicity, it is assumed that the moving body 100 is not moving.
  • Coordinates 1500 are coordinates (x, y, z) based on the sensor 12 installed on the moving body 100.
  • The direction in which the sensor 12 is facing is the z-axis, the direction orthogonal to the z-axis and corresponding to the height direction of the landmarks is the y-axis, and the direction orthogonal to both is the x-axis.
  • the landmark 1501 is a landmark in the traveling environment of the moving body 100.
  • the landmark 1502 is a landmark in the traveling environment of the moving body 100, and is farther from the moving body 100 on the z-axis than the landmark 1501.
  • the positions of the moving body 100, the landmark 1501 and the landmark 1502 in FIG. 15 are fixed with respect to the x-axis.
  • Landmarks 1501 and 1502 are included in the information acquisition range of the sensor 12.
  • the point cloud 1503 is information (point cloud) of the landmarks 1501 and the landmarks 1502 registered in the external storage device 19.
  • The point cloud 1504 is information (a point cloud) on the traveling environment of the moving body 100 extracted from the external storage device 19 by the information extraction process (external storage device) (S5 in FIG. 3) based on the current temporary position of the moving body 100. It is assumed that the score of the point cloud 1504 is Nt and that the score Nt is larger than the maximum processable score Nmax403 (Nt > Nmax).
  • the priority according to the distance will be described as a selection criterion of the point cloud when the score increase process (S8) and the score decrease process (S9) are performed.
  • the point cloud that is likely to match the point cloud registered in the external storage device 19 when extracted from the current position of the moving body 100 by the sensor 12 is preferentially selected.
  • The sensor 12 extracts (acquires) information from the surface (boundary) of a three-dimensional object, so information on the back side of the object cannot be extracted even if it exists. Therefore, in the score reduction process, the point cloud 1505 (indicated by black circles) on the front side (current position side) of the landmark 1501 and the point cloud on the front side of the landmark 1502, which can easily be extracted (referenced) from the current position of the moving body 100, are preferentially selected.
  • In this way, the matching process can be performed accurately and with a low load.
In addition, the accuracy of the sensor 12 installed on the moving body 100 depends on its resolution and calibration. The accuracy of the measured distance deteriorates due to the influence of the reflectance of the object and reflections (noise) from the environment. Since a laser sensor has a limited resolution and information acquisition range, the measurement accuracy for a distant object, or for an object beyond a certain range (distance), deteriorates. In the case of a camera, the measurement accuracy varies greatly depending on the resolution of the image and the algorithm used when creating the parallax image; basically, the farther the object is from the camera, the worse the measurement accuracy. Calibration involves external and internal parameters: the external parameter is the mounting position of the sensor, and the internal parameters include the lens arrangement (distortion) in the sensor, the adjustment parameters of the parallax image, and the like.
For these reasons, information far from the current position of the moving body 100 along the z-axis is deleted in the score reduction process. Specifically, the point cloud 1506 on the surface of the landmark 1502 is deleted, and the matching process is performed with the point cloud 1505 on the surface of the landmark 1501, which is closer to the moving body 100, left in place. By this deletion, the score of the point cloud 1505 becomes smaller than the maximum processable score Nmax 403; if it is necessary to further reduce the score of the point cloud 1505, points farther from the moving body 100 are preferentially deleted from the point cloud 1505 in the same manner as described above.
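As a purely illustrative sketch (not part of the disclosed embodiment), the distance-priority reduction described above could be implemented as follows. The array layout, the function name reduce_points_by_distance, and the use of the z-coordinate as the distance measure are assumptions made only for this example.

import numpy as np

def reduce_points_by_distance(points, n_max):
    """Keep at most n_max points, preferring points closer to the sensor.

    points : (N, 3) array of map points in the sensor coordinate system of
             FIG. 15 (z is the viewing direction of the sensor 12).
    n_max  : maximum processable score Nmax.
    """
    if len(points) <= n_max:
        return points
    order = np.argsort(points[:, 2])      # sort by distance along the z-axis
    return points[order[:n_max]]          # distant points (e.g. point cloud 1506) are dropped

# Example: 10,000 map points trimmed to Nmax = 5,000
pts = np.random.rand(10000, 3) * 50.0
print(reduce_points_by_distance(pts, 5000).shape)    # (5000, 3)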
FIG. 16 is a diagram showing an example of the self-position estimation process using the point cloud selected in FIG. 15. The point cloud 1508 is the point cloud of the landmark 1502 extracted by the information extraction process (sensor) (S4) from the current position of the moving body 100 at timing t0, and the point cloud 1509 is the point cloud of the landmark 1501 extracted by the same process at timing t0.
It is assumed that the score Nt of the point cloud 1504 of the driving environment extracted from the external storage device 19 by the information extraction process (external storage device) (S5) is larger than the maximum processable score Nmax 403, and that the point cloud 1505 and the point cloud 1506, which represent the boundary lines of the landmark 1501 and the landmark 1502, are extracted by the score reduction process (S9). It is further assumed that the total score Nt' of the point cloud 1505 and the point cloud 1506 is still larger than the maximum processable score Nmax 403. In this case, the point cloud 1505 closer to the moving body 100 is preferentially left in the score reduction process, the point cloud 1506 farther from the moving body 100 than the point cloud 1505 is deleted, and the matching process is performed.
At the next timing t1, the obstacle 1601 enters the information acquisition range of the sensor 12, and the point cloud 1508b of the landmark 1502 and the point cloud 1601b of the obstacle 1601 are extracted by the information extraction process (sensor) (S4) from the current position of the moving body 100. Before the information extraction process (external storage device) (S5) extracts the driving environment information from the external storage device 19, the obstacle detection process (S11) detects the obstacle 1601, and the region 1510 (non-matching region) representing the obstacle 1601 is calculated in the unmatchable area calculation process (S12). Next, the information extraction process (external storage device) (S5) extracts the point cloud 1504 of the traveling environment from the external storage device 19. Since the score Nt is larger than the maximum processable score Nmax 403, the score Nt is reduced to the maximum processable score Nmax 403 in the score reduction process (S9). At this time, referring to the region 1510 representing the obstacle 1601 registered in the memory 16, the point cloud 1505 (lower side in FIG. 16) close to the moving body 100 is included in the region 1510, so the point cloud 1505 cannot be selected. Therefore, the point cloud 1506, which represents the boundary line of the more distant landmark 1502, is selected and the matching process is performed.
The lower part of FIG. 16 shows the case where the obstacle 1601 is not within the information acquisition range of the sensor 12 at the next timing t2. The obstacle detection process (S11) is executed, but since the obstacle 1601 has disappeared, there is no output from the obstacle detection process, and therefore no non-matching area is output by the unmatchable area calculation process (S12). The information extraction process (external storage device) (S5) extracts the point cloud 1504 of the driving environment from the external storage device 19, and since the score Nt is larger than the maximum processable score Nmax 403, the score is reduced by the score reduction process (S9). In this score reduction process, as at timing t0, the point cloud 1505 of the landmark 1501 close to the moving body 100 is left, and the point cloud 1506 of the more distant landmark 1502 is deleted.
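The selection behaviour at timings t1 and t2 can be sketched in the same illustrative spirit. The axis-aligned box used here to represent the non-matching region 1510 and the helper names are assumptions; the patent does not prescribe a concrete data structure for the region.

import numpy as np

def select_points(map_points, n_max, blocked_region=None):
    """Select up to n_max map points for matching.

    Points inside a blocked (non-matching) region such as region 1510 are
    skipped; the remaining points are kept nearest-first along the viewing
    direction (z-axis), as in the score reduction process (S9).
    """
    mask = np.ones(len(map_points), dtype=bool)
    if blocked_region is not None:
        lo, hi = blocked_region                                   # axis-aligned box corners
        inside = np.all((map_points >= lo) & (map_points <= hi), axis=1)
        mask &= ~inside                                           # points hidden by the obstacle
    candidates = map_points[mask]
    order = np.argsort(candidates[:, 2])                          # nearer points first
    return candidates[order[:n_max]]

# Timing t1: the near points (around landmark 1501) fall inside the blocked
# box, so more distant points (landmark 1502) are selected instead.
region_1510 = (np.array([-2.0, 0.0, 0.0]), np.array([2.0, 3.0, 10.0]))
pts = np.random.rand(8000, 3) * np.array([4.0, 3.0, 40.0]) - np.array([2.0, 0.0, 0.0])
print(select_points(pts, 5000, region_1510).shape)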
As described above, the self-position estimation device (position estimation device 1) according to the embodiment is a self-position estimation device that estimates its own position by comparing the driving environment information collected by the sensor (sensor 12) with map information. This self-position estimation device includes: a current position estimation unit (current position estimation unit 142) that temporarily estimates its own current position; a point cloud data selection unit (filter unit 144) that, among the point cloud data included in the map information, calculates the range in which the sensor can acquire information based on the current temporary position estimated by the current position estimation unit and selects the point cloud data within the acquirable range; a score adjustment unit (data score adjustment unit 145) that compares the score of the point cloud data selected from the map information with the maximum processable score, which is the maximum score that can be processed in the maximum allowable processing time when estimating the position, and that, based on the comparison result, adjusts the score of the selected point cloud data within a range equal to or less than the maximum processable score; a matching unit (matching unit 146) that matches the adjusted point cloud data with the point cloud data of the information acquired by the sensor; and a current position correction unit (current position correction unit 148) that corrects its own current temporary position estimated by the current position estimation unit based on the matching result of the matching unit.
With this configuration, the score of the point cloud data selected from the map information can be dynamically adjusted within a range equal to or less than the maximum processable score corresponding to the maximum processing time, so that position estimation can be performed with the accuracy of position estimation and the processing load optimized.
Further, when the score of the selected point cloud data is larger than the maximum processable score Nmax, the score adjustment unit (data score adjustment unit 145) is configured to reduce that score to the maximum processable score Nmax. By reducing the score of the selected point cloud data to the maximum processable score Nmax, the processing load can be reduced.
Further, when the score of the selected point cloud data is smaller than the maximum processable score Nmax, the score adjustment unit is configured to increase that score up to the maximum processable score Nmax. By increasing the score of the selected point cloud data up to the maximum processable score Nmax, the accuracy of position estimation can be improved.
Further, the matching unit stores the point cloud data that could not be matched (for example, the information of the area 906b), and is configured not to select the stored point cloud data from the point cloud data included in the map information at the time of the next position estimation. Since the point cloud data that could not be matched is not selected from the point cloud data included in the map information at the next position estimation, the influence of the point cloud data that could not be matched is eliminated and the robustness can be improved.
Further, when reducing the score of the selected point cloud data to the maximum processable score Nmax, the score adjustment unit (data score adjustment unit 145) is configured to preferentially delete, from the selected point cloud data, the point cloud data farther from the current temporary position (for example, the point cloud 1506). Conversely, when increasing the score of the selected point cloud data up to the maximum processable score Nmax, the score adjustment unit is configured to preferentially add, from the selected point cloud data, the point cloud data closer to the current temporary position (for example, the point cloud data 1505).
Further, the maximum processable score Nmax is a score calculated based on the correspondence relationship (for example, the function F) between different scores obtained in advance and the processing time at each of those scores. By obtaining the relationship between the score and the processing time in advance in this way, the maximum processable score Nmax, which changes in time series, can be calculated at any time.
Further, the matching unit (matching unit 146) is configured to calculate and store the position and shape of the region (for example, the region 906b) including the point cloud data that could not be matched. By storing the position and shape of the region including the point cloud data that could not be matched in this way, the corresponding point cloud data can be excluded in the next matching.
Further, according to the matching results obtained in time series, the matching unit is configured to adjust and store, in time series, the size of the region that could not be matched with respect to the map information and the score of the point cloud data in that region (for example, the region is reduced from the area 906b to the area 906e). By adjusting in time series the region or the point cloud data that could not be matched and reflecting it in the subsequent matching process, the continuity of the current position in time series is maintained.
The present invention is not limited to the above-described embodiment, and it goes without saying that various other application examples and modifications can be adopted as long as they do not deviate from the gist of the present invention described in the claims. For example, the above-described embodiment describes the configurations of the position estimation device and the moving body in detail and concretely in order to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to one including all the components described. It is also possible to add, replace, or delete other components with respect to a part of the configuration of the embodiment.
Each of the above configurations, functions, processing units, and the like may be realized by hardware, for example, by designing a part or all of them as an integrated circuit such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit).
A plurality of processes may be executed in parallel, or the processing order may be changed, as long as the processing results are not affected.
Control lines and information lines are shown as considered necessary for explanation, and not all control lines and information lines in the product are necessarily shown. In practice, it can be considered that almost all components are interconnected.

Abstract

An own-position estimation device according to one embodiment of the present invention is provided with: a current position estimation unit for provisionally estimating the current own-position; a point group data selection unit for calculating a range in which information can be acquired by a sensor of a moving body on the basis of the provisional own-position among point group data included in map information, and selecting point group data that is in the acquirable range; a point quantity adjustment unit for comparing the number of points of the point group data selected from the map information and a maximum processable number of points, which is the maximum number of points that can be processed in the maximum allowable processing time when the position is estimated, and adjusting the number of points of the selected point group data so as to be within a range that is no greater than the maximum processable number of points, on the basis of the comparison result; and a matching unit for matching the adjusted point group data and the point group data in the information acquired by the sensor.

Description

Self-position estimation device and program
The present invention relates to a self-position estimation device and a program, and particularly to a technique for estimating the position of a moving body such as a robot or an automobile.
Autonomous driving technology and driving support technology have been developed in which a moving body such as a robot or an automobile collects information on its surroundings, estimates the current position and traveling state of the moving body, and controls the traveling of the moving body. The reliability of current position estimation depends on the accuracy and amount of the collected surrounding information. On the other hand, when the amount of collected information is large, the processing load becomes high and the reliability of position estimation becomes low.
For example, Patent Document 1 discloses a moving body position estimation device and a moving body position estimation method capable of reducing the processing load required for estimating the position of a moving body. In Patent Document 1, a prescribed number of predetermined feature points satisfying a predetermined criterion is secured and only that prescribed number of feature points is tracked, so that the position of the moving body can be estimated with a relatively small processing load.
International Publication No. 15/049717
However, as in Patent Document 1, when the number of predetermined feature points satisfying the predetermined criterion is large, the processing load cannot be reduced. Conversely, when there are few predetermined feature points satisfying the predetermined criterion, highly accurate position estimation cannot be performed.
In view of the above situation, a method that enables position estimation in which the accuracy of position estimation and the processing load are optimized has been desired.
In order to solve the above problems, a self-position estimation device according to one aspect of the present invention is a self-position estimation device that estimates its own position by comparing driving environment information collected by a sensor with map information, and adopts the following configuration.
The self-position estimation device includes: a current position estimation unit that temporarily estimates its own current position; a point cloud data selection unit that, among the point cloud data included in the map information, calculates the range in which the sensor can acquire information based on the current temporary position estimated by the current position estimation unit and selects the point cloud data within the acquirable range; a score adjustment unit that compares the score of the point cloud data selected from the map information with the maximum processable score, which is the maximum score that can be processed in the maximum allowable processing time when estimating the position, and adjusts the score of the selected point cloud data within a range equal to or less than the maximum processable score based on the comparison result; a matching unit that matches the adjusted point cloud data with the point cloud data of the information acquired by the sensor; and a current position correction unit that corrects its own current temporary position estimated by the current position estimation unit based on the matching result of the matching unit.
According to at least one aspect of the present invention, position estimation in which the accuracy of position estimation and the processing load are optimized becomes possible.
Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
FIG. 1 is an overall configuration diagram of a position estimation device mounted on a moving body according to an embodiment of the present invention.
FIG. 2 is a block diagram showing an example of the internal configuration of the sensor processing unit of FIG. 1.
FIG. 3 is a flowchart showing an example of an information processing procedure executed by the sensor processing unit.
FIG. 4 is a graph showing details of the maximum score calculation in the maximum score recording process of step S2.
FIG. 5 is a diagram showing an example (without an obstacle) of the unmatchable area calculation process of step S12 and the unmatchable area recording process of step S13.
FIG. 6 is a diagram showing an example (with an obstacle) of the unmatchable area calculation process of step S12 and the unmatchable area recording process of step S13.
FIG. 7 is a diagram showing details of the score reduction process of step S9.
FIG. 8 is a diagram showing details of the score increase process of step S8.
FIG. 9 is a diagram showing a first example of the self-position estimation process according to an embodiment of the present invention.
FIG. 10 is a diagram showing a second example of the self-position estimation process according to an embodiment of the present invention.
FIG. 11 is a diagram showing a third example of the self-position estimation process according to an embodiment of the present invention.
FIG. 12 is a diagram showing a fourth example of the self-position estimation process according to an embodiment of the present invention.
FIG. 13 is a diagram showing a fifth example of the self-position estimation process according to an embodiment of the present invention.
FIG. 14 is a diagram showing a sixth example of the self-position estimation process according to an embodiment of the present invention.
FIG. 15 is a diagram explaining the selection criteria of point cloud data when score reduction is performed.
FIG. 16 is a diagram showing an example of the self-position estimation process using the point cloud selected in FIG. 15.
Hereinafter, an example of a mode for carrying out the present invention will be described with reference to the attached drawings. In the present specification and the accompanying drawings, components having substantially the same function or configuration are designated by the same reference numerals, and duplicate description is omitted.
In the present specification, a "three-dimensional point" represents coordinates in space located on the surface of or inside an object having a shape, and includes points obtained by a sensor, points included in a map, and the like. A plurality of three-dimensional points is called a "point cloud". The three-dimensional point data (point cloud data) may include color information.
[Overall configuration of the position estimation device]
FIG. 1 is a configuration diagram of a position estimation device according to an embodiment of the present invention.
The position estimation device 1 (an example of a self-position estimation device) is mounted on a moving body 100 such as an automobile or a robot. The position estimation device 1 can communicate with an external storage device 19 that exists outside the position estimation device 1. This communication means is desirably wireless.
The position estimation device 1 has a signal reception unit 2, one or more sensors 12a, 12b, ..., 12n, and an information processing device 13. These components are interconnected by a bus 18. In this specification, the sensors 12a, 12b, ..., 12n are referred to as the sensor 12 when it is not necessary to distinguish them.
The information processing device 13 is, for example, a general computer, and includes a sensor processing unit 14 that processes the information acquired by the sensor 12, a control unit 15 (for example, a CPU) that performs processing based on the sensor processing result, a memory 16, and a display unit 17 such as a display. In the information processing device 13, each function according to the present embodiment is realized by the sensor processing unit 14 and the control unit 15 reading and executing a computer program recorded in the memory 16 or a storage medium (not shown).
The signal reception unit 2 receives signals from the outside. For example, the signal reception unit 2 is a receiver of the Global Positioning System (GPS), which estimates the current position in world absolute coordinates. The signal reception unit 2 may also be a receiver of RTK-GPS (Real Time Kinematic GPS), which estimates the current position more accurately than GPS, or a receiver of a quasi-zenith satellite system. The signal reception unit 2 may also receive signals from a beacon fixed at a known position, or signals from a sensor that estimates the position in relative coordinates, such as a wheel encoder, an inertial measurement unit (IMU), or a gyro. Further, the signal reception unit 2 may receive information such as lanes, signs, traffic conditions, and the shapes, sizes, and heights of three-dimensional objects in the traveling environment. Ultimately, any method may be used as long as it can be used for the current position estimation, control, and recognition of the moving body 100 on which the position estimation device 1 is mounted.
The sensor 12 is, for example, a still camera or a video camera. The sensor 12 may be a monocular camera or a compound-eye camera, or may be a laser sensor. Ultimately, any sensor may be used as long as it can extract shape information of the traveling environment (the surroundings of the moving body 100).
The information processing device 13 processes the information acquired by the sensor 12 to calculate the position or movement amount of the moving body 100. The information processing device 13 may perform display according to the calculated position or movement amount, and may also output signals related to the control of the moving body 100.
Next, the details of the sensor 12 will be described.
The sensor 12a is installed, for example, at the front of the moving body 100. The sensor 12a has a lens and acquires distant view information in front of the moving body 100. In this case, features such as three-dimensional objects or landmarks (predetermined stationary objects) for position estimation may be extracted from the information acquired from the distant view.
The other sensors 12b, ..., 12n are installed at positions different from the sensor 12a and image directions or regions different from those of the sensor 12a. The sensor 12b may be installed, for example, at the rear of the moving body 100 facing downward. In this case, the sensor 12b acquires near view information behind the moving body 100. The near view information may be the road surface around the moving body 100, and white lines around the moving body 100, road surface paint, or the like may be detected.
When the sensor 12 is a monocular camera and the road surface is flat, the relationship between a pixel position on the image and the actual ground position (x, y) is constant, so the distance from the sensor 12 to a feature point can be calculated geometrically. When the road surface is not flat, the distance from the moving body 100 to the object corresponding to a feature point can be estimated based on the time-series movement amount of the feature point on the image and the movement amount of the moving body 100 received from the signal reception unit 2. When the sensor 12 is a stereo camera, the distance to a feature point on the image can be measured more accurately. When the sensor 12 is a laser sensor, distant information can be acquired more accurately.
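As one hedged illustration of the flat-road case, the ground position of an image pixel can be obtained by intersecting the pixel's viewing ray with the road plane. The sketch below assumes an ideal pinhole camera mounted at height h with its optical axis parallel to the road; the parameter names (fx, fy, cx, cy, h) are introduced only for this example and are not parameters defined in this specification.

def pixel_to_ground(u, v, fx, fy, cx, cy, h):
    """Project image pixel (u, v) onto a flat road plane.

    Assumes a pinhole camera at height h above the road, with its optical
    axis parallel to the road surface. Returns (x, z): lateral offset and
    forward distance in the camera frame, or None for pixels above the
    horizon (rays that never hit the road).
    """
    dy = (v - cy) / fy          # normalized downward component of the ray
    if dy <= 0:
        return None             # the ray does not intersect the road plane
    z = h / dy                  # forward distance where the ray hits the road
    x = (u - cx) / fx * z       # lateral offset at that distance
    return x, z

# Example: VGA image, focal length 500 px, camera 1.2 m above the road
print(pixel_to_ground(400, 300, 500, 500, 320, 240, 1.2))   # roughly (1.6, 10.0)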
In the following description, examples using a monocular camera, a stereo camera, or a laser sensor are described, but any other sensor (such as a camera having a wide-angle lens or a TOF camera) may be used as long as the distance to surrounding three-dimensional objects can be calculated.
Further, it is desirable that the sensors 12a, 12b, ..., 12n be arranged so as not to be affected by environmental disturbances such as rain and sunlight at the same time. As a result, for example, even if raindrops adhere to the lens of the sensor 12a during rainfall, raindrops are unlikely to adhere to the lens of the sensor 12b, which faces backward or downward with respect to the traveling direction. Therefore, even if the information acquired by the sensor 12a is unclear due to the influence of raindrops, the information acquired by the sensor 12b is less likely to be affected by the raindrops. Similarly, even if the information of the sensor 12a is unclear due to the influence of sunlight, the information acquired by the sensor 12b may be clear.
Further, the sensors 12a to 12n may acquire information under mutually different acquisition conditions (aperture value, white balance, period, etc.). For example, by installing a sensor whose parameters are adjusted for bright places and a sensor whose parameters are adjusted for dark places, it may be possible to capture images regardless of the brightness of the environment.
The sensors 12a to 12n acquire information when receiving an acquisition start command from the control unit 15 or at regular time intervals. The acquired information data is stored in the memory 16 together with the acquisition time.
The memory 16 includes the main storage device (main memory) of the information processing device 13 and auxiliary storage devices such as storage. Information such as the computer programs, tables, and files that realize each function according to the present embodiment is recorded in the memory 16. As the memory 16, a semiconductor memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card or an optical disk can be used. The sensor processing unit 14 performs various kinds of information processing based on the information data and acquisition times stored in the memory 16. In this information processing, for example, intermediate information is created and stored in the memory 16. The intermediate information may be used not only for processing by the sensor processing unit 14 but also for judgment and processing by the control unit 15 and the like.
The bus 18 can be configured by IEBUS (Inter Equipment Bus), LIN (Local Interconnect Network), CAN (Controller Area Network), or the like.
The external storage device 19 stores information on the environment in which the moving body 100 travels, including map information. The information stored in the external storage device 19 is, for example, the shapes and positions of stationary objects (trees, buildings, roads, lanes, signals, signs, road surface paint, roadsides, etc.) in the traveling environment. Each piece of information in the external storage device 19 may be expressed by a mathematical formula; for example, line information may be expressed not by a plurality of points but only by the slope and intercept of the line. The information in the external storage device 19 may also be represented by a point cloud without distinguishing, for example, the types of the stationary objects. The point cloud may be represented in 3D (x, y, z), 4D (x, y, z, color), or the like. Ultimately, the information in the external storage device 19 may be stored in any format as long as the traveling environment (surrounding information) can be detected from the current position of the moving body 100 and map matching can be performed.
When receiving an acquisition start command from the control unit 15, the external storage device 19 sends information to the memory 16. When the external storage device 19 is installed in the moving body 100, the external storage device 19 transmits and receives information to and from the memory 16 via the bus 18. On the other hand, when the external storage device 19 is not installed in the moving body 100, information is transmitted and received between the position estimation device 1 and the external storage device 19 by the signal reception unit 2. This communication can be performed by, for example, a LAN (Local Area Network), a WAN (Wide Area Network), or the like.
The sensor processing unit 14 processes the information acquired by the sensor 12. For example, the sensor processing unit 14 processes the information acquired by the sensor 12 while the moving body 100 is traveling to detect obstacles, or to recognize predetermined landmarks.
The sensor processing unit 14 acquires the information of the external storage device 19 (for example, point cloud data) on the basis of the information acquired by the sensor 12, using the current position of the moving body 100 stored in the external storage device 19 and the memory 16 and the internal parameters of the sensor 12.
The sensor processing unit 14 identifies a plurality of position candidates of the moving body based on the information acquired by the sensor 12, and estimates the position of the moving body 100 based on the plurality of position candidates and the moving speed of the moving body 100.
The sensor processing unit 14 may also process the information acquired by the sensor 12 while the moving body 100 is traveling to estimate the position of the moving body 100. For example, the sensor processing unit 14 calculates the movement amount of the moving body 100 from the information acquired by the sensor 12 in time series, and adds the movement amount to the past position to estimate the current position. The sensor processing unit 14 may extract features from each piece of information acquired in time series, extract the same features from subsequent information, and calculate the movement amount of the moving body 100 by tracking those features.
Further, the sensor processing unit 14 may perform different tasks using the information acquired by each of the sensors 12a to 12n. For example, the position of the moving body 100 is estimated based on the information acquired by the sensor 12a and the sensor 12b, and obstacle detection is performed with two other sensors (not shown). Ultimately, it suffices if the results based on the information obtained by the respective sensors 12a to 12n can be fused and the control unit 15 can control the moving body 100. When the CPU of the control unit 15 can process only in a single thread, the information obtained by the sensors 12a to 12n is processed in the order of the sensors 12a to 12n; when the CPU of the control unit 15 can process in multiple threads, the information obtained by the sensors 12a to 12n is processed simultaneously.
The control unit 15 outputs commands regarding the moving speed to the moving body 100 based on the results of the information processing of the sensor processing unit 14. For example, the control unit 15 may output a command to increase, decrease, or maintain the moving speed of the moving body 100 according to the resolution of three-dimensional objects in the information, the number of outliers among the features in the information, the type of information processing, and the like.
[Internal configuration of the sensor processing unit]
Next, the configuration of the sensor processing unit 14 installed in the moving body 100 will be described.
FIG. 2 is a block diagram showing an example of the internal configuration of the sensor processing unit 14.
The sensor processing unit 14 has an input/output unit 141, a current position estimation unit 142, a point cloud data acquisition unit 143, a filter unit 144, a data score adjustment unit 145, a matching unit 146, an obstacle detection unit 147, and a current position correction unit 148.
The input/output unit 141 inputs and outputs the information necessary for the sensor processing unit 14 to correct the position of the moving body 100. For example, the input/output unit 141 reads out the traveling environment information acquired by the sensors 12a, 12b, ..., 12n and stored in the memory 16. The input/output unit 141 also acquires information from the external storage device 19 in the information extraction process (external storage device) (S5 in FIG. 3) registered in the memory 16. When the amount of information acquired from the external storage device 19 is too large to be stored in the memory 16, only the information close to the current position of the moving body 100 estimated by the current position estimation process (S3 in FIG. 3) may be acquired, or only the information contained in a certain area as seen from the estimated current position of the moving body 100 may be acquired.
The current position estimation unit 142 tentatively estimates its own current position. The position estimated by the current position estimation unit 142 is not limited to a position in absolute coordinates (latitude, longitude) and may be a position in relative coordinates.
The point cloud data acquisition unit 143 (an example of a point cloud data selection unit) acquires three-dimensional points based on the information acquired by the input/output unit 141 and temporarily records them. When the sensors 12a, 12b, ..., 12n are cameras and the information acquired by the input/output unit 141 is an image, the two-dimensional image is converted into three-dimensional points. The conversion method may be, for example, to convert each pixel of the image into a three-dimensional point based on a parallax image. When the sensors 12a to 12n are monocular cameras, technologies such as SFM (Structure From Motion), flat surface models, ORB-SLAM, and LSD-SLAM may be used. The point cloud data acquisition unit 143 calculates the shape of the traveling environment of the moving body 100 using these SFM and SLAM technologies. When the sensors 12a to 12n are laser sensors, the information of the traveling environment is extracted directly as three-dimensional points, so no further processing is required.
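A minimal sketch of the pixel-to-three-dimensional-point conversion via a parallax (disparity) image is shown below. It assumes an ideal rectified stereo pair with focal length f, principal point (cx, cy), and baseline b; these symbols and the function name are assumptions made for the example.

import numpy as np

def disparity_to_points(disparity, f, cx, cy, b):
    """Convert a disparity image (in pixels) to an (N, 3) array of 3D points.

    Standard rectified-stereo relations: Z = f * b / d, X = (u - cx) * Z / f,
    Y = (v - cy) * Z / f.  Pixels with non-positive disparity are skipped.
    """
    v, u = np.indices(disparity.shape)
    valid = disparity > 0
    z = f * b / disparity[valid]
    x = (u[valid] - cx) * z / f
    y = (v[valid] - cy) * z / f
    return np.stack([x, y, z], axis=1)

# Example with a synthetic 4x4 disparity image, f = 500 px, baseline 0.12 m
d = np.full((4, 4), 8.0)
print(disparity_to_points(d, 500.0, 2.0, 2.0, 0.12).shape)   # (16, 3)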
The filter unit 144 filters the three-dimensional points acquired by the point cloud data acquisition unit 143. In order to reduce the processing time, the filter unit 144 may calculate the average value of the three-dimensional points within a certain area and narrow them down to a single three-dimensional point (voxel). In addition, three-dimensional points in areas of the traveling environment with few features, such as corners and edges, may be deleted.
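The voxel-style thinning mentioned above (averaging the points that fall in one cell into a single point) can be sketched as follows; the voxel size and the dictionary-based grouping are implementation choices for this example, not requirements of the embodiment.

import numpy as np
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Replace all points inside each voxel by their centroid."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))   # voxel index of this point
        buckets[key].append(p)
    return np.array([np.mean(ps, axis=0) for ps in buckets.values()])

# Example: thin a random cloud with 0.2 m voxels
pts = np.random.rand(1000, 3) * 5.0
print(voxel_downsample(pts, 0.2).shape)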
The filter unit 144 also filters the visible three-dimensional points of the external storage device 19 at the temporary position of the moving body 100 estimated by the current position estimation process (step S3 in FIG. 3). For example, among the information (point cloud data) included in the map information stored in the external storage device 19, the filter unit 144 calculates the range in which the sensor 12 can acquire information on the traveling environment based on the current temporary position estimated by the current position estimation unit 142, and selects the point cloud data within that acquirable range from the external storage device 19. The filter unit 144 is an example of the point cloud data selection unit.
In the matching process described later (step S10 in FIG. 3), if the number or shape of the information acquired by the sensors 12a, 12b, ..., 12n and of the information acquired from the external storage device 19 differ greatly, matching (collation) is more likely to fail. Therefore, by extracting the visible three-dimensional points of the external storage device 19 at the temporary position of the moving body 100, which have a high matching probability, accurate matching can be performed in the matching process. A case where matching can be performed with such a predetermined accuracy is referred to as "matching is successful".
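One simple way to realize this selection is to transform the map points into the sensor frame at the provisional position and keep only the points inside the recorded angle of view and range. The sketch below assumes a two-dimensional pose (x, y, yaw), a horizontal field of view, and a maximum range read from the memory 16; these simplifications are assumptions for the example, and occlusion by front surfaces is not handled here.

import numpy as np

def filter_visible(map_points, pose, fov_deg, max_range):
    """Keep map points that fall inside the sensor's acquisition range.

    map_points : (N, 2) points (x, y) in map coordinates.
    pose       : (x, y, yaw) provisional pose of the moving body.
    fov_deg    : total horizontal angle of view recorded in step S1.
    max_range  : maximum distance with acceptable accuracy (step S1).
    """
    x, y, yaw = pose
    d = map_points - np.array([x, y])
    # Rotate into the sensor frame so that the viewing direction is the +x axis.
    c, s = np.cos(-yaw), np.sin(-yaw)
    local = d @ np.array([[c, -s], [s, c]]).T
    rng = np.linalg.norm(local, axis=1)
    ang = np.degrees(np.abs(np.arctan2(local[:, 1], local[:, 0])))
    mask = (rng <= max_range) & (ang <= fov_deg / 2.0)
    return map_points[mask]

# Example: 90-degree field of view, 30 m range
pts = np.random.rand(500, 2) * 60.0 - 30.0
print(filter_visible(pts, (0.0, 0.0, 0.0), 90.0, 30.0).shape)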
The data score adjustment unit 145 compares the score of the point cloud data selected from the map information with the maximum processable score Nmax (see FIG. 4), and based on the comparison result, adjusts the score of the selected point cloud data within a range equal to or less than the maximum processable score. Here, the maximum processable score is the maximum score that can be processed in the maximum allowable processing time when estimating the position.
The matching unit 146 matches the three-dimensional points acquired by the point cloud data acquisition unit 143 with the visible three-dimensional points acquired by the filter unit 144. Here, the matching unit 146 matches the point cloud data whose score has been adjusted by the data score adjustment unit 145 with the point cloud data of the information acquired by the sensor 12. For example, the matching unit 146 uses the ICP (Iterative Closest Point) technique to perform map matching in which the information acquired by the sensor 12 is compared with the shape information of the traveling environment created in advance.
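A minimal ICP-style alignment between the sensor point cloud and the selected map point cloud is sketched below. It uses nearest-neighbour association and an SVD-based rigid transform, which is one common way to realize such matching, not necessarily the implementation used in the device; the use of scipy's cKDTree and the fixed iteration count are choices made for the example.

import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Align 'source' (sensor points) to 'target' (map points).

    Returns the 3x3 rotation R and translation t such that
    R @ source_point + t approximately lies on the target cloud.
    """
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Associate each source point with its nearest map point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Closed-form rigid transform (Kabsch / SVD) between the pairs.
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Example: recover a small known shift between two copies of a cloud
target = np.random.rand(500, 3)
source = target + np.array([0.05, -0.02, 0.03])
R_est, t_est = icp(source, target)
print(np.round(t_est, 2))   # roughly [-0.05, 0.02, -0.03]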
The obstacle detection unit 147 processes the information acquired by the sensor 12 while the moving body 100 is traveling, and detects obstacles around the sensor 12. The filter unit 144 described above selects the point cloud data from the external storage device 19 in consideration of the obstacle detection result of the obstacle detection unit 147.
The current position correction unit 148 corrects the current temporary position of the moving body 100. The current position correction unit 148 corrects the current temporary position estimated by the current position estimation unit 142 based on the matching result of the matching unit 146 (the amount of deviation between the point cloud data obtained by the sensor 12 and the point cloud data of the external storage device 19). The current position correction unit 148 then outputs the corrected current position information to the control unit 15 and the like.
The current position correction unit 148 transmits the corrected current position information to the input/output unit 141, and the input/output unit 141 transmits the corrected current position information to the memory 16 according to a command from the control unit 15. The input/output unit 141 may also transmit the current position information corrected by the current position correction unit 148 directly to the control unit 15 according to a command from the control unit 15.
Note that the matching unit 146 described above may also serve as the current position estimation unit 142 and/or the obstacle detection unit 147.
Further, in the above description, in accordance with FIG. 2, the input/output unit 141, the current position estimation unit 142, the point cloud data acquisition unit 143, the filter unit 144, the data score adjustment unit 145, the matching unit 146, the obstacle detection unit 147, and the current position correction unit 148 are included in the sensor processing unit 14, but the present invention is not limited to this. These units may be independent components that are not included in the sensor processing unit 14.
[Information processing by the sensor processing unit]
Next, the procedure of the information processing executed by the sensor processing unit 14 will be described with reference to FIG. 3.
FIG. 3 is a flowchart showing an example of the information processing procedure executed by the sensor processing unit 14.
First, the sensor processing unit 14 executes a sensor acquisition range recording process for recording the acquisition range of the sensor 12 in the memory 16 (S1). For example, the maximum angles (angle of view) that can be acquired in the horizontal and vertical directions from the center of the sensor 12 are recorded in the memory 16. In addition, since the accuracy of the information acquired by the sensor 12 depends on the distance to the object, the maximum distance (range) within which the accuracy does not deteriorate beyond a predetermined value is recorded in the memory 16. This process may be performed in advance, before the sensor processing unit 14 starts this flowchart.
Next, the sensor processing unit 14 executes a maximum score recording process for recording, in the memory 16, the maximum score calculated based on the computing speed of the information processing device 13 (S2). The details of this maximum score recording process will be described later with reference to FIG. 4.
Next, the current position estimation unit 142 executes a current position estimation process for estimating the current temporary position based on the information received by the signal reception unit 2 (S3). The current position estimation process may also estimate the current temporary position based on information acquired by the sensor 12, such as odometry or landmark matching. In landmark matching, the information acquired by the sensor 12 is collated with the landmark information acquired from the external storage device 19. The current position estimation process may also use the result of fusing the various position estimation methods described above (odometry, landmark matching, the information received by the signal reception unit 2, and the like); for example, the current position may be predicted based on the result of fusing various position estimation methods with a Kalman filter. Ultimately, any sensor or method may be used as long as the current temporary position of the moving body 100 can be estimated.
Next, the point cloud data acquisition unit 143 executes an information extraction process (sensor) in which the sensor 12 extracts information on the traveling environment (surrounding information) of the moving body 100 (S4). For simplicity, in the present embodiment the information extracted by this information extraction process (sensor) is a point cloud. The information is not limited to this example, and methods using mathematical formulas (functions, curves) or colors (image matching) as the information are also conceivable.
Next, the point cloud data acquisition unit 143 executes an information extraction process (external storage device) in which information around the moving body 100 is extracted from the external storage device 19 based on the current temporary position obtained in the current position estimation process (S3) (S5). For simplicity, in the present embodiment the information extracted by this information extraction process (external storage device) is a point cloud, and the number of extracted points is N.
Next, the sensor processing unit 14 executes a memory reference process for extracting from the memory 16 the maximum score (maximum processable score) and the sensor acquisition range recorded in the maximum score recording process (S2) and the sensor acquisition range recording process (S1) (S6). Here, the extracted maximum processable score is Nmax.
Next, the data score adjustment unit 145 executes a score confirmation process for comparing the score N of the point cloud with the maximum processable score Nmax (S7). Here, the data score adjustment unit 145 proceeds to the score increase process of step S8 when N < Nmax, and proceeds to the score reduction process of step S9 when N ≥ Nmax.
When N < Nmax, the data score adjustment unit 145 increases the score N up to the maximum processable score Nmax (S8). The details of this score increase process will be described later with reference to FIG. 8.
When N ≥ Nmax, the data score adjustment unit 145 reduces the score N to the maximum processable score Nmax (S9). The details of this score reduction process will be described later with reference to FIG. 7.
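Steps S7 to S9 amount to a simple branch on the comparison between N and Nmax. The sketch below only illustrates that control flow; increase_points and reduce_points stand in for the score increase process (S8) and the score reduction process (S9) and are hypothetical helper names.

def adjust_point_count(points, n_max, increase_points, reduce_points):
    """Score confirmation (S7): branch into the score increase process (S8)
    or the score reduction process (S9) so that the number of map points
    handed to the matching process never exceeds n_max (Nmax)."""
    n = len(points)
    if n < n_max:
        return increase_points(points, n_max)    # S8: raise the score toward Nmax
    return reduce_points(points, n_max)          # S9: lower the score to Nmax

# Example wiring with trivial stand-ins for S8/S9:
keep = lambda pts, n: pts
trim = lambda pts, n: pts[:n]
print(len(adjust_point_count(list(range(10)), 5, keep, trim)))   # 5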
Next, after the process of step S8 or S9, the matching unit 146 matches the information (point cloud data) extracted by the sensor 12 in the information extraction process (sensor) of step S4 with the information (point cloud data) extracted from the external storage device 19 in the information extraction process (external storage device) of step S5 (S10).
Next, the obstacle detection unit 147 executes an obstacle detection process for detecting the three-dimensional shapes and positions of obstacles (other vehicles, pedestrians, objects that hinder movement, etc.) around the moving body 100 (S11).
In the obstacle detection process, obstacles are detected based on the traveling environment information acquired by the sensor 12. For example, when the sensor 12 is an image acquisition device (camera), the obstacle detection unit 147 detects obstacles using image processing technology or deep learning technology. The obstacle detection process may also detect obstacles based on the information received by the signal reception unit 2; for example, an obstacle is detected by a surveillance camera installed in the traveling environment of the moving body 100, and the detection result and the position of the surveillance camera with respect to the moving body 100 are transmitted to the signal reception unit 2. Ultimately, it suffices to detect obstacles that are likely to hinder the matching between the information extracted by the information extraction process (sensor) of step S4 and the information extracted by the information extraction process (external storage device) of step S5.
 次いで、マッチング部146は、ステップS10のマッチング処理を実行した後、障害物等によってマッチングできなかった情報と領域を算出するマッチング不可領域算出処理を実行する(S12)。このマッチング不可領域算出処理の詳細については図5及び図6により後述する。 Next, the matching unit 146 executes the matching process in step S10, and then executes the unmatchable area calculation process for calculating the information and the area that could not be matched due to an obstacle or the like (S12). The details of this unmatchable region calculation process will be described later with reference to FIGS. 5 and 6.
 次いで、マッチング部146は、ステップS12のマッチング不可領域算出処理で算出したマッチングできなかった領域をメモリ16に記録するマッチング負荷領域記録処理を実行する(S13)。 Next, the matching unit 146 executes a matching load area recording process of recording the unmatched area calculated in the matching non-matchable area calculation process of step S12 in the memory 16 (S13).
 次いで、現在位置修正部148は、ステップS10のマッチング処理の結果を用いて、ステップS3の現在位置推定処理で仮に推定した現在位置を修正する現在位置修正処理を実行する(S14)。 Next, the current position correction unit 148 executes the current position correction process for correcting the current position tentatively estimated by the current position estimation process in step S3 using the result of the matching process in step S10 (S14).
 次いで、センサ処理部14は、本フローチャートの一連の処理を終了するかどうかを判定する(S15)。判定基準は、例えば、事前に定められた位置推定の回数や、移動体100の走行距離や、移動体100の現在位置や、信号受付部2が受信した終了指令である。センサ処理部14は、終了判定した場合(S15のYES)、本フローチャートの一連の処理を終了する。また、センサ処理部14は、終了判定しなかった場合(S15のNO)、ステップS3の現在位置推定処理に戻る。 Next, the sensor processing unit 14 determines whether or not to end a series of processes in this flowchart (S15). The determination criteria are, for example, the number of predetermined position estimations, the mileage of the moving body 100, the current position of the moving body 100, and the end command received by the signal receiving unit 2. When the end determination is made (YES in S15), the sensor processing unit 14 ends a series of processes in this flowchart. If the end determination is not made (NO in S15), the sensor processing unit 14 returns to the current position estimation process in step S3.
[Maximum point count recording process (maximum point count calculation)]
 Here, the calculation of the maximum point count in the maximum point count recording process of step S2 will be described in detail with reference to FIG. 4.
 FIG. 4 is a graph showing the method of calculating the maximum point count in the maximum point count recording process of step S2. In FIG. 4, the function F400 represents the relationship between the number of points in a point cloud and the processing time (ms) at that number of points.
 The computation speed of the information processing device 13 depends on the number of points extracted in the information extraction process (sensor) of step S4 and the number of points extracted in the information extraction process (external storage device) of step S5. To obtain the function F400, the matching process of step S10 is first executed with several different point counts, the time required at each point count (hereinafter "processing time") is measured, and the resulting pairs 401 (plotted data) of point count and processing time are collected. A polynomial or spline is then fitted to the obtained pairs 401 to interpolate between them. To obtain the function F400 accurately, it is desirable to measure the pairs 401 on the information processing device 13 that will actually execute the position estimation method of this embodiment.
 If it is difficult to measure the pairs 401 on the information processing device 13 that actually executes the position estimation method of this embodiment, a function F' may instead be obtained on another information processing device. The function F = f(F', S) may then be derived based on the difference S in computation speed between the information processing device 13 that actually executes the position estimation method of this embodiment and the other information processing device.
 The maximum processing time Tmax402 is the longest processing time that is acceptable when performing position estimation. To achieve highly accurate position estimation, the estimation must be completed within a predetermined maximum cycle, and the maximum processing time Tmax402 is calculated from that cycle. For example, if the position estimation method of this embodiment must run at 10 Hz, then Tmax = 100 ms. The maximum processable point count Nmax403 is the point count obtained when the maximum processing time Tmax402 is substituted into the function F400. Although FIG. 4 assumes that the function F is obtained and stored in advance, this is not restrictive; for example, a lookup table registering the relationship between point counts and their processing times may be prepared, and the processing time for a point count not in the table may be obtained by interpolation.
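 As one possible reading of FIG. 4, the sketch below fits a simple polynomial to measured (point count, processing time) pairs 401 and then searches for the largest point count whose predicted time stays within Tmax; the quadratic degree and the sample values in the usage comment are assumptions for illustration only.

```python
import numpy as np

def estimate_n_max(sample_counts, sample_times_ms, t_max_ms, deg=2):
    """Approximate the function F from measured samples and return Nmax,
    the largest point count whose predicted processing time is <= t_max_ms."""
    coeffs = np.polyfit(sample_counts, sample_times_ms, deg=deg)  # fit F to the pairs 401
    f = np.poly1d(coeffs)
    candidates = np.arange(min(sample_counts), max(sample_counts) + 1)
    feasible = candidates[f(candidates) <= t_max_ms]
    return int(feasible.max()) if feasible.size else int(min(sample_counts))

# Example with made-up measurements: a 10 Hz estimation cycle gives Tmax = 100 ms.
# n_max = estimate_n_max([1000, 5000, 10000, 20000], [12.0, 40.0, 95.0, 210.0], 100.0)
```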
[Non-matchable region calculation process and non-matchable region recording process]
 Next, the non-matchable region calculation process of step S12 and the non-matchable region recording process of step S13 will be described with reference to FIGS. 5 and 6.
(When there are no obstacles)
 First, the case where there is no obstacle around the moving body 100 will be described with reference to FIG. 5.
 FIG. 5 is a diagram showing an example of the non-matchable region calculation process and the non-matchable region recording process (without obstacles). The dash-dotted line indicates the range within which the sensor 12 installed on the moving body 100 can acquire information (for example, the imaging range (angle of view) of a camera). In FIG. 5, the information acquisition range of the sensor 12 is shown in a bird's-eye view of the moving body 100, the landmark 500a, and the landmark 500b.
 In FIG. 5, the landmark 500a and the landmark 500b are in the traveling environment (surroundings) of the moving body 100. Saying that an object is in the traveling environment can be rephrased as the object being within the information acquisition range of the sensor 12.
 The information 501 is the information (for example, a point cloud) extracted from the landmarks 500a and 500b in the information extraction process (sensor) (S4 in FIG. 3) at the current (tentative) position of the moving body 100.
 The information 502 is the information on the traveling environment (for example, a point cloud) extracted from the external storage device 19 in the information extraction process (external storage device) (S5) based on the current (tentative) position of the moving body 100.
 The matching result 503 is the result of executing the matching process (S10) using the information 501 acquired by the sensor 12 and the information 502 extracted from the external storage device 19. In the example of FIG. 5, there are no obstacles such as other vehicles or pedestrians around the moving body 100, so the information 501 and the information 502 are matched successfully and no non-matchable region exists.
(When there is an obstacle)
 Next, the case where there is an obstacle around the moving body 100 will be described with reference to FIG. 6.
 FIG. 6 is a diagram showing an example of the non-matchable region calculation process and the non-matchable region recording process (with an obstacle).
 In the example of FIG. 6, an obstacle 610 (for example, a passerby) present within the information acquisition range of the sensor 12 indicated by the dash-dotted line prevents the information of the landmark 500b from being extracted, so the information extracted in the information extraction process (sensor) (S4) becomes the information 501b. The information 501b contains the information obtained from the landmark 500a and the information on the obstacle 610.
 Consequently, when the matching process (S10) is executed using the information 501b acquired by the sensor 12 and the information 502 extracted from the external storage device 19, the matching result 503b is output. Part of the information 502 extracted from the external storage device 19 in the information extraction process (external storage device) (S5) could not be matched. In the non-matchable region calculation process (S12), the matching unit 146 represents the three-dimensional positions of the information that could not be matched by the three-dimensional region 504b (non-matchable region) indicated by the dash-dotted line. The matching unit 146 then records the position (for example, the distance from the moving body 100, or the absolute position), length (depth), height (H), and width (W) of the three-dimensional region 504b in the memory 16 in the non-matchable region recording process (S13).
 For simplicity, the three-dimensional region 504b has been described as a cube, but it may instead be represented by a sphere or a cylinder; ultimately, the three-dimensional region 504b may have any three-dimensional shape as long as it contains the information that could not be matched. The three-dimensional region 504b may also be obtained using the obstacle detection process (S11) of the obstacle detection unit 147, in which case its three-dimensional shape and position are set based on the shape and position of the obstacle 610 output by the obstacle detection process. A concrete example of the non-matchable region calculation process (S12) based on the obstacle detection process will be described later with reference to FIGS. 10 and 11.
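 The embodiment leaves the exact construction of the region open (box, sphere, cylinder, or other shapes). As one simple possibility, the sketch below encloses the unmatched map points in an axis-aligned box and returns its centre and its extents, which could then be recorded in the memory 16; this is an illustrative assumption, not the disclosed method.

```python
import numpy as np

def unmatched_region_box(unmatched_points):
    """Enclose the points that could not be matched in an axis-aligned box.

    unmatched_points : (N, 3) array of x, y, z coordinates
    Returns the box centre and its extents (length, width, height).
    """
    pts = np.asarray(unmatched_points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    centre = (lo + hi) / 2.0
    extents = hi - lo                        # size of the non-matchable region
    return centre, extents
```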
 The non-matchable region calculation process (S12) has been explained using a moving object (the obstacle 610) for simplicity, but the matching process (S10) can also fail for reasons other than obstacles. For example, the environment may change because of construction work or the seasons, so that the environment at matching time differs from the environment at the time the information was registered in the external storage device 19. Thus, when the information extracted in the information extraction process (sensor) (S4) and the information extracted in the information extraction process (external storage device) (S5) are matched in the matching process (S10), there may be regions that cannot be matched; as explained above, the present invention makes it possible to detect such regions.
[Point count reduction process]
 Next, the point count reduction process of step S9 will be described in detail with reference to FIG. 7.
 FIG. 7 is a diagram showing the details of the point count reduction process performed by the data point count adjustment unit 145.
 In FIG. 7, the landmarks 701a, 701b, and 701c are landmarks in the traveling environment (surroundings) of the moving body 100, and the information (point cloud data) of each landmark is registered in the external storage device 19.
 The information 702 is the information extracted from the external storage device 19 based on the current position of the moving body 100. For simplicity, the information 702 is assumed to be a point cloud, and the number of points N in the information 702 is assumed to be larger than the maximum processable point count Nmax403 (see FIG. 4) (N > Nmax).
 Because the point count N of the information 702 exceeds the maximum processable point count Nmax403, the processing cannot be completed within the maximum processing time Tmax402. The point count N of the information 702 must therefore be reduced to the maximum processable point count Nmax403. The points may be reduced randomly or by using voxels. Alternatively, only the information likely to be matched at the current position of the moving body 100 may be kept and the rest removed. Ultimately, any reduction method may be used as long as the point count N becomes no greater than the maximum processable point count Nmax403.
 The information 703 is the information after the point count N of the information 702 extracted from the external storage device 19 has been reduced to the maximum processable point count Nmax403 (N = Nmax). In the information 703, the numbers of points in the point clouds of the landmarks 701a, 701b, and 701c contained in the information 702 have been reduced.
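 As an illustration of the reduction options mentioned above (random thinning or voxel-based thinning), the following sketch is one possible implementation and is not taken from the embodiment.

```python
import numpy as np

def reduce_points(points, n_max, voxel=None, rng=None):
    """Reduce a point cloud to at most n_max points.

    If voxel is given, keep one representative point per voxel cell first;
    if the cloud is still too large, fall back to uniform random sampling.
    """
    pts = np.asarray(points, dtype=float)
    if voxel is not None:
        cells = np.floor(pts / voxel).astype(np.int64)         # voxel index of each point
        _, keep = np.unique(cells, axis=0, return_index=True)  # first point in each cell
        pts = pts[np.sort(keep)]
    if len(pts) > n_max:
        rng = rng or np.random.default_rng()
        pts = pts[rng.choice(len(pts), size=n_max, replace=False)]
    return pts
```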
[Point count increase process]
 Next, the point count increase process of step S8 will be described in detail with reference to FIG. 8.
 FIG. 8 is a diagram showing the details of the point count increase process performed by the data point count adjustment unit 145.
 In FIG. 8, the information 702b is the information extracted from the external storage device 19 based on the current position of the moving body 100. For simplicity, the information 702b is assumed to be a point cloud, and the number of points Nb in the information 702b is assumed to be larger than the maximum processable point count Nmax403 (see FIG. 4) (Nb > Nmax).
 The region 704 is the non-matchable region calculated in the non-matchable region calculation process of step S12.
 The information 703b is the information after the point count Nb of the information 702b extracted from the external storage device 19 has been reduced to the maximum processable point count Nmax403, and its point count is denoted Nb' (Nb' = Nmax).
 Meanwhile, the information 702b contains the region 704 calculated in the non-matchable region calculation process of step S12. The information inside the region 704 is not used when the matching process (S10) is performed. The information (points) inside the region 704 that was included in the information 703b is therefore deleted, and the point count Nb' becomes the point count Nc (Nc < Nb'). Because Nc < Nmax, the processing time of the matching process (S10) becomes shorter than the maximum processing time Tmax402.
 Conversely, this makes it possible to increase the point count Nc back up to the maximum processable point count Nmax. In the point count increase process (S8), points outside the region 704 that have not yet been selected are therefore chosen from the information 702b, and the point count of the information 703b is increased, as shown in the sketch after this paragraph. The information 705 is the information after the point count Nc has been increased to the maximum processable point count Nmax403. When adding points in the point count increase process (S8), the data point count adjustment unit 145 uses the same criteria as in the point count reduction process (S9): random selection, voxels, or a search for points that are likely to be matched. The hatched points in the information 705 are the points added relative to the information 703b.
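 The sketch below illustrates one way the point count increase process (S8) could top the cloud back up to Nmax from points of the map extract lying outside the non-matchable region, here modelled as an axis-aligned box; the box model and the first-come selection of extra points are assumptions made for brevity.

```python
import numpy as np

def increase_points(selected, full_cloud, n_max, region_centre, region_extents):
    """Add points from full_cloud (the map extract) to selected until n_max,
    skipping points that fall inside the non-matchable region."""
    sel = np.asarray(selected, dtype=float)
    full = np.asarray(full_cloud, dtype=float)
    lo = np.asarray(region_centre) - np.asarray(region_extents) / 2.0
    hi = np.asarray(region_centre) + np.asarray(region_extents) / 2.0
    outside = ~np.all((full >= lo) & (full <= hi), axis=1)    # points outside the region
    need = n_max - len(sel)
    if need > 0:
        # first-come selection; random, voxel-based, or "likely to match"
        # criteria could be used instead, and deduplication against points
        # already in `selected` is omitted for brevity
        sel = np.vstack([sel, full[outside][:need]])
    return sel
```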
[Example of self-position estimation processing]
 Next, concrete examples of the self-position estimation process performed by the sensor processing unit 14 will be described with reference to FIGS. 9 to 14.
 FIG. 9 is a diagram showing a first example of the self-position estimation process performed by the sensor processing unit 14. To simplify the explanation, the moving body 100 is assumed to be stationary.
 In FIG. 9, the landmarks 901a and 901b are landmarks in the traveling environment (surroundings) of the moving body 100. The information (point cloud data) of each landmark is registered in the external storage device 19.
 The information 902a is the information on the landmarks 901a and 901b extracted in the information extraction process (sensor) (S4) at the current tentative position of the moving body 100. For simplicity, the information 902a is assumed to be a point cloud.
 The information 903a is the information on the traveling environment of the moving body 100 extracted from the external storage device 19 in the information extraction process (external storage device) (S5) based on the current tentative position of the moving body 100. The number of points in the point cloud contained in the information 903a is denoted Na. The matching unit 146 refers in the memory reference process (S6) to the maximum processable point count Nmax403 registered in advance in the memory 16, and compares the point count Na with the maximum processable point count Nmax403 in the point count confirmation process (S7). For the sake of explanation, Na < Nmax in the example of FIG. 9.
 The matching result 904a is the result of matching the information 902a acquired by the sensor 12 with the information 903a extracted from the external storage device 19 in the matching process (S10). Based on the matching result 904a, the current position correction unit 148 corrects the tentatively estimated current position in the current position correction process (S14). In the example of FIG. 9, it is also assumed that there is no region that could not be matched (matching OK).
 FIG. 10 is a diagram showing a second example of the self-position estimation process performed by the sensor processing unit 14, and shows the state at the timing following FIG. 9. The interval between timings is, for example, the cycle at which the sensor processing unit 14 executes its information processing.
 In FIG. 10, the obstacle 1005 is an obstacle within the acquisition range of the sensor 12 installed on the moving body 100.
 The information 902b is the information on the landmark 901a and the obstacle 1005 extracted in the information extraction process (sensor) (S4) at the current tentative position of the moving body 100.
 The information 903b is the information on the traveling environment of the moving body 100 extracted from the external storage device 19 in the information extraction process (external storage device) (S5) based on the current tentative position of the moving body 100. The point count Nb of the point cloud contained in the information 903b is assumed to be smaller than the maximum processable point count Nmax403 (Nb < Nmax).
 The matching result 904b is the result of matching the information 902b acquired by the sensor with the information 903b extracted from the external storage device 19 in the matching process (S10). In this case there is a region that could not be matched because of the obstacle 1005, and the corresponding region 906b is calculated in the non-matchable region calculation process (S12). Information such as the shape and position of the region 906b is therefore registered in the memory 16 in the non-matchable region recording process (S13) (matching NG).
 FIG. 11 is a diagram showing a third example of the self-position estimation process performed by the sensor processing unit 14, and shows the state at the timing following FIG. 10.
 In FIG. 11, the information 902c is the information on the landmark 901a and the obstacle 1005 extracted in the information extraction process (sensor) (S4) at the current tentative position of the moving body 100; that is, the information 902c is the same as the information 902b.
 The information 903c is the information on the traveling environment of the moving body 100 extracted from the external storage device 19 in the information extraction process (external storage device) (S5) based on the current tentative position of the moving body 100. The point count Nc of the point cloud contained in the information 903c is assumed to be smaller than the maximum processable point count Nmax403 (Nc < Nmax).
 In the memory reference process (S6), the data point count adjustment unit 145 refers to the region 906b (non-matchable region) registered in the memory 16. After performing the matching process (S10) at the previous timing, the matching unit 146 output the region 906b in the non-matchable region calculation process (S12). Before performing the current matching process, the data point count adjustment unit 145 therefore deletes the information inside the region 906b in the point count reduction process (S9), so that the point count Nc of the information 903c decreases to the point count Nc'. Because Nc' < Nmax, the points outside the region 906b are increased up to the maximum processable point count Nmax403 and the matching process is performed. Consequently, as the matching result 904c shows, the influence of the obstacle 1005 is reduced, and the matching process can be completed accurately and within the maximum processing time Tmax402.
 FIG. 12 is a diagram showing a fourth example of the self-position estimation process performed by the sensor processing unit 14, and shows the state at the timing following FIG. 11. At this timing, the passerby serving as the obstacle 1005 has moved from inside to outside the information acquisition range of the sensor 12.
 In FIG. 12, the information 902d is the information on the landmarks 901a and 901b extracted in the information extraction process (sensor) (S4) at the current tentative position of the moving body 100; that is, the information 902d is the same as the information 902a in FIG. 9.
 The information 903d is the information on the traveling environment of the moving body 100 extracted from the external storage device 19 in the information extraction process (external storage device) (S5) based on the current tentative position of the moving body 100. The point count Nd of the point cloud contained in the information 903d is assumed to be smaller than the maximum processable point count Nmax403 (Nd < Nmax).
 As at the timing of FIG. 11, before performing the matching process (S10) the data point count adjustment unit 145 deletes the information inside the region 906b (non-matchable region) in the point count reduction process (S9), so that the point count Nd of the information 903d decreases to the point count Nd'. Because Nd' < Nmax, the points outside the region 906b are increased up to the maximum processable point count Nmax403 and the matching process is performed. In this case, as the matching result 904d shows, the information on the landmark 901b in the information 902d does not match the information obtained by deleting the region 906b from the information 903d.
 On the other hand, at this timing the obstacle 1005 is outside the information acquisition range of the sensor 12, so the region 906b is no longer necessary. A method for removing the region 906b from the information of the external storage device 19 will be described below with reference to FIG. 13.
 FIG. 13 is a diagram showing a fifth example of the self-position estimation process performed by the sensor processing unit 14, and shows the state at the timing following FIG. 12. The passerby serving as the obstacle 1005 remains outside the information acquisition range of the sensor 12.
 In FIG. 13, the information 902e is the information on the landmarks 901a and 901b extracted in the information extraction process (sensor) (S4) at the current tentative position of the moving body 100; that is, the information 902e is the same as the information 902d in FIG. 12.
 The information 903e is the information on the traveling environment of the moving body 100 extracted from the external storage device 19 in the information extraction process (external storage device) (S5) based on the current tentative position of the moving body 100. The point count Ne of the point cloud contained in the information 903e is assumed to be smaller than the maximum processable point count Nmax403 (Ne < Nmax).
 There is a probability that the obstacle 1005, which was within the information acquisition range of the sensor 12, has over time moved out of that range. On the other hand, there is also a probability that the obstacle 1005 is still within the information acquisition range of the sensor 12, so the region 906b (non-matchable region) is not deleted at this timing; instead, its size is reduced step by step over time in the non-matchable region calculation process (S12). In the case of FIG. 13, the obstacle 1005 remains outside the information acquisition range of the sensor 12, as in FIG. 12.
 The region 906e represents the region after the region 906b has been shrunk. The information inside the region 906e (non-matchable region) is deleted in the point count reduction process (S9), and the point count Ne of the information 903e decreases to the point count Ne'. The data point count adjustment unit 145 then takes the region 906e (and the points it contains) into account and increases the point count Ne of the information 903e to the maximum processable point count Nmax403 in the point count increase process (S8), after which the matching unit 146 performs the matching process (S10) and the matching result 904e is obtained. As a result, outside the region 906e, matching becomes possible again even in places that used to belong to the region 906b.
 When the region is shrunk in the non-matchable region calculation process (S12), it is shrunk over time using a constant parameter K (0 < K < 1). For example, if the length, height, and width of the region 906b are Lb, Hb, and Wb, the size of the region 906e is set to Lb*K, Hb*K, Wb*K in the non-matchable region calculation process (S12). Furthermore, when the region 906e has shrunk over time to the point where not a single point of the information 903e falls inside it, the region 906e is deleted. The shape of the region 906b is not limited to a rectangular shape.
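 A minimal sketch of this step-by-step shrinking, directly following the Lb*K, Hb*K, Wb*K rule above (the box shape is assumed for simplicity):

```python
def shrink_region(length, height, width, k):
    """Shrink the non-matchable region by a constant factor K (0 < K < 1).

    The caller deletes the region once no point of the extracted map cloud
    falls inside it any more (as described for region 906e)."""
    assert 0.0 < k < 1.0
    return length * k, height * k, width * k

# Example: (Lb, Hb, Wb) = (2.0, 1.8, 0.6) with K = 0.5 becomes (1.0, 0.9, 0.3).
```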
 FIG. 13 shows the case where, as expected, the obstacle 1005 has left the information acquisition range of the sensor 12; the case where the obstacle 1005 remains within the information acquisition range of the sensor 12 is described next. In that case, after the matching process (S10) is performed, the region 906e that could not be matched is detected in the non-matchable region calculation process (S12), so the region 906e is enlarged again in the non-matchable region calculation process. As an example, the region 906e may be enlarged back to roughly the size of the region 906b.
 FIG. 14 is a diagram showing a sixth example of the self-position estimation process performed by the sensor processing unit 14, and shows the state at the timing following FIG. 13.
 In FIG. 14, the information 902f is the information on the landmarks 901a and 901b extracted in the information extraction process (sensor) (S4) at the current tentative position of the moving body 100; that is, the information 902f is the same as the information 902e in FIG. 13.
 The information 903f is the information on the traveling environment of the moving body 100 extracted from the external storage device 19 in the information extraction process (external storage device) (S5) based on the current tentative position of the moving body 100. The point count Nf of the point cloud contained in the information 903f is assumed to be smaller than the maximum processable point count Nmax403 (Nf < Nmax).
 At this timing, the region 906e (FIG. 13) excluded from the matching process is assumed to have shrunk over time and disappeared completely. The information 903f, which contains the points corresponding to the landmarks 901a and 901b, is therefore matched with the information 902f acquired by the sensor 12 in the matching process of step S10, and the matching result 904f is output.
 Displaying the results obtained by the matching process is not essential, but they may be displayed on the display unit 17 for debugging or for the user's reference. In addition to the matching result itself, the point count used in the matching process, the maximum processable point count Nmax403 registered in the memory 16, the result of the non-matchable region calculation process, and the like may also be displayed.
[Point selection criteria for point count reduction]
 Next, the criteria for selecting points when the point count reduction process of step S9 is performed will be described with reference to FIGS. 15 and 16.
 FIG. 15 is a diagram illustrating the criteria for selecting point cloud data when the data point count adjustment unit 145 performs the point count reduction process (S9). The upper part of FIG. 15 shows an example of the positional relationship between the sensor 12 and the landmarks, and the lower part of FIG. 15 shows the points extracted from the external storage device 19. To simplify the explanation, the moving body 100 is assumed to be stationary.
 The coordinate system 1500 is a coordinate system (x, y, z) referenced to the sensor 12 installed on the moving body 100. In the coordinate system 1500, the direction in which the sensor 12 faces (in this example, the same as the traveling direction of the moving body 100) is the z-axis, the direction orthogonal to the z-axis along the height of the landmarks is the y-axis, and the direction orthogonal to both the y-axis and the z-axis is the x-axis.
 The landmark 1501 is a landmark in the traveling environment of the moving body 100.
 The landmark 1502 is also a landmark in the traveling environment of the moving body 100, and is farther from the moving body 100 along the z-axis than the landmark 1501. For simplicity, the positions of the moving body 100, the landmark 1501, and the landmark 1502 in FIG. 15 are fixed with respect to the x-axis. The landmarks 1501 and 1502 are within the information acquisition range of the sensor 12.
 The point cloud 1503 is the information (points) of the landmarks 1501 and 1502 registered in the external storage device 19.
 The point cloud 1504 is the information (points) on the traveling environment of the moving body 100 extracted from the external storage device 19 in the information extraction process (external storage device) (S5 in FIG. 3) based on the current tentative position of the moving body 100. The number of points contained in the point cloud 1504 is denoted Nt, and Nt is assumed to be larger than the maximum processable point count Nmax403 (Nt > Nmax).
 Here, priority based on distance is described as a criterion for selecting points when the point count increase process (S8) and the point count reduction process (S9) are performed.
 First, priority is given to points that are likely to be matched against the points registered in the external storage device 19 when extraction is performed by the sensor 12 from the current position of the moving body 100. A sensor 12 such as a camera or a laser sensor extracts (acquires) information from the surface (boundary) of a three-dimensional object, so even if information exists on the far side of the object, it cannot be extracted. In the point count reduction process, therefore, the points 1505 (shown as black circles) on the near side (current-position side) of the landmark 1501 and the points 1506 (shown as black circles) on the near side of the landmark 1502, which are easy to extract (observe) from the current position of the moving body 100, are kept, while the points 1505b and 1506b (shown in gray), which are difficult to extract, are deleted. As a result, the point count Nt decreases to the point count Nt' (Nt' < Nmax). The matching process can then be performed accurately and with a low load using the points 1505 and 1506, which represent the boundaries of the landmarks 1501 and 1502.
 On the other hand, even after the point cloud 1504 of the traveling environment extracted from the external storage device 19 in the information extraction process (external storage device) (S5) has been reduced by the point count reduction process, the combined point count Nt' of the points 1505 and 1506 representing the boundaries of the landmarks 1501 and 1502 may still exceed the maximum processable point count Nmax403 (Nt' > Nmax). How such a case is handled is described next.
 The accuracy of the sensor 12 installed on the moving body 100 depends on its resolution and calibration. For example, a laser sensor measures the distance from the sensor to an object by the TOF (time-of-flight) principle, so the accuracy of the measured distance deteriorates because of the reflectance of the object and reflections (noise) from the environment. Moreover, because a laser sensor has limited resolution and a limited information acquisition range, the measurement accuracy for distant objects, or objects beyond a certain range (distance), deteriorates. In the case of a camera, the measurement accuracy varies greatly with the image resolution and with the algorithm used to create the parallax images; fundamentally, the farther an object is from the camera, the worse the measurement accuracy.
 Calibration involves external parameters and internal parameters. The external parameters describe the mounting position of the sensor; the internal parameters include the lens arrangement (distortion) inside the sensor, the adjustment parameters for the parallax images, and so on.
 Therefore, when the point count Nt' must be reduced further, information farther from the current position of the moving body 100 along the z-axis is deleted in the point count reduction process. For example, the points 1506 on the surface of the landmark 1502 are deleted, and the matching process is performed with the points 1505 on the surface of the landmark 1501, which is closer to the moving body 100, retained. For simplicity, the number of points in the point cloud 1505 is assumed to be smaller than the maximum processable point count Nmax403; if the points 1505 also had to be reduced further, the points in 1505 farther from the moving body 100 would be deleted preferentially, in the same way as described above.
 Although the point count reduction process has been described, when the point count must be adjusted in the point count increase process, points closer to the moving body 100 are likewise added preferentially before the matching process is performed.
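 As an illustration of this distance-based priority (an assumption about one possible implementation, not the disclosed code), the following sketch keeps the Nmax points nearest to the sensor; reversing the order gives the corresponding priority for the point count increase process.

```python
import numpy as np

def reduce_by_distance(points, n_max, sensor_pos=(0.0, 0.0, 0.0)):
    """Keep at most n_max points, preferentially deleting the points
    farthest from the current (tentative) sensor position."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(sensor_pos, dtype=float), axis=1)
    order = np.argsort(d)                    # nearest points first
    return pts[order[:n_max]]
```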
 FIG. 16 is a diagram showing an example of the self-position estimation process using the points selected in FIG. 15.
 The point cloud 1508 is the point cloud of the landmark 1502 extracted in the information extraction process (sensor) (S4) from the current position of the moving body 100 at timing t0.
 The point cloud 1509 is the point cloud of the landmark 1501 extracted in the information extraction process (sensor) (S4) from the current position of the moving body 100 at timing t0.
 Assume that the point count Nt of the point cloud 1504 of the traveling environment extracted from the external storage device 19 in the information extraction process (external storage device) (S5) is larger than the maximum processable point count Nmax403, that the points 1505 and 1506 representing the boundaries of the landmarks 1501 and 1502 have been extracted in the point count reduction process (S9), and that their combined point count Nt' is still larger than the maximum processable point count Nmax403. In this case, the point count reduction process preferentially keeps the points 1505, which are closer to the moving body 100, and deletes the points 1506, which are farther from the moving body 100 than the points 1505, before the matching process is performed.
 Next, at timing t1, an obstacle 1601 (for example, a passerby) enters the information acquisition range of the sensor 12, and the information extraction process (sensor) (S4) performed from the current position of the moving body 100 extracts the point cloud 1508b of the landmark 1502 and the point cloud 1601b of the obstacle 1601. Before the information on the traveling environment is extracted from the external storage device 19 in the information extraction process (external storage device) (S5), the obstacle detection process (S11) detects the obstacle 1601, and the region 1510 (non-matchable region) representing the obstacle 1601 is calculated in the non-matchable region calculation process (S12). The point cloud 1504 of the traveling environment is then extracted from the external storage device 19 in the information extraction process (external storage device) (S5).
 Here, because the point count Nt is larger than the maximum processable point count Nmax403, the point count Nt is reduced to the maximum processable point count Nmax403 in the point count reduction process (S9). However, referring to the region 1510 representing the obstacle 1601 registered in the memory 16, the points 1505 close to the moving body 100 (lower part of FIG. 16) lie inside the region 1510 and therefore cannot be selected. The points 1506, which represent the boundary of the more distant landmark 1502, are therefore selected, and the matching process is performed.
 The lower part of FIG. 16 shows the case at the next timing t2, where the obstacle 1601 is no longer within the information acquisition range of the sensor 12. Before the information on the traveling environment is extracted from the external storage device 19 in the information extraction process (external storage device) (S5), obstacle detection is performed in the obstacle detection process (S11); since the obstacle 1601 is gone, the obstacle detection process produces no output, and consequently no non-matchable region is output in the non-matchable region calculation process (S12).
 The point cloud 1504 of the traveling environment is extracted from the external storage device 19 in the information extraction process (external storage device) (S5), and because the point count Nt is larger than the maximum processable point count Nmax403, the point count is reduced in the point count reduction process (S9). Unlike at timing t1, however, no non-matchable region is registered in the memory 16, so the points 1505 of the landmark 1501 close to the moving body 100 are kept and the points 1506 of the more distant landmark 1502 are deleted.
 Finally, the matching process is performed using the points 1505 of the landmark 1501 extracted from the external storage device 19, together with the point cloud 1508c of the landmark 1502 and the point cloud 1509c of the landmark 1501 extracted in the information extraction process (sensor) (S4) from the current position of the moving body 100.
 As described above, the self-position estimation device (position estimation device 1) according to the embodiment estimates its own position by comparing the information on the traveling environment collected by a sensor (sensor 12) with map information. This self-position estimation device includes: a current position estimation unit (current position estimation unit 142) that tentatively estimates its own current position; a point cloud data selection unit (filter unit 144) that calculates the range within which the sensor can acquire information based on the current tentative position estimated by the current position estimation unit, and selects, from the point cloud data contained in the map information, the point cloud data within that acquirable range; a point count adjustment unit (data point count adjustment unit 145) that compares the number of points in the point cloud data selected from the map information with the maximum processable point count, which is the largest number of points that can be processed within the maximum processing time acceptable for position estimation, and, based on the comparison result, adjusts the number of points in the selected point cloud data within a range not exceeding the maximum processable point count; a matching unit (matching unit 146) that matches the adjusted point cloud data with the point cloud data of the information acquired by the sensor; and a current position correction unit (current position correction unit 148) that corrects the current tentative position estimated by the current position estimation unit based on the matching result of the matching unit.
 With the self-position estimation device configured as described above, the number of points in the point cloud data selected from the map information can be adjusted dynamically within a range not exceeding the maximum processable point count corresponding to the maximum processing time, so that position estimation can be performed with an optimized balance between estimation accuracy and processing load.
 In the self-position estimation device of this embodiment, the point count adjustment unit (data point count adjustment unit 145) is configured to reduce the number of points in the selected point cloud data to the maximum processable point count Nmax when the number of points in the selected point cloud data exceeds Nmax.
 With this configuration, the processing load can be reduced by reducing the number of points in the selected point cloud data to the maximum processable point count Nmax.
 In the self-position estimation device of this embodiment, the point count adjustment unit (data point count adjustment unit 145) is also configured to increase the number of points in the selected point cloud data up to the maximum processable point count Nmax when the number of points in the selected point cloud data is smaller than Nmax.
 With this configuration, the accuracy of position estimation can be improved by increasing the number of points in the selected point cloud data to the maximum processable point count Nmax.
 また、本実施形態の自己位置推定装置において、マッチング部(マッチング部146)は、マッチングできなかった点群データ(例えば領域906bの情報)を記憶し、記憶した点群データを、次回の位置推定の際に地図情報に含まれる点群データから選択しないように構成されている。 Further, in the self-position estimation device of the present embodiment, the matching unit (matching unit 146) stores the point cloud data (for example, the information of the area 906b) that could not be matched, and the stored point cloud data is used for the next position estimation. At the time of, it is configured not to select from the point cloud data included in the map information.
 上記構成によれば、マッチングできなかった点群データを、次回の位置推定の際に地図情報に含まれる点群データから選択しないため、マッチングできなかった点群データの影響を排除し、ロバスト性を向上させることができる。 According to the above configuration, since the point cloud data that could not be matched is not selected from the point cloud data included in the map information at the next position estimation, the influence of the point cloud data that could not be matched is eliminated and the robustness is achieved. Can be improved.
 また、本実施形態の自己位置推定装置において、点数調整部(データ点数調整部145)は、選択した点群データの点数を最大処理可能点数Nmaxに減らす際に、選択した点群データのうち現在の仮位置からより遠い点群データ(例えば点群1506)を優先的に減らすように構成されている。 Further, in the self-position estimation device of the present embodiment, the point cloud adjustment unit (data point cloud adjustment unit 145) is currently used among the selected point cloud data when reducing the points of the selected point cloud data to the maximum processable point cloud Nmax. It is configured to preferentially reduce point cloud data (eg, point cloud 1506) farther from the temporary position of.
 上記構成によれば、現在の仮位置からより遠い点群データを優先的に減らすことにより、処理負荷を軽減しつつ、位置推定の精度の低下を最小限に抑えることができる。 According to the above configuration, by preferentially reducing the point cloud data farther from the current temporary position, it is possible to reduce the processing load and minimize the decrease in the accuracy of the position estimation.
 また、本実施形態の自己位置推定装置において、点数調整部(データ点数調整部145)は、選択した点群データの点数を最大処理可能点数Nmaxまで増やす際に、選択した点群データのうち現在の仮位置により近い点群データ(例えば点群データ1505)を優先的に増やすように構成されている。 Further, in the self-position estimation device of the present embodiment, the point cloud adjustment unit (data point cloud adjustment unit 145) is currently used among the selected point cloud data when increasing the points of the selected point cloud data to the maximum processable point cloud Nmax. It is configured to preferentially increase the point cloud data (for example, the point cloud data 1505) closer to the temporary position of.
 According to the above configuration, by preferentially adding point cloud data closer to the current temporary position, the accuracy of position estimation can be improved while minimizing the increase in processing load.
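 A sketch of distance-prioritized densification, under the assumption that added points may be generated by duplicating and slightly perturbing the points nearest to the current temporary position; the jitter magnitude and the duplication strategy are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def densify_near_first(points: np.ndarray, pose_xyz: np.ndarray, n_max: int) -> np.ndarray:
    """Add points up to n_max, duplicating-and-jittering the points nearest
    to the current temporary position first (illustrative strategy)."""
    if len(points) == 0 or len(points) >= n_max:
        return points
    dist = np.linalg.norm(points - pose_xyz, axis=1)
    order = np.argsort(dist)                      # nearest first
    rng = np.random.default_rng(0)
    extra, i = [], 0
    while len(points) + len(extra) < n_max:
        src = points[order[i % len(order)]]
        extra.append(src + rng.normal(scale=0.01, size=3))  # small positional jitter
        i += 1
    return np.vstack([points, np.array(extra)])
```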
 Further, in the self-position estimation device of the present embodiment, the maximum processable number of points Nmax is calculated based on a correspondence, obtained in advance, between different numbers of points and the processing time at each number of points (for example, function F). By obtaining the relationship between the number of points and the processing time in advance in this way, the maximum processable number of points Nmax, which changes over time, can be calculated at any time.
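 A sketch of how such a correspondence could be calibrated and then inverted for the allowable processing time, assuming (for illustration only) a linear cost model fitted to benchmarked point counts and processing times; the disclosed function F is not necessarily linear.

```python
import numpy as np

def calibrate_nmax(counts, times, t_max):
    """Fit time = a*n + b from benchmarked (counts, times) pairs, then
    invert it to get the largest point count processable within t_max."""
    a, b = np.polyfit(counts, times, deg=1)   # assumed-linear cost model
    return int((t_max - b) / a)

# usage (illustrative figures): n_max = calibrate_nmax([1e3, 5e3, 1e4], [2.1, 9.8, 19.5], t_max=10.0)  # ms
```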
 Further, in the self-position estimation device of the present embodiment, the matching unit (matching unit 146) is configured to calculate and store the position and shape of a region (for example, region 906b) containing point cloud data that could not be matched. By storing the position and shape of the region containing the unmatched point cloud data in this way, the corresponding point cloud data can be excluded in the next matching.
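 A sketch in which the position and shape of the unmatched region are summarized as an axis-aligned bounding box and its center; this particular representation is an assumption made for illustration.

```python
import numpy as np

def unmatched_region(unmatched_points: np.ndarray) -> dict:
    """Summarize unmatched points as an axis-aligned bounding box,
    one simple representation of the region's position and shape."""
    lo = unmatched_points.min(axis=0)
    hi = unmatched_points.max(axis=0)
    return {"min": lo, "max": hi, "center": 0.5 * (lo + hi)}
```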
 Further, in the self-position estimation device of the present embodiment, the matching unit (matching unit 146) is configured to adjust and store, according to the matching results obtained in time series, the size of the region that could not be matched against the map information in time series or the number of points of its point cloud data (for example, shrinking region 906b to region 906e).
 According to the above configuration, by adjusting the regions or point cloud data that could not be matched in time series and reflecting them in the time-series matching process, the continuity of the current position over the time series is maintained.
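 A sketch of a time-series adjustment in which a stored region is shrunk once later frames begin to match inside it (as in the reduction from region 906b to region 906e); the shrink rule and rate below are illustrative assumptions.

```python
def shrink_region(region: dict, matched_fraction: float, rate: float = 0.5) -> dict:
    """Shrink a stored unmatched region (dict with 'min', 'max', 'center')
    in proportion to the fraction of its points matched in later frames."""
    if matched_fraction > 0:                       # some points matched again
        half = 0.5 * (region["max"] - region["min"])
        scale = 1.0 - rate * matched_fraction      # illustrative shrink rule
        region["min"] = region["center"] - scale * half
        region["max"] = region["center"] + scale * half
    return region
```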
 Note that the present invention is not limited to the embodiment described above, and various other applications and modifications can of course be made without departing from the gist of the present invention as set forth in the claims. For example, the embodiment described above explains the configurations of the position estimation device and the moving body in detail and concretely in order to describe the present invention in an easy-to-understand manner, and the invention is not necessarily limited to one including all of the components described. It is also possible to add, replace, or delete components with respect to part of the configuration of the embodiment.
 In addition, some or all of the above configurations, functions, processing units, and the like may be realized in hardware, for example by designing them as an integrated circuit. An FPGA (Field Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or the like may be used as the hardware.
 In the flowchart shown in FIG. 3, a plurality of processes may be executed in parallel, or the processing order may be changed, as long as the processing result is not affected.
 In the embodiment described above, control lines and information lines are shown where they are considered necessary for the explanation, and not all control lines and information lines in the product are necessarily shown. In practice, almost all components may be considered to be interconnected.
 In this specification, when terms such as "parallel" and "orthogonal" are used, each term does not mean only strictly "parallel" or "orthogonal"; it includes "parallel" and "orthogonal" in the strict sense and also covers "substantially parallel" and "substantially orthogonal" within a range in which the function can still be achieved.
1 ... position estimation device, 12, 12a to 12n ... sensor, 13 ... information processing device, 14 ... sensor processing unit, 15 ... control unit, 16 ... memory, 17 ... display unit, 19 ... external storage device, 100 ... moving body, 141 ... input/output unit, 142 ... current position estimation unit, 143 ... point cloud data acquisition unit, 144 ... filter unit, 145 ... data point count adjustment unit, 146 ... matching unit, 147 ... obstacle detection unit, 148 ... current position correction unit, 400 ... function, 401 ... combination, 402 ... maximum processing time, 403 ... maximum processable number of points

Claims (10)

  1.  A self-position estimation device that estimates its own position by comparing information on the travel environment collected by a sensor with map information, the device comprising:
     a current position estimation unit that tentatively estimates its own current position;
     a point cloud data selection unit that calculates, from the point cloud data included in the map information, a range in which the sensor can acquire the information based on the current temporary position estimated by the current position estimation unit, and selects the point cloud data within the acquirable range;
     a point count adjustment unit that compares the number of points of the point cloud data selected from the map information with a maximum processable number of points, which is the largest number of points that can be processed within the maximum allowable processing time for estimating the position, and adjusts the number of points of the selected point cloud data within a range not exceeding the maximum processable number of points based on the comparison result;
     a matching unit that matches the adjusted point cloud data against the point cloud data of the information acquired by the sensor; and
     a current position correction unit that corrects the current temporary position estimated by the current position estimation unit based on the matching result of the matching unit.
  2.  The self-position estimation device according to claim 1, wherein, when the number of points of the selected point cloud data is larger than the maximum processable number of points, the point count adjustment unit reduces the number of points of the selected point cloud data to the maximum processable number of points.
  3.  The self-position estimation device according to claim 1 or 2, wherein, when the number of points of the selected point cloud data is smaller than the maximum processable number of points, the point count adjustment unit increases the number of points of the selected point cloud data up to the maximum processable number of points.
  4.  The self-position estimation device according to claim 3, wherein the matching unit stores point cloud data that could not be matched, and the stored point cloud data is not selected from the point cloud data included in the map information at the next position estimation.
  5.  The self-position estimation device according to claim 2, wherein, when reducing the number of points of the selected point cloud data to the maximum processable number of points, the point count adjustment unit preferentially removes, from the selected point cloud data, point cloud data farther from the current temporary position.
  6.  The self-position estimation device according to claim 3, wherein, when increasing the number of points of the selected point cloud data up to the maximum processable number of points, the point count adjustment unit preferentially adds, from the selected point cloud data, point cloud data closer to the current temporary position.
  7.  The self-position estimation device according to claim 1, wherein the maximum processable number of points is calculated based on a correspondence, obtained in advance, between different numbers of points and the processing time at each number of points.
  8.  The self-position estimation device according to claim 4, wherein the matching unit calculates and stores the position and shape of a region containing point cloud data that could not be matched.
  9.  The self-position estimation device according to claim 4 or 8, wherein the matching unit adjusts and stores, according to the matching results obtained in time series, the size of the region that could not be matched against the map information in time series or the number of points of its point cloud data.
  10.  A program for causing a computer included in a self-position estimation device, which estimates its own position by comparing information on the travel environment collected by a sensor with map information, to execute:
     a procedure of tentatively estimating its own current position in absolute coordinates or in relative coordinates;
     a procedure of calculating, from the point cloud data included in the map information, a range in which the sensor can acquire the information based on the estimated current temporary position, and selecting the point cloud data within the acquirable range;
     a procedure of comparing the number of points of the point cloud data selected from the map information with a maximum processable number of points, which is the largest number of points that can be processed within the maximum allowable processing time for estimating the position, and adjusting the number of points of the selected point cloud data within a range not exceeding the maximum processable number of points based on the comparison result;
     a procedure of matching the adjusted point cloud data against the point cloud data of the information acquired by the sensor; and
     a procedure of correcting the estimated current temporary position based on the matching result.
PCT/JP2021/022270 2020-07-31 2021-06-11 Own-position estimation device and program WO2022024563A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-130479 2020-07-31
JP2020130479A JP7382910B2 (en) 2020-07-31 2020-07-31 Self-position estimation device and program

Publications (1)

Publication Number Publication Date
WO2022024563A1 true WO2022024563A1 (en) 2022-02-03

Family

ID=80035423

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/022270 WO2022024563A1 (en) 2020-07-31 2021-06-11 Own-position estimation device and program

Country Status (2)

Country Link
JP (1) JP7382910B2 (en)
WO (1) WO2022024563A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011215052A (en) * 2010-03-31 2011-10-27 Aisin Aw Co Ltd Own-vehicle position detection system using scenic image recognition
JP2017053795A (en) * 2015-09-11 2017-03-16 株式会社リコー Information processing apparatus, position attitude measurement method, and position attitude measurement program
WO2019130945A1 (en) * 2017-12-27 2019-07-04 ソニー株式会社 Information processing device, information processing method, program, and moving body
JP2019133318A (en) * 2018-01-30 2019-08-08 トヨタ自動車株式会社 Position estimation system

Also Published As

Publication number Publication date
JP2022026832A (en) 2022-02-10
JP7382910B2 (en) 2023-11-17

Similar Documents

Publication Publication Date Title
US20230152461A1 (en) Determining Yaw Error from Map Data, Lasers, and Cameras
US11024055B2 (en) Vehicle, vehicle positioning system, and vehicle positioning method
CN110969655B (en) Method, device, equipment, storage medium and vehicle for detecting parking space
CN109631896B (en) Parking lot autonomous parking positioning method based on vehicle vision and motion information
CN111656136B (en) Vehicle positioning system using lidar
JP5966747B2 (en) Vehicle travel control apparatus and method
CN110859044B (en) Integrated sensor calibration in natural scenes
US9740942B2 (en) Moving object location/attitude angle estimation device and moving object location/attitude angle estimation method
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
US20220270358A1 (en) Vehicular sensor system calibration
JP7190261B2 (en) position estimator
US11151729B2 (en) Mobile entity position estimation device and position estimation method
CN110779538A (en) Allocating processing resources across local and cloud-based systems with respect to autonomous navigation
WO2021232160A1 (en) Vehicle localization system and method
JP6858681B2 (en) Distance estimation device and method
JP6815935B2 (en) Position estimator
WO2022024563A1 (en) Own-position estimation device and program
WO2020230410A1 (en) Mobile object
US10718620B2 (en) Navigation and positioning device and method of navigation and positioning
CN114264301A (en) Vehicle-mounted multi-sensor fusion positioning method and device, chip and terminal
CN113034538A (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
JP6704307B2 (en) Moving amount calculating device and moving amount calculating method
JP7302966B2 (en) moving body
CN112406861B (en) Method and device for carrying out Kalman filter parameter selection by using map data
CN117593503A (en) Vehicle positioning based on pose correction from remote cameras

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21851199

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21851199

Country of ref document: EP

Kind code of ref document: A1