WO2018212301A1 - Self-position estimation device, control method, program, and storage medium - Google Patents

Self-position estimation device, control method, program, and storage medium

Info

Publication number
WO2018212301A1
Authority
WO
WIPO (PCT)
Prior art keywords
self
predicted
information
distance
position estimation
Prior art date
Application number
PCT/JP2018/019173
Other languages
French (fr)
Japanese (ja)
Inventor
加藤 正浩
岩井 智昭
多史 藤谷
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社
Priority to JP2019518877A (patent JP6968877B2)
Publication of WO2018212301A1
Priority to JP2021175327A (patent JP2022031266A)
Priority to JP2022160983A (patent JP2022176322A)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06: Systems determining position data of a target
    • G01S17/42: Simultaneous measurement of distance and other co-ordinates
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/09: Arrangements for giving variable traffic instructions
    • G08G1/0962: Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968: Systems involving transmission of navigation instructions to the vehicle
    • G08G1/0969: Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map

Definitions

  • The present invention relates to a self-position estimation technique.
  • Patent Document 1 discloses a technique for estimating a self-position by collating the output of a measurement sensor with the position information of features registered in advance on a map.
  • Patent Document 2 discloses a vehicle position estimation technique using a Kalman filter.
  • The present invention has been made to solve the above-described problems, and its main object is to provide a self-position estimation device that can suitably suppress a decrease in self-position estimation accuracy even when an error occurs in either the map or a measurement value.
  • The invention according to claim 1 is a self-position estimation device comprising: a first acquisition unit that acquires predicted position information indicating a predicted self-position; a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition unit that acquires self-position accuracy information of the moving body; and a correction unit that corrects the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
  • The invention according to claim 7 is a control method executed by the self-position estimation device, comprising: a first acquisition step of acquiring predicted position information indicating a predicted self-position; a second acquisition step of acquiring first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition step of acquiring self-position accuracy information of the moving body; and a correction step of correcting the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
  • The invention according to claim 8 is a program executed by a computer, causing the computer to function as: a first acquisition unit that acquires predicted position information indicating a predicted self-position; a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition unit that acquires self-position accuracy information of the moving body; and a correction unit that corrects the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
  • FIG. 6 shows the functional blocks of the own-vehicle position estimation unit. FIG. 7 shows the positional relationship between the estimated own vehicle position and a feature. FIG. 8 is a flowchart of the own vehicle position estimation process. FIGS. 9 and 10 show experimental results of the own vehicle position estimation process according to the present embodiment.
  • FIG. 11 shows experimental results of a conventional own vehicle position estimation process when the position coordinates in the feature information of a certain feature are shifted by 80 cm in the traveling direction.
  • FIG. 12 shows experimental results of the own vehicle position estimation process according to the present embodiment when the position coordinates in the feature information of a certain feature are shifted by 80 cm in the traveling direction.
  • According to a preferred embodiment of the present invention, the self-position estimation device comprises: a first acquisition unit that acquires predicted position information indicating a predicted self-position; a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition unit that acquires self-position accuracy information of the moving body; and a correction unit that corrects the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
  • According to this aspect, the self-position estimation device can suitably correct the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
  • In one aspect of the self-position estimation device, the correction unit corrects the predicted self-position based on the difference value, the self-position accuracy information, and measurement accuracy information of the measurement unit. In this aspect, the self-position estimation device can suitably correct the self-position by further taking the measurement accuracy of the measurement unit into account.
  • In another aspect of the self-position estimation device, the correction unit calculates an evaluation value for evaluating the difference value based on the difference value and the self-position accuracy information, and determines, based on the evaluation value, the degree to which the predicted self-position is corrected by the difference value.
  • According to this aspect, by evaluating the difference value against the self-position accuracy information, the self-position estimation device can accurately determine how strongly the predicted self-position should be corrected by the difference value. Thus, even when an error exists in the first distance information or the second distance information, a decrease in the vehicle position estimation accuracy can be suitably suppressed.
  • In this case, preferably, the correction unit corrects the predicted self-position by a value obtained by multiplying the difference value by a predetermined gain, and determines a correction coefficient for the gain based on the evaluation value. More preferably, the gain is a Kalman gain.
  • In another aspect of the self-position estimation device, the second acquisition unit calculates the distance indicated by the second distance information based on the predicted position information and the position information of the object recorded in map information.
  • According to this aspect, the self-position estimation device can generate the second distance information from the map information and use it suitably for self-position estimation.
  • According to another preferred embodiment of the present invention, a control method executed by the self-position estimation device comprises: a first acquisition step of acquiring predicted position information indicating a predicted self-position; a second acquisition step of acquiring first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition step of acquiring self-position accuracy information of the moving body; and a correction step of correcting the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information. By executing this control method, the self-position estimation device can suitably correct the predicted self-position.
  • According to another preferred embodiment of the present invention, a program executed by a computer causes the computer to function as: a first acquisition unit that acquires predicted position information indicating a predicted self-position; a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition unit that acquires self-position accuracy information of the moving body; and a correction unit that corrects the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
  • By executing this program, the computer can suitably correct the predicted self-position. Preferably, the program is stored in a storage medium.
  • FIG. 1 is a schematic configuration diagram of a driving support system according to the present embodiment.
  • The driving support system shown in FIG. 1 is mounted on a vehicle and comprises an in-vehicle device 1 that performs control related to driving support of the vehicle, a lidar (Light Detection and Ranging, or Laser Illuminated Detection And Ranging) 2, a gyro sensor 3, a vehicle speed sensor 4, and a GPS receiver 5.
  • The in-vehicle device 1 is electrically connected to the lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5, and estimates, based on their outputs, the position of the vehicle on which the in-vehicle device 1 is mounted (also called the "own vehicle position"). Based on the estimation result, the in-vehicle device 1 performs automatic driving control and the like so that the vehicle travels along the route to the set destination.
  • The in-vehicle device 1 stores a map database (DB) 10 containing road data and feature information, i.e., information about landmark features provided near the road.
  • The features serving as the above-mentioned landmarks are, for example, kilometer posts, 100 m posts, delineators, traffic infrastructure facilities (for example, signs, direction boards, and traffic signals), utility poles, street lamps, and the like that are periodically arranged along the roadside.
  • Based on this feature information, the in-vehicle device 1 estimates the own vehicle position by collating it with the output of the lidar 2 and the like.
  • The in-vehicle device 1 is an example of the "self-position estimation device" in the present invention.
  • The lidar 2 emits pulsed laser light over predetermined angular ranges in the horizontal and vertical directions, thereby discretely measuring the distance to objects in the outside world and generating three-dimensional point cloud information indicating the positions of those objects.
  • In this case, the lidar 2 includes an irradiation unit that emits laser light while changing the irradiation direction, a light receiving unit that receives the reflected light (scattered light) of the emitted laser light, and an output unit that outputs scan data based on the light reception signal produced by the light receiving unit.
  • The scan data is generated from the irradiation direction corresponding to the laser light received by the light receiving unit and the response delay time of that laser light, which is specified based on the light reception signal.
  • In general, the accuracy of the lidar's distance measurement is higher the shorter the distance to the object, and lower the longer the distance.
  • The lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5 each supply their output data to the in-vehicle device 1.
  • The lidar 2 is an example of the "measurement unit" in the present invention.
  • FIG. 2 is a block diagram showing a functional configuration of the in-vehicle device 1.
  • The in-vehicle device 1 mainly includes an interface 11, a storage unit 12, an input unit 14, a control unit 15, and an information output unit 16. These elements are connected to one another via a bus line.
  • The interface 11 acquires output data from sensors such as the lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5, and supplies it to the control unit 15.
  • The storage unit 12 stores the program executed by the control unit 15 and the information necessary for the control unit 15 to execute predetermined processes.
  • In the present embodiment, the storage unit 12 also stores the map DB 10.
  • FIG. 3 shows an example of the data structure of the map DB 10.
  • The map DB 10 includes facility information, road data, and feature information.
  • The feature information associates, with each feature, a feature ID serving as the feature's index and at least position information indicating the absolute position of the feature, expressed by latitude and longitude (and altitude) or the like.
  • The map DB 10 may be updated regularly.
  • In that case, the control unit 15 receives partial map information for the area to which the own vehicle position belongs from a server device that manages the map information, via a communication unit (not shown), and reflects it in the map DB 10.
  • The input unit 14 is, for example, a button, a touch panel, a remote controller, or a voice input device operated by the user.
  • The information output unit 16 is, for example, a display or a speaker that produces output under the control of the control unit 15.
  • The control unit 15 includes a CPU that executes programs, and controls the entire in-vehicle device 1.
  • The control unit 15 includes the own-vehicle position estimation unit 17.
  • The control unit 15 is an example of the "first acquisition unit", "second acquisition unit", "third acquisition unit", "correction unit", and the "computer" that executes the program in the present invention.
  • The own-vehicle position estimation unit 17 corrects the vehicle position estimated from the output data of the gyro sensor 3, the vehicle speed sensor 4, and/or the GPS receiver 5, based on the distance and angle measured by the lidar 2 for a feature and on the position information of that feature extracted from the map DB 10.
  • In the present embodiment, as a state estimation method based on Bayesian estimation, the own-vehicle position estimation unit 17 alternately executes a prediction step of estimating the vehicle position from the output data of the gyro sensor 3 and the vehicle speed sensor 4, and a measurement update step of correcting the estimated value of the vehicle position calculated in the prediction step.
  • Various filters developed for Bayesian estimation can be used as the state estimation filter in these steps; examples include the extended Kalman filter, the unscented Kalman filter, and the particle filter. Various methods have thus been proposed for position estimation based on Bayesian estimation. In the following, vehicle position estimation using an extended Kalman filter will be briefly described as an example.
  • FIG. 4 is a diagram showing the state variable vector x in two-dimensional orthogonal coordinates.
  • The vehicle position on the plane defined by the two-dimensional x-y orthogonal coordinates is represented by the coordinates "(x, y)" and the orientation "θ" of the vehicle.
  • The orientation θ is defined as the angle formed by the traveling direction of the vehicle and the x axis.
  • The coordinates (x, y) indicate an absolute position corresponding, for example, to a combination of latitude and longitude.
  • FIG. 5 is a diagram illustrating the schematic relationship between the prediction step and the measurement update step.
  • FIG. 6 shows an example of the functional blocks of the own-vehicle position estimation unit 17. As shown in FIG. 5, by repeating the prediction step and the measurement update step, the estimated value of the state variable vector "X" indicating the own vehicle position is successively calculated and updated. As shown in FIG. 6, the own-vehicle position estimation unit 17 has a position prediction unit 21 that performs the prediction step and a position estimation unit 22 that performs the measurement update step.
  • The position prediction unit 21 includes a dead reckoning block 23 and a position prediction block 24, and the position estimation unit 22 includes a feature search/extraction block 25 and a position correction block 26.
  • In the following, the provisional estimate (predicted value) computed in the prediction step is denoted by appending "-" above the symbol representing it, and the more accurate estimate updated in the measurement update step is denoted by appending "^" above the symbol representing it.
  • In the prediction step, the position prediction block 24 of the own-vehicle position estimation unit 17 adds the obtained moving distance and azimuth change to the state variable vector X^(t-1) at time t-1 calculated in the immediately preceding measurement update step, thereby calculating the predicted value X-(t) of the own vehicle position at time t (also referred to as the "predicted own vehicle position").
  • At the same time, the covariance matrix "P-(t)" corresponding to the error distribution of the predicted own vehicle position X-(t) is calculated from the covariance matrix "P^(t-1)" at time t-1 calculated in the immediately preceding measurement update step.
  • Next, in the measurement update step, the feature search/extraction block 25 of the own-vehicle position estimation unit 17 associates the position vectors of features registered in the map DB 10 with the scan data of the lidar 2. When such an association is made, the feature search/extraction block 25 acquires the measurement value "Z(t)" of the associated feature produced by the lidar 2 (referred to as the "feature measurement value"), and the value "Z-(t)" obtained by modeling the measurement process of the lidar 2 using the predicted own vehicle position X-(t) and the position vector of the feature registered in the map DB 10 (referred to as the "feature predicted value").
  • The feature measurement value Z(t) is a two-dimensional vector in the body coordinate system, obtained by converting the distance and scan angle of the feature measured by the lidar 2 at time t into components along the axes of the vehicle's traveling direction and lateral direction. The position correction block 26 of the own-vehicle position estimation unit 17 then calculates the difference value between the feature measurement value Z(t) and the feature predicted value Z-(t), as shown in equation (1).
  • As shown in equation (2), the position correction block 26 of the own-vehicle position estimation unit 17 multiplies the difference value between the feature measurement value Z(t) and the feature predicted value Z-(t) by the Kalman gain "K(t)" and adds the result to the predicted own vehicle position X-(t), thereby calculating the updated state variable vector X^(t) (also referred to as the "estimated own vehicle position").
  • As in the prediction step, the position correction block 26 of the own-vehicle position estimation unit 17 obtains the covariance matrix P^(t) (also written simply as P(t)) corresponding to the error distribution of the estimated own vehicle position X^(t) from the covariance matrix P-(t).
  • Parameters such as the Kalman gain K(t) can be calculated in the same manner as in known self-position estimation techniques using an extended Kalman filter.
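The bodies of equations (1) and (2) are not reproduced in this text, but they describe the textbook EKF measurement update. The sketch below is a generic reconstruction under that assumption; the measurement-model Jacobian H and the observation noise matrix R are assumed inputs rather than quantities defined here.

```python
import numpy as np

def measurement_update(x_pred, P_pred, z_meas, z_pred, H, R):
    """Standard EKF measurement update corresponding to equations (1) and (2).

    z_meas : feature measurement value Z(t) (body-frame [x, y] distances)
    z_pred : feature predicted value Z-(t) from the map and X-(t)
    H      : Jacobian H(t) of the measurement model
    R      : observation noise covariance R(t)
    """
    dz = z_meas - z_pred                            # equation (1): [dx, dy]
    S = H @ P_pred @ H.T + R                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain K(t)
    x_est = x_pred + K @ dz                         # equation (2): X^(t)
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred  # updated covariance P^(t)
    return x_est, P_est
```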
  • When the position vectors registered in the map DB 10 and the scan data of the lidar 2 can be associated for a plurality of features, the own-vehicle position estimation unit 17 may perform the measurement update step based on any one selected feature measurement value, or may perform the measurement update step multiple times based on all of the feature measurement values that could be associated.
  • When using a plurality of feature measurement values, the own-vehicle position estimation unit 17 takes into account that the lidar measurement accuracy deteriorates for features farther from the lidar 2, and assigns a smaller weight to a feature the longer the distance between the lidar 2 and that feature.
  • In this way, the prediction step and the measurement update step are performed repeatedly, and the predicted own vehicle position X-(t) and the estimated own vehicle position X^(t) are calculated successively, so that the most likely own vehicle position is calculated.
  • The feature measurement value Z(t) is an example of the "first distance information" of the present invention, and the feature predicted value Z-(t) is an example of the "second distance information" of the present invention.
  • In the present embodiment, the own-vehicle position estimation unit 17 calculates a value that evaluates the difference value between the feature measurement value Z(t) and the feature predicted value Z-(t) (also referred to as the "difference evaluation value"), and corrects the Kalman gain K(t) according to the difference evaluation value.
  • Thereby, the own-vehicle position estimation unit 17 suitably suppresses a decrease in the own vehicle position estimation accuracy even when an error arises in either the map DB 10 or the measurement values of the lidar 2.
  • In a general extended Kalman filter (EKF), the vehicle position is calculated using the difference value between the feature measurement value Z(t) and the feature predicted value Z-(t).
  • As the difference value shown in equation (1) increases, the correction amount applied to the predicted own vehicle position X-(t) increases.
  • The difference value becomes large due to one of the following causes (A) to (C).
  • (C) The position coordinates of the feature in the map data used for obtaining the feature predicted value are shifted. This occurs when, for example, after the map data was created, the sign of the feature was tilted by a vehicle collision, or the structure of the feature was removed or moved, so that the position of the feature changed. In this state, the position coordinates of the feature in the map data do not match the actual position coordinates of the feature; that is, there is a discrepancy between the position of the feature in the map data and its position in the real environment.
  • In these cases, the vehicle position estimation accuracy is lowered. Therefore, when either of the above (A) or (C) occurs, it is preferable to make the degree of correction applied to the predicted own vehicle position X-(t) as low as possible.
  • FIG. 7 shows the positional relationship between the estimated own vehicle position and a feature.
  • The coordinate system shown in FIG. 7 is the body coordinate system based on the vehicle: the x axis indicates the traveling direction of the vehicle, and the y axis indicates the direction perpendicular to the traveling direction (the lateral direction of the vehicle).
  • The estimated own vehicle position has an error range indicated by an error ellipse 40.
  • The error ellipse 40 is defined by the position estimation accuracy "σP(x)" in the x direction and the position estimation accuracy "σP(y)" in the y direction.
  • The position estimation accuracy σP is obtained by converting the covariance matrix P(t) into the body coordinate system using the Jacobian matrix "H(t)", as shown in equation (3).
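The exact form of equation (3) is not reproduced above. A sketch consistent with the description is to project the covariance into the body frame with the Jacobian and take the standard deviations of the diagonal, as follows; this projection is an assumption, not the patent's formula.

```python
import numpy as np

def position_estimation_accuracy(P, H):
    """Project the covariance P(t) into the body coordinate system with the
    Jacobian H(t) and take the standard deviations along the traveling (x)
    and lateral (y) axes (a sketch of equation (3))."""
    P_body = H @ P @ H.T
    return np.sqrt(P_body[0, 0]), np.sqrt(P_body[1, 1])  # sigma_P(x), sigma_P(y)
```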
  • The feature predicted value Z-(t) is obtained by modeling the measurement process of the lidar 2 using the predicted own vehicle position X-(t) and the position vector of the feature registered in the map DB 10.
  • The feature measurement value Z(t) is the measurement value of the feature produced by the lidar 2 when the position vector of the feature registered in the map DB 10 can be associated with the scan data of the lidar 2.
  • The feature measurement value Z(t) has an error range indicated by an error ellipse 43.
  • The error ellipse 43 is defined by the measurement accuracy "σL(x)" in the x direction and the measurement accuracy "σL(y)" in the y direction.
  • The measurement error of the lidar increases in proportion to the square of the distance between the lidar and the feature being measured; therefore, the lidar measurement accuracy σL (σL(x), σL(y)) is obtained based on the distance from the estimated own vehicle position to the feature measurement value Z(t).
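A minimal sketch of this accuracy model is given below. The proportionality constant c and the use of a single isotropic constant for both axes are assumptions introduced here for illustration.

```python
def lidar_measurement_accuracy(dx_m, dy_m, c=0.001):
    """Lidar accuracy model sketch: the measurement error grows in
    proportion to the square of the distance between the lidar and the
    measured feature. dx_m, dy_m are the body-frame distances to the
    feature; c is an illustrative constant, not from the patent."""
    d_squared = dx_m ** 2 + dy_m ** 2   # squared distance to the feature
    sigma = c * d_squared
    return sigma, sigma                 # sigma_L(x), sigma_L(y)
```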
  • The difference value between the feature predicted value Z-(t) and the feature measurement value Z(t) is "dx" in the x direction and "dy" in the y direction, as shown in equation (1).
  • The position estimation accuracy σP is an example of the "self-position accuracy information" of the present invention, and the lidar measurement accuracy σL is an example of the "measurement accuracy information" of the present invention.
  • In the present embodiment, the own-vehicle position estimation unit 17 evaluates the difference value between the feature measurement value Z(t) and the feature predicted value Z-(t) using the position estimation accuracy σP (σP(x), σP(y)) and the lidar measurement accuracy σL (σL(x), σL(y)).
  • Specifically, the own-vehicle position estimation unit 17 determines the validity of the difference value based on the ratio of the difference value to the error range 40 of the estimated own vehicle position and the lidar error range 43.
  • In the present embodiment, the own-vehicle position estimation unit 17 calculates the difference evaluation values "Ex" and "Ey" using the evaluation formula (4). In this calculation, the absolute values of the difference values dx and dy are used.
  • When the difference evaluation values Ex and Ey are large, the own-vehicle position estimation unit 17 determines that the above (A) or (C) has occurred as the cause of the enlarged difference value. In this case, the own-vehicle position estimation unit 17 calculates coefficient values "aX(t)" and "aY(t)" for reducing the Kalman gain according to the difference evaluation values Ex and Ey, and multiplies the Kalman gain by them.
  • The coefficient values aX(t) and aY(t) take values from 0 to 1, and are set using an expression or a table so that they become smaller as the difference evaluation values Ex and Ey become larger.
  • For example, the coefficient values aX(t) and aY(t) are set based on equation (5).
  • The own-vehicle position estimation unit 17 then calculates the updated Kalman gain "K(t)'" from the coefficient values aX(t) and aY(t) based on equation (6).
  • In this way, when the reliability of the difference value is low, the own-vehicle position estimation unit 17 reduces the Kalman gain accordingly and can reduce the correction amount applied to the predicted own vehicle position X-(t). Inaccurate corrections are therefore prevented, and the own vehicle position estimation accuracy improves.
  • The coefficient values aX(t) and aY(t) are examples of the "correction coefficient" in the present invention.
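The sketch below puts these pieces together. Since the bodies of equations (4) and (5) are not reproduced in this text, the ratio-based evaluation formula and the quadratic attenuation law used here are assumptions consistent with the described behavior (coefficients in [0, 1] that shrink as the evaluation values grow), not the patent's exact expressions.

```python
import numpy as np

def corrected_kalman_gain(K, dx, dy, sigma_P, sigma_L, threshold=1.0):
    """Gain correction sketch for equations (4) to (6)."""
    # Difference evaluation values Ex, Ey: |difference| relative to the
    # combined position-estimation and lidar error ranges (cf. equation (4)).
    Ex = abs(dx) / (sigma_P[0] + sigma_L[0])
    Ey = abs(dy) / (sigma_P[1] + sigma_L[1])
    # Coefficient values a_X(t), a_Y(t) in [0, 1]: kept at 1 while the
    # difference is plausible, shrinking toward 0 as the evaluation value
    # grows (cf. equation (5); the quadratic decay is an assumed example).
    a_x = 1.0 if Ex <= threshold else 1.0 / Ex ** 2
    a_y = 1.0 if Ey <= threshold else 1.0 / Ey ** 2
    # Equation (6) sketch: scale the gain columns acting on dx and dy.
    return K @ np.diag([a_x, a_y])      # corrected gain K(t)'
```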
  • FIG. 8 is a flowchart of the own vehicle position estimation process performed by the own-vehicle position estimation unit 17 of the in-vehicle device 1.
  • The in-vehicle device 1 repeatedly executes the process of the flowchart of FIG. 8.
  • First, the own-vehicle position estimation unit 17 sets an initial value of the own vehicle position based on the output of the GPS receiver 5 or the like (step S101).
  • Next, the own-vehicle position estimation unit 17 acquires the vehicle body speed from the vehicle speed sensor 4 and the angular velocity in the yaw direction from the gyro sensor 3 (step S102).
  • The own-vehicle position estimation unit 17 then calculates the moving distance and the azimuth change of the vehicle from the outputs obtained in step S102 (step S103).
  • The own-vehicle position estimation unit 17 adds the moving distance and azimuth change calculated in step S103 to the estimated own vehicle position X^(t-1) of one time step earlier, thereby calculating the predicted own vehicle position X-(t) (step S104). Furthermore, in step S104, the own-vehicle position estimation unit 17 obtains the position estimation accuracies σP(x) and σP(y) from the covariance matrix using equation (3). The own-vehicle position estimation unit 17 then refers to the feature information in the map DB 10 based on the predicted own vehicle position X-(t) and searches for features within the measurement range of the lidar 2 (step S105).
  • Next, from the predicted own vehicle position X-(t) and the position coordinates indicated by the feature information of each feature found in step S105, the own-vehicle position estimation unit 17 calculates the predicted position of the feature in the traveling direction and the lateral direction of the vehicle, that is, the feature predicted value Z-(t) (step S106).
  • The own-vehicle position estimation unit 17 also calculates, from the scan data of the lidar 2, the measured distance to each feature in the traveling direction and the lateral direction of the vehicle (that is, the feature measurement value Z(t)), and calculates the lidar measurement accuracies σL(x) and σL(y) (step S107).
  • Next, the own-vehicle position estimation unit 17 calculates the difference values dx and dy between the feature measurement value and the feature predicted value based on equation (1) (step S108). The own-vehicle position estimation unit 17 then calculates the difference evaluation values Ex and Ey based on equation (4) (step S109).
  • When a difference evaluation value exceeds a predetermined value (step S110; Yes), the own-vehicle position estimation unit 17 generates the coefficient values aX(t) and aY(t) from the difference evaluation values Ex and Ey (step S111). For example, by calculating the coefficient values aX(t) and aY(t) based on equation (5), the own-vehicle position estimation unit 17 obtains coefficient values in the range from 0 to 1 that become smaller as the difference evaluation values Ex and Ey become larger.
  • The own-vehicle position estimation unit 17 may set the coefficient value corresponding to a difference evaluation value that does not exceed the predetermined value (for the difference evaluation value Ex, the coefficient value aX(t)) to 1 regardless of equation (5).
  • On the other hand, when the difference evaluation values Ex and Ey are equal to or smaller than the predetermined value (step S110; No), the own-vehicle position estimation unit 17 sets the coefficient values aX(t) and aY(t) to "1" (step S112). That is, the coefficient value aX(t) is set by comparing the difference evaluation value Ex with the predetermined value, and the coefficient value aY(t) is set by comparing the difference evaluation value Ey with the predetermined value.
  • The own-vehicle position estimation unit 17 then generates the Kalman gain K(t)' by multiplying the Kalman gain K(t) by the coefficient values aX(t) and aY(t) based on equation (6) (step S113). Thereafter, based on equation (2), the own-vehicle position estimation unit 17 corrects the predicted own vehicle position X-(t) using the generated Kalman gain K(t)' instead of K(t), and calculates the estimated own vehicle position X^(t) (step S114).
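For orientation, the sketch below walks one cycle through steps S103 to S114, combining the pieces sketched earlier. The motion model, the body-frame measurement model and its Jacobian, the evaluation formula, and the coefficient law are all the same assumptions noted above, not the patent's exact equations.

```python
import numpy as np

def position_estimation_cycle(x_prev, P_prev, v, omega, dt,
                              feature_map_pos, z_meas, Q, R, threshold=1.0):
    """One pass through steps S103-S114 of FIG. 8 (illustrative sketch)."""
    # S103-S104: dead reckoning from speed and yaw rate, plus covariance.
    x, y, th = x_prev
    dist, dth = v * dt, omega * dt
    x_pred = np.array([x + dist * np.cos(th), y + dist * np.sin(th), th + dth])
    F = np.array([[1, 0, -dist * np.sin(th)],
                  [0, 1,  dist * np.cos(th)],
                  [0, 0,  1.0]])
    P_pred = F @ P_prev @ F.T + Q

    # S105-S106: feature predicted value Z-(t), the map position of the
    # feature rotated into the vehicle body frame (an assumed model).
    delta = feature_map_pos - x_pred[:2]
    c, s = np.cos(x_pred[2]), np.sin(x_pred[2])
    z_pred = np.array([c * delta[0] + s * delta[1],
                       -s * delta[0] + c * delta[1]])
    H = np.array([[-c, -s, -s * delta[0] + c * delta[1]],
                  [s, -c, -c * delta[0] - s * delta[1]]])

    # S104/S107: accuracies sigma_P and sigma_L (assumed forms, see above).
    sigma_P = np.sqrt(np.diag(H @ P_pred @ H.T))
    sigma_L = 0.001 * np.sum(z_meas ** 2) * np.ones(2)

    # S108-S109: difference values and difference evaluation values.
    dxy = z_meas - z_pred
    E = np.abs(dxy) / (sigma_P + sigma_L)

    # S110-S113: coefficient values and corrected Kalman gain K(t)'.
    a = np.where(E <= threshold, 1.0, 1.0 / E ** 2)
    S_inn = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S_inn) @ np.diag(a)

    # S114: estimated own-vehicle position X^(t) and covariance update.
    x_est = x_pred + K @ dxy
    P_est = (np.eye(3) - K @ H) @ P_pred
    return x_est, P_est
```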
  • FIG. 9(A) is a graph showing the transition of the position in the x direction and the y direction when the vehicle travels a driving test course while executing the own vehicle position estimation process shown in FIG. 8.
  • The graph draws the vehicle position serving as the reference (also referred to as the "reference position") and the estimated own vehicle position calculated by the own vehicle position estimation process shown in FIG. 8; the two overlap to the extent that they cannot be distinguished visually.
  • FIG. 9(B) is an enlarged view of the data in the travel section within the frame 70 of FIG. 9(A).
  • In FIG. 9(B), the positions of the features that are the measurement targets are plotted with circles.
  • FIG. 10(A) is a graph showing the transition of the difference value dx in the travel section of the frame 70.
  • FIG. 10(B) shows the difference in the traveling direction between the estimated own vehicle position based on the present embodiment and the reference position in the travel section of the frame 70.
  • FIG. 10(C) shows the difference in the lateral direction between the estimated own vehicle position based on the present embodiment and the reference position in the travel section of the frame 70, and FIG. 10(D) shows the difference in azimuth between them.
  • As shown in FIGS. 9 and 10, the own-vehicle position estimation unit 17 estimates the own vehicle position based on the feature information of features that exist at irregular intervals, and corrects the predicted own vehicle position based on the difference value shown in FIG. 10(A).
  • The differences between the reference position and the estimated own vehicle position in the traveling direction and the lateral direction of the vehicle are each within about 0.1 m.
  • The azimuth difference is also within about 0.1 degrees.
  • FIGS. 11(A) to 11(D) show experimental results corresponding to FIGS. 10(A) to 10(D) for the case where the position coordinates in the feature information of the feature indicated by the plot 71 in FIG. 9(B) are shifted by 80 cm in the traveling direction and the Kalman gain K(t) is used without correction by the coefficient values aX(t) and aY(t).
  • In this case, owing to the shift in the position coordinates of the feature information for the feature indicated by the plot 71 in FIG. 9(B), the difference value dx calculated using those position coordinates deviates significantly compared with the difference values dx calculated using the position coordinates of the feature information of other features.
  • As a result, the estimated own vehicle position calculated using the position coordinates of the feature information for the feature indicated by the plot 71 in FIG. 9(B) deviates from the reference position in the traveling direction of the vehicle (see the round frame 73). The error in the estimated own vehicle position generated in this way persists until the next feature used for own vehicle position estimation (see the plot 74 in FIG. 9) is detected.
  • The difference value dx, the position estimation accuracy σP(x), and the lidar measurement accuracy σL(x) at the time the feature indicated by the plot 71 in FIG. 9(B) is detected are as follows.
  • When these values are substituted into equation (5), the coefficient value aX(t) becomes aX(t) = 0.854 × 10⁻³ ≈ 0.
  • In this case, the Kalman gain K(t)' is obtained as shown in equation (7).
  • As described above, the own-vehicle position estimation unit 17 of the in-vehicle device 1 includes the position prediction unit 21, which generates information indicating the predicted own vehicle position X-(t), and the position estimation unit 22.
  • The position estimation unit 22 calculates the feature measurement value Z(t), which indicates the distance from the moving body to the target object measured by the lidar 2, and the feature predicted value Z-(t), which is predicted based on the feature information. The position estimation unit 22 then determines the coefficient values aX(t) and aY(t) for correcting the Kalman gain K(t) according to the difference evaluation values Ex and Ey, which are based on the difference values dx and dy between the feature measurement value and the feature predicted value, the position estimation accuracy σP, and the lidar measurement accuracy σL.
  • Thereby, the in-vehicle device 1 can suitably suppress a decrease in the own vehicle position estimation accuracy even when an error arises in either the map DB 10 or the measurement values of the lidar 2.
  • As a modification, instead of multiplying the Kalman gain K(t) by the coefficient values aX(t) and aY(t), the own-vehicle position estimation unit 17 may multiply the difference values dx and dy by the coefficient values aX(t) and aY(t). In this case, the own-vehicle position estimation unit 17 calculates the estimated own vehicle position based on equation (8).
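A minimal sketch of this modification follows; the function name and interface are illustrative. Since K @ diag(a) @ dz equals K @ (a * dz), scaling the difference values yields the same correction term as scaling the gain columns.

```python
import numpy as np

def update_with_scaled_difference(x_pred, K, dx, dy, a_x, a_y):
    """Equation (8) variant sketch: the difference values dx, dy are
    multiplied by the coefficient values a_X(t), a_Y(t), while the Kalman
    gain K(t) itself is left uncorrected."""
    dz_scaled = np.array([a_x * dx, a_y * dy])
    return x_pred + K @ dz_scaled   # estimated own-vehicle position X^(t)
```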
  • In this modification, since the Kalman gain K(t) is generated by the general formula (10) and is not updated based on equation (6), the update of the covariance matrix P(t) does not reflect the use of feature information with low reliability. That is, while the position estimate reflects the reliability, the covariance matrix does not, so processing that takes this difference into account is necessary.
  • As another modification, the own-vehicle position estimation unit 17 may multiply the diagonal components of the observation noise matrix "R(t)", which is used when calculating the Kalman gain K(t) by the general formula (10), by the coefficient values aX(t) and aY(t), as shown in equation (11).
  • In this case, the own-vehicle position estimation unit 17 sets the coefficient values aX(t) and aY(t) so that they become larger as the evaluation values Ex and Ey shown in equation (4) become larger.
  • For example, the own-vehicle position estimation unit 17 sets the coefficient values aX(t) and aY(t) as shown in equation (12).
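A sketch of this noise-inflation variant is given below. The growth law for the coefficients (equation (12)) is not reproduced in this text, so the coefficients are left as caller-supplied inputs; only the structure of the gain computation is shown.

```python
import numpy as np

def gain_with_inflated_noise(P_pred, H, R, a_x, a_y):
    """Equations (10)-(11) variant sketch: the diagonal components of the
    observation noise matrix R(t) are multiplied by coefficient values that
    grow with the evaluation values Ex, Ey, so the Kalman gain from the
    general formula (10) shrinks for unreliable measurements and the
    covariance update stays consistent with the inflated noise."""
    R_scaled = R.copy()
    R_scaled[0, 0] *= a_x                   # equation (11): inflate noise for dx
    R_scaled[1, 1] *= a_y                   # and for dy
    S = H @ P_pred @ H.T + R_scaled
    return P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain K(t)
```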
  • The configuration of the driving support system shown in FIG. 1 is an example, and the configuration of driving support systems to which the present invention is applicable is not limited to the configuration shown in FIG. 1.
  • For example, instead of the in-vehicle device 1, an electronic control device of the vehicle may execute the processing of the own-vehicle position estimation unit 17 of the in-vehicle device 1.
  • In that case, the map DB 10 may be stored in a storage unit of the vehicle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A host vehicle position estimation unit 17 of an on-vehicle device 1 is provided with: a position prediction unit 21 which generates information indicating a predicted host vehicle position X-(t); and a position estimation unit 22. The position estimation unit 22 calculates a planimetric feature predicted value Z-(t) predicted on the basis of planimetric feature information and a planimetric feature measured value Z(t) indicating the distance from a moving body to an object measured by a lidar 2. Furthermore, the position estimation unit 22 determines coefficient values aX(t), aY(t) for correcting the Kalman gain K(t), in accordance with difference evaluation values Ex, Ey based on difference values dx, dy between the planimetric feature measured value and the planimetric feature predicted value, the position estimation accuracy σP, and the lidar measurement accuracy σL.

Description

Self-position estimation apparatus, control method, program, and storage medium
The present invention relates to a self-position estimation technique.
Conventionally, techniques are known that detect a feature installed ahead of a vehicle using a radar or a camera and calibrate the vehicle position based on the detection result. For example, Patent Document 1 discloses a technique for estimating a self-position by collating the output of a measurement sensor with the position information of features registered in advance on a map. Patent Document 2 discloses a vehicle position estimation technique using a Kalman filter.
Patent Document 1: JP 2013-257742 A. Patent Document 2: JP 2017-72422 A.
When the self-position is estimated by collating the output of a measurement sensor with the position information of features registered in advance on a map, an error in either the map or the sensor output causes the self-position to be corrected incorrectly, and the self-position estimation accuracy decreases. Errors arise in the output of the measurement sensor when, for example, many objects with high reflection intensity, such as delineators or the reflectors of other vehicles, are present and are mistakenly detected as the target feature, or when another vehicle lies between the own vehicle and the target feature so that all or part of the scan by the measurement sensor is occluded. Errors arise in the map when, for example, the latest information is not reflected on the map and the position coordinates of a feature are inaccurate.
The present invention has been made to solve the above-described problems, and its main object is to provide a self-position estimation device that can suitably suppress a decrease in self-position estimation accuracy even when an error occurs in either the map or a measurement value.
The invention according to claim 1 is a self-position estimation device comprising: a first acquisition unit that acquires predicted position information indicating a predicted self-position; a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition unit that acquires self-position accuracy information of the moving body; and a correction unit that corrects the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
The invention according to claim 7 is a control method executed by the self-position estimation device, comprising: a first acquisition step of acquiring predicted position information indicating a predicted self-position; a second acquisition step of acquiring first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition step of acquiring self-position accuracy information of the moving body; and a correction step of correcting the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
The invention according to claim 8 is a program executed by a computer, causing the computer to function as: a first acquisition unit that acquires predicted position information indicating a predicted self-position; a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition unit that acquires self-position accuracy information of the moving body; and a correction unit that corrects the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
FIG. 1 is a schematic configuration diagram of the driving support system. FIG. 2 is a block diagram showing the functional configuration of the in-vehicle device. FIG. 3 shows an example of the data structure of the map DB. FIG. 4 shows the state variable vector in two-dimensional orthogonal coordinates. FIG. 5 shows the schematic relationship between the prediction step and the measurement update step. FIG. 6 shows the functional blocks of the own-vehicle position estimation unit. FIG. 7 shows the positional relationship between the estimated own vehicle position and a feature. FIG. 8 is a flowchart of the own vehicle position estimation process. FIGS. 9 and 10 show experimental results of the own vehicle position estimation process according to the present embodiment. FIG. 11 shows experimental results of a conventional own vehicle position estimation process when the position coordinates in the feature information of a certain feature are shifted by 80 cm in the traveling direction. FIG. 12 shows experimental results of the own vehicle position estimation process according to the present embodiment when the position coordinates in the feature information of a certain feature are shifted by 80 cm in the traveling direction.
According to a preferred embodiment of the present invention, the self-position estimation device comprises: a first acquisition unit that acquires predicted position information indicating a predicted self-position; a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition unit that acquires self-position accuracy information of the moving body; and a correction unit that corrects the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information. According to this aspect, the self-position estimation device can suitably correct the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
In one aspect of the self-position estimation device, the correction unit corrects the predicted self-position based on the difference value, the self-position accuracy information, and measurement accuracy information of the measurement unit. In this aspect, the self-position estimation device can suitably correct the self-position by further taking the measurement accuracy of the measurement unit into account.
In another aspect of the self-position estimation device, the correction unit calculates an evaluation value for evaluating the difference value based on the difference value and the self-position accuracy information, and determines, based on the evaluation value, the degree to which the predicted self-position is corrected by the difference value. According to this aspect, by evaluating the difference value against the self-position accuracy information, the self-position estimation device can accurately determine how strongly the predicted self-position should be corrected by the difference value. Thus, even when an error exists in the first distance information or the second distance information, a decrease in the vehicle position estimation accuracy can be suitably suppressed. In this case, preferably, the correction unit corrects the predicted self-position by a value obtained by multiplying the difference value by a predetermined gain, and determines a correction coefficient for the gain based on the evaluation value. More preferably, the gain is a Kalman gain.
In another aspect of the self-position estimation device, the second acquisition unit calculates the distance indicated by the second distance information based on the predicted position information and the position information of the object recorded in map information. According to this aspect, the self-position estimation device can generate the second distance information from the map information and use it suitably for self-position estimation.
According to another preferred embodiment of the present invention, a control method executed by the self-position estimation device comprises: a first acquisition step of acquiring predicted position information indicating a predicted self-position; a second acquisition step of acquiring first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition step of acquiring self-position accuracy information of the moving body; and a correction step of correcting the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information. By executing this control method, the self-position estimation device can suitably correct the predicted self-position.
According to another preferred embodiment of the present invention, a program executed by a computer causes the computer to function as: a first acquisition unit that acquires predicted position information indicating a predicted self-position; a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object; a third acquisition unit that acquires self-position accuracy information of the moving body; and a correction unit that corrects the predicted self-position based on the difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information. By executing this program, the computer can suitably correct the predicted self-position. Preferably, the program is stored in a storage medium.
Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings. For convenience, a character with "^" or "-" attached above an arbitrary symbol is written in this specification as "A^" or "A-" ("A" being an arbitrary character).
[Schematic configuration]
FIG. 1 is a schematic configuration diagram of a driving support system according to the present embodiment. The driving support system shown in FIG. 1 includes an in-vehicle device 1 that is mounted on a vehicle and performs control related to driving support of the vehicle, a lidar (Light Detection and Ranging, or Laser Illuminated Detection and Ranging) 2, a gyro sensor 3, a vehicle speed sensor 4, and a GPS receiver 5.
The in-vehicle device 1 is electrically connected to the lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5, and estimates, based on their outputs, the position of the vehicle on which it is mounted (also called the "own vehicle position"). Based on the estimation result, the in-vehicle device 1 performs autonomous driving control and the like so that the vehicle travels along a route to a set destination. The in-vehicle device 1 stores a map database (DB: DataBase) 10 containing road data and feature information, that is, information about landmark features provided near roads. The landmark features are, for example, features arranged periodically along the roadside, such as kilometer posts, 100 m posts, delineators, traffic infrastructure facilities (e.g., signs, direction boards, traffic signals), utility poles, and street lamps. The in-vehicle device 1 estimates the own vehicle position by collating this feature information with the output of the lidar 2 and the like. The in-vehicle device 1 is an example of the "self-position estimation device" in the present invention.
The lidar 2 discretely measures distances to objects in the outside world by emitting pulsed laser light over predetermined horizontal and vertical angle ranges, and generates three-dimensional point cloud information indicating the positions of those objects. The lidar 2 includes an irradiation unit that emits laser light while changing the irradiation direction, a light receiving unit that receives the reflected (scattered) light of the emitted laser light, and an output unit that outputs scan data based on the light reception signal produced by the light receiving unit. Scan data are generated based on the irradiation direction corresponding to the received laser light and the response delay time of that laser light identified from the light reception signal. In general, the closer the object, the higher the accuracy of the lidar's distance measurement, and the farther the object, the lower the accuracy. The lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5 each supply their output data to the in-vehicle device 1. The lidar 2 is an example of the "measurement unit" in the present invention.
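To make the range geometry concrete, the following is a minimal sketch (not part of the embodiment) of how a response delay time and an irradiation direction could be turned into a range and a 2D point in the sensor frame; the function names, the flat 2D geometry, and the example values are illustrative assumptions.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def delay_to_range(delay_s: float) -> float:
    # Round-trip time of flight corresponds to twice the one-way distance.
    return C * delay_s / 2.0

def scan_point(delay_s: float, azimuth_rad: float) -> tuple:
    # Range plus irradiation direction gives a point in the sensor's x-y plane.
    r = delay_to_range(delay_s)
    return (r * math.cos(azimuth_rad), r * math.sin(azimuth_rad))

# Example: a 0.2 microsecond round trip at a 10 degree azimuth is roughly a 30 m return.
print(scan_point(0.2e-6, math.radians(10.0)))
```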
FIG. 2 is a block diagram showing the functional configuration of the in-vehicle device 1. The in-vehicle device 1 mainly includes an interface 11, a storage unit 12, an input unit 14, a control unit 15, and an information output unit 16. These elements are interconnected via a bus line.
The interface 11 acquires output data from sensors such as the lidar 2, the gyro sensor 3, the vehicle speed sensor 4, and the GPS receiver 5, and supplies them to the control unit 15.
The storage unit 12 stores the program executed by the control unit 15 and the information necessary for the control unit 15 to execute predetermined processing. In the present embodiment, the storage unit 12 stores the map DB 10 containing feature information. FIG. 3 shows an example of the data structure of the map DB 10. As shown in FIG. 3, the map DB 10 includes facility information, road data, and feature information. The feature information associates, with each feature, information about that feature; here it includes at least a feature ID corresponding to the feature's index and position information indicating the absolute position of the feature expressed by latitude and longitude (and altitude). The map DB 10 may be updated regularly. In that case, for example, the control unit 15 receives partial map information about the area to which the own vehicle position belongs from a server device that manages map information, via a communication unit (not shown), and reflects it in the map DB 10.
The input unit 14 is a button, touch panel, remote controller, voice input device, or the like operated by the user. The information output unit 16 is, for example, a display or a speaker that produces output under the control of the control unit 15.
The control unit 15 includes a CPU that executes programs and the like, and controls the entire in-vehicle device 1. In the present embodiment, the control unit 15 includes an own vehicle position estimation unit 17. The control unit 15 is an example of the "first acquisition unit", the "second acquisition unit", the "third acquisition unit", the "correction unit", and the program-executing "computer" in the present invention.
The own vehicle position estimation unit 17 corrects the own vehicle position estimated from the output data of the gyro sensor 3, the vehicle speed sensor 4, and/or the GPS receiver 5, based on the distance and angle measurements of a feature by the lidar 2 and the position information of that feature extracted from the map DB 10. In the present embodiment, as an example, the own vehicle position estimation unit 17 alternately executes, based on a state estimation method grounded in Bayesian estimation, a prediction step of estimating the own vehicle position from the output data of the gyro sensor 3, the vehicle speed sensor 4, and the like, and a measurement update step of correcting the estimate calculated in the immediately preceding prediction step. Various filters developed for Bayesian estimation can be used as the state estimation filter in these steps, for example the extended Kalman filter, the unscented Kalman filter, and the particle filter. Various methods of position estimation based on Bayesian estimation have thus been proposed. In the following, own vehicle position estimation using an extended Kalman filter is briefly described as an example.
FIG. 4 shows the state variable vector x in two-dimensional orthogonal coordinates. As shown in FIG. 4, the own vehicle position on a plane defined in the two-dimensional x-y orthogonal coordinates is expressed by the coordinates "(x, y)" and the orientation "ψ" of the own vehicle. Here, the orientation ψ is defined as the angle between the vehicle's traveling direction and the x-axis. The coordinates (x, y) indicate an absolute position corresponding, for example, to a combination of latitude and longitude.
FIG. 5 shows the schematic relationship between the prediction step and the measurement update step, and FIG. 6 shows an example of the functional blocks of the own vehicle position estimation unit 17. As shown in FIG. 5, by repeating the prediction step and the measurement update step, the estimate of the state variable vector "X" indicating the own vehicle position is calculated and updated sequentially. As shown in FIG. 6, the own vehicle position estimation unit 17 includes a position prediction unit 21 that executes the prediction step and a position estimation unit 22 that executes the measurement update step. The position prediction unit 21 includes a dead reckoning block 23 and a position prediction block 24, and the position estimation unit 22 includes a feature search/extraction block 25 and a position correction block 26. In FIG. 5, the state variable vector at the reference time (i.e., the current time) "t" of the calculation is written as "X⁻(t)" or "X^(t)" (with the state variable vector written as X(t) = (x(t), y(t), ψ(t))ᵀ). Here, the provisional estimate (predicted value) obtained in the prediction step is marked with "⁻" above the character representing it, and the more accurate estimate updated in the measurement update step is marked with "^" above the character representing it.
In the prediction step, the dead reckoning block 23 of the own vehicle position estimation unit 17 uses the vehicle's moving speed "v" and angular velocity "ω" (collectively written as the control value "u(t) = (v(t), ω(t))ᵀ") to obtain the moving distance and orientation change since the previous time. The position prediction block 24 of the own vehicle position estimation unit 17 adds the obtained moving distance and orientation change to the state variable vector X^(t−1) at time t−1 calculated in the immediately preceding measurement update step, and calculates the predicted value of the own vehicle position at time t (also called the "predicted own vehicle position") X⁻(t). At the same time, the covariance matrix "P⁻(t)" corresponding to the error distribution of the predicted own vehicle position X⁻(t) is calculated from the covariance matrix "P^(t−1)" at time t−1 obtained in the immediately preceding measurement update step.
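As a rough illustration, a prediction step of this kind could look like the following Python sketch; the simple discrete-time motion model, the time step dt, and all names are assumptions made for illustration, not the embodiment's implementation.

```python
import numpy as np

def predict(x_est: np.ndarray, v: float, omega: float, dt: float) -> np.ndarray:
    # x_est = (x, y, psi): estimated pose at time t-1 (dead reckoning input).
    x, y, psi = x_est
    # Advance by the travelled distance and the orientation change.
    return np.array([x + v * dt * np.cos(psi),
                     y + v * dt * np.sin(psi),
                     psi + omega * dt])

def predict_cov(P_est: np.ndarray, F: np.ndarray, Q: np.ndarray) -> np.ndarray:
    # Propagate the error covariance through the linearised motion model F
    # with process noise Q, giving the predicted covariance P^-(t).
    return F @ P_est @ F.T + Q
```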
In the measurement update step, the feature search/extraction block 25 of the own vehicle position estimation unit 17 associates a feature position vector registered in the map DB 10 with the scan data of the lidar 2. When this association succeeds, the feature search/extraction block 25 acquires the measured value of the associated feature by the lidar 2 (called the "feature measurement value") "Z(t)" and the estimated measurement of the feature (called the "feature prediction value") "Z⁻(t)" obtained by modeling the measurement process of the lidar 2 using the predicted own vehicle position X⁻(t) and the feature position vector registered in the map DB 10. The feature measurement value Z(t) is a two-dimensional vector in the vehicle's body coordinate system, obtained by converting the distance and scan angle of the feature measured by the lidar 2 at time t into components along the vehicle's traveling direction and lateral direction. The position correction block 26 of the own vehicle position estimation unit 17 then calculates the difference value between the feature measurement value Z(t) and the feature prediction value Z⁻(t), as shown in the following equation (1).
$$\begin{pmatrix} dx \\ dy \end{pmatrix} = Z(t) - \bar{Z}(t) \tag{1}$$
The position correction block 26 of the own vehicle position estimation unit 17 then multiplies the difference value between the feature measurement value Z(t) and the feature prediction value Z⁻(t) by the Kalman gain "K(t)" and adds the result to the predicted own vehicle position X⁻(t), thereby calculating the updated state variable vector (also called the "estimated own vehicle position") X^(t), as shown in the following equation (2).
$$\hat{X}(t) = \bar{X}(t) + K(t)\bigl(Z(t) - \bar{Z}(t)\bigr) \tag{2}$$
In the measurement update step, as in the prediction step, the position correction block 26 of the own vehicle position estimation unit 17 also obtains the covariance matrix P^(t) (also written simply as P(t)) corresponding to the error distribution of the estimated own vehicle position X^(t) from the covariance matrix P⁻(t). Parameters such as the Kalman gain K(t) can be calculated in the same way as in known self-position estimation techniques using, for example, the extended Kalman filter.
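A hedged sketch of such a measurement update, written against equations (1) and (2) with the general EKF gain, is shown below; the measurement Jacobian H and the observation noise R are taken as given inputs and are assumptions of this sketch.

```python
import numpy as np

def measurement_update(x_pred, P_pred, z_meas, z_pred, H, R):
    # Equation (1): innovation between measured and predicted feature position.
    dz = z_meas - z_pred                      # (dx, dy) in the body frame
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_est = x_pred + K @ dz                   # equation (2)
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_est, P_est, K
```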
When the association between feature position vectors registered in the map DB 10 and the scan data of the lidar 2 succeeds for a plurality of features, the own vehicle position estimation unit 17 may perform the measurement update step based on any one selected feature measurement value, or may perform the measurement update step multiple times based on all of the associated feature measurement values. When using a plurality of feature measurement values, the own vehicle position estimation unit 17 preferably gives a smaller weight to a feature the farther it is from the lidar 2, taking into account that the lidar measurement accuracy deteriorates with distance from the lidar 2.
In this way, the prediction step and the measurement update step are performed repeatedly, and the predicted own vehicle position X⁻(t) and the estimated own vehicle position X^(t) are calculated sequentially, yielding the most probable own vehicle position.
In the above description, the feature measurement value Z(t) is an example of the "first distance information" of the present invention, and the feature prediction value Z⁻(t) is an example of the "second distance information" of the present invention.
[Details of own vehicle position estimation]

Next, the details of the own vehicle position estimation in the present embodiment are described. Schematically, the own vehicle position estimation unit 17 calculates evaluation values (also called "difference evaluation values") that evaluate the difference value between the feature measurement value Z(t) and the feature prediction value Z⁻(t), and corrects the Kalman gain K(t) according to the difference evaluation values. In this way, the own vehicle position estimation unit 17 suitably suppresses degradation of the own vehicle position estimation accuracy even when an error occurs in either the map DB 10 or the measured values of the lidar 2.
In EKF (Extended Kalman Filter) position estimation using road signs or the like with a lidar, the own vehicle position is calculated using the difference value between the feature measurement value Z(t) and the feature prediction value Z⁻(t), as shown in equation (2) above. As understood from equation (2), the larger the difference value of equation (1), the larger the correction applied to the predicted own vehicle position X⁻(t). The difference value becomes large for one of the following reasons (A) to (C).
(A) There is an error in the feature measurement value.
This applies, for example, when the measurement accuracy of the lidar is low, or when the lidar's accuracy itself is not low but a moving object such as another vehicle was present between the measured feature and the vehicle at the time of measurement, degrading the measurement (i.e., when occlusion occurred).
(B) The estimated own vehicle position used for obtaining the feature prediction value is off.
This is the case when the error of the predicted own vehicle position X⁻(t) or of the estimated own vehicle position X^(t) is large.
(C) The position coordinates of the feature in the map data used for obtaining the feature prediction value are off.
This is the state in which, for example, after the map data were created, the position of a feature changed because a sign was tilted by a vehicle collision or a structure was removed or relocated, so that the position coordinates of the feature in the map data no longer match its actual position coordinates. That is, there is a discrepancy between the position of the feature in the map data and its position in the real environment.
If the estimated own vehicle position X^(t) is calculated from equation (2) while the difference value is large due to (A) or (C) above, the own vehicle position estimation accuracy degrades. Therefore, when either (A) or (C) has occurred, it is preferable to make the degree of correction applied to the predicted own vehicle position X⁻(t) as small as possible.
FIG. 7 shows the positional relationship between the vehicle's estimated own vehicle position and a feature. The coordinate system in FIG. 7 is a body coordinate system referenced to the vehicle: the x-axis indicates the vehicle's traveling direction, and the y-axis indicates the direction perpendicular to it (the vehicle's lateral direction). In FIG. 7, the estimated own vehicle position has an error range indicated by an error ellipse 40. The error ellipse 40 is defined by the position estimation accuracy "σP(x)" in the x direction and the position estimation accuracy "σP(y)" in the y direction. The position estimation accuracy σP is obtained by converting the covariance matrix P(t) into the body coordinate system using the Jacobian matrix "H(t)", as in the following equation (3).
$$H(t)\,P(t)\,H(t)^{T} = \begin{pmatrix} \sigma_P(x)^2 & * \\ * & \sigma_P(y)^2 \end{pmatrix} \tag{3}$$
In equation (3), "σP(x)²" is the variance in the x direction and "σP(y)²" is the variance in the y direction, and their square roots give the position estimation accuracy σP(x) in the x direction and σP(y) in the y direction. The position estimation accuracy σP(σP(x), σP(y)) is thus obtained.
In FIG. 7, the feature prediction value Z⁻(t) is obtained by modeling the measurement process of the lidar 2 using the predicted own vehicle position X⁻(t) and the feature position vector registered in the map DB 10. The feature measurement value Z(t) is the value measured by the lidar 2 for the feature when the feature position vector registered in the map DB 10 has been associated with the scan data of the lidar 2. The feature measurement value Z(t) has an error range indicated by an error ellipse 43. The error ellipse 43 is defined by the measurement accuracy "σL(x)" in the x direction and "σL(y)" in the y direction. In general, the lidar measurement error grows in proportion to the square of the distance between the lidar and the measured feature, so the lidar measurement accuracy σL(σL(x), σL(y)) is obtained by calculating the error ellipse 43 from the distance between the estimated own vehicle position and the feature measurement value Z(t). The difference value between the feature prediction value Z⁻(t) and the feature measurement value Z(t) is "dx" in the x direction and "dy" in the y direction, as shown in equation (1).
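The two accuracies could be computed along the following lines; the body-frame projection follows equation (3), while the distance-squared noise model and the sensor constant k are illustrative assumptions of this sketch, not values from the text.

```python
import numpy as np

def position_accuracy(P, H):
    # Project the pose covariance into the body frame (cf. equation (3))
    # and take the square roots of the diagonal entries.
    B = H @ P @ H.T
    return np.sqrt(B[0, 0]), np.sqrt(B[1, 1])     # sigma_P(x), sigma_P(y)

def lidar_accuracy(distance_m, k=1e-4):
    # The lidar error grows roughly with the square of the distance;
    # k is a purely illustrative sensor constant (assumed).
    sigma = k * distance_m ** 2
    return sigma, sigma                            # sigma_L(x), sigma_L(y)
```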
The position estimation accuracy σP is an example of the "self-position accuracy information" of the present invention, and the lidar measurement accuracy σL is an example of the "measurement accuracy information" of the present invention.
Next, the method of calculating the evaluation values that evaluate the difference value between the feature measurement value Z(t) and the feature prediction value Z⁻(t) is described. The own vehicle position estimation unit 17 evaluates the difference value between the feature measurement value Z(t) and the feature prediction value Z⁻(t) using the position estimation accuracy σP(σP(x), σP(y)) and the lidar measurement accuracy σL(σL(x), σL(y)). In other words, the own vehicle position estimation unit 17 judges the plausibility of the difference value based on its ratio to the error range 40 of the estimated own vehicle position and the lidar error range 43. Specifically, the own vehicle position estimation unit 17 calculates the difference evaluation values "Ex" and "Ey" by the following evaluation formula (4), using the absolute values of the difference values dx and dy.
$$E_x = \frac{|dx|}{\sigma_P(x) + \sigma_L(x)}, \qquad E_y = \frac{|dy|}{\sigma_P(y) + \sigma_L(y)} \tag{4}$$
The higher the difference evaluation values Ex and Ey, the more likely it is that either (A) or (C) above has occurred. Accordingly, when at least one of the difference evaluation values Ex and Ey exceeds a predetermined value, the own vehicle position estimation unit 17 determines that (A) or (C) above is the cause of the large difference value. In that case, the own vehicle position estimation unit 17 calculates coefficient values "aX(t)" and "aY(t)" for reducing the Kalman gain according to the difference evaluation values Ex and Ey, and multiplies the Kalman gain by them. The coefficient values aX(t) and aY(t) take values from 0 to 1 and are set, using a formula or a table, so as to become smaller as the difference evaluation values Ex and Ey become larger. For example, the coefficient values aX(t) and aY(t) are set based on the following equation (5).
$$a_X(t) = e^{-E_x}, \qquad a_Y(t) = e^{-E_y} \tag{5}$$
Using the coefficient values aX(t) and aY(t), the own vehicle position estimation unit 17 then calculates the updated Kalman gain "K(t)′" based on the following equation (6).
$$K(t)' = K(t)\begin{pmatrix} a_X(t) & 0 \\ 0 & a_Y(t) \end{pmatrix} \tag{6}$$
By setting the Kalman gain in this way, the own vehicle position estimation unit 17 reduces the Kalman gain according to the reliability of the feature prediction value when that reliability is low, thereby reducing the amount of correction applied to the predicted own vehicle position X⁻(t). In this case, inaccurate correction is prevented, and the own vehicle position estimation accuracy improves. The coefficient values aX(t) and aY(t) are an example of the "correction coefficient" in the present invention.
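Putting equations (4) to (6) together, a sketch of the gain correction might read as follows; the threshold value is an assumed placeholder for the "predetermined value" of the text, and all names are illustrative.

```python
import numpy as np

def difference_evaluation(dx, dy, sig_p, sig_l):
    # Equation (4): innovation relative to the combined error ranges.
    ex = abs(dx) / (sig_p[0] + sig_l[0])
    ey = abs(dy) / (sig_p[1] + sig_l[1])
    return ex, ey

def gain_coefficients(ex, ey, threshold=1.0):
    # Equation (5): shrink towards 0 as the evaluation value grows;
    # at or below the threshold the gain is left untouched (coefficient 1).
    ax = np.exp(-ex) if ex > threshold else 1.0
    ay = np.exp(-ey) if ey > threshold else 1.0
    return ax, ay

def scaled_gain(K, ax, ay):
    # Equation (6): scale the gain column-wise, one column per measurement axis.
    return K @ np.diag([ax, ay])
```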
FIG. 8 is a flowchart of the own vehicle position estimation processing performed by the own vehicle position estimation unit 17 of the in-vehicle device 1. The in-vehicle device 1 repeatedly executes the processing of the flowchart in FIG. 8.
First, the own vehicle position estimation unit 17 sets the initial value of the own vehicle position based on the output of the GPS receiver 5 or the like (step S101). Next, the own vehicle position estimation unit 17 acquires the vehicle body speed from the vehicle speed sensor 4 and the angular velocity in the yaw direction from the gyro sensor 3 (step S102). Then, based on the results acquired in step S102, the own vehicle position estimation unit 17 calculates the vehicle's moving distance and orientation change (step S103).
The own vehicle position estimation unit 17 then adds the moving distance and orientation change calculated in step S103 to the estimated own vehicle position X^(t−1) of one time step earlier, and calculates the predicted own vehicle position X⁻(t) (step S104). Also in step S104, the own vehicle position estimation unit 17 obtains the position estimation accuracy σP(x), σP(y) from the covariance matrix by equation (3). Further, based on the predicted own vehicle position X⁻(t), the own vehicle position estimation unit 17 refers to the feature information in the map DB 10 and searches for features within the measurement range of the lidar 2 (step S105).
Then, from the predicted own vehicle position X⁻(t) and the position coordinates indicated by the feature information of the feature found in step S105, the own vehicle position estimation unit 17 calculates the predicted position of the feature in the vehicle's traveling and lateral directions (i.e., the feature prediction value Z⁻(t)) (step S106). Thereafter, the own vehicle position estimation unit 17 calculates, from the scan data of the lidar 2, the measured distance to the feature in the vehicle's traveling and lateral directions (i.e., the feature measurement value Z(t)), and calculates the lidar measurement accuracy σL(x), σL(y) from that measured distance (step S107).
Next, the own vehicle position estimation unit 17 calculates the difference values dx and dy between the feature measurement value and the feature prediction value based on equation (1) (step S108). The own vehicle position estimation unit 17 then calculates the difference evaluation values Ex and Ey based on equation (4) (step S109).
If the difference evaluation values Ex and Ey are larger than the predetermined value (step S110; Yes), the own vehicle position estimation unit 17 generates the coefficient values aX(t) and aY(t) from the difference evaluation values Ex and Ey (step S111). For example, by calculating the coefficient values aX(t) and aY(t) based on equation (5), the own vehicle position estimation unit 17 obtains values in the range from 0 to 1 that become smaller as the difference evaluation values Ex and Ey become larger. If one of the difference evaluation values does not exceed the predetermined value, the own vehicle position estimation unit 17 may set the coefficient value corresponding to that difference evaluation value (the coefficient value aX(t) in the case of the difference evaluation value Ex) to 1 instead of using equation (5).
On the other hand, if the difference evaluation values Ex and Ey are equal to or smaller than the predetermined value (step S110; No), the own vehicle position estimation unit 17 sets the coefficient values aX(t) and aY(t) to "1" (step S112). That is, the coefficient value aX(t) is set by comparing the difference evaluation value Ex with the predetermined value, and the coefficient value aY(t) is set by comparing the difference evaluation value Ey with the predetermined value.
Next, the own vehicle position estimation unit 17 generates the Kalman gain K(t)′ by multiplying the Kalman gain K(t) by the coefficient values aX(t) and aY(t) based on equation (6) (step S113). Thereafter, the own vehicle position estimation unit 17 corrects the predicted own vehicle position X⁻(t) based on equation (2), using the generated Kalman gain K(t)′ in place of K(t), and calculates the estimated own vehicle position X^(t) (step S114).
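The flowchart steps can be chained together as in the sketch below, which reuses the helper functions from the earlier sketches (predict, predict_cov, position_accuracy, lidar_accuracy, difference_evaluation, gain_coefficients); the measurement model, in which the landmark offset is rotated into the body frame, and all names are assumptions of this illustration.

```python
import numpy as np

def predict_feature(x_pred, landmark):
    # Map-based feature prediction Z^-(t): landmark offset rotated
    # into the body frame of the predicted pose (assumed model).
    x, y, psi = x_pred
    c, s = np.cos(psi), np.sin(psi)
    dxw, dyw = landmark[0] - x, landmark[1] - y
    return np.array([c * dxw + s * dyw, -s * dxw + c * dyw])

def jacobian_H(x_pred, landmark):
    # Jacobian of the assumed measurement model with respect to (x, y, psi).
    x, y, psi = x_pred
    c, s = np.cos(psi), np.sin(psi)
    dxw, dyw = landmark[0] - x, landmark[1] - y
    return np.array([[-c, -s, -s * dxw + c * dyw],
                     [ s, -c, -c * dxw - s * dyw]])

def run_step(x_est, P_est, u, dt, landmark, z_meas, Q, R, threshold=1.0):
    # S103-S104: predict pose and covariance.
    x_pred = predict(x_est, u[0], u[1], dt)
    F = np.eye(3)
    F[0, 2] = -u[0] * dt * np.sin(x_est[2])
    F[1, 2] = u[0] * dt * np.cos(x_est[2])
    P_pred = predict_cov(P_est, F, Q)
    # S105-S106: predicted feature value from the map.
    H = jacobian_H(x_pred, landmark)
    z_pred = predict_feature(x_pred, landmark)
    # S107: position estimation accuracy and lidar measurement accuracy.
    sig_p = position_accuracy(P_pred, H)
    sig_l = lidar_accuracy(float(np.hypot(z_meas[0], z_meas[1])))
    # S108-S109: innovation and its evaluation values.
    dxy = z_meas - z_pred
    ex, ey = difference_evaluation(dxy[0], dxy[1], sig_p, sig_l)
    # S110-S113: Kalman gain, scaled per axis by the coefficient values.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    ax, ay = gain_coefficients(ex, ey, threshold)
    K2 = K @ np.diag([ax, ay])
    # S114: corrected pose and covariance.
    x_new = x_pred + K2 @ dxy
    P_new = (np.eye(3) - K2 @ H) @ P_pred
    return x_new, P_new
```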
[Concrete example]

Next, the effect of the own vehicle position estimation processing according to the present embodiment is described.
FIG. 9(A) is a graph showing the transition of the positions in the x and y directions when the vehicle was driven on a certain test course while executing the own vehicle position estimation processing shown in FIG. 8. In FIG. 9(A), the own vehicle position measured with high accuracy using RTK-GPS (also called the "reference position") and the estimated own vehicle position calculated by the own vehicle position estimation processing of FIG. 8 are both drawn, but they overlap to the extent that they cannot be visually distinguished.
FIG. 9(B) is an enlarged view of the data in the travel section within frame 70 of FIG. 9(A). In FIG. 9(B), the positions of the measured features are plotted with circles. FIG. 10(A) is a graph showing the transition of the difference value dx in the travel section of frame 70; FIG. 10(B) shows the difference in the traveling direction between the estimated own vehicle position according to the embodiment and the reference position in that section; FIG. 10(C) shows the corresponding difference in the lateral direction; and FIG. 10(D) shows the orientation difference between the estimated own vehicle position according to the embodiment and the reference position in that section.
As shown in FIG. 9(B), the own vehicle position estimation unit 17 estimates the own vehicle position based on feature information for features that exist at irregular intervals, and corrects the predicted own vehicle position based on the difference values and so on shown in FIG. 10(A). As a result, as shown in FIGS. 10(B) to 10(D), the differences between the reference position and the estimated own vehicle position in the vehicle's traveling and lateral directions are within about 0.1 m, and the orientation difference is also within about 0.1 degree.
FIGS. 11(A) to 11(D) show experimental results corresponding to FIGS. 10(A) to 10(D) for the case where the position coordinates in the feature information for the feature indicated by plot 71 in FIG. 9(B) are shifted by 80 cm in the traveling direction and the Kalman gain K(t) is used without correction by the coefficient values aX(t) and aY(t).
As shown in FIG. 11(A), due to the shift in the position coordinates of the feature information for the feature indicated by plot 71 in FIG. 9(B), the difference value dx calculated using those position coordinates (see round frame 72) deviates markedly compared with the difference values dx calculated using the position coordinates of the feature information of the other features. As a result, as shown in FIG. 11(B), the estimated own vehicle position calculated using the position coordinates of the feature information for the feature of plot 71 in FIG. 9(B) deviates from the reference position in the vehicle's traveling direction (see round frame 73). The error in the estimated own vehicle position that arises in this way persists until the next feature used for own vehicle position estimation (see plot 74 in FIG. 9) is detected.
FIGS. 12(A) to 12(D) show experimental results corresponding to FIGS. 11(A) to 11(D) for the case where the position coordinates in the feature information for the feature indicated by plot 71 in FIG. 9(B) are shifted by 80 cm in the traveling direction and the Kalman gain K(t) is corrected by the coefficient values aX(t) and aY(t).
In this case, the difference value dx, the position estimation accuracy σP(x), and the lidar measurement accuracy σL(x) at the time the feature indicated by plot 71 in FIG. 9(B) is detected are as follows:
    dx = −0.749
    σP(x) = 0.048
    σL(x) = 0.058
Substituting these values into the coefficient value aX(t) of equation (5) gives
    aX(t) = 0.854 × 10⁻³ ≈ 0.
In this case, the Kalman gain K(t)′ is obtained as shown in the following equation (7).
$$K(t)' = K(t)\begin{pmatrix} a_X(t) & 0 \\ 0 & a_Y(t) \end{pmatrix} \approx K(t)\begin{pmatrix} 0 & 0 \\ 0 & a_Y(t) \end{pmatrix} \tag{7}$$
Therefore, in this case, no correction of the predicted own vehicle position X⁻(t) is applied in the vehicle's traveling direction, and the accurate position estimate is maintained.
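These numbers can be checked directly; the snippet below is an illustration rather than part of the specification and reproduces the stated coefficient value from equations (4) and (5).

```python
import math

dx, sigma_p_x, sigma_l_x = -0.749, 0.048, 0.058
ex = abs(dx) / (sigma_p_x + sigma_l_x)   # equation (4): about 7.07
a_x = math.exp(-ex)                      # equation (5): about 0.854e-3
print(ex, a_x)                           # the coefficient is effectively 0
```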
As described above, the own vehicle position estimation unit 17 of the in-vehicle device 1 according to the present embodiment includes the position prediction unit 21, which generates information indicating the predicted own vehicle position X⁻(t), and the position estimation unit 22. The position estimation unit 22 calculates the feature measurement value Z(t), which indicates the distance from the moving body to the object measured by the lidar 2, and the feature prediction value Z⁻(t) predicted based on the feature information. The position estimation unit 22 then determines the coefficient values aX(t) and aY(t) for correcting the Kalman gain K(t) according to the difference evaluation values Ex and Ey, which are based on the difference values dx and dy between the feature measurement value and the feature prediction value, the position estimation accuracy σP, and the lidar measurement accuracy σL. In this way, the in-vehicle device 1 can suitably suppress degradation of the own vehicle position estimation accuracy even when an error occurs in either the map DB 10 or the measured values of the lidar 2.
[Modifications]

Suitable modifications of the embodiment are described below. The following modifications may be applied to the embodiment in combination.
(Modification 1)

The method of reducing the amount of correction of the predicted own vehicle position X⁻(t) based on feature information judged to have low reliability is not limited to multiplying the Kalman gain K(t) by the coefficient values aX(t) and aY(t).
Instead, in a first example, the own vehicle position estimation unit 17 may multiply the difference values dx and dy by the coefficient values aX(t) and aY(t). That is, in this case, the own vehicle position estimation unit 17 calculates the estimated own vehicle position based on the following equation (8).
$$\hat{X}(t) = \bar{X}(t) + K(t)\begin{pmatrix} a_X(t)\,dx \\ a_Y(t)\,dy \end{pmatrix} \tag{8}$$
In this case, the Kalman gain K(t) is generated by equation (10) and is not updated based on equation (6), so the update of the covariance matrix P(t) given by the general formula (9) below does not reflect the fact that feature information of low reliability was used. That is, while the position estimate reflects the reliability, the covariance matrix does not, and processing that takes this difference into account becomes necessary.
$$\hat{P}(t) = \bigl(I - K(t)H(t)\bigr)\bar{P}(t) \tag{9}$$
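A sketch of this first example, under the same assumptions as the earlier update sketch, is given below; note the caveat above that the covariance update of general formula (9) remains unaware of the reduced reliability.

```python
import numpy as np

def update_scaled_innovation(x_pred, P_pred, dxy, K, H, ax, ay):
    # Equation (8): weight the innovation itself instead of the gain.
    x_est = x_pred + K @ (np.array([ax, ay]) * dxy)
    # General formula (9): this covariance update does not reflect that
    # low-reliability feature information was used (the caveat in the text).
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_est, P_est
```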
In a second example, the own vehicle position estimation unit 17 may multiply the diagonal components of the observation noise matrix "R(t)", which is used when calculating the Kalman gain K(t) by the general formula (10) below, by the coefficient values aX(t) and aY(t), as shown in the following equation (11).
$$K(t) = \bar{P}(t)H(t)^{T}\bigl(H(t)\bar{P}(t)H(t)^{T} + R(t)\bigr)^{-1} \tag{10}$$
$$R(t)' = \begin{pmatrix} a_X(t)\,\sigma_L(x)^2 & 0 \\ 0 & a_Y(t)\,\sigma_L(y)^2 \end{pmatrix} \tag{11}$$
In this case, the own vehicle position estimation unit 17 sets the coefficient values aX(t) and aY(t) so that they become larger as the evaluation values Ex and Ey of equation (4) become larger. For example, the own vehicle position estimation unit 17 sets the coefficient values aX(t) and aY(t) as shown in the following equation (12).
$$a_X(t) = e^{E_x}, \qquad a_Y(t) = e^{E_y} \tag{12}$$
In this case, the coefficient values aX(t) and aY(t) can range from 1 to infinity, so processing to avoid overflow, such as setting an upper limit, becomes necessary.
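The second example might be sketched as follows; the clipping value a_max stands in for the upper limit the text calls for and is, like the diagonal noise model, an assumption of this illustration.

```python
import numpy as np

def inflated_noise_gain(P_pred, H, R, ex, ey, a_max=1e6):
    # Equation (12): coefficients grow with the evaluation values;
    # clipping at a_max is an assumed guard against overflow.
    ax = min(np.exp(ex), a_max)
    ay = min(np.exp(ey), a_max)
    R_mod = R.copy()
    R_mod[0, 0] *= ax        # equation (11): scale the diagonal components
    R_mod[1, 1] *= ay
    # General formula (10): the inflated noise automatically shrinks the gain.
    S = H @ P_pred @ H.T + R_mod
    return P_pred @ H.T @ np.linalg.inv(S)
```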
(Modification 2)

The configuration of the driving support system shown in FIG. 1 is an example, and the configuration of a driving support system to which the present invention is applicable is not limited to it. For example, instead of having the in-vehicle device 1, the driving support system may have the vehicle's electronic control unit execute the processing of the own vehicle position estimation unit 17 of the in-vehicle device 1. In this case, the map DB 10 is stored, for example, in a storage unit in the vehicle, and the vehicle's electronic control unit may receive update information for the map DB 10 from a server device (not shown).
DESCRIPTION OF REFERENCE NUMERALS
1 In-vehicle device
2 Lidar
3 Gyro sensor
4 Vehicle speed sensor
5 GPS receiver
10 Map DB

Claims (9)

1. A self-position estimation device comprising:
a first acquisition unit that acquires predicted position information indicating a predicted self-position;
a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object;
a third acquisition unit that acquires self-position accuracy information of the moving body; and
a correction unit that corrects the predicted self-position based on a difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
2. The self-position estimation device according to claim 1, wherein the correction unit corrects the predicted self-position based on the difference value, the self-position accuracy information, and measurement accuracy information of the measurement unit.
3. The self-position estimation device according to claim 1 or 2, wherein the correction unit calculates an evaluation value evaluating the difference value based on the difference value and the self-position accuracy information, and determines, based on the evaluation value, the degree to which the predicted self-position is corrected by the difference value.
4. The self-position estimation device according to claim 3, wherein the correction unit corrects the predicted self-position by a value obtained by multiplying the difference value by a predetermined gain, and determines a correction coefficient for the gain based on the evaluation value.
5. The self-position estimation device according to claim 4, wherein the gain is a Kalman gain.
6. The self-position estimation device according to any one of claims 1 to 5, wherein the second acquisition unit calculates the distance indicated by the second distance information based on the predicted position information and the position information of the object recorded in map information.
7. A control method executed by a self-position estimation device, comprising:
a first acquisition step of acquiring predicted position information indicating a predicted self-position;
a second acquisition step of acquiring first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object;
a third acquisition step of acquiring self-position accuracy information of the moving body; and
a correction step of correcting the predicted self-position based on a difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
8. A program executed by a computer, the program causing the computer to function as:
a first acquisition unit that acquires predicted position information indicating a predicted self-position;
a second acquisition unit that acquires first distance information indicating a distance from a moving body to an object measured by a measurement unit, and second distance information indicating a distance from the moving body to the object predicted based on position information of the object;
a third acquisition unit that acquires self-position accuracy information of the moving body; and
a correction unit that corrects the predicted self-position based on a difference value between the distances indicated by the first distance information and the second distance information, and on the self-position accuracy information.
9. A storage medium storing the program according to claim 8.
PCT/JP2018/019173 2017-05-19 2018-05-17 Self-position estimation device, control method, program, and storage medium WO2018212301A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2019518877A JP6968877B2 (en) 2017-05-19 2018-05-17 Self-position estimator, control method, program and storage medium
JP2021175327A JP2022031266A (en) 2017-05-19 2021-10-27 Self-position estimation device, control method, program, and storage medium
JP2022160983A JP2022176322A (en) 2017-05-19 2022-10-05 Self-position estimation device, control method, program, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017100228 2017-05-19
JP2017-100228 2017-05-19

Publications (1)

Publication Number Publication Date
WO2018212301A1 (en)

Family

ID=64274439

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/019173 WO2018212301A1 (en) 2017-05-19 2018-05-17 Self-position estimation device, control method, program, and storage medium

Country Status (2)

Country Link
JP (3) JP6968877B2 (en)
WO (1) WO2018212301A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008002906A (en) * 2006-06-21 2008-01-10 Toyota Motor Corp Positioning device
JP2011027595A (en) * 2009-07-27 2011-02-10 Toyota Infotechnology Center Co Ltd Map data verification system
JP2017072423A (en) * 2015-10-05 2017-04-13 パイオニア株式会社 Estimation device, control method, program, and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2020158020A1 (en) * 2019-01-30 2021-03-25 三菱電機株式会社 Measuring device, measuring method and measuring program
JP2021103149A (en) * 2019-12-25 2021-07-15 株式会社デンソー Estimation device, estimation method, and estimation program
JP2021103148A (en) * 2019-12-25 2021-07-15 株式会社デンソー Estimation device, estimation method, and estimation program
JP7318521B2 (en) 2019-12-25 2023-08-01 株式会社デンソー Estimation device, estimation method, estimation program
JP7318522B2 (en) 2019-12-25 2023-08-01 株式会社デンソー Estimation device, estimation method, estimation program
CN116165885A (en) * 2022-11-29 2023-05-26 华东交通大学 Model-free adaptive robust control method and system for high-speed train
CN116165885B (en) * 2022-11-29 2023-11-14 华东交通大学 Model-free adaptive robust control method and system for high-speed train

Also Published As

Publication number Publication date
JP2022031266A (en) 2022-02-18
JP6968877B2 (en) 2021-11-17
JP2022176322A (en) 2022-11-25
JPWO2018212301A1 (en) 2020-03-12


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18802163; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2019518877; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 18802163; Country of ref document: EP; Kind code of ref document: A1)