WO2022195813A1 - Vehicle position integration processing device and vehicle position integration processing method - Google Patents
Vehicle position integration processing device and vehicle position integration processing method
- Publication number
- WO2022195813A1 (PCT/JP2021/011141)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle position
- observation
- time
- vehicle
- observation time
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3844—Data obtained from position sensors only, e.g. from inertial navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
Definitions
- This application relates to a vehicle position integration processing device and a vehicle position integration processing method.
- Known methods for observing the position of a vehicle include: satellite positioning based on GNSS (Global Navigation Satellite System); inertial navigation using internal sensors such as gyros; and map matching, in which feature information around the vehicle is detected using external sensors (for example, surround monitoring cameras, lidar (light detection and ranging), or millimeter wave radar) and the results are matched against feature information stored in a known map.
- In another known approach, a first vehicle position is obtained by satellite positioning, a second vehicle position is estimated by collating the image captured by a camera with map information, and the estimation accuracy of the vehicle position on the map is further improved by using the vehicle speed and other information.
- However, the accuracy of the estimated vehicle position may decrease when a plurality of observation results of the vehicle position are integrated. The following two factors are conceivable:
- (a) Asynchronous operation among a plurality of observation devices
- (b) Asynchronous operation between the observation devices and the integration processing device
- Regarding (a): in general, devices that estimate the vehicle position update their position information at a cycle that differs for each method. Furthermore, when each device operates independently, the observation times of the vehicle position also differ. When vehicle positions observed at such different times are merged, there is the problem that, unless the differences in observation time between the observation devices are taken into account, the accuracy of the merged vehicle position estimate decreases.
- Regarding (b): the operation cycle and operation timing are not synchronized between the observation devices and the integration processing device either. That is, the observation time (the time at which the observation device observes) does not match the output time (the time for which the integration processing device is to produce its estimate). There is therefore the problem that, unless this time difference is taken into account, the accuracy of the vehicle position estimate at the output time decreases.
- The present application has been made to solve the above problems. Its object is to provide a vehicle position integration processing device and a vehicle position integration processing method capable of estimating the vehicle position with high accuracy despite asynchronous operation among the plurality of observation devices that observe the vehicle position, and between those observation devices and the integration processing device.
- The vehicle position integration processing device disclosed in the present application includes: a prediction unit that calculates a predicted value of the vehicle position at the current observation time, using the vehicle motion information and its observation time acquired from a vehicle motion information observation device, the current and previous observation times of the vehicle position information acquired from a vehicle position observation device, and the estimated value of the vehicle position at the previous observation time; an updating unit that calculates and updates an estimated value of the vehicle position at the current observation time, using the current observation time and the current vehicle position; and an output unit that calculates and outputs an output value matched to a predetermined output time, using the estimated value, the vehicle motion information, and the observation time.
- The vehicle position integration processing method disclosed in the present application includes: a step of calculating a predicted value of the vehicle position at the current observation time, using the vehicle motion information and its observation time acquired from the vehicle motion information observation device, the current and previous observation times of the vehicle position information acquired from the vehicle position observation device, and the estimated value of the vehicle position at the previous observation time; a step of calculating and updating an estimated value of the vehicle position at the current observation time, using the current observation time and the current vehicle position; and a step of calculating and outputting an output value matched to a predetermined output time, using the estimated value, the vehicle motion information, and the observation time.
- According to the vehicle position integration processing device and vehicle position integration processing method of the present application, when the vehicle position is estimated by integrating vehicle position information observed by a plurality of methods, integrating the position information in consideration of each observation time and the predetermined output time has the effect that the vehicle position can be estimated with high accuracy.
- FIG. 1 is a block diagram showing the overall schematic configuration of a vehicle position integration processing system including the vehicle position integration processing device according to Embodiment 1.
- FIG. 2 is a block diagram showing an example of the vehicle position integration processing device according to Embodiment 1.
- FIG. 3 is a diagram explaining the relationship between the observed value and the output value of the vehicle position.
- FIG. 4 is a diagram schematically showing the operation of the vehicle position integration processing method according to Embodiment 1.
- FIG. 5 is a diagram showing the relationship between the observed value, predicted value, estimated value, and output value of the vehicle position in FIG. 4.
- FIG. 6 is a diagram showing a flowchart of the external information acquisition processing according to Embodiment 1.
- FIG. 7 is a diagram showing a time chart of the operation of the observation devices and of the vehicle position integration processing device according to Embodiment 1.
- FIG. 8 is a diagram showing a flowchart of the vehicle position integration processing according to Embodiment 1.
- FIG. 9 is a diagram showing a time chart of the correction time and observation time of the vehicle motion information in Embodiment 1.
- A vehicle position integration processing system 10 is composed of the vehicle position integration processing device 1; a vehicle motion information observation device 11 that provides vehicle motion information to the device 1; vehicle position observation devices 12, 13, and 14 (herein referred to as vehicle position observation devices A, B, and C) that provide vehicle position information; and a vehicle control device 20 to which the vehicle position information estimated by the vehicle position integration processing device 1 is provided.
- The vehicle position integration processing device 1 receives vehicle motion information u (speed, yaw rate, acceleration, etc.) from the vehicle motion information observation device 11 and sets the observation time t_ego(m) of the motion information u(m); it also acquires the vehicle position information z_a, z_b, z_c from the vehicle position observation devices 12, 13, and 14 and sets the observation time t_obs(n) of each piece of position information z(n).
- A prediction unit 5 obtains the vehicle motion information u(m) and its observation time t_ego(m) from the vehicle motion information management unit 3, the observation time t_obs(n) from the observation information management unit 4, and the previous processing time t_obs(n-1) and the estimated vehicle position x_est(n-1) from the updating unit 6 described later, and uses these to calculate the predicted value x_pred(n) of the vehicle position at the observation time t_obs(n).
- An updating unit 6 calculates the estimated value x_est(n) of the vehicle position from the predicted value x_pred(n) calculated by the prediction unit 5 and the observation information z_obs(n).
- An output unit 7 receives the predetermined output time t_out, the estimated value x_est(n) and observation time t_obs(n) from the updating unit 6, and the motion information u(m) and its observation time t_ego(m) from the vehicle motion information management unit 3; using these, it calculates the vehicle position x_out at the output time t_out and outputs it, for example, to the external vehicle control device 20.
- In this embodiment, the case of acquiring three pieces of vehicle position information z_a, z_b, and z_c is described as an example, but one or more pieces of vehicle position information suffice; the internal configuration of the vehicle position integration processing device 1 remains the same.
- The functional units 2 to 7 included in the vehicle position integration processing device 1 can be realized by a processing device 80, a storage device 81, an input device 82, an output device 83, and a display device 84.
- The processing device 80 may be dedicated hardware, or it may be a CPU (also called a central processing unit, microprocessor, microcomputer, processor, or DSP) that executes programs stored in the storage device 81.
- When the processing device 80 is dedicated hardware, it may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC, an FPGA, or any combination thereof.
- The functions of the time management unit 2, the vehicle motion information management unit 3, the observation information management unit 4, the prediction unit 5, and the updating unit 6 may each be realized by a separate processing device 80, or the functions of the respective units may be realized collectively by a single processing device 80.
- The output unit 7 can be realized by the output device 83. Although the input device 82 is shown in FIG. 1 as implementing part of the functions of the vehicle motion information management unit 3 and the observation information management unit 4, it may be provided separately.
- When the processing device 80 is a CPU, the functions of the time management unit 2, the vehicle motion information management unit 3, the observation information management unit 4, the prediction unit 5, the updating unit 6, and the output unit 7 are realized by software, firmware, or a combination of software and firmware. Software and firmware are written as processing programs and stored in the storage device 81. The processing device 80 reads out and executes the processing programs stored in the storage device 81, thereby realizing the functions of each unit.
- That is, the vehicle position integration processing device 1 is provided with a storage device 81 storing processing programs which, when executed by the processing device 80, result in the execution of: a process of importing the data u, z_a, z_b, z_c from the vehicle motion information observation device 11 and the vehicle position observation devices 12, 13, and 14; a process of setting the observation time of the motion information based on the acquired data; a process of setting the observation time of the vehicle position information; a process of calculating the predicted value at the observation time; a process of calculating the estimated value at the observation time; a process of calculating the vehicle position; and a process of outputting that position to the external vehicle control device.
- The storage device 81 may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM, ROM, flash memory, EPROM, or EEPROM, or a magnetic disk, flexible disk, optical disk, compact disk, minidisk, or DVD.
- The time management unit 2, the vehicle motion information management unit 3, the observation information management unit 4, the prediction unit 5, and the updating unit 6 may be realized partly by dedicated hardware and partly by software or firmware. For example, the time management unit 2, the vehicle motion information management unit 3, and the observation information management unit 4 may be implemented by a processing device 80 as dedicated hardware, while the prediction unit 5 and the updating unit 6 realize their functions by having the processing device 80 read out and execute programs stored in the storage device 81.
- In this way, the processing device 80 can implement the functions described above by hardware, software, firmware, or a combination thereof.
- The storage device 81 stores the programs for executing the above-described processing steps, as well as the motion information and position information acquired from the vehicle motion information observation device 11 and the vehicle position observation devices 12 to 14, and the calculated predicted and estimated values.
- The input device 82 realizes part of the functions of the vehicle motion information management unit 3 and the observation information management unit 4: the data output from the vehicle motion information observation device 11 and the vehicle position observation devices 12 to 14 are acquired periodically at predetermined times.
- The output device 83 corresponds to the output unit 7 and outputs the processing result to the vehicle control device, which is an external device.
- The display device 84 appropriately displays the execution status of the processing device 80.
- The vehicle position observation devices 12, 13, and 14 (here, vehicle position observation devices A, B, and C) operate asynchronously with respect to the device 1 that integrates the plurality of observation results related to the vehicle position.
- Moreover, the number of vehicle positions output from a single vehicle position observation device between the output times t_out(n-1) and t_out(n) is not limited to one: multiple observation values may arrive, or none at all. To cope with such situations, the vehicle positions must be integrated in consideration of the difference between the observation time and the output time.
- FIG. 1 shows the configuration of the vehicle position integration processing device 1 that realizes this, and FIG. 4 schematically shows the operation of the integration processing method.
- The vehicle position integration processing device 1 receives the observation results z_a, z_b, z_c of the vehicle position information as input.
- The vehicle position integration processing device 1 is composed of the time management unit 2, the vehicle motion information management unit 3, the observation information management unit 4, the prediction unit 5, the updating unit 6, and the output unit 7.
- The vehicle position integration processing is executed at a fixed cycle T (processing times t_proc(n-1), t_proc(n), t_proc(n+1), ...).
- The vehicle position integration processing device 1 asynchronously receives the observation results z_a, z_b, z_c of the vehicle position information from the respective vehicle position observation devices 12, 13, and 14.
- In outline, the processing in the vehicle position integration processing device 1 at this time executes the prediction processing, update processing, and output processing sequentially, in the order of 1 to 9 in FIG. 4.
- FIG. 5 is a schematic diagram showing the vehicle position when the prediction processing, update processing, and output processing are executed sequentially.
- (1) From the estimated value x_est(n-1) updated at the previous observation time t_obs(n-1), the vehicle position at the observation time t_obs(n) is predicted using the vehicle motion information u(m) from the vehicle motion information observation device 11 (processes 1, 3, 5, and 7 in FIG. 4), and the position is then updated using the predicted value x_pred(n) and the observed value from the vehicle position observation devices 12 to 14 (processes 2, 4, 6, and 8 in FIG. 4). This prediction and update processing can be performed using, for example, a Kalman filter or another known technique.
- (2) The above prediction and update are executed sequentially, in order from the oldest observation time obtained from the vehicle motion information observation device 11 and the vehicle position observation devices 12 to 14.
- (3) At this time, the vehicle position x_est(n) is estimated while appropriately changing the observation error parameter for each observation device.
- (4) Based on the prediction and update results at the observation time closest to the output time t_out, prediction processing using the vehicle motion information at the output time t_out (output processing) is performed to estimate the vehicle position x_out.
- In this way, by integrating the vehicle position information and performing processing that takes the observation times t_obs(n) and the output time t_out into account, the estimation accuracy of the vehicle position can be improved.
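The sequence above (predict with motion information, update with each position observation in time order, then extrapolate to the output time) can be sketched with a minimal one-dimensional Kalman filter. The scalar state, the noise values, and the function names below are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the integration loop: observations from several devices
# are sorted by observation time, each one triggers predict-then-update, and
# the final estimate is extrapolated to the requested output time t_out.
# 1-D position state with constant speed input u; all noise values assumed.

def predict(x, P, u, dt, Q=0.01):
    return x + u * dt, P + Q           # x_pred = x + u*dt; uncertainty grows

def update(x, P, z, R):
    K = P / (P + R)                    # Kalman gain for a scalar state
    return x + K * (z - x), (1 - K) * P

def integrate(x_est, P, u, t_prev, observations, t_out):
    # observations: list of (t_obs, z, R) tuples, possibly from different
    # devices with different observation errors R (cf. (3) above).
    for t_obs, z, R in sorted(observations):   # ascending observation time
        x_pred, P = predict(x_est, P, u, t_obs - t_prev)
        x_est, P = update(x_pred, P, z, R)
        t_prev = t_obs
    x_out, _ = predict(x_est, P, u, t_out - t_prev)  # output processing (4)
    return x_out
```

With u = 10 m/s and observations lying on the true trajectory, the returned x_out lands on the position extrapolated to t_out, even when the observation list arrives unsorted.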
- FIG. 6 shows the operation flow for acquiring the information obtained from outside the vehicle position integration processing device 1 (the vehicle motion information u and the observation information regarding the vehicle position information z). This acquisition is executed whenever external information is input to the vehicle position integration processing device 1.
- FIG. 8 shows the operation flow of the processing that integrates the vehicle position information obtained from outside the device 1. These processing flows are repeatedly executed at predetermined time intervals. The details of the external information acquisition processing and the vehicle position integration processing are described below.
- <Step S1-1: Current time acquisition process>
- In this step, the current time t_rcv at which the external information acquisition processing starts is acquired; the time at which this processing was called is obtained from the time management unit 2.
- <Step S1-2: External information acquisition process>
- In this step, the external information is acquired. An identifier sns_id that identifies the type of the acquired external information (in this example, the vehicle position information z(n) from the vehicle position observation devices 12, 13, and 14, or the vehicle motion information u(m) from the vehicle motion information observation device 11) is assigned by the observation information management unit 4.
- <Step S1-3: Observation time assignment process>
- In this step, the observation time t_obs is assigned: the time at which the external information (vehicle motion information or vehicle position observation information) acquired in this step was observed is attached to the observation information.
- Note that the observation time t_obs at which the observation device made its observation does not necessarily match the reception time t_rcv at which the vehicle position integration processing device 1 receives the observation information. This is because of a delay time Δt, the sum of the time required for the vehicle position calculation in the observation device and the time required for transmission from the observation device to the vehicle position integration processing device 1. This delay time Δt is therefore set in advance for each observation device.
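The time assignment can be sketched as follows; the device names and delay values in the table are hypothetical examples, since the patent only states that a per-device delay is preset.

```python
# Sketch of Step S1-3: assigning an observation time to received data.
# Each observation device has a preset delay dt (computation + transmission),
# so the observation time is the reception time minus that delay.
# The device identifiers and delay values below are illustrative assumptions.

DELAY_BY_DEVICE = {
    "gnss": 0.10,        # seconds: positioning calculation + transmission
    "camera_map": 0.15,  # map-matching pipeline latency
    "imu": 0.01,         # near-instant internal sensor
}

def assign_observation_time(sns_id: str, t_rcv: float) -> float:
    """Return t_obs = t_rcv - delta_t for the given observation device."""
    return t_rcv - DELAY_BY_DEVICE[sns_id]
```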
- <Step S1-4: Observation information accumulation process>
- In this step, the observation information is accumulated (stored). The observation information obtained from outside, with the identifier sns_id and observation time t_obs assigned as described above, is stored according to its type: vehicle motion information is accumulated in the vehicle motion information management unit 3, and vehicle position information in the observation information management unit 4. This concludes the external information acquisition processing.
- <Step S2-1: Observation information sorting process>
- First, the observation information is sorted: the plurality of pieces of observation information z_obs accumulated in the observation information management unit 4 before the start of this processing are sorted in ascending order of the observation time t_obs assigned during the external information acquisition processing.
- <Step S2-2: Loop over each piece of observation information>
- Next, loop processing is performed for each piece of observation information: the target observation information is selected sequentially from the oldest, t_obs(1) ≤ t_obs(2) ≤ ..., and the following processing is applied to each in turn.
- <Step S2-3: Vehicle position prediction process>
- In this step, the prediction processing of the vehicle position is performed, with the following input/output information:
- Input: the M pieces of vehicle motion information u(m) and their observation times t_ego(m); the observation time t_obs(n-1) and estimated value x_est(n-1) of the vehicle position from the previous update processing; and the observation time t_obs(n) of the current vehicle position information. Output: the predicted value x_pred(n) of the vehicle position at t_obs(n).
- <Step S2-3-1: Vehicle motion information correction process>
- In this step, the vehicle motion information is corrected: the input vehicle motion information u(m) is corrected to the vehicle motion information u(n) at the time t_ego. The corrected u(n) is used to calculate the predicted value of the vehicle position at time t_obs(n).
- Because the vehicle motion information is also observed asynchronously, the vehicle motion information at a desired time t_ego (for example, t_obs(n), or the midpoint between t_obs(n-1) and t_obs(n)) cannot always be obtained directly.
- In FIG. 9, t_ego is the time of the corrected vehicle motion information, t_ego(1) is the observation time of the vehicle motion information closest to t_ego, t_ego(2) is that of the second closest, t_obs(n-1) is the observation time of the vehicle position information in the previous update processing, and t_obs(n) is the observation time of the vehicle position information observed by the vehicle position observation device.
- FIG. 9 shows the case where the corrected time t_ego is the midpoint between t_obs(n-1) and t_obs(n). The vehicle motion information at the desired time t_ego is therefore calculated, for example by linear approximation, from the two pieces of vehicle motion information u(1) and u(2) at the times t_ego(1) and t_ego(2) closest to t_ego.
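The linear approximation between the two nearest motion samples can be sketched as follows; the helper name and the scalar motion values are illustrative assumptions.

```python
# Sketch of Step S2-3-1: correcting motion information to a desired time
# t_ego by linear approximation between the two samples nearest to t_ego.

def correct_motion(t_ego, samples):
    """samples: list of (t, u) pairs. Returns u approximated at t_ego."""
    two = sorted(samples, key=lambda s: abs(s[0] - t_ego))[:2]  # nearest two
    (t1, u1), (t2, u2) = sorted(two)        # order the pair by time
    if t1 == t2:
        return u1                           # degenerate case: same timestamp
    return u1 + (u2 - u1) * (t_ego - t1) / (t2 - t1)
```

For t_ego between the two samples this interpolates; for t_ego just outside them the same formula extrapolates, which matches the output-time correction reused in Step S2-5-1.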
- <Step S2-3-2: Vehicle position predicted value calculation process>
- In this step, the predicted value of the vehicle position is calculated using the vehicle motion information. Using the estimated value x_est(n-1) from the previous update processing, the vehicle motion information u(n), and the elapsed time Δt from time t_obs(n-1) to time t_obs(n), the predicted vehicle position x_pred(n) at time t_obs(n) is calculated.
- The calculation formulas are as follows:
- x_pred(n) = A x_est(n-1) + B u(n)
- P_pred(n) = A P_est(n-1) Aᵀ + Q
- Here, A and B are coefficients that describe how the state x changes from the previous step to the next, P_pred(n) is the prediction error covariance matrix, and Q is the system error.
- The predicted value x_pred(n) is obtained by setting the variables in the above equations accordingly.
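The prediction equations can be written out with NumPy as follows. The concrete state layout (planar position), the choice A = I and B = Δt·I, and the noise magnitude are illustrative assumptions for a simple constant-velocity model, not the patent's settings.

```python
import numpy as np

# Sketch of Step S2-3-2: x_pred = A x_est + B u, P_pred = A P A^T + Q.
# Assumed model: state x = [px, py], input u = [vx, vy], elapsed time dt.

def predict(x_est, P_est, u, dt, Q):
    A = np.eye(2)                  # position carries over unchanged ...
    B = dt * np.eye(2)             # ... and moves by velocity * elapsed time
    x_pred = A @ x_est + B @ u
    P_pred = A @ P_est @ A.T + Q   # prediction error covariance grows by Q
    return x_pred, P_pred
```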
- <Step S2-4: Vehicle position estimated value update process>
- In this step, the estimated value of the vehicle position is updated; the input/output information used is described in the sub-steps below.
- <Step S2-4-1: Observation error setting process>
- In this step, the value of the observation error parameter R used in the calculation of the update processing is changed appropriately for each observation device.
- <Step S2-4-2: Vehicle position estimated value calculation process>
- In this step, the estimated value of the vehicle position is calculated using the predicted value of the vehicle position obtained in step S2-3-2, the observation error obtained in step S2-4-1, and the vehicle position information z(n). In the corresponding equations, K(n) is the Kalman gain and P_est(n) is the error covariance matrix after the update.
- The observation information z(n) observes (x, y, θ, v), and the variables in the equations are set in the same manner as in the example shown in step S2-3-2. This concludes the loop over each observation value of step S2-2.
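A matching update step, with the Kalman gain K(n) and the updated covariance P_est(n), can be sketched as below. The observation matrix H and the per-device R values are illustrative assumptions; the patent states only that R is switched per observation device.

```python
import numpy as np

# Sketch of Steps S2-4-1 / S2-4-2: the observation error R is chosen per
# device (sns_id), then the estimate is updated with the Kalman gain K.
# H and the R table below are assumptions for a direct position observation.

R_BY_DEVICE = {"gnss": 4.0 * np.eye(2), "camera_map": 1.0 * np.eye(2)}

def update(x_pred, P_pred, z, sns_id):
    H = np.eye(2)                              # observe position directly
    R = R_BY_DEVICE[sns_id]                    # Step S2-4-1: per-device error
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain K(n)
    x_est = x_pred + K @ (z - H @ x_pred)      # corrected estimate x_est(n)
    P_est = (np.eye(2) - K @ H) @ P_pred       # updated covariance P_est(n)
    return x_est, P_est
```

A noisier device (larger R) pulls the estimate toward the observation less strongly, which is exactly the effect of switching R per device in Step S2-4-1.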
- <Step S2-5: Vehicle position output value output process>
- In this step, output processing of the output value of the vehicle position is performed, with the following input/output information:
- Input: the predetermined output time t_out, and the vehicle position estimate x_est(n) at time t_obs(n) calculated by the vehicle position integration processing device. Output: the vehicle position x_out at the output time t_out.
- <Step S2-5-1: Vehicle motion information correction process>
- In this step, the vehicle motion information u is corrected in the same manner as in step S2-3-1, except that the desired time here is the predetermined output time t_out rather than the observation time t_obs(n). Specifically, the input vehicle motion information u(m) is corrected to the vehicle motion information u(n) at time t_ego, and the corrected u(n) is used when calculating the vehicle position x_out at the output time t_out, as shown in FIG. 9.
- As in step S2-3-1, t_ego is the time of the corrected vehicle motion information, t_ego(1) is the observation time of the vehicle motion information closest to t_ego, t_ego(2) is that of the second closest, t_obs(n-1) is the observation time of the vehicle position information in the previous update processing, and t_obs(n) is the observation time of the vehicle position information observed by the vehicle position observation device; here, however, the desired time t_ego is the output time t_out.
- <Step S2-5-2: Vehicle position output value calculation process>
- In this step, the output value of the vehicle position is calculated using the vehicle motion information, by the same processing as in step S2-3-2. Specifically, using the estimated value x_est(n) from the update processing, the vehicle motion information u(n), and the elapsed time Δt from time t_obs(n) to time t_out, the vehicle position x_out(n) at the output time t_out(n) is calculated.
- The calculation formula has the same form as in step S2-3-2, with the variables set accordingly. This concludes the vehicle position integration processing.
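Step S2-5-2 reuses the prediction form to carry the latest estimate forward to the output time. A scalar sketch, under the same assumed 1-D constant-velocity model as before:

```python
# Sketch of Step S2-5-2: the output value x_out is the latest estimate
# x_est(n) advanced by the corrected motion information u over the elapsed
# time dt = t_out - t_obs(n). A 1-D constant-velocity model is assumed.

def output_value(x_est, u, t_obs_n, t_out):
    dt = t_out - t_obs_n          # time from last update to requested output
    return x_est + u * dt         # same prediction form as Step S2-3-2
```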
- As the vehicle position observation devices, a device using the satellite positioning method based on GNSS (Global Navigation Satellite System) mentioned in the Background Art, an inertial navigation system using internal sensors such as gyros, and observation devices using external sensors (for example, surround monitoring cameras, lidar (light detection and ranging), or millimeter wave radar) can be used. A speedometer and an acceleration sensor can be used as the vehicle motion information observation device.
- As described above, with the vehicle position integration processing device according to the present embodiment, when the vehicle position is estimated by integrating vehicle position information observed by a plurality of methods, integrating the position information in consideration of each observation time and the output time produces the remarkable effect that the vehicle position can be estimated with high accuracy.
- The vehicle position integration processing device may be realized as part of the functions of a vehicle driving support device, or it may be an independent device.
- 1 Vehicle position integration processing device, 2 Time management unit, 3 Vehicle motion information management unit, 4 Observation information management unit, 5 Prediction unit, 6 Updating unit, 7 Output unit, 10 Vehicle position integration processing system, 11 Vehicle motion information observation device, 12, 13, 14 Vehicle position observation devices, 20 Vehicle control device, 80 Processing device, 81 Storage device, 82 Input device, 83 Output device, 84 Display device.
Abstract
Description
(a) Asynchronous operation among a plurality of observation devices
(b) Asynchronous operation between the observation devices and the integration processing device
Regarding (a): in devices that estimate the own-vehicle position, the update cycle of the own-vehicle position information generally differs between methods. Furthermore, when the devices operate independently, the observation times of the own-vehicle position also differ. When integrating own-vehicle positions observed at such multiple different times, there is a problem that the estimation accuracy of the integrated own-vehicle position deteriorates unless the differences in observation times between the observation devices are taken into account.
Regarding (b): the operation cycle and operation timing are not synchronized between the observation devices and the integration processing device either. That is, the observation time (the time of observation at the observation device) does not coincide with the output time (the time for which the integration processing device wants an estimate). Therefore, there is a problem that, when this time difference is not taken into account, it becomes a factor degrading the accuracy of the own-vehicle position estimate at the output time.
FIG. 1 is a block diagram showing the overall schematic configuration of an own-vehicle position integration processing system including the own-vehicle position integration processing device according to Embodiment 1. FIG. 2 is a block diagram showing an example of the own-vehicle position integration processing device according to Embodiment 1. FIG. 3 is a diagram explaining the relationship between observed values and output values of the own-vehicle position. FIG. 4 is a diagram schematically showing the operation of the own-vehicle position integration processing method in Embodiment 1. FIG. 5 is a diagram showing the relationship between the observed, predicted, estimated, and output values of the own-vehicle position in FIG. 4. FIG. 6 is a flowchart of the external information acquisition process in Embodiment 1. FIG. 7 is a time chart of the operation of the observation devices and the own-vehicle position integration processing device in Embodiment 1. FIG. 8 is a flowchart of the own-vehicle position integration process in Embodiment 1. FIG. 9 is a time chart of the correction times and observation times of the own-vehicle motion information in Embodiment 1.
The own-vehicle position integration processing system 10 comprises the own-vehicle position integration processing device 1; an own-vehicle motion information observation device 11 that provides own-vehicle motion information to the device 1; own-vehicle position observation devices 12, 13, and 14 (here, own-vehicle position observation devices A, B, and C) that provide own-vehicle position information; and a vehicle control device 20 to which the own-vehicle position information estimated by the own-vehicle position integration processing device 1 is provided.
Although this embodiment describes, as an example, the case of acquiring three pieces of own-vehicle position information za, zb, and zc, it is sufficient that there is at least one piece of own-vehicle position information, and the configuration inside the own-vehicle position integration processing device 1 is unchanged.
(1) From the estimated value xest(n-1) of the own-vehicle position updated at the previous observation time tobs(n-1), prediction for the observation time tobs(n) (processes 1, 3, 5, and 7 in FIG. 4) is performed using the own-vehicle motion information u(m) from the own-vehicle motion information observation device 11. The own-vehicle position is then updated (processes 2, 4, 6, and 8 in FIG. 4) using the predicted value xpred(n) and the observed value xobs(n) of the own-vehicle position from the own-vehicle position observation devices 12 to 14. These prediction and update processes can be performed using known techniques such as a Kalman filter.
(2) The above prediction and update are executed sequentially, in order from the oldest observation time obtained by the own-vehicle motion information observation device 11 and the own-vehicle position observation devices 12 to 14.
(3) At this time, the own-vehicle position xest(n) is estimated while changing the observation-error parameters appropriately for each observation device.
(4) Taking as a reference the result of the prediction/update process at the observation time tobs(n) closest to the output time tout, the own-vehicle position xout is estimated by a process (output process) that predicts to the output time tout using the vehicle motion information.
By integrating the own-vehicle position information in this way, performing processing that takes the observation times tobs(n) and the output time tout into account improves the estimation accuracy of the own-vehicle position.
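Steps (1) to (4) above can be sketched as a small sequential filter. The following is a minimal, hypothetical one-dimensional illustration (scalar position, a single motion input u, A = 1, B = Δt, H = 1); the function names and all numeric noise values are assumptions for illustration, not values from the embodiment.

```python
# Minimal 1-D sketch of the sequential predict/update cycle of steps (1)-(4):
# observations are processed in order of observation time, each first predicted
# forward with the motion information, then corrected with the observed position.

def predict(x_est, p_est, u, dt, q=0.01):
    # x_pred = x_est + u*dt  (A = 1, B = dt in the scalar case)
    x_pred = x_est + u * dt
    p_pred = p_est + q          # P_pred = A P A^T + Q
    return x_pred, p_pred

def update(x_pred, p_pred, z, r):
    k = p_pred / (p_pred + r)   # Kalman gain (H = 1)
    x_est = x_pred + k * (z - x_pred)
    p_est = (1.0 - k) * p_pred
    return x_est, p_est

def integrate(x0, p0, t0, observations, u, t_out, r_by_sensor):
    """observations: list of (t_obs, z, sns_id), assumed pre-sorted by t_obs."""
    x_est, p_est, t_prev = x0, p0, t0
    for t_obs, z, sns_id in observations:
        x_pred, p_pred = predict(x_est, p_est, u, t_obs - t_prev)
        # (3): observation-error parameter chosen per observation device
        x_est, p_est = update(x_pred, p_pred, z, r_by_sensor[sns_id])
        t_prev = t_obs
    # (4): extrapolate from the last update to the output time t_out
    x_out, _ = predict(x_est, p_est, u, t_out - t_prev)
    return x_out
```

For example, two position observations from devices "a" and "b" taken at different times are fused in time order, and the result is then projected to the requested output time rather than left at the last observation time.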
<Step S1-1: Current time acquisition step>
Upon starting the external information acquisition process, this step first acquires the current time trcv at which the external information acquisition process starts. Here, the time at which this process was invoked is obtained from the time management unit 2.
Next, this step acquires the external information. Here, an identifier sns_id identifying the type of the acquired external information (in this example, the own-vehicle position information z(n) from the own-vehicle position observation devices 12, 13, and 14, or the own-vehicle motion information u(m) from the own-vehicle motion information observation device 11) is assigned by the observation information management unit 4.
Furthermore, this step assigns the observation time tobs. Here, the time at which the external information acquired in this step (the own-vehicle motion information or the own-vehicle position observation information) was observed is attached to the observation information.
Finally, this step accumulates (stores) the observation information. Here, the observation information obtained from outside as set above, with the identifier sns_id and the observation time tobs attached for each type of external information, is stored in the own-vehicle motion information management unit 3 if it is own-vehicle motion information, or in the observation information management unit 4 if it is own-vehicle position information.
This completes the external information acquisition process.
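The bookkeeping in this acquisition process (tagging each piece of external information with sns_id and tobs, then routing it to the matching management unit) can be sketched as follows. The class and field names are hypothetical, chosen only to mirror the identifiers used in the text.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    sns_id: str      # identifier of the source device (hypothetical values)
    t_obs: float     # observation time attached in the acquisition step
    value: tuple     # z(n) for position information, u(m) for motion information

@dataclass
class ObservationStore:
    # mirrors the motion information management unit (3)
    # and the observation information management unit (4)
    motion: list = field(default_factory=list)
    position: list = field(default_factory=list)

    def accumulate(self, obs: Observation, is_motion: bool) -> None:
        # route the tagged observation to the matching management unit
        (self.motion if is_motion else self.position).append(obs)
```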
The content of each step is as follows. For the calculation of the predicted value in step S2-3-2 and the calculation of the estimated value in the update process of step S2-4-2, the equations for the case of using a Kalman filter are shown as an example.
Upon starting the own-vehicle position integration process, this step first sorts the observation information. Here, the plural pieces of observation information zobs accumulated in the observation information management unit 4 before this process starts are sorted in ascending order based on the observation time tobs assigned in the external information acquisition process.
Next, this step performs a loop over the observation information. Here, among the pieces of observation information sorted in step S2-1, the prediction process of step S2-3 and the update process of step S2-4 are performed for the output time tout and the following n = 1, 2, …, N pieces of observation information. The target observation information is selected and processed sequentially from the oldest of the observation times sorted in step S2-1, tobs(1) < tobs(2) < … < tobs(N) <= tout.
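Steps S2-1 and S2-2 amount to sorting the accumulated observations by their attached observation time and iterating over those not later than the output time. A minimal sketch (the tuple layout is an assumption for illustration):

```python
def select_sorted(observations, t_out):
    """Sort (t_obs, z, sns_id) tuples in ascending order of t_obs and keep
    those with t_obs(1) < ... < t_obs(N) <= t_out, as in steps S2-1/S2-2."""
    ordered = sorted(observations, key=lambda o: o[0])
    return [o for o in ordered if o[0] <= t_out]
```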
Furthermore, this step performs the prediction process for the own-vehicle position, with the following inputs and outputs.
Input/Output: tobs(n): observation time of the own-vehicle position information z(n) observed by the own-vehicle position observation device
Input: tego(m): observation times of the M pieces of own-vehicle motion information close to the own-vehicle position observation time tobs(n)
Input: u(m): M pieces of own-vehicle motion information corresponding to tego(m)
Input: tobs(n-1): observation time of the own-vehicle position information in the previous update process
Input: xest(n-1): estimated value of the own-vehicle position in the previous update process
Output: xpred(n): predicted value of the own-vehicle position at time tobs(n), calculated by the prediction process
This step corrects the own-vehicle motion information. Here, the input own-vehicle motion information u(m) is corrected to the own-vehicle motion information u(n) at time tego. The corrected u(n) is used to calculate the predicted value of the own-vehicle position at time tobs(n).
As shown in FIG. 9, the own-vehicle motion information, too, cannot be acquired exactly at a desired time tego (for example, tobs(n), or the midpoint between tobs(n-1) and tobs(n)).
In FIG. 9, tego is the time of the corrected own-vehicle motion information; tego(1) is the observation time of the own-vehicle motion information closest to tego; tego(2) is the observation time of the own-vehicle motion information second closest to tego; tobs(n-1) is the observation time of the own-vehicle position information in the previous update process; and tobs(n) is the observation time of the own-vehicle position information observed by the own-vehicle position observation device. FIG. 9 shows the case where the corrected time tego is midway between tobs(n-1) and tobs(n). Therefore, using the two pieces of own-vehicle motion information u(1) and u(2) at the times tego(1) and tego(2) closest to the desired time tego, the own-vehicle motion information at time tego is calculated by, for example, a linear approximation such as the following equation.
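The equation itself appears only as an image in the source; the linear approximation described (interpolating between the two motion samples nearest the desired time tego) is the standard two-point formula, which can be sketched as:

```python
def interpolate_motion(t_ego, t1, u1, t2, u2):
    """Linearly approximate the motion information at time t_ego from the two
    nearest samples (t1, u1) and (t2, u2), as described for FIG. 9."""
    if t2 == t1:                 # degenerate case: identical sample times
        return u1
    w = (t_ego - t1) / (t2 - t1)
    return u1 + w * (u2 - u1)
```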
This step calculates the predicted value of the own-vehicle position using the own-vehicle motion information. Here, the predicted own-vehicle position xpred(n) at time tobs(n) is calculated using the estimated value xest(n-1) from the previous update process, the own-vehicle motion information u(n), and the elapsed time Δt from time tobs(n-1) to time tobs(n). The calculation is given by the following equation, where A and B are coefficients describing the change of the state x from one step to the next, Ppred(n) is the covariance matrix of the prediction error, and Q is the system error.
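The equation referenced here is only an image in the source. The standard Kalman-filter prediction step, written to be consistent with the symbols defined above (A, B, Q, Ppred), would read:

```latex
x_{\mathrm{pred}}(n) = A\,x_{\mathrm{est}}(n-1) + B\,u(n)
P_{\mathrm{pred}}(n) = A\,P_{\mathrm{est}}(n-1)\,A^{\mathsf T} + Q
```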
This step updates the estimated value of the own-vehicle position, with the following inputs and outputs.
Input/Output: tobs(n): observation time of the own-vehicle position information z(n) observed by the own-vehicle position observation device
Input: z(n): own-vehicle position information observed by the own-vehicle position observation device
Input: sns_id: identifier representing the type of the own-vehicle position observation device
Input: reli: reliability information of the own-vehicle position output by the own-vehicle position observation device
Input: xpred: predicted value of the own-vehicle position at time tobs(n), calculated by the prediction process
Output: xest: estimated value of the own-vehicle position at time tobs(n), calculated by the update process
This step changes the value of the observation-error parameter R used in the update calculation.
The value of R is changed with the observation-device identifier sns_id and the reliability reli output by the observation device as explanatory variables. For this purpose, a table associating the values of sns_id and reli with the values of the observation error R is prepared in advance.
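A table mapping (sns_id, reli) to an observation-error value R, as described above, could look like this. The device identifiers and numeric variances are purely illustrative assumptions, not values from the patent.

```python
# Hypothetical lookup of the observation-error parameter R from the device
# identifier sns_id and the device-reported reliability reli (step S2-4-1).
R_TABLE = {
    ("gnss", "high"): 0.5,
    ("gnss", "low"): 5.0,
    ("camera", "high"): 1.0,
    ("camera", "low"): 10.0,
}

def observation_error(sns_id, reli, default=10.0):
    """Return the observation-error parameter R for this device/reliability
    pair, falling back to a conservative default for unknown combinations."""
    return R_TABLE.get((sns_id, reli), default)
```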
This step calculates the estimated value of the own-vehicle position using the predicted value obtained in step S2-3-2, the observation error obtained in step S2-4-1, and the own-vehicle position information z(n). The calculation is given by the following equations, where K(n) is the Kalman gain and Pest(n) is the updated error covariance matrix.
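These equations are likewise images in the source. The standard Kalman update with an observation matrix H, consistent with the symbols K(n), Pest(n), and R defined above, would be:

```latex
K(n) = P_{\mathrm{pred}}(n)\,H^{\mathsf T}\bigl(H\,P_{\mathrm{pred}}(n)\,H^{\mathsf T} + R\bigr)^{-1}
x_{\mathrm{est}}(n) = x_{\mathrm{pred}}(n) + K(n)\bigl(z(n) - H\,x_{\mathrm{pred}}(n)\bigr)
P_{\mathrm{est}}(n) = \bigl(I - K(n)\,H\bigr)P_{\mathrm{pred}}(n)
```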
This step performs the output process for the own-vehicle position, with the following inputs and outputs.
Input: tego(m): observation times of the M pieces of own-vehicle motion information close to the own-vehicle position observation time tobs(n)
Input: u(m): M pieces of own-vehicle motion information corresponding to tego(m)
Input: tobs(n): observation time of the own-vehicle position information in the update process
Input: xest(n): estimated value of the own-vehicle position in the update process
Input/Output: tout: output time of the own-vehicle position calculated by the own-vehicle position integration processing device
Output: xout: own-vehicle position at the output time tout, calculated by the own-vehicle position integration processing device
This step corrects the own-vehicle motion information u. Here, the same processing as in step S2-3-1 is performed, except that the times used run from the observation time tobs(n) to the intended predetermined output time tout. Specifically, it is as follows.
The input own-vehicle motion information u(m) is corrected to the own-vehicle motion information u(n) at time tego. The corrected u(n) is used when calculating the own-vehicle position output value xout at the output time tout.
As shown in FIG. 9, the own-vehicle motion information, too, cannot be acquired exactly at the corrected time tego (for example, tobs(n), or the midpoint between tout and tobs(n)). In FIG. 9, tego is the time of the corrected own-vehicle motion information; tego(1) is the observation time of the own-vehicle motion information closest to tego; tego(2) is the observation time of the own-vehicle motion information second closest to tego; tobs(n-1) is the observation time of the own-vehicle position information in the previous update process; and tobs(n) is the observation time of the own-vehicle position information observed by the own-vehicle position observation device. FIG. 9 shows the case where the corrected time tego is midway between tobs(n-1) and tobs(n).
Therefore, using the two pieces of own-vehicle motion information u(1) and u(2) at the times tego(1) and tego(2) closest to the desired time tego, the own-vehicle motion information at time tego is calculated by, for example, a linear approximation such as the following equation.
This step calculates the output value of the own-vehicle position using the own-vehicle motion information. Here, the same processing as in step S2-3-2 is performed. Specifically, it is as follows.
Using the estimated value xest(n) from the update process, the own-vehicle motion information u(n), and the elapsed time Δt from time tobs(n) to time tout, the own-vehicle position xout(n) at the output time tout is calculated by the following equation.
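Since this output step reuses the prediction formula with Δt = tout − tobs(n), a scalar sketch (A = 1, B = Δt, purely illustrative) is:

```python
def output_position(x_est, u, t_obs, t_out):
    """Extrapolate the last estimate x_est(n) to the output time t_out using
    the corrected motion information u(n), as in the output step above."""
    dt = t_out - t_obs
    return x_est + u * dt      # scalar form of x_out = A x_est + B u
```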
Accordingly, innumerable modifications not illustrated here are envisaged within the scope of the technology disclosed in the present application. For example, cases where at least one component is modified, added, or omitted are included.
Claims (14)
- An own-vehicle position integration processing device comprising:
a prediction unit that calculates a predicted value of a current own-vehicle position at a current observation time, using own-vehicle motion information and an observation time thereof acquired from an own-vehicle motion information observation device, a current observation time and a previous observation time of own-vehicle position information acquired from an own-vehicle position observation device, and a previous estimated value of the own-vehicle position at the previous observation time;
an update unit that calculates and updates an estimated value of the own-vehicle position at the current observation time, using the predicted value acquired from the prediction unit, the current observation time, and the current own-vehicle position; and
an output unit that calculates and outputs an output value in accordance with a predetermined output time, using the estimated value, the own-vehicle motion information, and the observation time. - The own-vehicle position integration processing device according to claim 1, wherein the calculation of the predicted value and the calculation of the estimated value are performed sequentially in order from the oldest observation time of the own-vehicle position information.
- The own-vehicle position integration processing device according to claim 1, wherein the calculation of the output value uses the estimated value at the observation time of the own-vehicle position information closest to the output time.
- The own-vehicle position integration processing device according to claim 1, wherein, in the calculation of the predicted value, one or more pieces of the own-vehicle motion information are used to correct the own-vehicle motion information to that at the previous observation time, the current observation time, or a time intermediate between the previous observation time and the current observation time.
- The own-vehicle position integration processing device according to claim 1, wherein, in the calculation of the estimated value, an observation-error parameter is changed according to the observation times at which the own-vehicle motion information and the own-vehicle position information were acquired.
- The own-vehicle position integration processing device according to claim 1, wherein, in the calculation of the estimated value, an observation-error parameter is changed according to the reliability of the own-vehicle position information.
- The own-vehicle position integration processing device according to claim 1, wherein the observation time is calculated in consideration of the time from the observation of the own-vehicle position by the own-vehicle position observation device until the observation information of the own-vehicle position is acquired.
- An own-vehicle position integration processing method comprising:
a step of calculating a predicted value of a current own-vehicle position at a current observation time, using own-vehicle motion information and an observation time thereof acquired from an own-vehicle motion information observation device, a current observation time and a previous observation time of own-vehicle position information acquired from an own-vehicle position observation device, and a previous estimated value of the own-vehicle position at the previous observation time;
a step of calculating and updating an estimated value of the own-vehicle position at the current observation time, using the predicted value, the current observation time, and the current own-vehicle position; and
a step of calculating and outputting an output value in accordance with a predetermined output time, using the estimated value, the own-vehicle motion information, and the observation time. - The own-vehicle position integration processing method according to claim 8, wherein the calculation of the predicted value and the calculation of the estimated value are performed sequentially in order from the oldest observation time of the own-vehicle position information.
- The own-vehicle position integration processing method according to claim 8, wherein the calculation of the output value uses the estimated value at the observation time of the own-vehicle position information closest to the output time.
- The own-vehicle position integration processing method according to claim 8, wherein, in the calculation of the predicted value, one or more pieces of the own-vehicle motion information are used to correct the own-vehicle motion information to that at the previous observation time, the current observation time, or a time intermediate between the previous observation time and the current observation time.
- The own-vehicle position integration processing method according to claim 8, wherein, in the calculation of the estimated value, an observation-error parameter is changed according to the observation times at which the own-vehicle motion information and the own-vehicle position information were acquired.
- The own-vehicle position integration processing method according to claim 8, wherein, in the calculation of the estimated value, an observation-error parameter is changed according to the reliability of the own-vehicle position information.
- The own-vehicle position integration processing method according to claim 8, wherein the observation time is calculated in consideration of the time from the observation of the own-vehicle position by the own-vehicle position observation device until the observation information of the own-vehicle position is acquired.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/011141 WO2022195813A1 (ja) | 2021-03-18 | 2021-03-18 | 自車位置統合処理装置及び自車位置統合処理方法 |
US18/039,528 US20240102825A1 (en) | 2021-03-18 | 2021-03-18 | Own-vehicle position integration processing apparatus and own-vehicle position integration processing method |
CN202180095388.6A CN116964415A (zh) | 2021-03-18 | 2021-03-18 | 本车位置综合处理装置和本车位置综合处理方法 |
DE112021007310.9T DE112021007310T5 (de) | 2021-03-18 | 2021-03-18 | Eigenfahrzeug-Positionsintegrations-Verarbeitungsvorrichtung und Eigenfahrzeug-Positionsintegrations-Verarbeitungsverfahren |
JP2023506634A JPWO2022195813A1 (ja) | 2021-03-18 | 2021-03-18 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/011141 WO2022195813A1 (ja) | 2021-03-18 | 2021-03-18 | 自車位置統合処理装置及び自車位置統合処理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022195813A1 true WO2022195813A1 (ja) | 2022-09-22 |
Family
ID=83322018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/011141 WO2022195813A1 (ja) | 2021-03-18 | 2021-03-18 | 自車位置統合処理装置及び自車位置統合処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240102825A1 (ja) |
JP (1) | JPWO2022195813A1 (ja) |
CN (1) | CN116964415A (ja) |
DE (1) | DE112021007310T5 (ja) |
WO (1) | WO2022195813A1 (ja) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010066073A (ja) * | 2008-09-09 | 2010-03-25 | Nec Corp | 移動体位置推定システム、移動体位置推定方法、及び移動体位置推定プログラム |
WO2020209144A1 (ja) * | 2019-04-09 | 2020-10-15 | パイオニア株式会社 | 位置推定装置、推定装置、制御方法、プログラム及び記憶媒体 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9255989B2 (en) * | 2012-07-24 | 2016-02-09 | Toyota Motor Engineering & Manufacturing North America, Inc. | Tracking on-road vehicles with sensors of different modalities |
JP6776707B2 (ja) | 2016-08-02 | 2020-10-28 | トヨタ自動車株式会社 | 自車位置推定装置 |
- 2021
- 2021-03-18 US US18/039,528 patent/US20240102825A1/en active Pending
- 2021-03-18 WO PCT/JP2021/011141 patent/WO2022195813A1/ja active Application Filing
- 2021-03-18 DE DE112021007310.9T patent/DE112021007310T5/de active Pending
- 2021-03-18 JP JP2023506634A patent/JPWO2022195813A1/ja active Pending
- 2021-03-18 CN CN202180095388.6A patent/CN116964415A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
DE112021007310T5 (de) | 2024-01-04 |
US20240102825A1 (en) | 2024-03-28 |
JPWO2022195813A1 (ja) | 2022-09-22 |
CN116964415A (zh) | 2023-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107884800B (zh) | 观测时滞系统的组合导航数据解算方法、装置及导航设备 | |
CN110968087B (zh) | 车辆控制参数的标定方法、装置、车载控制器和无人车 | |
US10422658B2 (en) | Method, fusion filter, and system for fusing sensor signals with different temporal signal output delays into a fusion data set | |
EP2264403B1 (en) | Positioning device and positioning method | |
US11036231B2 (en) | In-vehicle device and estimation method | |
CN109143304B (zh) | 用于确定无人驾驶车辆位姿的方法和装置 | |
US11802973B2 (en) | Method for an adaptive ascertainment of an integrity range of a parameter estimate | |
JP2017194456A (ja) | ナビゲーションシステム及び誤差補正のための方法 | |
JP2017015410A (ja) | センサ出力補正装置 | |
CN112781586A (zh) | 一种位姿数据的确定方法、装置、电子设备及车辆 | |
CN113405545A (zh) | 定位方法、装置、电子设备及计算机存储介质 | |
US20170122770A1 (en) | Method and system for providing dynamic error values of dynamic measured values in real time | |
JP7196876B2 (ja) | センサ遅延時間推定装置 | |
CN117367419A (zh) | 机器人定位方法、装置和计算可读存储介质 | |
WO2022195813A1 (ja) | 自車位置統合処理装置及び自車位置統合処理方法 | |
CN114739415A (zh) | 基于多传感器融合的多车定位方法、装置及计算机设备 | |
WO2017141469A1 (ja) | 位置推定装置 | |
JP2018179926A (ja) | 物体認識処理装置、物体認識処理方法および車両制御システム | |
EP4123370B1 (en) | Triggering system | |
CN114964270B (zh) | 融合定位方法、装置、车辆及存储介质 | |
CN114088093B (zh) | 一种点云地图构建方法、装置、系统及存储介质 | |
CN111982179B (zh) | 异常检测设备、异常检测方法以及计算机可读介质 | |
CN115290081A (zh) | 一种基于rtos的嵌入式ins/gps组合导航方法 | |
CN115037703A (zh) | 数据处理方法、装置、计算机存储介质及计算机程序产品 | |
US7120522B2 (en) | Alignment of a flight vehicle based on recursive matrix inversion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21931562; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 2023506634; Country of ref document: JP; Kind code of ref document: A |
 | WWE | Wipo information: entry into national phase | Ref document number: 18039528; Country of ref document: US |
 | WWE | Wipo information: entry into national phase | Ref document number: 202180095388.6; Country of ref document: CN |
 | WWE | Wipo information: entry into national phase | Ref document number: 112021007310; Country of ref document: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 21931562; Country of ref document: EP; Kind code of ref document: A1 |