WO2017130397A1 - Position estimation device, position estimation method, and position estimation program - Google Patents

Position estimation device, position estimation method, and position estimation program

Info

Publication number
WO2017130397A1
Authority
WO
WIPO (PCT)
Prior art keywords
movement
moving object
image
imaging
trajectory
Prior art date
Application number
PCT/JP2016/052758
Other languages
French (fr)
Japanese (ja)
Inventor
藤田 卓志
水谷 政美
真司 神田
佐藤 裕一
直之 沢崎
Original Assignee
富士通株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通株式会社
Priority to PCT/JP2016/052758
Publication of WO2017130397A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network

Definitions

  • The present invention relates to a position estimation device, a position estimation method, and a position estimation program.
  • Techniques for estimating the position of a moving object from images include SLAM (Simultaneous Localization and Mapping) and Visual Odometry, which analyze images captured by an imaging device.
  • In these techniques, a target is detected in each of several images captured at different times, and the movement direction and distance of the moving object are calculated by tracking the changes in the coordinates at which the target appears in the images.
  • Estimating the position of a moving object from images is useful when the moving object is indoors, where it is difficult to use a satellite positioning system such as GPS (Global Positioning System).
  • Estimating the position of a moving object from images is also useful when the position must be estimated with higher accuracy than a satellite positioning system provides.
  • For position estimation using images, a technique has been proposed in which an imaging device with a wide-field fisheye lens is mounted on the moving object and the position is estimated by analyzing the fisheye image.
  • Another proposed technique mounts two imaging devices (a stereo camera) facing in the same direction on the moving object and estimates the position from the stereo images captured by the two devices.
  • However, depending on the surroundings, the image captured by the imaging device may be one in which it is difficult to detect targets around the moving object.
  • In that case, the accuracy of estimating the position of the moving object decreases.
  • For example, when a featureless plane such as the ground or a blank wall occupies a large area of the image, the estimation accuracy is likely to decrease.
  • When halation occurs and the image washes out to white because strong light such as sunlight strikes the imaging device, the estimation accuracy is also likely to decrease.
  • The estimation accuracy likewise tends to decrease when the shadow of the moving object appears in the image and moves together with it, or when another moving object appears in the image.
  • In one aspect, an object of the present invention is to provide a position estimation device, a position estimation method, and a position estimation program that improve the accuracy of estimating the position of a moving object from images.
  • In one aspect, a position estimation device having an image acquisition unit and a determination unit is provided.
  • The image acquisition unit acquires a sequence of first image data with different imaging times from a first imaging device provided on the moving object, and acquires a sequence of second image data with different imaging times from a second imaging device provided on the moving object whose imaging direction differs from that of the first imaging device.
  • The determination unit calculates, from the sequence of first image data, a first movement trajectory indicating a first estimate of the route along which the moving object has moved, calculates, from the sequence of second image data, a second movement trajectory indicating a second estimate of the route along which the moving object has moved, and determines the position of the moving object using the first movement trajectory and the second movement trajectory.
  • In one aspect, a position estimation method executed by the position estimation device is provided.
  • In one aspect, a position estimation program to be executed by a computer is provided.
  • FIG. 1 is a diagram illustrating an example of the position estimation device of the first embodiment. FIG. 2 is a diagram illustrating an arrangement example of the imaging devices in a vehicle.
  • FIG. 1 is a diagram illustrating an example of a position estimation apparatus according to the first embodiment.
  • the position estimation device 10 estimates the position of the moving object 20 from images of a plurality of imaging devices mounted on the moving object 20.
  • the moving object 20 is a movable artificial object, such as a vehicle, a robot, or a drone.
  • the position estimation device 10 may be mounted on the moving object 20 or may exist outside the moving object 20.
  • Information indicating the estimated position of the moving object 20 may be output from an output device such as a display or a speaker, or may be used for autonomous traveling of the moving object 20 (for example, automatic parking of a vehicle).
  • the position estimation device 10 includes an image acquisition unit 11 and a determination unit 12.
  • The image acquisition unit 11 is an interface that acquires image data from the imaging devices.
  • the determination unit 12 determines the position of the moving object 20 by processing the image data captured by the image acquisition unit 11.
  • As the determination unit 12, for example, a processor such as a CPU (Central Processing Unit) or a DSP (Digital Signal Processor) can be used.
  • the determination unit 12 may include an electronic circuit for a specific application such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
  • the processor executes a program stored in a memory such as a RAM (Random Access Memory).
  • the program includes a position estimation program in which processing described below is described.
  • a set of multiple processors (multiprocessor) may be referred to as a “processor”.
  • the moving object 20 has a plurality of imaging devices including an imaging device 21 (first imaging device) and an imaging device 22 (second imaging device).
  • The imaging devices 21 and 22 are installed so that their imaging directions differ from each other. It is preferable that the visual field range of the imaging device 21 and that of the imaging device 22 overlap as little as possible, and the two visual field ranges may not overlap at all.
  • For example, the center of the visual field range of the imaging device 21 (the direction in which the center of the image is captured) and the center of the visual field range of the imaging device 22 may be separated by 90° or more as viewed from the moving object 20.
  • For example, the imaging device 21 is installed at the front of the moving object 20 (facing the normal traveling direction), and the imaging device 22 is installed at the rear of the moving object 20 (facing the direction opposite to the normal traveling direction).
  • Alternatively, the imaging device 21 is installed on the left side surface of the moving object 20 and the imaging device 22 on the right side surface of the moving object 20.
  • Alternatively, the imaging device 21 is installed at the front of or behind the moving object 20 and the imaging device 22 on the left or right side surface of the moving object 20.
  • the image acquisition unit 11 acquires a sequence of image data with different imaging times from each of a plurality of imaging devices included in the moving object 20.
  • For example, the image acquisition unit 11 acquires from the imaging device 21 a sequence 23 of image data (a first sequence of image data) captured by the imaging device 21. The image acquisition unit 11 also acquires from the imaging device 22 a sequence 24 of image data (a second sequence of image data) captured by the imaging device 22.
  • the determination unit 12 calculates a movement trajectory indicating an estimation of a route traveled by the moving object 20 from each of the plurality of image data sequences acquired by the image acquisition unit 11.
  • the determination unit 12 calculates a movement trajectory 13 (first movement trajectory) indicating the first estimation of the route from the image data sequence 23 acquired from the imaging device 21.
  • Similarly, the determination unit 12 calculates a movement trajectory 14 (a second movement trajectory) indicating a second estimate of the route from the sequence 24 of image data acquired from the imaging device 22. Since the movement trajectories 13 and 14 are calculated from different image data, they are unlikely to coincide exactly.
  • Image processing technology such as SLAM can be used for calculating the movement trajectories 13 and 14.
  • For example, the determination unit 12 detects targets (for example, the outer wall of a building or a white line on the ground) around the moving object 20 in each image of the image data sequence 23. The determination unit 12 then tracks the changes in the image coordinates at which each target appears, estimates the route along which the moving object 20 has moved, and calculates the movement trajectory 13. Similarly, the determination unit 12 detects targets in each image of the image data sequence 24 and calculates the movement trajectory 14.
  • The determination unit 12 determines the position 15 of the moving object 20 (for example, the current position of the moving object 20) using a plurality of movement trajectories including the movement trajectories 13 and 14. For example, the determination unit 12 generates a single combined movement trajectory by combining the plurality of movement trajectories and determines the position 15 based on the combined trajectory. The combination is performed, for example, by taking the average of the plurality of movement trajectories. Alternatively, the determination unit 12 may assign a weight to each movement trajectory according to its shape and take a weighted average of the trajectories; in that case, a small weight may be given to an unnatural movement trajectory, such as one that meanders.
  • When three or more movement trajectories are calculated, the determination unit 12 may also compare them and determine that a trajectory that deviates greatly from the others is an abnormal movement trajectory, which can then be excluded from the combination.
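  • A minimal sketch of this combination step is shown below, assuming that each movement trajectory is an array of 3D positions sampled at the same times; the function name, the weights, and the example values are hypothetical and only illustrate the (weighted) averaging described above.

```python
import numpy as np

def combine_trajectories(trajectories, weights=None):
    """Combine several movement trajectories (each an (N, 3) array of positions in a
    common coordinate system) into one trajectory by (optionally weighted) averaging."""
    trajs = np.stack(trajectories)          # shape: (num_trajectories, N, 3)
    if weights is None:
        return trajs.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # a small weight reduces the influence of e.g. a meandering trajectory
    return np.tensordot(w, trajs, axes=1)

# Example: two trajectories estimated independently from imaging devices 21 and 22.
traj13 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.2, 0.0, 2.0]])
traj14 = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 1.1], [0.3, 0.0, 2.1]])
print(combine_trajectories([traj13, traj14], weights=[0.7, 0.3]))
```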
  • According to the position estimation device 10 of the first embodiment, the image data sequence 23 is acquired from the imaging device 21 of the moving object 20, and the movement trajectory 13 is calculated from the image data sequence 23. The image data sequence 24 is acquired from the imaging device 22, whose imaging direction differs from that of the imaging device 21, and the movement trajectory 14 is calculated from the image data sequence 24 independently of the movement trajectory 13. The position 15 of the moving object 20 is then estimated using the movement trajectories 13 and 14.
  • Compared with a method that calculates a single movement trajectory from stereo images captured by the imaging devices 21 and 22, the imaging direction of the imaging device 22 can be shifted greatly from that of the imaging device 21. This reduces the risk that the images of both imaging devices are simultaneously unsuitable for position estimation, and the accuracy of determining the position 15 improves.
  • the vehicle 30 according to the second embodiment is a four-wheeled vehicle driven by a person.
  • the vehicle 30 includes four imaging devices as sensors that monitor the situation around the vehicle 30.
  • FIG. 2 is a diagram illustrating an arrangement example of the imaging devices in the vehicle.
  • the vehicle 30 includes imaging devices 31 to 34.
  • The imaging device 31 is installed at the front of the vehicle 30 so that its imaging direction (the direction perpendicular to the lens surface) coincides with the forward direction of the vehicle.
  • The imaging device 32 is installed at the rear of the vehicle 30 so that its imaging direction is opposite to the forward direction of the vehicle.
  • The imaging device 33 is installed on the left side surface of the vehicle so that its imaging direction is rotated 90° to the left from the forward direction of the vehicle.
  • The imaging device 34 is installed on the right side surface of the vehicle so that its imaging direction is rotated 90° to the right from the forward direction of the vehicle.
  • a fisheye lens is used for the imaging devices 31 to 34.
  • The field of view of each of the imaging devices 31 to 34 is 190° with respect to the lens surface. That is, the imaging devices 31 to 34 each have a visual field range of 190° in the horizontal direction and 190° in the vertical direction.
  • the vehicle 30 has four imaging devices whose imaging directions are shifted by 90 °.
  • However, the vehicle 30 may have any number of imaging devices equal to or greater than three.
  • the vehicle 30 may include six imaging devices whose imaging directions are shifted by 60 °.
  • FIG. 3 is a block diagram illustrating a hardware example of the vehicle.
  • the vehicle 30 includes an odometer 35, a GPS measurement unit 36, an automatic parking device 37, a position estimation device 100, and a navigation device 200 in addition to the imaging devices 31 to 34.
  • the position estimation apparatus 100 corresponds to the position estimation apparatus 10 of the first embodiment.
  • the odometer 35 measures the distance traveled by the vehicle 30 based on the rotational speed of the tire of the vehicle 30 and the like.
  • the mileage provided by the odometer 35 may be a cumulative mileage from the start of measurement, or may be a mileage in the latest fixed time.
  • the GPS measurement unit 36 receives a GPS signal from a GPS satellite, and calculates the current position of the vehicle 30 in the earth coordinate system based on the GPS signal.
  • the position in the earth coordinate system can be expressed by latitude and longitude.
  • the GPS measurement unit 36 may not receive a GPS signal and may not be able to calculate the current position.
  • the current position calculated by the GPS measurement unit 36 may include an error of several meters to several tens of meters.
  • the automatic parking device 37 moves the vehicle 30 to the parking space of the parking lot by automatic driving regardless of the user's driving.
  • According to an instruction from the user, the automatic parking device 37 shifts to an automatic parking mode in which the accelerator, brake, steering wheel, and other controls of the vehicle 30 are operated automatically.
  • the automatic parking device 37 uses the estimation result of the current position provided by the position estimation device 100 described later in order to determine an appropriate moving direction and moving amount.
  • the current position estimated by the position estimation device 100 is more accurate than the current position calculated by the GPS measurement unit 36.
  • the estimation error of the position estimation apparatus 100 is expected to be about several centimeters.
  • the position estimation apparatus 100 can estimate the current position even indoors where GPS signals cannot be received. Therefore, in the automatic parking mode, the estimation result of the position estimation device 100 is used.
  • the position estimation apparatus 100 analyzes images captured by the image capturing apparatuses 31 to 34 using SLAM, which is an image processing technique, and estimates the movement trajectory and the current position of the vehicle 30.
  • the movement trajectory and the current position output by the position estimation apparatus 100 can be expressed using absolute coordinates in the earth coordinate system, as in the case of the GPS measurement unit 36.
  • The position estimation device 100 sets a reference position in the earth coordinate system using external information such as a GPS signal, detects the relative movement of the vehicle 30 from the reference position by image analysis, and calculates the movement trajectory and the current position.
  • the position estimation device 100 outputs the estimated current position to the automatic parking device 37. Thus, automatic parking of the vehicle 30 is executed based on the estimated current position. Further, the position estimation device 100 outputs the estimated movement trajectory and the current position to the navigation device 200. Thereby, the movement trajectory and the current position are displayed on the screen of the navigation device 200 so as to overlap the map prepared in advance.
  • the position estimation apparatus 100 may use the output of the odometer 35 or the output of the GPS measurement unit 36. Details of the inside of the position estimation apparatus 100 will be described later.
  • the navigation device 200 is an in-vehicle device that supports the driving of the user of the vehicle 30 and presents the situation around the vehicle 30 to the user. For example, the navigation device 200 accepts designation of the destination from the user, calculates a recommended route from the current position measured by the GPS measurement unit 36 to the destination, and displays the recommended route on the display so as to overlap the map. The navigation device 200 may reproduce a voice message indicating a recommended route from a speaker.
  • the navigation device 200 acquires information on the movement locus and the current position from the position estimation device 100, and displays the movement locus and the current position on the display so as to overlap with the map.
  • the navigation device 200 may reproduce a voice message indicating the state of automatic parking from a speaker. Thereby, the user can confirm whether the vehicle 30 is moving appropriately during the automatic parking mode.
  • the automatic parking mode may be canceled by the user operating the navigation device 200 or another device.
  • The position estimation device 100 and the automatic parking device 37 may be housed in separate housings or in the same housing. Likewise, the position estimation device 100 and the navigation device 200 may be housed in separate housings or in the same housing.
  • FIG. 4 is a diagram illustrating a hardware example of the position estimation device and the navigation device.
  • the position estimation apparatus 100 includes a processor 101, a RAM 102, a ROM (Read Only Memory) 103, an image signal interface 104, an input interface 105, and an output interface 106. These units are connected to the bus.
  • the processor 101 corresponds to the determination unit 12 of the first embodiment.
  • the image signal interface 104 corresponds to the image acquisition unit 11 of the first embodiment.
  • the processor 101 is a controller including an arithmetic circuit that executes program instructions.
  • the processor 101 may be called a CPU or an ECU (Electronic Control Unit).
  • the processor 101 loads at least a part of the program and data stored in the ROM 103 into the RAM 102 and executes the program.
  • the RAM 102 is a volatile semiconductor memory that temporarily stores programs executed by the processor 101 and data used by the processor 101 for operations.
  • the position estimation apparatus 100 may include a type of memory other than the RAM, or may include a plurality of memories.
  • the ROM 103 is a non-volatile storage device that stores programs and data.
  • the program includes a position estimation program.
  • The ROM 103 may be a rewritable non-volatile storage device such as a flash memory.
  • the position estimation device 100 may include other types of storage devices, and may include a plurality of nonvolatile storage devices.
  • the image signal interface 104 is connected to the imaging devices 31 to 34, and acquires the image data generated by the imaging devices 31 to 34.
  • the input interface 105 is connected to the odometer 35, the GPS measurement unit 36, and the like, and acquires information on the measured mileage and the calculated current position.
  • the output interface 106 is connected to the automatic parking device 37, the navigation device 200, and the like, and outputs information on the estimated movement locus and the current position.
  • the navigation device 200 includes a processor 201, a RAM 202, a flash memory 203, a display 204, an input device 205, and a media reader 206. These units are connected to the bus.
  • the processor 201 is a controller including an arithmetic circuit that executes program instructions.
  • the processor 201 loads at least a part of the program and data stored in the flash memory 203 into the RAM 202 and executes the program.
  • the RAM 202 is a volatile semiconductor memory that temporarily stores programs executed by the processor 201 and data used by the processor 201 for calculation.
  • the flash memory 203 is a non-volatile storage device that stores programs and data.
  • the navigation device 200 may include other types of storage devices such as an HDD (Hard Disk Drive).
  • Display 204 displays an image in accordance with a command from processor 201.
  • various types of displays such as a liquid crystal display (LCD: Liquid Crystal Display) and an organic EL (OEL: Organic Electro-Luminescence) display can be used.
  • the input device 205 receives a user operation and outputs an input signal to the processor 201.
  • various types of input devices such as a touch panel, a keypad, and a trackball can be used.
  • the media reader 206 is a reading device that reads programs and data recorded on the recording medium 207.
  • As the recording medium 207, various types of recording media can be used, such as magnetic disks (for example, a flexible disk (FD) or an HDD), optical disks (for example, a CD (Compact Disk) or a DVD (Digital Versatile Disk)), a magneto-optical disk (MO), and semiconductor memory.
  • the medium reader 206 stores the read program and data in the RAM 202 or the flash memory 203.
  • FIG. 5 is a diagram illustrating an example of an imaging apparatus and a vehicle coordinate system.
  • In the following description, it is assumed that the vehicle 30 is placed on horizontal ground in the earth coordinate system.
  • the position estimation device 100 defines a coordinate system (camera coordinate system) unique to each imaging device for each of the imaging devices 31 to 34.
  • the camera coordinate system is a logical coordinate system for image analysis and is different from the earth coordinate system.
  • For the imaging device 31, coordinate axes C1X, C1Y, and C1Z are defined with the position of the imaging device 31 as the origin.
  • The positive direction of C1X is the horizontal right direction as viewed from the imaging device 31.
  • The positive direction of C1Y is the vertically downward direction as viewed from the imaging device 31.
  • The positive direction of C1Z is the front direction (imaging direction) as viewed from the imaging device 31.
  • Similarly, coordinate axes C2X, C2Y, and C2Z are defined with the position of the imaging device 32 as the origin.
  • Coordinate axes C3X, C3Y, and C3Z are defined with the position of the imaging device 33 as the origin.
  • Coordinate axes C4X, C4Y, and C4Z are defined with the position of the imaging device 34 as the origin.
  • The XY plane of each of the imaging devices 31 to 34 is parallel to the lens of that imaging device, and the XZ plane is a horizontal plane. Therefore, the images captured by the imaging devices 31 to 34 represent the XY plane of the respective camera coordinate system.
  • C2Z is parallel to C1Z rotated 180° in the XZ plane.
  • C3Z is parallel to C1Z rotated 90° to the left in the XZ plane.
  • C4Z is parallel to C1Z rotated 90° to the right in the XZ plane.
  • C2X is parallel to C1X rotated 180° in the XZ plane.
  • C3X is parallel to C1X rotated 90° to the left in the XZ plane.
  • C4X is parallel to C1X rotated 90° to the right in the XZ plane.
  • C1Y, C2Y, C3Y, and C4Y are parallel to one another.
  • When calculating the movement trajectory of the vehicle 30, the position estimation device 100 defines one coordinate system for the vehicle 30 (the vehicle coordinate system) in addition to the camera coordinate systems.
  • The vehicle coordinate system is also a logical coordinate system for image analysis and differs from the earth coordinate system.
  • The origin of the vehicle coordinate system is a predetermined position in the vehicle 30, for example, the center of gravity of the vehicle 30.
  • In the vehicle coordinate system, coordinate axes VX, VY, and VZ are defined.
  • The positive direction of VX is the horizontal right direction with respect to the forward direction of the vehicle 30.
  • The positive direction of VY is the vertically downward direction.
  • The positive direction of VZ is the forward direction of the vehicle 30. That is, VX is parallel to C1X, VY is parallel to C1Y, and VZ is parallel to C1Z.
  • In the process of calculating the movement trajectory, coordinates in the camera coordinate systems of the imaging devices 31 to 34 are converted into coordinates in the vehicle coordinate system of the vehicle 30.
  • The coordinate transformation from the former to the latter may be implemented using a transformation matrix or the like.
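  • The following is a minimal sketch of one such transformation, assuming a pure yaw rotation about the vertical axis (consistent with pitch and roll of 0° in the second embodiment); the function name and sign conventions are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def camera_to_vehicle(p_cam, yaw_deg, t_cam):
    """Convert a point from a camera coordinate system to the vehicle coordinate system.

    p_cam   : (x, y, z) point in the camera coordinate system (x right, y down, z forward)
    yaw_deg : yaw of the camera's imaging direction relative to the vehicle front
    t_cam   : mounting position of the camera in the vehicle coordinate system
    """
    yaw = np.radians(yaw_deg)
    # Rotation about the vertical (Y) axis; pitch and roll are assumed to be 0 degrees.
    r = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
                  [ 0.0,         1.0, 0.0        ],
                  [-np.sin(yaw), 0.0, np.cos(yaw)]])
    return r @ np.asarray(p_cam) + np.asarray(t_cam)

# Example: a point 2 m ahead of imaging device 33 (yaw -90 deg, mounted at (-1.0, 0.0, 0.8))
# ends up 2 m to the left of the mounting position in the vehicle coordinate system.
print(camera_to_vehicle((0.0, 0.0, 2.0), -90.0, (-1.0, 0.0, 0.8)))
```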
  • FIG. 6 is a diagram illustrating an example of images of four imaging devices.
  • the imaging devices 31 to 34 continuously capture images and output a sequence of image data indicating a sequence of images with different imaging times. For example, the imaging devices 31 to 34 capture images at a 1/30 second period. Images captured by the imaging devices 31 to 34 may be color images or monochrome images.
  • For example, the imaging device 31 captures the image 41 at time "9:30:00". Thereafter, the imaging device 31 captures an image every 1/30 second and captures the image 42 at time "9:30:05". Since the vehicle 30 is moving in the depth direction of the image, in the sequence of images captured by the imaging device 31, objects near the vehicle 30, such as other parked vehicles and the white lines indicating parking spaces, appear to move toward the periphery of the image. On the other hand, a building far from the vehicle 30 shows little change in position or size.
  • The imaging device 32 captures the image 43 at time "9:30:00". Thereafter, the imaging device 32 captures an image every 1/30 second and captures the image 44 at time "9:30:05". Since the vehicle 30 is moving in the direction opposite to the depth direction of the image, in the sequence of images captured by the imaging device 32, objects near the vehicle 30, such as other vehicles and the white lines indicating parking spaces, appear to move toward the center of the image. On the other hand, a building far from the vehicle 30 shows little change in position or size.
  • The imaging device 33 captures the image 45 at time "9:30:00". Thereafter, the imaging device 33 captures an image every 1/30 second and captures the image 46 at time "9:30:05". Since the vehicle 30 is moving in the right direction of the image, in the sequence of images captured by the imaging device 33, objects near the vehicle 30, such as other vehicles and the white lines indicating parking spaces, appear to move from right to left. A building far from the vehicle 30 also appears to move from right to left.
  • The imaging device 34 captures the image 47 at time "9:30:00". Thereafter, the imaging device 34 captures an image every 1/30 second and captures the image 48 at time "9:30:05". Since the vehicle 30 is moving in the left direction of the image, in the sequence of images captured by the imaging device 34, objects near the vehicle 30, such as other vehicles and the white lines indicating parking spaces, appear to move from left to right. A building far from the vehicle 30 also appears to move from left to right.
  • FIG. 7 is a diagram illustrating an example of target point extraction in SLAM processing.
  • the position estimation device 100 performs SLAM processing separately for each of the imaging devices 31 to 34.
  • By the SLAM processing, trajectory data indicating an estimate of the relative movement path from the reference position and point cloud data indicating an estimate of the position, in three-dimensional space, of each target point appearing in the images are generated.
  • That is, the position estimation device 100 generates trajectory data and point cloud data as estimation results from the sequence of image data output by the imaging device 31.
  • the position estimation device 100 generates trajectory data and point cloud data, which are estimation results, from the sequence of image data output by the imaging device 32.
  • the position estimation device 100 generates trajectory data and point cloud data as estimation results from the sequence of image data output by the imaging device 33.
  • the position estimation device 100 generates trajectory data and point cloud data as estimation results from the sequence of image data output by the imaging device 34.
  • the image 51 is an image captured by the imaging device 33.
  • the image 52 is an image captured by the imaging device 33 next to the image 51 (for example, after 1/30 second).
  • the position estimation device 100 analyzes the image 51 and extracts a target point from the image 51.
  • The target points are, for example, pixels of the image 51 whose color gradient (the amount of change in value from adjacent pixels) is equal to or greater than a threshold.
  • As target points, pixels whose color differs from that of adjacent pixels are extracted, such as the outline of a building, the window frames of a building, and white lines drawn on the ground.
  • the position estimation apparatus 100 generates an analysis image 53 indicating the target point extracted from the image 51.
  • the position estimation apparatus 100 analyzes the image 52 and extracts target points from the image 52.
  • the position estimation apparatus 100 generates an analysis image 54 indicating the target point extracted from the image 52.
  • The position estimation device 100 compares the analysis image 53 and the analysis image 54 and determines the correspondence between the target points included in the analysis image 53 and those included in the analysis image 54. For example, for each target point of the analysis image 54, the position estimation device 100 searches for a target point of the analysis image 53 whose distance is equal to or less than a threshold when the two images are superimposed, thereby finding pairs of target points that indicate substantially the same object. The position estimation device 100 then determines the change in position of each target point between the analysis image 53 and the analysis image 54. From this, the moving direction and moving distance of the imaging device 33 can be estimated, and the positions in three-dimensional space of the target points extracted from the images 51 and 52 can also be estimated.
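  • A minimal sketch of this extraction-and-matching step is shown below; gradient thresholding and nearest-neighbour matching are simplifications of the SLAM feature handling described above, and the function names, thresholds, and synthetic images are hypothetical.

```python
import numpy as np

def extract_target_points(img, grad_threshold=30.0):
    """Return (row, col) pixels whose local gradient magnitude is at or above a threshold."""
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    rows, cols = np.nonzero(magnitude >= grad_threshold)
    return np.column_stack([rows, cols])

def match_target_points(points_a, points_b, max_dist=5.0):
    """Pair each point of points_b with the nearest point of points_a within max_dist pixels."""
    pairs = []
    for p in points_b:
        d = np.linalg.norm(points_a - p, axis=1)
        i = int(np.argmin(d))
        if d[i] <= max_dist:
            pairs.append((points_a[i], p))
    return pairs

# Example with two synthetic grayscale frames standing in for images 51 and 52.
img51 = np.zeros((120, 160)); img51[40:80, 60:100] = 255.0
img52 = np.roll(img51, -3, axis=1)        # the scene appears shifted 3 px to the left
pairs = match_target_points(extract_target_points(img51),
                            extract_target_points(img52))
shift = np.mean([b - a for a, b in pairs], axis=0)
print("average pixel displacement (d_row, d_col):", shift)
```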
  • For example, when the target points move from the periphery of the image toward the center, it can be estimated that the imaging device 33 is moving in the negative direction of C3Z.
  • Conversely, when the target points move from the center toward the periphery, it can be estimated that the imaging device 33 is moving in the positive direction of C3Z.
  • When the target points move from left to right in the image, it can be estimated that the imaging device 33 is moving in the negative direction of C3X.
  • When the target points move from right to left in the image, it can be estimated that the imaging device 33 is moving in the positive direction of C3X.
  • It can also be estimated that a target point with a large amount of movement is near the imaging device 33 and that a target point with a small amount of movement is far from the imaging device 33.
  • FIG. 8 is a diagram illustrating an example of SLAM results for four imaging devices.
  • the position estimation device 100 calculates the movement locus 61 and the point group 65 using the image data output from the imaging device 31.
  • the position estimation device 100 calculates the movement locus 62 and the point group 66 using the image data output from the imaging device 32.
  • the position estimation device 100 calculates the movement locus 63 and the point group 67 using the image data output from the imaging device 33.
  • The position estimation device 100 calculates the movement trajectory 64 and the point group 68 using the image data output from the imaging device 34.
  • the movement trajectories 61 to 64 are calculated independently of each other.
  • the point groups 65 to 68 are calculated independently of each other.
  • the scale of the movement trajectory calculated from the image of each imaging device depends on the camera coordinate system of the imaging device, and the scales of the movement trajectories 61 to 64 are not unified. Further, the scale of the point group calculated from the image of each imaging device also depends on the camera coordinate system of the imaging device, and the scales of the point groups 65 to 68 are not unified.
  • some of the imaging devices 31 to 34 may take images (images unsuitable for SLAM processing) for which it is difficult to accurately extract the target points. For example, an image in which a plane such as a ground or a wall without a pattern occupies a large area is unsuitable for SLAM processing. In addition, an image in which halation has occurred due to strong light such as sunlight hitting the imaging device is not suitable for SLAM processing. Further, when the shadow of the vehicle 30 appears in the image and the shadow moves as the vehicle 30 moves, the image is not suitable for SLAM processing. Further, when another vehicle in the image is moving, the image is not suitable for SLAM processing.
  • From such an image, a movement trajectory or a point cloud with low accuracy may be calculated.
  • That is, a movement trajectory that is significantly different from the actual movement route of the vehicle 30, or a point cloud that is significantly different from the actual arrangement of the targets around the vehicle 30, may be calculated.
  • In the example of FIG. 8, it is assumed that the images of the imaging device 32 (for example, the images 43 and 44 in FIG. 6) are images from which it is difficult to extract target points because of the influence of the ground and distant buildings.
  • In the second embodiment, SLAM processing is performed independently for each of the imaging devices 31 to 34, whose imaging directions are shifted from one another by 90°.
  • Therefore, if the images of the imaging devices 31, 33, and 34 are not themselves difficult to extract target points from, the decrease in the estimation accuracy of the movement trajectory 62 does not affect the movement trajectories 61, 63, and 64.
  • Note, however, that the movement trajectories 61, 63, and 64 are calculated from different images and therefore do not completely coincide.
  • FIG. 9 is a diagram illustrating a synthesis example of SLAM results.
  • the position estimation apparatus 100 calculates the movement trajectories 61 to 64 as described above.
  • The movement trajectories 61 to 64 each represent the movement path from the reference position over a predetermined time (for example, the 5 seconds from time "9:30:00" to time "9:30:05").
  • However, the movement trajectories 61 to 64 are expressed in different camera coordinate systems. Therefore, the position estimation device 100 converts the coordinate system of the movement trajectories 61 to 64 from the camera coordinate systems to the vehicle coordinate system. As a result, the start points of the movement trajectories 61 to 64 move to the origin of the vehicle coordinate system, and the coordinate systems of the movement trajectories 61 to 64 are all unified into the coordinate system defined by VX, VY, and VZ.
  • Next, the position estimation device 100 converts the movement trajectories 61 to 64 into the movement trajectories 71 to 74 so that they have the same length in the vehicle coordinate system. For example, the position estimation device 100 determines a unified length and applies a similarity transformation to each of the movement trajectories 61 to 64 so that each reaches that length.
  • The unified length may be the length of one of the movement trajectories 61 to 64 (for example, the length of the longest one) or may differ from all of the movement trajectories 61 to 64.
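  • A minimal sketch of this length normalization is shown below, under the assumption that a trajectory is an array of 3D points and that its length is the sum of its segment lengths; the function names are hypothetical.

```python
import numpy as np

def trajectory_length(traj):
    """Total length of a polyline trajectory given as an (N, 3) array of points."""
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

def scale_to_length(traj, unified_length):
    """Similarity-scale a trajectory about its start point so its length becomes unified_length.
    Returns the scaled trajectory and the magnification that was applied."""
    traj = np.asarray(traj, dtype=float)
    factor = unified_length / trajectory_length(traj)
    return traj[0] + (traj - traj[0]) * factor, factor

# Example: scale two trajectories to the length of the longer one.
t1 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.1, 0.0, 2.0]])
t2 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.5], [0.0, 0.0, 1.1]])
unified = max(trajectory_length(t1), trajectory_length(t2))
t1_scaled, k1 = scale_to_length(t1, unified)
t2_scaled, k2 = scale_to_length(t2, unified)
print(k1, k2)   # the magnifications are later reused for the corresponding point groups
```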
  • Next, the position estimation device 100 searches for an abnormal movement trajectory that differs significantly from the other movement trajectories and, when one is found, excludes it from the subsequent processing.
  • For example, the position estimation device 100 calculates an average movement trajectory by averaging the movement trajectories 71 to 74 and calculates the degree of divergence (a "distance" between trajectories) between the average trajectory and each of the movement trajectories 71 to 74.
  • The position estimation device 100 determines that a movement trajectory whose degree of divergence from the average trajectory is equal to or greater than a threshold is an abnormal movement trajectory.
  • Alternatively, the position estimation device 100 calculates the degree of divergence for every pair of movement trajectories and determines that a trajectory whose degree of divergence from all of the other trajectories is equal to or greater than a threshold is an abnormal movement trajectory.
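  • One possible realization of the degree of divergence is the mean point-to-point distance between trajectories sampled at common times, as in the following sketch; this particular metric, the threshold value, and the synthetic trajectories are assumptions for illustration.

```python
import numpy as np

def divergence(traj_a, traj_b):
    """Mean distance between corresponding points of two (N, 3) trajectories
    sampled at the same times (for example, one point per captured frame)."""
    return float(np.mean(np.linalg.norm(traj_a - traj_b, axis=1)))

def find_abnormal(trajectories, threshold=0.5):
    """Return indices of trajectories whose divergence from the average trajectory
    is at or above the threshold."""
    avg = np.mean(trajectories, axis=0)
    return [i for i, t in enumerate(trajectories) if divergence(t, avg) >= threshold]

# Example with four length-normalized trajectories (71 to 74); the second one meanders.
base = np.column_stack([np.zeros(6), np.zeros(6), np.linspace(0.0, 5.0, 6)])
t71, t73, t74 = base.copy(), base.copy(), base.copy()
t72 = base.copy(); t72[:, 0] += np.array([0.0, 1.0, -1.0, 1.5, -1.5, 1.0])
print(find_abnormal(np.stack([t71, t72, t73, t74])))   # -> [1]
```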
  • In the example of FIG. 9, the movement trajectory 72 is determined to be an abnormal movement trajectory.
  • The position estimation device 100 combines the movement trajectories other than the abnormal one to calculate the movement trajectory 70, which is the combined movement trajectory.
  • The movement trajectory 70 is regarded as the correct movement trajectory.
  • For example, the position estimation device 100 uses the average of the movement trajectories 71, 73, and 74 as the combined movement trajectory.
  • Alternatively, the position estimation device 100 may assign weights to the movement trajectories 71, 73, and 74 and use their weighted average as the combined movement trajectory. The weights may be determined based on the shape of each movement trajectory.
  • For example, the weight of an unnatural movement trajectory, such as a meandering one, may be reduced.
  • Instead of excluding the abnormal movement trajectory, its weight may be reduced and a weighted average of all the movement trajectories 71 to 74 may be calculated.
  • the position estimation apparatus 100 converts the coordinate system of the movement locus 70 from the vehicle coordinate system to the earth coordinate system.
  • the length of the movement locus 70 may be different from the actual movement distance of the vehicle 30 in the earth coordinate system. Therefore, the position estimation apparatus 100 performs scaling of the movement locus 70 and adjusts the scale of the movement locus 70.
  • As a first method of scaling the movement trajectory 70, a method using the travel distance of the vehicle 30 measured by the odometer 35 can be considered. Based on the information output from the odometer 35, the position estimation device 100 obtains the travel distance of the vehicle 30 from the time at the reference position to the present (for example, the 5 seconds from time "9:30:00" to time "9:30:05"). The position estimation device 100 applies a similarity transformation to the movement trajectory 70 so that its length matches the travel distance. When the actual travel distance is used, the position estimation device 100 may instead make the lengths of the movement trajectories 71 to 74 match the travel distance when converting the movement trajectories 61 to 64 in FIG. 9; in that case, no further scaling of the movement trajectory 70 is necessary.
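  • A minimal sketch of this first scaling method is shown below; the helper name and the odometer reading are assumed inputs used only for illustration.

```python
import numpy as np

def scale_by_odometer(traj70, odometer_distance_m):
    """Similarity-scale the combined trajectory about its start point so that its
    total length matches the travel distance reported by the odometer."""
    length = float(np.sum(np.linalg.norm(np.diff(traj70, axis=0), axis=1)))
    factor = odometer_distance_m / length
    return traj70[0] + (traj70 - traj70[0]) * factor

# Example: the combined trajectory is about 1 unit long, but the odometer reports 4.2 m.
traj70 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.6], [0.1, 0.0, 1.0]])
print(scale_by_odometer(traj70, 4.2))
```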
  • As another scaling method, map data can be used. The position estimation device 100 selects, from the point groups 65 to 68 calculated by the SLAM processing, the point groups 65, 67, and 68, excluding the point group 66 that corresponds to the abnormal movement trajectory.
  • Next, the position estimation device 100 collates the point groups 65, 67, and 68 with the map.
  • The position estimation device 100 changes the scale of each of the point groups 65, 67, and 68 and calculates the magnification at which the target points overlap most with the lines drawn on the map.
  • The position estimation device 100 then obtains the desired length of the movement trajectory 70 by applying the magnifications calculated for the point groups 65, 67, and 68 to the movement trajectories 61, 63, and 64.
  • FIG. 10 is a diagram illustrating an example of the scale adjustment of the movement trajectory using the point group.
  • In the example of FIG. 10, from the point groups 65 to 68 calculated by the SLAM processing, the position estimation device 100 selects the point groups 65, 67, and 68, excluding the point group 66 that corresponds to the abnormal movement trajectory 62.
  • The position estimation device 100 applies a similarity transformation to the point group 65, around the origin of the camera coordinate system of the imaging device 31, using the magnification applied when the movement trajectory 61 was converted into the movement trajectory 71, to obtain the point group 75.
  • Similarly, the position estimation device 100 applies a similarity transformation to the point group 67, around the origin of the camera coordinate system of the imaging device 33, using the magnification applied when the movement trajectory 63 was converted into the movement trajectory 73, to obtain the point group 77.
  • The position estimation device 100 applies a similarity transformation to the point group 68, around the origin of the camera coordinate system of the imaging device 34, using the magnification applied when the movement trajectory 64 was converted into the movement trajectory 74, to obtain the point group 78.
  • the position estimation apparatus 100 scales the point groups 75, 77, 78 with a common magnification.
  • the scaling of the point group 75 is performed with the origin of the camera coordinate system of the imaging device 31 as the center.
  • the scaling of the point group 77 is performed with the origin of the camera coordinate system of the imaging device 33 as the center.
  • the scaling of the point group 78 is performed with the origin of the camera coordinate system of the imaging device 34 as the center. Since the origins of the coordinate systems are different from each other, the overlapping degree of the target points between the point groups 75, 77, and 78 changes when the magnification is changed.
  • the position estimation apparatus 100 calculates a magnification that maximizes the degree of overlap. For example, for each target point, a probability distribution that extends in a certain range around the target point is defined. The position estimation apparatus 100 adjusts the magnification so that the sum of the overlapping amounts of the probability distributions is maximized. In addition, for example, the position estimation apparatus 100 counts other target points that are within a predetermined range from the target point for each target point, and adjusts the magnification so that the total of the counts is maximized.
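  • The following sketch illustrates a simple stand-in for this overlap measure: every point group is scaled about its own camera origin by the same candidate magnification, and the magnification that minimizes the nearest-neighbour distances between points of different groups (equivalently, maximizes an overlap score) is kept. The nearest-neighbour criterion replaces the probability-distribution and counting variants described above, and the candidate range, origins, and synthetic points are assumptions.

```python
import numpy as np

def overlap_score(groups):
    """Higher is better: the negative sum of nearest-neighbour distances between
    points belonging to different point groups."""
    score = 0.0
    for i, a in enumerate(groups):
        for j, b in enumerate(groups):
            if i == j:
                continue
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
            score -= float(np.sum(np.min(d, axis=1)))
    return score

def best_common_magnification(groups, origins, candidates):
    """Scale every point group about its own camera origin by the same magnification
    and return the magnification that maximizes the overlap score."""
    best_m, best_s = None, None
    for m in candidates:
        scaled = [o + (g - o) * m for g, o in zip(groups, origins)]
        s = overlap_score(scaled)
        if best_s is None or s > best_s:
            best_m, best_s = m, s
    return best_m

# Example: two point groups observing the same three wall points, each reconstructed
# at half the true scale about its own camera origin.
origins = [np.array([0.0, 0.0, 2.5]), np.array([-1.0, 0.0, 0.8])]   # devices 31 and 33
true_points = np.array([[3.0, 0.0, 4.0], [3.0, 0.0, 5.0], [3.0, 0.0, 6.0]])
groups = [o + (true_points - o) * 0.5 for o in origins]
print(best_common_magnification(groups, origins, np.linspace(0.5, 3.0, 26)))  # about 2.0
```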
  • For example, the position estimation device 100 converts the point group 75 into the point group 85 at the common magnification, converts the point group 77 into the point group 87 at the common magnification, and converts the point group 78 into the point group 88 at the common magnification.
  • the position estimation apparatus 100 obtains the desired length of the movement track 70 by applying the common magnification to the movement tracks 71, 73, and 74. As a result, a movement locus 80 that matches the scale of the earth coordinate system is calculated.
  • FIG. 11 is a block diagram illustrating an example of functions of the position estimation apparatus.
  • The position estimation device 100 includes a SLAM result storage unit 111, a parameter storage unit 112, a map data storage unit 113, SLAM processing units 121 to 124, a trajectory comparison unit 125, a point group comparison unit 126, a position determination unit 127, a travel distance acquisition unit 128, and a GPS information acquisition unit 129.
  • The SLAM result storage unit 111, the parameter storage unit 112, and the map data storage unit 113 can be implemented using storage areas of the RAM 102 or the ROM 103.
  • the SLAM processing units 121 to 124, the trajectory comparison unit 125, the point group comparison unit 126, the position determination unit 127, the travel distance acquisition unit 128, and the GPS information acquisition unit 129 can be implemented using program modules.
  • the SLAM result storage unit 111 stores trajectory data and point cloud data generated by the SLAM processing units 121-124.
  • the SLAM result storage unit 111 stores intermediate data generated in the course of processing by the trajectory comparison unit 125 and the point group comparison unit 126.
  • the intermediate data includes data converted from the initial trajectory data and point cloud data.
  • the parameter storage unit 112 stores parameters indicating the camera coordinate system of the imaging devices 31 to 34 and the vehicle coordinate system of the vehicle 30.
  • the parameter is used for converting the coordinate system of the trajectory data and the point cloud data.
  • the parameters are defined in advance according to the arrangement of the imaging devices 31 to 34.
  • the map data storage unit 113 stores map data indicating roads and parking lots.
  • the map data includes, for example, the coordinates (latitude and longitude) of the earth coordinate system indicating the location of the road and the parking lot, and line data indicating the shape of the road and the parking lot.
  • the map data storage unit 113 may store only map data related to the parking lot.
  • the map data related to the parking lot preferably represents a detailed shape in the parking lot such as the arrangement of the parking space.
  • the SLAM processing units 121 to 124 generate trajectory data and point cloud data by image processing.
  • the SLAM processing units 121 to 124 operate independently of each other.
  • the SLAM processing units 121 to 124 output the generated trajectory data to the trajectory comparison unit 125.
  • the SLAM processing units 121 to 124 output the generated point group data to the point group comparison unit 126.
  • the SLAM processing unit 121 acquires image data from the imaging device 31, and generates trajectory data and point cloud data corresponding to the imaging device 31.
  • the SLAM processing unit 122 acquires image data from the imaging device 32 and generates trajectory data and point cloud data corresponding to the imaging device 32.
  • the SLAM processing unit 123 acquires image data from the imaging device 33 and generates trajectory data and point cloud data corresponding to the imaging device 33.
  • the SLAM processing unit 124 acquires image data from the imaging device 34 and generates trajectory data and point cloud data corresponding to the imaging device 34.
  • the trajectory comparison unit 125 uses the trajectory data generated by the SLAM processing units 121 to 124 to calculate a movement trajectory 70 (composite movement trajectory) in the vehicle coordinate system.
  • the trajectory comparison unit 125 uses the parameters stored in the parameter storage unit 112 to convert the coordinate system of the movement trajectories 61 to 64 indicated by the trajectory data of the SLAM processing units 121 to 124 from the camera coordinate system to the vehicle coordinate system.
  • the trajectory comparison unit 125 adjusts the scales of the movement trajectories 61 to 64 so that the movement trajectories 61 to 64 have the same length, and generates the movement trajectories 71 to 74.
  • the trajectory comparison unit 125 notifies the point group comparison unit 126 of the magnification applied to the movement trajectories 61 to 64.
  • the trajectory comparison unit 125 generates the movement trajectory 70 by combining the movement trajectories 71 to 74 having the same length, and notifies the position determination unit 127 of the movement trajectory 70.
  • the synthesis of the movement trajectories 71 to 74 may include detecting an abnormal movement trajectory from the movement trajectories 71 to 74 and excluding the abnormal movement trajectory. In that case, the trajectory comparison unit 125 notifies the point group comparison unit 126 of the abnormal movement trajectory. Further, the synthesis of the movement trajectories 71 to 74 may include calculating an average of all or a part of the movement trajectories 71 to 74.
  • the composition of the movement trajectories 71 to 74 may include assigning weights to the movement trajectories 71 to 74 and calculating a weighted average of the movement trajectories 71 to 74.
  • the point group comparison unit 126 uses the point group data generated by the SLAM processing units 121 to 124 to determine a magnification for adjusting the scale of the moving locus 70 generated by the locus comparison unit 125.
  • The point group comparison unit 126 applies the magnifications of the movement trajectories 61 to 64 notified from the trajectory comparison unit 125 to the point groups 65 to 68 to generate the point groups 75 to 78 corresponding to the movement trajectories 71 to 74. That is, the point group comparison unit 126 applies the magnification applied to the movement trajectory 61 to the point group 65, the magnification applied to the movement trajectory 62 to the point group 66, the magnification applied to the movement trajectory 63 to the point group 67, and the magnification applied to the movement trajectory 64 to the point group 68.
  • the point group comparison unit 126 scales the point groups 75 to 78 by a common magnification, and determines the magnification at which the target points overlap most between the point groups 75 to 78.
  • the point group comparison unit 126 may exclude the point group corresponding to the abnormal movement locus from the point groups 75 to 78.
  • the point group comparison unit 126 notifies the position determination unit 127 of the determined magnification.
  • the position determination unit 127 adjusts the length of the movement locus 70 notified from the locus comparison unit 125 based on the magnification notified from the point group comparison unit 126, and calculates the movement locus 80.
  • the position determination unit 127 maps the movement trajectory 80 to the earth coordinate system so that the starting point of the movement trajectory 80 becomes the reference position, and outputs an estimation result of the movement trajectory and the current position of the vehicle 30 in the earth coordinate system.
  • the current position corresponds to the end point of the movement track 80.
  • the movement trajectory and the current position can be expressed using the coordinates (latitude and longitude) of the earth coordinate system.
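  • A minimal sketch of mapping a metric trajectory onto latitude and longitude around the reference position is shown below, using a local flat-earth approximation; the heading handling, Earth radius constant, and example coordinates are assumptions rather than the patent's method.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius, used here as a flat-earth approximation

def map_to_earth(traj80_xz_m, ref_lat_deg, ref_lon_deg, ref_heading_deg):
    """Map vehicle-frame (x: right, z: forward) offsets in metres to (lat, lon) pairs,
    placing the start point of the trajectory at the reference position."""
    heading = math.radians(ref_heading_deg)  # vehicle heading at the reference time, clockwise from north
    points = []
    for x, z in traj80_xz_m:
        north = z * math.cos(heading) - x * math.sin(heading)
        east = z * math.sin(heading) + x * math.cos(heading)
        dlat = math.degrees(north / EARTH_RADIUS_M)
        dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(ref_lat_deg))))
        points.append((ref_lat_deg + dlat, ref_lon_deg + dlon))
    return points

# Example: a 5 m forward movement from a reference position, vehicle heading due north.
print(map_to_earth([(0.0, 0.0), (0.0, 5.0)], 35.6812, 139.7671, 0.0))
```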
  • the position determination unit 127 may perform scaling of the movement locus 70 using the travel distance acquired from the travel distance acquisition unit 128 instead of using the processing result of the point group comparison unit 126. Further, the position determining unit 127 determines the magnification by collating the point cloud data generated by the SLAM processing unit 123 with the map data stored in the map data storage unit 113, and using the determined magnification, the position of the moving locus 70 is determined. Scaling may be performed. As the reference position in the earth coordinate system, the current position of the vehicle 30 last measured by the GPS information acquisition unit 129 may be used.
  • the position of the ETC (Electronic Toll Collection System) gate through which the vehicle 30 last passed may be specified from the map data, and the position of the ETC gate may be used as the reference position. Further, the current position previously estimated by the position determination unit 127 may be used as the reference position in the next estimation.
  • The travel distance acquisition unit 128 acquires travel distance information from the odometer 35.
  • the GPS information acquisition unit 129 acquires current position information from the GPS measurement unit 36. However, the GPS information acquisition unit 129 cannot acquire the current position information in a place where the GPS signal does not reach.
  • FIG. 12 is a diagram illustrating an example of a parameter table.
  • the parameter table 114 is stored in the parameter storage unit 112.
  • the parameter table 114 includes items of an imaging device ID, an X coordinate, a Y coordinate, a Z coordinate, a yaw (Yaw), a pitch (Pitch), and a roll (Roll).
  • the imaging device ID is identification information of the imaging devices 31 to 34 mounted on the vehicle 30.
  • the X coordinate, the Y coordinate, and the Z coordinate are coordinates in the vehicle coordinate system that indicate the locations where the imaging devices 31 to 34 are arranged.
  • For example, the imaging device 31 is located at (0.0 m, 0.0 m, 2.5 m).
  • The imaging device 32 is located at (0.0 m, 0.0 m, -2.5 m).
  • The imaging device 33 is located at (-1.0 m, 0.0 m, 0.8 m).
  • The imaging device 34 is located at (1.0 m, 0.0 m, 0.8 m).
  • the yaw is an angle in the imaging direction of the imaging devices 31 to 34 on the XZ plane, that is, an angle in the left-right direction with respect to the front of the vehicle 30.
  • the pitch is an angle in the imaging direction of the imaging devices 31 to 34 on the YZ plane, that is, an angle in the vertical direction with respect to the front of the vehicle 30.
  • the roll is an angle in the imaging direction of the imaging devices 31 to 34 on the XY plane, that is, an angle between an upward direction of an image to be captured and a vertical upward direction of the vehicle 30.
  • the imaging directions of the imaging devices 31 to 34 are parallel to the horizontal plane, and the pitch and roll are all 0 °.
  • In the second embodiment, the imaging devices 31 to 34 face the front, rear, left, and right of the vehicle 30, shifted from one another by 90°. Therefore, the yaw of the imaging device 31 is 0°, the yaw of the imaging device 32 is 180°, the yaw of the imaging device 33 is -90°, and the yaw of the imaging device 34 is 90°.
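  • For reference, the parameter table of FIG. 12 could be held as a simple mapping from imaging device ID to pose, which is then consumed by a camera-to-vehicle conversion such as the one sketched earlier; the dictionary layout and helper name are assumptions, while the values are those shown in FIG. 12.

```python
# Parameter table 114: mounting position (vehicle coordinates, metres) and
# orientation (degrees) of each imaging device, as listed in FIG. 12.
PARAMETER_TABLE = {
    31: {"x": 0.0, "y": 0.0, "z": 2.5, "yaw": 0.0, "pitch": 0.0, "roll": 0.0},
    32: {"x": 0.0, "y": 0.0, "z": -2.5, "yaw": 180.0, "pitch": 0.0, "roll": 0.0},
    33: {"x": -1.0, "y": 0.0, "z": 0.8, "yaw": -90.0, "pitch": 0.0, "roll": 0.0},
    34: {"x": 1.0, "y": 0.0, "z": 0.8, "yaw": 90.0, "pitch": 0.0, "roll": 0.0},
}

def camera_pose(device_id):
    """Return (translation, yaw_deg) for a given imaging device ID."""
    p = PARAMETER_TABLE[device_id]
    return (p["x"], p["y"], p["z"]), p["yaw"]
```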
  • FIG. 13 is a diagram illustrating an example of trajectory data generated by SLAM processing.
  • trajectory data 131 to 134 are stored in the SLAM result storage unit 111.
  • the trajectory data 131 is generated from the image of the imaging device 31 by the SLAM processing unit 121.
  • the trajectory data 132 is generated from the image of the imaging device 32 by the SLAM processing unit 122.
  • the trajectory data 133 is generated from the image of the imaging device 33 by the SLAM processing unit 123.
  • the trajectory data 134 is generated from the image of the imaging device 34 by the SLAM processing unit 124.
  • Each of the trajectory data 131 to 134 includes a plurality of records in which time, X coordinate, Y coordinate, Z coordinate, yaw, pitch, roll, and distance are associated with each other. However, since the pitch and roll are all 0° in the second embodiment, they are omitted from FIG. 13. In the example of FIG. 13, each of the trajectory data 131 to 134 includes six records corresponding to the times from “9:30:00” to “9:30:05”. The first time “9:30:00” is the time when the vehicle 30 was at the reference position (reference time).
  • the time of the trajectory data 131 to 134 is the time when the image is captured.
  • the X coordinate, Y coordinate, and Z coordinate of the trajectory data 131 to 134 are coordinates in the camera coordinate system, and represent the estimation of the relative position from the position of the imaging device at the reference time.
  • the yaw, pitch, and roll of the trajectory data 131 to 134 are directions in the camera coordinate system, and represent estimation of the moving direction at each time.
  • the distance of the trajectory data 131 to 134 is a distance in the camera coordinate system, and represents a cumulative moving distance from the position of the imaging device at the reference time. This distance is the sum of the amount of change in the relative position at adjacent times.
  • the distance at the last time “9:30:05” represents the length of the movement locus.
  • the trajectory data 132 includes a record with time “9:30:01”, X coordinate “0.080 m”, Y coordinate “0.000 m”, Z coordinate “-0.500 m”, yaw “-9.090°”, and distance “0.506 m”. This represents that the imaging device 32 has moved from (0.000 m, 0.000 m, 0.000 m) to (0.080 m, 0.000 m, -0.500 m) one second after the reference time. This also indicates that the traveling direction has changed by 9.090° to the left from the traveling direction at the reference time. This also indicates that the imaging device 32 has moved by 0.506 m from the position at the reference time.
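To make the relation between the position columns and the distance column concrete, the following Python sketch (an illustration, not part of the embodiment) sums the change in the relative position between adjacent times, as described above.

```python
import math

def cumulative_distances(positions):
    """Cumulative moving distance: sum of position changes between adjacent times."""
    total = 0.0
    result = [0.0]
    for previous, current in zip(positions, positions[1:]):
        total += math.dist(previous, current)
        result.append(total)
    return result

# The first two (X, Y, Z) records of the trajectory data 132:
print(cumulative_distances([(0.000, 0.000, 0.000), (0.080, 0.000, -0.500)]))
# -> [0.0, 0.506...]  (matches the 0.506 m distance at time 9:30:01)
```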
  • FIG. 14 is a diagram illustrating an example of point cloud data generated by SLAM processing.
  • point cloud data 135 to 138 are stored in the SLAM result storage unit 111.
  • the point cloud data 135 is generated from the image of the imaging device 31 by the SLAM processing unit 121.
  • the point cloud data 136 is generated from the image of the imaging device 32 by the SLAM processing unit 122.
  • the point cloud data 137 is generated from the image of the imaging device 33 by the SLAM processing unit 123.
  • the point cloud data 138 is generated from the image of the imaging device 34 by the SLAM processing unit 124.
  • Each of the point cloud data 135 to 138 includes a plurality of records in which the X coordinate, the Y coordinate, and the Z coordinate are associated with each other.
  • One record of the point cloud data 135 to 138 corresponds to one target point.
  • the X coordinate, Y coordinate, and Z coordinate of the point group data 135 to 138 are coordinates in the camera coordinate system, and represent the estimation of the position where the target point exists in the three-dimensional space.
  • the number of records included in the point cloud data 135 to 138, that is, the number of target points recognized by the SLAM processing units 121 to 124, may differ from one another.
  • the position of each target point is not classified by time because it is estimated by combining analysis results of a plurality of images having different imaging times.
  • FIG. 15 is a diagram illustrating a first conversion example of trajectory data.
  • the SLAM result storage unit 111 stores trajectory data 141 to 144.
  • the trajectory data 141 to 144 are converted from the trajectory data 131 to 134 by the trajectory comparison unit 125.
  • the trajectory data 131 is converted into trajectory data 141.
  • the trajectory data 132 is converted into trajectory data 142.
  • the trajectory data 133 is converted into trajectory data 143.
  • the trajectory data 134 is converted into trajectory data 144.
  • the trajectory data 141 is obtained by converting the coordinate system of the trajectory data 131 from the camera coordinate system of the imaging device 31 to the vehicle coordinate system using the parameters of the imaging device 31 included in the parameter table 114.
  • the trajectory data 142 is obtained by converting the coordinate system of the trajectory data 132 from the camera coordinate system of the imaging device 32 to the vehicle coordinate system using the parameters of the imaging device 32 included in the parameter table 114.
  • the trajectory data 143 is obtained by converting the coordinate system of the trajectory data 133 from the camera coordinate system of the imaging device 33 to the vehicle coordinate system using the parameters of the imaging device 33 included in the parameter table 114.
  • the trajectory data 144 is obtained by converting the coordinate system of the trajectory data 134 from the camera coordinate system of the imaging device 34 to the vehicle coordinate system using the parameters of the imaging device 34 included in the parameter table 114.
  • the X coordinate, Y coordinate, and Z coordinate of the trajectory data 141 to 144 are coordinates in the vehicle coordinate system and represent the estimation of the relative position of the vehicle 30 from the reference position.
  • the reference position is, for example, the center position of the vehicle 30 at the reference time.
  • the relative position at each time is, for example, the center position of the vehicle 30 at that time.
  • the yaw, pitch, and roll of the trajectory data 141 to 144 represent the estimation of the moving direction at each time in the vehicle coordinate system.
  • the distance of the trajectory data 141 to 144 represents the cumulative moving distance from the reference position in the vehicle coordinate system.
  • the trajectory data 142 includes a record with time “9:30:01”, X coordinate “-0.475 m”, Y coordinate “0.000 m”, Z coordinate “0.500 m”, yaw “-9.090°”, and distance “0.690 m”.
  • compared with the corresponding record of the trajectory data 132, the X coordinate at the time “9:30:01” has been adjusted by the conversion from the camera coordinate system of the imaging device 32 to the vehicle coordinate system.
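The exact conversion performed by the trajectory comparison unit 125 is not spelled out here, but the following simplified Python sketch (an assumption for illustration, ignoring pitch and roll) shows one way a camera-frame displacement could be turned into a displacement of the vehicle centre: the displacement is rotated by the mounting yaw of the imaging device, and the lever arm of the mounting position is compensated using the estimated heading change. With the rear imaging device 32, this reproduces roughly the adjusted X coordinate seen in the trajectory data 142.

```python
import math

def camera_to_vehicle(disp_cam, heading_change_deg, cam_x, cam_z, cam_yaw_deg):
    """Simplified conversion of a camera-frame displacement (x, z) into a
    vehicle-frame displacement of the vehicle centre (pitch and roll ignored)."""
    a = math.radians(cam_yaw_deg)
    # Rotate the camera-frame displacement into the vehicle frame by the mounting yaw.
    dx = disp_cam[0] * math.cos(a) + disp_cam[1] * math.sin(a)
    dz = -disp_cam[0] * math.sin(a) + disp_cam[1] * math.cos(a)
    # When the vehicle turns, a camera mounted away from the centre moves as well;
    # subtract that lever-arm motion to recover the motion of the vehicle centre.
    p = math.radians(heading_change_deg)
    lever_dx = (cam_x * math.cos(p) + cam_z * math.sin(p)) - cam_x
    lever_dz = (-cam_x * math.sin(p) + cam_z * math.cos(p)) - cam_z
    return dx - lever_dx, dz - lever_dz

# Imaging device 32: mounted at (0.0, -2.5), yaw 180 deg; record at time 9:30:01.
print(camera_to_vehicle((0.080, -0.500), -9.090, 0.0, -2.5, 180.0))
# -> X of about -0.475 m, close to the adjusted X coordinate of the trajectory data 142.
```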
  • FIG. 16 is a diagram illustrating a second conversion example of trajectory data.
  • Trajectory data 145 to 148 are stored in the SLAM result storage unit 111.
  • the trajectory data 145 to 148 are converted from the trajectory data 141 to 144 by the trajectory comparison unit 125.
  • the trajectory data 141 is converted into trajectory data 145.
  • the trajectory data 142 is converted into trajectory data 146.
  • the trajectory data 143 is converted into trajectory data 147.
  • the trajectory data 144 is converted into trajectory data 148.
  • the trajectory data 145 is obtained by similarity conversion of the trajectory data 141 so that the lengths of the moving trajectories are aligned.
  • the trajectory data 145 corresponds to the movement trajectory 71.
  • the trajectory data 146 is obtained by similarity conversion of the trajectory data 142 so that the lengths of the movement trajectories are uniform.
  • the trajectory data 146 corresponds to the movement trajectory 72.
  • the trajectory data 147 is obtained by similarity conversion of the trajectory data 143 so that the lengths of the movement trajectories are uniform.
  • the trajectory data 147 corresponds to the movement trajectory 73.
  • the trajectory data 148 is obtained by similarity conversion of the trajectory data 144 so that the lengths of the moving trajectories are uniform.
  • the trajectory data 148 corresponds to the movement trajectory 74.
  • the lengths of the movement trajectories 71 to 74 indicated by the trajectory data 145 to 148 are unified to 5.000 m.
  • the trajectory data 146 includes a record with time “9:30:01”, X coordinate “-0.525 m”, Y coordinate “0.000 m”, Z coordinate “0.811 m”, yaw “-9.090°”, and distance “0.965 m”.
  • the length of the moving trajectory is increased from 3.432 m to 5.000 m. Therefore, the distance at time “9:30:01” also increases from 0.690 m to 0.965 m.
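A minimal sketch of how trajectory lengths could be unified by a similarity conversion follows (uniform scaling only, which is an assumption; the embodiment may adjust the records in a more refined way, so the figures above need not be reproduced exactly).

```python
def unify_length(records, target_length):
    """Scale positions and distances so the cumulative distance of the last record
    equals target_length; angles (yaw, pitch, roll) are left unchanged."""
    factor = target_length / records[-1]["distance"]
    return [
        {**r,
         "x": r["x"] * factor,
         "y": r["y"] * factor,
         "z": r["z"] * factor,
         "distance": r["distance"] * factor}
        for r in records
    ]

# Hypothetical usage: scale trajectory data 142 (length 3.432 m) to 5.000 m.
# trajectory_146 = unify_length(trajectory_142, 5.000)
```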
  • FIG. 17 is a diagram illustrating an example of synthesized trajectory data.
  • the trajectory comparison unit 125 generates trajectory data 151 from the trajectory data 145 to 148. This corresponds to calculating the movement trajectory 70 by combining the movement trajectories 71 to 74.
  • the trajectory comparison unit 125 determines that the movement trajectory 72 indicated by the trajectory data 146 is an abnormal movement trajectory.
  • the trajectory comparison unit 125 generates trajectory data 151 by averaging the trajectory data 145, 147, and 148. In averaging the trajectory data 145, 147, and 148, the trajectory comparison unit 125 calculates the average values of the X coordinate, Y coordinate, Z coordinate, yaw, pitch, and roll of the records having the same time, and calculates the distance accordingly.
  • the trajectory data 151 includes a record with time “9:30:01”, X coordinate “-0.037 m”, Y coordinate “0.000 m”, Z coordinate “1.014 m”, yaw “-2.839°”, and distance “1.016 m”.
  • the trajectory comparison unit 125 performs scaling on the trajectory data 151 and generates trajectory data 152. This corresponds to calculating the movement locus 80 by converting the length of the movement locus 70 into an actual distance in the earth coordinate system.
  • the length of the movement locus 80 indicated by the locus data 152 is extended to 8.000 m.
  • the trajectory data 152 includes a record with time “9:30:01”, X coordinate “-0.007 m”, Y coordinate “0.000 m”, Z coordinate “1.615 m”, yaw “-2.839°”, and distance “1.615 m”.
  • FIG. 18 is a flowchart illustrating an exemplary procedure for position estimation.
  • the position determination unit 127 sets a reference position. Setting of the reference position at the start of position estimation is performed using GPS or the like. After the start of position estimation, the position determination unit 127 may set the current position estimated last time as the reference position.
  • the SLAM processing unit 121 analyzes the sequence of images captured by the imaging device 31, and generates trajectory data 131 and point cloud data 135.
  • the SLAM processing unit 122 analyzes the sequence of images captured by the imaging device 32 and generates trajectory data 132 and point cloud data 136.
  • the SLAM processing unit 123 analyzes the sequence of images captured by the imaging device 33 and generates trajectory data 133 and point cloud data 137.
  • the SLAM processing unit 124 analyzes the sequence of images captured by the imaging device 34 and generates trajectory data 134 and point cloud data 138.
  • the trajectory comparison unit 125 refers to the parameter table 114 stored in the parameter storage unit 112, and converts the trajectory data 131 to 134 expressed in the camera coordinate systems into the trajectory data 141 to 144 expressed in the vehicle coordinate system.
  • the trajectory comparison unit 125 performs scaling so that the lengths of the movement trajectories 61 to 64 indicated by the trajectory data 141 to 144 become equal, and calculates the movement trajectories 71 to 74. That is, the trajectory comparison unit 125 converts the trajectory data 141 to 144 into the trajectory data 145 to 148 so that the distance at the last time becomes the same among them. When the travel distance can be acquired from the travel distance acquisition unit 128, the trajectory comparison unit 125 may instead match the lengths of the movement trajectories 71 to 74 to the acquired travel distance at this point.
  • the point group comparison unit 126 refers to the parameter table 114 stored in the parameter storage unit 112, and rotates the coordinates of the point group data 135 to 138 expressed in the camera coordinate systems about the origin of each camera coordinate system so that their orientations are aligned with the vehicle coordinate system.
  • the point group comparison unit 126 performs scaling by applying the same magnification as the movement trajectories 61 to 64 to the point groups 65 to 68 indicated by the point group data 135 to 138. Scaling is performed around the position of the imaging devices 31 to 34 (the origin of the camera coordinate system).
  • the point group comparison unit 126 applies the magnification obtained when the movement locus 61 is converted into the movement locus 71 to the point group 65. Further, the point group comparison unit 126 applies the magnification obtained when the movement locus 62 is converted into the movement locus 72 to the point group 66.
  • the point group comparison unit 126 applies the magnification obtained when the movement locus 63 is converted into the movement locus 73 to the point group 67.
  • the point group comparison unit 126 applies the magnification obtained when the movement locus 64 is converted to the movement locus 74 to the point group 68.
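As an illustrative sketch only (assuming Python; the helper name is hypothetical), scaling a point group around the position of its imaging device could look as follows; the same magnification as the corresponding movement trajectory is applied.

```python
def scale_point_group(points, camera_origin, magnification):
    """Scale target points about the position of the imaging device
    (the origin of its camera coordinate system)."""
    return [
        tuple(o + (c - o) * magnification for c, o in zip(point, camera_origin))
        for point in points
    ]

# Hypothetical usage: apply to point group 65 the magnification that converted
# movement locus 61 into movement locus 71 (imaging device 31 is at (0.0, 0.0, 2.5)).
# scaled_65 = scale_point_group(point_group_65, (0.0, 0.0, 2.5), magnification_61_to_71)
```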
  • the trajectory comparison unit 125 detects an abnormal movement trajectory greatly different from other movement trajectories from the movement trajectories 71 to 74 indicated by the trajectory data 145 to 148, and excludes the abnormal movement trajectory. For example, the trajectory comparison unit 125 calculates the average value of the coordinates indicated by the trajectory data 145 to 148 at the same time, and calculates the average movement trajectory. The trajectory comparison unit 125 calculates the square of the deviation between the coordinates indicated by the trajectory data 145 and the coordinates of the average moving trajectory at the same time, and defines the sum of squared deviations as the degree of divergence between the moving trajectory 71 and the average moving trajectory.
  • the trajectory comparison unit 125 calculates the divergence degree for the movement trajectories 72 to 74 as well. Then, the trajectory comparison unit 125 determines a movement trajectory having a deviation degree equal to or greater than a threshold among the movement trajectories 71 to 74 as an abnormal movement trajectory.
  • the trajectory comparison unit 125 generates the trajectory data 151 by synthesizing the remaining trajectory data other than the trajectory data indicating the abnormal movement trajectory from the trajectory data 145 to 148. For example, the trajectory comparison unit 125 calculates the average value of the coordinates indicated by the remaining trajectory data at the same time. This represents that the movement trajectory 70 is calculated by averaging the movement trajectories other than the abnormal movement trajectory.
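The detection of an abnormal movement trajectory and the synthesis of the remaining trajectories could be sketched as follows (illustration only; each trajectory is reduced here to a list of (X, Y, Z) coordinates per time, whereas the embodiment also averages yaw, pitch, and roll and then derives the distance).

```python
def average_trajectory(trajectories):
    """Average the coordinates of several trajectories at each time index."""
    return [
        tuple(sum(values) / len(values) for values in zip(*points_at_t))
        for points_at_t in zip(*trajectories)
    ]

def divergence(trajectory, average):
    """Sum of squared deviations from the average trajectory at the same times."""
    return sum(
        sum((c - a) ** 2 for c, a in zip(point, avg_point))
        for point, avg_point in zip(trajectory, average)
    )

def synthesize(trajectories, threshold):
    """Exclude trajectories whose divergence is at or above the threshold,
    then average the remaining trajectories into one movement trajectory."""
    average = average_trajectory(trajectories)
    remaining = [t for t in trajectories if divergence(t, average) < threshold]
    return average_trajectory(remaining)
```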
  • the point group comparison unit 126 selects the remaining point group data excluding the point group data corresponding to the abnormal movement trajectory from the point group data indicating the point group scaled in step S6.
  • the point group comparison unit 126 scales the point group indicated by the selected point group data with the same magnification around the origin of each camera coordinate system, and searches for a common magnification that maximizes the overlap of the target points.
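How the common magnification that maximizes the overlap of target points is found is not detailed above; one naive possibility, sketched here purely for illustration, is to try candidate magnifications and count, for each, how many points of one group have a point of another group nearby.

```python
import math

def scaled_groups(groups, origins, magnification):
    """Scale every point group about its own camera origin by the same magnification."""
    return [
        [tuple(o + (c - o) * magnification for c, o in zip(point, origin)) for point in group]
        for group, origin in zip(groups, origins)
    ]

def overlap(groups, tolerance=0.1):
    """Count target points that have a point belonging to another group within tolerance."""
    return sum(
        1
        for i, group in enumerate(groups)
        for point in group
        if any(math.dist(point, other) <= tolerance
               for j, other_group in enumerate(groups) if j != i
               for other in other_group)
    )

def search_common_magnification(groups, origins, candidates):
    """Return the candidate magnification that maximises the overlap of target points."""
    return max(candidates, key=lambda m: overlap(scaled_groups(groups, origins, m)))

# Hypothetical usage: try magnifications from 1.0 to 3.0 in steps of 0.1.
# best = search_common_magnification(point_groups, camera_origins,
#                                    [1.0 + 0.1 * i for i in range(21)])
```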
  • the position determination unit 127 generates trajectory data 152 indicating the movement trajectory 80 by applying the found common magnification to the trajectory data 151 generated in step S8.
  • the length of the movement locus 80 is an estimated value of the actual movement distance in the earth coordinate system.
  • the position determination unit 127 maps the movement locus 80 to the map space (latitude and longitude space) of the earth coordinate system from the reference position set in step S1 and the locus data 152 generated in step S9.
  • the position indicated by the end point of the movement locus 80 represents the current position of the vehicle 30.
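Finally, as a rough sketch of the mapping into the latitude and longitude space (a flat-earth approximation that is only an assumption here; the reference latitude, longitude, and heading values shown are hypothetical), a vehicle-frame offset from the reference position could be placed on the map as follows.

```python
import math

METRES_PER_DEGREE_LAT = 111_320.0  # rough flat-earth approximation

def map_to_earth(ref_lat, ref_lon, ref_heading_deg, x, z):
    """Place a vehicle-frame offset (x = right, z = forward, in metres) relative to
    the reference position onto the earth coordinate system (latitude, longitude)."""
    h = math.radians(ref_heading_deg)           # heading of the vehicle at the reference time
    east = x * math.cos(h) + z * math.sin(h)    # offset rotated into east/north components
    north = -x * math.sin(h) + z * math.cos(h)
    lat = ref_lat + north / METRES_PER_DEGREE_LAT
    lon = ref_lon + east / (METRES_PER_DEGREE_LAT * math.cos(math.radians(ref_lat)))
    return lat, lon

# Hypothetical end point of the movement locus 80, 8.000 m ahead of the reference position.
print(map_to_earth(35.0, 139.0, ref_heading_deg=0.0, x=-0.007, z=8.000))
```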
  • the automatic parking device 37 controls automatic parking based on the current position estimated by the position estimation device 100.
  • the navigation device 200 displays the current position and movement trajectory estimated by the position estimation device 100 on the display 204 so as to overlap the map.
  • the position estimation apparatus 100 determines whether or not to continue position estimation. For example, the position estimation is continued while the automatic parking device 37 is in the automatic parking mode, and the position estimation is terminated when the automatic parking mode is canceled. When the position estimation is continued, the process proceeds to step S1. When the position estimation is not continued, the process of the position estimation device 100 ends.
  • FIG. 19 is a diagram illustrating an example of a navigation screen.
  • a navigation screen 90 is displayed on the display 204 of the navigation device 200.
  • Map data indicating a map of the parking lot is stored in the flash memory 203 of the navigation device 200.
  • the map of the parking lot shows the detailed shape of the parking lot such as the arrangement of parking spaces.
  • the estimated current position of the vehicle 30 is displayed on the navigation screen 90 so as to overlap the map of the parking lot.
  • the navigation screen 90 displays the movement trajectory of the vehicle 30 from at least a predetermined time before to the present. Thereby, the user can confirm whether automatic driving (automatic parking) is proceeding appropriately.
  • images of the imaging devices 31 to 34 having different imaging directions are acquired, and movement trajectories are calculated for each of the imaging devices 31 to 34 by SLAM processing.
  • An abnormal movement track is detected by comparing the four movement tracks, and the remaining movement tracks excluding the abnormal movement track are synthesized.
  • the imaging directions of the imaging devices 31 to 34 can be greatly shifted from each other as compared with a method of calculating one movement locus from a stereo image captured by a stereo camera. Therefore, it is possible to reduce a risk that all images of the imaging devices 31 to 34 are unsuitable for SLAM processing, and the accuracy of position estimation using SLAM is improved.
  • the information processing according to the first embodiment can be realized by causing the position estimation apparatus 10 to execute a program.
  • the information processing of the second embodiment can be realized by causing the position estimation device 100 and the navigation device 200 to execute a program.
  • the program can be recorded on a computer-readable recording medium (for example, the recording medium 207).
  • As the recording medium, for example, a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like can be used.
  • Magnetic disks include FD and HDD.
  • Optical disks include CD, CD-R (Recordable) / RW (Rewritable), DVD, and DVD-R / RW.
  • the program may be recorded and distributed on a portable recording medium. In that case, the program may be copied from a portable recording medium to another recording medium (for example, the flash memory 203) and executed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

To improve the accuracy of estimating the position of a moving object from an image. An image acquisition unit (11) acquires an image data row (23) from an imaging device (21) provided to a moving object (20), and acquires an image data row (24) from an imaging device (22) provided to the moving object (20), the imaging direction of the imaging device (22) being different from that of the imaging device (21). A determination unit (12) calculates, from the image data row (23), a movement trajectory (13) indicating an estimate of the path along which the moving object (20) has moved, and calculates, from the image data row (24), a movement trajectory (14) indicating an estimate of the path along which the moving object (20) has moved. The determination unit (12) determines the position (15) of the moving object (20) using the movement trajectory (13) and the movement trajectory (14).

Description

位置推定装置、位置推定方法および位置推定プログラムPOSITION ESTIMATION DEVICE, POSITION ESTIMATION METHOD, AND POSITION ESTIMATION PROGRAM
 本発明は位置推定装置、位置推定方法および位置推定プログラムに関する。 The present invention relates to a position estimation device, a position estimation method, and a position estimation program.
 車両などの移動物に周囲の状況を検出するセンサを搭載し、センサのデータを分析して当該移動物の位置を推定する技術が提案されている。移動物の位置を推定する1つの方法として、SLAM(Simultaneous Localization and Mapping)やVisual Odometryなどと呼ばれる、撮像装置によって撮像された画像を分析する方法が存在する。SLAMでは、例えば、異なる時刻に撮像された画像それぞれから目標物を検出し、画像内で目標物が写った座標の変化を追跡することで、移動物の移動方向や移動距離を算出する。 A technique has been proposed in which a sensor that detects surrounding conditions is mounted on a moving object such as a vehicle, and the position of the moving object is estimated by analyzing sensor data. As one method for estimating the position of a moving object, there is a method called SLAM (Simultaneous Localization and Mapping) or Visual Odometry for analyzing an image captured by an imaging device. In SLAM, for example, a target is detected from each of images captured at different times, and the movement direction and distance of the moving object are calculated by tracking changes in coordinates in which the target appears in the image.
 画像から移動物の位置を推定する技術は、例えば、GPS(Global Positioning System)などの衛星測位システムを利用することが難しい屋内に移動物が存在する場合に有用である。また、画像から移動物の位置を推定する技術は、衛星測位システムよりも高い精度で移動物の位置を推定したい場合にも有用である。 The technology for estimating the position of a moving object from an image is useful when the moving object exists indoors where it is difficult to use a satellite positioning system such as GPS (Global Positioning System). The technique for estimating the position of a moving object from an image is also useful when it is desired to estimate the position of a moving object with higher accuracy than a satellite positioning system.
 画像を利用した位置推定に関して、視野範囲の広い魚眼レンズの撮像装置を移動物に搭載し、魚眼レンズの画像を分析して位置を推定する技術が提案されている。また、同じ方向を向いた2台の撮像装置(ステレオカメラ)を移動物に搭載し、2台の撮像装置によって撮像されたステレオ画像を用いて位置を推定する技術が提案されている。 For position estimation using an image, a technique has been proposed in which a fisheye lens imaging device with a wide visual field range is mounted on a moving object, and the position is estimated by analyzing the fisheye lens image. In addition, a technique has been proposed in which two imaging devices (stereo cameras) facing in the same direction are mounted on a moving object, and the position is estimated using stereo images captured by the two imaging devices.
国際公開第2015/043872号International Publication No. 2015/043872
 しかし、撮像装置によって撮像された画像が、移動物の周囲にある目標物を検出しづらい画像になってしまうことがある。その場合、移動物の位置を推定する精度が低下するという問題がある。例えば、模様のない地面や壁などの平面が画像の大きな領域を占めている場合、推定精度が低下しやすい。また、太陽光などの強い光が撮像装置に当たることで、画像が白くぼやけるハレーションが生じた場合、推定精度が低下しやすい。また、移動物の影が画像に写っており、移動物の移動に伴って影も動く場合、推定精度が低下しやすい。また、画像に他の移動物が写っている場合、推定精度が低下しやすい。 However, the image picked up by the image pickup device may become an image in which it is difficult to detect the target around the moving object. In that case, there exists a problem that the precision which estimates the position of a moving object falls. For example, when a plane such as a ground or a wall without a pattern occupies a large area of the image, the estimation accuracy is likely to decrease. In addition, when a halation in which an image is blurred in white occurs due to strong light such as sunlight hitting the imaging apparatus, the estimation accuracy is likely to be lowered. In addition, when the shadow of the moving object is shown in the image and the shadow moves with the movement of the moving object, the estimation accuracy tends to be lowered. Moreover, when other moving objects are reflected in the image, the estimation accuracy is likely to decrease.
 なお、上記の非特許文献2に記載された技術のように、2台の撮像装置によって撮像されたステレオ画像から位置を推定する方法では、2台の撮像装置を同一またはほぼ同一の方向に向けることになる。このため、2台の撮像装置の両方に強い光が当たるなど、2台の撮像装置の画像が共に目標物を検出しづらい画像になってしまうリスクが高くなる。よって、2台の撮像装置によって撮像されたステレオ画像から位置を推定する方法では、推定精度の向上に限界がある。 Note that, as in the technique described in Non-Patent Document 2 above, in the method of estimating the position from the stereo image captured by the two imaging devices, the two imaging devices are directed in the same or substantially the same direction. It will be. For this reason, there is a high risk that the images of the two imaging devices both become images that are difficult to detect the target, such as strong light hitting both of the two imaging devices. Therefore, in the method of estimating the position from the stereo image captured by the two imaging devices, there is a limit to the improvement of the estimation accuracy.
 1つの側面では、本発明は、画像から移動物の位置を推定する精度を向上させる位置推定装置、位置推定方法および位置推定プログラムを提供することを目的とする。 In one aspect, an object of the present invention is to provide a position estimation device, a position estimation method, and a position estimation program that improve the accuracy of estimating the position of a moving object from an image.
 1つの態様では、画像取得部と判定部とを有する位置推定装置が提供される。画像取得部は、移動物が備える第1の撮像装置から、撮像時刻の異なる第1の画像データの列を取得し、移動物が備える第1の撮像装置と撮像方向の異なる第2の撮像装置から、撮像時刻の異なる第2の画像データの列を取得する。判定部は、第1の画像データの列から、移動物が移動した経路の第1の推定を示す第1の移動軌跡を算出し、第2の画像データの列から、移動物が移動した経路の第2の推定を示す第2の移動軌跡を算出し、第1の移動軌跡と第2の移動軌跡とを用いて移動物の位置を判定する。 In one aspect, a position estimation device having an image acquisition unit and a determination unit is provided. The image acquisition unit acquires a sequence of first image data having a different imaging time from a first imaging device provided in the moving object, and a second imaging device having a different imaging direction from the first imaging device provided in the moving object. From the second, a sequence of second image data having different imaging times is acquired. The determination unit calculates a first movement trajectory indicating a first estimation of a route along which the moving object has moved from the first image data sequence, and a route along which the moving object has moved from the second image data sequence. A second movement trajectory indicating the second estimation is calculated, and the position of the moving object is determined using the first movement trajectory and the second movement trajectory.
 また、1つの態様では、位置推定装置が実行する位置推定方法が提供される。また、1つの態様では、コンピュータに実行させる位置推定プログラムが提供される。 Also, in one aspect, a position estimation method executed by the position estimation device is provided. In one aspect, a position estimation program to be executed by a computer is provided.
In one aspect, the accuracy of estimating the position of the moving object from the image is improved.
These and other objects, features and advantages of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings which illustrate preferred embodiments by way of example of the present invention.
FIG. 1 is a diagram illustrating an example of a position estimation device according to the first embodiment.
FIG. 2 is a diagram illustrating an arrangement example of imaging devices in a vehicle.
FIG. 3 is a block diagram illustrating a hardware example of the vehicle.
FIG. 4 is a diagram illustrating a hardware example of the position estimation device and the navigation device.
FIG. 5 is a diagram illustrating an example of coordinate systems of the imaging devices and the vehicle.
FIG. 6 is a diagram illustrating image examples of the four imaging devices.
FIG. 7 is a diagram illustrating an example of extraction of target points in SLAM processing.
FIG. 8 is a diagram illustrating an example of SLAM results for the four imaging devices.
FIG. 9 is a diagram illustrating an example of combining SLAM results.
FIG. 10 is a diagram illustrating an example of scale adjustment of a movement trajectory using point clouds.
FIG. 11 is a block diagram illustrating a functional example of the position estimation device.
FIG. 12 is a diagram illustrating an example of a parameter table.
FIG. 13 is a diagram illustrating an example of trajectory data generated by SLAM processing.
FIG. 14 is a diagram illustrating an example of point cloud data generated by SLAM processing.
FIG. 15 is a diagram illustrating a first conversion example of trajectory data.
FIG. 16 is a diagram illustrating a second conversion example of trajectory data.
FIG. 17 is a diagram illustrating an example of synthesized trajectory data.
FIG. 18 is a flowchart illustrating an exemplary procedure for position estimation.
FIG. 19 is a diagram illustrating an example of a navigation screen.
Hereinafter, the present embodiment will be described with reference to the drawings.
[First Embodiment]
A first embodiment will be described.
FIG. 1 is a diagram illustrating an example of a position estimation apparatus according to the first embodiment.
The position estimation device 10 according to the first embodiment estimates the position of the moving object 20 from images of a plurality of imaging devices mounted on the moving object 20. The moving object 20 is a movable artificial object, such as a vehicle, a robot, or a drone. The position estimation device 10 may be mounted on the moving object 20 or may exist outside the moving object 20. Information indicating the estimated position of the moving object 20 may be output from an output device such as a display or a speaker, or may be used for autonomous traveling of the moving object 20 (for example, automatic parking of a vehicle).
 位置推定装置10は、画像取得部11および判定部12を有する。画像取得部11は、撮像装置から画像データを取り込むインタフェースである。判定部12は、画像取得部11によって取り込まれた画像データを処理して移動物20の位置を判定する。 The position estimation device 10 includes an image acquisition unit 11 and a determination unit 12. The image acquisition unit 11 is an interface that captures image data from the imaging apparatus. The determination unit 12 determines the position of the moving object 20 by processing the image data captured by the image acquisition unit 11.
 判定部12として、例えば、CPU(Central Processing Unit)やDSP(Digital Signal Processor)などのプロセッサを用いることができる。ただし、判定部12は、ASIC(Application Specific Integrated Circuit)やFPGA(Field Programmable Gate Array)などの特定用途の電子回路を含んでもよい。プロセッサは、RAM(Random Access Memory)などのメモリに記憶されたプログラムを実行する。プログラムには、以下に説明する処理を記載した位置推定プログラムが含まれる。複数のプロセッサの集合(マルチプロセッサ)を「プロセッサ」と言うこともある。 As the determination unit 12, for example, a processor such as a CPU (Central Processing Unit) or a DSP (Digital Signal Processor) can be used. However, the determination unit 12 may include an electronic circuit for a specific application such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array). The processor executes a program stored in a memory such as a RAM (Random Access Memory). The program includes a position estimation program in which processing described below is described. A set of multiple processors (multiprocessor) may be referred to as a “processor”.
 移動物20は、撮像装置21(第1の撮像装置)と撮像装置22(第2の撮像装置)とを含む複数の撮像装置を有する。撮像装置21,22は、互いに撮像方向が異なるように設置されている。撮像装置21の視野範囲と撮像装置22の視野範囲とは、重複が少ない方が好ましい。撮像装置21の視野範囲と撮像装置22の視野範囲とが、全く重複していなくてもよい。撮像装置21の視野範囲の中心(画像の中央が写る方向)と撮像装置22の視野範囲の中心とが、移動物20から見て90°以上離れていてもよい。 The moving object 20 has a plurality of imaging devices including an imaging device 21 (first imaging device) and an imaging device 22 (second imaging device). The imaging devices 21 and 22 are installed so that the imaging directions are different from each other. It is preferable that the visual field range of the imaging device 21 and the visual field range of the imaging device 22 have less overlap. The visual field range of the imaging device 21 and the visual field range of the imaging device 22 may not overlap at all. The center of the field-of-view range of the imaging device 21 (the direction in which the center of the image is captured) and the center of the field-of-view range of the imaging device 22 may be 90 ° or more apart from the moving object 20.
 例えば、撮像装置21を移動物20の前方に(通常の進行方向に向けて)設置し、撮像装置22を移動物20の後方に(通常の進行方向とは逆方向に向けて)設置することが考えられる。また、例えば、撮像装置21を移動物20の左側面に設置し、撮像装置22を移動物20の右側面に設置することが考えられる。また、例えば、撮像装置21を移動物20の前方または後方に設置し、撮像装置22を移動物20の左側面または右側面に設置することが考えられる。 For example, the imaging device 21 is installed in front of the moving object 20 (in the normal traveling direction), and the imaging device 22 is installed in the rear of the moving object 20 (in the direction opposite to the normal traveling direction). Can be considered. For example, it is conceivable that the imaging device 21 is installed on the left side surface of the moving object 20 and the imaging device 22 is installed on the right side surface of the moving object 20. Further, for example, it is conceivable that the imaging device 21 is installed in front of or behind the moving object 20 and the imaging device 22 is installed on the left side or right side of the moving object 20.
 これにより、撮像装置21,22の両方の画像が位置推定に利用しづらい画像になってしまうリスクを低減できる。例えば、撮像装置21の画像に模様のない地面や壁などの平面が写っていても、撮像装置22の画像にはそのような平面が写っていない可能性が高い。また、撮像装置21に太陽光などの強い光が当たってハレーションが生じていても、撮像装置22には強い光が当たっておらずハレーションが生じていない可能性が高い。また、撮像装置21の画像に移動物20の影が写っていても、撮像装置22の画像には移動物20の影が写っていない可能性が高い。また、撮像装置21の画像に他の移動物が写っていても、撮像装置22の画像には他の移動物が写っていない可能性が高い。 Thereby, it is possible to reduce a risk that both images of the imaging devices 21 and 22 become images that are difficult to use for position estimation. For example, even if a plane such as a ground or a wall without a pattern is captured in the image of the imaging device 21, there is a high possibility that such a plane is not captured in the image of the imaging device 22. In addition, even if strong light such as sunlight hits the image pickup device 21 and halation occurs, there is a high possibility that no strong light hits the image pickup device 22 and halation does not occur. In addition, even if the shadow of the moving object 20 appears in the image of the imaging device 21, there is a high possibility that the shadow of the moving object 20 does not appear in the image of the imaging device 22. In addition, even if another moving object appears in the image of the imaging device 21, there is a high possibility that the other moving object does not appear in the image of the imaging device 22.
 ここで、画像取得部11は、移動物20が備える複数の撮像装置それぞれから、撮像時刻の異なる画像データの列を取得する。画像取得部11は、撮像装置21から、撮像装置21によって撮像された画像データの列23(第1の画像データの列)を取得する。また、画像取得部11は、撮像装置22から、撮像装置22によって撮像された画像データの列24(第2の画像データの列)を取得する。 Here, the image acquisition unit 11 acquires a sequence of image data with different imaging times from each of a plurality of imaging devices included in the moving object 20. The image acquisition unit 11 acquires from the imaging device 21 a sequence 23 of image data (first sequence of image data) captured by the imaging device 21. Further, the image acquisition unit 11 acquires from the imaging device 22 a column 24 (second image data column) of image data captured by the imaging device 22.
 判定部12は、画像取得部11が取得した複数の画像データの列それぞれから、移動物20が移動した経路の推定を示す移動軌跡を算出する。判定部12は、撮像装置21から取得した画像データの列23から、経路の第1の推定を示す移動軌跡13(第1の移動軌跡)を算出する。また、判定部12は、撮像装置22から取得した画像データの列24から、経路の第2の推定を示す移動軌跡14(第2の移動軌跡)を算出する。移動軌跡13と移動軌跡14は、異なる画像データから算出されるため一致しない可能性が高い。 The determination unit 12 calculates a movement trajectory indicating an estimation of a route traveled by the moving object 20 from each of the plurality of image data sequences acquired by the image acquisition unit 11. The determination unit 12 calculates a movement trajectory 13 (first movement trajectory) indicating the first estimation of the route from the image data sequence 23 acquired from the imaging device 21. In addition, the determination unit 12 calculates a movement trajectory 14 (second movement trajectory) indicating the second estimation of the route from the sequence 24 of image data acquired from the imaging device 22. Since the movement locus 13 and the movement locus 14 are calculated from different image data, there is a high possibility that they do not coincide.
 移動軌跡13,14の算出には、SLAMなどの画像処理技術を用いることができる。例えば、判定部12は、画像データの列23の中の各画像データから、移動物20の周辺にある目標物(例えば、建物の外壁や地面の白線など)を検出する。そして、判定部12は、目標物が写っている画像内座標の変化を追跡し、移動物20が移動した経路を推定して移動軌跡13を算出する。同様に、判定部12は、画像データの列24の中の各画像データから目標物を検出して移動軌跡14を算出する。 Image processing technology such as SLAM can be used for calculating the movement trajectories 13 and 14. For example, the determination unit 12 detects a target object (for example, an outer wall of a building or a white line on the ground) around each moving object 20 from each image data in the image data row 23. Then, the determination unit 12 tracks the change in the coordinates in the image in which the target is shown, estimates the route along which the moving object 20 has moved, and calculates the movement trajectory 13. Similarly, the determination unit 12 detects a target from each image data in the image data row 24 and calculates the movement locus 14.
 判定部12は、移動軌跡13,14を含む複数の移動軌跡を用いて、移動物20の位置15(例えば、移動物20の現在位置)を判定する。例えば、判定部12は、複数の移動軌跡を合成して1つの合成移動軌跡を生成し、合成移動軌跡に基づいて位置15を判定する。複数の移動軌跡の合成は、例えば、それら複数の移動軌跡の平均を求めることで行う。ただし、判定部12は、各移動軌跡の形状に応じて各移動軌跡に重みを付与し、複数の移動軌跡の加重平均を求めてもよい。その場合、蛇行している移動軌跡など不自然な移動軌跡に対して、小さい重みを付与してもよい。 The determination unit 12 determines the position 15 of the moving object 20 (for example, the current position of the moving object 20) using a plurality of movement loci including the movement loci 13 and 14. For example, the determination unit 12 generates a single combined movement track by combining a plurality of movement tracks, and determines the position 15 based on the combined movement track. The synthesis of the plurality of movement trajectories is performed, for example, by obtaining an average of the plurality of movement trajectories. However, the determination unit 12 may assign a weight to each movement locus in accordance with the shape of each movement locus, and obtain a weighted average of a plurality of movement loci. In that case, a small weight may be given to an unnatural movement locus such as a meandering movement locus.
 また、判定部12は、3以上の移動軌跡を算出した場合(移動物20に3以上の撮像装置が搭載されている場合)、それら3以上の移動軌跡を比較し、他の移動軌跡から大きく離れた異常移動軌跡を判定してもよい。その場合、判定部12は、3以上の移動軌跡から異常移動軌跡を除外し、除外されずに残った移動軌跡のみを用いて位置15を判定してもよい。例えば、判定部12は、除外されずに残った2以上の移動軌跡の平均を求めるなどにより、当該2以上の移動軌跡を合成して合成移動軌跡を生成する。 Further, when the determination unit 12 calculates three or more movement loci (when three or more imaging devices are mounted on the moving object 20), the determination unit 12 compares the three or more movement loci and greatly increases the other movement loci. You may determine the abnormal movement trace which left | separated. In that case, the determination unit 12 may exclude the abnormal movement trajectory from the three or more movement trajectories and determine the position 15 using only the remaining movement trajectory without being excluded. For example, the determination unit 12 generates a combined movement locus by combining the two or more movement loci, for example, by obtaining an average of the two or more movement loci remaining without being excluded.
 第1の実施の形態の位置推定装置10によれば、移動物20が備える撮像装置21から画像データの列23が取得され、画像データの列23から移動軌跡13が算出される。また、移動物20が備える撮像装置21とは撮像方向の異なる撮像装置22から画像データの列24が取得され、移動軌跡13とは独立に、画像データの列24から移動軌跡14が算出される。そして、移動軌跡13,14を用いて移動物20の位置15が推定される。 According to the position estimation device 10 of the first embodiment, the image data sequence 23 is acquired from the imaging device 21 included in the moving object 20, and the movement trajectory 13 is calculated from the image data sequence 23. Further, the image data column 24 is acquired from the imaging device 22 having a different imaging direction from the imaging device 21 included in the moving object 20, and the movement track 14 is calculated from the image data column 24 independently of the movement track 13. . Then, the position 15 of the moving object 20 is estimated using the movement trajectories 13 and 14.
 これにより、撮像装置21,22の一方の画像が光の影響などによって位置推定に不向きになり、移動軌跡13,14の一方の精度が低下しても、最終的な位置15の判定の精度が向上する。また、撮像装置21,22によって撮像されたステレオ画像からから1つの移動軌跡を算出する方法と比べて、撮像装置22の撮像方向を撮像装置21の撮像方向から大きくずらすことが可能となる。よって、撮像装置21,22の両方の画像が位置推定に不向きになってしまうリスクを低減でき、位置15の判定の精度が向上する。 Thereby, even if one image of the imaging devices 21 and 22 becomes unsuitable for position estimation due to the influence of light or the like, even if the accuracy of one of the movement trajectories 13 and 14 decreases, the accuracy of the determination of the final position 15 is improved. improves. In addition, the imaging direction of the imaging device 22 can be largely shifted from the imaging direction of the imaging device 21 as compared to a method of calculating one movement locus from stereo images captured by the imaging devices 21 and 22. Therefore, the risk that both images of the imaging devices 21 and 22 are not suitable for position estimation can be reduced, and the accuracy of the determination of the position 15 is improved.
[Second Embodiment]
Next, a second embodiment will be described.
The vehicle 30 according to the second embodiment is a four-wheeled vehicle driven by a person. The vehicle 30 includes four imaging devices as sensors that monitor the situation around the vehicle 30.
FIG. 2 is a diagram illustrating an arrangement example of the imaging devices in the vehicle.
The vehicle 30 includes imaging devices 31 to 34. The imaging device 31 is installed in front of the vehicle 30 so that the imaging direction (direction perpendicular to the lens surface) coincides with the front front direction of the vehicle. The imaging device 32 is installed behind the vehicle 30 such that the imaging direction is opposite to the front direction of the vehicle. The imaging device 33 is installed on the left side surface of the vehicle so that the imaging direction is shifted 90 ° to the left from the front front direction of the vehicle. The imaging device 34 is installed on the right side surface of the vehicle such that the imaging direction is shifted 90 ° to the right from the front front direction of the vehicle.
 撮像装置31~34には、魚眼レンズが使用される。撮像装置31~34それぞれの視野範囲は、レンズ面に対して190°である。すなわち、撮像装置31~34は、左右方向に190°の視野範囲をもち、上下方向にも190°の視野範囲をもつ。なお、第2の実施の形態では、車両30は撮像方向が90°ずつずれた4個の撮像装置を有しているが、3個以上の任意の個数の撮像装置を有するようにしてもよい。例えば、車両30は、撮像方向が60°ずつずれた6個の撮像装置を有してもよい。 A fisheye lens is used for the imaging devices 31 to 34. The field of view of each of the imaging devices 31 to 34 is 190 ° with respect to the lens surface. That is, the imaging devices 31 to have a visual field range of 190 ° in the horizontal direction and a visual field range of 190 ° in the vertical direction. In the second embodiment, the vehicle 30 has four imaging devices whose imaging directions are shifted by 90 °. However, the vehicle 30 may have an arbitrary number of imaging devices of three or more. . For example, the vehicle 30 may include six imaging devices whose imaging directions are shifted by 60 °.
FIG. 3 is a block diagram illustrating a hardware example of the vehicle.
The vehicle 30 includes an odometer 35, a GPS measurement unit 36, an automatic parking device 37, a position estimation device 100, and a navigation device 200 in addition to the imaging devices 31 to 34. The position estimation apparatus 100 corresponds to the position estimation apparatus 10 of the first embodiment.
 走行距離計35は、車両30のタイヤの回転数などに基づいて、車両30が走行した距離を計測する。走行距離計35が提供する走行距離は、計測開始からの累積の走行距離でもよいし、直近の一定時間における走行距離でもよい。GPS測定部36は、GPS衛星からGPS信号を受信し、GPS信号に基づいて地球座標系における車両30の現在位置を算出する。地球座標系における位置は、緯度と経度で表現することができる。ただし、車両30が屋内にいる場合(例えば、屋根のある駐車場にいる場合)、GPS測定部36はGPS信号を受信できず現在位置を算出できないことがある。また、GPS測定部36が算出する現在位置は、数メートルから数十メートルの誤差を含むことがある。 The odometer 35 measures the distance traveled by the vehicle 30 based on the rotational speed of the tire of the vehicle 30 and the like. The mileage provided by the odometer 35 may be a cumulative mileage from the start of measurement, or may be a mileage in the latest fixed time. The GPS measurement unit 36 receives a GPS signal from a GPS satellite, and calculates the current position of the vehicle 30 in the earth coordinate system based on the GPS signal. The position in the earth coordinate system can be expressed by latitude and longitude. However, when the vehicle 30 is indoors (for example, when it is in a parking lot with a roof), the GPS measurement unit 36 may not receive a GPS signal and may not be able to calculate the current position. The current position calculated by the GPS measurement unit 36 may include an error of several meters to several tens of meters.
 自動駐車装置37は、車両30をユーザの運転によらず自動運転によって、駐車場の駐車スペースに移動させる。自動駐車装置37は、ユーザからの指示に応じて、車両30が備えるアクセルやブレーキ、ハンドルなどを自動的に操作する自動駐車モードに移行する。自動駐車モードでは、自動駐車装置37は、適切な移動方向や移動量を判断するため、後述する位置推定装置100が提供する現在位置の推定結果を利用する。位置推定装置100が推定する現在位置は、GPS測定部36が算出する現在位置よりも精度が高い。例えば、位置推定装置100の推定誤差は、数センチメートル程度であることが期待される。また、位置推定装置100は、GPS信号を受信できない屋内でも現在位置を推定できる。よって、自動駐車モードでは、位置推定装置100の推定結果が利用される。 The automatic parking device 37 moves the vehicle 30 to the parking space of the parking lot by automatic driving regardless of the user's driving. The automatic parking device 37 shifts to an automatic parking mode in which an accelerator, a brake, a handle, and the like included in the vehicle 30 are automatically operated according to an instruction from the user. In the automatic parking mode, the automatic parking device 37 uses the estimation result of the current position provided by the position estimation device 100 described later in order to determine an appropriate moving direction and moving amount. The current position estimated by the position estimation device 100 is more accurate than the current position calculated by the GPS measurement unit 36. For example, the estimation error of the position estimation apparatus 100 is expected to be about several centimeters. The position estimation apparatus 100 can estimate the current position even indoors where GPS signals cannot be received. Therefore, in the automatic parking mode, the estimation result of the position estimation device 100 is used.
 位置推定装置100は、画像処理技術であるSLAMを用いて、撮像装置31~34が撮像した画像を分析し、車両30の移動軌跡および現在位置を推定する。位置推定装置100が出力する移動軌跡および現在位置は、GPS測定部36の場合と同様に、地球座標系における絶対座標を用いて表すことができる。位置推定装置100は、GPS信号などの外部情報を利用して地球座標系における基準位置を設定し、画像分析によって基準位置からの相対的な車両30の移動を検出して、移動軌跡および現在位置を算出する。 The position estimation apparatus 100 analyzes images captured by the image capturing apparatuses 31 to 34 using SLAM, which is an image processing technique, and estimates the movement trajectory and the current position of the vehicle 30. The movement trajectory and the current position output by the position estimation apparatus 100 can be expressed using absolute coordinates in the earth coordinate system, as in the case of the GPS measurement unit 36. The position estimation apparatus 100 sets a reference position in the earth coordinate system using external information such as a GPS signal, detects a relative movement of the vehicle 30 from the reference position by image analysis, and detects a movement locus and a current position. Is calculated.
 位置推定装置100は、推定した現在位置を自動駐車装置37に出力する。これにより、推定した現在位置に基づいて車両30の自動駐車が実行される。また、位置推定装置100は、推定した移動軌跡や現在位置をナビゲーション装置200に出力する。これにより、予め用意された地図と重ねて、移動軌跡や現在位置がナビゲーション装置200の画面に表示される。位置推定装置100は、走行距離計35の出力やGPS測定部36の出力を利用することがある。位置推定装置100の内部の詳細は後述する。 The position estimation device 100 outputs the estimated current position to the automatic parking device 37. Thus, automatic parking of the vehicle 30 is executed based on the estimated current position. Further, the position estimation device 100 outputs the estimated movement trajectory and the current position to the navigation device 200. Thereby, the movement trajectory and the current position are displayed on the screen of the navigation device 200 so as to overlap the map prepared in advance. The position estimation apparatus 100 may use the output of the odometer 35 or the output of the GPS measurement unit 36. Details of the inside of the position estimation apparatus 100 will be described later.
 ナビゲーション装置200は、車両30のユーザの運転を支援し、また、車両30の周辺の状況をユーザに提示する車載装置である。例えば、ナビゲーション装置200は、目的地の指定をユーザから受け付け、GPS測定部36が測定した現在位置から目的地までの推奨経路を算出し、地図と重ねて推奨経路をディスプレイに表示する。ナビゲーション装置200は、推奨経路を示す音声メッセージをスピーカから再生してもよい。 The navigation device 200 is an in-vehicle device that supports the driving of the user of the vehicle 30 and presents the situation around the vehicle 30 to the user. For example, the navigation device 200 accepts designation of the destination from the user, calculates a recommended route from the current position measured by the GPS measurement unit 36 to the destination, and displays the recommended route on the display so as to overlap the map. The navigation device 200 may reproduce a voice message indicating a recommended route from a speaker.
 また、ナビゲーション装置200は、自動駐車モードにおいて、位置推定装置100から移動軌跡や現在位置の情報を取得し、地図と重ねて移動軌跡および現在位置をディスプレイに表示する。ナビゲーション装置200は、自動駐車の状況を示す音声メッセージをスピーカから再生してもよい。これにより、自動駐車モードの間、ユーザは車両30が適切に移動しているか確認することができる。ナビゲーション装置200または他の装置をユーザが操作することで、自動駐車モードを解除できるようにしてもよい。 In the automatic parking mode, the navigation device 200 acquires information on the movement locus and the current position from the position estimation device 100, and displays the movement locus and the current position on the display so as to overlap with the map. The navigation device 200 may reproduce a voice message indicating the state of automatic parking from a speaker. Thereby, the user can confirm whether the vehicle 30 is moving appropriately during the automatic parking mode. The automatic parking mode may be canceled by the user operating the navigation device 200 or another device.
 なお、位置推定装置100と自動駐車装置37とは、別個の筐体に収納されてもよいし、同一の筐体に収納されてもよい。また、位置推定装置100とナビゲーション装置200とは、別個の筐体に収納されてもよいし、同一の筐体に収納されてもよい。 Note that the position estimation device 100 and the automatic parking device 37 may be housed in separate housings or in the same housing. Further, the position estimation device 100 and the navigation device 200 may be housed in separate housings or in the same housing.
FIG. 4 is a diagram illustrating a hardware example of the position estimation device and the navigation device.
The position estimation apparatus 100 includes a processor 101, a RAM 102, a ROM (Read Only Memory) 103, an image signal interface 104, an input interface 105, and an output interface 106. These units are connected to the bus. The processor 101 corresponds to the determination unit 12 of the first embodiment. The image signal interface 104 corresponds to the image acquisition unit 11 of the first embodiment.
 プロセッサ101は、プログラムの命令を実行する演算回路を含むコントローラである。プロセッサ101は、CPUやECU(Electronic Control Unit)と呼ばれることがある。プロセッサ101は、ROM103に記憶されたプログラムやデータの少なくとも一部をRAM102にロードし、プログラムを実行する。 The processor 101 is a controller including an arithmetic circuit that executes program instructions. The processor 101 may be called a CPU or an ECU (Electronic Control Unit). The processor 101 loads at least a part of the program and data stored in the ROM 103 into the RAM 102 and executes the program.
 RAM102は、プロセッサ101が実行するプログラムやプロセッサ101が演算に使用するデータを一時的に記憶する揮発性の半導体メモリである。なお、位置推定装置100は、RAM以外の種類のメモリを備えてもよく、複数個のメモリを備えてもよい。ROM103は、プログラムやデータを記憶する不揮発性の記憶装置である。プログラムには、位置推定プログラムが含まれる。ROM103は不揮発性であればよく、フラッシュメモリなどの書き換え可能な記憶装置でもよい。位置推定装置100は、他の種類の記憶装置を備えてもよく、複数の不揮発性の記憶装置を備えてもよい。 The RAM 102 is a volatile semiconductor memory that temporarily stores programs executed by the processor 101 and data used by the processor 101 for operations. Note that the position estimation apparatus 100 may include a type of memory other than the RAM, or may include a plurality of memories. The ROM 103 is a non-volatile storage device that stores programs and data. The program includes a position estimation program. The ROM 103 may be non-volatile and may be a rewritable storage device such as a flash memory. The position estimation device 100 may include other types of storage devices, and may include a plurality of nonvolatile storage devices.
 画像信号インタフェース104は、撮像装置31~34と接続され、撮像装置31~34によって生成された画像データを取得する。入力インタフェース105は、走行距離計35やGPS測定部36などと接続され、計測された走行距離や算出された現在位置の情報を取得する。出力インタフェース106は、自動駐車装置37やナビゲーション装置200などと接続され、推定した移動軌跡や現在位置の情報を出力する。 The image signal interface 104 is connected to the imaging devices 31 to 34, and acquires the image data generated by the imaging devices 31 to 34. The input interface 105 is connected to the odometer 35, the GPS measurement unit 36, and the like, and acquires information on the measured mileage and the calculated current position. The output interface 106 is connected to the automatic parking device 37, the navigation device 200, and the like, and outputs information on the estimated movement locus and the current position.
 ナビゲーション装置200は、プロセッサ201、RAM202、フラッシュメモリ203、ディスプレイ204、入力デバイス205および媒体リーダ206を有する。これらのユニットはバスに接続されている。 The navigation device 200 includes a processor 201, a RAM 202, a flash memory 203, a display 204, an input device 205, and a media reader 206. These units are connected to the bus.
 プロセッサ201は、プロセッサ101と同様に、プログラムの命令を実行する演算回路を含むコントローラである。プロセッサ201は、フラッシュメモリ203に記憶されたプログラムやデータの少なくとも一部をRAM202にロードし、プログラムを実行する。RAM202は、プロセッサ201が実行するプログラムやプロセッサ201が演算に使用するデータを一時的に記憶する揮発性の半導体メモリである。フラッシュメモリ203は、プログラムやデータを記憶する不揮発性の記憶装置である。ナビゲーション装置200は、HDD(Hard Disk Drive)などの他の種類の記憶装置を備えてもよい。 Similar to the processor 101, the processor 201 is a controller including an arithmetic circuit that executes program instructions. The processor 201 loads at least a part of the program and data stored in the flash memory 203 into the RAM 202 and executes the program. The RAM 202 is a volatile semiconductor memory that temporarily stores programs executed by the processor 201 and data used by the processor 201 for calculation. The flash memory 203 is a non-volatile storage device that stores programs and data. The navigation device 200 may include other types of storage devices such as an HDD (Hard Disk Drive).
 ディスプレイ204は、プロセッサ201からの命令に従って画像を表示する。ディスプレイ204としては、液晶ディスプレイ(LCD:Liquid Crystal Display)や有機EL(OEL:Organic Electro-Luminescence)ディスプレイなど、様々な種類のディスプレイを用いることができる。入力デバイス205は、ユーザの操作を受け付け、入力信号をプロセッサ201に出力する。入力デバイス205としては、タッチパネルやキーパッドやトラックボールなど、様々な種類の入力デバイスを用いることができる。 Display 204 displays an image in accordance with a command from processor 201. As the display 204, various types of displays such as a liquid crystal display (LCD: Liquid Crystal Display) and an organic EL (OEL: Organic Electro-Luminescence) display can be used. The input device 205 receives a user operation and outputs an input signal to the processor 201. As the input device 205, various types of input devices such as a touch panel, a keypad, and a trackball can be used.
 媒体リーダ206は、記録媒体207に記録されたプログラムやデータを読み取る読み取り装置である。記録媒体207としては、フレキシブルディスク(FD:Flexible Disk)やHDDなどの磁気ディスク、CD(Compact Disc)やDVD(Digital Versatile Disc)などの光ディスク、光磁気ディスク(MO:Magneto-Optical disk)、半導体メモリなど、様々な種類の記録媒体を用いることができる。媒体リーダ206は、例えば、読み取ったプログラムやデータをRAM202またはフラッシュメモリ203に格納する。 The media reader 206 is a reading device that reads programs and data recorded on the recording medium 207. The recording medium 207 includes a magnetic disk such as a flexible disk (FD) or HDD, an optical disk such as a CD (Compact Disk) or a DVD (Digital Versatile Disk), a magneto-optical disk (MO), a semiconductor. Various types of recording media such as a memory can be used. For example, the medium reader 206 stores the read program and data in the RAM 202 or the flash memory 203.
Next, the SLAM processing performed by the position estimation device 100 will be described.
FIG. 5 is a diagram illustrating an example of the coordinate systems of the imaging devices and the vehicle. To simplify the description, the following explanation of the directions of the coordinate axes assumes that the vehicle 30 is placed horizontally on level ground in the earth coordinate system.
In analyzing the images of the imaging devices 31 to 34, the position estimation device 100 defines, for each of the imaging devices 31 to 34, a coordinate system unique to that imaging device (camera coordinate system). The camera coordinate system is a logical coordinate system for image analysis and is different from the earth coordinate system.
For the imaging device 31, coordinate axes C1X, C1Y, and C1Z are defined with the position of the imaging device 31 as the origin. The positive direction of C1X is the horizontal rightward direction as viewed from the imaging device 31. The positive direction of C1Y is the vertically downward direction as viewed from the imaging device 31. The positive direction of C1Z is the frontal direction (imaging direction) as viewed from the imaging device 31. Similarly, for the imaging device 32, coordinate axes C2X, C2Y, and C2Z are defined with the position of the imaging device 32 as the origin. For the imaging device 33, coordinate axes C3X, C3Y, and C3Z are defined with the position of the imaging device 33 as the origin. For the imaging device 34, coordinate axes C4X, C4Y, and C4Z are defined with the position of the imaging device 34 as the origin.
That is, the XY plane of each of the imaging devices 31 to 34 is a plane parallel to the lens of that imaging device, and the XZ plane is a horizontal plane. Therefore, the images captured by the imaging devices 31 to 34 represent the XY plane of the respective camera coordinate system. C2Z is parallel to C1Z rotated by 180° in the XZ plane. C3Z is parallel to C1Z rotated by 90° to the left in the XZ plane. C4Z is parallel to C1Z rotated by 90° to the right in the XZ plane. Similarly, C2X is parallel to C1X rotated by 180° in the XZ plane. C3X is parallel to C1X rotated by 90° to the left in the XZ plane. C4X is parallel to C1X rotated by 90° to the right in the XZ plane. C1Y, C2Y, C3Y, and C4Y are parallel to one another.
In calculating the movement trajectory of the vehicle 30, the position estimation device 100 also defines one coordinate system for the vehicle 30 (vehicle coordinate system) separately from the camera coordinate systems. The vehicle coordinate system is also a logical coordinate system for image analysis and is different from the earth coordinate system.
The origin of the vehicle coordinate system is a predetermined position within the vehicle 30 and is set, for example, at the center of gravity of the vehicle 30. For the vehicle 30, coordinate axes VX, VY, and VZ are defined. The positive direction of VX is the horizontal rightward direction with respect to the frontal forward direction of the vehicle 30. The positive direction of VY is the vertically downward direction. The positive direction of VZ is the frontal forward direction of the vehicle 30. That is, VX is parallel to C1X, VY is parallel to C1Y, and VZ is parallel to C1Z. In calculating the movement trajectory, the coordinates in the camera coordinate systems of the imaging devices 31 to 34 are converted into coordinates in the vehicle coordinate system of the vehicle 30. The transformation from the former to the latter may be implemented using a transformation matrix or the like.
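By way of illustration only (the embodiment does not disclose source code), the following Python sketch shows one possible form of the camera-to-vehicle coordinate transformation described above, assuming that each camera's pose relative to the vehicle is given by a yaw/pitch/roll rotation and a translation, as in the parameter table described later with reference to FIG. 12. The function names, the NumPy dependency, and the rotation order are assumptions made for this sketch.

```python
import numpy as np

def rotation_from_ypr(yaw_deg, pitch_deg, roll_deg):
    """Rotation of a camera frame expressed in the vehicle frame (assumed order)."""
    y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
    Ry = np.array([[ np.cos(y), 0, np.sin(y)],
                   [         0, 1,         0],
                   [-np.sin(y), 0, np.cos(y)]])      # yaw: rotation in the XZ plane
    Rp = np.array([[1,         0,          0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])      # pitch: rotation in the YZ plane
    Rr = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [        0,          0, 1]])      # roll: rotation in the XY plane
    return Ry @ Rp @ Rr

def camera_to_vehicle(p_cam, yaw_deg, pitch_deg, roll_deg, t_cam):
    """Map a 3D point from a camera coordinate system to the vehicle coordinate system."""
    R = rotation_from_ypr(yaw_deg, pitch_deg, roll_deg)
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(t_cam, dtype=float)

# Example: a point 2 m in front of the rear camera (yaw 180 deg, mounted at Z = -2.5 m)
# maps to roughly (0, 0, -4.5) in the vehicle coordinate system.
print(camera_to_vehicle([0.0, 0.0, 2.0], 180.0, 0.0, 0.0, [0.0, 0.0, -2.5]))
```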
FIG. 6 is a diagram illustrating examples of images captured by the four imaging devices.
The imaging devices 31 to 34 continuously capture images and output sequences of image data representing sequences of images with different imaging times. For example, the imaging devices 31 to 34 capture images at a period of 1/30 second. The images captured by the imaging devices 31 to 34 may be color images or monochrome images.
As an example, consider a case where the vehicle 30 is traveling forward in a parking lot.
(A) The imaging device 31 captures an image 41 at time "9:30:00". Thereafter, the imaging device 31 captures images at a period of 1/30 second and captures an image 42 at time "9:30:05". Since the vehicle 30 is moving in the depth direction of the image, in the sequence of images captured by the imaging device 31, objects near the vehicle 30, such as other parked vehicles and the white lines marking the parking spaces, appear to move from the center toward the periphery. On the other hand, buildings far from the vehicle 30 appear to change little in position and size.
(B) The imaging device 32 captures an image 43 at time "9:30:00". Thereafter, the imaging device 32 captures images at a period of 1/30 second and captures an image 44 at time "9:30:05". Since the vehicle 30 is moving in the direction opposite to the depth direction of the image, in the sequence of images captured by the imaging device 32, objects near the vehicle 30, such as other vehicles and the white lines marking the parking spaces, appear to move from the periphery toward the center. On the other hand, buildings far from the vehicle 30 appear to change little in position and size.
(C) The imaging device 33 captures an image 45 at time "9:30:00". Thereafter, the imaging device 33 captures images at a period of 1/30 second and captures an image 46 at time "9:30:05". Since the vehicle 30 is moving toward the right of the image, in the sequence of images captured by the imaging device 33, objects near the vehicle 30, such as other vehicles and the white lines marking the parking spaces, appear to move from right to left. Buildings far from the vehicle 30 also appear to move from right to left.
(D) The imaging device 34 captures an image 47 at time "9:30:00". Thereafter, the imaging device 34 captures images at a period of 1/30 second and captures an image 48 at time "9:30:05". Since the vehicle 30 is moving toward the left of the image, in the sequence of images captured by the imaging device 34, objects near the vehicle 30, such as other vehicles and the white lines marking the parking spaces, appear to move from left to right. Buildings far from the vehicle 30 also appear to move from left to right.
FIG. 7 is a diagram illustrating an example of target point extraction in the SLAM processing.
The position estimation device 100 performs the SLAM processing separately for each of the imaging devices 31 to 34. The SLAM processing generates trajectory data representing an estimate of the movement path relative to a reference position and point cloud data representing estimates of the positions in three-dimensional space of the target points appearing in the images. That is, the position estimation device 100 generates trajectory data and point cloud data as estimation results from the sequence of image data output by the imaging device 31. The position estimation device 100 generates trajectory data and point cloud data as estimation results from the sequence of image data output by the imaging device 32. The position estimation device 100 generates trajectory data and point cloud data as estimation results from the sequence of image data output by the imaging device 33. The position estimation device 100 generates trajectory data and point cloud data as estimation results from the sequence of image data output by the imaging device 34.
As the method for each SLAM process, for example, the methods described in the aforementioned Non-Patent Documents 1 to 3 can be used. Here, as an example, a method for extracting target points from each image will be described. An image 51 is an image captured by the imaging device 33. An image 52 is an image captured by the imaging device 33 immediately after the image 51 (for example, 1/30 second later).
The position estimation device 100 analyzes the image 51 and extracts target points from the image 51. A target point is, for example, a pixel of the image 51 whose color gradient (the amount of change in value from adjacent pixels) is equal to or greater than a threshold. As target points, pixels whose color differs from that of adjacent pixels are extracted, such as the outline of a building, the window frames of a building, and white lines drawn on the ground. The position estimation device 100 generates an analysis image 53 indicating the target points extracted from the image 51. Similarly, the position estimation device 100 analyzes the image 52, extracts target points from the image 52, and generates an analysis image 54 indicating the target points extracted from the image 52.
The position estimation device 100 compares the analysis image 53 with the analysis image 54 and determines the correspondence between the target points included in the analysis image 53 and the target points included in the analysis image 54. For example, for each target point of the analysis image 54, the position estimation device 100 searches for a target point of the analysis image 53 whose distance is equal to or less than a threshold when the two analysis images are superimposed, thereby finding target points that refer to substantially the same object. The position estimation device 100 then determines the change in position of each target point between the analysis image 53 and the analysis image 54. From this, the movement direction and movement distance of the imaging device 33 can be estimated. The positions in three-dimensional space of the target points extracted from the images 51 and 52 can also be estimated.
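As a rough illustration of the extraction and matching just described (not the disclosed implementation), the following Python sketch treats each image as a grayscale NumPy array, takes pixels whose gradient magnitude exceeds a threshold as target points, and matches points between two consecutive analysis images by nearest neighbour within a small radius. The threshold of 30, the radius of 5 pixels, and the helper names are assumed values for the sketch.

```python
import numpy as np

def extract_target_points(gray, grad_threshold=30.0):
    """Return (row, col) coordinates of pixels whose local gradient is strong."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    rows, cols = np.nonzero(magnitude >= grad_threshold)
    return np.stack([rows, cols], axis=1)

def match_target_points(points_prev, points_next, radius=5.0):
    """For each point of the next image, pick the closest point of the previous
    image within `radius` pixels; return (prev_index, next_index) pairs."""
    matches = []
    for j, p in enumerate(points_next):
        d = np.linalg.norm(points_prev - p, axis=1)
        i = int(np.argmin(d))
        if d[i] <= radius:
            matches.append((i, j))
    return matches

# Example with two synthetic 100x100 images, the second shifted right by 2 pixels.
img1 = np.zeros((100, 100)); img1[40:60, 40:60] = 255.0
img2 = np.roll(img1, 2, axis=1)
pts1, pts2 = extract_target_points(img1), extract_target_points(img2)
print(len(match_target_points(pts1, pts2)))
```

The per-point displacement of the matched pairs is what the subsequent step interprets as the movement direction and distance of the imaging device.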
As described above, when the target points are moving from the periphery of the image toward the center, it can be estimated that the imaging device 33 is moving in the negative direction of C3Z. When the target points are moving from the center of the image toward the periphery, it can be estimated that the imaging device 33 is moving in the positive direction of C3Z. When the target points are moving from left to right in the image, it can be estimated that the imaging device 33 is moving in the negative direction of C3X. When the target points are moving from right to left in the image, it can be estimated that the imaging device 33 is moving in the positive direction of C3X. Furthermore, when target points with different amounts of movement exist in the same image, it can be estimated that a target point with a large amount of movement is near the imaging device 33, and that a target point with a small amount of movement is far from the imaging device 33.
FIG. 8 is a diagram illustrating an example of SLAM results for the four imaging devices.
Here, the movement trajectories and the point clouds are rotated so that the front of the vehicle 30 faces upward.
(A) The position estimation device 100 calculates a movement trajectory 61 and a point cloud 65 using the image data output by the imaging device 31. (B) The position estimation device 100 calculates a movement trajectory 62 and a point cloud 66 using the image data output by the imaging device 32. (C) The position estimation device 100 calculates a movement trajectory 63 and a point cloud 67 using the image data output by the imaging device 33. (D) The position estimation device 100 calculates a movement trajectory 64 and a point cloud 68 using the image data output by the imaging device 34. The movement trajectories 61 to 64 are calculated independently of one another. Likewise, the point clouds 65 to 68 are calculated independently of one another.
Here, from images captured by a single imaging device, it is difficult to determine the actual movement distance of the vehicle 30 in the earth coordinate system or the actual distances between target points. This is because, from a planar image, it is difficult to distinguish a large target far from the imaging device from a small target near the imaging device. Consequently, the scale of the movement trajectory calculated from the images of each imaging device depends on the camera coordinate system of that imaging device, and the scales of the movement trajectories 61 to 64 are not unified. Similarly, the scale of the point cloud calculated from the images of each imaging device depends on the camera coordinate system of that imaging device, and the scales of the point clouds 65 to 68 are not unified.
In addition, some of the imaging devices 31 to 34 may capture images from which it is difficult to extract target points accurately (images unsuitable for SLAM processing). For example, an image in which a featureless plane such as bare ground or a wall occupies a large area is unsuitable for SLAM processing. An image in which halation has occurred because strong light such as sunlight strikes the imaging device is also unsuitable for SLAM processing. When the shadow of the vehicle 30 appears in an image and the shadow moves as the vehicle 30 moves, that image is unsuitable for SLAM processing. When another vehicle appearing in an image is itself moving, that image is likewise unsuitable for SLAM processing.
From images unsuitable for SLAM processing, low-accuracy movement trajectories and point clouds may be calculated. That is, a movement trajectory that differs greatly from the actual movement path of the vehicle 30, or a point cloud that differs greatly from the arrangement of the targets around the vehicle 30, may be calculated. In the example of FIG. 8, the images of the imaging device 32 (for example, the images 43 and 44 in FIG. 6) are images from which target points are difficult to extract owing to the influence of the ground and distant buildings, so the estimation accuracy of the movement trajectory 62 is degraded.
In the second embodiment, however, the SLAM processing is performed independently for the imaging devices 31 to 34, whose imaging directions are offset from one another by 90°. Therefore, the images of the imaging devices 31, 33, and 34 are not images from which target points are difficult to extract, and the degraded estimation accuracy of the movement trajectory 62 does not affect the movement trajectories 61, 63, and 64. Note that because the movement trajectories 61, 63, and 64 are calculated from different images, they do not coincide completely.
Next, the synthesis of a plurality of SLAM processing results will be described.
FIG. 9 is a diagram illustrating an example of synthesizing SLAM results.
The position estimation device 100 calculates the movement trajectories 61 to 64 as described above. The movement trajectories 61 to 64 represent movement paths from the reference position, that is, movement paths over a predetermined time (for example, the 5 seconds from time "9:30:00" to time "9:30:05").
The movement trajectories 61 to 64 are expressed in different camera coordinate systems. The position estimation device 100 therefore converts the coordinate systems of the movement trajectories 61 to 64 from the camera coordinate systems to the vehicle coordinate system. As a result, the start points of the movement trajectories 61 to 64 move to the origin of the vehicle coordinate system, and the coordinate systems of the movement trajectories 61 to 64 are all unified into the coordinate system defined by VX, VY, and VZ.
As described above, the scales of the movement trajectories 61 to 64 calculated by the SLAM processing differ from one another. On the other hand, since the movement trajectories 61 to 64 represent movement paths over the same period of time, their lengths should be almost identical in the earth coordinate system. The position estimation device 100 therefore converts the movement trajectories 61 to 64 into movement trajectories 71 to 74 so that they have the same length in the vehicle coordinate system. For example, the position estimation device 100 determines a unified length and applies a similarity transformation to each of the movement trajectories 61 to 64 so that it has that unified length. The unified length may be the length of any one of the movement trajectories 61 to 64 (for example, the length of the longest movement trajectory) or may be a length different from any of the movement trajectories 61 to 64.
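A minimal sketch of this length-unification step, assuming each movement trajectory is an (N, 3) array of positions that starts at the origin of the vehicle coordinate system and that the unified length is taken to be the length of the longest trajectory; the function names are illustrative, not part of the disclosure.

```python
import numpy as np

def path_length(traj):
    """Sum of distances between consecutive positions of an (N, 3) trajectory."""
    return float(np.linalg.norm(np.diff(traj, axis=0), axis=1).sum())

def unify_lengths(trajectories):
    """Similarity-scale every trajectory about the origin to the longest length."""
    target = max(path_length(t) for t in trajectories)
    scaled, factors = [], []
    for t in trajectories:
        k = target / path_length(t)
        scaled.append(t * k)      # the start point is the origin, so it stays fixed
        factors.append(k)
    return scaled, factors        # the factors are reused later for the point clouds

# Example: straight trajectories of length 3 m and 5 m are both scaled to 5 m.
t1 = np.array([[0, 0, 0], [0, 0, 1.5], [0, 0, 3.0]], dtype=float)
t2 = np.array([[0, 0, 0], [0, 0, 2.5], [0, 0, 5.0]], dtype=float)
print(unify_lengths([t1, t2])[1])   # [1.666..., 1.0]
```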
Furthermore, as described above, some of the movement trajectories 71 to 74 may have greatly reduced estimation accuracy because the captured images were unsuitable for SLAM processing. The position estimation device 100 therefore searches for an abnormal movement trajectory that differs greatly from the other movement trajectories and, when an abnormal movement trajectory is found, excludes it from the subsequent processing.
For example, the position estimation device 100 calculates an average movement trajectory obtained by averaging the movement trajectories 71 to 74 and calculates the degree of deviation (a "distance" between movement trajectories) between the average movement trajectory and each of the movement trajectories 71 to 74. The position estimation device 100 determines that a movement trajectory whose degree of deviation from the average movement trajectory is equal to or greater than a threshold is an abnormal movement trajectory. Alternatively, for example, the position estimation device 100 calculates the degree of deviation for every pair of movement trajectories and determines that a movement trajectory whose degree of deviation from all the other movement trajectories is equal to or greater than a threshold is an abnormal movement trajectory. In FIG. 9, the movement trajectory 72 is determined to be an abnormal movement trajectory.
The position estimation device 100 then synthesizes the movement trajectories other than the abnormal movement trajectory to calculate a movement trajectory 70, which is the synthesized movement trajectory. The movement trajectory 70 is regarded as the correct movement trajectory. For example, the position estimation device 100 takes the average of the movement trajectories 71, 73, and 74 as the synthesized movement trajectory. Alternatively, the position estimation device 100 may assign weights to the movement trajectories 71, 73, and 74 and take their weighted average as the synthesized movement trajectory. The weights may be determined based on the shape of each movement trajectory; for example, the weight of an unnatural movement trajectory, such as a meandering one, may be made small. Instead of excluding the abnormal movement trajectory, its weight may be made small and a weighted average of the movement trajectories 71 to 74 may be calculated.
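The abnormal-trajectory test and the averaging can be pictured with the following sketch, which is one plausible reading rather than the claimed method: trajectories are (N, 3) arrays sampled at the same timestamps, the deviation between two trajectories is taken as the mean distance between corresponding samples, and the threshold of 1.0 m is an assumed value.

```python
import numpy as np

def deviation(a, b):
    """Mean distance between corresponding samples of two trajectories."""
    return float(np.linalg.norm(a - b, axis=1).mean())

def synthesize(trajectories, threshold=1.0):
    """Drop trajectories that deviate strongly from the average, then average the rest."""
    stack = np.stack(trajectories)                # shape (K, N, 3)
    mean_traj = stack.mean(axis=0)
    keep = [t for t in trajectories if deviation(t, mean_traj) < threshold]
    if not keep:                                  # fall back to using everything
        keep = trajectories
    return np.stack(keep).mean(axis=0)            # synthesized movement trajectory
```

A weighted variant would replace the final mean with np.average(..., weights=...), with weights chosen, for example, from the shape of each trajectory.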
When the movement trajectory 70 has been calculated, the position estimation device 100 converts the coordinate system of the movement trajectory 70 from the vehicle coordinate system to the earth coordinate system. Here, the length of the movement trajectory 70 may differ from the actual movement distance of the vehicle 30 in the earth coordinate system. The position estimation device 100 therefore scales the movement trajectory 70 to adjust its scale.
Various methods are conceivable for scaling the movement trajectory 70.
As a first method, a method using the travel distance of the vehicle 30 measured by the odometer 35 is conceivable. Based on the information output by the odometer 35, the position estimation device 100 obtains the actual travel distance of the vehicle 30 from the time it was at the reference position to the present (for example, the 5 seconds from time "9:30:00" to time "9:30:05"). The position estimation device 100 applies a similarity transformation to the movement trajectory 70 so that its length matches that travel distance. When the actual travel distance is used, the position estimation device 100 may instead make the lengths of the movement trajectories 71 to 74 match that travel distance when converting the movement trajectories 61 to 64 into the movement trajectories 71 to 74 in FIG. 9; in that case, scaling of the movement trajectory 70 is unnecessary.
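For the first method, the scaling reduces to a single similarity factor; a small sketch under the same array representation as above (an illustration, not the disclosed code):

```python
import numpy as np

def scale_to_odometer(traj, odometer_distance_m):
    """Scale a trajectory (N, 3) starting at the origin so that its path length
    matches the travel distance reported by the odometer."""
    length = float(np.linalg.norm(np.diff(traj, axis=0), axis=1).sum())
    return traj * (odometer_distance_m / length)
```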
As a second method, a method using a map prepared in advance is conceivable. From among the point clouds 65 to 68 calculated by the SLAM processing, the position estimation device 100 selects the point clouds 65, 67, and 68, excluding the point cloud 66 corresponding to the abnormal movement trajectory. The position estimation device 100 matches the point clouds 65, 67, and 68 against the map. For example, the position estimation device 100 changes the scale of each of the point clouds 65, 67, and 68 and calculates the magnification at which the target points overlap the lines drawn on the map the most. The position estimation device 100 obtains the length that the movement trajectory 70 ought to have by applying the magnifications calculated for the point clouds 65, 67, and 68 to the movement trajectories 61, 63, and 64.
As a third method, a method of comparing point clouds with one another is conceivable.
FIG. 10 is a diagram illustrating an example of adjusting the scale of the movement trajectory using point clouds.
From among the point clouds 65 to 68 calculated by the SLAM processing, the position estimation device 100 selects the point clouds 65, 67, and 68, excluding the point cloud 66 corresponding to the movement trajectory 62, which is the abnormal movement trajectory. The position estimation device 100 applies a similarity transformation to the point cloud 65, centered on the origin of the camera coordinate system of the imaging device 31, at the magnification used when converting the movement trajectory 61 into the movement trajectory 71, thereby obtaining a point cloud 75. Similarly, the position estimation device 100 applies a similarity transformation to the point cloud 67, centered on the origin of the camera coordinate system of the imaging device 33, at the magnification used when converting the movement trajectory 63 into the movement trajectory 73, thereby obtaining a point cloud 77. The position estimation device 100 applies a similarity transformation to the point cloud 68, centered on the origin of the camera coordinate system of the imaging device 34, at the magnification used when converting the movement trajectory 64 into the movement trajectory 74, thereby obtaining a point cloud 78.
As a result, the scales of the movement trajectory 70 and the point clouds 75, 77, and 78 become consistent. The position estimation device 100 then scales the point clouds 75, 77, and 78 with a common magnification. The scaling of the point cloud 75 is performed about the origin of the camera coordinate system of the imaging device 31, the scaling of the point cloud 77 about the origin of the camera coordinate system of the imaging device 33, and the scaling of the point cloud 78 about the origin of the camera coordinate system of the imaging device 34. Since the origins of these coordinate systems differ from one another, changing the magnification changes the degree of overlap of the target points among the point clouds 75, 77, and 78.
The position estimation device 100 calculates the magnification at which the degree of overlap is maximized. For example, a probability distribution spreading over a certain range around each target point is defined, and the position estimation device 100 adjusts the magnification so that the sum of the amounts of overlap between the probability distributions is maximized. Alternatively, for example, the position estimation device 100 counts, for each target point, the other target points that lie within a predetermined range of that target point, and adjusts the magnification so that the total count is maximized.
When the common magnification has been determined, the position estimation device 100 converts the point cloud 75 into a point cloud 85, the point cloud 77 into a point cloud 87, and the point cloud 78 into a point cloud 88 at that common magnification. The position estimation device 100 obtains the length that the movement trajectory 70 ought to have by applying the common magnification to the movement trajectories 71, 73, and 74. In this way, a movement trajectory 80 matched to the scale of the earth coordinate system is calculated.
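A coarse sketch of this third method, assuming the point clouds and camera origins are expressed in the vehicle coordinate system as NumPy arrays and using the pair-counting variant of the overlap measure; the search grid, the radius of 0.3 m, and the function names are assumptions made for illustration.

```python
import numpy as np
from itertools import combinations

def overlap_score(clouds, radius=0.3):
    """Count, over all cloud pairs, points that have a neighbour in the other cloud."""
    score = 0
    for a, b in combinations(clouds, 2):
        for p in a:
            score += int(np.any(np.linalg.norm(b - p, axis=1) <= radius))
    return score

def best_common_scale(clouds, origins, candidates=np.linspace(0.5, 2.0, 31)):
    """Grid-search the common magnification that maximizes point overlap.
    clouds: list of (N_i, 3) arrays; origins: (3,) arrays, one camera origin per cloud."""
    best_k, best_s = 1.0, -1
    for k in candidates:
        scaled = [o + k * (c - o) for c, o in zip(clouds, origins)]  # scale about each origin
        s = overlap_score(scaled)
        if s > best_s:
            best_k, best_s = k, s
    return best_k
```

The probability-distribution variant mentioned above would replace overlap_score with, for example, a sum of Gaussian kernel overlaps evaluated at the same candidate magnifications.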
Next, the data processing of the position estimation device 100 will be described.
FIG. 11 is a block diagram illustrating an example of the functions of the position estimation device.
The position estimation device 100 includes a SLAM result storage unit 111, a parameter storage unit 112, a map data storage unit 113, SLAM processing units 121 to 124, a trajectory comparison unit 125, a point cloud comparison unit 126, a position determination unit 127, a travel distance acquisition unit 128, and a GPS information acquisition unit 129. The SLAM result storage unit 111, the parameter storage unit 112, and the map data storage unit 113 can be implemented using storage areas of the RAM 102 or the ROM 103. The SLAM processing units 121 to 124, the trajectory comparison unit 125, the point cloud comparison unit 126, the position determination unit 127, the travel distance acquisition unit 128, and the GPS information acquisition unit 129 can be implemented using program modules.
The SLAM result storage unit 111 stores the trajectory data and point cloud data generated by the SLAM processing units 121 to 124. The SLAM result storage unit 111 also stores intermediate data generated in the course of the processing of the trajectory comparison unit 125 and the point cloud comparison unit 126. The intermediate data includes data converted from the initial trajectory data and point cloud data.
The parameter storage unit 112 stores parameters indicating the camera coordinate systems of the imaging devices 31 to 34 and the vehicle coordinate system of the vehicle 30. The parameters are used for converting the coordinate systems of the trajectory data and the point cloud data, and are defined in advance according to the arrangement of the imaging devices 31 to 34. The map data storage unit 113 stores map data indicating roads and parking lots. The map data includes, for example, coordinates in the earth coordinate system (latitude and longitude) indicating the locations of roads and parking lots, and line data indicating the shapes of roads and parking lots. The map data storage unit 113 may store only map data relating to parking lots. The map data relating to a parking lot preferably represents the detailed layout inside the parking lot, such as the arrangement of the parking spaces.
The SLAM processing units 121 to 124 generate trajectory data and point cloud data through image processing. The SLAM processing units 121 to 124 operate independently of one another. The SLAM processing units 121 to 124 output the generated trajectory data to the trajectory comparison unit 125 and output the generated point cloud data to the point cloud comparison unit 126.
The SLAM processing unit 121 acquires image data from the imaging device 31 and generates trajectory data and point cloud data corresponding to the imaging device 31. The SLAM processing unit 122 acquires image data from the imaging device 32 and generates trajectory data and point cloud data corresponding to the imaging device 32. The SLAM processing unit 123 acquires image data from the imaging device 33 and generates trajectory data and point cloud data corresponding to the imaging device 33. The SLAM processing unit 124 acquires image data from the imaging device 34 and generates trajectory data and point cloud data corresponding to the imaging device 34.
The trajectory comparison unit 125 calculates the movement trajectory 70 (synthesized movement trajectory) in the vehicle coordinate system using the trajectory data generated by the SLAM processing units 121 to 124. Using the parameters stored in the parameter storage unit 112, the trajectory comparison unit 125 converts the coordinate systems of the movement trajectories 61 to 64 indicated by the trajectory data of the SLAM processing units 121 to 124 from the camera coordinate systems to the vehicle coordinate system. The trajectory comparison unit 125 also adjusts the scales of the movement trajectories 61 to 64 so that they have the same length, generating the movement trajectories 71 to 74. The trajectory comparison unit 125 notifies the point cloud comparison unit 126 of the magnifications applied to the movement trajectories 61 to 64.
The trajectory comparison unit 125 synthesizes the length-aligned movement trajectories 71 to 74 to generate the movement trajectory 70 and notifies the position determination unit 127 of the movement trajectory 70. The synthesis of the movement trajectories 71 to 74 may include detecting an abnormal movement trajectory among the movement trajectories 71 to 74 and excluding it; in that case, the trajectory comparison unit 125 notifies the point cloud comparison unit 126 of the abnormal movement trajectory. The synthesis of the movement trajectories 71 to 74 may also include calculating the average of all or some of the movement trajectories 71 to 74, or assigning weights to the movement trajectories 71 to 74 and calculating their weighted average.
The point cloud comparison unit 126 determines, using the point cloud data generated by the SLAM processing units 121 to 124, the magnification for adjusting the scale of the movement trajectory 70 generated by the trajectory comparison unit 125. The point cloud comparison unit 126 applies the magnifications of the movement trajectories 61 to 64 notified by the trajectory comparison unit 125 to the point clouds 65 to 68 to generate the point clouds 75 to 78 corresponding to the movement trajectories 71 to 74. That is, the point cloud comparison unit 126 applies the magnification applied to the movement trajectory 61 to the point cloud 65, the magnification applied to the movement trajectory 62 to the point cloud 66, the magnification applied to the movement trajectory 63 to the point cloud 67, and the magnification applied to the movement trajectory 64 to the point cloud 68.
The point cloud comparison unit 126 scales the point clouds 75 to 78 with a common magnification and determines the magnification at which the target points overlap the most among the point clouds 75 to 78. When an abnormal movement trajectory is notified by the trajectory comparison unit 125, the point cloud comparison unit 126 may exclude the point cloud corresponding to the abnormal movement trajectory from the point clouds 75 to 78. The point cloud comparison unit 126 notifies the position determination unit 127 of the determined magnification.
The position determination unit 127 adjusts the length of the movement trajectory 70 notified by the trajectory comparison unit 125 based on the magnification notified by the point cloud comparison unit 126, and calculates the movement trajectory 80. The position determination unit 127 maps the movement trajectory 80 onto the earth coordinate system so that the start point of the movement trajectory 80 coincides with the reference position, and outputs the estimation results for the movement trajectory and the current position of the vehicle 30 in the earth coordinate system. The current position corresponds to the end point of the movement trajectory 80. The movement trajectory and the current position can be expressed using coordinates in the earth coordinate system (latitude and longitude).
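As a hypothetical illustration of this mapping (the embodiment does not specify a projection), the following sketch places the trajectory onto latitude/longitude with a simple equirectangular approximation, assuming that the latitude, longitude, and heading of the vehicle at the reference position are known; all names and the metres-per-degree constant are assumptions.

```python
import math

def to_earth(traj_xz_m, ref_lat_deg, ref_lon_deg, ref_heading_deg):
    """traj_xz_m: (x_right_m, z_forward_m) offsets in the vehicle frame at the
    reference time. Returns (latitude, longitude) estimates for each sample."""
    h = math.radians(ref_heading_deg)        # heading measured clockwise from north
    m_per_deg_lat = 111_320.0                # rough metres per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(ref_lat_deg))
    result = []
    for x, z in traj_xz_m:
        north = z * math.cos(h) - x * math.sin(h)
        east = z * math.sin(h) + x * math.cos(h)
        result.append((ref_lat_deg + north / m_per_deg_lat,
                       ref_lon_deg + east / m_per_deg_lon))
    return result
```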
However, instead of using the processing result of the point cloud comparison unit 126, the position determination unit 127 may scale the movement trajectory 70 using the travel distance acquired from the travel distance acquisition unit 128. The position determination unit 127 may also determine the magnification by matching the point cloud data generated by the SLAM processing unit 123 against the map data stored in the map data storage unit 113, and scale the movement trajectory 70 using the determined magnification. As the reference position in the earth coordinate system, the current position of the vehicle 30 last measured by the GPS information acquisition unit 129 may be used. Alternatively, the position of the ETC (Electronic Toll Collection System) gate through which the vehicle 30 last passed may be identified from the map data, and the position of that ETC gate may be used as the reference position. The current position estimated last time by the position determination unit 127 may also be used as the reference position for the next estimation.
The travel distance acquisition unit 128 acquires travel distance information from the odometer 35. The GPS information acquisition unit 129 acquires current position information from the GPS measurement unit 36. However, in places where GPS signals do not reach, the GPS information acquisition unit 129 cannot acquire current position information.
FIG. 12 is a diagram illustrating an example of a parameter table.
The parameter table 114 is stored in the parameter storage unit 112. The parameter table 114 has items of imaging device ID, X coordinate, Y coordinate, Z coordinate, yaw (Yaw), pitch (Pitch), and roll (Roll).
The imaging device ID is identification information of the imaging devices 31 to 34 mounted on the vehicle 30. The X, Y, and Z coordinates are coordinates in the vehicle coordinate system indicating the locations where the imaging devices 31 to 34 are mounted. The imaging device 31 is located at (0.0 m, 0.0 m, 2.5 m). The imaging device 32 is located at (0.0 m, 0.0 m, -2.5 m). The imaging device 33 is located at (-1.0 m, 0.0 m, 0.8 m). The imaging device 34 is located at (1.0 m, 0.0 m, 0.8 m).
The yaw is the angle of the imaging direction of each of the imaging devices 31 to 34 in the XZ plane, that is, the angle in the left-right direction with respect to the front of the vehicle 30. The pitch is the angle of the imaging direction of each of the imaging devices 31 to 34 in the YZ plane, that is, the angle in the up-down direction with respect to the front of the vehicle 30. The roll is the angle of the imaging direction of each of the imaging devices 31 to 34 in the XY plane, that is, the angle between the upward direction of the captured image and the vertically upward direction of the vehicle 30.
In the second embodiment, the imaging directions of the imaging devices 31 to 34 are parallel to the horizontal plane, and the pitch and roll are all 0°. Meanwhile, the imaging devices 31 to 34 are mounted on the front, rear, left, and right of the vehicle 30 with their directions offset by 90°. Accordingly, the yaw of the imaging device 31 is 0°, the yaw of the imaging device 32 is 180°, the yaw of the imaging device 33 is -90°, and the yaw of the imaging device 34 is 90°.
FIG. 13 is a diagram illustrating an example of trajectory data generated by the SLAM processing.
The SLAM result storage unit 111 stores trajectory data 131 to 134. The trajectory data 131 is generated from the images of the imaging device 31 by the SLAM processing unit 121. The trajectory data 132 is generated from the images of the imaging device 32 by the SLAM processing unit 122. The trajectory data 133 is generated from the images of the imaging device 33 by the SLAM processing unit 123. The trajectory data 134 is generated from the images of the imaging device 34 by the SLAM processing unit 124.
Each of the trajectory data 131 to 134 includes a plurality of records that associate a time with an X coordinate, a Y coordinate, a Z coordinate, a yaw, a pitch, a roll, and a distance. However, since the pitch and roll are all 0° in the second embodiment, they are omitted in FIG. 13. In the example of FIG. 13, each of the trajectory data 131 to 134 includes six records corresponding to the times from time "9:30:00" to time "9:30:05". The first time, "9:30:00", is the time at which the vehicle 30 was at the reference position (reference time).
The time in the trajectory data 131 to 134 is the time at which the image was captured. The X, Y, and Z coordinates in the trajectory data 131 to 134 are coordinates in the camera coordinate system and represent the estimated position relative to the position of the imaging device at the reference time. The yaw, pitch, and roll in the trajectory data 131 to 134 are directions in the camera coordinate system and represent the estimated movement direction at each time. The distance in the trajectory data 131 to 134 is a distance in the camera coordinate system and represents the cumulative movement distance from the position of the imaging device at the reference time. This distance is the sum of the amounts of change in the relative position between adjacent times. The distance at the last time, "9:30:05", represents the length of the movement trajectory.
For example, the trajectory data 132 includes a record with time "9:30:01", X coordinate "0.080 m", Y coordinate "0.000 m", Z coordinate "-0.500 m", yaw "-9.090°", and distance "0.506 m". This indicates that, one second after the reference time, the imaging device 32 had moved from (0.000 m, 0.000 m, 0.000 m) to (0.080 m, 0.000 m, -0.500 m), that the traveling direction had changed by 9.090° to the left from the traveling direction at the reference time, and that the imaging device 32 had moved 0.506 m from its position at the reference time.
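The distance column follows directly from the position columns; for instance, the 0.506 m above is the length of the step from (0.000, 0.000, 0.000) to (0.080, 0.000, -0.500). A small sketch of that accumulation (the record layout is assumed for illustration):

```python
import numpy as np

records = [                      # (time, x, y, z) in camera coordinates
    ("9:30:00", 0.000, 0.000, 0.000),
    ("9:30:01", 0.080, 0.000, -0.500),
]
positions = np.array([r[1:] for r in records])
steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)   # per-interval movement
cumulative = np.concatenate([[0.0], np.cumsum(steps)])       # running distance column
print(cumulative)                # [0.    0.506...]
```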
FIG. 14 is a diagram illustrating an example of point cloud data generated by the SLAM processing.
The SLAM result storage unit 111 stores point cloud data 135 to 138. The point cloud data 135 is generated from the images of the imaging device 31 by the SLAM processing unit 121. The point cloud data 136 is generated from the images of the imaging device 32 by the SLAM processing unit 122. The point cloud data 137 is generated from the images of the imaging device 33 by the SLAM processing unit 123. The point cloud data 138 is generated from the images of the imaging device 34 by the SLAM processing unit 124.
Each of the point cloud data 135 to 138 includes a plurality of records that associate an X coordinate, a Y coordinate, and a Z coordinate. One record of the point cloud data 135 to 138 corresponds to one target point. The X, Y, and Z coordinates in the point cloud data 135 to 138 are coordinates in the camera coordinate system and represent the estimated position at which the target point exists in three-dimensional space. The numbers of records included in the point cloud data 135 to 138, that is, the numbers of target points recognized by the SLAM processing units 121 to 124, may differ from one another. Because the position of each target point is estimated by combining the analysis results of a plurality of images with different imaging times, the records are not classified by time.
FIG. 15 is a diagram illustrating a first conversion example of the trajectory data.
The SLAM result storage unit 111 stores trajectory data 141 to 144. The trajectory data 141 to 144 are obtained by the trajectory comparison unit 125 converting the trajectory data 131 to 134. The trajectory data 131 is converted into the trajectory data 141, the trajectory data 132 into the trajectory data 142, the trajectory data 133 into the trajectory data 143, and the trajectory data 134 into the trajectory data 144.
The trajectory data 141 is obtained by converting the coordinate system of the trajectory data 131 from the camera coordinate system of the imaging device 31 to the vehicle coordinate system using the parameters of the imaging device 31 included in the parameter table 114. The trajectory data 142 is obtained by converting the coordinate system of the trajectory data 132 from the camera coordinate system of the imaging device 32 to the vehicle coordinate system using the parameters of the imaging device 32 included in the parameter table 114. The trajectory data 143 is obtained by converting the coordinate system of the trajectory data 133 from the camera coordinate system of the imaging device 33 to the vehicle coordinate system using the parameters of the imaging device 33 included in the parameter table 114. The trajectory data 144 is obtained by converting the coordinate system of the trajectory data 134 from the camera coordinate system of the imaging device 34 to the vehicle coordinate system using the parameters of the imaging device 34 included in the parameter table 114.
The X, Y, and Z coordinates in the trajectory data 141 to 144 are coordinates in the vehicle coordinate system and represent the estimated position of the vehicle 30 relative to the reference position. The reference position is, for example, the center position of the vehicle 30 at the reference time, and the relative position at each time is, for example, the center position of the vehicle 30 at that time. The yaw, pitch, and roll in the trajectory data 141 to 144 represent the estimated movement direction at each time in the vehicle coordinate system. The distance in the trajectory data 141 to 144 represents the cumulative movement distance from the reference position in the vehicle coordinate system.
For example, the trajectory data 142 includes a record with time "9:30:01", X coordinate "-0.475 m", Y coordinate "0.000 m", Z coordinate "0.500 m", yaw "-9.090°", and distance "0.690 m". In the conversion from the trajectory data 131 to the trajectory data 141, the X coordinate at time "9:30:01" has been adjusted.
FIG. 16 is a diagram illustrating a second conversion example of the trajectory data.
The SLAM result storage unit 111 stores trajectory data 145 to 148. The trajectory data 145 to 148 are obtained by the trajectory comparison unit 125 converting the trajectory data 141 to 144. The trajectory data 141 is converted into the trajectory data 145, the trajectory data 142 into the trajectory data 146, the trajectory data 143 into the trajectory data 147, and the trajectory data 144 into the trajectory data 148.
 The trajectory data 145 is obtained by applying a similarity transformation to the trajectory data 141 so that the lengths of the movement trajectories become equal, and corresponds to the movement trajectory 71. Likewise, the trajectory data 146 is obtained by applying a similarity transformation to the trajectory data 142 and corresponds to the movement trajectory 72; the trajectory data 147 is obtained by applying a similarity transformation to the trajectory data 143 and corresponds to the movement trajectory 73; and the trajectory data 148 is obtained by applying a similarity transformation to the trajectory data 144 and corresponds to the movement trajectory 74.
 In the example of FIG. 16, the lengths of the movement trajectories 71 to 74 indicated by the trajectory data 145 to 148, that is, the distances at the last time "9:30:05", are unified to 5.000 m. For example, the trajectory data 146 includes a record with time "9:30:01", X coordinate "-0.525 m", Y coordinate "0.000 m", Z coordinate "0.811 m", yaw "-9.090°", and distance "0.965 m". In the conversion from the trajectory data 142 to the trajectory data 146, the length of the movement trajectory is extended from 3.432 m to 5.000 m; accordingly, the distance at time "9:30:01" is also extended from 0.690 m to 0.965 m.
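 As a rough illustration of this length normalization, the following sketch scales the positions and cumulative distances of a trajectory so that its total length matches a common target (5.000 m in the example of FIG. 16). It is an assumed, simplified form of the similarity transformation; the record layout and the choice of scaling about the origin are not taken from the specification.

```python
def normalize_trajectory_length(records, target_length):
    """Scale a trajectory (list of records) so that the cumulative distance of
    its last record equals target_length.  Angles are left unchanged."""
    scale = target_length / records[-1]["distance"]
    return [{**r,
             "x": r["x"] * scale,
             "y": r["y"] * scale,
             "z": r["z"] * scale,
             "distance": r["distance"] * scale} for r in records]

# e.g. normalize_trajectory_length(trajectory_142, 5.000) would yield an
# approximation of the trajectory data 146 under these simplifying assumptions.
```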
 FIG. 17 is a diagram illustrating an example of combined trajectory data.
 The trajectory comparison unit 125 generates trajectory data 151 from the trajectory data 145 to 148. This corresponds to calculating the movement trajectory 70 by combining the movement trajectories 71 to 74.
 Here, the trajectory comparison unit 125 determines that the movement trajectory 72 indicated by the trajectory data 146 is an abnormal movement trajectory. The trajectory comparison unit 125 then generates the trajectory data 151 by averaging the trajectory data 145, 147, and 148. In this averaging, the trajectory comparison unit 125 calculates, for records having the same time, the average values of the X coordinate, Y coordinate, Z coordinate, yaw, pitch, and roll, and then calculates the distance. For example, the trajectory data 151 includes a record with time "9:30:01", X coordinate "-0.037 m", Y coordinate "0.000 m", Z coordinate "1.014 m", yaw "-2.839°", and distance "1.016 m".
 The trajectory comparison unit 125 then scales the trajectory data 151 to generate trajectory data 152. This corresponds to converting the length of the movement trajectory 70 into the actual distance in the earth coordinate system to calculate the movement trajectory 80.
 In the example of FIG. 17, the length of the movement trajectory 80 indicated by the trajectory data 152, that is, the distance at the last time "9:30:05", is extended to 8.000 m. The trajectory data 152 includes a record with time "9:30:01", X coordinate "-0.007 m", Y coordinate "0.000 m", Z coordinate "1.615 m", yaw "-2.839°", and distance "1.615 m".
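 A minimal sketch of the averaging step is shown below. It assumes the trajectories are already time-aligned and normalized to the same length; the field names are illustrative, and the cumulative distance of the averaged trajectory is simply recomputed from the averaged positions. The final rescaling to the real-world length (5.000 m to 8.000 m in FIG. 17) could reuse a function such as normalize_trajectory_length from the earlier sketch.

```python
import numpy as np

def average_trajectories(trajectories):
    """trajectories: list of time-aligned trajectories (lists of records).
    Returns one combined trajectory obtained by per-time averaging."""
    combined = []
    cumulative, prev = 0.0, None
    for records in zip(*trajectories):            # records sharing the same time
        pos = np.mean([[r["x"], r["y"], r["z"]] for r in records], axis=0)
        yaw = float(np.mean([r["yaw"] for r in records]))
        if prev is not None:
            cumulative += float(np.linalg.norm(pos - prev))
        combined.append({"time": records[0]["time"],
                         "x": pos[0], "y": pos[1], "z": pos[2],
                         "yaw": yaw, "distance": cumulative})
        prev = pos
    return combined
```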
 FIG. 18 is a flowchart illustrating an example procedure of position estimation.
 (S1) The position determination unit 127 sets a reference position. At the start of position estimation, the reference position is set using GPS or the like. After position estimation has started, the position determination unit 127 may set the previously estimated current position as the reference position.
 (S2) The SLAM processing unit 121 analyzes the sequence of images captured by the imaging device 31 and generates the trajectory data 131 and the point group data 135. The SLAM processing unit 122 analyzes the sequence of images captured by the imaging device 32 and generates the trajectory data 132 and the point group data 136. The SLAM processing unit 123 analyzes the sequence of images captured by the imaging device 33 and generates the trajectory data 133 and the point group data 137. The SLAM processing unit 124 analyzes the sequence of images captured by the imaging device 34 and generates the trajectory data 134 and the point group data 138.
 (S3) The trajectory comparison unit 125 refers to the parameter table 114 stored in the parameter storage unit 112 and converts the trajectory data 131 to 134, expressed in the camera coordinate systems, into the trajectory data 141 to 144, expressed in the vehicle coordinate system.
 (S4) The trajectory comparison unit 125 performs scaling so that the lengths of the movement trajectories 61 to 64 indicated by the trajectory data 141 to 144 become equal, and calculates the movement trajectories 71 to 74. That is, the trajectory comparison unit 125 converts the trajectory data 141 to 144 into the trajectory data 145 to 148 so that the distance at the last time is the same among the trajectory data 141 to 144. When the travel distance can be acquired from the travel distance acquisition unit 128, the trajectory comparison unit 125 may match the lengths of the movement trajectories 71 to 74 to the travel distance at this point.
 (S5) The point group comparison unit 126 refers to the parameter table 114 stored in the parameter storage unit 112 and rotates the coordinates of the point group data 135 to 138, expressed in the camera coordinate systems, about the origin of each camera coordinate system so as to align them with the vehicle coordinate system.
 (S6) The point group comparison unit 126 scales the point groups 65 to 68 indicated by the point group data 135 to 138 by applying the same magnifications as those used for the movement trajectories 61 to 64. The scaling is performed about the positions of the imaging devices 31 to 34 (the origins of the camera coordinate systems). The point group comparison unit 126 applies to the point group 65 the magnification used when the movement trajectory 61 was converted into the movement trajectory 71, applies to the point group 66 the magnification used when the movement trajectory 62 was converted into the movement trajectory 72, applies to the point group 67 the magnification used when the movement trajectory 63 was converted into the movement trajectory 73, and applies to the point group 68 the magnification used when the movement trajectory 64 was converted into the movement trajectory 74.
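 The scaling of a point group about its camera origin can be sketched as follows; this is an assumed, simplified illustration, with the camera origin expressed in vehicle coordinates and the magnification taken to be the one obtained for the corresponding movement trajectory.

```python
import numpy as np

def scale_point_group(points, magnification, camera_origin):
    """points: (N, 3) array of target-point coordinates; camera_origin: (3,).
    Scales every point away from (or toward) the camera origin."""
    points = np.asarray(points, dtype=float)
    origin = np.asarray(camera_origin, dtype=float)
    return origin + magnification * (points - origin)
```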
 (S7) The trajectory comparison unit 125 detects, from among the movement trajectories 71 to 74 indicated by the trajectory data 145 to 148, an abnormal movement trajectory that differs greatly from the other movement trajectories, and excludes the abnormal movement trajectory.
 For example, the trajectory comparison unit 125 calculates, for each time, the average of the coordinates indicated by the trajectory data 145 to 148 to obtain an average movement trajectory. The trajectory comparison unit 125 then calculates, for each time, the squared deviation between the coordinates indicated by the trajectory data 145 and the coordinates of the average movement trajectory, and defines the sum of squared deviations as the degree of divergence between the movement trajectory 71 and the average movement trajectory. Similarly, the trajectory comparison unit 125 calculates the degree of divergence for the movement trajectories 72 to 74. The trajectory comparison unit 125 then determines that any of the movement trajectories 71 to 74 whose degree of divergence is equal to or greater than a threshold is an abnormal movement trajectory.
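 This divergence test can be sketched as below. The array layout and the threshold value are assumptions for illustration only; the specification does not prescribe a particular data structure or threshold.

```python
import numpy as np

def detect_abnormal_trajectories(trajectories, threshold):
    """trajectories: array of shape (num_cameras, num_times, 3) holding
    time-aligned positions.  Returns indices of abnormal trajectories."""
    trajectories = np.asarray(trajectories, dtype=float)
    average = trajectories.mean(axis=0)                 # average movement trajectory
    divergence = ((trajectories - average) ** 2).sum(axis=(1, 2))
    return [i for i, d in enumerate(divergence) if d >= threshold]

# With four cameras, detect_abnormal_trajectories(trajs, threshold=0.5) might
# return [1], i.e. the second trajectory is excluded before averaging.
```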
 (S8) The trajectory comparison unit 125 combines the trajectory data 145 to 148, excluding the trajectory data indicating the abnormal movement trajectory, to generate the trajectory data 151. For example, the trajectory comparison unit 125 calculates, for each time, the average of the coordinates indicated by the remaining trajectory data. This corresponds to calculating the movement trajectory 70 by averaging the movement trajectories other than the abnormal movement trajectory.
 (S9) The point group comparison unit 126 selects, from among the point group data indicating the point groups scaled in step S6, the remaining point group data excluding the point group data corresponding to the abnormal movement trajectory. The point group comparison unit 126 scales the point groups indicated by the selected point group data by the same magnification about the origins of the respective camera coordinate systems, and searches for the common magnification at which the overlap of the target points becomes maximum. The position determination unit 127 applies the found common magnification to the trajectory data 151 generated in step S8 to generate the trajectory data 152 indicating the movement trajectory 80. The length of the movement trajectory 80 is an estimate of the actual movement distance in the earth coordinate system.
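 One way to search for such a common magnification is a coarse grid search that scores how well the target points from different cameras coincide. The sketch below is an assumption: the specification does not fix the search strategy, the tolerance, or the scoring function.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_score(groups, tol=0.05):
    """Count points that have a neighbor within tol metres in another group."""
    score = 0
    for i, group in enumerate(groups):
        others = np.vstack([g for j, g in enumerate(groups) if j != i])
        distances, _ = cKDTree(others).query(group)
        score += int((distances < tol).sum())
    return score

def find_common_scale(groups, origins, candidates):
    """groups: list of (N_i, 3) point arrays; origins: matching camera origins
    in vehicle coordinates; candidates: magnifications to try."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    origins = [np.asarray(o, dtype=float) for o in origins]
    best_scale, best_score = None, -1
    for s in candidates:
        scaled = [o + s * (g - o) for g, o in zip(groups, origins)]
        current = overlap_score(scaled)
        if current > best_score:
            best_scale, best_score = s, current
    return best_scale

# e.g. find_common_scale(groups, origins, np.linspace(0.5, 3.0, 51))
```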
 (S10) The position determination unit 127 maps the movement trajectory 80 onto the map space of the earth coordinate system (latitude-longitude space) using the reference position set in step S1 and the trajectory data 152 generated in step S9. The position indicated by the end point of the movement trajectory 80 represents the current position of the vehicle 30. The automatic parking device 37 controls automatic parking based on the current position estimated by the position estimation device 100. The navigation device 200 displays the current position and movement trajectory estimated by the position estimation device 100 on the display 204, superimposed on a map.
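 A minimal sketch of this mapping, under the assumption of a locally flat (equirectangular) approximation around the reference position and a known vehicle heading at the reference time, is given below. The coordinate conventions (X to the right, Z forward) and the constants are illustrative; the specification only states that the trajectory is mapped into latitude-longitude space.

```python
import math

def map_to_latlon(records, ref_lat, ref_lon, heading_deg=0.0):
    """records hold x (right, m) and z (forward, m) relative to the reference
    position; heading_deg is the vehicle heading at the reference time,
    measured clockwise from north."""
    R_EARTH = 6378137.0                      # WGS-84 equatorial radius, metres
    h = math.radians(heading_deg)
    mapped = []
    for r in records:
        # rotate vehicle-relative offsets into east/north components
        east = r["x"] * math.cos(h) + r["z"] * math.sin(h)
        north = -r["x"] * math.sin(h) + r["z"] * math.cos(h)
        lat = ref_lat + math.degrees(north / R_EARTH)
        lon = ref_lon + math.degrees(east / (R_EARTH * math.cos(math.radians(ref_lat))))
        mapped.append({"time": r["time"], "lat": lat, "lon": lon})
    return mapped  # the last entry approximates the current position of the vehicle
```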
 (S11) The position estimation device 100 determines whether or not to continue position estimation. For example, position estimation is continued while the automatic parking device 37 is in the automatic parking mode, and is terminated when the automatic parking mode is canceled. When position estimation is continued, the process returns to step S1. When position estimation is not continued, the processing of the position estimation device 100 ends.
 FIG. 19 is a diagram illustrating an example of a navigation screen.
 For example, a navigation screen 90 is displayed on the display 204 of the navigation device 200. The navigation screen 90 displays a map of the parking lot that the vehicle 30 has entered. Map data representing the parking lot map is stored in the flash memory 203 of the navigation device 200, and the map depicts detailed shapes within the parking lot, such as the arrangement of parking spaces. The estimated current position of the vehicle 30 is displayed on the navigation screen 90, superimposed on the parking lot map. The navigation screen 90 also displays the movement trajectory of the vehicle 30 from at least a predetermined time before the present up to the present. This allows the user to confirm that automated driving is being performed appropriately.
 According to the position estimation device 100 of the second embodiment, images from the imaging devices 31 to 34, which have different imaging directions, are acquired, and a movement trajectory is calculated for each of the imaging devices 31 to 34 by SLAM processing. An abnormal movement trajectory is detected by comparing the four movement trajectories, and the remaining movement trajectories excluding the abnormal movement trajectory are combined.
 As a result, even if some of the images from the imaging devices 31 to 34 become unsuitable for SLAM processing due to the influence of light or the like and the reliability of some of the movement trajectories decreases, the reliability of the combined movement trajectory is improved. In addition, compared with a method of calculating a single movement trajectory from a stereo image captured by a stereo camera, the imaging directions of the imaging devices 31 to 34 can be shifted greatly from one another. This reduces the risk that all of the images from the imaging devices 31 to 34 become unsuitable for SLAM processing, and improves the accuracy of position estimation using SLAM.
 As described above, the information processing of the first embodiment can be realized by causing the position estimation device 10 to execute a program, and the information processing of the second embodiment can be realized by causing the position estimation device 100 and the navigation device 200 to execute a program.
 The program can be recorded on a computer-readable recording medium (for example, the recording medium 207). As the recording medium, for example, a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory can be used. Magnetic disks include FDs and HDDs. Optical disks include CDs, CD-R (Recordable)/RW (Rewritable), DVDs, and DVD-R/RW. The program may be recorded on a portable recording medium and distributed. In that case, the program may be copied from the portable recording medium to another recording medium (for example, the flash memory 203) and then executed.
 The foregoing merely illustrates the principles of the present invention. Furthermore, since numerous modifications and changes will readily occur to those skilled in the art, the present invention is not limited to the exact construction and applications shown and described above, and all corresponding modifications and equivalents are regarded as falling within the scope of the invention defined by the appended claims and their equivalents.
 DESCRIPTION OF SYMBOLS
 10 Position estimation device
 11 Image acquisition unit
 12 Determination unit
 13, 14 Movement trajectory
 15 Position
 20 Moving object
 21, 22 Imaging device
 23, 24 Sequence of image data

Claims (7)

  1.  A position estimation device comprising:
     an image acquisition unit configured to acquire, from a first imaging device provided on a moving object, a sequence of first image data captured at different imaging times, and to acquire, from a second imaging device provided on the moving object and having an imaging direction different from that of the first imaging device, a sequence of second image data captured at different imaging times; and
     a determination unit configured to calculate, from the sequence of first image data, a first movement trajectory indicating a first estimate of a route along which the moving object has moved, calculate, from the sequence of second image data, a second movement trajectory indicating a second estimate of the route along which the moving object has moved, and determine a position of the moving object using the first movement trajectory and the second movement trajectory.
  2.  The position estimation device according to claim 1, wherein:
     the image acquisition unit acquires, from three or more imaging devices including the first imaging device and the second imaging device, three or more sequences of image data including the sequence of first image data and the sequence of second image data; and
     the determination unit calculates, from the three or more sequences of image data, three or more movement trajectories including the first movement trajectory and the second movement trajectory, and determines the position of the moving object based on a comparison among the three or more movement trajectories.
  3.  The position estimation device according to claim 2, wherein the determination unit excludes some of the three or more movement trajectories based on the comparison, and determines the position of the moving object using the movement trajectories that remain without being excluded.
  4.  The position estimation device according to claim 1, wherein:
     the determination unit generates, from the sequence of first image data, first target data indicating a first estimate of a position of a target that is not moving, and generates, from the sequence of second image data, second target data indicating a second estimate of the position of the target; and
     the determination unit compares the first target data with the second target data and further uses a result of the comparison to determine the position of the moving object.
  5.  The position estimation device according to claim 1, wherein a center of a visual field range of the first imaging device and a center of a visual field range of the second imaging device are angularly separated by 90 degrees or more as viewed from the moving object.
  6.  A position estimation method executed by a position estimation device, the method comprising:
     acquiring, from a first imaging device provided on a moving object, a sequence of first image data captured at different imaging times, and acquiring, from a second imaging device provided on the moving object and having an imaging direction different from that of the first imaging device, a sequence of second image data captured at different imaging times;
     calculating, from the sequence of first image data, a first movement trajectory indicating a first estimate of a route along which the moving object has moved, and calculating, from the sequence of second image data, a second movement trajectory indicating a second estimate of the route along which the moving object has moved; and
     determining a position of the moving object using the first movement trajectory and the second movement trajectory.
  7.  A position estimation program that causes a computer to execute a process comprising:
     acquiring, from a first imaging device provided on a moving object, a sequence of first image data captured at different imaging times, and acquiring, from a second imaging device provided on the moving object and having an imaging direction different from that of the first imaging device, a sequence of second image data captured at different imaging times;
     calculating, from the sequence of first image data, a first movement trajectory indicating a first estimate of a route along which the moving object has moved, and calculating, from the sequence of second image data, a second movement trajectory indicating a second estimate of the route along which the moving object has moved; and
     determining a position of the moving object using the first movement trajectory and the second movement trajectory.
PCT/JP2016/052758 2016-01-29 2016-01-29 Position estimation device, position estimation method, and position estimation program WO2017130397A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/052758 WO2017130397A1 (en) 2016-01-29 2016-01-29 Position estimation device, position estimation method, and position estimation program

Publications (1)

Publication Number Publication Date
WO2017130397A1 true WO2017130397A1 (en) 2017-08-03

Family

ID=59397676

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/052758 WO2017130397A1 (en) 2016-01-29 2016-01-29 Position estimation device, position estimation method, and position estimation program

Country Status (1)

Country Link
WO (1) WO2017130397A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007278871A (en) * 2006-04-07 2007-10-25 Technical Research & Development Institute Ministry Of Defence Apparatus for computing amount of movement
WO2014070334A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Using a plurality of sensors for mapping and localization
WO2015169338A1 (en) * 2014-05-05 2015-11-12 Hexagon Technology Center Gmbh Surveying system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107765263A (en) * 2017-10-30 2018-03-06 武汉海达数云技术有限公司 Laser scanning device and traverse measurement system
WO2019085376A1 (en) * 2017-10-30 2019-05-09 武汉海达数云技术有限公司 Laser scanning device and control method thereof, and mobile measurement system and control method thereof
JP6430087B1 (en) * 2018-03-23 2018-11-28 三菱電機株式会社 Route generating apparatus and vehicle control system
WO2019180919A1 (en) * 2018-03-23 2019-09-26 三菱電機株式会社 Route generation device and vehicle control system
CN111868801A (en) * 2018-03-23 2020-10-30 三菱电机株式会社 Route generation device and vehicle control system
JP2019172219A (en) * 2018-03-29 2019-10-10 トヨタ自動車株式会社 Vehicle travel management system
CN111902692A (en) * 2018-09-14 2020-11-06 松下电器(美国)知识产权公司 Determination method and determination device
JP2020068499A (en) * 2018-10-26 2020-04-30 現代自動車株式会社Hyundai Motor Company Vehicle periphery image display system and vehicle periphery image display method
JP7426174B2 (en) 2018-10-26 2024-02-01 現代自動車株式会社 Vehicle surrounding image display system and vehicle surrounding image display method
CN111738047A (en) * 2019-03-25 2020-10-02 本田技研工业株式会社 Self-position estimation method
CN114435470A (en) * 2020-11-05 2022-05-06 长沙智能驾驶研究院有限公司 Automatic reversing control method and device, vehicle and storage medium
CN114435470B (en) * 2020-11-05 2022-11-25 长沙智能驾驶研究院有限公司 Automatic reversing control method and device, vehicle and storage medium

Similar Documents

Publication Publication Date Title
WO2017130397A1 (en) Position estimation device, position estimation method, and position estimation program
EP2948927B1 (en) A method of detecting structural parts of a scene
JP7147119B2 (en) Device and method for autonomous self-localization
TWI695181B (en) Methods and systems for color point cloud generation
JP6595182B2 (en) Systems and methods for mapping, locating, and attitude correction
EP3650814B1 (en) Vision augmented navigation
Geiger et al. Are we ready for autonomous driving? the kitti vision benchmark suite
CN109443348B (en) Underground garage position tracking method based on fusion of look-around vision and inertial navigation
CN108759823B (en) Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
CN110675307A (en) Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM
CN111862673B (en) Parking lot vehicle self-positioning and map construction method based on top view
US20220270358A1 (en) Vehicular sensor system calibration
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
JP6552448B2 (en) Vehicle position detection device, vehicle position detection method, and computer program for vehicle position detection
Song et al. End-to-end learning for inter-vehicle distance and relative velocity estimation in adas with a monocular camera
JP2019056629A (en) Distance estimation device and method
CN112455502A (en) Train positioning method and device based on laser radar
US11567497B1 (en) Systems and methods for perceiving a field around a device
US20190293444A1 (en) Lane level accuracy using vision of roadway lights and particle filter
CN112577499B (en) VSLAM feature map scale recovery method and system
KR20160125803A (en) Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest
CN116804553A (en) Odometer system and method based on event camera/IMU/natural road sign
WO2020113425A1 (en) Systems and methods for constructing high-definition map
KR102506411B1 (en) Method and apparatus for estimation of location and pose on vehicle and record medium for this
JP5557036B2 (en) Exit determination device, exit determination program, and exit determination method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16887983

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16887983

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP