WO2016020970A1 - Self-Position Calculation Device and Self-Position Calculation Method - Google Patents
Self-position calculation device and self-position calculation method
- Publication number
- WO2016020970A1 (PCT/JP2014/070480)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pattern light
- vehicle
- road surface
- image
- self
- Prior art date
Classifications
- G01B 11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
- G01B 11/26: Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
- G01C 21/28: Navigation; navigational instruments specially adapted for navigation in a road network, with correlation of data from several navigational instruments
- G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
- G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
- G08G 1/16: Anti-collision systems
- H04N 7/183: Closed-circuit television [CCTV] systems for receiving images from a single remote source
- G06T 2207/30208: Marker matrix
- G06T 2207/30244: Camera pose
- G06T 2207/30252: Vehicle exterior; vicinity of vehicle
Description
- the present invention relates to a self-position calculation device and a self-position calculation method.
- Conventionally, a technique is known in which an image of the vicinity of a vehicle is captured by a camera mounted on the vehicle and the movement amount of the vehicle is obtained based on changes in the image (see, for example, Patent Document 1).
- In Patent Document 1, so that the movement amount can be obtained accurately even when the vehicle moves at low speed and in small increments, feature points are detected from the image, the positions of the feature points are obtained, and the movement amount of the vehicle is obtained from the moving direction and moving distance (movement amount) of the feature points.
- In addition, a technique for performing three-dimensional measurement using a laser projector that projects lattice-pattern light is known (see, for example, Patent Document 2).
- In Patent Document 2, the pattern light projection area is captured by a camera, the pattern light is detected from the captured image, and the behavior of the vehicle is obtained from the position of the pattern light.
- However, when the pattern light is detected from an image captured while the external environment is bright, it is difficult to distinguish the pattern light from the ambient light. The present invention has been made in view of the above problem, and its object is to provide a self-position calculation device and a self-position calculation method capable of detecting the pattern light projected on the road surface with high accuracy and of calculating the self-position of the vehicle with high accuracy.
- The self-position calculation device projects pattern light from a projector onto the road surface around the vehicle, acquires with an imaging unit an image of the road surface around the vehicle including the area onto which the pattern light is projected, extracts the position of the pattern light from the image acquired by the imaging unit, calculates the posture angle of the vehicle with respect to the road surface from the extracted position of the pattern light, calculates the posture change amount of the vehicle based on the temporal changes of a plurality of feature points on the road surface in the images acquired by the imaging unit, and calculates the current position and posture angle of the vehicle by adding the posture change amount to the initial position and posture angle of the vehicle.
- In doing so, a superimposed image is generated by superimposing the images of successive frames acquired by the imaging unit, and the position of the pattern light is extracted from the superimposed image.
- FIG. 1 is a block diagram showing the overall configuration of the self-position calculation apparatus according to the first embodiment.
- FIG. 2 is an external view showing a mounting example of the projector 11 and the camera 12 on the vehicle 10.
- FIG. 3A is a schematic diagram showing how the position on the road surface 31 irradiated with each spot light is obtained from the base line length Lb between the projector 11 and the camera 12 and the coordinates (Uj, Vj) of each spot light on the image.
- FIG. 3B is a schematic diagram showing how the movement direction 34 of the camera 12 is obtained from the temporal change of the feature points detected in another area 33 different from the area irradiated with the pattern light 32a.
- FIGS. 4A to 4C are diagrams showing images of the pattern light 32a obtained by binarizing the image acquired by the camera 12: FIG. 4A shows the pattern light 32a as a whole, FIG. 4B shows one spot light Sp enlarged, and FIG. 4C shows the position He of the center of gravity of each spot light Sp extracted by the pattern light extraction unit 21.
- FIG. 5 is a schematic diagram for explaining a method of calculating the change amount of the distance and the posture angle.
- FIG. 6A shows an example of the first frame (image) 38 acquired at time t.
- FIG. 6B shows the second frame 38′ acquired at time (t + Δt), after time Δt has elapsed from time t.
- FIG. 7A shows the amount of movement of the vehicle when generating a superimposed image when the external environment is bright.
- FIG. 7B shows a state in which a superimposed image is generated when the external environment is bright.
- FIG. 8A shows the amount of movement of the vehicle when generating a superimposed image when the external environment is dark.
- FIG. 8B shows how a superimposed image is generated when the external environment is dark.
- FIGS. 9A to 9D are timing charts showing changes in the reset flag, the number of images to be superimposed, the feature point detection state, and the number of associated feature points in the self-position calculation device according to the first embodiment.
- FIG. 10 is a flowchart showing an example of a self-position calculation method using the self-position calculation device of FIG. 1.
- FIG. 11 is a flowchart showing a detailed procedure of step S18 of FIG.
- FIG. 12 is a block diagram showing the overall configuration of the self-position calculation apparatus according to the second embodiment.
- FIG. 13 is a diagram for explaining a method for estimating the amount of change in the height of the road surface from the position of the pattern light according to the second embodiment.
- FIGS. 14A to 14E are timing charts showing changes in the reset flag, the number of images to be superimposed, the quality of the road surface condition, and the size of the steps (unevenness) of the road surface in the self-position calculation device according to the second embodiment.
- FIG. 15 is a flowchart illustrating a processing procedure of self-position calculation processing by the self-position calculation apparatus according to the second embodiment.
- FIG. 16 is a flowchart showing a detailed processing procedure of step S28 of FIG. 15 by the self-position calculating apparatus according to the second embodiment.
- FIG. 17 is a block diagram showing the overall configuration of the self-position calculation device according to the third embodiment.
- FIG. 18A and FIG. 18B are timing charts showing changes in luminance and feature point detection flags of the self-position calculation apparatus according to the third embodiment.
- FIG. 19A to FIG. 19C are explanatory views showing pattern light and feature points of the self-position calculating apparatus according to the third embodiment.
- FIGS. 20A to 20D are timing charts showing changes in the reset flag, the timing at which each cycle ends, the number of cycles to be superimposed, and the light projection power in the self-position calculation device according to the third embodiment.
- FIG. 21 is a flowchart illustrating a processing procedure of the self-position calculation apparatus according to the third embodiment.
- the self-position calculating device includes a projector 11, a camera 12, and an engine control unit (ECU) 13.
- the projector 11 is mounted on the vehicle and projects pattern light onto a road surface around the vehicle.
- the camera 12 is an example of an imaging unit that is mounted on a vehicle and captures an image of a road surface around the vehicle including an area where pattern light is projected.
- the ECU 13 is an example of a control unit that controls the projector 11 and executes a series of information processing cycles for estimating the amount of movement of the vehicle from the image acquired by the camera 12.
- The camera 12 is a digital camera using a solid-state imaging element such as a CCD or CMOS sensor, and acquires digital images that can be processed.
- the imaging target of the camera 12 is a road surface around the vehicle, and the road surface around the vehicle includes the road surface of the front, rear, side, and bottom of the vehicle.
- the camera 12 can be mounted on the front portion of the vehicle 10, specifically on the front bumper.
- the height and direction in which the camera 12 is installed are adjusted so that the feature point (texture) on the road surface 31 in front of the vehicle 10 and the pattern light 32b projected by the projector 11 can be imaged.
- The focus and aperture of the lens provided in the camera 12 are also automatically adjusted.
- the camera 12 repeatedly captures images at a predetermined time interval and acquires a series of images (frames).
- the image data acquired by the camera 12 is transferred to the ECU 13 and stored in a memory provided in the ECU 13.
- The projector 11 projects pattern light 32b having a predetermined shape, including a square or rectangular lattice image, toward the road surface 31 within the imaging range of the camera 12.
- The camera 12 images the pattern light projected onto the road surface 31.
- The projector 11 includes, for example, a laser pointer and a diffraction grating. By diffracting the laser beam emitted from the laser pointer with the diffraction grating, the projector 11 generates pattern light (32b, 32a) consisting of a lattice image or of a plurality of spot lights Sp arranged in a matrix, as shown in FIGS. 2 to 4. In the example shown in FIGS. 3 and 4, pattern light 32a consisting of 5 × 7 spot lights Sp is generated.
- the ECU 13 includes a microcontroller including a CPU, a memory, and an input / output unit, and configures a plurality of information processing units included in the self-position calculating device by executing a computer program installed in advance.
- the ECU 13 repeatedly executes a series of information processing cycles for calculating the current position of the vehicle from the image acquired by the camera 12 for each image (frame).
- the ECU 13 may also be used as an ECU used for other controls related to the vehicle 10.
- The plurality of information processing units include a pattern light extraction unit (superimposed image generation unit) 21, a posture angle calculation unit 22, a feature point detection unit 23, a posture change amount calculation unit 24, a luminance determination unit (pattern light detection state determination unit) 25, a self-position calculation unit 26, a pattern light control unit 27, a detection state determination unit 28, and a calculation state determination unit 29.
- the feature point detection unit 23 may be included in the posture change amount calculation unit 24.
- The pattern light extraction unit 21 reads the image acquired by the camera 12 from the memory and extracts the position of the pattern light from the image. As shown in FIG. 3A, for example, the pattern light 32a, composed of a plurality of spot lights arranged in a matrix, is projected from the projector 11 toward the road surface 31, and the pattern light 32a reflected by the road surface 31 is detected by the camera 12. By applying binarization processing to the image acquired by the camera 12, the pattern light extraction unit 21 extracts only the image of the spot lights Sp, as shown in FIGS. 4A and 4B. Then, as shown in FIG. 4C, the pattern light extraction unit 21 extracts the position He of the center of gravity of each spot light Sp, that is, the coordinates (Uj, Vj) of each spot light Sp on the image, as the position of the pattern light 32a.
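- As a concrete illustration of the binarization and centroid extraction described above, the following is a minimal sketch assuming OpenCV; the threshold value and file name are illustrative assumptions, not values from the patent.

```python
import cv2

# Binarize the camera image so that only the bright spot lights S_p remain
gray = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Each connected bright blob is one spot light; its centroid is H_e,
# i.e. the coordinates (U_j, V_j) of that spot on the image
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
spot_coords = centroids[1:]  # label 0 is the background, so skip it
```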
- the posture angle calculation unit 22 reads data indicating the position of the pattern light 32 a from the memory, and calculates the distance and posture angle of the vehicle 10 with respect to the road surface 31 from the position of the pattern light 32 a in the image acquired by the camera 12. For example, as shown in FIG. 3A, from the baseline length Lb between the projector 11 and the camera 12 and the coordinates (U j , V j ) of each spot light on the image, the principle of triangulation is used. The position on the road surface 31 irradiated with each spot light is calculated as a relative position with respect to the camera 12.
- The posture angle calculation unit 22 calculates, from the relative position of each spot light with respect to the camera 12, the plane equation of the road surface 31 on which the pattern light 32a is projected, that is, the distance and posture angle (normal vector) of the camera 12 with respect to the road surface 31.
- In the present embodiment, the distance and posture angle of the camera 12 with respect to the road surface 31 are calculated as an example of the distance and posture angle of the vehicle 10 with respect to the road surface 31.
- Hereinafter, the distance and posture angle of the camera 12 with respect to the road surface 31 are abbreviated as "distance and posture angle".
- the distance and the posture angle calculated by the posture angle calculation unit 22 are stored in the memory.
- That is, using the principle of triangulation, the posture angle calculation unit 22 can obtain the position on the road surface 31 irradiated with each spot light, from the coordinates (Uj, Vj) of each spot light on the image, as a relative position (Xj, Yj, Zj) with respect to the camera 12.
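- As a sketch of this triangulation, one standard realization is to intersect the camera ray through (Uj, Vj) with the calibrated projector ray; the midpoint method below, the intrinsics, and the projector geometry are illustrative assumptions, not the patent's own formulation.

```python
import numpy as np

def spot_position(u_j, v_j, f, proj_origin, proj_dir):
    """Midpoint of the shortest segment between the camera ray through
    (u_j, v_j) and the known projector ray: the spot's relative position."""
    d1 = np.array([u_j, v_j, f], dtype=float)
    d1 /= np.linalg.norm(d1)                    # camera ray (camera at the origin)
    d2 = np.asarray(proj_dir, dtype=float)
    d2 /= np.linalg.norm(d2)                    # projector ray direction
    o2 = np.asarray(proj_origin, dtype=float)   # projector offset by base line Lb
    w0 = -o2                                    # o1 - o2, with o1 = camera origin
    b, d, e = d1 @ d2, d1 @ w0, d2 @ w0
    denom = 1.0 - b * b                         # > 0 unless the rays are parallel
    t1 = (b * e - d) / denom
    t2 = (e - b * d) / denom
    return (t1 * d1 + o2 + t2 * d2) / 2.0       # relative position (X_j, Y_j, Z_j)

# Projector mounted a base line Lb = 0.12 m from the camera (illustrative values);
# recovers approximately the spot at (0.1, 0.5, 1.0) in camera coordinates
p_j = spot_position(80.0, 400.0, 800.0,
                    proj_origin=[0.0, -0.12, 0.0],
                    proj_dir=[0.1, 0.62, 1.0])
```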
- the feature point detection unit 23 reads an image acquired by the camera 12 from the memory, and detects a feature point on the road surface 31 from the image read from the memory.
- As the feature point detection method, the feature point detection unit 23 can use, for example, the methods described in D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91-110, Nov. 2004, or in Y. Kanazawa and K. Kanatani, "Image Feature Extraction for Computer Vision," IEICE Journal, pp. 1043-1048, Dec. 2004.
- the feature point detection unit 23 uses, for example, a Harris operator or a SUSAN operator to detect, as a feature point, a point whose luminance value changes greatly as compared to the surroundings, such as a vertex of an object.
- Alternatively, the feature point detection unit 23 may detect, as feature points, points around which the luminance value changes with a certain regularity, using SIFT (Scale-Invariant Feature Transform) feature amounts.
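- As a sketch of the Harris-operator option mentioned above, assuming OpenCV; the block size, kernel size, k, and response cutoff are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("road_surface.png", cv2.IMREAD_GRAYSCALE)
response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)

# Keep points whose corner response clearly exceeds their surroundings,
# e.g. the vertices of asphalt-mixture grains
threshold = 0.01 * response.max()
feature_points = np.argwhere(response > threshold)  # rows of (V_i, U_i)
```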
- The feature point detection unit 23 counts the total number N of feature points detected from one image and assigns an identification number i (1 ≤ i ≤ N) to each feature point.
- the position (U i , V i ) on the image of each feature point is stored in a memory in the ECU 13.
- FIGS. 6A and 6B show examples of feature points Te detected from the image acquired by the camera 12.
- In the present embodiment, grains of asphalt mixture with a size of 1 cm to 2 cm are mainly assumed as the feature points on the road surface 31.
- the resolution of the camera 12 is VGA (approximately 300,000 pixels).
- the distance of the camera 12 with respect to the road surface 31 is about 70 cm.
- the imaging direction of the camera 12 is inclined toward the road surface 31 by about 45 degrees from the horizontal plane.
- the luminance value when the image acquired by the camera 12 is transferred to the ECU 13 is in the range of 0 to 255 (0: the darkest, 255: the brightest).
- The posture change amount calculation unit 24 reads from the memory the positions (Ui, Vi) on the image of the plurality of feature points included in the previous frame, among the frames imaged in each fixed information processing cycle, and also reads from the memory the positions (Ui, Vi) on the image of the plurality of feature points included in the current frame. Then, the posture change amount of the vehicle is obtained based on the positional changes of the plurality of feature points on the image.
- the “vehicle attitude change amount” includes both the “distance and attitude angle” change amount with respect to the road surface 31 and the “vehicle (camera 12) movement amount” on the road surface.
- FIG. 6A shows an example of the first frame (image) 38 acquired at time t.
- Assume that the relative positions (Xi, Yi, Zi) of the three feature points Te1, Te2, Te3 detected in the first frame 38 have each been calculated.
- the plane G specified by the feature points T e1 , T e2 , and T e3 can be regarded as a road surface. Therefore, the posture change amount calculation unit 24 can obtain the distance and posture angle (normal vector) of the camera 12 with respect to the road surface (plane G) from the relative position (X i , Y i , Z i ).
- Furthermore, using a known camera model, the posture change amount calculation unit 24 can obtain the distances (l1, l2, l3) between the feature points Te1, Te2, Te3 and the angles formed by the straight lines connecting the feature points Te1, Te2, Te3.
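- The plane-fitting step above can be sketched as follows; the example coordinates are illustrative assumptions.

```python
import numpy as np

def plane_from_three_points(p1, p2, p3):
    """Distance and normal vector of the plane G spanned by three feature
    points, with the camera 12 at the coordinate origin."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # normal vector of plane G
    n /= np.linalg.norm(n)
    distance = abs(n @ p1)           # camera-to-road distance
    return distance, n

# Relative positions (X_i, Y_i, Z_i) of T_e1, T_e2, T_e3 (illustrative values);
# here the road plane sits 0.7 m from the camera, as in the text
d, normal = plane_from_three_points([0.1, 0.7, 1.2],
                                    [-0.2, 0.7, 1.5],
                                    [0.3, 0.7, 1.8])
```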
- the camera 12 in FIG. 5 shows the position of the camera in the first frame.
- With respect to the camera 12 in the first frame, the imaging direction of the camera 12 is set as the Z axis, and mutually orthogonal X and Y axes are set within the plane that includes the camera 12 and is normal to the imaging direction. On the image 38, the horizontal and vertical directions are set as the V axis and the U axis, respectively.
- FIG. 6B shows the second frame acquired at time (t + Δt), after time Δt has elapsed from time t.
- the camera 12 ′ in FIG. 5 indicates the position of the camera when the second frame 38 ′ is imaged.
- The camera 12′ at the moved position captures the feature points Te1, Te2, Te3, and the feature point detection unit 23 detects the feature points Te1, Te2, Te3 from the second frame 38′.
- The posture change amount calculation unit 24 can calculate not only the movement amount (ΔL) of the camera 12 during the time Δt but also the change amounts of the distance and posture angle, from the relative positions (Xi, Yi, Zi) of the feature points Te1, Te2, Te3 at time t, their positions P1(Ui, Vi) in the second frame 38′, and the camera model of the camera 12. For example, by solving simultaneous equations composed of the following equations (1) to (4), the posture change amount calculation unit 24 can calculate the movement amount (ΔL) of the camera 12 (vehicle) and the change amounts of the distance and posture angle. Equation (1) models the camera 12 as an ideal pinhole camera free of distortion and optical-axis misalignment, where λi is a constant and f is the focal length. The parameters of the camera model may be calibrated in advance.
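- The patent solves its own simultaneous equations (1) to (4); purely as an off-the-shelf stand-in (not the patent's formulation), a comparable six-degree-of-freedom update can be sketched with OpenCV's PnP solver, which likewise assumes the pinhole model of equation (1). The intrinsics below are illustrative assumptions.

```python
import cv2
import numpy as np

f, cx, cy = 800.0, 320.0, 240.0                 # illustrative pinhole intrinsics
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1]], dtype=np.float64)     # no distortion, per equation (1)

def camera_motion(object_points, image_points):
    """object_points: (N, 3) relative positions (X_i, Y_i, Z_i) at time t.
    image_points: (N, 2) positions (U_i, V_i) in the frame at time t + dt.
    Requires at least four associated feature points."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        K, distCoeffs=None, flags=cv2.SOLVEPNP_EPNP)
    return rvec, tvec   # posture-angle change and movement amount (ΔL)
```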
- FIG. 3B schematically shows how the movement direction 34 of the camera 12 is obtained from the temporal changes of the feature points detected in another area 33, different from the area irradiated with the pattern light 32a, within the imaging range of the camera 12. In FIGS. 6A and 6B, a vector Dte indicating the direction and amount of change of the position of each feature point Te is superimposed on the image.
- the posture change amount calculation unit 24 can simultaneously calculate not only the movement amount ( ⁇ L) of the camera 12 at time ⁇ t but also the change amounts of the distance and the posture angle. Therefore, the posture change amount calculation unit 24 can accurately calculate the movement amount ( ⁇ L) with six degrees of freedom in consideration of the change amount of the distance and the posture angle. That is, even if the distance and the posture angle change due to the roll motion or the pitch motion caused by turning or acceleration / deceleration of the vehicle 10, the estimation error of the movement amount ( ⁇ L) can be suppressed.
- the posture change amount calculation unit 24 may select optimal feature points based on the positional relationship between the feature points, instead of using all the feature points whose relative positions are calculated.
- As the selection procedure, for example, epipolar geometry can be used (R. I. Hartley, "A linear method for reconstruction from lines and points," Proc. 5th International Conference on Computer Vision, Cambridge, Massachusetts, pp. 882-887, 1995).
- To associate feature points between frames, an image of a small area around each detected feature point may be recorded in the memory and the association judged from the similarity of luminance and color information.
- Specifically, the ECU 13 records in the memory an image of 5 × 5 (horizontal × vertical) pixels centered on each detected feature point. If, for example, the luminance information of 20 or more pixels matches within an error of 1% or less, the posture change amount calculation unit 24 judges that the feature points can be associated between the previous and current frames.
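- A minimal sketch of this patch-association test, using the 20-pixel / 1% figures above (function and variable names are illustrative):

```python
import numpy as np

def patches_match(patch_prev, patch_curr, min_pixels=20, tol=0.01):
    """Judge two 5x5 luminance patches to be the same feature point when
    at least 20 pixels agree within a 1% relative error."""
    a = patch_prev.astype(float)
    b = patch_curr.astype(float)
    rel_err = np.abs(a - b) / np.maximum(a, 1.0)   # avoid division by zero
    return np.count_nonzero(rel_err <= tol) >= min_pixels
```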
- the posture change amount calculation unit 24 can calculate a “vehicle posture change amount” based on temporal changes of a plurality of feature points on the road surface.
- The self-position calculation unit 26 calculates the current distance and posture angle from the "change amounts of the distance and posture angle" calculated by the posture change amount calculation unit 24, and calculates the current position of the vehicle from the "movement amount of the vehicle" calculated by the posture change amount calculation unit 24.
- Specifically, with the distance and posture angle calculated by the posture angle calculation unit 22 set as the starting point (distance and posture angle), the distance and posture angle are updated to the latest values by sequentially adding (integral calculation) the change amounts of the distance and posture angle calculated for each frame.
- Further, the vehicle position at the time when the distance and posture angle were calculated by the posture angle calculation unit 22 is set as the starting point (initial position of the vehicle), and the current position of the vehicle is calculated by sequentially adding the movement amount of the vehicle from this initial position (integral calculation).
- For example, by setting a starting point collated with a position on a map, the current position of the vehicle on the map can be sequentially calculated.
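- A minimal sketch of this integral calculation (class and method names are illustrative):

```python
import numpy as np

class SelfPosition:
    """Add per-frame change amounts to the starting point to keep the
    current position, distance and posture angle up to date."""

    def __init__(self, position, distance, attitude):
        self.position = np.asarray(position, dtype=float)  # initial position
        self.distance = float(distance)                    # camera-to-road distance
        self.attitude = np.asarray(attitude, dtype=float)  # posture angle

    def add_frame(self, d_position, d_distance, d_attitude):
        """Add one frame's movement amount and change amounts."""
        self.position += d_position
        self.distance += d_distance
        self.attitude += d_attitude

    def reset_starting_point(self, position, distance, attitude):
        """Set a new starting point, e.g. when the detection state is bad."""
        self.__init__(position, distance, attitude)
```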
- Once the starting point is set, the distance and posture angle can be continuously updated to the latest values, without using the pattern light 32a, by continuing the process of adding the change amounts of the distance and posture angle (integral calculation).
- In the first information processing cycle, a distance and posture angle calculated using the pattern light 32a, or a predetermined initial distance and initial posture angle, may be used. That is, the distance and posture angle serving as the starting point of the integral calculation may be calculated using the pattern light 32a, or predetermined initial values may be used. It is desirable that the predetermined initial distance and initial posture angle be a distance and posture angle that take into account at least the occupants and payload of the vehicle 10.
- For example, the pattern light 32a may be projected and the distance and posture angle calculated from the pattern light 32a may be used as the predetermined initial distance and initial posture angle. Thereby, a distance and posture angle that are not affected by the roll motion or pitch motion caused by turning or acceleration/deceleration of the vehicle 10 can be obtained.
- In the present embodiment, the change amounts of both the distance and the posture angle are calculated and sequentially added to update the distance and posture angle to the latest values; however, only the posture angle of the camera 12 with respect to the road surface 31 may be made the target of change amount calculation and updating, with the distance of the camera 12 to the road surface 31 assumed constant. In that case, the calculation load on the ECU 13 can be reduced and the calculation speed improved, while still suppressing the estimation error of the movement amount (ΔL) by taking the change amount of the posture angle into account.
- The detection state determination unit 28 determines whether the detection state of the feature points Te by the feature point detection unit 23 fails to satisfy a first criterion and is therefore bad. For example, when there are few patterns such as asphalt mixture grains and unevenness on the road surface, as with a concrete road surface in a tunnel, the number of feature points that can be detected from the image decreases. In this case, it is difficult to continuously detect feature points that can be associated between the previous and current frames, and the accuracy of updating the distance and posture angle drops.
- Specifically, when the number of feature points whose relative positions with respect to the camera 12 have been calculated and which can also be detected from the image acquired in the subsequent information processing cycle is equal to or less than a predetermined threshold value (for example, three), the detection state determination unit 28 determines that the feature point detection state does not satisfy the first criterion and is bad. That is, when four or more feature points that can be associated between the previous and current frames cannot be detected, the detection state determination unit 28 determines that the detection state of the feature points Te does not satisfy the first criterion and is bad. The predetermined threshold value is desirably set to four or more, and more desirably five or more.
- When the detection state determination unit 28 determines that the detection state of the feature points satisfies the first criterion, the self-position calculation unit 26 maintains the starting point of the integral calculation. On the other hand, when the detection state of the feature points does not satisfy the first criterion and the detection state determination unit 28 determines that it is bad, the self-position calculation unit 26 sets the distance and posture angle calculated by the posture angle calculation unit 22 (see FIG. 1) in the same information processing cycle, and the vehicle position at that time, as a new starting point (posture angle and initial position of the vehicle), and starts adding the vehicle posture change amount from that starting point.
- In the present embodiment, the detection state determination unit 28 determines the detection state of the feature points based on the number of feature points that can be associated between the previous and current frames; however, the feature point detection state may instead be determined based on the total number N of feature points detected from one image. That is, when the total number N of feature points is equal to or less than a predetermined threshold value (for example, nine), it may be determined that the feature point detection state does not satisfy the first criterion and is bad. Considering that some of the detected feature points cannot be associated, a number (nine) that is three times the predetermined threshold value (three) may be set as the threshold value.
- The calculation state determination unit 29 determines whether the calculation state of the distance and posture angle by the posture angle calculation unit 22 fails to satisfy a second criterion and is therefore bad. For example, when the pattern light is projected onto a step on the road surface 31, the calculation accuracy of the distance and posture angle drops significantly, since the step on the road surface 31 is larger than the unevenness of the asphalt. If the detection state of the feature points is worse than the first criterion and, in addition, the calculation state of the distance and posture angle does not satisfy the second criterion and is bad, no means remains for accurately obtaining the distance and posture angle and their change amounts.
- For example, when fewer than three of the 35 spot lights can be detected, the plane equation of the road surface 31 cannot be obtained in principle, and it is therefore determined that the calculation state of the distance and posture angle by the posture angle calculation unit 22 does not satisfy the second criterion and is bad.
- When the detection state determination unit 28 determines that the detection state of the feature points does not satisfy the first criterion and is bad, and in addition the calculation state determination unit 29 determines that the calculation state of the distance and posture angle by the posture angle calculation unit 22 does not satisfy the second criterion and is bad, the self-position calculation unit 26 uses the distance and posture angle of the previous information processing cycle, and the current position of the vehicle, as the starting point of the integral calculation. Thereby, the calculation error of the movement amount of the vehicle can be suppressed.
- The pattern light control unit 27 controls the projection of the pattern light 32a by the projector 11. For example, when the ignition switch of the vehicle 10 is turned on and the self-position calculation device is activated, the pattern light control unit 27 starts projecting the pattern light 32a. Thereafter, the pattern light control unit 27 continues projecting the pattern light 32a until the self-position calculation device stops. Alternatively, the projection may be switched on and off at predetermined time intervals, or the pattern light 32a may be projected only temporarily, when the detection state determination unit 28 determines that the detection state of the feature points Te does not satisfy the first criterion and is bad.
- the brightness determination unit (pattern light detection state determination unit) 25 determines whether or not the detection state of the pattern light acquired by the camera 12 is equal to or greater than a predetermined threshold value. For example, the luminance determination unit 25 determines whether or not the average luminance value (pattern light detection state) of the image acquired by the camera 12 is equal to or greater than the road surface luminance threshold Bth (predetermined threshold). Note that the luminance determination unit 25 may determine whether the illuminance of the image acquired by the camera 12 is greater than or equal to a threshold value. An illuminance sensor may be mounted on the vehicle instead of the luminance determination unit 25.
- the road surface brightness threshold Bth can be obtained in advance by the following procedure, for example.
- Specifically, the luminance of the asphalt road surface in the image is made substantially uniform, and imaging is performed with the light environment adjusted so that 95% of the pixels in the portion where the pattern light does not appear fall within ±20 of the average luminance value.
- the luminance value of the image acquired by the camera 12 is in the range of 0 to 255 (0: darkest, 255: brightest).
- the luminance average value Bp of the pixel irradiated with the pattern light is compared with the luminance average value Ba of the pixel on which the other asphalt road surface is reflected.
- This series of processes is repeated while the average luminance value Ba of the portion showing the asphalt road surface is increased from 0 in increments of 10, and the average luminance value Ba at which the margin of Bp over Ba falls to 0.7 × Bp (that is, Ba reaches about 30% of Bp) is calculated as the road surface luminance threshold Bth.
- In other words, the luminance value at which the luminance of the asphalt road surface is about 30% of the luminance of the pattern light is set as the road surface luminance threshold Bth.
- When the luminance determination unit 25 determines that the detection state of the pattern light is equal to or greater than the predetermined threshold value, the pattern light extraction unit 21 generates a superimposed image by superimposing a predetermined number of images of the preceding and current frames acquired by the camera 12.
- Here, a case is described in which the images of previous frames accumulated in the memory before the present time are used; however, images acquired by the camera 12 after the present time may also be included.
- the pattern light extraction unit 21 further extracts the position of the pattern light from the generated superimposed image.
- the attitude angle calculation unit 22 can calculate the attitude angle of the vehicle with respect to the road surface from the position of the pattern light extracted from the superimposed image.
- Then, the self-position calculation unit 26 can set the current position of the vehicle at that time and the posture angle calculated from the superimposed image as the initial position and posture angle (starting point) of the vehicle, and start adding the posture change amount from there.
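- A minimal sketch of the superimposition itself, i.e. the per-pixel summation of luminance values (the frame contents here are synthetic stand-ins):

```python
import numpy as np

def superimpose(frames):
    """Sum the luminance values of the given frames pixel by pixel; the
    pattern light accumulates across frames, while it would stay buried
    in the ambient light in any single frame."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for frame in frames:
        acc += frame
    return acc

# e.g. the current frame plus the one from the previous cycle
rng = np.random.default_rng(0)
frame_t = rng.integers(0, 60, (480, 640)).astype(np.float64)
frame_t_minus_1 = rng.integers(0, 60, (480, 640)).astype(np.float64)
superimposed = superimpose([frame_t, frame_t_minus_1])
```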
- the pattern light extraction unit 21 sets the number of predetermined images according to the detection state of the pattern light in the image acquired by the camera 12.
- the detection state of the pattern light is, for example, a ratio (S / N ratio) of the luminance value of the pattern light to the luminance value of the ambient light.
- the pattern light extraction unit 21 increases the number of images to be superimposed as the S / N ratio is smaller (the external environment is brighter).
- As shown in FIGS. 7A and 7B, when the external environment is bright, the number of necessary images is relatively large, so the movement amount of the vehicle 10 while the image group is acquired is relatively large, and a superimposed image I2 is generated by superimposing a relatively large number of images I1.
- As shown in FIGS. 8A and 8B, when the external environment is relatively dark, the number of necessary images is relatively small, so the movement amount of the vehicle 10 for acquiring the image group is relatively small, and a superimposed image I2 is generated by superimposing a relatively small number of images I1.
- The number of images to be superimposed that the pattern light extraction unit 21 requires for pattern light extraction can be set, for example, by the following procedure.
- First, the ratio Rap of the average luminance value Ba of the asphalt road surface to the average luminance value Bp of the pattern light (each average rounded to the nearest ten) is obtained from the image acquired by the camera 12. The number of images Sn required for pattern light extraction is then set by rounding the value given by equation (5) up to the next whole number:
- Sn = min{0.5 / (1 − Rap)², 50} … (5)
- For example, when the average luminance value Ba of the asphalt road surface is about 29% or less of the average luminance value Bp of the pattern light, one image suffices; when it is 75%, eight images; and when it is 90% or more, 50 images. Since the portion irradiated with the pattern light is sufficiently small relative to the entire image, the average luminance value may be calculated from the entire image. Alternatively, in an experiment in which the average luminance Ba is actually increased in increments of 10 and each Rap is obtained, the number of images may be set to the number at which the spot light extraction success rate of the pattern light actually reaches 95% or more.
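- A sketch of equation (5) in code, reproducing the worked examples above (the min form and the rounding are read off those examples):

```python
import math

def frames_to_superimpose(ba, bp, cap=50):
    """Equation (5): number of images S_n required for pattern light
    extraction, from the luminance ratio Rap = Ba / Bp."""
    rap = round(ba / bp, 2)            # road-surface vs. pattern-light luminance
    if rap >= 1.0:                     # pattern light not brighter at all
        return cap
    sn = 0.5 / (1.0 - rap) ** 2        # grows as Ba approaches Bp
    return min(math.ceil(sn), cap)     # rounded up, capped at 50 images

assert frames_to_superimpose(29, 100) == 1   # Ba = 29% of Bp
assert frames_to_superimpose(75, 100) == 8   # Ba = 75% of Bp
assert frames_to_superimpose(90, 100) == 50  # Ba = 90% of Bp
```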
- When a superimposed image cannot be generated, the distance and posture angle adopted in the previous information processing cycle (hereinafter simply the "previous value"), or the predetermined initial distance and initial posture angle of the vehicle described above (simply the "initial value"), are used as the starting point of the integral calculation.
- First, the S/N ratio may be too small (the environment too bright), making the required integration time too long. In the present embodiment, when the ratio Rap is 0.95 or more, that is, when the average luminance of the asphalt road surface is roughly 90% or more of the average luminance of the pattern light, creating a superimposed image would take too long and the assumption that the road surface does not change in the meantime would no longer hold, or the extraction of the pattern light itself is judged to be difficult in principle; the previous value or the initial value is therefore used as the starting point.
- Second, the change in the road surface may be too large. For example, while the set number of images for superimposition is being captured, it is determined whether the road surface condition around the vehicle has changed by a threshold value or more. When the proportion of images in which the road surface condition around the vehicle has changed by the threshold value or more exceeds 5%, it is determined that the assumption of small road surface change does not hold, and the previous value or the initial value is used as the starting point. The details of the determination of the change in the road surface state will be described in the second embodiment.
- FIGS. 9A to 9D show changes in the reset flag, the number of images to be superimposed, the feature point detection state, and the number of associated feature points in the self-position calculation device according to the first embodiment.
- At time t11, the number of associated feature points falls below three as shown in FIG. 9D, and it is determined that the feature point detection state is bad as shown in FIG. 9C. Accordingly, the reset flag is set to "1" as shown in FIG. 9A.
- At this time, the pattern light extraction unit 21 calculates the number of images to be superimposed from the average luminance value of the image of the camera 12 as one (that is, no superimposition is performed), and extracts the pattern light from the single image acquired in the information processing cycle at the current time t11.
- At time t12, the pattern light extraction unit 21 sets the number of images to be superimposed to two from the average luminance value of the images of the camera 12. The pattern light extraction unit 21 then generates a superimposed image by superimposing two images (summing their luminance values): the image acquired by the camera 12 at time t12 and the image acquired by the camera 12 in the previous information processing cycle. The pattern light extraction unit 21 then extracts the pattern light from this superimposed image.
- the pattern light control unit 27 controls the projector 11 to project the pattern light 32a onto the road surface 31 around the vehicle.
- the pattern light 32a is continuously projected.
- In step S11, the ECU 13 controls the camera 12 to image the road surface 31 around the vehicle, including the area onto which the pattern light 32a is projected, and acquires an image 38.
- the ECU 13 stores the image data acquired by the camera 12 in a memory.
- The ECU 13 can automatically control the aperture of the camera 12. For example, the diaphragm of the camera 12 may be feedback-controlled so that the average luminance of the image 38 acquired in the previous information processing cycle becomes an intermediate value between the maximum and minimum luminance values. Since the luminance value is high in the area onto which the pattern light 32a is projected, the average luminance value may be obtained from an area excluding the portion where the pattern light is extracted.
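- One feedback step of this diaphragm control might look as follows; the gain, target, and f-number handling are illustrative assumptions, not values from the patent:

```python
def next_aperture(f_number, mean_luminance, target=127.5, gain=0.005):
    """Steer the average luminance of the previous frame toward the midpoint
    of the 0-255 range; a larger f-number admits less light, so a frame
    that is too bright increases the f-number."""
    error = mean_luminance - target      # positive when the image is too bright
    return max(f_number * (1.0 + gain * error), 1.0)
```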
- In step S12, the luminance determination unit 25 reads the image 38 acquired by the camera 12 from the memory and determines whether the average luminance of the image is equal to or greater than the road surface luminance threshold Bth. When it is determined that the average luminance of the image is less than the road surface luminance threshold Bth, the process proceeds to step S15.
- In step S15, the pattern light extraction unit 21 first reads the image 38 acquired by the camera 12 from the memory and, as shown in FIG. 4, extracts the position of the pattern light 32a from the image 38. The pattern light extraction unit 21 stores the coordinates (Uj, Vj) of the spot lights Sp on the image, calculated as data indicating the position of the pattern light 32a, in the memory.
- On the other hand, if it is determined in step S12 that the average luminance of the image is equal to or greater than the road surface luminance threshold Bth, the process proceeds to step S13.
- In step S13, the pattern light extraction unit 21 sets the number of images (number of frames) required for pattern light extraction from the average luminance of the image acquired by the camera 12.
- In step S14, the pattern light extraction unit 21 reads the images 38 acquired by the camera 12 from the memory and generates a superimposed image by superimposing the set number of preceding and current frames (adding their luminance values). Furthermore, the pattern light extraction unit 21 extracts the position of the pattern light 32a from the generated superimposed image and stores the coordinates (Uj, Vj) of the spot lights Sp on the image, calculated as data indicating the position of the pattern light 32a, in the memory.
- In step S16, the posture angle calculation unit 22 reads the data indicating the position of the pattern light 32a extracted in step S14 or step S15 from the memory, calculates the distance and posture angle from the position of the pattern light 32a, and stores them in the memory.
- In step S17, the ECU 13 detects feature points from the image 38, extracts the feature points that can be associated between the preceding and current information processing cycles, and calculates the change amounts of the distance and posture angle and the movement amount of the vehicle from the positions (Ui, Vi) of the feature points on the image.
- the feature point detection unit 23 reads an image 38 acquired by the camera 12 from the memory, detects a feature point on the road surface 31 from the image 38, and positions each feature point on the image (U i , V i ) is stored in memory.
- the posture change amount calculation unit 24 reads the position (U i , V i ) of each feature point on the image from the memory, and based on the distance and posture angle and the position (U i , V i ) of the feature point on the image. Then, the relative position (X i , Y i , Z i ) of the feature point with respect to the camera 12 is calculated.
- the posture change amount calculation unit 24 uses the distance and posture angle set in step S16 of the previous information processing cycle.
- the posture change amount calculation unit 24 stores the relative positions (X i , Y i , Z i ) of the feature points with respect to the camera 12 in the memory.
- The posture change amount calculation unit 24 then reads from the memory the positions (Ui, Vi) of the feature points on the image and the relative positions (Xi, Yi, Zi) of the feature points calculated in step S17 of the previous information processing cycle.
- Using the relative positions (Xi, Yi, Zi) and the image positions (Ui, Vi) of the feature points that can be associated between the preceding and current information processing cycles, the posture change amount calculation unit 24 calculates the change amounts of the distance and posture angle.
- Furthermore, the posture change amount calculation unit 24 calculates the movement amount of the vehicle from the relative positions (Xi, Yi, Zi) of the feature points in the previous processing cycle and the relative positions (Xi, Yi, Zi) of the feature points in the current processing cycle.
- the “amount of change in distance and posture angle” and “amount of movement of the vehicle” calculated in step S17 are used in the process of step S19.
- In step S18, the ECU 13 sets the starting point of the integral calculation according to the detection state of the feature points and the calculation state of the distance and posture angle based on the pattern light. Details will be described later with reference to FIG. 11.
- In step S19, the self-position calculation unit 26 calculates the current position of the vehicle from the starting point of the integral calculation set in step S18 and the movement amount of the vehicle calculated in step S17.
- the self-position calculation device can calculate the current position of the vehicle 10 by repeatedly executing the above-described series of information processing cycles and integrating the movement amount of the vehicle 10.
- In step S100 of FIG. 11, the ECU 13 determines whether the current information processing cycle is the first one. If it is the first, that is, if there is no data from a previous information processing cycle, the process proceeds to step S104; otherwise, the process proceeds to step S101.
- In step S101, the detection state determination unit 28 determines whether the detection state of the feature points Te by the feature point detection unit 23 fails to satisfy the first criterion and is bad. If it is determined to be bad (YES in step S101), the process proceeds to step S102; if it is determined not to be bad (NO in step S101), the process proceeds to step S106. In step S106, the ECU 13 maintains the currently set starting point of the integral calculation.
- In step S102, the ECU 13 determines whether a superimposed image cannot be generated. Cases where a superimposed image cannot be generated include, for example, the case where acquiring the predetermined number of images takes time because images to be acquired in the future are included, so that the superimposed image is still being generated, and the case where generating a superimposed image is impossible or difficult in principle. If it is determined that a superimposed image cannot be generated (YES in step S102), the process proceeds to step S103; if it is determined that a superimposed image can be generated (NO in step S102), the process proceeds to step S104.
- In step S103, the calculation state determination unit 29 determines whether the calculation state of the distance and posture angle by the posture angle calculation unit 22 fails to satisfy the second criterion and is bad; for example, it is determined whether the calculation of the distance and posture angle has succeeded. If it is determined to have succeeded (YES in step S103), the process proceeds to step S104; if not (NO in step S103), the process proceeds to step S105.
- In step S104, the ECU 13 sets the current position of the vehicle as the starting point and further sets the distance and posture angle calculated in step S16 of the same information processing cycle as the starting point of the integral calculation. A new integral calculation of the distance and posture angle is started from this starting point, and the integral calculation of the movement amount of the vehicle is newly started from the current position of the vehicle.
- In step S105, the ECU 13 sets the current position of the vehicle as the starting point and further sets the distance and posture angle adopted in the previous information processing cycle as the starting point of the integral calculation. A new integral calculation of the distance and posture angle is started from this starting point, and the integral calculation of the movement amount of the vehicle is newly started from the current position of the vehicle. Thereafter, the process proceeds to step S19 in FIG. 10.
- As described above, according to the present embodiment, a superimposed image is generated by superimposing images and the pattern light is extracted from the superimposed image, so that the pattern light projected on the road surface can be detected accurately even when the external environment is bright, and the self-position of the vehicle can be calculated with high accuracy.
- Furthermore, the pattern light extraction unit 21 sets the number of images required for generating a superimposed image according to the detection state of the pattern light, such as the average luminance of the image acquired by the camera 12. Accordingly, the luminance value of the pattern light to be detected can be adjusted, and the pattern light can be detected with high accuracy.
- Moreover, when a superimposed image cannot be generated, an error in the self-position calculation can be suppressed by using the distance and posture angle adopted in the previous information processing cycle, or the predetermined initial distance and initial posture angle, as the starting point.
- the ECU 13 may set a predetermined initial distance and initial attitude angle as the starting point of the integration calculation instead of the distance and attitude angle employed in the previous information processing cycle.
- That is, when the detection state determination unit 28 determines that the detection state of the feature points does not satisfy the first criterion and is bad, and in addition the calculation state of the distance and posture angle by the posture angle calculation unit 22 does not satisfy the second criterion, the self-position calculation unit 26 may set a predetermined initial distance and initial posture angle, which take into account at least the occupants and payload of the vehicle 10, as the starting point of the integral calculation.
- For example, the distance and posture angle calculated in step S15 of the information processing cycle immediately after the self-position calculation device is started can be used. Thereby, the distance and posture angle can be updated, and the movement amount calculated, starting from a distance and posture angle obtained when no roll motion or pitch motion due to turning or acceleration/deceleration of the vehicle 10 has occurred.
- As shown in FIG. 12, the self-position calculation device according to the second embodiment differs from the first embodiment in that it includes a road surface state determination unit 30 instead of the detection state determination unit 28 and the calculation state determination unit 29. The other configurations are the same as those of the first embodiment, and their description is therefore omitted.
- The road surface state determination unit 30 detects changes in the road surface state around the vehicle and determines whether the road surface state has changed by a threshold value or more. When it is determined that the road surface state has changed by the threshold value or more, the self-position calculation unit 26 fixes the current position of the vehicle 10, the distance to the road surface, and the posture angle to the values calculated in the previous information processing cycle, and the posture angle calculation unit 22 stops calculating the distance and posture angle of the vehicle 10 with respect to the road surface.
- The self-position calculation unit 26 then calculates the current position of the vehicle, the distance to the road surface, and the posture angle by adding the posture change amount to the current position of the vehicle 10, the distance to the road surface, and the posture angle calculated in the previous information processing cycle.
- In the present embodiment, 35 (5 × 7) spot lights of the pattern light 32a are projected onto the road surface. Therefore, when, for example, only 80% or less of the 35 spot lights, that is, 28 or fewer, can be detected on the image of the camera 12, the road surface state determination unit 30 determines that the steps and unevenness of the road surface have become severe and that the road surface state has changed by the threshold value or more.
- the change in the road surface condition may be estimated from the amount of change in the road surface height.
- the amount of change in the height of the road surface can be detected from the vibration of the detection value of the stroke sensor attached to the suspension of each wheel of the vehicle. For example, when the vibration of the detection value of the stroke sensor becomes 1 Hz or more, the road surface state determination unit 30 estimates that the road surface level difference or unevenness has become severe, and determines that the road surface state has changed by more than a threshold value.
- Alternatively, the detection value of an acceleration sensor that measures the acceleration in the vertical direction may be integrated to calculate the speed in the vertical direction, and when the direction of that speed changes at 1 Hz or more, the road surface state determination unit 30 may determine that the road surface steps and unevenness have become severe and that the road surface state has changed by the threshold value or more.
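- A minimal sketch of this accelerometer-based check (sampling interval and input format are illustrative assumptions):

```python
import numpy as np

def road_state_changed(accel_z, dt, freq_threshold=1.0):
    """Integrate a vertical-acceleration trace to a vertical speed and
    check whether the speed reverses direction at 1 Hz or more."""
    velocity = np.cumsum(np.asarray(accel_z, dtype=float)) * dt  # simple integration
    signs = np.sign(velocity)
    reversals = np.count_nonzero(np.diff(signs[signs != 0]))     # direction changes
    duration = dt * len(accel_z)
    return (reversals / 2.0) / duration >= freq_threshold        # ~cycles per second
```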
- the amount of change in the height of the road surface may be estimated from the position of the pattern light 32a in the image captured by the camera 12.
- For example, the pattern light 32a shown in FIG. 13 is projected onto the road surface 31, and a line segment 71 connecting the spot lights of the pattern light 32a in the X direction and a line segment 73 connecting them in the Y direction are drawn. When, as in FIG. 13, these line segments bend by a certain amount or more, the road surface state determination unit 30 estimates that the road surface steps and unevenness have become severe and determines that the road surface state has changed by the threshold value or more. Alternatively, when the difference between the distances d1 and d2 between adjacent spot lights changes by 50% or more, the road surface state determination unit 30 may determine that the road surface state has changed by the threshold value or more.
- When it is determined that the road surface state has changed by the threshold value or more, the self-position calculation unit 26 fixes the starting point to the current position, distance to the road surface, and attitude angle of the vehicle 10 calculated in the previous information processing cycle. The posture angle calculation unit 22 therefore stops calculating the distance and posture angle of the vehicle 10 with respect to the road surface, and the self-position calculation unit 26 calculates the current position, distance to the road surface, and posture angle of the vehicle 10 by adding the posture change amount to the current position, distance to the road surface, and posture angle calculated in the previous information processing cycle.
- Specifically, the road surface state determination unit 30 monitors the number of detected spot lights and sets the threshold value to 28, which is 80% of the 35 spot lights.
- The road surface state determination unit 30 sets the posture angle calculation flag to "1" while more than 28 spot lights can be detected.
- While the flag is "1", the posture angle calculation unit 22 calculates the distance and posture angle of the vehicle 10 with respect to the road surface, the self-position calculation unit 26 calculates the current distance and posture angle using the distance and posture angle calculated by the posture angle calculation unit 22, and the current position of the vehicle is calculated by adding the movement amount of the vehicle to the current position of the vehicle 10 calculated in the previous information processing cycle (continuing the integration calculation).
- When 28 or fewer spot lights can be detected, the self-position calculation unit 26 switches the posture angle calculation flag to "0".
- In this case, the current position of the vehicle 10, the distance to the road surface, and the posture angle are fixed to those calculated in the previous information processing cycle, and the posture angle calculation unit 22 stops calculating the distance and posture angle.
- The self-position calculation unit 26 then calculates the current position, distance to the road surface, and posture angle of the vehicle by adding the posture change amount to the current position, distance to the road surface, and posture angle of the vehicle 10 calculated in the previous information processing cycle.
- When the flag returns to "1", the self-position calculation unit 26 again calculates the current distance and posture angle of the vehicle 10 using the distance and posture angle calculated by the posture angle calculation unit 22.
- As described above, when the road surface state changes greatly, the self-position calculation device uses the current position, distance to the road surface, and attitude angle of the vehicle 10 calculated in the previous information processing cycle as the starting point; thus, even when the road surface state changes greatly, the self-position of the vehicle 10 can be calculated accurately and stably.
- When the detection state of the pattern light is equal to or greater than a predetermined threshold, the pattern light extraction unit 21 generates a superimposed image by superimposing a predetermined number of images.
- the pattern light extraction unit 21 extracts pattern light from the generated superimposed image.
- During generation of the superimposed image, the previous value or the initial value is used as the starting point. For example, while the set number of images for the superimposed image is being captured, the road surface state determination unit 30 determines for each image whether the road surface state around the vehicle has changed by the threshold value or more. When the proportion of images determined to have changed exceeds 5%, it is concluded that the assumption that the road surface change is small no longer holds, and the previous value or the initial value is used as the starting point.
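- A minimal sketch of this 5% fallback rule follows; the per-image flags are assumed to come from the road surface state determination described above.

```python
def use_previous_or_initial_start(road_changed_flags) -> bool:
    """road_changed_flags: one bool per image captured while collecting the
    frames for the superimposed image (True = road surface state judged to
    have changed by the threshold or more for that image)."""
    # When more than 5% of the collected images indicate a change, the
    # assumption of a small road surface change no longer holds, so fall
    # back to the previous value or the initial value as the starting point.
    return sum(road_changed_flags) / len(road_changed_flags) > 0.05
```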
- FIGS. 14(a) to 14(e) show, for the self-position calculation device according to the second embodiment, the reset flag, the number of images to be superimposed, the quality of the road surface condition, and the changes in the road surface level differences and unevenness.
- As shown in FIG. 14(b), whether or not to reset is determined at a predetermined cycle.
- Here, the predetermined cycle is 10 frames, but it may instead be set to, for example, 10 seconds.
- When the reset timing of the predetermined cycle arrives as shown in FIG. 14(b) and the road surface condition is determined to be good as shown in FIG. 14(d), the reset flag is set to "1".
- For example, at time t23, the pattern light extraction unit 21 sets the number of images to be superimposed to three based on the average luminance value of the images of the camera 12. The pattern light extraction unit 21 then generates a superimposed image by superimposing the image acquired by the camera 12 at time t23 and the two images acquired in the preceding two frames, and extracts the pattern light from the generated superimposed image.
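- The following sketch shows one way this superposition could look; the mapping from average luminance to a frame count is an assumption for illustration (the description only states that the count is set from the average luminance, three frames in the example at time t23).

```python
import numpy as np

def images_to_superimpose(mean_luminance: float) -> int:
    # Illustrative mapping: brighter scenes need more superimposed frames
    # for the pattern light to stand out. The breakpoints are assumptions.
    if mean_luminance < 64:
        return 1
    if mean_luminance < 128:
        return 2
    return 3

def superimpose(frames):
    """Add the luminance values of the given frames pixel by pixel."""
    return np.sum(np.asarray(frames, dtype=np.float64), axis=0)
```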
- In step S28, the ECU 13 sets the starting point of the integration calculation for calculating the self-position according to the change in the road surface state around the vehicle.
- the detailed procedure of step S28 in FIG. 15 will be described with reference to the flowchart in FIG.
- In step S201, the road surface state determination unit 30 determines whether or not the predetermined cycle has elapsed. The predetermined cycle may be set to an appropriate interval, for example every 10 frames or every 10 seconds, as described above.
- Specifically, the road surface state determination unit 30 monitors whether or not a cycle count pulse has occurred. If it has, the unit determines that the predetermined cycle has elapsed and the process proceeds to step S202; if it has not, the unit determines that the predetermined cycle has not elapsed and the process proceeds to step S205.
- In step S202, the road surface state determination unit 30 detects changes in the road surface state around the vehicle. Specifically, the road surface state determination unit 30 detects the number of spot lights of the pattern light 32a, or detects oscillations in the detection value of the stroke sensor attached to each wheel. The road surface state determination unit 30 may also integrate the detection value of an acceleration sensor capable of measuring the vertical acceleration of the vehicle to calculate the vertical speed, or may detect the position of the pattern light 32a.
- Next, the road surface state determination unit 30 determines whether or not the road surface state around the vehicle has changed by the threshold value or more. For example, when detecting the number of spot lights of the pattern light 32a, the road surface state determination unit 30 determines that the road surface level differences and unevenness have become severe and that the road surface state has changed by the threshold value or more when only 28 or fewer of the 35 spot lights can be detected in the camera image.
- When using the stroke sensor, the road surface state determination unit 30 determines that the road surface state has changed by the threshold value or more when its detection value oscillates at 1 Hz or more. Furthermore, when using the acceleration sensor, the detection value is integrated to calculate the vertical speed, and the road surface state determination unit 30 determines that the road surface state has changed by the threshold value or more when the direction of this speed reverses at 1 Hz or more.
- When detecting the position of the pattern light 32a, the road surface state determination unit 30 determines that the road surface state has changed by the threshold value or more when the slope of a line segment connecting the spot lights has changed by 15 degrees or more.
- Alternatively, the road surface state determination unit 30 may determine that the road surface state has changed by the threshold value or more when the difference between the intervals of adjacent spot lights has changed by 50% or more.
- In step S203, it is determined whether or not the road surface state around the vehicle has changed by the threshold value or more. When the road surface state determination unit 30 determines that the road surface state has changed by the threshold value or more (YES in step S203), the process proceeds to step S204. When it determines that the change is less than the threshold value (NO in step S203), the process proceeds to step S205.
- In step S204, the self-position calculation unit 26 fixes the current position of the vehicle 10, the distance to the road surface, and the attitude angle to the current position, distance to the road surface, and attitude angle calculated in the previous information processing cycle. That is, the self-position calculation unit 26 sets the current position, distance to the road surface, and attitude angle of the vehicle 10 calculated in the previous information processing cycle as the starting point of the integration calculation.
- Thereafter, the attitude angle calculation unit 22 stops calculating the distance and attitude angle of the vehicle 10 with respect to the road surface, and the self-position calculation unit 26 calculates the current position, distance to the road surface, and attitude angle of the vehicle 10 by adding the posture change amount to the current position, distance to the road surface, and attitude angle calculated in the previous information processing cycle.
- In step S205, the self-position calculation unit 26 sets the current position, distance to the road surface, and attitude angle of the vehicle 10 calculated in step S15 of the current information processing cycle as the starting point of the integration calculation. The self-position calculation unit 26 thus calculates the current position, distance to the road surface, and attitude angle of the vehicle 10 by adding the posture change amount to those calculated in the current information processing cycle.
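- The flow of steps S201 to S205 can be summarized by the sketch below; `road_state_changed` stands in for the determinations of steps S202 and S203 and is an assumed helper.

```python
def set_integration_start_point(cycle_pulse, observation, prev_state, curr_state):
    """Sketch of steps S201-S205.

    cycle_pulse:  True when the cycle count pulse occurred (S201)
    observation:  road surface measurements for this cycle (S202)
    prev_state:   position / distance / attitude angle of the previous cycle
    curr_state:   position / distance / attitude angle of the current cycle
    Returns the starting point of the integration calculation.
    """
    if cycle_pulse and road_state_changed(observation):  # S201 -> S202, S203
        return prev_state                                # S204: fix to previous values
    return curr_state                                    # S205: use current values
```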
- Step S28 then ends, and the process proceeds to step S29 of FIG. 15.
- As described above, according to the second embodiment, when the detection state of the pattern light is equal to or greater than the threshold, the pattern light extraction unit 21 superimposes images across frames to generate a superimposed image. The pattern light projected on the road surface can thus be detected with high accuracy even when the external environment is bright, and the self-position of the vehicle can be calculated with high accuracy.
- Furthermore, the pattern light extraction unit 21 sets the number of images necessary for generating the superimposed image according to the pattern detection state, such as the average luminance of the images acquired by the camera 12. The luminance value of the pattern light to be detected can thus be adjusted to the brightness of the external environment, and the pattern light can be detected with high accuracy.
- In the third embodiment, the pattern light control unit 27 starts projecting the pattern light 32a, whose luminance changes periodically, and thereafter projects the pattern light 32a continuously until the self-position calculation device stops. Alternatively, the pattern light 32a may be projected only as necessary.
- Specifically, the power supplied to the projector 11 is controlled so that the luminance of the projection pattern varies sinusoidally at a predetermined frequency.
- The pattern light extraction unit 21 reads the image acquired by the camera 12 from the memory and extracts the pattern light 32a contained in the image by performing synchronous detection processing at the above-mentioned predetermined frequency.
- The synchronous detection process is as follows. Let a component of the measurement signal contained in the image captured by the camera 12 be sin(ω0·t + α); besides the projection pattern, the measurement signal contains sunlight and artificial light with various frequency components. Multiplying the measurement signal sin(ω0·t + α) by a reference signal sin(ωr·t + β), whose frequency is the modulation frequency ωr, yields cos((ω0 − ωr)·t + α − β)/2 − cos((ω0 + ωr)·t + α + β)/2.
- Averaging this product over time (a low-pass operation) removes every component with ω0 ≠ ωr, that is, sunlight and artificial light whose frequency is not ωr, while the component of the projection pattern with ω0 = ωr remains.
- In other words, the pattern light control unit 27 modulates the luminance of the pattern light 32a at the preset modulation frequency ωr, so that a projection pattern whose luminance is modulated at the frequency ωr is projected onto the road surface. The posture angle calculation unit 22 can then extract only the projection pattern by multiplying the image (measurement signal) captured by the camera 12 by the reference signal of modulation frequency ωr.
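- A minimal numerical sketch of this synchronous detection follows. It assumes the reference signal is in phase with the projected pattern; a practical implementation would typically also use the quadrature component, and the interfaces shown are illustrative.

```python
import numpy as np

def synchronous_detection(frames, f_mod, fps):
    """Extract the image component modulated at f_mod from a frame stack.

    frames: array of shape (N, H, W) spanning at least one modulation period
    f_mod:  luminance modulation frequency of the pattern light [Hz]
    fps:    camera frame rate [frames/s]
    """
    t = np.arange(frames.shape[0]) / fps
    reference = np.sin(2.0 * np.pi * f_mod * t)      # reference signal at wr
    # Multiply each frame by the reference and average over the stack;
    # components whose frequency differs from f_mod average toward zero.
    return np.tensordot(reference, frames, axes=(0, 0)) / frames.shape[0]
```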
- FIG. 18A is a characteristic diagram showing a change in luminance of the pattern light 32a projected from the projector 11, and FIG. 18B is a characteristic diagram showing a change in the feature point detection flag.
- the luminance is controlled so as to change in a sine wave shape.
- the setting of the brightness of the pattern light 32a will be described.
- The maximum luminance of the pattern light 32a (the upper peak of the luminance B1 shown in FIG. 18(a)) is set so that the pattern light 32a can be detected even under a clear sky near the summer solstice (June), when the amount of sunlight is largest.
- The minimum luminance of the pattern light (the lower peak of the luminance B1 shown in FIG. 18(a)) is set so that, with a probability of 99% or more, the projection pattern is not erroneously detected as a feature point of the road surface.
- the luminance modulation frequency of the pattern light is, for example, 200 [Hz], and the frame rate of the camera 12 (number of images taken per second) is 2400 [fps].
- The maximum speed of the vehicle is assumed to be 72 km/h (= 20 m/s).
- Since one modulation cycle at 200 Hz lasts 5 ms, the movement amount in one cycle is thereby suppressed to 20 m/s × 5 ms = 0.1 m or less.
- it is desirable that the area where the pattern light 32a is projected and the area where the feature points are detected be as close as possible or the same.
- From the principle of synchronous detection, if the amount of movement during one cycle increases and the road surface condition changes within the cycle, the sunlight and artificial light other than the projection pattern also change, and the assumption underlying synchronous detection may no longer hold. This is avoided by keeping the amount of movement in one cycle small; accordingly, further performance improvement can be expected by using a faster camera and shortening this period.
- Next, the feature point detection unit 23 determines whether or not the luminance of the pattern light 32a projected from the projector 11 is equal to or less than a preset threshold luminance Bth.
- When the luminance exceeds the threshold, the pattern light extraction unit 21 extracts the pattern light 32a by the synchronous detection processing described above. For example, as shown in FIGS. 19(a) and 19(c), the pattern light 32a is extracted by performing synchronous detection processing on images captured at time t1 or time t3 shown in FIG. 18(b), times at which the luminance B1 is larger than the threshold luminance Bth.
- Meanwhile, the feature point detection unit 23 detects feature points existing on the road surface 31. Specifically, when the luminance B1 of the pattern light 32a varies sinusoidally as shown in FIG. 18(a), feature points are detected in the time zones in which the luminance B1 is equal to or less than the threshold luminance Bth. That is, as shown in FIG. 18(b), the feature point detection flag is set to "1" in the time zones in which the luminance B1 is equal to or less than the threshold luminance Bth, and feature points are detected in these time zones, for example from an image captured at time t2 as shown in FIG. 19(b).
- the posture change amount calculation unit 24 calculates the change amount of the feature point with respect to the camera 12 based on the position of the feature point detected by the feature point detection unit 23 on the image.
- the region for detecting the feature points is a region that is entirely or partially overlapped with the region that projects the pattern light 32a.
- When the luminance determination unit 25 determines that the pattern detection state is equal to or greater than the predetermined threshold, the pattern light extraction unit 21 generates a superimposed image by superimposing (adding the luminance values of) the synchronous images, obtained by synchronously detecting the images captured by the camera 12 at the predetermined modulation frequency, for a predetermined number of cycles.
- An image in which only the projection pattern is extracted is obtained from the images captured during one cycle of the reference signal.
- The luminance values of the images extracted in each cycle are then added together pixel by pixel to form the superimposed image.
- When extracting the spot lights from the superimposed image, the luminance value of each pixel may be divided by the number of superimposed images for normalization, and the spot lights may then be extracted by binarization processing.
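- A minimal sketch of this normalization and binarization step follows; the relative threshold used for binarization is an assumption for illustration.

```python
import numpy as np

def extract_spots(superimposed, n_images, rel_threshold=0.5):
    """Normalize the superimposed image by the number of superimposed
    images, then binarize it to extract the spot lights."""
    normalized = superimposed / float(n_images)
    # Pixels brighter than a fraction of the maximum are treated as spot
    # light; the fraction (0.5) is an illustrative choice.
    return normalized >= rel_threshold * normalized.max()
```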
- The predetermined number of cycles is obtained by rounding up the first decimal place of Fn given by the following equation (7).
- That is, if the average luminance of the asphalt road surface is less than 50% of the average luminance of the pattern light, one cycle is set; two cycles if it is 75%; and five cycles if it is 90% or more.
- Fn = max{0.5 ÷ (1 − Rap), 10} … (7)
- Alternatively, the number of cycles may be determined experimentally for each Rap: while increasing the average luminance value Ba from 0 in steps of 10, extraction by actual synchronous detection is performed, and the number of cycles at which the success rate of extracting each spot light of the pattern light reaches 95% or more is set.
- During generation of the superimposed image, the previous value or the initial value is used as the starting point.
- For example, when the average luminance of the asphalt road surface is 75% of the average luminance of the pattern light, two cycles of synchronous detection are required. In that case, as shown in equation (8) below (0.2 [m] ÷ {(4 [frames] × 2 [cycles]) ÷ 1000 [fps]} = 25 [m/s] = 90 [km/h]), the movement during the superimposed cycles exceeds the allowable 0.2 m at 90 km/h or more, so the previous value or the initial value is used as the starting point at such speeds.
- Alternatively, the determination method using the stroke sensor described above can also be used.
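- The speed bound of equation (8) can be checked with a short calculation; the function below merely restates the arithmetic, with the frame rate, frames per cycle, and allowable movement taken from the description.

```python
def max_speed_kmh(frames_per_cycle=4, n_cycles=2, fps=1000, max_move_m=0.2):
    """Equation (8): the speed at which the movement during the
    superimposed cycles reaches max_move_m.
    0.2 m / ((4 frames x 2 cycles) / 1000 fps) = 25 m/s = 90 km/h."""
    seconds = (frames_per_cycle * n_cycles) / fps
    return (max_move_m / seconds) * 3.6   # m/s -> km/h

assert round(max_speed_kmh()) == 90
```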
- Times t31, t32, and t33 are the timings at which one cycle of the reference signal projection ends and the cyclic projection power falls, as shown in FIG. 20(b); at these timings the reset flag is set to "1" as shown in FIG. 20(a).
- At times t31 and t33, the number of cycles to be superimposed is set to one, and the pattern light extraction unit 21 generates a superimposed image by superimposing the images of the one cycle preceding times t31 and t33, respectively.
- At time t32, the number of cycles to be superimposed is set to two, and the pattern light extraction unit 21 generates a superimposed image by superimposing the images of the two cycles T1 and T2 preceding time t32.
- the information processing cycle shown in FIG. 21 is started at the same time that the ignition switch of the vehicle 10 is turned on and the self-position calculation device 100 is started, and is repeatedly executed until the self-position calculation device 100 stops.
- the pattern light control unit 27 controls the projector 11 to project pattern light onto the road surface 31 around the vehicle.
- The pattern light control unit 27 controls the light projection power so that the luminance B1 of the pattern light varies sinusoidally with a predetermined period.
- the frequency of the sine wave is 200 [Hz].
- In step S32, the camera 12 captures an image of the road surface 31 including the region onto which the pattern light is projected.
- In step S33, the ECU 13 determines whether or not the projection of the reference signal for synchronous detection has been completed for one cycle. If it has, the process proceeds to step S35; if it has not, the process proceeds to step S34.
- In step S34, the feature point detection unit 23 detects feature points (for example, uneven portions present on the asphalt) from the image 38, extracts the feature points for which a correspondence can be established between the previous and current information processing cycles, and updates the distance and posture angle from the positions (Ui, Vi) of the feature points on the image.
- Specifically, the feature point detection unit 23 reads the image 38 acquired by the camera 12 from the memory, detects feature points on the road surface 31 from the image 38, and stores the position (Ui, Vi) of each feature point on the image in the memory.
- The posture change amount calculation unit 24 reads the position (Ui, Vi) of each feature point on the image from the memory and calculates the relative position (Xi, Yi, Zi) of each feature point with respect to the camera 12 from the distance and posture angle and the position (Ui, Vi) of the feature point on the image.
- the posture change amount calculation unit 24 stores the relative position (Xi, Yi, Zi) of the feature point with respect to the camera 12 in the memory.
- Further, the posture change amount calculation unit 24 reads from the memory the position (Ui, Vi) of each feature point on the image and the relative position (Xi, Yi, Zi) of each feature point calculated in step S31 of the previous information processing cycle.
- The posture change amount calculation unit 24 then calculates the change amounts of the distance and posture angle using the relative positions (Xi, Yi, Zi) of the feature points and the positions (Ui, Vi) on the image that can be correlated between the previous and current information processing cycles.
- the posture change amount calculation unit 24 updates the distance and posture angle by adding the above-described change amounts of the distance and posture angle to the distance and posture angle obtained in the previous information processing cycle. Then, the updated distance and posture angle are stored in the memory.
- This is a process of updating the distance and posture angle by integrating the distance and posture angle of each information processing cycle with the distance and posture angle set in step S34 or step S37 (described later) of the previous cycle. Thereafter, the process proceeds to step S38.
- In step S35, the pattern light extraction unit 21 sets the number of cycles of synchronous detection necessary for pattern light extraction from the average luminance of the image acquired by the camera 12.
- In step S36, the pattern light extraction unit 21 extracts the pattern light by synchronous detection from the group of images acquired in the current reference signal cycle.
- In step S37, the pattern light extraction unit 21 superimposes the pattern light images extracted by synchronous detection in past cycles for the number of cycles set in step S35 to generate a superimposed image.
- The pattern light extraction unit 21 then extracts the position of the pattern light from the generated superimposed image, and the posture angle calculation unit 22 calculates the distance and posture angle based on that position.
- In step S38, the ECU 13 selects the starting point of the integration calculation.
- Specifically, at the timing when the distance and posture angle have been calculated from the pattern light in step S37, these values are selected and set as the starting point. Further, when a preset condition is satisfied, for example when the feature point detection state of the feature point detection unit 23 deteriorates and sufficiently many feature points cannot be detected at the timing when the feature point detection flag is "1", the starting point of the integration calculation is reset to the distance and posture angle calculated from the pattern light, that is, those calculated in step S37.
- Otherwise, the distance and posture angle continue to be updated based on the positions of the feature points.
- In such a case, the distance and posture angle of the camera 12 cannot be set with high accuracy, and if those low-accuracy values were used to calculate the movement amount of the vehicle, the movement amount could not be detected with high accuracy. In such a case, therefore, the starting point of the movement amount calculation is reset to the distance and posture angle obtained from the position of the pattern light, which prevents large errors in the distance and posture angle.
- In step S39, the self-position calculation unit 26 calculates the movement amount of the camera 12 relative to the road surface 31 (ΔL), that is, the movement amount of the vehicle 10, from the distance and posture angle obtained in step S34 or S37, the starting point of the integration calculation, and the change amounts of the positions (Ui, Vi) of the feature points on the image.
- The current position of the vehicle 10 can thus be calculated by repeatedly executing the above series of information processing cycles and integrating the movement amount of the vehicle 10.
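- A planar dead-reckoning sketch of this integration follows; the patent integrates the full distance and posture angle, so the two-dimensional state here is a simplification for illustration.

```python
import numpy as np

def integrate_movement(position, yaw, delta_l, delta_yaw):
    """Add one cycle's movement amount (delta_l) and yaw change to the
    position and heading carried over from the previous cycle."""
    yaw = yaw + delta_yaw
    position = position + delta_l * np.array([np.cos(yaw), np.sin(yaw)])
    return position, yaw

# Example: starting from the origin, accumulate per-cycle movement amounts.
pos, yaw = np.zeros(2), 0.0
for dl, dyaw in [(0.1, 0.0), (0.1, 0.01), (0.1, 0.02)]:
    pos, yaw = integrate_movement(pos, yaw, dl, dyaw)
```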
- As described above, according to the third embodiment, when the luminance determination unit 25 determines that the detection state of the pattern light is equal to or greater than the threshold, the pattern light extraction unit 21 superimposes images across frames for a predetermined number of cycles. The pattern light projected on the road surface can thus be detected with high accuracy even when the external environment is bright, and the self-position of the vehicle can be calculated with high accuracy.
- Furthermore, the pattern light extraction unit 21 sets the number of cycles necessary for generating the superimposed image according to the pattern detection state, such as the average luminance of the images acquired by the camera 12. The luminance value of the pattern light to be detected can thus be adjusted to the brightness of the external environment, and the pattern light can be detected with high accuracy.
- In the embodiments described above, a superimposed image is generated by superimposing images acquired by the camera 12 from the past up to the present, but a superimposed image may also be generated by superimposing one or more newly acquired images.
- The pattern light extraction unit 21 generates the superimposed image once the number of images necessary for pattern light extraction has been acquired. Until that number has been acquired (that is, while the superimposed image is being generated), the self-position calculation unit may start the integration calculation using the previous value or the initial value as the starting point.
- FIG. 2 shows an example in which the camera 12 and the projector 11 are attached to the front of the vehicle 10, but they may instead be installed facing sideways, rearward, or directly downward from the vehicle 10.
- Although a four-wheeled passenger car is shown as an example of the vehicle 10 in FIG. 2, the present invention can be applied to any moving body (vehicle) capable of imaging feature points on a road surface or wall surface, such as a motorcycle, a freight vehicle, or a special vehicle carrying construction machinery.
- 10 vehicle, 11 projector, 12 camera (imaging unit), 13 ECU, 21 pattern light extraction unit (superimposed image generation unit), 22 posture angle calculation unit, 23 feature point detection unit, 24 posture change amount calculation unit, 25 luminance determination unit (pattern light detection state determination unit), 26 self-position calculation unit, 28 detection state determination unit, 29 calculation state determination unit, 30 road surface state determination unit, 31 road surface, 32a, 32b pattern light, 36 plane calculation unit, Te feature point
Abstract
Description
[Hardware Configuration]
First, the hardware configuration of the self-position calculation device according to the first embodiment will be described with reference to FIG. 1. The self-position calculation device includes a projector 11, a camera 12, and an engine control unit (ECU) 13. The projector 11 is mounted on the vehicle and projects pattern light onto the road surface around the vehicle. The camera 12, mounted on the vehicle, is an example of an imaging unit that captures images of the road surface around the vehicle, including the region onto which the pattern light is projected. The ECU 13 is an example of a control unit that controls the projector 11 and executes a series of information processing cycles for estimating the movement amount of the vehicle from the images acquired by the camera 12.
Sn = max{0.5 ÷ (1 − Rap)², 50} … (5)
0.2 [m] ÷ (8 [frames] ÷ 1000 [fps]) = 25 [m/s] = 90 [km/h] … (6)
Next, as an example of the self-position calculation method for estimating the movement amount of the vehicle 10 from the image 38 acquired by the camera 12, the information processing cycle repeatedly executed by the ECU 13 will be described with reference to FIGS. 10 and 11. The information processing cycle shown in the flowchart of FIG. 10 is started as soon as the ignition switch of the vehicle 10 is turned on and the self-position calculation device starts, and is repeatedly executed until the self-position calculation device stops.
As described above, according to the first embodiment, the luminance determination unit 25 determines the detection state of the pattern light, and when the detection state is equal to or greater than the threshold, the pattern light extraction unit 21 superimposes the images of preceding and following frames to generate a superimposed image and extracts the pattern light from the superimposed image. The pattern light projected on the road surface can thus be detected with high accuracy even when the external environment is bright, and the self-position of the vehicle can be calculated with high accuracy.
[Hardware Configuration]
As a second embodiment of the present invention, a case will be described in which the self-position is calculated based on changes in the road surface state around the vehicle. As shown in FIG. 12, the self-position calculation device according to the second embodiment differs from the first embodiment in that it includes a road surface state determination unit 30 in place of the detection state determination unit 28 and the calculation state determination unit 29. The other configurations are the same as in the first embodiment, and their description is omitted.
Next, the self-position calculation method according to the second embodiment of the present invention will be described with reference to FIGS. 15 and 16. The procedures of steps S20 to S27 and S29 shown in FIG. 15 are the same as those of steps S10 to S17 and S19 shown in FIG. 10, and their description is omitted.
As described above, according to the second embodiment, the luminance determination unit 25 determines the detection state of the pattern light, and when the detection state is equal to or greater than the threshold, the pattern light extraction unit 21 superimposes images across frames to generate a superimposed image. The pattern light projected on the road surface can thus be detected with high accuracy even when the external environment is bright, and the self-position of the vehicle can be calculated with high accuracy.
[Hardware Configuration]
As a third embodiment of the present invention, a case will be described in which a superimposed image is generated by superimposing images obtained by synchronous detection for a predetermined number of cycles. As shown in FIG. 17, the self-position calculation device according to the third embodiment differs from the first embodiment in that it does not include the detection state determination unit 28 and the calculation state determination unit 29. The other configurations are the same as in the first embodiment, and their description is omitted.
Fn = max{0.5 ÷ (1 − Rap), 10} … (7)
0.2 [m] ÷ {(4 [frames] × 2 [cycles]) ÷ 1000 [fps]} = 25 [m/s] = 90 [km/h] … (8)
Next, as an example of the self-position calculation method for estimating the movement amount of the vehicle 10 from the image 38 (see FIG. 5) acquired by the camera 12, the information processing cycle repeatedly executed by the ECU 13 will be described with reference to the flowchart shown in FIG. 21.
As described above, according to the third embodiment, the luminance determination unit 25 determines the detection state of the pattern light, and when the detection state is equal to or greater than the threshold, the pattern light extraction unit 21 superimposes images across frames for a predetermined number of cycles to generate a superimposed image. The pattern light projected on the road surface can thus be detected with high accuracy even when the external environment is bright, and the self-position of the vehicle can be calculated with high accuracy.
Although the first to third embodiments of the present invention have been described above, the statements and drawings forming part of this disclosure should not be understood as limiting the invention. Various alternative embodiments, examples, and operational techniques will become apparent to those skilled in the art from this disclosure.
- 10 vehicle
- 11 projector
- 12 camera (imaging unit)
- 21 pattern light extraction unit (superimposed image generation unit)
- 22 posture angle calculation unit
- 23 feature point detection unit
- 24 posture change amount calculation unit
- 25 luminance determination unit (pattern light detection state determination unit)
- 26 self-position calculation unit
- 28 detection state determination unit
- 29 calculation state determination unit
- 30 road surface state determination unit
- 31 road surface
- 32a, 32b pattern light
- 36 plane calculation unit
- Te feature point
Claims (5)
- 1. A self-position calculation apparatus comprising:
a projector that projects pattern light onto a road surface around a vehicle;
an imaging unit, mounted on the vehicle, that captures images of the road surface around the vehicle including a region onto which the pattern light is projected;
a pattern light extraction unit that extracts the position of the pattern light from an image acquired by the imaging unit;
a posture angle calculation unit that calculates the posture angle of the vehicle with respect to the road surface from the extracted position of the pattern light;
a posture change amount calculation unit that calculates the posture change amount of the vehicle based on temporal changes of a plurality of feature points on the road surface in the images acquired by the imaging unit; and
a self-position calculation unit that calculates the current position and posture angle of the vehicle by successively adding the posture change amount to an initial position and posture angle of the vehicle,
wherein, when the detection state of the pattern light is equal to or greater than a threshold, the pattern light extraction unit generates a superimposed image by superimposing images of frames acquired by the imaging unit and extracts the position of the pattern light from the superimposed image.
- 2. The self-position calculation apparatus according to claim 1, wherein the pattern light extraction unit sets the number of images to be superimposed as the superimposed image according to the luminance value of the image acquired by the imaging unit.
- 3. The self-position calculation apparatus according to claim 1 or 2, wherein, while the superimposed image is being generated by the pattern light extraction unit, the self-position calculation unit starts the addition of the posture change amount using, as the starting point, the posture angle adopted in the previous information processing cycle or the initial posture angle.
- 4. The self-position calculation apparatus according to any one of claims 1 to 3, further comprising a pattern light control unit that modulates the luminance of the pattern light at a predetermined modulation frequency, wherein the pattern light extraction unit generates the superimposed image by superimposing, for a predetermined number of cycles, synchronous images obtained by synchronously detecting the images acquired by the imaging unit at the predetermined modulation frequency.
- 5. A self-position calculation method comprising:
projecting pattern light from a projector onto a road surface around a vehicle;
capturing, with an imaging unit, images of the road surface around the vehicle including a region onto which the pattern light is projected;
extracting the position of the pattern light from an image acquired by the imaging unit;
calculating the posture angle of the vehicle with respect to the road surface from the extracted position of the pattern light;
calculating the posture change amount of the vehicle based on temporal changes of a plurality of feature points on the road surface in the images acquired by the imaging unit; and
calculating the current position and posture angle of the vehicle by successively adding the posture change amount to an initial position and posture angle of the vehicle,
wherein, in extracting the position of the pattern light, when the detection state of the pattern light is equal to or greater than a threshold, a superimposed image is generated by superimposing images of frames acquired by the imaging unit, and the position of the pattern light is extracted from the superimposed image.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14899118.5A EP3179462B1 (en) | 2014-08-04 | 2014-08-04 | Device and method for calculating a position of a vehicle |
RU2017107021A RU2636235C1 (ru) | 2014-08-04 | 2014-08-04 | Устройство вычисления собственной позиции и способ вычисления собственной позиции |
US15/329,897 US9933252B2 (en) | 2014-08-04 | 2014-08-04 | Self-position calculating apparatus and self-position calculating method |
JP2016539706A JP6269838B2 (ja) | 2014-08-04 | 2014-08-04 | 自己位置算出装置及び自己位置算出方法 |
CN201480080905.2A CN106663376B (zh) | 2014-08-04 | 2014-08-04 | 自身位置计算装置以及自身位置计算方法 |
MX2017001248A MX357830B (es) | 2014-08-04 | 2014-08-04 | Aparato de cálculo de la posición propia y método de cálculo de la posición propia. |
BR112017002129-3A BR112017002129B1 (pt) | 2014-08-04 | 2014-08-04 | Aparelho de cálculo de autoposição e método de cálculo de autoposição |
PCT/JP2014/070480 WO2016020970A1 (ja) | 2014-08-04 | 2014-08-04 | 自己位置算出装置及び自己位置算出方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2014/070480 WO2016020970A1 (ja) | 2014-08-04 | 2014-08-04 | 自己位置算出装置及び自己位置算出方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016020970A1 true WO2016020970A1 (ja) | 2016-02-11 |
Family
ID=55263277
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/070480 WO2016020970A1 (ja) | 2014-08-04 | 2014-08-04 | 自己位置算出装置及び自己位置算出方法 |
Country Status (8)
Country | Link |
---|---|
US (1) | US9933252B2 (ja) |
EP (1) | EP3179462B1 (ja) |
JP (1) | JP6269838B2 (ja) |
CN (1) | CN106663376B (ja) |
BR (1) | BR112017002129B1 (ja) |
MX (1) | MX357830B (ja) |
RU (1) | RU2636235C1 (ja) |
WO (1) | WO2016020970A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018004435A (ja) * | 2016-07-01 | 2018-01-11 | 株式会社日立製作所 | 移動量算出装置および移動量算出方法 |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016046788A1 (en) * | 2014-09-24 | 2016-03-31 | Bombardier Inc. | Laser vision inspection system and method |
CA2995228A1 (en) * | 2015-08-21 | 2017-03-02 | Adcole Corporation | Optical profiler and methods of use thereof |
US11100673B2 (en) | 2015-09-24 | 2021-08-24 | Apple Inc. | Systems and methods for localization using surface imaging |
US10832426B2 (en) * | 2015-09-24 | 2020-11-10 | Apple Inc. | Systems and methods for surface monitoring |
EP3174007A1 (en) | 2015-11-30 | 2017-05-31 | Delphi Technologies, Inc. | Method for calibrating the orientation of a camera mounted to a vehicle |
EP3173979A1 (en) | 2015-11-30 | 2017-05-31 | Delphi Technologies, Inc. | Method for identification of characteristic points of a calibration pattern within a set of candidate points in an image of the calibration pattern |
JP6707378B2 (ja) * | 2016-03-25 | 2020-06-10 | 本田技研工業株式会社 | 自己位置推定装置および自己位置推定方法 |
DE102016006390A1 (de) * | 2016-05-24 | 2017-11-30 | Audi Ag | Beleuchtungseinrichtung für ein Kraftfahrzeug zur Erhöhung der Erkennbarkeit eines Hindernisses |
JP6601352B2 (ja) * | 2016-09-15 | 2019-11-06 | 株式会社デンソー | 車両姿勢推定装置 |
JP6838340B2 (ja) * | 2016-09-30 | 2021-03-03 | アイシン精機株式会社 | 周辺監視装置 |
JP6840024B2 (ja) * | 2017-04-26 | 2021-03-10 | 株式会社クボタ | オフロード車両及び地面管理システム |
JP6499226B2 (ja) * | 2017-06-02 | 2019-04-10 | 株式会社Subaru | 車載カメラのキャリブレーション装置及び車載カメラのキャリブレーション方法 |
FR3068458B1 (fr) * | 2017-06-28 | 2019-08-09 | Micro-Controle - Spectra Physics | Procede et dispositif de generation d'un signal impulsionnel a des positions particulieres d'un element mobile. |
JP6849569B2 (ja) * | 2017-09-29 | 2021-03-24 | トヨタ自動車株式会社 | 路面検出装置 |
JP6932058B2 (ja) * | 2017-10-11 | 2021-09-08 | 日立Astemo株式会社 | 移動体の位置推定装置及び位置推定方法 |
JP7064163B2 (ja) * | 2017-12-07 | 2022-05-10 | コニカミノルタ株式会社 | 3次元情報取得システム |
EP3534333A1 (en) * | 2018-02-28 | 2019-09-04 | Aptiv Technologies Limited | Method for calibrating the position and orientation of a camera relative to a calibration pattern |
EP3534334B1 (en) | 2018-02-28 | 2022-04-13 | Aptiv Technologies Limited | Method for identification of characteristic points of a calibration pattern within a set of candidate points derived from an image of the calibration pattern |
CN108426556B (zh) * | 2018-03-09 | 2019-05-24 | 安徽农业大学 | 一种基于加速度的测力车轮转动角度测试方法 |
JP7124424B2 (ja) * | 2018-05-02 | 2022-08-24 | オムロン株式会社 | 3次元形状計測システム及び計測時間設定方法 |
US11221631B2 (en) | 2019-04-24 | 2022-01-11 | Innovation First, Inc. | Performance arena for robots with position location system |
US11200654B2 (en) * | 2019-08-14 | 2021-12-14 | Cnh Industrial America Llc | System and method for determining field characteristics based on a displayed light pattern |
KR20220026423A (ko) * | 2020-08-25 | 2022-03-04 | 삼성전자주식회사 | 지면에 수직인 평면들의 3차원 재구성을 위한 방법 및 장치 |
DE102020124785A1 (de) | 2020-09-23 | 2022-03-24 | Connaught Electronics Ltd. | Verfahren zum Überwachen eines Fokus einer an einem Kraftfahrzeug angeordneten Kamera, Computerprogrammprodukt, computerlesbares Speichermedium sowie System |
DE102022206404A1 (de) * | 2021-07-19 | 2023-01-19 | Robert Bosch Engineering And Business Solutions Private Limited | Ein System, das dazu angepasst ist, einen Straßenzustand in einem Fahrzeug zu erkennen, und ein Verfahren dafür |
DE102021214551A1 (de) * | 2021-12-16 | 2023-06-22 | Psa Automobiles Sa | Verfahren und Vorrichtung zum Erkennen von Objekten auf einer Fahrbahnoberfläche |
DE102022203951A1 (de) * | 2022-04-25 | 2023-10-26 | Psa Automobiles Sa | Verfahren und Vorrichtung zum Projizieren von Objekten |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8066415B2 (en) * | 1999-06-17 | 2011-11-29 | Magna Mirrors Of America, Inc. | Exterior mirror vision system for a vehicle |
RU2247921C2 (ru) * | 2002-06-26 | 2005-03-10 | Анцыгин Александр Витальевич | Способ ориентирования на местности и устройство для его осуществления |
JP2004198211A (ja) * | 2002-12-18 | 2004-07-15 | Aisin Seiki Co Ltd | 移動体周辺監視装置 |
US20060095172A1 (en) * | 2004-10-28 | 2006-05-04 | Abramovitch Daniel Y | Optical navigation system for vehicles |
US20090309710A1 (en) * | 2005-04-28 | 2009-12-17 | Aisin Seiki Kabushiki Kaisha | Vehicle Vicinity Monitoring System |
JP4780614B2 (ja) | 2006-04-10 | 2011-09-28 | アルパイン株式会社 | 車体挙動測定装置 |
JP4914726B2 (ja) | 2007-01-19 | 2012-04-11 | クラリオン株式会社 | 現在位置算出装置、現在位置算出方法 |
US8559675B2 (en) * | 2009-04-23 | 2013-10-15 | Panasonic Corporation | Driving support device, driving support method, and program |
JP5817927B2 (ja) * | 2012-05-18 | 2015-11-18 | 日産自動車株式会社 | 車両用表示装置、車両用表示方法及び車両用表示プログラム |
AT514834B1 (de) * | 2013-02-07 | 2017-11-15 | Zkw Group Gmbh | Scheinwerfer für ein Kraftfahrzeug und Verfahren zum Erzeugen einer Lichtverteilung |
WO2014152470A2 (en) * | 2013-03-15 | 2014-09-25 | Tk Holdings, Inc. | Path sensing using structured lighting |
WO2015125296A1 (ja) * | 2014-02-24 | 2015-08-27 | 日産自動車株式会社 | 自己位置算出装置及び自己位置算出方法 |
EP2919057B1 (en) * | 2014-03-12 | 2022-01-19 | Harman Becker Automotive Systems GmbH | Navigation display method and system |
US20170094227A1 (en) * | 2015-09-25 | 2017-03-30 | Northrop Grumman Systems Corporation | Three-dimensional spatial-awareness vision system |
US9612123B1 (en) * | 2015-11-04 | 2017-04-04 | Zoox, Inc. | Adaptive mapping to navigate autonomous vehicles responsive to physical environment changes |
-
2014
- 2014-08-04 JP JP2016539706A patent/JP6269838B2/ja active Active
- 2014-08-04 MX MX2017001248A patent/MX357830B/es active IP Right Grant
- 2014-08-04 EP EP14899118.5A patent/EP3179462B1/en active Active
- 2014-08-04 US US15/329,897 patent/US9933252B2/en active Active
- 2014-08-04 WO PCT/JP2014/070480 patent/WO2016020970A1/ja active Application Filing
- 2014-08-04 RU RU2017107021A patent/RU2636235C1/ru active
- 2014-08-04 BR BR112017002129-3A patent/BR112017002129B1/pt active IP Right Grant
- 2014-08-04 CN CN201480080905.2A patent/CN106663376B/zh active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06325298A (ja) * | 1993-05-13 | 1994-11-25 | Yazaki Corp | 車両周辺監視装置 |
JP2004177252A (ja) * | 2002-11-27 | 2004-06-24 | Nippon Telegr & Teleph Corp <Ntt> | 姿勢計測装置および方法 |
JP2007256090A (ja) * | 2006-03-23 | 2007-10-04 | Nissan Motor Co Ltd | 車両用環境認識装置及び車両用環境認識方法 |
JP2010101683A (ja) * | 2008-10-22 | 2010-05-06 | Nissan Motor Co Ltd | 距離計測装置および距離計測方法 |
WO2012172870A1 (ja) * | 2011-06-14 | 2012-12-20 | 日産自動車株式会社 | 距離計測装置及び環境地図生成装置 |
JP2013147114A (ja) * | 2012-01-18 | 2013-08-01 | Toyota Motor Corp | 周辺環境取得装置およびサスペンション制御装置 |
JP2013187862A (ja) * | 2012-03-09 | 2013-09-19 | Topcon Corp | 画像データ処理装置、画像データ処理方法および画像データ処理用のプログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP3179462A4 * |
Also Published As
Publication number | Publication date |
---|---|
RU2636235C1 (ru) | 2017-11-21 |
CN106663376A (zh) | 2017-05-10 |
MX357830B (es) | 2018-07-26 |
JP6269838B2 (ja) | 2018-01-31 |
BR112017002129B1 (pt) | 2022-01-04 |
CN106663376B (zh) | 2018-04-06 |
EP3179462B1 (en) | 2018-10-31 |
MX2017001248A (es) | 2017-05-01 |
EP3179462A4 (en) | 2017-08-30 |
BR112017002129A2 (pt) | 2017-11-21 |
US20170261315A1 (en) | 2017-09-14 |
US9933252B2 (en) | 2018-04-03 |
JPWO2016020970A1 (ja) | 2017-05-25 |
EP3179462A1 (en) | 2017-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6269838B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6237876B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6237874B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6237875B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
RU2627914C1 (ru) | Устройство вычисления собственной позиции и способ вычисления собственной позиции | |
JP6176387B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP2013005234A5 (ja) | ||
JP6547362B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6398218B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6369897B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6299319B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6398217B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6492974B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6459745B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP5310162B2 (ja) | 車両灯火判定装置 | |
JP6369898B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6398219B2 (ja) | 自己位置算出装置及び自己位置算出方法 | |
JP6459701B2 (ja) | 自己位置算出装置及び自己位置算出方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14899118 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016539706 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15329897 Country of ref document: US Ref document number: MX/A/2017/001248 Country of ref document: MX |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112017002129 Country of ref document: BR |
|
REEP | Request for entry into the european phase |
Ref document number: 2014899118 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014899118 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2017107021 Country of ref document: RU Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 112017002129 Country of ref document: BR Kind code of ref document: A2 Effective date: 20170201 |