CN107044853B - Method and device for determining landmarks and method and device for positioning - Google Patents

Method and device for determining landmarks and method and device for positioning

Info

Publication number
CN107044853B
CN107044853B (application CN201610941150.1A)
Authority
CN
China
Prior art keywords
landmark
landmarks
data
constellation
localization
Prior art date
Legal status
Active
Application number
CN201610941150.1A
Other languages
Chinese (zh)
Other versions
CN107044853A (en)
Inventor
W·尼姆
Current Assignee
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Publication of CN107044853A
Application granted
Publication of CN107044853B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/04: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by terrestrial means
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C23/00: Combined instruments indicating more than one navigational value, e.g. for aircraft; Combined measuring devices for measuring two or more variables of movement, e.g. distance, speed or acceleration

Abstract

The invention relates to a method for determining landmarks (L1, L2) for locating a moving object. The method comprises a step of generating at least one constellation data record (150) using a determination rule (140) and landmark data (130). Here, the landmark data represents landmarks in the surroundings of the object acquired by a sensor. The determination rule can be used to determine, from the landmark data, at least one pair of landmarks suitable for localization, wherein the ratio between the lateral spacing (a) between a trajectory (105) of the object and a first landmark (L1) of the pair and the landmark spacing (L) between the first landmark (L1) and a second landmark (L2) of the pair lies within a predetermined suitable value range. The at least one constellation data record has position data of the geometric constellation, relative to the object, of a pair of landmarks suitable for localization.

Description

Method and device for determining landmarks and method and device for positioning
Technical Field
The invention relates to an apparatus and a method according to the content of the independent claims. The invention also relates to a computer program.
Background
Some systems for highly automated driving, robotics and augmented reality may be based, for example, on analysis of highly accurate measured landmarks.
Disclosure of Invention
Against this background, the approach described here proposes methods according to the independent claims, furthermore an apparatus which uses at least one of these methods, and finally a corresponding computer program. Advantageous embodiments and refinements of the apparatus specified in the independent claims are possible by means of the measures specified in the dependent claims.
According to embodiments of the present invention, an optimized acquisition of landmarks for localization can be provided, in particular on the basis of the geometric relationship between existing landmarks and the object's own position. In particular, an optimal acquisition of landmarks for the localization of highly automated vehicles, robots and augmented-reality systems can be provided. In this case, the localization can be achieved in particular by using only a subset of the landmarks, namely those which are particularly suitable for localization and whose constellation makes it possible to minimize the localization error.
Advantageously, embodiments according to the invention make it possible in particular to optimally select and significantly reduce the number of landmarks which can be used, for example, for locating highly automated vehicles or robots. Landmarks identified by means of the selection process as being particularly suitable for positioning enable a more reliable, faster, less memory-intensive, less computation-intensive and more accurate positioning with a reduced amount of data to be transmitted.
A method for determining a landmark for locating a moving object, wherein the method has the following steps:
at least one constellation data record is generated using a determination rule and landmark data, wherein the landmark data represents landmarks in the surroundings of the object acquired by a sensor, wherein the determination rule can be used in order to determine at least one pair of landmarks suitable for positioning from the landmark data, wherein a ratio between a lateral distance between a trajectory of the object and a first landmark of the pair and a landmark distance between the first landmark and a second landmark of the pair lies in a predetermined suitable value range, wherein the at least one constellation data record has position data of the geometric constellation, relative to the object, of a pair of landmarks suitable for positioning.
The method can be implemented, for example, in software or hardware or in a combination of software and hardware, for example in a control unit. The object may be, for example, a vehicle, in particular a motor vehicle, for example a vehicle designed for at least partially automated driving, a robot or the like. The surroundings of the object may be a geographical area in which the object may be located. Landmarks in the surroundings of the object can be acquired, for example, by means of at least one sensor device. The method for determining can be implemented, for example, in conjunction with a dedicated vehicle designed for determining landmarks in the surroundings of an object, wherein this vehicle may have at least one sensor device. The at least one constellation data record may thus have position data of suitable landmarks, wherein the position data may relate to a predetermined coordinate system, for example of a digital map. Landmarks may have position data, image data or the like extracted from sensor data. The suitable value range may include a maximum suitability value for a pair of landmarks, wherein the maximum suitability value may correspond to a value of the ratio between the lateral spacing and the landmark spacing.
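As an illustration of the determination rule described above, the following minimal sketch (not taken from the patent) checks whether the ratio a/L of a candidate pair lies within a suitable value range and, if so, builds a constellation data record. The range limits of 0.3 to 0.5 and all field names are assumptions for illustration, motivated by the optimization result discussed in connection with fig. 18 further below.

```python
# Minimal sketch, not the patented implementation: a landmark pair is accepted
# if the ratio a/L (lateral spacing / landmark spacing) lies in a predetermined
# suitable value range. The range (0.3, 0.5) is an assumption.

def is_suitable_pair(a_m, l_m, ratio_range=(0.3, 0.5)):
    """Return True if the ratio a/L lies within the suitable value range."""
    if l_m <= 0.0:
        return False
    return ratio_range[0] <= a_m / l_m <= ratio_range[1]

def make_constellation_record(landmark_1, landmark_2, a_m, l_m):
    """Build a constellation data record with the position data of a pair of
    landmarks suitable for localization (field names are illustrative)."""
    return {
        "landmarks": (landmark_1, landmark_2),   # e.g. map coordinates (x, y)
        "lateral_spacing_m": a_m,
        "landmark_spacing_m": l_m,
        "ratio": a_m / l_m,
    }

# Example: L1 and L2 are 50 m apart along the trajectory, 20 m to the side.
if is_suitable_pair(20.0, 50.0):
    record = make_constellation_record((0.0, 20.0), (50.0, 20.0), 20.0, 50.0)
```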
According to one embodiment, the method may have the step of extracting the landmark data from raw landmark data, namely for those landmarks whose re-identification value, independent of the visibility conditions, is greater than a minimum re-identification value, the raw landmark data representing the total number of landmarks in the surroundings of the object. Here, the step of generating may be performed using the extracted landmark data. The visibility-independent re-identification value may be determined under different visibility conditions using the line of sight existing between the object and the landmark. This embodiment offers the advantage that only those landmarks are determined for the localization which can be reliably re-identified even under unfavorable conditions. This can improve the reliability of the localization and, if appropriate, its speed.
Furthermore, the method may have the step of predetermining the suitable value range using the determination rule and additionally or alternatively taking into account the landmark data, the position of the object, the trajectory of the object and/or the velocity of the object. This embodiment offers the advantage that landmarks suitable for positioning can also be determined in different movement situations of the object.
The method may also have the step of adding position data of at least one pair of landmarks suitable for positioning from at least one generated constellation data record to map data of a digital map. Here, the digital map may represent a reference coordinate system. Furthermore, the digital map may be designed to be usable by the moving object for localization.
A method for locating a moving object is also proposed, wherein the method has the following steps:
reading at least one constellation data record representing a data record generated according to one embodiment of the aforementioned method;
identifying landmarks in an image of the surroundings of the object under consideration of position data of landmarks suitable for localization from the at least one constellation data record; and
positioning is carried out using the position data of the landmarks identified in the step of identifying.
The method can be implemented, for example, in software or hardware or in a combination of software and hardware, for example, in a control unit. The method for locating can be carried out in conjunction with a moving object. The method for locating can also be implemented in combination with the embodiments of the method for determining landmarks described above.
According to one embodiment, in the step of performing, the localization is performed using landmark data, a position of the object, a trajectory of the object and/or a velocity of the object. The described embodiments provide the advantage that a reliable and accurate positioning is achieved depending on the motion situation of the object.
The method may also have the step of acquiring an image of the surroundings of the object. The step of acquiring can be carried out using at least one acquisition device, wherein the object may have the at least one acquisition device. This embodiment provides the advantage that, for locating the object, images of the surroundings corresponding to an accurate and current motion constellation can be used.
The solution described here also proposes a device which is designed to carry out, control or implement the steps of a variant of the method presented here in corresponding units. The object of the invention can also be achieved quickly and efficiently by this embodiment variant of the invention in the form of a device.
In this case, a device is understood to be an electronic device which processes sensor signals and outputs control and/or data signals as a function of them. The device may have an interface, which may be embodied in hardware and/or software. In a hardware embodiment, the interfaces may be, for example, part of a so-called system ASIC, which contains a wide variety of functions of the device. However, it is also possible for the interfaces to be separate integrated circuits or to be formed at least partially from discrete components. In a software embodiment, the interfaces may be software modules which are present, for example, on a microcontroller alongside other software modules.
A computer program product or a computer program with a program code may also be used, which is stored on a machine-readable carrier or storage medium, for example a semiconductor memory, a hard disk memory or an optical memory, and is used to carry out, implement and/or control the steps of the method according to one of the preceding embodiments when the program product or program is executed on a computer or a device.
Drawings
Embodiments of the invention are illustrated in the drawings and are set forth in detail in the following description. In the drawings:
fig. 1 shows a schematic illustration of a determination device according to an embodiment;
FIG. 2 shows a schematic illustration of a positioning device according to an embodiment;
FIG. 3 shows a flow diagram of a method for the determination, according to one embodiment;
FIG. 4 shows a flow diagram of a method for positioning according to one embodiment;
fig. 5 shows a schematic illustration of the determination means 110 from fig. 1;
FIG. 6 shows a schematic illustration of a camera model for recording landmarks according to one embodiment;
FIG. 7 shows a schematic illustration of a landmark L and a positioning error, according to one embodiment;
FIG. 8 shows a schematic top view of an error model, according to an embodiment;
fig. 9 shows a schematic illustration of a constellation of landmarks according to an embodiment;
FIG. 10 shows a graph of error as a function of viewing angle;
fig. 11 shows a schematic illustration of a geometric error model for a first constellation of landmarks, according to one embodiment;
fig. 12 shows a schematic illustration of a geometric error model for a second constellation of landmarks, according to one embodiment;
FIG. 13 illustrates an error plot according to a viewing angle φ in accordance with an embodiment;
FIG. 14 shows a schematic illustration of a determination error according to an embodiment;
FIG. 15 shows a schematic illustration for determining a landmark view angle, according to one embodiment;
FIG. 16 shows an optimization chart with respect to a constellation of landmarks according to one embodiment;
FIG. 17 illustrates an optimization chart with respect to a constellation of landmarks according to one embodiment; and
fig. 18 shows an optimization chart with respect to a constellation of landmarks according to one embodiment.
In the following description of advantageous embodiments of the invention, the same or similar reference numerals are used for elements which are shown in different figures and which function similarly, wherein repeated descriptions of said elements are omitted.
Detailed Description
Fig. 1 shows a schematic illustration of a determination device 110 according to an embodiment. Here, the determination device 110 is arranged merely as an example in the measuring vehicle 100. According to the exemplary embodiment shown in fig. 1, the measuring vehicle 100 travels along a track 105 on a road. For better illustration, the measuring vehicle 100 is again shown enlarged and schematically in fig. 1.
The first landmark L1 is arranged at a lateral distance a from the trajectory 105. The second landmark L2 is arranged at a landmark spacing L from the first landmark L1. According to the exemplary embodiment shown in fig. 1, the landmark spacing L extends along the trajectory 105. Thus, the second landmark L2 likewise has the lateral spacing a from the trajectory 105. For purposes of illustration, the lines of sight between the landmarks L1 and L2 and the measuring vehicle 100 are also shown in fig. 1.
The measuring vehicle 100 has a determination device 110. The determination device 110 is designed to determine landmarks for locating a moving object. For this purpose, the determination device 110 has a generation device 120; in other words, of the determination device 110, only the generation device 120 is shown in fig. 1. The generation device 120 is designed to generate at least one constellation data record 150 using the landmark data 130 and the determination rule 140.
The moving object is, for example, a further vehicle, for example a user vehicle. The additional vehicle or user vehicle may move at least partially along the same or similar trajectory as the trajectory 105 of the measuring vehicle 100. This is also illustrated in detail in fig. 2.
Here, the landmark data 130 represents landmarks, acquired by a sensor, in the surroundings in which a moving object can move, in this case the first landmark L1 and the second landmark L2. The landmark data 130 has, for example, position data of the first landmark L1 and of the second landmark L2.
It is to be noted here that only two landmarks L1 and L2 are exemplarily shown according to the embodiment shown in fig. 1. Additional landmarks may also be present according to an additional embodiment.
The determination rule 140 can be used by the determination means 110, in particular the generation means 120, in order to determine at least one pair of landmarks suitable for localization from the landmark data 130, wherein the ratio between the lateral spacing a between the trajectory of the object and the first landmark L1 of the pair and the landmark spacing L between the first landmark L1 and the second landmark L2 of the pair lies within a predetermined suitable value range.
At least one constellation data record 150 which can be generated by means of the determination means 110, in particular the generation means 120, has position data of a pair of landmarks L1 and L2 suitable for localization relative to the geometric constellation of the object.
Using the determination device 110, landmarks can be determined or mapped optimally in highly accurate digital maps, for example for highly automated driving, robotics and data acquisition. The measuring vehicle 100 has, in particular, an RTK-GPS device (Real Time Kinematic Global Positioning System) and a digital map, as well as sensors mounted on the vehicle, such as a video device, radar, lidar, etc., for measuring the surroundings of the vehicle. In a first step, those landmarks L1 and L2 or features which are particularly suitable for being re-identified even under different visibility and illumination conditions are extracted from the data of the on-board sensor system, in particular from the landmark data 130. Examples are features of road signs, traffic lights, markings on the road surface, etc. These features, typically a large number, are filtered, reduced or thinned out using the determination device 110 in such a way that only those features are retained whose geometric constellation achieves a high positioning accuracy. An advantageous landmark constellation is one in which the lines of sight between the landmarks and the measuring vehicle 100 intersect at as close to a right angle as possible, which in turn results in a minimal positioning error. The features are then written into the localization map, with their characterizing descriptors and corresponding 3D coordinates, as constellation data records 150 and are subsequently used as landmarks L1 and L2 for localization.
Fig. 2 shows a schematic illustration of a positioning device 210 according to an embodiment. The positioning device 210 is arranged in a moving object 200, which is designed as a vehicle 200 or a user vehicle 200, by way of example only. The representation in fig. 2 corresponds to the representation from fig. 1, with the exception of the user vehicle 200. Thus, the user vehicle 200 travels along the trajectory 105 on the road. For the sake of better illustration, the user vehicle 200 is also shown enlarged and schematically in fig. 2.
The user vehicle 200 has a positioning device 210. The positioning device 210 is designed for positioning a moving object 200 or a user vehicle 200. The locating device 210 is designed to locate the user vehicle 200 using at least one constellation data record 150, which is generated by means of the determination device 110 from fig. 1 or a similar determination device. In other words, the locating device 210 is designed to locate the moving object 200 or the user vehicle 200 using at least one constellation data record 150 from fig. 1 or a similar constellation data record.
For this purpose, the positioning device 210 has a reading device 212, an identification device 214 and an execution device 216. The reading device 212 is designed here for reading at least one constellation data record 150. The identification device 214 is designed to identify landmarks in the image of the surroundings of the user vehicle 200 taking into account the position data of the landmarks L1 and L2 suitable for positioning from the at least one constellation data record 150. Here, the image of the surroundings is represented by image data 220. The image data 220 is acquired, for example, using a sensor device of the user vehicle 200. The identification device 214 is designed to receive or read in the image data 220. The identification device 214 is thus designed to identify, in the image data 220, landmarks suitable for localization using the at least one constellation data record 150, here for example the landmarks L1 and L2. The execution device 216 is designed for positioning using the position data of the landmarks L1 and L2 identified by means of the identification device 214.
By means of the positioning device 210, an optimized selection of landmarks for positioning during driving operation is thus possible, in particular for highly automated driving and robotics. When the landmarks L1 and L2 are used for an automated driving function in the vehicle 200 or user vehicle 200, the user vehicle 200 is provided, for example, with a GPS, a digital map, on-board sensors and a localization map with at least one constellation data record 150. In particular during at least partially automated driving, the positioning device 210 continuously attempts, for example by means of the on-board sensor system, to re-identify landmarks stored in the localization map, for example the landmarks L1 and L2, for the current position, speed and orientation of the user vehicle 200. The positioning device 210 is designed to identify or find precisely those landmarks, here preferably the landmarks L1 and L2, which lead to a landmark geometric constellation with the greatest positioning accuracy for the current position and orientation of the user vehicle 200.
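The runtime selection just described can be sketched as follows. This is a simplified illustration under the assumptions that the constellation records store 2D landmark coordinates (as in the earlier sketch) and that the pair whose viewing angle at the current vehicle position is closest to 90 degrees is preferred, since, as explained with reference to figs. 10 to 13 further below, such a constellation minimizes the positioning error.

```python
import math

# Sketch (not from the patent text): for the current estimated vehicle position,
# pick the stored landmark pair whose viewing angle is closest to 90 degrees.
# Record layout follows the earlier illustrative sketch.

def viewing_angle_deg(vehicle_xy, lm1_xy, lm2_xy):
    """Angle at the vehicle between the lines of sight to the two landmarks."""
    v1 = (lm1_xy[0] - vehicle_xy[0], lm1_xy[1] - vehicle_xy[1])
    v2 = (lm2_xy[0] - vehicle_xy[0], lm2_xy[1] - vehicle_xy[1])
    cos_phi = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_phi))))

def pick_best_constellation(vehicle_xy, constellation_records):
    """Choose the record whose viewing angle deviates least from 90 degrees."""
    return min(
        constellation_records,
        key=lambda rec: abs(90.0 - viewing_angle_deg(vehicle_xy, *rec["landmarks"])),
    )
```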
Fig. 3 shows a flow chart of a method 300 for said determining according to an embodiment. The method 300 can be implemented for determining landmarks for locating moving objects. The method 300 for determining can be implemented in conjunction with the determination device from fig. 1 or a similar determination device.
The method 300 has a step 310 of generating at least one constellation data record 150 using a determination rule and landmark data. The landmark data represents landmarks in the surroundings of the object acquired by a sensor, wherein the determination rule can be used to determine from the landmark data at least one pair of landmarks suitable for localization, wherein the ratio between the lateral distance between the trajectory of the object and the first landmark of the pair and the landmark distance between the first landmark and the second landmark of the pair lies within a predetermined suitable value range. The at least one constellation data record has position data of the geometric constellation, relative to the object, of a pair of landmarks suitable for localization.
The method 300 has a step 320 of extracting, a step 330 of predetermining and/or a step 340 of adding. Here, the extracting step 320 and the predetermining step 330 can be carried out before the generating step 310, wherein the adding step 340 can be carried out after the generating step 310. In the extracting step 320, the landmark data is extracted from raw landmark data, which represents the total number of landmarks in the surroundings of the object, for those landmarks whose visibility-independent re-identification value is greater than a minimum re-identification value. Here, the generating step 310 is performed using the extracted landmark data. In the predetermining step 330, the suitable value range is predetermined using the determination rule and/or taking into account the landmark data, the position of the object, the trajectory of the object and/or the velocity of the object. In the adding step 340, position data of at least one pair of landmarks suitable for positioning from at least one generated constellation data record is added to the map data of the digital map.
Fig. 4 shows a flow diagram of a method 400 for positioning according to an embodiment. The method 400 can be implemented for locating moving objects. The method 400 for the localization can be implemented in conjunction with the localization device from fig. 2 or a similar localization device and, if appropriate, additionally with the determination device from fig. 1 or a similar determination device.
The method 400 has a step 410 of reading at least one constellation data record. Here, the constellation data records are generated according to the method of fig. 3 or a similar method. The method 400 can be implemented in conjunction with the method of fig. 3 or a similar method.
In the method 400, in a step 420 of the identification, which can be carried out subsequently with respect to the step 410 of reading, landmarks are identified in an image of the surroundings of the object, taking into account position data of the landmarks suitable for localization from at least one constellation data record. In a subsequent step 430 of the implementation in the method 400, the localization is carried out using the position data of the landmarks identified in the step of identifying.
According to one embodiment, in said step 430 of performing, the positioning is performed using landmark data, a position of the object, a trajectory of the object and/or a velocity of the object.
According to a further embodiment, the method 400 for the localization also has a step 440 of acquiring an image of the surroundings of the object. In this case, the acquiring step 440 can be carried out in particular before the reading step 410. Alternatively, the acquiring step 440 can also be carried out before the identifying step 420.
Fig. 5 shows a schematic illustration of the determination device 110 from fig. 1. For illustrative purposes, the determination device is shown separately from the measuring vehicle 100. In this case, the measuring vehicle 100 moves along a trajectory 105 or a pre-existing route section 105. The measuring vehicle 100 has at least one sensor device for acquiring the surroundings, for example a video device, a lidar or the like, and a sensor device for determining its position, and is provided with an electronic horizon.
The determination device 110 is designed to receive or read in, for example, image data 532 from a field-of-view sensor (video device, lidar) of the measuring vehicle 100, road-section data 534 about the pre-existing road section and the electronic horizon, vehicle position data 536 (RTK-GPS), and calculation data 542 relating to a model for evaluating the positioning error.
In the illustration of fig. 5, only three processing blocks 522, 524 and 526 of the determination device 110 are shown by way of example. In a first processing block 522, a favorable landmark constellation is calculated using the road-section data 534, the vehicle position data 536 and the calculation data 542. So-called ROI data 538 (ROI = region of interest) can be passed from the first processing block 522 to the second processing block 524. In the second processing block 524, landmarks are searched for within the ROI data 538. This can be done using SIFT (scale-invariant feature transform) and/or DIRD features (DIRD = illumination robust descriptor). Furthermore, the search for landmarks in the second processing block 524 is carried out using the image data 532. There is an iterative loop back from the second processing block 524 to the first processing block 522. In a third processing block 526, the selection of a constellation of landmarks that can achieve a high positioning accuracy is carried out using the landmarks from the second processing block 524 and the calculation data 542.
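A structural sketch of these three processing blocks might look as follows; function names and data shapes are assumptions, and the feature detectors themselves (SIFT, DIRD) are not implemented here.

```python
# Structural sketch of processing blocks 522, 524 and 526; function names and
# data shapes are assumptions, and the feature detectors are left as stubs.

def compute_roi(segment_data, vehicle_position, calc_data):
    """Block 522: predict regions of interest in which a favorable landmark
    constellation is expected, from road-section geometry and vehicle position."""
    ...

def find_landmarks(image_data, roi):
    """Block 524: search the regions of interest for re-identifiable features,
    e.g. using SIFT or DIRD descriptors."""
    ...

def select_constellation(candidates, calc_data):
    """Block 526: keep only landmark pairs whose geometric constellation
    promises a high positioning accuracy."""
    ...

def map_landmarks(image_data, segment_data, vehicle_position, calc_data, max_iter=5):
    """Iterate between ROI prediction and landmark search (the loop back from
    block 524 to block 522), then select the final constellation."""
    candidates = []
    for _ in range(max_iter):
        roi = compute_roi(segment_data, vehicle_position, calc_data)
        candidates = find_landmarks(image_data, roi)
        if candidates:
            break
    return select_constellation(candidates, calc_data)
```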
The determination device 110 is designed to output or provide map data 550, wherein the map data 550 represents an electronic map 555 or vector map of a digital map with at least one constellation data record. The electronic map 555 thus represents a plurality of landmarks LM and the trajectory 105 or road section 105. In other words, the electronic map 555 represents the road section 105 with the surveyed, optimized landmark constellation.
In other words, fig. 5 shows a device for optimized landmark determination or mapping. The measuring vehicle 100 is equipped, for example, with an imaging surroundings sensor (e.g. lidar, camera), RTK-GPS and a digital map or electronic horizon. Using the determination device 110, it is determined, on the basis of at least one geometric property of the pre-existing route section 105 (for example a curvature, a turning radius, a gradient or the like) and the current position of the vehicle 100, which landmark constellation is advantageous; suitable landmarks for this constellation are then searched for in the output image of the surroundings sensor, iterations being possible, and the 3D positions of the landmarks are then entered into the digital map 555.
Fig. 6 shows a schematic illustration of a camera model 600 for recording landmarks according to an embodiment. The camera model 600 also represents a selection of a sensor coordinate system and a sensor model for recording landmarks. The camera model 600 can be used in conjunction with the determination device from fig. 1 and the method for the determination from fig. 3 and/or the localization device from fig. 2 and the method for the localization from fig. 4.
The camera model 600 is based on the use of a polar/cylindrical coordinate system, wherein the origin is arranged in the focal point F of the camera and imaging takes place on a cylindrical surface or imaging surface 610 or mapping surface 610, similar to a panoramic camera. Fig. 6 shows the line of sight LOS between a point LM or object point LM of a landmark in the real world and the focal point F. The intersection of the line of sight LOS with the imaging surface 610 represents the image point IL. The focal point F is surrounded by the imaging surface 610.
The imaging surface 610 corresponds, for example, to a camera with an image resolution of R_cam pixels per 360 degrees. A reference line 615 extends from the focal point F to the imaging surface 610. Furthermore, an observation angle φ is shown in fig. 6, which extends between the line of sight LOS and the reference line 615.
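A minimal sketch of this cylindrical camera model, under the assumptions that the reference line 615 corresponds to the +x axis and that R_cam = 2000 pixels, maps a landmark position relative to the focal point F to a pixel column on the imaging surface:

```python
import math

# Sketch of the cylindrical ("panoramic") camera model of fig. 6: the image
# point of a landmark is determined solely by its observation angle against
# the reference line 615. R_cam is the image resolution in pixels per 360°.

def image_column(landmark_xy, r_cam=2000):
    """Map a landmark position (relative to F; reference line 615 = +x axis)
    to a pixel column on the cylindrical imaging surface 610."""
    phi_deg = math.degrees(math.atan2(landmark_xy[1], landmark_xy[0])) % 360.0
    return phi_deg / 360.0 * r_cam

# Example: a landmark 20 m ahead and 8 m to the left of the focal point
col = image_column((20.0, 8.0))   # about 121 pixel columns for R_cam = 2000
```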
Fig. 7 shows a schematic illustration of a landmark LM and a positioning error E according to an embodiment. Fig. 7 shows in particular a model of the landmark positioning error E. The landmark positioning error E is represented as a sphere or circle with diameter E, wherein the actual position of the landmark LM need not coincide with the center of the circle but lies within the sphere or circle.
FIG. 8 shows a schematic top view on an error model according to one embodiment. In other words, fig. 8 shows the geometric error model in a 2D top view. The error model is based on the camera model from fig. 6, for example.
The imaging surface 610 of the cylindrical camera is shown in a 2D top view in fig. 8. Furthermore, the positions of two landmarks L1 and L2 in the world coordinate system WKOS are given by way of example in fig. 8; these landmarks are seen by the camera at the image points IL1 and IL2. Two lines of sight LOS1 and LOS2 are also indicated, each of which extends from a landmark to the focal point of the camera, which is the sought position Pf. Thus, the first line of sight LOS1 extends from the first landmark L1 through the first image point IL1 to the sought position Pf, wherein the second line of sight LOS2 extends from the second landmark L2 through the second image point IL2 to the sought position Pf. The lines of sight LOS1 and LOS2 enclose an angle or viewing angle φ.
In the localization approach considered here, in conjunction with the determination device from fig. 1 and the method for determining from fig. 3 and/or the positioning device from fig. 2 and the method for positioning from fig. 4, the respective lines of sight LOS1 and LOS2 are established from the given coordinates (WKOS) of the landmarks L1 and L2 and the measured image points IL1 and IL2; these lines of sight intersect in the sought position Pf. This creates a constellation or geometric constellation between the landmarks L1 and L2 that can be used for position determination.
Fig. 9 shows a schematic illustration of a constellation of the landmarks L1 and L2 according to an embodiment. The constellation of the landmarks L1 and L2 corresponds, for example, to the constellation from fig. 8. Here, the constellation of the landmarks L1 and L2 is shown relative to the sought position Pf, which represents a moving object, for example a vehicle or a similar object. The unknown position Pf of the vehicle can thus be established from the two known landmarks L1 and L2, which are observed by the (panoramic) camera of the vehicle.
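The localization principle of figs. 8 and 9 can be sketched in a simplified 2D form as follows. This is an illustration under simplifying assumptions (world-frame bearings recovered from the image points, ideal error-free measurements), not the patented algorithm.

```python
import math

# Simplified 2D sketch of figs. 8 and 9: the sought position Pf is the
# intersection of the two lines of sight LOS1 and LOS2, each defined by a known
# landmark position and the bearing recovered from its image point.

def intersect_lines_of_sight(lm1, bearing1_deg, lm2, bearing2_deg):
    """Intersect two rays starting at the landmarks and pointing back toward
    the camera along the measured world-frame bearings (degrees)."""
    d1 = (math.cos(math.radians(bearing1_deg)), math.sin(math.radians(bearing1_deg)))
    d2 = (math.cos(math.radians(bearing2_deg)), math.sin(math.radians(bearing2_deg)))
    # Solve lm1 + t1*d1 = lm2 + t2*d2 for t1 (2x2 system, Cramer's rule).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        raise ValueError("lines of sight are (nearly) parallel")
    rx, ry = lm2[0] - lm1[0], lm2[1] - lm1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (lm1[0] + t1 * d1[0], lm1[1] + t1 * d1[1])

# Example: landmarks at (0, 10) and (30, 10); the bearings point from each
# landmark toward the true (unknown) position (12, 0).
pf = intersect_lines_of_sight((0.0, 10.0), math.degrees(math.atan2(-10.0, 12.0)),
                              (30.0, 10.0), math.degrees(math.atan2(-10.0, -18.0)))
# pf is approximately (12.0, 0.0)
```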
The main error causes for determining the constellation of landmarks are exemplarily shown in fig. 10 to 14. This involves, on the one hand, position errors of the landmarks L1 and L2 and, on the other hand, measurement errors of the image points in the camera plane.
Fig. 10 shows an error diagram as a function of the viewing angle φ according to an embodiment. Here, the observation angle φ in degrees is plotted on the abscissa, while the ratio between the maximum error D_max and the landmark positioning error E shown in fig. 7 is plotted on the ordinate. Also shown is a graph 1010 of the error as a function of the viewing angle φ. The graph 1010 has a minimum at an observation angle φ of about 90 degrees.
Fig. 11 shows a schematic illustration of a geometric error model for a first constellation of the landmarks L1 and L2 according to an embodiment. The first landmark L1 and the second landmark L2 are shown here, each with its positioning error E and the corresponding intersecting lines of sight, for a first value of the viewing angle φ. According to the embodiment shown in fig. 11, the viewing angle φ is an acute angle. The intersection region of the lines of sight, taking into account the respective positioning errors E, represents the maximum error D_max from fig. 10. The sought position Pf is arranged in the intersection region.
Fig. 12 shows a schematic illustration of a geometric error model for a second constellation of the landmarks L1 and L2 according to an embodiment. The representation in fig. 12 corresponds to the representation in fig. 11, except that, according to the exemplary embodiment shown in fig. 12, the viewing angle φ is an obtuse angle, so that the intersection region representing the maximum error D_max is smaller than the intersection region in fig. 11.
Fig. 13 shows an error diagram as a function of the viewing angle φ according to an embodiment. The error diagram of fig. 13 corresponds to the error diagram of fig. 10, except that the maximum error D_max for a landmark positioning error E of, for example, 0.1 m is indicated on the ordinate axis. Also shown is a graph 1310 of the error as a function of the viewing angle φ. The graph 1310 has a minimum at an observation angle φ of about 90 degrees.
With additional reference to figs. 10 to 13, a landmark positioning error or position error E of the landmarks L1 and L2 results in a rhomboid intersection surface in which the sought position Pf of the moving object is arranged. The maximum error D_max depends on the viewing angle φ and the landmark positioning error E, as is evident from the following equations:
D_max / E = 2 cos(φ/2) / tan(φ/2), for 0° < φ ≤ 90°
D_max / E = 2 cos((180° - φ)/2) / tan((180° - φ)/2), for 90° < φ ≤ 180°
D_max = 2 E cos(φ/2) / tan(φ/2), for 0° < φ ≤ 90°
D_max = 2 E cos((180° - φ)/2) / tan((180° - φ)/2), for 90° < φ ≤ 180°
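The equations above can be transcribed directly; the following sketch reproduces the symmetric error curves of figs. 10 and 13 with their minimum at a viewing angle of 90°.

```python
import math

# Direct transcription of the D_max equations above.

def d_max(phi_deg, e_m=1.0):
    """Maximum positioning error D_max for a viewing angle phi (0° < phi < 180°)
    and a landmark positioning error E (in metres)."""
    psi = phi_deg if phi_deg <= 90.0 else 180.0 - phi_deg
    half = math.radians(psi / 2.0)
    return 2.0 * e_m * math.cos(half) / math.tan(half)

# For E = 0.1 m, as in fig. 13: about 0.14 m at phi = 90°, growing steeply
# toward small or large viewing angles.
print(d_max(90.0, 0.1), d_max(45.0, 0.1), d_max(135.0, 0.1))
```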
Fig. 14 shows a schematic illustration of the determination error U according to an embodiment. In other words, fig. 14 shows the error U in determining an image point in the camera plane or imaging surface 610. In fig. 14, a landmark LM is shown, which is arranged at a distance R from the focal point F of the camera. Shown here is the sub-pixel measurement accuracy Δ_i in an image of resolution dim_imager. The sub-pixel measurement accuracy Δ_i results in a position error or measurement error U of the measured landmark LM at the distance R. The measurement error is given by:
U = (2 π R Δ_i) / dim_imager
Using a sub-pixel measurement accuracy Δ_i of 0.5 pixel and a horizontal image resolution dim_imager of 2000 pixels, the above equation yields, for example, a measurement error U of 0.02 m for a distance R of 10 m, 0.03 m for R = 20 m, 0.05 m for R = 30 m, 0.06 m for R = 40 m, 0.08 m for R = 50 m, 0.09 m for R = 60 m, and 0.11 m for R = 70 m.
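A short numerical check of the quoted values, using the same parameter assumptions as in the text:

```python
import math

# U = 2*pi*R*Delta_i / dim_imager with Delta_i = 0.5 pixel and
# dim_imager = 2000 pixels, evaluated for the distances quoted above.

def measurement_error(r_m, delta_i_px=0.5, dim_imager_px=2000):
    return 2.0 * math.pi * r_m * delta_i_px / dim_imager_px

for r in (10, 20, 30, 40, 50, 60, 70):
    print(f"R = {r:2d} m  ->  U = {measurement_error(r):.2f} m")
# prints 0.02, 0.03, 0.05, 0.06, 0.08, 0.09, 0.11 m
```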
Fig. 15 shows a schematic illustration for determining a landmark observation angle phi according to one embodiment. The illustration in fig. 15 is here similar to the partial sections of the illustrations of fig. 1 or 2. Fig. 15 shows a moving object in the form of a vehicle, for example a measuring vehicle from fig. 1 or a user vehicle from fig. 2. According to the exemplary embodiment shown in fig. 15, the vehicle travels along a trajectory 105 on the road. Furthermore, the movement of the vehicle along the trajectory 105 is shown with a movement parameter x, wherein the movement parameter x represents the vehicle position, speed or similar parameters.
The first landmark L1 is arranged at a lateral distance a from the trajectory 105. The second landmark L2 is arranged at a landmark spacing L from the first landmark L1. According to the exemplary embodiment shown in fig. 15, the landmark spacing L extends along the trajectory 105. Thus, the second landmark L2 also has the lateral spacing a from the trajectory 105. Furthermore, the lines of sight LOS1 and LOS2 between the landmarks L1 and L2 and the vehicle are shown in fig. 15. Here, the first line of sight LOS1 extends from the first landmark L1 to the vehicle, wherein the second line of sight LOS2 extends from the second landmark L2 to the vehicle. The landmark observation angle φ extends between the first line of sight LOS1 and the second line of sight LOS2.
As shown in fig. 15, a landmark observation angle Φ between two landmarks L1 and L2 can be expressed as:
φ(x, a, L) = 180° - (arctan(x/a) + arctan((L - x)/a))
Therefore, the maximum error D_max depends on x, a and L. The optimized constellation of the landmarks L1 and L2, i.e. the ratio a/L, can be derived for the situation shown in fig. 15 by the following optimization rule:
(The optimization rule is rendered as a formula image in the original publication.)
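Since the optimization rule itself is only available as an image, the following sketch works under the assumption that the ratio a/L is chosen so as to minimize the worst-case error D_max along the traverse 0 ≤ x ≤ L. Under this assumption the optimum comes out in the region of a/L of roughly 0.35 to 0.4, which is consistent with the minimum at approximately 0.4 shown in fig. 18.

```python
import math

# Hedged numerical sketch: the objective (minimizing the worst-case D_max/E
# over the traverse) is an assumption; the formula image in the original may
# use a different aggregation.

def viewing_angle_deg(x, a, l):
    # phi(x, a, L) as given above; D_max is symmetric in phi about 90°,
    # so the angle convention does not affect the optimum.
    return 180.0 - (math.degrees(math.atan(x / a)) + math.degrees(math.atan((l - x) / a)))

def d_max_over_e(phi_deg):
    psi = phi_deg if phi_deg <= 90.0 else 180.0 - phi_deg
    half = math.radians(psi / 2.0)
    return 2.0 * math.cos(half) / math.tan(half)

def worst_case_error(ratio, l=100.0, steps=200):
    a = ratio * l
    return max(d_max_over_e(viewing_angle_deg(i / steps * l, a, l))
               for i in range(steps + 1))

best_ratio = min((i / 100.0 for i in range(20, 61)), key=worst_case_error)
print(best_ratio, worst_case_error(best_ratio))
# roughly 0.36, with a worst-case D_max of about 2.3 times E
```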
Fig. 16 shows an optimization chart with respect to a constellation of landmarks according to one embodiment. In other words, fig. 16 shows an example of the optimization of a constellation of landmarks suitable for localization. In this case, the optimization is carried out in particular in conjunction with the determination device from fig. 1 and the method for determining from fig. 3 and/or with the positioning device from fig. 2 and the method for positioning from fig. 4. The optimization can also be performed using the optimization rule described with reference to fig. 15.
Here, the optimization chart shows the movement parameter x from fig. 15 on the abscissa axis, while the landmark observation angle φ in degrees associated with the movement parameter x is plotted on the ordinate axis. A family 1600 of curves for different ratios a/L is shown in the optimization chart.
Fig. 17 shows an optimization chart with respect to a constellation of landmarks according to one embodiment. The diagram in fig. 17 corresponds to the diagram from fig. 16, except that the maximum error D_max associated with the movement parameter x, relative to the landmark positioning error E, is plotted on the ordinate axis. Another family 1700 of curves for different ratios a/L is shown in fig. 17.
Fig. 18 shows an optimization chart with respect to a constellation of landmarks according to one embodiment. In other words, fig. 18 shows an example of a constellation suitable for positioning for optimizing signposts. In this case, the optimization is combined in particular with the determination device from fig. 1 and the method for determining from fig. 3 and/or with the positioning device from fig. 2 and the method for positioning from fig. 4. The optimization can also be performed using the optimization rules described with reference to fig. 15.
The optimization diagram shows the ratio a/L on the abscissa; the expression plotted on the ordinate axis is rendered as a formula image in the original publication. A graph 1810 is shown in the optimization chart, which has a minimum at an a/L of approximately 0.4, as indicated by the arrow. A constellation of landmarks with a/L of approximately 0.4 is therefore particularly advantageous.
The embodiments and the background are summarized again below with reference to figs. 1 to 18 and described in other words.
A number of systems for highly automated driving, robotics and augmented reality are increasingly based on the analysis of highly accurately measured landmarks LM, L1, L2. From these, the exact position of the vehicle 200 or robot in the world coordinate system WKS can be determined. How accurately the positioning can be carried out depends in particular on the constellation of the landmarks LM, L1, L2 and their distance from the vehicle 200. Current systems, such as the video SLAM method (SLAM = Simultaneous Localization and Mapping), require the analysis of a multiplicity of landmarks, from which the position can then be calculated, for example by bundle adjustment.
According to one embodiment, a system or method for the optimized determination of landmarks LM, L1, L2 is provided, wherein the determination of the landmarks LM, L1, L2 and their constellations is optimized as a function of the trajectory 105 or the geometric properties of the digital road map in which they are located. The system can be used both when mapping the landmarks LM, L1, L2 and during actual automated driving, and helps to reduce the number of landmarks LM, L1, L2 to be stored and the number of landmarks LM, L1, L2 to be analyzed. A software product for the above system or a method for a map provider is also provided, as well as methods for other applications, such as augmented-reality systems, which need to be positioned accurately in order to blend a virtual view with a real view.
To improve the accuracy of landmark-based positioning, the achievable accuracy of landmark-based positioning can be calculated according to embodiments. Highly automated driving requires, in particular, exact positioning on the order of, for example, decimeters. Typical positioning methods rely on highly precise 3D landmarks. According to embodiments, it can be determined how accurate the position of the respective landmarks LM, L1, L2 should be in order to obtain the required accuracy, how many landmarks LM, L1, L2 are needed, and/or how a favorable geometric distribution of landmarks is achieved.
The procedure for calculating the achievable accuracy of the landmark-based positioning includes, for example, the selection of a suitable sensor coordinate system, the modeling of landmark positioning errors, the modeling of simplified geometric errors in a 2D overhead view, the general description of position errors as a function of the accuracy of the landmarks LM, L1, L2, the viewing angle between the landmarks LM, L1, L2 and the measurement accuracy in the image plane, the checking of the positioning situation in a road scene for highly automated driving, and the steps for generating a suitable landmark distribution, as is illustrated in fig. 1 to 18.
If the example comprises an "and/or" connection between a first feature and a second feature, this can be interpreted as an example having both the first feature and the second feature according to one embodiment and either only the first feature or only the second feature according to a further embodiment.

Claims (7)

1. A method (300) for determining landmarks (L1, L2; LM) for locating moving objects (200), wherein the method has the following steps:
generating (310) at least one constellation data record (150) using a determination rule (140; 542) and landmark data (130), wherein the landmark data (130) represents landmarks (L1, L2; LM) in the surroundings of the object (200) acquired by sensors, wherein the determination rule (140; 542) is usable in order to determine from the landmark data (130) at least one pair of landmarks (L1, L2; LM) suitable for localization, wherein a ratio between a lateral spacing (a) between a trajectory (105) of the object (200) and a first landmark (L1) of the pair and a landmark spacing (L) between the first landmark (L1) and a second landmark (L2) of the pair is in a predetermined suitable value range, wherein the at least one constellation data record (150) has position data of the geometric constellation, relative to the object (200), of a pair of landmarks (L1, L2; LM) suitable for localization, and
a step (330) of predetermining the range of suitable values using the determination rule (140; 542) and/or taking into account the landmark data (130) and the position (536) of the object (200) and the trajectory (105; 534) of the object (200) and/or the velocity (x) of the object (200).
2. The method (300) of claim 1, having the step (320) of extracting landmark data (130) from raw landmark data (532) representing a total number of landmarks (L1, L2; LM) in the surroundings of the object (200), wherein the step (310) of generating is carried out using the extracted landmark data (130).
3. The method (300) according to claim 1 or 2, having the step (340) of adding position data of at least one pair of landmarks (L1, L2; LM) suitable for localization from at least one generated constellation data record (150) to the map data (550) of the digital map (555).
4. A method (400) for locating a moving object (200), wherein the method (400) has the following steps:
reading (410) at least one constellation data record (150) representing a data record generated according to the method (300) of any one of the preceding claims;
identifying (420) landmarks (L1, L2; LM) in the image of the surroundings of the object (200) taking into account position data of landmarks (L1, L2; LM) from the at least one constellation data record (150) suitable for localization; and
-performing (430) a localization using the position data of the landmarks (L1, L2; LM) identified in the identifying step (420), wherein in the performing step (430) the localization is performed using the landmark data (130) and the position (536) of the object (200) and the trajectory (105; 534) of the object (200) and/or the velocity (x) of the object (200).
5. The method (400) of claim 4, having the step of acquiring (440) an image of the surroundings of the object (200).
6. An apparatus (110; 210) arranged to perform the method (300; 400) according to any one of the preceding claims.
7. A machine-readable storage medium, on which a computer program is stored, which is provided for carrying out the method according to any one of the preceding claims.
CN201610941150.1A 2015-10-26 2016-10-25 Method and device for determining landmarks and method and device for positioning Active CN107044853B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102015220831.1 2015-10-26
DE102015220831.1A DE102015220831A1 (en) 2015-10-26 2015-10-26 A method and apparatus for determining landmarks for a location determination for a moving object and method and apparatus for locating a moving object

Publications (2)

Publication Number Publication Date
CN107044853A CN107044853A (en) 2017-08-15
CN107044853B true CN107044853B (en) 2021-06-15

Family

ID=58490179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610941150.1A Active CN107044853B (en) 2015-10-26 2016-10-25 Method and device for determining landmarks and method and device for positioning

Country Status (2)

Country Link
CN (1) CN107044853B (en)
DE (1) DE102015220831A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017222810A1 (en) * 2017-12-14 2019-06-19 Robert Bosch Gmbh Method for creating a feature-based localization map for a vehicle taking into account characteristic structures of objects
DE102018209366B4 (en) 2018-06-12 2020-07-02 Audi Ag Method for determining a position and / or orientation of a device
CN109862509B (en) * 2019-03-27 2021-05-28 合肥科塑信息科技有限公司 Sensing node positioning system supporting WLAN fingerprint positioning
DE102019206918B3 (en) * 2019-05-13 2020-10-08 Continental Automotive Gmbh Position determination method and position determination device
DE102019003473A1 (en) 2019-05-16 2020-01-02 Daimler Ag Process for reliable vehicle localization using constellation code tables
CN111089568B (en) * 2019-12-25 2023-04-14 上海点甜农业专业合作社 Road sign calibration instrument based on RTK + camera

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007267231A (en) * 2006-03-29 2007-10-11 Toyota Motor Corp Optical axis offset detecting device of on-board camera and optical axis offset detecting method of the on-board camera
CN104457742A (en) * 2014-12-05 2015-03-25 歌尔声学股份有限公司 Target positioning method and target positioning equipment of object

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09218712A (en) * 1996-02-08 1997-08-19 Murata Mach Ltd Guide system of unmanned vehicle
US8381982B2 (en) * 2005-12-03 2013-02-26 Sky-Trax, Inc. Method and apparatus for managing and controlling manned and automated utility vehicles
CN101762274B (en) * 2010-02-01 2011-11-09 北京理工大学 Observation condition number-based method for selecting autonomously located road sign of deep space probe
DE102010042063B4 (en) * 2010-10-06 2021-10-28 Robert Bosch Gmbh Method and device for determining processed image data about the surroundings of a vehicle
BR112013026377A2 (en) * 2011-04-21 2016-12-27 Konecranes Plc techniques for positioning a vehicle

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007267231A (en) * 2006-03-29 2007-10-11 Toyota Motor Corp Optical axis offset detecting device of on-board camera and optical axis offset detecting method of the on-board camera
CN104457742A (en) * 2014-12-05 2015-03-25 歌尔声学股份有限公司 Target positioning method and target positioning equipment of object

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAVID SINRIECH et al. Landmark configuration for absolute positioning of autonomous vehicles. IIE Transactions, 2000, vol. 32 *
Self-localization of a mobile robot based on artificial landmarks and stereo vision; Liu Zhenyu (刘振宇) et al.; Computer Engineering and Applications; 2010-12-31; vol. 46, no. 9; pp. 190-211 *

Also Published As

Publication number Publication date
CN107044853A (en) 2017-08-15
DE102015220831A1 (en) 2017-04-27

Similar Documents

Publication Publication Date Title
CN107044853B (en) Method and device for determining landmarks and method and device for positioning
CN109297510B (en) Relative pose calibration method, device, equipment and medium
CN110869700B (en) System and method for determining vehicle position
US10807236B2 (en) System and method for multimodal mapping and localization
CN108694882B (en) Method, device and equipment for labeling map
CN109270545B (en) Positioning true value verification method, device, equipment and storage medium
Lategahn et al. Vision-only localization
US20200124421A1 (en) Method and apparatus for estimating position
CA2903298A1 (en) Operating device, operating system, operating method, and program therefor
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
JP6479296B2 (en) Position / orientation estimation apparatus and position / orientation estimation method
JP2008249555A (en) Position-specifying device, position-specifying method, and position-specifying program
CN108885113A (en) Method for determining the posture of the vehicle at least partly automating traveling in ambient enviroment by terrestrial reference
US20190073542A1 (en) Vehicle lane detection system
JP2011112556A (en) Search target position locating device, method, and computer program
US20190293444A1 (en) Lane level accuracy using vision of roadway lights and particle filter
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN104166995A (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
CN110298320B (en) Visual positioning method, device and storage medium
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
US20230079899A1 (en) Determination of an absolute initial position of a vehicle
Del Pizzo et al. Reliable vessel attitude estimation by wide angle camera
CN113566817B (en) Vehicle positioning method and device
US11514588B1 (en) Object localization for mapping applications using geometric computer vision techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant