WO2021091989A1 - System and method for precise vehicle positioning using bar codes, polygons and projective transformation - Google Patents


Info

Publication number
WO2021091989A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
fixed object
recited
focal point
relative
Prior art date
Application number
PCT/US2020/058850
Other languages
French (fr)
Inventor
Jonathan E. STONE
Matthew BERKEMEIER
Original Assignee
Continental Automotive Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Automotive Systems, Inc. filed Critical Continental Automotive Systems, Inc.
Publication of WO2021091989A1 publication Critical patent/WO2021091989A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/026: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring distance between sensor and object
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/26: Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments

Definitions

  • the present disclosure relates to vehicle positioning systems, and more specifically to vehicle positioning systems for determining a position in the absence of external signals and information.
  • A vehicle's global positioning system (GPS) utilizes external signals broadcast from a constellation of GPS satellites. The position of a vehicle is determined based on the received signals for navigation and, increasingly, for autonomous vehicle functions. In some instances, a reliable GPS signal may not be available. However, autonomous vehicle functions still require sufficiently precise positioning information.
  • a vehicle positioning system includes, among other possible things, a camera disposed on a vehicle for reading an optic label disposed on a fixed object and for obtaining an image of a polygonal shape also disposed on the fixed object and a controller operable to determine a distance and orientation of a focal point relative to the fixed object based on perspective dimensions of the polygonal shape captured by the camera and actual dimensions of the polygonal shape read from the optic label.
  • the optic label includes coordinates of the fixed object and the controller is operable to determine a position of the vehicle based on the determined distance and orientation of the focal point relative to the fixed object and the coordinates of the fixed object.
  • the optic label comprises a two-dimensional bar code that includes coordinate information of the fixed object and dimensional information of the polygonal shape.
  • the polygonal shape is a box surrounding the optic label.
  • the fixed object comprises a street sign disposed adjacent a roadway.
  • the controller is configured to determine a size of the image of the polygonal shape at the focal point.
  • the controller is configured to determine a set distance of the focal point of the image of the fixed object relative to the vehicle and to the fixed object.
  • the camera is a fisheye camera disposed at the back of the vehicle.
  • the camera is disposed in a forward looking direction on the vehicle.
  • a method of determining a position of a vehicle includes, among other possible things, reading an optic label disposed on a fixed object with a sensor disposed on a vehicle, capturing an image of a polygonal shape disposed on the fixed object with a camera, and determining a distance and orientation of the camera relative to the fixed object based on a projected geometry of perspective dimensions of the captured image of the polygonal shape and actual dimensions of the polygonal shape read from the optic label.
  • a further embodiment of the foregoing method comprises reading coordinates from the optic label indicative of a position of the fixed object and determining a position of the vehicle with the determined distance and orientation of the camera relative to the fixed object and the coordinates read from the optic label.
  • the optic label comprises a two-dimensional bar code and the polygonal shape comprises a box having equal length sides surrounding the two-dimensional bar code.
  • a further embodiment of any of the foregoing methods comprises determining a size of the captured image and determining a focal point of the captured image relative to the vehicle.
  • determining the focal point further comprises determining a position of the focal point relative to the vehicle and a position of the focal point relative to the fixed object.
  • a further embodiment of any of the foregoing methods comprises, determining a position of the vehicle relative to the fixed object based on the distance between the vehicle and the focal point and the distance between the focal point and the fixed object.
  • a vehicle positioning system includes, among other possible things, a controller configured to obtain information indicative of a position of a fixed object and physical dimensions of the fixed object, analyze an image of the fixed object captured by a camera disposed on the vehicle, determine a size and location of the captured image, determine a geometric relationship between the captured image and the physical dimensions of the fixed object and determine a distance and orientation of a vehicle relative to the fixed object for determining a location of the vehicle.
  • the controller is further configured to determine a focal point of the captured image relative to the vehicle and the fixed object, and the size of the image at the focal point.
  • the controller is further configured to determine a location of the fixed object relative to the focal point based on perspective dimensions of the captured image and actual physical dimensions of the fixed object obtained by an optically readable label disposed on the fixed object.
  • the controller is configured to obtain coordinates and dimensions of the fixed object from the optically readable label.
  • the controller is further configured to determine a position of the vehicle relative to the fixed object based on the distance between the vehicle and the focal point and the distance between the focal point and the fixed object.
  • Figure 1 is a schematic view of an example roadway and sign including position and dimension information embedded in a machine-readable optical label.
  • Figure 2 is a schematic representation of an example method of determining vehicle position according to an embodiment.
  • Figure 3 is a schematic top view of a vehicle with rear facing camera relative to a sign.
  • Figure 4 is a schematic view of a sign with five points and embedded coordinates.
  • Figure 5 is an image showing corresponding image points after using a fisheye camera model.
  • a vehicle 10 is shown schematically along a roadway.
  • the vehicle 10 includes a vehicle positioning system 15 that reads information from a machine-readable optic label disposed on a fixed object.
  • the optic label includes information regarding the coordinate position of the fixed object and dimensions of a visible symbol or shape on the fixed object.
  • the vehicle 10 includes a controller 25 that uses the communicated dimensions to determine a position of the vehicle relative to the fixed object 14.
  • the position of the fixed object is communicated by the coordinates provided within the optic label 16.
  • the position of the vehicle 10 relative to the fixed object is determined based on a difference between the communicated actual dimensions of the visible symbol and dimensions of an image of the visual symbol captured by a camera disposed on the vehicle.
  • the example disclosed vehicle positioning system 15 enables a determination of a precise vehicle position without an external signal.
  • where GPS radio signals are not accessible (urban settings, forests, tunnels and inside parking structures), there are limited ways to precisely identify an object's position.
  • the disclosed system 15 and method provide an alternative means for determining a position of an object.
  • vehicle 10 includes at least one camera 12 that communicates information to a controller 25. It should be understood that a device separate from the camera 12 may be utilized to read the optic label. Information from the camera 12 may be limited to capturing the image 22 of the polygonal shape 34.
  • the example controller 25 may be a stand-alone controller for the example system and/or contained in software provided in a vehicle controller.
  • the camera 12 is shown as one camera, but may be multiple cameras 12 disposed at different locations on the vehicle 10. The camera 12 gathers images of objects along a roadway.
  • the example roadway includes a fixed structure, such as for example a road sign 14.
  • the example road sign 14 includes a machine-readable optic label 16 that contains information regarding the location of the road sign 14.
  • the optic label 16 further includes information regarding actual dimensions of a visible symbol 34.
  • the visible symbol is a box 34 surrounding the optic label 16.
  • the information regarding the box 34 includes height 20 and width 18.
  • the visible symbol is a box 34 with a common height and width 20, 18.
  • other polygon shapes with different dimensions could also be utilized and are within the contemplation of this disclosure.
  • the camera 12 captures an image 22 of the box 34 and communicates that captured image 22 to the controller 25.
  • the size of the captured image 22 will differ from the actual size of the box 34 due to the distance, angle and proximity of the camera 12 relative to the sign 14.
  • the differences between the captured image 22 and the actual size of the box 34 are due to the geometric perspective of the camera 12 relative to the box 34.
  • the controller 25 uses the known dimensions 20, 18 of the box 34 and the corresponding dimensions 24, 26, 28 and 30 of the captured images to determine a focal point 32.
  • the focal point 32 is determined utilizing projective geometric transformations based on the dimensions of the captured image 22 as compared to the actual dimensions communicated by the optic label 16.
  • the determined focal point 32 is at a set distance and orientation relative to the sign 14. The set distance and orientation are utilized to precisely position the vehicle 10 relative to the sign 14 and thereby determine a precise set of coordinates.
  • the captured image 22 is a perspective view of the actual box 34.
  • the geometry that produces the dimensions of the captured image 22, given the orientation of the vehicle 10 relative to the actual box 34, is determinable by known and understood predictive perspective geometric transform methods. Accordingly, the example system 15 determines the distance and orientation of the focal point 32 relative to the sign 14 given the perspective view represented by the captured image 22 of the known box 34 geometry.
  • the optic label 16 is a QR code or two-dimensional bar code. It should be appreciated that the optic label 16 may be any type of machine-readable label, such as a bar code. Moreover, although the example system 15 is disclosed by way of example as part of motor vehicle 10, it may be adapted to other applications, including other vehicles and hand-held devices.
  • the example disclosed system and method of positioning and localization uses computer readable labels and projective geometry to determine a size of an image and further a focal point of the captured image that is then utilized to determine a position of the vehicle.
  • the computer readable label is encoded with a position coordinate (e.g. GPS coordinates) and the actual physical dimensions of an accompanying polygon (e.g. a bounding box). From this encoded computer readable label (e.g. a QR or bar code) on a sign or fixed surface, the viewing object is able to read and interpret the position coordinate and polygon dimensions and perform a projective geometric transformation using the perspective dimensions it observes of the polygon in conjunction with the known polygon dimensions.
  • referring to Figures 3 and 4, an example method of localization of a vehicle 10 from a sign 14 with embedded coordinates 16 is schematically shown.
  • the sign 14 includes multiple points with known physical dimensions and coordinates. These points are shown by way of example as p1, p2, ..., pn.
  • the vehicle 10 has some unknown position and orientation with respect to the world and the sign 14. We can represent this with a position vector p and a rotation matrix R. This combination, (p, R), involves 6 unknown variables (e.g. 3 position components and 3 Euler angles).
  • the vehicle 10 has a camera 12 which images the points on the sign 14.
  • the points in the image have only 2 components. Let these points be p̄1, p̄2, ..., p̄n.
  • the indices indicate corresponding sign points (3 components) and image points (2 components).
  • the camera 12 has some intrinsic and extrinsic parameters. If the camera 12 is calibrated, then these are all known. These will be included in the map P.
  • this yields a total of 2n equations in the 6 unknowns (p, R). At least three sign points are needed to determine the vehicle position and orientation.
  • a fisheye camera is mounted on the rear of a truck.
  • although a fisheye camera is disclosed by way of example, other camera configurations could be utilized and are within the contemplation of this disclosure.
  • although the example camera is mounted at the rear of the truck, other locations on the vehicle may also be utilized within the contemplation and scope of this disclosure.
  • the example truck 10 is located at the origin of a coordinate system, and the vehicle longitudinal axis is aligned with the x-axis. It should be appreciated that such an alignment is provided by way of example and would not necessarily be the typical case.
  • the example coordinates include latitude, longitude and height, and may be converted to a local Cartesian coordinate system. In this disclosed example, the conversion to a Cartesian coordinate system is done.
  • the setup is illustrated in Figure 3, where a vehicle “sees” the sign 14 with its rear facing camera 12 (e.g. a backup camera).
  • Figure 4 shows the example sign 14 with the 5 points, which have their world coordinates embedded in the optic label 16.
  • Figure 5 shows the resulting image points when using the setup described, along with a specific fisheye camera model with specific extrinsic and intrinsic parameters.
  • Table 1 shows some example data generated by a model of a vehicle with an attached rear camera. In this case, 5 points are included, although more or fewer could be used.
  • Table 1 Example coordinates of 5 points and corresponding image coordinates.
  • Another disclosed example method of determining a vehicle position with the example system includes a one-shot approach.
  • a one-shot approach enables a determination of the vehicle position/orientation from a single measurement of a sign with multiple known points. As shown in Figure 4, there are multiple points on the sign with known world coordinates. For example, the sign includes points p1, p2, ..., pn.
  • the vehicle 10 has some unknown position and orientation with respect to the world and the sign.
  • the vehicle position is represented with a position vector p and a rotation matrix R.
  • the combination of the position vector and the rotation matrix, (p, R), provides 6 unknown variables (e.g. 3 position components and 3 Euler angles).
  • the example vehicle has a camera which images the points on the sign.
  • the points in the image have only 2 components.
  • the points are p̄1, p̄2, ..., p̄n.
  • the indices indicate corresponding sign points (3 components) and image points (2 components).
  • the camera 12 has some intrinsic and extrinsic parameters.
  • the example camera 12 is calibrated and therefore the intrinsic and extrinsic parameters are all known.
  • the intrinsic and extrinsic parameters are included in the map P. From the above known parameters, the following set of equations can be written: p̄k = P(p, R, pk), for k = 1, ..., n.
  • the example method provides a total of 2n equations and 6 unknowns (p, R). Accordingly, at least 3 sign points are needed to determine the vehicle position and orientation. As appreciated, although 3 sign points are utilized in this disclosed example, more points may be utilized within the contemplation and scope of this disclosure.
  • Another disclosed example approach is to use one or more points of known locations and track those points over time as the vehicle moves. When points are tracked, it may be possible to utilize fewer than 3 points due to the use of a time history.
  • Vehicle relative motion is calculated based on measured wheel rotations, steering wheel angle, vehicle speed, vehicle yaw rate, and possibly other vehicle data (e.g. IMU).
  • vehicle information is combined with a vehicle model.
  • the vehicle position and orientation can be determined. Once convergence to the correct position and orientation has occurred, the correct position and orientation can be maintained if the known points are still being tracked.
  • Another approach to solve this problem would be a Kalman filter or other nonlinear observer.
  • the unknown states would be the vehicle position and orientation.
  • a vehicle model could be used to predict future states from current states.
  • the measurement would consist of the image coordinate(s) of the known point position(s) on the sign.
  • Other methods also exist to solve this type of problem, such as nonlinear least squares or optimization methods.
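The nonlinear least-squares/optimization approach listed above can be sketched as follows. This is an illustrative sketch, not the disclosed implementation: it assumes a simple pinhole camera with an arbitrary focal length `f` standing in for the calibrated map P, and solves the 2n residual equations p̄k = P(p, R, pk) for the 6 unknowns (3 position components, 3 Euler angles) by Gauss-Newton with a numeric Jacobian. All names and values are hypothetical.

```python
import numpy as np

def rot(yaw, pitch, roll):
    """ZYX Euler-angle rotation matrix (3 of the 6 pose unknowns)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def project(state, sign_pts, f=800.0):
    """Stand-in for the map P: 3-component world points -> 2-component image points."""
    p, angles = state[:3], state[3:]
    cam = (rot(*angles).T @ (sign_pts - p).T).T   # world -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]           # perspective division

def solve_pose(sign_pts, image_pts, x0, iters=50):
    """Gauss-Newton on the 2n residuals in the 6 unknowns (p, R)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        r = (project(x, sign_pts) - image_pts).ravel()
        J = np.empty((r.size, 6))
        for j in range(6):                        # forward-difference Jacobian
            dx = np.zeros(6)
            dx[j] = 1e-6
            J[:, j] = ((project(x + dx, sign_pts) - image_pts).ravel() - r) / 1e-6
        # damped normal equations; damping keeps the solve well-posed
        x = x - np.linalg.solve(J.T @ J + 1e-9 * np.eye(6), J.T @ r)
    return x
```

With five coplanar sign points (the four box corners plus the center, as in Figure 4) the 10 equations over-determine the 6 unknowns, so the solver also tolerates measurement noise; a Kalman filter, as noted above, would instead fold the same residuals in over time.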

Abstract

A disclosed vehicle positioning system includes a camera disposed on a vehicle for reading an optic label disposed on a fixed object and for obtaining an image of a polygonal shape also disposed on the fixed object and a controller operable to determine a distance and orientation of a focal point relative to the fixed object based on perspective dimensions of the polygonal shape captured by the camera and actual dimensions of the polygonal shape read from the optic label.

Description

SYSTEM AND METHOD FOR PRECISE VEHICLE POSITIONING USING BAR CODES, POLYGONS AND PROJECTIVE TRANSFORMATION
CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims priority to United States Provisional Application No. 62/930,757, filed on November 5, 2019, which is incorporated herein by reference.
TECHNICAL FIELD
[0002] The present disclosure relates to vehicle positioning systems, and more specifically to vehicle positioning systems for determining a position in the absence of external signals and information.
BACKGROUND
[0003] A vehicle's global positioning system (GPS) utilizes external signals broadcast from a constellation of GPS satellites. The position of a vehicle is determined based on the received signals for navigation and, increasingly, for autonomous vehicle functions. In some instances, a reliable GPS signal may not be available. However, autonomous vehicle functions still require sufficiently precise positioning information.
[0004] The background description provided herein is for the purpose of generally presenting a context of this disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
SUMMARY
[0005] A vehicle positioning system according to a disclosed example embodiment includes, among other possible things, a camera disposed on a vehicle for reading an optic label disposed on a fixed object and for obtaining an image of a polygonal shape also disposed on the fixed object and a controller operable to determine a distance and orientation of a focal point relative to the fixed object based on perspective dimensions of the polygonal shape captured by the camera and actual dimensions of the polygonal shape read from the optic label. [0006] In a further embodiment of the foregoing vehicle positioning system, the optic label includes coordinates of the fixed object and the controller is operable to determine a position of the vehicle based on the determined distance and orientation of the focal point relative to the fixed object and the coordinates of the fixed object.
[0007] In a further embodiment of any of the foregoing vehicle positioning systems, the optic label comprises a two-dimensional bar code that includes coordinate information of the fixed object and dimensional information of the polygonal shape.
[0008] In a further embodiment of any of the foregoing vehicle positioning systems, the polygonal shape is a box surrounding the optic label.
[0009] In a further embodiment of any of the foregoing vehicle positioning systems, the fixed object comprises a street sign disposed adjacent a roadway.
[0010] In a further embodiment of any of the foregoing vehicle positioning systems, the controller is configured to determine a size of the image of the polygonal shape at the focal point.
[0011] In a further embodiment of any of the foregoing vehicle positioning systems, the controller is configured to determine a set distance of the focal point of the image of the fixed object relative to the vehicle and to the fixed object.
[0012] In a further embodiment of any of the foregoing vehicle positioning systems, the camera is a fisheye camera disposed at the back of the vehicle.
[0013] In a further embodiment of any of the foregoing vehicle positioning systems, the camera is disposed in a forward looking direction on the vehicle.
[0014] A method of determining a position of a vehicle according to another exemplary embodiment of this disclosure includes, among other possible things, reading an optic label disposed on a fixed object with a sensor disposed on a vehicle, capturing an image of a polygonal shape disposed on the fixed object with a camera, and determining a distance and orientation of the camera relative to the fixed object based on a projected geometry of perspective dimensions of the captured image of the polygonal shape and actual dimensions of the polygonal shape read from the optic label.
[0015] A further embodiment of the foregoing method comprises reading coordinates from the optic label indicative of a position of the fixed object and determining a position of the vehicle with the determined distance and orientation of the camera relative to the fixed object and the coordinates read from the optic label.
[0016] In a further embodiment of any of the foregoing methods, the optic label comprises a two-dimensional bar code and the polygonal shape comprises a box having equal length sides surrounding the two-dimensional bar code.
[0017] A further embodiment of any of the foregoing methods comprises determining a size of the captured image and determining a focal point of the captured image relative to the vehicle.
[0018] In a further embodiment of any of the foregoing methods, determining the focal point further comprises determining a position of the focal point relative to the vehicle and a position of the focal point relative to the fixed object.
[0019] A further embodiment of any of the foregoing methods comprises determining a position of the vehicle relative to the fixed object based on the distance between the vehicle and the focal point and the distance between the focal point and the fixed object. A vehicle positioning system according to another exemplary embodiment of this disclosure includes, among other possible things, a controller configured to obtain information indicative of a position of a fixed object and physical dimensions of the fixed object, analyze an image of the fixed object captured by a camera disposed on the vehicle, determine a size and location of the captured image, determine a geometric relationship between the captured image and the physical dimensions of the fixed object and determine a distance and orientation of a vehicle relative to the fixed object for determining a location of the vehicle.
[0020] In a further embodiment of the foregoing vehicle positioning system, the controller is further configured to determine a focal point of the captured image relative to the vehicle and the fixed object, and the size of the image at the focal point.
[0021] In a further embodiment of any of the foregoing vehicle positioning systems, the controller is further configured to determine a location of the fixed object relative to the focal point based on perspective dimensions of the captured image and actual physical dimensions of the fixed object obtained by an optically readable label disposed on the fixed object. [0022] In a further embodiment of any of the foregoing vehicle positioning systems, the controller is configured to obtain coordinates and dimensions of the fixed object from the optically readable label.
[0023] In a further embodiment of any of the foregoing vehicle positioning systems, the controller is further configured to determine a position of the vehicle relative to the fixed object based on the distance between the vehicle and the focal point and the distance between the focal point and the fixed object.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] Figure 1 is a schematic view of an example roadway and sign including position and dimension information embedded in a machine-readable optical label.
[0025] Figure 2 is a schematic representation of an example method of determining vehicle position according to an embodiment.
[0026] Figure 3 is a schematic top view of a vehicle with rear facing camera relative to a sign.
[0027] Figure 4 is a schematic view of a sign with five points and embedded coordinates.
[0028] Figure 5 is an image showing corresponding image points after using a fisheye camera model.
DETAILED DESCRIPTION
[0029] Referring to Figure 1, a vehicle 10 is shown schematically along a roadway. The vehicle 10 includes a vehicle positioning system 15 that reads information from a machine-readable optic label disposed on a fixed object. The optic label includes information regarding the coordinate position of the fixed object and dimensions of a visible symbol or shape on the fixed object.
[0030] The vehicle 10 includes a controller 25 that uses the communicated dimensions to determine a position of the vehicle relative to the fixed object 14. The position of the fixed object is communicated by the coordinates provided within the optic label 16. The position of the vehicle 10 relative to the fixed object is determined based on a difference between the communicated actual dimensions of the visible symbol and dimensions of an image of the visual symbol captured by a camera disposed on the vehicle.
[0031] Accordingly, the example disclosed vehicle positioning system 15 enables a determination of a precise vehicle position without an external signal. In cases where GPS radio signals are not accessible (urban settings, forests, tunnels and inside parking structures), there are limited ways to precisely identify an object's position. The disclosed system 15 and method provide an alternative means for determining a position of an object.
[0032] In the disclosed example, vehicle 10 includes at least one camera 12 that communicates information to a controller 25. It should be understood that a device separate from the camera 12 may be utilized to read the optic label. Information from the camera 12 may be limited to capturing the image 22 of the polygonal shape 34. The example controller 25 may be a stand-alone controller for the example system and/or contained in software provided in a vehicle controller. The camera 12 is shown as one camera, but may be multiple cameras 12 disposed at different locations on the vehicle 10. The camera 12 gathers images of objects along a roadway.
[0033] The example roadway includes a fixed structure, such as for example a road sign 14. The example road sign 14 includes a machine-readable optic label 16 that contains information regarding the location of the road sign 14. The optic label 16 further includes information regarding actual dimensions of a visible symbol 34. In this disclosed example, the visible symbol is a box 34 surrounding the optic label 16. The information regarding the box 34 includes height 20 and width 18. In this example, the visible symbol is a box 34 with a common height and width 20, 18. However, other polygon shapes with different dimensions could also be utilized and are within the contemplation of this disclosure.
[0034] The camera 12 captures an image 22 of the box 34 and communicates that captured image 22 to the controller 25. The size of the captured image 22 will differ from the actual size of the box 34 due to the distance, angle and proximity of the camera 12 relative to the sign 14. The differences between the captured image 22 and the actual size of the box 34 are due to the geometric perspective of the camera 12 relative to the box 34. The controller 25 uses the known dimensions 20, 18 of the box 34 and the corresponding dimensions 24, 26, 28 and 30 of the captured images to determine a focal point 32. The focal point 32 is determined utilizing projective geometric transformations based on the dimensions of the captured image 22 as compared to the actual dimensions communicated by the optic label 16. The determined focal point 32 is at a set distance and orientation relative to the sign 14. The set distance and orientation are utilized to precisely position the vehicle 10 relative to the sign 14 and thereby determine a precise set of coordinates.
[0035] The captured image 22 is a perspective view of the actual box 34. The geometry that produces the dimensions of the captured image 22, given the orientation of the vehicle 10 relative to the actual box 34, is determinable by known and understood predictive perspective geometric transform methods. Accordingly, the example system 15 determines the distance and orientation of the focal point 32 relative to the sign 14 given the perspective view represented by the captured image 22 of the known box 34 geometry.
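The projective-transform step described above can be sketched numerically: estimate the homography between the box's known corner coordinates and their perspective image coordinates by direct linear transform (DLT), then factor the camera-frame pose of the sign out of the homography under an assumed pinhole model. This is a sketch under stated assumptions, not the patented implementation; the corner ordering, side length, and focal length below are illustrative.

```python
import numpy as np

def homography(src, dst):
    """DLT estimate of the 3x3 H mapping planar src points to image dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)            # null vector of A, defined up to scale

def pose_from_box(corners_px, side, f):
    """Recover the sign's rotation and translation in the camera frame
    from the four imaged corners of a box of known side length."""
    s = side / 2.0
    plane = [(-s, -s), (s, -s), (s, s), (-s, s)]   # box corners on the sign plane
    K = np.array([[f, 0, 0], [0, f, 0], [0, 0, 1]], dtype=float)
    M = np.linalg.inv(K) @ homography(plane, corners_px)
    M /= np.linalg.norm(M[:, 0])           # first rotation column must be unit length
    if M[2, 2] < 0:                        # sign must lie in front of the camera
        M = -M
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t                            # distance to the sign is np.linalg.norm(t)
```

The recovered translation vector t plays the role of the set distance and orientation of the focal point 32 relative to the sign 14; combined with the sign coordinates read from the label, it fixes the vehicle position.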
[0036] In this example, the optic label 16 is a QR code or two-dimensional bar code. It should be appreciated that the optic label 16 may be any type of machine-readable label, such as a one-dimensional bar code. Moreover, although the example system 15 is disclosed by way of example as part of the motor vehicle 10, it may be adapted to other applications including other vehicles and handheld devices.
[0037] Accordingly, the example disclosed system and method of positioning and localization uses computer readable labels and projective geometry to determine a size of an image and a focal point of the captured image, which are then utilized to determine a position of the vehicle. The computer readable label (e.g. a QR or bar code) on a sign or fixed surface is encoded with a position coordinate (e.g. GPS coordinates) and the actual physical dimensions of an accompanying polygon (e.g. a bounding box). The viewing object reads and interprets the position coordinate and polygon dimensions and performs a projective geometric transformation using the perspective dimensions it observes of the polygon in conjunction with the known polygon dimensions.
[0038] Referring to Figures 3 and 4, an example method of localization of a vehicle 10 from a sign 14 with embedded coordinates 16 is schematically shown. The sign 14 includes multiple points of known physical dimensions and coordinates. These points are shown by way of example as p1, p2, ..., pn.

[0039] The vehicle 10 has some unknown position and orientation with respect to the world and the sign 14. This can be represented with a position vector p and a rotation matrix R. This combination, (p, R), involves 6 unknown variables (e.g. 3 position components and 3 Euler angles).
[0040] The vehicle 10 has a camera 12 which images the points on the sign 14. The points in the image have only 2 components. Let these points be p'1, p'2, ..., p'n. The indices indicate corresponding sign points (3 components) and image points (2 components).
[0041] The camera 12 has some intrinsic and extrinsic parameters. If the camera 12 is calibrated, then these are all known. These will be included in the map P.
[0042] A set of equations can be written as shown in the below examples:

p'1 = P(p, R, p1)
p'2 = P(p, R, p2)
...
p'n = P(p, R, pn)
[0043] The present disclosure relates to vehicle positioning systems, and more specifically to vehicle positioning systems for determining a position in the absence of external signals and information.
[0044] This gives a total of 2n equations and 6 unknowns (p, R). At least three sign points are needed to determine the vehicle position and orientation. In this disclosed specific embodiment, a fisheye camera is mounted on the rear of a truck. As appreciated, although a fisheye camera is disclosed by way of example, other camera configurations could be utilized and are within the contemplation of this disclosure. Moreover, although the example camera is mounted at the rear of the truck, other locations on the vehicle may also be utilized within the contemplation and scope of this disclosure.
[0045] The example truck 10 is located at the origin of a coordinate system, and the vehicle longitudinal axis is aligned with the x-axis. It should be appreciated that such an alignment is provided by way of example and would not necessarily be the typical case. Note that the example coordinates include latitude, longitude and height, and may be converted to a local Cartesian coordinate system. In this disclosed example, the conversion to a Cartesian coordinate system is performed.

[0046] In this disclosed example, the sign is 10 m behind the truck (world x = -10). In this example the sign includes 5 points: the center of a rectangle and its 4 corners. The rectangle is 30 cm wide and 50 cm tall.
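The conversion from latitude/longitude/height to a local Cartesian frame can be sketched with a small-area equirectangular approximation, adequate over a few kilometres around the sign; a production system would more likely use a proper ENU/ECEF transform. The function name and the spherical Earth radius are assumptions for illustration.

```python
import math

R_EARTH = 6378137.0  # WGS-84 equatorial radius, metres (spherical approximation)

def geodetic_to_local(lat, lon, h, lat0, lon0, h0):
    """Approximate local east/north/up offsets (metres) of (lat, lon, h)
    relative to an origin (lat0, lon0, h0), all angles in degrees."""
    east = math.radians(lon - lon0) * R_EARTH * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * R_EARTH
    up = h - h0
    return east, north, up
```

A 0.001 degree change in latitude maps to roughly 111.3 m north, which matches the familiar "one degree of latitude is about 111 km" rule of thumb.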
[0047] The setup is illustrated in Figure 3, where a vehicle “sees” the sign 14 with its rear facing camera 12 (e.g. backup camera). Figure 4 shows the example sign 14 with the 5 points, which have their world coordinates embedded in the optic label 16. Figure 5 shows the resulting image points when using the setup described, along with a specific fisheye camera model with specific extrinsic and intrinsic parameters.
[0048] Table 1 shows some example data generated by a model of a vehicle with an attached rear camera. In this case, 5 points are included, although more or fewer could be used.
(Table 1 appears as an image in the original publication.)
[0049] Table 1: Example coordinates of 5 points and corresponding image coordinates.
[0050] Another disclosed example method of determining a vehicle position with the example system includes a one-shot approach. A one-shot approach enables a determination of the vehicle position/orientation from a single measurement of a sign with multiple known points. As shown in Figure 4, there are multiple points on the sign with known world coordinates. For example, the sign includes points p1, p2, ..., pn.
[0051] The vehicle 10 has some unknown position and orientation with respect to the world and the sign. The vehicle position is represented with a position vector p and a rotation matrix R. The combination of the position vector and the rotation matrix, (p, R ), provides 6 unknown variables (e.g. 3 position components and 3 Euler angles).
[0052] The example vehicle has a camera which images the points on the sign. The points in the image have only 2 components. For example, the points are:
p'1, p'2, ..., p'n
[0054] The indices indicate corresponding sign points (3 components) and image points (2 components). The camera 12 has some intrinsic and extrinsic parameters. The example camera 12 is calibrated and therefore the intrinsic and extrinsic parameters are all known. The intrinsic and extrinsic parameters are included in the map P. From the above known parameters, the following set of equations can be written:
p'1 = P(p, R, p1)
p'2 = P(p, R, p2)
...
p'n = P(p, R, pn)
[0055] The example method provides a total of 2n equations and 6 unknowns (p, R). Accordingly, at least 3 sign points are needed to determine the vehicle position and orientation. As appreciated, although 3 sign points are utilized in this disclosed example, more points may be utilized within the contemplation and scope of this disclosure.
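The 2n-equations-in-6-unknowns system is typically solved numerically. As a hedged, reduced sketch of that idea, the planar analogue below has only 3 unknowns (x, y, yaw) and bearing-only "image" measurements, and is solved with Gauss-Newton iteration on the normal equations; the full 3-D problem with the calibrated camera map P follows the same pattern with 6 unknowns and 2-component image points. All function names are illustrative.

```python
import math

def wrap(a):
    # wrap an angle into (-pi, pi]
    while a <= -math.pi:
        a += 2.0 * math.pi
    while a > math.pi:
        a -= 2.0 * math.pi
    return a

def bearing(pose, pt):
    # planar "image" measurement: bearing of world point pt seen from pose (x, y, yaw)
    x, y, yaw = pose
    return wrap(math.atan2(pt[1] - y, pt[0] - x) - yaw)

def solve3(A, b):
    # solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def estimate_pose(world_pts, measured, guess, iters=30):
    # Gauss-Newton: repeatedly solve (J^T J) d = -J^T r with a numeric Jacobian
    pose, eps = list(guess), 1e-6
    for _ in range(iters):
        r = [wrap(bearing(pose, p) - z) for p, z in zip(world_pts, measured)]
        J = []
        for i, p in enumerate(world_pts):
            row = []
            for k in range(3):
                bumped = pose[:]
                bumped[k] += eps
                row.append((wrap(bearing(bumped, p) - measured[i]) - r[i]) / eps)
            J.append(row)
        JtJ = [[sum(J[i][a] * J[i][c] for i in range(len(J))) for c in range(3)] for a in range(3)]
        Jtr = [sum(J[i][a] * r[i] for i in range(len(J))) for a in range(3)]
        step = solve3(JtJ, [-v for v in Jtr])
        pose = [v + d for v, d in zip(pose, step)]
        if max(abs(d) for d in step) < 1e-12:
            break
    return pose
```

With three non-collinear sign points and noise-free measurements, the iteration recovers the true planar pose from a nearby initial guess, mirroring the claim that at least 3 points determine the solution.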
[0056] Another disclosed example approach is to use one or more points of known locations and track those points over time as the vehicle moves. When points are tracked, it may be possible to utilize fewer than 3 points due to the use of a time history.
[0057] Vehicle relative motion is calculated based on measured wheel rotations, steering wheel angle, vehicle speed, vehicle yaw rate, and possibly other vehicle data (e.g. an IMU). The vehicle information is combined with a vehicle model. By combining the motion of the point(s) in the image with the relative motion of the vehicle over time, the vehicle position and orientation can be determined. Once convergence to the correct position and orientation has occurred, the correct position and orientation can be maintained as long as the known points are still being tracked.
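A minimal sketch of the relative-motion update described above, assuming a simple planar kinematic model driven only by speed and yaw rate (a simplification of the wheel-rotation and steering-angle data the disclosure lists):

```python
import math

def dead_reckon(pose, speed, yaw_rate, dt):
    """Advance a planar pose (x, y, yaw) over one time step dt using
    measured speed (m/s) and yaw rate (rad/s)."""
    x, y, yaw = pose
    return (x + speed * math.cos(yaw) * dt,
            y + speed * math.sin(yaw) * dt,
            yaw + yaw_rate * dt)
```

Driving straight at 1 m/s for one second (ten 0.1 s steps, zero yaw rate) advances the pose 1 m along x, as expected; in the disclosed method this predicted motion is compared against the motion of tracked sign points in the image.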
[0058] Another approach to solve this problem would be a Kalman filter or other nonlinear observer. The unknown states would be the vehicle position and orientation.
[0059] As mentioned earlier, a vehicle model could be used to predict future states from current states. The measurement would consist of the image coordinate(s) of the known point position(s) on the sign. Other methods also exist to solve this type of problem, such as nonlinear least squares or optimization methods.
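As an illustration of the observer idea only, the scalar Kalman filter below fuses an odometry increment (predict step) with a sign-derived position fix (update step). The real filter described above would carry the full position/orientation state and use the camera's image coordinates as the measurement; this one-dimensional reduction is an assumption made to keep the sketch small.

```python
def kf_step(x, P, u, Q, z, R):
    """One predict/update cycle of a scalar Kalman filter.

    x, P -- state estimate and its variance
    u, Q -- motion increment from vehicle data and its variance (predict)
    z, R -- position measurement derived from the sign and its variance (update)
    """
    x, P = x + u, P + Q           # predict: apply vehicle model
    K = P / (P + R)               # Kalman gain
    return x + K * (z - x), (1.0 - K) * P
```

Starting from a very uncertain estimate, repeated measurements of a stationary 5 m offset drive the estimate to 5 m and shrink its variance, which is the convergence behavior paragraph [0057] describes.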
[0060] The disclosed system enables a camera and computer vision system to derive a precise position by viewing a sign and determining an offset from the sign.

[0061] Although an example embodiment has been disclosed, a worker of ordinary skill in this art would recognize that certain modifications would come within the scope of this disclosure. For that reason, the following claims should be studied to determine the scope and content of this disclosure.

Claims

What is claimed is:
1. A vehicle positioning system comprising: a camera disposed on a vehicle for reading an optic label disposed on a fixed object and for obtaining an image of a polygonal shape also disposed on the fixed object; and a controller operable to determine a distance and orientation of a focal point relative to the fixed object based on perspective dimensions of the polygonal shape captured by the camera and actual dimensions of the polygonal shape read from the optic label.
2. The vehicle positioning system as recited in claim 1, wherein the optic label includes coordinates of the fixed object and the controller is operable to determine a position of the vehicle based on the determined distance and orientation of the focal point relative to the fixed object and the coordinates of the fixed object.
3. The vehicle positioning system as recited in claim 1, wherein the optic label comprises a two-dimensional bar code that includes coordinate information of the fixed object and dimensional information of the polygonal shape.
4. The vehicle positioning system as recited in claim 3, wherein the polygonal shape is a box surrounding the optic label.
5. The vehicle positioning system as recited in claim 1, wherein the fixed object comprises a street sign disposed adjacent a roadway.
6. The vehicle positioning system as recited in claim 1, wherein the controller is configured to determine a size of the image of the polygonal shape at the focal point.
7. The vehicle positioning system as recited in claim 6, wherein the controller is configured to determine a set distance of the focal point of the image of the fixed object relative to the vehicle and to the fixed object.
8. The vehicle positioning system as recited in claim 7, wherein the camera is a fisheye camera disposed at the back of the vehicle.
9. The vehicle positioning system as recited in claim 8, wherein the camera is disposed in a forward looking direction on the vehicle.
10. A method of determining a position of a vehicle comprising: reading an optic label disposed on a fixed object with a sensor disposed on a vehicle; capturing an image of a polygonal shape disposed on the fixed object with a camera; and determining a distance and orientation of the camera relative to the fixed object based on a projected geometry of perspective dimensions of the captured image of the polygonal shape and actual dimensions of the polygonal shape read from the optic label.
11. The method as recited in claim 10, further comprising reading coordinates from the optic label indicative of a position of the fixed object and determining a position of the vehicle with the determined distance and orientation of the camera relative to the fixed object and the coordinates read from the optic label.
12. The method as recited in claim 11, wherein the optic label comprises a two-dimensional bar code and the polygonal shape comprises a box having equal length sides surrounding the two-dimensional bar code.
13. The method as recited in claim 11, further comprising determining a size of the captured image and determining a focal point of the captured image relative to the vehicle.
14. The method as recited in claim 13, wherein determining the focal point further comprises determining a position of the focal point relative to the vehicle and a position of the focal point relative to the fixed object.
15. The method as recited in claim 14, further comprising determining a position of the vehicle relative to the fixed object based on the distance between the vehicle and the focal point and the distance between the focal point and the fixed object.
16. A vehicle positioning system comprising: a controller configured to obtain information indicative of a position of a fixed object and physical dimensions of the fixed object, analyze an image of the fixed object captured by a camera disposed on the vehicle, determine a size and location of the captured image, determine a geometric relationship between the captured image and the physical dimensions of the fixed object, and determine a distance and orientation of a vehicle relative to the fixed object for determining a location of the vehicle.
17. The vehicle positioning system as recited in claim 16, wherein the controller is further configured to determine a focal point of the captured image relative to the vehicle and the fixed object, and the size of the image at the focal point.
18. The vehicle positioning system as recited in claim 17, wherein the controller is further configured to determine a location of the fixed object relative to the focal point based on perspective dimensions of the captured image and actual physical dimensions of the fixed object obtained from an optically readable label disposed on the fixed object.
19. The vehicle positioning system as recited in claim 18, wherein the controller is configured to obtain coordinates and dimensions of the fixed object from the optically readable label.
20. The vehicle positioning system as recited in claim 19, wherein the controller is further configured to determine a position of the vehicle relative to the fixed object based on the distance between the vehicle and the focal point and the distance between the focal point and the fixed object.
PCT/US2020/058850 2019-11-05 2020-11-04 System and method for precise vehicle positioning using bar codes, polygons and projective transformation WO2021091989A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962930757P 2019-11-05 2019-11-05
US62/930,757 2019-11-05

Publications (1)

Publication Number Publication Date
WO2021091989A1 true WO2021091989A1 (en) 2021-05-14

Family

ID=73554530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/058850 WO2021091989A1 (en) 2019-11-05 2020-11-04 System and method for precise vehicle positioning using bar codes, polygons and projective transformation

Country Status (1)

Country Link
WO (1) WO2021091989A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023202957A1 (en) * 2022-04-22 2023-10-26 Auki Labs Ag Using reference measurements to reduce geospatial uncertainty

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110121068A1 (en) * 2004-12-14 2011-05-26 Sky-Trax, Inc. Method and apparatus for determining position and rotational orientation of an object
US20180322653A1 (en) * 2016-08-31 2018-11-08 Limited Liability Company "Topcon Positioning Systems" Apparatus and method for providing vehicular positioning
WO2019173585A2 (en) * 2018-03-08 2019-09-12 Global Traffic Technologies, Llc Determining position of vehicle based on image of tag



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20812201

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20812201

Country of ref document: EP

Kind code of ref document: A1