WO2008009965A1 - Generating a map - Google Patents

Generating a map

Info

Publication number
WO2008009965A1
WO2008009965A1 PCT/GB2007/002770
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
points
image
map
further image
Prior art date
Application number
PCT/GB2007/002770
Other languages
French (fr)
Inventor
Mark Richard Tucker
Adam John Heenan
Original Assignee
Trw Limited
Priority date
Filing date
Publication date
Application filed by Trw Limited filed Critical Trw Limited
Priority to EP07766328.4A priority Critical patent/EP2047213B1/en
Publication of WO2008009965A1 publication Critical patent/WO2008009965A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3848 Data obtained from both position sensors and additional sensors
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3811 Point data, e.g. Point of Interest [POI]
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3815 Road data
    • G01C21/3822 Road feature data, e.g. slope data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3837 Data obtained from a single source
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram
    • G09B29/003 Maps
    • G09B29/004 Map manufacture or repair; Tear or ink or water resistant maps; Long-life maps

Definitions

  • This invention relates to a method of generating a map, and a parking assistance apparatus generating a map.
  • GPS: Global Positioning System
  • a method of generating a map comprising the location of points relating to features, comprising moving a vehicle past a scene, capturing an image of a scene from the vehicle, detecting points relating to features in the captured image, and generating the map by recording the position of the points.
  • this is a simple method of generating a map using images captured from a vehicle.
  • the map is of a road, typically of a road intersection and may be of a single intersection.
  • the map may additionally or alternatively be a map of a car park, a single car parking space or so on.
  • the map may comprise points relating to visible features on the surface of the road, typically salient features such as lane markings and so on, or visible salient features on or adjacent to the road (such as street furniture, for example traffic lights, bus stops and the like).
  • the processor may select only salient features for inclusion on the map.
  • the method comprises capturing at least one further image as the vehicle moves, detecting points relating to features in the at least one further image, and comparing the points in the image and the at least one further image.
  • the method may also comprise updating the map based on the comparison of the points between the image and the at least one further image.
  • the system may be able to build up more confidence in points that are repeatedly detected.
  • the method may comprise assigning each point recorded on the map a confidence and increasing that confidence when the point is repeatedly detected.
  • the method may include the step of repeatedly moving the vehicle past the scene, and capturing images of the scene from the vehicle each time the vehicle passes the scene; each time the vehicle passes the scene, identifying points relating to features in each image and using these points to update the map. The accuracy of the map may therefore be increased.
  • the method will comprise the detection of edges visible in the captured images.
  • the step of detection of points relating to features in the captured images may comprise detecting lines in the captured images, then calculating the end points of the lines.
  • the method may further comprise the step of measuring the motion of the vehicle and from this predicting where the points in the at least one further image will be. Indeed, the method may comprise the step of looking for points in the at least one further image in the region where they are predicted to be. This has been found to improve the processing efficiency of the method.
  • the motion may be detected by means of a speed sensor and/or yaw rate sensor associated with the vehicle.
  • the method may comprise the step of determining the motion of the vehicle.
  • By motion we may mean at least one of speed, heading angle, velocity and yaw rate.
  • the motion of the vehicle determined by comparing the points from the differing images may be more accurate than the measurement used to predict the location of the points in the at least one further image; the method may therefore, as a useful by-product, more accurately output the motion of the vehicle than the estimate used during the method.
  • the step of determining the position of points in the image may comprise the step of transforming the position of points in the image into a position relative to the vehicle. This may comprise carrying out a perspective transform.
  • the pitch, roll and/or vertical heave of the vehicle may be calculated in order to compensate therefor, or they may be assumed to be constant.
  • the image and the at least one further image may be captured using a video camera and may be optical images of the scene.
  • By optical we typically mean an image captured using visible light, or alternatively IR or UV light.
  • the image may be RADAR, LIDAR, SONAR or other similar images of the scene. "Visible" should be construed accordingly.
  • the step of predicting where points will be in the at least one further image may comprise modelling the motion of the vehicle.
  • This may be performed using a predictive filter such as a Kalman filter or an Extended Kalman Filter (an Extended Kalman Filter typically incorporates non-linear relationships that are useful for this idea, for example the relationships between measurements, states and outputs, and may incorporate necessary algorithms, for example perspective transforms and inverse perspective algorithms).
  • the predictive filter may take, as an input, the position of the points in the image and the speed and heading angle (and/or yaw rate). The filter would then output predicted positions of the points.
  • In order to predict the position of the points in the at least one further image, it may be necessary to perform an inverse perspective transform to transform the position of the points relative to a surface (such as a road surface) into the position in the at least one further image.
  • the transform may be of the form x = hX/(H - Y) and z = fh/(H - Y), where X and Y are the image coordinates, H indicates the position of the horizon, f is the focal length of the camera, h the height of the camera above the ground, z the distance ahead of the camera and x the lateral distance.
  • the step of comparing points in the image and the at least one further image may comprise determining the angle of the line relative to a datum and comparing the points corresponding to a line only if the angle is within an expected range. This avoids confusing points in successive images that are not related.
  • the actual position in the at least one further image of the points whose position was predicted in the at least one further image may be calculated. From these actual positions, the position of the vehicle on the map may be calculated. Where a predictive filter is used, the actual positions may be used to update the predictive filter. This is especially useful with a Kalman filter or Extended Kalman Filter.
  • the step of updating the predictive filter may comprise the determination of the vehicle motion from the comparison of the points from the image and the at least one further image. This is especially true of the use of an (extended) Kalman filter, where the motion of the vehicle can conveniently form part of the state inputs to the Kalman filter.
  • the step of recording the position of points on the map may include only recording points that meet at least one criterion.
  • the at least one criterion may include at least one of: the confidence of that point; and the feature with which the point is associated, such as the shape of a line for which the point is an end.
  • the vehicle may be a road vehicle, such as a car. Alternatively, it may be a waterborne vessel such as a boat or an aircraft such as an aeroplane or helicopter.
  • the map may be used by the vehicle in order to determine its location at a later time. Alternatively, the map may be stored and supplied to another vehicle, or preferably a plurality of vehicles, for use in locating the other vehicles when they are in the same locality - be it an intersection or any other location - as the original vehicle.
  • a parking assistance apparatus for a vehicle comprising a video camera arranged to, in use, capture images of a parking space as the vehicle is driven past the space, the parking space having visible lines demarcating the space, and a processor arranged to, in use, map the lines demarcating the space using the method of the first aspect of the invention and captured images from the video camera, and guidance means to guide the vehicle into the space.
  • a vehicle equipped with such an apparatus may generate a map of a space "on-the-fly", and then provide guidance to a driver of the vehicle on the best mode of driving into the space.
  • This system does not require any special markings of the space; simple painted lines as are well known in the prior art will suffice.
  • Figure 1 shows a car fitted with an apparatus according to a first embodiment of the invention
  • Figure 2 shows the use of the apparatus of Figure 1 in capturing images of the scene surrounding the vehicle
  • Figure 3 shows the vehicle of Figure 1 located on a map
  • Figure 4 is a flow diagram showing the method carried out by the apparatus of Figure 1;
  • Figure 5 shows a sample image captured by the apparatus of Figure 1;
  • Figure 6 shows the relationship between different coordinate systems for locating the vehicle on the map of Figure 3;
  • Figure 7 shows the data flow through the apparatus of Figure 1;
  • Figure 8 shows the operation of the Extended Kalman Filter of the apparatus of Figure 1;
  • Figure 9 shows a parking space as used by the apparatus of the second embodiment of the invention.
  • a car 100 is shown in Figure 1, fitted with an apparatus according to an embodiment of the present invention.
  • the apparatus comprises a video camera 102 arranged to capture images of the scene ahead of the vehicle.
  • the camera is, in the present embodiment, based on a National Semiconductor greyscale CMOS device with a 640x480 pixel resolution. Once every 40ms, a window - 640x240 pixels in size - is captured from the centre of the imager field of view using a CameraLink™ frame grabber.
  • The apparatus also comprises a processor 103. This has an input 105 for an output of a wheel speed and yaw rate sensor 106 of the car 100 (although this sensor could be two discrete sensors).
  • the processor 103 also has an output 107, on which the position of the vehicle on a map is output as described below. This output can be displayed to the driver on a suitable display (not shown).
  • the system also comprises a memory and removable media drive 108 connected to the processor, which stores a map as it is generated.
  • the system design is based upon the concept of generating a low-level feature map of visual landmarks of, say, an intersection from a processed image from the video camera 102 mounted in the vehicle 100.
  • the map (shown in Figure 3) comprises a list of points relating to features on the road throughout an intersection. The points denote the ends of lines forming shapes on the road.
  • the position of the points with respect to the vehicle can be estimated, given the vehicle's initial position and motion through the area of interest, by tracking the measurements of the positions of these features relative to the vehicle using a tracking filter. This is shown in Figures 2 and 3 of the accompanying drawings.
  • the system can predict the position of lines on the road surface for the next image captured. The position of each line is then measured by searching for the predicted lines in the captured image. Each predicted line then has a position measurement associated with it.
  • Map creation starts at step 200, with the map origin at the current location and map axis in line with the camera axis.
  • an image is captured into the processor 103.
  • the image is made available to the image processing functions as a 2-D array of integer values representing pixel intensity (0 to 255).
  • a timestamp which is read from the vehicle CAN bus, is stored.
  • the image-processing step 204 follows.
  • the image is processed with a Sobel edge detection kernel, plus optimised line tracing and line combination algorithms to extract parameterised line endpoints in image coordinates.
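The Sobel stage of this step might be sketched as follows. This is a minimal numpy illustration only, assuming the greyscale image is the 2-D array of 0-255 intensities described above; the optimised line tracing and line combination algorithms of the patent are omitted, and all names are illustrative.

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude of a greyscale image (2-D array, 0-255).
    The one-pixel border is zeroed because np.roll wraps around."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            # shifted[r, c] == img[r + dr, c + dc] (with wraparound)
            shifted = np.roll(np.roll(img, -dr, axis=0), -dc, axis=1)
            gx += SOBEL_X[dr + 1, dc + 1] * shifted
            gy += SOBEL_Y[dr + 1, dc + 1] * shifted
    mag = np.hypot(gx, gy)
    mag[0, :] = mag[-1, :] = 0
    mag[:, 0] = mag[:, -1] = 0
    return mag
```

High values of the returned magnitude mark the clear dark-to-light transitions that the line-tracing step would then follow.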
  • the processor 103 looks preferentially for features that have clear transitions between dark and light.
  • the system will detect centre lines or outlines of visible road markings (for example, centre of dashed lines, centre of stop lines, outline of directional arrows, etc).
  • the system will search for all lines within the image.
  • the system determines the position of the lines by recording the points corresponding to their endpoints.
  • each of the detected lines is transformed from the image into the equivalent position on the road surface using a perspective transform (PT).
  • Perspective transforms are well known in the art.
  • the PT uses a simple model of the imager and lens to translate pixel positions in the image into relative positions on the road surface.
  • the current PT implementation is based on the assumption that the road ahead of the vehicle is flat; whilst this reduces the complexity of the algorithm, this assumption is not strictly necessary.
  • the system will have made a prediction of where those points will be in the present iteration as will be discussed below.
  • the newly detected points are compared, as to the angle and position of the lines they form, against the predicted lines. If a good match is found, the association function assigns a high confidence to that line measurement, which is recorded, together with the position of the point, on the map. If no match is found, the association function returns the position of the new point to be recorded on the map as a newly-discovered feature with a relatively low confidence.
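The association step just described might be sketched as below. The line representation (endpoint 4-tuples), the thresholds and the treatment of unmatched lines as low-confidence new features are illustrative assumptions, not values from the patent.

```python
import math

def associate(measured, predicted, max_angle=0.2, max_dist=0.5):
    """Match each measured line (x1, y1, x2, y2) to the nearest predicted
    line, accepting a match only if the angle difference and midpoint
    distance are within bounds. Returns (matches, new_lines)."""
    def angle(l):
        return math.atan2(l[3] - l[1], l[2] - l[0]) % math.pi
    def midpoint(l):
        return ((l[0] + l[2]) / 2, (l[1] + l[3]) / 2)
    matches, new_lines = [], []
    for m in measured:
        best, best_d = None, max_dist
        for i, p in enumerate(predicted):
            da = abs(angle(m) - angle(p))
            da = min(da, math.pi - da)      # line angles are direction-free
            mx, my = midpoint(m)
            px, py = midpoint(p)
            d = math.hypot(mx - px, my - py)
            if da <= max_angle and d <= best_d:
                best, best_d = i, d
        if best is None:
            new_lines.append(m)             # newly-discovered, low confidence
        else:
            matches.append((m, best))       # good match, high confidence
    return matches, new_lines
```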
  • This search at step 220 can be accelerated by limiting the search for pre-existing points to regions of interest (ROI) around the predicted positions of lines within the image.
  • ROI: regions of interest
  • the system knows what type of line to search for (because it is stored in the feature map). This improves the robustness of line searching because it can select the appropriate line-tracing algorithm. However, it is still desirable to search the entire image for new features.
  • An example captured image can be seen in Figure 5 of the accompanying drawings.
  • This shows an example road marking 500, in which the system has selected a region of interest 502, shown in more detail in the inset of that Figure.
  • the lines relating to that road marking will only be searched for in that region of interest 502. In many cases such as that shown, there will be many lines close together and the image processing will return more than one measured (solid in the Figure) line for each predicted (dashed in the Figure) line. Matching of the length and angle of the line will help make a correct match.
  • Line prediction errors are calculated in the image plane. For these errors to be used to update the vehicle location model, the errors must be transformed from the image plane into the vehicle relative coordinates.
  • the image plane to relative coordinates transformation is a function of the imager and lens combination and is the opposite of the IPT discussed above.
  • the combined relationship between line prediction errors and absolute position prediction error is called the measurement to state transformation.
  • the vehicle location model states exist in the absolute position coordinates system so that the vehicle position within the intersection can be tracked.
  • the transformation between measurements and states is re-calculated for each iteration of the localisation algorithm and is used to transform both the line measurement and confidence in the measurement into the absolute coordinates system.
  • an Extended Kalman Filter (EKF) has been chosen to model and predict the vehicle motion.
  • Kalman filters are well known as a means of estimating information from indirectly related or noisy measurements and are often applied to tracking problems.
  • the model is a constant velocity, constant yaw rate model with inputs from vehicle wheel speed, yaw rate and the line measurements.
  • the initial location and heading are set as the origin as discussed above.
  • the initial vehicle dynamics (that is, velocity and yaw rate) are taken from vehicle sensors (wheel speed and yaw rate sensor 106).
  • the EKF can then be used to predict, from the motion of the vehicle and the previously detected lines, where lines will occur in the next image.
  • the EKF is shown in more detail in Figures 7 and 8 of the accompanying drawings.
  • the EKF takes as inputs the measurements from both the imager and the vehicle dynamics sensors, combined to form the current measurement vector z_k|k.
  • the confidences in the measurements (variances) are combined to form the diagonal matrix R_k|k.
  • the gain of the filter - the Kalman Gain (K_k) - is calculated based on both the measurement variance (confidence that the measurement is correct, R_k|k) and the covariance (confidence that the previous prediction is correct, P_k|k-1).
  • This is combined with the innovation (e_k) - the error between the measurements and the state predictions transformed into the measurement domain using h_k|k-1.
  • This is a calculation which transforms the system state predictions into the measurement domain; this allows the predictions to be subtracted from the measurements to give a measurement error.
  • a similar calculation is embedded into the Kalman Gain (K_k) calculation, which is derived from the measurement variances and the state covariance. This in turn converts the measurement error (when multiplied by K_k) into the state update term according to:
  • Updated State = Predicted State + K_k * e_k
  • the previous predictions of the system states and covariance are corrected to give the current estimate of the vehicle state (x_k|k, including the vehicle position and velocity, and the line point positions) and covariance (P_k|k).
  • the constant velocity, constant yaw-rate model is then used to predict the state of the vehicle (x_k+1|k) at the next iteration.
  • the covariance is also predicted using the system model plus a term to describe the process noise of the system. This process noise is a measure of how accurate the model is expected to be (for example, how constant the velocity and yaw rate are expected to be).
  • the output from the EKF is therefore the corrected state estimate and covariance for the current image. This provides the current estimate of the vehicle's location and heading plus the confidence in that estimate. It also provides an estimate of the position of the lines, and the corresponding confidences, in the next captured image. The method then repeats from step 202.
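One iteration of an EKF of the kind described (predict with a constant-velocity, constant-yaw-rate model, then correct with the Kalman gain and innovation) can be sketched as follows. For brevity this simplified filter measures only wheel speed and yaw rate directly; the system described above would also stack the line measurements, transformed through the perspective relations, into the measurement vector. The state layout and noise values are illustrative assumptions.

```python
import numpy as np

def f_predict(s, dt):
    """Constant-velocity, constant-yaw-rate motion model.
    State: [x, y, heading, v, yaw_rate]."""
    x, y, th, v, w = s
    return np.array([x + v * dt * np.cos(th),
                     y + v * dt * np.sin(th),
                     th + w * dt,
                     v,
                     w])

def ekf_step(s, P, z, R, Q, dt):
    """One predict/update cycle: z = [v, yaw_rate] measurement,
    R its variance, Q the process noise."""
    x, y, th, v, w = s
    # Jacobian of the motion model about the current state
    F = np.eye(5)
    F[0, 2] = -v * dt * np.sin(th); F[0, 3] = dt * np.cos(th)
    F[1, 2] =  v * dt * np.cos(th); F[1, 3] = dt * np.sin(th)
    F[2, 4] = dt
    # Predict state and covariance
    s_pred = f_predict(s, dt)
    P_pred = F @ P @ F.T + Q
    # Update: H picks the speed and yaw-rate components out of the state
    H = np.zeros((2, 5)); H[0, 3] = 1.0; H[1, 4] = 1.0
    e = z - H @ s_pred                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    s_new = s_pred + K @ e                  # updated = predicted + K * e
    P_new = (np.eye(5) - K @ H) @ P_pred
    return s_new, P_new
```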
  • It is also necessary to select (at step 230) which features are recorded as the map. Not all features that have been tracked will be required on the final map.
  • the set of features tracked may be edited automatically or manually to remove unwanted features and leave only salient ones. Automatic removal would typically involve discarding features that do not meet criteria such as the confidence the system has in the relevant feature, the amount of data recorded for the feature, or so on.
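Automatic removal of unwanted features might look like the following sketch; the data layout and the confidence threshold are assumptions for illustration.

```python
def select_features(tracked, min_conf=5):
    """Keep only features confident enough for the final map
    (the threshold value is illustrative, not from the patent)."""
    return [f for f in tracked if f['conf'] >= min_conf]
```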
  • the map is transformed from the reference frame where the vehicle starts at the origin to a desired reference frame. This could be achieved by measuring accurately the position of the vehicle at the origin, or by measuring accurately the position of one of the features found by the system.
  • the transformed map thus generated is stored by media drive 108 for distribution to other systems.
  • Figure 9 of the accompanying drawings shows an example "bay" parking space 800, demarcated by painted lines 801, 802, 803.
  • When a user of the apparatus desires to park in a given space 800, they first drive past the space, in a direction 810 generally perpendicular to the space. They activate the apparatus such that it carries out the mapping procedure of the first embodiment of the invention and maps the position of the lines 801, 802, 803. This map is stored in the processor 103.
  • the driver then drives the vehicle back toward the space in the direction of arrow 812.
  • the apparatus has mapped the space 800 relative to the car 100 and so can tell where the vehicle is relative to the lines 801, 802, 803.
  • the apparatus can therefore warn the driver should they overshoot the lines, be in danger of not fitting in the space or so on.
  • the system can, if desired, calculate an optimal trajectory for the vehicle and display steering and/or speed instructions to the driver.
  • the apparatus can directly control the engine, brakes and steering of the vehicle to drive the vehicle automatically into the space given the mapping of the space.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A method of generating a map comprising the location of points relating to features, comprising moving a vehicle (100) past a scene, capturing (202) an image of a scene from the vehicle (100), detecting (204) points relating to features in the captured image, and generating the map (230) by recording the position of the points. The map may be of a road intersection, a car park or a single car parking space (800). In addition, a parking assistance apparatus for a vehicle (100) is disclosed, comprising a video camera (102) arranged to, in use, capture images of a parking space (800) as the vehicle (100) is driven past the space, the parking space (800) having visible lines (801, 802, 803) demarcating the space, and a processor (103) arranged to, in use, map the lines demarcating the space using the method of any preceding claim and captured images from the video camera, and guidance means to guide the vehicle into the space.

Description

GENERATING A MAP
This invention relates to a method of generating a map, and a parking assistance apparatus generating a map.
In the United Kingdom, vehicular accident statistics show that 36% of fatal accidents are at, or within, 20 metres of intersections. In Japan, 46% of fatal vehicular accidents occur at or near intersections, whereas in the United States of America, 23% of fatal accidents are at or related to intersections. It is therefore desirable to increase intersection safety.
One proposed way of achieving this is for the vehicle's automatic systems to be able to accurately locate the vehicle within an intersection relative to a map of the known features of the intersection. In conjunction with other information about the intersection (such as other vehicle positions, traffic light status and so on), information regarding the position of the vehicle can be used to warn the driver of the vehicle about potential hazards. Position-determining systems such as the Global Positioning System (GPS) are well known, but are of limited accuracy - a positioning error of tens of metres, as is possible with GPS, is not acceptable in measuring the position of a vehicle in an intersection.
However, in order that such a system work, it is important to be able to simply and reliably generate the map.
According to a first aspect of the invention, we provide a method of generating a map comprising the location of points relating to features, comprising moving a vehicle past a scene, capturing an image of a scene from the vehicle, detecting points relating to features in the captured image, and generating the map by recording the position of the points. Accordingly, this is a simple method of generating a map using images captured from a vehicle. Preferably the map is of a road, typically of a road intersection, and may be of a single intersection. The map may additionally or alternatively be a map of a car park, a single car parking space or so on. The map may comprise points relating to visible features on the surface of the road, typically salient features such as lane markings and so on, or visible salient features on or adjacent to the road (such as street furniture, for example traffic lights, bus stops and the like). The processor may select only salient features for inclusion on the map.
Preferably, the method comprises capturing at least one further image as the vehicle moves, detecting points relating to features in the at least one further image, and comparing the points in the image and the at least one further image. The method may also comprise updating the map based on the comparison of the points between the image and the at least one further image. By comparing the points detected in successive images, the system may be able to build up more confidence in points that are repeatedly detected. Indeed, the method may comprise assigning each point recorded on the map a confidence and increasing that confidence when the point is repeatedly detected.
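The confidence-assignment scheme described above can be sketched as follows; the data structure and the matching tolerance are illustrative assumptions.

```python
def update_map(feature_map, detected_points, tol=0.25):
    """Increase the confidence of map points detected again; add unseen
    detections with a low initial confidence. feature_map is a list of
    dicts {'pos': (x, y), 'conf': n}; tol is an illustrative match radius."""
    for p in detected_points:
        for entry in feature_map:
            ex, ey = entry['pos']
            if abs(ex - p[0]) <= tol and abs(ey - p[1]) <= tol:
                entry['conf'] += 1          # repeatedly detected: more confidence
                break
        else:
            feature_map.append({'pos': p, 'conf': 1})   # new, low confidence
    return feature_map
```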
The method may include the step of repeatedly moving the vehicle past the scene, and capturing images of the scene from the vehicle each time the vehicle passes the scene; each time the vehicle passes the scene, identifying points relating to features in each image and using these points to update the map. The accuracy of the map may therefore be increased.
Typically, the method will comprise the detection of edges visible in the captured images. The step of detection of points relating to features in the captured images may comprise detecting lines in the captured images, then calculating the end points of the lines. The method may further comprise the step of measuring the motion of the vehicle and from this predicting where the points in the at least one further image will be. Indeed, the method may comprise the step of looking for points in the at least one further image in the region where they are predicted to be. This has been found to improve the processing efficiency of the method. The motion may be detected by means of a speed sensor and/or yaw rate sensor associated with the vehicle.
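Predicting where a previously seen point will lie after the vehicle moves can be sketched with simple dead reckoning from the measured speed and yaw rate. The frame convention (x lateral, y forward, in the vehicle frame) and the single-step approximation are assumptions for illustration.

```python
import math

def predict_point(px, py, v, yaw_rate, dt):
    """Predict where a point at (px, py) in the vehicle frame will lie
    after the vehicle moves for dt at speed v with the given yaw rate."""
    dtheta = yaw_rate * dt
    dy = v * dt                  # forward advance of the vehicle
    # translate into the new vehicle origin, then rotate by -dtheta
    tx, ty = px, py - dy
    c, s = math.cos(dtheta), math.sin(dtheta)
    return c * tx + s * ty, -s * tx + c * ty
```

The predicted position defines the region in which the next image is searched for that point.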
Furthermore, by comparing the points in the image and the at least one further image, the method may comprise the step of determining the motion of the vehicle. By motion we may mean at least one of speed, heading angle, velocity and yaw rate. The motion of the vehicle determined by comparing the points from the differing images may be more accurate than the measurement used to predict the location of the points in the at least one further image; the method may therefore, as a useful by-product, more accurately output the motion of the vehicle than the estimate used during the method.
The step of determining the position of points in the image may comprise the step of transforming the position of points in the image into a position relative to the vehicle. This may comprise carrying out a perspective transform. The pitch, roll and/or vertical heave of the vehicle may be calculated in order to compensate therefor, or they may be assumed to be constant.
The image and the at least one further image may be captured using a video camera and may be optical images of the scene. By optical, we typically mean an image captured using visible light, or alternatively IR or UV light. Alternatively, the image may be RADAR, LIDAR, SONAR or other similar images of the scene. "Visible" should be construed accordingly.
The step of predicting where points will be in the at least one further image may comprise modelling the motion of the vehicle. This may be performed by using a predictive filter, such as a Kalman filter or an Extended Kalman Filter (an Extended Kalman Filter typically incorporates non-linear relationships that are useful for this idea, for example the relationships between measurements, states and outputs, and may incorporate necessary algorithms, for example perspective transforms and inverse perspective algorithms). The predictive filter may take, as an input, the position of the points in the image and the speed and heading angle (and/or yaw rate). The filter would then output predicted positions of the points.
In order to predict the position of the points thus predicted in the at least one further image, it may be necessary to perform an inverse perspective transform to transform the position of the points relative to a surface (such as a road surface) into the position in the at least one further image. The transform may be of the form:
x = hX / (H - Y)   and   z = fh / (H - Y)
where X and Y are the image co-ordinates referenced from the centre of the line of the captured image, H is indicative of the position of the horizon, f is the focal length of the camera, h is the height of the camera above the ground, z is the distance from the camera in the direction in which the camera is pointing and x the distance from the camera in the perpendicular direction. Where lines are detected, the step of comparing points in the image and the at least one further image may comprise determining the angle of the line relative to a datum and comparing the points corresponding to a line only if the angle is within an expected range. This avoids confusing points in successive images that are not related.
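The transform of the stated form, under the flat-road assumption, can be sketched directly, together with its inverse; the numeric camera parameters in the test are purely illustrative.

```python
def image_to_road(X, Y, f, h, H):
    """Map image coordinates (pixels, referenced from the centre line of
    the image) to road-surface coordinates relative to the camera:
    x = hX/(H - Y), z = fh/(H - Y)."""
    if Y >= H:
        raise ValueError("a point at or above the horizon meets no road")
    x = h * X / (H - Y)
    z = f * h / (H - Y)
    return x, z

def road_to_image(x, z, f, h, H):
    """Inverse perspective transform: road point back into the image.
    Follows from the forward form, since x = X * z / f."""
    Y = H - f * h / z
    X = x * f / z
    return X, Y
```

A round trip through both functions should return the original image coordinates, which is a convenient sanity check on any implementation.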
Once the points have been compared, the actual position in the at least one further image of the points whose position was predicted in the at least one further image may be calculated. From these actual positions, the position of the vehicle on the map may be calculated. Where a predictive filter is used, the actual positions may be used to update the predictive filter. This is especially useful with a Kalman filter or Extended Kalman Filter.
The step of updating the predictive filter may comprise the determination of the vehicle motion from the comparison of the points from the image and the at least one further image. This is especially true of the use of an (extended) Kalman filter, where the motion of the vehicle can conveniently form part of the state inputs to the Kalman filter.
The step of recording the position of points on the map may include only recording points that meet at least one criterion. The at least one criterion may include at least one of:
• the confidence of that point;
• the feature with which the point is associated, such as the shape of a line for which the point is an end.
The vehicle may be a road vehicle, such as a car. Alternatively, it may be a waterborne vessel such as a boat or an aircraft such as an aeroplane or helicopter. The map may be used by the vehicle in order to determine its location at a later time. Alternatively, the map may be stored and supplied to another vehicle, or preferably a plurality of vehicles, for use in locating the other vehicles when they are in the same locality - be it an intersection or any other location - as the original vehicle.
According to a second aspect of the invention, we provide a parking assistance apparatus for a vehicle, comprising a video camera arranged to, in use, capture images of a parking space as the vehicle is driven past the space, the parking space having visible lines demarcating the space, and a processor arranged to, in use, map the lines demarcating the space using the method of the first aspect of the invention and captured images from the video camera, and guidance means to guide the vehicle into the space.
Accordingly, a vehicle equipped with such an apparatus may generate a map of a space "on-the-fly", and then provide guidance to a driver of the vehicle on the best mode of driving into the space. This system does not require any special markings of the space; simple painted lines as are well known in the prior art will suffice.
There now follows, by way of example only, embodiments of the invention described with reference to the accompanying drawings, in which:
Figure 1 shows a car fitted with an apparatus according to a first embodiment of the invention;
Figure 2 shows the use of the apparatus of Figure 1 in capturing images of the scene surrounding the vehicle;
Figure 3 shows the vehicle of Figure 1 located on a map;
Figure 4 is a flow diagram showing the method carried out by the apparatus of Figure 1;
Figure 5 shows a sample image captured by the apparatus of Figure 1;
Figure 6 shows the relationship between different coordinate systems for locating the vehicle on the map of Figure 3;
Figure 7 shows the data flow through the apparatus of Figure 1;
Figure 8 shows the operation of the Extended Kalman Filter of the apparatus of Figure 1; and
Figure 9 shows a parking space as used by the apparatus of the second embodiment of the invention.
A car 100 is shown in Figure 1, fitted with an apparatus according to an embodiment of the present invention. The apparatus comprises a video camera 102 arranged to capture images of the scene ahead of the vehicle. The camera is, in the present embodiment, based on a National Semiconductor greyscale CMOS device with a 640x480 pixel resolution. Once every 40ms, a window - 640x240 pixels in size - is captured from the centre of the imager field of view using a Cameralink™ Framegrabber.
These images are then fed into a processor 103. The processor 103 has an input 105 that receives the output of a wheel speed and yaw rate sensor 106 of the car 100 (although this could equally be two discrete sensors). The processor 103 also has an output 107, on which the position of the vehicle on a map is output as described below. This output can be displayed to the driver on a suitable display (not shown). The system also comprises a memory and removable media drive 108 connected to the processor, which stores a map as it is generated.
The system design is based upon the concept of generating a low-level feature map of visual landmarks of, say, an intersection from a processed image from the video camera 102 mounted in the vehicle 100. The map (shown in Figure 3) comprises a list of points relating to features on the road throughout an intersection. The points denote the ends of lines forming shapes on the road. Given an assumed initial position, the position of the points with respect to the vehicle, and the vehicle's motion through the area of interest, can be estimated by tracking the measurements of the positions of these features relative to the vehicle using a tracking filter. This is shown in Figures 2 and 3 of the accompanying drawings.
With reference to Figure 2, assuming that the initial position of the vehicle is approximately known, for example using a GPS sensor (not shown), the system can predict the position of lines on the road surface for the next image captured. The position of each line is then measured by searching for the predicted lines in the captured image. Each predicted line then has a position measurement associated with it.
Once the lines have been associated, the error between the prediction and measurement can be determined. The error is used to correct the position estimate of the points within the intersection and the position of the vehicle, and hence position the vehicle on the map (Figure 3). This new vehicle position estimate is then used to predict the position of the lines in the next image, ready for the next iteration of the localisation algorithm.

The process carried out by the processor 103 is shown in Figure 4 of the accompanying drawings. Map creation starts at step 200, with the map origin at the current location and the map axes in line with the camera axes.
In the next step 202, an image is captured into the processor 103. The image is made available to the image processing functions as a 2-D array of integer values representing pixel intensity (0 to 255). Immediately after the image is captured, a timestamp, which is read from the vehicle CAN bus, is stored.
The image-processing step 204 follows. The image is processed with a Sobel edge detection kernel, plus optimised line tracing and line combination algorithms, to extract parameterised line endpoints in image coordinates. The processor 103 looks preferentially for features that have clear transitions between dark and light. The system will detect centre lines or outlines of visible road markings (for example, the centre of dashed lines, the centre of stop lines, the outline of directional arrows, etc). The system will search for all lines within the image. The system determines the position of the lines by recording the points corresponding to their endpoints.
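The Sobel step can be sketched as follows; this is a generic Sobel gradient-magnitude pass over a 2-D intensity array, not the optimised implementation described, and the common |Gx| + |Gy| approximation to the magnitude is our assumption:

```python
# 3x3 Sobel kernels for horizontal and vertical intensity gradients
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| for each interior pixel
    of a 2-D list of intensities (0-255), as a crude edge detector.
    Border pixels are left at zero."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = abs(gx) + abs(gy)
    return out
```

Strong responses in the output mark the dark-to-light transitions that the line-tracing stage would then follow.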
In perspective transform step 206, each of the detected lines is transformed from the image into the equivalent position on the road surface using a perspective transform (PT). Perspective transforms are well known in the art. The PT uses a simple model of the imager and lens to translate pixel positions in the image into relative positions on the road surface. To reduce algorithm complexity, the current PT implementation is based on the assumption that the road ahead of the vehicle is flat; whilst this reduces the complexity of the algorithm, the assumption is not strictly necessary.

Once points relating to features have been detected and transformed, line association 220 is performed in the image plane; it finds whether the measured points detected in the previous step match the predicted positions of points found in previous iterations. In the first iteration, there will of course be no previously determined points. If there are, then the system will have made a prediction of where those points will be in the present iteration, as discussed below. The newly detected points are compared, as to the angle and position of the lines they form, against the predicted lines. If a good match is found, the association function assigns a high confidence to that line measurement, which is recorded, together with the position of the point, in the map. If no match is found, the association function returns the position of the new point to be recorded on the map as a newly-discovered feature with a relatively low confidence.
The processing of this image at step 220 can be accelerated by limiting the search for pre-existing points to regions of interest (ROI) around the predicted positions of lines within the image. When the search for each line occurs, the system knows what type of line to search for (because it is stored in the feature map). This improves the robustness of line searching because the appropriate line-tracing algorithm can be selected. However, it is still desirable to search the entire image for new features.
An example captured image can be seen in Figure 5 of the accompanying drawings. This shows an example road marking 500, in which the system has selected a region of interest 502, shown in more detail in the inset of that Figure. The lines relating to that road marking will only be searched for in that region of interest 502. In many cases such as that shown, there will be many lines close together and the image processing will return more than one measured (solid in the Figure) line for each predicted (dashed in the Figure) line. Matching of the length and angle of the line will help make a correct match.
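A much-simplified sketch of such an association step, gating candidate matches on line angle and midpoint distance (the gate thresholds, data layout and function names are illustrative assumptions, not values from the description):

```python
import math

def associate(measured, predicted, max_angle=0.2, max_dist=1.5):
    """Match each predicted line to the closest measured line, accepting a
    match only if the angle difference and midpoint distance fall inside
    the gates. Lines are ((x1, y1), (x2, y2)) endpoint pairs."""
    def angle(line):
        (x1, y1), (x2, y2) = line
        return math.atan2(y2 - y1, x2 - x1)

    def midpoint(line):
        (x1, y1), (x2, y2) = line
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    matches = {}
    for pi, p in enumerate(predicted):
        best, best_d = None, max_dist
        for mi, m in enumerate(measured):
            da = abs(angle(m) - angle(p))
            (mx, my), (px, py) = midpoint(m), midpoint(p)
            d = math.hypot(mx - px, my - py)
            if da <= max_angle and d < best_d:
                best, best_d = mi, d
        if best is not None:
            matches[pi] = best
    return matches
```

A fuller implementation would also gate on line length, as the text suggests, and would return a confidence for each match rather than a hard assignment.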
Line prediction errors are calculated in the image plane. For these errors to be used to update the vehicle location model, the errors must be transformed from the image plane into the vehicle relative coordinates.
They must then be transformed from the vehicle relative coordinates into the absolute coordinates system (as shown in Figure 6 of the accompanying drawings). The image plane to relative coordinates transformation is a function of the imager and lens combination and is the opposite of the IPT discussed above.
As the vehicle moves through the intersection, the absolute vehicle heading will change. When the heading changes, the relative to absolute position function must be re-evaluated.
The combined relationship between line prediction errors and absolute position prediction error is called the measurement to state transformation. The vehicle location model states exist in the absolute position coordinates system so that the vehicle position within the intersection can be tracked. The transformation between measurements and states is re-calculated for each iteration of the localisation algorithm and is used to transform both the line measurement and confidence in the measurement into the absolute coordinates system.
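The relative-to-absolute step that must be re-evaluated as the heading changes is, at its core, a planar rotation plus translation. A sketch under assumed axis conventions (forward/left in the vehicle frame, heading measured anticlockwise from the absolute x-axis; these conventions are not fixed by the description):

```python
import math

def vehicle_to_absolute(forward, left, vx, vy, heading):
    """Transform a point given in the vehicle frame (forward, left) into
    absolute map coordinates, for a vehicle at (vx, vy) with the given
    heading in radians."""
    c, s = math.cos(heading), math.sin(heading)
    return (vx + forward * c - left * s,
            vy + forward * s + left * c)
```

Because the rotation depends on the current heading estimate, this function must be recomputed on every iteration of the localisation loop, as the text states.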
It is then necessary to update, at step 222, the state of a model of the vehicle. In this embodiment, an Extended Kalman Filter (EKF) has been chosen to model and predict the vehicle motion. Kalman filters are well known as a means of estimating information from indirectly related or noisy measurements and are often applied to tracking problems. The model is a constant velocity, constant yaw rate model with inputs from the vehicle wheel speed, the yaw rate and the line measurements. The initial location and heading are set as the origin, as discussed above. The initial vehicle dynamics (that is, velocity and yaw rate) are taken from vehicle sensors (wheel speed and yaw rate sensor 106).
The EKF can then be used to predict, from the motion of the vehicle and the previously detected lines, where lines will occur in the next image. The EKF is shown in more detail in Figures 7 and 8 of the accompanying drawings. The EKF takes as inputs the measurements from both the imager and the vehicle dynamics sensors, combined to form the current measurement vector z_k|k. The confidences in the measurements (variances) are combined to form the diagonal matrix R_k|k.
The gain of the filter - the Kalman Gain (K_k) - is calculated based on both the measurement variance (confidence that the measurement is correct, R_k|k) and the covariance (confidence that the previous prediction is correct, P_k|k-1). This is combined with the innovation (e_k) - the error between the measurements and the predictions transformed into the measurement domain using h_k|k-1. This is a calculation which transforms the system state predictions into the measurement domain; this allows the predictions to be subtracted from the measurements to give a measurement error. A similar calculation is embedded into the Kalman Gain (K_k) calculation, which is derived from the measurement variances and the state covariance. This in turn converts the measurement error (when multiplied by K_k) into the state update term according to:

Updated State = Predicted State + K_k * e_k
Once the Kalman Gain is calculated, the previous predictions of the system states (x_k|k-1) and covariance (P_k|k-1) are corrected to give the current estimate of the vehicle state (x_k|k, including the vehicle position and velocity, and the line point positions) and covariance (P_k|k).
The constant velocity, constant yaw-rate model is then used to predict the state of the vehicle (x_k+1|k) at the next iteration. The covariance is also predicted using the system model, plus a term to describe the process noise of the system. This process noise is a measure of how accurate the model is expected to be (for example, how constant the velocity and yaw-rate are expected to be).
The output from the EKF is therefore the corrected state estimate and covariance for the current image. This provides the current estimate of the vehicle's location and heading, plus the confidence in that estimate. It also provides an estimate of the positions of the lines, and the corresponding confidences, in the next captured image. The method then repeats from step 202.
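The correction and prediction steps described above can be sketched in scalar form. This is the generic Kalman recursion, not the patent's multi-state EKF, and the variable names are ours:

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman correction: the gain weighs the prediction
    covariance p_pred against the measurement variance r, and the
    innovation e = z - x_pred drives the update
    'Updated State = Predicted State + K * e'."""
    k = p_pred / (p_pred + r)      # Kalman gain
    e = z - x_pred                 # innovation
    return x_pred + k * e, (1 - k) * p_pred

def kalman_predict(x, p, q):
    """Constant-state prediction: the covariance grows by the process
    noise q, reflecting how accurate the motion model is expected to be."""
    return x, p + q
```

In the full filter the state, gain and covariances are matrices, and the measurement function h_k|k-1 (and its Jacobian) replaces the direct subtraction, but the weighing of prediction against measurement is the same.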
It is also necessary to select (at step 230) which features are recorded as the map. Not all features that have been tracked will be required on the final map. The set of features tracked may be edited automatically or manually to remove unwanted features and leave only salient ones. Automatic removal would typically involve discarding features that do not meet criteria such as the confidence the system has in the relevant feature, the amount of data recorded for the feature, and so on.
Finally, at step 232, the map is transformed from the reference frame where the vehicle starts at the origin to a desired reference frame. This could be achieved by measuring accurately the position of the vehicle at the origin, or by measuring accurately the position of one of the features found by the system. The transformed map thus generated is stored by the media drive 108 for distribution to other systems.

A second embodiment of the invention will now be demonstrated with respect to Figure 9 of the accompanying drawings. The apparatus embodying this embodiment is the same as that shown in Figure 1 of the accompanying drawings, and features in common with the first embodiment of the invention are referred to using the same indicia.
This embodiment provides a parking assistance apparatus for a car 100. It is well known that parking a car can be, for some drivers, a tricky experience. It is therefore desirable to provide some assistance in this process. Figure 9 of the accompanying drawings shows an example "bay" parking space 800, demarcated by painted lines 801, 802, 803.
When a user of the apparatus desires to park in a given space 800, they first drive past the space, in a direction 810 generally perpendicular to the space. They activate the apparatus such that it carries out the mapping procedure of the first embodiment of the invention and maps the position of the lines 801, 802, 803. This map is stored in the processor 103.
In a second stage, the driver then drives the vehicle back toward the space in the direction of arrow 812. The apparatus has mapped the space 800 relative to the car 100 and so can tell where the vehicle is relative to the lines 801, 802, 803. The apparatus can therefore warn the driver should they overshoot the lines, be in danger of not fitting in the space, and so on. The system can, if desired, calculate an optimal trajectory for the vehicle and display steering and/or speed instructions to the driver. In an alternative, the apparatus can directly control the engine, brakes and steering of the vehicle to drive the vehicle automatically into the space, given the mapping of the space.

Claims

1. A method of generating a map comprising the location of points relating to features, comprising moving a vehicle past a scene, capturing an image of a scene from the vehicle, detecting points relating to features in the captured image, and generating the map by recording the position of the points.
2. The method of claim 1 in which the map is of a road intersection, a car park, or a single car parking space.
3. The method of claim 1 or claim 2 in which the map comprises points relating to visible features on or about the surface of a road.
4. The method of any preceding claim, in which the method comprises capturing at least one further image as the vehicle moves, detecting points relating to features in the at least one further image, and comparing the points in the image and the at least one further image.
5. The method of claim 4, also comprising the step of updating the map based on the comparison of the points between the image and the at least one further image.
6. The method of claim 4 or claim 5, in which the method comprises assigning each point recorded on the map a confidence and increasing that confidence when the point is repeatedly detected.
7. The method of any preceding claim, in which the method includes the step of repeatedly moving the vehicle past the scene, and capturing images of the scene from the vehicle each time the vehicle passes the scene; each time the vehicle passes the scene, identifying points relating to features in each image and using these points to update the map.
8. The method of any preceding claim in which the method comprises the detection of edges visible in the captured images.
9. The method of claim 8 in which the step of detection of points relating to features in the captured images comprises detecting lines in the captured images, then calculating the end points of the lines.
10. The method of claim 4 or any preceding claim dependent thereon, in which the method further comprises the step of measuring the motion of the vehicle and from this predicting where the points in the at least one further image will be.
11. The method of claim 10 in which the method comprises the step of looking for points in the at least one further image in the region where they are predicted to be.
12. The method of claim 10 or claim 11 in which the motion is detected by means of a speed sensor associated with the vehicle.
13. The method of claim 4 or any preceding claim dependent thereon, in which the method comprises the step of determining the motion of the vehicle by comparing the points in the image and the at least one further image.
14. The method of any preceding claim in which the step of determining the position of points in the image may comprise the step of transforming the position of points in the image into a position relative to the vehicle.
15. The method of any preceding claim, in which the image is captured using a video camera.
16. The method of claim 10 or any preceding claim dependent thereon, in which the step of predicting where points will be in the at least one further image may comprise modelling the motion of the vehicle.
17. The method of claim 16, in which the step of modelling the motion of the vehicle is performed by using a predictive filter, such as a Kalman filter, or an Extended Kalman filter.
18. The method of claim 10 or any preceding claim dependent thereon, in which in order to predict the position of the points thus predicted in the at least one further image, an inverse perspective transform is performed to transform the position of the points relative to a surface into the position in the at least one further image.
19. The method of claim 4 or any preceding claim dependent thereon, in which lines are detected in the captured images and the step of comparing points in the image and the at least one further image comprises determining the angle of the lines detected in each image relative to a datum and comparing the points corresponding to a line only if the angle is within an expected range.
20. The method of claim 10 or any preceding claim dependent thereon, in which, once the points have been compared, the actual position in the at least one further image of the points whose position was predicted in the at least one further image may be calculated.
21. The method of claim 20 in which from the actual positions, the position of the vehicle on the map may be calculated.
22. The method of claim 21 as dependent from claim 17 in which the actual positions are used to update the predictive filter.
23. The method of claim 22 in which the step of updating the predictive filter may comprise the determination of the vehicle motion from the comparison of the points from the image and the at least one further image.
24. The method of any preceding claim in which the step of recording the position of points on the map includes only recording points that meet at least one criterion based on at least one of:
• the confidence of that point;
• the feature with which the point is associated, such as the shape, position and/or size of a line for which the point is an end.
25. The method of any preceding claim in which the vehicle is a road vehicle, such as a car.
26. The method of any preceding claim in which the vehicle is a waterborne vessel such as a boat or an aircraft such as an aeroplane or helicopter.
27. A parking assistance apparatus for a vehicle, comprising a video camera arranged to, in use, capture images of a parking space as the vehicle is driven past the space, the parking space having visible lines demarcating the space, and a processor arranged to, in use, map the lines demarcating the space using the method of any preceding claim and captured images from the video camera, and guidance means to guide the vehicle into the space.