WO2018114244A1 - Method for assisting a driver of a motor vehicle when parking using a driver assistance device, corresponding computer program product and driver assistance device - Google Patents

Method for assisting a driver of a motor vehicle when parking using a driver assistance device, corresponding computer program product and driver assistance device Download PDF

Info

Publication number
WO2018114244A1
WO2018114244A1 (application PCT/EP2017/080671)
Authority
WO
WIPO (PCT)
Prior art keywords
image
assistance device
driver assistance
trajectory
motor vehicle
Application number
PCT/EP2017/080671
Other languages
French (fr)
Inventor
Catherine Enright
Ciáran HUGHES
Jonathan Horgan
Olivia Donnellan
German Feijoo
Gustavo Pelaez
Bassam Abdallah
Original Assignee
Connaught Electronics Ltd.
Application filed by Connaught Electronics Ltd.
Publication of WO2018114244A1 publication Critical patent/WO2018114244A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D15/00 Steering not otherwise provided for
    • B62D15/02 Steering position indicators; Steering position determination; Steering aids
    • B62D15/027 Parking aids, e.g. instruction means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/5854 Retrieval characterised by using metadata automatically derived from the content using shape and object relationship
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/586 Recognition of parking space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Multimedia (AREA)
  • Transportation (AREA)
  • Combustion & Propulsion (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for assisting a driver of a motor vehicle (12) when parking in a parking space (P) using a driver assistance device of the motor vehicle (12), the method comprising: in a training mode of the driver assistance device: recording and storing reference data about a trajectory (14) to the parking space (P), while the motor vehicle (12) is driven on said trajectory (14) under driver control, wherein at least some of these reference data comprise image-based reference information created by use of at least one camera device mounted on the vehicle; and in a subsequent replay mode different from the training mode: recording, by the driver assistance device, image-based information by use of the at least one camera device and comparing the image-based information with the image-based reference information, wherein depending on a result of the comparison, determining, by the driver assistance device, a current position (22) of the motor vehicle (12) with respect to the trajectory (14). The invention further relates to a corresponding computer program product, and a corresponding driver assistance device.

Description

Method for assisting a driver of a motor vehicle when parking using a driver assistance device, corresponding computer program product and driver assistance device
The invention relates to a method for assisting a driver of a motor vehicle when parking in a parking space using a driver assistance device of the motor vehicle.
The invention further relates to a corresponding computer program product and driver assistance device for assisting a driver of a motor vehicle when parking in a parking space.
Automated parking systems are already on the market. A new category of automated parking system is the trained parking system. Modern semi-autonomous cars are designed to park themselves; in order to do this, they need to be aware of the geometry of their environment. Trained parking systems use various sensors to record information from the environment (landmarks) along a driven ("trained") trajectory, and on a subsequent "replay" they relate the newly sensed information to the previously stored information to work out their position relative to the stored trajectory, which is then used to decide how to manoeuvre the vehicle until it eventually parks at the stored parking place (park slot) location.
Document DE 10 2013 015 348 A1 describes a method for assisting a driver of a motor vehicle when parking in a parking space using a driver assistance device of the motor vehicle. The method comprises a training (or learning) mode of the driver assistance device and an operating mode of the driver assistance device. In both the training mode and the operating mode, the motor vehicle has to drive along a trajectory (path) from a starting point to the final position of the parking procedure at the parking space, the parking position.
It is an object of the invention to provide a method for assisting a driver of a motor vehicle when parking in a parking space using a driver assistance device of the motor vehicle, a corresponding computer program product, and a corresponding driver assistance device which are cost-effective and suitable for many applications. This object is achieved by a method, a computer program product, as well as a driver assistance device having the features according to the respective independent claims. Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
According to the inventive method for assisting a driver of a motor vehicle when parking in a parking space using a driver assistance device of the motor vehicle, the method comprises (i) in a training mode of the driver assistance device: recording and storing reference data about a trajectory (path) to the parking space, while the motor vehicle is driven on said trajectory under driver control, wherein at least some of these reference data comprise image-based reference information created by use of at least one camera device mounted on the vehicle; and (ii) in a subsequent replay mode different from the training mode: recording, by the driver assistance device, image-based information by use of the at least one camera device and comparing the image-based information with the image-based reference information, wherein depending on a result of the
comparison, determining, by the driver assistance device, a current position of the motor vehicle with respect to the trained trajectory. Furthermore, the method may comprise a step of determining the remaining section of the trajectory from the current position to the parking space and identifying the parking space using the knowledge about the determined current position and the trained trajectory.
This method allows trained parking assistance with cameras as the only sensor input. The core idea is that saved information extracted from camera images can be used to localise the vehicle position at a subsequent point in time.
According to a preferred embodiment of the invention, the image-based reference information and/or the image-based information comprises characteristic features detected in an image frame of an image taken by the camera device, which
characteristic feature is linked to a corresponding image descriptor. The characteristic features are related to objects in the surroundings of the vehicle (landmarks) when driving along the trajectory. The descriptors are such that they uniquely define a feature and they can be associated with the features extracted in the subsequent replay. The image-based reference information and/or the image-based information is preferably a description of a 3D characteristic feature in the environment in the surrounding area of the trajectory. This description is based on the 2D representation of the 3D feature in the image. The core idea is that saved characteristic features extracted from camera images can be used to localise the vehicle position at a subsequent point in time.
According to another preferred embodiment of the invention, the training mode comprises the following steps:
Step 1: detection of the characteristic features in the image frames of the camera;
Step 2: matching of the characteristic features to image frames;
Step 3: 3D-reconstruction of the positions of the characteristic features; and
Step 4: bundle adjustment of 3D-positions of the characteristic features and the vehicle position.
According to a particularly preferred embodiment of the invention, the 3D reconstruction step (Step 3) also provides an estimation of the vehicle motion via the estimation of the corresponding essential/fundamental matrix.
According to a particularly preferred embodiment of the invention, the bundle adjustment step (Step 4) involves a non-linear optimisation where the reprojection error of the 3D features is minimised.
According to another preferred embodiment of the invention, the replay mode comprises the following steps:
Step 1: detection of the characteristic features in the image frames of the camera;
Step 2: association of said characteristic features with the characteristic features detected in the training mode; and
Step 3: bundle adjustment of vehicle position using the 3D-positions of the characteristic features for vehicle pose optimisation.
According to another preferred embodiment of the invention, there are two types of image frames, namely key frames and normal frames, wherein in training both frame types are processed but only the trained reference points of the trajectory (trajectory points) and trained features associated with key frames are saved in a trained map and used for replay. Key frames are dynamically selected based on the distance travelled by the vehicle, the number of features matched to the previous key frame, or a combination of both. According to yet another preferred embodiment of the invention, information of a navigation satellite system is stored for at least some of the image frames, especially the key frames. This information can be GPS data.
The computer program product according to the invention comprises computer-executable program code portions having program code instructions configured to execute the aforementioned method.
The invention further relates to a driver assistance device for assisting a driver of a motor vehicle when parking in a parking space, wherein the driver assistance device is configured
(i) to record and store reference data about a trajectory to the parking space in a training mode, while the motor vehicle is driven on said trajectory controlled by the driver;
wherein at least some of these reference data comprise image-based reference information created by use of at least one camera device mounted on the vehicle;
(ii) to record image-based information by use of the at least one camera device in a subsequent replay mode different from the training mode and
(iii) to compare the image-based information with the image-based reference information in said replay mode, wherein depending on a result of the comparison, the driver assistance device determines a current position of the motor vehicle with respect to the trajectory. Furthermore, the driver assistance device can be configured to determine the remaining section of the trajectory from the current position to the parking space and to identify the parking space using the knowledge about the determined current position and the trained trajectory. The driver assistance device is preferably a computer-based device comprising a processor and a data memory.
According to a preferred embodiment of this driver assistance device, this driver assistance device is arranged for performing the aforementioned method.
Further features of the invention are apparent from the claims, the figures and the description of figures. All of the features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or else alone. Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.
In the drawings:
Fig. 1 shows a map of the parking zone with a trained trajectory of a parking procedure generated in the training mode;
Fig. 2 shows a flow chart of a training mode of a driver assistance device for assisting a driver of a motor vehicle when parking in a parking space of a parking zone;
Fig. 3 shows an image frame and a map of the parking zone with a replay trajectory of a parking procedure, both generated in the replay mode; and
Fig. 4 shows a flow chart of a replay mode of the driver assistance device for assisting the driver when parking in a parking space.
In the following, an example of a trained parking assistance case is described: the driver "trains" the vehicle by driving into a parking space P (parking slot) and selecting to save the trajectory; in "Replay" mode the vehicle recognises the scene and replays the trained trajectory.
This requires two modes of operation: "Training" and "Replay". In "Training", a map 10 of the scene is created while a motor vehicle 12 is driven on a trajectory 14 under driver control, wherein this trajectory 14 travelled by the vehicle 12 is recorded. In "Replay" mode, the vehicle 12 must recognise the scene and its location within the scene.
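The elements that this trained map has to hold, trajectory points, 3D features with visual descriptors, key frames and optional GPS fixes, are described in the training phase below. The following Python sketch shows one possible, purely illustrative data layout for such a map; the class and field names are assumptions and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

import numpy as np


@dataclass
class TrainedFeature:
    position_3d: np.ndarray           # (3,) estimated position "in the world"
    descriptor: np.ndarray            # visual descriptor, e.g. a binary AKAZE vector


@dataclass
class KeyFrame:
    trajectory_point: np.ndarray      # vehicle pose (x, y, heading) at this key frame
    features: List[TrainedFeature]    # trained features observed from this key frame
    gps: Optional[np.ndarray] = None  # optional coarse GPS fix, absolute reference only


@dataclass
class TrainedMap:
    key_frames: List[KeyFrame] = field(default_factory=list)
    parking_pose: Optional[np.ndarray] = None   # final pose at the parking space P
```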
Training Phase
Fig. 1 shows the corresponding map 10 of the scene, e.g. a parking zone, with the motor vehicle 12 driving across this zone during the training mode. The motor vehicle 12 is for example a passenger car. The motor vehicle 12 comprises a driver assistance device with a plurality of cameras (not shown). The cameras are, e.g., located at the front area and the rear area of the vehicle 12 and capture environmental regions at the front and rear of the motor vehicle 12. The vehicle 12 is driving along a trajectory 14 across a parking zone from a reference starting position (not shown) to a parking space P of the parking zone, which parking space P is the reference target position of said trajectory 14. The trajectory 14 comprises a straight section followed by a curved section and is sufficiently described by a plurality of reference points 16 (trajectory points) on the trajectory 14. There are a number of characteristic features 18, 20 in a surrounding area of the trajectory 14, detected by the driver assistance device in image frames of images taken by the cameras.
Fig. 2 shows a kind of flow chart of a training mode of a driver assistance device for assisting a driver of a motor vehicle when parking in a parking space P. This training mode comprises four training steps TS1 - TS4. The aim of the corresponding training phase is to create a sparse consistent map 10 of trained trajectory points and trained features. Step TS1 is a step of feature detection; step TS2 a step of feature matching; step TS3 a step of 3D reconstruction; and step TS4 a step of bundle adjustment of 3D positions/points and vehicle position.
The aim of the training phase is to create a sparse consistent map 10 of trained trajectory points 16 and trained features, as shown in Fig. 1. Trained trajectory points 16 are key locations in the world that together identify the path (trajectory) 14 followed by the vehicle 12 in training.
Trained features are visual features with a 3D position "in the world" and an associated visual descriptor. The descriptors are such that they uniquely define a feature and they can be associated with the features extracted in a subsequent replay. The driver assistance device is designed to work with any feature descriptor but a sufficiently robust descriptor is recommended. AKAZE is the proposed feature descriptor for this solution. The feature descriptor storage during training and matching during replay is the key enabler for the vehicle 12 to localise (associate) itself during replay against the training data.
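As an illustration of this descriptor-based approach, the following Python sketch uses OpenCV's AKAZE implementation to detect features, keep only a fixed number of the strongest ones, and match binary descriptors between frames; the feature cap and matcher settings are assumptions for the sketch, not parameters from the disclosure.

```python
import cv2
import numpy as np


def detect_features(frame_bgr: np.ndarray, max_features: int = 500):
    """Detect AKAZE keypoints and descriptors, keeping only the strongest."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    akaze = cv2.AKAZE_create()
    keypoints, descriptors = akaze.detectAndCompute(gray, None)
    if descriptors is None:
        return [], None
    # Limit the number of features to a fixed cap for an efficient run-time,
    # keeping the keypoints with the highest detector response.
    order = np.argsort([-kp.response for kp in keypoints])[:max_features]
    keypoints = [keypoints[i] for i in order]
    return keypoints, descriptors[order]


def match_descriptors(desc_a: np.ndarray, desc_b: np.ndarray):
    """Match binary AKAZE descriptors between two frames (Hamming distance)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return matcher.match(desc_a, desc_b)
```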
There are two types of image frames, namely key frames and normal frames, wherein during training both frame types are processed but only the trained trajectory points and trained features associated with key frames are saved in the trained map and used for replay. The concept of these "key frames" and "normal frames" is important for this solution. Frames are bundled into windows with a key frame at the start followed by normal frames. Key frames are dynamically selected based on a combination of distance travelled and the number of features matched to the previous key frame.
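A possible selection rule along these lines is sketched below; the concrete thresholds are illustrative assumptions, not values from the disclosure.

```python
def should_open_key_frame(distance_since_key_frame_m: float,
                          matches_to_previous_key_frame: int,
                          max_distance_m: float = 2.0,
                          min_matches: int = 50) -> bool:
    """Open a new key frame when the vehicle has moved far enough or when
    too few features still match the previous key frame."""
    return (distance_since_key_frame_m >= max_distance_m
            or matches_to_previous_key_frame < min_matches)
```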
In the training phase, a set of visual features is extracted from each live camera frame, and a visual descriptor is stored for each feature. The number of features extracted is limited to a fixed number to ensure an efficient run-time is achieved.
Features are matched according to their descriptors to subsequent frames, and using these matches a 3D reconstruction is performed to give an estimate of each feature position in the 3D world. The 3D reconstruction also provides an estimate of the vehicle motion, via the estimation of the essential/fundamental matrix.
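One way such a 3D reconstruction step can be realised with standard OpenCV routines is sketched below: the essential matrix is estimated from the matched points, the relative camera motion is recovered from it, and the inlier correspondences are triangulated. The camera intrinsics K and the point arrays are assumed inputs; this is a sketch of the principle, not the disclosed implementation.

```python
import cv2
import numpy as np


def reconstruct(pts_prev: np.ndarray, pts_curr: np.ndarray, K: np.ndarray):
    """pts_prev, pts_curr: (N, 2) float arrays of matched pixel coordinates."""
    # Estimate the essential matrix from the matched image points.
    E, inlier_mask = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                          method=cv2.RANSAC, threshold=1.0)
    # The essential matrix also yields the relative camera (vehicle) motion:
    # rotation R and translation t, the latter only up to scale.
    _, R, t, pose_mask = cv2.recoverPose(E, pts_prev, pts_curr, K,
                                         mask=inlier_mask)
    # Triangulate the surviving correspondences to get 3D feature positions.
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    good = pose_mask.ravel() > 0
    pts_4d = cv2.triangulatePoints(P0, P1, pts_prev[good].T, pts_curr[good].T)
    pts_3d = (pts_4d[:3] / pts_4d[3]).T          # (M, 3) Euclidean coordinates
    return R, t, pts_3d, good
```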
Each window consisting of a key frame and subsequent normal frames is bundle adjusted to give the optimal trajectory positions and 3D feature positions. The bundle adjustment step involves a non-linear optimisation where the reprojection error of the 3D features is minimised. GPS information can also be stored for each key frame in the trajectory 14; this position will not be accurate enough for replay, but it can be used as an absolute position reference.
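The sketch below illustrates such a windowed bundle adjustment as a non-linear least-squares problem over the reprojection error, using SciPy's optimiser and a plain pinhole projection; the parameterisation is a simplified assumption, and real systems would use full camera and lens models plus a sparse solver.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares


def project(points_3d, rvec, tvec, K):
    """Pinhole projection of 3D points into one camera pose (no distortion)."""
    projected, _ = cv2.projectPoints(points_3d, rvec, tvec, K, np.zeros(5))
    return projected.reshape(-1, 2)


def bundle_adjust(poses, points_3d, observations, K):
    """
    poses:        (F, 6) initial pose per frame as (rvec, tvec), key frame first
    points_3d:    (P, 3) initial 3D feature positions
    observations: list of (frame_idx, point_idx, observed_xy) tuples
    Returns refined poses and 3D feature positions for this window.
    """
    n_frames, n_points = len(poses), len(points_3d)

    def residuals(params):
        cams = params[:n_frames * 6].reshape(n_frames, 6)
        pts = params[n_frames * 6:].reshape(n_points, 3)
        errs = []
        for f, p, xy in observations:
            pred = project(pts[p:p + 1], cams[f, :3], cams[f, 3:], K)[0]
            errs.append(pred - np.asarray(xy))   # reprojection error (2 values)
        return np.concatenate(errs)

    x0 = np.hstack([np.asarray(poses, float).ravel(),
                    np.asarray(points_3d, float).ravel()])
    result = least_squares(residuals, x0, method="trf")
    cams = result.x[:n_frames * 6].reshape(n_frames, 6)
    pts = result.x[n_frames * 6:].reshape(n_points, 3)
    return cams, pts
```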
Replay Phase
Fig. 3 shows a map 10 of the scene in the replay phase and a corresponding image I taken by the camera from the current vehicle position. The point 22 in the map 10 is the estimate of the current vehicle position on the sparse map. The big points 24 are the trained features from the full trained trajectory. The smaller points 26 are those trained features that have been matched to the current replay frame. The characteristic features 24, 26 are connected to the different objects 28 in the scene.
In the replay phase, a new set of features 24, 26 is detected, and each feature 24, 26 is given a visual descriptor. Using these visual descriptors, the new features 24, 26 are matched to the trained features 18, 20. For initial relocalisation, a subset of the new features 24, 26 is selected and the complete trained trajectory 14 is searched to find matching descriptors. The key frame with the most matches is selected as the "closest key frame" and then an attempt is made to match the complete set of new features to the features in the "closest key frame". With GPS information for the current position and the key frames in the stored trajectory 14, a more refined search space for the feature descriptor matches can be used to reduce the runtime and ensure a more robust match.
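The following Python sketch illustrates this initial relocalisation: a subset of the live descriptors is matched against every stored key frame, optionally pre-filtered by GPS distance, and the key frame with the most matches becomes the "closest key frame". It reuses the trained-map layout assumed above; the probe size and GPS radius are illustrative assumptions.

```python
import numpy as np
import cv2


def find_closest_key_frame(live_descriptors, trained_map,
                           live_gps=None, gps_radius_m=30.0, probe_size=100):
    """Return the index of the best-matching key frame and its matches."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    probe = live_descriptors[:probe_size]      # subset of the new features
    best_idx, best_matches = None, []
    for idx, key_frame in enumerate(trained_map.key_frames):
        # Optional GPS pre-filter to narrow the search space.
        if live_gps is not None and key_frame.gps is not None:
            if np.linalg.norm(key_frame.gps - live_gps) > gps_radius_m:
                continue
        kf_desc = np.vstack([f.descriptor for f in key_frame.features])
        matches = matcher.match(probe, kf_desc)
        if len(matches) > len(best_matches):
            best_idx, best_matches = idx, matches
    return best_idx, best_matches
```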
If a sufficient number are matched, the current vehicle position relative to the trained trajectory 14 is determined using a bundle adjustment technique. The matched trained features 24 are re-projected into the current live frame and the reprojection error is calculated as the difference between the re-projected position and the observed position in the current frame. This error is minimised by selecting the optimal vehicle position relative to the trained trajectory 14. Strict filtering of outliers ensures a reliable position.
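The disclosure determines this replay pose with a bundle adjustment technique; as a compact stand-in with the same objective, the sketch below minimises the reprojection error of the matched trained 3D features with RANSAC-based outlier filtering via OpenCV's PnP solver. It is shown only as an illustration of the principle, under assumed inputs, not as the disclosed implementation.

```python
import cv2
import numpy as np


def estimate_replay_pose(matched_3d: np.ndarray, matched_2d: np.ndarray,
                         K: np.ndarray):
    """matched_3d: (N, 3) trained feature positions; matched_2d: (N, 2) pixels."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        matched_3d.astype(np.float64), matched_2d.astype(np.float64),
        K, np.zeros(5),                  # assuming undistorted image points
        reprojectionError=3.0)           # strict filtering of outliers
    if not ok:
        return None                      # not enough reliable matches
    return rvec, tvec, inliers           # camera pose relative to the trained map
```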
On subsequent frames the search is restricted to the "closest key frame" and its neighbours. The "closest key frame" is reset to one of this group as appropriate. If, however, a sufficient number of matches are not made within this set, the complete trajectory is once again searched.
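A possible tracking policy for these subsequent frames is sketched below: only the "closest key frame" and its neighbours are searched, and the full trajectory is searched again when too few matches remain. It builds on the trained-map layout assumed earlier; the neighbourhood size and match threshold are illustrative assumptions.

```python
import numpy as np
import cv2


def _matches_to_key_frame(matcher, live_desc, key_frame):
    kf_desc = np.vstack([f.descriptor for f in key_frame.features])
    return matcher.match(live_desc, kf_desc)


def track_closest_key_frame(live_desc, trained_map, closest_idx,
                            neighbourhood=1, min_matches=30):
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    lo = max(0, closest_idx - neighbourhood)
    hi = min(len(trained_map.key_frames), closest_idx + neighbourhood + 1)
    # Search only the "closest key frame" and its neighbours first.
    scored = [(idx, _matches_to_key_frame(matcher, live_desc,
                                          trained_map.key_frames[idx]))
              for idx in range(lo, hi)]
    best_idx, best = max(scored, key=lambda s: len(s[1]))
    if len(best) < min_matches:
        # Too few matches near the previous key frame: search the full trajectory.
        scored = [(idx, _matches_to_key_frame(matcher, live_desc, kf))
                  for idx, kf in enumerate(trained_map.key_frames)]
        best_idx, best = max(scored, key=lambda s: len(s[1]))
    return best_idx, best
```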
Fig. 4 shows a kind of flow chart of a replay mode of the driver assistance device for assisting a driver when parking in a parking space P. This replay mode comprises three replay steps RS1 - RS3. Step RS1 is a step of feature detection; step RS2 is a step of feature association with trained features; and step RS3 is a step of bundle adjustment for vehicle pose optimisation.
List of Reference Signs
10 Map
12 vehicle
14 trajectory, trained
16 trajectory points (trained trajectory)
18 3D-features (big circles)
20 3D-features, other (small circles)
22 point of view
24 features, trained
26 features, detected
28 object, environmental
P parking space
I image
TS1 training step, first
TS2 training step, second
TS3 training step, third
TS4 training step, fourth
RS1 replay step, first
RS2 replay step, second
RS3 replay step, third

Claims

1. A method for assisting a driver of a motor vehicle (12) when parking in a parking space (P) using a driver assistance device of the motor vehicle (12), the method comprising: in a training mode (TS1 - TS4) of the driver assistance device: recording and storing reference data about a trajectory (14) to the parking space (P), while the motor vehicle (12) is driven on said trajectory (14) under driver control, wherein at least some of these reference data comprise image-based reference information created by use of at least one camera device mounted on the vehicle; and in a subsequent replay mode (RS1 - RS3) different from the training mode: recording, by the driver assistance device, image-based information by use of the at least one camera device and comparing the image-based information with the image-based reference information, wherein depending on a result of the comparison, determining, by the driver assistance device, a current position (22) of the motor vehicle (12) with respect to the trajectory (14).
2. The method according to claim 1, characterized in that the image-based reference information and/or the image-based information comprises characteristic features (18, 20; 24, 26) detected in an image frame of an image taken by the camera device, which characteristic feature is linked to a corresponding image descriptor.
3. The method according to claim 2, characterized by the following steps of the training mode:
detection of the characteristic features (18, 20) in the image frames of the camera (TS1);
matching of the characteristic features (18, 20) to image frames (TS2);
3D-reconstruction of the positions of the characteristic features (18, 20) (TS3); and
bundle adjustment of 3D-positions of the characteristic features (18, 20) and the vehicle position (TS4).
4. The method according to claim 3,
characterized in that the 3D reconstruction step (TS3) also provides an estimation of the vehicle motion.
5. The method according to claim 3 or 4,
characterized in that the bundle adjustment step (TS4) involves a non-linear optimisation where the reprojection error of the 3D features is minimised.
6. The method according to any one of claims 3 to 5,
characterized by the following steps of the replay mode:
detection of the characteristic features (24, 26) in the image frames of the camera (RS1);
association of said characteristic features (24, 26) with the characteristic features (18, 20) detected in the training mode (RS2); and
bundle adjustment of vehicle position using the 3D-positions of the characteristic features (18, 20) and the vehicle position for vehicle pose optimisation (RS3).
7. The method according to any one of claims 2 to 6,
characterized by two types of image frames, namely key frames and normal frames, wherein in training both frame types are processed but only the trained reference points of the trajectory and trained features associated with key frames are saved in a trained map and used for replay.
8. The method according to any one of claims 2 to 7,
characterized in that information of a navigation satellite system is stored for at least some of the image frames, especially the key frames.
9. A computer program product comprising computer-executable program code portions having program code instructions configured to execute the method according to one of claims 1 to 8.
10. A driver assistance device for assisting a driver of a motor vehicle (12) when
parking in a parking space (P), wherein the driver assistance device is configured to record and store reference data about a trajectory (14) to the parking space (P) in a training mode, while the motor vehicle (12) is driven on said trajectory (14) controlled by the driver; wherein at least some of these reference data comprise image-based reference information created by use of at least one camera device mounted on the vehicle;
to record image-based information by use of at least one camera device in a subsequent replay mode different from the training mode and
to compare the image-based information with the image-based reference information in said replay mode, wherein depending on a result of the comparison, the driver assistance device determines a current position (22) of the motor vehicle (12) with respect to the trajectory.
11. The device according to claim 10,
characterized in that the driver assistance device is arranged for performing the method according to any one of claims 1 to 8.
PCT/EP2017/080671 2016-12-20 2017-11-28 Method for assisting a driver of a motor vehicle when parking using a driver assistance device, corresponding computer program product and driver assistance device WO2018114244A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102016124888.6 2016-12-20
DE102016124888.6A DE102016124888A1 (en) 2016-12-20 2016-12-20 A method for assisting a driver of a motor vehicle when parking using a driver assistance device, corresponding computer program product and driver assistance device

Publications (1)

Publication Number Publication Date
WO2018114244A1 (en) 2018-06-28

Family

ID=60484378

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/080671 WO2018114244A1 (en) 2016-12-20 2017-11-28 Method for assisting a driver of a motor vehicle when parking using a driver assistance device, corresponding computer program product and driver assistance device

Country Status (2)

Country Link
DE (1) DE102016124888A1 (en)
WO (1) WO2018114244A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3378737A1 (en) * 2017-03-21 2018-09-26 MAN Truck & Bus AG Parking assistance system and method for the same
WO2020140431A1 (en) * 2019-01-04 2020-07-09 南京人工智能高等研究院有限公司 Camera pose determination method and apparatus, electronic device and storage medium
US20230288932A1 (en) * 2022-03-08 2023-09-14 Ford Global Technologies, Llc Dynamic automatic unparking

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022132204A1 (en) 2022-12-05 2024-06-06 Connaught Electronics Ltd. Generating or updating a digital representation of a trajectory, self-localization of an ego vehicle and at least partially automatic guidance of an ego vehicle

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090243889A1 (en) * 2008-03-27 2009-10-01 Mando Corporation Monocular motion stereo-based free parking space detection apparatus and method
US20130085637A1 (en) * 2010-06-09 2013-04-04 Valeo Schalter Und Sensoren Gmbh Method for assisting a driver of a motor vehicle when parking in a parking space, driver assistance device and a motor vehicle
US20140078258A1 (en) * 2012-09-17 2014-03-20 Nec Laboratories America, Inc. Real-time monocular visual odometry
DE102013015348A1 (en) 2013-09-17 2014-04-10 Daimler Ag Method for operating vehicle, particularly for approaching parking space in parking zone that is non-visible or distant from road by vehicle, involves determining and storing multiple trajectories for home-parking space of home parking zone
WO2016099866A1 (en) * 2014-12-19 2016-06-23 Qualcomm Incorporated Scalable 3d mapping system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013003117A1 (en) * 2013-02-25 2013-08-29 Daimler Ag Method for self localization of vehicle and for detecting objects in surrounding of passenger car, involves determining position of vehicle from position data when image characteristics coincide with comparison characteristics

Also Published As

Publication number Publication date
DE102016124888A1 (en) 2018-06-21

Similar Documents

Publication Publication Date Title
CN111060094B (en) Vehicle positioning method and device
US11273821B2 (en) Parking assistance method and parking assistance device
US9400897B2 (en) Method for classifying parking scenarios for a system for parking a motor vehicle
US9754173B2 (en) Smart parking assist apparatus and method
US8289189B2 (en) Camera system for use in vehicle parking
US10793142B2 (en) Server for operating a parking facility
US9046380B2 (en) Guiding apparatus, guiding method, and guiding program product
US9031731B2 (en) Apparatus and method for parking assistance
WO2018114244A1 (en) Method for assisting a driver of a motor vehicle when parking using a driver assistance device, corresponding computer program product and driver assistance device
US10983529B2 (en) Method and system for providing data for a first and second trajectory
EP3871935A1 (en) Parking space detection method and apparatus
US20180107207A1 (en) Automatic parking system and automatic parking method
US20170183001A1 (en) Parking assist apparatus
US10565875B2 (en) Apparatus and method for parking assist
CN103778795A (en) Apparatus and method for assisting parking in area without parking line
US20180178780A1 (en) Automatic parking method and system of vehicle
WO2018127365A1 (en) Method for assisting a driver of a motor vehicle when parking using a driver assistance device, corresponding computer program product and driver assistance device
CN106379235B (en) The implementation method and device of aid parking
US20150098622A1 (en) Image processing method and system of around view monitoring system
US20210380097A1 (en) Driving assistance apparatus, driving assistance method, and recording medium storing driving assistance program and readable by computer
JP2024050891A (en) Driving assistance device and driving assistance method
WO2018224356A1 (en) Method for providing stored data of a trained parking procedure, corresponding computer program product and system
WO2023122702A1 (en) Filtering of dynamic objects from vehicle generated map
CN113353068B (en) Parking control method and device, vehicle and medium
US11628829B2 (en) Operating a motor vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17805190

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17805190

Country of ref document: EP

Kind code of ref document: A1