WO2017042224A1 - Method for generating an environmental map of an environment of a motor vehicle based on an image of a camera, driver assistance system as well as motor vehicle - Google Patents

Method for generating an environmental map of an environment of a motor vehicle based on an image of a camera, driver assistance system as well as motor vehicle Download PDF

Info

Publication number
WO2017042224A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
determined
motor vehicle
environmental map
area
Prior art date
Application number
PCT/EP2016/071099
Other languages
French (fr)
Inventor
Sunil Chandra
Jonathan Horgan
Ciaran Hughes
Markus Heimberger
Jean-Francois Bariant
Axel DURBEC
Original Assignee
Connaught Electronics Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Connaught Electronics Ltd. filed Critical Connaught Electronics Ltd.
Publication of WO2017042224A1 publication Critical patent/WO2017042224A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Definitions

  • the present invention relates to a method for generating an environmental map of an environment of a motor vehicle, in which by means of an evaluation device at least one image of the environment is received from a camera of the motor vehicle and a freespace is determined in the environmental map based on the at least one image, which describes a part of the environment within a capturing range of the camera, which is free from obstacles for the motor vehicle.
  • the present invention relates to a driver assistance system for a motor vehicle.
  • the present invention relates to a motor vehicle with such a driver assistance system.
  • Driver assistance systems are known from the prior art hereto, which include for example at least one sensor, by which an environment of the motor vehicle can be captured. In particular, by the at least one sensor, it can be checked if an obstacle for the motor vehicle is located in the environment of the motor vehicle.
  • sensors can for example be cameras, ultrasonic sensors, radar sensors, lidar sensors or the like.
  • This environmental map describes the environment of the motor vehicle. Based on the data of the sensors, obstacles in the environment of the motor vehicle can be recognized and be entered into the environmental map. Based on the environmental map, the motor vehicle can then for example be autonomously parked or driven. Therein, the motor vehicle is in particular moved within a so-called freespace.
  • the freespace describes the part in the environment within a capturing range of the sensor, which is free from obstacles for the motor vehicle.
  • the environmental map can for example be generated as a vectorial map or as a grid map.
  • in a vectorial map, a probability of existence for each object is updated based on the measurements with respect to the freespace.
  • in a grid map, the information with respect to the freespace is integrated and stored depending on the time.
  • the freespace is further used to delete dynamic and static obstacles in the environmental map which are currently not captured or updated. This means that with a good model for the freespace, previous positions of dynamic obstacles are deleted very fast without deleting valid information on static obstacles. Within the freespace, previously captured static obstacles are also to be deleted, the position of which has changed since the last valid measurement. Further, static objects are to be deleted which, for example, have an error with respect to their position as a result of an update of the odometry data. Furthermore, erroneously captured objects can be deleted within the environmental map.
  • the freespace is determined in that a spatial area between the vehicle and a detected obstacle is determined.
  • a polygon can for example be entered into the environmental map, which represents the obstacle.
  • the polygon can represent the external boundaries, which can be captured by the sensor.
  • a model for the sensor can be taken into account, in which for example the capturing range, in which obstacles can be captured by the sensor, is taken into account.
  • DE 10 2010 018 994 A1 describes a method for operating a driver assistance system of a vehicle, wherein information about an environment of the vehicle is captured by at least one sensor of the driver assistance system and sensor data is provided from this information.
  • a digital environmental map can be calculated from the sensor data. Therein, it is provided that the environmental map is calculated in a common format for at least two different functionalities of the driver assistance system.
  • this object is solved by a method, by a driver assistance system as well as by a motor vehicle having the features of the respective independent claims.
  • Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
  • a method according to the invention serves for generating an environmental map of an environment of a motor vehicle.
  • an evaluation device at least one image of the environment is received from a camera of the motor vehicle and a freespace is determined in the environmental map based on the at least one image, which describes a part of the environment within a capturing range of the camera, which is free from obstacles for the motor vehicle.
  • at least one area is recognized in the at least one image by means of the evaluation device, which is traversable by the motor vehicle.
  • at least one border element is determined in the environmental map, which describes a spatial boundary of the at least one area.
  • the freespace is defined based on the at least one border element.
  • At least one image is received by means of an evaluation device, which was captured by a camera of the motor vehicle.
  • This at least one image of the camera describes the environment of the motor vehicle.
  • the at least one image of the camera describes an obstacle in the environment of the motor vehicle.
  • an image sequence with a plurality of images is provided by the camera and transmitted to the evaluation device.
  • the evaluation device can be constituted by a corresponding computing device, a microprocessor, a digital signal processor or the like.
  • the evaluation device is constituted by an electronic control unit of the motor vehicle.
  • the environmental map can then be provided by the evaluation device. In the digital environmental map, the environment of the motor vehicle is depicted, which is within the capturing range of the camera.
  • the capturing range describes the range, in which obstacles or objects can be captured by the camera.
  • multiple cameras are disposed on the motor vehicle and an image is provided by each camera.
  • the environmental map can then be determined based on the respective images of the cameras. This environmental map can for example be used to operate a driver assistance system of the motor vehicle. It can also be provided that the digital environmental map is presented on a display device of the motor vehicle.
  • a freespace can then be determined. For example, a model for the freespace can be determined.
  • the freespace in particular describes the area in the environment of the motor vehicle, which is free from obstacles for the motor vehicle.
  • An object in the environment of the motor vehicle is in particular an obstacle if the obstacle cannot be overrun by the motor vehicle. When the motor vehicle hits the obstacle, in particular damage of the motor vehicle impends.
  • Within the environmental map it can in particular be differentiated between static obstacles, thus non-moved obstacles, and dynamic obstacles, thus moved obstacles. Therein, it is in particular provided that the current position of the dynamic obstacles is displayed in the environmental map.
  • the earlier position of the dynamic obstacle, which was for example determined in a preceding measurement cycle or from a previously determined image, is deleted.
  • a good model for the freespace is characterized in that dynamic obstacles are deleted very fast in their preceding position without static obstacles being deleted or information on the static obstacles being lost.
  • obstacles erroneously detected are to be deleted in the environmental map.
  • static obstacles are further displayed in the environmental map and the position of dynamic obstacles can be fast updated.
  • an area is recognized in the at least one image provided by the camera by means of the evaluation device, which is traversable by the motor vehicle.
  • at least one border element is determined in the environmental map, which represents a spatial boundary of the at least one recognized area.
  • the freespace can then be defined within the environmental map.
  • obstacles in the environment of the motor vehicle are recognized with the aid of corresponding object recognition algorithms and the freespace is defined depending on the recognized position of the obstacle.
  • it can occur, however, that obstacles cannot be recognized, or cannot be reliably recognized, with the aid of object recognition algorithms.
  • the present invention is based on the realization that such an erroneous recognition of obstacles can be countered in that the area in the environment of the motor vehicle, which can be traversed by the motor vehicle, is determined based on the image.
  • thus, for the definition of the freespace in the environmental map, the spatial boundaries of a captured obstacle are not taken into account.
  • in this way, it can be reliably prevented that, upon erroneous recognition of an obstacle or upon non-recognition of an obstacle, a collision between the motor vehicle and the obstacle impends.
  • a road is recognized in the at least one image
  • the spatial boundaries of the road are determined in the at least one image and the at least one border element is determined in the environmental map based on the spatial boundaries of the road.
  • a first approach for determining the at least one area in the environment, which can be traversed by the motor vehicle, provides that a road is recognized in the at least one image.
  • based on the at least one image, a picture of the road can then for example be determined in the environmental map.
  • This spatial picture can depict the spatial boundaries of the road in the real world in the environmental map.
  • the at least one border element can then be determined in the environmental map, which in particular bounds the freespace in the environmental map.
  • the at least one image is segmented for recognizing the road: hereto, the at least one image is divided into multiple image blocks, an intensity in each of the image blocks is determined and the partial areas are classified as a road based on the determined intensity.
  • the at least one image provided by the camera can be segmented.
  • segmentation is to be understood as the generation of regions or areas contiguous in terms of content by combining adjacent pixels according to a certain homogeneity criterion.
  • individual pixels can for example be compared to their neighboring pixels and combined into a region if they are similar.
  • the at least one image is divided into a plurality of image blocks.
  • These image blocks can for example be arranged in multiple lines and multiple columns.
  • the image can be divided such that the image blocks are arranged in eight lines and eight columns.
  • for each of the image blocks, an intensity is determined.
  • a variation of the intensity of the respective image blocks can be determined.
  • those image blocks, the variation of the intensity of which falls below a predetermined threshold value, are selected.
  • those image blocks can be selected, in which the pixels have a substantially homogenous distribution to each other.
  • the respective brightness of the selected image blocks can be compared to a limit value, and those image blocks can be determined which are to be associated with a road.
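The block-based intensity segmentation described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the 8×8 block grid follows the example in the text, while the variance threshold, the brightness limit and the function name `classify_road_blocks` are assumptions:

```python
import numpy as np

def classify_road_blocks(image, rows=8, cols=8,
                         var_threshold=120.0, brightness_limit=110.0):
    """Classify image blocks as road based on intensity statistics.

    `image` is a 2-D greyscale array. Blocks with a low intensity
    variation (homogeneous blocks) whose mean brightness stays below
    a limit value are associated with the road surface. The threshold
    values here are illustrative, not taken from the patent.
    """
    h, w = image.shape
    bh, bw = h // rows, w // cols
    road = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            block = image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            # Select only homogeneous blocks: low variation of intensity.
            if block.var() < var_threshold:
                # Compare brightness against a limit value to decide
                # whether the block belongs to the road.
                road[r, c] = block.mean() < brightness_limit
    return road
```

A uniform, dark lower image half (asphalt-like) would be classified as road, while a noisy, bright upper half would not.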
  • a ground is recognized in the at least one image as the at least one area, the spatial boundaries of the ground are determined in the at least one image and the at least one border element is determined in the environmental map based on the spatial boundaries of the ground.
  • a second approach to recognize the at least one area, which can be traversed by the motor vehicle involves recognition of a ground in the image.
  • the ground also includes a road.
  • the ground can also describe an area next to the road, which can be traversed by the motor vehicle.
  • the ground can be associated with a parking lot, which is formed separately to the road.
  • on the ground, obstacles for the motor vehicle cannot be disposed.
  • if there are no objects, the height of which for example exceeds a predetermined limit value, the area in which the motor vehicle can be moved without collision can be determined in a reliable manner.
  • multiple objects are determined in the at least one image, a respective height of the objects in the at least one image is determined and those objects, the height of which falls below a predetermined limit value, are associated with the ground.
  • hereto, a corresponding three-dimensional object recognition algorithm can be used to recognize objects, the height of which falls below a predetermined limit value.
  • This predetermined limit value can for example be 10 cm or 25 cm.
  • these objects, the height of which falls below the limit value, are sorted out or not taken into account.
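The association of low objects with the ground can be illustrated by a short sketch; the dictionary layout of the detected objects and the helper name `split_ground_and_obstacles` are hypothetical, and the 25 cm limit is one of the example values from the text:

```python
def split_ground_and_obstacles(objects, height_limit_m=0.25):
    """Partition detected objects by their height.

    `objects` is a list of dicts with a 'height' entry in metres, as a
    hypothetical output of a 3-D object recognition step. Objects below
    the limit value (here 25 cm) are associated with the traversable
    ground; the remaining objects are treated as obstacles.
    """
    ground = [o for o in objects if o["height"] < height_limit_m]
    obstacles = [o for o in objects if o["height"] >= height_limit_m]
    return ground, obstacles
```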
  • an obstacle-free space is recognized in the at least one image as the at least one area, which is free from obstacles, spatial boundaries of the obstacle-free space are determined and the at least one border element is determined in the environmental map based on the spatial boundaries of the obstacle-free space.
  • the obstacles in the environment of the motor vehicle are actively recognized.
  • a corresponding three-dimensional object recognition algorithm can for example be used, by which objects can be recognized in the image, the height of which exceeds a predetermined limit value.
  • therein, it can also be provided that individual pixels in the image are associated with the obstacle with the aid of the object recognition algorithm and the pixels are connected to each other or clustered.
  • a spatial uncertainty can be taken into account to determine the spatial boundaries of the obstacle-free space, thus the area in the image, which is free from obstacles. In this manner, the freespace can be reliably determined.
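A simple way to account for such a spatial uncertainty when bounding the obstacle-free space is to shrink the free distance by a safety margin; the margin value and the helper name below are assumptions for illustration:

```python
def obstacle_free_distance(obstacle_distances_m, uncertainty_m=0.3):
    """Distance up to which the space is considered obstacle-free.

    Takes the distances (metres from the camera) of obstacles detected
    along one viewing direction and shrinks the free distance by a
    spatial-uncertainty margin. The 0.3 m margin is an assumed value.
    """
    if not obstacle_distances_m:
        return float("inf")  # nothing detected within the capturing range
    return max(0.0, min(obstacle_distances_m) - uncertainty_m)
```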
  • the capturing range is divided into a predetermined number of partial areas, a presence of the at least one area is checked in each of the partial areas and the at least one border element is determined for each of the partial areas, in which the at least one area is present.
  • the partial areas each have a shape of a circular segment, wherein a respective center of the partial areas is associated with a position of the environmental map.
  • the border element can be determined for each of the partial areas or segments.
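Mapping a detected point to one of the circular-sector partial areas can be sketched as follows; the number of sectors and the function name are assumptions, and the vehicle coordinate system is taken as the angular reference:

```python
import math

def sector_index(x_m, y_m, num_sectors=16):
    """Map a point (metres, vehicle coordinate system) to a partial area.

    The capturing range is divided into `num_sectors` circular sectors
    of equal angular width around the map position; the sector count is
    an assumed value, not prescribed by the patent.
    """
    angle = math.atan2(y_m, x_m) % (2 * math.pi)  # normalise to [0, 2*pi)
    width = 2 * math.pi / num_sectors
    return int(angle // width)
```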
  • a minimum value and a maximum value are preset for the at least one border element, wherein the minimum value and the maximum value each describe a distance to the camera.
  • a minimum value and a maximum value can be preset within the environmental map. The minimum value and the maximum value can be defined depending on the capturing range of the camera.
  • the border element is associated with the minimum value if the at least one area has a smaller distance to the camera than the distance assigned to the minimum value.
  • the at least one border element is associated with the maximum value if the at least one area is farther away from the camera than the distance associated with the maximum value.
  • the at least one border element can be reliably defined within the capturing range of the camera.
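The clamping of a border element between the preset minimum and maximum values can be sketched as follows; the concrete distance values are assumptions, not taken from the patent:

```python
def clamp_border_element(area_distance_m, min_value_m=0.5, max_value_m=10.0):
    """Clamp a border element's distance to the preset range.

    The minimum and maximum values each describe a distance to the
    camera; the defaults here are assumed values bounding the
    capturing range in the environmental map.
    """
    if area_distance_m < min_value_m:
        return min_value_m      # area closer than the minimum distance
    if area_distance_m > max_value_m:
        return max_value_m      # area farther away than the maximum distance
    return area_distance_m
```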
  • At least two areas different from each other are recognized in the at least one image by means of the evaluation device, for each of the at least two areas, the at least one border element is determined in the environmental map and the freespace is defined based on the respective border elements.
  • the road, the ground and/or the obstacle-free area are recognized in the at least one image by means of the evaluation device.
  • the three previously described methods can be used for recognizing the area, which can be traversed by the motor vehicle.
  • the results from these three methods are fused with each other. In this manner, the freespace can be reliably determined.
  • sensor data of at least one further sensor of the motor vehicle is received by means of the evaluation device and the freespace is additionally determined depending on the sensor data.
  • the at least one further sensor of the motor vehicle can for example be an ultrasonic sensor, a lidar sensor, a laser scanner or a radar sensor. Obstacles in the environment of the motor vehicle can be captured with the aid of this at least one further sensor. These captured obstacles can then additionally be entered into the environmental map. Furthermore, the sensor data can be used to verify the environmental map, which was determined based on the at least one image.
  • the evaluation device can for example be constituted by an electronic control unit of the motor vehicle.
  • the driver assistance system includes a plurality of cameras, which are disposed distributed on the motor vehicle.
  • a motor vehicle according to the invention includes a driver assistance system according to the invention.
  • the motor vehicle is in particular formed as a passenger car.
  • Fig. 1 a motor vehicle according to an embodiment of the present invention, which has a driver assistance system with four cameras;
  • Fig. 2 a schematic illustration of an environment of the motor vehicle, in which an obstacle is located, and in which a freespace is identified;
  • Fig. 3 respective capturing ranges of the four cameras, which are divided into partial areas;
  • Fig. 4 a schematic flow diagram of a method for generating an environmental map according to a first embodiment;
  • Fig. 5 an image of the camera, in which a road is recognized;
  • Fig. 6 a digital environmental map, which has been determined based on the image according to Fig. 5;
  • Fig. 7 a digital environmental map in a further embodiment;
  • Fig. 8 a schematic flow diagram of a method for generating an environmental map according to a second embodiment;
  • Fig. 9 an image of the camera, in which a ground is recognized;
  • Fig. 10 a digital environmental map, which has been created based on the image according to Fig. 9;
  • Fig. 11 a digital environmental map in a further embodiment;
  • Fig. 12 a schematic flow diagram of a method for generating an environmental map according to a third embodiment;
  • Fig. 13 an image of the camera, in which an obstacle-free space is recognized;
  • Fig. 14 a digital environmental map, which has been determined based on the image according to Fig. 13;
  • Fig. 15 a digital environmental map according to a further embodiment;
  • Fig. 16 a schematic flow diagram of a method for generating an environmental map according to a fourth embodiment.
  • Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view.
  • the motor vehicle 1 is formed as a passenger car.
  • the motor vehicle 1 includes a driver assistance system 2, which serves for assisting the driver of the motor vehicle in driving the motor vehicle 1.
  • the driver assistance system 2 includes an evaluation device 3, which can for example be constituted by an electronic control unit of the motor vehicle 1.
  • the driver assistance system 2 includes at least one camera 4.
  • the driver assistance system 2 includes four cameras 4, which are disposed distributed on the motor vehicle 1.
  • one of the cameras 4 is disposed in a front area 5
  • one of the cameras 4 is disposed in a rear area 6
  • two of the cameras 4 are disposed in respective lateral areas 7, in particular in areas of the wing mirrors.
  • At least one image 17 of an environmental region 8 of the motor vehicle 1 can be captured by the respective cameras 4.
  • the cameras 4 are connected to the evaluation device 3 for data transfer. Corresponding data lines are presently not illustrated for the sake of clarity.
  • the respective images 17 of the cameras 4 can be evaluated by means of the evaluation device.
  • Fig. 2 shows the motor vehicle 1 in a plan view.
  • an obstacle 9 is located in the environment 8 of the motor vehicle 1.
  • the obstacle 9 in particular represents an object, which cannot be overrun by the motor vehicle 1.
  • the obstacle 9 can be captured by one of the cameras 4, in particular by the camera 4 disposed in the front area 5 of the motor vehicle 1.
  • an image 17 can be captured by the camera 4 in a region of interest 10. This image 17 can be evaluated by the evaluation device 3.
  • the obstacle 9 can be recognized by a corresponding object recognition algorithm.
  • the obstacle 9 can only be recognized with a spatial uncertainty. Presently, this is illustrated by the dashed lines 11.
  • a so-called freespace 12 is now to be defined in the environment 8 of the motor vehicle 1.
  • the freespace 12 describes an area in the environment 8 of the motor vehicle 1, which is free from obstacles 9 for the motor vehicle 1.
  • a digital environmental map 21 is determined, which describes the environment 8 of the motor vehicle 1.
  • the freespace 12 or a picture of the freespace 12 is also to be defined within this digital environmental map 21.
  • the information with respect to the freespace 12 can then be used by the driver assistance system 2 to for example semi-autonomously maneuver the motor vehicle 1.
  • the freespace 12 is defined such that it does not coincide with the spatial boundaries of the obstacle 9. The reason for this is that upon capture or recognition of the obstacle 9, spatial uncertainties can occur. In addition, it can be the case that the obstacle 9 is not recognized or not reliably recognized.
  • the freespace 12 can only be captured in a predetermined capturing range 13 (see Fig. 3) of the respective camera 4. Obstacles 9 cannot be captured outside of the capturing range 13. Furthermore, it is not possible to determine a freespace 12 which, seen from the camera 4, lies behind the obstacle 9. The area behind the obstacle 9 is covered by the obstacle 9 and thus cannot be reliably captured.
  • it can also occur that the obstacles 9 cannot be recognized or cannot be reliably recognized.
  • a confidence value is stored for the respective obstacles, which are recognized.
  • the confidence value describes the probability, with which the obstacle 9 was captured in the real environment 8 of the motor vehicle 1.
  • a confidence value is also stored in the digital environmental map for the freespace 12 or a picture of the freespace 12. If the obstacle 9 is for example located within the freespace 12 in the digital environmental map 21, the confidence value of the obstacle 9 can be reduced and the confidence value of the freespace 12 can be increased. If the obstacle 9 is disposed within the freespace 12 for a predetermined period of time and the confidence value of the freespace 12 is higher than the confidence value of the obstacle 9, the obstacle 9 can be removed from the environmental map 21.
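The confidence bookkeeping described above can be sketched as follows; the field names, the decay and growth rates and the hold time are illustrative assumptions, not values from the patent:

```python
def update_map(obstacles, freespace_conf, dt, hold_time_s=2.0):
    """Confidence bookkeeping for obstacles lying inside the freespace.

    Each obstacle is a dict with 'conf' (existence probability), an
    'in_freespace' flag and an 'age_in_freespace' timer in seconds.
    While an obstacle lies inside the freespace, its confidence decays
    and the freespace confidence grows; after the hold time, the
    obstacle is removed once the freespace is the more credible of the
    two. All rates here are assumed values.
    """
    kept = []
    for obs in obstacles:
        if obs["in_freespace"]:
            obs["conf"] = max(0.0, obs["conf"] - 0.2 * dt)       # reduce obstacle confidence
            freespace_conf = min(1.0, freespace_conf + 0.1 * dt)  # increase freespace confidence
            obs["age_in_freespace"] += dt
            if (obs["age_in_freespace"] >= hold_time_s
                    and freespace_conf > obs["conf"]):
                continue  # drop the obstacle from the environmental map
        kept.append(obs)
    return kept, freespace_conf
```

Repeated updates with an obstacle stuck inside the freespace eventually remove it from the map without touching obstacles outside the freespace.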
  • Fig. 3 shows the motor vehicle 1 and the respective capturing ranges 13 of the cameras 4, wherein the cameras 4 are presently not illustrated.
  • the respective capturing ranges 13 are divided into a plurality of partial areas 14, which here each have the shape of a circular sector.
  • for each of the partial areas 14, a respective orientation is preset, which is indicated by the respective arrows 15. The orientations can be related to a vehicle coordinate system 16.
  • for defining the individual partial areas, a partial area 14 or a sector can be defined starting from an edge defined by the orientation 15 or the arrow.
  • Fig. 4 shows a schematic flow diagram of a method for generating an environmental map of the environment 8 of the motor vehicle 1 .
  • the image 17 captured by one of the cameras 4, which describes the environment 8 is received by the evaluation device 3.
  • a road 19 is recognized in the image 17.
  • the road 19 represents an area 25, which can be traversed by the motor vehicle 1 without a collision with an obstacle 9 impending.
  • spatial boundaries 20 of the road 19 are determined.
  • the freespace 12 is determined based on the spatial boundaries 20 of the road 19.
  • Fig. 5 shows the image 17 provided by the camera 4 of the motor vehicle 1 and transmitted to the evaluation device 3.
  • a region of interest 18 is defined within the image 17, in which a road 19 is to be recognized.
  • a segmentation can be performed.
  • the image 17 can be divided into a plurality of image blocks by means of the evaluation device 3. Then, an intensity can be determined for each of the image blocks. Those image blocks, the variation of the intensity of which falls below a predetermined threshold value, can be examined with respect to the brightness. Then, those image blocks which are associated with the road can be recognized with the aid of a corresponding histogram.
  • the individual image blocks, which are for example arranged in multiple columns and in multiple lines, can be examined.
  • the individual columns can be examined from the bottom to the top with the aid of a corresponding histogram. Therein, it can for example be checked if a predetermined number of contiguous image blocks, for example three contiguous image blocks, is present. In this case, the freespace 12 can then be determined. Therein, it can also be provided that the center of the last one of the image blocks, which are associated with the road 19, is determined in the row. This point can then be considered as the boundary 20 of the road 19. Thus, the spatial boundaries 20 of the road 19 can be defined.
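The column-wise bottom-to-top scan over the classified block grid can be sketched as follows; the run length of three contiguous blocks follows the example in the text, while the return convention and the function name `road_boundary_rows` are assumptions:

```python
def road_boundary_rows(road_blocks, min_run=3):
    """Bottom-to-top scan of the road-block grid, column by column.

    `road_blocks[r][c]` is True if the block in row r, column c was
    classified as road; row 0 is the top of the image. For each column,
    the contiguous run of road blocks from the bottom is counted; if at
    least `min_run` blocks are contiguous, the row index of the topmost
    road block of that run is taken as the boundary, else None.
    """
    rows = len(road_blocks)
    boundaries = []
    for c in range(len(road_blocks[0])):
        run = 0
        for r in range(rows - 1, -1, -1):  # examine from the bottom to the top
            if road_blocks[r][c]:
                run += 1
            else:
                break
        boundaries.append(rows - run if run >= min_run else None)
    return boundaries
```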
  • Fig. 6 shows the environmental map 21 , which was determined based on the image 17 according to Fig. 5.
  • the areas 14 or segments of the capturing range 13 are apparent.
  • a border element 22 was determined based on the spatial boundaries 20 of the road 19.
  • a picture 23 of the motor vehicle 1 and a picture 24 of the camera 4 are presented in the digital environmental map 21 .
  • the freespace 12 can be defined in the environmental map 21 .
  • Fig. 7 shows an environmental map 21 according to a further embodiment.
  • the environmental map 21 shows lines 26, which describe the spatial boundaries 20 of a road 19.
  • the lines 26 represent a picture of the road 19 or of the boundaries 20 of the road 19.
  • the respective border elements 22 are determined.
  • a minimum value 27 and a maximum value 28 are preset for the freespace 12. If the area 25 in the respective partial area 14 has a smaller distance to the camera 4 than the distance associated with the minimum value 27, the border element 22 is associated with the minimum value 27. If the area 25 has a larger distance to the camera 4 than the distance associated with the maximum value 28, the border element 22 is associated with the maximum value 28.
  • the border element 22 is associated with the maximum value 28 and additionally the type "unknown" is associated with the border element 22.
  • this is the case in the area indicated by the arrow 31 .
  • a picture 32 of a further vehicle is presented in the environmental map 21 .
  • polylines 33 are presented, which represent the external boundaries of the further vehicle.
  • the further vehicle can for example be captured by a further sensor of the motor vehicle 1 .
  • the information extracted based on the image 17 and entered into the environmental map 21 is independent of the data determined by the further sensor of the motor vehicle 1 and entered into the environmental map 21 .
  • Fig. 8 shows a method for generating an environmental map 21 according to a further embodiment.
  • the image 17 is provided by the camera 4 in a step S1 .
  • a ground 34 is recognized as the at least one area 25.
  • the spatial boundaries 35 of the ground 34 are recognized.
  • the border elements 22 are determined in the environmental map 21 .
  • the freespace 12 is determined in the step S4.
  • Fig. 9 shows the image 17, which has been captured by the camera 4.
  • the image 17 is evaluated by means of the evaluation device 3 with the aid of a corresponding three-dimensional object recognition algorithm.
  • the ground 34 is recognized as the area 25 in the image 17.
  • those objects 36 are determined in the image 17, the height of which is less than a predetermined limit value, for example 25 cm.
  • Fig. 10 shows the environmental map 21 , which has been determined based on the image 17 according to Fig. 9.
  • the respective partial areas 14 or segments of the capturing range 13 are apparent.
  • the border elements 22 are determined.
  • Fig. 11 shows an environmental map 21 according to a further embodiment.
  • corresponding pictures 37 are presented, which represent the objects 36 of the ground 34.
  • the respective shape of the pictures 37 results from the spatial uncertainty of the capture.
  • the respective border elements 22 are determined.
  • Fig. 12 shows a method for generating an environmental map 21 according to a third embodiment.
  • the image 17 is provided by means of the camera 4 and transmitted to the evaluation device 3 in a step S1.
  • an object 38 is recognized in the image 17.
  • an obstacle-free space 39 is recognized as the area 25.
  • the spatial boundaries 40 of the obstacle-free space 39 are determined in a step S3".
  • the freespace 12 is determined in a step S4.
  • Fig. 13 shows the image 17, which is provided by the camera 4.
  • objects 38 are determined by means of a three-dimensional object recognition algorithm. Based on the recognized objects 38, the obstacle-free space 39 is defined. In addition, the spatial boundaries 40 of the obstacle-free space 39 are defined.
  • Fig. 14 shows the environmental map 21, which was determined based on the image 17 according to Fig. 13.
  • the individual partial areas 14 are apparent, in which the border elements 22 are determined based on the spatial boundaries 40 of the obstacle-free space 39.
  • Fig. 15 shows an environmental map 21 according to a further embodiment.
  • corresponding pictures 41 , 42 of the recognized objects 38 are presented.
  • the pictures 41 show the objects 38, which are not part of the polyline 33.
  • the pictures 42 describe those objects 38, which are part of the polyline 33.
  • the pictures 41 , 42 are indicated with a spatial uncertainty.
  • Fig. 16 shows a schematic flow diagram of a method for generating the environmental map 21 according to a fourth embodiment.
  • the method steps of the methods according to figures 4, 8 and 12 are combined.
  • the respective results of the individual methods for determining the areas 25 are fused with each other in an additional step S3a. In this manner, the freespace 12 can be more reliably determined.
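The fusion step S3a can be sketched as a conservative per-sector combination of the border-element distances delivered by the three methods; taking the minimum per sector is an assumption of this illustration, not prescribed by the patent:

```python
def fuse_border_elements(road_m, ground_m, obstacle_free_m):
    """Fuse per-sector freespace distances from the three methods.

    Each argument is a list of border-element distances (metres), one
    per partial area / sector, as delivered by road recognition, ground
    recognition and obstacle-free-space recognition. The minimum per
    sector is a conservative fusion choice: the freespace never extends
    beyond what any single method allows.
    """
    return [min(r, g, o) for r, g, o in zip(road_m, ground_m, obstacle_free_m)]
```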


Abstract

The invention relates to a method for generating an environmental map (21) of an environment (8) of a motor vehicle (1), in which by means of an evaluation device (3) at least one image (17) of the environment (8) is received from a camera (4) of the motor vehicle (1) and a freespace (12) is determined in the environmental map (21) based on the at least one image (17), which describes a part of the environment (8) within a capturing range (13) of the camera (4), which is free from obstacles (9) for the motor vehicle (1), wherein by means of the evaluation device (3) at least one area (25) is recognized in the at least one image (17), which is traversable by the motor vehicle (1), at least one border element (22) is determined in the environmental map (21), which describes a spatial boundary (20, 35, 40) of the at least one recognized area (25), and the freespace (12) is defined based on the at least one border element (22).

Description

Method for generating an environmental map of an environment of a motor vehicle based on an image of a camera, driver assistance system as well as motor vehicle
The present invention relates to a method for generating an environmental map of an environment of a motor vehicle, in which by means of an evaluation device at least one image of the environment is received from a camera of the motor vehicle and a freespace is determined in the environmental map based on the at least one image, which describes a part of the environment within a capturing range of the camera, which is free from obstacles for the motor vehicle. Moreover, the present invention relates to a driver assistance system for a motor vehicle. Finally, the present invention relates to a motor vehicle with such a driver assistance system.
Presently, the interest is in particular directed to driver assistance systems for motor vehicles, which serve for assisting the driver in driving the motor vehicle. Driver assistance systems are known from the prior art hereto, which include for example at least one sensor, by which an environment of the motor vehicle can be captured. In particular, by the at least one sensor, it can be checked if an obstacle for the motor vehicle is located in the environment of the motor vehicle. Such sensors can for example be cameras, ultrasonic sensors, radar sensors, lidar sensors or the like.
In order to avoid a collision between the motor vehicle and an obstacle in its environment, today's driver assistance systems increasingly use a so-called environmental map. This environmental map describes the environment of the motor vehicle. Based on the data of the sensors, obstacles in the environment of the motor vehicle can be recognized and entered into the environmental map. Based on the environmental map, the motor vehicle can then, for example, be parked or driven autonomously. Therein, the motor vehicle is in particular moved within a so-called freespace. The freespace describes the part of the environment within a capturing range of the sensor, which is free from obstacles for the motor vehicle.
The environmental map can for example be generated as a vectorial map or as a grid map. In a vectorial environmental map, a probability of existence for each object is updated based on the measurements with respect to the freespace. In a grid map, the information with respect to the freespace is integrated and stored over time. The freespace is further used to delete dynamic and static obstacles in the environmental map, which are currently not captured or updated. This means that with a good model for the freespace, previous positions of dynamic obstacles are deleted very quickly without deleting valid information about static obstacles. Within the freespace, previously captured static obstacles are also to be deleted if their position has changed since the last valid measurement. Further, static objects are to be deleted which, for example, have an error with respect to their position as a result of an update of the odometry data. Furthermore, erroneously captured objects can be deleted within the environmental map.
In most of the driver assistance systems, the freespace is determined in that a spatial area between the vehicle and a detected obstacle is determined. Hereto, a polygon can for example be entered into the environmental map, which represents the obstacle. In particular, the polygon can represent the external boundaries, which can be captured by the sensor. Further, for providing the environmental map or for defining the freespace, a model for the sensor can be taken into account, in which for example the capturing range, in which obstacles can be captured by the sensor, is taken into account.
Hereto, DE 10 2010 018 994 A1 describes a method for operating a driver assistance system of a vehicle, wherein information about an environment of the vehicle is captured by at least one sensor of the driver assistance system and sensor data is provided from this information. A digital environmental map can be calculated from the sensor data. Therein, it is provided that the environmental map is calculated in a common format for at least two different functionalities of the driver assistance system.
It is the object of the present invention to demonstrate a solution, how an environmental map of an environment of a motor vehicle can be more reliably provided.
According to the invention, this object is solved by a method, by a driver assistance system as well as by a motor vehicle having the features of the respective independent claims. Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
A method according to the invention serves for generating an environmental map of an environment of a motor vehicle. Herein, by means of an evaluation device at least one image of the environment is received from a camera of the motor vehicle and a freespace is determined in the environmental map based on the at least one image, which describes a part of the environment within a capturing range of the camera, which is free from obstacles for the motor vehicle. In addition, at least one area is recognized in the at least one image by means of the evaluation device, which is traversable by the motor vehicle. In addition, at least one border element is determined in the environmental map, which describes a spatial boundary of the at least one area. Finally, the freespace is defined based on the at least one border element.
In the method, at least one image is received by means of an evaluation device, which was captured by a camera of the motor vehicle. This at least one image of the camera describes the environment of the motor vehicle. Preferably, the at least one image of the camera describes an obstacle in the environment of the motor vehicle. Therein, it can also be provided that an image sequence with a plurality of images is provided by the camera and transmitted to the evaluation device. The evaluation device can be constituted by a corresponding computing device, a microprocessor, a digital signal processor or the like. In particular, the evaluation device is constituted by an electronic control unit of the motor vehicle. The environmental map can then be provided by the evaluation device. In the digital environmental map, the environment of the motor vehicle is depicted, which is within the capturing range of the camera. The capturing range describes the range, in which obstacles or objects can be captured by the camera. Herein, it can also be provided that multiple cameras are disposed on the motor vehicle and an image is provided by each camera. The environmental map can then be determined based on the respective images of the cameras. This environmental map can for example be used to operate a driver assistance system of the motor vehicle. It can also be provided that the digital environmental map is presented on a display device of the motor vehicle.
Based on the digital environmental map, a freespace can then be determined. For example, a model for the freespace can be determined. The freespace in particular describes the area in the environment of the motor vehicle, which is free from obstacles for the motor vehicle. An object in the environment of the motor vehicle is in particular an obstacle if it cannot be overrun by the motor vehicle. When the motor vehicle hits the obstacle, in particular, damage to the motor vehicle impends. Within the environmental map, it can in particular be differentiated between static obstacles, thus non-moving obstacles, and dynamic obstacles, thus moving obstacles. Therein, it is in particular provided that the current position of the dynamic obstacles is displayed in the environmental map. Hereto, it is required that the earlier position of the dynamic obstacle, which was for example determined in a preceding measurement cycle or from a previously determined image, is deleted. In particular, a good model for the freespace is characterized in that dynamic obstacles are deleted very quickly at their preceding position without static obstacles or information about the static obstacles being lost. Furthermore, erroneously detected obstacles are to be deleted in the environmental map. Thus, it can be achieved that static obstacles continue to be displayed in the environmental map and the position of dynamic obstacles can be updated quickly.
According to the invention, it is now provided that an area, which is traversable by the motor vehicle, is recognized by means of the evaluation device in the at least one image provided by the camera. Based on the recognized area in the image, at least one border element is determined in the environmental map, which represents a spatial boundary of the at least one recognized area. Based on the at least one border element, the freespace can then be defined within the environmental map. In known methods from the prior art, for example, obstacles in the environment of the motor vehicle are recognized with the aid of corresponding object recognition algorithms and the freespace is defined depending on the recognized position of the obstacle. However, it can be the case therein that the obstacles cannot be recognized reliably, or cannot be recognized at all, with the aid of object recognition algorithms. The present invention is based on the realization that such an erroneous recognition of obstacles can be countered by determining, based on the image, that area in the environment of the motor vehicle which can be traversed by the motor vehicle. Herein, for the definition of the freespace in the environmental map, the spatial boundaries of a captured obstacle are thus not relied upon. Thus, it can be reliably prevented that, upon erroneous recognition or non-recognition of an obstacle, a collision between the motor vehicle and the obstacle impends.
Preferably, as the at least one area, a road is recognized in the at least one image, the spatial boundaries of the road are determined in the at least one image and the at least one border element is determined in the environmental map based on the spatial boundaries of the road. A first approach for determining the at least one area in the environment, which can be traversed by the motor vehicle, provides that a road is recognized in the at least one image. Hereto, the at least one image can be correspondingly evaluated with the aid of the evaluation device and the spatial boundaries or the spatial extension of the road can be recognized in the image. Based on the spatial boundaries of the road, a picture of the road can then for example be determined in the environmental map. This spatial picture can depict the spatial boundaries of the road in the real world in the environmental map. Based on the boundaries of the road, the at least one border element can then be determined in the environmental map, which in particular bounds the freespace in the environmental map. Thus, an area in the environment of the motor vehicle can be determined in reliable manner, which can be traversed by the motor vehicle without a collision with an obstacle impending.
In an embodiment, for recognizing the road, the at least one image is segmented; hereto, the at least one image is divided into multiple image blocks, an intensity is determined in each of the image blocks and the image blocks are classified as road based on the determined intensity. For recognizing the road, the at least one image provided by the camera can be segmented. In particular, segmentation is to be understood as the generation of regions or areas contiguous in terms of content by combining adjacent pixels according to a certain homogeneity criterion. Hereto, individual pixels can for example be compared to their neighboring pixels and combined into a region if they are similar.
Presently, the at least one image is divided into a plurality of image blocks. These image blocks can for example be arranged in multiple lines and multiple columns. For example, the image can be divided such that the image blocks are arranged in eight lines and eight columns. For each of the image blocks, an intensity is determined. In particular, a variation of the intensity of the respective image blocks can be determined. Furthermore, it can be provided that those image blocks, the variation of the intensity of which falls below a predetermined threshold value, are selected. Thus, those image blocks can be selected, in which the pixels have a substantially homogeneous distribution with respect to each other. These selected image blocks can then be differentiated with respect to the brightness and thus be associated with objects in the real world in the environment of the motor vehicle.
Therein, the respective brightness of the selected image blocks can be compared to a limit value and those image blocks can be determined, which are to be associated with a road. Therein, it can also be provided that multiple contiguous image blocks are determined, which are associated with the class of the road, and based on these contiguous image blocks, the border element is determined in the environmental map. Thereby, a road and its spatial boundaries can be determined in reliable manner.
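The block-based classification described in the preceding paragraphs can be sketched as follows; a hypothetical Python/NumPy illustration in which the 8x8 grid matches the example in the text, while the variance threshold and brightness limit are assumed placeholder values:

```python
import numpy as np

def classify_road_blocks(image, rows=8, cols=8, var_limit=200.0, brightness_limit=90.0):
    """Divide a grayscale image into rows x cols blocks, keep blocks whose
    intensity variation falls below var_limit (i.e. homogeneous blocks),
    and label a kept block as road when its mean brightness exceeds
    brightness_limit. Returns a rows x cols boolean mask of road blocks.
    Threshold values are illustrative assumptions."""
    h, w = image.shape
    bh, bw = h // rows, w // cols
    road = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            block = image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            if block.var() < var_limit and block.mean() > brightness_limit:
                road[r, c] = True
    return road
```

A histogram-based brightness comparison, as mentioned in the description of Fig. 5, could replace the simple mean threshold used here.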
According to a further embodiment, a ground is recognized in the at least one image as the at least one area, the spatial boundaries of the ground are determined in the at least one image and the at least one border element is determined in the environmental map based on the spatial boundaries of the ground. A second approach to recognize the at least one area, which can be traversed by the motor vehicle, involves recognition of a ground in the image. Therein, it can be provided that the ground also includes a road. The ground can also describe an area next to the road, which can be traversed by the motor vehicle. For example, the ground can be associated with a parking lot, which is formed separately to the road. In the area, which is recognized as the ground, in particular, obstacles for the motor vehicle cannot be disposed. In other words, in the area recognized as the ground, there are no objects, the height of which for example exceeds a predetermined limit value. Thus, the area, in which the motor vehicle can be moved without collision, can be determined in reliable manner.
Preferably, for recognizing the ground, multiple objects are determined in the at least one image, a respective height of the objects in the at least one image is determined and those objects, the height of which falls below a predetermined limit value, are associated with the ground. Hereto, a corresponding three-dimensional object recognition algorithm can for example be used. In commonly used object recognition algorithms, objects, the height of which falls below a predetermined limit value, are not taken into account. This predetermined limit value can for example be 10 cm or 25 cm. In the commonly used object recognition algorithms, these objects, the height of which falls below the limit value, are sorted out or not taken into account. These objects are now used to recognize the ground in the at least one image. Thus, the ground in the environment of the motor vehicle can be recognized in simple and reliable manner and the freespace can be determined depending on the recognized ground.
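As a minimal sketch of this filtering step (the data shapes are assumptions; the 25 cm limit is one of the example values from the text):

```python
def ground_objects(objects, height_limit=0.25):
    """Split detections from a 3-D object recognition step into ground
    and obstacles. Each object is assumed to be a (label, height_m) pair.
    Objects whose height falls below height_limit are associated with
    the drivable ground; taller ones are kept as obstacles."""
    ground = [o for o in objects if o[1] < height_limit]
    obstacles = [o for o in objects if o[1] >= height_limit]
    return ground, obstacles
```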
According to a further embodiment, it is provided that an obstacle-free space, which is free from obstacles, is recognized in the at least one image as the at least one area, spatial boundaries of the obstacle-free space are determined and the at least one border element is determined in the environmental map based on the spatial boundaries of the obstacle-free space. According to a third approach, it is provided that the obstacles in the environment of the motor vehicle are actively recognized. Hereto, a corresponding three-dimensional object recognition algorithm can for example be used, by which objects can be recognized in the image, the height of which exceeds a predetermined limit value. Therein, it can also be provided that individual pixels in the image are associated with the obstacle with the aid of the object recognition algorithm and the pixels are connected to each other or clustered. In recognizing the obstacles, in addition, a spatial uncertainty can be taken into account to determine the spatial boundaries of the obstacle-free space, thus the area in the image, which is free from obstacles. In this manner, the freespace can be reliably determined.
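One plausible way to apply the spatial uncertainty mentioned above is the following sketch; it assumes clustered obstacle detections given as a distance per bearing, and the margin value is a placeholder:

```python
def obstacle_free_boundary(obstacle_points, uncertainty=0.3):
    """Derive the boundary of the obstacle-free space from obstacle
    detections. For each bearing, the boundary is the detected obstacle
    distance minus an assumed uncertainty margin, so the freespace never
    extends into the uncertainty region of a detection."""
    return {bearing: max(0.0, d - uncertainty)
            for bearing, d in obstacle_points.items()}
```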
According to a further configuration, it is provided that the capturing range is divided into a predetermined number of partial areas, a presence of the at least one area is checked in each of the partial areas and the at least one border element is determined for each of the partial areas, in which the at least one area is present. Therein, it can in particular be provided that the partial areas each have the shape of a circular segment, wherein a respective center of the partial areas is associated with a position of the environmental map. Thus, for each of the partial areas or segments, it can be checked if an area, which is traversable for the motor vehicle, is located therein. Furthermore, the border element can be determined for each of the partial areas or segments. Thus, computational power can be saved and the digital environmental map can be provided faster.
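Assigning a point in the environment to one of these equally sized circular segments might look like the following sketch; the number of sectors, the field of view and the camera-frame convention are assumptions, not values from the text:

```python
import math

def sector_index(x, y, num_sectors=16, fov_deg=180.0, heading_deg=0.0):
    """Assign a point (x, y) in an assumed camera frame (x forward,
    y left) to one of num_sectors equally sized circular sectors
    covering the capturing range (field of view fov_deg, centred on
    heading_deg). Returns the sector index, or None if the point lies
    outside the capturing range."""
    angle = math.degrees(math.atan2(y, x)) - heading_deg
    half = fov_deg / 2.0
    if not -half <= angle <= half:
        return None
    width = fov_deg / num_sectors
    return min(int((angle + half) // width), num_sectors - 1)
```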
Furthermore, it is advantageous if a minimum value and a maximum value are preset for the at least one border element, wherein the minimum value and the maximum value each describe a distance to the camera. A minimum value and a maximum value can be preset within the environmental map. The minimum value and the maximum value can be defined depending on the capturing range of the camera. Therein, it can be provided that the border element is associated with the minimum value if the at least one area has a lower distance to the camera than the distance assigned to the minimum value.
Furthermore, it can be provided that the at least one border element is associated with the maximum value if the at least one area is farther away from the camera than the distance associated with the maximum value. Thus, the at least one border element can be reliably defined within the capturing range of the camera.
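The clamping rule of these two paragraphs can be sketched as follows; the minimum and maximum distances used here are placeholder assumptions:

```python
def clamp_border_distance(d, d_min=0.5, d_max=10.0):
    """Clamp the border element distance to the preset minimum and
    maximum capturing distances. If no traversable area was found
    (d is None), the maximum is used and the border element is
    additionally typed 'unknown', as described for Fig. 7."""
    if d is None:
        return d_max, "unknown"
    return min(max(d, d_min), d_max), "boundary"
```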
According to a preferred embodiment, at least two areas different from each other are recognized in the at least one image by means of the evaluation device, for each of the at least two areas, the at least one border element is determined in the environmental map and the freespace is defined based on the respective border elements. Therein, it can for example be provided that the road, the ground and/or the obstacle-free area are recognized in the at least one image by means of the evaluation device. For recognizing the area, which can be traversed by the motor vehicle, thus, the three previously described methods can be used. Therein, it is in particular provided that the results from these three methods are fused with each other. In this manner, the freespace can be reliably determined. However, it is basically possible to consider the results of the three different approaches independently of each other or for example fuse the results of only two of the three approaches with each other.
Furthermore, it is advantageous if sensor data of at least one further sensor of the motor vehicle is received by means of the evaluation device and the freespace is additionally determined depending on the sensor data. The at least one further sensor of the motor vehicle can for example be an ultrasonic sensor, a lidar sensor, a laser scanner or a radar sensor. Obstacles in the environment of the motor vehicle can be captured with the aid of this at least one further sensor. These captured obstacles can then additionally be entered into the environmental map. Furthermore, the sensor data can be used to verify the environmental map, which was determined based on the at least one image.
A driver assistance system according to the invention for a motor vehicle includes a camera and an evaluation device, which is adapted to perform a method according to the invention. The evaluation device can for example be constituted by an electronic control unit of the motor vehicle. Therein, it can in particular also be provided that the driver assistance system includes a plurality of cameras, which are disposed distributed on the motor vehicle.
A motor vehicle according to the invention includes a driver assistance system according to the invention. The motor vehicle is in particular formed as a passenger car.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the driver assistance system according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or alone, without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim.
The figures show:
Fig. 1 a motor vehicle according to an embodiment of the present invention, which has a driver assistance system with four cameras;
Fig. 2 a schematic illustration of an environment of the motor vehicle, in which an obstacle is located, and in which a freespace is identified;
Fig. 3 respective capturing ranges of the four cameras, which are divided into partial areas;
Fig. 4 a schematic flow diagram of a method for generating an environmental map according to a first embodiment;
Fig. 5 an image of the camera, in which a road is recognized;
Fig. 6 a digital environmental map, which has been determined based on the image according to Fig. 5;
Fig. 7 a digital environmental map in a further embodiment;
Fig. 8 a schematic flow diagram of a method for generating an environmental map according to a second embodiment;
Fig. 9 an image of the camera, in which a ground is recognized;
Fig. 10 a digital environmental map, which has been created based on the image according to Fig. 9;
Fig. 11 a digital environmental map in a further embodiment;
Fig. 12 a schematic flow diagram of a method for generating an environmental map according to a third embodiment;
Fig. 13 an image of the camera, in which an obstacle-free space is recognized;
Fig. 14 a digital environmental map, which has been determined based on the image according to Fig. 13;
Fig. 15 a digital environmental map according to a further embodiment; and
Fig. 16 a schematic flow diagram of a method for generating an environmental map according to a fourth embodiment.
In the figures, identical and functionally identical elements are provided with the same reference characters.
Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view. In the present case, the motor vehicle 1 is formed as a passenger car. The motor vehicle 1 includes a driver assistance system 2, which serves for assisting the driver of the motor vehicle in driving the motor vehicle 1. The driver assistance system 2 includes an evaluation device 3, which can for example be constituted by an electronic control unit of the motor vehicle 1. Moreover, the driver assistance system 2 includes at least one camera 4.
In the present embodiment, the driver assistance system 2 includes four cameras 4, which are disposed distributed on the motor vehicle 1. Herein, one of the cameras 4 is disposed in a front area 5, one of the cameras 4 is disposed in a rear area 6 and two of the cameras 4 are disposed in respective lateral areas 7, in particular in the areas of the wing mirrors. At least one image 17 of an environmental region 8 of the motor vehicle 1 can be captured by the respective cameras 4. The cameras 4 are connected to the evaluation device 3 for data transfer. Corresponding data lines are presently not illustrated for the sake of clarity. The respective images 17 of the cameras 4 can be evaluated by means of the evaluation device 3.
Fig. 2 shows the motor vehicle 1 in a plan view. An obstacle 9 is located in the environment 8 of the motor vehicle 1. The obstacle 9 in particular represents an object, which cannot be overrun by the motor vehicle 1. When the motor vehicle 1 is moved towards the obstacle 9, a collision between the motor vehicle 1 and the obstacle 9 impends. The obstacle 9 can be captured by one of the cameras 4, in particular by the camera 4 disposed in the front area 5 of the motor vehicle 1. Hereto, an image 17 can be captured by the camera 4 in a region of interest 10. This image 17 can be evaluated by the evaluation device 3. For example, the obstacle 9 can be recognized by a corresponding object recognition algorithm. Therein, the obstacle 9 can only be recognized with a spatial uncertainty. Presently, this is illustrated by the dashed lines 11. Presently, a so-called freespace 12 is now to be defined in the environment 8 of the motor vehicle 1. The freespace 12 describes an area in the environment 8 of the motor vehicle 1, which is free from obstacles 9 for the motor vehicle 1.
With the aid of the evaluation device 3, now, a digital environmental map 21 is to be provided, which describes the environment 8 of the motor vehicle 1. The freespace 12 or a picture of the freespace 12 is also to be defined within this digital environmental map 21. The information with respect to the freespace 12 can then be used by the driver assistance system 2 to, for example, semi-autonomously maneuver the motor vehicle 1. Presently, it is apparent that the freespace 12 is defined such that it does not coincide with the spatial boundaries of the obstacle 9. The reason for this is that upon capture or recognition of the obstacle 9, spatial uncertainties can occur. In addition, it can be the case that the obstacle 9 is not reliably recognized or not recognized at all.
Therein, the freespace 12 can only be captured in a predetermined capturing range 13 (see Fig. 3) of the respective camera 4. Obstacles 9 cannot be captured outside of the capturing range 13. Furthermore, it is not possible to determine a freespace 12 behind the obstacle 9 as seen from the camera 4. The area behind the obstacle 9 is covered by the obstacle 9 and thus cannot be reliably captured.
Furthermore, it should basically be taken into account that the obstacles 9 cannot always be recognized or cannot be reliably recognized. Hereto, it can be provided that a confidence value is stored in the environmental map 21 for each of the recognized obstacles 9. The confidence value describes the probability, with which the obstacle 9 was captured in the real environment 8 of the motor vehicle 1. Therein, it can also be provided that a confidence value is stored in the digital environmental map for the freespace 12 or a picture of the freespace 12. If the obstacle 9 is, for example, located within the freespace 12 in the digital environmental map 21, the confidence value of the obstacle 9 can be reduced and the confidence value of the freespace 12 can be increased. If the obstacle 9 is disposed within the freespace 12 of the digital environmental map 21 for a predetermined period of time and the confidence value of the freespace 12 is higher than the confidence value of the obstacle 9, the obstacle 9 can be removed from the environmental map 21.
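The confidence bookkeeping described above could look like the following sketch; the step size, the frame threshold and the exact removal condition are assumptions consistent with the text, not values from the patent:

```python
def update_confidences(obstacle_conf, freespace_conf, inside_freespace,
                       frames_inside, min_frames=5, step=0.1):
    """One update cycle of the confidence rule: while an obstacle lies
    inside the freespace, its confidence is reduced and the freespace
    confidence increased; once it has stayed inside for min_frames
    frames AND the freespace confidence exceeds the obstacle confidence,
    the obstacle is flagged for removal from the map."""
    if inside_freespace:
        obstacle_conf = max(0.0, obstacle_conf - step)
        freespace_conf = min(1.0, freespace_conf + step)
        frames_inside += 1
    else:
        frames_inside = 0  # obstacle re-confirmed outside the freespace
    remove = frames_inside >= min_frames and freespace_conf > obstacle_conf
    return obstacle_conf, freespace_conf, frames_inside, remove
```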
Fig. 3 shows the motor vehicle 1 and the respective capturing ranges 13 of the cameras 4, wherein the cameras 4 are presently not illustrated. Presently, the respective capturing ranges 13 are divided into a plurality of partial areas 14, which here each have the shape of a circular sector. Further, for defining the freespace 12, a respective orientation is preset, which is indicated by the respective arrows 15. They can be related to a vehicle coordinate system 16. For defining the individual partial areas 14, starting from an end defined by the orientation 15 or the arrow, a partial area 14 or a sector can respectively be added. Therein, it can in particular be provided that the respective partial areas 14 or sectors are identically sized.
Fig. 4 shows a schematic flow diagram of a method for generating an environmental map of the environment 8 of the motor vehicle 1. In a step S1, the image 17 captured by one of the cameras 4, which describes the environment 8, is received by the evaluation device 3. In a following step S2, a road 19 is recognized in the image 17. The road 19 represents an area 25, which can be traversed by the motor vehicle 1 without a collision with an obstacle 9 impending. In a further step S3, spatial boundaries 20 of the road 19 are determined. Finally, in a step S4, the freespace 12 is determined based on the spatial boundaries 20 of the road 19.
Fig. 5 shows the image 17 provided by the camera 4 of the motor vehicle 1 and transmitted to the evaluation device 3. A region of interest 18 is defined within the image 17, in which a road 19 is to be recognized. For recognizing the road 19, a segmentation can be performed. Hereto, the image 17 can be divided into a plurality of image blocks by means of the evaluation device 3. Then, an intensity can be determined for each of the image blocks. Those image blocks, the intensity of which falls below a predetermined threshold value, can be examined with respect to the brightness. Then, those image blocks can be recognized with the aid of a corresponding histogram, which are associated with the road. For determining the freespace 12, the individual image blocks, which are for example arranged in multiple columns and in multiple lines, can be examined. Therein, the individual columns can be examined from the bottom to the top with the aid of a corresponding histogram. Therein, it can for example be checked if a predetermined number of contiguous image blocks, for example three contiguous image blocks, is present. In this case, the freespace 12 can then be determined. Therein, it can also be provided that the center of the last one of the image blocks, which are associated with the road 19, is determined in the row. This point can then be considered as the boundary 20 of the road 19. Thus, the spatial boundaries 20 of the road 19 can be defined.
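The bottom-to-top column scan described for Fig. 5 can be sketched as follows; the boolean block mask as input and the run length of three contiguous blocks follow the text, while the exact data layout (row 0 at the top of the image) is an assumption:

```python
def road_boundary_rows(road_mask, min_run=3):
    """Scan each column of the block mask from the bottom up. If at
    least min_run contiguous road blocks are found starting at the
    bottom, the row index of the topmost block of that run marks the
    road boundary 20 in this column; columns without a sufficient run
    yield None. road_mask is a list of rows, row 0 at the image top."""
    rows, cols = len(road_mask), len(road_mask[0])
    boundaries = []
    for c in range(cols):
        run = 0
        boundary = None
        for r in range(rows - 1, -1, -1):  # bottom to top
            if road_mask[r][c]:
                run += 1
                if run >= min_run:
                    boundary = r  # topmost block of the run so far
            else:
                break  # road must be contiguous from the bottom
        boundaries.append(boundary)
    return boundaries
```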
Fig. 6 shows the environmental map 21, which was determined based on the image 17 according to Fig. 5. Herein, the partial areas 14 or segments of the capturing range 13 are apparent. For each of the partial areas 14, a border element 22 was determined based on the spatial boundaries 20 of the road 19. In addition, a picture 23 of the motor vehicle 1 and a picture 24 of the camera 4 are presented in the digital environmental map 21. Based on the respective border elements 22, the freespace 12 can be defined in the environmental map 21.
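The division of the capturing range 13 into partial areas 14 shaped as circular segments around the camera position can be sketched as follows. This is an illustrative assumption rather than the disclosed implementation: the number of sectors, the field of view and the coordinate convention (camera at the origin, looking along the positive x-axis) are example choices.

```python
import math

def sector_index(x, y, num_sectors=6, fov_deg=180.0):
    """Map a point in camera coordinates to the index of the circular-segment
    partial area it falls into. The field of view is split into num_sectors
    equal angular segments; points outside the field of view return None."""
    angle = math.degrees(math.atan2(y, x))  # -180..180 deg, 0 deg straight ahead
    half = fov_deg / 2.0
    if not -half <= angle < half:
        return None
    return int((angle + half) // (fov_deg / num_sectors))
```

With six 30-degree sectors over a 180-degree field of view, a point straight ahead of the camera falls into sector 3, while a point at the right edge of the field of view falls into sector 1 or 0 depending on its angle.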
Fig. 7 shows an environmental map 21 according to a further embodiment. Here too, the further partial areas 14 or segments are apparent. In addition, the environmental map 21 shows lines 26, which describe the spatial boundaries 20 of a road 19. The lines 26 represent a picture of the road 19 or of the boundaries 20 of the road 19. Based on the spatial boundaries 20 or based on the lines 26, the respective border elements 22 are determined. Further, a minimum value 27 and a maximum value 28 are preset for the freespace 12. If the area 25 in the respective partial area 14 extends from the camera 4 to a distance less than the distance associated with the minimum value 27, the border element 22 is associated with the minimum value 27. If the area 25 has a larger distance to the camera 4 than the distance associated with the maximum value 28, the border element 22 is associated with the maximum value 28. Presently, this is the case in the areas indicated by the arrows 30. If an area 25 was not recognized in a partial area of the partial areas 14, the border element 22 is associated with the maximum value 28 and additionally the type unknown is associated with the border element 22. Presently, this is the case in the area indicated by the arrow 31. In addition, a picture 32 of a further vehicle is presented in the environmental map 21. In addition, polylines 33 are presented, which represent the external boundaries of the further vehicle. The further vehicle can for example be captured by a further sensor of the motor vehicle 1. The information extracted based on the image 17 and entered into the environmental map 21 is independent of the data determined by the further sensor of the motor vehicle 1 and entered into the environmental map 21.
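The clamping of a border element between the preset minimum value and maximum value, including the special case of an unrecognized area, can be captured in a small sketch. The function name, the return shape and the type labels are illustrative choices for this example and are not taken from the disclosure.

```python
def clamp_border_element(distance, min_value, max_value):
    """Clamp a measured free-space distance to the preset minimum and maximum
    values. If no traversable area was recognized at all (distance is None),
    the border element is set to the maximum value and typed 'unknown'."""
    if distance is None:
        return max_value, "unknown"  # no area recognized in this partial area
    if distance < min_value:
        return min_value, "measured"
    if distance > max_value:
        return max_value, "measured"
    return distance, "measured"
```

A distance closer than the minimum snaps outward to the minimum, a distance beyond the maximum snaps inward to the maximum, and a missing measurement falls back to the maximum while being flagged as unknown.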
Fig. 8 shows a method for generating an environmental map 21 according to a further embodiment. Here too, the image 17 is provided by the camera 4 in a step S1. In a step S2', a ground 34 is recognized as the at least one area 25. Further, the spatial boundaries 35 of the ground 34 are recognized. In a step S3', the border elements 22 are determined in the environmental map 21. Finally, the freespace 12 is determined in the step S4.
Fig. 9 shows the image 17, which has been captured by the camera 4. The image 17 is evaluated by means of the evaluation device 3 with the aid of a corresponding three-dimensional object recognition algorithm. Therein, the ground 34 is recognized as the area 25 in the image 17. For this purpose, those objects 36 are determined in the image 17 whose height is less than a predetermined limit value, for example 25 cm. These respective objects 36 can also be combined or clustered. Fig. 10 shows the environmental map 21, which has been determined based on the image 17 according to Fig. 9. Here too, the respective partial areas 14 or segments of the capturing range 13 are apparent. In addition, in those partial areas 14, in which the area 25 or the ground 34 has been recognized, the border elements 22 are determined.
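The height-based association of detections with the ground, followed by the mentioned combining or clustering, can be sketched as follows. The detection format (dictionaries with `x`, `y`, `height` in metres), the greedy merge rule and the merge distance are illustrative assumptions for this example; only the 25 cm limit value is taken from the description above.

```python
import math

def ground_clusters(objects, height_limit=0.25, merge_dist=0.5):
    """Keep only those 3D detections whose height falls below the limit
    (i.e. those associated with the traversable ground) and greedily merge
    detections closer together than merge_dist into clusters."""
    low = [o for o in objects if o["height"] < height_limit]
    clusters = []
    for o in low:
        for c in clusters:
            # merge into an existing cluster if any member is close enough
            if any(math.hypot(o["x"] - p["x"], o["y"] - p["y"]) < merge_dist for p in c):
                c.append(o)
                break
        else:
            clusters.append([o])
    return clusters
```

Detections taller than the limit value (potential obstacles) are excluded before clustering, so they never contribute to a ground cluster.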
Fig. 11 shows an environmental map 21 according to a further embodiment. In the environmental map 21, corresponding pictures 37 are presented, which represent the objects 36 of the ground 34. The respective shape of the pictures 37 results from the spatial uncertainty of the capture. Based on the pictures 37, the respective border elements 22 are determined.
Fig. 12 shows a method for generating an environmental map 21 according to a third embodiment. Here too, the image 17 is provided by means of the camera 4 and transmitted to the evaluation device 3 in a step S1. In a subsequent step S2", an object 38 is recognized in the image 17. Based on the objects 38, an obstacle-free space 39 is recognized as the area 25. In addition, the spatial boundaries 40 of the obstacle-free space 39 are determined in a step S3". Finally, the freespace 12 is determined in a step S4.
Fig. 13 shows the image 17, which is provided by the camera 4. Here, objects 38 are determined by means of a three-dimensional object recognition algorithm. Based on the recognized objects 38, the obstacle-free space 39 is defined. In addition, the spatial boundaries 40 of the obstacle-free space 39 are defined.
Fig. 14 shows the environmental map 21, which was determined based on the image 17 according to Fig. 13. Here too, the individual partial areas 14 are apparent, in which the border elements 22 are determined based on the spatial boundaries 40 of the obstacle-free space 39.
Fig. 15 shows an environmental map 21 according to a further embodiment. Here, corresponding pictures 41, 42 of the recognized objects 38 are presented. Therein, the pictures 41 show the objects 38, which are not part of the polyline 33. The pictures 42 describe those objects 38, which are part of the polyline 33. Here too, the pictures 41, 42 are indicated with a spatial uncertainty.
Fig. 16 shows a schematic flow diagram of a method for generating the environmental map 21 according to a fourth embodiment. Here, the method steps of the methods according to figures 4, 8 and 12 are combined. Herein, the respective results of the individual methods for determining the areas 25 are fused with each other in an additional step S3a. In this manner, the freespace 12 can be more reliably determined.
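The fusion of the per-sector results of the three methods can be sketched as follows. The disclosure does not specify a particular fusion rule; the conservative minimum-distance rule used here (keeping the smallest free distance reported for each partial area, and ignoring methods that produced no estimate) is one plausible illustrative choice.

```python
def fuse_freespace(*estimates):
    """Fuse per-sector free-space distances from several detection methods.
    Each argument is a list with one distance per partial area (None where
    the method produced no estimate). For each sector, the smallest reported
    distance is kept as the conservative fused result."""
    fused = []
    for per_sector in zip(*estimates):
        known = [d for d in per_sector if d is not None]
        fused.append(min(known) if known else None)
    return fused
```

Taking the minimum makes the fused freespace no larger than any single method's estimate, so an obstacle detected by only one of the methods still limits the freespace.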

Claims

1. Method for generating an environmental map (21) of an environment (8) of a motor vehicle (1), in which by means of an evaluation device (3) at least one image (17) of the environment (8) is received from a camera (4) of the motor vehicle (1) and a freespace (12) is determined in the environmental map (21) based on the at least one image (17), which describes a part of the environment (8) within a capturing range (13) of the camera (4), which is free from obstacles (9) for the motor vehicle (1),
characterized in that
by means of the evaluation device (3), at least one area (25) is recognized in the at least one image (17) which is traversable by the motor vehicle (1), at least one border element (22) is determined in the environmental map (21), which describes a spatial boundary (20, 35, 40) of the at least one recognized area (25), and the freespace (12) is defined based on the at least one border element (22).
2. Method according to claim 1 ,
characterized in that
a road (19) is recognized as the at least one area (25) in the at least one image (17), the spatial boundaries (20) of the road (19) are determined in the at least one image (17) and the at least one border element (22) is determined in the environmental map (21) based on the spatial boundaries (20) of the road (19).
3. Method according to claim 2,
characterized in that
for recognizing the road (19), the at least one image (17) is segmented and hereto the at least one image (17) is divided into multiple image blocks, an intensity is determined in each of the image blocks and the image blocks are classified as a road (19) based on the determined intensity.
4. Method according to any one of the preceding claims,
characterized in that
a ground (34) is recognized as the at least one area (25) in the at least one image (17), the spatial boundaries (35) of the ground (34) are determined in the at least one image (17) and the at least one border element (22) is determined in the environmental map (21) based on the spatial boundaries (35) of the ground (34).
5. Method according to claim 4,
characterized in that
for recognizing the ground (34), multiple objects (36) are determined in the at least one image (17), a respective height of the objects (36) is determined in the at least one image (17) and those objects (36) are associated with the ground (34), the height of which falls below a predetermined limit value.
6. Method according to any one of the preceding claims,
characterized in that
an obstacle-free space (39) is recognized as the at least one area (25) in the at least one image (17), which is free from obstacles (9), spatial boundaries (40) of the obstacle-free space (39) are determined and the at least one border element (22) is determined in the environmental map (21) based on the spatial boundaries (40) of the obstacle-free space (39).
7. Method according to claim 6,
characterized in that
a three-dimensional object recognition algorithm is used for recognizing the obstacle-free space (39).
8. Method according to any one of the preceding claims,
characterized in that
the capturing range (13) is divided into a predetermined number of partial areas (14), a presence of the at least one area (25) is verified in each of the partial areas (14) and the at least one border element (22) is determined for each of the partial areas (14), in which the at least one area (25) is present.
9. Method according to claim 8,
characterized in that
the partial areas (14) each have a shape of a circular segment, wherein a respective center of the partial areas (14) is associated with a position of the camera (4) in the environmental map (21).
10. Method according to any one of the preceding claims,
characterized in that
a minimum value (27) and a maximum value (28) are preset for the at least one border element (22), wherein the minimum value (27) and the maximum value (28) each describe a distance to the camera (4).
11. Method according to any one of the preceding claims,
characterized in that
a picture (26, 37, 41, 42) of the at least one area (25) is determined in the environmental map (21) with a spatial uncertainty and the at least one border element (22) is determined depending on the picture (26, 37, 41, 42) of the at least one area (25).
12. Method according to any one of the preceding claims,
characterized in that
by means of the evaluation device (3), at least two areas (25) different from each other are recognized in the at least one image (17), the at least one border element (22) is determined in the environmental map (21) for each of the at least two areas (25) and the freespace (12) is defined based on the respective border elements (22).
13. Method according to any one of the preceding claims,
characterized in that
sensor data of at least one further sensor of the motor vehicle (1) is received and the freespace (12) is additionally determined depending on the sensor data by means of the evaluation device (3).
14. Driver assistance system (2) for a motor vehicle (1), which includes at least one camera (4) and an evaluation device (3), and which is adapted to perform a method according to any one of the preceding claims.
15. Motor vehicle (1) with a driver assistance system (2) according to claim 14.
PCT/EP2016/071099 2015-09-08 2016-09-07 Method for generating an environmental map of an environment of a motor vehicle based on an image of a camera, driver assistance system as well as motor vehicle WO2017042224A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102015115012.3A DE102015115012A1 (en) 2015-09-08 2015-09-08 Method for generating an environment map of an environment of a motor vehicle based on an image of a camera, driver assistance system and motor vehicle
DE102015115012.3 2015-09-08

Publications (1)

Publication Number Publication Date
WO2017042224A1 (en)

Family

ID=56883799

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/071099 WO2017042224A1 (en) 2015-09-08 2016-09-07 Method for generating an environmental map of an environment of a motor vehicle based on an image of a camera, driver assistance system as well as motor vehicle

Country Status (2)

Country Link
DE (1) DE102015115012A1 (en)
WO (1) WO2017042224A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019102561A1 (en) 2019-02-01 2020-08-06 Connaught Electronics Ltd. Process for recognizing a plaster marking

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
DE102017103154A1 (en) 2017-02-16 2018-08-16 Connaught Electronics Ltd. Method for generating a digital environment map of an environment of a motor vehicle taking into account a free space, computing device, driver assistance system and motor vehicle
DE102017112393A1 (en) * 2017-06-06 2018-12-06 Connaught Electronics Ltd. Detection of a degree of concealment or contamination of a license plate of a motor vehicle
DE102017115475A1 (en) * 2017-07-11 2019-01-17 Valeo Schalter Und Sensoren Gmbh Method for detecting an obstacle in an environmental region of a motor vehicle, evaluation device, driver assistance system and motor vehicle
DE102017008569A1 (en) * 2017-09-08 2019-03-14 Valeo Schalter Und Sensoren Gmbh A method of operating a driver assistance system for a motor vehicle using a vector-based and a grid-based environment map and driver assistance system
DE102017120729A1 (en) 2017-09-08 2019-03-14 Connaught Electronics Ltd. Free space detection in a driver assistance system of a motor vehicle with a neural network
DE102018122374B4 (en) * 2018-09-13 2024-08-29 Valeo Schalter Und Sensoren Gmbh Method for determining a free space surrounding a motor vehicle, computer program product, free space determination device and motor vehicle
DE102019201690A1 (en) * 2019-02-11 2020-08-13 Zf Friedrichshafen Ag Procedure and assistance system for monitoring the surroundings of an ego vehicle
DE102020109789A1 (en) 2020-04-08 2021-10-14 Valeo Schalter Und Sensoren Gmbh Method for performing self-localization of a vehicle on the basis of a reduced digital map of the surroundings, computer program product and a self-localization system
DE102020134331A1 (en) 2020-12-21 2022-06-23 HELLA GmbH & Co. KGaA Method for determining a clearance in a vehicle environment
DE102022002620A1 (en) 2022-07-18 2023-10-19 Mercedes-Benz Group AG Method for fusing sensor data and vehicle

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
DE102006060893A1 (en) * 2006-05-12 2007-11-15 Adc Automotive Distance Control Systems Gmbh Device and method for determining a free space in front of a vehicle
DE102006047131A1 (en) * 2006-10-05 2008-04-10 Robert Bosch Gmbh Method for automatically controlling a vehicle
DE102009028660A1 (en) * 2009-08-19 2011-02-24 Robert Bosch Gmbh Method for imaging object found in surrounding region of vehicle, involves determining solid angle-fan according to assignment rule
DE102009058488B4 (en) * 2009-12-16 2014-07-31 Audi Ag A method of assisting in driving a motor vehicle
DE102010018994A1 (en) 2010-05-03 2011-11-03 Valeo Schalter Und Sensoren Gmbh Method for operating a driver assistance system of a vehicle, driver assistance system and vehicle

Non-Patent Citations (3)

Title
CARSTEN HØILUND ET AL: "FREE SPACE COMPUTATION FROM STOCHASTIC OCCUPANCY GRIDS BASED ON ICONIC KALMAN FILTERED DISPARITY MAPS", PROC. OF THE INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, 1 May 2010 (2010-05-01), pages 164 - 167, XP055322577, Retrieved from the Internet <URL:http://cvrr.ucsd.edu/publications/2010/VISAPP_2010_117_CR.pdf> [retrieved on 20161124] *
NICOLAS HAUTIÈRE ET AL: "Free Space Detection for Autonomous Navigation in Daytime Foggy Weather", 1 May 2009 (2009-05-01), XP055322861, Retrieved from the Internet <URL:https://www.researchgate.net/profile/Jean-Philippe_Tarel/publication/228949780_Free_Space_Detection_for_Autonomous_Navigation_in_Daytime_Foggy_Weather/links/0fcfd50ae3f285bf61000000.pdf> [retrieved on 20161124] *
RUIMIN ZOU: "Free Space Detection Based On Occupancy Gridmaps", 1 April 2012 (2012-04-01), XP055202458, Retrieved from the Internet <URL:http://www.ausy.informatik.tu-darmstadt.de/uploads/Theses/Zhou_MScThesis_2012.pdf> [retrieved on 20150715] *


Also Published As

Publication number Publication date
DE102015115012A1 (en) 2017-03-09

Similar Documents

Publication Publication Date Title
WO2017042224A1 (en) Method for generating an environmental map of an environment of a motor vehicle based on an image of a camera, driver assistance system as well as motor vehicle
CN110689761B (en) Automatic parking method
US10878288B2 (en) Database construction system for machine-learning
US9846812B2 (en) Image recognition system for a vehicle and corresponding method
CN111178122B (en) Detection and planar representation of three-dimensional lanes in road scene
US9042639B2 (en) Method for representing surroundings
CN111442776A (en) Method and equipment for sequential ground scene image projection synthesis and complex scene reconstruction
US20150367781A1 (en) Lane boundary estimation device and lane boundary estimation method
CN111448478A (en) System and method for correcting high-definition maps based on obstacle detection
US20110311108A1 (en) Method for detecting objects
CN111081064A (en) Automatic parking system and automatic passenger-replacing parking method of vehicle-mounted Ethernet
JP6139465B2 (en) Object detection device, driving support device, object detection method, and object detection program
KR101176693B1 (en) Method and System for Detecting Lane by Using Distance Sensor
JP7077910B2 (en) Bound line detection device and lane marking method
CN109421730B (en) Cross traffic detection using cameras
CN105006175A (en) Method and system for proactively recognizing an action of a road user and corresponding locomotive
US20210394782A1 (en) In-vehicle processing apparatus
EP3029602A1 (en) Method and apparatus for detecting a free driving space
CN113297881A (en) Target detection method and related device
US10343603B2 (en) Image processing device and image processing method
JP2020077293A (en) Lane line detection device and lane line detection method
JP2017117105A (en) Visibility determination device
CN114170499A (en) Target detection method, tracking method, device, visual sensor and medium
WO2015014882A1 (en) Method for detecting a target object by clustering of characteristic features of an image, camera system and motor vehicle
US20230266469A1 (en) System and method for detecting road intersection on point cloud height map

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16762804

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16762804

Country of ref document: EP

Kind code of ref document: A1